It’s a brand new day, not very far in the future. You wake up; your wristwatch has recorded how long you’ve slept, and monitored your heartbeat and breathing. You drive to work; car sensors track your speed and braking. You pick up some breakfast on the way, paying electronically; the transaction and the calorie content of your meal are recorded.

Then you have a car accident. You phone your insurance company. Your call is answered immediately. The voice on the other end knows your name and chats amiably with you about your pet cat and how your favourite football team did on the weekend.

You’re talking to a chatbot. The reason it “knows” so much about you is that the insurance company is using artificial intelligence to scrape details about you from social media. It knows a lot more besides, because you’ve agreed to let it monitor your personal devices in exchange for cheaper insurance premiums.

This isn’t science fiction. More than three-quarters of insurance executives believe artificial intelligence will revolutionise the industry within a few years. By 2030, according to McKinsey futurists, artificial intelligence will mean your car and life insurance premiums could change depending on whether you decide to take one route or another.

It will be sold to you on the promise of more personalised service, faster claims processing and lower premiums – and it will deliver on those promises, for the most part.

But there are ethical risks too – data privacy and discrimination among them. An insurer might use your data to work out how much you would be willing to pay for cover. It might sell the information to a third party. The AI might decide you pose a greater risk because of your age, sex, income or ethnicity.

The internet of things

Though the insurance industry has an unenviable reputation for taking people’s money and then refusing to pay, it is a highly competitive sector. The less agile will probably not survive against competitors using AI to stay profitable while lowering their premiums.

To offer lower premiums, an insurer must know an individual is, in fact, a lower risk. The enabling technology is the internet of things, the collective name for the billions of internet-connected sensors embedded in the objects we use every day. They are in phones, watches, cars, fitness trackers, home assistants and many other things. Together they form an “ecosystem” of sensors.

Data collected over time allows the insurer to build an individually tailored risk profile based on a person’s actual behaviour, a practice known as behavioural policy pricing.
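To make the idea concrete, here is a minimal sketch of how behavioural policy pricing might turn sensor readings into a premium adjustment. Everything here – the fields, the thresholds, the weightings – is an invented assumption for illustration, not any insurer’s actual model.

```python
# Sketch only: hypothetical sensor fields and weightings.
from dataclasses import dataclass

@dataclass
class BehaviourProfile:
    avg_sleep_hours: float         # from a wearable
    hard_brakes_per_100km: float   # from car telematics
    smoke_alarms_tested: bool      # from a smart-home hub

def premium_multiplier(p: BehaviourProfile) -> float:
    """Turn observed behaviour into a multiplier on a base premium."""
    m = 1.0
    if p.avg_sleep_hours < 6:           # chronic short sleep -> higher health risk
        m += 0.10
    if p.hard_brakes_per_100km > 5:     # aggressive driving -> higher motor risk
        m += 0.15
    if p.smoke_alarms_tested:           # maintained alarms -> lower home risk
        m -= 0.05
    return m

print(premium_multiplier(BehaviourProfile(7.5, 2.0, True)))  # 0.95
```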

Getting ‘smart’

To lower your home and contents insurance premium, the insurer will patch into the AI hub that runs your “smart home” and its ecosystem of sensors.

If there is a pattern of burglaries in the neighbourhood, the home hub will know, because it is connected to the insurer’s network. Locks and alarms can be primed and police called at the first sign of trouble. To manage the risk of fire, sensors will monitor heat and humidity, and detect smoke. If the stove is left on, the home hub will turn it off before it becomes a problem.
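A home hub of this kind is, at its core, an event-driven rules engine. The sketch below shows the shape of such logic; the sensor names, the 30-minute stove threshold and the insurer alert feed are all hypothetical.

```python
# Toy rule engine for a smart-home hub; all names and thresholds are invented.
def handle_event(event: dict, insurer_feed: dict) -> list[str]:
    """Map a sensor reading, plus the insurer's area alerts, to actions."""
    actions = []
    if event.get("sensor") == "stove" and event.get("minutes_unattended", 0) > 30:
        actions.append("turn_off_stove")
    if event.get("sensor") == "smoke" and event.get("detected"):
        actions.append("sound_alarm")
        actions.append("call_fire_service")
    if insurer_feed.get("burglary_pattern_nearby"):
        actions.append("prime_locks_and_alarms")
    return actions

print(handle_event({"sensor": "stove", "minutes_unattended": 45},
                   {"burglary_pattern_nearby": True}))
# ['turn_off_stove', 'prime_locks_and_alarms']
```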

To calculate lower car insurance premiums, your insurer may want to monitor the way you drive and maintain your car.
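Telematics scoring of this sort typically reduces a trip to a handful of risk signals. Here is a toy version, assuming invented thresholds (a 0.4 g harsh-braking cutoff and a simple speeding count) rather than any real insurer’s formula.

```python
# Hypothetical telematics scoring: a trip is a list of (speed_kmh, braking_g)
# samples; the thresholds and weights are assumptions for illustration.
def driving_score(trip: list[tuple[float, float]], speed_limit: float = 100.0) -> float:
    """Score a trip from 0 (risky) to 1 (safe) from speed and braking."""
    if not trip:
        return 1.0
    speeding = sum(1 for speed, _ in trip if speed > speed_limit)
    hard_brakes = sum(1 for _, g in trip if g > 0.4)    # ~0.4 g: harsh braking
    penalty = (speeding + 2 * hard_brakes) / len(trip)  # weight braking higher
    return max(0.0, 1.0 - penalty)

# Five samples: one speeding event, one hard brake.
trip = [(95, 0.1), (112, 0.1), (60, 0.5), (80, 0.1), (70, 0.2)]
print(round(driving_score(trip), 2))  # 0.4
```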

Lower health insurance premiums may require giving the insurer access to your medical records and wearing a fitness tracker.

A new industry sector will emerge: specialist companies that deploy the sensors and collect the data will partner with insurers to form a new business ecosystem. The whole industry will shift from purely reactive insurance to proactive, risk-minimising cover.

It all sounds quite positive. But there are also broader risks in the narrow pursuit of minimising insurance risk.

China’s surveillance state gives a glimpse of one dystopian future using AI. Exploiting the technology to maximise private profit is another.

Discrimination

One very clear danger is profiling – being judged a higher or lower insurance risk because you belong to a particular demographic group.

AI can now differentiate risk across hundreds of factors. Algorithms scan these factors to identify clusters of previously unrecognised risk. They can even deduce clusters on their own.
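To see how an algorithm can “deduce clusters on its own”, here is a toy example using plain k-means over two invented behavioural factors and simulated drivers. Real systems work across far more dimensions, but the mechanism is the same: no one tells the algorithm what the groups are.

```python
# Toy k-means: the two factors and the simulated driver population are invented.
import random

def kmeans(points, k, iters=20):
    """Repeatedly assign points to the nearest centre, then move each
    centre to the mean of its assigned points."""
    centres = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centres[c][0]) ** 2 +
                                        (p[1] - centres[c][1]) ** 2)
            groups[nearest].append(p)
        centres = [(sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
                   if g else centres[i]
                   for i, g in enumerate(groups)]
    return centres, groups

random.seed(0)
# Each driver: (share of night-time driving, hard brakes per 100 km).
drivers = ([(random.gauss(0.1, 0.05), random.gauss(2, 1)) for _ in range(50)] +
           [(random.gauss(0.6, 0.05), random.gauss(8, 1)) for _ in range(50)])
centres, groups = kmeans(drivers, k=2)
print([(round(x, 2), round(y, 2)) for x, y in centres])  # two distinct risk clusters
```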

But these conclusions can unintentionally discriminate. There are already many examples of AI algorithms inadvertently amplifying stereotypes.

The case of predictive policing in Durham, England, illustrates the problem. Police there developed an algorithm to better predict the risk posed by people charged with an offence should they be granted bail. In practice it discriminated against poorer people on the basis of where they lived.
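The mechanism is easy to demonstrate. In the toy calculation below, a postcode-based risk flag stands in for the Durham algorithm (whose internals are not public), and the four applicants are invented; the point is that when postcode correlates with income, a “neutral” factor ends up penalising one group.

```python
# Invented data: postcode happens to correlate with income, so a
# postcode-based risk flag acts as a proxy for income.
applicants = [
    {"postcode": "DH1", "low_income": False, "flagged_high_risk": False},
    {"postcode": "DH1", "low_income": False, "flagged_high_risk": True},
    {"postcode": "DH9", "low_income": True,  "flagged_high_risk": True},
    {"postcode": "DH9", "low_income": True,  "flagged_high_risk": True},
]

def flag_rate(group):
    """Share of a group flagged as high risk."""
    return sum(a["flagged_high_risk"] for a in group) / len(group)

low  = [a for a in applicants if a["low_income"]]
high = [a for a in applicants if not a["low_income"]]

# Disparate-impact ratio: parity would be ~1.0; far below 1.0 suggests
# the postcode factor is penalising the low-income group.
print(flag_rate(high) / flag_rate(low))  # 0.5 on this toy data
```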

Opportunistic pricing

There is also the prospect of more individualised discrimination.

Already well known is the problem of genetic discrimination – the risk of a health or life insurer increasing premiums, or even denying cover, based on what your DNA reveals about your genetic predisposition to certain conditions.

AI opens up a whole new area of personalised discrimination, based on what it can glean from your behaviours and preferences.

For one thing, the plethora of data potentially available to AI can tell an insurer a lot about your spending habits. Where do you shop? What do you buy? When do you spend? Do you seek out bargains or pay full price?

Knowing all this will help an insurer estimate whether it can get away with charging you top price.
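For illustration only, here is what such opportunistic pricing could look like in code – the very practice the next paragraph questions. The spending signals and the uplift weights are invented assumptions, not a description of any company’s system.

```python
# Hypothetical willingness-to-pay pricing from spending signals.
def quoted_price(base: float, signals: dict) -> float:
    """Nudge a base quote toward an estimated willingness to pay."""
    uplift = 0.0
    if not signals.get("seeks_bargains", True):   # rarely hunts for discounts
        uplift += 0.10
    if signals.get("shops_premium_brands"):       # pays full price elsewhere
        uplift += 0.08
    return round(base * (1 + uplift), 2)

print(quoted_price(500.0, {"seeks_bargains": False, "shops_premium_brands": True}))
# 590.0 - same risk, higher price, derived purely from spending habits
```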

Some in the industry argue this is simply how markets operate, but when it is facilitated by unprecedented access to personal information, it becomes a highly questionable practice.

Loss of privacy

An insurer might also be tempted to use the data for purposes other than assessing risk. Given its value, the data might be sold to third parties to offset the cost of collecting it. Advertisers, marketers, lobbyists and political parties are all insatiably hungry for detailed demographic data.

Contrary to what people might think, such data is not the property of the person it relates to. It is owned by whoever paid to collect it. Consumers need to be legally protected against their data being used for other purposes without their informed consent.

Managing risk

With any powerful new technology there are benefits and risks. The benefits should be made clear, and the risks managed down to an acceptable level. There is, of course, an irony in having to manage the risk of managing risk.

Insurance companies have a job to do to ensure customers can trust there is far more upside than downside in AI. They will need to adopt transparently fair, if not benevolent, practices that contribute to the greater good. It should be about more than profit.

This article was originally published at theconversation.com