Award-winning insurtech company Spixii discusses AI and its potential ethical problems in risk modelling.
If you were offered unlimited knowledge, would you say yes?
While chatbots can offer customers financial advice and guidance, they also serve a second purpose: providing companies with unprecedented amounts of original customer data, drawn from customer interactions. Artificial intelligence (AI) can rapidly analyse this data to give insurers rich, highly granular recommendations and insights in real time.
On the one hand, this enables companies to personalise their digital interactions and enhance their customer service. Historically, customers would talk to a local broker in person to buy insurance. With the evolution of the internet, this exchange moved online and lost the personal touch, leaving many customers confused. Yet insurance companies can now use technology to scale this personalisation and enhance their digital customer experience.
However, this level of personalisation risks excluding one customer group who may need it most: young people who lack digital skills. According to this article in the Guardian, 3% of all 15-24 year olds lack digital skills; “these include the ability to use a search engine to find information, complete online application forms, manage money or solve a problem using a digital service.” That’s 300,000 young people “trapped in a cycle of disadvantage and vulnerability”. They are unlikely to have a digital footprint, leaving limited data for AI to analyse and use to personalise services aimed at them.
Furthermore, such granular data carries another risk: effectively pricing out at-risk consumer groups. The FSB report, published in November last year, argues:
“Consumer advocacy groups point out that machine learning tools can yield combinations of borrower characteristics that simply predict race or gender, factors that fair lending laws prohibit considering … These algorithms might rate a borrower from an ethnic minority at higher risk of default because similar borrowers have traditionally been given less favourable loan conditions.”
If the algorithms are too effective at predicting who will make a claim, or have been trained on biased historical data, premiums may surge and unjustly exclude whole consumer groups. This undermines the fundamental mission of insurance: to empower and protect by sharing, pooling and subsidising different risks.
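The proxy effect the FSB describes can be sketched in a few lines of Python. Everything here is invented for illustration: the groups, the postcode zones, the correlation strength and the premiums are assumptions, not real data or a real pricing model.

```python
import random

random.seed(0)

# Hypothetical synthetic portfolio. The protected attribute "group" is
# never shown to the pricing model -- only "zone" is.
def make_applicant():
    group = random.choice(["A", "B"])
    # Assumption for illustration: group B applicants mostly live in
    # zone 2, so zone acts as a proxy for group membership.
    p_zone2 = 0.9 if group == "B" else 0.1
    zone = 2 if random.random() < p_zone2 else 1
    return group, zone

applicants = [make_applicant() for _ in range(10_000)]

# A naive risk model that prices purely on the seemingly neutral
# zone feature.
def premium(zone):
    return 500 if zone == 2 else 300

# Average premium per group: group B ends up paying far more even
# though the model never saw the group label.
avg = {
    g: sum(premium(z) for grp, z in applicants if grp == g)
       / sum(1 for grp, _ in applicants if grp == g)
    for g in ("A", "B")
}
print(avg)
```

Removing the protected attribute from the inputs does not remove the bias: any feature correlated with it can reproduce the same pricing gap.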
So, what can insurance companies do?
AI is commonly seen as a ‘black box’. As this report highlights, the black-box nature of complex neural networks can lead to undesirable outcomes. In the study, a transparent and interpretable “rule-based model” was still preferred over a more complicated model, despite being less accurate. Stay tuned as we’ll be examining this further in part two.
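The appeal of a rule-based model is that every decision can be traced to a human-readable rule. A minimal sketch of the idea (the rules, thresholds and risk bands are invented for illustration, not taken from the cited study):

```python
# Each rule is (human-readable name, predicate, risk band); rules are
# checked in order and the first match wins.
RULES = [
    ("age < 25 and claims > 1", lambda a: a["age"] < 25 and a["claims"] > 1, "high"),
    ("claims > 2",              lambda a: a["claims"] > 2,                   "high"),
    ("default",                 lambda a: True,                             "standard"),
]

def classify(applicant):
    """Return (risk_band, rule_name): every decision cites the rule that fired."""
    for name, predicate, band in RULES:
        if predicate(applicant):
            return band, name

# An explainable decision: we know exactly which rule produced it.
print(classify({"age": 22, "claims": 2}))  # ('high', 'age < 25 and claims > 1')
```

A neural network may beat this on accuracy, but it cannot answer “which rule fired?” — which is exactly what a regulator or an excluded customer will ask.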
AI, machine learning and chatbots offer insurers a potential superpower.
However, to make insurance accessible, companies must consider the ethical implications of these new technologies. Customer insights can be highly granular, helping insurers manage risk better, but insurers should resist the temptation to use them to exclude at-risk groups. After all, insurance is designed to provide risk management and indemnity — in other words, to pay claims. AI must also be inclusive.
Once insurers have this in place, the unlocked potential for personalisation will be extraordinary: less stressful, more delightful experiences for the end user, better coaching to manage risks, and better protection for more people.
Interested in what chatbot technology can do for your business?
Spixii is an award-winning insurtech company that works with insurance companies including Allianz, Bupa, BNP Paribas and RGA. Every year, Spixii runs a number of workshops with insurance professionals, offering intensive consulting on these chatbot principles and more. To find out more, please visit spixii.com/workshops.