Insurance
The role of AI in insurance is multifaceted. Within the insurance industry itself, AI can reshape not only customer interactions and data processing but also the methodology of risk and pricing analysis. In addition, insurance companies need to assess the risks of AI use more generally in order to offer coverage for it.
Use of AI within the Insurance Industry
Regarding the use of AI within the insurance industry itself, some risk factors, though potentially predictive of outcomes, may not be legally usable for public policy reasons (as in, say, hiring). Because a combination of otherwise legal inputs may be highly correlated with illegal inputs (e.g. income and address may correlate with race), there is a trade-off between the increased accuracy of AI and its decreased explainability.
According to one industry report, the use of AI cuts to the heart of insurance models through its ability to discern a multitude of risk causes and correlations across various groups, down to the individual level:
REGULATION OF ARTIFICIAL INTELLIGENCE IN INSURANCE: Balancing consumer protection and innovation (Noordhoek, The Geneva Association, 2023):
In a recent report, the Dutch Financial Market Authority (AFM) concluded that, while some groups of customers might face higher premiums or become uninsurable, individual risk assessments are generally considered fair and offer opportunities for risk reduction and mitigation. They also determined that governments have a role in supporting those customers who become uninsurable due to individualized pricing. This highlights that AI lays bare issues that would otherwise not be visible and potentially merit a societal discussion. [...]
Finally, yet importantly, though the growing use of AI means that correlation increasingly substitutes for causality, existing insurance regulatory practices in pricing and conduct remain rooted in the latter. This limits the use of rating factors to only those that demonstrably influence the risk. This by itself limits the extent to which AI can be used by insurers and supports the main argument of this report: that cross-sectoral regulation covering the use of AI in insurance is less effective than insurance-specific regulation.
AI Risk Analysis
Regarding the assessment of AI use more generally, the degree of risk can be graded by the sophistication of the AI used, the maturity of its integration, and the type, scale, and frequency of the potential harm. One insurance study has classified the risks associated with the use of AI into six major categories:
1. Data bias or lack of fairness: unintended discrimination against a protected group
2. Cyber: system vulnerabilities or malicious use
3. Algorithmic and performance: failure to meet metrics requirements
4. Lack of ethics, accountability, and transparency: failure to adhere to ethics or accountability requirements, possibly obscured by lack of transparency
5. Intellectual property (IP): use of third-party IP in training data or unintended infringement
6. Privacy: unauthorized use or exposure of personal data in training data or output
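The grading axes and six-category taxonomy above can be sketched as a simple data structure. This is an illustrative sketch only: the class names, the 1–5 scales, and the scoring formula are assumptions for demonstration, not something the cited study specifies.

```python
from dataclasses import dataclass
from enum import Enum, auto

class AIRiskCategory(Enum):
    """The six major AI risk categories listed above."""
    DATA_BIAS = auto()               # unintended discrimination against a protected group
    CYBER = auto()                   # system vulnerabilities or malicious use
    ALGORITHMIC_PERFORMANCE = auto() # failure to meet metrics requirements
    ETHICS_ACCOUNTABILITY = auto()   # ethics/accountability failures, obscured by opacity
    INTELLECTUAL_PROPERTY = auto()   # third-party IP in training data or infringement
    PRIVACY = auto()                 # unauthorized use or exposure of personal data

@dataclass
class AIRiskAssessment:
    """Hypothetical record combining the grading axes named in the text:
    AI sophistication, integration maturity, and potential-harm scale."""
    category: AIRiskCategory
    sophistication: int  # 1 (simple rules) .. 5 (frontier model)
    maturity: int        # 1 (pilot) .. 5 (fully integrated)
    harm_scale: int      # 1 (minor) .. 5 (systemic)

    def priority(self) -> int:
        # Illustrative scoring only: weight harm by sophistication,
        # then nudge by how deeply the system is integrated.
        return self.sophistication * self.harm_scale + self.maturity

# Example: a moderately mature privacy exposure from a sophisticated model.
assessment = AIRiskAssessment(AIRiskCategory.PRIVACY, sophistication=4,
                              maturity=2, harm_scale=5)
print(assessment.priority())  # 4*5 + 2 = 22
```

A structure like this would let an underwriter sort a portfolio of AI exposures by review priority, though any real scoring rubric would need actuarial calibration.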
In the AIITF general working group's interdisciplinary analysis and guidance, one stakeholder to consider is the insurance industry:
Insurers can also play an important role in reducing risks associated with ethics, accountability and transparency [...]: assessments of AI, machine learning and analytics models for trustworthiness, robustness, accuracy, transparency, ethical use and governance of data and AI. [They] can find application in multiple industries, including manufacturing (to optimise operations, ensuring product quality, worker safety, and mitigating disruptions in the manufacturing process) and mobility (potential risks associated with self-driving vehicles).
A last word on insurance and AI: given the projections for AI growth, mentioned in the introduction as “between USD 2.6 trillion and USD 4.4 trillion” per annum, AI will become ubiquitous across industry lines. This will bring AI exposure into traditional insurance lines, where, if it is not specifically included or excluded, it could exacerbate losses. This has been described as ‘silent AI risk’ and has potentially serious consequences for accumulation risks in insurance portfolios. [...] AI may be revolutionary in many ways, but it will sometimes also be fallible. It falls to insurers to consider how to sustainably provide cover and create resilience for this emerging technology.