The ethics of predictive analytics in underwriting

The insurance industry has long been built on the principles of assessing and managing risk. Predictive analytics – a blend of statistical methods, machine learning, and Big Data – has revolutionised underwriting, enabling insurers to make more accurate predictions about policyholder behaviour and risk. However, the adoption of these tools also raises significant ethical questions, requiring a careful balance between innovation and fairness.

Tailored products and personalised pricing
Predictive analytics leverages vast amounts of data to identify patterns and correlations that inform risk assessments. Insurers can now predict the likelihood of claims, policy lapses, or customer preferences with remarkable precision. This capability benefits insurers by reducing underwriting costs and improving efficiency, and it allows for tailored products and personalised pricing for customers.

For example, telematics in motor insurance uses real-time driving data to predict accident likelihood, enabling insurers to offer risk-based premiums. Similarly, in health insurance, wearable devices collect biometric data to assess individual health risks.
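
As a simplified illustration of how such data might translate into a price, the sketch below applies assumed weightings to a handful of telematics features. The feature names, loadings, base premium, and caps are hypothetical rather than any insurer's actual rating model.

    # Illustrative only: a toy telematics-based premium adjustment.
    # Feature names, loadings, base premium, and caps are assumed for this sketch.
    from dataclasses import dataclass

    @dataclass
    class TelematicsSummary:
        hard_brakes_per_100km: float   # frequency of harsh braking events
        night_driving_share: float     # share of kilometres driven at night (0.0-1.0)
        avg_speed_ratio: float         # average speed relative to the limit (1.0 = at limit)

    def risk_score(t: TelematicsSummary) -> float:
        """Combine driving behaviours into a relative risk score (1.0 = portfolio average)."""
        score = 1.0
        score += 0.05 * t.hard_brakes_per_100km              # assumed loading per braking event
        score += 0.30 * t.night_driving_share                # assumed loading for night driving
        score += 0.50 * max(0.0, t.avg_speed_ratio - 1.0)    # loading only when above the limit
        return score

    def adjusted_premium(base_premium: float, t: TelematicsSummary) -> float:
        """Scale the base premium by the risk score, with a floor and cap to bound the adjustment."""
        multiplier = min(max(risk_score(t), 0.7), 1.5)
        return round(base_premium * multiplier, 2)

    print(adjusted_premium(600.0, TelematicsSummary(2.0, 0.25, 1.05)))  # 720.0

In practice the loadings would come from an actuarial model rather than fixed weights, but the shape of the calculation is the same: behaviour is observed, risk is scored, and the premium is adjusted.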

The dilemma
However, the transformative potential of predictive analytics also poses a dilemma. The same tools that enhance decision-making can inadvertently perpetuate biases or lead to unfair outcomes if not managed ethically.

Predictive models are only as unbiased as the data they are trained on. Historical data may reflect societal biases, such as racial, gender, or socioeconomic disparities, leading to unfair treatment of certain groups. For instance, using zip codes as a proxy for risk might indirectly disadvantage communities with lower socioeconomic status.
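
A minimal sketch of how an audit might surface such a proxy effect, assuming hypothetical field names, data, and a hypothetical review threshold: it simply compares the average postcode-derived rating factor in areas flagged as lower-income against the rest of the book.

    # Illustrative audit: does a rating feature act as a proxy for a sensitive attribute?
    # Records, field names, and the review threshold are assumptions for this sketch.
    from statistics import mean

    records = [
        {"zip_risk_factor": 1.4, "low_income_area": True},
        {"zip_risk_factor": 1.3, "low_income_area": True},
        {"zip_risk_factor": 1.5, "low_income_area": True},
        {"zip_risk_factor": 0.9, "low_income_area": False},
        {"zip_risk_factor": 1.0, "low_income_area": False},
        {"zip_risk_factor": 0.8, "low_income_area": False},
    ]

    def group_mean(flag: bool) -> float:
        return mean(r["zip_risk_factor"] for r in records if r["low_income_area"] is flag)

    ratio = group_mean(True) / group_mean(False)
    print(f"Average rating factor, low-income areas vs others: {ratio:.2f}x")  # 1.56x

    if ratio > 1.2:  # assumed review threshold
        print("Potential proxy effect - flag the feature for fairness review")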

Many predictive models, especially those built on machine learning, are complex and often described as “black boxes”. This lack of transparency makes it difficult for insurers to explain decisions to customers, potentially eroding trust. Policyholders have a right to understand how their data influences underwriting decisions, especially when it impacts their premiums or coverage.

The granularity of predictive models allows insurers to segment customers with unprecedented precision. While this can lead to fairer pricing for low-risk individuals, it might render insurance unaffordable or inaccessible for high-risk groups, raising concerns about equity. Insurance must balance risk-based pricing with its societal role of providing a safety net.

Balancing fairness and upholding trust
Insurers must ensure that predictive models are reviewed and validated regularly to avoid reinforcing biases. Creating cross-disciplinary teams, including ethicists, data scientists, and underwriters, can help design models aligned with ethical principles.
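
One concrete check such a review might include, sketched here with assumed groups, predictions, and threshold, is to compare how often the model flags applicants from different groups as high-risk and escalate large gaps to that cross-disciplinary team.

    # Minimal sketch of one bias check in a regular model review:
    # compare the share of applicants flagged high-risk across groups.
    # Groups, predictions, and the 0.8 threshold are assumed for this sketch.
    from collections import defaultdict

    # (group label, model flagged applicant as high-risk?)
    predictions = [
        ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
        ("group_b", True), ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for group, high_risk in predictions:
        counts[group][0] += int(high_risk)
        counts[group][1] += 1

    rates = {g: flagged / total for g, (flagged, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())

    print(rates)                            # {'group_a': 0.25, 'group_b': 0.5}
    print(f"Flag-rate ratio: {ratio:.2f}")  # 0.50
    if ratio < 0.8:  # assumed threshold, echoing the common four-fifths rule of thumb
        print("Disparity exceeds the review threshold - escalate to the cross-disciplinary team")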

A predictive model learns from historical data centred on similar situations to forecast future outcomes and, importantly, enable organisations to intervene proactively. In the insurance context, this allows underwriters to identify emerging risks and tailor their interventions, such as recommending policy adjustments or risk mitigation strategies. However, the justification for differential treatment – offering varied premiums or coverage – relies heavily on the accuracy and fairness of these predictions.

This is where the human element plays a vital role. Underwriters and decision-makers must critically assess the outputs of these models, applying their expertise and ethical judgment to avoid unjust outcomes. Additionally, the unique challenge of predictive validation lies in its temporal nature – whether a prediction about an individual’s risk was correct is only revealed over time. This delayed feedback loop necessitates ongoing human oversight to refine models, balance fairness, and uphold trust.
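
The backtest sketched below illustrates that delayed feedback loop under assumed data: once a policy year has matured, predicted claim probabilities are compared with what actually happened, and material drift is referred back to human reviewers.

    # Illustrative backtest: compare predictions with outcomes once they have matured.
    # Policies, probabilities, and the drift tolerance are assumed for this sketch.
    from statistics import mean

    # (policy id, predicted claim probability at underwriting, claim occurred during the year?)
    matured = [
        ("P001", 0.10, False),
        ("P002", 0.40, True),
        ("P003", 0.15, False),
        ("P004", 0.35, True),
        ("P005", 0.20, False),
    ]

    predicted = mean(p for _, p, _ in matured)
    observed = mean(1.0 if claimed else 0.0 for _, _, claimed in matured)

    print(f"Average predicted claim rate: {predicted:.2f}")  # 0.24
    print(f"Observed claim rate:          {observed:.2f}")   # 0.40
    if abs(predicted - observed) > 0.10:  # assumed drift tolerance
        print("Calibration drift - refer the model back for human review and retraining")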

Clear communication about how data is used and its impact on premiums fosters trust and accountability.

Establishing robust data governance frameworks ensures that data is collected, stored, and used responsibly. Customers should be informed about how their data will be used and have the option to opt out of non-essential data collection. Adopting privacy-by-design principles can help insurers comply with legal and ethical standards.

To avoid excluding high-risk individuals, insurers can explore creative solutions and design relevant products that maintain access to affordable coverage. While predictive analytics may prioritise profitability, the industry must not lose sight of its social contract to support vulnerable populations.

Predictive analytics is reshaping the underwriting process, offering significant benefits to insurers and customers alike. However, these advancements come with ethical considerations. By prioritising fairness, transparency, and accountability, the insurance industry can harness the power of predictive analytics responsibly.