
3 Data Integrity Examples in the Insurance Industry

Precisely Editor | May 18, 2022

In virtually every industry, data is playing a more important role than ever before in driving decisions. There are vast increases in the amount of data available, including information that would never have been imaginable in the past (such as mobile trace data). At the same time, the ability to store, manage, and process all of that information has become more complex.

In the insurance industry, fine-grained data analysis has resulted in more profit flowing to the bottom line. It’s all about accuracy, consistency, and context in risk assessment. Underpriced policies leave money on the table, or even result in losses. On the other hand, in a competitive market for insurance products, overpriced policies result in lost customers.

Efforts to improve data integrity, and thereby to increase profitability, should include a proactive approach not only to improving the accuracy and consistency of data, but also to adding context to the data being used by the business for decision making.

If there is one weak link in the data integrity chain, it is insurers simply not being able to trust the data they are using. According to a recent Corinium report on the Future of Insurance Data, 24% of industry executives surveyed are “not very confident” that they are using the best data available. Another 41% are only “fairly confident” in their data. When millions of dollars are at stake, “fairly confident” is simply not good enough.

Let’s take a look at how gaps in data integrity – accuracy, consistency, and context built on data integration, data quality, location intelligence, and data enrichment – ultimately impact the bottom line.

1.  Data integrity: Small errors, big impact

Achieving data integrity means starting with a foundation of data integration and data quality. When you think about data quality, what may come to mind immediately are the kinds of typos and clerical errors that we have all experienced at one time or another. Someone left off a zero when they entered your income level into the system, so your application for a new policy is rejected. Your age or other vital statistics are entered incorrectly, so your life insurance policy is priced too high.

For an insurer, those scenarios can result in lost customers or policies that are underpriced relative to the risk associated with them.

When applicants provide false information, it can be even more costly. The process of validating data across multiple sources can help to ferret out fraud and prevent it from impacting the bottom line. If multiple customers report living in a multiple-dwelling unit at the same address, then it is reasonable to conclude that the building truly is a multi-unit structure. If an application for a new policy shows it as a single-family house, then you know you may have a problem.
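The cross-source address check described above can be sketched as a simple consistency rule. The records, field names, and function below are hypothetical, intended only to illustrate the idea, not a production validation pipeline:

```python
from collections import defaultdict

# Hypothetical policy records already on the books: (address, unit) pairs.
existing_policies = [
    ("12 Elm St", "Apt 1"),
    ("12 Elm St", "Apt 2"),
    ("12 Elm St", "Apt 3"),
    ("34 Oak Ave", None),
]

def flag_dwelling_mismatch(policies, address, stated_dwelling_type):
    """Return True when an application claims a single-family house
    at an address where existing policies imply multiple units."""
    units_at = defaultdict(set)
    for addr, unit in policies:
        units_at[addr].add(unit)
    # Several distinct units at one address suggest a multi-unit structure.
    looks_multi_unit = len(units_at[address]) > 1
    return looks_multi_unit and stated_dwelling_type == "single-family"
```

An application for "12 Elm St" described as single-family would be flagged for review, since three separate units already hold policies there.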

Each of the components of data integrity – data integration, data quality, location intelligence, and data enrichment – plays an important role in this scenario.

  • With reliable integration, data can be synchronized across multiple sources – increasing access to all your data in a timely fashion and resulting in data that is more consistent.
  • Data quality processes further improve accuracy through standardization, verification, and validation of the data.
  • Location intelligence supported by data enrichment provides essential context about consumers, properties, and risk – enabling organizations to cross-reference internal data with external sources, resulting in better, more actionable insights.

Proactive data integrity efforts create trusted data with the accuracy, consistency, and context to reveal potential underwriting issues, enable more accurate quotes, and drive better bottom-line results.


2. Auto insurance: Where do you park your car?

Anyone who has ever purchased car insurance has been asked where they live, where they work, and how many miles they drive every year. If that’s the only information you have to work with, you can apply some coarse-grained analysis and come up with a risk factor that considers crime, traffic, accident history, and a handful of other data points.

Chances are, though, that the location data associated with that risk assessment is based on ZIP Codes. There may be no distinction between a vehicle parked on the street just outside a college campus (where vandalism might potentially be a problem) and a vehicle parked in a garage on a quiet side street several miles away.

The risk levels associated with those two scenarios could be very different. Lacking a more refined approach to risk assessment, however, policies for those two vehicles could potentially be priced the same, all other things being equal. Underpricing results in claims that exceed estimates. Overpricing means lost customers.

In this example, you need customer context to achieve the best outcome for the business. This goes beyond just understanding the differences between the two neighborhoods. It’s also about validating information against third-party sources. If you ask customers whether they park on the street or in a garage, some percentage of them might misrepresent the truth in hopes of paying a lower rate.

Third-party location data provides an opportunity to validate the information provided by the customer. You can know whether a house has a garage, and whether it is a single-car or two-car garage.
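That validation step can be sketched as a lookup against an external property dataset. The addresses, attribute names, and status values below are invented for illustration; a real implementation would call a third-party property data service:

```python
# Hypothetical third-party property attributes keyed by address.
property_attributes = {
    "12 Elm St": {"has_garage": False},
    "34 Oak Ave": {"has_garage": True, "garage_bays": 2},
}

def validate_parking_claim(address, claims_garage_parking):
    """Compare a customer's stated parking arrangement against
    third-party property data and return a simple status."""
    attrs = property_attributes.get(address)
    if attrs is None:
        return "unverified"   # no external data for this address
    if claims_garage_parking and not attrs["has_garage"]:
        return "discrepancy"  # claims garage parking, but the property has no garage
    return "consistent"
```

A "discrepancy" result would not prove misrepresentation on its own, but it gives the underwriter a concrete reason to take a closer look.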

Again, you need all the essential elements of data integrity – data integration, data quality, location intelligence, and data enrichment – to deliver data with the accuracy, consistency, and context necessary to drive additional profit.

3. Property insurance: How close is the fire?

Consider another scenario: in a drought-stricken area of California, a large US insurer holds homeowner policies for hundreds of properties. When those policies were written, risk assessments were based on ZIP Code. Just as with the auto insurance example, there is little distinction between a downtown property close to the center of town and a freestanding home on a wooded lot adjacent to the national forest.


This is not a hypothetical example. In 2017, California had one of the worst years on record for wildfires. Losses were extensive. The major US insurer in question provided Precisely with a sample of 100 properties from that event on which more than $100 million in claims were paid.

Based on Precisely’s analysis, just 3% of those properties had previously been identified as being at high risk from wildfires. In reality, between half and three-quarters of those properties would have been identified as high risk if the insurer had used more accurate location data.

By focusing on the four pillars of data integrity – data integration, data quality, location intelligence, and data enrichment – some of those losses might have been avoided. Location intelligence, enhanced by rich third-party data (enrichment), integrated in a timely fashion with other internal data, and implemented as part of an overall data integrity initiative, would have rated the risk of those properties more accurately and protected the company from significant losses.
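One way to picture the difference precise location data makes: rating wildfire exposure by a property's distance to the wildland boundary rather than by ZIP Code. The coordinates, distance thresholds, and tier names below are invented for this sketch, not the insurer's actual model:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def wildfire_risk_tier(property_coord, wildland_edge_coord):
    """Assign an illustrative risk tier from distance to the nearest
    wildland boundary (thresholds are assumptions for this sketch)."""
    distance = haversine_km(*property_coord, *wildland_edge_coord)
    if distance < 1:
        return "high"
    if distance < 5:
        return "moderate"
    return "low"
```

Two properties in the same ZIP Code can land in different tiers here: the home on a wooded lot at the forest edge rates "high," while the downtown property several kilometers away rates "low."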

The cost of poor data integrity in this case was over $100 million.

Data integrity: The cost of being wrong

Accuracy in risk assessment has a direct impact on the bottom line. Underpricing policies has obvious implications; payouts exceed estimates, resulting in lost profits.

Nor are those losses offset by errors in the other direction; overpriced policies may make up some of the difference, but more often than not, they drive customers to buy competitively priced policies elsewhere.

According to research carried out by Perr & Knight (commissioned by Precisely), 5% to 10% of policies are priced incorrectly, and some homeowner policies are underpriced by as much as 86.7%, or $2,800 per policy per year.

Consider the implications of that kind of error multiplied by thousands of covered properties. Precisely’s Mike Hofert explains: “It turns out that underpricing is actually worth, in the state of Florida alone, over $100 million in lost premiums… A small percentage of properties change when you improve the data. But that small percentage can still have a major bottom-line impact—particularly in the insurance industry, which is running at an incredibly thin profit margin.” For the insurance industry, an investment in data integrity translates to a clear and measurable ROI.
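As a back-of-the-envelope illustration of that scale, the survey figures can be multiplied across a book of business. The 500,000-policy book size is an assumption made for this sketch, and the $2,800 figure is treated here as an average shortfall even though the research cites it as an upper bound:

```python
def annual_lost_premium(policy_count, mispriced_share, avg_shortfall_per_year):
    """Rough annual lost premium: mispriced policies times average shortfall."""
    return round(policy_count * mispriced_share * avg_shortfall_per_year)

# Assumed book of 500,000 homeowner policies; the 5-10% mispricing range
# comes from the Perr & Knight research, and $2,800/year is used as an
# illustrative average shortfall rather than the reported maximum.
low_estimate = annual_lost_premium(500_000, 0.05, 2_800)
high_estimate = annual_lost_premium(500_000, 0.10, 2_800)
```

Even under these rough assumptions, the annual lost premium lands in the tens to low hundreds of millions of dollars, consistent with the Florida figure cited above.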

Along with our broader data integrity offerings, Precisely provides insurance industry solutions to help mitigate insurance risk, define policy coverage and personalize customer interactions.

Precisely partnered with Drexel University’s LeBow College of Business to survey more than 450 data and analytics professionals worldwide about the state of their data programs. Now, we’re sharing the groundbreaking results in the 2023 Data Integrity Trends and Insights Report.