Recent advances in artificial intelligence have transformed the analytics landscape, removing long-entrenched barriers and making analytics platforms far more accessible to insurers, even those that continue to struggle with the integrity of their data. This breakthrough is particularly significant for the implementation of claims predictive models. Today’s insurance analytics and artificial intelligence (AI) tools have enormous flexibility in their ability to ingest data in different formats and structures, opening up previously hard-to-access data sources to analysis.
In an ideal world, data would automatically and immediately transmit after First Notice of Loss (FNOL) from an accident site to an insurer. First reports would be accurate and timely. But claims adjusters live and work in the real world where data is delayed, incomplete and often inaccurate, and once in the system, it can be hard to access because of limitations in legacy claims software that was designed years ago. Even for insurers that have adopted more modern, cloud-based architectural solutions, processes can cause gaps in data, throwing suspicion on its quality. But whether it’s inconsistent coding, missing data or outright data inaccuracies, the conclusion many insurers have come to is that their data lacks some level of integrity.
Add to these concerns the numerous reports that raise questions about insurers’ legacy systems, and the message is loud and clear: if you cannot trust the data, you cannot trust the analysis.
So why invest in predictive analytics if you have questions about your data?
Advances in AI have made predictive analytics for insurance much more accessible and feasible. Shortcomings in structured claims data, which has quantifiable characteristics such as the date of birth of a claimant, can now be readily managed given the flexibility and agility of today’s AI technologies.
This flexibility means that structured claims data can be in virtually any format, have missing fields, or can be riddled with inaccuracies, but still be useful. Insurance analytics and AI are now able to manage most, if not all, claims data shortcomings and still deliver insight on claims performance and insurance claims trends that informs claims decisions.
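As a minimal sketch of what tolerating messy structured data looks like in practice, the snippet below normalizes a claimant date-of-birth field that arrives in inconsistent formats or is missing entirely, without rejecting the record. The field names and formats are illustrative assumptions, not taken from any particular claims system.

```python
from datetime import datetime

# Illustrative set of date formats a feed might use; a real pipeline would
# grow this list as new source systems are connected.
KNOWN_FORMATS = ("%Y-%m-%d", "%m/%d/%Y", "%d %b %Y")

def parse_dob(raw):
    """Return a date if any known format matches, else None (kept as missing)."""
    if not raw:
        return None
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date()
        except ValueError:
            continue
    return None  # unparseable values are treated as missing, not fatal

claims = [
    {"id": "C1", "dob": "1980-04-12"},
    {"id": "C2", "dob": "04/12/1980"},
    {"id": "C3", "dob": ""},            # missing field
    {"id": "C4", "dob": "not a date"},  # inaccurate entry
]
cleaned = [{**c, "dob": parse_dob(c["dob"])} for c in claims]
```

The point of the design is that bad values degrade gracefully to missing rather than halting ingestion, so the rest of the record remains available to the model.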
Unlike regression models that rely heavily on structured data, the best predictive models today deploy a variety of statistical and analytical techniques to identify patterns and trends. The most sophisticated models use natural language processing (NLP), a branch of AI that extracts meaning from text such as adjusters’ notes and other text documents.
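To make the text-mining step concrete, here is a deliberately simplified sketch: scanning a free-form adjuster note for terms that signal medical complexity. Production NLP models go far beyond keyword matching, but the flow — unstructured text in, structured flags out — is the same. The term lists and category names are illustrative assumptions.

```python
import re

# Illustrative risk categories and patterns; a real model would use trained
# NLP rather than hand-written keyword lists.
RISK_TERMS = {
    "comorbidity": r"\b(diabetes|obesity|hypertension)\b",
    "surgery": r"\bsurg(?:ery|ical)\b",
    "mobility": r"\breduced mobility\b",
}

def extract_flags(note):
    """Return the set of risk categories whose patterns appear in the note."""
    text = note.lower()
    return {name for name, pat in RISK_TERMS.items() if re.search(pat, text)}

note = ("Claimant reports radiating shoulder pain; history of diabetes. "
        "Provider is discussing possible surgery next month.")
flags = extract_flags(note)
```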
This ability to read or text mine unstructured data (such as claim descriptions, adjuster notes, or any other free-form text) provides a rich view of the claim, allowing claims professionals to understand what is happening on a claim before costs become problematic. Claims with the characteristics that most influence costs, such as co-morbidities or upcoming procedures, can be identified early in the life cycle of a claim and directed to seasoned adjusters who can bring in cost containment resources early in the process, when they are most effective. Similarly, the least costly claims can be identified, fast-tracked and closed, freeing up claims resources for more complicated cases. This triage process routes each claim efficiently to the most appropriate resources. Much of the time-consuming, manual review that often goes into claims assignments is replaced by a process based on quantitative factors, which greatly increases claims automation. Increased automation leads to increased efficiency and fewer manual errors.
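The triage described above can be sketched as a simple scoring-and-routing rule. The weights, thresholds, and field names below are illustrative assumptions, not taken from any production model; the point is that quantitative factors replace a manual assignment step.

```python
def triage(claim):
    """Route a claim to a queue based on a few quantitative risk factors.

    `risk_flags` might come from text mining adjuster notes (e.g.
    co-morbidities, planned surgery); `days_open` and `reserve` from
    structured claims data. All weights are illustrative.
    """
    score = 0
    score += 2 * len(claim.get("risk_flags", []))
    score += 1 if claim.get("days_open", 0) > 30 else 0
    score += 1 if claim.get("reserve", 0) > 25_000 else 0
    if score >= 3:
        return "senior_adjuster"  # bring in cost-containment resources early
    if score == 0:
        return "fast_track"       # close simple claims quickly
    return "standard_queue"

complex_claim = {"risk_flags": ["comorbidity", "surgery"], "reserve": 30_000}
simple_claim = {}
```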
Moreover, the unstructured data accessed by AI is often more informative than the structured data. Static in nature, structured data typically gives a snapshot of injuries and fails to capture their evolving nature. Consider radiating shoulder pain that emerges after an injury has been coded, reduced mobility that develops weeks after an accident, or an adjuster’s note discussing possible surgery. These and other latent characteristics of a claim, which reflect the true medical complexities of an accident, are not reflected in structured data but are instead buried in adjusters’ notes or other text documents. AI now provides access to this data and a pathway for insurers to benefit from claims analytics and insight into the costs driving the potentially most expensive claims.
A rich view of the data also brings a new dimension of granularity that can reveal subtle, but cogent, shifts in insurance claims trends as they begin to emerge. Claims professionals can see whether key indicators are changing whenever they need to review claims performance, literally at the press of a button, instead of waiting until the end of the month or another arbitrary time that is often beyond their control.
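An on-demand indicator check of this kind can be sketched as a comparison of the most recent window of a key metric against its trailing baseline, so a shift is visible as soon as it emerges rather than at month-end. The window size, threshold, and sample figures are illustrative assumptions.

```python
from statistics import mean

def trend_shift(series, window=4, threshold=0.10):
    """True if the recent window's mean exceeds the prior window's mean
    by more than `threshold` (as a fraction of the baseline)."""
    if len(series) < 2 * window:
        return False  # not enough history to compare
    recent = mean(series[-window:])
    baseline = mean(series[-2 * window:-window])
    return baseline > 0 and (recent - baseline) / baseline > threshold

# Illustrative weekly average incurred cost per claim: a jump begins
# in the last few weeks and is flagged immediately, not at month-end.
weekly_cost = [100, 102, 98, 101, 99, 100, 115, 118, 121, 119]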
This improved granularity means that a claims professional can gain a sharper understanding of the specific factors that affect his or her company’s claims performance rather than relying on industry trends that may or may not reflect an individual insurer’s claims profile. The need to react to changes can be replaced by the ability to proactively adjust to shifts in trends and thoughtfully intervene as changes start to emerge.
Over time, claims predictive analytics can help to improve an insurer’s overall data quality, since the unstructured data accessed by AI is also used to verify an insurer’s structured data by identifying data that seems out of sync with known trends. This is often observed, for example, in the low rates of obesity in insurers’ claims profiles compared with known rates reported in the U.S. population. The best predictive models will also normalize differences in the way adjusters report data based on the insurer’s unstructured data. Investigating and reconciling these differences and anomalies will also reveal ways in which an insurer’s data collection process can be improved. Over time, this iterative loop between structured and unstructured data improves not only the overall quality of an insurer’s data but also the predictive capabilities of the claims model.
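The obesity example above can be sketched as a simple benchmark comparison: if the rate at which a flag is coded in the claims data falls implausibly far below an external population rate, the field is a candidate for under-reporting review. The benchmark value, tolerance, and field name are illustrative assumptions, not official figures.

```python
POPULATION_OBESITY_RATE = 0.40  # illustrative benchmark, not an official figure

def underreported(claims, field="obesity",
                  benchmark=POPULATION_OBESITY_RATE, tolerance=0.5):
    """True if the coded rate of `field` is less than `tolerance` times
    the external benchmark, suggesting the field is under-captured."""
    if not claims:
        return False
    coded_rate = sum(1 for c in claims if c.get(field)) / len(claims)
    return coded_rate < tolerance * benchmark

# Coded rate of 5% against a ~40% benchmark flags the field for review.
sparse_claims = [{"obesity": True}] + [{"obesity": False}] * 19
```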
Data that many insurers were unaware they had can now provide a gateway to claims predictive analytics that brings new insight to claims performance and bolsters insurers’ ability to compete in an increasingly data-driven market. Embracing the technology is simply no longer the risk that it once was, no matter what an insurer’s perceived data quality concerns may be. But to truly leverage the hidden intelligence in claims data, insurers need a trusted partner with deep subject matter expertise in data science, domain knowledge of the insurance industry and the ability to keep pace with rapidly changing technologies.
This expertise has been brought together in Milliman Nodal™, an analytics platform built specifically for the insurance industry and designed with actuarial science and claims operations expertise as an end-to-end solution for insurers. Companies that have deployed Nodal have reported average cost savings of between 5% and 15%.