The National Association of Insurance Commissioners (NAIC) met in Seattle in August, where a wide range of topics relating to the use of technology was discussed. The overall theme centered on the continuing need to balance two goals: protecting consumers, through legislation and model laws, against the use of advanced (and sometimes invasive) technologies, while harnessing emerging technologies to offer them improved products and services.
Earlier in the year, we reported from the April national meeting that the Big Data and Artificial Intelligence (H) Working Group was surveying the industry’s use of big data and artificial intelligence. The intent of the survey was to:
“Research the use of big data and artificial intelligence (AI) in the business of insurance and evaluate existing regulatory frameworks for overseeing and monitoring their use. Present findings and recommended next steps, if any, to the Innovation and Technology (EX) Task Force, which may include model governance for the use of big data and AI for the insurance industry.”
During the August meeting, the committee shared key findings from the survey, including insurance companies’ prevalent use of artificial intelligence/machine learning (AI/ML): 70% of the surveyed companies (those with $50 million or more in revenue) currently use, or plan to use, AI/ML as part of their business strategy.
Organizations choosing not to join the AI/ML bandwagon cited reasons including a lack of compelling business needs and a preference to wait for regulatory guidelines. A more detailed analysis of the survey is available in the NAIC report Materials – Big Data and Artificial Intelligence (H) Working Group (naic.org). To put things into perspective, according to Stanford’s Artificial Intelligence Index Report 2023, over $1.74 billion was invested in AI/ML projects in the Insurtech space. The NAIC anticipates that investment in AI will continue for the foreseeable future.
The Innovation, Cybersecurity, and Technology (H) Committee focused on collaboration between state regulators and insurance companies to develop a regulatory framework that balances the adoption of rapidly evolving technologies against data privacy and cybersecurity concerns. For example, the session heard comments on the Exposure Draft of the Model Bulletin on the Use of Algorithms, Predictive Models, and Artificial Intelligence Systems. The session chair, Kathleen A. Birrane, summarized a few key positions the NAIC, as an organization, should adopt. She discussed a principles-based (as opposed to prescriptive) regulatory framework and the need to focus on a governance model validated by industry practices; at the same time, the framework should take practical limitations into consideration.