Much Ado About AI at I.I.I. Joint Industry Forum

Published on January 25, 2019

By Lucian McMahon

You’re familiar with the buzzwords by now. Internet of things. Blockchain. Artificial intelligence.

At the 2019 I.I.I. Joint Industry Forum, a panel on artificial intelligence and insurance cut through the buzz. How can AI be used to help build resilient communities? And how can the insurance industry leverage AI to better help customers address risk?

Pictured left to right: Andrew Robinson, Sean Ringsted, Ahmad Wani, Kyle Schmitt, James Roche

New products, more resilience

Regarding resilience, Ahmad Wani, CEO and co-founder of One Concern, said that AI is being used to harness vast troves of data to identify, on a “hyperlocal level,” the impact of a whole range of hazards. His company is already doing just that, partnering with local governments and insurance companies to better plan for future losses. “We don’t need to wait for disasters to happen to prioritize the response, we need to make changes and to act now before the disaster,” Wani said.

Sean Ringsted, executive vice president, chief digital officer and chief risk officer at the Chubb Group, also pointed out that insurers are already expanding their product offerings thanks to AI and big data. Contingent business interruption coverage, for example: the sheer volume of available data now allows insurers to effectively analyze supply chain risks and price them accordingly.

Transparency and fairness are top of mind

But as Ringsted said, “it’s not all good news and roses.” What sorts of concerns should insurers and the public have about using AI?

Kyle Schmitt, managing director of the global insurance practice at J.D. Power, cited consumer concerns with the data and algorithms used for AI-enabled products. Consumers are deeply concerned with the security and privacy of any data they share with insurers. Per Schmitt, consumers also worry about the fairness of AI products when algorithms, instead of people, are making decisions in an opaque way.

This is the so-called “black box problem” of AI, in which complex algorithms will arrive at answers or decisions without anyone being able to explain how they did so. Ringsted stressed that, for AI to be a viable tool, its mechanisms will need to be explainable to regulators and the public.

James Roche, vice president, personal lines product development at ISO, echoed this sentiment: social responsibility requires both robust data stewardship and strict control over AI outputs to ensure that outcomes are fair and within ethical standards.

From a consumer perspective, ensuring ethical use of AI is critical. Schmitt said that at the end of the day consumers are open-minded, “but they just want some ground rules.”
