Ethics and Artificial Intelligence: 3 questions with Cécile Wendling, Group Head of Foresight at AXA
Relegated to the science fiction shelves until just a few years ago, artificial intelligence has now broken out of the speculative realm and entered our everyday lives. What role should it play within the insurance industry? Most importantly, how can the industry build a solid ethical framework to ensure that this powerful tool is used responsibly? We discuss these questions with Cécile Wendling, Group Head of Foresight at AXA, member of the European Commission’s High Level Expert Group on AI and AXA’s representative on the board of Impact AI.
Cécile Wendling also led a roundtable on governance tools for responsible AI during the conference organized by Impact AI on AXA’s Java site on January 25, 2019.
Artificial intelligence will impact insurance in several ways. First of all, it can change the way insurance companies interact with customers and improve the customer experience. Take the example of damage occurring overnight during a major disaster. At a time when a traditional call center may be closed or overwhelmed, we can now imagine customers contacting a chatbot or voice bot to get instructions on the first steps to take after a loss.
Another certainty is that AI will transform the way we work internally. Algorithms will enable us to work better and achieve a greater level of operational excellence.
The third point represents a radical change for the insurance business: insuring AI itself. In a world increasingly saturated with AI-based solutions, we need to be able to insure self-driving vehicles, autonomous shuttles and any type of Business 4.0 organization.
It’s clear that artificial intelligence not only permeates our industry but also extends well beyond it. As in other industries, it does not fundamentally alter our mission as an insurer, but it does invite us to think differently about our business.
We cannot engage with technology without asking ourselves what impact it will have on society and the planet. When we talk about responsible AI, we are asking some basic questions: what are our values? How can we apply our commitments to new products and decisions regarding artificial intelligence? In short, how can we be responsible by design as we develop smart systems and corporate governance tools?
But keep in mind, the intention is not to limit action – on the contrary! The goal is to define principles for action. By asking the right questions by design, we can avoid running into setbacks and stumbling blocks later on.
Today, many companies publish charters but do not follow up with any concrete measures. We take the opposite approach: one based on action. At AXA, the responsible use of AI is a major component of our social responsibility strategy and has already led to several initiatives. The AXA Research Fund finances research work that fosters the emergence of responsible AI. We make sure our own data scientists develop, or contribute to creating, tools for responsible AI. We also participate in the broader debate on these topics: for example, I am a member of the European Commission’s High Level Expert Group on AI, where my work aims to ensure that solid principles for governing AI emerge across Europe.
In concrete terms, implementing responsible AI first means laying the fundamental, essential “bricks”. Some of these bricks are technical: they amount to saying, “this algorithm must comply with these values”. Others are governance bricks, which must be integrated into decision-making processes. AXA’s role is to define these bricks and put them in place.
We are convinced that by making these commitments, we can inspire other players to follow our path. We also know that we cannot do this on our own.
That is the purpose of Impact AI, a non-profit organization with Association 1901 status of which AXA is a founding member. I chair its Board of Directors as a responsible-AI specialist. Together with other corporate groups that are also thinking about responsible artificial intelligence and about the right processes and tools to achieve it, we decided to share these bricks through an open library that can be accessed by any organization seeking to put them in place.
Sharing open tools, and encouraging others to do the same, can save us all a great deal of precious time. With Impact AI, we believe that by acting together we will greatly enhance our ability to put sustainable commitments in place for the long term.
We need to stay humble when it comes to technology in general and AI in particular: the road ahead is long and complex, both because researchers do not have all the answers and because the issue confronts us with questions we have never faced before. Every model has its limits, so we will all have to learn together and gain a strong grasp of every facet of new technologies as they emerge. Individual solutions will not be enough; what it will take is a community learning and sharing together. That is the aim of our commitment within Impact AI, which will allow us to formulate concrete measures for ensuring responsible and ethical AI.