Digital trust as a competitive advantage

15. November 2024 | Aktuell Allgemein Interviews
Digital trust as a competitive advantage: Lisa Bechtold is committed to innovation through artificial intelligence in a customer-centric way, in compliance with the law and relevant security standards.
Digital trust as a competitive advantage: Lisa Bechtold is committed to innovation through artificial intelligence in a customer-centric way, in compliance with the law and relevant security standards.

Lisa Bechtold, Head of AI Governance, is an inspiring and innovative leader in Zurich Insurance Group’s Technology Department. For her, the responsible use of advanced technologies such as artificial intelligence (AI) has been the foundation of successful, sustainable innovation in the company for many years. Due to her passion for digitalization and algorithmic decision-making processes, the trained lawyer continuously expanded her legal education (doctorate, LL.M. UC Berkeley) with a technological and strategic focus (MIT Sloan, Stanford Graduate School of Business). Lisa Bechtold is committed to AI-based innovation in a customer-centric way, in compliance with the law and relevant security standards.

thebroker talks to Lisa Bechtold, Head of AI Governance at Zurich Insurance Group

AI governance is a relatively new but strategically highly relevant area. How do you see the latest innovations, the added value that AI can bring to humanity? And how do you assess the risks?

It is important that we use AI solutions responsibly in a way that benefits humanity. The spectrum of potential AI-enabled benefits ranges from improvements in service quality and product portfolio expansions/upgrades to efficiency gains, increased productivity and cost reductions. The U.S. Treasury Department recently announced that USD 4 billion in losses could be effectively prevented in 2023 due to the use of AI for fraud prevention. A new generation of specialized Generative AI agents, orchestrated on an AI platform, now enables the identification and design of new use cases along an institution’s value chain. This showcases that, to a great extent, the potential of AI is yet to be explored! Importantly, though, the use of AI also triggers risks, for example in connection with data protection, model performance and error rates, and in the area of cybersecurity. In principle, the more autonomous AI systems become, the greater their vulnerability (example: prompt injection attacks). In the case of generative AI, misinformation (“hallucinations”) once again led to several scandals this year, which were widely discussed in public.

What does the responsible use of AI mean to you? How can you stand out as a company? Why do you consider this area so interesting or, in fact, even fascinating?

The goal of good AI governance must be to optimize the benefits of the technology while effectively minimizing the risks. AI governance is about the “how” we are using AI. How we use AI already affects almost all areas of our private and business lives. Importantly, the acceptance and the potential success of AI applications, tools and use cases will also be influenced by the respective legal system and the culture of a country or region. The questions that arise when assessing and evaluating AI systems and use cases are highly interdisciplinary and require a holistic perspective with a strong strategic component. For me, that’s what makes this field fascinating!

AI innovation, new, disruptive generative AI models are currently being launched almost on a weekly or monthly basis. How can you keep track of all these developments, and ensure that AI is being used in a safe, compliant and responsible way across the organization?

The current speed at which innovations in the field of generative AI are coming onto the market is indeed a challenge: The technology is becoming increasingly complex and sophisticated. Before major investments are made, both the actual added value and the potential risks must be understood. At the same time, it is also important to recognize this: We are living in an incredibly exciting time with disruptive, transformative technological developments. Being able to experience and help shape this is fascinating … and definitely releases additional energy!

When does Responsible AI (RAI) start for you in practice? Are AI systems being tested and evaluated right before a new AI-augmented product or service is being launched?  

Ideally, RAI begins with the development of the use case, including the data preparation and modelling, a forward-looking risk forecast, a preliminary legal review and strategic advice. When integrated early on in the development process, AI governance can create significant business value and promote responsible, high-quality innovation. Importantly,  governance remains relevant throughout the entire lifecycle of an AI solution, even after it has moved into production, in order to continuously ensure safe, compliant and high-quality outcomes.

New AI regulation is currently being developed in various countries and regions. The European Union’s AI Act is expected to have far-reaching implications. How do you view this and what does this wave of regulation mean for your daily work?

There is indeed currently a high level of regulatory momentum in the area of AI governance, both at national level and on the global political stage. Just think of the strong impetus from international organizations and institutions such as the G7, G20, the United Nations, the OECD and the World Economic Forum. However, if you take a closer look at the regulatory and legislative initiatives, you can see a great deal of convergence with regard to the fundamental principles for the responsible use of AI: transparency, explainability, robustness, security, fairness and the accountability of humans are among the guiding principles of the public policy discourse and international AI regulation. Such high-level provides a helpful orientation for organizations operating in multiple jurisdictions. Nevertheless, the implementation of new AI regulation, including new reporting mechanisms as required under the EU AI Act for high-risk AI systems, is a challenging task.

How are you and your team involved in AI innovation?

AI governance plays an important role for both internally developed and externally sourced tools and applications throughout the lifecycle of an AI system. Zurich has traditionally been committed to innovation in the area of digitalization and AI. In this context, we also coach international start-ups that develop AI-enhanced services relevant to Zurich and present them, for example, as part of the Zurich Innovation Championships. AI governance is an integral part of the value chain with a focus on promoting responsible AI innovation.

You cover a broad spectrum of specializations. From AI and data in all facets, to data governance and digital regulation, data protection, operational and digital risk management, digital transformation, sustainability and corporate governance. Which area interests you the most?

In fact, all of these areas are relevant and interlinked in my current, very interdisciplinary range of responsibilities. The sustainable use of new technologies such as generative AI is becoming increasingly relevant – there is significant potential for optimization in the areas of sustainability and technology worldwide. One key aspect is that, for the use of large language models (LLMs), energy consumption needs to be understood, and alternative AI models can be considered, without compromising on quality in terms of performance and output. Making a contribution to the sustainable optimization of AI is very important to me personally.

Since 2023, you have been a member of the AI Governance Alliance (AIGA) of the World Economic Forum. What are the tasks of this Alliance and what are your responsibilities?

The AI Governance Alliance of the World Economic Forum promotes global growth and economic prosperity through AI innovations, always putting humans at the center.

Our work focuses on responsible and sustainable governance, the use of robust, secure and resilient technology in line with regulation and technical safety standards for AI, which we co-shape. In this context, we evaluate findings and practical insights from the various industry sectors within the Alliance. Within the WEF, we also work together with the Cybersecurity Center, the Financial and Monetary Systems and the Health and Healthcare Center on joint initiatives for the safe and responsible use of AI. 

Do you expect AI to eliminate jobs at scale? How will AI impact the job market?

In my view, Fei-Fei Li, Director of the Human-Centered AI Institute (HAI) at Stanford University, once made a very clear point: AI can increasingly be used for tasks traditionally performed by humans. However, AI cannot and should not replace humans.  Earlier technical and technological revolutions, such as the industrial revolution in the mid-19th century, have also transformed the labor market, and yet: New jobs have been created. AI has already created new role profiles and will continue to disrupt the job market. Overall, the economic policy objective is to ensure that transformations triggered by technological revolutions benefit humanity, as emphasized in a study published earlier this year by the International Monetary Fund.

How do you see the benefits of AI for society at large, and for the corporate sector?

The spectrum of benefits that AI can generate for humans individually and for society as a whole is very broad. It ranges from improving the quality of decision-making (based on more complete and accurate information) to simplifying our everyday lives through AI applications (voice assistants, logins via facial recognition, various generative AI models such as ChatGPT, DALL-E, Gemini, Claude), which enable us as individuals to do new things and, at the same time, give us time back. For the greater good of society, AI is already being used for climate protection, including the analysis of satellite data, risk modelling and development of more efficient environmental protection measures. There are also grounds for optimism in the education and healthcare sectors. If used correctly, the transformation potential of AI is significant in many areas, both socio-politically and economically for a wide range of industries. In the insurance sector, in particular, traditional AI can serve to optimize risk modelling, while generative AI can achieve a multitude of efficiency gains, for example via knowledge bots or in the area of claims processing and fraud prevention.

AI not only learns to react to emotions, it can already be emotional itself. Elon Musk is working on the development of several robots. His already 2-year-old “Optimus” is said to have a head start on other car brands, thanks to its brain based on autopilot software (Tesla). This robot is undergoing further training and is to be offered at a price of 20,000 dollars. What’s your view of such developments?

There are actually various use cases where AI needs to be able to show empathy: For example, when answering calls to medical or emergency hotlines. Here, an AI system must be able to correctly classify a person’s emotional state via voice- or face-based emotion recognition, in order to generate an appropriate response to the situation and, if taken to the extreme, even save lives. It is crucial to note that AI-based analysis and handling of human emotions leads to great responsibility. This is a highly sensitive area that is becoming increasingly relevant in practice. Fundamentally, however, I do not believe that the goal should be to “humanize” AI systems. Rather, AI should be used to improve the quality of human life in the areas where it matters the most.

Please tell us how many hours are in your day and how do you balance everything?

My day also only has 24 hours, and the current technological advances happening at the speed of light require me to constantly adjust my priorities. Overall, I consider myself very fortunate to be able to work at the intersection of technology, law and business strategy, together with a fantastic team in a vibrant international environment. 

How does your multifaceted background help you in your daily work?

When assessing AI applications, tools and use cases, various risk factors such as the decision-making autonomy of the AI model, model complexity, data quality and data protection aspects as well as the possible implications (financial, operational, strategic aspects and reputation) must be understood and addressed. This requires a holistic approach and often interdisciplinary cooperation, in which my large network helps me a lot. It is important to clearly communicate often highly complex interrelationships in order to achieve a common understanding among a wide range of stakeholders.

What do you do in your “free time”?

I love the mountains and like to retreat there to find new energy and inspiration. Music, sport and the arts are also very important to me – but my family takes top priority in my (limited) free time!

Finally, we would like to know: Do you, like Elon Musk, also own one or more robots?

After experimenting with a robot vacuum cleaner for the first time a good 15 years ago, we currently limit ourselves to a window-cleaning robot, which is actually very efficient. All the other robots in our household, which can be used for much more complex tasks, are owned and controlled by our three children.

DISCLAIMER:

This interview reflects the personal perspective of Dr. Lisa Bechtold and not necessarily the perspective of Zurich Insurance Group.

Dr. Lisa Bechtold, LL.M. (Berkeley): Head of AI Governance, Zurich Insurance Group, Switzerland, Zurich

As Head of AI Governance, Lisa is driving innovation-enabling governance of Zurich’s AI solutions in line with international regulations, and industry standards at the intersection of strategy, riks management and cutting-edge technology. She joined Zurich in 2009 and held various senior positions in Group Legal (Corporate Finance, Governance), Group Risk Management (Digital Risk Governance, Data & AI Risk), before moving to Group Technology & Operations as Global Lead AI & Data Governance in 2021. Since 2023, she focuses on the operationalization of safe and robust AI governance.

Prior to joining Zurich, Lisa started her career as an attorney-at-law in M&A and Asset Management in Germany and the U.S. (New York). She holds a Ph.D. in international law (Cologne), an LL.M. degree (Berkeley) and completed an executive education program on AI at MIT Sloan School of Management as well as a Leadership Program at Stanford Graduate School of Business.

Read also: ATTENTION: AI use from a legal perspective


Tags: #AI #AI Governance #AI impact on job market #AI innovation #AI regulations #AI solutions #Benefits #EU AI Act #Generative AI #Handling of human emotions #Potential risks #Prompt injection attacks #RAI