ATTENTION: AI use from a legal perspective

6 May 2024 | General | Current | Interviews
ATTENTION: AI use from a legal perspective. David Rosenthal, Team Head/Partner at VISCHER AG.

AI tools such as ChatGPT or Microsoft Copilot are now familiar to most people and are used in many workplaces. However, data protection must not be ignored. In mid-March 2024, the EU passed the AI Act, according to which certain AI applications are to be banned.

thebroker in an interview with David Rosenthal, one of Switzerland’s leading experts in data and technology law. He is Team Head/Partner at VISCHER AG.

David Rosenthal, you speak several languages, including Chinese. How did that come about?

(Laughs) You’re confusing me with my digital twin, an AI-generated avatar of me. He can speak many more languages than I can, but his French supposedly has an African accent. I can’t judge that myself; I’m not gifted in languages. But I believe I should know from my own experience what I’m advising on. That’s why I have my own avatar. We have also made some for customers. An avatar like this is very practical for explainer and training videos. But I initially had an uneasy feeling when I saw how simple it was to create something like this – and how it could potentially be misused.

What about data protection when using AI tools such as ChatGPT and Microsoft Copilot?

This is a difficult issue. First of all, there is a whole range of variants of these tools with different contracts. Some fulfil the requirements for business customers, others don’t at all. And they change all the time. Transparency in this market is therefore not yet very high. A few months ago, we compiled an overview of the offerings from OpenAI and Microsoft; people practically tore it out of our hands. The good news, however, is that AI can be used in compliance with data protection regulations. Internally, we use a direct interface to the OpenAI language models and allow our people to use it for personal data as well. It doesn’t cost much either.
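Purely for illustration, here is a minimal sketch of what such a direct interface to the OpenAI language models could look like, assuming the official `openai` Python package and an API key held by the company; the model name and prompts are placeholders, not a description of VISCHER’s actual setup.

```python
# Minimal sketch of a direct call to the OpenAI API (rather than the consumer ChatGPT app).
# Assumes the official `openai` Python package and an API key in the environment;
# the model name and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are an internal research assistant."},
        {"role": "user", "content": "Summarize the key data protection duties when using AI at work."},
    ],
    temperature=0.2,
)

print(response.choices[0].message.content)
```

Whether such direct API access may also be used for personal data still depends on the contractual commitments obtained from the provider, as discussed in the interview.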

Are there differences in the use of free or paid versions?

At work, the free versions should only be fed with prompts and information that do not contain any data about other people, no customer data and no company secrets. Then they are fine. With the paid versions, a company can ideally obtain a contract and other commitments that allow it to go further. By commitments I mean, for example, a promise that the inputs and outputs of an AI system will not be used by the provider for training purposes; such training use is understandably a no-go for most businesses. We recently published an AI checklist for provider contracts. We use it to check the contracts of all kinds of providers for our customers.

Does data protection law have to be complied with?

Oh yes, of course, as always when we process data about other people at work. There are basically no new rules when it comes to AI, and your gut feeling is usually a good guide. We all know, for example, that an AI can produce incorrect or biased answers, and we basically also know that we must not use such answers in an uncontrolled manner. Data protection requires this, too, when it comes to personal data. Sometimes the distinction is more difficult. For example, if we have a job applicant’s CV checked by an AI: do we have to tell them? Many people now think no, because it is somehow clear that companies do this kind of thing, but there are also other views. Data protection also says that if it is important for someone to know what we are doing with their data, then this must be made transparent. And if I let the AI automatically sort out job applicants, then a majority believes that this must in any case be made transparent and that the person must also be able to speak to a human being. And again, this is also what the Data Protection Act says. Yet the question of whether their data can be used for AI training is ultimately less clear. Here, for example, we advise companies to describe these use cases in their privacy policy; then they at least have something to rely on. Data protection law is often not a precise science.

What rules are needed when using AI in companies?

We advise companies to do three things. Firstly, they should appoint a person for each AI application who is responsible for data protection and other legal requirements; this is usually the person who also decides on the application in other respects. Secondly, employees should be told how they should use AI, e.g. which tools they are allowed to use with which data, which they are not allowed to use, whom to report incidents to and what rules of conduct are expected of them, ideally in conjunction with training. Thirdly, a company should not simply allow an uncontrolled proliferation of tools and applications, but should check each use for its risks, consider how these can be minimized if necessary and keep an overview of what AI is being used in the company. If you want to go through this systematically, you can take a look at GAIRA Light, a tool we developed and have made available free of charge as open source. It also helps with data protection compliance.

Will the AI Act also apply to Switzerland?

Yes, it will, but only under certain circumstances. On the provider side, it applies in Switzerland to anyone who brings AI systems onto the EU market, distributes them there or uses them there. On the user side, it always applies in Switzerland if an AI system produces an output that is also to be used in the EU. In other words, if someone also operates a chatbot for people from the EU or sends AI-generated texts to people in the EU. It is not yet clear whether we will get something like the AI Act in Switzerland, but I don’t think so. The Federal Council wants to report at the end of this year on what it believes is necessary for Switzerland in this regard.

Which AI applications are banned?

Not many. The AI Act is actually about product safety. It defines high-risk AI systems that have to fulfil certain requirements, just like other critical products such as pacemakers, lifts or toys. The AI Act does not aim to regulate the use of AI in general; it has only very few rules for normal AI systems. And it prohibits only a few applications. For example, if an employer wants to use AI to determine the emotions of its employees. They are allowed to check for fatigue, as this serves to protect health, but letting the AI use data to find out whether an employee is angry with their boss is not allowed. However, companies are allowed to do this with customers, and more and more companies are actually doing exactly this. Social scoring, as practiced in China for example, is also a prohibited practice. So it’s not just about things that the private sector does; government AI applications are also being restricted. At least in the EU.

What consequences will the EU AI Act have for companies?

Anyone who wants to launch an AI-supported product on the market will have to deal with it. Unless they only want to do so outside the EU, but that is unlikely to happen. I therefore expect the AI Act to become a kind of “global standard” for the AI products it covers. As I said, this is only a small part of the range. However, it makes things much more complex for the providers concerned once they are in this area. This area primarily includes those AI applications that have been defined as “high-risk”, such as the example I mentioned of analyzing applicants’ CVs. Start-ups in particular will struggle with this. And the AI Act is by no means the only law that needs to be complied with. I don’t like the flood of regulation, especially from the EU, but it is a reality and will not go away. Fortunately, the AI Act will only have a moderate impact on the user side of AI in Switzerland. We can only hope that users will benefit from the higher quality and safety of the products, even if we are not in the EU.

Can the use of AI be regulated by rules and bans at all?

Of course it can. Companies will usually want to comply, or at least try to. The big ones certainly will; otherwise there is a threat of legal action. This will probably be less the case for users, especially outside the EU. But FINMA, for example, expects regulated institutions to comply with the AI Act where it is applicable. It will certainly also have a positive effect. But at what cost? We advisors will benefit, but the Act will not be able to solve all the problems of AI. The legislator doesn’t even know what these solutions really look like. It does set out a whole host of requirements as to what AI systems must or must not do, but almost all of them are buzzwords that simply pass the hot potato to the providers, in the sense of: solve this or you’ll get a fine. And the Act doesn’t even address some of the important issues, such as the problem that the development of powerful AI models requires more and more data and resources, which fewer and fewer players in the market can provide. This will lead to oligopolies and new dependencies, with all their consequences.

Who will be responsible for AI governance in companies in the future? Will there be, or will there have to be, AI officers?

The company management. Under Swiss law, the board of directors has ultimate responsibility. It will then delegate this to the management board, but must oversee whether and how compliance is implemented. We recommend that companies distinguish between those who operate AI for the business and decide on it, and those who advise internally on which rules to fulfil and who monitor compliance. There is no need for an AI officer in the sense of a separate legal function dedicated to AI, comparable to the data protection officer. Some larger companies have an AI officer, but that role usually has nothing to do with compliance; rather, its task is to drive forward the use of AI. In a smaller company, the person who was previously responsible for data protection will usually also take care of the AI issue, and the management will then decide which risks are taken, which tools are used and what employees are allowed to do.

Can copyrighted works be used for training AI models?

Here I have to give the standard answer of all lawyers: it depends. Firstly, it depends on the country, secondly on who is being trained and for what purpose, and thirdly on how and under what conditions the content has been made available. The usual practice is that everything that is somehow publicly accessible is used, but publicly accessible does not simply mean freely usable, which is why the first court cases to clarify the limits of copyright are already underway. OpenAI has been sued several times. Put simply, the rule in copyright law is that the owner of the rights to a work may determine what is done with it, i.e. I need their consent – or the law provides for an exception. So I need permission, either from the rights holder or from the legislator. We are now discussing questions such as whether a publication implicitly includes this authorization or whether certain cases of authorization provided for in the law apply. One example is the right to make copies of published works for internal information purposes. Researchers also have their own exceptions. And then there are arguments that training a computer does not require any permission at all, for example because it is like a person reading a text – no permission is needed for that either. Or the exact opposite is argued, on the grounds that the training of a computer is not aimed at the further use of the work. So there are many exciting discussions in legal theory.

What consequences can be expected if such works are used?

My pragmatic assessment: not many. Of course, we could come to the legal conclusion that a company has infringed copyright by training its AI and is therefore liable to pay compensation, or that those responsible could even be fined under criminal law. This can and certainly will happen. In the USA, the lawsuits against OpenAI, Microsoft and the like are likely to lead to settlements involving large sums of money. But the wheel of time cannot be turned back, and there is a great social need for us to feed AI models with the knowledge of our world – and this knowledge can be found in books, newspaper articles, YouTube videos, online platforms and so on. I am convinced that where we have room for maneuver in interpreting the law to enable such developments, it will ultimately be used. At worst, the law will be adapted, but this is usually not necessary. And in the end, it’s often just about money and finding a balance. Today, YouTube & Co. are used as a source for AI training. But platforms like YouTube only became so big and popular because people were able to upload any videos they wanted to these platforms – regardless of copyright restrictions. The operators then found ways and means in the background to appease rights holders and let them share in the profits from the business. That is the macro view. Of course, an individual company still has to clarify what can be done with which data and how. That’s our job. Companies have often even contractually agreed what they are allowed to do with which data, for example their customers’ data. I would recommend taking a close look and clarifying what is possible.

Under what conditions can it be used?

If a company wants to use its customers’ data to train an AI, then it’s best if it stipulates this in the contracts and in any case does not exclude it. Some confidentiality clauses in contracts, for example, not only contain the rule that the other party’s data must be kept secret, but also that it may not be used for purposes other than the contract. In that case, the data may not be used for training that does not serve the contract. If data relating to individual, identifiable people is also to be used for the training, then this should also be noted in the privacy policy as a purpose of use. In our experience, however, the reverse case, where an AI tool is used and it generates a copyrighted work of a third party that may not be used for this reason, is very rare, unless someone provokes the system into doing exactly that and, for example, circumvents the filters and guardrails that many AI service providers use today to prevent this. There are also various strategies, such as prompting, that can be used to minimize the risk of copyright infringement. In other words, in the end it’s about learning how to deal with these new technical possibilities of AI, just as we once had to learn to deal with the internet. Today, the internet is completely ubiquitous and people get on well with it.
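Purely as an illustration of the kind of prompting strategy mentioned above, here is a minimal sketch in Python; the guardrail wording, variable and function names are hypothetical examples, not any provider’s actual filters.

```python
# Purely illustrative: a prompt-level guardrail against verbatim reproduction of
# third-party works. Wording, names and structure are hypothetical examples.
COPYRIGHT_GUARDRAIL = (
    "Answer in your own words. Do not reproduce song lyrics, book passages, "
    "press articles or other third-party texts verbatim; paraphrase briefly "
    "and name the source instead."
)

def with_guardrail(user_question: str) -> list[dict]:
    """Build a chat message list that prepends the guardrail as a system message."""
    return [
        {"role": "system", "content": COPYRIGHT_GUARDRAIL},
        {"role": "user", "content": user_question},
    ]
```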

David Rosenthal studied law at the University of Basel in Switzerland, but also has a background as a software developer. He has been active in data and technology law for over 25 years and is known far beyond Switzerland for his work, tools and publications. Today, he is a partner at the Swiss business law firm VISCHER, where he heads the data law department, among other things. He is a lecturer at the Swiss Federal Institute of Technology ETH in Zurich and at the University of Basel. www.rosenthal.ch



Tags: #AI #AI Act #AI application #ChatGPT #Data protection #Legal perspective #Microsoft Copilot #Prohibited #Regulation #Responsibility