The Unbearable Responsibility of Artificial Intelligence

By Dr. Aliz Kovács


Artificial intelligence (AI) has become indispensable not only in our daily lives but also in the world of advertising agencies.

Undoubtedly, AI will transform the way we work, as well as the positions and market opportunities we know today. If an agency wants to remain competitive, it needs to continuously monitor the market and AI-based tools to understand how they can assist its work or bring exciting nuances to campaigns. However, this requires a great deal of attention and careful risk management. Each AI-based tool needs to be examined on a case-by-case basis, weighing the potential risks, as our colleague, legal expert Aliz Kovács, suggests.

Companies have found themselves in a difficult situation because employees have already started using AI-based tools in their work: they save energy and money, provide inspiration, and help with experimentation. From a legal perspective, however, they are operating in uncertain territory. It is therefore now essential for companies to provide internal guidelines for their teams on how and which tools they may use in their work. Failure to do so can expose both the company and its clients to serious risks. It is no wonder that, in addition to tech giants, many banks have decided to prohibit their employees from using ChatGPT.

What legal dilemmas surround the use of AI?

Let’s start with general questions and then focus on generative artificial intelligence.

Perhaps one of the most important questions is clarifying who bears responsibility, a question to which, according to experts, the current regulatory environment does not provide a satisfactory answer. In September 2022, the European Commission published a proposal for a directive aimed at addressing liability issues related to AI. Although the proposal has not yet been adopted, its wording holds the operator of an AI system responsible for the damage the system causes and eases the burden of proof for injured parties. According to the proposal, if the damage can be attributed to human action or omission, and the AI system only provided advice or information that a human agent took into account, determining causation is no more difficult than in situations where no AI system was involved. However, these systems are highly complex, and significant professional background knowledge is needed even to prove whether the damage occurred due to the AI system's involvement or contribution, or independently of it.

Another question concerns data protection and the right to privacy, as people may share personal data with AI-powered tools. It is therefore essential that such tools comply with the relevant data protection regulations. In April 2023, the Italian data protection authority temporarily banned the operation of ChatGPT in Italy, citing a data protection incident in March 2023 in which email addresses and banking data were leaked. The authority stated that users were not adequately informed and that the legal basis for the data processing was insufficient, and it listed the conditions the alleged infringer would have to meet to avoid a ban on its operation. Beyond personal data, a significant risk for agencies is the leakage of clients' trade secrets. It is therefore essential to examine AI-based tools from this perspective as well, in addition to educating employees.

The third major issue is discrimination, as AI algorithms can exhibit errors and biases in their decision-making processes, leading to discrimination against racial, religious, gender, or other groups. This is particularly relevant when using such software for workforce selection processes. One striking example is the infamous case of the Dutch tax authorities, where AI was used to verify whether parents were lawfully claiming childcare benefits. Nearly 26,000 parents were wrongly accused of tax fraud, and legal proceedings were initiated, resulting in significant fines. It later emerged that the AI-based tool used for the verification made discriminatory decisions based on biased patterns. Among other things, it was determined that the tax authority itself, working in secret and lacking full knowledge of the AI tool’s operation, trained the tool with data that included information such as religion, nationality, and origin.

Additionally, the use of AI was overly automated, lacking human oversight and a human final decision. Although the case is dated and technology has since surpassed the tool the tax authority used, the root of the problem remains the same: if an AI-based tool is trained on inappropriate data, its results will be distorted.
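To make this concrete, here is a deliberately toy sketch in Python (the data is synthetic and invented purely for this illustration, unrelated to any real authority's system): a classifier trained on historical labels that are already biased against one group will faithfully reproduce that bias.

```python
# Toy illustration: biased training labels produce a biased model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # hypothetical protected attribute (0/1)
income = rng.normal(50, 10, n)      # a legitimate-looking feature

# Biased historical labels: group 1 was flagged as "fraud" far more often,
# independently of any real behavior.
fraud = (rng.random(n) < np.where(group == 1, 0.30, 0.05)).astype(int)

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, fraud)

# The model has simply learned to flag group 1 at a much higher rate.
print("mean predicted risk, group 0:",
      round(model.predict_proba(X[group == 0])[:, 1].mean(), 2))
print("mean predicted risk, group 1:",
      round(model.predict_proba(X[group == 1])[:, 1].mean(), 2))
```

The point is not the specific model but the pipeline: no amount of downstream sophistication repairs labels that encode discrimination.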


The role of AI in disinformation and misinformation is also not negligible. It is becoming increasingly difficult to avoid manipulation and deception: AI-generated texts, images, and videos, not to mention deepfakes, make it a serious challenge for consumers to discern what is real and what is not. On the other hand, human-led news sources and artistic creations may become more valued as a result.

The main points discussed above address the questions lawmakers are actively grappling with. However, it is not an easy task to create sustainable regulations for a field that grows and evolves at such a rapid pace.

Now let’s focus on generative artificial intelligence, which has already infiltrated agencies’ daily operations.

Generative AI is a category of artificial intelligence systems that can produce text, images, or other content: it learns statistical patterns from its input data and uses them to generate new content.
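For concreteness, this is roughly what using such a tool from code looked like at the time of writing. A minimal sketch assuming the pre-1.0 OpenAI Python SDK; the model name and prompt are illustrative only, and the API key is expected in an environment variable.

```python
# Minimal sketch: generating text with the pre-1.0 OpenAI Python SDK.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code credentials

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a copywriting assistant."},
        {"role": "user", "content": "Suggest three taglines for a coffee brand."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Every legal dilemma below ultimately concerns what goes into that prompt and what we do with the content that comes back.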

As users of AI-based tools relying on generative AI solutions, we may encounter the following main dilemmas:

Who owns what we create with AI?

In the case of AI-based software, intellectual property law offers two forms of protection: copyright and, in certain cases, patents. These legal instruments can ensure that our intellectual property is not used or profited from without our permission. Both inventors and authors can only be natural persons. Although AI-generated content raises questions about who the owner is, current practice still follows the principle that AI itself cannot hold copyright or patents; only a natural person can. Whether a work is protected by copyright depends on the involvement of a natural person: if the AI is merely a technical, supportive tool, the human will be considered the author. However, if the person plays a passive role or does not participate in the process at all, the resulting work will not be protected by copyright. Therefore, if we use a chatbot-based system for brainstorming in order to create a social post, the copywriter will still be the copyright owner. Conversely, if we generate an image with an AI tool and add no value to the final work beyond the instruction (the prompt), we cannot be the copyright owners of the work, and copyright protection will not apply to it (as confirmed by the opinion of the United States Copyright Office). Of course, each case needs to be evaluated individually, weighing the level of AI and human involvement, to establish whether copyright protection exists.

What does this mean in practice? Let's take OpenAI, the developer of ChatGPT, as an example, based on the license agreement users enter into with the platform. According to the license agreement issued on March 14, 2023, OpenAI assigns the rights in the content generated by its tools to the user to the fullest extent possible. However, it is the user's responsibility to ensure that the content complies with applicable law and with the license agreement. Content generated this way can therefore be used freely, even for commercial purposes, but responsibility for legal compliance rests with the user, which leads us to the next legal dilemma.

How do we know that AI did not plagiarize what it created?

In the spirit of transparency, it is important to know what data a particular AI tool has been trained on because, as discussed above, inadequate training data can lead the AI to generate false, misleading, or discriminatory responses. This information is not publicly available for OpenAI; all we know is that its models, including GPT-3.5, were trained on publicly available internet data up until September 2021, while newer models have access to real-time data. It is crucial to understand that the fact that a work (such as an image, literary work, or piece of music) is publicly accessible on the internet does not automatically mean that it can be freely used for commercial or any other purposes from a copyright perspective. This brings us to a crucial question for agencies: we only deliver products to our clients that do not infringe the rights of third parties. This means we do not use images, music, or literary works for which we lack the appropriate usage permissions. This cannot be ensured with OpenAI's models (such as those behind ChatGPT or DALL-E 2). Adobe, on the other hand, has released a generative AI tool called “Firefly,” which, according to the company, has been trained entirely on properly licensed content.

In January 2023, it was reported that Getty Images had initiated legal proceedings against Stability AI for copyright infringement, alleging that millions of its licensed images were used to train Stability AI's tool. The fact that Stability AI uses open-source code may help prove the case; either way, the outcome of the lawsuit is expected to be a landmark decision.

The proposed EU AI regulation already formulates a transparency requirement for generative AI tools: they must disclose the copyrighted works they have been trained on. The proposal also analyzes and classifies AI systems used in various applications according to the risks they pose to users, with each class entailing different obligations for providers and users. On June 14, 2023, Members of the European Parliament adopted their compromise amendments on the AI regulation. Negotiations with member states will now commence in the Council to finalize the regulation, potentially making the European Union the first in the world to comprehensively regulate the use of AI-based tools.


But what should we do then?

We need to continuously monitor the latest tools not only from a technical standpoint but also from a legal compliance perspective.

  • It is important to check what data a particular tool has been trained on, if we have the opportunity to do so.
  • If possible, it is advisable to configure the tool, especially when using it in a corporate environment, so that it cannot store the provided information and results or use them for further learning.
  • Do not share personal data or trade secrets with such tools, and avoid asking questions so specific and niche that the trade secret or brief could be inferred from them (see the sketch after this list for a simple illustration).
  • Avoid misleading consumers by omitting the fact that the content was generated with the help of AI, assuming, of course, that the content can be published at all.
  • Always involve human proofreading: the final decision should be made exclusively by a human, and the operation of the AI should be continuously monitored. Before publishing content created this way, subject it to a legal review that covers both the general post-production legal checks and the legal specifics related to AI.
  • It is crucial to maintain a critical approach to AI-generated content and not to blindly trust the credibility and authenticity of easily obtained answers.
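As a simple illustration of the points above on personal data and disclosure, here is a hypothetical in-house guard. The patterns and function names are our own invention, not any vendor's API, and a production setup would need far more robust redaction.

```python
# Hypothetical guard: redact obvious personal data before a prompt leaves
# the company, and label published output as AI-generated.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s/-]{7,}\d")

def redact(prompt: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholders."""
    prompt = EMAIL.sub("[EMAIL REDACTED]", prompt)
    return PHONE.sub("[PHONE REDACTED]", prompt)

def label_ai_content(text: str) -> str:
    """Append the disclosure that the content was generated with AI."""
    return text + "\n\n(Created with the help of AI; reviewed by a human.)"

# Usage: only the redacted prompt is sent to the external tool.
raw = "Draft a reply to jane.doe@example.com, phone +36 30 123 4567."
print(redact(raw))
```

Even with such a guard in place, the human review and legal checks described above remain indispensable.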

Although many brands confidently embrace AI, we never know how much money they have set aside to handle potential legal risks down the line. Agencies, in turn, have a significant role to play: not only informing clients about the latest and most exciting AI-based tools, but also assessing and addressing the legal risks arising from their use.

Finally, we asked AI for a punchline: We hope that our article on the legal risks and evaluation of AI tools did not cause too much anxiety, and if any litigation arises, don’t forget to activate the “AI Guardian Angel” mode, and your legal problems will evaporate like a non-existent database in the cloud! Thank you, ChatGPT!

The above article should under no circumstances be considered legal advice. Its purpose is to provide general information on questions related to artificial intelligence. If you need legal advice, we recommend consulting a law firm with the appropriate expertise.