Legal issues concerning ChatGPT
The chatbot ChatGPT from the American technology company OpenAI attracted enormous attention upon its release. By February 2023, the chatbot had 100 million active users, making it the fastest-growing internet application of all time. A wide variety of sectors are now exploring how ChatGPT can optimize their workflows.
However, the innovative nature of ChatGPT confronts users and third parties with a range of legal challenges, which we outline below.
Copyright
ChatGPT’s ability to produce high quality text raises several copyright issues. These include, in particular, the question of who can make copyright claims in the chatbot’s products and how ChatGPT can cite sources in its texts.
Copyright in ChatGPT's output
A work is protected by copyright only if it reaches the required level of creative originality ("Schöpfungshöhe", Section 2 (2) UrhG). This threshold can only be met by a human creator. Copyright protection for ChatGPT's output as such is therefore excluded; the same applies under American law (see e.g. here). However, a text generated with ChatGPT may nevertheless be a protected work if the human user's contribution qualifies as a personal intellectual creation. And if ChatGPT's answers are compiled in a manner requiring a substantial investment, an ancillary copyright (database producer right) arises under Sections 87a–87e UrhG.
Claims of the originator of the original source
Contrary to what the label "artificial intelligence" (AI) suggests, ChatGPT has no knowledge of its own; it generates its answers from existing texts. These texts may themselves be protected by copyright under Section 2 UrhG. For users, the problem is that the chatbot cannot indicate which sources its information comes from, making it almost impossible to assess to what extent ChatGPT has drawn on any individual text. If a text written by ChatGPT is published, there is therefore a risk that the author of an original source will assert copyright or ancillary copyright claims.
Liability issues
If damage occurs as a result of a ChatGPT response, the question arises whether OpenAI can be held liable.
Applicable law
Resolving liability cases arising from autonomous wrong decisions by ChatGPT is difficult under the current legal framework. The starting point for liability in German civil law is generally a human act, which is absent where ChatGPT generates an answer on its own. Under the German Product Liability Act, the producer of a product is liable for personal injury and damage to privately used property without proof of fault. However, that Act only applies to physical products. In addition, producer liability under Section 823 (1) of the German Civil Code (BGB), which also extends to non-physical products, may come into consideration. Here, the infringement of a protected legal interest must result from the breach of a duty of care (e.g. a design defect or a failure to monitor the product); if such a breach exists, the producer's fault is presumed. The problem: since the inner workings of highly developed AI systems can hardly be traced by humans anymore, it is difficult to prove that the damage resulted from the breach of such a duty of care.
AI Liability Directive
The EU Commission is planning a further piece of legislation on AI: the European AI Liability Directive, intended to update non-contractual liability for autonomous errors by AI systems. Its centerpiece is Art. 3 of the draft directive, under which the operator of a high-risk AI within the meaning of the AI Act can be obliged to disclose available evidence where that AI is suspected of having caused harm to another person. Because the decision-making processes of sophisticated AI such as ChatGPT are often barely comprehensible, Art. 4 of the draft presumes the operator's fault in certain cases (e.g. where the operator has breached a statutory duty of care). The directive does not, however, provide for general no-fault liability for damage caused by autonomous decisions of an AI system. In any event, if the AI Liability Directive is adopted, the evidentiary problems described above would be reduced and claims against AI operators would be more promising.
ChatGPT and data protection
ChatGPT offers companies great potential for a wide range of uses. For example, the social network Snapchat is planning to add a chatbot based on ChatGPT to its app. However, if personal data of third parties is passed on to ChatGPT, this must be permissible under data protection law. Equally, OpenAI must be able to demonstrate a legal basis for the processing of personal data in the tool (which, according to the Italian data protection supervisory authority, is currently not the case).
OpenAI as a processor under Art. 28 GDPR
Anyone who processes personal data of third parties (e.g. name, age, gender, health status or marital status) may disclose such data to other persons only under certain conditions (Art. 28 of the General Data Protection Regulation, GDPR). Where the processing is carried out on the controller's instructions, a contract must be concluded in which the processor undertakes to adequately protect the data subjects' personal data (a data processing agreement, Art. 28 (3) GDPR). Without such an agreement with OpenAI, transferring third-party personal data violates the GDPR. OpenAI, however, currently concludes such an agreement only with users of the GPT programming interface (cf. Section 5c of the OpenAI GTC), not with users of ChatGPT. Consequently, no third-party personal data should be entered into ChatGPT.
Transfer to the United States
Both OpenAI's headquarters and its servers are located in the United States. If personal data is transferred to a non-EU country, special requirements apply under Art. 44–50 GDPR. If an adequacy decision of the European Commission exists for the country in question pursuant to Art. 45 GDPR (as is the case, for example, for Canada and Japan), data can be transferred to that third country without further hurdles, provided the other data protection rules are complied with. No such adequacy decision currently exists for the USA, after the ECJ also declared the so-called "Privacy Shield" framework invalid.
Since data is transferred to the USA when ChatGPT is used, appropriate safeguards for the protection of personal data must be in place in accordance with Art. 46 GDPR. This can be achieved, among other things, through standard contractual clauses. As users of ChatGPT currently have no way to conclude an agreement incorporating such clauses, the processing and transfer cannot be carried out in a legally compliant manner. We therefore currently recommend not submitting any personal data to ChatGPT, as its transfer to the USA is not permitted at present. General queries that do not involve personal data remain possible, however.
Data about private individuals (“right to be forgotten”)
ChatGPT is currently a useful tool, yet the chatbot's answers are often incorrect. If ChatGPT provides incorrect information about a living person, the data subject has a right to rectification under Art. 16 GDPR. In certain cases, the right to be forgotten under Art. 17 GDPR may also apply. In its "Google Spain" decision (Case C-131/12), the ECJ ruled that search engine providers may be obliged to delete a search result even where there is no claim against the operator of the website itself. It is conceivable that the ECJ will rule similarly with regard to services such as ChatGPT.
AI Act
The AI Act ("AI Regulation"), currently in the legislative process, is intended to regulate the use of AI throughout the EU. Some AI applications are to be banned outright (e.g. "social credit" systems). In addition, a category of "high-risk applications" is to be introduced: such applications are not prohibited as a matter of principle, but their operation is subject to strict requirements. If an AI is trained on data sets (in particular through machine learning), it must be ensured that the data is representative, error-free and complete, and discrimination must be prevented. Operators are also subject to a range of further transparency, security and information obligations.
Under the AI Act, ChatGPT and other chatbots are to be classified as high-risk applications. Whether and how the operation of ChatGPT can be brought into compliance with the AI Act remains entirely uncertain; it will be up to OpenAI to implement the Act's various requirements. As no end date for the AI Act's legislative process is yet in sight, its rules are not yet binding. ChatGPT will, however, certainly become one of the most interesting use cases for the new law.
Conclusion on the legal issues surrounding ChatGPT
ChatGPT will certainly occupy the courts many times over in the coming years. Classifying ChatGPT within the current legal system causes considerable difficulty: many civil law provisions date from a time when AI of this capability was not yet conceivable. Legislative action will be required at various points to ensure that advanced AI systems are dealt with appropriately.
Your expert for data protection law
Dr. Matthias Lachenmann, Attorney at Law and Partner
Phone: +49 221 / 270 956 – 180, e-mail: matthias.lachenmann@bho-legal.com