May 03, 2023

Use of AI-Generative Tools: New Horizons, New Risks

It seems that, these days, everywhere you look there is a new artificial intelligence (AI) tool doing something that, until recently, seemed impossible. As the field of AI has evolved, the proliferation of the internet in everyday life and our increasing dependency on, and willingness to adopt, new technologies have led to the development of new predictive algorithms and other models based on machine learning.

The most interesting and visible development in the field of AI in recent times is AI-generative tools. These are AI tools that help you generate content. Some are designed to create visual art (such as Stable Diffusion or Midjourney) while others can create written content (such as ChatGPT) or computer code (such as CoPilot). These AI-generative tools are no longer merely a novelty; increasingly, businesses use them to create their products and offerings.

As with any ground-breaking new technology, the deployment and current popularity of AI-generative tools create enormous potential, but they also raise many unanswered legal questions and create new types of legal risk. Most of these risks are difficult to mitigate given the lack of a suitable statutory framework defining what should and should not be done when using AI-generative tools. It is well known that the law lags behind technology – even more so with technologies such as AI-generative tools, which legislators had not even imagined when they enacted our current laws and regulations. This means that businesses using AI-generative tools enter legally uncharted territory, requiring them to navigate risks that may not have been previously considered and a regulatory landscape that does not yet fully exist. It is worth mentioning that, while lawmakers are still scrambling to react, the courts have already been called into action – several litigation cases are pending with respect to certain legal aspects of AI-generative tools. It is not for nothing that Mira Murati, the CTO of OpenAI (the company responsible for developing ChatGPT), has called on governments and regulatory authorities to regulate the field.

While changes to the legal framework in the field of intellectual property (IP) will be needed – and are expected – in response to the rapid development of AI technologies, the widespread use of AI-generative tools, which became a reality almost overnight, poses a completely new set of challenges with respect to works created using them. Use of these tools raises questions ranging from whether works created with AI-generative tools merit IP protection at all, and who owns them, to whether their creation infringes the rights of third parties (including third parties whose content was used to train the AI-generative tool, or whose content the tool has access to and may use to generate new content), and whether the use of AI-generative tools can lead to unintended and unwanted disclosure of confidential information and loss of IP rights.

Below is a practical guide briefly summarizing the main legal risks we currently identify in the use of AI-generative tools and how businesses should approach them:

  1. Prior to using AI-generative tools, businesses should carefully review the relevant tool’s terms of use and IP-related policies. Ultimately, an AI-generative tool is software, and its use is subject to legal terms. Understanding the applicable contractual terms is the first step in understanding whether content used in or created by an AI-generative tool will be free from claims by the tool’s publisher, what representations, assurances, and indemnities the publisher provides, and the degree of legal protection it is willing to afford its users. The terms of use or service agreements should contain – at the very least – relevant indemnities for IP infringement arising from use of the tool.
  2. Businesses should take into account that their ability to claim ownership of anything created through the use of AI-generative tools may be questionable. This goes beyond the risk of infringement claims by third parties: the business using the tool may be unable to claim copyright ownership of the works generated in this way. Beyond the question of who actually owns the resulting content (even if the terms of use provide that the user does, this is merely a waiver by the tool’s publisher and does not protect against third-party claims), it is possible that content created using an AI-generative tool will not be afforded any IP protection at all, since most IP laws were created with a human inventor or author in mind.[1]
  3. Another point to consider is that the use of an AI-generative tool may subject the generated content to the terms of third-party licenses that the business did not intend to accept. This can be especially problematic where such licenses are “copyleft” licenses, which may open the door to undesirable legal effects (such as obligations to share generated computer code with the general public for free). It is therefore important to run open-source analysis tools on any computer code generated by an AI-generative tool before using it.
  4. Businesses that use AI-generative tools as part of their product or service provided to customers should also consider disclosing such use to customers, and expressly specifying or disclaiming matters related to IP ownership and liability in their customer agreements.
  5. It is also recommended that companies review and update their internal policies to clarify that feeding confidential information into AI-generative tools in the form of prompts is to be avoided.[2]
  6. There is some uncertainty as to the use of copyrighted materials to “train” an AI-generative tool, where elements of such materials may end up in content generated by the tool.[3] This should be taken into consideration both by companies that develop AI-generative tools and by businesses that use them.
  7. Businesses that engage contractors, especially in fields involving artistic content (e.g. graphic design, advertising, etc.), should be aware that their contractors may be using AI-generative tools, and should address the risks involved in their contractual instruments with such contractors.
  8. It is recommended that all businesses, regardless of industry, provide appropriate guidance to employees to create awareness of the intellectual property risks involved in using AI-generative tools for work-related activities, and have suitable internal policies in place.

The above points are intended only as initial practical guidance on some of the risks of using AI-generative tools known today. As this field is still evolving, new answers as well as new questions will arise. We continue to follow the developments in this legal sphere.

If you require specific advice in this area, feel free to contact us.

[1] The US Copyright Office (“UCO”) issued a policy statement in March 2023 reiterating the US Copyright Act’s requirement of human authorship for the registration of copyrighted works, stating that when AI technology determines the expressive elements of the output, the generated materials do not fulfill the human authorship requirement. The UCO did clarify that certain works containing AI-generated materials may contain sufficient human authorship to be afforded copyright protection.

[2] One example of this risk manifesting itself is the case of Samsung, which, having found that employees had revealed confidential information using ChatGPT, banned the use of ChatGPT and restricted employees’ ability to access AI-generative tools [https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak?utm_source=website&utm_medium=share&utm_campaign=copy].

[3] While the use of copyrighted materials is most likely permitted under the “fair use” doctrine, there is still a legal risk that parts of the materials used to train an AI-generative tool may find their way into generated content. This is, in part, the claim in a request to approve a class action filed in the U.S. in connection with GitHub’s CoPilot.
