OpenAI is hesitant to watermark ChatGPT-generated text

08 August 2024

OpenAI has developed a tool that watermarks and detects ChatGPT-generated text with “high accuracy” and has been ready for about a year, according to internal documents reviewed by The Wall Street Journal. OpenAI updated its blog post on August 4, 2024, confirming the watermarking feature and its extensive research into text provenance through solutions such as watermarking, classifiers and metadata.

Watermarking involves embedding an invisible identifier within the text that can be detected by the tool. Classifiers, by contrast, identify ChatGPT content by analyzing patterns and word usage, while metadata carries additional information attached to the text. Because it is cryptographically signed, metadata offers a more precise method with no “false positives” compared to watermarking.
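The “no false positives” property of signed metadata comes down to how cryptographic signatures behave: verification either succeeds exactly or fails. The sketch below illustrates the idea with an HMAC over the text and its metadata; the key, field names and scheme are illustrative assumptions, not OpenAI's actual implementation.

```python
# Minimal sketch of cryptographically signed provenance metadata.
# All names (key, metadata fields) are hypothetical, for illustration only.
import hashlib
import hmac
import json

SECRET_KEY = b"provider-held-signing-key"  # hypothetical provider-side key

def sign_metadata(text, metadata):
    """Return a hex signature binding the metadata to this exact text."""
    payload = json.dumps({"text": text, "meta": metadata}, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_metadata(text, metadata, signature):
    """True only if text and metadata are exactly what was signed."""
    return hmac.compare_digest(sign_metadata(text, metadata), signature)

text = "Example model output."
meta = {"model": "example-model", "created": "2024-08-04"}
sig = sign_metadata(text, meta)

print(verify_metadata(text, meta, sig))        # True: untouched content verifies
print(verify_metadata(text + "!", meta, sig))  # False: any edit breaks the signature
```

A genuine signature can never be produced by accident, which is why this route avoids false positives; its weakness is the converse: the metadata can simply be stripped from the text.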

From a legal perspective, tracking content helps ensure compliance with copyright laws and agreements. It provides a record that can be used in legal contexts to address infringement or disputes. As technology progresses, watermarking AI-generated content will increasingly become important, especially when there is a lack of legal framework around generative AI.

“As we know, copyright laws protect authors and their works; the first step is to identify who created which works,” said Frank Liu, head and partner at Shanghai Pacific Legal in Shanghai. He explained that it would be helpful to have tools that can identify the legal facts regarding the source of the work.

However, text watermarking isn’t without challenges. OpenAI wrote in the blog post: “While it has been highly accurate and even effective against localized tampering, such as paraphrasing, it is less robust against globalized tampering, like using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character – making it trivial to circumvention by bad actors.”
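The character-insertion attack OpenAI describes exploits the fact that statistical watermarks are keyed on the surrounding token context. The toy model below is in the spirit of published “green-list” watermarking schemes, not OpenAI's actual method, and every name in it is invented: a generator always picks words marked “green” relative to the previous token, so clean output scores 100% green; if the model is asked to put a separator character before every word, the green choices are keyed on the separator, and deleting it afterwards leaves text whose green fraction drops to chance level.

```python
# Toy context-keyed watermark, illustrating the "insert a special character
# between every word, then delete it" attack. Illustrative only; this is not
# OpenAI's method, and the vocabulary/scheme are invented for the sketch.
import hashlib
import random

VOCAB = [f"w{i}" for i in range(200)]

def is_green(prev_word, word):
    """Pseudo-randomly mark ~half the vocabulary 'green', keyed on the previous token."""
    h = hashlib.sha256((prev_word + "|" + word).encode()).digest()
    return h[0] % 2 == 0

def generate(n, separator=None, seed=0):
    """Watermarked sampler: always emit a word that is green given the previous token."""
    rng = random.Random(seed)
    out = ["start"]
    for _ in range(n):
        if separator:
            out.append(separator)  # model asked to insert a special character
        prev = out[-1]
        out.append(rng.choice([w for w in VOCAB if is_green(prev, w)]))
    return out[1:]  # drop the start token

def green_fraction(words):
    """Detector: fraction of consecutive word pairs that are green."""
    hits = sum(is_green(words[i - 1], words[i]) for i in range(1, len(words)))
    return hits / max(1, len(words) - 1)

clean = generate(80)
attacked = [w for w in generate(80, separator="@", seed=1) if w != "@"]

print(green_fraction(clean))     # 1.0: watermark clearly detected
print(green_fraction(attacked))  # roughly 0.5: chance level, watermark gone
```

No word in the attacked text was changed by the user beyond deleting the separators, yet detection fails, which is what makes the circumvention trivial.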

Meryl Koh, director of IP and dispute resolution at Drew & Napier in Singapore, said systems like ChatGPT use algorithms to scrape data from various public sources, potentially storing unauthorized copyrighted material during data mining. “However, given that the training data used by generative AI have been kept highly confidential and can be difficult to reverse-engineer, the only evidence one can procure to show that a specific copyrighted work has been used in the initial data mining stage is if the output of the generative AI is identical or substantially similar to the said copyright work (i.e., aggrieved copyright owners are likely left with no recourse where the output is not substantially similar to the copyright work),” explained Koh.

Reportedly, discussions about the watermarking tool started even before ChatGPT’s launch in 2022. However, its release has been delayed over concerns that it might “disproportionately impact some groups,” such as stigmatizing the use of AI as a writing aid among non-native English speakers. Moreover, a 2023 survey by OpenAI showed that deploying it would lead to a decrease in ChatGPT users, who would likely move to rival apps without watermarking features.

Aside from generated text, OpenAI’s efforts also include expanding its image detection tools, with new features for DALL-E 3 currently in the works. DALL-E 3, released in 2023, is a tool that creates images from text descriptions.

Earlier this year, OpenAI added Coalition for Content Provenance and Authenticity (C2PA) metadata to images generated and edited by DALL-E 3. C2PA metadata records general information about an image’s provenance, while C2PA credentials show details of when and where an image was created and edited. C2PA is a content-certification standard widely adopted by software firms and camera manufacturers, among others.

Tools like watermarking and content authentication systems could also help raise awareness of original works. “These tools may not only help us identify the true author and assist in deciding cases of infringement but also help common users distinguish between AI-generated products and works created by humans,” said Liu.

However, he acknowledged that while accurately tracking content creation won’t directly solve the problem of identifying the author or addressing infringement, it would be “helpful to some extent.”

According to him, China currently has no special evidence requirements regarding copyright when generative AI is used. The law requires that originality reflect an author’s personalized expression, and AI cannot be regarded as the creator of a copyrighted work. Therefore, when determining the originality of a work, the court may consider the intentions, ideas and intellectual input contributed to the content production process, and whether the result reflects the creator’s personality. “The evidence in this regard would be important,” added Liu.

“These days, technology sometimes develops too quickly for legislation to follow. The hope is that legal frameworks could incorporate these standards to better reflect modern-day norms and standards. At the very least, legislators may be pushed towards drafting soft law mechanisms such as clear guidelines on the adoption of text or image watermarking as tracking mechanisms that could be implemented as best practices across various industries,” said Koh.

More of OpenAI’s efforts are explored in its case study with the Partnership on AI.

- Cathy Li