Proactive protection and legal risk management for AI ad campaigns

01 August 2024

As AI transforms the advertising industry, IP concerns surrounding AI-driven campaigns call for proactive strategies. Experts share best practices for ad campaigns to avoid or minimize risk with Excel V. Dyquiangco.

The integration of artificial intelligence in advertising campaigns has revolutionized the way brands connect with consumers. AI’s ability to analyze vast amounts of data, predict consumer behaviour and generate personalized content has made it an invaluable tool for marketers. However, this technological advancement brings to the forefront critical issues surrounding intellectual property, posing both opportunities and challenges.

While AI technologies, including machine learning algorithms and natural language processing, have enabled advertisers to create highly targeted and efficient campaigns, these technologies can also analyze consumer data to identify trends, preferences and behaviours, allowing brands to tailor their messages to specific audiences. AI can also generate creative content, from personalized emails to dynamic social media ads, making campaigns more engaging and relevant.

But with the rise of AI in advertising, IP concerns have become increasingly complex. Traditional IP laws were not designed with AI-generated content in mind, leading to potential legal grey areas.

According to Robyn Chatwood, an IP and technology partner at Dentons in Melbourne, AI can certainly add value for many in the ad tech ecosystem.

“Although we seem to be at peak hype cycle now, using artificial intelligence tools can generate value for ad campaigns, and so we see that their use is accelerating,” she said. “Use cases that we advise regularly include where our clients are using AI to better manage workflows or collect and analyze data, such as the features of bid requests or user databases or to set prices for auctions and so on. Solution providers claim their AI algorithms can ensure budgets are optimized because target customers will receive relevant offers and information only based on real-time and fast information feedback on user requests and bids. Some claim to detect and lower fraudulent and bot traffic.” 

Intellectual property lies at the heart of this wide variety of applications. Whether AI is used for segmentation and channel selection based on likely engagement (that is, predicting behaviour and optimizing context or synergies) or for analyzing page content and the past performance of ad placements, it is also commonly used to generate ad copy and to personalize messages.

“Whilst recognizing the potential opportunities and value that AI technology brings, it is imperative that businesses involved in advertising and promotions have proactive strategies for managing how AI tools are used to minimize their exposure to IP infringement, breach of confidentiality and, importantly, to protect valuable ad campaign assets,” she said.

Chatwood emphasized three key categories of IP concerns with regard to the link between AI and campaigns: infringement, ownership and confidentiality.

“Infringement of another person’s IP rights is probably the primary intellectual property concern that most businesses focus on when using generative AI tools for ad campaigns in Australia,” she said. “Although the specific tests for infringement differ from country to country, generally, unauthorized use is the key risk. Specifically, copyright laws prohibit the unauthorized use of copyrighted works (such as literary or photographic works). In Australia, to prove infringement of a copyrighted work, the copyright owner must show that a substantial part of it was copied and that none of the small number of exceptions, which may be a defence to unauthorized copying, applies.”

“The concern arises if, for example, you are using output generated by AI where the AI system is trained on data or works in which copyright subsists – whether artworks, photos or copy. For an AI system to be trained, it usually needs to copy underlying copyrighted works, even if those works are not being used in the outputs being sought. Information on whether a particular copyright work (that is, the input into the system) is part of the training data is not something usually disclosed by AI solution providers. The training data set is often their confidential information. So, you may not know you are using infringing works,” she explained.

In addition, Chatwood pointed out another infringement issue related to AI systems potentially producing output that infringes on others’ intellectual property rights. In Australia, courts first examine whether the AI-generated output is objectively similar to a copyrighted work or a substantial part of it. The court then considers whether the work was copied.

The opacity of the input training data makes it difficult for copyright owners to establish a link between the protected work and the AI-generated output. This challenge is compounded when the output merely resembles the style of the original work. Additionally, AI systems often draw from multiple inputs, making it harder to trace the inspiration. This issue is particularly prominent in musical works.

“We are yet to work through all of these thorny issues in Australian courts,” she said. “Also unclear are the issues as to who would, in fact, be the copyright infringer. Is it the developer of the AI tool? Or the user of the AI tool?”

In December 2023, the Australian Government established the Copyright and Artificial Intelligence Reference Group (CAIRG) to address future copyright challenges from AI. CAIRG is engaging with stakeholders across sectors, focusing on potential copyright infringement and the status of AI-generated outputs.

In March 2024, the Australian Senate also formed a Select Committee on Adopting Artificial Intelligence to explore AI’s opportunities and impacts in Australia. The committee will report its findings to Parliament in September 2024.

“In terms of ownership, in recent times, a number of cases around the world and in Australia have demonstrated that courts are reluctant to grant patent protection for inventions generated by AI systems,” said Chatwood. “Many IP laws, such as those in Australia, require a human to be the inventor. Copyright law, at least in Australia, requires a human author for works.”

She added: “This leads to the other key issue relating to intellectual property for ad campaigns – that is, who owns the AI output? If you create valuable copy or content for your ad, then you usually want to own it. However, not all AI systems have terms of service that assign that ownership to you and (even if they did do that) the relevant copyright law may not in fact protect the work as a machine produced it. This may mean that you cannot prevent others from using it – which may mean it has less value to you as an ad campaign asset in the future.”

In terms of confidentiality, an agency producing a campaign may be privy to sensitive information from its client, such as new product launch information or other research material. “The danger with some AI systems is that the information put into the system by the user is then the source for the program to further train and improve the AI software. If prompts or other inputs include such confidential information, then there is a risk that this will breach non-disclosure obligations or inappropriately disseminate sensitive trade secrets to other users, who may be competitors,” she said.

It is important to note that in Australia, companies find it challenging to ensure that AI-generated content for ad campaigns does not infringe on existing copyrights, trademarks or patents. Text produced by generative AI platforms could potentially violate someone else’s IP rights, as AI might reproduce copyrighted material used in its training. Currently, Australian copyright law lacks a fair use exception for training generative AI.

AI systems are also prone to “hallucinations,” generating inaccurate, infringing or misleading content, which raises further concerns.

Adding to the complexity is the proliferation of non-harmonized laws dealing with AI. Australia has yet to pass specific AI legislation, resulting in a patchwork of regulations. In contrast, the European Union has already enacted specific AI laws. “Many of our clients are grappling with the complexity of the laws – and also the fact that the IP laws are simply not keeping up with the advances in technology and its rapid adoption,” said Chatwood.

Best practices in ad campaigns

Much like Australia, Indonesia currently has no specific statutory regulations protecting the IP of AI-generated materials. According to Article 1, Paragraph 2 of Law No. 28 of 2014 on Copyright, copyright is granted to the “author” of the works, and in this context, the “author” is a natural person (a human being). Aside from that, Article 1, Paragraph 3 of the law sets out that the nature of the protected works also stems from the inspiration, ability, thoughts, imagination, sharpness, skills or expertise of the person creating the works, and not the AI.

A recent case involving an Indonesian singer and artist highlights the importance of company oversight when using third-party advertising agencies. The artist allegedly used part of a foreign artist’s work for a mini album cover without permission. Although this incident does not directly involve AI, it underscores the need for clear guidelines when outsourcing creative work. Companies should establish strict parameters for third-party agencies, including potentially restricting the use of AI tools, to avoid copyright infringement risks.

Justisiari Perdana Kusumah, managing partner at K&K Advocates in Jakarta, said: “There are no specific prohibitions regarding the protection of AI-generated marketing materials as trademarks. If AI-generated marketing materials fall within the protected types of trademarks under the trademark law, they can be protected as trademarks. Examples of such protected works include drawings, logos, names, words, colour schemes, sounds or holograms.”

He added that it is important to note that relying on AI to generate trademarks remains risky, since there is no guarantee that the resulting product will be free of infringement: most image models are trained on datasets of publicly available data and information that most likely contain registered trademarks.

“To minimize risk, we recommend that businesses using AI cross-check their generated materials for any potential IP infringement before using them,” he said. “Furthermore, it is recommended to limit alterations to the company’s original creations to cosmetic improvements only.”

With AI-driven advertising becoming more prevalent, companies can safeguard their proprietary data and AI algorithms from being misused or stolen. Aside from standard measures like access control, NDAs and raising awareness of the importance of safeguarding confidential information, Kusumah said that companies could require employees to take the following steps to protect proprietary data when using AI models to generate ad content:

  • Turn off any human review feature in the generative AI platform when possible. This will reduce the chances of prompts being stored and disclosed to unrelated third parties, who may act in bad faith and use the prompts against the company’s interest.
  • Ensure that prompts entered to generate ad content do not disclose the details of the company’s marketing campaigns or initiatives by using general sentences that are still related to the campaign or initiative (e.g., “create a picture of a man holding a candlelight in a dark room by himself” instead of “make a picture of a man holding a candlelight in a dark room and create a caption signifying the luminance of the new product Peterson X551 bedroom lamp”).
  • If the AI tool processes user data to train its models, opt out of such automated processing where the provider allows it. This will significantly reduce the chance of your proprietary data being used to train an AI model, which could otherwise lead to accidental disclosure of the company’s proprietary data to third parties.
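One way to operationalize the advice about keeping confidential product details out of prompts is to screen prompts against a blocklist of sensitive terms before they reach an external AI tool. The sketch below is purely illustrative and is not drawn from either expert's remarks; the term list and function name are hypothetical, and the product name reuses the fictitious example from the text.

```python
# Hypothetical prompt screen: flag prompts that contain confidential
# campaign terms before they are sent to an external generative AI tool.
CONFIDENTIAL_TERMS = {
    "peterson x551",   # unreleased product name (example from the article)
    "q3 launch",       # internal campaign codename (hypothetical)
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (is_safe, matched_terms) for a candidate prompt."""
    lowered = prompt.lower()
    matches = [term for term in CONFIDENTIAL_TERMS if term in lowered]
    return (len(matches) == 0, matches)

safe, hits = screen_prompt(
    "Create a caption for the new Peterson X551 bedroom lamp"
)
# safe is False; hits contains "peterson x551"
```

A simple substring check like this is only a first line of defence; in practice it would sit alongside the access controls and NDAs mentioned above rather than replace them.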

“Companies need to conduct consultation, analysis and programming according to the law and regulations. For example, to partner with a legal and technology consultant to conduct a legal risk assessment of the AI tool and its training data to identify potential copyright or trademark concerns,” said Kusumah. “They also need to develop a risk management and crisis management plan in case an ad campaign inadvertently infringes on someone else’s IP rights.”
