The surge in AI-generated content has sparked a global legal reckoning, with courts, policymakers and businesses grappling with questions of ownership, accountability, and fairness. Panisa Suwanmatajarn says that the outcomes of these disputes will shape not just legal precedent but the trajectory of AI itself.
Artificial intelligence has transitioned from science fiction to an everyday tool, reshaping industries from healthcare to entertainment. As AI systems compose symphonies, draft legal documents, and design cutting-edge pharmaceuticals, they challenge the bedrock of intellectual property law: the principle that creativity and invention are inherently human endeavours. The surge in AI-generated content has sparked a global legal reckoning, with courts, policymakers and businesses grappling with questions of ownership, accountability and fairness.
Understanding intellectual property in the age of AI
Intellectual property law protects creations of the mind, traditionally categorized into four pillars: patents, copyrights, trademarks and trade secrets.
AI disrupts this framework by autonomously generating outputs that mimic – or surpass – human creativity. For instance, OpenAI’s DALL·E produces intricate artwork, while tools like ChatGPT draft essays and code. These advancements force a reevaluation of foundational IP concepts: Who owns AI-generated content? Can a machine be an inventor or author? How do we balance innovation with creators’ rights?
The legal status of AI as an inventor or author
Patents: The DABUS saga and human-centric inventorship
The most contentious patent debate revolves around DABUS, an AI system developed by Stephen Thaler. Courts and patent offices in the United Kingdom, United States, European Union and India have consistently ruled that only humans can be inventors under existing patent laws. In 2023, the UK Supreme Court emphasized that patents exist to incentivize human ingenuity, not machine output.
However, South Africa and Saudi Arabia broke ranks, granting a patent to DABUS in 2021 (South African Patent No. ZA2021/03245). This jurisdictional split underscores the lack of global consensus and the pressure to adapt patent frameworks as AI becomes integral to research and development.
Copyrights: From art to code
Copyright law faces similar upheaval. In 2023, a U.S. court denied copyright protection for an AI-generated artwork, holding that “authorship” requires human agency. This contrasts with China’s more pragmatic approach: in 2023, a Beijing court recognized copyright in an AI-generated image, citing the human effort involved in selecting the prompts and parameters that shaped the output. Meanwhile, Singapore’s 2021 Copyright Act introduced a “computational data analysis” exception, allowing AI training on copyrighted material if it is lawfully accessed.
Real-world implications and emerging legal disputes
Copyright battles: The data scraping wars, part one – news and publishing
The New York Times versus OpenAI and Microsoft (2023): The Times sued OpenAI and Microsoft for using its articles to train ChatGPT without permission, arguing that AI-generated summaries threaten its revenue and journalistic integrity. OpenAI defended its actions under “fair use,” setting the stage for a precedent-setting clash over AI’s use of copyrighted news content.
Associated Press (AP) licensing model: In contrast, the AP partnered with OpenAI, licensing its archive for AI training. This symbiotic approach highlights potential compromises between media and tech giants.
Copyright battles: The data scraping wars, part two – literature and art
Authors Guild versus OpenAI (2023): Bestselling authors like George R.R. Martin and Jodi Picoult sued OpenAI, alleging that ChatGPT was trained on pirated copies of their books. The lawsuit claims AI-generated summaries and derivatives could devalue their work, echoing concerns from comedians and visual artists.
Stability AI, Midjourney and DeviantArt class action (2023): Artists accused these platforms of scraping billions of copyrighted images to train AI models, enabling users to replicate their styles. Getty Images filed a parallel lawsuit against Stability AI, signaling a broader pushback against unlicensed data use.
Copyright battles: The data scraping wars, part three – music and entertainment
“Heart on My Sleeve” AI-generated track (2023): A viral song using AI-cloned vocals of Drake and The Weeknd was pulled from streaming platforms after Universal Music Group (UMG) cited copyright infringement. The incident exposed gaps in regulating AI’s use of vocal likenesses and musical catalogs.
Trademark conflicts: Deepfakes and brand integrity
AI-generated celebrity endorsements (2024): A skincare brand faced lawsuits for using deepfake videos of Scarlett Johansson and Tom Hanks to promote products. These cases test trademark laws against AI’s ability to mimic personas, with plaintiffs arguing false endorsement and brand dilution.
Global perspectives on AI and IP
China: Incentivizing AI innovation
In 2023, the Beijing Internet Court recognized copyright in an AI-generated image, provided that humans meaningfully guided the output. This contrasts with stricter U.S. and EU stances, revealing China’s strategy to lead in AI development through flexible IP policies.
EU: Regulatory leadership
The EU’s AI Act prioritizes transparency and human oversight, though it avoids redefining inventorship. The European Patent Office (EPO) maintains that only a human can be named as an inventor, leaving AI’s role in R&D unresolved.
Singapore: A balanced model
Singapore’s 2021 Copyright Act, with its computational data analysis (CDA) exception, fosters AI development while respecting copyrights.
The future of AI and IP law
The legal system must evolve to address three core challenges:
- Ownership frameworks: Should AI-generated works belong to developers, users, or the public domain?
- Transparency: Mandating disclosure of training data to prevent misuse.
- Global harmonization: Bridging jurisdictional gaps through treaties or organizations like WIPO.
Several approaches deserve serious consideration:
AI-specific IP categories: Rather than forcing AI-generated works into existing frameworks, we might develop sui generis protections with different terms and limitations. The EU’s database right provides one model – protection designed specifically for compilations that require substantial investment but lack creative originality.
Attribution-based models: Some creative fields value recognition above compensation. Open-source software thrives on attribution, suggesting possible frameworks where AI systems must identify their training sources without necessarily securing permissions or making payments.
Enhanced authorship identification: To help prevent AI from infringing proprietary work, creators should clearly identify authorship in their works through digital signatures, blockchain verification, or standardized metadata. This would allow AI systems to recognize protected content and either avoid it or seek appropriate licenses.
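To make the idea concrete, here is a minimal Python sketch of machine-readable authorship identification. The sidecar-file convention and the licence values are illustrative assumptions, not an existing standard; a real scheme would also sign the record with the creator’s private key or anchor it in a blockchain registry so that the claim itself cannot be forged.

```python
# Minimal sketch of machine-readable authorship identification.
# The sidecar-file convention (<work>.authorship.json) and the licence values
# are illustrative assumptions, not an existing standard.
import hashlib
import json
from pathlib import Path


def write_authorship_record(work: Path, author: str, licence: str) -> Path:
    """Attach an authorship claim to a work, bound to its exact content hash."""
    record = {
        "author": author,
        "licence": licence,  # e.g. "no-ai-training" or "licensed-for-ai-training"
        "sha256": hashlib.sha256(work.read_bytes()).hexdigest(),
    }
    sidecar = Path(str(work) + ".authorship.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar


def may_train_on(work: Path) -> bool:
    """A training pipeline's check: skip works whose metadata withholds permission."""
    sidecar = Path(str(work) + ".authorship.json")
    if not sidecar.exists():
        return True  # no claim attached; how to treat unmarked works is a policy choice
    record = json.loads(sidecar.read_text())
    if record.get("sha256") != hashlib.sha256(work.read_bytes()).hexdigest():
        return False  # metadata does not match this file; treat as unverified
    return record.get("licence") == "licensed-for-ai-training"
```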
National IP clearance centers: Each country should establish centralized repositories for proprietary works that function as clearance centers, handling licensing arrangements between creators and AI developers. These centers would streamline permissions, ensure fair compensation and provide legal certainty for all parties involved.
Expanded fair use doctrines: Courts could develop AI-specific fair use factors that balance the interests of rights holders against the social benefits of AI innovation. Such an approach would require case-by-case analysis but could avoid the rigidity of legislative solutions.
Technical solutions: Watermarking and fingerprinting technologies could help track AI training data and generated outputs, creating an audit trail that facilitates compensation without imposing undue burdens on innovation.
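As an illustration, the following Python sketch shows the simplest form of such an audit trail: every file ingested for training is fingerprinted with a cryptographic hash and logged, so a rights holder or auditor can later test whether an exact copy of a work was used. The log location and record format are hypothetical, and a production system would add perceptual fingerprints to catch near-duplicates.

```python
# Minimal sketch of a training-data audit trail based on content fingerprints.
# The log location and record format are illustrative assumptions.
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("training_audit_log.jsonl")  # hypothetical append-only log


def fingerprint(path: Path) -> str:
    """Exact-match fingerprint; near-duplicates would need perceptual hashing."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def log_ingestion(path: Path) -> None:
    """Record every file added to the training corpus."""
    entry = {"file": str(path), "sha256": fingerprint(path), "ingested_at": time.time()}
    with AUDIT_LOG.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")


def was_ingested(work: Path) -> bool:
    """Let a rights holder or auditor test whether an exact copy of a work was logged."""
    target = fingerprint(work)
    if not AUDIT_LOG.exists():
        return False
    with AUDIT_LOG.open(encoding="utf-8") as log:
        return any(json.loads(line)["sha256"] == target for line in log)
```

Paired with watermarking of generated outputs, the same kind of log could also support compensation: how often a rights holder’s fingerprints appear in a corpus becomes an auditable basis for payment.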
Revenue-sharing models: Compensating creators when their data trains AI systems, potentially through collective licensing schemes similar to those used in music.
The most promising approaches acknowledge this paradigm shift without abandoning creators to technological determinism. We need neither blind resistance to AI nor uncritical techno-optimism, but rather thoughtful frameworks that preserve what matters most: the ability of creators to control and benefit from their work while enabling legitimate innovation.
Conclusion
The collision of AI and IP law is not a distant future – it is here now. From patent offices rejecting AI inventors to artists battling data scraping, the legal system is scrambling to reconcile centuries-old laws with 21st-century technology. While jurisdictions like China and Singapore embrace flexibility, others cling to human-centric doctrines, creating a fragmented global landscape.
The outcomes of these disputes will shape not just legal precedent but the trajectory of AI itself. Will innovation be stifled by rigid laws, or can we craft frameworks that reward both human creativity and machine intelligence? The answer will determine who benefits from the AI revolution – and who gets left behind.