The EU’s Artificial Intelligence Act: What does it mean for Asia and the rest of the world?

31 May 2024

The EU’s comprehensive AI Act may influence global AI regulation, especially in Asia, where countries are evaluating the need for similar policies. Espie Angelica A. de Leon examines the likelihood of other countries adopting the EU’s regulatory approach.

The concept of artificial intelligence has been around for some time. Once a topic confined largely to the tech industry, AI is now everyone’s buzzword. The conversation has moved from technology experts into daily discourse – among students, artists, journalists, lawyers, marketing professionals, teachers and practically everyone outside the tech community – so much so that AI has become a hot topic in conferences, workshops, roundtable discussions and podcasts. The reason is simple: AI technology is advancing rapidly, and its applications cut across industries and continents.

AI’s now-mainstream popularity has prompted some jurisdictions to draw up regulations, policies or guidelines for the technology. This matters because the development and adoption of AI are fraught with risks, including IP infringement and cybersecurity challenges.

Foremost among these jurisdictions is the European Union. Adopted by the EU Parliament on March 13, 2024, the EU’s AI Act is the first comprehensive legal framework in the world governing the development, market deployment and use of AI.

How will the AI Act impact Asia and the rest of the world? Are countries in Asia bound to follow suit and craft their own legal frameworks, principles or guidelines? Or should they?

IP and cybersecurity in the AI Act

Before we answer these questions, let us first take a peek into the EU’s AI Act, particularly its IP and cybersecurity provisions.

The act, which will become law once the Council of the European Union formally approves it, classifies AI applications into categories based on their level of risk: unacceptable, high risk, limited risk and minimal risk.

The “unacceptable” category covers AI applications deemed harmful. These include facial recognition technology for identifying individuals in public spaces; biometric classification to infer characteristics such as sexual orientation or political and religious beliefs; social scoring systems that may lead to discrimination; and manipulative or deceptive techniques, among others.

“High risk” applications include automated processing of personal data, medical uses, self-driving cars, CV scanning tools for ranking job applicants and others. Meanwhile, AI used for entertainment purposes is categorized under “limited risk” and “minimal risk.”
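The tiered scheme described above is essentially a taxonomy mapping use cases to obligations. As a rough illustration only – the entries and tier names below are simplifications drawn from the examples in this article, not the Act’s legal definitions – it can be sketched as a lookup:

```python
# Hypothetical, simplified mapping of example use cases to the four risk
# tiers described in the EU AI Act. Illustrative only; the Act's actual
# classification turns on detailed legal criteria, not string matching.
RISK_TIERS = {
    "public-space facial recognition": "unacceptable",
    "social scoring": "unacceptable",
    "cv ranking of job applicants": "high",
    "medical use": "high",
    "entertainment chatbot": "limited",
    "game content filter": "minimal",
}

def risk_tier(use_case: str) -> str:
    """Return the tier for a known example; unknown uses need assessment."""
    return RISK_TIERS.get(use_case.lower(), "needs assessment")
```

The practical point of the taxonomy is that obligations scale with the tier: "unacceptable" uses are banned outright, while "minimal" ones face little to no regulation.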

Under Article 53, providers of general-purpose AI models should have a policy to comply with EU law on copyright and related rights.

Providers should also prepare a sufficiently detailed summary of the content used for training the AI model. According to Rohan Swarup, an associate partner at Singh & Singh in New Delhi, the summary must be comprehensive to include, for example, the main data collections or data sets used in training the model and a narrative explanation about other data sources used. The summary must be made publicly available.

Providers of general-purpose AI models in the market must comply with this regardless of the jurisdiction in which the copyright-relevant acts in connection with the training of those AI models take place. “The act recognizes that this is necessary to ensure a level playing field among providers of general-purpose AI models where no provider should be able to gain a competitive advantage in the union market by applying lower copyright standards than those provided in the union,” explained Swarup.

The act also provides that any use of copyright-protected content requires the copyright owner’s authorization unless the use of the content falls under the purview of any relevant exceptions and limitations. “The European Union already has in place Directive (EU) 2019/790, which provides that rights holders may choose to reserve their rights over their works or other subject matter to prevent text and data mining unless this is done for the purposes of scientific research,” said Swarup.

The act’s cybersecurity provisions set out the technical requirements to ensure the security and integrity of AI systems, especially those considered “high risk.”

The regulation imposes stiff penalties. Most violations attract fines of up to €15 million (US$16.3 million) or 3 percent of annual global turnover, whichever is higher. For prohibited practices – such as deploying AI-enabled manipulative techniques or using biometric data to infer private information – fines can reach €35 million (US$38 million) or 7 percent of annual global turnover.
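Because the fine is the higher of a flat cap and a share of turnover, the percentage dominates for large firms while the flat cap dominates for small ones. A minimal sketch of that arithmetic, using the two tiers mentioned above (integer euros; `max_fine_eur` and its parameters are illustrative names, not anything defined in the Act):

```python
def max_fine_eur(annual_global_turnover_eur: int, prohibited_practice: bool) -> int:
    """Illustrative ceiling on an AI Act fine: the higher of a flat cap
    or a percentage of annual global turnover, per the two tiers above."""
    flat_cap, pct = (35_000_000, 7) if prohibited_practice else (15_000_000, 3)
    return max(flat_cap, annual_global_turnover_eur * pct // 100)

# A firm with €1bn turnover: 7% (€70m) exceeds the €35m flat cap.
print(max_fine_eur(1_000_000_000, prohibited_practice=True))   # 70000000
# A small firm with €10m turnover: the €15m flat cap applies.
print(max_fine_eur(10_000_000, prohibited_practice=False))     # 15000000
```

The design mirrors the GDPR’s penalty structure, which likewise pegs maximum fines to the greater of a fixed amount or a turnover percentage.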

The AI Act’s impact on Asia and beyond

So what does the EU’s groundbreaking AI Act mean for the rest of the world, including Asia? How will it impact the region in particular?

“All we are waiting for is a trailblazer,” said Thai Gia Han, an associate and co-head of the IP and TMT practice group at Indochine Counsel in Ho Chi Minh City. “Personally, I welcome the AI Act of the European Union, recognizing its pivotal role in establishing a much-needed regulatory framework for the rapidly evolving field of technology. The act’s clear guidelines and standards mirror the success of initiatives like the EU General Data Protection Regulation (GDPR), suggesting its potential to set a new global legal standard. As AI continues to dominate discussions worldwide, the EU’s leadership in this area is poised to shape international legal trends. In the realms of IP and cybersecurity, I believe that the act will play a significant role in bolstering IP rights protection and mitigating cybersecurity risks.”

Tracy Lu, managing associate at Allens in Sydney, believes the AI Act’s impact on the rest of the world will be significant “not the least because the EU has made no secret of its ambition to be a global leader in AI.” She added: “The EU AI Act is expressly stated to have extra-territorial application and requires, in some circumstances, compliance with the EU Copyright Directive – this will have a direct impact on foreign businesses supplying AI products to the EU.” To be more specific, such impact will be felt by providers and developers of AI models outside the EU as well as users outside the union. This is especially true if the AI system being deployed in the EU market is a “high risk” one or if the system’s output is used in the EU.

Ramifications of the AI Act may also be seen in AI development itself. “The EU is a significant market for many companies worldwide, including those in Asia. Companies that develop or export AI technologies to the EU will need to comply with the act’s requirements to access this market. This may influence the design and development of AI systems globally to meet EU standards,” Swarup pointed out.

This scenario is poised to change the face of business competition in Asia in particular. Some businesses in the region will be able to comply with the EU regulations while some will not. Those who do may gain a competitive edge. Those who do not will risk losing market share and will be left behind by their competitors, but there is a silver lining. “This could stimulate innovation as Asian companies strive to develop AI solutions that meet both EU and domestic regulations, fostering the emergence of more advanced, ethical and secure AI technologies in the region,” explained Swarup.

Should Asian jurisdictions follow the example of the EU and set up their own legislation or regulations for AI? And if they do establish legislation or regulations, should they be similar to those in the European Union?

According to Lu, the AI Act will certainly be closely studied by lawmakers around the world. Others agree that the act is likely to serve as a model or blueprint.

“Other countries and regions may look to the EU’s approach as a model for developing their own AI policies and regulations. AI technology is rapidly advancing and being widely adopted in many Asian countries. Thus, there is a growing need for regulations to ensure the ethical and responsible use of AI in these countries,” said Xiaofang Li, attorney-at-law at CCPIT Patent & Trademark Law Office in Beijing.

“Considering that the EU’s stringent data protection standards, such as those outlined in the GDPR, have already influenced data privacy regulations worldwide, the AI Act’s emphasis on data protection and privacy may lead to similar regulations in Asian countries to facilitate data transfers and ensure compliance with EU standards for businesses operating globally,” added Swarup.

For Stanley Lai, head of IP and co-head of cybersecurity and data protection at Allen & Gledhill in Singapore, whether a country in Asia will adopt something like the EU’s AI Act all boils down to its appetite for risk, the culture of compliance and its existing laws and/or regulations.

These factors pose several challenges for Asian nations despite the potential benefits of adopting regulations similar to the AI Act. “Asia is a region of diverse cultures, regulatory frameworks and technological landscapes,” noted Swarup. “Crafting uniform regulations that accommodate regional nuances while aligning with global standards requires careful consideration and collaboration among stakeholders.”

These difficulties, along with the logistical challenges of implementing and enforcing AI regulations, are especially acute in countries with limited regulatory capacity and resources. Ensuring compliance among businesses and organizations calls for robust enforcement mechanisms, capacity building and stakeholder engagement.

“Whilst the AI Act will certainly set a strong starting point to spark international discourse on whether each country ought to adopt its very own AI Act, at this point, I think that it is unlikely that most Asian countries would implement legislation that is equivalent to an AI Act with a similar scope,” said Lai, “particularly in view of the severe penalties that may be incurred upon breach of the EU AI Act.”

Instead, he believes Asian nations are more likely to adopt other approaches, such as the enactment of statutes or the issuance of guidelines and/or policies.

It’s already happening in Singapore. The government has no legislative framework governing AI adoption, but it has put in place non-enforceable guidelines and principles with IP and cybersecurity provisions: 1) the AI Governance Framework for Generative AI (Draft); 2) the Proposed Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems; 3) the Guidelines for Secure AI System Development, issued by the Cyber Security Agency of Singapore together with 23 other international agencies from 18 countries; and 4) the IP and Artificial Intelligence Information Note by the Intellectual Property Office of Singapore. The last of these explains the key issues AI innovators should be aware of and provides an overview of how different types of IP rights can be used to protect different AI innovations.

“At this point in time, we do not think that Singapore will enact substantive legislation relating to AI that is similar to the EU AI Act. We are of the view that it should not be like the one by the EU. We observe that the Singapore government has largely preferred to adopt a less statutory and prescriptive approach towards AI regulation, choosing to issue guidelines and recommendations instead of adopting a penalty-based prescriptive structure,” Lai said.

Lai also reckons that the technical revolution underway in generative AI is poised to tap the potential of small language models and retrieval-augmented generation, developments he expects to further propel machine learning, natural language processing and computer vision. “A less prescriptive approach to regulation will provide flexibility in coping with technological changes,” he said.

Echoing Lai, Swarup noted that countries in Asia must strike a balance – however delicate – between fostering AI innovation and safeguarding against potential risks and harms. “Flexible and adaptive regulatory frameworks are essential to support innovation while protecting societal interests,” he stated.

India also has no legislation, policy or regulation dedicated to AI. Swarup believes it should have one, with provisions for IP and cybersecurity, as this is crucial for the country’s advancement in AI. Clear regulations, he said, can encourage innovation by giving companies and individuals the confidence to invest in AI research and development without fear of IP theft. He added that an AI policy should also include ethical guidelines covering fairness, transparency, accountability and the ethical use of AI algorithms and data.

Vietnam is in the same boat, with no AI legislative framework, regulation or policy yet in place. Han believes there is a possibility for change in Vietnam’s AI scene, but not in the immediate future. She explained: “Vietnam has shown remarkable agility in aligning with global legal trends, particularly as it emerges as a new go-to market for tech investors and startups. A prime illustration is the recent enactment of the Personal Data Protection Decree, closely modelled after the GDPR. Accordingly, the development of a legal framework for the AI sector within the jurisdiction can be anticipated, drawing inspiration and guidance from the AI Act. Nonetheless, given the nascent stage of AI adoption in Vietnam in comparison with other developed countries worldwide, comprehensive research of the matter at hand remains imperative, indicating that immediate changes may not be imminent.”

The government of Japan announced during the 2023 G-7 Digital and Technology Ministers’ Meeting that it prefers softer guidelines rather than strict regulations for AI. However, the Nikkei Business Daily reported in February 2024 that the Liberal Democratic Party, Japan’s ruling party, is aiming to create legislation to regulate generative AI before the end of 2024. The party’s AI project group will formulate preliminary rules, which may include penalties for foundation model developers. More recently, in April 2024, Asia News Network reported that a government panel in Japan, made up of experts on IP rights in the AI realm, had created a draft interim report expected to serve as a guide for developers and consumers of generative AI. Among other things, the report requires AI providers to create terms of use that include the protection of IP rights.

Meanwhile, the Cyberspace Administration of China (CAC) and six other government agencies jointly issued the Interim Measures for the Administration of Generative AI Services on July 10, 2023, containing 24 rules for regulating AI services. While promoting the development of generative AI, the Interim Measures also aim to prevent the risks involved, such as content security risks, data security risks and the risk of IP infringement.

When it comes to new technologies, Li’s belief mirrors that of Lai and Swarup. She opined that between development and legislation, development must come first. “Jurisdictions or countries should keep pace with the development of AI technology, and make laws or regulations to prevent possible risks in time. However, regulating any new technologies, including AI, should be cautious to avoid affecting the development of technologies,” she explained. “Too much and too early regulation would affect the development and implementation of AI technology in respective jurisdictions.”

Stepping outside Asia, let’s focus on Australia. No AI-specific copyright law has been proposed in the country. However, in 2023, the Australian Federal Attorney-General’s Department held a series of copyright roundtable discussions with stakeholders. One of the topics taken up was the implications of AI for copyright law. Thereafter, the department announced it would establish a copyright and AI reference group to serve as “a standing mechanism for ongoing engagement with stakeholders across a wide range of sectors.”

When asked whether Australia should have its own AI legislation, policy, guidelines or such, Lu responded by saying that the question should instead be whether Australian copyright law is sufficiently technology agnostic and strikes the right balance between the interests of copyright owners and copyright users within the context of AI. “To answer that question, stakeholder engagement is key. I also think it cannot be assumed that what works in other jurisdictions would also work in Australia, which has some significant differences in copyright law when compared with other jurisdictions,” said Lu. To illustrate, Australia has no text and data mining exceptions or standalone database rights unlike the EU. Also, Australia has no broad “fair use” defences, unlike the U.S.

The question of whether countries in Asia should have their own laws on AI similar to the AI Act may elicit different views. But one thing is certain: AI is evolving rapidly and has encroached on multiple aspects of our lives. Everyone will have to adapt to these changes in one way or another. No question about that.
