“Whilst the AI Act will certainly set a strong starting point to spark international discourse on whether each country ought to adopt its very own AI Act, at this point, I think that it is unlikely that most Asian countries would implement legislation that is equivalent to an AI Act with a similar scope,” said Lai, “particularly in view of the severe penalties that may be incurred upon breach of the EU AI Act.”
Instead, he believes Asian nations are more likely to adopt other approaches, such as the enactment of statutes or the issuance of guidelines and/or policies.
It’s already happening in Singapore. The government has no legislative framework governing AI adoption, but it has put in place several non-enforceable guidelines and principles with IP and cybersecurity provisions: 1) the AI Governance Framework for Generative AI (Draft); 2) the Proposed Advisory Guidelines on the Use of Personal Data in AI Recommendation and Decision Systems; 3) the Guidelines for Secure AI System Development, issued by the Cyber Security Agency of Singapore together with 23 other agencies from 18 countries; and 4) the IP and Artificial Intelligence Information Note by the Intellectual Property Office of Singapore. The information note explains the key issues AI innovators should be aware of and provides an overview of how different types of IP rights can be used to protect different AI innovations.
“At this point in time, we do not think that Singapore will enact substantive legislation relating to AI that is similar to the EU AI Act. We are of the view that it should not be like the one by the EU. We observe that the Singapore government has largely preferred to adopt a less statutory and prescriptive approach towards AI regulation, choosing to issue guidelines and recommendations instead of adopting a penalty-based prescriptive structure,” Lai said.
Lai also reckons that the technical revolution happening in generative AI is poised to tap the potential of small language models and retrieval-augmented generation. According to him, these developments are expected to further propel technology in the areas of machine learning, natural language processing and computer vision. “A less prescriptive approach to regulation will provide flexibility in coping with technological changes,” he said.
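For readers unfamiliar with the term, retrieval-augmented generation pairs a language model with a document store: relevant passages are retrieved first and supplied as context for the model’s answer, which helps ground its output. The snippet below is a minimal, purely illustrative sketch in Python; the tiny corpus, keyword-overlap retriever and generate() placeholder are hypothetical stand-ins, not anything described by Lai or used by any particular system.

```python
# Toy retrieval-augmented generation (RAG) loop, for illustration only.
# Real systems use vector embeddings for retrieval and a language model for generation.

CORPUS = [
    "Singapore has issued non-enforceable AI governance guidelines.",
    "The EU AI Act provides for severe penalties upon breach.",
    "Retrieval-augmented generation grounds model answers in retrieved text.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by simple keyword overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    ranked = sorted(
        CORPUS,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for a call to a (small) language model; echoes the prompt here."""
    return f"[model response grounded in]\n{prompt}"

def answer(query: str) -> str:
    # Augment the prompt with retrieved context before generation.
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return generate(prompt)

if __name__ == "__main__":
    print(answer("What does retrieval-augmented generation do?"))
```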
Echoing Lai, Swarup noted that countries in Asia must strike a balance between fostering AI innovation and safeguarding against potential risks and harms, however delicate the task may be. “Flexible and adaptive regulatory frameworks are essential to support innovation while protecting societal interests,” he stated.
India also does not have any legislation, policy or regulation dedicated to AI. Swarup believes it should, with provisions covering IP and cybersecurity, as this is crucial for the country’s advancement in AI. He noted that clear regulations can encourage innovation by giving companies and individuals the incentive to invest in AI research and development without fear of IP theft. He added that an AI policy should also contain ethical guidelines addressing fairness, transparency, accountability and the ethical use of AI algorithms and data.
Vietnam is in the same boat, with no AI legislative framework, regulation or policy yet in place. Han believes there is a possibility for change in Vietnam’s AI scene, but not in the immediate future. She explained: “Vietnam has shown remarkable agility in aligning with global legal trends, particularly as it emerges as a new go-to market for tech investors and startups. A prime illustration is the recent enactment of the Personal Data Protection Decree, closely modelled after the GDPR. Accordingly, the development of a legal framework for the AI sector within the jurisdiction can be anticipated, drawing inspiration and guidance from the AI Act. Nonetheless, given the nascent stage of AI adoption in Vietnam in comparison with other developed countries worldwide, comprehensive research of the matter at hand remains imperative, indicating that immediate changes may not be imminent.”
The government of Japan announced during the 2023 G-7 Digital and Technology Ministers’ Meeting that it prefers softer guidelines over strict regulations for AI. However, the Nikkei Business Daily reported in February 2024 that the Liberal Democratic Party, Japan’s ruling party, is aiming to create legislation to regulate generative AI before the end of 2024. The party’s AI project group will formulate preliminary rules, which may include penalties for foundation model developers. More recently, in April 2024, Asia News Network reported that a government panel in Japan, made up of experts on IP rights in the AI realm, had drafted an interim report expected to serve as a guide for developers and consumers of generative AI. Among other things, the report calls on AI providers to create terms of use that include the protection of IP rights.
Meanwhile, the Cyberspace Administration of China (CAC) and six other government agencies jointly issued the Interim Measures for the Administration of Generative AI Services on July 10, 2023, containing 24 articles regulating generative AI services. While promoting the development of generative AI, the Interim Measures also aim to prevent the risks involved, such as content security risks, data security risks and the risk of IP infringement.
When it comes to new technologies, Li’s belief mirrors that of Lai and Swarup. She opined that between development and legislation, development must come first. “Jurisdictions or countries should keep pace with the development of AI technology, and make laws or regulations to prevent possible risks in time. However, regulating any new technologies, including AI, should be cautious to avoid affecting the development of technologies,” she explained. “Too much and too early regulation would affect the development and implementation of AI technology in respective jurisdictions.”
Stepping outside Asia, let’s focus on Australia. No AI-specific copyright law has been proposed in the country. However, in 2023, the Australian federal Attorney-General’s Department held a series of copyright roundtable discussions with stakeholders, one of the topics being the implications of AI for copyright law. Thereafter, the department announced it would establish a copyright and AI reference group to serve as “a standing mechanism for ongoing engagement with stakeholders across a wide range of sectors.”
When asked whether Australia should have its own AI legislation, policy or guidelines, Lu responded that the question should instead be whether Australian copyright law is sufficiently technology-agnostic and strikes the right balance between the interests of copyright owners and copyright users in the context of AI. “To answer that question, stakeholder engagement is key. I also think it cannot be assumed that what works in other jurisdictions would also work in Australia, which has some significant differences in copyright law when compared with other jurisdictions,” said Lu. To illustrate, unlike the EU, Australia has no text and data mining exceptions or standalone database rights; and unlike the U.S., it has no broad “fair use” defence.
The question of whether countries in Asia should have their own AI laws similar to the AI Act may elicit different views. But one thing’s for certain: AI is evolving rapidly and has encroached on many aspects of our lives. Everyone will have to adapt to these changes in one way or another. No question about that.