To celebrate its 20th anniversary, Korea IT Times is presenting special contributions that share the profound insights of opinion leaders from Korea and abroad. Following the notable experts who authored articles in July and August, we introduce the contributors for September:
- Prof. Jong-Shik Kim: Digitalization and Digital Transformation.
- Hyunseok Shin, CEO of Smilegate Vietnam: Changes and Strategies in Software Development due to AI (Part 1).
- Sukhvinder Singh: South Korea's Economic Miracle: A Critical Analysis of Challenges and Future Prospects.
- Agustín Liserra, CEO and Co-founder of Num Finance: Stabilizing Volatile Markets: RWA Investments, Carry Trade, and Currency Risk Strategies.
- Hyunseok Shin, CEO of Smilegate Vietnam: Can AI replace the technical skills of developers leading the digital world? (Part 2).
- Alex Haigh, Managing Director Asia Pacific, Brand Finance: Brand Resilience: How South Korea's Leading Brands Adapt to Market Fluctuations.
- Jinkook Kim, CEO of Coreline Soft: The New Changes AI will Bring to the Emergency Room.
- Josh Lee Kok Thong, FPF Managing Director Asia Pacific: APAC is at Risk of AI Regulatory Fragmentation.
- New Role for the Asia Institute building closer ties between US, Korea, and Japan.
- Hyunseok Shin, CEO of Smilegate Vietnam: Replacement of software development due to AI, present and future (Part 3).
By Josh Lee Kok Thong and Dominic Paulger (Future of Privacy Forum)
Policymakers around the world are grappling with how to govern powerful and rapidly advancing artificial intelligence (AI) technologies. While the European Union (EU) is taking a unified omnibus approach through the recently enacted AI Act, the Asia-Pacific (APAC) region is witnessing significant divergence in approaches to AI governance, with major jurisdictions pursuing markedly different regulatory strategies.
This regulatory fragmentation could create a complex patchwork of rules and policies across APAC, complicating compliance for businesses that operate across the region and potentially impeding the development and adoption of AI technologies.
Jurisdictions Share Common Goals but Differ on Approaches
These conclusions are drawn from a year-long study by the Future of Privacy Forum that examined AI governance frameworks and emerging generative AI policies in five key APAC jurisdictions: Australia, China, Japan, Singapore, and South Korea. The study found that these jurisdictions share common goals around promoting responsible AI development but vary significantly in their regulatory approaches.
At one end of the spectrum, China has taken the most assertive stance toward regulating AI. It swiftly enacted binding regulations targeting generative AI and related technologies like deepfakes. These regulations impose strict obligations on providers of AI-powered services and introduce a licensing and registration scheme for certain kinds of algorithms.
At the other end of the spectrum, jurisdictions like Australia, Japan, and Singapore have (thus far) favored voluntary frameworks and industry collaboration over hard regulation. For instance, Australia held a large-scale public consultation on the way forward for AI governance and regulation. It appears to be taking an iterative approach, prioritizing the development of targeted regulation and guidance for high-risk AI applications. Japan is prioritizing international cooperation, particularly through its G7 presidency in 2023, to establish global norms for advanced AI systems. This work has also shaped the development of its domestic AI guidance. Singapore has developed a Model AI Governance Framework for Generative AI that provides a roadmap for a wide range of future initiatives. It is also focusing on AI governance testing initiatives through AI Verify and Project Moonshot.
South Korea falls in the middle of the spectrum. While the Ministry of Science and ICT has been working on comprehensive AI legislation, the Personal Information Protection Commission has been very active in building capabilities around AI and issuing detailed guidance on AI privacy issues.
Some variation in AI governance is expected and reflects each jurisdiction's unique context. However, significant regulatory fragmentation risks creating major compliance challenges. Further, existing laws, such as data protection laws, complicate the picture, as it is still not fully clear how they will apply to generative AI. Companies operating across multiple Asian markets thus face a web of regulatory obligations that increases the complexity of compliance and could hold back the region's ability to realize the benefits of AI. The lack of a coherent regional approach may also weaken APAC's voice in shaping global AI governance.
Emerging Consensus Offers Hope for Regional Interoperability
The study found some encouraging areas of emerging consensus across the five jurisdictions examined. There is broad agreement on the key risks posed by AI systems, including:
- The potential for factual inaccuracies and "hallucinations" in AI-generated content.
- A lack of transparency in how AI systems function.
- Privacy concerns related to the use of personal data to train AI models.
- The potential for malicious use, including spreading misinformation.
- The risk of biased or discriminatory outputs.
Similarly, there is alignment on some recommended governance measures, such as:
- Developing internal AI governance policies and risk management frameworks.
- Enhancing transparency through documentation and disclosures.
- Implementing security controls and incident response processes.
- Creating mechanisms to authenticate AI-generated content.
These areas of consensus could serve as building blocks for greater regional alignment on AI governance. Policymakers can focus on expanding these points of agreement while still factoring in national issues and differing regulatory philosophies.
At the same time, APAC governments could explore several steps to promote more cohesive AI governance. These include:
- Establishing regional AI governance discussions and fora to facilitate policy coordination and the sharing of best regulatory and industry practices.
- Developing common taxonomies and definitions around AI to ensure conceptual alignment.
- Creating mechanisms to mutually recognize AI certifications and assessments, reducing duplicative compliance burdens.
- Collaborating on shared AI standards and testing protocols, as well as enhancing capacity-building and knowledge-sharing.
Importantly, pursuing greater alignment does not require a one-size-fits-all approach. The goal should be to pursue interoperability and mutual compatibility rather than rigid uniformity. Jurisdictions should retain flexibility to adapt frameworks to their specific contexts.
In sum, the APAC region has an opportunity to leverage its technological prowess and innovation capacity to become a leader in effective and balanced AI governance. By working towards greater alignment in AI governance, the region can create an environment that fosters responsible innovation while protecting citizens from potential risks associated with AI. Achieving this will require policymakers to look beyond national borders and work towards a shared vision for the responsible development of AI.
About the Authors
Josh Lee Kok Thong is the Managing Director for FPF's Asia-Pacific region, where he leads a team furthering FPF's mission of advancing principled and pragmatic data protection practices in support of emerging technologies. In this role, he regularly advises governments, lawmakers, and industry leaders. Prior to FPF, Josh drove Singapore's AI governance policies in the Singapore Government. Josh is also an adjunct faculty member of the Singapore Management University Yong Pung How School of Law, where he teaches AI law, policy, and ethics, and a member of Singapore's Law Reform Subcommittee on Robotics and AI. Recognized as an Asia 21 Next Generation Fellow and one of Asia Law Portal's Top 30 to Watch in the business of law in Asia, Josh is a graduate of the University of California, Berkeley, and the Singapore Management University.
Dominic Paulger serves as the Deputy Director of the Future of Privacy Forum's Singapore office, where he leads the team's research and analysis on the privacy implications of new and emerging technologies, including AI, across the Asia-Pacific region. Dominic's notable publications include a 2022 comparative study of legal bases for processing personal data across 14 APAC jurisdictions, a 2023 summary of developments in cross-border data transfer regulations in APAC, and a 2024 report on governance frameworks for generative AI in five key Asian markets. He is a graduate of Singapore Management University and King's College London.

