Korea IT Times celebrates its 20th anniversary with insightful columns from local and international thought leaders. Following contributions from experts from all walks of life in July, August, September, and October, the column series continues in November and December.
- Jinkook Kim, CEO of Coreline Soft: "Lung Cancer Awareness Month" in November: The No. 1 Killer... Early Detection Saves Lives
- Ananth Lazarus, Managing Director, APAC, GTDC: Co-opetition: How Collaboration is Strengthening Cybersecurity Defenses in South Korea
- Jeremy Foo, Global Head of Gaming at TZ APAC: Building a Sustainable Web3 Gaming Ecosystem: Why Real Utility, Fair Play, and Innovation Are Key
- James Toledano, Chief Operating Officer of Unity Wallet: Crypto For Beginners: 4 Things To Know Before Getting Started
- James Lee, Head of the North American Division at Coreline Soft: Coreline Soft and Temple Lung Center, Opening a New Chapter in Lung Health
- Jason Lau, ISACA Board Director: AI in 2024: Balancing Innovation with Security and Ethics
- Byoung Min Im, Columnist: Malaysia's Johor State will be the next Singapore
- Alexandre Dreyfus, founder and CEO of Chiliz and Socios.com: Blockchain's Game-Changer Ushers in a New Era of Sports Engagement
By Jason Lau
Artificial intelligence (AI) is no longer just a tech industry buzzword; it’s the engine driving innovation across every sector, from healthcare to finance and defense. As AI reshapes our world at an unprecedented pace, it brings both opportunity and complexity. While the promise of enhanced efficiency, predictive insights, and automation is clear, AI’s rapid adoption also raises critical questions: Are we prepared to govern this technology responsibly? Do we have the right skills, policies, and ethical frameworks to manage AI’s risks?
South Korea, with its National Strategy for Artificial Intelligence, has set ambitious goals to lead in AI while maintaining a commitment to security and ethical governance. Its approach demonstrates a model of balanced, forward-thinking innovation that the world can learn from. However, as recent findings from ISACA’s 2024 AI Pulse Poll and State of Cybersecurity 2024 Report reveal, many organizations globally face significant gaps in AI skills, governance, and ethical deployment, leaving critical vulnerabilities unaddressed.
Bridging the AI Skills Gap
One of the most pressing challenges in the AI landscape is the skills gap. According to ISACA’s recent poll, only 25% of digital trust professionals describe themselves as highly familiar with AI, while 46% see themselves as beginners. This lack of foundational knowledge poses real risks, especially since 40% of organizations globally offer no AI training at all. Another 32% restrict training to tech-focused roles, which can leave entire teams underprepared for the risks and opportunities of AI.
In South Korea, there’s a growing awareness that AI knowledge shouldn’t be siloed. The country has pushed for AI literacy through both national and corporate training initiatives, aiming to create a digitally aware workforce. This type of inclusive approach to AI education, extending beyond just tech roles, is crucial in ensuring that everyone involved can understand and manage AI’s risks and rewards responsibly.
Building Stronger AI Policies
In addition to the skills gap, policy development around AI remains inconsistent. ISACA’s survey shows that only 15% of organizations have a comprehensive AI policy in place, and only 35% of cybersecurity teams are involved in setting these policies. South Korea’s strategy offers a sharp contrast: its National Strategy for AI includes formal governance frameworks that address both ethical standards and data privacy concerns.
At the 2024 Seoul AI Summit, South Korean leaders called for “Responsible AI” practices, especially in areas like defense, where misuse could have life-or-death implications. This blend of broad national policy with sector-specific guidance offers a balanced approach that other countries can look to when shaping their own AI regulations.
Prioritizing Ethics in AI Deployment
Ethics in AI deployment is another critical area where organizations are falling short. According to ISACA, only 34% of respondents believe their organizations give adequate attention to ethical standards in AI, and just 32% feel data privacy and bias issues are fully addressed. South Korea’s proactive stance in tackling ethical issues, such as those related to AI-generated deepfakes, is an example of how countries can address these emerging threats head-on. With South Korea accounting for 53% of global deepfake content targeting public figures, the country has established initiatives like the Digital Sex Crime Support Center to combat these ethical challenges. This commitment to transparency and ethical AI deployment serves as a model for addressing misinformation and data misuse on a global scale.
Taking Action Towards Responsible AI
As AI permeates every corner of industry and society, organizations worldwide face a pivotal choice. Will they harness AI’s transformative power while committing to responsible governance, or risk letting it grow unchecked? South Korea’s approach—integrating innovation with ethical and security safeguards—illustrates that it is possible to embrace AI responsibly, setting a standard that others can follow.
For organizations just beginning their AI journey, ISACA’s resources offer invaluable guidance and a roadmap for navigating this complex environment responsibly. For those developing their own policies, ISACA’s Policy Template Library Toolkit includes templates for acceptable AI use and risk management, helping companies establish clear, adaptable guidelines. For those seeking to expand their AI knowledge and skills, ISACA’s AI Essentials course is designed for professionals across roles, helping them understand core AI concepts and ethical considerations. And for those tasked with oversight, the Auditing Generative AI course equips teams to assess the risks associated with popular generative AI tools. Addressing skills gaps, building clear policies, and embedding ethical standards aren’t just best practices; they’re essential for fostering digital trust in an AI-driven world.
In the end, creating a secure and ethical AI future depends on taking action now. By investing in skills, policies, and ethics, organizations can harness AI’s full potential and build a foundation of trust that will support the next wave of innovation. Responsible AI is not only achievable but necessary, and the choices we make today will shape AI’s role in our future.