South Korea’s AI law is the first of its kind: It aims to push AI adoption by keeping misuse firmly in check

South Korea is the first country to enact a comprehensive national AI law. (istockphoto)

Summary

With its AI Basic Act now in force, South Korea has become the first country to enact a comprehensive national law for artificial intelligence. The aim is not to slow adoption, but to make AI deployment safer, more transparent and sustainable—before deepfakes and scams can provoke a public backlash.

Nearly a decade ago, long before ChatGPT wowed the world with its humanlike conversational abilities, Google DeepMind’s artificial intelligence (AI) system stunned South Korea when it beat legendary Go player Lee Sedol during a televised tournament in Seoul.

The Go master and 18-time world champion of the centuries-old strategy game later retired, calling AI an “entity that cannot be defeated.” The spectacle was a warning, with then-president Park Geun-hye declaring Korean society was “ironically lucky” to have learned about the nascent technology’s importance “before it is too late.”

That early shock has since morphed into one of the fastest surges in AI use anywhere in the post-ChatGPT era. And Seoul wants to turn that momentum into something rarer: durable public trust. It has become the first country to enact a comprehensive national AI law, with its so-called AI Basic Act taking effect last week.

As the US and China compete to build the best models, South Korea is stress-testing a more immediate question: how an advanced, hyper-connected economy can roll out AI rapidly without letting scams, deepfakes and slop wallop public trust. Seoul is betting that rules don’t have to kill adoption; they can legitimize it.

The rest of the world will be watching closely. The nation has also become a live demo of how quickly the technology can spread throughout the real economy when conditions are right. Microsoft’s AI Economy Institute called South Korea “the clearest end-of-year success story” in its Diffusion Report this month, citing the sharpest spike in adoption in the second half of last year.

Since October 2024, generative AI usage has grown by 25% in the US and 35% globally. In South Korea, it jumped more than 80%. Microsoft attributed the surge to improvements in the Korean-language capabilities of large language models. It also pointed to the viral Studio Ghibli moment in April 2025, when global users were mesmerized by ChatGPT’s image generator. The one-off trend spread rapidly in South Korea and, rather than fading, resulted in lasting adoption of the technology.

And Microsoft argued that government policy—including the passage of the AI Basic Act—helped speed integration across schools, workplaces and public services.

The result is a society leaning into the revolution with unusual enthusiasm. The nation has the second-highest number of paying ChatGPT subscribers, behind the US. And at just 16%, South Korea had the lowest percentage of respondents who said they were “more concerned than excited” about the rise of AI in daily life—less than half the global average of 34% and far below the US’s 50%, according to Pew Research Center data.

But these superlatives come with outsize exposure to the downsides. By some measures, the country also consumes the most ‘AI slop’ of any nation. And well before Elon Musk’s Grok triggered a global backlash over non-consensual AI nudes, South Korea was already grappling with a deepfake porn crisis.

Many governments, spooked by a hype super-cycle and fears of falling behind in geopolitical competition, are hesitating to regulate. The stated aim of Seoul’s law is to lay “a foundation of trustworthiness” for AI’s role in society, not after the damage is done, but before it scales.

Inspired by a similar law enacted by the EU but taking effect earlier, South Korea’s rules require stronger human oversight and disclosures when AI is used in sensitive domains, from loan screening to nuclear facility management. They also require labeling tools, such as watermarks, for machine-generated material that can be hard to distinguish from reality.

Critics argue that the laws are vague, risk chilling innovation and could hit startups harder than Big Tech, which can absorb compliance costs. Some of that concern is fair, and so far, the government has appeared sympathetic to local industry feedback. But Seoul deserves credit for acting before a backlash becomes irreversible. With 98% of the population online and the world’s highest density of industrial robots, the country is unusually well positioned to turn widespread adoption into tangible economic gains.

That also makes it a useful test case for policymakers elsewhere, who are stuck between falling behind and confronting a mounting list of societal concerns.

The point of AI guardrails is not to slow deployment; it is to make it sustainable. When it comes to such a transformative technology, the bigger constraint may not be regulation but trust. If Seoul can scale AI while holding the line on deception and abuse, it will show other jurisdictions how to do both. ©Bloomberg

The author is a Bloomberg Opinion columnist covering Asia tech.
