South Korea’s AI Basic Act takes effect, requiring disclosure of AI use and labelling of deepfakes, with fines of up to 30 million won for violations.
SEOUL: South Korea has become the first country to have a wide-ranging artificial intelligence (AI) regulation law take full effect.
The AI Basic Act, passed in December 2024, came into force with provisions targeting generative AI and deepfakes.
“The AI Basic Act comes into full effect today,” President Lee Jae Myung said.
The law requires companies to give users advance notice when services or products use generative AI.
It also mandates clear labelling for content, including deepfakes, that cannot be readily differentiated from reality.
The Ministry of Science and ICT said the act is meant to “establish a safety- and trust-based foundation to support AI innovation”.
Violations are punishable by a fine of up to 30 million won (USD 20,400).
South Korean media described it as the first comprehensive AI regulation law in the world to take effect, while the ministry called it only the second of its kind globally to be enacted, after the European Union’s AI Act.
The EU adopted its rules in June 2024, but they will not become fully applicable until 2027.
For the past year, however, EU regulators have been able to ban AI systems deemed to pose “unacceptable risks” to society.
South Korea has said it will triple spending on artificial intelligence this year.
The new legislation designates 10 sensitive fields subject to heightened requirements on AI transparency and safety.
These include nuclear power, criminal investigations, loan screening, education and medical care.
“Sceptics fear the regulatory consequences of the law’s enactment,” said Lim Mun-yeong, vice chairman of the presidential council on national AI strategy.
He added that “acceleration of AI innovation is needed to explore an unknown era”.
The government will “accordingly suspend regulation, monitor the situation and respond appropriately” if necessary, Lim said.
Deepfakes have returned to global attention recently after Elon Musk’s Grok AI chatbot drew outrage for enabling users to generate sexualised images of real people.
South Korea’s science ministry said applying digital watermarks to AI-generated content was a “minimum safety measure to prevent the misuse of technology”.
“It is already a global trend adopted by major international companies,” the ministry stated.
The country, home to memory chip giants Samsung and SK hynix, aims to join the United States and China as a top-three AI power.