2025-07-29 09:00 AM

PETALING JAYA: The government’s proposal to mandate the labelling of AI-generated content under the forthcoming Online Safety Act is a positive step, but experts warn it must be reinforced by technical and legal safeguards to be effective.

UTM Assoc Prof Dr Zool Hilmi Ismail said the move is timely, but enforcing it across all platforms will be technically complex and may fall short of preventing malicious AI use.

“Detecting AI-generated content isn’t foolproof. The tools we have are still developing and can struggle with new or cleverly altered content. Studies show that deepfake detectors often get it wrong with poor-quality videos, content involving non-Western faces or when the AI has used tricks to avoid detection. In general, technology can’t keep up with how fast AI is advancing.”

Zool noted that while labelling AI content could increase transparency, it also raises difficult questions about accountability.

“Who is responsible – the creator, platform or third-party aggregators? Logging digital signatures or watermarks could help, but setting up and regulating such a system end-to-end is complex.”

He also cautioned that labelling alone will not deter more harmful applications of AI, such as scams and voice-cloning fraud.

“Malaysia needs to strengthen technical safeguards, such as digital watermarking at the point of creation, robust detection systems and fast takedown mechanisms. We also need advanced forensic tools to support investigations.”

IIUM Assoc Prof Dr Mahyuddin Daud described the labelling proposal as a “significant legal development”.

“It mirrors steps taken in countries such as Spain, where failing to label AI content is now considered a serious offence. China is also moving in this direction, requiring platforms to disclose watermarks and metadata.”

However, Mahyuddin stressed that without strong enforcement, the law will have little effect.

“Malaysia should introduce clear penalties, such as fines tied to platform revenues and legal duties for platforms to actively detect and remove harmful AI content.”

He also recommended mandatory standards for verifying AI material, backed by independent audits, along with specific laws targeting high-risk content such as deepfake pornography and political disinformation.

On July 13, Communications Minister Datuk Fahmi Fadzil said the government may consider making it mandatory for digital platforms to label AI-generated content under the Online Safety Act, which is expected to come into force by the end of the year. He said such a requirement was crucial in addressing concerns about boundaries and the risk of misinformation and fake news spreading through social media.