Experts warn of rising digital exploitation, urge stronger laws, parent education and cohesive national response.

PETALING JAYA: The use of artificial intelligence (AI) to create child sexual abuse material (CSAM) must be treated with the same severity as physical abuse, said MCA information chief Chan Quin Er.

She said there is no such thing as “fake” child abuse when AI-generated images depict minors in sexually explicit situations.

Such content still constitutes a criminal offence under existing Malaysian laws.

“Whether the image is real, digitally altered or entirely computer-generated, once it portrays a child in a sexual context, it amounts to a clear act of violation,” she said in a statement.

Chan cited several legal frameworks that address such offences, including the Sexual Offences Against Children Act 2017, which criminalises the creation, possession or distribution of CSAM regardless of whether it is AI-generated.

She also referred to Section 292 of the Penal Code and Section 233 of the Communications and Multimedia Act, both of which cover the circulation of obscene content.

In addition, she highlighted the Anti-Sexual Harassment Act 2022, which encompasses a broader definition of sexual harassment, including digital forms.

Chan warned that normalising AI-generated CSAM through entertainment or social media risks eroding public understanding of what constitutes a criminal offence.

“There’s a perception that if it’s not real, then it’s not a big deal. However, if it involves a child and is sexual in nature, it is a crime. The creation or distribution of deepfake child pornography must be met with the full force of the law.”

Following Chan’s statement, theSun spoke with two cybersecurity experts, both of whom highlighted the growing dangers of AI in public spaces.

Global Centre for Cyber Safety director Assoc Prof Datuk Dr Husin Jazri described AI-generated CSAM as a “real and formidable” threat, driven by the increasing accessibility of generative AI tools.

While Malaysia has multiple cybersecurity agencies, he said there is a need for a more cohesive and proactive approach.

“Such cases often go unnoticed because we lack a centralised body focused specifically on digital safety for individuals. A dedicated centre based at a university could lead research, education and policy development, provided it receives the necessary support.”

He added that cyber safety has yet to be systematically integrated into school curricula, leaving many parents without adequate guidance.

“A structured public awareness campaign would go a long way in helping families take preventative measures.”

Universiti Malaya cybersecurity specialist Dr Nor Badrul Anuar Jumaat said the rise of generative AI has significantly altered the landscape of online exploitation.

“With advanced tools such as OpenAI’s Sora, Google’s Veo and image generators such as Midjourney and Stable Diffusion, criminals can now produce realistic fake images or videos using nothing more than simple text prompts.”

He added that photographs of children shared on social media can be misused and manipulated, while AI voice cloning tools, such as www.elevenlabs.io and www.vo3ai.com, can be exploited in similar ways.

Dr Badrul said one of the most immediate solutions lies in educating parents.

“Parents must understand that even ordinary photos, such as a child in school uniform or at home, can reveal too much. Once online, this content can be copied, altered and repurposed.”

He advised parents to exercise caution by adjusting privacy settings, avoiding posts with location or identifying details, and involving their children in decisions about what content is shared.

He also recommended educational resources such as the MCMC’s Klik Dengan Bijak campaign and UMCybersafe as valuable tools to raise awareness on the issue.