ARTIFICIAL Intelligence (AI) is a dynamic force shaping our everyday lives, transforming how we learn, communicate and engage with online information.
In an era where AI shapes what we read, watch and believe, critical thinking is no longer optional – it is essential.
A recent report by cloud-native data specialist Domo reveals the staggering scale of AI activity, indicating that every minute, ChatGPT is prompted 6,944 times and at least one deepfake is created. These figures underscore AI's profound and pervasive influence across tasks both simple and complex.
Across the world, the rapid spread of deepfakes has sparked outrage, highlighting the pressing need for stricter regulations against non-consensual synthetic media.
AI-generated explicit images falsely depicting public figures, including Pope Francis, Taylor Swift and, more recently in Malaysia, professional cosplayer Elyana Sparks, have spread across social media, raising concerns over digital exploitation and privacy violations.
Malaysia’s deputy communications minister revealed that the Malaysian Communications and Multimedia Commission removed 1,225 AI-generated explicit content posts last year. While alarming, this trend is not new, highlighting the ongoing challenge of regulating AI-driven digital abuse.
Anyone can be targeted through deepfakes. Scammers have also exploited the image of local entrepreneur and influencer Khairul Aming, using a deepfake video to falsely advertise a cooking wok.
Similarly, prominent local figures such as Datuk Seri Siti Nurhaliza and Datuk Lee Chong Wei have been exploited in deepfake investment scams.
These cases underscore the rising threat of AI-driven deception and the urgent need for stronger public education, stricter regulations and greater accountability from platform service providers.
The education system must build the public's ability to critically analyse AI outputs and understand algorithmic bias, equipping them with the skills to evaluate AI systems.
Recent research by RMIT University researchers Prof Nicola Henry and Alice Witt reveals gaps in public understanding of deepfakes and how to detect them.
As AI-generated media becomes more sophisticated, the need for media and information literacy (MIL) and AI literacy education is critical, particularly in distinguishing between consensual and non-consensual digital content and addressing the harms of deepfake abuse.
There is no magic bullet to counter this. The rise of deepfake abuse underscores the urgent need for enhanced public education, and policymakers should prioritise developing it well from the outset.
Regulators, the public education system and tech service providers must work together to sustain and strengthen educational initiatives that empower the public with the skills to identify, question and mitigate the risks posed by AI-driven manipulation.
Equipping citizens with MIL is not about telling people what to think but about giving them the tools and skills to think critically and to evaluate the sources, content and impact of information in the age of AI.
It means asking questions like “Where did this piece of content come from?” and “Why was this piece of content created?”
Developing critical thinking competency means being able to ask tough questions about AI-generated content, algorithmic decisions and the motives behind AI-powered systems.
In 2024, Unesco underscored the necessity of equipping individuals with MIL competencies for dealing with AI-generated content, stressing the importance of teaching and training people to use synthetic media and interact with non-human agents in their everyday lives.
MIL competencies foster responsible usage by helping users distinguish between authentic and AI-generated media, identify misinformation and assess the ethical implications of synthetic media.
Unesco’s policy brief on integrating MIL into AI literacy frameworks outlines essential competencies across four key categories:
Knowledge: Understanding AI’s potentials and risks, recognising the geopolitical landscape of AI and assessing its societal and environmental impacts.
Skills: Developing the ability to use AI tools responsibly, critically evaluate information sources and interact effectively with AI systems.
Attitudes: Fostering critical and creative thinking regarding AI applications, acknowledging personal biases and promoting ethical considerations in AI use.
Values: Advocating digital rights, privacy and inclusivity while opposing mass surveillance and supporting the well-being of all individuals in AI interactions.
AI’s advancement should lead to human progress, not displacement or harm. As AI advances, so must our skills, knowledge and societal structures to ensure meaningful coexistence.
The future of AI is not solely in the hands of developers but in collective human agency, through ethical oversight, collaboration and empowerment.
Equipping citizens with MIL is crucial as this will enable them to engage critically and responsibly in societal affairs while shaping AI’s trajectory for the common good.
Lai Cheng Wong is an educator and MIL advocate for Asean Network. Comments: letters@thesundaily.com