Recent Jalur Gemilang gaffes highlight importance of human oversight in content-generation process

PETALING JAYA: A string of slip-ups in AI-generated images of the Jalur Gemilang involving a national newspaper, a global expo and a ministry report has shone a spotlight on the risks of unchecked AI use.

Universiti Malaysia Kelantan Institute for Artificial Intelligence and Big Data (Aibig) director Dr Muhammad Akmal Remli said such errors point to flaws in how global AI models are trained and deployed, especially when it comes to culturally specific content.

“Inaccuracies like a wrongly rendered Jalur Gemilang happen because the AI model may not have had sufficient exposure to correct representations of Malaysian symbols during training,” he said, adding that while some AI tools perform impressively when generating generic content, they often falter with highly specific cultural or national elements such as flags.

“AI can generate a wide range of content such as text, voice, images and videos based on prompts. But what many do not realise is that when AI is asked to create an image of a classroom with a Malaysian flag, the outcome depends on how the AI interprets those prompts through numerical tokens and what it has learnt from its training data.”

Muhammad Akmal said this reflects a broader issue – many generative AI models are built and trained within global frameworks that often under-represent countries like Malaysia.

“Global AI models frequently lack sufficient regional and cultural training data. There is an opportunity here for Malaysia to develop its own AI systems.”

He said with government backing, local startups and tech companies could step up and train models using Malaysian data involving cultural symbols, traditions and languages.

Muhammad Akmal also said using AI in government and media settings, especially for public content, requires greater caution.

“Incidents like these are a wake-up call. We must use AI responsibly, not just chase trends.”

He emphasised the need for safeguards at several levels, including determining if AI-generated content is even necessary, and conducting rigorous reviews before publication.

“Human oversight is not optional. It is essential.”

While some have called for new laws to regulate AI, Muhammad Akmal said Malaysia already has a framework in place.

The National Guidelines on AI Governance and Ethics issued by the Science, Technology and Innovation Ministry last year aim to encourage responsible AI use across all sectors.

“Instead of piling on new rules, the focus should be on tightening implementation through proper training and awareness among government and media professionals,” he said, adding that AI should complement, not replace, human decision-making.

“AI is a tool. It can help spark ideas or automate tasks, but humans must still lead. Particularly with editorial or official content, relying solely on AI without verifying the output could result in serious slip-ups.

“Experts can spot errors AI might overlook. This collaboration delivers efficiency without compromising accuracy, which is critical when dealing with culturally or nationally sensitive content.”

He also urged developers to improve the quality and diversity of training data.

“Biases or inaccuracies in datasets will inevitably surface in AI output. Developers must aim for high-quality, representative data, especially in culturally sensitive areas.”

Despite the recent flag-related blunders, Muhammad Akmal believes public confidence in AI remains intact.

“I don’t think trust in AI has been lost. Most people will likely see this as a human oversight. But it’s a timely reminder that working with AI requires extra care, especially when national identity is involved.”

To promote responsible AI use, he called for proactive public engagement.

“Institutions should hold dialogues, training sessions and awareness campaigns for the public on responsible AI practices.

“At Aibig, we run regular training programmes to equip participants with best practices in AI safety and ethics.”

Muhammad Akmal said as Malaysia advances into the digital era, AI can be a powerful ally, but only when guided by human judgement, local insight and ethical responsibility.