PETALING JAYA: The increasing adoption of artificial intelligence (AI) by companies, which risks displacing a large portion of the workforce, has prompted calls for greater emphasis on helping affected workers remain employed.
According to the Statistics Department, there were 558,500 unemployed individuals in Malaysia as of August.
The situation may worsen as the implementation of AI reshapes the job market.
Concerns have grown particularly following the recent decision by TikTok to lay off 481 employees in Malaysia as part of a global workforce reduction, with the social media platform shifting its focus to increased use of AI in content moderation.
The move has primarily affected employees involved in content moderation, who were informed of their layoffs via email earlier this month.
Universiti Sains Islam Malaysia human resources and development lecturer Assoc Prof Dr Abdul Rahim Zumrah said the rise of AI will inevitably impact job roles in the tech industry.
He said the current situation could contribute to a higher unemployment rate as compensation is one of the biggest liabilities for companies.
“To reduce this liability, many organisations will opt for more cost-effective alternatives, such as AI, to replace employees,” he told theSun.
“While technology and innovation are unavoidable, profit-driven companies focus on cutting costs, with one key strategy being to minimise their workforce.”
Abdul Rahim said some big companies were willing to invest in technology if it could deliver the same output or service as human workers.
He stressed that laid-off individuals may face significant challenges when seeking new employment, particularly due to the intense competition in today’s job market.
“The main obstacle is often related to salary expectations, which can be a major barrier when applying for positions at other organisations,” he said.
“While their work experience may not be an issue, the real challenge lies in whether they are willing to accept a lower salary or a reduced position in order to secure a new job.”
International Islamic University Malaysia Department of Mechatronics Engineering associate professor Dr Yasir Mohd Mustafah said the layoffs by TikTok highlight the growing impact of AI on jobs, in this case content moderation, where AI is increasingly able to recognise inappropriate content.
“This shift by TikTok, while increasing their efficiency, might remove the human understanding of local cultural context from the task, which could later bring problems to the company.
“Nonetheless, Malaysians need to acknowledge this trend brought by AI, especially in digital service industries, and must work hard to improve their knowledge of AI-related fields and find ways to adapt to new roles.”
Yasir said the future of jobs will likely involve a hybrid model of human and AI collaboration.
“While AI handles routine tasks such as quick filtering of digital content, humans will focus on complex analyses and issues requiring critical thinking, empathy and cultural understanding,” he said.
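The division of labour Yasir describes, with AI filtering routine cases and people handling ambiguous ones, is often implemented as confidence-threshold routing. The sketch below is illustrative only and is not TikTok's system; the classifier scores and threshold are hypothetical assumptions.

```python
# Minimal sketch of hybrid human-AI content moderation via
# confidence-threshold routing. The scores below are hypothetical;
# a real system would obtain them from an actual classifier model.

def route_content(score: float, threshold: float = 0.9) -> str:
    """Route a moderation decision based on the AI's violation probability.

    Confident cases are handled automatically; ambiguous cases
    (sarcasm, local cultural context) are escalated to human reviewers.
    """
    if score >= threshold:
        return "auto-removed"    # AI is confident the content violates policy
    if score <= 1 - threshold:
        return "auto-approved"   # AI is confident the content is fine
    return "human-review"        # ambiguous: needs human contextual judgment

# Hypothetical moderation queue: (content, classifier violation probability)
queue = [("spam link", 0.97), ("sarcastic joke", 0.55), ("greeting", 0.02)]
decisions = {text: route_content(score) for text, score in queue}
```

Only the middle item, whose score falls between the two cut-offs, reaches a human; raising the threshold sends more borderline content to reviewers at the cost of automation.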
Malaysian Animation Educators Association president Assoc Prof Ahamad Tarmizi Azizan said while AI has become an efficient tool in moderating content, it is still far from perfect.
He said AI often struggles with nuanced contexts, cultural sensitivities or sarcasm, which may cause misinterpretation and inappropriate flagging or censoring.
“Therefore, human oversight remains essential to ensure that AI-driven content moderation systems function effectively and ethically.
“This oversight is crucial for addressing complex cases, reducing biases in AI decisions and ensuring that the moderation process aligns with ethical standards and user expectations,” he said.
Ahamad Tarmizi added that the shift toward AI in content moderation will inevitably transform roles within the tech industry, with repetitive, high-volume tasks largely delegated to AI systems.
He stressed that deploying AI in content moderation raises significant ethical concerns, including algorithms that may inherit or even amplify biases present in their training data, leading to discriminatory or inconsistent moderation.