Universities urged to prioritise creativity, ethics and critical thinking as AI tools reshape student work habits

PETALING JAYA: As artificial intelligence (AI) becomes embedded in students’ daily academic routines, universities face urgent calls to rethink how they teach and assess learning before real understanding is replaced by AI-generated shortcuts.

Universiti Teknikal Malaysia Melaka Faculty of Artificial Intelligence and Cybersecurity dean Assoc Prof Dr Muhammad Hafidz Fazli Md Fauadi warned that over-reliance on AI risks producing graduates who pass exams but lack critical thinking and practical skills.

“Recent advances in AI have reshaped education. You’ll hear students say, ‘I finished my final assignment in four hours using AI’.

“It may sound like a joke, but it reflects a real challenge in today’s higher education landscape,” he said.

His comments followed theSun’s report on a student’s Facebook post claiming they breezed through assignments with AI tools, prompting backlash from netizens concerned about eroding cognitive skills.

AI tools now enable students to generate polished essays, technical reports or even coding projects within minutes, often without grasping the underlying content.

Muhammad Hafidz said this demands a serious rethink by universities, particularly lecturers and administrators, to ensure that teaching remains relevant in the digital age.

“Policy-makers and academic leaders must redesign curricula to integrate AI responsibly.”

He stressed that assessment should go beyond factual knowledge to evaluate creativity, critical thinking, problem-solving and ethics.

He said banning AI outright is neither feasible nor productive given its growing presence in modern workplaces.

Instead, students should be taught to use it ethically, in line with the World Economic Forum’s 2025 report which highlights AI literacy, creativity and critical thinking as essential job skills.

“Educators also need ongoing professional development to understand AI tools and how to assess AI-assisted work fairly.

“By adapting frameworks such as Bloom’s Taxonomy, institutions can build clear rubrics that measure originality, technical skills and practical AI usage, helping separate genuine learning from over-reliance.”

He added that the focus should shift from outcomes to learning processes.

Requiring students to submit multiple drafts, include software logs or maintain reflective journals can promote engagement and limit misuse.

“Transparency is key. Students should disclose AI prompts, responses and their own edits.

“This fosters explainability, especially in STEM disciplines, and helps them critically assess the role of AI in their work.”

Muhammad Hafidz also recommended expanding oral and in-person assessments.

“When asked, ‘Why did you choose this approach?’, students who rely too heavily on AI often can’t explain their own work. That’s a red flag.”

He further advocated phased assessments – breaking assignments into proposals, drafts and final submissions – to reduce last-minute dependency on generative tools.

Some platforms now even allow educators to track AI interactions in real time.

The most effective safeguard, he noted, is designing assignments that require real-world problem-solving and creativity, areas where AI alone falls short.

“Engineering students, for instance, could be tasked with designing a solar-powered system for a specific village, complete with site planning, cost analysis and social impact evaluation. That’s not something you can easily copy-paste from AI.”

Ultimately, he urged educators to go beyond preparing students for exams and equip them for the real world.