2025-09-08 09:15 AM

A RECENT case in Malaysia highlights a troubling trend in modern healthcare.

Some patients are beginning to place more trust in artificial intelligence (AI) tools like ChatGPT than in medical professionals.

In this incident, a patient visited a clinic demanding specific antibiotics and injections, insisting on the exact treatment suggested by ChatGPT. He even refused to pay the consultation fee, claiming he already knew what he needed. But medicine is never that straightforward.

Safe and effective treatment requires careful history-taking, physical examination and clinical judgement. No algorithm can currently replicate these fully.

The physician suspected a contagious infection that required not only treatment for the patient but also for his close contacts to prevent further spread. The patient, however, dismissed these professional concerns and relied solely on AI’s narrow recommendations.

This incident is far from isolated. AI has become a popular tool for people seeking quick health answers.

With just a few clicks, users can ask AI about symptoms, treatments and even medications. While this convenience is tempting, it also comes with serious risks that deserve attention.

AI can certainly provide general health information. It can explain common symptoms, suggest possible causes or offer lifestyle tips. However, it cannot diagnose with nuance, interpret subtle signs or understand the complexity of an individual’s medical history.

Moreover, AI’s accuracy depends heavily on the quality of the information entered. Without the depth of a proper clinical assessment, its advice remains incomplete and potentially misleading.

In reality, most users enter only surface-level information and obvious symptoms, which can easily lead the AI astray.

The danger is that such limited advice can still feel convincing. It creates an illusion of certainty, where patients believe they have clear answers when crucial details have never been considered.

When users rely too heavily on AI, they risk overlooking serious conditions, delaying necessary medical attention or attempting self-medication without proper guidance.

In health matters, small details often make a big difference, and without a professional examination those details rarely reach the AI at all.

It is important to emphasise that AI itself is not the enemy. When used wisely, it can support patients in learning about health conditions, preparing questions before a clinic visit or gaining a clearer understanding of medical terminology.

However, AI should be seen as a supportive tool, not a substitute for professional expertise.

Relying solely on AI for treatment carries real risks such as misdiagnosis, wrong medication, delayed care and even the spread of preventable diseases.

As technology advances, more people may be tempted to use it for quick answers, but health is never one-size-fits-all.

Currently, no algorithm can replace the safety, accuracy and compassion of qualified healthcare professionals.

The public must remember that health is too valuable to be left in the hands of an algorithm.

Use AI to ask better questions, not to bypass medical professionals.

Dr Wu Shin Ling is from the Faculty of Medical and Life Sciences, School of Psychology, Sunway University. Comments: letters@thesundaily.com