2025-08-07 08:02 AM

PARIS: An image by AFP photojournalist Omar al-Qattaa shows a severely malnourished girl in Gaza amid Israel’s blockade.

Social media users questioned Grok, Elon Musk’s AI chatbot, about the photo’s origin.

Grok incorrectly stated the image depicted a Yemeni child from 2018.

The photo actually shows nine-year-old Mariam Dawwas in Gaza City on August 2, 2025.

Before the war, Mariam weighed 25 kilograms, her mother told AFP.

Now, she weighs only nine kilograms and survives mostly on milk, which is “not always available.”

When challenged, Grok insisted it does not spread fake news and relies on verified sources.

However, the chatbot later repeated the same incorrect claim about the photo’s origin.

Grok has previously generated controversial responses, including praise for Nazi leader Adolf Hitler.

Louis de Diesbach, a researcher in technological ethics, highlighted AI’s limitations.

He described AI tools as “black boxes” with unclear reasoning behind their responses.

Grok’s biases align with Elon Musk’s ideological leanings, according to de Diesbach.

AI chatbots are not designed for verifying facts but for generating content, he explained.

Another AFP photo of a starving Gazan child was also misidentified by Grok as being from Yemen.

De Diesbach warned against relying on AI for factual accuracy, calling chatbots “friendly pathological liars.”

Mistral AI’s Le Chat also misidentified the Gaza famine photo as Yemeni.

A chatbot’s training data and alignment phase shape its responses, so the same error can persist even after it has been pointed out.

Experts urge caution when using AI tools for fact-checking, given their unreliability. - AFP