
HUMAN trafficking involves the movement of human beings primarily for forced labour, sexual exploitation, organ trade or coerced criminal activity, affecting millions of people worldwide.

With the rapid growth of technology and multimedia, traffickers are increasingly turning to social media and other online platforms to locate, groom and recruit potential victims, advertise and sell them, transfer illicit funds and even monitor their victims.

Additionally, traffickers are exploiting internet technologies to directly harm victims, leading to the emergence of “cyber-trafficking” – a new form of trafficking where victims are exploited through online means.

Cyber-trafficking has been further exacerbated by advancements in artificial intelligence (AI) technologies, which allow traffickers to manipulate appearances and identities, making it easier for them to evade detection and continue exploiting individuals.

Some traffickers use AI to automate and expand their scope of operations, targeting vulnerable individuals through online platforms and social media.

For example, AI-powered social media algorithms allow traffickers to target potential victims with deceptive advertisements, such as fake job offers, romantic schemes, and more, all the while maintaining anonymity.

They can also use deepfake technology to create falsified images or videos of victims for online commercial sex markets and pornography. These images are often graphic, disturbing and non-consensual.

Traffickers also use deepfake images and videos to deceive victims by impersonating a prominent figure to gain confidence and trust. This method is often used for job scams and sex trafficking.

While AI has been extensively used by criminal organisations and traffickers to sustain their networks and operate with sophistication, it has also offered law enforcement agencies new tools to counter human trafficking.

Law enforcement agencies have increasingly adopted AI technologies in their efforts to counter human trafficking. This is a progressive step forward in enhancing investigations and disrupting the modus operandi of trafficking operations.

For example, AI can analyse message histories shared in online chatrooms or between traffickers and buyers, or between traffickers and victims. It can also examine video footage and photos for signs of trafficking, a task that has traditionally been labour-intensive and cumbersome.

With advancements in technology, AI can perform these tasks, easing the strain on labour and minimising the psychological burden on law enforcement officers who are under intense pressure to rescue victims.

However, the effectiveness of using AI for countering human trafficking remains uncertain and raises ethical and legal concerns, risking potential harm to victims of trafficking. These harms include, but are not limited to, potential violation of data privacy and AI biases that create discriminatory outcomes.

Currently, the Malaysian Parliament has yet to pass a Bill to address human trafficking cases that involve explicit images created through deepfake technology, or a Bill that comprehensively addresses AI-enabled human trafficking cases.

Unlike the European Union (EU), which published its AI legislation, the EU Artificial Intelligence Act (EU AI Act), on July 12 before it came into force on Aug 1, Malaysia’s anti-human trafficking laws and regulatory frameworks have not kept pace with the rapid development of AI.

Existing laws, such as the Anti-Trafficking in Persons and Anti-Smuggling of Migrants Act 2007 (Atipsom), the Penal Code, the Child Act 2001, the Sexual Offences Against Children Act 2017 and the Immigration Act 1959/63, focus primarily on traditional trafficking methods.

For example, Atipsom and the Penal Code do not address anonymised trafficking operations, AI-manipulated images and identities, or the exploitation of global digital platforms.

Even the Evidence Act 1950 does not accurately capture AI technology and its outputs under section 90A of the Act, which allows the admissibility of documents produced by computers and the statements contained within them.

For now, what we can rely on is the extra-territorial scope of the EU AI Act, which governs the development, deployment, supply and use of AI systems based on the risk level of the systems.

Article 2(1)(a) of the Act states that “the EU AI Act applies to providers placing on the market or putting into service AI systems or GPAI models in the EU, irrespective of whether those providers are established or located within the EU or in a third country”.

In this case, Malaysian entities may be affected if they, with or without a physical presence in the EU, make the output of an AI system available for use in the EU.

However, the EU AI Act does not apply to AI systems used for military, defence, national security or personal use.

Given the lacuna in the law relating to the use of AI, the Malaysian government should determine questions of culpability, the liable entity and methods of prosecution for offences involving AI.

Whom can we blame if traffickers use chatbots, deepfake images, videos or AI-related technologies to deceive, coerce or abuse the vulnerability of a person? Can we call this an “offenderless” crime, or do we penalise the programmer for his or her inherent role, which would be tantamount to a breach of natural justice?

At present, laws are enforceable by and against legal persons only. Within this context, human beings, states, businesses, professional bodies, companies and corporations are legal persons and can be brought to court.

Can we now consider machines themselves as legal persons?

But how can we establish the intention of the robot?

Can a robot claim defences currently available to people, such as diminished responsibility, provocation, self-defence, necessity, mental disorder or intoxication, should it begin to malfunction or make flawed decisions?

Currently, there is no recognition of robots as legal persons – so they cannot be held liable or culpable for any wrongdoings or harm caused to anyone.

In conclusion, AI has become a double-edged sword in the context of human trafficking. While it offers traffickers new tools to expand their operations, it also empowers law enforcement with the ability to detect and dismantle trafficking networks more efficiently.

AI and algorithms also have the potential to enhance various aspects of criminal justice decision-making. However, lawmakers need to update and amend the current anti-human trafficking legal frameworks to effectively counter human trafficking.

New laws and regulations should focus on utilising AI while safeguarding privacy and mitigating the biases inherent in AI systems.

They should also address “offenderless” crimes, where it is hard to identify the traffickers. The law as it stands lacks clarity.

Therefore, it is imperative for states to not only embrace the use of AI in countering human trafficking but also address the culpability of AI when used in trafficking operations.

The writer is a criminologist and deputy dean at the Faculty of Law, Universiti Malaya.
Comments: letters@thesundaily.com