‘Big Brother is watching you’: AI and its impact on social media

Introduction

Artificial Intelligence (AI) is a technology that aims to build systems that think and act like humans. The suggestion is that AI has become our “Big Brother”: algorithms learn our personal preferences and recommend products we are more likely to buy or videos we are more likely to watch, while cameras and sensors watch every move we make. While AI is intended to improve our standard of living and quality of life, and reduce the risk of human error, it also leaves vast room for abuse. Here, we explore how the use of AI impacts social media platforms, and suggest steps to safeguard such platforms and consumers, including the role of the Malaysian Content Code 2022 (Code) in regulating AI content.

The impact of AI on social media platforms

Much of modern AI is built on “deep learning”, where algorithms are “trained” to recognise deeply buried patterns and correlate multifarious data points towards a desired outcome. Social media platforms such as Facebook, Instagram, TikTok and Twitter use AI algorithms as recommendation engines: by analysing your browsing habits and history, namely what you click, watch, or buy, these platforms infer your preferences and surface the content you are most likely to engage with.
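To illustrate the mechanism, the following simplified Python sketch shows how a recommendation engine of this kind might score posts against a user’s inferred interests. Every name and value in it (the interest weights, the topic tags, the score function) is a hypothetical assumption for illustration; real platforms rely on large deep-learning models rather than hand-written rules.

    # Illustrative sketch only; all names and weights here are hypothetical.
    # Interest weights inferred from a user's clicks, watches, and purchases.
    user_interests = {"fitness": 0.8, "cooking": 0.5, "finance": 0.1}

    # Candidate posts, each tagged with topics (a simplification of real metadata).
    posts = [
        {"id": 1, "topics": ["fitness", "finance"]},
        {"id": 2, "topics": ["cooking"]},
        {"id": 3, "topics": ["travel"]},
    ]

    def score(post, interests):
        # A post scores higher the more it overlaps with the inferred interests.
        return sum(interests.get(topic, 0.0) for topic in post["topics"])

    # Rank posts by predicted engagement and recommend the highest-scoring ones.
    ranked = sorted(posts, key=lambda p: score(p, user_interests), reverse=True)
    for post in ranked:
        print(post["id"], round(score(post, user_interests), 2))

Because the score rewards whatever a user has previously engaged with, recommendations tend to narrow over time; this feedback loop is the mechanism at issue in the case discussed below.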

However, AI algorithms can promote not only content that matches your preferences, but also potentially harmful content. In the United Kingdom, for instance, 14-year-old Molly Russell was found to have consumed countless hours of content on self-harm and suicide on Instagram before she took her own life (the Molly Russell Case). Instagram’s algorithm not only helps users locate specific types of content, but also recommends similar posts to them. At her inquest in October 2022, the coroner ruled that Instagram and other social media platforms contributed to her death.

Social media platforms also rely on facial recognition software to tag people in photos. Related AI techniques underpin “deepfakes”: fabricated images and videos of events that never happened, used to spread fake news, discredit sources, and destroy reputations. The dangers of deepfakes should not be underestimated, as the fabrication of realistic videos or fake news by AI can potentially create a zero-trust society in which people are unable to distinguish truth from lies.

Suggested steps to safeguard social media platforms and the impact of the Content Code 2022

The Code, updated in May 2022, aims to ensure effective self-regulation of the development, production, and dissemination of content, and introduces clearer guidance on, amongst others, menacing content and false content. Under the Code, all suicide-related content must be reported and shared ethically and responsibly, based on best practices and media guidelines; the Code also expressly prohibits the distribution of false content, ie false material or incomplete information that is likely to mislead.

The Molly Russell Case illustrates how social media platforms can be held liable based on how their AI algorithms promote harmful content. The owners of social media platforms utilising AI should be guided by the Code and consider the following steps:

(a) implementing systems to identify reasonably foreseeable risks of harm arising from their platform design and taking proportionate steps to mitigate those risks;

(b) considering age-appropriate content moderation, particularly for children, and possibly restricting content generated by third parties (a simplified sketch of such age-gating follows this list); and

(c) ensuring advertisements do not misrepresent matters likely to influence consumers (eg, where advertisers use fearmongering to compel consumers to purchase certain products/services).
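By way of illustration of step (b), the following simplified Python sketch shows one way age-gating might be expressed. The ratings, age thresholds, and function names are hypothetical assumptions; neither the Code nor any platform prescribes this particular scheme.

    # Hypothetical sketch of age-appropriate content gating; the ratings,
    # thresholds, and structure are illustrative assumptions, not Code requirements.
    MIN_AGE_BY_RATING = {"general": 0, "teen": 13, "mature": 18}

    def visible_posts(posts, user_age):
        """Return only the posts whose age rating the user meets."""
        allowed = []
        for post in posts:
            # Unrated content defaults to the strictest threshold.
            min_age = MIN_AGE_BY_RATING.get(post["rating"], 18)
            if user_age >= min_age:
                allowed.append(post)
        return allowed

    posts = [
        {"id": 1, "rating": "general"},
        {"id": 2, "rating": "mature"},
        {"id": 3, "rating": "teen"},
    ]

    print([p["id"] for p in visible_posts(posts, user_age=14)])  # -> [1, 3]

In practice, the difficult problems are classifying content reliably and verifying a user’s age; the sketch only shows that such platform-design choices are readily implementable.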

Although the Code is mandatory only for selected stakeholders in the content and media industry in Malaysia, social media platforms should note that adherence to the Code can be a defence to legal action under the Code and the Communications and Multimedia Act 1998 (CMA).

Conclusion

While AI has destructive capacity, it can, if properly governed, be harnessed to generate better economic value, productivity, and safety. Left unchecked, the abuse of AI algorithms may attract other liabilities under the law, such as for improper use of network facilities under Section 233 of the CMA. Countries like the UK and Singapore are actively addressing the harm posed to users of social media platforms by proposing stringent legislation that places liability directly on the platforms themselves. In Malaysia, apart from being guided by the Code, social media platform operators are encouraged to seek legal advice so as to avoid setbacks arising from a lack of legal awareness and from the risks of harm that AI algorithms can create.

This article is contributed by Yiew Xiu Ning of Christopher & Lee Ong (www.christopherleeong.com)