PETALING JAYA: As businesses explore the use of generative artificial intelligence (AI) models such as ChatGPT to enhance service and drive innovation, it is crucial to address privacy and security challenges, according to Kevin Shepherdson, CEO and founder of Straits Interactive, an Asean data governance solutions provider.
Shepherdson said that if generative AI technology remains unregulated, it poses significant challenges related to privacy and security as well as ethics, bias and fairness.
“From a privacy and security perspective, we can expect an increase in breaches committed by both organisations that adopt or deploy AI and cybercriminals who use AI in innovative ways, often combining it with existing technologies,” he told SunBiz.
Shepherdson stated that the ease of deploying ChatGPT and interacting with business data through a chatbot will encourage many companies to adopt generative AI for various purposes, such as improving customer service and making recommendations.
“Consequently, privacy leaks may become more common as companies deploy large language models (LLMs) and use their own training data sets, which could potentially include personal data,” he said.
He highlighted the potential risk of AI-powered chatbots being used for social engineering attacks by cybercriminals, as they can be used to create highly convincing messages tailored to individual targets.
“As generative AI technologies advance and become more accessible, the potential for misuse by cybercriminals grows. They can harness the power of AI-powered chatbots like ChatGPT to create highly convincing social engineering attacks, customising them to individual targets and making it more difficult for people to detect such malicious activities,” he warned.
He further explained: “Consider the following scenarios. A social media chatbot from a new startup may leak private information. Or imagine a cybercriminal copying your posts on Facebook and using ChatGPT to learn your writing style. This would enable them to adapt their phishing emails to match your style, making it easier for them to deceive your family members.
“Or a cybercriminal could read your Facebook posts and learn the date and destination of your next holiday. In Australia, this has enabled so-called ‘Hi Mum’ scams, where criminals use messages such as the following to swindle cash: ‘Hi Mum, this is John. I’m in X and my wallet and phone have been stolen. I need cash urgently, so please send [amount] to ...,’” said Shepherdson.
Furthermore, he said generative AI could be used to automate the creation of deepfake content, allowing malicious actors to manipulate images, audio or video to impersonate others and spread false information, potentially leading to a rise in identity theft, reputational damage and financial loss.
He added that generative AI models have other limitations. Users should be aware that ChatGPT is trained only on content up to September 2021, that the quality of its responses depends on the questions asked, and that biases may be present in its outputs.
“Due to the massive amounts of data ChatGPT has been trained on, it may also generalise if users do not provide a specific context, creating inaccurate content or a response too general to be considered useful,” he said.
He said that distortion, the process by which the human mind alters or modifies sensory input to create new interpretations, perspectives or meanings, presents the biggest risk to users.
“Ironically, ChatGPT mimics this unique human filtering process to the extreme – leading to what is now commonly known as ‘hallucination’. In this case, ChatGPT may make up its own facts and confidently proclaim the answers,” he asserted.
During his review of the European Union’s upcoming Artificial Intelligence Act, he asked ChatGPT for help with the analysis. In its response, it fabricated an article reference for the law, including a URL citation that did not exist. He said this is a hard issue to solve, as it stems from how the model is trained on its data set.
“It’s thus important for users to learn to input the correct prompts to elicit more accurate answers. Many users are also unaware of the biases that may be embedded in their own conversational exchanges with ChatGPT,” he said.
He said that current data protection models already incorporate rules or principles that govern how personal data should be collected, used, disclosed, transferred, stored, or disposed of.
These requirements, which also apply to AI systems, are reflected in data protection laws such as the Personal Data Protection Act (PDPA) and the EU’s General Data Protection Regulation (GDPR), both of which are risk-based laws.
“The need to secure or protect personal data is a requirement under the PDPA, the GDPR and other data protection or data privacy laws. However, due to the advent of ChatGPT and generative AI, other obligations like consent, purpose limitation, and especially accuracy should also be applicable,” he said.
Shepherdson stated that even with an AI law in place, much like data protection legislation, it cannot account for every possible scenario. Users will therefore have to rely on AI ethical principles, similar to the way they adhere to data protection principles, to ensure compliance and responsible use of the technology.
“Before sharing any type of data with a generative AI provider, thoroughly review the provider’s privacy policy and terms of use. These documents outline the provider’s data-handling practices, data-retention policies, data-sharing agreements, and other critical aspects of their service.
“Users often overlook key areas in these documents that may have significant implications for their data’s security and privacy,” he stressed.