Are chatbots stealing your personal data?
Source: Daily Mail Online

It's the revolutionary new technology that is transforming the world of work.

Generative artificial intelligence (AI) creates, summarises and stores reams of data and documents in seconds, saving workers valuable time and effort, and companies lots of money.

But as the old saying goes, you don't get something for nothing.

As the uncontrolled and unapproved use of unvetted AI tools such as ChatGPT and Copilot soars, so too does the risk that company secrets or sensitive personal information such as salaries or health records are being unwittingly leaked.

This hidden and largely unreported risk of serious data breaches stems from the default ability of AI models to record and archive chat history, which is used to help train the AI to better respond to questions in the future.

As these conversations become part of the AI's knowledge base, retrieval or deletion of data becomes almost impossible.

'It's like putting flour into bread,' said Ronan Murphy, a tech entrepreneur and AI adviser to the Irish government. 'Once you've done it, it's very hard to take it out.'

This 'machine learning' means that highly sensitive information absorbed by AI could resurface later if prompted by someone with malicious intent.

Experts warn that this silent and emerging threat from so-called 'shadow AI' is as dangerous as the one already posed by scammers like those who recently targeted Marks & Spencer, costing the retailer £300 million.

M&S fell victim to a 'ransomware' attack, in which hackers tricked company insiders into giving away computer passwords and other access codes.

Its chairman, Archie Norman, told MPs last week that the hack was caused by 'sophisticated impersonation' of one of its third-party users.

Four people have been arrested by police investigating the cyber attacks on M&S and fellow retailers Co-op and Harrods.

But cyber criminals are also using confidential data voraciously devoured by chatbots like ChatGPT to hack into vulnerable IT systems.

'If you know how to prompt it, the AI will spill the beans,' Murphy said.

The scale of the problem is alarming. A recent survey found that nearly one in seven of all data security incidents is linked to generative AI.

Another found that almost a quarter of 8,000 firms surveyed worldwide gave their staff unrestricted access to publicly available AI tools.

That puts confidential data such as meeting notes, disciplinary reports and financial records 'at serious risk' and 'could lead employees to inadvertently propagate threats', a report from technology giant Cisco said.

'It's like the invention of the internet - it's just arrived and it's the future - but we don't understand what we are giving to these systems and what's happening behind the scenes at the back end,' said Cisco cyber threat expert Martin Lee.

One of the most high-profile cybersecurity 'own-goals' in recent years was scored by South Korean group Samsung. The consumer electronics giant banned employees from using popular chatbots like ChatGPT after discovering in 2023 that one of its engineers had accidentally pasted secret code and meeting notes onto an AI platform.

Banks have also cracked down on the use of ChatGPT by staff amid concerns about the regulatory risks they face from sharing sensitive financial information. But as organisations put guardrails in place to keep their data secure, they also don't want to miss out on what may be a once-in-a-generation chance to steal a march on their rivals.

'We're seeing companies race ahead with AI implementation as a means of improving productivity and staying one step ahead of competitors,' said Ruben Miessen, co-founder of compliance software group Legalfly, whose clients include banks, insurers and asset managers.

'However, a real risk is that the lack of oversight and any internal framework is leaving client data and sensitive personal information potentially exposed,' he added.

The answer, though, isn't to limit AI usage.

'It's about enabling it responsibly,' Miessen said.

Murphy added: 'You either say no to everything or figure out a plan to do it safely.'

'Protecting sensitive data is not sexy, it's boring and time-consuming,' he said. But unless adequate controls are put in place, 'you make a hacker's job extremely easy'.