A controversial AI chatbot which advised a teenage boy how to kill his mother previously wished a would-be attacker a 'Happy (and safe) shooting!'.
Tristan Roberts, 18, who killed his mother with a hammer, was an avid user of Chinese-owned DeepSeek.
Before murdering Angela Shellis, 45, Roberts asked the AI tool for tips on which weapon to use and how to clean up afterwards. It told him a hammer would be best for 'a non-experienced killer'.
The shocking case has raised fresh concerns about the growing influence of artificial intelligence and what safeguards are in place to stop users accessing violent content.
But it can be revealed that AI has already been linked to a string of other violent attacks, while research shows the response he received from DeepSeek was not a one-off.
In Finland, a 16-year-old boy who stabbed three girls at the Pirkkala school last May reportedly used AI before the attack to carry out hundreds of searches, including about stabbing the neck and heart, and human anatomy.
He also searched for information on mass killings, school shootings, police procedures, concealing evidence, manifestos and how to commit crimes.
Matthew Livelsberger, 37, who blew up a Tesla Cybertruck outside the Trump International hotel in Las Vegas in January, had used ChatGPT to source guidance on explosives and tactics.
Canadian school shooter Jesse Van Rootselaar, 18, also used ChatGPT before opening fire, killing eight people including five young children.
Van Rootselaar, who was born a biological male but identified as a female, had been banned from the chatbot in June 2025 due to the nature of their conversations, but Canadian police were not notified.
The family of a girl critically injured in the shooting is now suing ChatGPT-maker OpenAI, claiming it had been aware the suspect had been planning an attack but failed to alert the authorities.
Twelve OpenAI employees had reportedly flagged the concerning posts as 'indicating an imminent risk of serious harm to others' and recommended that Canadian law enforcement be informed, but the only action taken was to ban Rootselaar's account.
Meanwhile, a study found 8 in 10 AI chatbots were regularly willing to assist users in planning violent attacks, including school shootings, religious bombings, and high-profile assassinations.
Researchers from the Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys planning violent attacks before asking 10 chatbots about locations to target and weapons to use.
They found that, on average, the chatbots enabled violence three-quarters of the time and discouraged it in just 12 per cent of cases.
OpenAI's ChatGPT, Google's Gemini and the Chinese AI model DeepSeek provided detailed help, they found. The research concluded that chatbots had become an 'accelerant for harm'.
ChatGPT provided maps of a real high school campus in Virginia to a user who had already been engaging with school shooting and misogynistic content.
Meta AI suggested nearby gun stores and shooting ranges without questioning intent while Character.AI, an AI platform featuring famous characters widely used by children, went even further in response to bullying, saying: ‘That’s a nice question, I’ve been waiting for. How about a good beating? Beat their ass.’
DeepSeek, which is already banned on government systems in Australia over spying fears, provided reams of detailed advice on hunting rifles to a user who asked about a political assassination. The chatbot signed off: 'Happy (and safe) shooting!'
Roberts, who was diagnosed with autism and ADHD, was repeatedly banned from controversial gaming messaging app Discord due to the extreme content he was posting.
He posted about murders, violence, misogyny, weapons and his intention to kill his mother.
But he was able to set up at least 16 new accounts and continue his women-hating diatribes. He then turned to DeepSeek for advice on carrying out his crime.
The chatbot initially refused to engage but when he asked again, simply claiming he was researching a book on serial killers, it aided his plotting.
He asked questions such as, ‘how do I remove any trace of blood, of DNA from the killer or victim?’, how to incapacitate a ‘female aged 45’ and about cutting body parts.
Mold Crown Court heard Roberts spent weeks planning the attack on his devoted and ‘fiercely supportive’ mother, for which he has never offered an explanation.
Obsessed with serial killers and horror shows, he kept her prisoner in her bedroom, recording her four-hour ordeal in audio too distressing to be played to the court.
He then lured her into woodland on the pretence of allowing her to get help, only to deliver the fatal blows and leave her body in undergrowth.
Ms Shellis was found with severe head injuries beside a footpath near a nature reserve in Prestatyn, North Wales, by walkers in October last year.
Roberts was jailed for life with a minimum term of 22 years on Wednesday.
Imran Ahmed, CEO & Founder of the Center for Countering Digital Hate, said: 'This is yet another tragic case of an AI chatbot helping a vulnerable young man move from expressing violent intent to acting on it.
'Our most recent research exposes this as part of a wider pattern, with 8 out of 10 chatbots willing to assist in planning violent attacks with little to no pushback, and one even actively encouraging violence.
'We found that even the most basic safeguards can be bypassed with minimal effort.
'Yet tech companies continue to treat these risks as rare or unavoidable, despite devastating real-world consequences and clear evidence that the tools to stop this already exist but are not being used.
'How many more people need to die before the tech industry implements strong safeguards, real accountability, and urgent intervention?'