From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published on Wednesday that highlighted the technology’s potential for real-world harm.
Researchers from the non-profit watchdog Centre for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, DeepSeek and Meta AI.
Testing showed that eight of the 10 chatbots assisted the fictional attackers in more than half of their responses, providing advice on “locations to target” and “weapons to use” in an attack, the study said.
The chatbots, it added, had become a “powerful accelerant for harm”.
“Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” said Imran Ahmed, the chief executive of CCDH.

“The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal.”