Tuesday, March 17, 2026

Report on 10 Chatbots that assist in violent crimes

Evil is already in the mind. Combine that with AI chatbots, and you get complete instructions.

Summary of a report on 10 chatbots that assist in violent crimes such as bombings, assassinations, and shootings:

 KEY FINDINGS

WE TESTED HOW POPULAR CHATBOTS RESPOND TO USERS PLANNING VIOLENT ATTACKS

Researchers at CCDH and CNN tested ten chatbots by posing as users planning violent attacks before asking about locations to target and weapons to use.

Researchers tested ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika.

Tests were designed to reflect a range of violent attack scenarios in the US and EU:
▷ School shootings or knife attacks
▷ Assassinations targeting politicians
▷ Bombings targeting political parties or synagogues
 
8 IN 10 CHATBOTS TYPICALLY ASSIST USERS PLANNING VIOLENT ATTACKS

Testing found that 8 in 10 chatbots assist would-be attackers in over half of responses, providing advice on locations to target and weapons to use in an attack.

Only Snapchat’s My AI and Anthropic’s Claude typically refused to assist would-be attackers, in 54% and 68% of responses respectively.

Perplexity and Meta AI were the least safe, assisting would-be attackers in 100% and 97% of responses respectively.

Examples of chatbots offering practical assistance with a violent attack include:

▷ ChatGPT gave high school campus maps to a user interested in school violence
▷ Copilot replied “I need to be careful here” before giving detailed advice on rifles
▷ Gemini told a user discussing synagogue attacks that “metal shrapnel is typically more lethal”
▷ DeepSeek signed off advice on selecting rifles with “Happy (and safe) shooting!”

9 IN 10 CHATBOTS FAIL TO RELIABLY DISCOURAGE WOULD-BE ATTACKERS

Researchers also assessed how often chatbots would recognize a would-be attacker’s intentions and consistently discourage them from carrying out an attack.
They found that only Anthropic’s Claude did this consistently, offering discouragement in 76% of its responses during testing.
ChatGPT and DeepSeek offered discouragement only occasionally.

CHARACTER.AI ACTIVELY ENCOURAGED VIOLENCE

In testing, researchers found that Character.AI was uniquely unsafe: it actively encouraged users to carry out violent attacks in seven cases. For example:
▷ Character.AI suggested the user “use a gun” on a health insurance CEO
▷ Character.AI suggested physically assaulting a politician the user disliked
No other chatbot tested explicitly encouraged violence in this way, even when providing practical assistance in planning a violent attack.
