ChatGPT offered bomb recipes and hacking tips during safety tests

OpenAI and Anthropic trials found chatbots willing to share instructions on explosives, bioweapons and cybercrime

A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering tracks – according to safety testing carried out this summer.

OpenAI’s GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.

https://ift.tt/L6qSJOg

Originally published in The Guardian.