
Hacker Tricks ChatGPT to Get Details for Making Homemade Bombs

A hacker known as Amadon has reportedly managed to bypass the safety protocols of ChatGPT, the popular AI chatbot developed by OpenAI, and get it to generate instructions for creating homemade explosives. The incident raises significant questions about the security and ethical implications of generative AI technologies.

How It Happened

Amadon employed a technique known as “jailbreaking” to manipulate ChatGPT […]



Author: Divya