Hackers and evildoers are using adversarial poetry to jailbreak AI. The trick involves writing poems as prompts. AI ...
Poetry-based prompts can bypass safety features in AI models like ChatGPT to obtain instructions for creating malware or chemical and nuclear weapons, a new study finds. Generative AI makers such as ...
You can get ChatGPT to help you build a nuclear bomb if you simply design the prompt in the form of a poem, according to a new study from researchers in Europe. The study, "Adversarial Poetry as a ...