Microsoft is warning users of a newly discovered AI jailbreak attack that can cause a generative AI model to ignore its guardrails and return malicious or unsanctioned responses to user prompts. The ...
Popular AI models like OpenAI's GPT and Google's Gemini are liable to forget their built-in safety training when fed malicious prompts using the "Skeleton Key" method. As Microsoft detailed in a blog ...