DeepSeek R1, the AI model generating so much buzz right now, has been found to contain several vulnerabilities that allowed security researchers at the cyber threat intelligence firm KELA to jailbreak it.
KELA tested jailbreak techniques built around known vulnerabilities and bypassed the chatbot's restriction mechanisms. This allowed the researchers to jailbreak the model across a wide range of scenarios, getting it to generate malicious outputs such as ransomware code, fabricated sensitive content, and detailed instructions for creating toxins and explosive devices.
For instance, the “Evil Jailbreak” method, which prompts the AI model to adopt an “evil” persona, still works on DeepSeek, even though it was able to trick only early versions of ChatGPT and was fixed there long ago.
The news comes as DeepSeek investigates a cyberattack. The company has temporarily stopped allowing new registrations, and some users have also reported being unable to log in with their Google accounts.
“Due to large-scale malicious attacks on DeepSeek’s services, we are temporarily limiting registrations to ensure continued service. Existing users can log in as usual,” DeepSeek’s status page reads.
While the company has not confirmed what kind of cyberattack disrupted its service, it appears to be a DDoS attack.
DeepSeek has yet to comment on the vulnerabilities.