News

Not a very smart home: crims could hijack smart-home boiler, open and close powered windows and more. Now fixed ...
Researchers from Zenity have found multiple ways to inject rogue prompts into agents from mainstream vendors to extract ...
A prompt injection attack using calendar invites can be used to produce real-world effects, like turning off lights, opening windows, and more.
Security researchers found a weakness in OpenAI’s Connectors, which let you hook up ChatGPT to other services, that allowed sensitive data to be extracted from a connected account with a single "poisoned" document.
A vulnerability that researchers call CurXecute is present in almost all versions of the AI-powered code editor Cursor, and it can be exploited for remote code execution; the flaw was fixed in version 1.3.
Critical flaw in Cursor AI editor let attackers execute remote code via Slack and GitHub; fixed in the v1.3 update.
Critical flaw in new tool could allow attackers to steal data at will from developers working with untrusted repositories.
For what is likely the first time, security researchers have shown how AI can be hacked to create real-world havoc, allowing them to turn off lights, open smart shutters, and more.
A new theoretical attack described by researchers with LayerX lays out how frighteningly simple it would be for a malicious or compromised browser extension to intercept user chats with LLMs and ...
By infecting a calendar invite with instructions for Google's Gemini AI, hackers were able to take over a stranger's smart home.
Researchers bypass GPT-5 guardrails using narrative jailbreaks, exposing AI agents to zero-click data theft risks.
OpenAI's ChatGPT can easily be coaxed into leaking your personal data — with just a single "poisoned" document. As Wired ...