Three AI coding agents leaked secrets through a single prompt injection. One vendor's system card predicted it
A security researcher, working with colleagues at Johns Hopkins University , opened a GitHub pull request, typed a malicious instruction into the PR title, and watched Anthropic’s Claude Code Secur...
Source: venturebeat.com
A security researcher, working with colleagues at Johns Hopkins University , opened a GitHub pull request, typed a malicious instruction into the PR title, and watched Anthropic’s Claude Code Security Review action post its own API key as a comment . The same prompt injection worked on Google’s Gemini CLI Action and GitHub’s Copilot Agent (Microsoft). No external infrastructure required. Aonan Guan, the researcher who discovered the vulnerability, alongside Johns Hopkins colleagues Zhengyu Liu a