Show HN: LLM In-Browser Fuzzer Finds Hidden Prompt Injection in AI Browsers

We built an in-browser, LLM-guided fuzzer to automatically discover hidden prompt injection vulnerabilities in AI-powered browser assistants (often called agentic AI browsers). These are browser-based AI agents that can read and interact with web pages on a user's behalf (e.g. summarizing pages or clicking links). The problem is that malicious instructions can be embedded in a webpage's content (even invisibly) and trick the agent into doing unintended actions. For example, a recent exploit in Perplexity’s AI Browser Comet showed that hidden prompts in a Reddit post could make the assistant exfiltrate the user’s private data and perform unauthorized actions across other sites. Such attacks bypass traditional web security boundaries like same-origin policy, because the AI agent has the user’s privileges on all sites – an attacker could potentially read emails, steal auth tokens, or click dangerous links without needing any browser bug. The AI simply obeys the hidden instructions as if they were the user’s, which is a serious new threat. To systematically uncover these vulnerabilities, we developed a fuzzing framework that runs entirely inside a real browser. Each test case is an actual webpage (loaded in an isolated tab) so the agent perceives it just like a normal user-opened page, with full DOM and content. An LLM (like GPT-4) is used to generate diverse malicious page contents – starting from some known prompt injection patterns and then mutating them or creating new variants. The browser is instrumented to detect when the AI agent misbehaves (e.g. clicks a hidden phishing link or follows a concealed instruction), and this real-time feedback is fed back into the fuzzer to guide the next round of attacks. In essence, the LLM fuzzer acts as an adaptive adversary: after each failed attempt it “learns” and evolves more sophisticated prompt injections to try on the next iteration. This closed-loop approach gives high-fidelity results and virtually zero false positives, since we only count an attack as successful if the agent actually performs an unwanted action in the browser. By doing all of this within a live browser environment, we can observe the agent under realistic conditions and quickly hone in on exploits that truly work in practice.

browsertotal.com

3 points

minche

a day ago


0 comments