These are the malicious commits in question:
https://github.com/aws/aws-toolkit-vscode/commit/678851b
https://github.com/aws/aws-toolkit-vscode/commit/1294b38
These commits were made using an "inappropriately scoped GitHub token" taken from build config files:
https://aws.amazon.com/security/security-bulletins/AWS-2025-...
> The incident points to a gaping security hole in generative AI that has gone largely unnoticed [...] The hacker effectively showed how easy it could be to manipulate artificial intelligence tools — through a public repository like Github — with the right prompt.
Use of an LLM seems mostly incidental here, and not the source of any security hole in this case (at least not as far as we know; it may be that vibe coding produced the incorrectly scoped token). An attacker with write access to the repo could just as easily have made the extension run `rm -rf /` directly, no prompt required.