TeamPCP Hackers Focus on AI Developers, Planting Malicious Code to Disrupt Projects
BLOG SUMMARY
A sophisticated threat actor group called TeamPCP has executed one of the most damaging supply chain attacks targeting the AI development community. By first compromising Trivy, a popular open-source vulnerability scanner, they obtained credentials that allowed them to inject malicious code into LiteLLM — a widely used AI gateway framework — reaching an estimated 95 million developers worldwide.
The FBI Cyber Division has issued a formal alert. This blog breaks down how the two-phase attack unfolded, how TeamPCP leveraged AI tools like Anthropic’s Claude to accelerate their operation, what organizations must do to protect their AI development pipelines, and how Technijian can help you strengthen your defenses.
When threat actors choose their targets carefully, the damage they cause can ripple far beyond a single organization. That is precisely what happened when the group known as TeamPCP set its sights on the AI developer ecosystem — not just attacking one tool, but engineering a cascading failure across the open-source supply chain that serves millions of developers globally.
On March 27, 2026, the FBI Cyber Division released a formal security advisory confirming that TeamPCP had successfully infiltrated two of the most widely used developer tools in the artificial intelligence space. The breach did not rely on a zero-day exploit or a sophisticated network intrusion. Instead, it exploited something far more common and correctable: poor credential management and misplaced trust in third-party software.
For businesses building AI-powered applications, this incident is a clear warning. The tools your development team trusts every day may be your greatest vulnerability.
Understanding the Two-Phase Supply Chain Attack
What made the TeamPCP operation especially damaging was its structure. Rather than attacking a single target, the group designed a two-phase intrusion that used the first compromise as a direct stepping stone into the second. Each phase was calculated, and each depended on the widespread trust developers place in open-source tooling.
Phase One: Compromising Trivy, the Security Scanner
The attack began with Trivy, an open-source vulnerability scanner maintained by Aqua Security. Widely adopted by DevSecOps teams for its ability to scan container images, file systems, and code repositories, Trivy is considered a trusted part of the modern development workflow. That trust became a liability.
TeamPCP deployed an automated agent that interacted with the Trivy toolchain in a way that caused the scanner to inadvertently expose its own GitHub authentication credentials. Once the attackers held those GitHub keys, they had write access to the Trivy public repository and used it to publish a corrupted version of the tool. Aqua Security confirmed that only the community open-source version was affected; their commercial platform was not exposed.
Key Takeaway: Automated tooling in CI/CD pipelines can expose credentials if they are not stored securely. Static secrets in environment variables or accessible config files represent a significant and often underestimated attack surface.
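To make the exposure concrete, here is a minimal defensive sketch that sweeps a process environment for token-shaped values. The `ghp_`/`gho_`-style prefixes are GitHub's documented classic-token formats; the generic pattern and the demo variable names are illustrative assumptions. Any process in the same pipeline, including a compromised scanner, can read these values just as easily.

```python
import os
import re

# Patterns for common credential formats. The GitHub prefixes (ghp_, gho_,
# ghu_, ghs_, ghr_) are documented; the generic pattern is a loose heuristic.
TOKEN_PATTERNS = {
    "github_token": re.compile(r"\bgh[pousr]_[A-Za-z0-9]{20,}\b"),
    "generic_secret": re.compile(r"\b[A-Za-z0-9_\-]{32,}\b"),
}

def scan_environment(environ):
    """Return (variable name, pattern name) pairs for values that look like secrets."""
    findings = []
    for name, value in environ.items():
        for label, pattern in TOKEN_PATTERNS.items():
            if pattern.search(value):
                findings.append((name, label))
                break
    return findings

# Demo with a fake token (hypothetical value, not a real credential).
demo_env = {"PATH": "/usr/bin", "GH_TOKEN": "ghp_" + "A1b2C3d4" * 4}
print(scan_environment(demo_env))  # → [('GH_TOKEN', 'github_token')]
```

Running the same sweep over `os.environ` in your own CI jobs is a quick way to see which secrets would be visible to every tool in the pipeline.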
Phase Two: Infiltrating LiteLLM and Reaching 95 Million Developers
The second phase targeted LiteLLM, an open-source AI gateway that allows developers to connect their applications to large language models including GPT-5, Anthropic’s Claude, Google Gemini, and others through a unified API interface. Because LiteLLM’s own development pipeline relied on the already-compromised version of Trivy, the malicious code embedded in Trivy was able to extract the publishing credentials for the LiteLLM package repository.
With those credentials in hand, TeamPCP pushed a backdoored release of LiteLLM to its public distribution channel. The infected version was automatically downloaded and installed by developers who had configured their systems to accept the latest stable release. The breach was only discovered when the malicious payload caused widespread system crashes. The total exposure reached approximately 95 million developers before the package was pulled.
How TeamPCP Used AI to Accelerate the Attack
One of the most striking aspects of this incident is that TeamPCP did not just attack AI development tools — they actively used AI to carry out the attack itself. A representative of the group confirmed that Anthropic’s Claude was used to write specific components of the malware deployed during the campaign.
How AI Was Weaponized in This Attack
- Malware scripting: The group used Claude to generate lateral movement scripts — programs designed to help the malware spread across infected network environments after initial deployment, automating what would otherwise require skilled manual effort.
- Credential harvesting: Claude was used to accelerate the writing of scripts that systematically extracted GitHub keys and package-publishing credentials from environments exposed by the Trivy compromise.
- Operational velocity: By offloading scripting to an AI assistant, TeamPCP compressed their operation’s timeline significantly, moving from initial compromise to mass distribution faster than traditional attack timelines would allow.
The Attack Flow: Step by Step
For security teams seeking to understand the chain of events and identify potential detection opportunities, here is how the intrusion progressed from start to finish:
- Step 1 — Initial Reconnaissance: TeamPCP identified Trivy as a widely trusted tool integrated into thousands of CI/CD pipelines, making it a high-leverage entry point into the AI supply chain.
- Step 2 — Automated Credential Theft: The group deployed an automated agent that manipulated Trivy’s execution environment, causing it to leak its own GitHub authentication keys.
- Step 3 — Malicious Trivy Release: Using the stolen GitHub credentials, attackers pushed a backdoored version of Trivy to the public repository, where it was distributed automatically to active users and downstream environments.
- Step 4 — LiteLLM Credential Extraction: The infected Trivy instance running within LiteLLM’s development environment extracted the LiteLLM publishing keys and sent them to attacker-controlled infrastructure.
- Step 5 — Mass Distribution: TeamPCP used the stolen publishing credentials to release a malware-laced version of LiteLLM. Developers who auto-updated received the infected package.
- Step 6 — Crash Discovery and Response: The malicious payload caused system crashes across affected environments. LiteLLM engaged Google Mandiant to contain and remediate the incident.
TeamPCP’s Business Model: Access Brokering and Extortion
TeamPCP operates as an initial access broker — a category of threat actor that specializes in breaking into systems and monetizing that access in multiple ways. Their two primary revenue streams are the sale of network access to ransomware operators and direct extortion of victims. They identify the most valuable footholds, auction them to ransomware groups, or approach organizations directly demanding payment.
For AI-focused companies, this threat model is particularly concerning. Access to an AI model’s training pipeline, proprietary datasets, or production inference environment could carry significant competitive and regulatory consequences. TeamPCP’s attack demonstrates that AI infrastructure is now firmly in scope for sophisticated, financially motivated threat actors.
Why AI Development Pipelines Are an Attractive Target
The Problem of Unverified Trust in Open-Source Tools
Modern software development depends heavily on open-source packages and tools. Teams routinely integrate dozens or hundreds of third-party libraries into their projects, and the pace of development means formal security reviews rarely keep up. Developers frequently update dependencies automatically, trusting that the package registry has not been compromised — an assumption the TeamPCP attack has now definitively challenged.
Secrets Management Failures Are Pervasive
The attack worked because authentication credentials — GitHub tokens and package publishing keys — were accessible within automated pipeline environments. Organizations routinely store API keys and signing certificates in ways that make them accessible to the tooling that needs them, often without adequate isolation or monitoring. A single compromised tool can harvest a wide range of credentials and rapidly expand an attacker’s foothold.
The AI Supply Chain Is Still Maturing
Compared to traditional software development, the AI ecosystem is newer, faster-moving, and has not yet developed the same culture of rigorous security review. Frameworks like LiteLLM are often adopted quickly because they solve immediate, pressing needs — connecting to multiple LLM providers through a unified interface is genuinely valuable. Security considerations sometimes follow adoption rather than preceding it.
Immediate Steps Organizations Should Take
- Audit your open-source dependencies. Conduct a full inventory of every third-party library and tool used across your development pipelines. Pay particular attention to tools with elevated permissions. Flag all recent updates and verify their integrity against known-good checksums.
- Implement secrets management best practices. No authentication credential should be stored in plain text within a pipeline environment. Use dedicated solutions such as HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault. Rotate credentials regularly and implement alerts for unexpected credential access.
- Restrict package registry permissions. Publishing credentials for npm, PyPI, Docker Hub, and similar registries should be tightly controlled, scoped to minimum necessary access, and protected by hardware security keys or dedicated CI/CD service accounts.
- Apply zero-trust principles to your supply chain. Treat every third-party update as potentially hostile until verified. Implement automated integrity checks, review changelogs before applying updates to production, and pin critical dependencies to specific verified versions.
- Monitor for anomalous behavior in build environments. Invest in behavioral monitoring within CI/CD pipelines. Unusual outbound network connections, credential access outside normal patterns, or unexpected process spawns should generate immediate alerts.
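The checksum verification called for in the first step above can be as simple as streaming an artifact through SHA-256 and comparing the digest against the published value. A minimal sketch, with a hypothetical artifact name standing in for a real release download:

```python
import hashlib
import hmac
from pathlib import Path

def sha256_of(path, chunk_size=65536):
    """Stream a file through SHA-256 so large artifacts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path, expected_hex):
    """Compare against the known-good digest with a timing-safe comparison."""
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())

# Demo: write a small artifact and verify it against its own digest.
artifact = Path("trivy_demo.tar.gz")  # hypothetical file name
artifact.write_bytes(b"demo release contents")
good = sha256_of(artifact)
print(verify_artifact(artifact, good))      # matches the known-good digest
print(verify_artifact(artifact, "0" * 64))  # tampered or unknown digest
```

In practice the expected digest should come from a source independent of the download channel, such as a maintainer's signed release notes, so an attacker who controls the repository cannot also rewrite the checksum.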
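Dependency pinning can also be enforced mechanically. The sketch below flags requirements lines that float rather than pinning to an exact version; it is a simplified illustration, not a full PEP 508 parser, and the package names are hypothetical:

```python
import re

# A dependency counts as pinned only if it uses '==' with an exact version.
PINNED = re.compile(r"^\s*[A-Za-z0-9_.\-\[\]]+\s*==\s*[\w.\-+!]+")

def unpinned_requirements(lines):
    """Return requirement lines that float (>=, ~=, bare names, etc.)."""
    flagged = []
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith(("#", "-")):
            continue  # skip comments and pip options
        if not PINNED.match(stripped):
            flagged.append(stripped)
    return flagged

requirements = [
    "litellm==1.40.2        # pinned to a reviewed release",
    "trivy-helper>=0.3      # floats: would pick up a malicious update",
    "requests",
]
print(unpinned_requirements(requirements))
```

A check like this can run as a CI gate so that a floating dependency fails the build before it ever reaches production.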
Frequently Asked Questions (FAQ)
What is TeamPCP and why are they targeting AI developers?
TeamPCP is a financially motivated cybercriminal group operating as an initial access broker. They have shifted focus toward AI development infrastructure because it represents high-value targets: proprietary models, large datasets, and widely used open-source tools whose compromise can amplify into millions of affected systems.
Were commercial users of Trivy or LiteLLM affected?
Aqua Security confirmed that only the open-source community edition of Trivy was compromised; their commercial platform was not affected. For LiteLLM, users of the public open-source distribution who received the malicious update were exposed. Organizations should verify which version they are running against the affected release window identified by Google Mandiant.
How did the attackers use Anthropic’s Claude in the attack?
A TeamPCP representative confirmed they used Claude to write specific malware components including lateral movement scripts and credential harvesting tools, allowing them to accelerate their development timeline. The claim suggests attackers found ways to work around Claude’s safeguards, either through careful prompt engineering or by using Claude for components that appeared benign in isolation.
How can I tell if my systems were infected by the malicious LiteLLM update?
The most immediate indicator was unexpected system crashes following an update. Beyond that, look for unauthorized outbound network connections from build or inference environments, unexplained credential rotation events, and anomalous access to secrets management systems. Cross-reference the indicators of compromise published by Google Mandiant against your log data.
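Cross-referencing published IOCs against logs can start as a straightforward sweep. A minimal sketch, where the indicator values and log lines are invented placeholders rather than real Mandiant-published indicators:

```python
# Hypothetical IOC list -- substitute the indicators published for this incident.
IOCS = {
    "domains": {"updates.example-c2.net"},
    "file_hashes": {"d41d8cd98f00b204e9800998ecf8427e"},
}

def match_iocs(log_lines, iocs):
    """Return (line number, indicator) pairs where a known IOC appears in a log line."""
    indicators = set().union(*iocs.values())
    hits = []
    for lineno, line in enumerate(log_lines, start=1):
        for indicator in indicators:
            if indicator in line:
                hits.append((lineno, indicator))
    return hits

logs = [
    "GET https://pypi.org/simple/litellm/ 200",
    "POST https://updates.example-c2.net/beacon 200",  # simulated exfil beacon
]
print(match_iocs(logs, IOCS))  # → [(2, 'updates.example-c2.net')]
```

Real investigations should feed the full IOC set into a SIEM or log aggregation platform, but even a simple sweep like this over build-server logs can surface an obvious beacon quickly.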
What is a supply chain attack and why is it so dangerous?
A supply chain attack targets the software and tools organizations rely on rather than attacking them directly. By compromising an upstream provider, attackers reach enormous numbers of victims through a single breach. The danger lies in the implicit trust developers place in tools already integrated into their workflow — once a trusted tool is compromised, malicious updates are often accepted without additional scrutiny.
What is an initial access broker and how does TeamPCP monetize attacks?
An initial access broker specializes in gaining unauthorized entry into systems and selling that access to other criminal groups. TeamPCP either sells compromised network access to ransomware operators who then deploy their own malware, or directly extorts victims by threatening to publish stolen data or cause further disruption.
Is it safe to continue using open-source AI tools after this attack?
Open-source AI tools remain essential and avoiding them entirely is neither practical nor necessary. The answer is to use them with appropriate verification in place: pin dependencies to specific verified versions, implement integrity checks before deploying updates, conduct internal code audits for tools with elevated permissions, and monitor build environments for anomalous behavior.
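One concrete way to combine pinning and integrity checking in a Python pipeline is pip's hash-checking mode, where every dependency carries an expected digest and the install aborts on any mismatch. This is a configuration sketch: the version number and digest below are placeholders, not real values for any LiteLLM release.

```shell
# requirements.txt -- every dependency pinned to an exact version and digest
# (placeholder version and hash shown):
cat > requirements.txt <<'EOF'
litellm==1.40.2 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
EOF

# Hash-checking mode: pip refuses to install if any downloaded artifact's
# digest differs from the pinned value.
pip install --require-hashes -r requirements.txt
```

With this in place, a malicious release pushed under a legitimate version number would fail the digest comparison instead of installing silently.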
What role did Google Mandiant play in the response?
LiteLLM engaged Google Mandiant — one of the world’s leading incident response firms — to investigate the scope of the breach, identify the specific malicious code injected, secure their publishing infrastructure, and develop a remediation plan. Mandiant’s work also provides the broader security community with detailed indicators of compromise.
How Technijian Can Help
The TeamPCP attack is a defining moment for AI pipeline security — and it will not be the last of its kind. At Technijian, we specialize in helping technology-driven organizations build resilient, security-first infrastructure. Here is how we can help you respond to threats like TeamPCP and strengthen your defenses before the next attack arrives:
- Supply chain security assessments: We audit your third-party dependencies, build pipelines, and package management practices to identify exposure points before attackers find them.
- Secrets management implementation: We design and deploy enterprise-grade credential vaulting solutions that remove static secrets from your pipelines and implement automated rotation policies.
- AI infrastructure security reviews: We conduct targeted security reviews of AI development environments, including model gateways, LLM API integrations, and training pipelines, applying standards that match the sensitivity of your data and models.
- CI/CD pipeline hardening: We implement behavioral monitoring, integrity verification, and least-privilege access controls across your continuous integration and delivery infrastructure.
- Incident response readiness: We help your team build and rehearse response playbooks so that if a supply chain compromise occurs, you can detect it fast, contain it quickly, and communicate clearly.
- Ongoing threat intelligence: We keep your security team informed about emerging threat actor tactics — including AI-assisted attacks — so your defenses evolve as fast as the threats do.