CybersecurityHQ Weekly Brief

In partnership with:

Smallstep: Hardware is the new MFA. Start with Wi-Fi, then extend device identity with ACME DA across apps and infrastructure. Only trusted devices get access.

LockThreat: AI-powered GRC that replaces legacy tools and unifies compliance, risk, audit, and vendor management in one platform.

The record behind this brief is public, inspectable, and citable.

Structural Pressure Observation
Pressure Class: The Identity System That Isn't Modeled
28 April 2026

Last week's brief documented four simultaneous failures in the remediation substrate. The catalog narrowed. The endpoint agent became the privilege escalation. The management interface ran a thirteen-year-old execution path. The AI assistant treated attacker-writable content as instruction. The loop organizations use to close the gap between discovery and fix was itself under pressure at each stage.

This week the pressure migrated into an adjacent layer. Not the remediation substrate, but the automation layer that organizations have built on top of it. Systems that inject credentials into pipelines, serve AI inference workloads, and manage software delivery are being exploited for the authority they carry as a byproduct of doing their job. That authority was never their primary function. It is the target.

The Password Manager That Manages Production

On April 22, a malicious build of the Bitwarden CLI was available on the npm registry for roughly ninety minutes, between 5:57 and 7:30 PM Eastern time. The compromised @bitwarden/cli package was linked to the ongoing TeamPCP supply chain campaign and the Shai-Hulud worm, now in its third confirmed wave. The initial access vector was a compromised Checkmarx GitHub Action that Bitwarden's CI/CD pipeline consumed as a dependency.

The malicious payload targeted what the CLI is actually trusted with in enterprise environments: AWS, Azure, and GCP credentials, GitHub and npm tokens, SSH keys, environment variables, shell history, and AI tooling configuration including MCP-related files. Everything the CI runner held. The payload then used harvested GitHub tokens to create public repositories in victims' namespaces, exfiltrating the collected secrets in encrypted form before the malicious package was detected.

The ninety-minute window is not the point. What matters is what the Bitwarden CLI is inside a modern build pipeline.

Inside a pipeline, the CLI is a credential injection mechanism. Teams wire it into CI/CD systems specifically to give those pipelines programmatic access to a shared vault. The CLI is trusted with production credentials because that is its function. When the package is compromised, the pipeline runs normally. Tests pass. Builds succeed. The exfiltration happens in the background through the preinstall hook, before any application code executes.
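What "everything the CI runner held" means in practice can be sketched in a few lines. This is an illustrative audit helper written for this brief, not code from the payload: run on a build runner, it lists how much credential material a single read of the process environment reaches, which is exactly the surface an install hook inherits before any application code executes.

```python
import os
import re

# Illustrative name-pattern audit; the pattern and helper are this brief's
# sketch, not tooling from the incident. A malicious preinstall hook reads
# the same surface: every variable the CI job exported.
SECRET_NAME_PATTERN = re.compile(
    r"TOKEN|SECRET|PASSWORD|API[_-]?KEY|ACCESS[_-]?KEY|PRIVATE[_-]?KEY",
    re.IGNORECASE,
)

def exposed_credentials(environ=None):
    """Return, sorted, the environment variable names that look like credentials."""
    environ = os.environ if environ is None else environ
    return sorted(name for name in environ if SECRET_NAME_PATTERN.search(name))
```

Running it inside your own CI jobs is a cheap way to see the blast radius a compromised dependency would have had there.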

If an execution environment can act with credentials, it is an identity system whether you model it as one or not.

The organization's automation did not fail. It ran exactly as designed, including the part that stole everything.

The AI Node That Scans Your Network

On April 21, a server-side request forgery vulnerability in LMDeploy, CVE-2026-33626, was published to GitHub. LMDeploy is an open-source toolkit for deploying and serving large language models, including vision-language models. At 3:35 AM UTC on April 22, Sysdig's honeypot infrastructure recorded the first confirmed exploitation attempt, 12 hours and 31 minutes after publication.

The vulnerability exists in the vision-language image loader. The load_image() function accepts URLs from user requests without validating whether those URLs resolve to internal or private network addresses. An attacker who can submit image URLs to any LMDeploy endpoint processing vision-language requests can direct the server to make outbound connections on their behalf.
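A minimal sketch of the missing guard, assuming the advisory's description holds (user-supplied URLs fetched without address validation). The function name is illustrative; LMDeploy's actual load_image() code is not reproduced here, and redirect handling and DNS rebinding are deliberately out of scope.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_unsafe_image_url(url: str) -> bool:
    """Reject URLs whose host resolves to a private, loopback, link-local,
    or reserved address -- the class of targets hit in the observed attack.
    A production check must also re-validate after every redirect."""
    host = urlparse(url).hostname
    if host is None:
        return True  # unparseable URL: fail closed
    try:
        addrs = [ipaddress.ip_address(host)]  # IP literal
    except ValueError:
        try:
            infos = socket.getaddrinfo(host, None)  # resolve hostname
            addrs = [ipaddress.ip_address(info[4][0]) for info in infos]
        except (socket.gaierror, ValueError):
            return True  # unresolvable: fail closed
    return any(
        a.is_private or a.is_loopback or a.is_link_local or a.is_reserved
        for a in addrs
    )
```

Note that blocking link-local addresses also covers 169.254.169.254, the cloud metadata endpoint that mattered most in the observed session.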

The attacker observed in the wild did not simply confirm the vulnerability and stop. In a single eight-minute session, they used the image loader as a generic HTTP scanning primitive, probing the AWS Instance Metadata Service, Redis, MySQL, a secondary HTTP administrative interface, and an out-of-band DNS exfiltration endpoint. A scripted port sweep of the loopback interface completed in 36 seconds.

The SSRF class is not what matters here. What sits on the other side of it is.

GPU inference nodes typically run with broad IAM roles. They need access to S3 buckets for model artifacts and training data, and in many deployments they carry cross-account assume-role permissions. One successful request to the AWS IMDS endpoint does not just return instance metadata. It returns temporary credentials with whatever IAM permissions the instance role carries.
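The mechanics are two plain HTTP GETs against IMDSv1, sketched below as URL construction only; no requests are made, and the role name shown is hypothetical (it is whatever the first response lists). IMDSv2 would additionally require a PUT to /latest/api/token first, which is why enforcing IMDSv2 with a metadata hop limit of 1 is the standard mitigation.

```python
# Documented AWS IMDSv1 path for instance role credentials; no network calls here.
IMDS_ROLE_LIST = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def imds_credential_requests(role_name: str) -> list[str]:
    """The two GETs an SSRF needs: enumerate the attached role names, then
    fetch that role's temporary AccessKeyId/SecretAccessKey/Token document."""
    return [IMDS_ROLE_LIST, IMDS_ROLE_LIST + role_name]
```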

The AI inference node does not appear on most organizations' access reviews. Its IAM role is sized for the model artifacts it needs, not for the credential blast radius it creates. And that role is reachable through any SSRF-class vulnerability in the inference layer, exploitable within hours of disclosure.

The same principle applies as with the Bitwarden CLI. The inference node can act with credentials. Excluding it from the access review did not make it safer. It made it invisible.

The Campaign That Does Not Stop

Both the Bitwarden and LMDeploy incidents fall within the same week. The Bitwarden compromise connects directly to TeamPCP, the same threat actor behind the Trivy scanner compromise documented in the March 31 brief, the Checkmarx KICS and LiteLLM incidents documented in March, and the European Commission breach documented in the April 7 brief.

That campaign has now demonstrated operational reach across open-source vulnerability scanners, application security tooling, LLM frameworks, password manager distribution infrastructure, and EU government cloud environments. The connecting thread is not a sector. It is a class of target: developer tooling with privileged access to downstream environments.

TeamPCP is not running a campaign. It is operating a credential supply chain economy.

The FBI assessed in late March that 300 gigabytes of stolen credentials were being processed for extortion. Mandiant projected over 5,000 SaaS environments compromised. The Bitwarden incident adds CI/CD credential stores from an unknown number of organizations that installed the malicious package during the ninety-minute exposure window. Each compromised tool in the chain is not an endpoint. It is a collection node feeding a processing pipeline that converts developer credentials into production access, at scale, on a timeline the affected organizations have not yet mapped.

Organizations that have not confirmed whether their build pipelines pulled any package version in the TeamPCP chain since March are not managing residual risk. They are operating with an unresolved exposure that connects to active ransomware deployment infrastructure.

The Common Condition

These events do not share an attacker (LMDeploy was not attributed to TeamPCP) and do not share a mechanism. What they share is the structural condition underneath.

Modern organizations have distributed credential authority across their automation stack. CI/CD pipelines hold cloud credentials because deploying to production requires them. AI inference nodes carry IAM roles originally scoped for model artifact access that also happen to reach the broader cloud account. Secrets managers inject production credentials at build time, making the build runner a live credential holder for the duration of every job. None of these systems was designed as an identity system. None is treated as one.

That is the gap.

Most privileged access programs were built around a mental model that predates the current automation stack. Ask a security team which systems hold production credentials and they will list IAM consoles, secrets managers, and identity providers. Ask whether the CI/CD runner is on that list, and the honest answer, more often than it should be, is no.

Any system that holds credentials and can act on them is a privileged identity in practice. The question of whether it is inventoried, reviewed, or even recognized as such is a governance question, not a technical one. The automation layer has grown into a distributed identity infrastructure. Most access programs have not caught up.

When the build pipeline runs with the Bitwarden CLI, it runs with the authority of every secret in that vault. When the inference node processes a vision-language request, it runs with the authority of its IAM role. When either is compromised, the attacker does not need to compromise the vault or the IAM console. They need only the execution context.

The authority was already there. It was delegated to the automation layer, and the automation layer was not treated as a control surface worth protecting at the level appropriate to the authority it holds.

What This Breaks

A credential inventory that excludes execution environments is not a partial inventory. It is a different thing entirely, one that does not measure what it claims to measure. The runner's access is effective access. If it is not on the inventory, the inventory is not governing it.

IAM role sizing for AI infrastructure has not kept pace with the SSRF attack surface that AI inference deployments introduce. A role sized for model artifact access grants cloud account access to any attacker who can exploit the inference layer. The disclosure-to-exploitation window for AI infrastructure vulnerabilities, documented at under thirteen hours in this case, is shorter than the patch deployment cycle in most environments. Taken together, these two facts mean AI infrastructure is operating with privileged access that patching alone cannot protect.
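For contrast, a role sized for what inference actually needs might look like the sketch below. The bucket name is illustrative, not drawn from any advisory. The point is what the policy omits: no iam:* actions, no sts:AssumeRole, no wildcard s3 actions, so a credential stolen via IMDS reaches model artifacts, not the account. Pair it with enforced IMDSv2 and a metadata hop limit of 1.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ModelArtifactsReadOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-model-artifacts",
        "arn:aws:s3:::example-model-artifacts/*"
      ]
    }
  ]
}
```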

The TeamPCP campaign is not a supply chain incident. It is a credential accumulation operation with confirmed ransomware and extortion downstream functions. Organizations modeling it as a dependency hygiene problem are applying a remediation framework to a threat that has already moved past the exploit phase. The relevant question is not whether affected packages have been removed. It is what those packages accessed before they were removed.

Three Questions

Most organizations cannot answer whether their CI/CD runners are part of their privileged access inventory. If yours can, what is the scope of credentials those runners can access, and when was that scope last reviewed?

Which AI inference deployments in your environment run with IAM roles that include access to resources beyond what inference requires? The blast radius of a compromised inference node is defined by the role, not by the workload. If you do not know the answer, the role is ungoverned by definition.

If your build pipeline pulled any package in the TeamPCP campaign chain since March 2026, the question is not whether you have rotated credentials. It is what those credentials accessed in the interval between installation and rotation.

An automation layer built for speed holds authority sized for production. A tooling compromise campaign running since March continues to accumulate credentials from that layer. An AI inference node exploited within thirteen hours of disclosure reveals IAM authority no one assessed when the node was deployed.

Execution environments are already part of your identity system. The gap is not conceptual. It is operational, and it is being exploited.
