
CybersecurityHQ Weekly Brief
The record behind this brief is public, inspectable, and citable.
Structural Pressure Observation
Pressure Class: Continuity Without Correctness
5 May 2026
Last week's brief named the identity system that organizations have built without modeling it as one. CI/CD runners, AI inference nodes, secrets manager CLIs. Each holds production authority as a byproduct of its operational function. None appears on most privileged access inventories.
This week, the distribution infrastructure through which those systems receive their code became the attack surface. The packages were malicious. That part is not the point. The authorization they carried was legitimate, and every verification primitive confirmed it.
The OIDC Fix That Moved the Surface
On April 29, between 09:55 and 14:00 UTC, malicious versions of four SAP Cloud Application Programming packages were published to the official @cap-js npm scope. The following day, PyTorch Lightning versions 2.6.2 and 2.6.3 were compromised through the same campaign. Seven independent security firms confirmed the activity. SAP issued an official statement confirming unauthorized workflow modification.
The mechanism is what matters. The attacker did not steal a long-lived npm token. They compromised a developer account with release workflow permissions, modified the GitHub Actions release workflow to extract the short-lived OIDC token at the moment of publication, and used that token within its valid window. The resulting packages were signed. They came from the official SAP scope. They passed every automated integrity check.
OIDC trusted publishing was deployed across the major package registries specifically to replace long-lived publishing credentials. The security argument was sound. Long-lived tokens get stolen, sit in breach databases, and surface months later in unexpected contexts. Short-lived tokens bound to specific workflows are harder to exfiltrate and time-limited by design.
What the model assumed is that the workflow generating the token is trustworthy. The attacker modified the workflow.
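The control this implies is workflow integrity, not token hygiene. A minimal sketch of the idea, assuming a known-good digest pinned out of band; the function names and the pinning mechanism are illustrative, not a registry or GitHub feature:

```python
import hashlib

# Sketch: pin the exact bytes of the release workflow and refuse to treat an
# OIDC-minted token as trustworthy if the workflow no longer matches the pin.
# The out-of-band pin store is an assumption for illustration.

def workflow_digest(content: bytes) -> str:
    """SHA-256 over the exact workflow file bytes."""
    return hashlib.sha256(content).hexdigest()

def workflow_is_trusted(content: bytes, pinned_digest: str) -> bool:
    """Publication should proceed only while the workflow matches its pin."""
    return workflow_digest(content) == pinned_digest
```

The sketch makes the point the incident made: rotating the token changes nothing if the workflow that requests it has been rewritten.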
In over 1,100 poisoned repositories, the payload also injected a malicious hook into .claude/settings.json, triggering execution when any developer opened the repository in an AI coding assistant. This is the first confirmed case of AI coding agent runtime configuration weaponized as a persistence vector. Opening a repository in Claude Code, Gemini CLI, or VS Code became an execution trigger independent of credential theft. The developer does not have to install anything. Opening the folder is enough.
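A pre-open check for this persistence vector fits in a few lines. The `hooks` key and its shape are treated here as assumptions about the settings schema, and the function name is invented for illustration:

```python
import json

def find_hook_events(settings_text: str) -> list[str]:
    """Return hook event names declared in a .claude/settings.json payload.

    A non-empty result means opening the repository could trigger execution,
    which is the condition worth reviewing before opening the folder.
    """
    try:
        settings = json.loads(settings_text)
    except json.JSONDecodeError:
        return []  # not valid JSON; nothing for this check to report
    hooks = settings.get("hooks")
    return sorted(hooks) if isinstance(hooks, dict) else []
```

Run against a checkout before any assistant touches it, a non-empty result is a review gate, not proof of compromise.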
The SAP target selection is worth noting. CAP is the standard development framework for SAP's enterprise customer base. This is not a developer tooling attack aimed at individual contributors. It is a credential collection operation targeting the build infrastructure of enterprise business applications.
The AI Gateway as a Credential Ledger
LiteLLM, a widely deployed LLM gateway, carried a pre-authentication SQL injection vulnerability in its API key verification path. The caller-supplied Authorization Bearer header was concatenated directly into the SQL query. An unauthenticated attacker sending a crafted header to any LLM API route could extract the contents of three credential tables: virtual API keys, upstream provider credentials, and gateway configuration secrets.
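The failure pattern is ordinary string concatenation in the authentication path. A minimal reconstruction of the pattern, not LiteLLM's actual code; table and column names are illustrative:

```python
import sqlite3

def lookup_key_vulnerable(conn: sqlite3.Connection, bearer: str) -> list:
    # BAD: the attacker-controlled Authorization header lands inside the SQL
    # string, so a crafted bearer value rewrites the query itself.
    query = f"SELECT token FROM virtual_keys WHERE token = '{bearer}'"
    return conn.execute(query).fetchall()

def lookup_key_safe(conn: sqlite3.Connection, bearer: str) -> list:
    # FIX: the header is bound as a parameter and is never parsed as SQL.
    return conn.execute(
        "SELECT token FROM virtual_keys WHERE token = ?", (bearer,)
    ).fetchall()
```

With a payload like `' OR '1'='1`, the vulnerable path hands every stored key to an unauthenticated caller; the parameterized path returns nothing.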
Sysdig recorded the first exploitation attempt thirty-six hours and seven minutes after the advisory reached the GitHub Advisory Database. The attackers did not wait for a public proof-of-concept. They enumerated the correct Prisma table names with exact casing before any PoC existed. The open-source schema was sufficient. Someone had already done the homework.
The blast radius of a compromised LiteLLM instance is not one organization's LLM access. It is the credentials for every AI provider the gateway connects to: OpenAI, Anthropic, AWS Bedrock, Google Vertex. A single pre-auth request against the verification path reaches all of them.
The AI gateway is not on most privileged access inventories. It is deployed by application teams to provide a unified API surface for AI workloads. Its credential aggregation function is the feature, not a side effect. That function makes it a single point of authorized access to the entire AI provider stack. When the gateway is compromised, the authority follows.
The Vulnerability That Requires What Has Already Been Taken
Wiz Research publicly disclosed CVE-2026-3854 in the GitHub git push pipeline on April 28. A single git push from an authenticated user with push access to any repository achieves remote code execution on shared backend storage nodes through a three-stage injection chain: sandbox bypass, hook directory redirect, and path traversal. GitHub.com was patched within two hours of the private March 4 report. GitHub Enterprise Server received patches on March 10. At the time of public disclosure, eighty-eight percent of GHES instances remained on vulnerable versions.
The precondition is push access. That sounds like a meaningful barrier. It is not a meaningful barrier in April 2026.
The TeamPCP cascade documented across the previous six weeks harvested GitHub tokens from CI/CD pipelines across multiple ecosystems: Trivy, Checkmarx, Bitwarden CLI, SAP CAP, PyTorch Lightning. A developer whose GitHub token was stolen in any of those operations now provides the authentication precondition for CVE-2026-3854 against any GHES instance they have push access to. No additional phishing required. No new vulnerability required. The credential already exists in adversarial hands.
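Triaging the eighty-eight percent figure against your own fleet reduces to a version-floor comparison. A sketch, with placeholder hostnames and version strings rather than the real advisory values:

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Turn '3.11.2' into (3, 11, 2) for tuple comparison."""
    return tuple(int(part) for part in v.split("."))

def still_vulnerable(fleet: dict[str, str], fixed_version: str) -> list[str]:
    """Hostnames whose GHES version sits below the patched floor."""
    floor = parse_version(fixed_version)
    return sorted(host for host, v in fleet.items() if parse_version(v) < floor)
```

The comparison is trivial; the point is that the precondition analysis in the next paragraph should feed it: every host on this list is reachable by any harvested token with push access.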
The access review that classified GHES push access as requiring authentication was accurate. The threat model behind that classification did not account for the population of valid authentication material that has been in attacker possession since March.
The Detection Logic That Is No Longer Private
On April 26, LAPSUS$ published ninety-six gigabytes of Checkmarx source code, derived from credentials stolen in the March 23 Trivy compromise. KICS, the infrastructure-as-code scanner. AST, the application security testing platform. The source code of the tools Checkmarx customers deploy to find vulnerabilities.
On May 2, Trellix disclosed unauthorized access to a portion of its internal source code repository. Trellix develops XDR and endpoint detection products. No attribution. No confirmed exploitation. Forensic investigation ongoing.
Two security vendors. Seven days. Different actors, different access vectors.
Endpoint detection works by defining what behaviors look suspicious. That definition is encoded in the source code. An attacker with access to it knows what the detection agent will and will not flag. They can write malicious code that does not trigger the signatures. No product vulnerability required. Just the source.
The exposure creates attacker knowledge that persists beyond a normal software update cycle. Customers running Checkmarx or Trellix products will continue to see dashboards reporting normal detection status. The dashboards reflect tool execution. They do not reflect whether the tool's detection logic remains ahead of an attacker who has already read it.
The Common Condition
These events do not share an attacker, a mechanism, or a sector. The SAP campaign is attributed to TeamPCP. The LiteLLM exploitation has no confirmed attribution. The Trellix breach has no confirmed actor. GitHub CVE-2026-3854 was a responsible disclosure with no in-wild exploitation confirmed.
What they share is this: the verification primitives organizations rely on to assess the integrity of their software distribution and security tooling all produced the expected result. The OIDC token validation confirmed the token was legitimately issued to an authorized workflow. The npm registry accepted the packages because the scope was correct and the signature was valid. The LiteLLM credentials were real credentials held by a real gateway with real authorization to the connected providers. The GitHub push came from an authenticated user with legitimate push access.
The cryptographic chain was intact in every case. The authority being exercised through that chain was not what the operator had authorized.
Provenance systems verify that authorization is continuous. They do not verify that authorization was correct at the point it was established. When an attacker controls one authorized link in the chain, the signing passes. The token validation passes. The package is distributed. The check at the artifact boundary cannot see what happened upstream of it.
This is not a failure of cryptographic implementation. It is a structural property of where verification operates relative to where authority is exercised.
What This Breaks
Package signing, OIDC token validation, SLSA-based attestation, and software bill of materials verification all confirm that an authorization chain is continuous. None confirms that authorization was correct at any point in that chain. A passing provenance check measures continuity. Organizations treating it as supply chain security assurance are mistaking continuity for correctness. Those are not the same property.
AI infrastructure inventory that does not include credential aggregation surfaces is not a partial inventory. It is an inventory of the wrong layer. The credential blast radius of a compromised AI gateway is defined by the credentials the gateway holds, not by its infrastructure cost or its classification in the asset register.
The "requires authentication" precondition for critical platform vulnerabilities has been materially weakened by six weeks of developer token harvest. Threat models that still treat authentication as a meaningful access barrier for developer infrastructure are not models of the current environment.
Security tool coverage that does not account for adversarial knowledge of the tool's detection logic is not a gap that shows up on a dashboard. The dashboard continues to report coverage. The coverage continues to fail. The gap appears downstream, in attacks that do not trigger signatures, from actors who studied what those signatures look for.
The operations documented in the past six weeks collectively reveal attackers adapting to the specific authorization mechanisms that govern publication in each ecosystem. The mechanism varies by target. The targeting logic does not. At least one additional software publishing ecosystem will be compromised via its trust and authorization model within 90 days. The most likely surfaces are ecosystems with growing enterprise adoption and authorization models not yet tested under adversarial conditions at scale.
Three Questions
Which elements of your software distribution pipeline rely on OIDC trusted publishing, and when did you last audit the release workflows to which those tokens are issued? Token rotation is not the control. Workflow integrity is.
Which AI gateways, orchestration platforms, and inference deployments in your environment hold credentials for multiple upstream providers? What is the blast radius of a single pre-auth compromise of each? If you do not have a documented answer, the credential surface is ungoverned.
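The second question can be answered mechanically once the inventory exists. A sketch of that bookkeeping, with gateway and provider names invented for illustration:

```python
def blast_radius(gateways: dict[str, list[str]]) -> dict[str, int]:
    """Upstream providers reachable from one pre-auth compromise of each gateway."""
    return {name: len(set(providers)) for name, providers in gateways.items()}

def ungoverned(gateways: dict[str, list[str]], documented: set[str]) -> list[str]:
    """Gateways holding upstream credentials with no documented blast-radius answer."""
    return sorted(set(gateways) - documented)
```

Anything returned by the second function is, in the brief's terms, an ungoverned credential surface.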
What sits between your security tooling coverage claims and an adversary who has read the source code of that tooling? If the answer is the tooling itself, the question is not answered.
The authorization was valid. The workflow had already been modified. The packages were signed correctly. The environment that produced them was no longer the one anyone had authorized.
The verification confirmed continuity. It could not confirm correctness. That gap is not theoretical. It is where this week's events occurred, and the organizations affected had no instrumentation that distinguished one from the other.
CHQ has formalized this as a structural position (CHQ-P-2026-016): Cryptographic Provenance Systems Validate Continuity of Authorization, Not Correctness of Authorization.
