Mini Shai-Hulud chained three vulnerabilities in the modern CI/CD security stack. First, attackers poisoned GitHub Actions caches via pull_request_target workflows on public forks — injecting malicious code into trusted build processes without requiring write access to the target repository. Second, the hijacked workflow extracted short-lived OIDC tokens from the CI runner's process memory — tokens that the ecosystem had adopted specifically to replace the long-lived credentials exploited by the original campaign. Third, those OIDC tokens were used to publish malicious packages to npm and PyPI with valid SLSA provenance attestations. The attack did not defeat SLSA cryptographically. It defeated it operationally: by owning the trusted builder, it produced signatures that were mathematically valid but semantically fraudulent.
npm and PyPI quarantined affected packages within hours. GitHub issued emergency guidance on restricting pull_request_target workflow permissions and scoping OIDC token claims. The security community broadly acknowledged that SLSA Build Level 3 provenance, while valuable, does not protect against an attacker who controls the build environment. The incident reopened fundamental questions about whether the supply chain security improvements adopted post-Shai-Hulud 1.0 had displaced risk rather than eliminated it — hardening the token layer while leaving the builder itself as an unexamined trust anchor.
The Museum Placard
By May 2026, the JavaScript ecosystem had absorbed the lessons of Shai-Hulud and acted on them. Maintainers had rotated tokens. Projects had adopted OIDC Trusted Publishing. npm had strengthened publication requirements. SLSA provenance attestations were being adopted as the gold standard of build integrity: a cryptographic guarantee that a released package was built from a specific commit by a specific trusted workflow.
Then Mini Shai-Hulud arrived. It extracted OIDC tokens from the trusted workflows themselves, published malicious packages using those tokens, and attached SLSA Build Level 3 provenance attestations that were mathematically valid in every way.
The security community had hardened the credential layer. The worm moved to the layer beneath it.
---
The Triple-Chain
Mini Shai-Hulud did not attack a single vulnerability. It chained three, each of which had to succeed before the next could execute.
Step 1: Cache Poisoning via pull_request_target
GitHub Actions provides a workflow trigger called pull_request_target designed to allow workflows to comment on pull requests, post test results, or run privileged operations in response to contributions from external forks — while running in the context of the base repository rather than the untrusted fork.
The critical property: pull_request_target workflows run with the secrets and write permissions of the base repository, even when triggered by a pull request from a fork. This is intentional — it allows, for example, a CI bot to post a comment using the repository's GitHub token.
Mini Shai-Hulud exploited a misconfiguration common to many high-profile open source projects: pull_request_target workflows that checked out the fork's code rather than the base repository's code before running. By controlling the fork's code, attackers controlled what ran inside a workflow with base-repository authority.
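The dangerous combination can be sketched as a small audit check — a hypothetical illustration, not GitHub's tooling; the regexes and the two workflow snippets are simplified examples:

```python
import re

# Hypothetical minimal audit: flag workflows that combine the
# pull_request_target trigger with a checkout of the fork's code.
# Real auditing would parse the YAML properly; this is a sketch.

DANGEROUS_TRIGGER = re.compile(r"^\s*pull_request_target\s*:", re.MULTILINE)
# Checking out github.event.pull_request.head.sha (or .ref) pulls the
# attacker-controlled fork code into a privileged workflow run.
FORK_CHECKOUT = re.compile(r"github\.event\.pull_request\.head\.(sha|ref)")

def is_risky(workflow_yaml: str) -> bool:
    """True if the workflow would run fork code with base-repo authority."""
    return bool(DANGEROUS_TRIGGER.search(workflow_yaml)) and bool(
        FORK_CHECKOUT.search(workflow_yaml)
    )

vulnerable = """
on:
  pull_request_target:
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - run: npm install && npm test
"""

safe = """
on:
  pull_request_target:
jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # defaults to the base repository
      - run: ./scripts/post-comment.sh
"""

print(is_risky(vulnerable))  # → True
print(is_risky(safe))        # → False
```

The two snippets differ only in what gets checked out, which is exactly why the misconfiguration survives casual review.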
The cache poisoning technique used this to inject a malicious build artifact into the GitHub Actions cache — the mechanism that lets CI runs reuse artifacts from prior builds to speed up execution. The cached artifact would be consumed by the next legitimate build run, not the attacker's run, ensuring the attack executed in a trusted context with full pipeline credentials.
Step 2: OIDC Token Extraction
Modern CI/CD pipelines that use OIDC Trusted Publishing receive short-lived OIDC tokens — JSON Web Tokens bound to the specific workflow, repository, and branch that requested them. These tokens expire in minutes. They cannot be used to publish packages from an unauthorized workflow. They were specifically designed to replace long-lived npm tokens after Shai-Hulud 1.0.
The cached malicious artifact, executing inside the trusted build runner, extracted the OIDC token from process memory during the build. The token was still valid — it had just been issued to the workflow that the malicious cache artifact was now running inside. From the token's perspective, nothing was wrong: it was being used by the exact workflow and repository it was issued for. Nothing in the token's claims could distinguish the legitimate build artifact from the poisoned one.
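Why the token looked legitimate can be sketched by inspecting the claims an OIDC consumer checks. The payload below is a fabricated example of the claim shape; real tokens are signed JWTs verified against the issuer's published keys, which this sketch omits:

```python
import base64
import json

# Fabricated claim set in the general shape GitHub's OIDC provider
# issues; repository names and workflow paths here are invented.
claims = {
    "iss": "https://token.actions.githubusercontent.com",
    "repository": "example-org/example-package",
    "ref": "refs/heads/main",
    "job_workflow_ref": (
        "example-org/example-package/.github/workflows/release.yml@refs/heads/main"
    ),
}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=")

def claims_match(payload_b64: bytes, expected_repo: str, expected_ref: str) -> bool:
    """Decode a JWT-style payload and check the binding claims."""
    padded = payload_b64 + b"=" * (-len(payload_b64) % 4)
    decoded = json.loads(base64.urlsafe_b64decode(padded))
    return decoded["repository"] == expected_repo and decoded["ref"] == expected_ref

# A token lifted from the poisoned run passes every claim check, because
# it really was issued to the expected workflow, repository, and branch.
print(claims_match(payload, "example-org/example-package", "refs/heads/main"))  # → True
```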
Step 3: Authenticated Publishing with Valid Provenance
The extracted OIDC token was used to publish malicious versions of the target packages to npm and PyPI. Because the token was legitimate, the publication succeeded. Because SLSA provenance is generated from the build runner's attestation of the build context, and the build runner's context was genuine (the correct workflow, the correct repository, the correct branch), the provenance attestation was mathematically valid.
A security engineer inspecting a Mini Shai-Hulud package using standard tooling would see:
- Published from the expected repository ✓
- Published from the expected branch ✓
- SLSA Build Level 3 provenance attestation present ✓
- Provenance signature cryptographically valid ✓
- Package contents: malicious ✗ (not visible from provenance)
The chain produced artifacts that defeated every supply chain security control the industry had standardized on, because those controls verified the origin of the build rather than the integrity of the build environment.
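The checklist above corresponds to verification logic of roughly this shape — a sketch with illustrative field names loosely modeled on SLSA provenance, not any specific verifier's implementation:

```python
# Illustrative origin checks; the signature verification is stubbed to a
# boolean, since the point is that it passes for the poisoned build.

EXPECTED = {
    "source_repo": "https://github.com/example-org/example-package",
    "branch": "refs/heads/main",
    "builder_id": "https://github.com/actions/runner",
}

def verify_provenance(statement: dict, signature_valid: bool) -> bool:
    """Origin-of-build checks: every one passes for a poisoned artifact."""
    return (
        signature_valid
        and statement["source_repo"] == EXPECTED["source_repo"]
        and statement["branch"] == EXPECTED["branch"]
        and statement["builder_id"] == EXPECTED["builder_id"]
    )

# The compromised runner *is* the expected builder, so its attestation
# carries exactly the fields a verifier wants to see.
poisoned_build_statement = {
    "source_repo": EXPECTED["source_repo"],
    "branch": EXPECTED["branch"],
    "builder_id": EXPECTED["builder_id"],
}
print(verify_provenance(poisoned_build_statement, signature_valid=True))  # → True
# Nothing in the statement describes the *contents* of the artifact, so
# "package contents: malicious" is invisible at this layer.
```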
---
The Simultaneous Dual-Registry Strike
Where the original Shai-Hulud targeted npm exclusively, Mini Shai-Hulud attacked npm and PyPI simultaneously — a deliberate escalation that served two purposes.
Maximizing blast radius: Many projects that publish to both registries (Python bindings for JavaScript libraries, cross-language SDKs, infrastructure tools with multi-language clients) had CI/CD pipelines that held publishing credentials for both. A single compromised runner could yield tokens for both registries.
Stress-testing response capacity: Simultaneous compromise across two registries doubled the incident response burden for both npm and PyPI security teams, for the package maintainers, and for the downstream users attempting to identify whether their dependencies were affected. In the hours before quarantine, the dual-registry nature of the attack slowed coordinated response.
The affected packages included TanStack (React Query, React Router), Mistral AI client libraries, UiPath automation packages, and OpenSearch client packages — a cross-section of the ecosystem covering frontend frameworks, AI/ML tooling, enterprise automation, and cloud infrastructure.
CVE-2026-45321 (CVSS 9.6) is the formal tracking identifier for the TanStack arm of this campaign. On May 11, 2026, malicious versions of 42 @tanstack/* packages were published to npm. The packages carried valid OIDC trusted-publisher attestations — the TanStack publish workflow itself was not modified. Any environment that ran npm install, pnpm install, or yarn install against the affected packages between May 11 19:00 UTC and takedown should be treated as fully compromised.
---
The Destructive Payload
Unlike Shai-Hulud 1.0, which focused on credential theft and self-replication, Mini Shai-Hulud's payload included a destructive component: a persistent daemon that, after establishing exfiltration channels, would execute rm -rf ~/ against developer home directories.
This represents a qualitative escalation in threat posture. Credential theft is recoverable: rotate secrets, revoke tokens, audit access logs. Destruction of a developer's home directory — git repositories, configuration files, local key material, build artifacts, personal files — has no such remediation path; without adequate backups, the data is simply gone.
The destructive capability appears designed not primarily to cause damage, but to create pressure during incident response: developers whose machines were actively losing data during the compromise had reduced capacity to investigate and remediate the infection in other affected systems.
---
The Six Laws at Work
Law III — Transitive Trust (Again, Deeper)
The original Shai-Hulud exploited transitive trust in the npm dependency graph. Mini Shai-Hulud exploited transitive trust in the CI/CD infrastructure graph: the chain of trust from a GitHub repository to its Actions runners to the npm registry to the SLSA attestation verifier.
Every link in that chain trusted the previous link. The OIDC token trusted the workflow. The registries trusted the OIDC token. SLSA attestations trusted the build runner. Package consumers trusted the SLSA attestation. The malicious payload moved along this chain of mutual trust relationships without breaking any individual link — because the attack controlled the runner from which all the trust claims propagated.
Law IV — Complexity Accretion
The triple-chain attack was only possible because three separate systems — GitHub Actions cache, OIDC token issuance, and SLSA provenance — had been composed into an integrated CI/CD security architecture without any of the individual systems having full visibility into the others.
pull_request_target was not designed with OIDC token extraction in mind. OIDC token issuance was not designed with cache poisoning in mind. SLSA attestation was not designed to protect against a compromised build environment. Each system addressed its own threat model correctly. Their composition created a new attack surface that none of them individually modeled.
Law I — Boundary Collapse
SLSA provenance was designed as a security boundary: "this package was built by this trusted process." Mini Shai-Hulud demonstrated that the builder — not the build process, not the source code, not the publication token — was the actual trust anchor, and that the builder could be compromised without any of the verification mechanisms detecting the compromise.
The boundary between "the build is trusted" and "the build environment is trusted" was assumed rather than enforced.
Law 0 — Katie's Law
The pull_request_target misconfiguration that Mini Shai-Hulud exploited was not obscure. GitHub had documented the risk. Security researchers had written about it. Advisories had been published. The projects that were compromised were not negligent — they were maintaining large open source codebases with volunteer contributor bases, complex CI/CD configurations accumulated over years, and insufficient dedicated security engineering to audit every workflow trigger for subtle misconfigurations.
The economics of open source infrastructure continue to produce the conditions for this class of attack: high-value targets (popular packages with millions of downstream users) maintained by teams that cannot afford the security engineering overhead commensurate with their blast radius.
---
The Provenance Paradox
The SLSA attestation problem Mini Shai-Hulud exposed deserves specific attention, because it reflects a fundamental limit of cryptographic verification.
SLSA provenance answers the question: "Was this artifact built from this source by this process?" It cannot answer the question: "Was the build process itself operating correctly?" These are different questions, and cryptographic signatures cannot collapse them into one.
A SLSA Build Level 3 attestation means a trusted builder attested to the build. If the trusted builder is compromised — whether by cache poisoning, by a malicious dependency in the build environment, or by any other mechanism that executes code inside the build runner — the attestation is produced by a compromised system. The attestation is valid. The build is not.
This is not a weakness in SLSA's design. It is a statement of what SLSA can and cannot verify. The security gap is in the ecosystem's implicit extension of SLSA's actual guarantees into a broader claim it was never designed to make.
---
What Should Have Stopped This
Restricting pull_request_target scope. Workflows triggered by pull_request_target should always check out the base repository's code, never the fork's. Any workflow that needs to execute fork code should do so in an isolated environment without access to base-repository secrets. This is a configuration requirement, not a platform limitation — GitHub documents it. The gap is that it requires active, ongoing attention from project maintainers rather than safe defaults.
OIDC token scope minimization. OIDC tokens should be scoped to the minimum publication permissions necessary — specific package names, specific version ranges, specific registries. A token that can publish to any package owned by an account has a far larger blast radius than one that can publish only new versions of a single named package. Both npm and PyPI have moved toward narrower OIDC scoping post-Mini Shai-Hulud, but adoption requires maintainer action.
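Hypothetically, narrow scoping reduces to a containment check at publish time; the scope-string format here is an invented illustration, not npm's or PyPI's actual trusted-publisher policy model:

```python
# Invented scope format "pkg:<name>:publish" for illustration only.

def can_publish(token_scopes: set[str], package: str) -> bool:
    """Does this token authorize publishing the named package?"""
    return f"pkg:{package}:publish" in token_scopes or "pkg:*:publish" in token_scopes

broad = {"pkg:*:publish"}           # account-wide blast radius
narrow = {"pkg:mypackage:publish"}  # single-package blast radius

# A stolen broad token can publish anything the account owns; a stolen
# narrow token is contained to one package.
print(can_publish(broad, "anything-else"))   # → True
print(can_publish(narrow, "anything-else"))  # → False
print(can_publish(narrow, "mypackage"))      # → True
```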
Build environment integrity verification. SLSA provenance verifies source and builder identity but not build environment integrity. Supplementary controls — build reproducibility checks, dependency pinning in build environments, ephemeral build runners with minimal cached state — reduce the attack surface of the build environment itself. These are architectural requirements that add friction to CI/CD maintenance.
Cache isolation and content verification. GitHub Actions cache entries shared across workflow runs create a state channel that can be used to smuggle malicious artifacts. Restricting cache sharing by branch, signing cached artifacts, or treating cached content as untrusted input would close this specific vector.
---
Curator's Note
Mini Shai-Hulud is a particular kind of disaster: one that happened because previous defenses worked.
The industry responded correctly to Shai-Hulud 1.0. Long-lived tokens were replaced with OIDC. SLSA provenance was adopted. The credential layer was hardened. These were real improvements that raised the cost of certain attacks.
But attackers do not reuse the same attack against hardened targets. They route around the new defenses. Mini Shai-Hulud is what routing around hardened credentials looks like: move to the layer that issues the credentials, compromise it, and produce artifacts that pass every check that was added in response to the previous compromise.
This is not a reason not to harden defenses. It is a reason to understand that hardening any single layer shifts risk to adjacent layers — and that the adjacent layers must be identified and assessed before the hardened layer is announced as a solution.
The worm learned to sign its own releases because the ecosystem learned to require signatures. The next iteration will learn whatever the ecosystem requires next.
EFFODE · LEGE · INTELLEGE