SLSA Certificate Forgery: The New Evolution of Shai-Hulud Attacks


In May 2026, the security industry faced a precedent that shattered the assumption of unconditional trust in digital signatures within development environments. During a massive campaign, the Shai-Hulud worm did not bypass supply-chain protection systems; it passed every check flawlessly, obtaining a legitimate SLSA Build Level 3 attestation. The malware learned to steal short-lived OIDC tokens and use them to request cryptographic certificates, turning the CI/CD infrastructure itself into a tool for legitimizing malicious code.

ARCHITECTURE VULNERABILITIES

Ephemeral Tokens and Trust Standards

OpenID Connect (OIDC) in GitHub Actions solves a critical security problem: continuous integration and deployment workflows need to authenticate to external services without storing long-term secrets in the repository. The system issues a short-lived token that remains valid for only a few minutes. The traditional assumption is that this approach makes credential theft pointless, since there are no static keys left to steal.

However, the Shai-Hulud worm operates under different conditions. It runs directly inside the Actions workflow at the very moment the token is valid. The attacker does not attempt to forge an existing signature; instead, it approaches the certification system on behalf of a legitimate, authenticated workflow identity. Because Sigstore receives a well-formed request backed by a valid OIDC token, the certificate authority issues an authentic certificate. Artifact verification standards confirm the source, but they cannot detect that malicious code has interfered with the process itself.
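To make the abuse concrete, here is a minimal Python sketch of the token-minting step any process inside a workflow can perform. The two ACTIONS_ID_TOKEN_REQUEST_* environment variables and the audience query parameter are the documented GitHub Actions OIDC mechanism; the helper names and the split into two functions are purely illustrative.

```python
import os
import json
import urllib.request

def build_token_request(request_url: str, bearer: str, audience: str = "sigstore"):
    """Construct the URL and headers for GitHub's OIDC token endpoint.

    The runner exposes the endpoint via ACTIONS_ID_TOKEN_REQUEST_URL; the
    desired audience is appended as a query parameter, and the request is
    authorised with the short-lived ACTIONS_ID_TOKEN_REQUEST_TOKEN bearer
    credential that the runner injects into the job environment.
    """
    sep = "&" if "?" in request_url else "?"
    full_url = f"{request_url}{sep}audience={audience}"
    headers = {"Authorization": f"bearer {bearer}"}
    return full_url, headers

def mint_oidc_token(audience: str = "sigstore") -> str:
    """Request a workflow OIDC token -- only works inside a live Actions job."""
    url, headers = build_token_request(
        os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"],
        os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"],
        audience,
    )
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["value"]
```

The point of the sketch is that nothing here is privileged: any code the workflow executes, including a compromised dependency's install script, can read the same two variables and mint the same token.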

INTERVENTION MECHANICS

Latent Process Analytics via Identity Layer

The main difficulty in detecting such threats lies in the fragmentation across systems. The malicious code reads an environment variable, sends a request to GitHub's OIDC provider, and then presents the obtained token to Sigstore's Fulcio certificate authority. Once an entry lands in the Rekor transparency log, the worm can produce a valid cosign signature for the package being prepared for publication. Every one of these stages consists of completely legitimate API calls.

Vectra AI emphasizes that the movement occurs through the identity layer and spans three independent systems: GitHub's OIDC provider, Sigstore's certificate authority, and the npm registry. Each system records a correct authentication in its audit logs. No alert is raised precisely because nothing records that foreign code is the one issuing these legitimate commands.
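The chain of individually legitimate calls can be modelled in a few lines of Python. The Sigstore and npm hostnames below are those of the public deployments; the ApiCall structure and the per-call audit are an illustrative sketch of why no single system objects, not any real product's detection logic.

```python
from dataclasses import dataclass

@dataclass
class ApiCall:
    method: str
    host: str
    purpose: str

def signing_chain() -> list:
    """The sequence of calls the worm makes, each legitimate in isolation."""
    return [
        ApiCall("GET", "github-oidc-provider",
                "mint workflow OIDC token via ACTIONS_ID_TOKEN_REQUEST_URL"),
        ApiCall("POST", "fulcio.sigstore.dev",
                "exchange OIDC token for a short-lived signing certificate"),
        ApiCall("POST", "rekor.sigstore.dev",
                "record the cosign signature in the transparency log"),
        ApiCall("PUT", "registry.npmjs.org",
                "publish the signed package"),
    ]

# Each system audits only its own traffic, so each call checks out.
TRUSTED_HOSTS = {"github-oidc-provider", "fulcio.sigstore.dev",
                 "rekor.sigstore.dev", "registry.npmjs.org"}

def per_call_audit(chain) -> bool:
    """Isolated per-system verification: every hop is a trusted endpoint."""
    return all(call.host in TRUSTED_HOSTS for call in chain)
```

Run in isolation, `per_call_audit(signing_chain())` returns True: the fragmentation the article describes is exactly that no auditor sees the four hops as one session.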

REAL SCENARIO

Campaign Against TanStack and Open Source

A recent campaign against the TanStack ecosystem demonstrates the exceptional scale of the threat. Within just five hours, 401 malicious versions were published across more than 170 different packages. During workflow execution, stolen credentials were exfiltrated to a C2 domain registered specifically for this operation. The situation became critical after a group called TeamPCP published the complete source code of the worm on GitHub under the MIT license.

Within less than a day of the repository going public, cybercriminals and researchers had created 44 forks, one of which added support for FreeBSD environments. The openness of the toolkit means the token-extraction technique is no longer exclusive to a single group. Ecosystems such as PyPI, RubyGems, and Maven are now at risk from new variants of the worm built on this code.

PROTECTION METHODOLOGY

Behavioral Context of the Vectra AI Platform

Static scanning of artifacts cannot stop this type of intervention: the package retains functional integrity, and the certificates are authentic. Vectra AI experts emphasize that the real indicators of compromise are behavioral. The danger signal is not the Sigstore API call itself but the appearance of outgoing connections to unknown external hosts from a build environment that, within the same session, requests an OIDC token and publishes a package.

Vectra AI’s solution detects these patterns through end-to-end monitoring of identities, networks, and cloud infrastructure. The analytics build deep context: what exactly the process was doing before publishing its result, which cloud services it interacted with, and what atypical traffic it generated. Correlating these events makes it possible to identify the anomaly before the finished package enters corporate systems.
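As a sketch of what such correlation might look like, the toy detector below flags any session that both mints an OIDC token and publishes a package while also contacting a host outside a known allowlist. The event schema, field names, and flagging rule are all invented for illustration; this is not Vectra AI's interface or algorithm.

```python
from collections import defaultdict

# Hosts a signing-and-publishing workflow is expected to contact (illustrative).
KNOWN_HOSTS = {"fulcio.sigstore.dev", "rekor.sigstore.dev", "registry.npmjs.org"}

def flag_suspicious_sessions(events, known_hosts=KNOWN_HOSTS):
    """Correlate behaviour per session instead of auditing calls in isolation.

    `events` is an iterable of (session_id, action, target) tuples. A session
    is flagged only when three behaviours coincide: it requested an OIDC
    token, it published a package, and it opened an outbound connection to a
    host not on the allowlist -- the combination described in the text.
    """
    sessions = defaultdict(lambda: {"token": False, "publish": False, "unknown": set()})
    for session_id, action, target in events:
        s = sessions[session_id]
        if action == "oidc_token_request":
            s["token"] = True
        elif action == "package_publish":
            s["publish"] = True
        elif action == "outbound_connect" and target not in known_hosts:
            s["unknown"].add(target)
    return {sid for sid, s in sessions.items()
            if s["token"] and s["publish"] and s["unknown"]}
```

A legitimate release session (token, Sigstore traffic, npm publish) produces no finding; only the session that additionally beacons to an unknown host is surfaced, which is the correlated-context idea in miniature.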

Summary:

  • Legitimate security certificates only attest to the authenticity of the build identity; they do not vet the intentions of the process executing in CI/CD.
  • The open-source code of the Shai-Hulud worm has exponentially increased the risks of supply chain compromise through ephemeral tokens.
  • Traditional isolated verification tools are ineffective against this kind of covert abuse; the only effective defense is building a correlated behavioral context.

iIT Distribution, as a distributor of Vectra AI solutions, provides an expert ecosystem for deploying advanced cybersecurity systems. iITD’s team of specialists ensures full support and assists in designing a comprehensive security architecture that adapts to the specifics of supply chains and protects enterprises from new tactics of cybercriminals at all levels.
