Research · Security · 13 min read

GitHub Actions OIDC tokens and Jenkins plugins show CI/CD infrastructure is now the supply chain target

CI/CD compromise is moving away from poisoned dependencies alone and towards the infrastructure that builds, signs and releases trusted software.

GitHub Actions OIDC token extraction and Jenkins plugin compromise point at the same uncomfortable shift: the modern software supply chain is no longer being attacked only through packages, maintainers or update channels. The build system itself has become the target.

That distinction matters. A malicious dependency has to survive review, resolution, installation and runtime conditions. A compromised CI/CD path can arrive later and with more authority. It can run inside the project’s automation boundary. It can see build-time secrets. It can mint cloud credentials. It can alter release artefacts after source review has finished. It can also make the resulting compromise look like normal engineering output, because the logs, commits and attestations are produced by the same machinery defenders have been taught to trust.

Elastic Security Labs described this gap directly in its 29 April 2026 article, "CI/CD pipeline abuse: the problem no one is watching". The research focused on detecting abuse across GitHub Actions, GitLab CI and Azure DevOps using signal extraction and LLM reasoning. The most important point is not the model choice. It is the premise: CI/CD abuse is observable, repeated and still under-instrumented compared with endpoint, cloud and identity activity.

This is not a separate story from recent supply chain incidents. It is the next layer down. Previous compromises showed attackers moving through GitHub Actions, AI service infrastructure, developer tooling and package ecosystems. The common line is now clearer: attackers are not merely trying to smuggle malicious code into trusted software. They are trying to operate the systems that decide what trusted software is.

CI/CD stopped being plumbing

Build systems were once treated as engineering plumbing. They compiled code, ran tests and published artefacts. That view is obsolete.

A contemporary CI/CD platform is an identity broker, a secrets broker, a release authority, a dependency resolver, a signing workflow, a deployment robot and a forensic record. GitHub Actions workflows can request OpenID Connect tokens and exchange them for cloud-provider credentials. Jenkins installations can load plugins that execute inside the automation environment. GitLab CI and Azure DevOps pipelines can bridge source repositories, package registries, cloud accounts and production deployments.

The result is a concentration of authority. The pipeline is allowed to do things that would look suspicious if a user did them manually. It can assume roles. It can publish containers. It can create release assets. It can deploy infrastructure. It can pull private dependencies. It can sign artefacts. It can do all of this repeatedly, at speed and with limited human attention.

Attackers have noticed because the economics are obvious. Compromising a developer endpoint may yield a workstation. Compromising a package may yield downstream installs if the package has enough reach. Compromising build infrastructure can yield the thing organisations actually ship.

The security model has not caught up. Many organisations still monitor CI/CD as a reliability surface rather than a hostile execution surface. Failed jobs get dashboards. Long-running jobs get alerts. Suspicious credential exchange from a runner often gets much less attention, if it is collected at all. Pipeline changes are reviewed for whether they pass tests, not always for whether they change the authority boundary of the build.

That is the trust collapse: the system that produces evidence of integrity can itself become the compromised party.

OIDC token extraction changes the GitHub Actions failure mode

GitHub Actions OIDC is a legitimate hardening mechanism. It reduces long-lived cloud secrets in repositories by allowing workflows to request short-lived identity tokens. Those tokens can then be exchanged with a cloud provider for temporary credentials, scoped by claims such as repository, branch, environment or workflow.
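
Those claims are what make scoping possible. As a rough sketch, the payload of an OIDC token is a base64url-encoded JSON segment of a JWT; the claim names below follow GitHub's documented format, but the token here is entirely synthetic:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload of a JWT.

    Real verification must check the signature against the issuer's
    published keys; this sketch only inspects claims for illustration.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments are base64url without padding; restore it.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A synthetic payload shaped like GitHub's documented claims.
claims = {
    "iss": "https://token.actions.githubusercontent.com",
    "sub": "repo:example-org/example-app:ref:refs/heads/main",
    "repository": "example-org/example-app",
    "ref": "refs/heads/main",
    "environment": "production",
}
fake_token = ".".join([
    base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("="),
    "signature-placeholder",
])

decoded = decode_jwt_claims(fake_token)
print(decoded["sub"])  # repo:example-org/example-app:ref:refs/heads/main
```

A cloud trust policy that conditions on `sub`, `repository` or `environment` is only as strong as the narrowest claim it actually checks.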

That design is better than storing static cloud keys in CI. It also creates a different target.

If an attacker can execute code inside a trusted workflow context, the absence of static secrets does not end the incident. The workflow may be able to request an OIDC token. The token may be exchanged for cloud credentials. Those credentials may be short-lived, but short-lived is not the same as harmless. A deployment role can still deploy. A publishing role can still publish. An infrastructure role can still change infrastructure.

This is the subtle part of OIDC abuse: defenders can correctly remove long-lived secrets and still expose valuable authority to pipeline execution. The weakness is not OIDC itself. The weakness is granting broad cloud permissions to jobs on the assumption that only intended workflow code will run.

In a compromised action scenario, the attacker does not need to steal a stored key. The action runs where the job runs. If the job has permission to request an identity token, malicious code can attempt to reach the same token endpoint as legitimate code. If the trust policy at the cloud provider is too broad, the attacker can turn a CI job into a temporary cloud principal.
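
GitHub exposes that token endpoint to the job through environment variables (ACTIONS_ID_TOKEN_REQUEST_URL and ACTIONS_ID_TOKEN_REQUEST_TOKEN), which is precisely why any code executing in the job can attempt the request. A minimal sketch, with the endpoint and bearer values stubbed because this does not run on a real runner:

```python
import os
import urllib.request

def build_token_request(audience: str) -> urllib.request.Request:
    """Build the HTTP request a job uses to mint an OIDC token.

    GitHub only injects these environment variables when the job has
    the id-token: write permission; any code in the job, including a
    compromised third-party action, can read them.
    """
    url = os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"]
    bearer = os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"]
    sep = "&" if "?" in url else "?"
    return urllib.request.Request(
        f"{url}{sep}audience={audience}",
        headers={"Authorization": f"Bearer {bearer}"},
    )

# Outside a real runner, stub the environment to show the shape.
os.environ["ACTIONS_ID_TOKEN_REQUEST_URL"] = "https://runner.example/token?api-version=2"
os.environ["ACTIONS_ID_TOKEN_REQUEST_TOKEN"] = "runner-bearer-token"

req = build_token_request("sts.amazonaws.com")
print(req.full_url)
# https://runner.example/token?api-version=2&audience=sts.amazonaws.com
```

Nothing in this exchange distinguishes intended workflow code from anything else running in the same job context; that distinction has to be enforced by the cloud trust policy and by limiting which jobs hold the permission at all.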

That changes what should be reviewed. The key questions are no longer limited to "which secrets are present?" They become:

  • Which jobs can request identity tokens?
  • Which branches, tags, environments and workflow names are accepted by cloud trust policies?
  • Which third-party actions execute before token issuance?
  • Which permissions are granted to the resulting cloud role?
  • Which logs capture token requests, failed exchanges and unusual role use?
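
A first pass over these questions can be automated. The sketch below uses heuristic regexes over raw workflow text rather than a proper YAML parser, and the workflow itself is invented:

```python
import re

# Heuristic checks over raw workflow YAML. A real audit should parse
# the file properly; these patterns only illustrate the idea.
SHA_PIN = re.compile(r"uses:\s*[\w./-]+@[0-9a-f]{40}\b")
ANY_USE = re.compile(r"uses:\s*([\w./-]+@[\w.-]+)")
ID_TOKEN = re.compile(r"id-token:\s*write")

def audit_workflow(text: str) -> dict:
    uses = ANY_USE.findall(text)
    pinned = [u for u in uses if SHA_PIN.search(f"uses: {u}")]
    return {
        "requests_id_token": bool(ID_TOKEN.search(text)),
        "unpinned_actions": sorted(set(uses) - set(pinned)),
    }

workflow = """
permissions:
  id-token: write
jobs:
  deploy:
    steps:
      - uses: actions/checkout@v4
      - uses: example/deploy-action@8f4b7f84864484a7bf31766abe9204da3cbe65b3
"""
report = audit_workflow(workflow)
print(report)
```

Even this crude scan surfaces the two facts that matter most at review time: the job can mint identity tokens, and one of the actions running before any token request is pinned only by a mutable tag.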

The dangerous configuration is usually convenient rather than malicious. A team grants a workflow a deployment role because it needs to ship. A trust policy accepts a repository rather than a protected environment because the latter is slower to configure. A reusable workflow grows permissions over time because several teams depend on it. A third-party action is pinned loosely because version drift is annoying. Each choice is understandable. Together they create a build-time identity plane that is easier to use than to reason about.
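
The difference between accepting a repository and accepting a protected environment is easy to demonstrate with glob matching. The subject claim strings below follow GitHub's documented `repo:owner/name:...` format; the policy patterns are illustrative, not any cloud provider's exact condition syntax:

```python
from fnmatch import fnmatch

# Illustrative policies: one scoped to the whole repository, one
# scoped to a single protected environment.
BROAD_POLICY = "repo:example-org/example-app:*"
STRICT_POLICY = "repo:example-org/example-app:environment:production"

feature_branch_sub = "repo:example-org/example-app:ref:refs/heads/feature-x"
prod_env_sub = "repo:example-org/example-app:environment:production"

def allowed(policy: str, sub: str) -> bool:
    return fnmatch(sub, policy)

print(allowed(BROAD_POLICY, feature_branch_sub))   # True: any branch passes
print(allowed(STRICT_POLICY, feature_branch_sub))  # False
print(allowed(STRICT_POLICY, prod_env_sub))        # True
```

Under the broad policy, a workflow triggered from any branch, including one created by an attacker with push access, can assume the cloud role. The strict policy confines that authority to a reviewed release path.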

The move from static secrets to federated identity did not make CI/CD less important to attackers. It made pipeline context more important.

Jenkins plugin compromise attacks the extension layer

Jenkins represents the older but still widely deployed side of the same problem. Its power comes from extensibility. Plugins connect Jenkins to source control, build tools, package repositories, chat systems, test frameworks, cloud services and deployment targets. That plugin ecosystem is also an attack surface woven directly into build authority.

A compromised Jenkins plugin is not just another vulnerable dependency. It is code that may run inside the automation server or its agents, near credentials, job definitions, workspace contents and release tasks. Depending on placement, it can observe builds, alter parameters, intercept credentials, modify artefacts or influence downstream publishing.

The plugin model is structurally tempting to attackers for three reasons.

First, plugins sit at integration points. They are installed precisely because they connect systems that would otherwise be separate. Source code, artefact storage, ticketing, cloud accounts and deployment targets all pass through plugin-mediated paths.

Second, plugins often inherit institutional trust. Once a plugin becomes part of the build estate, it can remain for years. Teams may update it because Jenkins recommends updates, or avoid updating it because old build jobs depend on old behaviour. Both patterns create risk. In one case the update channel becomes sensitive. In the other the estate accumulates known weaknesses.

Third, plugin behaviour is hard to inspect at the point of use. Engineers reviewing a Jenkinsfile can see the pipeline logic. They usually do not audit the plugin implementation that provides a build step. The visible code says "publish", "archive" or "deploy". The trusted extension decides what those verbs actually do.

This is why Jenkins plugin compromise belongs in the same discussion as GitHub Actions OIDC token extraction. Both attack a layer that is treated as enabling infrastructure rather than application code. Both exploit the gap between what defenders review and what actually executes. Both can turn trusted automation into a distribution mechanism.

There is a blunt asymmetry here. Organisations may require two approvals for a source change that affects a production service. The same organisations may allow a build administrator to install or update a plugin that affects every production release path. The first decision is visible in pull requests. The second may be buried in an operations change log, if it is logged with enough detail at all.

The shared failure is authority without adversarial context

The common failure across GitHub Actions, Jenkins, GitLab CI and Azure DevOps is not that automation exists. It is that automation is granted authority under assumptions that belong to a quieter period of software delivery.

Build systems are often trusted because they are internal, because they are configured by engineers, because their job logs are noisy and because disabling them would stop delivery. That operational importance becomes a security exemption. The pipeline cannot be locked down too hard because the business needs releases. The runners need network access because tests need dependencies. The deployment job needs production credentials because manual handoff is slow. The plugin needs administrative privileges because it has always needed them.

Attackers do not need to defeat the formal software development process if they can run inside the informal authority around it. A malicious workflow step can appear after source review through action resolution. A compromised plugin can operate below the visibility of the Jenkinsfile. A poisoned reusable pipeline can be inherited by many repositories. A cloud role trust policy can convert repository context into infrastructure control.

The collapse is therefore not only technical. It is procedural. The organisation says source review is the gate, but the gate is not the whole path. The artefact that reaches production is shaped by source code, dependency resolution, build scripts, runner images, CI secrets, identity federation, plugins, signing keys, package registries and deployment jobs. If any one of those layers can alter the result, then integrity belongs to the whole chain rather than the commit.

This is where supply chain language can become misleading. It makes the problem sound external: upstream packages, maintainers, registries and vendor updates. CI/CD compromise is often internal to the victim’s own delivery system. The attacker may still enter through an external dependency or action, but the valuable step is gaining execution where the organisation itself manufactures trust.

Detection has to move into the pipeline

Elastic Security Labs' emphasis on CI/CD abuse detection across GitHub Actions, GitLab CI and Azure DevOps is the right direction because the pipeline has its own behavioural signals. Treating CI logs as build debris wastes a security data source.

Useful signals include:

  • Workflow permission changes and first-time OIDC token requests.
  • Unusual cloud role exchanges and new third-party actions.
  • Unpinned action references and unexpected runner network destinations.
  • Artefact checksum drift.
  • Suspicious plugin updates and new Jenkins credentials bindings.
  • Changes to release jobs.
  • Build steps that execute encoded or remote-fetched scripts.
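
A simple stateful baseline captures several of these signals at once. The sketch below flags the first sighting of a repository, workflow and cloud role combination; the event field names are assumptions to be mapped onto whatever your audit log actually emits:

```python
class FirstSeenDetector:
    """Flag the first time a (repository, workflow, role) triple
    appears in token-exchange events. Field names are illustrative."""

    def __init__(self):
        self.seen = set()

    def check(self, event: dict) -> bool:
        key = (event["repository"], event["workflow"], event["role"])
        if key in self.seen:
            return False
        self.seen.add(key)
        return True  # first sighting: worth an analyst's attention

detector = FirstSeenDetector()
baseline = {"repository": "org/app", "workflow": "release.yml", "role": "deploy-prod"}
novel = {"repository": "org/app", "workflow": "release.yml", "role": "admin"}

print(detector.check(baseline))  # True  (first ever sighting)
print(detector.check(baseline))  # False (now part of the baseline)
print(detector.check(novel))     # True  (new role for this workflow)
```

In production this state would be persisted and enriched with the contextual factors discussed below, but the shape of the detection does not change: novelty in the pipeline's identity plane is the signal.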

None of these signals is conclusive alone. CI/CD systems are noisy because engineering work is noisy. That does not make them unobservable. It means detections need context: repository sensitivity, branch protection, environment rules, normal release cadence, expected cloud roles and known plugin inventory.

The strongest control is still reducing authority before detection is needed. A workflow that cannot request an OIDC token cannot use one. A cloud trust policy that only accepts protected environments reduces blast radius. A job that pins actions by commit rather than tag removes one mutable reference. A Jenkins controller with a minimal plugin set has fewer privileged extension points. A release role that can publish one artefact cannot casually reconfigure an account.

Practical controls are unglamorous:

  • Pin third-party GitHub Actions to immutable commits.
  • Set default workflow permissions to read-only.
  • Grant id-token: write only to jobs that genuinely need cloud federation.
  • Bind OIDC trust policies to protected branches, environments and exact repository claims.
  • Separate build, test, signing and deployment roles.
  • Treat Jenkins plugin installation and updates as security-relevant change control.
  • Maintain a plugin inventory with owners, versions and justification.
  • Log token requests, cloud role assumptions and deployment actions with repository and workflow context.
  • Alert on new release paths rather than only failed builds.
  • Rebuild critical artefacts in isolated environments where feasible.
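
The plugin inventory control, for instance, reduces to a diff between what is installed and what is accounted for. A minimal sketch with invented plugin names:

```python
# Owned, justified plugin inventory versus what the controller reports.
inventory = {
    "git": {"owner": "build-team", "version": "5.2.0"},
    "credentials-binding": {"owner": "platform", "version": "1.27"},
}
installed = {"git": "5.2.0", "credentials-binding": "1.27", "shiny-new-plugin": "0.1"}

def unaccounted(installed: dict, inventory: dict) -> list:
    """Return plugins with no inventory entry or with version drift."""
    drift = []
    for name, version in installed.items():
        entry = inventory.get(name)
        if entry is None:
            drift.append(f"{name}: not in inventory")
        elif entry["version"] != version:
            drift.append(f"{name}: version drift {entry['version']} -> {version}")
    return drift

print(unaccounted(installed, inventory))
# ['shiny-new-plugin: not in inventory']
```

The value is not the code, which is trivial, but the discipline it forces: every privileged extension point has a named owner, a pinned version and a reason to exist.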

This is not a call to freeze automation. Manual release processes are not automatically safer. They are often slower, less reproducible and more dependent on privileged humans. The point is narrower: CI/CD has become a production identity plane, and production identity planes require security engineering rather than trust by habit.

Build trust needs independent verification

The hard problem is that CI/CD systems are asked to generate proof about themselves. A build log says a job ran. An attestation says an artefact was produced by a workflow. A signature says a key signed the output. These are useful records, but their meaning depends on the integrity of the environment that produced them.

That does not make attestations or signatures useless. It makes them insufficient when treated as magic. A signed malicious artefact is still malicious. An attestation from a compromised workflow is still an attestation from a compromised workflow. Provenance helps when the provenance boundary is well understood and independently protected. It misleads when it becomes a badge of comfort.

The mature model is layered. Source review remains necessary. Dependency controls remain necessary. Build isolation, least privilege, deterministic rebuilds, independent verification, constrained identity federation and release monitoring all have to sit beside them. The aim is not perfect trust. It is to stop one compromised automation component from becoming a universal signing oracle, deployment robot or cloud administrator.
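
Independent verification can be as blunt as rebuilding and comparing digests. A sketch, assuming you can obtain both the published artefact bytes and the output of an isolated, deterministic rebuild:

```python
import hashlib

def digest(artifact: bytes) -> str:
    """SHA-256 digest of an artefact's bytes."""
    return hashlib.sha256(artifact).hexdigest()

# In practice these bytes come from the registry-published artefact
# and from a rebuild in an environment the pipeline cannot touch.
published = b"binary-from-ci"
rebuilt = b"binary-from-ci"

matches = digest(published) == digest(rebuilt)
print("digests match:", matches)  # digests match: True
```

A mismatch does not identify the compromise, but it does something the pipeline's own attestations cannot: it catches the case where the build system lied about its own output.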

This also changes how incidents should be scoped. After a CI/CD compromise, asking whether source code changed is not enough. The investigation has to ask what the pipeline could touch during the window of exposure. Which artefacts were built? Which credentials were available? Which OIDC roles could be assumed? Which packages were published? Which plugins changed? Which downstream systems accepted output from the compromised path?

The uncomfortable answer may be that the source repository is clean and the shipped artefact is not. That is the shape of the problem now.

The target is the trust factory

The older supply chain story was about getting malicious code into a place developers would consume. The newer story is about controlling the systems that transform code into trusted output.

GitHub Actions OIDC token extraction shows how cloud identity can be reached through pipeline execution. Jenkins plugin compromise shows how extension ecosystems can sit inside the build authority boundary. Elastic Security Labs' work on CI/CD abuse detection reflects the same movement across multiple platforms: attackers are operating where engineering systems create, bless and distribute software.

Security teams do not need to treat every workflow as hostile. They do need to stop treating workflows as inert configuration. A pipeline is executable code with identity, network access and release authority. A plugin is not background plumbing. A short-lived token is still a credential. A build artefact is not trustworthy because it came from CI. It is trustworthy only to the extent that the CI path was constrained, observed and independently checked.

The software supply chain did not lose trust all at once. It delegated trust to build systems for years because that made delivery faster. Attackers are now collecting on that delegation. The factory that stamps "release" on software has become the asset worth owning.
