Research · Security · 10 min read

Vercel breached through a compromised Context.ai OAuth grant

A compromised AI productivity tool called Context.ai gave attackers OAuth access to a Vercel employee's Google Workspace, which they used to pivot into internal systems. The AI tool supply chain is the new CI/CD supply chain.

Sebastion

Vercel was breached in April 2026 through a compromised third-party AI tool called Context.ai. A Vercel employee had granted the tool OAuth access to their Google Workspace account. When Context.ai was itself compromised, the attacker used that OAuth grant to pivot into the employee's Workspace, then laterally into Vercel's internal systems: Linear, GitHub integrations and the environment variable infrastructure that underpins every deployment on the platform.

Image: Vercel's security bulletin, published within a day of the BreachForums listing.

The breach is not large by the usual metrics. Vercel describes the blast radius as "a small number of customers." The immediate damage is 580 employee records and an unknown quantity of environment variables and internal credentials, listed for sale on BreachForums for $2 million. But the mechanism is more interesting than the impact. This is the Heroku breach of 2022 replayed with an AI-era entry point, and the structural lesson it carries is one the industry has not yet absorbed.

What the attacker accessed

The initial access path ran through Context.ai's Google Workspace OAuth application, identified by client ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com. According to IONIX, the Context.ai compromise affected "hundreds of users across multiple organisations." Vercel was one of them.

From the employee's Workspace, the attacker reached Vercel's internal tooling. The BreachForums listing, reported by TechRadar, claimed access to "multiple employee accounts with access to several internal deployments, API keys (including some NPM tokens and some GitHub tokens)." Screenshots of an internal Vercel Enterprise dashboard accompanied the listing. The 580-record employee data file included names, Vercel email addresses, account status and activity timestamps.

The environment variables are the sensitive part. Vercel's architecture encrypts all customer environment variables at rest, but only variables explicitly marked "sensitive" are protected from being read back. Variables without the sensitive flag were enumerable by an authenticated internal actor. CEO Guillermo Rauch, as reported by EdGen Tech, confirmed that the attacker "got further access through their enumeration."

This means any customer who had not explicitly toggled the sensitive flag on their API keys, database credentials or service tokens had those values exposed to an attacker with internal access. The default was exposure.

The environment variable design flaw

Vercel's post-breach guidance tells customers to "enable the sensitive variable feature for encryption at rest" and to "rotate any credentials that lacked sensitive designation." This framing treats the problem as a configuration issue: customers should have known to flip the switch.

That framing is wrong. When your platform stores secrets and the default behaviour is to leave them enumerable by anyone with internal access, you have not built a secure-by-default system. You have built a system where security is opt-in, and then you have blamed your customers for not opting in.

Image: Vercel's documentation for environment variables. The "sensitive" designation is opt-in, not the default.

The comparison to the CircleCI breach of January 2023 is instructive. In that incident, a stolen SSO session cookie gave an attacker access to customer secrets and environment variables. CircleCI's response included rotating all customer OAuth tokens and advising universal credential rotation. The structural lesson from CircleCI was that platform-level secrets should be treated as compromised whenever an internal access boundary is broken.

Vercel's architecture appears to have learned half that lesson. Sensitive-flagged variables were protected. Everything else was not. For a platform that hosts the frontends of a significant portion of the Web3 ecosystem, including wallet interfaces and DEX deployments, this default is not a configuration oversight. It is a design choice with consequences.

Blockonomi reported that Orca, the Solana-based DEX, proactively rotated all deployment keys after the disclosure. Other crypto projects on Vercel face the same question: did they mark their environment variables as sensitive, or did they trust the platform's defaults?

OAuth and the trust model that keeps breaking

The OAuth application model works on a simple premise: a user grants a third-party application specific permissions (scopes) to act on their behalf within a service. The user trusts the application. The service trusts the user's judgement. If the application is compromised, every user who granted it access becomes a pivot point.
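The premise can be sketched in a few lines. This is an illustrative model, not Vercel's or Google's actual data structures; the grant records and client IDs below are invented for the example.

```python
# Minimal sketch of the OAuth grant trust model: a user authorises a
# client (application) for a set of scopes. If the client is
# compromised, every grant it holds becomes a pivot point.
from dataclasses import dataclass

@dataclass
class Grant:
    user: str          # who authorised the application
    client_id: str     # which third-party application
    scopes: tuple      # what it may do on the user's behalf

def blast_radius(grants, compromised_client_id):
    """Every user who authorised a compromised client is exposed."""
    return [g for g in grants if g.client_id == compromised_client_id]

grants = [
    Grant("alice@example.com", "ai-notes-app", ("gmail.readonly", "drive")),
    Grant("bob@example.com",   "ci-runner",    ("repo.read",)),
    Grant("carol@example.com", "ai-notes-app", ("gmail.readonly",)),
]

# Compromising one popular application exposes every user who trusted it.
pivots = blast_radius(grants, "ai-notes-app")
print([g.user for g in pivots])  # → ['alice@example.com', 'carol@example.com']
```

Note that no exploit appears anywhere in this model: the damage is bounded only by the scopes granted in good faith, which is exactly what the Heroku incident demonstrated.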

This model has been exploited before. In April 2022, Salesforce-owned Heroku disclosed that stolen OAuth tokens for its GitHub integration were used to download private repositories from dozens of organisations, including npm. The tokens had been issued legitimately. The integration had been authorised by the users. When the tokens were stolen, the scope of the damage was determined not by any exploit but by the breadth of permissions that had been granted in good faith.

The Vercel breach follows the same pattern with one critical difference: the compromised OAuth grant did not belong to a CI/CD tool or a source control integration. It belonged to an AI productivity tool. Context.ai is the kind of application that employees add to their workflow because it makes them faster, because everyone else on the team is using something similar, because the onboarding flow is a single "Authorise" button that asks for Workspace access and the employee has no reason to read the scope list carefully.

The AI tool ecosystem has inherited every pathology of the browser extension ecosystem and the CI/CD integration ecosystem before it. Broad permissions requested by default. Minimal vetting by the user or the organisation. No centralised visibility into what has been authorised. The difference is speed: the number of AI tools requesting OAuth access to enterprise systems has grown faster than any previous category of third-party integration, and security teams have not kept pace.

IONIX's assessment that Context.ai affected hundreds of users across multiple organisations suggests this was not a targeted attack on Vercel specifically.

Image: IONIX's threat advisory detailing the Context.ai compromise and its impact across multiple organisations.

It was an opportunistic compromise of a tool with a broad user base, and Vercel happened to be in the blast radius. The next one will be a different AI tool and a different enterprise, but the mechanism will be identical.

Attribution and the BreachForums listing

The BreachForums listing claimed affiliation with ShinyHunters, a threat actor group known for database breaches and data sales. However, individuals associated with ShinyHunters have publicly disputed any connection to the Vercel incident, according to TechRadar. The actual attacker's identity remains unconfirmed.

Rauch's statement, reported by EdGen Tech, offered his own assessment: "We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel." The claim that the attacker was "accelerated by AI" is unverifiable from the outside, but it adds a recursive quality to the incident: an AI tool was the entry point, and AI may have been the accelerant.

The $2 million ransom demand is modest by current standards. Whether Vercel has engaged with the demand is unknown. The company's public response has focused on customer notification, credential rotation guidance and the engagement of Google Mandiant for forensic investigation.

The AI tool supply chain

The security industry spent the past four years learning that CI/CD pipelines are supply chain attack surfaces. The Codecov breach of 2021, the Heroku OAuth compromise of 2022, the CircleCI breach of 2023 and the tj-actions supply chain attack of 2025 all demonstrated that the tools developers use to build and deploy software are themselves targets. The response was hardening: pinned dependencies, OIDC tokens instead of long-lived secrets, workflow permissions audits.

AI tools represent the next iteration of the same problem, and the defences have not caught up. An AI coding assistant that requests read access to a GitHub organisation is functionally identical to a CI/CD integration that requests the same scope. An AI meeting summariser that requests full Google Workspace access is granting itself the same lateral movement potential that an attacker would pay for. The difference is that AI tools are adopted faster, vetted less and granted broader permissions because their utility is immediate and their risk is abstract.

Organisations that spent the last two years tightening their CI/CD supply chain may find they have a parallel, unmonitored supply chain of AI tools running through their employees' OAuth grants. Context.ai was one tool at one company. The pattern is already widespread.

What Vercel did right

The response deserves recognition. Vercel published a security bulletin within a day of the BreachForums listing. Rauch posted detailed, specific transparency statements on X rather than hiding behind a press release. The company engaged Google Mandiant, notified law enforcement, contacted Context.ai directly and began rolling out dashboard improvements for environment variable management. Services remained operational throughout.

Compared to incidents where companies wait weeks to disclose, minimise the scope in initial statements or hide behind legal counsel, Vercel's response was fast, specific and technically honest.

Image: TechRadar's report on the Vercel breach, published after the BreachForums listing surfaced.

Rauch's acknowledgement that the "non-sensitive" environment variable default was a factor, rather than blaming customers for misconfiguration, was unusual in its candour.

The speed of response does not fix the architectural issue. But it does set a standard for how platform providers should communicate when the trust boundary breaks.

What defenders should do now

The immediate actions for Vercel customers are straightforward. Audit every environment variable in every project. Enable the sensitive flag on anything that contains a credential, token or key. Rotate any credential that was not marked sensitive before the breach disclosure. Check Google Workspace admin logs for the Context.ai OAuth application client ID and revoke it if present.
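The audit step can be scripted. The sketch below assumes environment variable records shaped like the objects returned by Vercel's REST API (`GET /v9/projects/{id}/env`), where a `type` field distinguishes sensitive variables; the exact field names and type values are assumptions to verify against the current API documentation.

```python
# Audit sketch: anything that holds a credential but is not marked
# "sensitive" was enumerable by an internal actor and should be rotated.
# Record shape is an assumption modelled on Vercel's env var API objects.

def needs_rotation(env_vars):
    """Return the keys of variables lacking the sensitive designation."""
    return [v["key"] for v in env_vars if v.get("type") != "sensitive"]

envs = [
    {"key": "DATABASE_URL",        "type": "encrypted"},  # encrypted at rest, still enumerable
    {"key": "STRIPE_SECRET_KEY",   "type": "sensitive"},  # protected
    {"key": "NEXT_PUBLIC_API_BASE", "type": "plain"},
]

print(needs_rotation(envs))  # → ['DATABASE_URL', 'NEXT_PUBLIC_API_BASE']
```

In a real audit the `envs` list would come from an authenticated API call per project; the filtering logic is the point here.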

The broader action is harder. Organisations need visibility into every OAuth grant their employees have made to third-party AI tools. Most do not have this visibility. Google Workspace admins can review third-party app access in the admin console, but the tool only shows what has been authorised, not whether the authorisation is appropriate. Building a review process for AI tool OAuth grants before they are approved, rather than after they are exploited, is the structural fix.
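A review process like this can start from the grant inventory itself. The sketch below assumes token records loosely shaped like the responses of the Google Workspace Admin SDK Directory API (`tokens.list`), which reports each grant's client ID and scopes per user; the field names, the `user` attribution, and the broad-scope policy are assumptions for illustration. The compromised client ID is the one identified earlier in this article.

```python
# Grant review sketch: flag grants to a known-compromised client, and
# grants whose scopes are broad enough to enable lateral movement.
# Record shape and scope policy are illustrative assumptions.

COMPROMISED = ("110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj"
               ".apps.googleusercontent.com")
BROAD_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
}

def flag_grants(tokens):
    """Return (user, client_id, reason) tuples for grants needing review."""
    flagged = []
    for t in tokens:
        if t["clientId"] == COMPROMISED:
            flagged.append((t["user"], t["clientId"], "known-compromised"))
        elif BROAD_SCOPES & set(t["scopes"]):
            flagged.append((t["user"], t["clientId"], "broad-scope"))
    return flagged

tokens = [
    {"user": "dev@example.com", "clientId": COMPROMISED,
     "scopes": ["https://mail.google.com/"]},
    {"user": "pm@example.com", "clientId": "calendar-helper",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

print(flag_grants(tokens))
```

The hard part is not the filtering but making this run continuously, and gating new grants on the same policy before the "Authorise" button does its work.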

The pattern repeats

Every few years, the industry learns that a new category of trusted integration is also a new category of attack surface. Browser extensions were trusted, then weaponised. CI/CD tools were trusted, then weaponised. AI productivity tools are trusted now.

The Vercel breach is a small incident with a large lesson. The OAuth model assumes that users and organisations can evaluate the security posture of every application they authorise. That assumption was already failing when the applications were GitHub integrations and CI runners. It fails faster when the applications are AI tools that multiply weekly, request broad scopes by default and are adopted by employees before security teams know they exist.

The next breach that runs through a compromised AI tool will not be novel. It will just be the one where someone notices the pattern has been repeating.
