Non-human identities outnumber humans by as much as 45 to 1 in cloud environments and most have no monitoring at all
Service accounts, API keys, OAuth tokens and AI agent credentials now vastly outnumber human users in enterprise cloud environments. The security models designed for human identity governance do not apply. The gap is producing a new class of breach.
For every human user in a typical enterprise cloud environment, there are between 25 and 45 non-human identities: service accounts, API keys, OAuth client credentials, machine tokens, CI/CD pipeline secrets, webhook signing keys and, increasingly, AI agent credentials that act autonomously on behalf of no specific person. These identities authenticate, authorise and execute. Most of them do so without multi-factor authentication, without session expiry, without behavioural monitoring and without anyone reviewing whether they still need the permissions they were granted eighteen months ago.
The security industry spent a decade building identity governance around humans. Conditional access policies, passwordless authentication, risk-based step-up challenges, session duration limits, impossible travel detection. These controls assume a user who logs in from a browser, who has a geographic location, who types at a human speed, who can be prompted to verify themselves. Non-human identities satisfy none of these assumptions. They authenticate once, often with a static credential. They operate continuously until someone remembers to rotate the key. Many organisations never do.
What counts as a non-human identity
The term covers everything that authenticates to a system without a human directly present at the keyboard. In practice, the population breaks down into several categories, each with different risk profiles.
Service accounts are the oldest and most familiar. A database connector that authenticates to a cloud SQL instance. A monitoring agent that reads metrics from a Kubernetes API. A backup process that writes to object storage. These have existed since before cloud computing, but cloud platforms made them trivially easy to create and remarkably difficult to inventory. A single Terraform module can instantiate a service account, assign it a role binding and generate a key file in three lines. Decommissioning it requires someone to remember it exists.
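A minimal sketch of what that missing decommissioning step could look like in code, assuming the key listing has already been exported from a cloud provider's IAM API or a Terraform state dump as structured data. The account names and dates are invented for illustration:

```python
# Flag service-account keys older than a rotation threshold.
# The listing below is hypothetical sample data; in practice it would
# come from an IAM API export or a Terraform state file.
from datetime import datetime, timedelta, timezone

ROTATION_LIMIT = timedelta(days=90)

keys = [
    {"account": "svc-migration-2023", "created": "2023-04-01T00:00:00+00:00"},
    {"account": "svc-metrics-reader", "created": "2025-01-10T00:00:00+00:00"},
]

def stale_keys(keys, now=None, limit=ROTATION_LIMIT):
    """Return accounts whose key age exceeds the rotation limit."""
    now = now or datetime.now(timezone.utc)
    return [
        k["account"]
        for k in keys
        if now - datetime.fromisoformat(k["created"]) > limit
    ]

print(stale_keys(keys, now=datetime(2025, 2, 1, tzinfo=timezone.utc)))
```

Even a crude report like this surfaces the migration-project account that everyone forgot; the hard part is getting the key listing in the first place.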
API keys and tokens are the credentials that glue SaaS platforms together. Every integration between Slack and Jira, every webhook from GitHub to a CI system, every OAuth token that connects a third-party analytics tool to a data warehouse represents a non-human identity with its own permission scope. These tokens are frequently long-lived. GitHub personal access tokens default to no expiry. Slack bot tokens persist until explicitly revoked. Many organisations have hundreds of these integrations, configured by individual engineers over years, with no centralised register of what connects to what.
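The centralised register that most organisations lack does not need to be sophisticated to be useful. A sketch, with invented entries, of the two fields that matter most: who owns a token and when it stops working on its own:

```python
# A minimal integration register: every token that glues two systems
# together gets an owner and an expiry. Entries are illustrative; the
# tokens with no expiry and no owner are the ones that outlive
# everyone's memory of them.
integrations = [
    {"name": "slack-to-jira", "owner": "platform-team", "expires": "2026-01-01"},
    {"name": "gh-webhook-ci", "owner": None, "expires": None},
    {"name": "analytics-oauth", "owner": "data-team", "expires": None},
]

def unaccountable(register):
    """Tokens that never expire or have no recorded owner."""
    return [
        i["name"]
        for i in register
        if i["expires"] is None or i["owner"] is None
    ]

print(unaccountable(integrations))
```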
CI/CD pipeline identities occupy a particularly sensitive position. A GitHub Actions workflow, a GitLab CI runner, a Jenkins agent: these systems routinely hold credentials that can push code to production, modify infrastructure and access secrets vaults. The tj-actions/changed-files supply chain attack in March 2025 demonstrated what happens when a CI/CD identity is compromised. A single malicious commit, reached via retroactively rewritten version tags, caused the action to dump pipeline secrets, including cloud provider credentials and npm tokens, into build logs; for public repositories, those logs were world-readable. The attack did not target humans. It targeted the non-human identities that CI/CD systems use to do their work.
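The standard mitigation for tag-rewriting attacks of this kind is to pin actions to a full commit SHA rather than a mutable tag. A sketch of a linting check for that rule; the workflow snippet and the SHA are illustrative:

```python
# Flag GitHub Actions 'uses:' references pinned to a mutable tag
# rather than a full 40-character commit SHA. Tags can be rewritten
# by an attacker who controls the action's repository; a SHA cannot.
import re

USES = re.compile(r"uses:\s*([\w.-]+/[\w.-]+)@([\w.-]+)")
SHA = re.compile(r"^[0-9a-f]{40}$")

workflow = """\
steps:
  - uses: actions/checkout@v4
  - uses: tj-actions/changed-files@2f7c5bfce28377bc069a65ba478de0a74aa0ca32
"""

def unpinned(yaml_text):
    """Return action references not pinned to a commit SHA."""
    return [
        f"{repo}@{ref}"
        for repo, ref in USES.findall(yaml_text)
        if not SHA.match(ref)
    ]

print(unpinned(workflow))
```

Pinning does not fix over-broad pipeline permissions, but it removes the cheapest path to exploiting them.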
AI agent credentials are the newest and fastest-growing category. An AI coding assistant that can read and write files, execute shell commands and make API calls needs credentials for each of those capabilities. An autonomous agent that monitors a Slack workspace, triages support tickets and updates a CRM needs OAuth tokens for every service it touches. Each of these is a non-human identity. Each one typically receives broad permissions because restricting an agent's access requires predicting every action it might need to take. The entire value proposition of autonomous agents is that they act in ways their operators did not specifically anticipate.
I have written previously about the security vacuum in agentic AI systems. The identity problem compounds it. An AI agent is not just software with permission; it is software that makes its own decisions about how to use that permission. When a service account reads from a database, it executes a query that a human wrote. When an AI agent reads from a database, it constructs its own query based on a natural language prompt that may have been manipulated. The distinction matters enormously for threat modelling, but identity governance systems cannot express it.
Why human-centric controls fail
The entire modern identity stack was designed for a specific threat model: an adversary who steals or guesses a human's credentials and uses them to access resources. The defences follow logically. Require a second factor so that a stolen password is insufficient. Detect impossible travel so that a credential used from Lagos and London within an hour triggers review. Enforce session timeouts so that a stolen session token expires before it can be fully exploited. Monitor typing patterns and mouse movements to detect credential sharing.
None of this applies to a service account that authenticates with a static JSON key from a fixed IP address, running the same API calls every sixty seconds, around the clock. There is no second factor to require. There is no geographic anomaly to detect. There is no session to expire. The legitimate behaviour and the malicious behaviour look identical, because the legitimate behaviour is already automated.
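That regularity cuts both ways: an identity that does the same thing every sixty seconds is trivially easy to baseline, if anyone bothers. A sketch of the simplest possible version, where the baseline is just the set of API methods the account has ever legitimately called. The identities, methods and events are invented:

```python
# Automated identities are highly regular, which makes anomaly
# detection cheap: record what each account normally calls, flag
# anything outside that set. Event data is hypothetical.
baseline = {"svc-metrics-reader": {"metrics.list", "metrics.get"}}

events = [
    {"identity": "svc-metrics-reader", "method": "metrics.list"},
    {"identity": "svc-metrics-reader", "method": "secrets.access"},  # off-baseline
]

def anomalies(events, baseline):
    """Calls by a known identity outside its recorded behaviour."""
    return [
        (e["identity"], e["method"])
        for e in events
        if e["method"] not in baseline.get(e["identity"], set())
    ]

print(anomalies(events, baseline))
```

A service account that suddenly calls `secrets.access` after a year of reading metrics is a far stronger signal than any impossible-travel alert; the point is that almost no one collects the baseline.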
Conditional access policies in Azure AD (now Entra ID) can be scoped to service principals, but in practice most organisations exempt them. A service principal that needs to authenticate programmatically cannot respond to an MFA prompt. The workaround is to exclude it from the policy, grant it a long-lived credential and hope for the best. Microsoft's own documentation acknowledges this gap. Workload identity federation was introduced to eliminate static credentials for some scenarios, but adoption requires re-architecting each integration individually. Most organisations have hundreds.
The result is a two-tier identity regime. Human users face increasingly sophisticated controls. Non-human identities operate with the security posture of 2010: static credentials, no MFA, no session limits, no behavioural analytics, permissions granted at provisioning and never reviewed.
The permission decay problem
Human identities accumulate permissions over time. This is a well-understood problem with well-understood solutions: periodic access reviews, role-based access control, just-in-time elevation. Non-human identities accumulate permissions too, but the review mechanisms do not apply.
When an engineer provisions a service account for a migration project, they grant it the permissions the migration needs. When the migration completes, the service account remains. Its credentials remain valid. Its permissions remain active. No one files a ticket to decommission it because no one remembers it exists. The Terraform state file might reference it, but Terraform state files are not access review tools.
This pattern repeats at scale. A typical enterprise cloud environment contains hundreds of service accounts created for projects that finished months or years ago, integration tokens for SaaS products the organisation no longer uses, API keys generated by engineers who have since left the company. Each one is an authenticated identity with standing permissions and no human oversight.
The problem is worse for AI agents because their permission requirements are genuinely unpredictable. A human user's role can be defined by their job function. A service account's role can be defined by the application it supports. An AI agent's role is defined by whatever task it is given next. That task may require permissions that were not anticipated when the agent was provisioned. The path of least resistance is to grant broad access and rely on the agent's instruction set to constrain its behaviour. This is the same architectural error as running a web application as root and relying on input validation to prevent abuse. It works until it does not.
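The alternative to broad access plus prompt instructions is a deny-by-default gate between the agent and its credentials, enforced in code rather than in natural language. A sketch under invented names, not a real agent-framework API:

```python
# Deny-by-default tool gate: the agent may request any action, but
# only (resource, action) pairs on an explicit allowlist execute.
# The allowlist lives in code, where a manipulated prompt cannot
# rewrite it. Resource and action names are illustrative.
ALLOWED = {("crm", "read"), ("tickets", "update")}

def gate(resource, action):
    """Enforce the permission boundary in code, not in the prompt."""
    if (resource, action) not in ALLOWED:
        raise PermissionError(f"{action} on {resource} is not permitted")
    return f"{action} on {resource} executed"

print(gate("tickets", "update"))
try:
    gate("payments", "create")
except PermissionError as e:
    print(e)
```

The cost is exactly the one the path of least resistance avoids: someone has to enumerate what the agent is allowed to do. That enumeration is the security control.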
What the breaches look like
Cloud breaches involving non-human identities rarely look like traditional account compromises. There is no login event to detect. There is no password to reset. The attacker simply uses a valid credential that was never intended to be examined.
The pattern I have observed in my own research is consistent. In the full-stack-ai-agent-template SSRF finding, the webhook service made HTTP requests using credentials that were not scoped to the service's legitimate needs. The service account had network-level access that exceeded its functional requirements. The vulnerability was a code-level SSRF, but the blast radius was determined by the identity's permissions. In the Hermes Agent path traversal, the agent's file system access was constrained only by its prompt instructions, not by its credential scope. The identity had access to everything; the natural language instructions were the security boundary.
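The blast-radius lesson from the SSRF case can be applied even when the code-level bug ships: an egress check on outbound requests bounds what a compromised webhook service can reach. A sketch, not the fix used in that project, which refuses private, loopback and link-local targets (the ranges where cloud metadata endpoints live):

```python
# Egress guard for webhook-style outbound requests: resolve the
# target host and refuse internal address ranges. This limits SSRF
# blast radius even if the request-construction bug itself remains.
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_webhook_target(url):
    """True only if every resolved address is publicly routable."""
    host = urlparse(url).hostname
    if host is None:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True

print(is_safe_webhook_target("http://169.254.169.254/latest/meta-data/"))
```

A resolve-then-connect check like this is still vulnerable to DNS rebinding unless the connection reuses the validated address, which is why the credential-scope point stands: the identity behind the request should not have had that reach in the first place.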
These are not edge cases. They are the default architecture. When I audit AI agent projects, the most common finding is not a specific vulnerability in the code. It is that the agent's identity has permissions that no human user would be granted, exemptions from controls that no human user would receive and logging that captures less detail than the audit trail for a junior analyst opening a spreadsheet.
The inventory problem no one wants to fund
Before an organisation can govern its non-human identities, it needs to know how many it has. This turns out to be surprisingly difficult. Cloud platforms provide IAM dashboards, but these show only the identities managed within that platform. They do not show the Slack bot tokens stored in a secrets manager, the third-party SaaS integrations configured through vendor dashboards or the API keys hard-coded in application configuration files that predate the secrets manager.
A complete NHI inventory requires correlating data from the cloud IAM layer, the secrets management system, the CI/CD platform, every integrated SaaS product and the application code itself. No single tool provides this view. The emerging category of non-human identity management platforms, from vendors like Astrix, Oasis and Silverfort, is attempting to solve it, but adoption is early and the problem space is large.
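At its core the correlation is a set problem: identities that appear in only one source are invisible to every other system's review process. A sketch over invented source data:

```python
# Unified NHI inventory as a correlation exercise: each source
# exports the identities it knows about; identities visible to
# exactly one source escape every other system's review.
# Source names and identities are invented.
sources = {
    "cloud_iam": {"svc-backup", "svc-migration-2023"},
    "secrets_manager": {"svc-backup", "slack-bot-token"},
    "saas_dashboards": {"slack-bot-token", "jira-integration"},
}

def single_source_identities(sources):
    """Identities visible to exactly one inventory source."""
    seen = {}
    for source, identities in sources.items():
        for identity in identities:
            seen.setdefault(identity, []).append(source)
    return sorted(i for i, s in seen.items() if len(s) == 1)

print(single_source_identities(sources))
```

The hard work is not the set arithmetic; it is getting each source to export its identity list at all, and normalising names so that the same credential is recognised across systems.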
Most organisations do not have an NHI inventory. They have a human identity directory (Active Directory, Okta, Google Workspace) and a vague awareness that service accounts exist. The gap between those two states is where breaches live.
What this means for the perimeter
The traditional network perimeter was replaced by the identity perimeter. Zero trust architectures assume that identity is the control plane: authenticate every request, authorise every action, verify continuously. This model works when the identities can be verified. It fails when a significant fraction of authenticated requests come from identities that cannot be challenged, cannot be monitored behaviourally and are not subject to access review.
Non-human identity sprawl does not just create more attack surface. It undermines the conceptual foundation of identity-centric security. If 97% of authenticated sessions in an environment are non-human and those sessions are exempt from the conditional access policies, behavioural analytics and session controls that define the security model, then the security model governs 3% of the traffic. The remaining 97% operates on trust, static credentials and the hope that no one finds the key.
The industry is building increasingly sophisticated controls for the 3% while largely ignoring the 97%. The next generation of cloud breaches will not come through phishing a human user and defeating their MFA. They will come through finding a service account key in a public repository, exploiting an over-permissioned CI/CD token or manipulating an AI agent into using its legitimate credentials for illegitimate purposes. The credentials will be valid. The permissions will be sufficient. The audit log will show nothing anomalous, because automated behaviour was never baselined in the first place.
The cheapest way past a zero trust architecture has always been to become a trusted identity that no one is watching.