In December 2024, attackers gained access to U.S. Treasury systems without phishing a single user and without bypassing any MFA. The entry point was a compromised API key belonging to a third-party SaaS provider — a non-human credential that was not rotated, not monitored, and not governed. One static secret, and a federal agency was breached.
That is the pattern. Non-human identities are the unmanaged perimeter of most IAM programs. Organizations have invested heavily in securing human identities — MFA, lifecycle automation, access certifications — and left machine identities to accumulate in the dark. AI agents are about to make that gap catastrophic.
What NHIs Actually Are
A non-human identity (NHI) is any identity that authenticates to a system without a human being the direct actor. That definition covers a wide taxonomy, and the distinctions matter for how you govern them.
Service accounts — directory accounts used by applications or scheduled tasks to run with delegated permissions. Often created once, then forgotten. Frequently over-privileged because "it was easier at the time."
API keys — static, long-lived tokens used to authenticate to SaaS platforms, internal services, and cloud APIs. The most common form of secrets sprawl. GitHub's secret scanning data found 39 million exposed secrets in public repositories in 2024 alone.
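As a sketch of how scanners catch these, a few well-known token formats can be matched with regular expressions. The AWS access key ID and GitHub PAT patterns below reflect real token formats; the generic rule and the `scan_text` helper are illustrative only, and real scanners (GitHub secret scanning, gitleaks, trufflehog) ship hundreds of vetted rules:

```python
import re

# Illustrative rules only. Production scanners use far larger,
# continuously updated rule sets with entropy checks.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"""(?i)\bapi[_-]?key\s*[=:]\s*['"][A-Za-z0-9_\-]{20,}['"]"""
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_string) pairs found in a blob of text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Running this over a repository's files, commit history, and CI configuration is the minimum viable version of the discovery work described later in this post.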
OAuth clients and service principals — machine-to-machine OAuth flows used by applications to authenticate without user context. Common in cloud workloads. In Okta, these are registered as service apps. Governance here is largely invisible to most teams.
CI/CD tokens — credentials embedded in pipeline configurations, GitHub Actions secrets, or environment variables. The CircleCI breach in January 2023 came down to this: long-lived API tokens stolen from a CI environment forced every CircleCI customer to rotate all their secrets simultaneously.
Certificates — TLS certs, mutual TLS client certificates, and code signing certs used for service-to-service trust. Certificate sprawl is its own discipline, and expired certs causing outages is a recurring story across every industry.
AI agents — this is the new category, and the one most teams are completely unprepared for. LLMs and autonomous agents that call APIs, read and write data, execute code, and spawn sub-agents all need identities to act. More on this shortly.
The Scale Problem
Machine identities vastly outnumber human identities in any organization with meaningful cloud presence and SaaS integrations. Most of those machine identities have never been formally inventoried, let alone governed. And unlike human identities, they have no natural lifecycle — no start date, no HR-driven role changes, no termination event to trigger deprovisioning.
These identities accumulate indefinitely. Their associated secrets sprawl across codebases, CI pipelines, documentation wikis, Slack messages, and developer laptops. A service account created for a project three years ago is likely still active — because nothing triggered its decommission, and nobody knows what it still touches.
Compromised credentials are the dominant attack vector in cloud breaches, according to the Verizon DBIR. The credential in question is rarely a human's — it is a static API key that was never rotated, a service account with excessive permissions, or a CI token that lived far past its useful life.
Secrets sprawl makes rotation feel riskier than inaction. A single integration often involves credentials stored in multiple places: the application config, the CI/CD secret store, a developer's local .env file, and a setup document from the original build. Rotating means finding every copy first, or breaking whatever still depends on the old value. Teams run that calculus quietly, conclude that leaving the credential in place is the safer bet, and move on. Attackers know it.
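Before rotating, many teams sweep every location they control for the literal credential value. A minimal sketch, assuming the secret is known and the sweep is limited to local directory trees (a real sweep also has to cover CI secret stores, wikis, and chat exports, which need per-system API calls):

```python
from pathlib import Path

def find_copies(secret: str, roots: list[Path]) -> list[Path]:
    """Walk the given directory trees and return every file that
    contains the literal secret value."""
    hits = []
    for root in roots:
        for path in root.rglob("*"):
            if not path.is_file():
                continue
            try:
                if secret in path.read_text(errors="ignore"):
                    hits.append(path)
            except OSError:
                continue  # unreadable file; skip rather than abort the sweep
    return hits
```

Only after every copy is located can the old credential be revoked without guessing at what will break.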
Why Traditional IAM Governance Breaks
The governance frameworks most organizations run were designed for human identities. They assume a few things that are simply not true for machine identities.
MFA as a control. You cannot MFA a service account. The entire authentication model for NHIs is credential-based — a secret, a key, a certificate. The controls that make human identity resilient do not apply. That means the credential itself has to be the primary security control, which requires rotation policies, secrets management infrastructure, and expiration enforcement that most teams do not have.
Offboarding triggers. The JML lifecycle works because HR events drive identity events. There is no equivalent for NHIs: when a project ends, no system fires a termination event for its service accounts, so they simply stay active.
Ownership clarity. Human identities are owned by the person. NHIs are owned by whoever created them — or the team that uses the service, or the platform team. In practice, NHIs are frequently unowned or owned by someone who has since left the organization. Unowned identities do not get reviewed, do not get rotated, and do not get decommissioned.
Access certifications. Running a certification campaign against human identities is already difficult. Doing it for NHIs requires knowing they exist, knowing who owns them, and having a reviewer who understands what the entitlement actually does. Most campaigns simply exclude service accounts because including them produces noise, not signal.
AI Agents: A New Class of NHI
LLMs are no longer just generating text. They are being deployed as autonomous agents that take actions: calling APIs, reading and modifying data, running code, browsing the web, and coordinating with other agents. To do any of that, they need identities. And those identities need permissions.
Consider what a production AI agent actually does. It authenticates with an OAuth client credential or API key. It calls your CRM to read customer records. It calls your ticketing system to create or update issues. It may spawn sub-agents to parallelize tasks, each of which needs its own credential or inherits permissions from the parent. The agent operates autonomously, at machine speed, across multiple systems simultaneously.
The permissions question for AI agents is genuinely unsolved for most organizations. What should an agent be allowed to read? Write? Delete? Who approves that scope? When the agent spawns a sub-agent, does it inherit the full permission set or a constrained subset? Nobody has clean answers yet — and the blast radius of getting it wrong is not theoretical.
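One defensible default for the sub-agent question is attenuation: a child identity may hold at most the intersection of its parent's scopes and what it requests, so privileges can narrow but never escalate. A hypothetical sketch, with an invented `AgentIdentity` class and scope vocabulary:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical agent identity: a name plus an immutable scope set."""
    name: str
    scopes: frozenset[str]

    def spawn(self, name: str, requested: set[str]) -> "AgentIdentity":
        """Issue a child identity holding at most the intersection of the
        parent's scopes and the requested set: attenuation, not inheritance.
        A production system would also log any denied (escalating) scopes."""
        return AgentIdentity(name, self.scopes & frozenset(requested))
```

Under this rule, a compromised sub-agent can never reach further than the agent that spawned it.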
A compromised human identity gives an attacker human-speed access to whatever that person could reach. A compromised AI agent identity potentially gives an attacker machine-speed access across multiple systems, operating continuously and autonomously. The combination of broad permissions, high autonomy, and machine execution speed creates a threat profile that has no precedent in traditional IAM.
The market has responded accordingly. CyberArk's acquisition of Venafi — completed in October 2024 for $1.54 billion — was the clearest signal yet that machine identity has become a board-level security investment. Dedicated NHI platforms like Oasis Security and Entro Security both raised significant rounds in 2024, reflecting how fast this space is moving.
Governing NHIs in Practice
The governance principles are not exotic. The hard part is applying them consistently across an identity category that was never designed with governance in mind.
Start with Discovery
You cannot govern what you cannot see. Pull every service account from your directory, every OAuth client from your Okta tenant, every secret from your CI/CD system, every API key from your cloud IAM. For most organizations, this inventory does not exist in a single place. Building it — even imperfectly — immediately surfaces risk that was previously invisible.
Assign Explicit Ownership
Every NHI needs an owner — not a team, a person. That individual is responsible for reviewing the credential on a defined cadence, approving rotation, and decommissioning the identity when it is no longer needed. Without explicit ownership, nothing else in the program works.
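Ownership only works when it is enforced by a check rather than a spreadsheet convention. A minimal sketch of an overdue-review gate, assuming each identity record carries `owner` and `last_reviewed` fields (both names are hypothetical) and a 90-day default cadence:

```python
from datetime import date, timedelta

def review_overdue(identity: dict, today: date, cadence_days: int = 90) -> bool:
    """True when the owner's last review is older than the cadence,
    or when no owner or no review is recorded at all."""
    if not identity.get("owner") or not identity.get("last_reviewed"):
        return True
    return today - identity["last_reviewed"] > timedelta(days=cadence_days)
```

Wiring a check like this into a scheduled job turns "every NHI has an owner" from a policy statement into an alert stream.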
Enforce Least Privilege at Creation
Service accounts and API keys accumulate permissions the same way human accounts do — through one-off grants that never get reviewed. For new NHIs, define the minimum permission set before the credential is issued, not after the fact during an audit.
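One way to enforce this is a policy gate that rejects any scope outside an approved baseline before the credential is minted. A sketch with an invented workload name and scope vocabulary; in practice the baselines would live in a policy repo and be reviewed like code:

```python
# Hypothetical per-workload scope baselines.
BASELINES = {
    "report-exporter": {"crm:read", "storage:write"},
}

def validate_request(workload: str, requested: set[str]) -> set[str]:
    """Reject any scope outside the workload's approved baseline
    before the credential is ever issued."""
    baseline = BASELINES.get(workload, set())
    excess = requested - baseline
    if excess:
        raise ValueError(
            f"scopes exceed baseline for {workload}: {sorted(excess)}"
        )
    return requested
```

The point of checking at issuance is that there is nothing to claw back later: the excess grant never exists.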
Eliminate Long-Lived Static Credentials
The most effective fix is to stop issuing long-lived credentials at all. Short-lived tokens issued through OIDC workload identity federation remove stored secrets from the CI/CD attack surface: GitHub Actions, Google Cloud, AWS, and Azure all support this natively, allowing pipeline workloads to authenticate without any persisted credential. And when a token that does leak expires in fifteen minutes, the blast radius is bounded.
Govern Service App OAuth Clients in Okta
Service applications using OAuth 2.0 client credentials can be brought under governance through Okta API Access Management — a registry of client credentials, scoped permissions, and policy enforcement for app-to-app authentication. Okta Workflows can enforce NHI lifecycle events: deactivating a service app when an owning team is disbanded, triggering rotation on a schedule, or alerting when a client credential approaches expiration.
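The scheduled-rotation case reduces to a credential-age check over an export of service apps. A sketch, assuming the export carries a `secret_created` timestamp per client (that field name is hypothetical; the 90-day window is an example policy):

```python
from datetime import datetime, timedelta, timezone

ROTATION_POLICY = timedelta(days=90)  # example rotation window

def needs_rotation(client: dict, now: datetime) -> bool:
    """Flag an OAuth service app whose client secret is older than policy.
    Assumes 'secret_created' was populated from an admin API export."""
    return now - client["secret_created"] > ROTATION_POLICY
```

A workflow that runs this nightly and opens a ticket per flagged client is enough to make rotation a routine event instead of an emergency.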
Include NHIs in Certification Campaigns
Service accounts and OAuth clients should appear in access certification campaigns as a defined population with assigned reviewers who understand what the entitlements do. The reviewer for an NHI should be its designated owner. Teams that run their first NHI-scoped certification almost always find significant over-provisioning — the entitlement cleanup alone justifies the effort.
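Building the campaign population is mechanical once ownership exists: pair each NHI with its owner as reviewer, and route unowned identities to a remediation queue rather than silently excluding them. A minimal sketch, with hypothetical record fields:

```python
def build_campaign(identities: list[dict],
                   fallback_reviewer: str) -> list[tuple[str, str]]:
    """Pair every NHI with its owner as reviewer. Unowned identities go
    to a fallback queue instead of dropping out of the campaign, so the
    ownership gap itself becomes a reviewable finding."""
    return [
        (i["id"], i.get("owner") or fallback_reviewer)
        for i in identities
    ]
```

The fallback queue is deliberate: every identity that lands there is evidence of the ownership problem described above, surfaced where auditors will see it.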
My Take
Most IAM programs treat non-human identities as a secondary problem — something to address after the human identity work is done. That sequencing made some sense five years ago. It does not make sense now.
Machine credentials are the primary attack vector in cloud environments. The Treasury breach, the CircleCI incident, the BeyondTrust API key — these are all the same class of failure: ungoverned machine credentials with no rotation, no ownership, and no decommission path.
The organizations getting ahead of this treat NHI discovery as a continuous process, enforce ownership as a hard requirement, and extend their existing lifecycle governance model to cover machine identities with the same rigor they apply to humans. Not a separate program — the same one, extended. Okta gives you the building blocks to do exactly that.
If you are standing up NHI governance or want to understand where your current program has gaps, our IAM Architecture & Governance Advisory is how we scope and deliver those engagements.
