Most organizations have mature processes for managing human identities. Onboarding, offboarding, access reviews, least privilege — these are established practices, even if execution is inconsistent. The problem is that human identities are no longer the majority of what’s accessing your systems.
Service accounts, API keys, OAuth tokens, automation scripts, and now AI agents — non-human identities outnumber human ones in most enterprise environments by a significant margin, and the governance frameworks built for people don’t translate.
This is the identity crisis that showed up repeatedly at RSAC 2026 and hasn’t left the conversation since. It’s worth going deeper.
The Scale of the Problem
Non-human identities proliferate in ways that human identity management never had to account for. A developer spins up a service account for a deployment pipeline. An integration requires an API key. A SaaS tool is granted OAuth access to your cloud environment. An AI agent is deployed to automate a workflow and given permissions to read and write across multiple systems.
Each of these creates an identity with access. Most of them are created without going through any formal IAM process. Many of them are never formally deprovisioned. Some of them accumulate permissions over time that far exceed what the original use case required.
The result is an identity landscape where a significant portion of what has access to your most sensitive systems is invisible to your identity governance program.
Why Traditional IAM Doesn’t Solve This
Human identity management works because there’s a defined lifecycle — hire, role change, termination — with clear trigger events for provisioning and deprovisioning. Non-human identities don’t follow that lifecycle.
Service accounts don’t get terminated when the project they were created for ends. API keys don’t expire unless someone explicitly sets an expiration. OAuth grants persist indefinitely unless revoked. AI agents can be granted access with a few clicks by a business unit that has no visibility into what permissions they’re actually provisioning.
Access reviews — a cornerstone of human IAM governance — are almost never extended to service accounts and machine identities in a meaningful way. Most organizations can tell you who has access to their critical systems. Very few can tell you what has access.
The AI Agent Dimension
Agentic AI introduces a new category of non-human identity that compounds an already difficult problem. AI agents are being deployed to automate workflows across email, calendars, file systems, CRM platforms, and business applications. They need broad access to function. They operate continuously. And they make decisions and take actions at a speed that makes real-time human oversight impractical.
From an identity and access management perspective, an AI agent looks like a service account with unusually broad permissions and a mandate to act autonomously. The governance questions are the same ones that apply to any privileged account — who owns it, what access does it have, how is that access reviewed, and what happens when it’s compromised — but the risk calculus is different because the agent’s actions can have downstream consequences at machine speed.
Most IAM programs have no policy for AI agent identities. That’s a gap that needs to close before the agents are deployed, not after.
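A policy for agent identities can start as a checklist enforced in code. The field names below (owner, scopes, review_interval_days, last_review) are illustrative, not from any standard schema; the point is that an agent record missing governance metadata, overdue for review, or holding wildcard scopes should fail validation before it reaches production.

```python
from datetime import date, timedelta

# Illustrative required governance metadata for an AI agent identity
REQUIRED_FIELDS = ("owner", "scopes", "review_interval_days", "last_review")

def agent_policy_violations(agent: dict) -> list[str]:
    """Return a list of policy violations for an agent identity record."""
    violations = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in agent]
    if violations:
        return violations
    age = date.today() - agent["last_review"]
    if age > timedelta(days=agent["review_interval_days"]):
        violations.append("access review overdue")
    if "*" in agent["scopes"]:
        violations.append("wildcard scope violates least privilege")
    return violations

# Hypothetical agent record: broad scopes, review long overdue
agent = {
    "owner": "sales-ops",
    "scopes": ["*"],
    "review_interval_days": 90,
    "last_review": date.today() - timedelta(days=200),
}
print(agent_policy_violations(agent))
```

Wiring a check like this into the provisioning workflow turns the written policy into a gate, rather than a document nobody consults after deployment.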
Where to Start
- Build your non-human identity inventory. You can’t govern what you haven’t found. Start with a discovery exercise across your cloud environments, SaaS platforms, and on-premises systems. Catalog service accounts, API keys, OAuth grants, and automated processes with system access. This inventory will be incomplete on the first pass. Do it anyway.
- Apply the same least privilege standard you apply to humans. Every non-human identity should have the minimum access required to perform its function. Most of them don’t. Remediation takes time, but establishing the standard and working toward it is the goal.
- Set expiration dates on API keys and service account credentials. Credentials that don’t expire don’t get rotated. Implement automatic expiration policies and build the renewal process into your existing workflows.
- Extend your access review cycle to non-human identities. Even a semi-annual review of your highest-privilege service accounts and API grants is more than most organizations are currently doing. Start there.
- Define an AI agent identity policy before you need one. If your organization is deploying or evaluating AI agents, write the policy now — ownership requirements, access provisioning process, review cadence, and incident response procedures. Establish the governance framework before the agents are in production.
The human identity problem took years to get under control. The non-human identity problem is already larger and moving faster. The organizations that start building visibility and governance now will be significantly better positioned than those that wait for an incident to force the conversation.
Discussion Questions
- Does your organization have a current inventory of non-human identities — service accounts, API keys, OAuth grants, and automated processes with system access? When was it last reviewed?
- Are non-human identities included in your access review cycle? If not, what would it take to add them?
- Does your organization have a documented policy for AI agent identity governance? If agents are being deployed without one, what’s the path to establishing it?
Further Reading
- NIST SP 800-207 Zero Trust Architecture (Non-Human Identity Context): https://csrc.nist.gov/publications/detail/sp/800-207/final
- CIS Controls v8 – Control 5 (Account Management): https://www.cisecurity.org/controls/account-management
- CISA Identity and Access Management Guidance: https://www.cisa.gov/identity-and-access-management