More Than Two-Thirds of Organizations Cannot Clearly Distinguish AI Agent from Human Actions as Over-Privileged Access Becomes Widespread, Cloud Security Alliance Study Finds
Rapid AI agent expansion is exposing structural gaps in access control maturity, credential hygiene, and identity attribution
SAN FRANCISCO--(BUSINESS WIRE)--RSAC - Seventy-three percent of organizations expect AI agents to become vital within the next year, yet 68% can’t clearly distinguish between human and AI agent activity, according to a new survey report from the Cloud Security Alliance (CSA), the world’s leading not-for-profit organization committed to AI, cloud, and Zero Trust cybersecurity education.
Commissioned by Aembit, The Identity and Access Gaps in the Age of Autonomous AI report found that as AI agents take on greater autonomy and operational responsibility within organizations, the identity and access management (IAM) models used to manage them have failed to keep pace, leaving gaps that must be addressed if organizations are to successfully manage risk and enable their secure adoption.
“AI agents are already embedded within enterprise environments, and as these systems take on more autonomous roles, organizations must address new challenges around identity and access,” said Hillary Baron, AVP of Research, Cloud Security Alliance. “The survey data indicates that existing IAM approaches were not designed for autonomous agents and are showing strain as deployments scale.”
Among the survey’s key findings:
- AI agents are already operating across enterprise systems. AI agents are widely deployed across enterprise workflows, namely task automation agents (67%), research agents (52%), developer-assist agents (50%), and security or monitoring agents (50%). Most deployments extend beyond isolated test settings: 85% of organizations report that they use AI agents in production environments. This cross-environment interaction makes it harder to maintain consistent identity governance and permission boundaries.
- Most AI agents borrow identities. AI agents often exist in an identity gray area: 52% of organizations use workload identities, 43% rely on shared service accounts, and 31% allow agents to operate under human user identities. Without a defined taxonomy, this identity patchwork can lead to unintended consequences, where AI agents inherit permissions beyond their intended role.
- AI agents often inherit access and expand the attack surface. Agent access commonly derives from existing human permissions or automation logic. Nearly three-quarters (74%) say agents often receive more access than necessary, and 79% believe agents create new access pathways that are difficult to monitor. More than half (52%) say agents inherit access originally intended for humans or other systems at least sometimes.
- No single team owns AI agent identity and access. Responsibility is fragmented across departments: 28% say security leads, followed by development/engineering (21%) and IT (19%). Only 9% identify IAM teams as the primary owner. This distributed ownership can lead to inconsistent controls and slower coordination when issues arise.
- Confidence in AI agent access exceeds control maturity. While 57% report moderate or high confidence in identity scoping, operational practices reveal gaps. One-third of organizations (33%) do not know how often AI agent credentials are rotated, 32% aren’t certain how much time is required to implement and maintain authentication or credential handling for a typical AI agent, and only 22% report that access frameworks are applied very consistently to AI agents.
- Governance mechanisms serve as a stopgap for missing identity controls. Many organizations are relying on governance mechanisms to manage risk where identity-level IAM controls are not yet consistently embedded for AI agents. Disabling identities or revoking tokens (49%) is the most common containment action, while 42% report terminating the compute environment where an agent runs. Only 33% report removing or modifying access policies in real time, suggesting that current control strategies emphasize oversight and containment over embedded, identity-bound, real-time enforcement.
“AI agents are inheriting human permissions, operating under shared accounts, and expanding the attack surface in ways that existing IAM tools weren’t designed to handle,” said David Goldschlag, co-founder and CEO of Aembit. “The survey makes the stakes clear: Agentic autonomy without identity-level access controls is a risk organizations can’t afford to ignore.”
Aembit commissioned CSA to develop a survey and report to better understand the industry’s knowledge, attitudes, and opinions regarding autonomous AI agents. Aembit financed the project and co-developed the questionnaire with CSA research analysts. The survey was conducted online by CSA in January 2026, and it received 228 responses from IT and security professionals from organizations of various sizes and locations. CSA’s research analysts performed the data analysis and interpretation for this report.
Download the Identity and Access Gaps in the Age of Autonomous AI survey report.
About Aembit
Aembit is the identity and access management platform for agentic AI and workloads. It enforces access based on identity, context, and centrally managed policies, giving organizations a single place to control access risk from AI agents, automate credential management, and accelerate AI adoption. With Aembit, enterprises can confidently control access to sensitive resources across all the workloads that power their business. Users can visit aembit.io and follow the company on LinkedIn.
About Cloud Security Alliance
The Cloud Security Alliance (CSA) is the world’s leading not-for-profit organization committed to awareness, practical implementation, and credentialing of forward-looking cybersecurity topics, including AI, cloud, and Zero Trust. In an era where digital transformation drives business success, CSA stands as the global authority ensuring organizations can operate securely while harnessing cutting-edge technology. Through volunteer-driven research, globally accepted standards, and award-winning vendor-neutral education programs that unite technical experts, industry practitioners, and varied associations, governments, chapters, and corporate members, CSA bridges the gap between innovation and pragmatic security execution. Visit CSA’s website to learn more.
Contacts
Media Contact
Kristina Rundquist
ZAG Communications for the CSA
kristina@zagcommunications.com
