AI Agents Are Creating Insider Security Threat Blind Spots, Research Finds

Artificial intelligence agents, autonomous software that performs tasks or makes decisions on behalf of humans, are becoming increasingly prevalent in businesses. They can significantly improve efficiency by taking repetitive tasks off employees’ plates, such as calling sales leads or handling data entry. However, because AI agents can operate outside of a user’s direct control, they also introduce a new security risk: Users may not always be aware of what their AI agents are doing, and agents can interact with each other to expand the scope of their capabilities.

A 2025 survey of U.S.-based IT leaders from BeyondID argued that many companies have not adapted their security practices to that reality. BeyondID said only 30% of organizations regularly map non-human identities such as AI agents to critical assets, even as agents log in, access sensitive systems, and trigger actions that used to be limited to employees.

“AI is no longer just a tool,” BeyondID CEO Arun Shrestha said in the report announcement. “It’s acting like a user.”

Impersonation anxiety rises faster than agent governance

BeyondID’s survey data suggested that security leaders were already thinking about agent-driven identity abuse, but many did not rank non-human identity security as a top operational priority. The firm said AI impersonation of users was the top concern for 37% of security leaders, while only 6% ranked securing non-human identities as their most difficult challenge.

“AI agents don’t need to be malicious to be dangerous,” BeyondID said in a press release, framing the gap as a governance failure and warning that unchecked agents could become “shadow users” with broad access and limited accountability.

The risk is not only about an agent being “hacked.” It can also stem from over-permissioned service accounts, weak lifecycle processes, or unclear ownership: conditions that have long affected machine identities and now apply to agents that can plan and act with less direct human input.
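To make that governance gap concrete, the sketch below shows one way a team might flag potential “shadow users”: non-human identities with broad access, no assigned owner, or long-unrotated credentials. The inventory format, field names, and thresholds are assumptions for illustration only; they are not BeyondID’s methodology or any vendor’s schema.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of non-human identities (service accounts, AI agents).
# Field names and values are illustrative assumptions, not a real product schema.
NON_HUMAN_IDENTITIES = [
    {"id": "agent-scheduler-01", "owner": "it-ops", "scopes": ["calendar.read"],
     "last_credential_rotation": "2025-11-02", "critical_systems": ["scheduling"]},
    {"id": "agent-billing-07", "owner": None, "scopes": ["ehr.read", "ehr.write", "billing.admin"],
     "last_credential_rotation": "2024-08-15", "critical_systems": ["ehr", "billing"]},
]

MAX_SCOPES = 2              # "broad access" threshold (assumed)
MAX_ROTATION_AGE_DAYS = 90  # stale-credential threshold (assumed)

def flag_shadow_users(identities, now):
    """Return identities touching critical systems that look unmanaged."""
    flagged = []
    for ident in identities:
        reasons = []
        if ident["owner"] is None:
            reasons.append("no assigned owner")
        if len(ident["scopes"]) > MAX_SCOPES:
            reasons.append("unusually broad scopes")
        rotated = datetime.fromisoformat(ident["last_credential_rotation"]).replace(tzinfo=timezone.utc)
        if now - rotated > timedelta(days=MAX_ROTATION_AGE_DAYS):
            reasons.append("stale credentials")
        if reasons and ident["critical_systems"]:
            flagged.append((ident["id"], reasons))
    return flagged

if __name__ == "__main__":
    audit_time = datetime(2025, 12, 1, tzinfo=timezone.utc)
    for agent_id, reasons in flag_shadow_users(NON_HUMAN_IDENTITIES, audit_time):
        print(f"{agent_id}: {', '.join(reasons)}")
```

Run against the sample inventory, only the billing agent is flagged, for all three reasons; the point is that none of the checks requires the agent to be compromised, only to be unmanaged.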

Healthcare stands out as a high-pressure test case

The healthcare sector is particularly at risk, as it has rapidly adopted AI agents for tasks like diagnostics and appointment scheduling, yet it remains highly vulnerable to identity-related attacks. Of the IT leaders BeyondID surveyed who work in healthcare, 61% said their business had experienced such an attack, while 42% said they had failed a compliance audit related to identity. “AI agents are now handling Protected Health Information (PHI), accessing medical systems, and interacting with third parties often without strong oversight,” the researchers wrote.

The BeyondID report also pointed to the sensitive context: agents handling protected health information, interacting with third parties, and connecting to clinical and administrative systems where downtime and data exposure can carry high costs.

From predictions to frameworks: 2025–2026 marked a shift in “agent security”

Since the BeyondID report’s mid-2025 release, several signals suggest the industry’s conversation has moved from general warnings to more structured approaches. Gartner predicted in June that 33% of enterprise software applications will include agentic AI by 2028, up from less than 1% in 2024, while also warning that more than 40% of agentic AI projects could be canceled by the end of 2027. A few months later, OWASP’s GenAI Security Project published the “OWASP Top 10 for Agentic Applications for 2026,” focusing on risks specific to autonomous, tool-using systems and the controls needed to reduce them.

In parallel, organizations and governments have shown signs of caution about agent autonomy.

Monitoring agents like insiders: Third-party systems and identity-first vendors

As the industry leans into agent governance, one practical gap remains: visibility into what agents actually do after they authenticate. That is where SIEM and behavior-analytics platforms have tried to extend traditional “insider threat” concepts to non-human identities.

This month, Exabeam, a pioneer of SIEM and user and entity behavior analytics (UEBA) capabilities, announced that its New-Scale launch added AI Agent Security and Agent Behavior Analytics. The capabilities are intended to detect suspicious deviations in agent activity or human misuse of AI agents, and to automatically provide evidence within investigation timelines. Identity tools can help define and constrain agent access, while SIEM/UEBA tools can help detect when agents use that access in unexpected or risky ways.
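For readers unfamiliar with how behavior analytics applies to non-human identities, the sketch below shows the general idea: build a baseline of an agent’s typical actions, then score new activity by how far it departs from that baseline. It is a simplified illustration under assumed log fields and thresholds, not a description of Exabeam’s product or detection logic.

```python
from collections import Counter

# Hypothetical activity log for one AI agent; field names are assumptions for illustration.
BASELINE_EVENTS = [
    {"action": "calendar.read", "resource": "scheduling"},
    {"action": "calendar.read", "resource": "scheduling"},
    {"action": "calendar.write", "resource": "scheduling"},
] * 50  # stand-in for weeks of history

NEW_EVENTS = [
    {"action": "calendar.read", "resource": "scheduling"},
    {"action": "ehr.export", "resource": "patient-records"},  # never seen in the baseline
]

def build_baseline(events):
    """Count how often each (action, resource) pair appears in historical activity."""
    return Counter((e["action"], e["resource"]) for e in events)

def score_events(baseline, events, rare_threshold=0.01):
    """Flag events whose (action, resource) pair is rare or absent in the baseline."""
    total = sum(baseline.values())
    alerts = []
    for e in events:
        frequency = baseline.get((e["action"], e["resource"]), 0) / total
        if frequency < rare_threshold:
            alerts.append({"event": e, "baseline_frequency": frequency})
    return alerts

if __name__ == "__main__":
    baseline = build_baseline(BASELINE_EVENTS)
    for alert in score_events(baseline, NEW_EVENTS):
        print("Deviation from baseline:", alert["event"],
              f"(seen {alert['baseline_frequency']:.2%} of the time)")
```

Production UEBA systems rely on far richer features, such as timing, volume, and peer-group comparisons, and on statistical models rather than a single frequency cutoff, but the baseline-and-deviation pattern is the same.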

To avoid treating agent security as purely a SOC monitoring problem, vendors in identity and governance have been emphasizing agent-specific identity primitives. Last July, Microsoft introduced Microsoft Entra Agent ID as a way to give each AI agent a unique identifier and apply identity controls such as conditional access, least privilege, and lifecycle management.

Identity security vendor SailPoint published research in May 2025 that reported widespread AI agent usage alongside policy and governance gaps, another indicator that the market is treating agents as a distinct identity-security problem rather than a generic “AI risk.”

As agentic AI becomes even more capable, it will also introduce new vulnerabilities in parallel. Organizations need to keep abreast of the technology to mitigate risk.
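The sketch below illustrates the general pattern such identity controls follow: each agent carries a unique identifier and an explicit scope list, and every sensitive action is checked against that scope and the context of the request before it proceeds. The names and structure are hypothetical; this is not the Microsoft Entra Agent ID API, just a minimal sketch of least-privilege and conditional-access checks for a non-human identity.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """Hypothetical record for a non-human identity; not a real product schema."""
    agent_id: str
    owner: str
    allowed_scopes: set = field(default_factory=set)
    allowed_networks: set = field(default_factory=set)  # conditional-access style constraint
    enabled: bool = True  # lifecycle flag: retired or disabled agents are denied everything

def authorize(agent: AgentIdentity, scope: str, source_network: str) -> bool:
    """Allow an action only if the agent is active, holds the scope, and calls from an approved network."""
    if not agent.enabled:
        return False
    if scope not in agent.allowed_scopes:
        return False  # least privilege: no implicit or inherited permissions
    if source_network not in agent.allowed_networks:
        return False  # conditional access: context matters, not just credentials
    return True

# Example: a scheduling agent may read calendars from the corporate network, and nothing more.
scheduler = AgentIdentity(
    agent_id="agent-scheduler-01",
    owner="it-ops",
    allowed_scopes={"calendar.read"},
    allowed_networks={"corp-vpn"},
)

print(authorize(scheduler, "calendar.read", "corp-vpn"))  # True
print(authorize(scheduler, "ehr.read", "corp-vpn"))       # False: scope never granted
print(authorize(scheduler, "calendar.read", "public"))    # False: unapproved network
```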

What security teams can do next

BeyondID’s recommendations centered on three moves that map closely to how enterprises already secure human users: map AI identities to critical systems, enforce least privilege, and monitor behavior continuously. The difference in 2026 is that more security teams now have multiple vendor paths to operationalize those steps, from identity governance for non-human identities to SOC monitoring and analytics to agent-specific risk frameworks and testing guidance. TechRepublic has published additional guidance on AI security tools and on reducing “shadow AI” risk, which can provide practical next steps for readers trying to translate agent governance into day-to-day controls.
