Microsoft Cyber Pulse shows more than 80% of Fortune 500 companies already use AI agents, while governance struggles to keep pace


As organizations accelerate the adoption of artificial intelligence, Microsoft’s latest Cyber Pulse report highlights a critical shift: AI agents are rapidly becoming part of everyday work, and security, governance, and trust are now essential to scaling them safely.

According to the report, more than 80% of Fortune 500 companies globally are already using AI agents, many of them created with low‑code and no‑code tools by non‑technical employees. At the same time, only 47% of organizations have implemented dedicated security controls for generative AI, and 29% of employees report using unsanctioned AI agents for work tasks.

Industries such as financial services, manufacturing, shared service centers, and digital-first public services are among the global leaders in AI agent adoption, according to Microsoft telemetry. This makes governance especially important, as AI agents increasingly handle sensitive data, automate decisions, and interact with customers and internal systems.

The Cyber Pulse report warns that AI agents are not inherently risky, but unmanaged agents can become a security blind spot, similar to unmanaged user accounts or shadow AI. Microsoft’s AI Red Team research shows that agents can be misdirected through manipulated inputs or unclear task framing if safeguards are not in place, reinforcing the need for centralized oversight and clear rules.

“Organizations in the Baltics are highly ambitious when it comes to digital transformation, and AI agents are a natural next step,” said Renate Strazdiņa, National Technology Officer North Europe Multi-Country Cluster at Microsoft. “But speed must go hand in hand with trust. The message of the Cyber Pulse report is clear: AI agents should be treated like digital employees — with defined roles, limited access, and continuous oversight. Those who build security and governance in from the start will be able to innovate faster and with greater confidence.”

Renate Strazdina, NTO North Europe Multi-Country Cluster at Microsoft

Microsoft emphasizes that securing AI agents is not about slowing innovation, but about enabling sustainable growth. The Cyber Pulse report outlines five foundational areas for safe AI agent adoption: centralized visibility of all agents, least privilege access, real-time monitoring, interoperability across platforms, and built-in security protections.

With upcoming regulatory requirements such as the EU AI Act, organizations that invest early in governance, transparency, and security will be better positioned to meet compliance obligations while maintaining trust with customers and partners.

Methodology

The Cyber Pulse report is based on Microsoft first‑party telemetry measuring active AI agents built with Microsoft Copilot Studio and Agent Builder, as well as a multinational survey of 1,725 data security leaders conducted in 2025.

Top image: AI agents are now widely used across industries, with over 80% of Fortune 500 companies deploying them to automate everyday work, often faster than organizations can gain visibility and control over them. The report argues that without strong governance, observability, and security, this rapid adoption creates new risks, but when managed well, responsible AI becomes a powerful competitive advantage. Images courtesy of Microsoft.