
Mallory Mooney
In 2025, many of the long-standing cloud security concerns remained, but new areas of focus also developed. The significant increase in AI adoption enabled organizations to deliver features faster but also introduced new attack surfaces, such as untrusted or unpredictable user input for large language model (LLM) applications. At the same time, long-lived credentials and vulnerabilities in third-party packages continued to expose cloud environments to risk.
Datadog researchers observed that attackers targeted identities, development pipelines, and AI tooling in ways that enabled them to move undetected. While there was progress in mitigating these risks this year, organizations also need to reevaluate their security controls in light of attackers’ shifting focus.
This roundup looks at the following areas of focus for 2025 and provides actionable guidance for mitigating associated risks:
- Shifting cloud environment perimeters expose security gaps
- AI adoption introduces new security risks
- Attackers shift focus to supply chain and open source risks
Shifting cloud environment perimeters expose security gaps
Securing cloud identities remained one of the hardest challenges to address in 2025, as attackers continued to find and exploit common configuration issues, such as long-lived credentials and overprivileged roles. Over the past decade, cloud environment perimeters have shifted from being defined by networks to being shaped by identities, and we’ve seen them shift again in recent years to a focus on data. Data perimeters are controls that govern how cloud identities, resources, and networks are allowed to interact, but they are often disabled by default. Any resulting gaps can expose cloud APIs, and the environments behind them, to risk.
Datadog researchers observed attackers taking advantage of these security gaps to avoid detection. For example, attackers launched a phishing campaign targeting organizations that use Okta and Microsoft 365 for identity management and collaboration. Earlier this year, attackers abused legitimate discovery tools, such as AWS Resource Explorer, to enumerate resources without being detected. In Azure environments, attackers exploited overly permissive service principals (SPs) to establish persistence and escalate privileges.
Where to start
Protecting cloud identities requires both monitoring for suspicious activity and proactively securing environments. For example, long-lived access keys are a primary cause of data breaches, so minimizing the use of AWS IAM user access keys, Entra ID app registration keys, and Google Cloud service account keys can reduce that risk. For AWS, you can also reduce the risk of credential theft through data perimeters around your identities, resources, and networks.
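As a starting point, the following minimal sketch inventories IAM user access keys that have been active longer than a rotation threshold. It assumes boto3 is configured with credentials that allow iam:ListUsers and iam:ListAccessKeys, and the 90-day threshold is a placeholder you would adjust to your own policy:

```python
# Sketch: flag long-lived AWS IAM user access keys (assumes boto3 is configured
# with credentials that allow iam:ListUsers and iam:ListAccessKeys).
from datetime import datetime, timezone

import boto3

MAX_KEY_AGE_DAYS = 90  # hypothetical rotation threshold; adjust to your policy

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age_days = (now - key["CreateDate"]).days
            if key["Status"] == "Active" and age_days > MAX_KEY_AGE_DAYS:
                print(
                    f"Long-lived key {key['AccessKeyId']} on user "
                    f"{user['UserName']} is {age_days} days old"
                )
```

Keys that trip a check like this are good candidates for replacement with short-lived credentials issued through IAM roles.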
As organizations implement these guardrails, monitoring access key usage and analyzing actions from newly created identities or credentials can help detect the early stages of an attack. Actions such as attaching a new set of permissions to another identity are commonly seen in cross-account attack patterns in AWS environments. To learn more about how attackers take advantage of cloud identities, check out our guides on identifying risky behavior in cloud environments and detecting phishing campaigns via Amazon SES.
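To make the detection side concrete, this hedged sketch pulls recent policy-attachment events from CloudTrail. It assumes CloudTrail is enabled in the account and that the caller can invoke cloudtrail:LookupEvents; the one-day window and the list of event names are placeholders:

```python
# Sketch: surface recent policy-attachment events from CloudTrail (assumes
# CloudTrail is enabled and the caller is allowed to call LookupEvents).
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
start = datetime.now(timezone.utc) - timedelta(days=1)

# Events that attach a new set of permissions to another identity
for event_name in ("AttachUserPolicy", "AttachRolePolicy", "PutUserPolicy"):
    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
    )["Events"]
    for event in events:
        print(f"{event['EventTime']} {event_name} by {event.get('Username', 'unknown')}")
```

In practice, you would route these events into your detection platform rather than printing them, and correlate them with recently created identities or credentials.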
References:
- Adversary-in-the-middle phishing campaign targeting Microsoft 365 and Okta users
- 2025 Cloud Security Report and learnings
- Quiet enumeration techniques that use AWS Resource Explorer
- Entra ID persistence and privilege escalation research
AI adoption introduces new security risks
Organizations continued to quickly adopt AI in 2025, but this rapid growth made it more challenging to maintain secure cloud environments. That’s because AI-powered applications rely on new technologies and interfaces that are designed to accept unpredictable human input, and these new layers add complexity and risk to cloud environments. On top of that, security requirements and threat models for AI systems are still developing, so organizations may not always be aware of the risks and how to mitigate them.
Datadog researchers noted how AI adoption has benefited both organizations and attackers. For example, AI systems became a target in supply chain attacks, where attackers focused on vulnerabilities in popular MCP servers and supporting tooling such as Claude Code. Simultaneously, attackers used AI tools to scale and refine their attacks, as noted in Datadog’s Q3 threat roundup.
Where to start
To stay ahead of these threats, organizations should monitor AI systems with the same rigor they apply to any other cloud service. That means treating models and orchestration layers as components in existing workflows: tracking how AI systems interact with databases and other cloud resources, monitoring for unusual API activity, and ensuring supporting tooling isn’t running versions with unpatched CVEs. Datadog’s MCP research shows how attackers can exploit weak validation in a Postgres MCP server to issue unexpected database queries, a well-known class of vulnerability applied to new technology.
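As an illustration only (not the patched server’s actual code), a hypothetical MCP-style database tool could apply a read-only guardrail before executing anything a model asks for. The validate_query helper and keyword patterns below are assumptions made for this sketch:

```python
# Sketch of a guardrail for a hypothetical MCP-style database tool: reject any
# statement that isn't a single read-only SELECT before it reaches Postgres.
import re

READ_ONLY_PATTERN = re.compile(r"^\s*select\b", re.IGNORECASE)
FORBIDDEN_KEYWORDS = re.compile(
    r"\b(insert|update|delete|drop|alter|grant|copy|create)\b", re.IGNORECASE
)

def validate_query(sql: str) -> str:
    """Return the query if it looks like a single read-only SELECT, else raise."""
    statements = [s for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        raise ValueError("multiple statements are not allowed")
    if not READ_ONLY_PATTERN.match(statements[0]) or FORBIDDEN_KEYWORDS.search(sql):
        raise ValueError("only read-only SELECT queries are allowed")
    return sql

# A model-supplied query that tries to modify data is rejected:
validate_query("SELECT id, name FROM customers WHERE region = 'us-east-1'")  # passes
# validate_query("SELECT 1; DROP TABLE customers")  # raises ValueError
```

A keyword denylist like this is only a first layer, since it can be bypassed with obfuscated SQL; running the tool’s database role with read-only permissions is the stronger control.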
Creating LLM guardrails and watching for anomalous calls from MCP servers can help surface this behavior early and reduce the scope of an attack. For an in-depth look at how to monitor AI systems, read our guides on common MCP security risks and how attackers abuse AI infrastructure, supply chains, and interfaces.
References:
- MCP vulnerability case study: PostgreSQL
- CVE-2025-52882: WebSocket authentication bypass in Claude Code extensions
Attackers shift focus to supply chain and open source risks
In 2025, attackers increasingly targeted developer environments, CI/CD pipelines, and third-party dependencies. The widespread supply chain attack that hit npm in September demonstrated how organizations often overlook vulnerabilities in their CI/CD pipelines, even though these environments have become frequent targets for attackers.
Datadog researchers noted several specific ways attackers focused on supply chain components, such as launching a self-replicating npm worm that successfully extracted data from over 500 unique GitHub users. Attackers also used common phishing techniques to take control of accounts for popular npm packages. And as with other areas of cloud infrastructure, attackers targeted the supply chain to harvest long-lived credentials and used malicious plugins and malware to create backdoors into cloud environments.
Where to start
Protecting supply chains means prioritizing CI/CD pipeline security with steps such as enforcing signing for container images and reducing the number of long-lived credentials, since these credentials are a consistent entry point for attackers. Monitoring for vulnerabilities in third-party dependencies and for anomalous pipeline activity helps organizations mitigate issues before they escalate: suspicious package installations or unexpected GitHub Actions triggers can signal pipeline compromise. For more information on protecting environments from recent supply chain compromises, see Datadog’s approach to mitigating these attacks.
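For example, a small check in CI can compare the lockfile against a feed of known-compromised package versions. The sketch below assumes an npm package-lock.json in lockfile v2 or v3 format, and the COMPROMISED set is a hypothetical placeholder for an advisory feed:

```python
# Sketch: fail a CI step if package-lock.json pins a known-compromised version.
# Assumes lockfile v2/v3 format; COMPROMISED is a hypothetical denylist that
# would normally be populated from an advisory feed.
import json
import sys

COMPROMISED = {
    ("some-hijacked-package", "1.2.3"),  # placeholder entry, not a real advisory
}

with open("package-lock.json") as f:
    lockfile = json.load(f)

findings = []
for path, meta in lockfile.get("packages", {}).items():
    name = path.split("node_modules/")[-1] if path else lockfile.get("name", "")
    if (name, meta.get("version", "")) in COMPROMISED:
        findings.append(f"{name}@{meta['version']}")

if findings:
    print("Known-compromised dependencies found:", ", ".join(findings))
    sys.exit(1)
```

Run as an early pipeline step, a check like this fails the build before a compromised dependency is ever installed.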
References:
- Analysis: Shai-Hulud 2.0 npm worm
- Learnings from recent npm supply chain attacks
- MUT-4831: Trojanized npm packages deliver Vidar infostealer malware
- CVE-2025-29927: The Next.js Middleware Authorization Bypass Vulnerability
- Weaponizing misconfigured Redis to mine cryptocurrency at scale
What to take into 2026
In 2025, attackers increasingly focused on the supporting systems of cloud environments, such as identities, CI/CD pipelines, and AI tooling. Moving forward, organizations should build monitoring that more closely connects these supporting systems. At Datadog, we merged our SRE and security groups, one example of how combining incident management and security expertise makes it easier to connect standard monitoring telemetry with cloud and identity misconfigurations, supply chain vulnerabilities, and AI system behavior.
Check out our documentation to learn more about Datadog’s security offerings. If you don’t already have an account, you can sign up for a free 14-day trial.
