State of Cloud Security | Datadog


Securely configuring the potentially thousands of cloud identities, workloads, and other resources needed to support the high pace of modern software development is difficult. It is also critical: security gaps in these systems too often go unnoticed until attackers exploit them.

For this report, we analyzed security posture data from a sample of thousands of organizations that use AWS, Azure, or Google Cloud. In particular, we focused on understanding how organizations approach and mitigate common risks that frequently lead to documented public cloud security incidents. (A detailed methodology is available in the annex.) Our findings suggest that, while some elements of strong cloud security posture show signs of improvement, organizations still face significant challenges. These include managing static, long-lived credentials; securely configuring user roles, access, and privileges within cloud resources; and enforcing best-practice safeguards such as multi-factor authentication (MFA).


Fact 1

Long-lived credentials continue to be a risk

Long-lived credentials—i.e., those that are static and do not expire—remain a widespread issue in cloud environments. They are widely regarded as insecure, not only because they never expire but also because they can easily be leaked in source code, container images, or configuration files. Indeed, leaks of long-lived credentials are among the most common causes of security breaches in the cloud.

Despite common knowledge of this attack vector, we found that organizations still have substantial room to improve in replacing long-lived credentials with more secure solutions based on centralized identity management and short-lived credentials. We analyzed usage of long-lived credentials across AWS, Azure, and Google Cloud and found similar trends on all three platforms: in AWS, 76 percent of IAM users have active access keys; in Azure AD, 50 percent of applications have active credentials; and in Google Cloud, 27 percent of service accounts have active access keys. The lower number for Google Cloud is likely due to the presence of service accounts created and managed by Google, which developers typically do not use for other purposes.

Across AWS, Azure, and Google Cloud, roughly half of access keys are more than 1 year old, and more than one in 10 are older than 3 years. This demonstrates that access keys tend to live longer than they should.

Long-lived credentials are a risk across all cloud providers
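As an illustration, the age thresholds cited above can be expressed as a small helper. This is a minimal sketch: the creation timestamps would come from each provider's credential metadata (for AWS, the CreateDate field returned when listing access keys), which is an assumption about the data source rather than part of this report's tooling.

```python
from datetime import datetime, timezone

def key_age_days(create_date, now=None):
    """Age of an access key in days, given its creation timestamp."""
    now = now or datetime.now(timezone.utc)
    return (now - create_date).days

def classify_key_age(create_date, now=None):
    """Bucket a key into the age bands used in this report."""
    age = key_age_days(create_date, now)
    if age > 3 * 365:
        return "older than 3 years"
    if age > 365:
        return "older than 1 year"
    return "under 1 year"
```

Keys landing in either of the older bands are prime candidates for rotation or, better, replacement with short-lived credentials.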

For AWS, we were able to compare these figures with historical data compiled for our 2022 report. Since then, the situation with AWS credentials has unfortunately not improved. Many of the 76 percent of IAM users that have access keys—up from 72 percent one year ago—do not actively use them. Among these users:

  • Nearly half (49 percent) have an access key that has not been used in the past 90 days, up from 40 percent a year ago.
  • One in three have active credentials that are older than 1 year and have not been used in the past 30 days, up from 25 percent a year ago.
Unused access keys are still not being deprovisioned

It’s generally recommended to avoid long-lived credentials, both for humans and workloads, and instead to use identity federation and platform authentication. For AWS, this means using IAM Identity Center (formerly AWS SSO) federated with a central identity provider and avoiding IAM users. On Google Cloud, service accounts should not have access keys, and organizations should instead use secure authentication mechanisms that create time-bound credentials, such as service account impersonation, instance roles, or workload identity. In Azure, organizations should avoid the use of Azure AD application credentials when possible and instead opt for managed identities and workload identity.
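One practical way to spot long-lived AWS credentials in code or configuration is the documented access key ID prefix convention: long-term keys begin with AKIA, while temporary STS-issued credentials begin with ASIA. A minimal sketch, assuming the key ID is all you have to go on:

```python
# Heuristic based on AWS's documented access key ID prefixes:
# "AKIA" marks a long-term key, "ASIA" a temporary STS credential.
def is_long_lived_key(access_key_id: str) -> bool:
    return access_key_id.startswith("AKIA")

def is_temporary_key(access_key_id: str) -> bool:
    return access_key_id.startswith("ASIA")
```

A check like this is useful in pre-commit hooks or log reviews, since any AKIA-prefixed key found in source code points at exactly the kind of static credential this section warns about.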

“Long-lived credentials can be avoided in almost all use cases in the cloud in 2023. It’s great to see cloud providers make federation available in a growing number of scenarios. Additionally, one of the biggest benefits of temporary credentials is support for rich attribution. Actions performed in the cloud can be attributed to specific runs of CI jobs or instances of an auto-scaled application. This can be invaluable for troubleshooting, not just security. Their time-limited nature also makes attribution of compromise easier—one needs only query several hours of logs, rather than potentially months or years.”

Aidan Steele
Senior Engineer and AWS Serverless Hero
Fact 2

MFA for cloud access is not sufficiently enforced

Unlike on-premises environments, cloud providers’ administrative interfaces are APIs that are, by design, exposed to the internet. This is why, in the cloud, identity is the new perimeter—and securing identities is of paramount importance. Using and enforcing MFA is one of the most basic and effective steps in that effort. Data from both Microsoft and Google indicates that usage of MFA can prevent the vast majority of account takeovers. MFA can be enforced at the account or tenant level, and organizations should consider its use—particularly phishing-resistant methods such as FIDO2 security keys—as an essential part of a healthy cloud security posture.

In our research, we analyzed MFA usage in AWS and Azure AD—reliably reporting on MFA usage in Google Cloud is not feasible at this time due to some limitations of Google Workspace Audit logs (e.g., logins using passkeys do not appear as MFA events). For each organization, we analyzed the percentage of its users that had successfully authenticated without MFA in October 2023.

In AWS, we identified that nearly a third (31 percent) of IAM users with console access have no MFA enforced, an issue affecting two out of five organizations. We also found that 45 percent of AWS organizations had one or more IAM users authenticate to the AWS console without using MFA. On the Azure side, only 20 percent of organizations had all of their Azure AD users authenticate with MFA.

MFA is not consistently enforced across organizations

These findings illustrate that, while industry statistics suggest MFA adoption is increasing, a substantial portion of organizations are not proactively enforcing it, leaving them at increased risk of credential theft and credential-stuffing attacks.

In AWS, it’s recommended to avoid using IAM users and instead rely on a third-party identity provider that allows MFA to be enforced. If you do use IAM users in AWS, you can enforce MFA through the use of the aws:MultiFactorAuthPresent condition key. In Azure AD, MFA should typically be made mandatory by using a conditional access policy and ensuring that legacy authentication is disabled. In Google Workspace, MFA can be enforced at the organization or organization unit level.
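The aws:MultiFactorAuthPresent pattern mentioned above is commonly applied as a deny-unless-MFA policy. Below is a sketch of that statement shape expressed as a Python dict, plus a toy checker; the exempted actions in NotAction are illustrative and would be tuned per organization:

```python
# Sketch of the common "deny everything unless MFA is present" IAM policy.
# The statement shape follows the IAM policy grammar; the NotAction list
# (self-service MFA enrollment) is an illustrative assumption.
DENY_WITHOUT_MFA = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptListedIfNoMFA",
        "Effect": "Deny",
        "NotAction": ["iam:EnableMFADevice", "iam:ListMFADevices",
                      "sts:GetSessionToken"],
        "Resource": "*",
        # BoolIfExists also catches requests where the key is absent entirely.
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}

def enforces_mfa(policy: dict) -> bool:
    """True if any statement denies access when MFA is absent."""
    for stmt in policy.get("Statement", []):
        cond = stmt.get("Condition", {}).get("BoolIfExists", {})
        if stmt.get("Effect") == "Deny" and \
                cond.get("aws:MultiFactorAuthPresent") == "false":
            return True
    return False
```

Using BoolIfExists rather than Bool matters: long-lived access key requests carry no aws:MultiFactorAuthPresent key at all, and a plain Bool condition would let them through.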

Fact 3

In AWS, IMDSv2 is still widely unenforced, but adoption is rising

In AWS, enforcing the Instance Metadata Service Version 2 (IMDSv2) is critical to protect against server-side request forgery (SSRF) attacks, one of the most common ways attackers steal and abuse cloud credentials. IMDSv2 requires session-oriented requests: a caller must first obtain a token through an HTTP PUT request and pass it as a header on subsequent calls, which makes it much harder for attackers to steal credentials from an EC2 instance through SSRF.

Although IMDSv2 was released in 2019, our 2022 study showed that, at the time, only 7 percent of EC2 instances were enforcing its usage. Today, 21 percent of EC2 instances enforce IMDSv2. Organizations enforce IMDSv2 on 25 percent of their EC2 instances on average, up from 11 percent in September 2022. Although current adoption is still insufficient, we’re glad that organizations are starting to increase enforcement of IMDSv2.

Organizations enforce IMDSv2 on twice as many EC2 instances as a year ago

However, enforcement varies based on age of deployment. Just 13 percent of EC2 instances older than one year enforce IMDSv2, compared to 31 percent of instances launched in the past several weeks. This reinforces that organizations are increasingly aware of IMDSv2, and that shorter-lived EC2 instances that are designed to be easily taken down or re-deployed benefit more from recent security improvements.

When an instance is insecurely configured and does not enforce IMDSv2, workloads can still opt in to using IMDSv2—although an attacker would likely use the insecure IMDSv1 instead. We identified that nearly three out of four EC2 instances (73 percent) had only used IMDSv2 over the 14-day period between October 12 and October 26, showing the disconnect between what’s enforced and what’s actually used. In fact, this means that the vast majority of EC2 instances could enforce IMDSv2 without any functional impact or disruption. It’s recommended to enforce IMDSv2 on all your EC2 instances. (See also AWS’ own guide.)

The majority of EC2 instances exclusively use IMDSv2, but few enforce it

You can enforce IMDSv2 on your instances through the use of service control policies (SCPs). AWS has also recently released a setting at the Amazon Machine Image (AMI) level that enforces IMDSv2 by default for all EC2 instances launched from that AMI. Finally, the AWS documentation features a dedicated guide about transitioning to IMDSv2.
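As a sketch of both approaches, the snippet below shows a check for whether an instance's metadata options require IMDSv2 (the dict shape mirrors the MetadataOptions structure EC2 returns when describing instances) and a common SCP guardrail built on the ec2:MetadataHttpTokens condition key; treat both as illustrative rather than drop-in:

```python
def enforces_imdsv2(metadata_options: dict) -> bool:
    """HttpTokens == "required" rejects token-less (IMDSv1) requests."""
    return metadata_options.get("HttpTokens") == "required"

# A common SCP guardrail: refuse to launch any instance that does not
# require IMDSv2, using the ec2:MetadataHttpTokens condition key.
DENY_IMDSV1_LAUNCH = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringNotEquals": {"ec2:MetadataHttpTokens": "required"}},
    }],
}
```

The SCP only gates new launches; existing instances still need their metadata options updated (for example via the ModifyInstanceMetadataOptions API) to close the gap the statistics above describe.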

“Seeing improvement in IMDSv2 enforcement in 2023 likely shows that the most impactful change has been AWS implementing more secure defaults within its AMIs and Compute services. Many companies are also recognizing the critical importance of enforcing IMDSv2 usage on the internet edges of their environments. However, there is a long road ahead, as SSRF and proxy attacks are still largely obscure concepts unknown to many builders in the cloud. Cloud providers are starting to provide more detailed warnings and click-through advisory notices when permitting less secure options in some services. Compute services and, in particular, IMDS are worthy of such warnings. Additionally, feedback and pressure should continue to be placed on commercial products that do not support IMDSv2.”

Houston Hopkins
Senior Security Engineering Manager, Cloud Security at Robinhood
Fact 4

Adoption of public access blocks in cloud storage services is increasing

Public storage buckets are now a well-understood source of data leakage in cloud environments. Incidents of attackers stealing sensitive data from cloud storage have been documented countless times, including in 2023. These breaches are often due to the complexity of securely configuring storage buckets: although buckets are private by default, there are many mechanisms by which they can inadvertently be made publicly available. In addition, some storage buckets were created a long time ago—for instance, Amazon S3 was publicly released in 2006, more than 17 years ago—before some modern safeguards were commonplace.

Today, cloud providers implement mechanisms to proactively block public access to cloud storage, even on misconfigured buckets, and prevent buckets from becoming publicly available by mistake. These allow practitioners to secure a whole AWS account, Azure storage account, or Google Cloud project at once and ensure that human error doesn’t turn into a data breach. (Note that we did not include Google Cloud Storage in our analysis, because public access in Google Cloud is typically blocked at the project, folder, or organization level through an organization policy constraint.)

On AWS, we identified that nearly three-quarters (72 percent) of S3 buckets are covered by a public S3 access block, either at the bucket or account level, up from half (52 percent) in October 2022. This shows that organizations are increasingly aware of this mechanism, introduced in 2018.

A rising number of S3 buckets are protected by public access blocks

On Azure, roughly one in five blob storage containers (21 percent) are in a storage account that proactively blocks public access, ensuring that any dangerously configured blob storage it contains is not effectively made public.

Overall, a small portion (1.5 percent) of Amazon S3 buckets are public (i.e., they have a public bucket policy and are not covered by a public access block at the account or bucket level). Similarly, a small number of Azure storage blob containers (5 percent) are publicly accessible and in a storage account not blocking public access. While these buckets don’t necessarily contain sensitive data, they could inadvertently become repositories for sensitive information in the future.

Adoption of public access blocks varies across cloud storage services

We believe that more organizations proactively block S3 public access thanks to wider awareness of this practice, the high number of documented security breaches related to public access, and the fact that AWS introduced the S3 public access block back in 2018. Microsoft released the equivalent Azure feature in 2020. In addition, since April 2023, AWS has been blocking public access by default on all newly created buckets—this is not yet the case for Azure, although it’s planned in the near future. Organizations should continue to audit the configuration of their storage buckets and ensure public access blocks are enforced, except in specific use cases where public access is warranted or necessary. In many such cases, leveraging a content distribution network (CDN) service such as Amazon CloudFront or Azure CDN is more performant, cost-effective, and secure than exposing buckets directly.
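The "covered by a public access block" condition used in this report (detailed in the methodology) requires all four block settings to be true at either the bucket or the account level. A minimal sketch of that check, using the snake_case setting names from the methodology section:

```python
# The four S3 public access block settings, as named in the methodology.
# (The boto3 GetPublicAccessBlock API returns the same settings in
# CamelCase; the snake_case shape here is an illustrative assumption.)
SETTINGS = ("block_public_acls", "block_public_policy",
            "ignore_public_acls", "restrict_public_buckets")

def fully_blocked(config: dict) -> bool:
    """All four settings must be explicitly true."""
    return all(config.get(s) is True for s in SETTINGS)

def bucket_covered(bucket_config: dict, account_config: dict) -> bool:
    """A bucket is covered if either its own block or the account-wide
    block has all four settings enabled."""
    return fully_blocked(bucket_config) or fully_blocked(account_config)
```

Requiring all four settings matters: enabling only restrict_public_buckets, for example, still leaves ACL-based exposure paths open.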

Fact 5

A substantial portion of cloud workloads are excessively privileged

In cloud environments, configuring permissions can be a tedious task that often leads to workloads having more privileges than needed. Often, a seemingly innocuous set of permissions could allow an attacker to escalate their privileges. While full administrator access is relatively simple to identify, “shadow administrator” permissions—which allow an identity to indirectly escalate their privileges and access sensitive data—are typically harder to spot.

In AWS, only a small fraction (1.5 percent) of Amazon EC2 instances have full administrator privileges. An attacker compromising such an instance—for example, by exploiting a web application that runs on it—would be able to obtain the credentials made available via the instance metadata service and thereby access sensitive data in the account.

But this figure doesn’t tell the full story. An attacker does not need full administrator privileges to have a substantial impact—there are other, more common and challenging-to-uncover types of permissions they can leverage. We found that:

  • 5.4 percent of EC2 instances have risky permissions that allow lateral movement in the account, such as connecting to other instances using SSM Session Manager.
  • 7.2 percent allow an attacker to gain full administrative access to the account by privilege escalation, such as permissions to create a new IAM user with administrator privileges.
  • 20 percent have excessive data access, such as listing and accessing data from all S3 buckets in the account.

(Note that these conditions are not mutually exclusive—a specific instance can fall into several of these categories.)

Overall, nearly one in four EC2 instances (23 percent) have administrator or highly sensitive permissions to the AWS account they run in.

Risky but hard-to-detect permissions impact many AWS workloads
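As an illustration of why these permissions are hard to spot, the check below flags a few of the "excessive data access" permission combinations listed in the methodology. It is a simplification that assumes the actions are granted on the resource * and, like the methodology itself, ignores conditions and permissions boundaries:

```python
# A small subset of the risky action combinations from the methodology
# annex; actions are compared lowercase, as they appear there.
RISKY_COMBOS = [
    {"s3:listallmybuckets", "s3:listbucket", "s3:getobject"},
    {"secretsmanager:listsecrets", "secretsmanager:getsecretvalue"},
    {"ssm:describeparameters", "ssm:getparameter"},
]

def has_excessive_data_access(allowed_actions) -> bool:
    """True if the allowed actions contain any complete risky combo."""
    allowed = {a.lower() for a in allowed_actions}
    return any(combo <= allowed for combo in RISKY_COMBOS)
```

Note that no single action in these combos looks alarming on its own, which is exactly what makes "shadow administrator" permissions harder to spot than a blanket AdministratorAccess attachment.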

On Google Cloud, 20 percent of virtual machines (VMs) have privileged “editor” permissions on the project they run in, through the use of the default compute service account with the unrestricted cloud-platform scope. In addition, 17 percent have full read access to Google Cloud Storage (GCS) buckets and BigQuery datasets through the same mechanism—so in total, more than one in three Google Cloud VMs (37 percent) have sensitive permissions to a project.

37 percent of Google Cloud VMs have potentially risky permissions

Organizations using Google Cloud should enable the “Disable automatic IAM grants for default service accounts” organization policy and ensure that virtual machines use a non-default service account.
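The risky Google Cloud configuration described above reduces to a simple predicate: the VM runs as the project's default compute service account (which takes the form PROJECT_NUMBER-compute@developer.gserviceaccount.com) with the unrestricted cloud-platform scope. A minimal illustration:

```python
# Default compute service accounts follow this documented suffix.
DEFAULT_SA_SUFFIX = "-compute@developer.gserviceaccount.com"
CLOUD_PLATFORM_SCOPE = "https://www.googleapis.com/auth/cloud-platform"

def vm_is_privileged(service_account: str, scopes) -> bool:
    """True when a VM uses the default compute service account with the
    unrestricted cloud-platform scope, effectively inheriting the
    project-level editor role (absent the org policy mentioned above)."""
    return (service_account.endswith(DEFAULT_SA_SUFFIX)
            and CLOUD_PLATFORM_SCOPE in scopes)
```

A dedicated, minimally privileged service account per workload makes this predicate false by construction, which is the remediation the paragraph above recommends.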

Managing IAM permissions of cloud workloads is not an easy task. Administrator access is not the only risk to monitor—it’s critical to also be wary of sensitive permissions that allow a user to access sensitive data or escalate privileges. Because cloud workloads are a common entry point for attackers, it’s critical to ensure permissions on these resources are as limited as possible.

“Insecure default configurations from cloud providers optimize for the ease of initial adoption, but they are often at odds with the level of hardening required for most production deployments. Initial configurations such as over-privileged IAM roles and permissive firewall rules tend to have inertia—once deployed and working, they are more challenging to lock down after the fact. Therefore, the discipline of hardening the security of initial configurations early in the deployment lifecycle is a key virtue of a mature cloud-native organization's culture.”

Brad Geesaman
Staff Security Engineer, Ghost Security
Fact 6

Many virtual machines remain publicly exposed to the internet

Exposing virtual machines (VMs) to the public internet is a significant risk in cloud environments. Attackers frequently target exposed services by running brute-force attacks or exploiting known protocol-level vulnerabilities such as BlueKeep, which the US Cybersecurity and Infrastructure Security Agency (CISA) still reports as one of the most commonly exploited vulnerabilities of 2022.

We identified that 7 percent of EC2 instances, 3 percent of Azure VMs, and 13 percent of Google Cloud VMs are publicly exposed to the internet—i.e., they have at least one port allowing traffic from the internet.

Among instances that are publicly exposed, HTTP and HTTPS are the most commonly exposed ports, and are not considered risky in general. After these, SSH and RDP remote access protocols are common. The most commonly exposed database technologies are MongoDB, Redis, MSSQL, and Elasticsearch.

SSH and RDP are often accessible from the internet

We hypothesize that the higher public exposure and strong prevalence of open SSH and RDP ports in Google Cloud are due to pre-populated firewall rules in the default network that allow SSH and RDP from the internet. While developers can disable provisioning of this network for new projects through the “skip default network creation” organization policy constraint, it does not apply to existing projects. The resulting differences in the percentage of internet-exposed VMs show once again the practical impact that insecure defaults can have on organizations’ security posture.

Across all cloud platforms, organizations should avoid exposing VMs to the public internet and instead use IAM-based secure access mechanisms such as AWS SSM Session Manager, Amazon EC2 Instance Connect, Google Cloud Identity-Aware Proxy (IAP), Google Cloud OS Login, or Azure Bastion. This especially applies to database systems running on VMs, since databases typically contain sensitive data and are straightforward for attackers to discover—for instance, through passive network scanning search engines like Shodan or Censys. Databases should be kept on internal networks and accessed through one of the mechanisms previously mentioned.
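For reference, the exposure test used for AWS in this report (detailed in the methodology) can be sketched as follows. The rule dict here is a deliberately simplified stand-in for a security group rule, and, per the methodology, port ranges wider than 10 ports are ignored because the original intent cannot be inferred:

```python
def rule_exposes_port(rule: dict, port: int) -> bool:
    """True if a (simplified) security group rule opens `port` to the
    internet: source is 0.0.0.0/0 and the rule's port range is narrow
    enough (<= 10 ports) to infer intent, mirroring the methodology."""
    if rule.get("cidr") != "0.0.0.0/0":
        return False
    lo = rule.get("from_port", 0)
    hi = rule.get("to_port", 65535)
    if hi - lo + 1 > 10:  # range too wide to assume original intent
        return False
    return lo <= port <= hi
```

A full reimplementation would also need the public IP, public subnet, and network ACL conditions listed in the methodology; this sketch covers only the security group step.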


Conclusion

Our findings reflect ongoing improvements in security posture across cloud environments in AWS, Google Cloud, and Azure. We believe this is due to the cloud providers working to deliver more secure defaults on their platforms, as well as adoption of solutions that scan for insecure configurations—not to mention greater awareness around cloud security risks in general.

That said, improving and maintaining cloud security posture is an ongoing process—in complex environments with accelerated software development life cycles, it’s not always easy to identify and fix issues such as long-lived credentials, lagging MFA adoption, unenforced IMDSv2, or excessive privileges. Continuously scanning for misconfigurations and quickly remediating them remain the best strategies for strengthening cloud security, so breaches can be avoided and developers can continue shipping software at speed and scale.

Secure your cloud infrastructure with Datadog.

Methodology

Findings are based on data collected between September 2023 and October 2023.

Population

For this report, we analyzed the cloud security posture of a sample of thousands of organizations. Data in this report has come from customers of Datadog Cloud Security Management (CSM) and is likely skewed toward the positive as a result of these organizations’ increased maturity.

Fact 1

For AWS, we considered IAM users that have at least one active access key. When an IAM user had several active access keys, we considered only the oldest one.

For Google Cloud, we considered service accounts that are not disabled and have at least one active access key. We excluded from our analysis Google-managed service accounts for which it’s not possible to create user-managed access keys.

For Azure AD, we considered Azure AD app registrations that had active “password” credentials (corresponding to static access keys that can be exchanged for OAuth 2.0 access tokens).

Fact 2

For AWS, we considered IAM users that have a “console profile” enabled and no MFA device attached. For historical console authentication events in AWS, we analyzed 30 days of CloudTrail logs and queried ConsoleLogin events whose outcome was successful. We excluded SAML federated authentications. We determined that an authentication event used MFA if the additionalEventData.MFAUsed field was set to Yes, in accordance with the AWS documentation.
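That classification can be sketched as a small predicate over a CloudTrail record; the event shape follows the documented ConsoleLogin record format, and SAML-federated events would be filtered out before this check:

```python
def console_login_used_mfa(event: dict) -> bool:
    """True for a successful ConsoleLogin event where CloudTrail reports
    MFA was used (additionalEventData.MFAUsed == "Yes")."""
    return (
        event.get("eventName") == "ConsoleLogin"
        and event.get("responseElements", {}).get("ConsoleLogin") == "Success"
        and event.get("additionalEventData", {}).get("MFAUsed") == "Yes"
    )
```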

For Azure AD, we analyzed 30 days of Azure AD sign-in logs and queried successful authentication events only. We then considered that an attempt had used MFA if one of the following was true:

  • The properties.authenticationRequirement field was set to multiFactorAuthentication (this would be explicit use of MFA).
  • The properties.authenticationDetails.authenticationStepResultDetail field was any of the values below, meaning that the event corresponds to a re-authentication that previously required strong authentication.
"completed in the cloud"
"has expired due to the policies configured on tenant"
"registration prompted"
"satisfied by claim in the token" 
"satisfied by claim provided by external provider"
"satisfied by strong authentication"
"skipped as flow exercised was windows broker logon flow"
"skipped due to location" 
"skipped due to registered device"
"skipped due to remembered device"
"successfully completed"

Fact 3

In the graph that depicts average enforcement of IMDSv2, we computed for each organization the percentage of its EC2 instances where IMDSv2 is enforced and averaged this number across all organizations. We used this method so as not to overrepresent organizations that have a large number of EC2 instances and instead to measure adoption trends.

For historical data, we queried 14 days of CloudTrail logs and used the field userIdentity.sessionContext.ec2RoleDelivery to determine if IMDSv2 had been used to request initial session credentials. We only analyzed CloudTrail events generated by EC2 instances, meaning those where userIdentity.session_name starts with i-.

Fact 4

For AWS, we considered an S3 bucket public if both of the following were true:

  • The bucket policy allows s3:GetObject on all objects in the bucket to a wildcard principal.
  • Neither the bucket’s nor the AWS account’s public access block configuration has restrict_public_buckets set to true.

We considered that an S3 bucket is “covered by a public access block” if the bucket public access block configuration or the AWS account public access block configuration has the four settings block_public_acls, block_public_policy, ignore_public_acls, and restrict_public_buckets set to true.

We excluded from the analysis S3 buckets that are made public through the static website feature, as this is a valid use case for public S3 buckets that is not necessarily covered by CloudFront, and also because it gives a strong signal that the bucket was explicitly meant to be public by design.

For Azure, we considered that an Azure blob storage container is publicly accessible if both of the following were true:

  • Its PublicAccess field was set to blob or container.
  • The storage account it resides in does not block public access—i.e., its allowBlobPublicAccess attribute is not set to false.

Fact 5

For AWS, we considered that an EC2 instance has an administrative role if it has an instance role that’s attached to either the AdministratorAccess AWS-managed policy, or to a custom policy that has at least one statement allowing all actions on all resources, with no conditions.

We considered that an EC2 instance has “excessive data access” if one of the following conditions was true:

  • Its instance role has any of the following combinations of permissions on the resource *, through an inline or attached policy.
s3:listallmybuckets, s3:listbucket, s3:getobject
dynamodb:listtables, dynamodb:scan
dynamodb:listtables, dynamodb:exporttabletopointintime
ec2:describesnapshots, ec2:modifysnapshotattribute
ec2:describesnapshots, ebs:listsnapshotblocks, ebs:listchangedblocks, ebs:getsnapshotblock
rds:describedbclustersnapshots, rds:modifydbclustersnapshotattribute
rds:describedbsnapshots, rds:modifydbsnapshotattribute
secretsmanager:listsecrets, secretsmanager:getsecretvalue
ssm:describeparameters, ssm:getparameter
ssm:describeparameters, ssm:getparameters
ssm:getparametersbypath
  • Its instance role is attached to one of the following AWS managed policies (which all meet the criteria above).
AdministratorAccess
AdministratorAccess-Amplify
AmazonApplicationWizardFullaccess
AmazonDataZoneProjectRolePermissionsBoundary
AmazonDocDBConsoleFullAccess
AmazonDocDBElasticFullAccess
AmazonDocDBFullAccess
AmazonDynamoDBFullAccess
AmazonDynamoDBFullAccesswithDataPipeline
AmazonDynamoDBReadOnlyAccess
AmazonEC2FullAccess
AmazonEC2RoleforDataPipelineRole
AmazonElasticMapReduceforEC2Role
AmazonElasticMapReduceFullAccess
AmazonElasticMapReduceReadOnlyAccess
AmazonElasticMapReduceRole
AmazonGrafanaRedshiftAccess
AmazonLaunchWizard_Fullaccess
AmazonLaunchWizardFullaccess
AmazonLaunchWizardFullAccessV2
AmazonMacieServiceRole
AmazonMacieServiceRolePolicy
AmazonRDSFullAccess
AmazonRedshiftFullAccess
AmazonS3FullAccess
AmazonS3ReadOnlyAccess
AmazonSageMakerFullAccess
AmazonSSMAutomationRole
AmazonSSMFullAccess
AmazonSSMReadOnlyAccess
AWSBackupServiceRolePolicyForBackup
AWSCloudTrailFullAccess
AWSCodePipelineReadOnlyAccess
AWSCodeStarServiceRole
AWSConfigRole
AWSDataExchangeFullAccess
AWSDataExchangeProviderFullAccess
AWSDataLifecycleManagerServiceRole
AWSDataPipelineRole
AWSElasticBeanstalkCustomPlatformforEC2Role
AWSElasticBeanstalkFullAccess
AWSElasticBeanstalkReadOnlyAccess
AWSIoTDeviceTesterForFreeRTOSFullAccess
AWSLambdaFullAccess
AWSLambdaReadOnlyAccess
AWSMarketplaceSellerFullAccess
AWSMarketplaceSellerProductsFullAccess
AWSOpsWorksRole
DatabaseAdministrator
DataScientist
NeptuneConsoleFullAccess
NeptuneFullAccess
ReadOnlyAccess
SecretsManagerReadWrite
ServerMigrationServiceRole
SystemAdministrator
VMImportExportRoleForAWSConnector

We considered that an EC2 instance has “permissions allowing privilege escalation” if one of the following conditions was true:

  • Its instance role has any of the following combinations of permissions on the resource *, through an inline or attached policy.
iam:createaccesskey
iam:createloginprofile
iam:updateloginprofile
iam:updateassumerolepolicy
iam:createpolicyversion
iam:attachrolepolicy
iam:putrolepolicy
iam:createuser, iam:putuserpolicy, iam:createaccesskey
iam:createuser, iam:putuserpolicy, iam:createloginprofile
iam:createuser, iam:addusertogroup, iam:createaccesskey
iam:createuser, iam:addusertogroup, iam:createloginprofile
iam:createuser, iam:attachuserpolicy, iam:createaccesskey
iam:createuser, iam:attachuserpolicy, iam:createloginprofile
iam:passrole, lambda:createfunction, lambda:addpermission
iam:passrole, codestar:createproject
iam:passrole, datapipeline:createpipeline
iam:passrole, cloudformation:createstack
iam:passrole, lambda:createfunction, lambda:createeventsourcemapping
iam:passrole, lambda:createfunction, lambda:invokefunction
iam:passrole, ec2:runinstances
  • Its instance role is attached to one of the following AWS managed policies (which all meet the criteria above):
AdministratorAccess
AdministratorAccess-Amplify
AmazonDynamoDBFullAccess
AmazonDynamoDBFullAccesswithDataPipeline
AmazonEC2ContainerServiceFullAccess
AmazonEC2SpotFleetTaggingRole
AmazonECS_FullAccess
AmazonElasticMapReduceFullAccess
AmazonElasticMapReduceRole
AutoScalingServiceRolePolicy
AWSBatchServiceRole
AWSCodeStarServiceRole
AWSDataPipelineRole
AWSEC2FleetServiceRolePolicy
AWSEC2SpotFleetServiceRolePolicy
AWSEC2SpotServiceRolePolicy
AWSElasticBeanstalkFullAccess
AWSElasticBeanstalkManagedUpdatesServiceRolePolicy
AWSElasticBeanstalkService
AWSLambda_FullAccess
AWSLambdaFullAccess
AWSMarketplaceFullAccess
AWSMarketplaceImageBuildFullAccess
AWSOpsWorksRegisterCLI
AWSServiceRoleForAmazonEKSNodegroup
AWSServiceRoleForGammaInternalAmazonEKSNodegroup
AWSServiceRoleForSMS
DataScientist
EC2FleetTimeShiftableServiceRolePolicy
IAMFullAccess
ServerMigrationServiceLaunchRole

We considered that an EC2 instance has “permissions allowing lateral movement” if one of the following conditions was true:

  • Its instance role has any of the following combinations of permissions on the resource *, through an inline or attached policy.
ec2-instance-connect:sendsshpublickey
ssm:startsession
ssm:sendcommand
ec2:getserialconsoleaccessstatus, ec2:describeinstances, ec2:describeinstancetypes, ec2-instance-connect:sendserialconsolesshpublickey
  • Its instance role is attached to one of the following AWS managed policies (which all meet the criteria above).
AdministratorAccess
AmazonApplicationWizardFullaccess
AmazonLaunchWizardFullaccess
AmazonSSMAutomationRole
AmazonSSMFullAccess
AmazonSSMMaintenanceWindowRole
AmazonSSMServiceRolePolicy
AWSOpsWorksCMServiceRole
EC2InstanceConnect

When computing effective permissions of an IAM role, SCPs and permissions boundaries were not taken into account.

For Google Cloud, we considered that:

  • a VM has administrator privileges on the project when the default compute service account is used with the cloud-platform scope.
  • a VM has read access to the project’s cloud storage when the default compute service account is used with the devstorage.read_only scope.

We excluded VMs in a project where the default compute service account had not been granted editor or owner access. In other words, we accounted for the case where the recommended automaticIamGrantsForDefaultServiceAccounts organization policy is in effect, “neutralizing” the default permissions of the default compute service account.

Fact 6

For AWS, we considered that a virtual machine is publicly available if all of the following were true:

  • It has a public IP address attached.
  • It’s in a public subnet (i.e., a subnet that has a default route to an internet gateway).
  • It has a security group that allows at least one port from 0.0.0.0/0 that is not blocked by the network ACL attached to the subnet.

For Azure, we considered that a virtual machine is publicly available if either of the following was true:

  • It has a network interface with a public IP and a network security group that allows at least one port from 0.0.0.0/0.
  • It has a network interface with a public IP with the deprecated “basic” SKU and no network security group attached.

For Google Cloud, we considered that a virtual machine is publicly available if both of the following were true:

  • It has a network interface with a public IP.
  • At least one of the firewall rules that apply to the instance allows at least one port from 0.0.0.0/0, taking into account ‘deny’ rules and relative priority.

For Google Cloud, we define “firewall rules that apply to the instance” as:

  • Firewall rules that apply to the whole network where the virtual machine is located
  • Firewall rules that apply to the service account the virtual machine uses

For all three clouds, we did not take into account port ranges wider than 10 ports. This is because when a large number of ports are open (e.g., 1-65,535), it’s not possible to assume the original intent.

In the graph showing the distribution of open ports across publicly exposed instances, a specific instance may have one or multiple open ports and might consequently appear in several categories.

We purposely did not include the HTTP and HTTPS ports in the graph, as exposing web applications to the internet is not generally regarded as a bad practice.