Findings are based on data collected between September 2023 and October 2023.
For this report, we analyzed the cloud security posture of a sample of thousands of organizations. The data comes from customers of Datadog Cloud Security Management (CSM) and is likely skewed toward the positive as a result of these organizations’ comparatively high security maturity.
For AWS, we considered IAM users that have at least one active access key. When an IAM user had several active access keys, we considered only the oldest one.
For Google Cloud, we considered service accounts that are not disabled and have at least one active access key. We excluded from our analysis Google-managed service accounts for which it’s not possible to create user-managed access keys.
For Azure AD, we considered Azure AD app registrations that had active “password” credentials (corresponding to static access keys that can be exchanged for OAuth 2.0 access tokens).
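To make the AWS rule concrete, here is a minimal sketch (not the report’s actual pipeline) that selects, for each IAM user, the oldest active access key using boto3. The function name is ours.

```python
# Minimal sketch of the AWS selection rule above: list IAM users and, for
# users with several active access keys, keep only the oldest one.
# Assumes credentials allowed to call iam:ListUsers and iam:ListAccessKeys.
import boto3

iam = boto3.client("iam")

def oldest_active_access_keys() -> dict:
    """Return {user_name: access_key_metadata} for IAM users with at least one active key."""
    selected = {}
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
            active = [k for k in keys if k["Status"] == "Active"]
            if active:
                # A user may have up to two access keys; keep the oldest active one.
                selected[user["UserName"]] = min(active, key=lambda k: k["CreateDate"])
    return selected
```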
For AWS, we considered IAM users that have a “console profile” enabled and no MFA device attached. For historical console authentication events in AWS, we analyzed 30 days of CloudTrail logs and queried `ConsoleLogin` events whose outcome was successful. We excluded SAML federated authentications. We determined that an authentication event used MFA if the `additionalEventData.MFAUsed` field was set to `Yes`, in accordance with the AWS documentation.
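As an illustration, the sketch below applies this classification to a parsed CloudTrail record (a Python dict). The field names follow the CloudTrail `ConsoleLogin` event schema; the helper name and the SAML-exclusion heuristic shown (presence of `additionalEventData.SamlProviderArn`) are our own simplifications.

```python
# Illustrative classification of a parsed CloudTrail record according to the
# rule above. Returns None for events that were excluded from the analysis.
from typing import Optional

def console_login_used_mfa(event: dict) -> Optional[bool]:
    if event.get("eventName") != "ConsoleLogin":
        return None
    if (event.get("responseElements") or {}).get("ConsoleLogin") != "Success":
        return None  # only successful authentications were counted
    additional = event.get("additionalEventData") or {}
    if additional.get("SamlProviderArn"):
        return None  # SAML federated authentications were excluded
    # MFAUsed is "Yes" when the login used MFA, per the AWS documentation.
    return additional.get("MFAUsed") == "Yes"
```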
For Azure AD, we analyzed 30 days of Azure AD sign-in logs and queried successful authentication events only. We then considered that an attempt had used MFA if one of the following was true:
- The `properties.authenticationRequirement` field was set to `multiFactorAuthentication` (this would be explicit use of MFA).
- The `properties.authenticationDetails.authenticationStepResultDetail` field was any of the values below, meaning that the event corresponds to a re-authentication event that previously required strong authentication:
"completed in the cloud"
"has expired due to the policies configured on tenant"
"registration prompted"
"satisfied by claim in the token"
"satisfied by claim provided by external provider"
"satisfied by strong authentication"
"skipped as flow exercised was windows broker logon flow"
"skipped due to location"
"skipped due to registered device"
"skipped due to remembered device"
"successfully completed"
In the graph that depicts average enforcement of IMDSv2, we computed for each organization the percentage of its EC2 instances where IMDSv2 is enforced and averaged this number across all organizations. We used this method so as not to overrepresent organizations that have a large number of EC2 instances and instead to measure adoption trends.
For historical data, we queried 14 days of CloudTrail logs and used the field `userIdentity.sessionContext.ec2RoleDelivery` to determine if IMDSv2 had been used to request the initial session credentials. We only analyzed CloudTrail events generated by EC2 instances, i.e., those where `userIdentity.session_name` starts with `i-`.
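The sketch below illustrates both steps: classifying a CloudTrail event by its `ec2RoleDelivery` value (which AWS sets to `2.0` when the initial role credentials were requested through IMDSv2), and computing the per-organization average described for the enforcement graph. The data shapes and function names are ours.

```python
# Illustrative sketches of both steps described above.
from collections import defaultdict

def credentials_fetched_via_imdsv2(event: dict) -> bool:
    # CloudTrail sets ec2RoleDelivery to "2.0" when the initial role
    # credentials were requested through IMDSv2, and "1.0" otherwise.
    session_context = (event.get("userIdentity") or {}).get("sessionContext") or {}
    return session_context.get("ec2RoleDelivery") == "2.0"

def average_imdsv2_enforcement(instances) -> float:
    """instances: iterable of (org_id, imdsv2_enforced) pairs.

    Computes each organization's enforcement rate first, then averages the
    rates, so organizations with many EC2 instances are not overrepresented.
    """
    per_org = defaultdict(lambda: [0, 0])  # org_id -> [enforced_count, total_count]
    for org_id, enforced in instances:
        per_org[org_id][0] += int(bool(enforced))
        per_org[org_id][1] += 1
    rates = [enforced / total for enforced, total in per_org.values()]
    return sum(rates) / len(rates) if rates else 0.0
```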
For AWS, we considered an S3 bucket public if both of the following were true:
- The bucket policy allows `s3:GetObject` on all objects in the bucket to a wildcard principal.
- Neither the bucket’s nor the AWS account’s public access block configuration has `restrict_public_buckets` set to `true`.

We considered that an S3 bucket is “covered by a public access block” if the bucket’s or the AWS account’s public access block configuration has all four settings (`block_public_acls`, `block_public_policy`, `ignore_public_acls`, and `restrict_public_buckets`) set to `true`.
We excluded from the analysis S3 buckets that are made public through the static website feature: serving a static website is a valid use case for public S3 buckets that is not necessarily fronted by CloudFront, and it is a strong signal that the bucket was explicitly meant to be public by design.
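A simplified sketch of these two S3 rules is shown below. It operates on already-fetched data (the bucket policy document and the two public access block configurations as Python dicts, using the property names of the S3 `GetPublicAccessBlock` API) and deliberately ignores the `Resource` element, policy conditions, and ACL-based exposure.

```python
# Simplified sketch of the S3 rules above, operating on already-fetched data.
from typing import Optional

def allows_public_get_object(policy: dict) -> bool:
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        if stmt.get("Principal") not in ("*", {"AWS": "*"}):
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        if any(a.lower() in ("s3:getobject", "s3:*", "*") for a in actions):
            return True  # a wildcard principal can read objects
    return False

def restricts_public_buckets(pab: Optional[dict]) -> bool:
    return bool(pab and pab.get("RestrictPublicBuckets"))

def bucket_is_public(policy: dict, bucket_pab: Optional[dict], account_pab: Optional[dict]) -> bool:
    return (
        allows_public_get_object(policy)
        and not restricts_public_buckets(bucket_pab)
        and not restricts_public_buckets(account_pab)
    )

def covered_by_public_access_block(bucket_pab: Optional[dict], account_pab: Optional[dict]) -> bool:
    settings = ("BlockPublicAcls", "BlockPublicPolicy",
                "IgnorePublicAcls", "RestrictPublicBuckets")
    def all_set(pab: Optional[dict]) -> bool:
        return bool(pab) and all(pab.get(s) for s in settings)
    # Covered if either the bucket's or the account's configuration sets all four.
    return all_set(bucket_pab) or all_set(account_pab)
```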
For Azure, we considered that an Azure blob storage container is publicly accessible if both of the following were true:
- Its `PublicAccess` field was set to `blob` or `container`.
- The storage account it resides in does not block public access (i.e., its `allowBlobPublicAccess` attribute is not set to `false`).
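A minimal sketch of this check, assuming `container` and `account` are dicts shaped like the ARM representations of a blob container and its storage account (property casing may differ depending on how the data was fetched):

```python
# Minimal sketch of the Azure rule above.
def container_is_public(container: dict, account: dict) -> bool:
    public_access = str(container.get("properties", {}).get("publicAccess", "")).lower()
    allow_blob_public_access = account.get("properties", {}).get("allowBlobPublicAccess")
    # allowBlobPublicAccess left unset (None) does not block public access.
    return public_access in ("blob", "container") and allow_blob_public_access is not False
```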
For AWS, we considered that an EC2 instance has an administrative role if it has an instance role that’s attached either to the `AdministratorAccess` AWS-managed policy or to a custom policy that has at least one statement allowing all actions on all resources, with no conditions.
We considered that an EC2 instance has “excessive data access” if one of the following conditions was true:
- Its instance role has any of the following combinations of permissions on the resource `*`, through an inline or attached policy:
s3:listallmybuckets, s3:listbucket, s3:getobject
dynamodb:listtables, dynamodb:scan
dynamodb:listtables, dynamodb:exporttabletopointintime
ec2:describesnapshots, ec2:modifysnapshotattribute
ec2:describesnapshots, ebs:listsnapshotblocks, ebs:listchangedblocks, ebs:getsnapshotblock
rds:describedbclustersnapshots, rds:modifydbclustersnapshotattribute
rds:describedbsnapshots, rds:modifydbsnapshotattribute
secretsmanager:listsecrets, secretsmanager:getsecretvalue
ssm:describeparameters, ssm:getparameter
ssm:describeparameters, ssm:getparameters
ssm:getparametersbypath
- Its instance role is attached to one of the following AWS managed policies (which all meet the criteria above):
AdministratorAccess
AdministratorAccess-Amplify
AmazonApplicationWizardFullaccess
AmazonDataZoneProjectRolePermissionsBoundary
AmazonDocDBConsoleFullAccess
AmazonDocDBElasticFullAccess
AmazonDocDBFullAccess
AmazonDynamoDBFullAccess
AmazonDynamoDBFullAccesswithDataPipeline
AmazonDynamoDBReadOnlyAccess
AmazonEC2FullAccess
AmazonEC2RoleforDataPipelineRole
AmazonElasticMapReduceforEC2Role
AmazonElasticMapReduceFullAccess
AmazonElasticMapReduceReadOnlyAccess
AmazonElasticMapReduceRole
AmazonGrafanaRedshiftAccess
AmazonLaunchWizard_Fullaccess
AmazonLaunchWizardFullaccess
AmazonLaunchWizardFullAccessV2
AmazonMacieServiceRole
AmazonMacieServiceRolePolicy
AmazonRDSFullAccess
AmazonRedshiftFullAccess
AmazonS3FullAccess
AmazonS3ReadOnlyAccess
AmazonSageMakerFullAccess
AmazonSSMAutomationRole
AmazonSSMFullAccess
AmazonSSMReadOnlyAccess
AWSBackupServiceRolePolicyForBackup
AWSCloudTrailFullAccess
AWSCodePipelineReadOnlyAccess
AWSCodeStarServiceRole
AWSConfigRole
AWSDataExchangeFullAccess
AWSDataExchangeProviderFullAccess
AWSDataLifecycleManagerServiceRole
AWSDataPipelineRole
AWSElasticBeanstalkCustomPlatformforEC2Role
AWSElasticBeanstalkFullAccess
AWSElasticBeanstalkReadOnlyAccess
AWSIoTDeviceTesterForFreeRTOSFullAccess
AWSLambdaFullAccess
AWSLambdaReadOnlyAccess
AWSMarketplaceSellerFullAccess
AWSMarketplaceSellerProductsFullAccess
AWSOpsWorksRole
DatabaseAdministrator
DataScientist
NeptuneConsoleFullAccess
NeptuneFullAccess
ReadOnlyAccess
SecretsManagerReadWrite
ServerMigrationServiceRole
SystemAdministrator
VMImportExportRoleForAWSConnector
We considered that an EC2 instance has “permissions allowing privilege escalation” if one of the following conditions was true:
- Its instance role has any of the following combinations of permissions on the resource `*`, through an inline or attached policy:
iam:createaccesskey
iam:createloginprofile
iam:updateloginprofile
iam:updateassumerolepolicy
iam:createpolicyversion
iam:attachrolepolicy
iam:putrolepolicy
iam:createuser, iam:putuserpolicy, iam:createaccesskey
iam:createuser, iam:putuserpolicy, iam:createloginprofile
iam:createuser, iam:addusertogroup, iam:createaccesskey
iam:createuser, iam:addusertogroup, iam:createloginprofile
iam:createuser, iam:attachuserpolicy, iam:createaccesskey
iam:createuser, iam:attachuserpolicy, iam:createloginprofile
iam:passrole, lambda:createfunction, lambda:addpermission
iam:passrole, codestar:createproject
iam:passrole, datapipeline:createpipeline
iam:passrole, cloudformation:createstack
iam:passrole, lambda:createfunction, lambda:createeventsourcemapping
iam:passrole, lambda:createfunction, lambda:invokefunction
iam:passrole, ec2:runinstances
- Its instance role is attached to one of the following AWS managed policies (which all meet the criteria above):
AdministratorAccess
AdministratorAccess-Amplify
AmazonDynamoDBFullAccess
AmazonDynamoDBFullAccesswithDataPipeline
AmazonEC2ContainerServiceFullAccess
AmazonEC2SpotFleetTaggingRole
AmazonECS_FullAccess
AmazonElasticMapReduceFullAccess
AmazonElasticMapReduceRole
AutoScalingServiceRolePolicy
AWSBatchServiceRole
AWSCodeStarServiceRole
AWSDataPipelineRole
AWSEC2FleetServiceRolePolicy
AWSEC2SpotFleetServiceRolePolicy
AWSEC2SpotServiceRolePolicy
AWSElasticBeanstalkFullAccess
AWSElasticBeanstalkManagedUpdatesServiceRolePolicy
AWSElasticBeanstalkService
AWSLambda_FullAccess
AWSLambdaFullAccess
AWSMarketplaceFullAccess
AWSMarketplaceImageBuildFullAccess
AWSOpsWorksRegisterCLI
AWSServiceRoleForAmazonEKSNodegroup
AWSServiceRoleForGammaInternalAmazonEKSNodegroup
AWSServiceRoleForSMS
DataScientist
EC2FleetTimeShiftableServiceRolePolicy
IAMFullAccess
ServerMigrationServiceLaunchRole
We considered that an EC2 instance has “permissions allowing lateral movement” if one of the following conditions was true:
- Its instance role has any of the following combinations of permissions on the resource `*`, through an inline or attached policy:
ec2-instance-connect:sendsshpublickey
ssm:startsession
ssm:sendcommand
ec2:getserialconsoleaccessstatus, ec2:describeinstances, ec2:describeinstancetypes, ec2-instance-connect:sendserialconsolesshpublickey
- Its instance role is attached to one of the following AWS managed policies (which all meet the criteria above):
AdministratorAccess
AmazonApplicationWizardFullaccess
AmazonLaunchWizardFullaccess
AmazonSSMAutomationRole
AmazonSSMFullAccess
AmazonSSMMaintenanceWindowRole
AmazonSSMServiceRolePolicy
AWSOpsWorksCMServiceRole
EC2InstanceConnect
When computing effective permissions of an IAM role, SCPs and permissions boundaries were not taken into account.
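The sketch below illustrates how the permission combinations listed above could be matched against the set of actions a role is allowed on the resource `*` (collected from its inline and attached policies and lowercased). The constant is truncated and the helper names are ours; as noted, SCPs and permissions boundaries are ignored.

```python
# Sketch of the combination matching used for the three categories above.
from fnmatch import fnmatch

EXCESSIVE_DATA_ACCESS_COMBOS = [
    {"s3:listallmybuckets", "s3:listbucket", "s3:getobject"},
    {"secretsmanager:listsecrets", "secretsmanager:getsecretvalue"},
    # ... remaining combinations from the lists above
]

def action_granted(action: str, allowed_actions: set) -> bool:
    # Allowed actions may contain wildcards, e.g. "s3:*" or "*".
    return any(fnmatch(action, pattern) for pattern in allowed_actions)

def matches_any_combination(allowed_actions: set, combinations: list) -> bool:
    return any(
        all(action_granted(action, allowed_actions) for action in combination)
        for combination in combinations
    )

# Example: a role allowing "s3:*" on "*" matches the first combination.
assert matches_any_combination({"s3:*"}, EXCESSIVE_DATA_ACCESS_COMBOS)
```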
For Google Cloud, we considered that:
- A VM has administrator privileges on the project when the default compute service account is used with the `cloud-platform` scope.
- A VM has read access to the project’s Cloud Storage when the default compute service account is used with the `devstorage.read_only` scope.
We excluded VMs in projects where the default compute service account had not been granted `editor` or `owner` access. In other words, we took into account the case where the recommended `automaticIamGrantsForDefaultServiceAccounts` organization policy is in effect, “neutralizing” the default permissions of the default compute service account.
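A sketch of this classification is shown below. It assumes `instance` is a dict shaped like the Compute Engine API’s instance resource and that the default compute service account is identified by its well-known email suffix; the project-level check on `editor`/`owner` grants is done separately and is not shown.

```python
# Sketch of the Google Cloud classification above.
from typing import Optional

DEFAULT_COMPUTE_SA_SUFFIX = "-compute@developer.gserviceaccount.com"
SCOPE_PREFIX = "https://www.googleapis.com/auth/"

def classify_vm(instance: dict) -> Optional[str]:
    for sa in instance.get("serviceAccounts", []):
        if not sa.get("email", "").endswith(DEFAULT_COMPUTE_SA_SUFFIX):
            continue
        scopes = {s[len(SCOPE_PREFIX):] if s.startswith(SCOPE_PREFIX) else s
                  for s in sa.get("scopes", [])}
        if "cloud-platform" in scopes:
            return "administrator privileges on the project"
        if "devstorage.read_only" in scopes:
            return "read access to the project's Cloud Storage"
    return None
```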
For AWS, we considered that a virtual machine is publicly available if all of the following were true:
- It has a public IP address attached.
- It’s in a public subnet (i.e., a subnet that has a default route to an internet gateway).
- It has a security group rule that allows at least one port from 0.0.0.0/0, and that port is not blocked by the network ACL attached to the subnet (the security group part of this check is sketched below).
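The sketch below covers only the security group criterion: collecting the ports opened to 0.0.0.0/0 while ignoring port ranges wider than 10 ports (see the note at the end of this section). `ip_permissions` is assumed to follow the shape returned by EC2 `DescribeSecurityGroups`; the public IP, public subnet, and network ACL checks are performed separately.

```python
# Sketch of the security group part of the AWS exposure check above.
MAX_PORT_RANGE = 10

def ports_open_to_internet(ip_permissions: list) -> set:
    ports = set()
    for permission in ip_permissions:
        if not any(r.get("CidrIp") == "0.0.0.0/0" for r in permission.get("IpRanges", [])):
            continue
        from_port = permission.get("FromPort")
        to_port = permission.get("ToPort", from_port)
        if from_port is None or from_port < 0:
            continue  # e.g. "all traffic" or ICMP rules carry no usable port range
        if to_port - from_port + 1 > MAX_PORT_RANGE:
            continue  # intent behind very wide ranges cannot be assumed
        ports.update(range(from_port, to_port + 1))
    return ports
```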
For Azure, we considered that a virtual machine is publicly available if at least one of the following was true:
- It has a network interface with a public IP and a network security group that allows at least one port from 0.0.0.0/0.
- It has a network interface with a public IP with the deprecated “basic” SKU and no network security group attached.
For Google Cloud, we considered that a virtual machine is publicly available if both of the following were true:
- It has a network interface with a public IP.
- At least one of the firewall rules that apply to the instance allows at least one port from 0.0.0.0/0, taking into account ‘deny’ rules and relative priority.
For Google Cloud, we define “firewall rules that apply to the instance” as:
- Firewall rules that apply to the whole network where the virtual machine is located
- Firewall rules that apply to the service account the virtual machine uses
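The following is a simplified sketch of this evaluation for a single TCP port: among the applicable rules (network-wide rules plus rules targeting the instance’s service account), the match with the lowest priority number wins, and a deny rule takes precedence over an allow rule of equal priority. Rule dicts are assumed to follow the Compute Engine firewall resource shape, and only rules whose source ranges include 0.0.0.0/0 are considered.

```python
# Simplified sketch of the firewall evaluation above for a single TCP port.
def _rule_matches(rule: dict, port: int) -> bool:
    if rule.get("direction", "INGRESS") != "INGRESS" or rule.get("disabled"):
        return False
    if "0.0.0.0/0" not in rule.get("sourceRanges", []):
        return False
    for entry in rule.get("allowed", []) + rule.get("denied", []):
        if entry.get("IPProtocol") not in ("tcp", "all"):
            continue
        port_specs = entry.get("ports")
        if not port_specs:
            return True  # no ports listed means the whole port range
        for spec in port_specs:
            low, _, high = str(spec).partition("-")
            if int(low) <= port <= int(high or low):
                return True
    return False

def port_open_to_internet(applicable_rules: list, port: int) -> bool:
    matches = [r for r in applicable_rules if _rule_matches(r, port)]
    if not matches:
        return False  # the implied "deny all ingress" rule applies
    # Lowest priority number wins; at equal priority, deny rules (no "allowed"
    # key) sort first and therefore take precedence over allow rules.
    matches.sort(key=lambda r: (r.get("priority", 1000), "allowed" in r))
    return "allowed" in matches[0]
```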
For all three clouds, we did not take into account port ranges wider than 10 ports. This is because when a large number of ports are open (e.g., 1-65,535), it’s not possible to infer the original intent.
In the graph showing the distribution of open ports across publicly exposed instances, a specific instance may have one or multiple open ports and might consequently appear in several categories.
We purposely did not include the HTTP and HTTPS ports in the graph, as exposing web applications to the internet is not generally regarded as a bad practice.