For this report, we compiled usage data from thousands of companies in Datadog's customer base. But while Datadog customers cover the spectrum of company size and industry, they do share some common traits. First, they tend to be serious about software infrastructure and application performance. They also skew more heavily toward adoption of cloud platforms and services than the general population. All the results in this report are biased by the fact that the data comes from our customer base, a large but imperfect sample of the entire global market.
AWS, Azure, Google Cloud, and other cloud platforms offer a variety of compute services to help developers solve problems for their customers. Serverless has evolved as a category label and marketing term for a subset of those compute services that share at least some of a common set of principles:
- Improve time to delivery and time to value for developers
- Reduce operational costs
- Abstract infrastructure configuration and management
The majority of these services also provide the opportunity for—but no guarantee of—cost savings with granular pricing models that only charge for the resources applications consume in a multi-tenant model, and scale down to zero billable consumption when resources are not in use.
The precise level of infrastructure abstraction and the granularity of billing behavior exist on a spectrum. We see five common categories that have emerged across clouds, with significant functional overlap and blurred lines between categories, both across and within clouds:
- Containers with serverless orchestrators
- Application platform as a service (PaaS)
- Fully managed container applications
- Functions as a Service (FaaS)
- Edge functions
For the purposes of this report, we focus primarily on the adoption of serverless functions, fully managed container applications, and edge functions. We have also considered PaaS and containers with serverless orchestrators in certain facts to provide a more complete view of the technology choices our customer base is making to deliver value for their customers quickly, even though those services do not fit every definition of serverless.
In the 2022 edition of The State of Serverless, we found that over half of all organizations running in each major cloud (AWS, Google Cloud, and Azure) had adopted serverless. We consider an organization to be a customer of a given cloud if they run at least five hosts per month in that particular cloud. They are also considered a customer of a cloud if they run at least five functions or one serverless application per month in that cloud.
In 2023, we expanded our definition of serverless workloads and the set of metrics we used to detect host-based usage within each cloud. This resulted in a higher number of total detected customers of each cloud but a lower detected percentage of serverless organizations within Azure and AWS in 2022. However, each cloud continued to show steady growth year over year in serverless adoption.
Note: In order to determine what percentage of organizations have adopted serverless in each cloud, we included customers monitoring the following technologies:
- AWS: AWS Lambda, AWS App Runner, ECS Fargate, EKS Fargate, AWS CloudFront Functions
- Azure: Azure Functions, Azure Container Apps, Azure Container Instances
- Google Cloud: Google Cloud Functions, Google App Engine-Flex, Google Cloud Run
Some organizations meet both sets of criteria, whereas others meet only one. For the purposes of this fact, organizations that met the latter set of criteria as of May 2023 are considered to have adopted serverless compute, and we make year-over-year comparisons with May 2022.
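The criteria above can be sketched as a pair of simple predicates. This is an illustrative sketch only: the thresholds and per-cloud service lists mirror the methodology described here, but the function names and data shapes are our own invention, not Datadog's actual detection pipeline.

```python
# Serverless services counted per cloud, as listed in the note above.
SERVERLESS_SERVICES = {
    "aws": {"AWS Lambda", "AWS App Runner", "ECS Fargate",
            "EKS Fargate", "AWS CloudFront Functions"},
    "azure": {"Azure Functions", "Azure Container Apps",
              "Azure Container Instances"},
    "gcp": {"Google Cloud Functions", "Google App Engine-Flex",
            "Google Cloud Run"},
}

def is_cloud_customer(monthly_hosts: int, monthly_functions: int,
                      monthly_serverless_apps: int) -> bool:
    """An org is a customer of a cloud if it runs at least five hosts,
    at least five functions, or at least one serverless application
    per month in that cloud."""
    return (monthly_hosts >= 5
            or monthly_functions >= 5
            or monthly_serverless_apps >= 1)

def has_adopted_serverless(cloud: str, monitored_services: set) -> bool:
    """An org counts as having adopted serverless in a cloud if it
    monitors any of that cloud's serverless services."""
    return bool(SERVERLESS_SERVICES[cloud] & monitored_services)
```

An organization monitoring only four hosts and four functions in a cloud would satisfy neither customer criterion, while one running a single serverless application would satisfy the second.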
We use the same definition of serverless organizations as in Fact 1.
We use the same definition of serverless organizations as in Fact 1. To identify customers using emerging cloud platforms, we include organizations submitting metrics and/or logs from Cloudflare Workers, Fastly Compute@Edge, Vercel Functions, and Netlify Functions.
We broke down invocations from monitored Lambda functions in May 2023 using the runtime metadata associated with invocation metrics. We combined the runtime versions for each language to aggregate invocations at the language level.
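The rollup from runtime versions to languages can be sketched as a simple aggregation. The runtime identifiers below resemble real Lambda runtime names, but the mapping shown is partial and the invocation counts in the example are made up for illustration.

```python
from collections import defaultdict

# Partial, illustrative mapping from Lambda runtime identifiers
# to languages; the real metadata covers many more runtimes.
RUNTIME_LANGUAGE = {
    "python3.9": "Python", "python3.10": "Python",
    "nodejs16.x": "Node.js", "nodejs18.x": "Node.js",
    "java11": "Java", "java17": "Java",
}

def invocations_by_language(invocations_by_runtime: dict) -> dict:
    """Sum per-runtime invocation counts into per-language totals."""
    totals = defaultdict(int)
    for runtime, count in invocations_by_runtime.items():
        totals[RUNTIME_LANGUAGE[runtime]] += count
    return dict(totals)

sample = {"python3.9": 120, "python3.10": 80, "nodejs18.x": 150}
# invocations_by_language(sample) -> {"Python": 200, "Node.js": 150}
```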
To determine relative cold start times, we examined the Datadog enhanced init_duration metric in May 2023. We combined the runtime versions for each language to aggregate initialization duration times at the language level. The relative cold start times are based on the median cold start duration in each language. Percent of Lambda functions with more than 1,024 MB in allocated memory is based on metadata from invoked Lambda functions in May 2023.
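The relative cold start calculation described above amounts to taking per-language medians and normalizing. A minimal sketch, assuming init_duration samples have already been grouped by language; the function name and sample values are illustrative, not Datadog's actual analysis code.

```python
from statistics import median

def relative_cold_starts(init_durations_ms: dict) -> dict:
    """Compute the median init_duration per language, then express each
    median relative to the fastest language's median."""
    medians = {lang: median(durations)
               for lang, durations in init_durations_ms.items()}
    fastest = min(medians.values())
    return {lang: m / fastest for lang, m in medians.items()}

sample = {"Python": [100, 200, 300], "Java": [400, 600, 800]}
# relative_cold_starts(sample) -> {"Python": 1.0, "Java": 3.0}
```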
Based on metadata from invoked Lambda functions in May 2023. We define ARM-eligible runtimes as any runtimes compatible with ARM as of May 2023.
Based on metadata from invoked Lambda functions in May 2023.
Based on a sample of monitored Lambda functions from May 2023.