Updated November 2022. This research builds on previous editions of our container usage report, container orchestration report, and Docker research report.
Modern engineering teams continue to expand their use of containers, and container-based microservice applications are now pervasive. Growing container usage is driving organizations to adopt complementary technologies that simplify cluster operations, but this expanding container footprint also presents security challenges.
For this report, we examined more than 1.5 billion containers run by tens of thousands of Datadog customers to understand the state of the container ecosystem. Read on for more insights and trends gathered from the latest real-world usage data.
“This survey demonstrates that the container and Kubernetes revolution is going from strength to strength. The results reveal how cloud native organizations using containers and Kubernetes are not just moving faster but gaining increased confidence—building and deploying larger applications and workloads in more mission-critical, production environments than ever.
Cloud native organizations are well placed for the road ahead, thanks to the innovation driven by more than 175,000 contributors in the cloud native ecosystem. The technology they are creating means engineering teams of all sizes can build and run applications to meet the economic demands of today's apps.”
—Priyanka Sharma, Executive Director, Cloud Native Computing Foundation
Kubernetes continues to be the most popular container management system
Kubernetes is more popular than ever. Today, nearly half of container organizations run Kubernetes to deploy and manage containers in a growing ecosystem. Tools like Amazon Elastic Kubernetes Service (Amazon EKS) Blueprints and Amazon EKS Anywhere, along with other managed Kubernetes services, make it easy for teams to run Kubernetes clusters in the cloud and on premises.
“At AWS, we’re committed to giving our customers a streamlined Kubernetes experience so they can easily manage and scale their clusters while benefiting from the security and resiliency of a fully managed AWS service. New capabilities like Amazon EKS Blueprints and Amazon EKS Anywhere make it faster and easier for customers to configure and deploy Kubernetes clusters across AWS and on-premises environments, so they can get the same, consistent Amazon EKS experience wherever they need it to best support their applications and end users.”
—Barry Cooks, Vice President, Kubernetes at Amazon Web Services
Serverless container technologies continue to grow in popularity across all major public clouds
Usage of serverless container technologies from all major cloud providers—including AWS App Runner, AWS Fargate, Azure Container Apps, Azure Container Instances (ACI), and Google Cloud Run—increased from 21 percent in 2020 to 36 percent in 2022 (YTD). This echoes increases we saw in previous research that included a shift of Amazon ECS users toward AWS Fargate.
Customers cite reducing the need to provision and manage underlying infrastructure as one of the main reasons for adopting serverless technologies for containers. Those customers not using serverless technologies prefer the control and flexibility they get from managing their own infrastructure.
Use of multiple cloud providers increases with organization size
Our data shows that over 30 percent of container organizations using 1,000 or more hosts work in multiple clouds, and that multi-cloud usage is lowest among organizations running the fewest hosts. Also, we see that multi-cloud organizations have more containers on average than single-cloud organizations.
Kubernetes Ingress usage is rising
To manage requests from outside of the cluster at scale, administrators often use Ingress to configure routes to multiple services in the cluster. Today, more than 35 percent of organizations use Ingress, which has been generally available since Kubernetes version 1.19 was released in August 2020.
As our customers operate more clusters and pods, they face increasing complexity in routing and network management. Many early adopters of Kubernetes used cloud-provided load balancers to route traffic to their services. But Ingress is often more cost-efficient, and its adoption has increased steadily since its release.
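As a sketch of how Ingress consolidates routing, a single Ingress resource can fan traffic out to multiple backend Services by path, replacing one cloud load balancer per service. The host, service names, and ingress class below are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # hypothetical name
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller is installed
  rules:
    - host: shop.example.com       # hypothetical host
      http:
        paths:
          - path: /cart
            pathType: Prefix
            backend:
              service:
                name: cart-service      # hypothetical Service
                port:
                  number: 80
          - path: /catalog
            pathType: Prefix
            backend:
              service:
                name: catalog-service   # hypothetical Service
                port:
                  number: 80
```

Both paths share one externally reachable entry point, which is a large part of the cost advantage over per-service load balancers.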
Kubernetes Gateway API—which graduated to beta in July 2022—is the next step in the evolution of network management for containers. Gateway API provides advanced networking capabilities, including the use of custom resources and role-oriented design that uses API resources to model organizational roles. We look forward to seeing whether Gateway API displaces Ingress or whether the two technologies are used side by side.
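The role-oriented design mentioned above splits responsibilities across resources: a cluster operator typically owns the Gateway, while application teams own HTTPRoutes that attach to it. A minimal sketch using the beta API (names and the GatewayClass are hypothetical):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway            # hypothetical name, owned by cluster operators
spec:
  gatewayClassName: example-class  # hypothetical GatewayClass provided by a controller
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-route              # hypothetical name, owned by an application team
spec:
  parentRefs:
    - name: example-gateway        # attaches this route to the Gateway above
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /api
      backendRefs:
        - name: api-service        # hypothetical Service
          port: 80
```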
Service meshes are still early and Istio dominates usage
Service meshes provide service discovery, load balancing, timeouts, and retries, and allow administrators to manage the cluster's security and monitor its performance. Our previous research illustrated the early adoption of service meshes, and the initial patterns we saw are largely unchanged. Among our customers, we primarily see Istio and Linkerd, with Istio being more than three times as popular as Linkerd.
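To illustrate how lightweight Istio adoption can be at the namespace level, labeling a namespace opts its pods into automatic Envoy sidecar injection (the namespace name here is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: app-team                 # hypothetical namespace
  labels:
    istio-injection: enabled     # Istio injects the Envoy sidecar into new pods here
```

Once injected, the sidecars handle the mesh features described above, such as retries, timeouts, and mutual TLS, without application code changes.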
“Service meshes have proven the value of delivering consistent security, observability & control for traffic in the enterprise. Istio has clearly established itself as the leading mesh solution and I'm proud of the work the community has done to get to this point. The recently completed donation of Istio to the CNCF will grow and strengthen our community to build on this success.”
—Louis Ryan, Co-creator of Istio and Principal Engineer at Google
Most hosts use a Kubernetes release that's more than eighteen months old
Kubernetes releases three new versions per year to provide users with new features, security improvements, and bug fixes. We've seen in previous research that users often prefer to wait more than a year before adopting those new versions. We've learned anecdotally that some customers delay in order to ensure the stability of their clusters and compatibility with API versions. Today, the most popular version in use is v1.21, which was released in April 2021 and officially passed its end-of-life date earlier this year.
Over 30 percent of hosts running containerd use an unsupported version
Previous research shows an increase in usage of containerd, which is one of the CRI-compliant runtimes that organizations can adopt as Dockershim is being deprecated. We've found that only about 69 percent of containerd hosts are using version 1.5 or 1.6, which are the actively supported versions. Notably, about 31 percent of containerd hosts are using versions 1.4 or older, which have passed their end-of-life dates.
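A minimal sketch of auditing a host against the supported series described above. On a real host the version string would come from `containerd --version`; here a sample value stands in, and only the major.minor portion is compared:

```shell
# Classify a containerd version against the actively supported 1.5/1.6 series.
ver="1.4.13"                          # sample value; on a host, parse `containerd --version`
major_minor=$(echo "$ver" | cut -d. -f1,2)
case "$major_minor" in
  1.5|1.6) echo "supported" ;;        # actively supported series
  *)       echo "unsupported" ;;      # past end of life, e.g. 1.4.x
esac
# prints "unsupported" for 1.4.13
```

Fleet-wide, the same check could run across hosts via configuration management to surface runtimes that need upgrades.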
Running older software versions presents issues around security and compliance and, in the case of container runtimes, introduces the risk of vulnerabilities such as container escapes. The fact that many hosts are using unsupported container runtime versions highlights the challenges organizations face in running appropriate tooling to maintain container security and compliance. Serverless container technologies reduce the risks of outdated runtimes and the burden of manual updates, which may be one reason we've seen a shift to serverless containers across all clouds.
Access management is improving but continues to be a challenge
Kubernetes administrators use role-based access control (RBAC) to allow subjects (users, groups, or service accounts) to access or modify resources in the cluster. According to security best practices, subjects should only have necessary permissions, and administrators must use caution when granting RBAC privileges that are associated with escalation risks. These include permissions that enable subjects to list all secrets or create workloads, certificates, or token requests that could allow them to modify their own privileges.
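To make the escalation risk concrete, compare a broad grant with a scoped one. The first rule set below lets a subject read every Secret in the cluster, one of the risky permissions noted above; the second grants read-only access to a few resource types in a single namespace. Names and the namespace are hypothetical:

```yaml
# Risky: cluster-wide ability to read and list all Secrets
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: read-all-secrets         # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
---
# Least privilege: read-only access to selected resources in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader               # hypothetical name
  namespace: app-team            # hypothetical namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps"]
    verbs: ["get", "list", "watch"]
```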
The good news is that as organizations deploy more clusters, a decreasing percentage of those clusters use overly permissive privileges. We suspect that the number is coming down as organizations adopt security practices such as permission audits, and tools such as automated RBAC scanners. However, we found that about 40 percent of clusters still use lax privileges, which presents a security risk.
NGINX, Redis, and Postgres are—once again—the most popular container images
As of September 2022, the most popular off-the-shelf container images are:
- NGINX: Once again the most popular container image, NGINX provides caching, load balancing, and proxying capabilities and is used by nearly 50 percent of organizations that run containers.
- Redis: Organizations can deploy Redis in a container to use as a key-value data store, cache, or message broker.
- Postgres: Usage of this relational database has grown slightly from last year.
- Elasticsearch: This highly performant document store and search engine continues to be one of the most popular images in use.
- Kafka: Organizations can easily add event streaming capabilities to their applications by deploying Kafka in a container.
- RabbitMQ: RabbitMQ supports decoupled architecture in microservice-based applications.
- MongoDB: MongoDB continues to be one of the most popular NoSQL databases in use.
- MySQL: This open source database sits lower on the list than it used to, but its performance and scalability keep it among the most popular container images.
- Calico: Calico is a networking provider that lets administrators manage the security of the network within their Kubernetes clusters.
- GitLab: To help teams adopt and maintain DevOps practices, GitLab provides repository management, issue tracking, and CI/CD pipelines.
- Vault: Teams can use Vault to simplify secrets management and help maintain secure applications.
In Kubernetes StatefulSets, we found that Redis, Postgres, Elasticsearch, RabbitMQ, and Kafka were the most commonly deployed images.
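As a sketch of such a deployment, a StatefulSet gives each replica of a stateful image like Redis a stable identity and its own persistent volume. The names, replica count, and storage size below are illustrative, and a matching headless Service is assumed to exist:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis                    # hypothetical name
spec:
  serviceName: redis             # assumes a headless Service named "redis" exists
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7         # off-the-shelf image, as in the list above
          ports:
            - containerPort: 6379
  volumeClaimTemplates:          # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```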