CNCF’s KubeCon + CloudNativeCon is the most important event for Kubernetes adopters and technologists. The first KubeCon, which took place in San Francisco in November 2015, gathered around 500 developers and early adopters to discuss the technology and its future. The 2020 North America edition marked the event’s fifth anniversary and drew 25,000 registrants.
Kubernetes has passed the early adoption phase and entered the early majority phase, with enterprises embracing Kubernetes and cloud native technologies for their application deployment needs. This maturity is reflected in the conference content: sessions focus less on the development of Kubernetes itself and more on end-user migration stories, security, developer experience, and the cloud native ecosystem of projects around Kubernetes.
As a silver sponsor of this year’s conference, Datadog held a booth where we showcased the latest updates to our Kubernetes monitoring solution, such as our revamp of the Live Containers page. We are also big Kubernetes users ourselves, and two talks from our engineers were accepted. Laurent Bernaille shared some of our failure (and recovery) stories from running our Kubernetes platform, and Tabitha Sable discussed ways you can misconfigure TLS certificates in Kubernetes, demonstrating sample attacks that clusters may be vulnerable to in those cases.
We also spent a lot of time watching talks, visiting other sponsor booths, and talking to the community to better understand the latest trends. In this post, we want to share our findings on the current state of the cloud native and Kubernetes ecosystem, as well as what its future holds.
One of the week’s main topics was developer experience (DX) in Kubernetes. The promise of the cloud native journey is that developers can focus more (and ideally, only) on their business logic, and less (and ideally, not at all) on infrastructure. Kubernetes is not quite there yet: users still need to understand the various Kubernetes API objects, as well as networking concepts such as ingress.
Many companies and open source projects have risen to the challenge of improving Kubernetes DX. The talks we heard on this topic fall into two categories: configuration management and GitOps, and developer workflows and Kubernetes-based platforms.
As the number of applications you deploy to Kubernetes grows, managing their raw YAML configuration files by hand becomes extremely tedious and error-prone. Several engineering teams presented talks on this topic, suggesting that it remains one of the biggest DX challenges.
Three talks surveyed the options for application configuration management and offered frameworks for choosing the best one for your use case: Katie Gamanji’s The Building Blocks of DX; Phillip Wittrock and Gabbi Fisher’s Five Hundred Twenty-five Thousand K8s CLIs; and Jesse Suen and Daniel Thomson’s Eating Your Vegetables: How to Manage 2.5 Million Lines of YAML.
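One widely used approach in this space is Kustomize, which is built into kubectl and layers environment-specific overlays on top of a shared base of manifests. The sketch below is illustrative only — the app name, tag, and directory layout are our own invention, not examples from the talks:

```yaml
# base/kustomization.yaml — manifests shared by every environment
resources:
  - deployment.yaml
  - service.yaml

# overlays/prod/kustomization.yaml — production-only tweaks layered on the base
resources:
  - ../../base
replicas:
  - name: my-app      # hypothetical Deployment name
    count: 5
images:
  - name: my-app      # retag the container image for this environment
    newTag: v1.2.3
```

Running `kubectl apply -k overlays/prod` renders and applies the patched manifests, so each environment’s differences live in a small overlay instead of a full copy of the YAML.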
There has been a lot of conversation about whether Kubernetes should be exposed directly to developers or if there should be a platform built on top of Kubernetes to provide an extra layer of abstraction.
The CNCF is more interested in providing OSS building blocks that others can use to build platforms than in offering a new platform itself. Many vendors are already building platforms on top of Kubernetes, and we saw numerous talks about end-to-end developer workflows and end users’ internal platforms, including Colin Murphy’s Managing Developer Workflows with the Kubernetes API and David Sudia’s More Power, Less Pain: Building an Internal Platform with CNCF Tools.
With Kubernetes entering the enterprise, security and governance are more important than ever. Cloud native workloads are ephemeral and dynamic by nature and therefore require new strategies and solutions to remain secure. This year’s conference paid great attention to shift-left security and DevSecOps, and attendees were also introduced to several projects and solutions related to governance and securing workloads at runtime.
In an accessible and entertaining talk, Steven Terrana and Dan “POP” Papandrea gave an end-to-end overview of DevSecOps and the activities it involves, from image scanning and static analysis to runtime security.
Additionally, Daniel Feldman explained how SPIFFE and SPIRE, two CNCF incubating projects, can help your organization move away from perimeter security toward a zero-trust model, in which each service in your cluster must prove its identity to any other service it calls.
When it comes to governance, most projects are geared toward maintaining policy as code. This lets operations teams keep their policies in version control and in their CI/CD pipelines, ensuring that they can be audited and peer reviewed. Barak Schoster presented Checkov, an open source project that allows operators to define policies for Kubernetes YAML files and run the checks as part of an application’s CI/CD pipeline. Jeremy Rickard explained how Open Policy Agent (OPA), a CNCF incubating project, lets you move policy checks from the CI/CD pipeline into the cluster itself: you write policies for Kubernetes workloads, and OPA enforces them through validating and mutating admission controllers, without your having to write those controllers yourself.
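To make the policy-as-code idea concrete, here is a toy check of the kind these tools automate — written from scratch for illustration, not using Checkov’s or OPA’s actual API. It scans a Deployment-like manifest for containers that lack a memory limit, the sort of rule a CI pipeline would fail the build on:

```python
# Toy "policy as code" sketch: a hand-rolled check, not a real tool's API.

def check_memory_limits(manifest: dict) -> list[str]:
    """Return a violation message for each container lacking a memory limit."""
    violations = []
    containers = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )
    for container in containers:
        limits = container.get("resources", {}).get("limits", {})
        if "memory" not in limits:
            violations.append(
                f"container {container['name']!r} has no memory limit"
            )
    return violations

# A hypothetical Deployment manifest, already parsed from YAML into a dict.
deployment = {
    "kind": "Deployment",
    "spec": {"template": {"spec": {"containers": [
        {"name": "app", "resources": {"limits": {"memory": "256Mi"}}},
        {"name": "sidecar", "resources": {}},
    ]}}},
}

print(check_memory_limits(deployment))
# → ["container 'sidecar' has no memory limit"]
```

A CI job would run checks like this over every manifest in the repository and fail the build when the violation list is non-empty; tools like Checkov ship large catalogs of such rules so teams don’t have to write them by hand.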
Another topic that attracted a lot of attention was the management of multiple clusters. As larger companies adopt Kubernetes, the ability to manage several clusters effectively and securely becomes more important, and new solutions are being developed to make sure Kubernetes is ready for it.
There were several talks on network routing between clusters. Leigh Capili gave an overview of the strategies available for routing traffic across clusters, from NodePort services and Ingresses to sharing route information between clusters over BGP. In a more hands-on talk, Daniel Bryant and Thomas Rampelberg demonstrated how to configure Linkerd and Ambassador, two CNCF projects, to enable cross-cluster communication.
Looking to the future of multi-cluster workload management, Vallery Lancey introduced early work on a model for cluster scheduling. Clusters would behave much like nodes: workloads would carry cluster constraints, and a scheduler would do the actual work of selecting the clusters on which each workload runs. This new API is being designed within the SIG Multicluster group.
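The model is still being defined, but the idea can be sketched with an entirely hypothetical manifest — the API group, kind, and every field below are invented for illustration and do not correspond to any actual SIG Multicluster proposal:

```yaml
# Hypothetical resource: cluster constraints select clusters the way a
# nodeSelector selects nodes, and a multi-cluster scheduler does the placement.
apiVersion: placement.example.io/v1alpha1   # invented API group and version
kind: WorkloadPlacement                     # invented kind
metadata:
  name: checkout-service
spec:
  clusterSelector:          # analogous to a Pod's nodeSelector, but for clusters
    matchLabels:
      region: us-east
      tier: production
  replicas: 3               # to be spread across the matching clusters
```

The appeal of this shape is familiarity: teams already reason about node affinity and taints, so lifting the same vocabulary up one level could make multi-cluster placement feel like ordinary Kubernetes scheduling.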
We’ve highlighted some of the key topics at KubeCon + CloudNativeCon 2020, but this post is not exhaustive. For instance, developers have started to leverage Kubernetes for other types of workloads, such as machine learning, edge computing, and network functions virtualization. These workloads require specific solutions, so we saw an increase in sessions dedicated to these verticals.
Datadog will continue to participate in KubeCon + CloudNativeCon and its ecosystem to ensure that we remain the best tool for monitoring your Kubernetes clusters and their diverse workloads. To learn how we use Datadog to gain visibility into our own Kubernetes clusters, watch our latest “Datadog on” episode on Kubernetes monitoring.