
Lee Avital, Anton Ippolitov, Anusha Podila, and Jon Wolgast
When organizations first begin deploying workloads on Kubernetes, it's common for them to start with a permissive egress traffic policy that allows any workload to reach the internet. This approach can make it easier for teams to stay agile and to get services up and running in fast-moving environments. But as your Kubernetes footprint grows, it's important to minimize public internet access on a per-workload basis to improve your organization's security posture. Tools like Cilium and the AWS VPC Container Network Interface (CNI) can help you lock down traffic, but switching to a deny-by-default policy after workloads are already live presents a logistical problem: How can you tighten controls in this way without disrupting connectivity or causing a production outage? Even beginning to tackle the challenge is difficult because you need a way to analyze your traffic patterns at scale before you can appropriately devise and apply policies.
Datadog Cloud Network Monitoring (CNM) helps you roll out policy changes safely by giving you a clear picture of your entire organization's traffic patterns before you apply potentially disruptive traffic policies. In particular, using CNM, you can analyze egress traffic across thousands of Kubernetes namespaces to pinpoint the services for which you need to make outbound exceptions.
We know many teams are facing this very challenge when they decide to shift from allow-by-default to deny-by-default egress policies in Kubernetes. This blog post aims to help these teams by providing guidance about how to plan for and perform this policy change through the following steps:
- Detecting outbound network traffic with CNM
- Creating a list of namespaces that need outbound access
- Creating and applying the network policies
Detecting outbound network traffic with CNM
Kubernetes namespaces provide logical isolation between workloads within a physical cluster. By default, pods in these namespaces can access any external destination on the internet, but this default permissiveness can create a significant security risk. If an attacker compromises a pod, they potentially have a clear path to exfiltrate data or set up command-and-control to conduct further attacks anywhere on the internet.
To block outbound Kubernetes traffic by default, teams can use Cilium Network Policies (CNPs) or native Kubernetes NetworkPolicies. But before they can create the network policies, organizations with a large Kubernetes footprint need a way to determine which of their many namespaces truly need outbound traffic, and which don't.
Here's where CNM comes in. CNM enables you to visualize and analyze the flow of data to, from, and within your network. When enabled in the Datadog Agent across your environment, CNM gives you pervasive visibility into network traffic among your services, containers, availability zones, or any other tagged components.
Querying data gathered by CNM
The first step in locking down egress traffic is identifying which Kubernetes namespaces are currently making outbound connections.
If you're new to CNM, start by enabling it on your Kubernetes clusters and allow it to collect network traffic for an extended period of time. This will allow CNM to gather detailed metadata for every network connection, including both pod-to-pod and pod-to-internet communications.
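For example, if you deploy the Agent with Datadog's official Helm chart, CNM can be enabled through the chart's network monitoring setting. Here is a minimal sketch; the release name datadog-agent and the secret name datadog-secret are placeholders for your own setup:

# Enable Cloud Network Monitoring in the Datadog Agent Helm release
helm upgrade --install datadog-agent datadog/datadog \
  --set datadog.apiKeyExistingSecret=datadog-secret \
  --set datadog.networkMonitoring.enabled=true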
After the collection period is over, navigate to the CNM dashboard and perform a query to determine which pods have made outbound requests during this time. For example, you can use the query, shown in the screenshot below, to filter out private IPv4 address ranges, the link-local 169.254.0.0/16 address range, and the host loopback 127.0.0.0/8 address range.
What remains is traffic destined for the public internet. To view this egress traffic by source namespace and external destination, group the data by using the client_kube_namespace and server_domain tags. We will refer to the two namespaces listed in the CLIENT column, kube-system and gmp-system, as we continue to walk through our solution in this blog post.

Creating a list of namespaces that need outbound access
The next step is to create three lists of namespaces, saved as separate text files: the first containing the names of the namespaces with egress traffic (the results of the query above), the second containing all your namespaces, and the third containing the namespaces without egress traffic. The steps to create these files are described below.
Creating a text file with namespaces requiring outbound traffic
To create a file listing the namespaces with egress traffic, you can use CNM to export the query results. To do so, use the Download as CSV button appearing above the query results:

You can then transfer the namespaces within this CSV file to a text file as follows:
tail -n +2 cnm_output.csv | cut -d, -f1 | sort | uniq > namespaces_with_internet_egress.txt
As alluded to above, for the simplified example that we're walking through in this blog post, we'll assume that the contents of the namespaces_with_internet_egress.txt file include only the following two namespaces:
kube-system
gmp-system
Next, you'll need a separate list of all the namespaces in your environment.
Getting a complete list of namespaces from a Kubernetes cluster
You can create a list of all your namespaces by using the kubectl get namespaces command against a Kubernetes cluster, filtering for metadata.name and directing the output to a text file as follows:
kubectl get namespaces -o custom-columns=":metadata.name" > k8s_namespaces.txt
To flesh out our example scenario, we will assume the output reveals the following five namespaces:
default
kube-node-lease
kube-public
kube-system
gmp-system
Creating a list of namespaces without egress traffic
Next, you want to compare the two lists to find namespaces that exist in Kubernetes but have shown no egress traffic in the selected time period. These are the namespaces that you would want to lock down.
To perform this comparison, you can use grep with the invert match option -v (along with -F and -x so that each namespace is matched as an exact, literal line rather than as a substring) as follows:
grep -Fxv -f namespaces_with_internet_egress.txt k8s_namespaces.txt > namespaces_without_internet_egress.txt
This command takes all the lines from the list of all namespaces (k8s_namespaces.txt) that do not match any line in the list of namespaces with egress traffic (namespaces_with_internet_egress.txt), and then saves them to a new file named namespaces_without_internet_egress.txt.
In our example scenario, the output would appear as follows in namespaces_without_internet_egress.txt:
default
kube-node-lease
kube-public
Creating and applying the network policies
Once you've identified which Kubernetes namespaces require internet access, the final step is to create and apply egress policies that restrict outbound traffic accordingly. The goal is to adopt a deny-by-default model while allowing internet access where needed, and only where needed. Below, we walk through how to implement this solution by using two common approaches: Cilium and AWS VPC CNI.
Option 1: Creating and applying the network policies via Cilium
There are many ways you can configure Cilium network policies to block egress traffic to the public internet. This guide won't cover every possible scenario, but it provides general guidelines on how you can use Datadog CNM to create network policies that suit your needs as you adopt a deny-by-default stance.
The steps below assume that your clusters do not have Cilium installed and that you are starting with Cilium from scratch.
1. Install Cilium
To safely introduce policies without disrupting traffic, start by installing Cilium with Policy Audit Mode turned on. This allows you to define and test policies without enforcing them immediately.
You can use the official Cilium CLI to do this, as follows:
cilium install --set policyAuditMode=true
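Before applying any policies, it's worth confirming that the agents are healthy and that auditing is actually on. A quick check using the Cilium CLI might look like this:

# Wait until Cilium reports ready
cilium status --wait

# The agent configuration should show policy-audit-mode set to "true"
cilium config view | grep policy-audit-mode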
2. Apply a default cluster-wide egress policy
Next, define a baseline egress policy that gives every pod in the cluster permission to reach all necessary internal or private endpoints. Skip public internet destinations for now, as we will handle those in the next step. If you are just getting started, it is fine to start with wide permissions and scope them down later on.
For example, a good starting point would be a cluster-wide policy that looks like this:
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: default-cluster-egress
spec:
  description: "Baseline egress to internal endpoints"
  endpointSelector: {}
  egress:
    - toEntities: # All entities inside the current cluster
        - cluster
    - toCIDRSet: # Internal CIDRs
        - cidr: 10.0.0.0/8
        - cidr: 172.16.0.0/12
        - cidr: 192.168.0.0/16
Note, however, that if Policy Audit Mode is disabled and your Policy Enforcement Mode is set to default, this operation can actually be dangerous. In the default mode, all pods have unrestricted network access until they are selected by at least one policy. When a policy is applied to a pod, the pod enters the deny-by-default state, allowing only the traffic that is explicitly authorized by the policy rules.
This means that if you apply the above policy with Policy Audit Mode disabled in default enforcement mode, all pods in the cluster will immediately start dropping traffic to all destinations not explicitly mentioned in the policy. If you happen to have such a configuration, you can avoid this behavior by using the enableDefaultDeny flag in the policy spec:

enableDefaultDeny:
  egress: false
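For illustration, here is how that flag would sit in the cluster-wide policy from step 2. This is a sketch: the rest of the spec is unchanged, and pods selected by the policy keep unrestricted egress until you explicitly opt in to enforcement:

kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: default-cluster-egress
spec:
  endpointSelector: {}
  # Do not flip selected pods into deny-by-default for egress
  enableDefaultDeny:
    egress: false
  egress:
    - toEntities:
        - cluster
    - toCIDRSet:
        - cidr: 10.0.0.0/8
        - cidr: 172.16.0.0/12
        - cidr: 192.168.0.0/16
EOF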
3. Add namespace-level policies to permit public internet egress
Now, you will need to craft CNPs for each namespace that requires public internet egress.
For example, the first namespace in our namespaces_with_internet_egress.txt file is kube-system. We can start by adding the client_kube_namespace:kube-system filter to our initial CNM query to retrieve all public internet endpoints accessed by pods in this namespace. Once we have this list, we can use either Cilium's DNS-based policy rules or simple CIDR-based rules.
Here is an example policy that uses both types of rules for our kube-system namespace:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: public-internet-egress
  namespace: kube-system
spec:
  description: Public internet egress for all pods in the namespace
  endpointSelector: {}
  egress:
    - toCIDR: # Allow egress to one IP address
        - 1.1.1.1/32
    - toFQDNs: # Allow egress to a domain name
        - matchName: logging.googleapis.com
    # Enable toFQDN proxy
    - toPorts:
        - ports:
            - port: '53'
              protocol: ANY
          rules:
            dns:
              - matchPattern: '*'
4. Verify that policies function as expected
While Policy Audit Mode is enabled, the policies you just deployed are not actually enforced. Take this opportunity to verify that they would not drop any unintended traffic. There are multiple ways to do this; for example, you can use Cilium's Hubble CLI.
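For instance, while Policy Audit Mode is on, flows that would have been dropped under enforcement are reported with the AUDIT verdict. Assuming the Hubble CLI is installed locally, a quick check might look like this:

# Forward the Hubble Relay API to localhost in the background
cilium hubble port-forward &

# Stream flows that would be denied once enforcement is enabled
hubble observe --verdict AUDIT --follow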
5. Enable deny-by-default policy enforcement in Cilium
Once you have confirmed your policies are functioning as expected, switch Cilium to the always enforcement mode (and turn off Policy Audit Mode) to have the policies actually take effect:
cilium upgrade --set policyEnforcementMode=always --set policyAuditMode=false
In this mode, all pods in the cluster will be in deny-by-default mode, and all network traffic will need to be explicitly allowed. After enabling policy enforcement, you should double-check once again that no intended traffic is being dropped. (To do this, you can use Hubble's hubble_drop_total metric, for example.)
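As a command-line spot check (again assuming the Hubble CLI), you can also list recently dropped flows directly:

# Show the most recent dropped flows from Hubble's flow buffer
hubble observe --verdict DROPPED --last 100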
Option 2: Creating and applying the network policy through AWS VPC CNI-based egress controls
If you're using the AWS VPC CNI plug-in, note that it does not support cluster-wide policies or DNS-based policies. Nor does it have any form of audit mode. This means that as soon as a policy selects a pod, it will immediately switch into the deny-by-default mode. Given this situation, you will need to create namespace-specific policies and ensure that they allow all relevant traffic right away.
1. Ensure the AWS VPC CNI plug-in is configured for Network Policy support
You first need to enable the enableNetworkPolicy option for your AWS VPC CNI plug-in. The specific details of this setup will depend on your environment; for an example on Amazon EKS, you can consult the official documentation.
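For example, on a cluster that runs the VPC CNI as an EKS managed add-on, the feature can be enabled through the add-on's configuration values. The cluster name below is a placeholder:

# Turn on network policy support in the vpc-cni managed add-on
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --configuration-values '{"enableNetworkPolicy": "true"}'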
Once network policy support has been enabled, you can start creating two sets of network policies: one for namespaces that don't need public internet egress, and one for namespaces that do.
2. Create a policy for namespaces with public internet egress
The policy for namespaces that need public internet egress can simply allow all egress traffic at first:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-traffic
spec:
  podSelector: {}
  policyTypes:
    - Egress # scope this policy to egress so it does not implicitly deny ingress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
You can safely apply this policy to all namespaces in the namespaces_with_internet_egress.txt file. It can be scoped down later on. The goal right now is to be able to switch this cluster into deny-by-default mode.
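For example, assuming you've saved the manifest above as allow-all-traffic.yaml, a small loop can apply it to every namespace in the file:

# Apply the allow-all egress policy to each namespace that needs internet access
while read -r ns; do
  kubectl apply --namespace "$ns" -f allow-all-traffic.yaml
done < namespaces_with_internet_egress.txt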
3. Create a policy for namespaces without public internet egress
For namespaces that don't require public internet egress, we can scope down access to private IP ranges and cluster-local endpoints as follows:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-public-internet-access
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    # allow access to all pods in the cluster
    - to:
        - namespaceSelector: {}
          podSelector: {}
    # allow DNS
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    # allow private ranges
    - to:
        - ipBlock:
            cidr: 10.0.0.0/8
        - ipBlock:
            cidr: 172.16.0.0/12
        - ipBlock:
            cidr: 192.168.0.0/16
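Similarly, assuming this manifest is saved as deny-public-internet-access.yaml, you can roll it out across the remaining namespaces:

# Apply the restricted egress policy to each namespace without internet egress
while read -r ns; do
  kubectl apply --namespace "$ns" -f deny-public-internet-access.yaml
done < namespaces_without_internet_egress.txt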
4. Confirm that policies work as intended
Before switching the cluster to deny-by-default mode, you should verify that the policies you have created do not cause any unintended traffic drops. One way to accomplish this is to enable AWS VPC CNI network policy logs. These logs can be written to disk or sent to CloudWatch. They can also be ingested into Datadog if needed.
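As one possible example on EKS (the exact configuration keys can vary by add-on version, so treat this as a sketch and check the AWS documentation for your setup), policy event logs can be enabled through the add-on's configuration values:

# Enable policy event logging for the node agent via the vpc-cni managed add-on
aws eks update-addon \
  --cluster-name my-cluster \
  --addon-name vpc-cni \
  --configuration-values '{"nodeAgent": {"enablePolicyEventLogs": "true"}}'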
5. Enable deny-by-default policy enforcement
Finally, once you are ready, you can switch the cluster into deny-by-default mode by setting NETWORK_POLICY_ENFORCING_MODE to strict.
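If you manage the aws-node DaemonSet yourself, one way to do this is to set the environment variable directly; on EKS managed add-ons, prefer the add-on configuration values instead. A sketch:

# Switch the node agent to strict (deny-by-default) policy enforcement
kubectl set env daemonset/aws-node --namespace kube-system \
  NETWORK_POLICY_ENFORCING_MODE=strict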
Use CNM to create targeted network policies
Knowing where to apply egress controls in a large Kubernetes environment is challenging, especially when you're dealing with thousands of namespaces. Datadog Cloud Network Monitoring helps you tackle this challenge by enabling you to determine which namespaces communicate with the outside world. Because CNM connects network traffic to application-level context like service names, namespaces, and environments, you can identify exactly where outbound access is needed and where it isn't. That visibility makes it possible to apply well-scoped policies at scale through mechanisms such as Cilium or AWS VPC CNI.
If you'd like to learn more about Datadog's Cloud Network Monitoring product, see our documentation. And if you're not yet a Datadog customer, sign up for a 14-day free trial.