Demystifying Kubernetes Networking — Episode 3

In this episode, we take a deep dive into the Kubernetes networking model!

Sanjit Mohanty
5 min read · May 29, 2022

In the last episode of Demystifying Kubernetes Networking, we learnt about pause containers and their relevance in Kubernetes.

If you compare Kubernetes with other networking platforms, you will observe that it takes a very different and unique approach, based on a flat network structure. This flat structure eliminates the need to map host ports to container ports, which allows you to run distributed systems and share machines between applications without having to coordinate port allocation.

In a typical flat network topology, devices (pods, in the Kubernetes context) are connected to a single switch instead of several separate switches. This reduces the number of switches and routers in the network.

Before we get into the Kubernetes networking model itself, let us first understand its major goals:

Goal #1: Containers on the same host should be able to talk to each other.

Goal #2: Containers on different hosts should be able to talk to each other.

Goal #3: Each container is assigned a unique IP address.

Goal #4: Containers should be able to communicate with services.

In this episode, we’ll look at how Kubernetes realises the first two goals.

Goal 3 is something we have already discussed in the previous episodes.

Goal 4 is the topic for the subsequent episode.

Understanding the Docker Networking Model

For historical reasons, the Kubernetes networking model is a response to some of the shortcomings of the Docker networking model. So, to understand how networking works in Kubernetes, it makes sense to first look at how it works in Docker.

Figure 1: Docker networking model

In a Docker setup, each host has its own virtual network bridge serving all of the containers on that host. Containers on the same host communicate with one another over this virtual bridge.

But what about communication between containers across different hosts? In a Docker setup, cross-host communication is handled by a proxy: a container on one host that wants to talk to a container on another host must go through the proxy, and the setup must ensure that no two containers use the same port on a host.
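To see why per-host port mapping is limiting, here is a toy sketch (not Docker's actual code; the container and pod names are made up for illustration). With host-port mapping, two containers that both want port 80 collide on the same host; with a flat per-pod IP model, each pod has its own address, so both can use port 80.

```python
def map_host_port(allocated, host_port, container):
    """Docker-style mapping: a host port can back only one container."""
    if host_port in allocated:
        raise ValueError(f"host port {host_port} already taken by {allocated[host_port]}")
    allocated[host_port] = container

allocated = {}
map_host_port(allocated, 80, "web-1")
try:
    map_host_port(allocated, 80, "web-2")   # second container also wants :80
except ValueError as e:
    print(e)                                # collision: must pick another host port

# Flat model: every pod has a unique IP, so (ip, port) pairs never clash
pods = {"web-1": "10.244.1.2", "web-2": "10.244.1.3"}
endpoints = {(ip, 80) for ip in pods.values()}
print(len(endpoints))                       # two distinct endpoints, both on port 80
```

In the flat model no coordination is needed: the uniqueness of pod IPs makes every `(ip, port)` endpoint distinct by construction.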

The Kubernetes networking model was created in response to this Docker model, and was designed to improve on some of its limitations.

Understanding the Kubernetes Networking Model

In the Kubernetes networking model, in contrast to the Docker setup, there is one virtual network for the entire cluster. This enables containers on one host to reach containers on another host directly, without the need for a proxy.

Figure 2: Simplified Kubernetes Networking Model

Kubernetes has two networks: the node network and the pod network. (There is actually a third, the service network, which we will talk about in our next episode.)

Each pod & service has a unique IP within the cluster.

Figure 3: A more involved Kubernetes Networking Model

Kubernetes itself doesn’t implement the node network, but each node needs to be able to talk to the others. For the pod network, Kubernetes presents a network plugin interface called CNI (Container Network Interface), and third-party vendors implement it. As we have learnt earlier, this pod network is big and flat and stretches across all nodes. Each node is allocated a subset of addresses from it.

When pods are spun up, they are scheduled to a particular node, and each receives an IP address from the range allocated to that node. These IP addresses are unique across the cluster.
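This carving of one flat range into per-node slices can be sketched with Python's `ipaddress` module. The `10.244.0.0/16` cluster range and the `/24`-per-node split are assumptions for illustration (they mirror common CNI defaults), not something fixed by Kubernetes itself:

```python
import ipaddress

cluster_cidr = ipaddress.ip_network("10.244.0.0/16")      # the big, flat pod network
node_subnets = list(cluster_cidr.subnets(new_prefix=24))  # one /24 slice per node

node_a, node_b = node_subnets[0], node_subnets[1]
print(node_a)   # 10.244.0.0/24 -> pods on node A draw IPs from here
print(node_b)   # 10.244.1.0/24 -> pods on node B draw IPs from here

# A pod scheduled to node A gets a free address from node A's range:
pod_ip = next(node_a.hosts())
print(pod_ip)               # 10.244.0.1
print(pod_ip in node_b)     # False: ranges never overlap, so pod IPs are unique
```

Because the per-node ranges are disjoint slices of one parent range, uniqueness of pod IPs across the cluster falls out automatically.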

What is a Bridge Interface cbr0 in Kubernetes?

You may have noticed in the previous diagram (refer to Figure 3), where the Kubernetes networking model was explained, a component called the bridge interface, labelled cbr0. In this section we will try to understand more about it.

Let’s first understand it in context of Linux.

In Linux, a bridge behaves like a network switch. It forwards packets between interfaces that are connected to it. It’s usually used for forwarding packets on routers, on gateways or between VMs and network namespaces on a host.

Bridges are typically used to establish communication between VMs or containers and their hosts.

Figure 4: Bridge in Linux

In Kubernetes, this bridge is called cbr0. Every pod on a node is attached to the bridge, and the bridge connects all pods on the same node together (refer to Figure 3).

When a request hits the bridge, the bridge asks all the connected pods whether they have the right IP address to handle it. (Remember that each pod has its own IP address and knows what it is.) If one of the pods does, the bridge stores this information and forwards the data on, so that the original network request is completed.
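The "who has this IP?" exchange can be modelled with a toy Python class (this is an illustrative sketch, not kernel code; the pod names and addresses are made up): the bridge queries its attached pods, caches the answer, and serves later requests from the cache.

```python
class Bridge:
    def __init__(self):
        self.pods = {}      # pod name -> pod IP (each pod knows its own IP)
        self.cache = {}     # learned IP -> pod name (like an ARP cache)

    def attach(self, name, ip):
        self.pods[name] = ip

    def resolve(self, ip):
        if ip in self.cache:                    # answered before: skip the broadcast
            return self.cache[ip]
        for name, pod_ip in self.pods.items():  # "ask" every attached pod
            if pod_ip == ip:
                self.cache[ip] = name           # remember the answer
                return name
        return None                             # no pod on this bridge owns that IP

cbr0 = Bridge()
cbr0.attach("pod-a", "10.244.1.2")
cbr0.attach("pod-b", "10.244.1.3")
print(cbr0.resolve("10.244.1.3"))   # pod-b (found by asking the attached pods)
print(cbr0.cache)                   # {'10.244.1.3': 'pod-b'}
```

The cache is what makes the second lookup for the same IP cheap: the bridge no longer needs to query every pod.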

Revisiting major goals of Kubernetes Networking model

Goal #1: Containers on the same host should be able to talk to each other.

There are two scenarios in which the traffic does not leave the host:

  1. when the called service is running on the same node,
  2. or when it is running in another container of the same pod.

If we call localhost:80 from container 1 of a pod and the service is running in container 2 of the same pod, the traffic passes through the pod's network device and the packet is forwarded straight to its destination. In this case the route the traffic travels is quite short. Refer to Figure 5 below.

Figure 5: Traffic routing within a Kubernetes Node

The route gets a bit longer when we communicate with a different pod on the same node. The traffic passes over to cbr0, which notices that the destination is on the same subnet and therefore forwards the traffic directly to the destination pod, as shown in Figure 5.

Goal #2: Containers on different hosts should be able to talk to each other.

This gets a bit more complicated when we leave the node.

cbr0 now passes the traffic on towards the next node, using routes whose configuration is managed by the CNI plugin. These are basically just routes for the pod subnets, with the destination host as the gateway. The destination host then passes the traffic to its own cbr0, which forwards it to the destination pod, as shown below.

Figure 6: Traffic routing across pod in Kubernetes Cluster
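The routing decision described above can be sketched in a few lines of Python. The subnet and node addresses here are assumptions for illustration only: if the destination pod is in this node's own subnet, cbr0 delivers locally; otherwise a CNI-installed route sends the packet to the node that owns the destination subnet, acting as the gateway.

```python
import ipaddress

local_subnet = ipaddress.ip_network("10.244.1.0/24")    # this node's pod range

# CNI-managed route table: pod subnet -> node (gateway) that owns it
routes = {
    ipaddress.ip_network("10.244.2.0/24"): "192.168.0.12",  # node B
    ipaddress.ip_network("10.244.3.0/24"): "192.168.0.13",  # node C
}

def next_hop(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    if dst in local_subnet:
        return "local"                # same node: cbr0 forwards directly
    for subnet, gateway in routes.items():
        if dst in subnet:
            return gateway            # leave the node via the owning host
    return "unreachable"

print(next_hop("10.244.1.7"))    # local
print(next_hop("10.244.2.5"))    # 192.168.0.12 (node B acts as the gateway)
```

Note how little machinery is needed: because each node owns exactly one pod subnet, a handful of static routes is enough to reach every pod in the cluster.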


Written by Sanjit Mohanty

Engineering Manager, Broadcom | Views expressed on my blogs are solely mine; not that of present/past employers. Support my work https://ko-fi.com/sanjitmohanty
