
Linux Internals of Kubernetes Networking

Introduction

This blog is a hands-on guide designed to help you understand Kubernetes networking concepts by following along. We'll use K3s, a lightweight Kubernetes distribution, to explore how networking works within a cluster.

System Requirements

Before getting started, ensure your system meets the following requirements:

  • A Linux-based system (Ubuntu, CentOS, or equivalent).
  • At least 2 CPU cores and 4 GB of RAM.
  • Basic familiarity with Linux commands.

Installing K3s

To follow along with this guide, we first need to install K3s—a lightweight Kubernetes distribution designed for ease of use and optimized for resource-constrained environments.

Install K3s

You can install K3s by running the following command in your terminal:

CODE: https://gist.github.com/velotiotech/47d6efb8523f591139ab390f0336381c.js
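If the embedded gist doesn't render, the standard K3s installation is a single command that fetches and runs the official install script:

```
# Download and run the official K3s install script (installs and starts the k3s service)
curl -sfL https://get.k3s.io | sh -
```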

This script will:

  1. Download and install the K3s server.
  2. Set up the necessary dependencies.
  3. Start the K3s service automatically after installation.

Verify K3s Installation

After installation, you can check the status of the K3s service to make sure everything is running correctly:

CODE: https://gist.github.com/velotiotech/be2f5a3a792aa2f05f971ac58e350373.js
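A minimal check looks like this (assuming a systemd-based distribution):

```
# Check that the k3s systemd service is active
sudo systemctl status k3s

# Optionally confirm the node has registered and is Ready
sudo k3s kubectl get nodes
```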

If everything is correct, you should see that the K3s service is active and running.

Set Up kubectl

K3s comes bundled with its own kubectl binary. To use it, you can either:

Use the K3s binary directly:

CODE: https://gist.github.com/velotiotech/7845f0d04f9325d3caf2d053a01df3ec.js

Or set up the kubectl config file by exporting the Kubeconfig path:

CODE: https://gist.github.com/velotiotech/fb97b1c588550dbcac93b319e805e12d.js
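For reference, the two options usually look like this (the kubeconfig path shown is the K3s default; reading it may require sudo or adjusted file permissions):

```
# Option 1: use the kubectl bundled with K3s
sudo k3s kubectl get nodes

# Option 2: point your own kubectl at the K3s kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes
```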

Understanding Kubernetes Networking

In Kubernetes, networking plays a crucial role in ensuring seamless communication between pods, services, and external resources. In this section, we will dive into the network configuration and explore how pods communicate with one another.

Viewing Pods and Their IP Addresses

To check the IP addresses assigned to the pods, use the following kubectl command:

CODE: https://gist.github.com/velotiotech/1961a4cdd5ec38f7f0fbe0523821dc7f.sh
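A typical way to list pods with their IPs (roughly what the gist above runs) is:

```
# -A lists pods in all namespaces; -o wide adds the pod IP and node columns
kubectl get pods -A -o wide
```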

This will show you a list of all the pods across all namespaces, including their corresponding IP addresses. Each pod is assigned a unique IP address within the cluster.

You’ll notice that the IP addresses are assigned from the range configured by the network plugin (such as Flannel or Calico). K3s uses Flannel as its default CNI and, on this single node, assigns pod IPs from the default pod CIDR 10.42.0.0/24. These IPs allow communication within the cluster.

Observing Network Configuration Changes

When K3s starts, it sets up several network interfaces and related configuration on the host machine. These are key to how Kubernetes networking operates. Let’s examine the changes using the ip utility.

Show All Network Interfaces

Run the following command to list all network interfaces:

CODE: https://gist.github.com/velotiotech/76eeebd48eca1f47afd6ac1c7b69ac45.js
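The command here is most likely just:

```
# List every network interface on the host, including the ones created by K3s/Flannel
ip link show
```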

This will show all the network interfaces.

  • lo, enp0s3, and enp0s9 are network interfaces that belong to the host.
  • flannel.1 is created by the Flannel CNI for communication between pods that live on different nodes.
  • cni0 is created by the bridge CNI plugin for communication between pods on the same node.
  • vethXXXXXXXX@ifY interfaces are created by the bridge CNI plugin; each one connects a pod to the cni0 bridge.

Show IP Addresses

To display the IP addresses assigned to the interfaces:

CODE: https://gist.github.com/velotiotech/5130628514cfc4705dd27c1627d5dd91.js
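This is typically:

```
# Show the addresses assigned to each interface (add -br for a compact view)
ip addr show
```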

You should see the IP addresses of all the network interfaces. Among the K3s-related interfaces, only cni0 and flannel.1 have IP addresses; the vethXXXXXXXX interfaces only have MAC addresses. The reason for this is explained later in this blog.

Pod-to-Pod Communication and Bridge Networks

Diagram source (Mermaid):

```
graph TB
    %% Host Network Interface
    enp0s9[Host Interface enp0s9 192.168.2.224]

    %% CNI0 Bridge
    cni0[cni0 Bridge 10.42.0.1/24]

    %% Pod Network Namespaces
    subgraph pod1[Pod 1 Network Namespace]
        eth0_1[eth0 10.42.0.2]
    end

    subgraph pod2[Pod 2 Network Namespace]
        eth0_2[eth0 10.42.0.3]
    end

    subgraph pod3[Pod 3 Network Namespace]
        eth0_3[eth0 10.42.0.4]
    end

    %% veth pairs
    veth1_host[veth1]
    veth2_host[veth2]
    veth3_host[veth3]

    %% Connections
    enp0s9 --- cni0

    cni0 --- veth1_host
    cni0 --- veth2_host
    cni0 --- veth3_host

    veth1_host === eth0_1
    veth2_host === eth0_2
    veth3_host === eth0_3

    %% Styling with improved contrast
    classDef interface fill:#d4edff,stroke:#0066cc,stroke-width:2px,color:black
    classDef bridge fill:#ffecd4,stroke:#cc6600,stroke-width:2px,color:black
    classDef namespace fill:#e8f5e9,stroke:#2e7d32,stroke-width:2px,color:black
    classDef veth fill:#f3e5f5,stroke:#6a1b9a,stroke-width:2px,color:black

    class enp0s9 interface
    class cni0 bridge
    class pod1,pod2,pod3 namespace
    class veth1_host,veth2_host,veth3_host,eth0_1,eth0_2,eth0_3 veth
```

The diagram illustrates how container networking works within a Kubernetes (K3s) node, showing the key components that enable pods to communicate with each other and the outside world. Let's break down this networking architecture:

At the top level, we have the host interface (enp0s9) with IP 192.168.2.224, which is the node's physical network interface connected to the external network. This is the node's gateway to the outside world.

The enp0s9 interface is connected to the cni0 bridge (IP: 10.42.0.1/24), which acts like a virtual switch inside the node. This bridge serves as the internal network hub for all pods running on the node.

Each pod runs in its own network namespace, with its own separate network stack: its own interfaces and routing tables. The eth0 interface inside each pod, shown in the diagram above, carries the pod’s IP address. That eth0 is one end of a virtual ethernet (veth) pair; the other end lives in the host’s network namespace and is attached to the cni0 bridge, connecting the pod’s network to the bridge.

Exploring Network Namespaces in Detail

Kubernetes uses network namespaces to isolate networking for each pod, ensuring that pods have separate networking environments and do not interfere with each other. 

A network namespace is a Linux kernel feature that provides network isolation for a group of processes. Each namespace has its own network interfaces, IP addresses, routing tables, and firewall rules. Kubernetes uses this feature to ensure that each pod has its own isolated network environment.

In Kubernetes:

  • Each pod has its own network namespace.
  • Each container within a pod shares the same network namespace.

Inspecting Network Namespaces

To inspect the network namespaces, follow these steps:

If you installed K3s as described in this blog, it uses the containerd runtime by default. The commands to find the container PID will differ if you run K3s with Docker or another container runtime.

Identify the container runtime and get the list of running containers:

CODE: https://gist.github.com/velotiotech/9dd26a2184217877f0785f9963076ef4.js

Get the container ID from the output and use it to get the process ID:

CODE: https://gist.github.com/velotiotech/4382adae1c8691a09bb802a8f9d8aabc.js

Check the network namespace associated with the container:

CODE: https://gist.github.com/velotiotech/389c7b858de231209b8f7593ca582e18.js
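Putting the three steps together, a rough equivalent using K3s’s bundled crictl looks like this (the container ID and PID are placeholders you substitute from the previous step’s output):

```
# 1. List running containers managed by K3s's embedded containerd
sudo k3s crictl ps

# 2. Extract the container's PID from the runtime metadata (replace <container-id>)
sudo k3s crictl inspect <container-id> | grep '"pid"'

# 3. Inspect the network namespace that PID belongs to (replace <pid>)
sudo ls -l /proc/<pid>/ns/net
```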

You can use nsenter to enter the network namespace for further exploration.

Executing Into Network Namespaces

To explore the network settings of a pod's namespace, you can use the nsenter command.

CODE: https://gist.github.com/velotiotech/37060e5c47810f4cedfbbb55514b6086.js
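A minimal sketch of the nsenter invocation, using the PID found above:

```
# Run `ip addr` inside the container's network namespace (-t target PID, -n network namespace)
sudo nsenter -t <pid> -n ip addr show

# Or start a shell inside that network namespace for interactive exploration
sudo nsenter -t <pid> -n bash
```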

Veth Interfaces and Their Connection to Bridge

Inside the pod’s network namespace, you should see the pod’s interfaces (lo and eth0) and the IP address assigned to the pod (10.42.0.8 in this example). Looking closely, eth0 is listed as eth0@if13, which means eth0 is paired with interface index 13 (on your system the corresponding index will differ). The eth0 interface inside the pod is one end of a virtual ethernet (veth) pair; veths are always created in interconnected pairs. Here, one end of the pair is the pod’s eth0, while the other end is if13. But where does if13 exist? It lives in the host network, connecting the pod’s network to the host network via the bridge (cni0 in this case).

CODE: https://gist.github.com/velotiotech/80512f92763320971514978219198be4.js

Here you see veth82ebd960@if2, which denotes that this veth is paired with interface index 2 inside the pod’s network namespace. You can verify that the veth is attached to the cni0 bridge as follows. Each pod’s veth is attached to the bridge, which is what enables communication between pods on the same node.

CODE: https://gist.github.com/velotiotech/c10dae4ca3c9d692a5507c260899d3ab.js
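One way to check this, assuming the iproute2 tooling available on most distributions:

```
# List all interfaces whose master is the cni0 bridge - each pod's host-side veth shows up here
ip link show master cni0

# Alternatively, show the ports attached to each bridge
bridge link show
```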

Demonstrating Pod-to-Pod Communication

Deploy Two Pods

Deploy two busybox pods to test communication:

CODE: https://gist.github.com/velotiotech/38540ecd6822e0ff122e9759c351eb2e.js
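If the gist doesn’t load, two long-running busybox pods can be created roughly like this (the pod names are illustrative):

```
# Create two busybox pods that sleep so they stay running
kubectl run pod1 --image=busybox --restart=Never -- sleep 3600
kubectl run pod2 --image=busybox --restart=Never -- sleep 3600
```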

Get the IP Addresses of the Pods

CODE: https://gist.github.com/velotiotech/471be7195defc5fa185fd1135ecf0dab.js

Pod1 IP: 10.42.0.9

Pod2 IP: 10.42.0.10

Ping Between Pods and Observe the Traffic Between Two Pods

Before we ping from Pod1 to Pod2, we will use tcpdump to watch cni0 and the veth pairs of Pod1 and Pod2 that are attached to cni0. You can find each pod’s veth pair using the following commands.

Script to exec into network namespace

You can use the following script to get the container process ID and exec into the pod network namespace directly.

CODE: https://gist.github.com/velotiotech/0e93bbb5e4ff160eba49e99cbdf33cf3.js
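A minimal sketch of such a helper script, assuming K3s’s bundled crictl and a pod with a single container (the script name and structure are illustrative, not the exact gist):

```
#!/usr/bin/env bash
# Usage: sudo ./pod-netns.sh <pod-name>
set -eu

POD="$1"

# Find the container ID for the pod via K3s's embedded containerd
CID=$(k3s crictl ps --name "$POD" -q | head -n 1)

# Pull the container's PID out of the runtime metadata
PID=$(k3s crictl inspect "$CID" | grep '"pid"' | head -n 1 | grep -o '[0-9]\+')

# Enter the container's network namespace with a shell
nsenter -t "$PID" -n bash
```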

Open three terminals and set up the tcpdump listeners: 

# Terminal 1 - Watch traffic on cni0 bridge 

CODE: https://gist.github.com/velotiotech/1e031df3e2a8c30b495883a2e11d5d8a.js

 # Terminal 2 - Watch traffic on veth1 (Pod1's veth pair)

CODE: https://gist.github.com/velotiotech/4b195888db21bd3a154b2f16e5432453.js

# Terminal 3 - Watch traffic on veth2 (Pod2's veth pair) 

CODE: https://gist.github.com/velotiotech/487090c8b2ea1b2794ff7dbc89bed681.js
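Taken together, the three listeners are roughly as follows (the veth names are the ones used later in this walkthrough, so substitute your own; the icmp filter is optional):

```
# Terminal 1 - ICMP traffic crossing the cni0 bridge
sudo tcpdump -ni cni0 icmp

# Terminal 2 - ICMP traffic on Pod1's host-side veth
sudo tcpdump -ni veth3a94f27 icmp

# Terminal 3 - ICMP traffic on Pod2's host-side veth
sudo tcpdump -ni veth18eb7d52 icmp
```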

Exec into Pod1 and ping Pod2:

CODE: https://gist.github.com/velotiotech/783f3dcaf0a0bb1d319922a4b73891d2.js
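Something like the following, using Pod2’s IP noted above:

```
# From Pod1, send a few ICMP echo requests to Pod2
kubectl exec -it pod1 -- ping -c 4 10.42.0.10
```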

Watch the results on veth3a94f27, Pod1’s veth pair:

Watch results on cni0:

Watch the results on veth18eb7d52, Pod2’s veth pair:

Observing the timestamps for each request and reply on different interfaces, we get the flow of request/reply, as shown in the diagram below.

Deeper Dive into the Journey of Network Packets from One Pod to Another

We have already seen the flow of request/reply between two pods via veth interfaces connected to each other in a bridge network. In this section, we will discuss the internal details of how a network packet reaches from one pod to another.


Packet Leaving Pod1’s Network

Inside Pod1’s network namespace, the packet originates from eth0 (Pod1’s internal interface) and is sent out through its veth pair into the host network. The destination address of the packet is 10.42.0.10 (Pod2’s IP), which lies within the range 10.42.0.0 - 10.42.0.255, so it matches the second route in Pod1’s routing table.
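For context, the routing table inside Pod1 typically looks roughly like this (illustrative output; the exact set of routes depends on the CNI configuration, so check with ip route from inside the pod’s namespace):

```
# Inside Pod1's network namespace
ip route
# default via 10.42.0.1 dev eth0                     <- everything else goes to cni0's IP
# 10.42.0.0/24 dev eth0 scope link  src 10.42.0.9    <- matches 10.42.0.10 directly on eth0
```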

The packet exits Pod1’s namespace and enters the host namespace through the host-side end of the connected veth pair. It arrives at the cni0 bridge, since cni0 is the master of all the veth interfaces that exist in the host network.

Once the packet reaches cni0, it gets forwarded to the correct veth pair connected to Pod2.

Packet Forwarding from cni0 to Pod2’s Network

When the packet reaches cni0, cni0’s job is to forward it to Pod2. The cni0 bridge acts as a Layer 2 switch here: it simply forwards the packet out of the destination veth. The bridge maintains a forwarding database and dynamically learns the mapping between a destination MAC address and its corresponding veth device.

You can view forwarding database information with the following command:

CODE: https://gist.github.com/velotiotech/3cbbd9e51cb88c715d9d85e04f97f239.js
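The underlying command is typically the bridge utility from iproute2, optionally filtered down to a single MAC address (the MAC shown is a placeholder):

```
# Show the forwarding database of the cni0 bridge
bridge fdb show br cni0

# Limit the output to Pod2's eth0 MAC address
bridge fdb show br cni0 | grep <pod2-eth0-mac>
```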

In this screenshot, I have limited the output of the forwarding database to just the MAC address of Pod2’s eth0:

  1. First column: the MAC address of Pod2’s eth0.
  2. dev vethX: the network interface through which this MAC address is reachable.
  3. master cni0: indicates this entry belongs to the cni0 bridge.
  4. Flags that may appear:
    • permanent: a static entry, manually added or system-generated.
    • self: the MAC address belongs to the bridge interface itself.
    • No flag: the entry was dynamically learned.

Dynamic MAC Learning Process

When Pod1 generates a packet carrying the ICMP request, it is packed into a Layer 2 frame whose source MAC is the MAC address of Pod1’s eth0 interface. To find the destination MAC address, eth0 broadcasts an ARP request to all interfaces on the network; the ARP request contains the destination interface’s IP address.

This ARP request is received by all interfaces connected to the bridge, but only Pod2’s eth0 interface responds with its MAC address. The destination MAC address is then added to the frame, and the frame is sent to the cni0 bridge.

When this frame reaches the cni0 bridge, the bridge inspects it and records the source MAC against the source interface (the host-side veth pair of Pod1’s eth0) in its forwarding table.


Now the bridge has to forward the frame to the interface behind which the destination lies (i.e., the host-side veth pair of Pod2). If the forwarding table already has an entry for Pod2’s veth, the bridge forwards the frame directly to it; otherwise, it floods the frame to all veths connected to the bridge, which still reaches Pod2.

When Pod2 sends the reply to Pod1, the reverse path is followed. The frame leaves Pod2’s eth0 and reaches cni0 via the host-side veth pair of Pod2’s eth0. The bridge records the source MAC address (in this case, Pod2’s eth0) and the device it is reachable through in the forwarding database, then forwards the reply to Pod1, completing the request and response cycle.
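You can observe the result of this learning from Pod1’s side as well: after the ping, its neighbour (ARP) cache holds Pod2’s MAC address. A small sketch, reusing the namespace-entry technique from earlier (the PID and MAC are placeholders):

```
# Inside Pod1's network namespace, list the neighbour cache
sudo nsenter -t <pod1-pid> -n ip neigh show
# Expect an entry similar to:
# 10.42.0.10 dev eth0 lladdr <pod2-eth0-mac> REACHABLE
```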

Summary and Key Takeaways

In this guide, we explored the foundational elements of Linux that play a crucial role in Kubernetes networking using K3s. Here are the key takeaways:

  • Network Namespaces ensure pod isolation.
  • Veth Interfaces connect pods to the host network and enable inter-pod communication.
  • Bridge Networks facilitate pod-to-pod communication on the same node.

I hope you gained a deeper understanding of how Linux internals are used in Kubernetes network design and how they play a key role in pod-to-pod communication within the same node.


