Intro to Kubernetes Networking
Kubernetes Network Architecture
Kubernetes is designed to facilitate the deployment and management of distributed applications across a cluster of nodes. Central to its architecture is its network model. Kubernetes adopts a flat network space, allowing all pods to communicate with each other across nodes without network address translation (NAT). This model simplifies the process of container orchestration by ensuring that every container can see the others as if they were on the same physical machine and connected to the same network. This networking philosophy is enforced by a key requirement that all nodes and pods must be able to communicate without any manual intervention to modify IP routing or create custom bridges.
Node Network Communication
Node network communication in Kubernetes is managed through the Container Network Interface (CNI), which provides networking functionality such as creating network bridges and assigning IP addresses to pods. Each node in a Kubernetes cluster runs a kubelet, an agent that interacts with the CNI to ensure pod networking is set up correctly. Communication between nodes also involves the control plane, which manages the cluster's network policies and schedules pods onto specific worker nodes. etcd maintains the cluster's state, including network configurations and policies, ensuring consistent and reliable node communication across the entire cluster.
Pod Network Communication
Pods in Kubernetes are the smallest deployable units that can be created, scheduled, and managed. Each pod is assigned a unique IP address, which other pods use to communicate with it. This direct assignment simplifies the network model, removing the need for pod-level NAT and allowing for a more transparent communication architecture. Pods on the same node communicate through the node's local virtual bridge, while pods on different nodes communicate over the cross-node network established by the CNI plugin. This setup ensures that pod-to-pod communication is seamless, whether within the same node or across different nodes.
To validate that pods on different nodes really share a flat network, let's run a small experiment.
Running a multi-node cluster is easy with minikube 1.10.1 or higher:
minikube start --nodes 2 -p multinode-test
Next, let's create a deployment for our test.
Since a Deployment doesn't let us pick a different node for each replica, we will use a little trick with podAntiAffinity: this property ensures the pods land on separate hosts.
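A minimal sketch of such a deployment (the deployment name and labels are assumptions chosen to match the pod names that appear later; kubernetes.io/hostname is the standard node-level topology key):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-multinode  # assumed name, matching the pods shown below
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-multinode
  template:
    metadata:
      labels:
        app: nginx-multinode
    spec:
      affinity:
        podAntiAffinity:
          # Hard rule: never co-locate two pods carrying this label
          # on the same node (topologyKey = node hostname).
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: nginx-multinode
              topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

With only two nodes and a required (hard) anti-affinity rule, the scheduler has no choice but to place one replica per node.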
Let's create the deployment:
kubectl apply -f deployment.yaml
Make sure the deployment is ready
kubectl get deployment
Let's get the details of our pods:
kubectl get pods -o wide
We can see that the two pods we requested have been created, and each one has a unique IP address. Note that the pods are indeed deployed on different nodes: one on multinode-test and one on multinode-test-m02.
Now let's exec into pod nginx-multinode-67fccc955c-dv6r8 and get a response from pod nginx-multinode-67fccc955c-dxs79:
kubectl exec -it nginx-multinode-67fccc955c-dv6r8 -- /bin/bash
We can now curl pod dxs79 using its IP address.
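For example (the IP address below is illustrative; substitute the one reported for dxs79 by kubectl get pods -o wide):

```shell
# Run from the shell inside pod dv6r8; 10.244.1.2 is an example pod IP
curl http://10.244.1.2
```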
We got the response we were looking for, proving that pods on different nodes can communicate seamlessly.
Service Layer Communication
Services in Kubernetes are an abstraction that defines a logical set of pods and a policy by which to access them. This abstraction allows Kubernetes to decouple workloads from the individual pods that run them. Service types include ClusterIP, NodePort, and LoadBalancer. A ClusterIP service, the default, exposes the service on an internal IP that only other apps inside the cluster can reach; there is no external access. A NodePort service, on the other hand, exposes the service on each node's IP at a specific port, making it accessible from outside the cluster. A LoadBalancer service integrates with a cloud provider's load balancer to give external clients a single point of access to internal services.
NodePort vs ClusterIP
ClusterIP and NodePort serve distinct purposes. ClusterIP is the most common type used when you need a single internal IP to access the service, making it only reachable within the cluster. This is ideal for cases where you do not need external traffic to reach your application directly. NodePort, on the other hand, extends the capabilities of ClusterIP by making the service accessible on a static port assigned on each node's IP address, allowing external traffic. This type of service is useful for allowing external applications and users to access an internal service at a known port on each node.
Let's demonstrate how services use the network in Kubernetes. Create a deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2 # Number of replicas
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest # Uses the latest Nginx image from Docker Hub
          ports:
            - containerPort: 80 # The port on which Nginx listens
Apply deployment.yaml
kubectl apply -f deployment.yaml
Make sure the deployment is ready.
Let's now expose the deployment with a service. First we will use ClusterIP, which is the default service type in Kubernetes; then we will see what happens if we do the same with NodePort.
ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
We can get the service details with
kubectl describe service nginx-service
Notice the Endpoints section: these are the IPs of the pods that the service routes traffic to.
Since we are using ClusterIP, the service is not exposed outside the cluster, but we can reach it by SSH-ing into the node.
minikube ssh
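From inside the node, we can curl the service's cluster IP on port 80 (the IP below is illustrative; use the one shown by kubectl describe service nginx-service):

```shell
# Inside the minikube node; 10.96.123.45 is an example ClusterIP
curl http://10.96.123.45:80
```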
NodePort
Let's change the service type to NodePort and see what happens:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
Notice how the NodePort type exposed port 31112; that means the node is exposing that port to the outside world.
If you are using Linux, you could now curl the service directly with curl $(minikube ip):31112. Since I'm demoing this post on an Apple Silicon Mac, I need one more step: port forwarding.
Let's port-forward the service to our localhost:
kubectl port-forward service/nginx-service 8080:8080
And now we can finally curl to get the response.
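With the port-forward from the previous step running in another terminal:

```shell
# The service's port 8080 is now forwarded to localhost:8080
curl http://localhost:8080
```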
[Note that port forwarding will also work with the ClusterIP type; I showed it this way only because of a Docker-driver limitation on Macs. Ideally you'll use $(minikube ip):<NODE_PORT>.]
Summary
This post explored Kubernetes' network architecture, highlighting the flat network model that eliminates the need for NAT and simplifies inter-pod communication across nodes. We discussed how kubelets and the CNI manage node communication, covered pod networking, where each pod's unique IP enables direct cluster-wide connections, and looked at service networking through ClusterIP and NodePort services: ClusterIP for internal cluster access and NodePort for external connectivity. These elements underscore Kubernetes' capacity to streamline container orchestration and enhance network efficiency and security.