
Kubernetes Cluster Creation with Kubeadm on RedHat 9 Derivatives

Note: I have tested this guide on CentOS Stream 9, Rocky Linux 9.2, and AlmaLinux 9.2.

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides a flexible and scalable foundation for running modern workloads in production. In this guide, we will walk through the step-by-step process of setting up a Kubernetes cluster with kubeadm on RHEL 9 and its derivatives.

Step 1: Setup Hostnames

Before starting the Kubernetes cluster setup, it is essential to ensure that the IP addresses of all the servers (Master and Workers) are properly mapped to their hostnames in the /etc/hosts file. This step ensures smooth communication among the nodes in the cluster. Open the /etc/hosts file on each server and add the following entries:

vi /etc/hosts

192.168.1.10  master
192.168.1.11  worker-1
192.168.1.12  worker-2

Make sure to replace the IP addresses and hostnames with the actual values of your servers. This step will prevent any hostname resolution issues during the Kubernetes cluster setup.
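As a quick sanity check, you can confirm that every node name actually appears in the hosts file before moving on. A minimal sketch, run here against a stand-in file with placeholder names and addresses (substitute your own):

```shell
# Stand-in for /etc/hosts; replace names and IPs with your own values.
cat > /tmp/hosts.sample << 'EOF'
192.168.1.10  master
192.168.1.11  worker-1
192.168.1.12  worker-2
EOF

# Each node name should be present as a whole word.
for h in master worker-1 worker-2; do
  grep -qw "$h" /tmp/hosts.sample && echo "$h: ok"
done
```

Point the same loop at the real /etc/hosts on each server once you have edited it.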

Step 2: Update and Configure the Servers

Before we start installing the required packages, let’s update the package manager cache and install any pending updates:

dnf makecache --refresh
dnf update -y

Reboot the servers once the update finishes so that any new kernel is applied:

reboot

Step 3: Configure SELinux

Next, we need to configure some system settings for Kubernetes to work properly. Let’s start by disabling SELinux temporarily and modifying the configuration file to set SELinux to permissive mode:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
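If you would like to rehearse the config edit before touching the real /etc/selinux/config, the same sed expression can be run against a stand-in copy first — a sketch:

```shell
# Stand-in for /etc/selinux/config with the stock enforcing setting.
cat > /tmp/selinux.conf << 'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Same substitution the guide applies to the real file.
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /tmp/selinux.conf
grep '^SELINUX=' /tmp/selinux.conf
```

On the real system, `getenforce` should report Permissive after running setenforce 0.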

Step 4: Configure Kernel Modules

Now, we need to prepare the servers for Kubernetes installation and configure essential kernel modules. These steps are crucial to ensure proper functioning and communication within the Kubernetes cluster.

Load the necessary kernel modules required by Kubernetes:

modprobe overlay
modprobe br_netfilter

To make these modules load at system boot, create a configuration file:

cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF

Step 5: Configure Sysctl Settings

Next, we’ll set specific sysctl settings that Kubernetes relies on:

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

These sysctl settings enable IP forwarding and allow bridged network traffic to pass through iptables, which Kubernetes networking (and most CNI plugins) depends on. With this step complete, the servers are adequately prepared for Kubernetes installation and operation.
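Before running sysctl --system it is worth double-checking that the drop-in file contains all three required keys. A sketch, run here against a stand-in file:

```shell
# Stand-in for /etc/sysctl.d/k8s.conf.
cat > /tmp/k8s-sysctl.conf << 'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# All three keys must be present and set to 1.
for key in net.ipv4.ip_forward \
           net.bridge.bridge-nf-call-ip6tables \
           net.bridge.bridge-nf-call-iptables; do
  grep -q "^$key = 1$" /tmp/k8s-sysctl.conf && echo "$key: set"
done
```

On a live node you can also confirm the applied values with sysctl -n <key> after the br_netfilter module is loaded.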

Step 6: Disable Swap

Kubernetes requires that swap be disabled on all cluster nodes to ensure optimal performance and stability. Follow these steps to disable swap on each server:

swapoff -a
sed -e '/swap/s/^/#/g' -i /etc/fstab
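The sed expression comments out every fstab line that mentions swap. You can rehearse it on a sample file first (the device names below are just examples):

```shell
# Stand-in for /etc/fstab with one root and one swap entry (example devices).
cat > /tmp/fstab.sample << 'EOF'
/dev/mapper/rl-root   /       xfs     defaults        0 0
/dev/mapper/rl-swap   none    swap    defaults        0 0
EOF

# Same edit the guide applies: prefix '#' to any line containing "swap".
sed -e '/swap/s/^/#/g' -i /tmp/fstab.sample
cat /tmp/fstab.sample
```

After editing the real fstab, `swapon --show` should print nothing, confirming swap stays off across reboots.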

Step 7: Install CRI (Containerd)

In this step, we will install Containerd on our servers. This container runtime is essential for managing and running containers, which are the building blocks of Kubernetes applications.

What is Containerd?

Containerd is an industry-standard container runtime that provides the core functionality for managing containers on the host system. It is designed to be embedded into higher-level container systems, such as Docker, Kubernetes, and others, making it an ideal choice for our Kubernetes cluster.

Install Containerd:

1) Add Docker CE Repository:

Before installing containerd, we need to add the Docker Community Edition (CE) repository to our system. Docker CE is the free version of Docker that provides the necessary components for managing containers.

dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf makecache

2) Install Containerd.io:

dnf install -y containerd.io

3) Configure Containerd:

After installation, we need to configure Containerd. The configuration file is located at /etc/containerd/config.toml. The default configuration provides a solid starting point for most environments, but we’ll make a small adjustment to enable Systemd Cgroup support.

sudo mkdir -p /etc/containerd
sudo sh -c "containerd config default > /etc/containerd/config.toml"
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
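Because config.toml is large and the SystemdCgroup line is easy to miss, it can help to rehearse the substitution on a small stand-in file first — a sketch containing just the relevant fragment of the default config:

```shell
# Stand-in with the runc options fragment from the default containerd config.
cat > /tmp/containerd-config.toml << 'EOF'
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false
EOF

# Flip the cgroup driver to systemd, as required when kubelet uses systemd cgroups.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-config.toml
grep 'SystemdCgroup' /tmp/containerd-config.toml
```

On the real file, the same grep is a quick way to confirm the change took effect before restarting containerd.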

Enable and restart the containerd service:

systemctl enable --now containerd.service
systemctl restart containerd.service

Step 8: Configure Firewall Rules

We need to allow specific ports used by Kubernetes components through the firewall. Execute the following commands to add the necessary rules:

firewall-cmd --permanent --add-port={6443,2379,2380,10250,10257,10259,5473}/tcp
firewall-cmd --reload

Step 9: Install Kubernetes Components

Next, we’ll install Kubernetes components, including kubelet, kubeadm, and kubectl. We’ll also add the Kubernetes repository to our package manager.

Create the Kubernetes repository configuration file:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Refresh the package cache:

dnf makecache

Install Kubernetes components:

dnf install -y kubelet-1.28.3 kubeadm-1.28.3 kubectl-1.28.3 --disableexcludes=kubernetes

Enable and start the kubelet service:

systemctl enable --now kubelet.service

Note: The above steps are applicable to both master and worker nodes. The following steps are only applicable to master nodes.

Step 10: Initialize the Kubernetes Control Plane

Now that we have installed the necessary components, it’s time to initialize the Kubernetes control plane on the master node. We’ll use kubeadm init for this purpose and specify the pod network CIDR (this guide uses 10.244.0.0/16 as an example; choose any private range that does not overlap your host network).

sudo kubeadm config images pull
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The kubeadm init command will take some time to complete. Once it’s done, you’ll see a message with instructions on how to join worker nodes to the cluster.

Step 11: Copy Configuration File to User’s Directory

On the master node, copy the configuration file to the user’s directory by running the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 12: Install CNI (Calico)

To enable inter-pod communication within the cluster, we need to install a pod network add-on. In this guide, we’ll use Calico.

Deploy the Tigera Operator for Calico:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

Download the custom Calico resources manifest:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml -O

Adjust the CIDR in the custom resources file to match the value you passed to kubeadm init (10.244.0.0/16 in this guide):

sed -i 's/cidr: 192\.168\.0\.0\/16/cidr: 10.244.0.0\/16/g' custom-resources.yaml
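If you want to confirm the substitution before applying it to the real manifest, rehearse it on a one-line sample (10.244.0.0/16 here is an example pod CIDR; use the value you passed to kubeadm init):

```shell
# Stand-in for the IPPool fragment of custom-resources.yaml.
cat > /tmp/calico-cidr.yaml << 'EOF'
      cidr: 192.168.0.0/16
EOF

# Replace Calico's default pool with the cluster's pod network CIDR.
sed -i 's/cidr: 192\.168\.0\.0\/16/cidr: 10.244.0.0\/16/g' /tmp/calico-cidr.yaml
cat /tmp/calico-cidr.yaml
```

The pod CIDR in custom-resources.yaml must match the one given to kubeadm init, or pods will be assigned addresses the control plane does not route.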

Create the Calico custom resources:

kubectl create -f custom-resources.yaml

Step 13: Join Worker Nodes

Once we have successfully initialized the Kubernetes control plane on the master node, we need to join the worker nodes to the cluster. Kubernetes uses a join command that includes a token and the master node’s IP address to enable worker nodes to connect to the cluster.

Get the Join Command on the Master Node:

kubeadm token create --print-join-command

This command will output a token-based join command that includes the master node’s IP address. The token acts as a one-time authentication mechanism to authorize worker nodes to join the cluster.

Join Worker Nodes:

On each worker node, execute the join command obtained from the previous step. This command connects the worker node to the Kubernetes cluster under the control of the master node.

For example, the join command will look something like this:

kubeadm join <master-ip>:6443 --token abcdef.1234567890abcdef \
    --discovery-token-ca-cert-hash sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef

The actual command may vary depending on your cluster configuration.

Verify Worker Node Joining:

Back on the master node, you can verify that the worker nodes have successfully joined the cluster by running the following command:

kubectl get nodes


Congratulations! You have successfully set up a Kubernetes cluster using kubeadm on a Red Hat 9 derivative. Kubernetes provides a powerful platform for deploying and managing containerized applications at scale. You can now explore and deploy your applications on this Kubernetes cluster.

This guide covered the entire process, from preparing the servers, installing the required components, initializing the control plane, and setting up a pod network add-on. Continue your journey with Kubernetes by exploring more advanced topics, such as deploying applications, managing resources, and implementing high availability. Happy Kubernetes clustering!

Comments


  1. I have a question about the Calico install. The Calico installation instructions on the Tigera website talk about installing on every K8s node, but I did the install on only the master node. When I create a service, it does not assign an external IP address. Did I miss a step?

    1. My apologies for the delayed response. You only need to install the Tigera operator on the master node; it will automatically configure the necessary networking components across all nodes. If you have any further questions or concerns, please feel free to ask.

  2. My local network CIDR is the range that all of my hosts have addresses in (they are all 10.1.10.xxx). Should I have used that range in my Calico custom-resources.yaml instead of the CIDR given in the instructions?

    1. In your scenario, where your hosts have addresses in the 10.1.10.xxx range, you should use a separate CIDR block for the Calico pod network, one that does not overlap with your existing network.

      Always make sure that the CIDR blocks you choose for the different networks in your infrastructure, such as the VM network and the Calico pod network, do not overlap; overlapping ranges cause networking conflicts and issues within your Kubernetes cluster.

  3. I followed your instructions and ran 3 nodes on AlmaLinux 9.3. Everything works well EXCEPT Calico. I found that you must also open the BGP port on each node:

    firewall-cmd --zone=public --add-service=bgp --permanent
    firewall-cmd --reload
