Install Kubernetes On Ubuntu

  • Use a unique hostname on your local LAN

  • Disable swap

    • In /etc/fstab, comment out the swap entry so swap stays disabled across reboots

    • Check again using: cat /proc/swaps
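
    A minimal sketch of the whole step (assuming a standard /etc/fstab; the sed pattern is one common way to comment out the swap line):

    sudo swapoff -a                             # disable swap immediately
    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab  # comment out the swap entry
    cat /proc/swaps                             # should list no active swap devices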

  • Let iptables see bridged traffic and enable IP forwarding:

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF
    
    sudo modprobe overlay
    sudo modprobe br_netfilter
    
    # sysctl params required by setup, params persist across reboots
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    
    # Apply sysctl params without reboot
    sudo sysctl --system
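
    To verify that the overlay and br_netfilter modules are loaded and the sysctl values are applied (these checks mirror the official Kubernetes docs):

    lsmod | grep -e overlay -e br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
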
  • Check the necessary ports:

    Since we are installing a single node, the main port that must be reachable is 6443 (the Kubernetes API server).
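
    A quick way to probe the port with netcat (assuming nc is installed); before the cluster is initialized the connection should be refused, meaning nothing else is using it:

    nc -zv 127.0.0.1 6443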

  • Installing container runtime

    Choose Docker, containerd, or CRI-O.

    I am going to use Docker:

    • sudo apt-get install docker.io --yes

    • Add user to the docker group

      • sudo usermod --groups docker --append $USER (here $USER expands to your own username)

      • exit

    • Log back in, then verify by running:

      • docker info

  • Installing kubeadm, kubelet and kubectl:

sudo apt-get update

sudo apt-get install -y apt-transport-https ca-certificates curl

# On older Ubuntu releases /etc/apt/keyrings may not exist yet; create it first
sudo mkdir -p -m 755 /etc/apt/keyrings

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

sudo apt-mark hold kubelet kubeadm kubectl
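
To confirm the tools are installed and held at their current version:

kubeadm version
kubectl version --client
apt-mark showhold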

  • Download the necessary container images:

kubeadm config images list

sudo kubeadm config images pull

Initializing your control-plane node

The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API Server (which the kubectl command line tool communicates with).

  1. (Recommended) If you have plans to upgrade this single control-plane kubeadm cluster to high availability, you should specify --control-plane-endpoint to set the shared endpoint for all control-plane nodes. Such an endpoint can be either a DNS name or an IP address of a load balancer.

  2. Choose a Pod network add-on, and verify whether it requires any arguments to be passed to kubeadm init. Depending on which third-party provider you choose, you might need to set --pod-network-cidr to a provider-specific value. See Installing a Pod network add-on.

  3. (Optional) kubeadm tries to detect the container runtime by using a list of well-known endpoints. To use a different container runtime, or if more than one is installed on the provisioned node, specify the --cri-socket argument to kubeadm. See Installing a runtime.

  4. (Optional) Unless otherwise specified, kubeadm uses the network interface associated with the default gateway to set the advertise address for this particular control-plane node's API server. To use a different network interface, specify the --apiserver-advertise-address=<ip-address> argument to kubeadm init. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you must specify an IPv6 address, for example --apiserver-advertise-address=2001:db8::101
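
Putting the options above together, a hypothetical invocation could look like this (the endpoint, CIDR, and address are placeholders; adjust them to your environment):

sudo kubeadm init \
  --control-plane-endpoint k8smaster.example.local \
  --pod-network-cidr 192.168.0.0/16 \
  --apiserver-advertise-address 192.168.1.10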

Initialize the cluster!

sudo kubeadm init --pod-network-cidr 192.168.1.0/24

Note that the pod network CIDR should not overlap with the subnets already in use on your LAN; Calico's manifest defaults to 192.168.0.0/16, and whatever you pass here should match the Calico IP pool.

Check the kubelet logs:

journalctl --identifier kubelet

You can view the Kubernetes control-plane component manifest files in:

touk@k8smaster:~$ ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml

If you run into a cgroup driver issue:

Let's configure it manually.

The kubelet uses the systemd cgroup driver, since kubelet itself runs as a systemd service; you can confirm this in its configuration file:

sudo less /var/lib/kubelet/config.yaml

apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
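
A quicker check of just the driver setting:

grep cgroupDriver /var/lib/kubelet/config.yaml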

If you run docker info:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.10.4
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.17.2
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 1
  Running: 0
  Paused: 0
  Stopped: 1
 Images: 3
 Server Version: 23.0.3
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 ...

You will see that Docker is using cgroupfs, so you will need to reconfigure it to use systemd instead of cgroupfs.

sudo vim /lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
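
Alternatively, Docker can read the cgroup driver from /etc/docker/daemon.json, which avoids editing the unit file (use one method or the other, not both; this sketch assumes the file does not exist yet):

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF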

Since we have changed the unit file, systemd must reload it before a restart takes effect:

sudo systemctl daemon-reload

Then restart the service:

sudo systemctl restart docker.service

Check that the cgroup driver has changed: docker info
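
To print only the relevant lines of the output:

docker info | grep -i cgroup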

Reset kubeadm and initialize the cluster

sudo kubeadm reset

Do not forget to delete the old kubeconfig file from your home directory:

rm ~/.kube/config

Then initialize the cluster again with sudo kubeadm init (as above) and run the commands provided in its output.
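
For a regular user, that output typically includes the following commands to set up kubectl access:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config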

kubectl get all --namespace kube-system

The CoreDNS pods show 0/2 and are stuck in Pending state (they are failing), and that's because we haven't installed a Container Network Interface (CNI) plugin yet.

Installing the Project Calico CNI plugin

Check official docs:

Install Calico networking and network policy for on-premises deployments | Calico Documentation

curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml -O

kubectl apply -f calico.yaml

And now the CoreDNS pods should start.
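
You can watch them come up (press Ctrl-C to stop watching):

kubectl get pods --namespace kube-system --watch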

Control plane node isolation

By default, your cluster will not schedule Pods on the control plane nodes for security reasons. If you want to be able to schedule Pods on the control plane nodes, for example for a single machine Kubernetes cluster, run:

kubectl taint nodes --all node-role.kubernetes.io/control-plane-
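
To verify that scheduling now works on the control-plane node, you can launch a throwaway test pod (the name and image here are just examples):

kubectl run test-nginx --image=nginx
kubectl get pods -o wide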
