Mastering Kubernetes: Building a Cluster from Scratch Step-by-Step

Set up the jumpbox environment

We start by installing some packages required for the setup process: wget and curl to fetch packages, vim to edit configuration files, and openssl for all the certificates we are required to create. Last but not least, the git command to download the repository with a set of extra files.

sudo apt-get -y install wget curl vim openssl git

We start by cloning the repository with my scripts, configs and setup data.

git clone --depth 1 \
  https://github.com/kalaspuffar/kubernetes-the-hard-way.git
cd kubernetes-the-hard-way

We then need to create a machines.txt file with the following structure: IP address, DNS name, short name and, for each of the worker nodes, the pod subnet. These are the networks we want to bridge later.

192.168.6.1 kub-ctrl1.ea.org kub-ctrl1
192.168.6.2 kub-worker1.ea.org kub-worker1 10.200.0.0/24
192.168.6.3 kub-worker2.ea.org kub-worker2 10.200.1.0/24

The next step is to create the downloads directory and download all the files we need. The downloads.txt is taken from the forked repository and updated with the latest AMD64 versions as of the filming of this video.

mkdir downloads

wget -q --show-progress \
  --https-only \
  --timestamping \
  -P downloads \
  -i downloads.txt
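
For reference, the entries in downloads.txt are just direct URLs to the release binaries, roughly like the lines below. Treat the version numbers here as placeholders; the file in the repository lists the exact releases used in the video.

https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl
https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kube-apiserver
https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet
https://github.com/etcd-io/etcd/releases/download/v3.4.27/etcd-v3.4.27-linux-amd64.tar.gz
https://github.com/containerd/containerd/releases/download/v1.7.22/containerd-1.7.22-linux-amd64.tar.gz
https://github.com/containernetworking/plugins/releases/download/v1.5.1/cni-plugins-linux-amd64-v1.5.1.tgz
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.31.1/crictl-v1.31.1-linux-amd64.tar.gz
https://github.com/opencontainers/runc/releases/download/v1.1.14/runc.amd64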

To test it out we make the kubectl binary executable and put it into our local bin directory.

{
  chmod +x downloads/kubectl
  sudo cp downloads/kubectl /usr/local/bin/
}

Run the command below to check that everything is in place and that we get the version we expect.

kubectl version --client

Create a lot of certificates for communication between services

First we need to create the main certificate for our internal certificate authority. This certificate will be used to sign all the other certificates we will create in the cluster.

{
  openssl genrsa -out ca.key 4096
  openssl req -x509 -new -sha512 -noenc \
    -key ca.key -days 3653 \
    -config ca.conf \
    -out ca.crt
}

Next up we will use the ca.conf file available in the repository to create certificates for all services, workers and admin.

certs=(
  "admin" "kub-worker1" "kub-worker2"
  "kube-proxy" "kube-scheduler"
  "kube-controller-manager"
  "kube-api-server"
  "service-accounts"
)
for i in "${certs[@]}"; do
  openssl genrsa -out "${i}.key" 4096

  openssl req -new -key "${i}.key" -sha256 \
    -config "ca.conf" -section ${i} \
    -out "${i}.csr"

  openssl x509 -req -days 3653 -in "${i}.csr" \
    -copy_extensions copyall \
    -sha256 -CA "ca.crt" \
    -CAkey "ca.key" \
    -CAcreateserial \
    -out "${i}.crt"
done
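
The ca.conf in the repository is the source of truth, but to give an idea of how the -section flag is used: each name in the list above maps to a section in that file, which looks roughly like this hypothetical, abbreviated excerpt.

[kube-scheduler]
distinguished_name = kube-scheduler_distinguished_name
prompt             = no
req_extensions     = default_req_extensions

[kube-scheduler_distinguished_name]
CN = system:kube-scheduler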

The particular certificates for each worker host will then be copied to its own kubelet directory.

for host in kub-worker1 kub-worker2; do
  ssh root@$host mkdir /var/lib/kubelet/

  scp ca.crt root@$host:/var/lib/kubelet/

  scp $host.crt \
    root@$host:/var/lib/kubelet/kubelet.crt

  scp $host.key \
    root@$host:/var/lib/kubelet/kubelet.key
done

Lastly we will copy the certificates created for the controller node to its host.

scp \
  ca.key ca.crt \
  kube-api-server.key kube-api-server.crt \
  service-accounts.key service-accounts.crt \
  root@kub-ctrl1:~/

Create a lot of configuration files for our services

We will create a bunch of configuration files, and each of them has the same components: a cluster block with the CA certificate, a user block with the client certificate, and a context tying the two together. The commands below create each configuration file in turn.

Creating the proxy configuration file.

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://kub-ctrl1.ea.org:6443 \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-credentials system:kube-proxy \
    --client-certificate=kube-proxy.crt \
    --client-key=kube-proxy.key \
    --embed-certs=true \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-proxy \
    --kubeconfig=kube-proxy.kubeconfig

  kubectl config use-context default \
    --kubeconfig=kube-proxy.kubeconfig
}
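
If you want to sanity-check the result before moving on, kubectl can print the contents of a kubeconfig file; the same check works for each of the files created below.

kubectl config view --kubeconfig=kube-proxy.kubeconfig --minify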

Creating the controller configuration file.

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://kub-ctrl1.ea.org:6443 \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=kube-controller-manager.crt \
    --client-key=kube-controller-manager.key \
    --embed-certs=true \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-controller-manager \
    --kubeconfig=kube-controller-manager.kubeconfig

  kubectl config use-context default \
    --kubeconfig=kube-controller-manager.kubeconfig
}

Creating the scheduler configuration file.

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://kub-ctrl1.ea.org:6443 \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-credentials system:kube-scheduler \
    --client-certificate=kube-scheduler.crt \
    --client-key=kube-scheduler.key \
    --embed-certs=true \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:kube-scheduler \
    --kubeconfig=kube-scheduler.kubeconfig

  kubectl config use-context default \
    --kubeconfig=kube-scheduler.kubeconfig
}

Creating the admin configuration file.

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.kubeconfig

  kubectl config set-credentials admin \
    --client-certificate=admin.crt \
    --client-key=admin.key \
    --embed-certs=true \
    --kubeconfig=admin.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=admin \
    --kubeconfig=admin.kubeconfig

  kubectl config use-context default \
    --kubeconfig=admin.kubeconfig
}

Creating the worker configuration files.

for host in kub-worker1 kub-worker2; do
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://kub-ctrl1.ea.org:6443 \
    --kubeconfig=${host}.kubeconfig

  kubectl config set-credentials system:node:${host} \
    --client-certificate=${host}.crt \
    --client-key=${host}.key \
    --embed-certs=true \
    --kubeconfig=${host}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes-the-hard-way \
    --user=system:node:${host} \
    --kubeconfig=${host}.kubeconfig

  kubectl config use-context default \
    --kubeconfig=${host}.kubeconfig
done

Copy the configuration files for the workers to their proper locations on the worker nodes.

for host in kub-worker1 kub-worker2; do
  ssh root@$host "mkdir -p /var/lib/{kube-proxy,kubelet}"

  scp kube-proxy.kubeconfig \
    root@$host:/var/lib/kube-proxy/kubeconfig

  scp ${host}.kubeconfig \
    root@$host:/var/lib/kubelet/kubeconfig
done

Copy the admin, controller and scheduler configuration files to the controller.

scp admin.kubeconfig \
  kube-controller-manager.kubeconfig \
  kube-scheduler.kubeconfig \
  root@kub-ctrl1:~/

Lastly we will create an encryption key used to encrypt any secrets we store in our cluster.

export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
envsubst < configs/encryption-config.yaml > encryption-config.yaml
scp encryption-config.yaml root@kub-ctrl1:~/
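
For context, configs/encryption-config.yaml is a standard EncryptionConfiguration template where envsubst fills in the key. A sketch of what such a template typically contains:

kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}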

Installing the etcd service

We need a service to manage the configuration objects in our cluster. etcd is a great choice for this, and we will just put a single instance on our controller. In a production environment you should create at least 3 virtual machines with an etcd cluster to ensure high availability. I have a video on that topic: https://www.youtube.com/watch?v=uC1WPxFzISQ

In this example we will just set up one so we can get on to setting up Kubernetes instead.

First we will copy the etcd package and its service definition over to our controller host.

scp \
  downloads/etcd-v3.4.27-linux-amd64.tar.gz \
  units/etcd.service \
  root@kub-ctrl1:~/

We will switch over to the controller in order to continue running commands.

ssh root@kub-ctrl1

Unpack the etcd archive and move the binaries into our local bin directory.

{
  tar -xvf etcd-v3.4.27-linux-amd64.tar.gz
  mv etcd-v3.4.27-linux-amd64/etcd* /usr/local/bin/
}

Create the etcd directories, make the data directory accessible only to the root user, and copy the certificates etcd needs to operate.

{
  mkdir -p /etc/etcd /var/lib/etcd
  chmod 700 /var/lib/etcd
  cp ca.crt kube-api-server.key kube-api-server.crt \
    /etc/etcd/
}

Move the service file into the systemd directory.

mv etcd.service /etc/systemd/system/

Reload systemd, enable and start the etcd service.

{
  systemctl daemon-reload
  systemctl enable etcd
  systemctl start etcd
}
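
To check that etcd actually came up, you can ask it for its member list. Depending on how the unit file configures the client listener you may need to point etcdctl at the CA and client certificates, but in the simplest case this is enough:

etcdctl member list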

Setting up services for the controller

Back on the jumpbox we will copy the service binaries, service definitions and configuration files over to the controller.

scp \
  downloads/kube-apiserver \
  downloads/kube-controller-manager \
  downloads/kube-scheduler \
  downloads/kubectl \
  units/kube-apiserver.service \
  units/kube-controller-manager.service \
  units/kube-scheduler.service \
  configs/kube-scheduler.yaml \
  configs/kube-apiserver-to-kubelet.yaml \
  root@kub-ctrl1:~/

Switch over to the controller again.

ssh root@kub-ctrl1

Create a directory for the configuration files.

mkdir -p /etc/kubernetes/config

Make the service binaries executable and move them into our local bin directory.

{
  chmod +x kube-apiserver \
    kube-controller-manager \
    kube-scheduler kubectl

  mv kube-apiserver \
    kube-controller-manager \
    kube-scheduler kubectl \
    /usr/local/bin/
}

Create the data directory and move the certificates, keys and encryption configuration file into it.

{
  mkdir -p /var/lib/kubernetes/

  mv ca.crt ca.key \
    kube-api-server.key kube-api-server.crt \
    service-accounts.key service-accounts.crt \
    encryption-config.yaml \
    /var/lib/kubernetes/
}

Move all the Kubernetes configuration files and service files for our main controller components into place.

mv kube-apiserver.service /etc/systemd/system/kube-apiserver.service
mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
mv kube-controller-manager.service /etc/systemd/system/
mv kube-scheduler.kubeconfig /var/lib/kubernetes/
mv kube-scheduler.yaml /etc/kubernetes/config/
mv kube-scheduler.service /etc/systemd/system/

Reload systemd, enable and start the apiserver, controller and scheduler.

{
  systemctl daemon-reload

  systemctl enable kube-apiserver \
    kube-controller-manager kube-scheduler

  systemctl start kube-apiserver \
    kube-controller-manager kube-scheduler
}

Apply the RBAC configuration that allows the API server to talk to the kubelet on each node to our newly started cluster.

kubectl apply -f kube-apiserver-to-kubelet.yaml \
  --kubeconfig admin.kubeconfig
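
The file itself is a small piece of RBAC: a ClusterRole that lets the API server reach the kubelet API on the nodes, bound to the user the API server authenticates as. Roughly like the abbreviated sketch below; the file in the repository is authoritative.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups: [""]
    resources: ["nodes/proxy", "nodes/stats", "nodes/log", "nodes/spec", "nodes/metrics"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes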

Switch back to the jumpbox and try to reach the API server to see that everything has started correctly.

curl -k --cacert ca.crt https://kub-ctrl1.ea.org:6443/version

Set up services for the workers

To start the bridging workflow, we take the subnet information for each worker from our machines.txt file and substitute it into the kubelet configuration file and the bridge network config. Each pair of files is then copied to its host.

for host in kub-worker1 kub-worker2; do
  SUBNET=$(grep $host machines.txt | cut -d " " -f 4)
  sed "s|SUBNET|$SUBNET|g" \
    configs/10-bridge.conf > 10-bridge.conf 

  sed "s|SUBNET|$SUBNET|g" \
    configs/kubelet-config.yaml > kubelet-config.yaml

  scp 10-bridge.conf kubelet-config.yaml \
  root@$host:~/
done

Let's copy all the executable binaries, network plugins, configuration files, and service definitions over to our workers.

for host in kub-worker1 kub-worker2; do
  scp \
    downloads/runc.amd64 \
    downloads/crictl-v1.31.1-linux-amd64.tar.gz \
    downloads/cni-plugins-linux-amd64-v1.5.1.tgz \
    downloads/containerd-1.7.22-linux-amd64.tar.gz \
    downloads/kubectl \
    downloads/kubelet \
    downloads/kube-proxy \
    configs/99-loopback.conf \
    configs/containerd-config.toml \
    configs/kubelet-config.yaml \
    configs/kube-proxy-config.yaml \
    units/containerd.service \
    units/kubelet.service \
    units/kube-proxy.service \
    root@$host:~/
done

Let's switch over to each worker (the steps below need to be done on every worker).

ssh root@kub-worker1

Install socat, conntrack and ipset, which are required for the networking setup.

{
  apt-get update
  apt-get -y install socat conntrack ipset
}

Next up we will turn swap off manually. A Kubernetes node should never swap, so at this point also go into your /etc/fstab file and disable the swap partition there so it stays off after a reboot.

swapon --show
swapoff -a
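
Exactly how the swap entry looks differs between installs, but commenting it out usually boils down to something like this sketch; double-check your own /etc/fstab before and after.

# comment out any line that mounts a swap device or file
sed -i '/\sswap\s/ s/^/#/' /etc/fstab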

We need to create a bunch of directories for our network plugin and kubelet data, configuration and run files.

mkdir -p \
  /etc/cni/net.d \
  /opt/cni/bin \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/lib/kubernetes \
  /var/run/kubernetes

Unpack the software and move each binary to its final destination: /usr/local/bin, /bin or /opt/cni/bin depending on the component.

{
  mkdir -p containerd
  tar -xvf crictl-v1.31.1-linux-amd64.tar.gz
  tar -xvf containerd-1.7.22-linux-amd64.tar.gz -C containerd
  tar -xvf cni-plugins-linux-amd64-v1.5.1.tgz -C /opt/cni/bin/
  mv runc.amd64 runc
  chmod +x crictl kubectl kube-proxy kubelet runc 
  mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
  mv containerd/bin/* /bin/
}

Next we will move our previously prepared network files for the bridge plugin into the CNI configuration directory.

mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/

Next we will move the configuration file and service file for containerd into place.

{
  mkdir -p /etc/containerd/
  mv containerd-config.toml /etc/containerd/config.toml
  mv containerd.service /etc/systemd/system/
}

Next we will move the configuration file and service file for the kubelet into place.

{
  mv kubelet-config.yaml /var/lib/kubelet/
  mv kubelet.service /etc/systemd/system/
}

Next we will move the configuration file and service file for kube-proxy into place.

{
  mv kube-proxy-config.yaml /var/lib/kube-proxy/
  mv kube-proxy.service /etc/systemd/system/
}

Reload systemd, enable and start kubelet, containerd and proxy.

{
  systemctl daemon-reload
  systemctl enable containerd kubelet kube-proxy
  systemctl start containerd kubelet kube-proxy
}
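
Before leaving the worker it is worth a quick check that all three services actually stayed up:

systemctl is-active containerd kubelet kube-proxy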

Verify that the workers are running

After you have run the previous steps on all your workers, they should be up and running. You can check this quickly by running kubectl on your controller host.

ssh root@kub-ctrl1 \
  "kubectl get nodes \
  --kubeconfig admin.kubeconfig"

We want an easier way to run kubectl, so we will create an admin configuration file we can use locally on the jumpbox to connect to our cluster.

{
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://kub-ctrl1.ea.org:6443

  kubectl config set-credentials admin \
    --client-certificate=admin.crt \
    --client-key=admin.key

  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  kubectl config use-context kubernetes-the-hard-way
}

After the file is created we can use kubectl without any extra flags.

kubectl get nodes

Setting up the bridge between the worker networks

Bridging the networks means setting up routes from each machine to the pod subnets of all the other workers in the cluster. We will use the machines.txt file we created at the beginning of this tutorial.

Fetch the IP and subnet for each worker and store them in variables.

{
  NODE_0_IP=$(grep kub-worker1 machines.txt | cut -d " " -f 1)
  NODE_0_SUBNET=$(grep kub-worker1 machines.txt | cut -d " " -f 4)
  NODE_1_IP=$(grep kub-worker2 machines.txt | cut -d " " -f 1)
  NODE_1_SUBNET=$(grep kub-worker2 machines.txt | cut -d " " -f 4)
}

We then use these variables to print the configuration lines that need to be added to the network interface definition on each host. The controller should have both of these routes; worker1 only needs the route to worker2's subnet and vice versa.

echo "  post-down ip route del ${NODE_0_SUBNET} via ${NODE_0_IP} dev enp0s3"
echo "  post-down ip route del ${NODE_1_SUBNET} via ${NODE_1_IP} dev enp0s3"
echo "  post-up ip route add ${NODE_0_SUBNET} via ${NODE_0_IP} dev enp0s3"
echo "  post-up ip route add ${NODE_1_SUBNET} via ${NODE_1_IP} dev enp0s3"

For each node we will then install a couple of packages. We need to ensure that CA certificates are generally available, or else our pods can't download images and packages from the internet. curl is always good to have for fetching scripts, and net-tools is handy if you need to verify your routing or network setup.

apt install -y ca-certificates curl net-tools
update-ca-certificates

Next we will load the two kernel modules needed for the bridge: overlay and br_netfilter.

{
  cat <<EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

  modprobe overlay
  modprobe br_netfilter
}

We also need to enable IP forwarding and make sure bridged traffic passes through iptables and ip6tables.

{
  cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

  sysctl --system
}

Installing the dashboard

Now we can install the dashboard. This is currently done using Helm, so I will use their one-liner to install it. You can download the script and verify it first if you want.

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

The dashboard will not start without DNS in the cluster. These commands bring up CoreDNS on 10.32.0.10, the address we defined in the kubelet configuration file as our cluster DNS address.

helm repo add coredns https://coredns.github.io/helm
helm --namespace=kube-system install coredns coredns/coredns --set service.clusterIP=10.32.0.10

Verify that the DNS pods are up and running and that the service is available so DNS resolution will work. If you want to test resolution end to end, you can start a small pod with DNS tools inside the cluster, as sketched after the commands below.

kubectl get pods -A
kubectl get svc -A
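
One way to test name resolution is to start a throwaway pod with DNS tooling and query the cluster DNS from inside it; a sketch using the dnsutils test image from the Kubernetes documentation:

kubectl run dnsutils --image=registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 --restart=Never -- sleep infinity
kubectl exec -it dnsutils -- nslookup kubernetes.default
kubectl delete pod dnsutils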

Now we can run the helm commands to deploy the dashboard.

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

Wait for the dashboard pods to start up, then forward the port with the command below. The dashboard will then be available on port 8443 on the host you ran the command from.

kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443 --address 0.0.0.0

Setting up MetalLB so our cluster can expose services on separate IPs

I'm not satisfied with a proxied dashboard. In a production environment you need to be able to reach the dashboard from anywhere, so installing MetalLB is a good choice to give it an IP on your network.

First we will install the service.

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml

After it is installed, MetalLB validates new resources by calling the HTTPS endpoint of its admission webhook. Since we don't have a certificate service in our cluster that call will always fail, so the command below turns the webhook off.

kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io metallb-webhook-configuration

In the yamls directory of the repository I've added some manifests: address-pool.yaml creates an IP address range for your load balancer, so services simply get an address from the pool if you don't request a specific one. Then you apply the dashboard load balancer manifest and you can reach the dashboard on the IP specified in that definition. Remember that the IP needs to be within the address range you specified in the address-pool file; a sketch of what the pool could look like follows after the commands.

cd yamls
kubectl apply -f address-pool.yaml
kubectl apply -f dashboard-loadbalancer.yaml
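
As an illustration of what those manifests contain, a MetalLB address pool with an L2 advertisement looks roughly like this; the range below is a placeholder, use free addresses from your own network.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.6.200-192.168.6.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool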

Creating admin user

Last but not least we will create an admin user and fetch the token required to log in to our dashboard.

kubectl apply -f create-admin.yaml
kubectl describe secret -n kubernetes-dashboard admin-user
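
For reference, create-admin.yaml follows the standard dashboard admin-user pattern: a ServiceAccount, a cluster-admin binding and a long-lived token secret, roughly like the sketch below. The file in the repository is authoritative.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token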
