One awesome feature of Kubernetes is ReplicaSets. Simply put, say you want to run a fixed number of pods at all times for fault tolerance. A ReplicaSet guarantees the availability of a specified number of identical pods, identified by labels.

ReplicaSets enable you to do this. I would strongly encourage you to check out the official Kubernetes documentation to read more about them; the site has tons of useful documentation for everything related to Kubernetes.

Here is a complete yaml file that you can readily use to create a ReplicaSet. Note: I will try to keep this file updated, as some of the specs may change with future versions. I am using Bitnami’s Ghost container for demo purposes.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: mywebapp
  labels:
    app: mywebapp
spec:
  # modify replicas according to your case
  replicas: 7
  selector:
    matchLabels:
      app: mywebapp
  template:
    metadata:
      labels:
        app: mywebapp
    spec:
      containers:
      - name: ghost
        image: ghost

Looking at the yaml file above, the definition creates 7 pods and ensures 7 pods remain active at all times.

Some of this is straightforward yaml, but the important concept to understand is that all the pods carry labels, and the ReplicaSet uses those labels to ensure 7 pods with that label exist.

The selector: matchLabels: section matches the pod label and spins up the number of instances defined in replicas.

You can readily use this yaml file and run this command to create the pods

kubectl create -f mywebapp-replicasets.yaml
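Once created, you can verify that the ReplicaSet is holding the count, and even watch it self-heal. A quick sketch (mywebapp-abc12 is a placeholder; substitute one of your actual pod names):

```shell
# Verify the ReplicaSet and the pods it manages via the label selector
kubectl get rs mywebapp
kubectl get pods -l app=mywebapp

# Delete any one pod; the ReplicaSet notices the count dropped
# below 7 and spins up a replacement (use a real pod name here)
kubectl delete pod mywebapp-abc12
kubectl get pods -l app=mywebapp
```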

So, you have your test Kubernetes cluster up and running, and you are trying to find out the status of your cluster by typing this command after a fresh reboot

kubectl cluster-info

You start getting a connection error, so you try out the next command to get the status of your pods

kubectl get pods --all-namespaces

Same error. The majority of the time, the fix is disabling all swap

sudo swapoff -a
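Note that swapoff -a only lasts until the next reboot, which is exactly why the error shows up again after a fresh restart. On a test box, you can also comment out the swap entry in /etc/fstab so the change persists. A sketch, assuming your swap lines contain the word "swap" (review the file before editing it):

```shell
# Disable swap for the current session
sudo swapoff -a

# Comment out any swap entries so the setting survives reboots
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Confirm no swap is active (Swap line should show 0)
free -h
```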

Note, we are talking about a test cluster here. In a production cluster, work with your Linux sysadmin to understand the impact of these changes. Finally, when you issue the cluster-info command again, you should get a healthy response showing the master and DNS endpoints.


This is a simple walk-through to get a local cluster up and running. Cloud platforms have their own versions of Kubernetes, where the master node is usually a managed service and the worker nodes are compute machines. I have avoided putting in specific version numbers to just give you an idea of the install process.

In the future, I believe each OS will have a seamless way to install Kubernetes with pre-built installation packages. For now, let us stick with a basic test cluster to play around with.

In my cluster, there are 3 VMs with Ubuntu as the OS

ubuntu1 – will act as the master node

ubuntu2 & ubuntu3 – worker nodes

Ensure you can ssh into the boxes, and you are ready to run the commands. Make sure you have a unique MAC address and product_uuid on each machine in case you have cloned VMs. Each machine should have a dedicated IP.
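You can check both values with the commands below; these are the usual kubeadm pre-flight checks for cloned VMs:

```shell
# MAC addresses of the network interfaces; must be unique per node
ip link

# Hardware UUID; cloned VMs sometimes share this value
sudo cat /sys/class/dmi/id/product_uuid
```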

These commands should be run on all 3 nodes

— Regular package update

sudo apt-get update

— Update for https repo access

sudo apt-get install -y apt-transport-https curl

— Add key for the new repository. Make sure you add the sudo command before your apt-key add

sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

— Now, add the repo

sudo vim /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main

— Install Kubernetes and Kubernetes networking CNI

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
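An optional but common follow-up is pinning these packages, so a routine apt-get upgrade does not move your cluster to a new Kubernetes version behind your back:

```shell
# Prevent unplanned upgrades of the cluster components
sudo apt-mark hold kubelet kubeadm kubectl
```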

— The networking aspect now. Pick any of your favorites, but for this cluster, I will pick flannel. You can use Weave Net as well. Note that this kubectl apply only works against a running cluster, so run it on the master after the kubeadm init step below

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

— Let’s install Docker

sudo apt install -y docker.io

— Verify install is working by running these commands

which kubeadm
which kubectl
sudo docker ps

Now, the real fun starts: assigning ubuntu1 as the master node. How do we do that? Execute the command below. Since we picked flannel, pass the pod network CIDR flannel expects. You can also add flags to set the token not to expire, but let us keep this simple

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

— Follow the instructions on the screen to execute the following commands

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

— Execute this command on the worker nodes (your security token will be different).

kubeadm join --token iptt5e.hszynj998ztugg47 \
    --discovery-token-ca-cert-hash sha256:331560a34e8a1e8b4001462e951e2d502bfe9a3c73279b43ebae7365835a30e3 --skip-preflight-checks
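If you did not copy the join command in time, or the default token has expired (they last 24 hours), you can regenerate it on the master rather than re-initializing:

```shell
# Run on the master node; prints a fresh, ready-to-paste join command
kubeadm token create --print-join-command
```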

Now, after the command has been executed on the worker nodes, we can run the following command to make sure the cluster is operational. Look for a status of Ready on the nodes

kubectl get nodes

If you encounter issues, do a check on the specific node.

kubectl describe nodes ubuntu1
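A frequent culprit for a NotReady node is the networking layer, so it also helps to check the system pods (the flannel and DNS pods should all be Running) and the recent cluster events:

```shell
# Control-plane and networking pods live in the kube-system namespace
kubectl get pods --all-namespaces

# Recent events often point at the failing component
kubectl get events --sort-by=.metadata.creationTimestamp
```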

In future posts, we will build on this cluster and deploy some sample applications. Helm, as the package manager, has a lot of features we can explore, along with the Kubernetes dashboard.