All the code for this chapter (and other chapters) is available at https://github.com/param108/kubernetes101, in the directory 014.
Changing tack a bit, today I am playing with another Kubernetes simulation environment called kind.
What is it?
Kind allows you to run Kubernetes on “nodes” which are actually Docker containers. You can configure the number of nodes and then use kubectl as if on a normal cluster.
Kind installation is pretty straightforward on Ubuntu 18.04, and there is also a brew install for Mac folks.
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64
chmod +x ./kind
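Once downloaded, you will probably want the binary on your PATH. A quick sketch of one way to do that (the target directory is just a common convention, not a requirement):

```shell
# Move the binary onto the PATH and confirm it runs.
# /usr/local/bin is an assumption; any directory on your PATH works.
sudo mv ./kind /usr/local/bin/kind
kind version
```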
Creating a cluster is as easy as
$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Would have been cool to call it kindly, no? For example: kindly create cluster.
Multiple Nodes
The really cool thing here is that Kind supports multiple nodes. To create multiple nodes you need to create a config file.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# One control plane node and three "workers".
#
# While these will not add more real compute capacity and
# have limited isolation, this can be useful for testing
# rolling updates etc.
#
# The API-server and other control plane components will be
# on the control-plane node.
#
# You probably don't need this unless you are testing Kubernetes itself.
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
$ kind create cluster --name realkube --config config.yml
Creating cluster "realkube" ...
 ✓ Ensuring node image (kindest/node:v1.17.0) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-realkube"
You can now use your cluster with:
kubectl cluster-info --context kind-realkube
See the four 📦 next to “Preparing nodes”? It takes a couple of minutes for all the nodes to reach the Ready state.
$ kubectl get nodes
NAME                     STATUS     ROLES    AGE     VERSION
realkube-control-plane   Ready      master   2m14s   v1.17.0
realkube-worker          NotReady   <none>   94s     v1.17.0
realkube-worker2         NotReady   <none>   94s     v1.17.0
realkube-worker3         NotReady   <none>   94s     v1.17.0
And finally, after a while, this is the output of get nodes.
$ kubectl get nodes
NAME                     STATUS   ROLES    AGE     VERSION
realkube-control-plane   Ready    master   2m27s   v1.17.0
realkube-worker          Ready    <none>   107s    v1.17.0
realkube-worker2         Ready    <none>   107s    v1.17.0
realkube-worker3         Ready    <none>   107s    v1.17.0
Node Affinity
Kubernetes allows you to suggest which nodes a pod can run on. One way to do this is to use spec.nodeSelector.
NodeSelector
Let's use our simple ubuntuPod from chapter 5 to test this out on kind. The Pod spec is available at 014/ubuntuPod.yml and is reproduced here.
apiVersion: "v1"
kind: Pod
metadata:
  name: ubuntu-pod
  labels:
    app: ubuntu-pod
    version: v5
    role: backend
spec:
  containers:
  - name: ubuntu-container
    image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "while [ \"a\" = \"a\" ]; do echo \"Hi\"; sleep 5; done" ]
There are 4 nodes to select from; let's choose the node realkube-worker.
Label the node
The first step is to label the node. To label a node, you use the following command
kubectl label nodes <node-name> <label-key>=<label-value>
Let's label realkube-worker with quality=best.
$ kubectl label nodes realkube-worker quality=best
node/realkube-worker labeled
$ kubectl get nodes/realkube-worker -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2020-04-01T12:12:41Z"
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: realkube-worker
    kubernetes.io/os: linux
    quality: best
  name: realkube-worker
You can see our new label under metadata.labels.
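As an aside, labels can also be queried, overwritten, and removed with standard kubectl syntax (the node and label names below are the ones from this walkthrough):

```shell
# List only the nodes carrying the label.
kubectl get nodes -l quality=best
# Changing an existing label's value requires --overwrite.
kubectl label nodes realkube-worker quality=good --overwrite
# Remove the label by suffixing the key with a minus.
kubectl label nodes realkube-worker quality-
```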
Set NodeSelector in the pod
Add a nodeSelector section to the pod spec.
apiVersion: "v1"
kind: Pod
metadata:
  name: ubuntu-pod
  labels:
    app: ubuntu-pod
    version: v5
    role: backend
spec:
  containers:
  - name: ubuntu-container
    image: ubuntu
    command: ["/bin/bash"]
    args: ["-c", "while [ \"a\" = \"a\" ]; do echo \"Hi\"; sleep 5; done" ]
  nodeSelector:
    quality: best
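If nodeSelector lists more than one key, a node must carry every listed label to be eligible. A minimal sketch (the tier label here is hypothetical, added only for illustration):

```yaml
spec:
  # The node must have BOTH labels for the pod to be scheduled on it.
  nodeSelector:
    quality: best
    tier: backend   # hypothetical second label
```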
Let's apply this pod spec and check that the correct node was chosen. kubectl get pods/ubuntu-pod -o wide gives a nice output with the necessary node details.
$ kubectl apply -f ubuntuPod.yml
pod/ubuntu-pod created
$ kubectl get pods/ubuntu-pod -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP           NODE              NOMINATED NODE   READINESS GATES
ubuntu-pod   1/1     Running   0          4m38s   10.244.3.2   realkube-worker   <none>           <none>
It's a bit hard to see in the output above, but NODE is realkube-worker.
Multiple nodes can have the same label and kubernetes will choose one of them to schedule the pod.
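For example, giving a second worker the same label makes both nodes eligible, and the scheduler picks one of them (the commands assume the cluster built in this post):

```shell
# Both workers now match nodeSelector quality=best;
# the scheduler is free to place the pod on either one.
kubectl label nodes realkube-worker2 quality=best
kubectl get nodes -l quality=best
```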
There are better ways to program affinity to particular nodes; we will look at those in the next post.
Delete a cluster
To delete the cluster and nodes, just do
kind delete cluster --name=realkube
Learnings
Kind allows you to simulate multi-node clusters on your laptop.
You can set up Kind by following the installation instructions above.
You can assign a Pod to a particular node using spec.nodeSelector.
Conclusion
In this post we looked at Kind and at one way to suggest the node on which a pod should run. In the next post we will look at why you would need to do this, and at another way to do it: Affinity and AntiAffinity.