All the code for this chapter (and other chapters) is available at https://github.com/param108/kubernetes101, in the directory 021.
In the previous posts we looked at creating pods and services. Using ReplicaSets we were able to scale out the pods, and using a Service we were able to load-balance traffic across them.
Deployment Problems
In a simple scenario, doing everything with a ReplicaSet spec works just fine. But once you have a big development team and many services, you need an easier way to deploy pods to production. The common tasks required of a deployment system are:
- Deploy new versions
- Remove old versions
- Rollback to a previous version if the new version doesn’t roll out properly
We would like to do this without dropping traffic, if possible.
Deployment Resource
Kubernetes provides the Deployment resource to handle this problem. It provides the features above by managing ReplicaSets. Here is the spec for a Deployment, available at 021/webdeployment.yml in the repository.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  labels:
    app: web
    type: deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
      type: app
  template:
    metadata:
      name: web
      labels:
        app: web
        type: app
    spec:
      containers:
      - name: web
        image: web:v1
        imagePullPolicy: Never
        command: ["/web"]
        args: []
        ports:
        - containerPort: 8080
        env:
        - name: db_name
          value: web
        - name: db_user
          value: web
        - name: db_pass
          value: web
        - name: db_host
          value: database.components
As you can see, it's almost identical to a ReplicaSet spec; the only change is the kind. This is no accident: the main function of a Deployment is to create and delete ReplicaSets.
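By default a Deployment replaces pods with a rolling update. You can tune how aggressively pods are swapped via spec.strategy. The fields below are the real Deployment API; the values are illustrative (both default to 25%), so this is a sketch, not what our spec uses:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 pod above the desired count during a rollout
      maxUnavailable: 0  # never drop below the desired count of available pods
```

With maxUnavailable set to 0, the Deployment only removes an old pod after a replacement is ready, which is the "no dropped traffic" behaviour we asked for above.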
Setup
We will use minikube for this exercise:
kubernetes101/021/service$ eval $(minikube -p minikube docker-env)
kubernetes101/021/service$ make dockerv1
...
...
kubernetes101/021/service$ make dockerv2
...
...
kubernetes101/021/service$ kubectl apply -f components_service.yml
...
kubernetes101/021/service$ kubectl apply -f postgres.yml
Rolling out changes
Apply the spec above and you should see one new ReplicaSet and three new pods created.
kubernetes101/021/service$ kubectl apply -f webdeployment.yml
deployment.apps/backend created
kubernetes101/021/service$ kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
backend-65b4487bdb   3         3         3       11s
kubernetes101/021/service$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
backend-65b4487bdb-456c8   1/1     Running   0          17s
backend-65b4487bdb-9tbbh   1/1     Running   0          17s
backend-65b4487bdb-sjnmv   1/1     Running   0          17s
kubernetes101/021/service$ kubectl get deployments
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
backend   3/3     3            3           82s
Checking Status
How do you know whether a deployment has finished rolling out? Use kubectl rollout status.
kubernetes101/021/service$ kubectl rollout status deployments/backend
deployment "backend" successfully rolled out
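kubectl rollout status blocks until the rollout completes or is deemed to have failed. How long Kubernetes waits before declaring a rollout stalled is controlled by spec.progressDeadlineSeconds, which defaults to 600 seconds; the value below is only illustrative:

```yaml
spec:
  progressDeadlineSeconds: 120  # mark the rollout as failed after 2 minutes without progress
```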
Updating a deployment
Suppose a new version of the application becomes available, usually as a new docker image. The easy way to update the deployment is to edit it. In our case we have the v2 image we built in Setup above. Let's use the kubectl set command to change the image in the deployment.
kubernetes101/021/service$ kubectl set image deployments/backend web=web:v2
deployment.apps/backend image updated
kubernetes101/021/service$ kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
backend-65b4487bdb   1         1         1       10m
backend-7d766fcb9f   3         3         2       8s
As you can see, a new ReplicaSet was created and the old one was gradually scaled down.
kubernetes101/021/service$ kubectl get rs
NAME                 DESIRED   CURRENT   READY   AGE
backend-65b4487bdb   0         0         0       12m
backend-7d766fcb9f   3         3         3       108s
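The rollout only scales down old pods as new ones report READY. Readiness is usually driven by a readinessProbe on the container; here is a sketch (the /healthz path is a hypothetical endpoint, not something our example web image necessarily serves):

```yaml
containers:
- name: web
  image: web:v2
  readinessProbe:
    httpGet:
      path: /healthz        # hypothetical health endpoint
      port: 8080
    initialDelaySeconds: 2  # wait before the first check
    periodSeconds: 5        # check every 5 seconds
```

Without a readiness probe, a pod counts as ready as soon as its containers start, which can let a broken version finish rolling out before anyone notices.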
History
Now I want to see the revisions that have been deployed.
kubernetes101/021/service$ kubectl rollout history deploy/backend
deployment.apps/backend
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
If you want CHANGE-CAUSE to be populated, pass --record to all your deployment-related commands. The highest-numbered revision is the current one.
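CHANGE-CAUSE is read from the kubernetes.io/change-cause annotation on the Deployment. Instead of --record (which newer kubectl versions deprecate), you can set the annotation yourself; the message below is only illustrative:

```yaml
metadata:
  annotations:
    kubernetes.io/change-cause: "upgrade web to v2"  # shown in rollout history
```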
You can look at the deployment spec used in any revision with --revision:
kubernetes101/021/service$ kubectl rollout history deploy/backend --revision=1
deployment.apps/backend with revision #1
Pod Template:
  Labels:  app=web
           pod-template-hash=65b4487bdb
           type=app
  Containers:
   web:
    Image:      web:v1
    Port:       8080/TCP
    Host Port:  0/TCP
    Command:
      /web
    Environment:
      db_name:  web
      db_user:  web
      db_pass:  web
      db_host:  database.components
    Mounts:     <none>
  Volumes:      <none>
Rollback
If the new pods fail to come up during a rollout, the Deployment does not revert on its own; the rollout stalls, with the old ReplicaSet's remaining pods still serving traffic. You can roll back manually using
kubernetes101/021/service$ kubectl rollout undo deployments/backend
deployment.apps/backend rolled back
kubernetes101/021/service$ kubectl rollout history deploy/backend
deployment.apps/backend
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
What happened here is that revision 1 became revision 3. You can check the history for that revision to verify.
You can also specify --to-revision to roll back to a particular revision.
kubernetes101/021/service$ kubectl rollout undo --to-revision=2 deploy/backend
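Rollbacks work because the Deployment keeps its old, scaled-to-zero ReplicaSets around. How many are retained is controlled by spec.revisionHistoryLimit, which defaults to 10; the value below is only illustrative:

```yaml
spec:
  revisionHistoryLimit: 5  # keep only the 5 most recent ReplicaSets for rollback
```

Setting it too low limits how far back you can roll; setting it to 0 disables rollback entirely.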
Learnings
- Deployments make rolling out changes more deterministic
- Deployments with rolling updates allow upgrades without traffic disruption
- Deployments keep track of previous versions
- Use --record in your deployment commands to record the command used to update a deployment
Conclusion
With Deployments we have covered the major components of a web service. A few details remain, specifically around passing configuration to a web service using ConfigMaps and the like. We will look at that in the next post.