Deployments and ReplicaSets

A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don’t require updates at all.

How to create an nginx ReplicaSet

  • Create a manifest file nginxrs.yaml with the below content

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-app
        image: nginx

Let's deploy the ReplicaSet

$ kubectl apply -f nginxrs.yaml

How to get the list of ReplicaSets

$ kubectl get rs

Check the number of pods running

$ kubectl get pods 

How to delete a ReplicaSet

$ kubectl delete replicaset nginx 

Deployments

Deployments are intended to replace ReplicationControllers. They provide the same replication functionality (via ReplicaSets) and also have the ability to roll out changes and roll them back when necessary.

A Deployment controller provides declarative updates for Pods and ReplicaSets.

You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
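For example, once a Deployment exists, an image update can be rolled out and rolled back with the standard kubectl rollout subcommands. The commands below are shown against the nginx-deploy Deployment created later in this section; nginx:1.16.1 is just an example target image.

$ kubectl set image deployment/nginx-deploy nginx=nginx:1.16.1
$ kubectl rollout status deployment/nginx-deploy
$ kubectl rollout history deployment/nginx-deploy
$ kubectl rollout undo deployment/nginx-deploy

Behind the scenes, each rollout creates a new ReplicaSet and gradually shifts replicas from the old one to the new one.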

Let's create an nginx Deployment

  • Create a manifest file nginx-deployment.yaml with the below content
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80

Let's deploy it

$ kubectl apply -f nginx-deployment.yaml 

Get the list of current Deployments

$ kubectl get deployments 

How to scale out

$ kubectl scale deployment nginx-deploy --replicas=5 

Now check the deployments and replicas

$ kubectl get deploy

How to scale down the deployment

$ kubectl scale deployment nginx-deploy --replicas=3

Check the deployment and replicas

$ kubectl get deploy

SCALE DEPLOYMENTS

When load increases, we can scale the Pods by scaling the Deployment

  • The below command will scale the nginx-deploy deployment to 10
$ kubectl scale deployment --replicas=10 nginx-deploy
  • Verify by using the below command
$ kubectl get pods
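Instead of scaling manually, you can also let Kubernetes scale the Deployment automatically based on CPU usage. This is a sketch assuming the metrics server is installed in the cluster; the thresholds are example values.

$ kubectl autoscale deployment nginx-deploy --min=3 --max=10 --cpu-percent=80
$ kubectl get hpa

This creates a HorizontalPodAutoscaler that adjusts the replica count between 3 and 10 to keep average CPU utilization around 80%.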

Exposing the service

We know how to expose a Pod using a Service. The Service endpoints are created based on the labels of the Pods.

Here is how we can create a Service that can be used to access nginx from outside

  • First we will check the labels of the Pod
$ kubectl get pod POD_NAME --show-labels

You will see that one of the labels is app=nginx

  • Next write a Service spec (nginx-svc.yaml) and use selector as app: nginx
apiVersion: v1
kind: Service
metadata:
  labels:
    run: nginx-svc
  name: nginx-svc
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

This Service will look for Pods with the label app=nginx (the selector, not the Service's own metadata labels, determines which Pods it targets)

  • Lets create the service
$ kubectl apply -f nginx-svc.yaml
  • How to check if the loadbalancer is up and the service is fine
$ kubectl describe svc nginx-svc
  • Verify the service details
$ kubectl get svc

Now you will be able to see the default nginx welcome page using the FQDN from the above output. In this case, that is the ELB created by Kubernetes (when the cluster runs on AWS).
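To verify from the command line, you can curl the external address reported for the Service. Replace EXTERNAL_IP below with the hostname or IP from your own output; it may take a minute or two for the load balancer to become reachable.

$ kubectl get svc nginx-svc
$ curl http://EXTERNAL_IP

A successful response returns the HTML of the default nginx welcome page.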

Kubernetes deployment strategies

In Kubernetes there are a few different ways to release an application; choosing the right strategy is necessary to keep your infrastructure reliable during an application update.

To learn more, please have a look at https://www.cncf.io/wp-content/uploads/2018/03/CNCF-Presentation-Template-K8s-Deployment.pdf

TASK

I. Create a Deployment using the k8s.gcr.io/echoserver:1.10 image and expose it as a Service
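One possible way to approach the task is sketched below. The Deployment name echoserver and the LoadBalancer Service type are choices for illustration, not requirements; echoserver:1.10 listens on port 8080.

$ kubectl create deployment echoserver --image=k8s.gcr.io/echoserver:1.10
$ kubectl expose deployment echoserver --port=8080 --type=LoadBalancer
$ kubectl get svc echoserver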