
Docker and k8s

Posted on: September 23, 2022 at 03:22 PM

Definition of DevOps

DevOps is a term that emerged from the collision of two major related trends. The first, sometimes called “agile infrastructure” or “agile operations”, sprang from applying Agile and Lean approaches to operations work.
The second is a much expanded understanding of the value of collaboration between development and operations staff throughout all stages of the development lifecycle when creating and operating a service, and how important operations has become in our increasingly service-oriented world.

DevOps is a set of software development practices that combines software development with information technology operations to shorten the systems development life cycle while delivering features, fixes, and updates frequently in close alignment with business objectives.

Microservices

The microservices architecture is a design approach to build a single application as a set of small services. Each service runs in its own process and communicates with other services through a well-defined interface using a lightweight mechanism, typically an HTTP-based application programming interface (API). Microservices are built around business capabilities; each service is scoped to a single purpose. You can use different frameworks or programming languages to write microservices and deploy them independently, as a single service, or as a group of services.

The Docker ecosystem consists of the Docker client (CLI), the Docker server (daemon), Docker images, Docker Hub, and Docker Compose.

Docker hierarchy

Docker registry and repository

Note: You can set up your own private registry using Docker Trusted Registry.

Docker image and container

Docker image: A single file with all the dependencies and configuration required to run a program.

Docker installation using repo

Image cache on host machine

It contains the Docker images previously downloaded from Docker Hub.

Namespacing

Segmenting resources based on requesting process. Isolating resources per process or process group.

Control groups (cgroups)

Limits the amount of resources (CPU, memory, etc.) used per process.

Docker Image: fs snapshot + startup command.

FS snapshot: specific files and directories.

Container: A set of processes which are assigned a grouping of specific resources.

Create and run container from an image

$ docker run image_name [command]

Run container in background(detached mode)

$ docker run -d image_name [command]

Note: The command provided overrides the image’s default startup command.
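For example (using the public busybox image purely as an illustration), the following runs echo instead of busybox’s default command:

$ docker run busybox echo hi there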

List currently running containers

$ docker ps

List all containers ever created

$ docker ps --all

Container lifecycle

Docker run = Docker create + Docker start

$ docker create image_name
$ docker start -a container_id

Note: “-a” attaches to the container and prints its output to the console.

Attaching docker cli to running container

$ docker attach container_id

Restart a stopped container

$ docker start -a container_name

Note: A stopped container still takes up host resources (e.g. disk space).

Remove stopped containers

$ docker system prune

Note: This also removes the build cache, so images previously downloaded from Docker Hub will have to be fetched again.

Retrieving log output

$ docker logs container_id

Note: The above command can also retrieve logs from stopped containers.

Stopping a container

$ docker stop container_id
$ docker kill container_id

Note: stop (SIGTERM) allows a graceful exit; kill (SIGKILL) stops the container abruptly. If a container doesn’t stop within 10 seconds of docker stop, Docker falls back to kill.

Multicommand containers

$ docker exec -it container_id command

Note: “-i” attaches our terminal to the container’s STDIN, and “-t” allocates a pseudo-terminal so input and output are nicely formatted.

Executing command in running containers by getting a shell

$ docker exec -it container_id sh

The command below also gives a shell, but overrides the default startup command:

$ docker run -it docker_image sh

Workflow for creating docker image:

Create a file named Dockerfile, which specifies the following: a base image (FROM), commands to install additional dependencies (RUN), and a startup command (CMD).
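A minimal sketch of such a Dockerfile (installing Redis on Alpine purely as an illustration):

# Use an existing image as a base
FROM alpine

# Download and install a dependency
RUN apk add --update redis

# Tell the image what to do when it starts as a container
CMD ["redis-server"]

Build the image from the directory containing the Dockerfile: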

$ docker build .

Note: Here ‘.’ is the build context, i.e. the set of files and folders made available to the build.

To build using a custom Dockerfile name, pass it with -f:

$ docker build -f Dockerfile.dev .

Important: Docker builds with a cache; once an instruction changes, every instruction after it is re-run. So put frequently changing instructions near the bottom of the Dockerfile for the cache to take effect.

Tagging an image

$ docker build -t saurabhp75/redis:latest .
$ docker tag image_id docker_id/repo_name:version

Publishing an image

$ docker push saurabhp75/redis:latest

Note: In the commands above, the image name consists of your Docker ID (saurabhp75), the repo/project name (redis), and the version tag (latest).

Manually creating an image from a running container
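A sketch using docker commit, which snapshots a running container into a new image (the redis-server startup command is an assumed example):

$ docker commit -c 'CMD ["redis-server"]' container_id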

Basic docker workflow

Container port mapping

Map a port on the host (first number) to a port inside the container (second number):

$ docker run -p 8080:8080 image_name

Specifying a working directory (WORKDIR)

Note: Any instruction that follows (COPY, RUN, CMD, etc.) will be executed relative to this path in the container.

Note: The order of commands in a Dockerfile is important. Generally we should split the COPY steps so that only the instructions below the changed files are re-executed, i.e. better use of the cache.
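For example, in a Node app, copying package.json separately means npm install is only re-run when dependencies change, not on every source edit (a sketch, assuming a Node base image):

FROM node:alpine
WORKDIR /app

# Install dependencies first, so source edits don't bust this cache layer
COPY package.json .
RUN npm install

# Copy the rest of the source last
COPY . .
CMD ["npm", "run", "start"]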

Docker compose

Common docker-compose commands are covered below.

Specifying environment variables in docker compose config file

environment:
  - REDIS_HOST=redis-server
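A minimal docker-compose.yml sketch putting this together (the service names, image, and port numbers are assumptions for illustration):

version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    build: .
    ports:
      - '4001:8081'
    environment:
      - REDIS_HOST=redis-server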

Run services specified in docker compose file

Similar to $ docker run image

$ docker-compose up

To run services in background

$ docker-compose up -d

Rebuild images and run services

$ docker-compose up --build

Above is similar to

$ docker build .
$ docker run image

Stopping docker-compose containers

$ docker-compose down

Container maintenance with the docker-compose file

restart: "no" | always | on-failure | unless-stopped

Note: “no” must be quoted in the YAML file, since a bare no is parsed as the boolean false.

To see status of running services

$ docker-compose ps

Note: The above command should be run from the directory containing the docker-compose.yml file.

Docker in production environment

Workflow: develop on a feature branch, push to GitHub, have Travis CI run the tests, merge to master, and have Travis deploy the app to AWS Elastic Beanstalk.

Project generator

Runs the development server; for development use only.

$ npm run start

To run test

$ npm run test

Build production version of app

$ npm run build

Docker volumes

Maps host directory to container directory.

$ docker run -v $(pwd):/app imageid

Bookmarking volumes

$ docker run -v /app/node_modules -v $(pwd):/app imageid

Note: Here the /app/node_modules folder in the container is “bookmarked”: since there is no colon (no host path), it is not mapped to any host folder and keeps the container’s own contents.

Volumes In docker-compose.yml

volumes:
  - /app/node_modules
  - .:/app

Build In docker-compose.yml

build:
  context: .
  dockerfile: Dockerfile.dev

Startup command in docker-compose.yml

command: ["npm", "run", "start"]

Multistep docker builds

Specifying phases in Dockerfile

FROM node:alpine as builder
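A sketch of a complete multi-stage Dockerfile (a React app built with Node and served by nginx; the /app/build output path assumes create-react-app defaults):

# Build phase
FROM node:alpine as builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Run phase: only the built assets are copied over
FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html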

.travis.yml file:

sudo: required
services:
  - docker

before_install:
  - docker build -t test -f Dockerfile.dev .

script:
  - docker run test npm run test -- --coverage

deploy:
  provider: elasticbeanstalk
  access_key_id:
    secure: "Encrypted <access-key-id>="
  secret_access_key:
    secure: "Encrypted <secret-access-key>="
  region: "us-east-1"
  app: "example-app-name"
  env: "example-app-environment"
  bucket_name: "the-target-S3-bucket"

To run tests and exit, as Travis expects the tests to exit, do the following.

$ docker run image_name npm run test -- --coverage

AWS Elastic beanstalk

To push images to dockerhub from Travis config file

Note: This is generally done under after_success. First log in to Docker Hub, then push the image/tag:

- echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
- docker push tagname

Note: When we push a single Dockerfile to Elastic Beanstalk, it automatically builds the image and runs the Docker container. For running multiple containers, EB uses ECS (Elastic Container Service). ECS has task definitions which tell it how to run each service.

Dockerrun.aws.json is similar to a docker-compose file, but here we don’t build images; rather, we pull them from Docker Hub. Also, services are called container definitions.

Task definitions are defined in Dockerrun.aws.json file.

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "<container-name>",
      "image": "<image-on-docker-hub>",
      "hostname": "<service-name-as-in-docker-compose>",
      "essential": false,
      "memory": 128,
      "portMappings": [],
      "links": []
    }
  ]
}

Note: At least one container definition must be marked essential: true; if an essential container stops, all other containers in the task are stopped too.

Running databases inside containers

In production we generally don’t run databases inside containers; managed services give easy scaling, built-in logging and maintenance, better security, automated backups and rollbacks, and make it easy to migrate away from Elastic Beanstalk to some other service.

Redis > AWS ElastiCache
Postgres > AWS RDS (Relational Database Service)

AWS regions = Data centres.

AWS VPC (Virtual private cloud)

Each AWS account gets its own VPC, one per region. All service instances run inside that VPC, which ensures privacy and security.

Security group (firewall rules)

Defines the inbound and outbound traffic rules for a VPC. It is used to connect multiple AWS instances within a VPC, e.g. a Redis database and an EB container.

Note: Docker compose is generally used in development environment.

What is Kubernetes aka k8s?

System for running many different containers over multiple different machines.

Why use Kubernetes?

When you need to run many different containers with different images.

If an app has one type of container then Kubernetes is not needed.

Kubernetes:

A cluster consists of one master and one or more nodes (VMs or physical machines). The master controls what each node runs.

Minikube (used in development): runs a single-node cluster on your local machine.

Managed solutions (used in production), e.g. Amazon EKS, Google Kubernetes Engine (GKE), and Azure AKS.

Kubectl: used to manage the containers in the nodes/VMs, both in development and production. kubectl interacts with the master of the cluster.

Sanity commands

$ minikube status
$ kubectl cluster-info

To create a VM/node on your local machine:

$ minikube start

To get IP of VM

$ minikube ip

Note: Each pod also has an IP address, but it is not easily accessible from outside, and updating a pod may change its IP address. Therefore we have Service objects, whose selector property connects them to pods; the user accesses the pods through the Service object.

Docker Vs Kubernetes

Kubernetes config files

We feed config files (e.g. one for a Pod and one for a Service) to kubectl, which creates objects from them.

$ kubectl apply -f filename

Note: If filename is a directory, then all the files in the directory will be applied.
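A sketch of two such config files, a Pod and a NodePort Service that selects it (the image name and port numbers are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
    - name: client
      image: saurabhp75/multi-client
      ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: client-node-port
spec:
  type: NodePort
  selector:
    component: web
  ports:
    - port: 3050
      targetPort: 3000
      nodePort: 31515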

To get status of pods

$ kubectl get pods

To get status of services

$ kubectl get services

Objects in k8s

In k8s we define objects instead of containers.

Types of objects

Pod, Deployment, Service, Secret, PersistentVolumeClaim, etc.

Note: We create a Secret object using an imperative command instead of a config file for security reasons, so the secret never sits in a plain-text config. Hierarchy: Node > Pod > containers.

Types of services

ClusterIP, NodePort, LoadBalancer, and Ingress.

Every node in a k8s cluster has a kube-proxy, which is its single window to the outside world.

k8s development workflow

Imperative Vs Declarative deployment

The name and kind in a config file uniquely identify an object in k8s. This is how kubectl decides whether to update an existing object or create a new one.

To get detailed info about an object

$ kubectl describe obj-type [obj-name]

Deployment Vs Pod objects

Pods: run one or more closely related containers; good for one-off development use, since very few fields of a running pod can be updated.

Deployment: maintains a set of identical pods, monitors their state, and updates or restarts them as needed; good for both development and production.

To delete an object

$ kubectl delete -f config_file

To get status of deployments

$ kubectl get deployments

Getting a deployment to recreate its pods with the latest version of a Docker image is a bit complex (Kubernetes issue #33664): the latest image has no new tag, so there is no change in the deployment config file and applying it does nothing. The accepted solution is to use an imperative command that tells the deployment to use a new image version (with a unique tag).

$ kubectl set image object_type/object_name container_name=image_to_use

Description: Set image property on an object.
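For example (the deployment, container, and image names are hypothetical):

$ kubectl set image deployment/client-deployment client=saurabhp75/multi-client:v5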

To make the Docker CLI communicate with the Docker server inside the VM/node:

$ eval $(minikube docker-env)

Note: This only configures your current terminal window. It can be used to inspect or alter the containers in the VM/node.

NodePort Vs ClusterIP services

A NodePort service exposes a container to the outside world (good for development only), whereas a ClusterIP service exposes a set of pods only to other objects inside the cluster.

PVC: Persistent Volume Claim.

PVC > PV > Volume.

Volume (generic container terminology)

Some sort of mechanism that allows a container to access a filesystem outside itself.

Volume (k8s)

An object that allows a container to store data at pod level.

Volume Vs persistent volume (PV)

A volume’s lifecycle is tied to that of the pod; a persistent volume outlasts the pod’s lifecycle.

PV Vs PVC

A PV is an actual piece of storage; a PVC is a claim, i.e. a request that k8s satisfies by handing over an existing (statically provisioned) PV or by creating one on the fly (dynamic provisioning).

PVC consists of: the requested access modes and the amount of storage to claim.

PV access modes: ReadWriteOnce (mounted read-write by a single node), ReadOnlyMany (read-only by many nodes), ReadWriteMany (read-write by many nodes).

Storage classes for PVs: tell k8s where the storage should come from, e.g. the cloud provider’s default block storage.
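A sketch of a PVC config file (the name and storage size are assumptions):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi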

Creating a Secret object

$ kubectl create secret generic secret_name --from-literal key=value
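A concrete usage example, creating a Postgres password secret and wiring it into a container as an environment variable (the names and password are hypothetical):

$ kubectl create secret generic pgpassword --from-literal PGPASSWORD=12345asdf

Then, in the container spec of the pod/deployment config:

env:
  - name: PGPASSWORD
    valueFrom:
      secretKeyRef:
        name: pgpassword
        key: PGPASSWORD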

Ingress-nginx (community-led k8s project)

kubernetes-ingress (a separate project led by the company NGINX)
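A sketch of an ingress-nginx config that routes all traffic to a ClusterIP service (the service name and ports are assumptions; the annotation style varies with the ingress-nginx version):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: client-cluster-ip-service
                port:
                  number: 3000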

How to login to GCP from Travis

Helm and Tiller (package manager for Kubernetes; Tiller was Helm’s in-cluster server component, removed in Helm 3)

RBAC(Role based access control)

Local development using Skaffold

Skaffold watches the source code and automatically rebuilds images or syncs changed files into the cluster as you develop.