Continuous Integration & Delivery (CI/CD) for Kubernetes Using CircleCI & Helm

Introduction

Kubernetes is being adopted rapidly across the software industry and is becoming the preferred option for deploying and managing containerized applications. Once we have a fully functional Kubernetes cluster, we need an automated process to deploy our applications to it. In this blog post, we will create a fully automated “commit to deploy” pipeline for Kubernetes using CircleCI and Helm.

What is CircleCI?

CircleCI is a fully managed SaaS offering that lets us build, test, or deploy our code on every check-in. To get started with CircleCI, we log into its web console with our GitHub or Bitbucket credentials, add a project for the repository we want to build, and add the CircleCI config file to that repository. The CircleCI config file is a YAML file listing the steps we want to execute every time code is pushed to the repository.

Some salient features of CircleCI are:

  1. Little or no operational overhead, as the infrastructure is managed entirely by CircleCI.
  2. User authentication is done via GitHub or Bitbucket, so user management is quite simple.
  3. It automatically notifies users who follow a project of its build status, via their GitHub/Bitbucket email addresses.
  4. The UI is quite simple and gives a holistic view of builds.
  5. It integrates with Slack, HipChat, Jira, etc.

What is Helm?

Helm is a chart manager, where a chart is a package of Kubernetes resources. Helm lets us bundle related Kubernetes objects into charts and treat them as a single unit of deployment, referred to as a release. For example, suppose you have an application app1 that you want to run on Kubernetes. For app1 you create multiple Kubernetes resources: a deployment, a service, an ingress, a horizontal pod autoscaler, and so on. Without Helm, you deploy the application by applying each resource's manifest file separately. Helm lets us group all those files into one chart (a Helm chart); we then deploy just the chart, which also makes deleting and upgrading the resources quite simple.
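
A quick sketch of the difference (assuming a chart directory named app1-chart and Helm v2's --name flag):

# Without Helm: apply each manifest separately
kubectl apply -f deployment.yaml -f service.yaml -f ingress.yaml

# With Helm: the chart is the unit of deployment
helm install --name app1 ./app1-chart   # creates the release "app1"
helm delete app1                        # removes everything in it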

Some other benefits of Helm are:

  1. It makes deployments highly configurable. Just by changing parameters, we can use the same chart to deploy to multiple environments such as staging/production, or to multiple cloud providers.
  2. We can roll back to a previous release with a single Helm command.
  3. It makes managing and sharing Kubernetes-specific applications much simpler.

Note: Helm is composed of two components: the Helm client and the Tiller server. Tiller runs inside the cluster as a deployment and serves the requests made by the Helm client. Tiller has potential security vulnerabilities, so in our pipeline we will use "tillerless" Helm, which runs Tiller only when we need it.
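
For instance, rolling back (benefit 2 above) is a single command; a minimal sketch, assuming an existing release named myapp:

helm history myapp      # list the revisions of the release
helm rollback myapp 2   # roll back to revision 2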

Building the Pipeline

Overview:

We will create the pipeline for a Golang application. The pipeline will first build the binary, create a Docker image from it, and push the image to ECR; it will then deploy the image to the Kubernetes cluster using the app's Helm chart.

We will use a simple app that exposes a `hello` endpoint and returns a hello world message:

package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

// Message is the JSON payload returned by the hello endpoint.
type Message struct {
	Msg string
}

func helloWorldJSON(w http.ResponseWriter, r *http.Request) {
	m := Message{"Hello World"}
	response, _ := json.Marshal(m)
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusOK)
	w.Write(response)
}

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/hello", helloWorldJSON).Methods("GET")
	if err := http.ListenAndServe(":8080", r); err != nil {
		log.Fatal(err)
	}
}
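
To sanity-check the app locally before wiring up the pipeline (a quick sketch; assumes the gorilla/mux dependency has been fetched):

go build -o hello-app
./hello-app &                        # start the server in the background
curl http://localhost:8080/hello     # -> {"Msg":"Hello World"}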

We will create a Docker image for the hello app using the following Dockerfile:

FROM centos/systemd
MAINTAINER "Akash Gautam" <akash.gautam@velotio.com>
COPY hello-app /
ENTRYPOINT ["/hello-app"]
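Note that the Dockerfile copies in a pre-built binary, so the binary must be built for Linux before docker build runs. A sketch of the two steps, with CGO disabled so the binary is static and does not depend on the base image's libc (the image tag here is illustrative):

CGO_ENABLED=0 GOOS=linux go build -o hello-app
docker build -t helloapp:0.0.1 .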

Creating the Helm Chart:

Now we need to create the Helm chart for the hello app.

First, we create the Kubernetes manifest files. We will create a deployment and a service file:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: helloapp
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: helloapp
        env: {{ .Values.labels.env }}
        cluster: {{ .Values.labels.cluster }}
    spec:
      containers:
      - name: helloapp
        image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        imagePullPolicy: {{ .Values.image.imagePullPolicy }}
        readinessProbe:
          httpGet:
            path: /hello
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1

apiVersion: v1
kind: Service
metadata:
  name: helloapp
spec:
  type: {{ .Values.service.type }}
  ports:
  - name: helloapp
    port: {{ .Values.service.port }}
    protocol: TCP
    targetPort: {{ .Values.service.targetPort }}
  selector:
    app: helloapp

In the files above, you may have noticed the .Values object. Every value we specify in the chart's values.yaml file can be accessed through the .Values object inside the templates.

Let’s create the helm chart now:

helm create helloapp

The above command creates the Helm chart folder structure for us:

helloapp/
|- .helmignore   # Patterns to ignore when packaging Helm charts
|- Chart.yaml    # Information about the chart
|- values.yaml   # The default values for the templates
|- charts/       # Charts that this chart depends on
|- templates/    # The template files

We can remove the charts/ folder inside our helloapp chart, as our chart won't have any sub-charts. Now we need to move our Kubernetes manifest files into the templates/ folder and update values.yaml and Chart.yaml, as sketched below.
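
A sketch of that re-arrangement (paths assume the manifest files were created alongside the chart, and that we discard the boilerplate templates helm create generated):

rm -rf helloapp/charts                               # no sub-charts needed
rm helloapp/templates/*                              # drop the generated boilerplate templates
mv deployment.yaml service.yaml helloapp/templates/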

Our values.yaml looks like:

image:
  tag: 0.0.1
  repository: 123456789870.dkr.ecr.us-east-1.amazonaws.com/helloapp
  imagePullPolicy: Always
labels:
  env: "staging"
  cluster: "eks-cluster-blog"
service:
  port: 80
  targetPort: 8080
  type: LoadBalancer

This allows us to make our deployment more configurable. For example, here we have set the service type to LoadBalancer in values.yaml, but if we want to change it to NodePort we just set it while installing the chart (--set service.type=NodePort). Similarly, we have set the image pull policy to Always, which is fine for a development/staging environment, but when deploying to production we may want to set it to IfNotPresent. In our chart, we need to identify the parameters/values that may change from one environment to another and make them configurable. This lets us stay flexible with our deployment and reuse the same chart.
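
We can check such an override without touching the cluster; a minimal sketch using Helm v2's client-side rendering:

helm template helloapp/ --set service.type=NodePort,image.imagePullPolicy=IfNotPresent \
  | grep -E 'type:|imagePullPolicy:'   # confirm the overridden values landed in the manifests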

Finally, we need to update the Chart.yaml file. This file mostly contains metadata about the chart, like the name, version, maintainer, etc., where name and version are the two mandatory fields.

version: 1.0.0
appVersion: 0.0.1
name: helloapp
description: Helm chart for helloapp
sources:
  - https://github.com/akash-gautam/helloapp

Now that our Helm chart is ready, we can start on the pipeline. We need to create a folder named .circleci in the root of our repository and a file named config.yml inside it. In config.yml we define two jobs: build&pushImage and deploy.
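
Putting it together, the repository layout the pipeline assumes looks roughly like this (the charts/ and scripts/ paths are inferred from the commands used below):

hello-app/
|- .circleci/config.yml
|- charts/helloapp/             # the Helm chart from the previous section
|- scripts/release-helloapp.sh  # the release script shown later
|- Dockerfile
|- VERSION                      # holds the app version, e.g. 0.0.1
|- main.go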

Configure the pipeline:

build&pushImage:
  working_directory: /go/src/hello-app    # (1)
  docker:
    - image: circleci/golang:1.10         # (2)
  steps:
    - checkout                            # (3)
    - run:                                # (4)
        name: Build the binary
        command: go build -o hello-app
    - setup_remote_docker:                # (5)
        docker_layer_caching: true
    - run:                                # (6)
        name: Set the image tag by joining the app version and the CircleCI build number with a `-`
        command: echo 'export TAG=$(cat VERSION)-$CIRCLE_BUILD_NUM' >> $BASH_ENV
    - run:                                # (7)
        name: Build the Docker image
        command: docker build . -t ${CIRCLE_PROJECT_REPONAME}:$TAG
    - run:                                # (8)
        name: Install the AWS CLI
        command: export TZ=Europe/Minsk && sudo ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ | sudo tee /etc/timezone && sudo apt-get update && sudo apt-get install -y awscli
    - run:                                # (9)
        name: Log in to ECR
        command: $(aws ecr get-login --region $AWS_REGION | sed -e 's/-e none//g')
    - run:                                # (10)
        name: Tag the image with the ECR repo name
        command: docker tag ${CIRCLE_PROJECT_REPONAME}:$TAG ${HELLOAPP_ECR_REPO}:$TAG
    - run:                                # (11)
        name: Push the image to the ECR repo
        command: docker push ${HELLOAPP_ECR_REPO}:$TAG

  1. We set the working directory for the job; putting it on the GOPATH means we don't need any additional setup.
  2. We set the Docker image inside which the job runs; as our app is written in Go, we use an image that already has Go installed.
  3. This step checks out our repository into the working directory.
  4. In this step, we build the binary.
  5. Here we set up Docker with the help of the setup_remote_docker key provided by CircleCI.
  6. Here we create the tag we will use when building the image: the app version from the VERSION file with the $CIRCLE_BUILD_NUM value appended, separated by a dash (`-`); see the sketch after this list.
  7. Here we build and tag the image.
  8. We install the AWS CLI to interact with ECR later.
  9. Here we log in to ECR.
  10. We tag the image built in step 7 with the ECR repository name.
  11. Finally, we push the image to ECR.
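
The tag logic from step 6 in isolation (assuming VERSION contains 0.0.1 and CircleCI assigned build number 42):

echo "0.0.1" > VERSION
CIRCLE_BUILD_NUM=42
TAG=$(cat VERSION)-$CIRCLE_BUILD_NUM
echo $TAG   # -> 0.0.1-42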

Now we will deploy our Helm chart. For this, we have a separate job, deploy.

deploy:
  docker:                                 # (1)
    - image: circleci/golang:1.10
  steps:                                  # (2)
    - checkout
    - run:                                # (3)
        name: Install the AWS CLI
        command: export TZ=Europe/Minsk && sudo ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ | sudo tee /etc/timezone && sudo apt-get update && sudo apt-get install -y awscli
    - run:                                # (4)
        name: Set the image tag by joining the app version and the CircleCI build number with a `-`
        command: echo 'export TAG=$(cat VERSION)-$CIRCLE_PREVIOUS_BUILD_NUM' >> $BASH_ENV
    - run:                                # (5)
        name: Install and configure kubectl
        command: sudo curl -L https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl && sudo chmod +x /usr/local/bin/kubectl
    - run:                                # (6)
        name: Install aws-iam-authenticator
        command: curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator && sudo chmod +x ./aws-iam-authenticator && sudo cp ./aws-iam-authenticator /bin/aws-iam-authenticator
    - run:                                # (7)
        name: Install the latest awscli version
        command: sudo apt install unzip && curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && ./awscli-bundle/install -b ~/bin/aws
    - run:                                # (8)
        name: Get the kubeconfig file
        command: export KUBECONFIG=$HOME/.kube/kubeconfig && /home/circleci/bin/aws eks --region $AWS_REGION update-kubeconfig --name $EKS_CLUSTER_NAME
    - run:                                # (9)
        name: Install and configure helm
        command: sudo curl -L https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz | tar xz && sudo mv linux-amd64/helm /bin/helm && sudo rm -rf linux-amd64
    - run:                                # (10)
        name: Initialize helm
        command: helm init --client-only --kubeconfig=$HOME/.kube/kubeconfig
    - run:                                # (11)
        name: Install the tiller plugin
        command: helm plugin install https://github.com/rimusz/helm-tiller --kubeconfig=$HOME/.kube/kubeconfig
    - run:                                # (12)
        name: Release helloapp using its helm chart
        command: bash scripts/release-helloapp.sh $TAG

  1. Set the Docker image inside which the job executes.
  2. Check out the code using the `checkout` key.
  3. Install the AWS CLI.
  4. Set the value of the tag just as we did in the build&pushImage job. Note that here we use the CIRCLE_PREVIOUS_BUILD_NUM variable, which gives us the build number of the build&pushImage job and ensures the tag values match.
  5. Download kubectl and make it executable.
  6. Install aws-iam-authenticator; this is required because my k8s cluster is on EKS.
  7. Install the latest version of the AWS CLI; EKS is a relatively new AWS service and older versions of the AWS CLI don't support it.
  8. Fetch the kubeconfig file. This step will vary depending on where the k8s cluster is set up. As my cluster is on EKS, I get the kubeconfig file via the AWS CLI; if your cluster is on GKE, you would instead configure gcloud and run `gcloud container clusters get-credentials <cluster-name> --zone=<zone-name>`. We could also keep the kubeconfig file on some other secure storage system and fetch it from there.
  9. Download Helm and make it executable.
  10. Initialize Helm; note that we initialize it in client-only mode so that it doesn't start the Tiller server.
  11. Download the tillerless Helm plugin.
  12. Execute the release-helloapp.sh shell script, passing it the TAG value from step 4.

In the release-helloapp.sh script we first start Tiller. We then check whether the release is already present: if it is, we upgrade it; otherwise we make a new release. In either case we override the image tag in the chart with the tag of the newly built image. Finally, we stop the Tiller server.

#!/bin/bash
TAG=$1

echo "start tiller"
export KUBECONFIG=$HOME/.kube/kubeconfig
helm tiller start-ci                 # run tiller locally, only for the duration of this deploy
export HELM_HOST=127.0.0.1:44134     # point the helm client at the local tiller

# Upgrade if the release already exists, install otherwise
result=$(eval helm ls | grep helloapp)
if [ $? -ne "0" ]; then
  helm install --timeout 180 --name helloapp --set image.tag=$TAG charts/helloapp
else
  helm upgrade --timeout 180 helloapp --set image.tag=$TAG charts/helloapp
fi

echo "stop tiller"
helm tiller stop
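
The pipeline passes the image tag as the script's only argument; the same invocation works locally for testing, assuming a valid kubeconfig and the tiller plugin installed (the tag value here is hypothetical):

export KUBECONFIG=$HOME/.kube/kubeconfig
bash scripts/release-helloapp.sh 0.0.1-42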

The complete CircleCI config.yml file looks like:

version: 2
jobs:
  build&pushImage:
    working_directory: /go/src/hello-app
    docker:
      - image: circleci/golang:1.10
    steps:
      - checkout
      - run:
          name: Build the binary
          command: go build -o hello-app
      - setup_remote_docker:
          docker_layer_caching: true
      - run:
          name: Set the image tag by joining the app version and the CircleCI build number with a `-`
          command: echo 'export TAG=$(cat VERSION)-$CIRCLE_BUILD_NUM' >> $BASH_ENV
      - run:
          name: Build the Docker image
          command: docker build . -t ${CIRCLE_PROJECT_REPONAME}:$TAG
      - run:
          name: Install the AWS CLI
          command: export TZ=Europe/Minsk && sudo ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ | sudo tee /etc/timezone && sudo apt-get update && sudo apt-get install -y awscli
      - run:
          name: Log in to ECR
          command: $(aws ecr get-login --region $AWS_REGION | sed -e 's/-e none//g')
      - run:
          name: Tag the image with the ECR repo name
          command: docker tag ${CIRCLE_PROJECT_REPONAME}:$TAG ${HELLOAPP_ECR_REPO}:$TAG
      - run:
          name: Push the image to the ECR repo
          command: docker push ${HELLOAPP_ECR_REPO}:$TAG
  deploy:
    docker:
      - image: circleci/golang:1.10
    steps:
      - attach_workspace:
          at: /tmp/workspace
      - checkout
      - run:
          name: Install the AWS CLI
          command: export TZ=Europe/Minsk && sudo ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ | sudo tee /etc/timezone && sudo apt-get update && sudo apt-get install -y awscli
      - run:
          name: Set the image tag by joining the app version and the CircleCI build number with a `-`
          command: echo 'export TAG=$(cat VERSION)-$CIRCLE_PREVIOUS_BUILD_NUM' >> $BASH_ENV
      - run:
          name: Install and configure kubectl
          command: sudo curl -L https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl -o /usr/local/bin/kubectl && sudo chmod +x /usr/local/bin/kubectl
      - run:
          name: Install aws-iam-authenticator
          command: curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator && sudo chmod +x ./aws-iam-authenticator && sudo cp ./aws-iam-authenticator /bin/aws-iam-authenticator
      - run:
          name: Install the latest awscli version
          command: sudo apt install unzip && curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && unzip awscli-bundle.zip && ./awscli-bundle/install -b ~/bin/aws
      - run:
          name: Get the kubeconfig file
          command: export KUBECONFIG=$HOME/.kube/kubeconfig && /home/circleci/bin/aws eks --region $AWS_REGION update-kubeconfig --name $EKS_CLUSTER_NAME
      - run:
          name: Install and configure helm
          command: sudo curl -L https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz | tar xz && sudo mv linux-amd64/helm /bin/helm && sudo rm -rf linux-amd64
      - run:
          name: Initialize helm
          command: helm init --client-only --kubeconfig=$HOME/.kube/kubeconfig
      - run:
          name: Install the tiller plugin
          command: helm plugin install https://github.com/rimusz/helm-tiller --kubeconfig=$HOME/.kube/kubeconfig
      - run:
          name: Release helloapp using its helm chart
          command: bash scripts/release-helloapp.sh $TAG
workflows:
  version: 2
  primary:
    jobs:
      - build&pushImage
      - deploy:
          requires:
            - build&pushImage

At the end of the file we see the workflows section. Workflows control the order in which the jobs in the file execute and establish dependencies and conditions between them. For example, we want our deploy job to trigger only after the build job completes, so we added a dependency between them. Similarly, if we want to exclude jobs from running on particular branches, we can specify such conditions here as well.

We have used a few environment variables in our pipeline configuration; some we created ourselves, and some are made available by CircleCI. We created the AWS_REGION, HELLOAPP_ECR_REPO, EKS_CLUSTER_NAME, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY variables. These are set via the CircleCI web console under the project's settings. The other variables we used are made available by CircleCI as part of its environment setup. The complete list of environment variables set by CircleCI can be found here.

Verify the working of the pipeline:

Once everything is set up properly, our application will be deployed to the k8s cluster and should be available for access. Get the external IP of the helloapp service and make a curl request to the hello endpoint:

$ curl http://a31e25e7553af11e994620aebe144c51-242977608.us-west-2.elb.amazonaws.com/hello && printf "\n"
{"Msg":"Hello World"}

Now update the code, change the message “Hello World” to “Hello World Returns”, and push. It will take a few minutes for the pipeline to finish; once it completes, make the curl request again to see the change reflected:

$ curl http://a31e25e7553af11e994620aebe144c51-242977608.us-west-2.elb.amazonaws.com/hello && printf "\n"
{"Msg":"Hello World Returns"}

Also, verify that a new tag has been created for the helloapp Docker image on ECR.
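
A quick way to check from the terminal (assuming the ECR repository is named helloapp):

aws ecr list-images --repository-name helloapp --region $AWS_REGION \
  --query 'imageIds[].imageTag' --output text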

Conclusion

In this blog post, we explored how to set up a CI/CD pipeline for Kubernetes and got basic exposure to CircleCI and Helm. Although Helm is not strictly necessary for building a pipeline, it has many benefits and is widely used across the industry. We can extend the pipeline to handle multiple environments like dev, staging, and production, and have it deploy the application to any of them depending on some condition. We can also add more jobs, such as integration tests. All the code used in this blog post is available here.

Related Reads:

  1. Continuous Deployment with Azure Kubernetes Service, Azure Container Registry & Jenkins
  2. Know Everything About Spinnaker & How to Deploy Using Kubernetes Engine