A Practical Guide to Deploying Multi-tier Applications on Google Container Engine (GKE)

Introduction

Most modern programmers can attest that containerization affords more flexibility and lets us build truly cloud-native applications. Containers provide portability: the ability to easily move applications across environments. However, complex applications comprise many (tens or hundreds of) containers, and managing such applications is a real challenge. That's where container orchestration and scheduling platforms like Kubernetes, Mesosphere, and Docker Swarm come into the picture.
Kubernetes, backed by Google, is leading the pack, given that Red Hat, Microsoft, and now Amazon are putting their weight behind it.

Kubernetes can run on any cloud or bare-metal infrastructure. Setting up and managing Kubernetes yourself can be a challenge, but Google provides an easy way to use Kubernetes through the Google Container Engine (GKE) service.

What is GKE?

Google Container Engine is a management and orchestration system for containers. In short, it is hosted Kubernetes. The goal of GKE is to increase the productivity of DevOps and development teams by hiding the complexity of setting up the Kubernetes cluster, the overlay network, and so on.

Why GKE? What are the things that GKE does for the user?

  • GKE abstracts away the complexity of managing a highly available Kubernetes cluster.
  • GKE takes care of the overlay network.
  • GKE provides built-in authentication.
  • GKE provides built-in auto-scaling.
  • GKE provides easy integration with Google storage services.

In this blog, we will see how to create your own Kubernetes cluster on GKE and how to deploy a multi-tier application to it. The blog assumes you have a basic understanding of Kubernetes and have used it before, and that you have created an account with Google Cloud Platform. If you are not familiar with Kubernetes, this guide from Deis is a good place to start.

Google provides a command-line interface, gcloud, to interact with all Google Cloud Platform products and services. The gcloud tool can be used directly from the command line or in scripts to automate tasks. Follow this guide to install the gcloud tool.

Now let's begin! The first step is to create the cluster.

Basic Steps to Create a Cluster

In this section, I will explain how to create a GKE cluster. We will use the gcloud command-line tool to set up the cluster.

Set the zone in which you want to deploy the cluster:

$ gcloud config set compute/zone us-west1-a

Create the cluster using the following command:

$ gcloud container --project <project-name> \
clusters create <cluster-name> \
--machine-type n1-standard-2 \
--image-type "COS" --disk-size "50" \
--num-nodes 2 --network default \
--enable-cloud-logging --no-enable-cloud-monitoring

Let's try to understand what each of these parameters means:

--project: The project name.

--machine-type: The machine type, e.g. n1-standard-2 or n1-standard-4.

--image-type: The OS image. "COS" is Google's Container-Optimized OS; more info here.

--disk-size: The disk size of each instance, in GB.

--num-nodes: The number of nodes in the cluster.

--network: The network to use for the cluster. In this case, we are using the default network.

Apart from the above options, you can also use the following to provide specific requirements while creating the cluster:

--scopes: Scopes let containers directly access Google services without needing separate credentials. You can specify a comma-separated list of scopes. For example:

  • compute: Lets you view and manage your Google Compute Engine resources.
  • logging.write: Lets you submit log data to Stackdriver.

You can find all the scopes that Google supports in the Google Cloud documentation.

--additional-zones: Specify additional zones for high availability, e.g. --additional-zones us-east1-b,us-east1-d. Here Kubernetes will create the cluster across 3 zones (the one specified at the beginning plus the 2 additional ones).

--enable-autoscaling: Enables the cluster autoscaler. If you specify this option, you must also specify the minimum and maximum number of nodes, e.g. --enable-autoscaling --min-nodes=15 --max-nodes=50. You can read more about how auto-scaling works here.
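Putting the optional flags together, a fuller cluster-create invocation might look like the sketch below. The project name, scope aliases, zones, and node counts here are placeholders chosen for illustration, not values from the repository:

```shell
# Sketch only: cluster create with scopes, extra zones, and autoscaling.
# Replace my-project / my-first-cluster and the zones with your own values.
gcloud container clusters create my-first-cluster \
  --project my-project \
  --machine-type n1-standard-2 \
  --num-nodes 2 \
  --additional-zones us-west1-b,us-west1-c \
  --scopes compute-rw,logging-write \
  --enable-autoscaling --min-nodes 2 --max-nodes 6
```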

Next, fetch the credentials of the newly created cluster. This step updates the kubeconfig file so that kubectl points to the right cluster.

$ gcloud container clusters get-credentials my-first-cluster --project project-name

Now your first Kubernetes cluster is ready. Let’s check the cluster information and health.

$ kubectl get nodes
NAME                                            STATUS    AGE   VERSION
gke-first-cluster-default-pool-d344484d-vnj1    Ready     2h    v1.6.4
gke-first-cluster-default-pool-d344484d-kdd7    Ready     2h    v1.6.4
gke-first-cluster-default-pool-d344484d-ytre2   Ready     2h    v1.6.4

After creating the cluster, let's see how to deploy a multi-tier application on it. We will use a simple Python Flask app that greets the user, stores employee data, and retrieves employee data.

Application Deployment

I have created a simple Python Flask application to deploy on the Kubernetes cluster created using GKE. You can go through the source code here. If you check the source code, you will find the directory structure as follows:

TryGKE/
├── Dockerfile
├── mysql-deployment.yaml
├── mysql-service.yaml
├── src/
│   ├── app.py
│   └── requirements.txt
├── testapp-deployment.yaml
└── testapp-service.yaml

Here, I have written a Dockerfile for the Python Flask application so that we can build our own image to deploy. For MySQL, we won’t build an image of our own; we will use the latest MySQL image from the public Docker repository.
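The actual Dockerfile lives in the repository linked above; a minimal sketch for a Flask app with this layout might look as follows (the base image and file names here are assumptions, not necessarily what the repo uses):

```dockerfile
# Illustrative sketch only -- see the linked repo for the real Dockerfile.
FROM python:2.7-slim                  # assumed base image
WORKDIR /app
COPY src/requirements.txt .
RUN pip install -r requirements.txt   # install Flask and the MySQL client library
COPY src/ .
EXPOSE 5000                           # the port the Service's targetPort points at
CMD ["python", "app.py"]
```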

Before deploying the application, let’s re-visit some of the important Kubernetes terms:

Pods:

A pod is a Docker container or a group of Docker containers deployed together on the same host machine. It is the smallest unit of deployment.

Deployments:

A Deployment is an entity that manages ReplicaSets and provides declarative updates to pods. It is recommended to use Deployments instead of working with ReplicaSets directly. We can use a Deployment to create, remove, and update ReplicaSets, and Deployments can roll out and roll back changes.

Services:

A Service in Kubernetes is an abstraction that connects you to one or more pods. You could connect to a pod using its IP address, but since pods come and go, their IP addresses change. A Service gets its own IP and DNS name, and those remain stable for the entire lifetime of the Service.
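The link between a Service and its pods is label selection: a Service targets every pod whose labels include all of the selector's key/value pairs. A small sketch of that matching rule (illustrative only, not the real Kubernetes implementation):

```python
def selector_matches(selector, pod_labels):
    """True if every key/value pair in the selector appears in the pod's labels.

    This mirrors how a Service with selector {"app": "mysql"} picks up pods
    labeled app=mysql; extra pod labels do not prevent a match.
    """
    return all(pod_labels.get(key) == value for key, value in selector.items())

# A pod labeled app=mysql (plus extra labels) matches the mysql selector...
print(selector_matches({"app": "mysql"}, {"app": "mysql", "tier": "db"}))  # True
# ...but a pod from the other tier does not.
print(selector_matches({"app": "mysql"}, {"app": "test-app"}))             # False
```

This is also why the Deployment's pod template labels and the Service's selector must agree, as they do in the YAML files below.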

Each tier in the application is represented by a Deployment, and each Deployment is described by a YAML file. We have two Deployment YAML files: one for MySQL and one for the Python application.

1. MySQL Deployment YAML

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:latest
        env:
        - name: MYSQL_DATABASE
          value: admin
        - name: MYSQL_ROOT_PASSWORD
          value: admin
        ports:
        - name: mysqlport
          containerPort: 3306
          protocol: TCP

2. Python Application Deployment YAML

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: test-app
        image: ajaynemade/pymy:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 5000

Each Service is also represented by a YAML file as follows:

1. MySQL service YAML

apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  ports:
  - name: http
    port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    app: mysql

2. Python Application service YAML

apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: LoadBalancer
  ports:
  - name: test-service
    port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: test-app

You will find a ‘kind’ field in each YAML file. It specifies whether the given configuration describes a Deployment, a Service, a Pod, etc.

In the Python app service YAML, I am using type: LoadBalancer. In GKE, there are two types of cloud load balancers available to expose an application to the outside world:

  1. Network (TCP) load balancer: a Layer 4 load balancer, created when a Service has type: LoadBalancer. We will use this in our example.
  2. HTTP(S) load balancer: a Layer 7 load balancer, created using an Ingress. For more information, refer to this post, which covers Ingress in detail.

In the MySQL service, I have not specified any type, so the default type, ClusterIP, is used. This exposes the MySQL container inside the cluster so the Python app can reach it, without making it externally accessible.

If you check app.py, you will see that I have used “mysql-service.default” as the hostname. “mysql-service.default” is the DNS name of the service: the service name followed by its namespace. The Python application refers to that DNS name when accessing the MySQL database.
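In-cluster resolvers expand such short names to a fully qualified form by appending the cluster domain. A small sketch of how the full name is composed (svc.cluster.local is the conventional default cluster domain; your cluster may differ):

```python
def service_dns(service, namespace="default", cluster_domain="svc.cluster.local"):
    """Compose the fully qualified in-cluster DNS name of a Kubernetes Service."""
    return "%s.%s.%s" % (service, namespace, cluster_domain)

# The short form "mysql-service.default" resolves to this full name in-cluster.
print(service_dns("mysql-service"))  # mysql-service.default.svc.cluster.local
```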

Now, let's actually set up the components from these configurations. As mentioned above, we will first create the services, followed by the deployments.
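Creating services before deployments matters because Kubernetes injects a service's environment variables only into pods started after the service exists (DNS-based lookup, as used here, is more forgiving). If you were scripting the apply step over a batch of parsed manifests, the ordering rule could be sketched like this (illustrative; kubectl itself does not reorder individual create calls):

```python
# Illustrative: apply Services before Deployments when processing manifests in bulk.
APPLY_ORDER = {"Service": 0, "Deployment": 1}

def sort_manifests(manifests):
    """Order parsed manifests so Services are created first; unknown kinds go last."""
    return sorted(manifests, key=lambda m: APPLY_ORDER.get(m.get("kind"), 99))

manifests = [
    {"kind": "Deployment", "metadata": {"name": "mysql"}},
    {"kind": "Service", "metadata": {"name": "mysql-service"}},
]
print([m["kind"] for m in sort_manifests(manifests)])  # ['Service', 'Deployment']
```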

Services:

$ kubectl create -f mysql-service.yaml
$ kubectl create -f testapp-service.yaml

Deployments:

$ kubectl create -f mysql-deployment.yaml
$ kubectl create -f testapp-deployment.yaml

Check the status of the pods and services. Wait until all pods are in the Running state and the Python application service has an external IP, like below:

$ kubectl get services
NAME            CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes      10.55.240.1     <none>          443/TCP        5h
mysql-service   10.55.240.57    <none>          3306/TCP       1m
test-service    10.55.246.105   35.185.225.67   80:32546/TCP   11s

Once you have the external IP, you should be able to make API calls with simple curl requests.

E.g., to store data:

curl -H "Content-Type: application/x-www-form-urlencoded" -X POST  http://35.185.225.67:80/storedata -d id=1 -d name=NoOne

E.g., to get data:

curl 35.185.225.67:80/getdata/1

At this stage, your application is completely deployed and externally accessible.

Manual scaling of pods

Scaling your application up or down in Kubernetes is quite straightforward. Let’s scale up the test-app deployment.

$ kubectl scale deployment test-app --replicas=3

The Deployment configuration for test-app is updated, and you will see 3 replicas of test-app running. Verify it with:

$ kubectl get pods

In the same manner, you can scale down your application by reducing the replica count.

Cleanup:

Un-deploying an application from Kubernetes is also quite straightforward. All we have to do is delete the services and the deployments. The only caveat is that deletion of the load balancer is an asynchronous process; you have to wait until it is deleted.

$ kubectl delete service mysql-service
$ kubectl delete service test-service

The above commands also deallocate the load balancer that was created as part of test-service. You can check the status of the load balancer with the following command:

$ gcloud compute forwarding-rules list

Once the load balancer is deleted, you can clean up the deployments as well.

$ kubectl delete deployments test-app
$ kubectl delete deployments mysql

Delete the Cluster:

$ gcloud container clusters delete my-first-cluster

Conclusion

In this blog, we saw how easy it is to deploy, scale, and terminate applications on Google Container Engine. GKE abstracts away much of the complexity of Kubernetes and gives us a robust platform for running containerized applications. I am excited about what the future holds for Kubernetes!

Check out some of Velotio's other blogs on Kubernetes.


Did you like the blog? If yes, we're sure you'll also like to work with the people who write them - our best-in-class engineering team.

We're looking for talented developers who are passionate about new emerging technologies. If that's you, get in touch with us.

Explore current openings
