
Taking Amazon's Elastic Kubernetes Service for a Spin

With the introduction of Elastic Kubernetes Service (EKS) at AWS re:Invent last year, AWS finally threw its hat into the ring of the booming managed Kubernetes space. In this blog post, we will cover the basic concepts of EKS, launch an EKS cluster and deploy a multi-tier application on it.

What is Elastic Kubernetes Service (EKS)?

Kubernetes works on a master-worker architecture, where the master is also referred to as the control plane. If the master goes down, it takes the entire cluster down with it, so ensuring high availability of the master is absolutely critical: it is a potential single point of failure. Keeping the master highly available, and managing all the worker nodes along with it, is a cumbersome task in itself. Most organizations would therefore rather use a managed Kubernetes cluster, so that they can focus on running their applications instead of operating the cluster. Other cloud providers such as Google Cloud and Azure already had managed Kubernetes services, GKE and AKS respectively. With EKS, Amazon has now rolled out its own managed Kubernetes offering to provide a seamless way to run Kubernetes workloads.

Key EKS concepts:

EKS takes full advantage of the fact that it runs on AWS: instead of building Kubernetes-specific features from scratch, it reuses and plugs in existing AWS services to provide the equivalent functionality. Here is a brief overview:

IAM integration: Amazon EKS integrates IAM authentication with Kubernetes RBAC (the role-based access control system native to Kubernetes) with the help of the Heptio Authenticator, a tool that uses AWS IAM credentials to authenticate to a Kubernetes cluster. We can directly attach an RBAC role to an IAM entity, which saves us the pain of managing another set of credentials at the cluster level.

Amazon's Elastic Kubernetes Service
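To make the flow concrete: kubectl calls the authenticator, which signs a request with your local AWS credentials and returns a short-lived token that the EKS API server verifies against IAM. A minimal sketch, assuming the authenticator binary (installed later in this post) and the cluster name used throughout:

# Generate a token from local IAM credentials; kubectl runs this for us
# via the exec section of the kubeconfig shown later in this post.
aws-iam-authenticator token -i eks-blog-cluster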

Container interface: AWS has developed an open-source CNI plugin that takes advantage of the fact that multiple network interfaces can be attached to a single EC2 instance, and that these interfaces can have multiple secondary private IPs associated with them. These secondary IPs are used to give pods running on EKS real IP addresses from the VPC CIDR pool. This improves latency for inter-pod communication, as the traffic flows without any overlay.

EKS Container Interface
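Once the cluster is up, this is easy to verify: pod IPs fall inside the VPC CIDR, and the addresses handed out to pods show up as secondary private IPs on the worker node's ENIs. A hedged sketch (the instance ID is illustrative):

# Pod IPs are plain VPC addresses, not overlay addresses
kubectl get pods -o wide

# Secondary private IPs that the CNI plugin has reserved on a worker node
aws ec2 describe-network-interfaces \
  --filters Name=attachment.instance-id,Values=i-0123456789abcdef0 \
  --query "NetworkInterfaces[].PrivateIpAddresses[].PrivateIpAddress"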

ELB support: We can use any of the AWS ELB offerings (Classic, Network, Application) to route traffic to services running on the worker nodes.
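By default a Service of type LoadBalancer gets a Classic ELB. A commonly used annotation on the in-tree AWS cloud provider requests a Network Load Balancer instead; a hedged example using the test-service created later in this post (in practice the annotation should be on the Service at creation time):

kubectl annotate service test-service \
  service.beta.kubernetes.io/aws-load-balancer-type=nlb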

Auto Scaling: The number of worker nodes in the cluster can grow and shrink using the EC2 Auto Scaling service.
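Because the worker nodes live in an ordinary EC2 Auto Scaling group (created in the worker node step below), resizing the cluster is just a matter of updating that group. A hedged sketch with an illustrative group name:

aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name eks-blog-workers \
  --min-size 2 --desired-capacity 3 --max-size 5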

Route 53: With the help of the ExternalDNS project and AWS Route 53, we can manage the DNS entries for the load balancers that get created when we create an Ingress object or a Service of type LoadBalancer in our EKS cluster. This way the DNS names are always in sync with the load balancers and we don't have to manage them separately.
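Assuming ExternalDNS is deployed in the cluster and a Route 53 hosted zone exists for the domain, the desired record is declared as an annotation on the Service; the hostname below is illustrative:

kubectl annotate service test-service \
  external-dns.alpha.kubernetes.io/hostname=books.example.com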

Shared responsibility for the cluster: The responsibility for an EKS cluster is shared between AWS and the customer. AWS takes care of the most critical part, managing the control plane (the API server and the etcd database), while customers manage the worker nodes. Amazon EKS automatically runs Kubernetes with three masters across three Availability Zones to protect against a single point of failure; control plane nodes are monitored and replaced if they fail, and are patched and updated automatically. This ensures high availability of the cluster and makes it simple to migrate existing workloads to EKS.

Cluster Shared Responsibility

Prerequisites for launching an EKS cluster:

1.  IAM role to be assumed by the cluster: Create an IAM role that allows EKS to manage the cluster on your behalf. Choose EKS as the service that will assume this role and attach the AWS managed policies ‘AmazonEKSClusterPolicy’ and ‘AmazonEKSServicePolicy’ to it.

IAM Role
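The same role can also be created from the CLI. A hedged sketch: the trust policy below is the standard document that lets the EKS service assume the role, and the role name matches the one used in the create-cluster call later in this post.

cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name eks-service-role \
  --assume-role-policy-document file://eks-trust-policy.json
aws iam attach-role-policy --role-name eks-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eks-service-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy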

2.  VPC for the cluster: We need a VPC where the cluster is going to reside, with subnets, internet gateways and other components configured. We can use an existing VPC, create one using the CloudFormation script provided by AWS here, or use the Terraform script available here. The scripts take the CIDR block of the VPC and the CIDR blocks of three subnets as arguments.
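For reference, launching the AWS-provided VPC template from the CLI looks roughly like this; the template URL and CIDR values are illustrative, and the parameter names follow the AWS sample template, so adjust them to whichever script you actually use:

aws cloudformation create-stack \
  --stack-name eks-blog-vpc \
  --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-vpc-sample.yaml \
  --parameters ParameterKey=VpcBlock,ParameterValue=10.0.0.0/16 \
               ParameterKey=Subnet01Block,ParameterValue=10.0.1.0/24 \
               ParameterKey=Subnet02Block,ParameterValue=10.0.2.0/24 \
               ParameterKey=Subnet03Block,ParameterValue=10.0.3.0/24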

Launching an EKS cluster:

1.  Using the web console: With the prerequisites in place, we can go to the EKS console and launch a cluster. We need to provide a name for the EKS cluster, choose the Kubernetes version to use, provide the IAM role we created in step one and choose a VPC. Once we choose a VPC, we also need to select the subnets where we want our worker nodes to be launched (by default all the subnets in the VPC are selected) and provide a security group, which is applied to the elastic network interfaces (ENIs) that EKS creates to allow the control plane to communicate with the worker nodes.

NOTE: A couple of things to note here: the subnets must span at least two different Availability Zones, and the security group we provided is later updated when we create the worker node group, so it is better not to reuse this security group with any other entity, or at least to be fully aware of the changes happening to it.

Launching EKS Cluster

2. Using the AWS CLI:

aws eks create-cluster --name eks-blog-cluster --role-arn arn:aws:iam::XXXXXXXXXXXX:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-0b8da2094908e1b23,subnet-01a46af43b2c5e16c,securityGroupIds=sg-03fa0c02886c183d4

{
    "cluster": {
        "status": "CREATING",
        "name": "eks-blog-cluster",
        "certificateAuthority": {},
        "roleArn": "arn:aws:iam::XXXXXXXXXXXX:role/eks-service-role",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-0b8da2094908e1b23",
                "subnet-01a46af43b2c5e16c"
            ],
            "vpcId": "vpc-0364b5ed9f85e7ce1",
            "securityGroupIds": [
                "sg-03fa0c02886c183d4"
            ]
        },
        "version": "1.10",
        "arn": "arn:aws:eks:us-east-1:XXXXXXXXXXXX:cluster/eks-blog-cluster",
        "createdAt": 1535269577.147
    }
}

In the response, we see that the cluster is in the CREATING state. It will take a few minutes before it becomes available. We can check the status using the command below:

aws eks describe-cluster --name=eks-blog-cluster
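If you would rather wait in a script than re-run the command by hand, a small hedged loop on the status field does the job (newer AWS CLI releases also provide an 'aws eks wait cluster-active' waiter):

until [ "$(aws eks describe-cluster --name eks-blog-cluster \
            --query cluster.status --output text)" = "ACTIVE" ]; do
  echo "cluster is still creating, waiting..."
  sleep 30
done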

Configure kubectl for EKS:

In Kubernetes we interact with the control plane by making requests to the API server, most commonly via the kubectl command-line utility. Now that our cluster is ready, we need to install kubectl.

1.  Install the kubectl binary

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

Give executable permission to the binary.

chmod +x ./kubectl

Move the kubectl binary to a folder in your system’s $PATH.

mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH

As discussed earlier, EKS uses the AWS IAM Authenticator for Kubernetes to allow IAM authentication to your Kubernetes cluster, so we need to download and install it as well.

2.  Install aws-iam-authenticator

curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/linux/amd64/aws-iam-authenticator

Give executable permission to the binary

chmod +x ./aws-iam-authenticator

Move the aws-iam-authenticator binary to a folder in your system’s $PATH.

sudo cp ./aws-iam-authenticator /bin/aws-iam-authenticator

3.  Create the kubeconfig file

First create the directory.

mkdir -p ~/.kube

Open a config file in the folder created above

vi ~/.kube/config-eks-blog-cluster

Paste the below code in the file

apiVersion: v1
clusters:
- cluster:
    server: https://DBFE36D09896EECAB426959C35FFCC47.sk1.us-east-1.eks.amazonaws.com
    certificate-authority-data: "...................."
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "eks-blog-cluster"

Replace the values of server and certificate-authority-data with the values for your cluster, and update the cluster name in the args section. You can get these values from the web console or by using the following command:

aws eks describe-cluster --name=eks-blog-cluster
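Once the cluster is ACTIVE, the two values can also be extracted directly with --query, which is handy if you script the kubeconfig generation:

# API server endpoint (the value that goes into the server field above)
aws eks describe-cluster --name eks-blog-cluster \
  --query cluster.endpoint --output text

# Base64-encoded CA certificate (the certificate-authority-data field above)
aws eks describe-cluster --name eks-blog-cluster \
  --query cluster.certificateAuthority.data --output text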

Save and exit.

Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look for your cluster configuration.

export KUBECONFIG=$KUBECONFIG:~/.kube/config-eks-blog-cluster

To verify that kubectl is now properly configured:

kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   50m

Launch and configure worker nodes:

Now we need to launch worker nodes before we can start deploying apps. We can create the worker node group using the CloudFormation script provided by AWS, which is available here, or the Terraform script available here. The script takes the following parameters (an example CLI invocation follows the list below):

  • ClusterName: Name of the Amazon EKS cluster we created earlier.
  • ClusterControlPlaneSecurityGroup: ID of the security group we used for the EKS cluster.
  • NodeGroupName: Name for the worker node auto scaling group.
  • NodeAutoScalingGroupMinSize: Minimum number of worker nodes that you always want in your cluster.
  • NodeAutoScalingGroupMaxSize: Maximum number of worker nodes that you want in your cluster.
  • NodeInstanceType: Type of worker node you wish to launch.
  • NodeImageId: AWS provides an Amazon EKS-optimized AMI to be used for worker nodes. Currently EKS is available in only two AWS regions, Oregon and N. Virginia, where the AMI IDs are ami-02415125ccd555295 and ami-048486555686d18a0 respectively.
  • KeyName: Name of the key you will use to ssh into the worker node.
  • VpcId: Id of the VPC that we created earlier.
  • Subnets: Subnets from the VPC we created earlier.
EKS Worker Nodes
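For reference, the same worker node stack can be launched from the CLI; a hedged sketch reusing the IDs from earlier in this post, with the template URL, key name and node group name as illustrative placeholders (the remaining parameters follow the same pattern):

aws cloudformation create-stack \
  --stack-name eks-blog-workers \
  --capabilities CAPABILITY_IAM \
  --template-url <url-of-the-amazon-eks-nodegroup-template> \
  --parameters ParameterKey=ClusterName,ParameterValue=eks-blog-cluster \
               ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=sg-03fa0c02886c183d4 \
               ParameterKey=NodeGroupName,ParameterValue=eks-blog-nodes \
               ParameterKey=NodeInstanceType,ParameterValue=t2.medium \
               ParameterKey=NodeImageId,ParameterValue=ami-048486555686d18a0 \
               ParameterKey=KeyName,ParameterValue=my-ssh-key \
               ParameterKey=VpcId,ParameterValue=vpc-0364b5ed9f85e7ce1 \
               ParameterKey=Subnets,ParameterValue=\"subnet-0b8da2094908e1b23,subnet-01a46af43b2c5e16c\"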

To enable worker nodes to join your cluster, we need to download, edit and apply the AWS authenticator config map.

Download the config map:

curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/aws-auth-cm.yaml

Open it in an editor

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Replace the value of rolearn with the ARN of your worker node instance role. This value is available in the output of the script you ran. Save the change and then apply it:

kubectl apply -f aws-auth-cm.yaml

Now you can check whether the nodes have joined the cluster:

kubectl get nodes
NAME                         STATUS    ROLES     AGE       VERSION
ip-10-0-2-171.ec2.internal   Ready     <none>    12s       v1.10.3
ip-10-0-3-58.ec2.internal    Ready     <none>    14s       v1.10.3

Deploying an application:

With the cluster now fully ready, we can start deploying applications on it. We will deploy a simple books API application which connects to a MongoDB database and allows users to store, list and delete book information.

1. MongoDB Deployment YAML

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodb
spec:
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - name: mongodbport
          containerPort: 27017
          protocol: TCP

2. Test Application Deployment YAML

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: test-app
        image: akash125/pyapp
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000

3. MongoDB Service YAML

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  ports:
  - port: 27017
    targetPort: 27017
    protocol: TCP
    name: mongodbport
  selector:
    app: mongodb

4. Test Application Service YAML

apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: LoadBalancer
  ports:
  - name: test-service
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    app: test-app

Services

$ kubectl create -f mongodb-service.yaml
$ kubectl create -f testapp-service.yaml

Deployments

$ kubectl create -f mongodb-deployment.yaml
$ kubectl create -f testapp-deployment.yaml

$ kubectl get services
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
kubernetes        ClusterIP      172.20.0.1      <none>          443/TCP        12m
mongodb-service   ClusterIP      172.20.55.194   <none>          27017/TCP      4m
test-service      LoadBalancer   172.20.188.77   a7ee4f4c3b0ea   80:31427/TCP   3m

In the EXTERNAL-IP column of test-service we see the DNS name of a load balancer (truncated in the output above). We can now access the application from outside the cluster using this DNS name.
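Rather than copying the DNS name out of the table, you can also read it with jsonpath; the field path below is the standard location of the load balancer hostname in a Service's status:

kubectl get service test-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'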

To Store Data:

curl -X POST -d '{"name":"A Game of Thrones (A Song of Ice and Fire)", "author":"George R.R. Martin","price":343}' http://a7ee4f4c3b0ea11e8b0f912f36098e4d-672471149.us-east-1.elb.amazonaws.com/books
{"id":"5b8fab49fa142b000108d6aa","name":"A Game of Thrones (A Song of Ice and Fire)","author":"George R.R. Martin","price":343}

To Get Data:

curl -X GET http://a7ee4f4c3b0ea11e8b0f912f36098e4d-672471149.us-east-1.elb.amazonaws.com/books
[{"id":"5b8fab49fa142b000108d6aa","name":"A Game of Thrones (A Song of Ice and Fire)","author":"George R.R. Martin","price":343}]

We can also put the URL used in the curl commands above directly into a browser and get the same response.

Deployment on EKS

Now our application is deployed on EKS and can be accessed by the users.

Comparison between GKE, ECS and EKS:

Cluster creation: Creating a GKE or ECS cluster is much simpler than creating an EKS cluster, with GKE being the simplest of the three.

Cost: With both GKE and ECS we pay only for the infrastructure that is visible to us, i.e. servers, volumes, ELBs etc., and there is no charge for master nodes or other cluster management services. With EKS there is a charge of $0.20 per hour for the control plane, which works out to roughly $145 per cluster per month.

Add-ons: GKE provides the option of using Calico as the network plugin, which helps in defining network policies for controlling inter-pod communication (by default all pods in Kubernetes can communicate with each other).

Serverless: An ECS cluster can be created using Fargate, the Containers as a Service (CaaS) offering from AWS. EKS is also expected to support Fargate soon.

In terms of availability and scalability, all three services are on par with each other.

Conclusion:

In this blog post we covered the basic concepts of EKS, launched our own EKS cluster and deployed an application on it. EKS is a much-awaited service from AWS, especially for teams that were already running Kubernetes workloads on AWS, as they can now easily migrate to EKS and get a fully managed Kubernetes control plane. EKS is expected to be adopted by many organisations in the near future.


Did you like the blog? If yes, we're sure you'll also like to work with the people who write them - our best-in-class engineering team.

We're looking for talented developers who are passionate about new emerging technologies. If that's you, get in touch with us.

Explore current openings
