K8S Multi Node Cluster Configuration over AWS Cloud using Ansible

Yash Modi
6 min read · May 2, 2021


Before going through the configuration, let's first see what Kubernetes is, what its advantages are, and how it compares to Docker.

What is Kubernetes?

Kubernetes is an open-source container orchestration platform that enables the operation of an elastic web server framework for cloud applications. Kubernetes can support data center outsourcing to public cloud service providers or can be used for web hosting at scale.

Websites and mobile applications with complex custom code can be deployed using Kubernetes on commodity hardware to lower the cost of web server provisioning with public cloud hosts and to streamline software development processes.

Kubernetes features

Kubernetes features the ability to automate web server provisioning according to the level of web traffic in production. Web server hardware can be located in different data centers, on different hardware, or with different hosting providers. Kubernetes scales up web servers according to the demand for the software applications, then scales them back down during periods of low demand. Kubernetes also has advanced load balancing capabilities for routing web traffic to the web servers in operation.

Kubernetes advantages

The main advantage of Kubernetes is the ability to operate an automated, elastic web server platform in production without vendor lock-in to AWS's EC2 service. Kubernetes runs on most public cloud hosting services, and all of the major providers offer competitive pricing. Kubernetes enables the complete outsourcing of a corporate data center.

Kubernetes can also be used to scale web and mobile applications in production to the highest levels of web traffic. Kubernetes allows any company to operate its software code at the same level of scalability as the largest companies in the world on competitive data center pricing for hardware resources.

Kubernetes vs. Docker

Kubernetes is an open-source container orchestration platform. Docker is the main container virtualization standard used with Kubernetes. Other elastic web server orchestration systems are Docker Swarm, CoreOS Tectonic, and Mesosphere. Intel also has a competing container standard with Kata, and there are several Linux container versions.

Docker has the largest share of the container virtualization marketplace for software products. Docker is a software development company that specializes in container virtualization, whereas Kubernetes is an open-source project supported by a community of coders that includes professional programmers from all of the major IT companies.

So, that was a brief introduction to Kubernetes. But since we are going to automate the Kubernetes configuration using Ansible, let's have a short introduction to Ansible as well.

What is Ansible?

Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.

Designed for multi-tier deployments since day one, Ansible models your IT infrastructure by describing how all of your systems inter-relate, rather than just managing one system at a time.

It uses no agents and no additional custom security infrastructure, so it's easy to deploy. Most importantly, it uses a very simple language (YAML, in the form of Ansible Playbooks) that allows you to describe your automation jobs in a way that approaches plain English.

Playbooks: A Simple+Powerful Automation Language

Playbooks can finely orchestrate multiple slices of your infrastructure topology, with very detailed control over how many machines to tackle at a time. This is where Ansible starts to get most interesting.

Ansible’s approach to orchestration is one of finely-tuned simplicity, as we believe your automation code should make perfect sense to you years down the road and there should be very little to remember about special syntax or features.

Extend Ansible: Modules, Plugins and API

Should you want to write your own, Ansible modules can be written in any language that can return JSON (Ruby, Python, bash, etc.). Inventory can also plug in to any data source by writing a program that speaks to that data source and returns JSON. There are also various Python APIs for extending Ansible's connection types (SSH is not the only transport possible), callbacks (how Ansible logs, etc.), and even for adding new server-side behaviors.
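To illustrate the "any language that returns JSON" point, here is a minimal, hypothetical bash module sketch (not part of the article's code). Ansible executes the module file and parses whatever JSON it prints to stdout; a real module would also read its arguments file, passed as `$1`.

```shell
#!/bin/bash
# Minimal custom-module sketch: gather the hostname and report it back
# to Ansible as a JSON object on stdout.
changed="false"
node_name="$(uname -n)"
printf '{"changed": %s, "hostname": "%s"}\n' "$changed" "$node_name"
```

Dropping such a file into a `library/` directory next to a playbook is enough for Ansible to pick it up as a module.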

That was a brief introduction to Red Hat's automation tool, Ansible.

Now, let’s go through the objectives of the task and then start building the task using ansible.

🔅 Create an Ansible playbook to launch 3 AWS EC2 instances.
🔅 Create an Ansible playbook to configure Docker on those instances.
🔅 Create playbooks to configure the K8s master and K8s worker nodes on the above EC2 instances using kubeadm.
🔅 Convert the playbooks into roles and upload those roles to Ansible Galaxy.

Now, let’s first create an ansible role that will launch 3 Instances in AWS…

# ansible-galaxy init ec2

This command will create a role named ec2. Now, let's write the code for launching the instances in this role's tasks/main.yml file.
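As a rough sketch of what this role's tasks/main.yml can look like, here is a minimal example using the classic `ec2` module. The key pair, security group, AMI ID, region, and vaulted credential variable names below are all illustrative placeholders, not the article's actual values.

```yaml
# ec2/tasks/main.yml -- minimal sketch; every concrete value below
# (key pair, security group, AMI, region, variable names) is a placeholder.
- name: Launch 3 EC2 instances for the cluster
  ec2:
    key_name: "k8s-key"
    instance_type: t2.micro
    image: "ami-xxxxxxxxxxxx"            # e.g. an Amazon Linux 2 AMI
    region: ap-south-1
    group: "k8s-sg"
    count: 3
    wait: yes
    state: present
    aws_access_key: "{{ access_key }}"   # from a vault-encrypted vars file
    aws_secret_key: "{{ secret_key }}"
    instance_tags:
      Name: k8s-node
```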

The complete code is uploaded on GitHub.

If you want to know how it connects to AWS and updates the Ansible inventory automatically, I have explained it in my previous article.

Now, we let’s create another role for configuring kubernetes master node in AWS Cloud.

# ansible-galaxy init kubemasternode

This command will create a role named kubemasternode. Now, let's write the code for configuring the master node in this role's tasks/main.yml file.
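For readers without the GitHub repo handy, a typical kubeadm master setup in this role might look roughly like the sketch below; the package names, pod network CIDR, and preflight flags are assumptions based on common kubeadm-on-EC2 setups, not the article's exact code.

```yaml
# kubemasternode/tasks/main.yml -- rough sketch of a typical kubeadm
# master setup; package names, CIDR and flags are assumptions.
- name: Install Docker
  package:
    name: docker
    state: present

- name: Start and enable Docker
  service:
    name: docker
    state: started
    enabled: yes

- name: Install kubeadm, kubelet and kubectl
  package:
    name:
      - kubeadm
      - kubelet
      - kubectl
    state: present

- name: Initialise the control plane
  command: kubeadm init --pod-network-cidr=10.240.0.0/16
           --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

- name: Set up kubeconfig for the admin user
  shell: mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

- name: Deploy the Flannel pod network
  command: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

- name: Generate the join command for the worker nodes
  command: kubeadm token create --print-join-command
  register: join_command
```

The registered `join_command` output is what the worker-node role later needs to join the cluster.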

The complete code is uploaded on GitHub.

Now, we let’s create another role for configuring kubernetes slave node in AWS Cloud.

# ansible-galaxy init kubeslavenode

This command will create a role named kubeslavenode. Now, let's write the code for configuring the worker node in this role's tasks/main.yml file.
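A sketch of the worker-node role follows; it assumes the join command was registered on the master and is looked up via `hostvars`, and the group name `master` is an illustrative assumption that depends on the inventory layout.

```yaml
# kubeslavenode/tasks/main.yml -- sketch; assumes the join command was
# registered on the master and the 'master' group name is illustrative.
- name: Install Docker, kubeadm and kubelet
  package:
    name:
      - docker
      - kubeadm
      - kubelet
    state: present

- name: Start Docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - docker
    - kubelet

- name: Allow bridged traffic through iptables
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: "1"
    state: present

- name: Join the worker to the cluster
  command: "{{ hostvars[groups['master'][0]]['join_command']['stdout'] }}"
```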

The complete code is uploaded on GitHub.

Now, since all the roles are created, let's create a final playbook that will run all the roles and configure the complete architecture automatically in a single run.

# vim play.yml

The complete code is uploaded on GitHub.
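In outline, play.yml simply applies the three roles in order. The host group names below depend on how the dynamic inventory tags the freshly launched instances, so treat them as example assumptions.

```yaml
# play.yml -- sketch; host group names and the vault file name are
# assumptions that depend on the dynamic inventory configuration.
- hosts: localhost
  vars_files:
    - secret.yml          # vault-encrypted AWS credentials (assumed name)
  roles:
    - ec2

- hosts: master
  roles:
    - kubemasternode

- hosts: slave
  roles:
    - kubeslavenode
```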

Now, the complete code is ready.

So, let’s run this playbook and the complete configuration will be done automatically.

# ansible-playbook --vault-id prod@prompt play.yml

And thus, all the configuration is done successfully.

See that the AWS instances are launched successfully…

Now, let’s go inside Master Node and check the nodes…

Now, let’s create a deployment in master node and verify the complete configuration is done successfully.

And then, connect to the IP of the master node on port 32033 and check the connectivity…

Now, whenever we refresh our browser, our load balancer will work properly and our clients will be distributed among the 5 containers running on 2 different worker nodes.
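The same distribution can be observed from the command line by hitting the NodePort service repeatedly (the master's public IP is a placeholder here); successive requests may land on different pods.

```
for i in $(seq 1 5); do curl -s http://<master-public-ip>:32033/; done
```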

And thus, all the objectives of the task are completed successfully.

The complete code of the task is uploaded on GitHub for reference.

All the screenshots are uploaded on GitHub.

And all the roles are uploaded to Ansible Galaxy.

Thanks for reading the Article.

Hope You Liked it.

Have a good day. :)