A Comprehensive Guide to Docker Swarm: Deploying and Scaling Containers on AWS

Ayushmaan Srivastav
3 min read · Feb 17, 2024


Introduction: In today’s fast-paced world, the deployment and scaling of applications have become critical aspects of software development. Docker, with its containerization technology, has revolutionized the way we deploy, manage, and scale applications. In this guide, we will walk you through the process of setting up a Docker Swarm cluster on AWS, deploying services, ensuring fault tolerance, and scaling your applications horizontally.

Step 1: Launching Amazon Linux Instances on AWS Cloud

To kickstart our journey, we need to set up our infrastructure on the cloud. Launch four Amazon Linux instances on AWS to serve as the foundation for our Docker Swarm cluster: one will act as the master (manager) node and the other three as slave (worker) nodes. Ensure that each instance meets the minimum hardware requirements for running Docker containers.

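If you prefer the command line to the AWS console, the instances can also be launched with the AWS CLI. The sketch below is illustrative only: the AMI ID, key pair name, and security group ID are placeholders you would replace with values from your own account and region.

# Launch four Amazon Linux instances (AMI ID, key pair, and security group are placeholders)
aws ec2 run-instances \
  --image-id ami-xxxxxxxxxxxxxxxxx \
  --instance-type t2.micro \
  --count 4 \
  --key-name my-key-pair \
  --security-group-ids sg-xxxxxxxxxxxxxxxxx \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=swarm-node}]'
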
Step 2: Installing Docker on Instances

With our instances up and running, the next step is to install Docker on each of them. Use yum, the package manager on Amazon Linux, then start and enable the Docker service:

yum install docker
systemctl start docker
systemctl enable docker

Step 3: Starting Docker Services

After successfully installing Docker, make sure the Docker service is running on each instance to prepare for Docker Swarm initialization (if you already started it in the previous step, running this again does no harm):

systemctl start docker

Step 4: Configuring Inbound Rules on the Master Node

To enable communication between the Swarm nodes, configure inbound rules in the master node's security group. Docker Swarm uses TCP port 2377 for cluster management, TCP and UDP port 7946 for node-to-node communication, and UDP port 4789 for overlay network traffic; opening these ports ensures that the nodes can connect seamlessly.

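If all of your instances share a security group, one way to open these ports with the AWS CLI is sketched below; sg-xxxxxxxxxxxxxxxxx is a placeholder for your security group ID, and in practice you would restrict the source to your VPC CIDR rather than 0.0.0.0/0.

# Cluster management traffic (manager node)
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxxxxxxxxxxx --protocol tcp --port 2377 --cidr 0.0.0.0/0
# Node-to-node communication
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxxxxxxxxxxx --protocol tcp --port 7946 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxxxxxxxxxxx --protocol udp --port 7946 --cidr 0.0.0.0/0
# Overlay network (VXLAN) traffic
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxxxxxxxxxxx --protocol udp --port 4789 --cidr 0.0.0.0/0
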
Step 5: Pinging from Slave to Master (Connectivity Check)

Verify connectivity between the master and slave nodes by pinging the master from each slave.

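A quick check from any slave node, where <Master_IP> is the private IP address of the master instance:

ping -c 4 <Master_IP>
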
Step 6: Initializing the Docker Swarm Cluster (On Master)

Initialize the Docker Swarm cluster on the master node, passing the master's IP address as the advertise address:

docker swarm init --advertise-addr <Master_IP>

Step 7: Listing Nodes in the Cluster

Check the status of the nodes in the Swarm cluster using the following command:

docker node ls

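At this stage only the master appears in the list, since no workers have joined yet. The output will look roughly like the following (the node ID and hostname will differ in your setup, and the asterisk marks the node you are currently connected to):

ID                         HOSTNAME        STATUS    AVAILABILITY   MANAGER STATUS
<node-id> *                ip-172-31-x-x   Ready     Active         Leader
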
Step 8: Adding Worker Nodes

During Swarm initialization, a ready-made command for adding worker nodes is provided. Execute this command on each slave node.

On the Master Node: After initializing the Docker Swarm on the master node using the docker swarm init command, the output will include a command to join worker nodes to the swarm. Look for a command similar to the following:

docker swarm join --token SWMTKN-<token> <master-node-ip>:<port>

  1. The <token> and <master-node-ip>:<port> are specific to your swarm setup.
  2. Copy the Join Command: Copy the entire docker swarm join command from the master node terminal.
  3. On the Slave Node: Connect to the slave node, open a terminal, and paste the copied docker swarm join command. It should look something like this:

docker swarm join --token SWMTKN-<token> <master-node-ip>:<port>

  4. Execute the command on the slave node. This action joins the slave node to the Docker Swarm as a worker.
  5. Verify Node Joining: To verify that the slave node has successfully joined the swarm, go back to the master node and run the following command:

docker node ls

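If the join command has scrolled out of your terminal history, there is no need to re-initialize the swarm; the master can reprint it at any time:

docker swarm join-token worker
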
Deploying and Managing Services in Docker Swarm:

In a multi-tier architecture, deploying services becomes crucial for ensuring fault tolerance and scalability. Docker Swarm introduces the concept of services: a service describes the desired state of one or more container replicas, and the Swarm manager schedules and maintains those replicas across the nodes of the cluster.

Creating a Service: Use the following command to create a service:

docker service create --name webserver httpd

Listing Services and Service Tasks:

  • docker service ls: List all services.
  • docker service ps <service_name>: List the tasks (replicas) of a service and the nodes they are running on.

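For a human-readable summary of a service's configuration (image, replica count, published ports, and so on), you can also run:

docker service inspect --pretty webserver
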
Scaling Services: Scale a service horizontally to handle increased load.

docker service scale webserver=5

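Once the command returns, the manager converges the service to the new replica count. You can watch the five replicas being scheduled across the nodes with the commands introduced above:

docker service ls
docker service ps webserver
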
Accessing Services with Load Balancer:

By default, Swarm provides built-in load balancing (the routing mesh) for services. Expose a service to the outside world using the --publish flag.

docker service create --name webserver --publish 8080:80 httpd

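Thanks to the routing mesh, the published port is reachable on every node in the swarm, not only on the nodes that run a replica. Assuming port 8080 is open in your security group, a quick check from any machine that can reach the instances:

curl http://<Any_Node_IP>:8080
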
Monitoring and Scaling:

Swarm’s manager node actively monitors containers. If a container fails, Swarm automatically relaunches it on another node, ensuring fault tolerance.

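One way to see this self-healing behaviour in action is to drain a worker node: the manager reschedules that node's replicas onto the remaining nodes. The node name below is whatever hostname appears in docker node ls:

# Take a worker out of service; its tasks are rescheduled elsewhere
docker node update --availability drain <worker-node-hostname>

# Watch the replicas move to the remaining nodes
docker service ps webserver

# Bring the node back into the scheduling pool
docker node update --availability active <worker-node-hostname>
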
Scaling in and out becomes effortless with a simple command:

docker service scale webserver=10

Conclusion: In this comprehensive guide, we explored the process of setting up a Docker Swarm cluster on AWS, deploying services, ensuring fault tolerance, and scaling applications horizontally. Docker Swarm provides a powerful and flexible platform for managing containers at scale, making it an excellent choice for modern application deployment. By following the step-by-step instructions and utilizing the Docker commands provided, you can confidently build and manage your containerized applications in a robust and scalable environment.
