Managing Docker daemons with Docker Swarm

tux

26 Oct '24

A 'swarm' refers to a large group of people or insects moving together. The key word here is 'together'. Like a swarm of bees that moves together, Docker Swarm allows containers to work together as a cluster. It is Docker's native container orchestration tool, offering management and scaling of containerised applications. Like the more widely known Kubernetes, Docker Swarm orchestrates the deployment, scaling, and operation of containers, but with a focus on ease of use and integration within the Docker ecosystem.

Docker Swarm, like container orchestration tools in general, offers high availability and scalability. The swarm can be configured to ensure that a given number of container instances or container stacks is always running; if a node becomes unavailable, the swarm reschedules its containers on another node to maintain availability. Docker Swarm also enables dynamic scaling, allowing users to increase or decrease the number of service instances based on real-time demand. This optimises resource usage and leads to better cost management.

In this tutorial, we will set up a Docker Swarm to manage a cluster of 3 nodes. Next, a service will be created to launch container instances running Nginx. One of the nodes acts as both the manager and a worker node. For reference on setting up the nodes with EC2, please refer to my previous article. Set up 3 nodes and SSH into one of them. This node will be the manager.
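Each node needs the Docker engine installed and running before it can join a swarm. The snippet below is a minimal sketch assuming Amazon Linux 2 instances; on Ubuntu, the equivalent packages would be installed with apt instead.

    # Assumption: Amazon Linux 2. Install and start the Docker engine on every node.
    sudo yum install -y docker
    sudo systemctl enable --now docker

    # Optional: allow the current user to run docker without sudo (log out and back in afterwards).
    sudo usermod -aG docker $USER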

Creating the Swarm

  1. Initialise the swarm

    The IP address can be obtained from the AWS console or with ifconfig.

    
        docker swarm init --advertise-addr ip_address
                

  2. Add workers to swarm

    On initialising the swarm, a command snippet showing how to add worker nodes will be printed in the terminal. The snippet contains a token which authenticates the worker node's request, and port 2377 is the default port used by the manager node for cluster management communication. Execute this command on the other two nodes (EC2 instances). Once both workers have joined, the cluster can be verified with the snippet shown after this list.

    
        docker swarm join --token provided_token ip_address:2377
                

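If the join command is lost, it can be printed again on the manager, and the state of the cluster can be checked once the workers have joined. This is a minimal sketch using standard Docker CLI commands; the node names and IDs in the output will differ on your instances.

    # On the manager: reprint the join command (including the token) for worker nodes.
    docker swarm join-token worker

    # On the manager: list all nodes in the swarm and their status.
    # All three nodes should show STATUS "Ready"; the manager also shows MANAGER STATUS "Leader".
    docker node ls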

Creating a Docker Service

The final step is to create a Docker service, which is used to define and manage containers in a Docker Swarm. A service makes the containerised application scalable and reliable by ensuring that the desired number of containers is always up and running.


    docker service create --name name_of_service --replicas number_of_containers --publish 8080:80 nginx

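Once the service is created, the standard Docker CLI can be used to confirm that the replicas are running and reachable. The service name mywebservice below is just an example placeholder.

    # List all services in the swarm with their replica counts.
    docker service ls

    # Show which node each replica (task) of the service is running on.
    docker service ps mywebservice

    # The published port is available on every node in the swarm (routing mesh),
    # so Nginx can be reached through port 8080 on any node's address.
    curl http://ip_address:8080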

We can easily scale or adjust the number of running containers using the command below. The service ensures the desired number of containers is always running.


    docker service scale name_of_service=desired_number_of_containers

    docker service scale mywebservice=2
                
    
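Equivalently, the replica count can be changed with docker service update, for which docker service scale is effectively a shortcut. A minimal sketch, again using the placeholder service name mywebservice:

    # Equivalent to "docker service scale mywebservice=5": update the desired replica count.
    docker service update --replicas 5 mywebservice

    # Confirm the new desired and running replica counts.
    docker service ls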

Conclusion

In this article, we have created a Docker Swarm of 3 nodes, set up as EC2 instances. Upon creating the swarm, we initialised a service which manages the desired number of containers. The swarm manager orchestrates where containers are placed across the nodes in the cluster, while the service defines the desired state. The video below provides a comprehensive overview of the implementation discussed above, offering detailed insights and practical demonstrations.