Introduction to Docker Swarm (Part 3): Scaling, Rolling updates, and more

#dockerSwarm #softwaredevelopment #microservices #docker

Technology, Published On: 16 May 2024

This is part 3 of a 3-part blog series on Docker Swarm, and we pick up from where we left off in the previous article. There, we integrated the Traefik load balancer to route requests to our services. In production we need a way to scale our service: if traffic to our web app increases we should be able to add more containers to handle the load, and if traffic drops we should be able to scale back down to save cost. Docker Swarm has features to meet both needs.

In this article,

  • We will see how we can scale up and scale down our application.
  • We will perform a rolling update to a running Docker Swarm cluster.
  • We will add constraints on which node should run a particular service.

Prerequisites:

Since we are picking up from where we left off in the previous article, we assume that you can already run Docker Swarm along with Traefik. Please check this GitHub repo in case you haven’t set up the project.

1. Scaling the Application

If you check the deploy configuration in our docker-compose.yml file, we have instructed Docker to create 1 replica of our nodeapp service.

deploy:
  replicas: 1 # <-- we have specified this here

This is not a fixed number; it is just our initial configuration. We can add more replicas to a running service dynamically. Let's add one more replica of the node service to our cluster. After enabling Docker Swarm as we did in part 1, run the following command to deploy the application:

docker stack deploy -c docker-compose.yml node_stack

You can run the following command to increase the number of replicas:

docker service scale node_stack_nodeapp=2

The above command will add one additional container to the node_stack_nodeapp service. If you run docker service ls, you will notice that there are 2 replicas of the service.

ID NAME MODE REPLICAS IMAGE
p95yf0r0cxay node_stack_nodeapp replicated 2/2 127.0.0.1:5000/nodeapp:latest

If you go to http://localhost:3000 in your browser, you will notice that the application works the same way as before. This is because the Traefik load balancer distributes the requests between the two replicas. If you run the docker ps command, you will see two running containers:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b706babfd3ad 127.0.0.1:5000/nodeapp:latest "docker-entrypoint.s…" 2 minutes ago Up 2 minutes 3000/tcp node_stack_nodeapp.2.6rxqcv1sgrsgflhsn7kwlh6by
5f7ba8e89bfb 127.0.0.1:5000/nodeapp:latest "docker-entrypoint.s…" 11 minutes ago Up 11 minutes 3000/tcp node_stack_nodeapp.1.s8p7yyr2mlozitmnc23elolg2
This is how we can scale up our application. To scale down, we can run the same command with 1 replica:

docker service scale node_stack_nodeapp=1
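
One thing to keep in mind: docker service scale only changes the running service. The stack file still says replicas: 1, so the next docker stack deploy would bring the service back down to one replica. If you want the new scale to survive redeployments, also record it in docker-compose.yml, roughly like this:

deploy:
  # Bake the desired replica count into the stack file so that
  # a future "docker stack deploy" keeps two copies running.
  replicas: 2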

2. Rolling Update:

To demonstrate a rolling update, let's modify our Node.js application. Open the index.js file and modify the text that we return from our API. I have changed the text from “Hello world!” to “Sample!” as shown below:

app.get('/', function (req, res) {
  res.send("Sample!"); // <--- change this text
});

The next step is to build the Docker image with our new changes. Run the following command to build the image:

docker compose -f docker-compose.yml build nodeapp

Next, we need to publish our image to the registry service (running the registry:2 image) that hosts our images (please refer to part 1).

docker compose -f docker-compose.yml push nodeapp

Run the following command to trigger the rolling update:

docker service update --image 127.0.0.1:5000/nodeapp node_stack_nodeapp

We need to pass the image (127.0.0.1:5000/nodeapp) and the service (node_stack_nodeapp) that we want to update.

The output should look something like the following:

node_stack_nodeapp
overall progress: 2 out of 2 tasks
1/2: running [==================================================>]
2/2: running [==================================================>]
verify: Service converged

Here, since we now have two replicas of the node service, Docker will update one container at a time (the default behaviour). During this process, if you go to http://localhost:3000 and keep refreshing the page, you will briefly switch between “Hello world!” and “Sample!” while the update is running. Once both containers are updated, it will only display “Sample!”. This is how you can perform rolling updates in Docker Swarm.
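
Docker updates one task at a time by default, but the rollout behaviour can be tuned through the deploy.update_config section of the compose file. The following is only a sketch with illustrative values; adjust them to your needs:

deploy:
  replicas: 2
  update_config:
    parallelism: 1            # update one container at a time
    delay: 10s                # wait 10 seconds between batches
    order: start-first        # start the new container before stopping the old one
    failure_action: rollback  # revert to the previous image if the update fails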

3. Handling service placement:

So far we have deployed our application to a single node. When we have more than one node in our cluster, Docker Swarm will automatically distribute the services between those nodes. For example, if you have one manager node and one worker node in your cluster, Docker Swarm can detect that and spread the replicas of the services across the two nodes. Remember to open the required ports between the two nodes (TCP 2377 for cluster management, TCP/UDP 7946 for node-to-node communication, and UDP 4789 for overlay network traffic) so that Docker Swarm can do its orchestration.

If we deploy the sample application we built across two nodes, we cannot predict which node will run our nodeapp service and which will run the Traefik service. We need to know which node is hosting the Traefik service, because that is the node that will receive the incoming requests. So we need a way to add a constraint saying that, among the nodes in the cluster, the Traefik service must always be deployed to a particular node. To do that, we can use the placement attribute. Under the deploy section of the Traefik service, add the placement configuration like the following:

deploy:
  mode: global
  placement: # <------- add placement section here
    constraints:
      - node.hostname==MasterHost

Here we are telling Docker to always deploy the Traefik service to the node whose hostname is MasterHost. Similarly, you can add different conditions based on your needs.
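
Constraints are not limited to hostnames. As a sketch, you can also constrain by node role or by a custom node label (the region label below is a hypothetical example that you would first create with docker node update --label-add region=eu-west <NODE>); when several constraints are listed, a node must satisfy all of them:

placement:
  constraints:
    - node.role == manager          # only schedule on manager nodes
    - node.labels.region == eu-west # only on nodes carrying this (hypothetical) label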

Bonus: Generating a TLS certificate using Traefik

If you are deploying your application to a server, it is good practice to run it over HTTPS. To enable HTTPS we need to generate an SSL/TLS certificate, and you will need to purchase a domain. Depending on the service provider you use to deploy the server, you will need to add the proper configuration so that requests to the domain reach our server (the node that runs Traefik). We won’t be covering the deployment process in this tutorial, but we will see how to configure Traefik so that the TLS certificate is generated automatically after the stack is deployed.

Add the port for HTTPS:

Add port 443 to the Traefik service so that it can listen for requests over HTTPS.

traefik:
  image: traefik:v2.10.7
  deploy:
    mode: global
  networks:
    - sample-net
  ports:
    - target: 443 # <----- add 443 here
      published: 443
      protocol: tcp
      mode: host
    - target: 80
      published: 80 # <------- change this from 3000 to 80
      protocol: tcp
      mode: host

Change the published port from 3000 to port 80. I will explain why we need this shortly.

Add certificate resolver for Traefik:

Open the traefik.yml file and add the following:

certificatesResolvers:
  nodeapp_resolver:
    acme:
      caServer: https://acme-v02.api.letsencrypt.org/directory
      # caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      certificatesDuration: 2160
      email: email@gmail.com
      storage: acme.json
      httpChallenge:
        # used during the challenge
        entryPoint: web

Under the certificatesResolvers section we create a resolver called nodeapp_resolver. The acme block enables automatic certificate generation through the ACME protocol, which is what Let’s Encrypt uses.

caServer: https://acme-v02.api.letsencrypt.org/directory

We are using Let’s Encrypt to generate the certificate. There is a rate limit on the number of certificates you can generate per day, so for testing you can use their staging URL, which I have added as a comment.

certificatesDuration: 2160

We set the validity of the certificate to 2160 hours, i.e. 90 days (the value is expressed in hours).

email: email@gmail.com

We need to provide an email address for registration.

storage: acme.json

We have specified the file in which the generated certificate data will be stored.
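
Since acme.json is created inside the Traefik container, the issued certificates are lost whenever the container is recreated, which can eventually run into Let’s Encrypt rate limits. One way to avoid that, purely a sketch with assumed names and paths, is to keep the ACME data on a volume and point storage at the matching absolute path (e.g. storage: /letsencrypt/acme.json in traefik.yml):

# under the traefik service in docker-compose.yml:
traefik:
  volumes:
    - traefik-acme:/letsencrypt # persist ACME data across container restarts

# top-level named volume declaration:
volumes:
  traefik-acme: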

httpChallenge:

entryPoint: web

To generate a certificate we need to complete either a DNS challenge or an HTTP challenge. Remember to open port 80 on your server so that the challenge can succeed. We have mapped the web entrypoint here, which uses port 80. This is the reason why we changed the published port from 3000 to 80 in the Traefik service in the docker-compose.yml file.
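
For reference, the web entrypoint mentioned above is expected to be defined in traefik.yml along these lines. This is a sketch: web and websecure are the conventional entrypoint names, so adjust them if your setup from the earlier parts uses different ones:

entryPoints:
  web:
    address: ":80" # plain HTTP, also used for the ACME HTTP challenge
  websecure:
    address: ":443" # HTTPS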

Enable TLS for the nodeapp service:

Open the docker-compose.yml file again and add the following to the nodeapp service’s dynamic Traefik configuration (its Traefik labels).

- "traefik.http.routers.api-redirect.tls=true"
- "traefik.http.routers.api-redirect.tls.certresolver=nodeapp_resolver"

We need to enable TLS for this service and assign the certificate resolver. Then add the domain name to the rule as shown below:

- "traefik.http.routers.api-redirect.rule=Host(`<DOMAIN_NAME>`) && PathPrefix(`/`)"

Replace <DOMAIN_NAME> with your domain name.

These are all the changes you need to make to generate a TLS certificate automatically using Traefik.
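
Putting it all together, the nodeapp service’s Traefik labels might end up looking roughly like the following (in swarm mode these labels live under deploy). Treat this as a sketch: the router name api-redirect and the internal port 3000 come from this series, while the entrypoint name websecure is an assumption, so double-check everything against your configuration from the earlier parts:

deploy:
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.api-redirect.rule=Host(`<DOMAIN_NAME>`) && PathPrefix(`/`)"
    - "traefik.http.routers.api-redirect.entrypoints=websecure"
    - "traefik.http.routers.api-redirect.tls=true"
    - "traefik.http.routers.api-redirect.tls.certresolver=nodeapp_resolver"
    - "traefik.http.services.nodeapp.loadbalancer.server.port=3000"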

You can refer to this GitHub repo for the entire source code.

Conclusion:

In conclusion, Docker Swarm coupled with Traefik and Let’s Encrypt presents a powerful combination for orchestrating containerized applications with ease, efficiency, and security. By leveraging Traefik as a dynamic load balancer and Let’s Encrypt for automatic SSL/TLS certificate generation, we ensure seamless scalability and robust encryption for our services.


Dinesh Murali

Lead-Technology

Software engineer by job and adventure seeker by nature. I thrive on developing awesome applications. When not working, I love being in nature and exploring the great outdoors.

