
Message Broker with RabbitMQ

Introduction
As I started my journey as a Backend Developer intern, I was neck-deep in boring CRUD operations. It
felt like I was stuck in a never-ending loop, churning out the same tasks day in and day out. But amidst this
routine, I couldn't shake my curiosity about diving into new backend technologies.

A chat with a senior developer at work sparked my interest in RabbitMQ, a tool we used in our projects
but one I hadn't explored much. I decided to take on a small project to understand what RabbitMQ was all
about.

In the world of microservices, RabbitMQ acts like a behind-the-scenes messenger. It ensures that
messages from one microservice to another (imagine 10,000 messages flying around every second!) reach
their destination smoothly, even if the sender and receiver aren't always available at the same time. It's
like having a dependable postman for your digital messages, making sure they get where they need to go,
no matter how busy things get.

In our project, RabbitMQ had a crucial role. It took messages from one part of our system (the sender)
and passed them along to another part (the receiver) without the two having to talk directly. If everything
went well and the receiver said, "Got it!" (we call this an acknowledgment, or ACK), RabbitMQ would
remove the message from its queue. But if the receiver still hadn't confirmed success after three delivery
attempts, RabbitMQ treated the message as failed and dropped it to keep things moving. Otherwise, the
message waited patiently in line until the receiver was ready to handle it. Thanks to RabbitMQ, our
system's communication was reliable and smooth sailing.

To sum it up, RabbitMQ keeps our system organized and running smoothly. It's like the conductor of an
orchestra, ensuring each part plays its tune at the right time. Depending on whether the receiver
acknowledges the message or not, RabbitMQ knows what to do next: either remove it from the queue or
try again. It's a simple but powerful tool that makes our backend work seamlessly.

System Overview
The system consists of several components:

● Feeder: The load tester generates many requests (e.g., 10,000) per second.
● Receiver: This independent service publishes messages to the message broker for processing.
● RabbitMQ: This message broker acts as an intermediary, receiving messages from microservices
and routing them to available server instances.
● Services (Multiple Docker Containers): These containers run the server application responsible
for processing requests and interacting with the database (a docker-compose sketch of this layout
appears below the figure).
● Monitoring System (Optional): This can be a separate service to monitor system health,
message queue lengths, and server performance metrics.

Fig: System Overview
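
For a concrete picture, here is a minimal docker-compose sketch of this layout. It is an illustration, not our exact configuration: it assumes Docker Compose v2, and the service names, image tags, credentials, and replica count are placeholders.

```yaml
services:
  rabbitmq:
    image: rabbitmq:3-management     # broker plus the web management UI
    ports:
      - "5672:5672"                  # AMQP
      - "15672:15672"                # management UI

  mysql:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
      MYSQL_DATABASE: requests

  receiver:
    build: ./receiver                # HTTP service that publishes into RabbitMQ
    ports:
      - "8080:8080"
    depends_on:
      - rabbitmq

  service:
    build: ./service                 # consumer that processes messages and writes to MySQL
    deploy:
      replicas: 3                    # multiple copies for fault tolerance
    depends_on:
      - rabbitmq
      - mysql
```

With Compose v2, deploy.replicas is honored for local runs; alternatively, docker compose up --scale service=3 achieves the same effect.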


Message Flow
1. Feeder generates a bulk of messages per second: We use a load tester as the feeder to generate a
large number of requests per second.
2. The receiver receives messages: The receiver keeps a port open to accept this flood of requests.
Rather than processing them directly, it simply stores each request in a RabbitMQ queue for later
asynchronous delivery to the subscribers.
3. RabbitMQ Operation: RabbitMQ continuously delivers the queued messages to subscribers over the
AMQP protocol. It operates asynchronously, so its delivery pace does not depend on how fast
requests arrive from the publisher.
4. The subscribers receive messages: On the subscriber side, a separate microservice with multiple
replicas (configured in the docker-compose file) is always listening to RabbitMQ. As RabbitMQ
publishes messages, whichever replica is available (not down and not busy processing another
request) picks up each one.
5. Request processing: After the subscriber successfully stores the request in the database, it returns
an ACK to RabbitMQ and, by default, the message is dequeued from the RabbitMQ queue. If an
error occurs during processing on the subscriber end, no ACK is returned and a retry count (stored
with the request body) is incremented. Once the retry count crosses the threshold, a negative
acknowledgment (NACK) is returned to RabbitMQ and the message is not re-queued (a Go sketch
of this handling follows the list).
6. Message redelivery: If the server fails to acknowledge a message within a specific timeout,
RabbitMQ redelivers the message to another available server instance. This process repeats for a
predefined number of attempts before the message is considered failed.
7. Server failover: If a server instance becomes unavailable, RabbitMQ stops routing messages to it.
The remaining healthy server instances continue processing messages from the queue, ensuring
system availability.
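
The ACK/NACK handling in steps 5-7 maps onto a few calls in the Go AMQP client (github.com/rabbitmq/amqp091-go). The sketch below is one possible reading of it, not our exact code: it assumes the retry count travels inside a JSON request body, carries the incremented count by re-publishing a copy of the message before acknowledging the original delivery, and uses a threshold of three; the queue and field names are also assumptions.

```go
// consumer.go: a subscriber replica that ACKs on success, retries with a
// counter carried in the message body, and NACKs without requeue once the
// threshold is crossed.
package main

import (
	"context"
	"encoding/json"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

// Request mirrors the message body; Retry carries the retry count with it.
type Request struct {
	Payload string `json:"payload"`
	Retry   int    `json:"retry"`
}

const maxRetries = 3 // assumed threshold

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@rabbitmq:5672/")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatalf("channel: %v", err)
	}
	defer ch.Close()

	q, err := ch.QueueDeclare("requests", true, false, false, false, nil)
	if err != nil {
		log.Fatalf("declare: %v", err)
	}
	ch.Qos(1, 0, false) // at most one unacknowledged message per replica

	msgs, err := ch.Consume(q.Name, "", false, false, false, false, nil) // autoAck = false
	if err != nil {
		log.Fatalf("consume: %v", err)
	}

	for d := range msgs {
		var req Request
		if err := json.Unmarshal(d.Body, &req); err != nil {
			d.Nack(false, false) // malformed message: drop it
			continue
		}

		if err := saveToDB(req); err == nil {
			d.Ack(false) // success: RabbitMQ removes the message from the queue
			continue
		}

		if req.Retry+1 >= maxRetries {
			d.Nack(false, false) // threshold crossed: NACK, do not re-queue
			continue
		}

		// Carry the incremented retry count by re-publishing a copy of the
		// request, then acknowledge the original delivery.
		req.Retry++
		body, _ := json.Marshal(req)
		ch.PublishWithContext(context.Background(), "", q.Name, false, false,
			amqp.Publishing{ContentType: "application/json", Body: body, DeliveryMode: amqp.Persistent})
		d.Ack(false)
	}
}

// saveToDB stands in for the real MySQL insert.
func saveToDB(r Request) error {
	log.Printf("stored: %s", r.Payload)
	return nil
}
```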

Workflow

Day 1 - Getting Started with RabbitMQ: On our first day, we immersed ourselves in the world of
RabbitMQ by following the official documentation using
Golang (https://www.rabbitmq.com/tutorials/tutorial-one-go). We began by implementing a basic Pub/Sub
scenario, which served as our introduction to the messaging world. This involved creating two small
programs in Go: a producer responsible for sending a single message, and a consumer tasked with
receiving messages and printing them out.

In the sender program, we delved into the basics of establishing a RabbitMQ connection. This involved
creating a connection object, which abstracted away the complexities of socket connections, protocol
version negotiation, and authentication. We then set up a channel, where most of the API for interacting
with RabbitMQ resides. Finally, we declared a queue to which we could send messages and proceeded to
publish a message to the queue.

Transitioning to the receiver program, we learned how to set up a similar RabbitMQ connection to listen
for messages. Just like in the sender program, we created a channel and declared the same queue as the
sender to ensure consistency. This allowed us to receive messages asynchronously from RabbitMQ and
process them as needed. Overall, it was a simple yet foundational exploration of RabbitMQ's messaging
capabilities.
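
A condensed version of the sender program might look like the following. It follows the tutorial's structure (dial, open a channel, declare a queue, publish), with the connection URL as an illustrative placeholder.

```go
// sender.go: connect, open a channel, declare a queue, publish one message.
package main

import (
	"context"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	// The connection object hides socket handling, protocol version
	// negotiation, and authentication behind a single call.
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// Most of the API for interacting with RabbitMQ lives on the channel.
	ch, err := conn.Channel()
	if err != nil {
		log.Fatalf("channel: %v", err)
	}
	defer ch.Close()

	// Declare the queue we will send to; the receiver declares the same one.
	q, err := ch.QueueDeclare("hello", false, false, false, false, nil)
	if err != nil {
		log.Fatalf("declare: %v", err)
	}

	// Publish a single message to that queue via the default exchange.
	err = ch.PublishWithContext(context.Background(), "", q.Name, false, false,
		amqp.Publishing{ContentType: "text/plain", Body: []byte("Hello World!")})
	if err != nil {
		log.Fatalf("publish: %v", err)
	}
	log.Println("sent one message")
}
```

The receiver mirrors this setup: it declares the same queue, calls ch.Consume on it, and ranges over the returned delivery channel to print each message, much like the larger consumer sketch shown earlier.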

Day 2 - Message Durability and Work Queues: Building upon our initial understanding, we delved
deeper into RabbitMQ's features on our second day. We focused on message durability
(https://www.rabbitmq.com/tutorials/tutorial-two-go#message-durability), an essential aspect of building
reliable messaging systems. By default, RabbitMQ does not save messages to disk, so queues and
messages have to be marked as durable explicitly. We explored this in the context of Work Queues,
which enable us to defer resource-intensive tasks, scheduling them for later processing.

To ensure message reliability, we also studied message acknowledgments
(https://www.rabbitmq.com/tutorials/tutorial-two-go#message-acknowledgment). When a consumer
receives and processes a message, it sends an acknowledgment (ACK) back to RabbitMQ, indicating
successful processing. This ensures that messages are never lost, even if a consumer fails unexpectedly.
RabbitMQ can then re-queue unacknowledged messages for processing by another consumer, ensuring
that no message goes unnoticed.

Additionally, we learned about publishing messages in persistent mode, which instructs RabbitMQ to save
messages to disk for added reliability. Together with acknowledgments, this protects messages against
RabbitMQ or consumer failures: for example, terminating a worker with CTRL+C while it is processing a
message loses nothing, because the unacknowledged message is simply redelivered.

Furthermore, we explored the concept of prefetch count in RabbitMQ channels. Prefetch count determines
how many messages RabbitMQ will deliver to a consumer before waiting for acknowledgments. By
setting the prefetch count to 1, we ensured that RabbitMQ would only deliver one message to a worker at
a time, preventing a single worker from being overloaded with messages. This helped us balance the
workload across multiple workers and improve the overall efficiency of our system.
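
On the publishing side, durability and persistence come down to two flags; a minimal "new task" publisher in the style of the tutorial might look like this (the queue name comes from the tutorial, the rest is illustrative).

```go
// new_task.go: publish a persistent message to a durable work queue.
package main

import (
	"context"
	"log"

	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatalf("channel: %v", err)
	}
	defer ch.Close()

	// durable = true: the queue definition itself survives a broker restart.
	q, err := ch.QueueDeclare("task_queue", true, false, false, false, nil)
	if err != nil {
		log.Fatalf("declare: %v", err)
	}

	// DeliveryMode: amqp.Persistent asks RabbitMQ to write the message to
	// disk, so the message also survives a broker restart.
	err = ch.PublishWithContext(context.Background(), "", q.Name, false, false,
		amqp.Publishing{
			DeliveryMode: amqp.Persistent,
			ContentType:  "text/plain",
			Body:         []byte("a resource-intensive task"),
		})
	if err != nil {
		log.Fatalf("publish: %v", err)
	}
}
```

On the worker side, the matching pieces are ch.Qos(1, 0, false) for the prefetch count of 1 and a manual d.Ack(false) after processing, exactly as in the consumer sketch earlier; a message left unacknowledged by a worker killed with CTRL+C is simply redelivered to another worker.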

By understanding these concepts, we gained insights into building robust and fault-tolerant messaging
systems.

Day 3 - Microservices and Fault Tolerance: On our third day, we embarked on the journey of
transitioning our sender and receiver programs into separate microservices. This architectural shift
allowed us to explore fault tolerance and scalability in distributed systems. We leveraged Echo, a popular
Go web framework, to facilitate routing within our microservices, simplifying the handling of HTTP
requests. In addition to Echo, we integrated MySQL as our database management system, providing
persistence to our microservices. This ensured that data remained intact across server restarts or failures,
enhancing the reliability of our system.

However, the main challenge arose in ensuring fault tolerance within our microservices architecture. To
address this, we turned to containerization using Docker. We containerized our sender and receiver
microservices, along with RabbitMQ and MySQL, ensuring consistency across different environments.
Using docker-compose, we orchestrated the deployment of our microservices and their dependencies. We
deployed multiple instances of the receiver microservice to achieve redundancy and fault tolerance,
ensuring uninterrupted service even in the face of unexpected errors or server outages. Furthermore, we
configured RabbitMQ to redeliver messages to other available receiver instances if one instance failed to
process them. This feature enhanced the reliability and availability of our system, guaranteeing that
messages were processed and delivered even in the event of failures.
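
To illustrate how the receiver's Echo side fits together with RabbitMQ, here is a hedged sketch of an HTTP endpoint that simply forwards each incoming request body into the queue. The route path, port, and queue name are assumptions, and a production version would add validation and guard or pool the shared channel.

```go
// receiver.go: an Echo endpoint that publishes incoming requests to RabbitMQ.
package main

import (
	"context"
	"io"
	"log"
	"net/http"

	"github.com/labstack/echo/v4"
	amqp "github.com/rabbitmq/amqp091-go"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@rabbitmq:5672/")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatalf("channel: %v", err)
	}
	defer ch.Close()

	q, err := ch.QueueDeclare("requests", true, false, false, false, nil)
	if err != nil {
		log.Fatalf("declare: %v", err)
	}

	e := echo.New()

	// The receiver does no real processing: it reads the request body and
	// drops it into the queue, leaving the work to the subscriber replicas.
	e.POST("/requests", func(c echo.Context) error {
		body, err := io.ReadAll(c.Request().Body)
		if err != nil {
			return c.NoContent(http.StatusBadRequest)
		}
		err = ch.PublishWithContext(context.Background(), "", q.Name, false, false,
			amqp.Publishing{
				ContentType:  "application/json",
				Body:         body,
				DeliveryMode: amqp.Persistent,
			})
		if err != nil {
			return c.NoContent(http.StatusInternalServerError)
		}
		return c.NoContent(http.StatusAccepted)
	})

	e.Logger.Fatal(e.Start(":8080"))
}
```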

Gantt Chart
