AWS Questions - Tomi


Hey Tomi, please find the answers to your questions below:

Q1: How do you prefer to scale the backend?

Ans: I prefer to scale the backend using a distributed architecture or microservices that can
scale independently. Implementing a caching layer also helps reduce the load on the backend
servers, which results in faster response times. For very high traffic, I prefer horizontal
scaling, adding more servers to the infrastructure.
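
As a rough illustration of the caching point, here is a minimal cache-aside sketch in Python; the in-memory dict standing in for Redis/Memcached and the fetch_user_from_db helper are hypothetical placeholders, not part of any specific project.

```python
import time

# Stand-in for a shared cache such as Redis or Memcached (hypothetical).
_cache: dict[str, tuple[float, dict]] = {}
CACHE_TTL_SECONDS = 60


def fetch_user_from_db(user_id: str) -> dict:
    """Hypothetical slow database call."""
    return {"id": user_id, "name": "example"}


def get_user(user_id: str) -> dict:
    """Cache-aside: serve from cache while fresh, otherwise hit the DB and store."""
    entry = _cache.get(user_id)
    if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]                      # cache hit: no backend load
    user = fetch_user_from_db(user_id)       # cache miss: fall through to the DB
    _cache[user_id] = (time.time(), user)
    return user
```

In a horizontally scaled setup the dict would be replaced by a shared cache so every server sees the same entries.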

Q2: Which metrics are important to collect to see if a server is OK?

Ans: System-level metrics cover CPU, memory, and disk usage, and help keep the server from
running out of resources or disk space. Application-level metrics monitor the services running on
the server and their dependencies, allowing me to identify which service might be impacting
overall server performance. Additionally, security metrics are essential for detecting potential
threats to the server.
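
As an example of collecting one such metric, the sketch below publishes a custom disk-usage data point to CloudWatch with boto3; the namespace, the InstanceId dimension, and the way the percentage is measured are assumptions for illustration.

```python
import shutil

import boto3

cloudwatch = boto3.client("cloudwatch")

# Measure local disk usage (system-level metric).
usage = shutil.disk_usage("/")
disk_used_percent = usage.used / usage.total * 100

# Publish it as a custom CloudWatch metric (namespace/dimension are illustrative).
cloudwatch.put_metric_data(
    Namespace="Custom/ServerHealth",
    MetricData=[
        {
            "MetricName": "DiskUsedPercent",
            "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            "Value": disk_used_percent,
            "Unit": "Percent",
        }
    ],
)
```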

Q3: Which monitoring system do you prefer and why?

Ans: In one of the projects I worked on, which involved managing ECS clusters, Amazon
CloudWatch was very helpful. It centralizes monitoring and logging, providing a clear,
comprehensive view of all ECS tasks and services. Real-time monitoring and detailed insights,
like CPU and memory usage, let me understand the clusters' performance and resource needs and
troubleshoot autoscaling issues.
The real value-add for me has been CloudWatch Alarms. These alarms alert me to potential
issues, like high CPU usage or task failures, enabling quick responses. Even better, they can
trigger automated actions, like scaling resources to match demand, ensuring efficient and
optimal resource utilization. The custom dashboards in CloudWatch are another boon, letting me
tailor views to our specific needs and making daily checks more efficient.
Overall, CloudWatch has transformed my approach to ECS cluster monitoring.
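
To show the kind of automated action mentioned above, here is a sketch that attaches a CloudWatch-driven target-tracking scaling policy to an ECS service via Application Auto Scaling; the cluster and service names are placeholders.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Placeholder cluster/service names.
resource_id = "service/my-cluster/my-service"

# Register the ECS service's desired task count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

# Keep average CPU around 60% by scaling the task count out and in.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 120,
    },
)
```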

Q4: What is the difference between a monitoring system and an observability platform?

Ans: Monitoring provides a broad perspective on system health; it relies on predefined metrics
and alerts. Observability takes a more extensive approach, digging deeper into system behavior
by analyzing diverse data types, including unstructured data such as logs and traces. This is
particularly valuable in complex and dynamic environments.
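
A tiny illustration of the observability side: emitting a structured log line that carries a trace/correlation ID so a single request can be followed across services. The field names and service name here are just an example convention, not from any particular platform.

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("checkout")

# A correlation/trace ID would normally be propagated from the incoming request.
trace_id = str(uuid.uuid4())

# Structured log event: machine-parseable and queryable in a log/trace backend.
logger.info(json.dumps({
    "event": "payment_failed",
    "trace_id": trace_id,
    "service": "checkout",
    "latency_ms": 842,
    "error": "upstream timeout",
}))
```
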
Q5: How do you know that the backend needs to be scaled?

Ans: Increased traffic, high resource utilization, slow response times, elevated error rates, and
unexpected traffic patterns are the indicators that tell me the backend needs to be scaled.

Q6: How do you make the backend fault tolerant?

Ans: By implementing redundancy through backup servers and a distributed architecture that
allows for load balancing. Automated failover mechanisms make it possible to swiftly switch to
backup systems when a primary component fails. I also regularly back up critical data, monitor
system health, and set up alerts for timely issue detection. System design is crucial as well: the
backend should degrade gracefully under high load or failures, prioritize essential
functionality, and isolate services to contain failures.
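
As one concrete piece of this, the sketch below tightens the health checks on an ALB target group with boto3 so unhealthy backends are taken out of rotation quickly; the target group ARN and the /health path are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN for the target group behind the load balancer.
target_group_arn = (
    "arn:aws:elasticloadbalancing:eu-west-1:123456789012:"
    "targetgroup/my-backend/0123456789abcdef"
)

# Fail fast: two good checks mark a target healthy, three bad ones remove it.
elbv2.modify_target_group(
    TargetGroupArn=target_group_arn,
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=15,
    HealthCheckTimeoutSeconds=5,
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=3,
)
```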

Q7: Which triggers do we have to configure to alert ourselves?

Ans: Triggers worth configuring include high CPU and memory usage, low disk space, slow
response times, elevated error rates, request-count thresholds, and network latency, with alerts
fired when these metrics show anomalies.
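
For example, a high-CPU trigger for an EC2 instance could look like the boto3 sketch below; the instance ID and SNS topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alert when average CPU stays above 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="backend-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder SNS topic that fans out to email/Slack/PagerDuty.
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:ops-alerts"],
)
```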

Q8: What part can be a bottleneck when we enable load balancing and what problem can we
face configuring load balancing?

Ans: When enabling load balancing, the load balancer itself can become a bottleneck. Load
balancers typically expose their own IP address or DNS name that clients use to send requests,
and they distribute incoming requests to backend servers based on their configuration. However,
if the concentration of incoming requests exceeds the load balancer's capacity, it becomes a
point of contention. Load balancers also introduce an extra network hop, which can contribute to
network bottlenecks.
