
August19: (a) You have been asked to help optimise a cloud application that is not scaling well.

After observation and profiling, you notice that there is a lot of serialisation around a shared
counter protected by a lock. Explain why this shared counter is preventing the application from
scaling. Suggest an alternative design and argue why this design will increase performance and
scalability.

Ans:

 The shared counter protected by a lock is likely preventing the application from scaling due to
contention and serialization issues.
 When multiple threads or processes attempt to access the shared counter concurrently, they
may be forced to wait for the lock to be released, leading to contention.
 This contention can cause performance degradation, as threads or processes may be idle,
waiting for the lock, which can result in decreased throughput and increased response times.

 Furthermore, the "serialisation" referred to in the question is execution serialisation: because the lock admits only one thread at a time, every update to the counter is forced into a single sequential critical section.
 By Amdahl's law this sequential portion caps the speedup that adding more threads, cores, or instances can deliver, so if the counter is updated frequently the application spends much of its time queueing for the lock rather than doing useful work, and scaling out yields diminishing returns.

 An alternative design to mitigate these issues and increase performance and scalability would be
to use a distributed counter instead of a shared counter protected by a lock.
 A distributed counter can be implemented using techniques such as sharding, where the counter
is partitioned across multiple instances, or by using a distributed data store that supports atomic
increments.
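 To make the idea concrete, here is a minimal sketch of a sharded counter in Python; the class and method names are invented for illustration. In a real cloud deployment the shards would more likely live in a distributed data store that supports atomic increments (for example Redis INCR), but the contention-reducing principle is the same.

    import random
    import threading

    class ShardedCounter:
        """Spreads increments across independently locked shards so concurrent
        writers rarely contend on the same lock."""

        def __init__(self, num_shards=16):
            self._counts = [0] * num_shards
            self._locks = [threading.Lock() for _ in range(num_shards)]

        def increment(self, amount=1):
            # Pick a shard at random so writers spread out across the locks.
            i = random.randrange(len(self._counts))
            with self._locks[i]:
                self._counts[i] += amount

        def value(self):
            # Reads sum all shards; good enough for metrics and monitoring,
            # where a momentarily stale total is acceptable.
            return sum(self._counts)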

 Here are some reasons why this alternative design can increase performance and scalability:

 Reduced contention:
1. By distributing the counter across multiple instances, contention for the lock can be
significantly reduced or eliminated altogether.
2. This allows for concurrent updates to the counter, which can improve throughput and
reduce response times.

 Parallelism:
1. With a distributed counter, multiple threads or processes can increment the counter in
parallel, without having to wait for a lock.
2. This can increase the overall processing capacity of the application, enabling it to
handle a higher load.
 Scalability:
1. Distributed counters can be horizontally scaled by adding more instances or partitions as
needed.
2. This allows the application to handle increased load and scale seamlessly as the
workload grows, without being limited by a single shared counter protected by a lock.

 Less serialized execution:

1. With the counter partitioned, there is no single critical section through which every update must pass; each partition or instance applies its increments atomically and independently.
2. This shrinks the serialized fraction of the work, so adding capacity translates into real throughput gains rather than longer lock queues.

In conclusion,

1. Replacing the shared counter protected by a lock with a distributed counter can help mitigate
contention and serialization issues, leading to improved performance and scalability of the cloud
application.
2. It allows for concurrent updates, parallelism, and scalability, while reducing serialization
overhead, ultimately leading to better scalability and performance under high loads.

(b) A company you have been working for is refusing to allow their applications to be optimised.
They think this is a waste of time. Using your knowledge of cloud computing, construct a
financially motivated argument for why applications should be optimized.

Ans:

1. Cost savings:
 Optimizing applications can lead to significant cost savings in the cloud.
 Many cloud service providers charge based on resource usage, such as CPU cycles,
memory, and storage.
 If an application is inefficient and not optimized, it may consume more resources than
necessary, resulting in higher cloud usage costs.
 By optimizing the application, reducing resource utilization, and improving efficiency, the
company can save on cloud computing costs over time, leading to cost savings and
potentially improved profitability.

2. Increased ROI:
 Investing in application optimization can yield a higher return on investment (ROI).
 Optimizing an application can result in improved performance, which can lead to higher
customer satisfaction, increased user engagement, and potentially higher revenue.
 In addition, with a more efficient and scalable application, the company can better utilize
cloud resources, resulting in better ROI on their cloud computing investments.

3. Competitive advantage:
 Optimized applications can provide a competitive advantage in the market. Customers
today have high expectations for application performance and responsiveness.
 If an application is slow, unresponsive, or inefficient, it can lead to customer
dissatisfaction and loss of business to competitors.
 On the other hand, an optimized application that delivers superior performance and user
experience can attract and retain customers, leading to a competitive advantage in the
market.

4. Scalability:
 Optimizing applications can enable greater scalability and agility in the cloud.
 Cloud computing allows for flexible scaling of resources up or down based on demand.
 However, if an application is not optimized, it may not be able to take full advantage of
cloud scalability, resulting in either underutilized resources or performance bottlenecks
during peak loads.
 Optimizing the application can ensure that it can effectively scale in the cloud.

5. Operational efficiency:
 Optimized applications can improve operational efficiency.
 Applications that are inefficient or resource-intensive can require more maintenance,
monitoring, and troubleshooting, resulting in increased operational overhead and costs.
 By optimizing the application, reducing resource consumption, and improving
performance, the company can streamline operations, reduce maintenance efforts, and
lower operational costs, resulting in improved operational efficiency and potentially higher
profits.

In summary,

 Optimizing applications in the cloud can result in cost savings, increased ROI, competitive advantage,
scalability and agility, and operational efficiency.
 These financial benefits can contribute to the company's bottom line, improving profitability and
helping the company stay competitive in the market.
MAY20 : (a) Explain with the aid of a diagram how the passive listener approach functions.
Evaluate the effect changing the time between simulated requests has on the application. Is
there.

Ans:

 The passive listener approach is a design pattern used in distributed systems where one or more
components passively listen for events or messages without actively initiating any actions. These
passive listeners wait for incoming events or messages, and when they receive one, they react
accordingly.

 In the context of a cloud application, the passive listener approach can be implemented using
components such as message queues, event-driven architectures, or callbacks. These components
"listen" for incoming events or messages, but they do not actively initiate any actions until an event or
message is received.

Here's a high-level description of how the passive listener approach works:

 Event/Message generation: Events or messages are generated by various components or external systems, such as user requests, system events, or API calls.

 Event/Message delivery: Events or messages are delivered to the passive listener component(s),
which are "listening" for incoming events or messages.

 Event/Message processing: The passive listener component(s) receive the event or message and
process it accordingly. This can involve performing certain actions, triggering business logic, or
updating data.

 Passive state: Once the event or message has been processed, the passive listener component(s)
return to their passive state and wait for the next event or message to be delivered.
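
 A minimal, self-contained sketch of this cycle in Python is shown below; an in-process queue.Queue stands in for a real message broker (such as SQS or RabbitMQ), and all names are illustrative.

    import queue
    import threading
    import time

    event_queue = queue.Queue()          # stand-in for a message broker

    def passive_listener(worker_id):
        """Blocks until an event is delivered, processes it, then returns to waiting."""
        while True:
            event = event_queue.get()    # passive: the thread sleeps until a message arrives
            if event is None:            # sentinel used here to shut the listener down
                break
            print(f"worker {worker_id} handling {event}")
            event_queue.task_done()      # back to the passive state on the next iteration

    # Simulated request generator: the interval between puts models the
    # "time between simulated requests" discussed below.
    def generate_requests(n, interval_seconds):
        for i in range(n):
            event_queue.put({"request_id": i})
            time.sleep(interval_seconds)

    listener = threading.Thread(target=passive_listener, args=(1,), daemon=True)
    listener.start()
    generate_requests(5, interval_seconds=0.2)
    event_queue.join()                   # wait until all delivered events are processed
    event_queue.put(None)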

 The passive listener approach can have implications on the performance and scalability of the
application, particularly in terms of the time between simulated requests, which refers to the rate
at which events or messages are generated and delivered to the passive listeners.

 Effect of changing the time between simulated requests:

 Higher time between simulated requests:

1. If the time between simulated requests is high, the passive listener components spend most of their time in the passive state, waiting for events or messages to arrive.
2. Each individual event is handled promptly because no queue builds up, but overall throughput is low and the provisioned capacity sits largely idle.
3. The resulting lower resource utilization is only a problem if the idle capacity cannot be scaled down; otherwise it simply reflects a light load.

 Lower time between simulated requests:

1. If the time between simulated requests is too low, the passive listener components may
be overwhelmed with a high volume of incoming events or messages.
2. This can lead to increased processing overhead, potential contention or serialization
issues, and decreased performance or scalability.
3. The passive listener components may struggle to keep up with the rate of incoming
events, resulting in delays or backlogs in event processing.

In general, the time between simulated requests should be carefully chosen to strike a balance
between responsiveness and resource utilization.
It should be based on the specific requirements of the application, the processing capacity of the
passive listener components, and the expected volume of incoming events or messages.
Proper monitoring and tuning of the time between simulated requests can help optimize the
performance and scalability of the application using the passive listener approach.
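
 As a rough illustration of this trade-off, the toy simulation below feeds a single passive listener with events at different intervals and reports how many events are still queued when generation stops; the 50 ms service time and all other numbers are assumed purely for illustration.

    import queue
    import threading
    import time

    SERVICE_TIME = 0.05  # seconds each event takes to process (assumed)

    def run_simulation(interval_seconds, num_events=50):
        """Feeds events to one passive listener and reports the backlog left behind."""
        q = queue.Queue()

        def listener():
            while True:
                item = q.get()
                if item is None:
                    break
                time.sleep(SERVICE_TIME)   # simulated processing work
                q.task_done()

        threading.Thread(target=listener, daemon=True).start()
        for i in range(num_events):
            q.put(i)
            time.sleep(interval_seconds)
        backlog = q.qsize()                # events still waiting when generation stops
        q.put(None)
        return backlog

    # Intervals longer than the service time leave the listener mostly idle (backlog near 0);
    # intervals shorter than the service time build a queue the listener cannot drain.
    for interval in (0.10, 0.05, 0.01):
        print(interval, run_simulation(interval))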

 It's important to note that the passive listener approach may not be suitable for all applications or
use cases.
 The effectiveness of this approach depends on various factors such as the nature of the events
or messages, the complexity of the processing logic, and the scalability requirements of the
application.
 Proper analysis and evaluation of the application's specific requirements and constraints should
be performed to determine the most appropriate approach for event handling in the cloud
application.
August20 : (b) Explain with the aid of a diagram how the passive listener approach helps to
minimise the amount of instances used in a cloud application.

Ans:

 The passive listener approach can be used to optimize resource utilization in a cloud application
by reducing the number of instances needed to handle incoming events or messages. Here's how
it can work:

 Traditional Approach:
1. In a traditional approach, without using the passive listener pattern, each instance of the
application may actively poll or continuously listen for events or messages.
2. This means that multiple instances of the application may be running in parallel, each
performing the same task of polling or listening for events, which can result in redundant
and inefficient resource utilization.

 Passive Listener Approach:


1. On the other hand, with the passive listener approach, a separate component or service,
such as a message queue or event-driven architecture, is responsible for receiving and
delivering events or messages to the application instances.
2. The application instances do not actively poll or listen for events, but instead, they
passively wait for events or messages to be delivered to them by the separate
component.

 Resource Utilization:
1. This passive listener component can efficiently handle incoming events or messages
from multiple sources and distribute them to the application instances as needed.
2. This can help minimize the number of instances needed, as the passive listener
component can effectively manage the event or message handling process and
distribute the workload across the instances in a more optimized manner.

 Scalability and Flexibility:


1. Additionally, the passive listener approach can provide scalability and flexibility, as the
number of instances can be easily adjusted based on the incoming event or message
load.
2. If the workload increases, more instances can be added, and if the workload decreases,
instances can be scaled down.
3. This can help optimize resource utilization, reduce unnecessary overhead, and minimize
costs associated with running and managing excessive instances in the cloud
environment.
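
 As a minimal, hypothetical sizing rule, the backlog observed in the listener component (for example a queue-length metric) can drive how many instances are kept running, so quiet periods need only the minimum pool while bursts scale out temporarily; the function name and parameters below are invented for illustration.

    import math

    def desired_instance_count(queue_depth, events_per_instance_per_minute,
                               min_instances=1, max_instances=20):
        """Size the worker pool from the observed backlog instead of running a
        fixed number of always-on pollers."""
        needed = math.ceil(queue_depth / events_per_instance_per_minute)
        return max(min_instances, min(needed, max_instances))

    # Quiet period: a single instance drains the queue; burst: scale out temporarily.
    print(desired_instance_count(queue_depth=30,  events_per_instance_per_minute=60))   # -> 1
    print(desired_instance_count(queue_depth=900, events_per_instance_per_minute=60))   # -> 15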
Overall,

The passive listener approach can help minimize the number of instances used in a cloud
application by efficiently managing the event or message handling process, optimizing resource
utilization, and providing scalability and flexibility to adapt to changing workloads.
However, the effectiveness of this approach depends on various factors such as the nature of
the events or messages, the complexity of the processing logic, and the scalability requirements
of the application.
Proper analysis and evaluation of the application's specific requirements and constraints should
be performed to determine the most appropriate approach for event handling in the cloud
application.

MAY21 : Differentiate between synchronous and asynchronous tasks. Explain how they work and
summarise how they can be used to increase the number of requests an instance can handle.

Ans:

Synchronous and asynchronous tasks refer to two different ways of handling tasks in a software
application, particularly in the context of how tasks are executed and how they impact the
performance and scalability of an instance in a cloud environment.

 Synchronous Tasks:
1. In synchronous tasks, the application waits for a task to complete before moving on to
the next task.
2. This means that the application's execution is blocked until the task is finished, and the
application cannot proceed to other tasks until the current task is completed.
3. This can result in slower response times and reduced performance, as the application
has to wait for each task to finish before processing the next one.

 Asynchronous Tasks:
1. In asynchronous tasks, the application does not wait for a task to complete before
moving on to the next task.
2. Instead, the application can continue processing other tasks while the asynchronous
task is being executed in the background.
3. This allows the application to handle multiple tasks concurrently, without being blocked
by the completion of individual tasks.
 How They Work:
1. Synchronous tasks are typically implemented using blocking I/O operations, where the
application waits for the I/O operation to complete before proceeding.
2. Asynchronous tasks, on the other hand, are implemented using non-blocking I/O
operations and asynchronous programming techniques, where the application can
initiate the I/O operation and continue processing other tasks without waiting for the
I/O operation to complete.
3. The result is that the application can continue to process other tasks concurrently
while the asynchronous tasks are being executed in the background.

 Increasing Requests Handling Capacity:


1. Asynchronous tasks can be used to increase the number of requests an instance can
handle in a cloud environment.
2. Since asynchronous tasks allow the application to continue processing other tasks
while waiting for I/O operations to complete, they can help improve the overall
throughput and responsiveness of the application.
3. By leveraging asynchronous programming techniques, an application can handle
multiple requests concurrently, without being blocked by I/O operations or other time-
consuming tasks.
4. This can result in increased performance and scalability, as the application can
efficiently utilize the available resources and handle more requests in parallel, thereby
increasing the number of requests an instance can handle.
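
 The contrast can be illustrated with a short Python sketch using asyncio; the 0.2-second sleep stands in for an I/O call such as a database query or downstream API request, and all timings are illustrative.

    import asyncio
    import time

    async def fetch(request_id, io_seconds=0.2):
        """Stands in for a non-blocking I/O call."""
        await asyncio.sleep(io_seconds)      # the event loop runs other tasks while we wait
        return f"response {request_id}"

    def handle_requests_synchronously(n):
        # Each simulated I/O wait blocks the whole instance: total time ~ n * 0.2s.
        for _ in range(n):
            time.sleep(0.2)

    async def handle_requests_asynchronously(n):
        # All waits overlap on a single thread: total time ~ 0.2s regardless of n.
        await asyncio.gather(*(fetch(i) for i in range(n)))

    start = time.perf_counter()
    handle_requests_synchronously(10)
    print(f"sync:  {time.perf_counter() - start:.2f}s")     # roughly 2.0s

    start = time.perf_counter()
    asyncio.run(handle_requests_asynchronously(10))
    print(f"async: {time.perf_counter() - start:.2f}s")     # roughly 0.2s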

In summary,

 synchronous tasks block the application's execution until a task is completed, while
asynchronous tasks allow the application to continue processing other tasks concurrently while
waiting for I/O operations or other time-consuming tasks to complete.
 By leveraging asynchronous programming techniques, an application can increase its request
handling capacity in a cloud environment, leading to improved performance and scalability.

August21 : During optimisation of a cloud application it has been noticed that there is a severe
bottleneck around a shared counter in that processes spend a long time waiting to get access to a lock.
Devise a mechanism that can reduce this bottleneck and explain how it functions. Are there any
disadvantages to this approach?

Ans:
 One approach to reduce the bottleneck around a shared counter in a cloud application is to
implement a distributed counter mechanism. In this approach, instead of using a single shared
counter that requires locks for synchronization, multiple instances of the application can
maintain their own local counters and periodically synchronize them in a distributed manner.

 Here's how this mechanism can work:

1. Local Counters:
 Each instance of the application maintains its own local counter without needing any
locks for synchronization.
 Instances can independently increment their local counters based on their own
processing logic, without waiting for locks or other synchronization mechanisms.

2. Periodic Synchronization:
 Periodically, instances synchronize their local counters with a central counter that can
be stored in a shared database or distributed data store.
 This synchronization can occur at predetermined intervals or triggered by certain
events, such as reaching a threshold or after a certain time has elapsed.

3. Conflict Resolution:
 In case of conflicts where multiple instances update the central counter at the same
time, a conflict resolution mechanism can be implemented to handle the conflicts and
ensure that the final counter value is consistent across all instances.
 This can be achieved using techniques such as optimistic locking, where each instance updates the counter with a version number or timestamp, and conflicts are resolved based on the latest version or timestamp (a minimal sketch of this mechanism is given after this list).

4. Benefits:
 This distributed counter mechanism can reduce the contention for locks around the
shared counter, as each instance operates independently on its local counter without
waiting for locks.
 This can lead to improved performance and scalability, as instances can increment their
local counters without being blocked by other instances.
 Additionally, the periodic synchronization of local counters can help ensure that the
central counter is eventually consistent across all instances, allowing for accurate
tracking of the shared counter value.

5. Disadvantages:
 There are some potential disadvantages to this approach.
 First, implementing a distributed counter mechanism may introduce additional
complexity in the application's codebase, as it requires handling conflicts and ensuring
consistency across multiple instances.
 Second, the periodic synchronization process may introduce some latency, as instances
may need to wait for the synchronization process to occur before their local counters
are updated.
 Finally, if the central counter is stored in a shared database, there may be potential
performance and scalability limitations associated with the database, such as contention
for database resources or increased latency in accessing the central counter.
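
 A minimal in-process sketch of steps 1-3 above is given below; the class names are invented, and CentralCounter stands in for a row in a shared database or distributed data store that offers compare-and-set (optimistic locking).

    import threading

    class CentralCounter:
        """Stand-in for a shared store row holding a value and a version number."""
        def __init__(self):
            self._lock = threading.Lock()   # models the store's own atomicity, not application locking
            self.value = 0
            self.version = 0

        def read(self):
            with self._lock:
                return self.value, self.version

        def compare_and_set(self, expected_version, new_value):
            # Optimistic locking: the write succeeds only if nobody else
            # updated the row since we read it.
            with self._lock:
                if self.version != expected_version:
                    return False
                self.value = new_value
                self.version += 1
                return True

    class LocalCounter:
        """Per-instance counter that is incremented without any shared lock."""
        def __init__(self, central):
            self.central = central
            self.pending = 0

        def increment(self, amount=1):
            self.pending += amount          # no cross-instance coordination here

        def flush(self):
            # Periodic synchronisation: retry the compare-and-set until the delta lands.
            while self.pending:
                value, version = self.central.read()
                if self.central.compare_and_set(version, value + self.pending):
                    self.pending = 0

    central = CentralCounter()
    local = LocalCounter(central)
    for _ in range(100):
        local.increment()
    local.flush()
    print(central.read())                   # -> (100, 1)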

In conclusion,

 Implementing a distributed counter mechanism can be a potential solution to reduce the bottleneck
around a shared counter in a cloud application.
 However, it should be carefully evaluated considering the specific requirements and constraints of
the application, as there may be potential disadvantages associated with increased complexity,
latency, and performance limitations. Proper design, implementation, and testing are crucial to
ensure the effectiveness and efficiency of this approach.

May22.(b) Explain the principle of the optimisation method Minimising Work with the aid of two
examples.

Ans:

 The principle of Minimizing Work is an optimization method that aims to reduce the amount of
unnecessary or redundant work performed in a system or process, with the goal of improving
performance and efficiency.
 This principle is often applied in the context of software development and performance
optimization, where minimizing unnecessary work can lead to faster execution times and
improved resource utilization.
 Here are two examples that illustrate the principle of Minimizing Work:

 Caching:
1. Caching is a common technique used in software applications to minimize work by storing
frequently accessed data in a cache for quick retrieval, instead of recalculating or fetching the
data from its original source.
2. For example, in a web application, static assets such as images, CSS files, and JavaScript files can
be cached in a Content Delivery Network (CDN) or a local server cache.
3. This reduces the amount of work required to fetch these assets from the original source, such as
the web server or a remote server, and speeds up the overall performance of the application.

 Lazy Loading:
1. Lazy loading is another example of the Minimizing Work principle, where resources or
data are loaded on-demand only when they are actually needed, instead of loading
everything upfront.
2. For instance, in a web application with a large dataset, lazy loading can be
implemented to load data only when it is requested by the user, rather than loading all
the data at once.
3. This minimizes the unnecessary work of loading and processing data that may not be
immediately needed, and improves the responsiveness and performance of the
application.
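
 Both examples can be sketched briefly in Python; the function names, page sizes, and data are invented for illustration.

    from functools import lru_cache

    # Minimising work via caching: the expensive computation runs once per distinct
    # input; repeated calls are answered from memory.
    @lru_cache(maxsize=1024)
    def render_thumbnail(image_id):
        print(f"expensive rendering for {image_id}")   # only printed on a cache miss
        return f"thumbnail-bytes-for-{image_id}"

    render_thumbnail("img-1")   # does the work
    render_thumbnail("img-1")   # served from the cache, no work repeated

    # Minimising work via lazy loading: records are fetched one page at a time,
    # and only when the consumer actually asks for them.
    def lazy_records(fetch_page, page_size=100):
        page = 0
        while True:
            batch = fetch_page(page, page_size)        # hypothetical data-access callback
            if not batch:
                return
            yield from batch
            page += 1

    def fake_fetch_page(page, size):
        return list(range(page * size, (page + 1) * size)) if page < 3 else []

    first_ten = [record for _, record in zip(range(10), lazy_records(fake_fetch_page))]
    print(first_ten)            # only the first page was ever fetched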

 In both of these examples, the principle of Minimizing Work is applied by avoiding redundant
work, such as fetching data that is already available in a cache or loading data that is not
immediately needed. By minimizing unnecessary work, these optimization techniques can
improve the performance, efficiency, and scalability of the system or application.

Aug22.(b) Some cloud based applications benefit from using a Memcache to speed up data access.
Explain how this optimisation technique works and analyse what kind of data would be most suitable
and least suitable for such a technique.

Ans:

 Memcache, or Memcached, is a widely used distributed caching system that can be employed as
an optimization technique in cloud-based applications to speed up data access.
 It works by storing frequently accessed data in memory, which allows for faster retrieval
compared to traditional disk-based storage systems.
 The basic principle of Memcache is simple:
1. when data is requested from an application, the data is first checked in the Memcache
cache.
2. If the data is found in the cache, it can be quickly retrieved from memory and returned to
the application, eliminating the need to fetch the data from the original data source, such as
a database or an API, which can be slower in terms of latency and resource utilization.

 In a simplified diagram, the flow is: application → check Memcache → on a hit, return the cached value from memory; on a miss, fetch from the original data source and write the result back into the cache.
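
 A minimal cache-aside sketch in Python is shown below; it assumes the pymemcache client library and a Memcached node reachable on localhost, and the key and function names are purely illustrative.

    import json
    from pymemcache.client.base import Client   # assumes pymemcache is installed

    cache = Client(("localhost", 11211))         # address of the Memcached node

    def load_user_profile_from_db(user_id):
        # Stand-in for the slow, expensive lookup (database query, remote API call, ...).
        return {"id": user_id, "name": "example", "plan": "pro"}

    def get_user_profile(user_id):
        key = f"user-profile:{user_id}"
        cached = cache.get(key)                  # 1. look in Memcache first
        if cached is not None:
            return json.loads(cached)            # 2. cache hit: no trip to the database
        profile = load_user_profile_from_db(user_id)
        cache.set(key, json.dumps(profile), expire=300)   # 3. cache miss: store for 5 minutes
        return profile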

 Memcache can be used to optimize data access for various types of data in a cloud-based
application. Some examples of data that can be suitable for Memcache include:

 Frequently accessed data:


1. Data that is frequently requested by the application and does not change frequently can
be suitable for Memcache.
2. Examples include configuration data, reference data, or static data that is used across
multiple requests.

 Expensive data retrieval operations:


1. Data that requires time-consuming and resource-intensive retrieval operations, such as
complex database queries or API calls, can benefit from Memcache.
2. Caching the result of these operations in Memcache can reduce the overhead of
repeating the same expensive operations and improve performance.

 Read-heavy workloads:
1. Applications with read-heavy workloads, where data is read more frequently than it is
updated, can benefit from Memcache.
2. Caching frequently accessed data in Memcache can reduce the load on the data source
and speed up the overall data retrieval process.

 On the other hand, data that is least suitable for Memcache includes:

 Highly dynamic data: Data that changes frequently and needs to be updated in real-time may
not be suitable for Memcache, as the cache may become stale quickly and result in incorrect or
outdated data being returned to the application.

 Data with low access frequency: Data that is rarely accessed or has low access frequency may
not provide significant benefits from caching in Memcache, as the overhead of maintaining the
cache may outweigh the potential performance gains.

 Large data sets: Caching large data sets in Memcache may not be practical, as it can consume
significant memory resources and may not provide significant performance improvements
compared to fetching the data directly from the data source.

In conclusion,

Memcache can be an effective optimization technique in cloud-based applications for speeding up data
access for frequently accessed and expensive data retrieval operations. However, careful consideration
should be given to the type of data that is suitable for caching in Memcache, as not all data may benefit
from this technique and it may not always be the optimal solution for every scenario.
