
Question Format & QP Setter Information

Name of Examination Continuous Assessment Test II (CAT II), FALL Semester- 2021

Slot: E1 + TE1 Course Mode: CBL Class Number (s): VL2021220103964

Course Code: CSE3035 Course Title: Principles of Cloud Computing

Emp. No.: 12385 Faculty Name: Sendhil Kumar K.S School: SCOPE

Contact No.: 9840068152 Email: Sendhilkumar.ks@vit.ac.in


General Instructions (if any): 1. OPEN BOOK Examination, 2. ….

Q. No.  Question Text
1. a (i) Explain in detail the various challenges of cloud application development and discuss
the opportunities that exist in existing and new applications.

Performance isolation is nearly impossible to achieve in a real system, especially when the
system is heavily loaded.
Reliability is a major concern; server failures are expected when a large number of servers
cooperate on a computation.
Cloud infrastructure exhibits latency and bandwidth fluctuations that affect application
performance.

Performance considerations limit the amount of data logging, yet frequent logging helps
identify the source of unexpected results and errors.

Three broad categories of existing applications:


Processing pipelines.
Batch processing systems.
Web applications.
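As a minimal illustration of the first category, a processing pipeline can be sketched as chained stages; the document-indexing task, stage names, and sample documents below are invented for illustration:

```python
def read_stage(docs):
    """Stage 1: ingest raw documents."""
    for doc in docs:
        yield doc

def transform_stage(docs):
    """Stage 2: normalize each document (lowercase and tokenize)."""
    for doc in docs:
        yield doc.lower().split()

def index_stage(token_lists):
    """Stage 3: build an inverted index from token to document ids."""
    index = {}
    for doc_id, tokens in enumerate(token_lists):
        for tok in tokens:
            index.setdefault(tok, set()).add(doc_id)
    return index

docs = ["Cloud Computing", "Computing at scale"]
index = index_stage(transform_stage(read_stage(docs)))
print(index)  # {'cloud': {0}, 'computing': {0, 1}, 'at': {1}, 'scale': {1}}
```

In a cloud deployment each stage would run on its own pool of instances, with queues between stages; here the stages are simply chained generators in one process.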

(ii) Justify which kinds of applications are best suited, and which are unsuitable, for a cloud
environment.

Ideal applications for cloud computing:


Web services.
Database services.
Transaction-based services. The resource requirements of transaction-oriented services benefit
from an elastic environment where resources are available when needed and one pays only for
the resources consumed.
Applications unlikely to perform well in a cloud:


Applications with a complex workflow and multiple dependencies, as is often the case in
high-performance computing.
Applications that require intensive communication among concurrent instances.
Applications whose workload cannot be arbitrarily partitioned.

(OR)
b Elaborate in detail on the MapReduce framework. Draw the architectural diagram for the
physical organization of compute nodes and discuss the working of the MapReduce architecture.

1. An application starts a master instance, M worker instances for the Map phase, and later R
worker instances for the Reduce phase.
2. The master instance partitions the input data into M segments.
3. Each map instance reads its input data segment and processes the data.
4. The results of the processing are stored on the local disks of the servers where the map
instances run.
5. When all map instances have finished processing their data, the R reduce instances read the
results of the first phase and merge the partial results.
6. The final results are written by the reduce instances to a shared storage server.
7. The master instance monitors the reduce instances, and when all of them report task
completion the application is terminated.
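The steps above can be sketched in miniature as a single-process word count; the word-count task, the segment contents, and the function names are invented for illustration, and a real deployment would distribute the map and reduce tasks across worker servers:

```python
from collections import defaultdict

def map_phase(segment):
    """Map: emit (word, 1) pairs for each word in an input segment."""
    return [(word, 1) for word in segment.split()]

def partition(pairs, r):
    """Shuffle: assign each intermediate key to one of R reduce buckets."""
    buckets = [defaultdict(list) for _ in range(r)]
    for key, value in pairs:
        buckets[hash(key) % r][key].append(value)
    return buckets

def reduce_phase(bucket):
    """Reduce: merge the partial results for each key (here, sum counts)."""
    return {key: sum(values) for key, values in bucket.items()}

# The "master" splits the input into M segments, runs the map tasks,
# shuffles intermediate pairs to R reduce tasks, and merges the output.
segments = ["the cloud", "the map reduce cloud"]   # M = 2
R = 2
intermediate = [pair for seg in segments for pair in map_phase(seg)]
final = {}
for bucket in partition(intermediate, R):
    final.update(reduce_phase(bucket))
print(final)  # counts: 'the' -> 2, 'cloud' -> 2, 'map' -> 1, 'reduce' -> 1
```

Steps 4 and 5 (writing intermediate results to local disk and having reduce workers fetch them) are collapsed here into in-memory buckets; the control flow otherwise follows the numbered steps.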

2. a Assume ABC Software Solutions provides microservices to its customers. With a neat
architectural diagram, discuss in detail the microservices architecture. Justify in which
situations you would use this architecture in your organization, and list the benefits and
challenges of implementing a microservices architecture in your organization.

(OR)
b Assume you are a cloud architect at ABC Software Solutions. Your organization handles huge
amounts of data and manages both batch processing and real-time processing of these data. As a
cloud architect, design an architecture to handle the ingestion, processing, and analysis of data
that is too large or complex for traditional database systems. Justify in which situations you
would use this architecture in your organization, and list the benefits and challenges of
implementing it in your organization.

3. a Discuss in detail the various cloud resource management policies. Justify whether optimal
strategies for these policies can actually be implemented in a cloud; the term "optimal" is used
in the sense of control theory. Support your answer with solid arguments. Optimal strategies for
one class may conflict with optimal strategies for one or more of the other classes; identify and
analyze such cases.

1. Admission control: prevent the system from accepting workload in violation of high-level
system policies.
2. Capacity allocation: allocate resources for individual activations of a service.
3. Load balancing: distribute the workload evenly among the servers.
4. Energy optimization: minimize energy consumption.
5. Quality of service (QoS) guarantees: the ability to satisfy timing or other conditions
specified by a Service Level Agreement.
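As an illustration of the first policy, an admission controller can reject workload whose projected utilization would violate a high-level threshold; the capacity units, load model, and 0.8 threshold below are invented for this sketch:

```python
class AdmissionController:
    """Toy admission-control policy: reject new workload when the projected
    utilization would violate a high-level system threshold."""

    def __init__(self, capacity, threshold=0.8):
        self.capacity = capacity    # total resource units in the cluster
        self.threshold = threshold  # maximum allowed utilization fraction
        self.load = 0               # resource units currently committed

    def admit(self, demand):
        """Admit the request only if the projected utilization stays
        within the policy threshold."""
        projected = (self.load + demand) / self.capacity
        if projected > self.threshold:
            return False            # would violate the high-level policy
        self.load += demand
        return True

ctrl = AdmissionController(capacity=100)
print(ctrl.admit(50))   # True  (projected utilization 0.50)
print(ctrl.admit(40))   # False (0.90 exceeds the 0.80 threshold)
print(ctrl.admit(20))   # True  (0.70)
```

A real controller would of course base the decision on measured, not declared, demand; the point is only that the policy is a feasibility check against a stated system-level constraint.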
Virtually all optimal, or near-optimal, mechanisms that address these five classes of policies
do not scale up and typically target a single aspect of resource management, e.g., admission
control, while ignoring energy conservation; many require complex computations that cannot be
done effectively in the time available to respond. The performance models are very complex,
analytical solutions are intractable, and the monitoring systems used to gather state information
for these models can be too intrusive and unable to provide accurate data. Many techniques
concentrate on system performance in terms of throughput and time in system, but they rarely
include energy trade-offs or QoS guarantees. Some techniques are based on unrealistic
assumptions; for example, capacity allocation is viewed as an optimization problem but under
the assumption that servers are protected from overload.
Optimal strategies cannot be implemented on a cloud because they require accurate information
about the state of the system and it is infeasible to acquire this information due to the
scale of the system. It is also not feasible to construct the accurate models of such systems
required by control theoretical and utility-based mechanisms used for policy implementation.
Optimal strategies for QoS may conflict with the ones for energy minimization; indeed,
QoS may require the servers to operate outside their optimal region for energy consumption.

(OR)
b Multiple controllers are probably necessary due to the scale of the cloud. Is it beneficial to
have separate system and application controllers? Can we have specialized controllers, for
example, some to monitor performance and others to monitor power consumption? Should all the
functions on which we want to base the resource management policies be integrated in a single
controller, and should one controller be assigned to a given number of servers, or to a
geographic region? Justify your answers with solid arguments and draw the necessary
architectural diagrams.

The scale of a cloud requires some form of clustered organization where each cluster is
managed by local systems responsible for the implementation of the five classes of resource
management policies. These cluster controllers should collaborate with one another for the
implementation of global policies.

A cluster manager could consist of several sub-systems, each responsible for one of the five
classes of policies: admission control, capacity allocation, load balancing, energy consumption,
and Quality of Service. These sub-systems should interact with each other; for example, the
one responsible for admission control should reject additional workload whenever the system
is in danger of failing its QoS guarantees. Similarly, load balancing and energy minimization
should work in concert; when a server is lightly loaded applications running on it should be
migrated to other servers and the server should be switched to a sleep state to save energy.
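The interplay between load balancing and energy minimization described above can be sketched as a toy consolidation routine; the server names, load fractions, and low-load watermark are invented for illustration:

```python
def consolidate(servers, low_watermark=0.2):
    """Toy energy-aware consolidation: migrate applications off lightly
    loaded servers, then put the emptied servers into a sleep state.

    `servers` maps server name -> load fraction (0.0 .. 1.0).
    Returns (still-active servers with updated loads, servers put to sleep).
    """
    active = dict(servers)
    sleeping = []
    # Consider the most lightly loaded servers first.
    for name in sorted(servers, key=servers.get):
        load = active.get(name)
        if load is None or load >= low_watermark or len(active) == 1:
            continue  # already migrated, busy enough, or last server standing
        targets = {n: l for n, l in active.items() if n != name}
        dest = min(targets, key=targets.get)   # least-loaded migration target
        if targets[dest] + load <= 1.0:        # target must not overload
            active[dest] += load               # migrate the workload
            del active[name]                   # ...and sleep the empty server
            sleeping.append(name)
    return active, sleeping

active, sleeping = consolidate({"s1": 0.1, "s2": 0.6, "s3": 0.05})
print(sleeping)  # ['s3', 's1'] -- both lightly loaded servers are put to sleep
```

The sketch ignores migration cost and wake-up latency, which in practice force the two sub-systems to coordinate rather than act greedily.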

Each application should be built around an application manager which decides when to
request additional system resources or when to release resources that are no longer needed.

Relate the answer to the architectural diagram.
