Cloud Assignment 1 & 2
Measures of Scalability:
Size Scalability
Geographical Scalability
Administrative Scalability
1. Size Scalability: The size of the system will increase as users and resources grow,
but this growth should not come at the cost of the system's performance and
efficiency. The system must respond to users in the same manner as it did before
being scaled.
2. Geographical Scalability: Geographical scalability means that adding new nodes
in physically distant locations should not noticeably affect the communication time
between nodes.
3. Administrative Scalability: Adding new nodes to the system should not require
significant extra management effort. For example, if there are multiple administrators,
the system can be shared with the others even while one of them is using it.
Types of Clusters:
1. Open Cluster:
Every node needs its own IP address, and the nodes are accessible only through
the internet or web. This type of cluster raises greater security concerns.
2. Closed Cluster:
The nodes are hidden behind a gateway node, which provides increased
protection. Closed clusters need fewer IP addresses and are well suited to
computational tasks.
Cluster Computing Architecture:
It is designed as an array of interconnected individual computers, with the
computer systems operating collectively as a single standalone system.
It is a group of workstations or computers working together as a single,
integrated computing resource connected via high-speed interconnects.
A node is either a single-processor or multiprocessor system with memory,
input/output facilities, and an operating system.
Two or more nodes may be connected on a single line, or each node may be
connected individually through a LAN connection.
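The idea of many nodes presenting themselves as one computing resource can be sketched with a toy scheduler. This is only an illustration under our own assumptions (the `assign_tasks` helper and the node names are hypothetical, not part of any cluster framework):

```python
from itertools import cycle

def assign_tasks(tasks, nodes):
    """Round-robin placement: the cluster accepts one stream of work,
    but each task actually runs on an individual interconnected node."""
    placement = {node: [] for node in nodes}
    for task, node in zip(tasks, cycle(nodes)):
        placement[node].append(task)
    return placement
```

A real cluster scheduler would also track node load and health, but the single-entry-point view is the same.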
Types of Hypervisor –
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host hardware. It is also known as a
"Native Hypervisor" or "Bare-metal Hypervisor". It does not require any base server
operating system and has direct access to hardware resources. Examples of Type-1
hypervisors include VMware ESXi, Citrix XenServer, and Microsoft Hyper-V.
TYPE-2 Hypervisor:
A host operating system runs on the underlying host system. It is also known as a
"Hosted Hypervisor". Hypervisors of this kind do not run directly on the underlying
hardware; rather, they run as an application on a host system (physical machine).
The software is installed on an operating system, and the hypervisor asks that
operating system to make hardware calls on its behalf. Examples of Type-2
hypervisors include VMware Player and Parallels Desktop. Hosted hypervisors are
often found on endpoints like PCs. A Type-2 hypervisor is very useful for engineers
and security analysts (for example, for analyzing malware, malicious source code,
and newly developed applications).
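As a small, hedged illustration: on Linux, the hardware virtualization support that bare-metal hypervisors such as KVM rely on shows up as the `vmx` (Intel VT-x) or `svm` (AMD-V) CPU flag in `/proc/cpuinfo`. The parser below is a sketch that takes the file's text as input; the function name is our own:

```python
def virtualization_support(cpuinfo_text):
    """Return the virtualization extension advertised in /proc/cpuinfo
    text, if any: 'vmx' means Intel VT-x, 'svm' means AMD-V."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None
```

On a real machine one would pass `open("/proc/cpuinfo").read()` to this function.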
Pros & Cons of Type-2 Hypervisors:
Pros: These hypervisors allow quick and easy access to a guest operating system
alongside the running host machine. They usually come with additional useful
features for guest machines, and such tools enhance coordination between the host
machine and the guest machines.
Cons: Because there is no direct access to the physical hardware resources, these
hypervisors lag behind Type-1 hypervisors in performance. There are also potential
security risks: an attacker who exploits a weakness in the host operating system can
gain access to the guest operating systems as well.
Modules of a Virtual Machine Monitor:
1. DISPATCHER:
The dispatcher behaves like the entry point of the monitor and reroutes the
instructions of the virtual machine instance to one of the other two modules.
2. ALLOCATOR:
The allocator is responsible for deciding the system resources to be provided to the
virtual machine instance. It means whenever a virtual machine tries to execute an
instruction that results in changing the machine resources associated with the
virtual machine, the allocator is invoked by the dispatcher.
3. INTERPRETER:
The interpreter module consists of interpreter routines. These are executed,
whenever a virtual machine executes a privileged instruction.
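The three modules above can be sketched as a toy monitor. This is purely illustrative (the class and method names are hypothetical, not from any real hypervisor), showing how the dispatcher routes resource-changing instructions to the allocator and privileged ones to the interpreter:

```python
class Allocator:
    def handle(self, instruction):
        # Decide/update the machine resources tied to the VM instance.
        return f"allocate resources for {instruction}"

class Interpreter:
    def handle(self, instruction):
        # Interpreter routines emulate privileged instructions.
        return f"emulate privileged {instruction}"

class Dispatcher:
    """Entry point of the monitor: reroutes each trapped instruction
    to one of the other two modules."""
    def __init__(self):
        self.allocator = Allocator()
        self.interpreter = Interpreter()

    def dispatch(self, instruction, changes_resources):
        if changes_resources:
            return self.allocator.handle(instruction)
        return self.interpreter.handle(instruction)
```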
Advantages of Cluster Computing:
1. High Performance:
These systems offer better, enhanced performance than mainframe
computer networks.
2. Easy to manage :
Cluster Computing is manageable and easy to implement.
3. Scalable :
Resources can be added to the clusters accordingly.
4. Expandability :
Computer clusters can be expanded easily by adding additional computers to
the network. Cluster computing is capable of combining several additional
resources or the networks to the existing computer system.
5. Availability:
When one node fails, the other nodes remain active and function as a proxy
for the failed node, ensuring enhanced availability.
6. Flexibility :
It can be upgraded to the superior specification or additional nodes can be
added.
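The availability point above can be sketched as a client that fails over between nodes. This is a minimal illustration; the `fetch` callable and node names are assumptions, not a real cluster API:

```python
def request_with_failover(nodes, fetch):
    """Try each node in turn; any healthy node acts as a stand-in
    (proxy) for a node that has failed."""
    last_error = None
    for node in nodes:
        try:
            return fetch(node)
        except ConnectionError as err:
            last_error = err  # node is down; fall through to the next
    raise RuntimeError("all cluster nodes unavailable") from last_error
```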
Disadvantages of Cluster Computing:
1. High cost:
It is not very cost-effective, due to its expensive hardware and design.
2. Problem in finding faults:
It is difficult to identify which component is at fault.
3. More space is needed:
The infrastructure footprint may grow, since more servers are needed to
manage and monitor the cluster.
Applications of Cluster Computing :
Various complex computational problems can be solved.
It can be used in the applications of aerodynamics, astrophysics and in
data mining.
Weather forecasting.
Image Rendering.
Various e-commerce applications.
Earthquake Simulation.
Petroleum reservoir simulation.
Q4. Enlist benefits of cloud computing.
Flynn’s classification –
1. Single-instruction, single-data (SISD) systems –
An SISD computing system is a uniprocessor machine which is capable of
executing a single instruction, operating on a single data stream. In SISD,
machine instructions are processed in a sequential manner and
computers adopting this model are popularly called sequential computers.
Most conventional computers have SISD architecture. All the instructions
and data to be processed have to be stored in primary memory.
The speed of the processing element in the SISD model is limited by (dependent
on) the rate at which the computer can transfer information internally. Dominant
representative SISD systems are IBM PCs and workstations.
2. Single-instruction, multiple-data (SIMD) systems –
An SIMD system is a multiprocessor machine capable of executing the
same instruction on all the CPUs but operating on different data streams.
Machines based on the SIMD model are well suited to scientific computing,
since it involves many vector and matrix operations. The data elements of a
vector can be organized into multiple sets (N sets for an N-PE system) so that
the information can be passed to all the processing elements (PEs), with each
PE processing one data set.
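The SISD/SIMD distinction can be sketched in plain Python. The "lane" loop below only mimics, conceptually, how SIMD hardware applies one instruction to several data elements per step; real SIMD happens in vector registers, not Python loops:

```python
def sisd_add(a, b):
    """SISD: one instruction stream operating on one data element
    per step, like a conventional sequential computer."""
    out = []
    for x, y in zip(a, b):
        out.append(x + y)
    return out

def simd_add(a, b, lane_width=4):
    """SIMD (conceptual): the same 'add' instruction is applied to a
    whole lane of elements per step, as vector hardware would."""
    out = []
    for i in range(0, len(a), lane_width):
        out.extend(x + y for x, y in zip(a[i:i + lane_width],
                                         b[i:i + lane_width]))
    return out
```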
Ans: Platform as a Service.
Disadvantages of PaaS:
1. Limited control over infrastructure: PaaS providers typically manage
the underlying infrastructure and take care of maintenance and updates,
but this can also mean that users have less control over the environment
and may not be able to make certain customizations.
2. Dependence on the provider: Users are dependent on the PaaS
provider for the availability, scalability, and reliability of the platform, which
can be a risk if the provider experiences outages or other issues.
3. Limited flexibility: PaaS solutions may not be able to accommodate
certain types of workloads or applications, which can limit the value of the
solution for certain organizations.
Assignment 2
Q1. Describe Cloud Computing with respect to its cloud service models and explain
the provider-consumer interaction dynamics for Infrastructure-as-a-service.
Software as a Service (SaaS)
Software-as-a-Service (SaaS) is a way of delivering services and applications over the
Internet. Instead of installing and maintaining software, we simply access it via the
Internet, freeing ourselves from the complex software and hardware management. It
removes the need to install and run applications on our own computers or in the data
centers eliminating the expenses of hardware as well as software maintenance.
SaaS provides a complete software solution that you purchase on a pay-as-you-
go basis from a cloud service provider. Most SaaS applications can be run directly
from a web browser without any downloads or installations required. The SaaS
applications are sometimes called Web-based software, on-demand software, or
hosted software.
Advantages of SaaS
1. Cost-Effective: Pay only for what you use.
2. Reduced time: Users can run most SaaS apps directly from their web browser
without needing to download and install any software. This reduces the time spent
in installation and configuration and can reduce the issues that can get in the way
of the software deployment.
3. Accessibility: We can access app data from anywhere.
4. Automatic updates: Rather than purchasing new software, customers rely on a
SaaS provider to automatically perform the updates.
5. Scalability: It allows the users to access the services and features on-demand.
Companies providing Software as a Service include Cloud9 Analytics,
Salesforce.com, CloudSwitch, Microsoft Office 365, BigCommerce, Eloqua,
Dropbox, and CloudTran.
Disadvantages of SaaS:
1. Limited customization: SaaS solutions are typically not as customizable as on-
premises software, meaning that users may have to work within the constraints of
the SaaS provider’s platform and may not be able to tailor the software to their
specific needs.
2. Dependence on internet connectivity: SaaS solutions are typically cloud-based,
which means that they require a stable internet connection to function properly.
This can be problematic for users in areas with poor connectivity or for those who
need to access the software in offline environments.
3. Security concerns: SaaS providers are responsible for maintaining the security of
the data stored on their servers, but there is still a risk of data breaches or other
security incidents.
4. Limited control over data: SaaS providers may have access to a user’s data,
which can be a concern for organizations that need to maintain strict control over
their data for regulatory or other reasons.
Platform as a Service
Q2. Why is it necessary to secure a Hypervisor? Enlist the different types of threats to
hypervisor and Virtual Machines? Explain any two threats in detail.
Q3. What are the different types of Cloud Security challenges? Explain Infrastructure
security at the Network & Host level.
Ans: Here we give an overview of cloud computing and its need, with the main focus
on the security issues in cloud computing. Let's discuss them one by one.
Cloud Computing :
Cloud Computing is a type of technology that provides remote services on the internet
to manage, access, and store data rather than storing it on Servers or local drives. This
technology is also known as Serverless technology. Here the data can be anything like
Image, Audio, video, documents, files, etc.
Need for Cloud Computing:
Before cloud computing, most large and small IT companies used traditional
methods: they stored data on servers, and they needed a separate server room for
that. A server room requires a database server, mail server, firewalls, routers,
modems, high-speed network devices, etc., on which IT companies had to spend a
lot of money. Cloud computing came into existence to reduce these problems and
costs, and most companies have shifted to this technology.
Security Issues in Cloud Computing:
There is no doubt that cloud computing provides various advantages, but there are
also some security issues, described below.
1. Data Loss –
Data loss, also known as data leakage, is one of the issues faced in cloud
computing. Our sensitive data is in the hands of somebody else, and we do not
have full control over our database. So, if hackers manage to break the security of
the cloud service, they may gain access to our sensitive data or personal files.
2. Interference of Hackers and Insecure APIs –
When we talk about the cloud and its services, we are talking about the Internet,
and the easiest way to communicate with the cloud is through an API. It is
therefore important to protect the interfaces and APIs used by external users. In
cloud computing, some services are also available in the public domain; these are
a vulnerable part of cloud computing, because they may be accessed by third
parties. Through such services, hackers can more easily attack or harm our data.
3. Lack of Skill –
Working with the platform, shifting to another service provider, needing an extra
feature, or figuring out how to use a feature are the main problems in an IT
company that does not have skilled employees. Working with cloud computing
therefore requires skilled personnel.
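The "insecure APIs" issue above can be made concrete: one basic protection for an exposed API is checking a caller's token in constant time, so an attacker cannot use response timing to guess the token character by character. A minimal sketch (the function name and tokens are hypothetical):

```python
import hmac

def verify_api_token(presented, expected):
    """Constant-time comparison: does not leak, via timing, how many
    leading characters of the presented token were correct."""
    return hmac.compare_digest(presented.encode(), expected.encode())
```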
Q4. What is the importance of cloud disaster recovery? Explain the terms RPO &
RTO in detail.
Ans: One of the best practices in well run IT organizations is for CIOs and
IT managers to evaluate the risk of data loss, and establish business
continuity plans that outline backup and recovery along with their
respective Recovery Point Objectives (RPOs) and Recovery Time
Objectives (RTOs). But first, information technology teams and business
stakeholders must start with a common understanding of what RPO and
RTO mean in terms of backup and recovery.
Recovery Point Objective (RPO): The maximum acceptable age of the
data that can be restored (the recovery point), and hence how much recent
data may be lost. For simplicity, RPO can be thought of as the time between
the last useful backup of a known good state and the moment of data loss.
Recovery Time Objective (RTO): The maximum acceptable length of time
required for an organization to recover lost data and get back up and
running. This value may be defined as part of a larger Disaster
Recovery Plan across an organization that also includes applications
like Office 365. For simplicity, RTO can be thought of as the time it
takes, from start to finish, to recover data to an acceptable current
good state.
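The two definitions above reduce to simple time arithmetic. A sketch, assuming example objectives of 24 hours (RPO) and 4 hours (RTO); the function names are ours, not from any backup product:

```python
from datetime import datetime, timedelta

def achieved_rpo(failure_time, last_backup_time):
    # Data written between the last good backup and the failure is lost.
    return failure_time - last_backup_time

def meets_objectives(failure_time, last_backup_time, recovered_time,
                     rpo=timedelta(hours=24), rto=timedelta(hours=4)):
    """Compare what actually happened against the agreed RPO and RTO."""
    rpo_ok = achieved_rpo(failure_time, last_backup_time) <= rpo
    rto_ok = (recovered_time - failure_time) <= rto
    return rpo_ok, rto_ok
```

For example, a failure at noon with a midnight backup gives an achieved recovery point of 12 hours, which satisfies a 24-hour RPO but not a 6-hour one.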
There’s no “one size fits all” answer when calculating RPO and RTO. IT
and stakeholders must assess the MTPD (Maximum Tolerable Period of
Disruption) on an application-by-application, dataset-by-dataset basis, then
layer protection based on what your organization has defined as
existentially-critical, mission-critical, and optimal-for-performance.
But while SaaS vendors like Microsoft offer some protections for mission-
critical applications and data, IT is still ultimately responsible for ensuring
the organization is protected, and can meet RPO and RTO requirements.
SaaS vendors cannot protect you from you — you need to establish RPO
and RTO to recover from sync errors, administrator errors, and malicious
actors who may corrupt or permanently delete your organization’s data.
1. Review current RPO and RTO to ensure they reflect what serves
your organization best when recovering SaaS data.
2. Assess the potential impact of ransomware and your organization’s
current ransomware response plan.
3. Test your current recovery approaches for adequate RTO of Office
365 mail, data and files.
Q5. What is a Kernel-based Virtual Machine? Identify & justify its type.
2. Process Virtual Machine: Unlike a system virtual machine, a process virtual
machine does not provide the facility to install a complete virtual operating system.
Rather, it creates a virtual environment of that OS while a particular app or program
is running, and this environment is destroyed as soon as we exit the app. Some apps
run directly on the main OS, while virtual environments are created to run other
apps: because those programs require a different OS, the process virtual machine
provides one for as long as the programs are running. Example – the Wine software
on Linux helps run Windows applications.
Virtual Machine Language: This is a type of language that can be understood by
different operating systems; it is platform-independent. Just as running any
programming language (C, Python, or Java) needs a specific compiler that converts
the code into system-understandable code (also known as byte code), a virtual
machine language works the same way. If we want code that can be executed on
different types of operating systems (Windows, Linux, etc.), a virtual machine
language is helpful.
Cloud computing security processes should address the security controls the cloud
provider will incorporate to maintain the customer's data security, privacy and
compliance with necessary regulations. The processes will also likely include a business
continuity and data backup plan in the case of a cloud security breach.
The following are the 10 security-as-a-service categories:
1. Identity and Access Management should provide controls for assured identities and
access management. Identity and access management includes people, processes and
systems that are used to manage access to enterprise resources by assuring the identity of
an entity is verified and is granted the correct level of access based on this assured
identity. Audit logs of activity such as successful and failed authentication and access
attempts should be kept by the application/solution.
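The audit-logging requirement above can be sketched as an access check that records every attempt, successful or failed. The ACL structure, log fields, and names here are illustrative assumptions:

```python
from datetime import datetime, timezone

AUDIT_LOG = []

def check_access(user, resource, acl):
    """Grant access only if the ACL allows it, and record every
    attempt (successful or failed) in the audit log."""
    allowed = resource in acl.get(user, set())
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "resource": resource,
        "outcome": "success" if allowed else "denied",
    })
    return allowed
```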
2. Data Loss Prevention is the monitoring, protecting and verifying the security of data at
rest, in motion and in use in the cloud and on-premises. Data loss prevention services
offer protection of data usually by running as some sort of client on desktops/servers and
running rules around what can be done. Within the cloud, data loss prevention services
could be offered as something that is provided as part of the build, such that all servers
built for that client get the data loss prevention software installed with an agreed set of
rules deployed.
3. Web Security is real-time protection offered either on-premise through
software/appliance installation or via the cloud by proxying or redirecting web traffic to
the cloud provider. This provides an added layer of protection on top of things like AV to
prevent malware from entering the enterprise via activities such as web browsing. Policy
rules around the types of web access and the times this is acceptable also can be enforced
via these web security technologies.
4. E-mail Security should provide control over inbound and outbound e-mail, thereby
protecting the organization from phishing and malicious attachments, enforcing
corporate policies such as acceptable use and spam and providing business continuity
options. The solution should allow for policy-based encryption of e-mails as well as
integrating with various e-mail server offerings. Digital signatures enabling identification
and non-repudiation are features of many cloud e-mail security solutions.
5. Security Assessments are third-party audits of cloud services or assessments of on-
premises systems based on industry standards. Traditional security assessments for
infrastructure and applications and compliance audits are well defined and supported by
multiple standards such as NIST, ISO and CIS. A relatively mature toolset exists, and a
number of tools have been implemented using the SaaS delivery model. In the SaaS
delivery model, subscribers get the typical benefits of this cloud computing variant:
elasticity, negligible setup time, low administration overhead and pay-per-use with
low initial investments.
6. Intrusion Management is the process of using pattern recognition to detect and react
to statistically unusual events. This may include reconfiguring system components in real
time to stop/prevent an intrusion. The methods of intrusion detection, prevention and
response in physical environments are mature; however, the growth of virtualization and
massive multi-tenancy is creating new targets for intrusion and raises many questions
about the implementation of the same protection in cloud environments.
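"Statistically unusual" in the paragraph above can be made concrete with a simple z-score test over historical event counts. This is a sketch of the idea, not a production intrusion-detection system:

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag an event count as statistically unusual when it lies more
    than `threshold` standard deviations above the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return (current - mu) / sigma > threshold
```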
7. Security Information and Event Management systems accept log and event
information. This information is then correlated and analyzed to provide real-time
reporting and alerting on incidents/events that may require intervention. The logs are
likely to be kept in a manner that prevents tampering to enable their use as evidence in
any investigations.
8. Encryption systems typically consist of algorithms that are computationally difficult or
infeasible to break, along with the processes and procedures to manage encryption and
decryption, hashing, digital signatures, certificate generation and renewal and key
exchange.
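As a small sketch of keyed integrity protection: HMAC-SHA256 below is a symmetric keyed hash, standing in for the signing/verification workflow the category describes (a true digital signature would use an asymmetric key pair). Names and keys are illustrative:

```python
import hashlib
import hmac

def sign(message, key):
    """Produce a keyed MAC tag for the message (HMAC-SHA256)."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message, key, tag):
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message, key), tag)
```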
9. Business Continuity and Disaster Recovery are the measures designed and
implemented to ensure operational resiliency in the event of any service interruptions.
Business continuity and disaster recovery provides flexible and reliable failover for
required services in the event of any service interruptions, including those caused by
natural or man-made disasters or disruptions. Cloud-centric business continuity and
disaster recovery makes use of the cloud's flexibility to minimize cost and maximize
benefits.
10. Network Security consists of security services that allocate access, distribute,
monitor and protect the underlying resource services. Architecturally, network security
provides services that address security controls at the network in aggregate or specifically
addressed at the individual network of each underlying resource.
In a cloud/virtual environment, network security is likely to be provided by virtual devices
alongside traditional physical devices.
Q7 What is the Impact of AWS in Cloud Computing? Elaborate any of its two
services in detail.
Ans- Trade fixed expense for variable expense – Instead of having to invest heavily in data
centers and servers before you know how you’re going to use them, you can pay only when
you consume computing resources, and pay only for how much you consume.
Benefit from massive economies of scale – By using cloud computing, you can achieve a
lower variable cost than you can get on your own. Because usage from hundreds of thousands
of customers is aggregated in the cloud, providers such as AWS can achieve higher
economies of scale, which translates into lower pay-as-you-go prices.
Stop guessing capacity – Eliminate guessing on your infrastructure capacity needs. When
you make a capacity decision prior to deploying an application, you often end up either
sitting on expensive idle resources or dealing with limited capacity. With cloud computing,
these problems go away. You can access as much or as little capacity as you need, and scale
up and down as required with only a few minutes’ notice.
Increase speed and agility – In a cloud computing environment, new IT resources are only a
click away, which means that you reduce the time to make those resources available to your
developers from weeks to just minutes. This results in a dramatic increase in agility for the
organization, since the cost and time it takes to experiment and develop is significantly lower.
Stop spending money running and maintaining data centers – Focus on projects that
differentiate your business, not the infrastructure. Cloud computing lets you focus on your
own customers, rather than on the heavy lifting of racking, stacking, and powering servers.
Go global in minutes – Easily deploy your application in multiple regions around the world
with just a few clicks. This means you can provide lower latency and a better experience for
your customers at minimal cost.