Cloud Computing QT Preparation


UNITED TECHNICAL COLLEGE COMPUTER ENGINEERING

CLOUD COMPUTING- ELECTIVE I MODEL QUESTIONS


1 a) What is Cloud Computing? Discuss the features that make cloud
computing better than traditional on-premise computing.
=> Cloud computing is a technology paradigm that involves delivering
computing services, such as storage, processing power, and applications,
over the internet. Instead of relying on local servers or personal devices to
handle applications, users can access and utilize computing resources
provided by remote servers. This model offers several advantages over
traditional on-premise computing:

❖ Scalability:
➢ Cloud Computing: Cloud services can be easily scaled up or
down based on demand. Users can dynamically adjust their
computing resources, ensuring they only pay for what they use.
➢ On-Premise Computing: Scaling on-premise infrastructure
often requires significant upfront investments in hardware and
can be time-consuming.
❖ Cost Efficiency:
➢ Cloud Computing: Cloud services operate on a pay-as-you-go
model, allowing users to avoid substantial upfront costs for
hardware and maintenance. This makes it cost-effective for
businesses of all sizes.
➢ On-Premise Computing: Traditional computing involves
substantial capital expenditure for hardware, software licenses,
and maintenance, which can be expensive.
❖ Accessibility and Flexibility:
➢ Cloud Computing: Services are accessible from anywhere with
an internet connection, providing flexibility for remote work
and collaboration.
➢ On-Premise Computing: Accessing on-premise resources may
be limited to physical locations, making it less flexible for
distributed teams or remote work.
❖ Reliability and Availability:
➢ Cloud Computing: Cloud providers typically offer high levels of
redundancy, ensuring high availability and reliability. Data is
often mirrored across multiple servers and locations.
➢ On-Premise Computing: The reliability of on-premise systems
depends on the quality of the infrastructure and the
effectiveness of backup and disaster recovery plans.
❖ Security:
➢ Cloud Computing: Cloud providers invest heavily in security
measures, including encryption, firewalls, and identity
management, often exceeding what individual organizations
can achieve.
➢ On-Premise Computing: Security is the responsibility of the
organization, and vulnerabilities can arise from inadequate
measures, potentially leading to data breaches or other
security issues.
❖ Updates and Maintenance:
➢ Cloud Computing: Service providers handle software updates,
maintenance, and security patches, reducing the burden on
users. This ensures that users always have access to the latest
features and security improvements.
➢ On-Premise Computing: Organizations are responsible for
managing updates, maintenance, and patches, which can be
time-consuming and may result in downtime during upgrades.
❖ Resource Utilization:
➢ Cloud Computing: Resources are shared among multiple users,
optimizing utilization and reducing wasted capacity.
➢ On-Premise Computing: Organizations must provision enough
resources to handle peak workloads, which may result in
underutilization during periods of lower demand.

While cloud computing offers numerous advantages, the choice between
cloud and on-premise solutions depends on factors like data sensitivity,
regulatory compliance, and specific business requirements. Many
organizations opt for hybrid solutions that combine both cloud and on-
premise resources to meet their unique needs.

b) Explain the different types of cloud service models.


2 a) What is an instance? Explain Amazon AWS EC2 instance types.
=> An instance is a virtual server running on a cloud provider's
infrastructure. It is essentially a dedicated computing environment with
allocated resources such as CPU, memory, storage, and networking. AWS groups
EC2 instances into the following families (a short sketch for querying
instance-type details programmatically follows the list):

a. General Purpose: Balanced mix of compute, memory, and network
resources that can be used for a variety of diverse workloads. These
instances are ideal for applications that use these resources in equal
proportions, such as web servers and code repositories. (M7g, M7i,
M7i-flex, M7a, Mac, M6g, M6i, M6in, M6a, M5, M5n, M5zn, M5a, M4,
T4g, T3, T3a, T2)
b. Compute Optimized: High-performance processing for CPU-intensive
tasks like batch processing, media transcoding, scientific modeling,
etc. (C7g, C7a, C6g, C6gn, C6i, C6id, C6a, C5, C5n, C5a)
c. Memory Optimized: Large memory for in-memory databases, high-
performance computing (HPC) applications, etc. (R7g, R7i, R7a, X2gd,
X2g, X2iezn, X2idn, X2iedn, R6g, R6i, R6gd, R6id, R6a, R5, R5b, R5n,
R5a, R4, X1e, X1, Z1d)
d. Accelerated computing: Graphics processing units (GPUs) or field-
programmable gate arrays (FPGAs) for machine learning, gaming,
graphics-intensive applications, etc. (P4d, P4de, P3dn, P3, G5, G4dn,
G4ad, G4, F1)
e. Storage optimized: High-performance and high-capacity storage for
large databases, data warehouses, etc. (I4i, I4id, I3en, I3, D3, D3en,
H1, Hs1)
f. HPC (High-Performance Computing) Optimized: Enhanced networking
for tightly coupled high-performance applications. (C7g, C6gn, C6i,
C5n, C5n.metal, C5ad, C5a, C4)
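
Instance families and generations change over time, so it can be useful to query the
details of an instance type programmatically instead of memorizing them. Below is a
minimal boto3 sketch, assuming boto3 is installed and AWS credentials are configured;
the instance types and region are only examples:

import boto3

# EC2 client; the region is only an example
ec2 = boto3.client("ec2", region_name="us-east-1")

# Ask AWS for the hardware profile of a few example instance types
resp = ec2.describe_instance_types(
    InstanceTypes=["t3.micro", "m5.large", "c5.xlarge", "r5.large"]
)

for it in resp["InstanceTypes"]:
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    print(f'{it["InstanceType"]}: {vcpus} vCPUs, {mem_gib:.1f} GiB RAM')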


b) You accidentally stopped an EC2 instance in a VPC with an associated
Elastic IP. If you start the instance again, what will be the result?

If you stop and then start an Amazon EC2 instance that is within a VPC and has
an Elastic IP (EIP) associated with it, the instance retains its Elastic IP address.
Unlike instances in EC2-Classic, instances in a VPC do not release their
associated Elastic IP addresses when they are stopped.
However, it's important to note a few key points regarding this behavior:

❖ Elastic IP Association: When the instance is stopped and then started
again, the Elastic IP remains associated with the instance. There is no
need to re-associate the Elastic IP after restarting (a short sketch of this
behavior follows these points).
❖ Public IP Address: If the instance had a public IP address (which is
different from an Elastic IP), that public IP address is released when the
instance is stopped. Upon restarting the instance, it will receive a new
public IP address if the instance is set to auto-assign public IP. This does
not affect the Elastic IP, which remains consistent.
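
A minimal boto3 sketch of this behavior, with a placeholder instance ID and assuming
configured AWS credentials:

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder instance ID

# Stop the instance and wait until it is fully stopped
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Start it again and wait until it is running
ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])

# Any Elastic IP is still associated with the instance after the restart
addresses = ec2.describe_addresses(
    Filters=[{"Name": "instance-id", "Values": [instance_id]}]
)["Addresses"]
for addr in addresses:
    print("Elastic IP still associated:", addr["PublicIp"])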

c) Can you change the private IP address of an EC2 instance while it is
running or in a stopped state?
The primary private IP address of an Amazon EC2 instance never changes. It
does not change while the instance is running, and it does not change while
the instance is stopped. You cannot change the primary private IP address
of an instance.
3 a) What is load balancing? Explain its types.

=> Load balancing is a vital technique in cloud computing that distributes
incoming network traffic efficiently across multiple servers, also known as a
server farm or pool.

Application load balancing

Complex modern applications have several server farms with multiple servers
dedicated to a single application function. Application load balancers look at
the request content, such as HTTP headers or SSL session IDs, to redirect
traffic.

For example, an ecommerce application has a product directory, shopping
cart, and checkout functions. The application load balancer sends requests
for browsing products to servers that contain images and videos but do not
need to maintain open connections. By comparison, it sends shopping cart
requests to servers that can maintain many client connections and save cart
data for a long time.
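
On AWS, this kind of content-based routing is typically expressed as listener rules on
an Application Load Balancer. The boto3 sketch below is only an illustration; the
listener and target group ARNs are placeholders:

import boto3

elbv2 = boto3.client("elbv2")

# Send shopping-cart and checkout requests to a dedicated target group,
# based on the request path; everything else falls through to the default action.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:region:account:listener/app/shop/xxx/yyy",  # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/cart/*", "/checkout/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/cart-servers/zzz",  # placeholder
    }],
)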

Network load balancing

Network load balancers examine IP addresses and other network information
to redirect traffic optimally. They track the source of the application traffic
and can assign a static IP address to several servers. Network load balancers
use static and dynamic load balancing algorithms to balance server load.

Global server load balancing

Global server load balancing occurs across several geographically distributed
servers. For example, companies can have servers in multiple data centers, in
different countries, and in third-party cloud providers around the globe. In
this case, local load balancers manage the application load within a region or
zone. They attempt to redirect traffic to a server destination that is
geographically closer to the client. They might redirect traffic to servers
outside the client’s geographic zone only in case of server failure.

DNS load balancing

DNS load balancing operates at the name-resolution step and uses the Domain
Name System to distribute requests. When a user
attempts to connect to a service, the DNS server will return the IP address of
one of the multiple servers based on various factors, such as the
geographical location of the client, the health of the servers, and load
balancing policies. DNS load balancing is a simple way to distribute traffic
across servers globally and can be used to direct users to the nearest or least
busy server, improving response times and spreading the load.
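
On AWS, one way to implement DNS load balancing is with Route 53 weighted records,
which return different server addresses in a chosen ratio. The boto3 sketch below is a
hedged example; the hosted zone ID, record name, and IP addresses are placeholders:

import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, ip, weight):
    # Build one weighted A record pointing at a server IP
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",      # placeholder record name
            "Type": "A",
            "SetIdentifier": identifier,    # distinguishes the weighted entries
            "Weight": weight,               # relative share of DNS answers
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",      # placeholder hosted zone ID
    ChangeBatch={"Changes": [
        weighted_record("server-a", "203.0.113.10", 70),
        weighted_record("server-b", "203.0.113.20", 30),
    ]},
)
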
b) Explain the different types of cloud deployment models.
4 a) What is disaster recovery in cloud? How is disaster management done
in GIDC in Nepal?
The term cloud disaster recovery (cloud DR) refers to the strategies and
services enterprises apply for the purpose of backing up applications,
resources, and data into a cloud environment. Cloud DR helps protect
corporate resources and ensure business continuity.

b) What is cloud migration? What are the seven cloud migration strategies?
Explain.
Cloud migration refers to the process of moving digital business operations
into the cloud. This process often involves transferring data, applications, and
IT processes from some or all of an organization's existing on-premises
infrastructure to a cloud-based infrastructure. The goal of cloud migration is
to host applications and data in the most effective IT environment possible,
based on factors such as cost, performance, and security.
There are seven commonly recognized cloud migration strategies, often
referred to as the "7 Rs of Cloud Migration." These strategies provide various
approaches for moving applications and data to the cloud, each with its own
use cases and benefits.

❖ Rehosting (Lift and Shift): This involves moving applications and data to
the cloud without making changes. It's the fastest method, as it involves
simply lifting the existing infrastructure and shifting it to a cloud
environment. It's often used when companies want to quickly migrate
to the cloud without the need for immediate transformation.
❖ Replatforming (Lift, Tinker, and Shift): This strategy involves making a
few cloud optimizations to realize a benefit without changing the core
architecture of the application. For example, you might adjust the way
an application interacts with the database to leverage cloud-native
features without redesigning the application.
❖ Repurchasing (Drop and Shop): Moving to a different product. This often
involves moving from a traditional on-premises license to a cloud-based
application, such as moving from a self-managed database to a
database-as-a-service (DBaaS) platform or switching to Software-as-a-
Service (SaaS) products.
❖ Refactoring / Rearchitecting: This is the most complex strategy,
involving reimagining how an application is architected and developed,
typically using cloud-native technologies. This is often driven by a need
to add features, scale, or performance that would be difficult to achieve
in the application's existing environment.
❖ Retire: Identifying IT assets that are no longer useful and can be turned
off. This helps to streamline and optimize the infrastructure by
removing unnecessary elements before or after migrating to the cloud.
❖ Retain (or Revisit): Some applications might not be ready for migration,
or it might not make business sense to migrate them at the current
time. In this case, the decision is to keep them on-premises until there
is a clear business case or technical feasibility for migration.
❖ Relocate: For certain types of cloud, especially in the context of
VMware Cloud on AWS, it's possible to relocate entire virtual machines
(VMs) to the cloud. This is similar to rehosting but specific to VMs in
environments that support this seamless transition.

5 a) Explain the different types of security challenges in cloud
computing.
In our technology-driven world, security in the cloud is an issue that
should be discussed at the board level. These challenges are:

1. DDoS attacks: A DDoS attack is designed to overwhelm website servers
so they can no longer respond to legitimate user requests. If a DDoS attack
is successful, it renders a website useless for hours, or even days. This
can result in a loss of revenue, customer trust and brand authority.
2. Data breaches: Traditionally, IT professionals have had great control
over the network infrastructure and physical hardware (firewalls, etc.)
securing proprietary data. In the cloud (in private, public and hybrid
scenarios), some of those controls are relinquished to a trusted partner.
Choosing the right vendor, with a strong record of security, is vital to
overcoming this challenge.
3. Data loss: When business critical information is moved into the cloud,
it’s understandable to be concerned with its security. Losing data from
the cloud, either through accidental deletion, malicious tampering, or an
act of nature that brings down a cloud service provider, could be
disastrous for an enterprise business. Often a DDoS attack is only a
diversion for a greater threat, such as an attempt to steal or delete data.
4. Insecure access points: A behavioural web application firewall
examines HTTP requests to a website to ensure it is legitimate traffic.
This always-on device helps protect web applications from security
breaches.
5. Notifications and alerts: Awareness and proper communication of
security threats is a cornerstone of network security and the same goes
for cloud security. Alerting the appropriate website or application
managers as soon as a threat is identified should be part of a thorough
security plan. Speedy mitigation of a threat relies on clear and prompt
communication so that steps can be taken by the proper entities and the
impact of the threat minimized.

b) Define regions and availability zones in AWS cloud. What are the best
practices while choosing regions?

In Amazon Web Services (AWS), regions and availability zones are part of the
cloud provider's global infrastructure that allows for the deployment and
management of AWS services across different geographical locations.
❖ Regions: These are specific geographical locations around the world
where AWS clusters its data centers. Each AWS Region consists of
multiple, isolated, and physically separate locations known as
Availability Zones. Regions are independent of one another and
provide redundancy, fault tolerance, and lower latency by allowing
customers to host applications closer to their end-users. Examples of
AWS Regions include US East (N. Virginia), EU (Ireland), and Asia
Pacific (Sydney).
❖ Availability Zones (AZs): Each Region is made up of multiple Availability
Zones, which are distinct locations within a region that are engineered
to be isolated from failures in other AZs. They offer the ability to
operate production applications and databases that are more highly
available, fault-tolerant, and scalable than would be possible from a
single data center. Each Availability Zone has its own power, cooling,
and physical security, and is connected through redundant, ultra-low-
latency networks to other Availability Zones in the same Region.

By utilizing multiple Availability Zones, AWS users can ensure that their
applications are resilient to issues affecting a single location, thereby
improving their overall stability and uptime.
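
A quick way to see this structure is to list the available Regions and then the
Availability Zones inside one of them. A minimal boto3 sketch, assuming configured
AWS credentials (the regions used are examples):

import boto3

# List every Region visible to the account
ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print("Regions:", regions)

# List the Availability Zones inside a single Region (Sydney, as an example)
ec2_syd = boto3.client("ec2", region_name="ap-southeast-2")
for az in ec2_syd.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], "-", az["State"])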

Evaluating Regions for deployment

There are four main factors that play into evaluating each AWS Region for a
workload deployment:

I. Compliance. If your workload contains data that is bound by local
regulations, then selecting the Region that complies with the regulation
overrides other evaluation factors. This applies to workloads that are
bound by data residency laws where choosing an AWS Region located
in that country is mandatory.
II. Latency. A major factor to consider for user experience is latency.
Reduced network latency can make a substantial impact on enhancing
the user experience. Choosing an AWS Region in close proximity to
your user base location can achieve lower network latency. It can also
increase communication quality, given that network packets have
fewer exchange points to travel through.
III. Cost. AWS services are priced differently from one Region to another.
Some Regions have lower cost than others, which can result in a cost
reduction for the same deployment.
IV. Services and features. Newer services and features are deployed to
Regions gradually. Although all AWS Regions have the same service
level agreement (SLA), some larger Regions are usually first to offer
newer services, features, and software releases. Smaller Regions may
not get these services or features in time for you to use them to support
your workload.

Evaluating all these factors can make coming to a decision complicated. This
is where your priorities as a business should influence the decision.

6 a) You have an application running on your Amazon EC2 instance. You want
to reduce the load on your instance as soon as the CPU utilization
reaches 100 percent. How will you do that?

To reduce the load on your Amazon EC2 instance when the CPU utilization
reaches 100 percent, you can use a combination of Amazon CloudWatch,
Amazon EC2 Auto Scaling, and Elastic Load Balancing. Here's a step-by-step
approach on how you can set this up:

❖ Monitor CPU Utilization with Amazon CloudWatch:


➢ First, ensure that you have detailed monitoring enabled for your
EC2 instance. Amazon CloudWatch provides monitoring for AWS
cloud resources and the applications you run on AWS. You can
use it to collect and track metrics, which you can use to
automate the scaling of your EC2 instances.
➢ Create a CloudWatch alarm that monitors the CPU utilization of
your EC2 instance. You can set the alarm to trigger when the CPU
utilization reaches 100 percent.
❖ Set up Amazon EC2 Auto Scaling:
➢ Create an Auto Scaling group for your EC2 instance. An Auto
Scaling group contains a collection of EC2 instances that are
treated as a logical grouping for the purposes of automatic
scaling and management.
➢ Configure the Auto Scaling group to automatically launch or
terminate instances based on demand or defined conditions,
such as the CloudWatch alarm you set up for CPU utilization.
❖ Configure Scaling Policies:
➢ Define a scaling policy for your Auto Scaling group. This policy
will specify what actions to take (e.g., launch new instances)
when the CloudWatch alarm condition is met (CPU utilization
reaches 100 percent).
➢ You can create a policy that automatically increases the number
of EC2 instances in your Auto Scaling group when the alarm state
is triggered. This will help in distributing the load and reducing
the CPU utilization across instances.
❖ Use Elastic Load Balancing (ELB):
➢ Set up an Elastic Load Balancer to distribute incoming
application traffic across multiple EC2 instances in your Auto
Scaling group. This helps ensure that no single instance bears too
much load.
➢ The ELB will automatically distribute incoming traffic across all
healthy instances in the Auto Scaling group, helping to reduce
the load on any single instance.
❖ Testing and Adjustment:
➢ After setting up, it's important to test your configuration to
ensure that the scaling policies and CloudWatch alarms work as
expected. You may need to adjust thresholds and policies based
on the observed behavior to fine-tune performance and cost.

By implementing these steps, you can automate the process of scaling your
EC2 instances based on CPU utilization, ensuring that your application can
handle increased load without degradation in performance.
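
The core of this setup, a scaling policy triggered by a CloudWatch CPU alarm on the
Auto Scaling group, can be sketched in boto3 roughly as follows. The group name,
period, and thresholds are example values, and the Auto Scaling group and load
balancer are assumed to exist already:

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

ASG_NAME = "web-asg"  # placeholder Auto Scaling group name

# Simple scaling policy: add one instance each time the alarm fires
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-out-on-high-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Alarm: average CPU of the group at 100 percent for two consecutive minutes
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-cpu-maxed",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    Threshold=100.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[policy["PolicyARN"]],  # fire the scale-out policy
)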

b) Which of the following options will be ready to use on the EC2 instance
as soon as it is launched?
a. Elastic IP
b. Private IP
c. Public IP
d. Internet Gateway

When you launch an Amazon EC2 instance, it is immediately assigned a
Private IP address from the Amazon VPC (Virtual Private Cloud) IP address
range. This Private IP address allows the instance to communicate with other
instances within the same VPC. Therefore, the Private IP is ready to use as
soon as the EC2 instance is launched.

A Public IP address can also be assigned to your EC2 instance at launch if it's
being launched into a public subnet in your VPC and the subnet's Public IP
address setting is enabled. This Public IP address enables the EC2 instance to
communicate with the internet. However, it's worth noting that the Public IP
address is dynamic; it changes every time the instance is stopped and
restarted unless you assign an Elastic IP address.

An Elastic IP (EIP) is a static IPv4 address offered by AWS for dynamic cloud
computing. While you can allocate an Elastic IP to your account and associate
it with an instance, it requires manual action to allocate and associate with
the instance after it has been launched. It's not automatically ready as soon
as an EC2 instance is launched but can be quickly associated with an instance
afterward.

An Internet Gateway allows communication between instances in your VPC
and the internet. It must be attached to your VPC, but it's not directly
associated with any single instance. Instead, it serves the entire VPC to
enable instances within the VPC to access the internet, and vice versa,
provided the instances have Public IP addresses or Elastic IP addresses. Like
the Elastic IP, setting up an Internet Gateway involves manual steps in the
VPC configuration and is not inherently "ready to use" on an individual EC2
instance basis upon launch.

In summary, both a Private IP and potentially a Public IP (under the right
conditions) are ready to use as soon as an EC2 instance is launched. Elastic
IPs and Internet Gateways require additional manual configuration steps
before they can be used with an instance.
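
For completeness, the manual Elastic IP steps look roughly like the boto3 sketch below;
the instance ID is a placeholder and configured AWS credentials are assumed:

import boto3

ec2 = boto3.client("ec2")

# Allocate a new Elastic IP in the account (not attached to anything yet)
eip = ec2.allocate_address(Domain="vpc")
print("Allocated:", eip["PublicIp"])

# Explicitly associate it with a running instance; this is the manual step
# that does not happen automatically at launch.
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    AllocationId=eip["AllocationId"],
)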

c) Your organization has around 50 IAM users. Now, it wants to introduce
a new policy that will affect the access permissions of an IAM user. How
can it implement this without having to apply the policy at the individual
user level?

Implementing a new policy across multiple IAM users without applying it at
the individual user level can be efficiently managed by using IAM groups or
roles in AWS (Amazon Web Services). Here’s a step-by-step approach:

1. Use IAM Groups

Groups in IAM allow you to specify permissions for multiple users, which can
make it easier to manage the permissions for those users. Here’s how to
implement a policy using IAM groups (a short boto3 sketch follows these steps):
❖ Create an IAM Group: Start by creating a new IAM group that
represents the common access level or role that the users will share.
❖ Attach Policy to the Group: Create the new policy that outlines the
access permissions you want to apply. Then, attach this policy to the
group you created. This allows all members of the group to inherit the
permissions from the policy.
❖ Add Users to the Group: Add the IAM users to this group. Any user in
this group will automatically receive the permissions assigned to the
group.
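
A minimal boto3 sketch of these three steps, using an example group name, an AWS
managed policy as the attached policy, and placeholder user names:

import boto3

iam = boto3.client("iam")

GROUP = "developers"  # example group name

# 1. Create the group
iam.create_group(GroupName=GROUP)

# 2. Attach the policy that carries the new permissions
iam.attach_group_policy(
    GroupName=GROUP,
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # example managed policy
)

# 3. Add the existing users; each member inherits the group's permissions
for user in ["user-01", "user-02", "user-03"]:  # placeholder user names
    iam.add_user_to_group(GroupName=GROUP, UserName=user)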

2. Use IAM Roles


For scenarios where the new policy is intended for users assuming specific
roles (perhaps for cross-account access or accessing specific AWS services),
you can use IAM roles.

❖ Create an IAM Role: Define a role that encompasses the permissions
the policy should grant. This includes specifying a trust policy that
defines who can assume the role.
❖ Attach Policy to the Role: Attach the new policy to the role. This policy
specifies what actions are allowed or denied when the role is
assumed.
❖ Allow Users to Assume the Role: Modify the users' permissions to
allow them to assume the newly created role. This can be done by
attaching a policy to the user(s) that grants the sts:AssumeRole
permission for the role you created (see the sketch after these steps).
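
A hedged sketch of that last step is shown below. To keep the change away from
individual users, the sts:AssumeRole permission is attached to a group here via an
inline policy; the group name, account ID, and role name are placeholders:

import boto3
import json

iam = boto3.client("iam")

# Inline policy that lets members of the group assume one specific role
assume_role_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sts:AssumeRole",
        "Resource": "arn:aws:iam::123456789012:role/audit-role",  # placeholder role ARN
    }],
}

iam.put_group_policy(
    GroupName="auditors",                      # placeholder group of users
    PolicyName="allow-assume-audit-role",
    PolicyDocument=json.dumps(assume_role_policy),
)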

Best Practices
❖ Least Privilege Principle: Always follow the principle of least privilege
by granting only the permissions necessary to perform a task.
❖ Regularly Review and Update: Periodically review your IAM policies
and group memberships to ensure they still align with your
organization's requirements.
❖ Use Managed Policies: Whenever possible, use AWS managed policies
for common permission sets, as they are maintained by AWS and
automatically updated.

By using groups or roles, you can manage access permissions more efficiently
and ensure that changes in policies do not require individual updates to each
user, saving time and reducing the risk of errors.
7 Write short notes on: (Any two)
a) AMI
b) Grid computing vs Cloud computing
c) Security Groups
d) Inbound and Outbound traffic

AMI
An Amazon Machine Image (AMI) is a template that contains a software configuration
(for example, an operating system, an application server, and applications). Within
Amazon Web Services (AWS), AMIs are used to create virtual machines (VMs), known
as instances. You can launch instances from as many different AMIs as you need. AMIs
are a fundamental component of cloud computing in AWS, allowing users to spin up
instances quickly with preconfigured settings, thereby simplifying the process of
scaling and managing applications in the cloud.

Grid Computing vs Cloud Computing

❖ Grid Computing: This is a form of distributed computing where a virtual
supercomputer is composed of a cluster of networked, loosely coupled
computers acting in concert to perform very large tasks. Grid computing
focuses on networks to solve complex problems with the aim of achieving
higher computational speeds.
❖ Cloud Computing: Unlike grid computing, cloud computing involves delivering
various services over the internet, including data storage, servers, databases,
networking, and software, among others. Cloud computing services are
designed to provide easy, scalable access to applications, resources, and
services, and are fully managed by a cloud services provider. It emphasizes
flexibility, scalability, and ease of use.

The key difference lies in their architecture and purpose: grid computing is about
harnessing unused processing cycles of all computers in a network for solving
problems too intensive for any stand-alone machine, whereas cloud computing is
about delivering services over the internet with scalability and flexibility.

Feature-by-feature comparison:

❖ Resource Ownership:
➢ Grid Computing: Resources are owned by different organizations or individuals in a
distributed network.
➢ Cloud Computing: Resources are owned and managed by a centralized cloud service
provider.
❖ Dynamic Scaling:
➢ Grid Computing: Typically involves a fixed set of resources.
➢ Cloud Computing: Allows dynamic scaling of resources based on demand.
❖ Resource Provisioning:
➢ Grid Computing: Resources are provisioned within a grid but may require manual
intervention.
➢ Cloud Computing: Resources are provisioned on demand, often abstracted from the
physical infrastructure, and managed automatically by the cloud provider.
❖ Purpose:
➢ Grid Computing: Primarily used for scientific, research, or engineering applications
requiring massive computational power.
➢ Cloud Computing: A versatile platform catering to a wide range of applications, from
hosting to business services.
❖ Task Allocation:
➢ Grid Computing: Tasks are divided into smaller sub-tasks and distributed across the
grid; each node works independently on its assigned sub-task.
➢ Cloud Computing: Resources are dynamically allocated and de-allocated based on
demand; cloud platforms manage the underlying infrastructure, and users deploy
applications in virtual machines or containers.
❖ Examples:
➢ Grid Computing: SETI@home, Folding@home, scientific simulations.
➢ Cloud Computing: Amazon Web Services (AWS), Microsoft Azure, Google Cloud
Platform (GCP).

Security Groups
In the context of cloud computing, particularly within AWS, Security Groups act as a
virtual firewall for your instances to control inbound and outbound traffic. Security
groups are associated with EC2 instances and provide security at the protocol and
port access level. Each security group consists of a set of rules that filter traffic coming
into and out of an EC2 instance. These rules can be configured to allow traffic from
specific IP addresses, port numbers, and protocols.
Inbound and Outbound Traffic

❖ Inbound Traffic: This refers to the network traffic that originates from outside
the network's boundaries and reaches the services within the network. In
terms of security groups, inbound rules define the incoming traffic that is
allowed to reach the instances.
❖ Outbound Traffic: Conversely, outbound traffic refers to the network traffic
that originates from within the network or a specific instance and is destined
for the outside of the network. Outbound rules in security groups define the
traffic that is allowed to leave the instances.

Security groups in AWS by default allow all outbound traffic and disallow all inbound
traffic. Users must explicitly set rules to allow inbound traffic to their instances,
thereby providing a customizable security mechanism to control access to instances
based on the users' specific requirements.
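
For example, opening a single inbound port while leaving the default outbound behavior
untouched might look like the boto3 sketch below; the VPC ID is a placeholder:

import boto3

ec2 = boto3.client("ec2")

# Create a security group in a placeholder VPC
sg = ec2.create_security_group(
    GroupName="web-sg",
    Description="Allow inbound HTTPS only",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

# Add one inbound (ingress) rule: HTTPS from anywhere.
# No egress rules are touched, so the default "allow all outbound" stays in place.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)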

Security groups
Security groups in cloud computing are a fundamental component used to control
access to resources in a cloud environment. They act as a virtual firewall for your
servers (instances) and define which inbound and outbound traffic is allowed to or
from them. Here's a closer look at how they function:

Purpose and Functionality

❖ Traffic Control: Security groups are used to control both inbound (ingress) and
outbound (egress) network traffic to cloud resources such as virtual machines
or instances. They define the rules for allowing or denying network traffic
based on IP addresses, port numbers, and protocols (TCP, UDP, ICMP, etc.).
❖ Stateful Inspection: Most cloud providers implement security groups as stateful
firewalls. This means that if incoming traffic is allowed, the responses to this
traffic are automatically allowed to flow out, regardless of outbound rules (and
vice versa for outbound initiated traffic).
❖ Default Policies: By default, security groups tend to deny all inbound traffic and
allow all outbound traffic. Users can then specify rules to allow specific types of
inbound traffic as needed.
❖ Instance Level: Security groups operate at the instance level, not the subnet
level. This allows for granular control over access to each instance within a
subnet.
❖ No Overlapping: Unlike network ACLs (Access Control Lists), which can have
allow and deny rules, security groups only have allow rules. Traffic that does
not match any allow rule is automatically denied.
❖ Elasticity: Security groups are elastic, meaning that you can change the rules at
any time. Changes are automatically applied to all instances associated with the
security group.

Application in Cloud Environments

❖ Amazon Web Services (AWS): Security groups in AWS EC2 (Elastic Compute
Cloud) are used to control access to instances. Each instance can be associated
with multiple security groups, and each security group can be associated with
multiple instances.
❖ Microsoft Azure: Azure uses Network Security Groups (NSGs) to filter network
traffic to and from Azure resources in an Azure Virtual Network (VNet).
❖ Google Cloud Platform (GCP): GCP utilizes firewall rules within its Virtual
Private Cloud (VPC) to control inbound and outbound traffic to instances.

Best Practices

❖ Principle of Least Privilege: Apply the principle of least privilege by allowing
only necessary traffic to and from your instances.
❖ Regular Reviews and Updates: Regularly review and update security group
rules to ensure they align with the current requirements and do not allow
unnecessary traffic.
❖ Separation of Duties: Use different security groups for different roles within
your environment (e.g., web servers, database servers) to minimize the risk of
unauthorized access.
❖ Logging and Monitoring: Enable logging and monitoring to track the
effectiveness of your security group rules and detect any unauthorized access
attempts.

Security groups are a critical part of securing cloud environments. They provide a
flexible and powerful tool for managing access to resources, ensuring that only
authorized traffic can reach your instances or services.

Inbound and Outbound


In cloud computing, inbound and outbound traffic refers to the flow of data into and
out of a cloud environment. This concept is crucial for understanding how data and
services are accessed and delivered in cloud-based infrastructures. Let's break down
both terms for a clearer understanding:

Inbound Traffic
Inbound traffic, sometimes known as "ingress traffic," refers to all the data that enters
a cloud environment from external sources. This can include:

❖ Requests from clients or users to access web applications or services hosted in
the cloud.
❖ Data being uploaded to cloud storage services from user devices or from other
cloud services.
❖ Synchronization data from on-premises databases to cloud databases.
❖ API calls made to cloud services from external applications.

Inbound traffic is crucial for cloud services that provide interactive platforms, web
hosting, data processing, and storage solutions. It is managed and monitored to
ensure security, as it can be a vector for attacks, and to optimize performance and
resource allocation.

Outbound Traffic
Outbound traffic, or "egress traffic," consists of all the data that leaves a cloud
environment to reach external destinations. This includes:

❖ Data sent from cloud-hosted applications or services to client devices or
external servers.
❖ Emails sent from cloud-based email servers.
❖ Data and files being shared from cloud storage to users or other cloud services.
❖ Responses to API calls made by external applications to services hosted in the
cloud.

Outbound traffic is significant for delivering content and services to users and for
interactions between cloud services and external systems. Monitoring and managing
outbound traffic is essential for cost control, especially in cloud platforms where
egress traffic can incur costs, and for ensuring data security and compliance with
regulations.
Management and Security
Both inbound and outbound traffic need to be carefully managed and secured to
protect cloud resources from unauthorized access and ensure the integrity of data.
This involves:

❖ Implementing firewalls and access control lists (ACLs) to regulate traffic.


❖ Using encryption to protect data in transit.
❖ Monitoring traffic patterns for unusual activity that could indicate a security
threat.
❖ Optimizing traffic flows to ensure efficient use of bandwidth and resources.

Cloud service providers typically offer tools and services to help manage and secure
both inbound and outbound traffic, including network security groups, load balancers,
and traffic monitoring solutions.

Understanding and effectively managing inbound and outbound traffic is fundamental
to operating in the cloud, enabling organizations to leverage cloud computing's
scalability, flexibility, and efficiency while maintaining security and compliance.

Explain how cloud computing environments save energy.


Cloud computing environments can save energy in several ways, contributing both to
operational cost savings and environmental sustainability. Here's how they achieve
this:
❖ Efficient Resource Utilization: Cloud computing allows for the pooling of computing
resources, such as storage, memory, and processing power, which can be dynamically
allocated to meet demand. This means that resources are not wasted on underutilized
infrastructure. Efficiency is further enhanced through virtualization technology, which
allows multiple virtual machines to run on a single physical server, maximizing the
utilization of the underlying hardware.
❖ Economies of Scale: Large cloud providers operate at a scale that allows them to
achieve significant economies of scale. They can invest in the most energy-efficient
hardware and cooling technologies, which might not be economically viable for
smaller operations. Their data centers are designed to optimize airflow and use
advanced cooling methods, reducing the amount of energy needed for cooling.
❖ Renewable Energy Sources: Many cloud providers are investing in renewable energy
sources, such as wind and solar, to power their data centers. By increasing the use of
renewable energy, they reduce dependence on fossil fuels, thereby lowering the
carbon footprint associated with cloud computing operations.
❖ Demand-Based Scaling: Cloud environments can automatically scale computing
resources up or down based on real-time demand. This dynamic scaling ensures that
only the necessary amount of resources is consumed at any given time, avoiding the
energy waste associated with maintaining idle resources.
❖ Data Center Consolidation: Organizations moving their operations to the cloud can
reduce or eliminate their own data center facilities. This consolidation leads to a
reduction in the overall energy consumption since cloud providers can manage
resources more efficiently than smaller, private data centers.
❖ Advanced Cooling Techniques: Cloud providers often employ advanced cooling
techniques and optimize the location of their data centers to take advantage of
natural cooling opportunities (e.g., locating data centers in cooler climates or using
outside air for cooling), which significantly reduces energy consumption compared to
traditional cooling methods.
❖ Optimized Workload Placement: Intelligent algorithms can place workloads in data
centers where energy is less expensive and more sustainable. For instance, non-time-
sensitive tasks can be scheduled to run when renewable energy sources are more
abundant or when overall energy demand is lower.
Green Computing
Green computing is the eco-friendly use of computers and their resources. It is also
defined as the study and practice of designing, engineering, manufacturing, and
disposing of computing resources with minimal environmental damage.

Figure – Green Cloud Architecture


Green cloud computing means using internet computing services from a service provider
that has taken measures to reduce their environmental effect; in other words, green
cloud computing is cloud computing with less environmental impact.
Some measures taken by internet service providers to make their services greener are:
1. Use renewable energy sources.
2. Make the data center more energy efficient, for example by improving its power
usage effectiveness (PUE).
3. Reuse waste heat from computer servers (e.g. to heat nearby buildings).
4. Make sure that all hardware is properly recycled at the end of its life.
5. Use hardware that has a long lifespan and contains little to no toxic materials.

Difference Between Utility Computing and Cloud Computing

Here is the difference between cloud computing and utility computing, feature by feature.

❖ Cost Model:
➢ Cloud Computing: Users pay for the resources they consume through a subscription
method.
➢ Utility Computing: It follows a consumption-based pricing model, where users get
billed based on their consumption.
❖ Flexibility:
➢ Cloud Computing: Cloud computing offers flexibility in resource allocation, platform
choices, and deployment options. It allows users to choose from various cloud
services and providers.
➢ Utility Computing: Users have agility in how they can access and use the resources.
It eliminates initial hardware investments, running costs, and much more.
❖ Resource Ownership:
➢ Cloud Computing: The cloud service provider owns and manages the infrastructure,
data centers, servers, and networking equipment.
➢ Utility Computing: It is agile, and a user can scale the resources on demand. It
reduces the need to keep requesting that the service provider add more resources.
This approach can help businesses scale their operations and thrive.
❖ Reliability:
➢ Cloud Computing: It typically offers high reliability and availability through
redundant data centers and SLAs provided by the cloud service provider.
➢ Utility Computing: Reliability depends on the infrastructure and systems owned and
managed by the businesses.
❖ Scalability:
➢ Cloud Computing: It provides on-demand scalability, allowing businesses to quickly
scale up or down based on their needs.
➢ Utility Computing: It offers scalability based on actual usage, allowing businesses
to scale resources dynamically based on their consumption requirements.
❖ Examples:
➢ Cloud Computing: IBM Cloud, GCP, AWS.
➢ Utility Computing: IBM Utility Computing, Oracle Utility Computing.



What do you mean by on-demand provisioning in cloud computing? Explain and list out
its benefits.

On-demand provisioning in cloud computing refers to the ability of cloud services to
automatically allocate computing resources, such as processing power, storage, and
network capacity, as needed without requiring human intervention. This model allows
users to access and scale resources in real-time or near-real-time, depending on their
current requirements. The on-demand nature of cloud computing contrasts with
traditional computing models, where resources had to be procured and allocated in
advance, often leading to either resource scarcity or wastage.
Benefits of On-Demand Provisioning
❖ Cost Efficiency: Users pay only for the resources they consume, avoiding the need to
invest in hardware and software that might not be fully utilized. This pay-as-you-go
model can significantly reduce capital expenditure (CapEx) and operational
expenditure (OpEx).
❖ Scalability: Organizations can easily scale their IT infrastructure up or down based on
demand. This flexibility is particularly beneficial for businesses experiencing fluctuating
workloads or rapid growth, as it allows them to adjust their resource consumption
without the need for costly and time-consuming hardware upgrades or downgrades.
❖ Reduced Time to Market: On-demand provisioning enables faster deployment of
applications and services. Since the necessary computing resources can be accessed
immediately, organizations can develop, test, and launch applications more quickly,
giving them a competitive edge.
❖ Improved Business Continuity: The ability to rapidly provision resources on-demand
enhances an organization’s ability to respond to issues and maintain operations during
unexpected surges in demand or in the event of failures. This resilience contributes to
better business continuity planning and disaster recovery strategies.
❖ Focus on Core Business Functions: By outsourcing the management of IT
infrastructure to cloud providers, organizations can focus more on their core business
activities rather than on managing hardware and software. This shift can lead to
improved productivity and innovation.
❖ Enhanced Flexibility and Agility: On-demand provisioning provides businesses with the
agility to respond quickly to market changes. They can experiment with new ideas and
technologies with minimal risk, as they are not committed to long-term investments in
specific hardware or software configurations.
❖ Global Reach: Cloud providers often have data centers spread across various
geographical locations, allowing organizations to deploy services closer to their end-
users. This global reach, enabled by on-demand provisioning, can improve application
performance and user experience by reducing latency.
❖ Environmental Sustainability: By optimizing resource usage and reducing the need for
physical infrastructure, on-demand provisioning contributes to more sustainable
computing practices. It leads to lower energy consumption and reduced carbon
emissions compared to traditional data center models.
On-demand provisioning epitomizes the flexibility and efficiency that cloud computing
offers, enabling organizations to adapt to changing demands quickly while controlling
costs and reducing the complexity of managing IT infrastructure.

Q. What is AMI in EC2? Explain the different types of AMIs available in AWS.

An Amazon Machine Image (AMI) is a template that contains a software configuration
(for example, an operating system, an application server, and applications). From an
AMI, you launch an instance, which is a copy of the AMI running as a virtual server in
the cloud. You can launch multiple instances from a single AMI.

Your instances keep running until you stop, hibernate, or terminate them, or until
they fail. If an instance fails, you can launch a new one from the AMI.

There aren't specific "types" of AMIs in the traditional sense; however, you can
categorize them based on several characteristics:
1. By Storage for the Root Device:
❖ EBS-backed AMIs: These are the most common type, where the root device
(the primary storage for the operating system) is stored on an Amazon Elastic
Block Store (EBS) volume. This offers advantages like persistence (data survives
instance termination) and flexibility (you can attach different EBS volumes).
❖ Instance store-backed AMIs: Less common, these use temporary storage on
the instance itself for the root device. This is faster for launching but data is lost
upon instance termination.
2. By Operating System:
❖ Linux: By far the most common option, with various distributions like Ubuntu,
Amazon Linux, Red Hat Enterprise Linux (RHEL), CentOS, etc. available.
❖ Windows: Less common, but available for various Windows Server versions.
❖ Other: Specific operating systems like macOS might be available in community
AMIs (not officially supported by AWS).
3. By Architecture:
❖ 32-bit: Older architecture, used less frequently unless there are specific
software compatibility requirements.
❖ 64-bit: The prevailing standard, offering better performance and memory
addressing capabilities.
4. By Launch Permissions:
❖ Public AMIs: Available for anyone to launch within a specific region.
❖ Private AMIs: Only accessible to accounts explicitly granted launch permissions
by the owner.
❖ AWS Marketplace AMIs: Paid AMIs created and shared by other users or
vendors, offering pre-configured environments for specific purposes.
5. By Other Characteristics:
❖ Virtualization type: Hardware Virtual Machine (HVM) or Para-virtualization (PV)
for specific use cases.
❖ Boot mode: UEFI or legacy BIOS, depending on instance type compatibility.
❖ Source: Official AWS AMIs, Community AMIs, or custom-created AMIs.
❖ Region and Zones

Remember, choosing the right AMI depends on your specific needs and requirements.
Consider factors like the desired operating system, storage needs, performance
profile, and security considerations when making your selection.
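
These characteristics map directly onto the filters used when searching for an AMI. A
minimal boto3 sketch, where the owner, filters, and name pattern are only example
values:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find Amazon-owned, EBS-backed, 64-bit HVM images matching an example name pattern
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[
        {"Name": "architecture", "Values": ["x86_64"]},
        {"Name": "root-device-type", "Values": ["ebs"]},
        {"Name": "virtualization-type", "Values": ["hvm"]},
        {"Name": "name", "Values": ["al2023-ami-*"]},  # example name pattern
    ],
)["Images"]

# Show the newest few matches
for image in sorted(images, key=lambda i: i["CreationDate"], reverse=True)[:5]:
    print(image["ImageId"], image["Name"])
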
What is cloud data security?
Cloud data security is the practice of protecting data and other digital information
assets from security threats, human error, and insider threats. It leverages technology,
policies, and processes to keep your data confidential and still accessible to those who
need it in cloud-based environments.
What it protects:
❖ Data: This includes sensitive information like customer details, financial
records, intellectual property, and more.
❖ Applications: These are the programs you use in the cloud, each with its own
security needs.
❖ Infrastructure: This covers the underlying servers, networks, and storage
systems that power the cloud.

Key principles:
❖ Data Confidentiality: Keeping data private and accessible only to authorized
users.
❖ Data Integrity: Ensuring data remains accurate and unaltered.
❖ Data Availability: Making sure data is accessible when needed.

Threats it addresses:
❖ Unauthorized access: Hackers trying to steal or exploit your data.
❖ Data breaches: Accidental or intentional leaks of sensitive information.
❖ Malware: Malicious software that can damage or steal data.
❖ Human error: Mistakes made by employees or users.
❖ Natural disasters: Floods, fires, and other events that can disrupt cloud
services.

Who's responsible?
❖ Cloud providers: They secure their infrastructure and offer various security
features.
❖ Cloud users: You're responsible for configuring your settings, managing access,
and using the cloud securely.

Best practices:
❖ Choose a reputable cloud provider.
❖ Encrypt your data.
❖ Use strong passwords and multi-factor authentication.
❖ Implement robust access controls.
❖ Regularly back up your data.
❖ Stay informed about security threats and updates.

What are the challenges of cloud data security?

Common challenges with data protection in cloud or hybrid environments include:


❖ Lack of visibility. Companies don’t know where all their data and applications
live and what assets are in their inventory.
❖ Less control. Since data and apps are hosted on third-party infrastructure, they
have less control over how data is accessed and shared.
❖ Confusion over shared responsibility. Companies and cloud providers share
cloud security responsibilities, which can lead to gaps in coverage if duties and
tasks are not well understood or defined.
❖ Inconsistent coverage. Many businesses are finding multicloud and hybrid
cloud to better suit their business needs, but different providers offer varying
levels of coverage and capabilities that can deliver inconsistent protection.
❖ Growing cybersecurity threats. Cloud databases and cloud data storage make
ideal targets for online criminals looking for a big payday, especially as
companies are still educating themselves about data handling and
management in the cloud.
❖ Strict compliance requirements. Organizations are under pressure to comply
with stringent data protection and privacy regulations, which require enforcing
security policies across multiple environments and demonstrating strong data
governance.
❖ Distributed data storage. Storing data on international servers can deliver
lower latency and more flexibility. Still, it can also raise data sovereignty issues
that would not arise if you were operating in your own data center.

What are the benefits of cloud data security?


1. Greater visibility: Strong cloud data security measures allow you to maintain
visibility into the inner workings of your cloud, namely what data assets you
have and where they live, who is using your cloud services, and the kind of data
they are accessing.
2. Easy backups and recovery: Cloud data security can offer a number of
solutions and features to help automate and standardize backups, freeing your
teams from monitoring manual backups and troubleshooting problems. Cloud-
based disaster recovery also lets you restore and recover data and applications
in minutes.
3. Cloud data compliance: Robust cloud data security programs are designed to
meet compliance obligations, including knowing where data is stored, who can
access it, how it’s processed, and how it’s protected. Cloud data loss
prevention (DLP) can help you easily discover, classify, and de-identify sensitive
data to reduce the risk of violations.
4. Data encryption: Organizations need to be able to protect sensitive data
whenever and wherever it goes. Cloud service providers help you tackle secure
cloud data transfer, storage, and sharing by implementing several layers of
advanced encryption for securing cloud data, both in transit and at rest (a
short upload sketch follows this list).
5. Lower costs: Cloud data security reduces total cost of ownership (TCO) and the
administrative and management burden of cloud data security. In addition,
cloud providers offer the latest security features and tools, making it easier for
security professionals to do their jobs with automation, streamlined
integration, and continuous alerting.
6. Advanced incident detection and response: An advantage of cloud data
security is that providers invest in cutting-edge AI technologies and built-in
security analytics that help you automatically scan for suspicious activity to
identify and respond to security incidents quickly.
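
As a small illustration of encryption at rest (see point 4 above), the boto3 sketch
below uploads an object to S3 with server-side encryption using a KMS key; the bucket
name and key alias are placeholders:

import boto3

s3 = boto3.client("s3")

# Upload an object and ask S3 to encrypt it at rest with a KMS key
s3.put_object(
    Bucket="example-secure-bucket",        # placeholder bucket name
    Key="reports/2024/q1.csv",
    Body=b"sensitive,data\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-data-key",  # placeholder KMS key alias
)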

11. What is DevOps? Explain the DevOps lifecycle.

DevOps stands for development and operations. It’s a practice that aims at merging
development, quality assurance, and operations (deployment and integration) into a
single, continuous set of processes. This methodology is a natural extension of Agile
and continuous delivery approaches.
❖ Plan:
➢ In the planning phase, the team defines the goals and requirements for the software
development and deployment.
➢ Project timelines, resource allocation, and tasks are planned.
➢ Collaboration with stakeholders occurs to gather input and feedback, ensuring a clear
understanding of the project scope.
❖ Code:
➢ Developers write and commit code to a version control system (VCS) like Git.
➢ Code reviews are conducted to maintain code quality and adherence to coding
standards.
➢ Automation tools may be used for code analysis and formatting.
❖ Build:
➢ Continuous Integration (CI) tools automatically build and compile the code.
➢ Automated tests are executed to catch any issues early in the development process.
➢ Build artifacts, such as executable files, are generated.
❖ Test:
➢ Automated testing is performed to validate the functionality, performance, and
security of the application.
➢ Continuous Testing is employed to identify and fix defects as early as possible.
➢ Test environments are set up to mimic production conditions.
❖ Release:
➢ The release phase involves preparing the software for deployment.
➢ Activities may include final testing, documentation updates, and coordination with
stakeholders.
➢ This phase ensures that the software is ready for deployment to production.
❖ Deploy:
➢ Automated deployment tools are utilized to release the application to various
environments.
➢ Continuous Delivery (CD) practices aim to automate the deployment process, making
it reliable and repeatable.
➢ Rollback mechanisms are in place in case deployment issues arise.
❖ Operate:
➢ Operations teams monitor the application and infrastructure in the production
environment.
➢ Logging, monitoring, and alerting tools help identify and respond to issues quickly.
➢ Infrastructure as Code (IaC) may be used to manage and provision infrastructure in a
consistent manner.
❖ Monitor:
➢ Continuous monitoring provides insights into the performance, user experience, and
overall system health.
➢ Metrics and logs are analyzed to identify trends, potential issues, and areas for
improvement.
➢ Monitoring helps ensure that the deployed application meets performance and
reliability expectations.

19. Suppose you created a key in the North Virginia region to encrypt your data in the
Oregon region. You also added three users to the key and an external AWS account. Then, to
encrypt an object in S3, when you tried to use the same key, it was not listed. Where
did you go wrong?

20. Explain the characteristics and limitations of Cloud computing.


Characteristics:
1. On-demand self-service: Cloud computing services do not require human
administrators; users themselves can provision, monitor, and manage computing
resources as needed.
2. Broad network access: Computing services are delivered over standard networks
and are accessible from heterogeneous client devices.
3. Rapid elasticity: IT resources can scale out and scale in quickly, on an
as-needed basis. Capacity is provided whenever the user requires it and released
as soon as the requirement ends.
4. Resource pooling: IT resources (e.g., networks, servers, storage, applications,
and services) are pooled and shared across multiple applications and tenants in a
multi-tenant model, so multiple clients are served from the same physical resources.
5. Measured service: Resource utilization is tracked for each application and
tenant, giving both the user and the provider an account of what has been used.
This supports monitoring, billing, and effective use of resources (a minimal
billing sketch follows after this list).
6. Multi-tenancy: Cloud computing providers can support multiple tenants (users or
organizations) on a single set of shared resources.
7. Virtualization: Cloud computing providers use virtualization technology to
abstract the underlying hardware and present it as logical resources to users.
8. Resilient computing: Cloud computing services are typically designed with
redundancy and fault tolerance in mind, which ensures high availability and
reliability.
9. Flexible pricing models: Cloud providers offer a variety of pricing models,
including pay-per-use, subscription-based, and spot pricing, allowing users to
choose the option that best suits their needs.
10. Security: Cloud providers invest heavily in security measures to protect their
users' data and ensure the privacy of sensitive information.
11. Automation: Cloud computing services are often highly automated, allowing
users to deploy and manage resources with minimal manual intervention.
12. Sustainability: Cloud providers are increasingly focused on sustainable
practices, such as energy-efficient data centers and the use of renewable energy
sources, to reduce their environmental impact.
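To make the measured service and pay-per-use characteristics concrete, here is a minimal billing sketch in Python. All usage figures and unit prices are invented for illustration; real providers publish their own metering dimensions and rates.

```python
# Illustrative pay-per-use billing calculation. Every number below is made up
# for the example; it only shows how metered usage maps to a monthly bill.
metered_usage = {
    "compute_hours": 720,        # one small VM running for the whole month
    "storage_gb_months": 50,     # object storage held during the month
    "egress_gb": 10,             # data transferred out to the internet
}

unit_prices = {                  # hypothetical $ rates per unit
    "compute_hours": 0.012,
    "storage_gb_months": 0.023,
    "egress_gb": 0.09,
}

total = 0.0
for item, quantity in metered_usage.items():
    cost = quantity * unit_prices[item]
    total += cost
    print(f"{item:18s} {quantity:8.1f} x {unit_prices[item]:6.3f} = ${cost:7.2f}")

print(f"{'monthly total':18s} {'':19s} ${total:7.2f}")
```

Because usage is metered per tenant, the same mechanism also gives the provider the data it needs for capacity planning and for the flexible pricing models listed above.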

Limitations
A list of the disadvantages of cloud computing is given below:
1) Internet Connectivity
In cloud computing, all data (images, audio, video, etc.) is stored in the cloud and
accessed over an internet connection. Without reliable internet connectivity, this data
cannot be accessed, and there is no other way to reach it.
2) Vendor Lock-in
Vendor lock-in is one of the biggest disadvantages of cloud computing. Organizations
may face problems when migrating their services from one vendor to another. Because
different vendors provide different platforms, moving from one cloud to another can be
difficult.
3) Limited Control
Cloud infrastructure is completely owned, managed, and monitored by the service
provider, so cloud users have less control over the function and execution of services
within the cloud infrastructure.
4) Security
Although cloud service providers implement strong security standards to protect stored
information, adopting cloud technology means handing your organization's sensitive
information to a third party, the cloud computing service provider. While data is being
transferred to or held in the cloud, there is a risk that it could be compromised by
attackers.

21. You have an application running on your Amazon EC2 instance. You want to
reduce the load on your instance as soon as the CPU utilization reaches 100 percent.
How will you do that?
22. Which of the following options will be ready to use on the EC2 instance as soon as
it is launched? i. Elastic IP ii. Private IP iii. Public IP iv. Internet Gateway
23. Your organization has around 50 IAM users. Now, it wants to introduce a new
policy that will affect the access permissions of every IAM user. How can it implement
this without having to apply the policy at the individual user level?
24. What is data center? List and explain the features of GIDC, Nepal.
=> DATA CENTER
❖ A data center is a facility that houses computing equipment such as servers, routers,
switches, and firewalls, as well as supporting components such as backup equipment, fire
suppression systems, and air conditioning. A data center can be complex (a dedicated
building) or simple (an area or room that houses only a few servers). A data center can
also be private or shared.
❖ Data center components are frequently at the heart of an organization's information system
(IS). As a result, these important data center facilities often need a substantial investment in
supporting technologies such as air conditioning/climate control systems, fire
suppression/smoke detection, secure entrance and identification, and raised floors for
easier cabling and water damage avoidance.
❖ When data centers are shared, allowing virtual data center access to numerous
organizations and individuals frequently makes more sense than allowing full physical
access to multiple companies or persons. Shared data centers are often owned and
managed by a single organization that rents out data center partitions (virtual or physical)
to other client businesses. Client/leasing organizations are frequently smaller businesses
that lack the financial and technical capability needed for dedicated data center upkeep.
The leasing option enables smaller enterprises to benefit from professional data center
capabilities without incurring large capital expenditures.
❖ A data center is a location where an organization's IT operations and equipment are
centralized, and where data is stored, managed, and disseminated. Data centers hold a
network's most crucial systems and are vital to the day-to-day functioning of the network.
As a result, corporations prioritize the security and dependability of data centers and the
associated information.
❖ Although each data center is unique, data centers can typically be divided into two types:
internet-facing data centers and enterprise (or "internal") data centers. Internet-facing data
centers often host a small number of applications, are browser-based, and have a large
number of unknown users. Enterprise data centers, on the other hand, serve fewer users
but host more applications, ranging from off-the-shelf to bespoke applications.
❖ Data center designs and needs can vary greatly. A data center designed for a cloud service
provider, such as Amazon EC2, has facility, infrastructure, and security needs that differ
dramatically from a wholly private data center, such as one designed for the Pentagon to
secure sensitive data.
GOVERNMENT INTEGRATED DATA CENTER
❖ A data center is a centralized location for the storage, management, processing, and
exchange of data that exists within a specific enterprise or a specialized facility. In general,
data centers can be broken down into three types - Internet Datacenter (IDC), Storage area
network (SAN), and Enterprise data center (EDC).
❖ An Internet data center (IDC) is a facility that provides data and Internet services for other
companies; GIDC falls into this category.
❖ A Storage Area Network (SAN) is a network of interconnected storage devices and data
servers usually located within an enterprise data center or as an off-site facility offering
leased storage space.
❖ An Enterprise data center (EDC) is the central processing facility for an enterprise's
computer network.
Features of GIDC
❖ High-End Computing Infrastructure
❖ Storage Area Network (SAN)
❖ High-Speed Local Area Network
❖ Multi-Tier Security
❖ High-Speed Internet Connectivity
❖ 24*7*365 Help Desk
❖ Multi-level redundant power back-up
❖ Air Conditioning Management
❖ Infrastructure System
❖ Fire Detection & Control System
Infrastructure System
❖ Air-Circulation System: HVAC (Heating, Ventilating, and Air Conditioning)
❖ Security: Biometric Access Control System, Card Reader Access Control System, CCTV
❖ Facility Management System: Water Leakage Sensing
❖ Disaster Prevention System: Firefighting
Electrical System
❖ 200 KVA transformers (3 nos.)
❖ Main Power Switchboard
❖ Emergency Generator: 400 KW
❖ UPS-Redundant 100 KVA, 120 KVA; Batteries: 620.
Facilities of GIDC
❖ Information Technology System
❖ Routers, Backbone Switches, and related network systems
❖ Integrated Network Management
❖ Integrated Server Management System
❖ Integrated Storage - 158 TB IPS, Web Application Firewall

25. Explain Amazon EC2 with its basic features.


Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable
computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2
reduces hardware costs so you can develop and deploy applications faster. You can
use Amazon EC2 to launch as many or as few virtual servers as you need, configure
security and networking, and manage storage. You can add capacity (scale up) to
handle compute-heavy tasks, such as monthly or yearly processes, or spikes in
website traffic. When usage decreases, you can reduce capacity (scale down) again.
A basic architecture consists of an Amazon EC2 instance deployed within an Amazon
Virtual Private Cloud (VPC). In this example, the EC2 instance is within an
Availability Zone in the Region. The EC2 instance is secured
with a security group, which is a virtual firewall that controls incoming and outgoing
traffic. A private key is stored on the local computer and a public key is stored on the
instance. Both keys are specified as a key pair to prove the identity of the user. In
this scenario, the instance is backed by an Amazon EBS volume. The VPC
communicates with the internet using an internet gateway. For more information
about Amazon VPC, see the Amazon VPC User Guide.
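As a rough sketch of this architecture, the code below uses boto3 (the AWS SDK for Python) to create a key pair, create a security group that allows only SSH, and launch a single EBS-backed instance. The AMI ID, VPC ID, and resource names are hypothetical placeholders, and error handling and clean-up are omitted for brevity.

```python
# Hedged sketch: launch one EC2 instance secured by a key pair and a security
# group. AMI ID, VPC ID, and resource names are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Key pair: AWS keeps the public key; the returned private key is saved locally.
key = ec2.create_key_pair(KeyName="demo-key")
with open("demo-key.pem", "w") as f:
    f.write(key["KeyMaterial"])

# Security group acting as a virtual firewall: allow inbound SSH only.
sg = ec2.create_security_group(
    GroupName="demo-ssh-only",
    Description="Allow SSH from anywhere (demo only)",
    VpcId="vpc-0123456789abcdef0",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Launch a single EBS-backed instance from a placeholder AMI.
result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="demo-key",
    SecurityGroupIds=[sg["GroupId"]],
)
print("Launched:", result["Instances"][0]["InstanceId"])
```

When usage drops, the instance can be stopped or terminated just as easily, which is the scale-down half of the behavior described above.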

Amazon EC2 supports the processing, storage, and transmission of credit card data
by a merchant or service provider, and has been validated as being compliant with
Payment Card Industry (PCI) Data Security Standard (DSS).

Features of Amazon EC2


Amazon EC2 provides the following high-level features:
Instances
Virtual servers.
Amazon Machine Images (AMIs)
Preconfigured templates for your instances that package the components you need
for your server (including the operating system and additional software).
Instance types
Various configurations of CPU, memory, storage, networking capacity, and graphics
hardware for your instances.
Key pairs
Secure login information for your instances. AWS stores the public key and you store
the private key in a secure place.
Instance store volumes
Storage volumes for temporary data that is deleted when you stop, hibernate, or
terminate your instance.
Amazon EBS volumes
Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon
EBS).
Regions, Availability Zones, Local Zones, AWS Outposts, and Wavelength
Zones
Multiple physical locations for your resources, such as instances and Amazon EBS
volumes.
Security groups
A virtual firewall that allows you to specify the protocols, ports, and source IP ranges
that can reach your instances, and the destination IP ranges to which your instances
can connect.
Elastic IP addresses
Static IPv4 addresses for dynamic cloud computing.
Tags
Metadata that you can create and assign to your Amazon EC2 resources (a short
usage sketch illustrating tags and Elastic IP addresses follows this list).
Virtual private clouds (VPCs)
Virtual networks you can create that are logically isolated from the rest of the AWS
Cloud. You can optionally connect these virtual networks to your own network.
For details about all of the features of Amazon EC2, see Amazon EC2 features.
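As a brief, hedged illustration of two of the features above, the sketch below tags an existing instance, allocates an Elastic IP address, associates it with the instance, and then looks the instance up again by its tag. The instance ID is a hypothetical placeholder for an instance launched earlier.

```python
# Illustrative use of Tags and Elastic IP addresses with boto3.
# The instance ID is a hypothetical placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"

# Attach metadata to the instance with tags.
ec2.create_tags(
    Resources=[instance_id],
    Tags=[{"Key": "Name", "Value": "web-server"},
          {"Key": "Environment", "Value": "demo"}],
)

# Allocate a static public IPv4 address and associate it with the instance.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=allocation["AllocationId"],
    InstanceId=instance_id,
)
print("Elastic IP:", allocation["PublicIp"])

# Look the instance up again by its Name tag.
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Name", "Values": ["web-server"]}]
)
for reservation in reservations["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```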
