
Mahavir Education Trust’s

SHAH & ANCHOR KUTCHHI ENGINEERING COLLEGE


Mahavir Education Trust Chowk, Waman Tukaram Patil Marg, Chembur, Mumbai-88
Affiliated to University of Mumbai, Approved by D.T.E. & A.I.C.T.E., Awarded 'A' Grade by D.T.E.,
M.S. Accredited by National Board of Accreditation (NBA) from A.Y. 2022-2023 for 3 years
Awarded 'A' Grade by National Assessment and Accreditation Council (NAAC) w.e.f. 20.10.2021

Department of Information Technology


Subject: Advance DevOps Semester: V
INDEX
Sr. No. | Title of Experiment / Tutorial / Assignment | Date of Performance | Date of Submission | Page No. | Marks | Initials of Teacher
1 | To understand the benefits of Cloud Infrastructure and Setup AWS Cloud9 IDE, Launch AWS Cloud9 IDE and Perform Collaboration Demonstration. | 12/07/2023 | 19/07/2023 | | |
2 | To Build Your Application using AWS CodeBuild and Deploy on S3 / SEBS using AWS CodePipeline, deploy Sample Application on EC2 instance using AWS CodeDeploy. | 19/07/2023 | 26/07/2023 | | |
3 | To understand the Kubernetes Cluster Architecture, install and Spin Up a Kubernetes Cluster on Linux Machines/Cloud Platforms. | 26/07/2023 | 02/08/2023 | | |
4 | To install Kubectl and execute Kubectl commands to manage the Kubernetes cluster and deploy Your First Kubernetes Application. | 02/08/2023 | 09/08/2023 | | |
5 | To understand terraform lifecycle, core concepts/terminologies and install it on a Linux Machine. | 09/08/2023 | 23/08/2023 | | |
6 | To Build, change, and destroy AWS / GCP / Microsoft Azure / DigitalOcean infrastructure Using Terraform. | 23/08/2023 | 30/08/2023 | | |
7 | To understand Static Analysis SAST process and learn to integrate Jenkins SAST to SonarQube/GitLab. | 30/08/2023 | 13/09/2023 | | |
8 | Create a Jenkins CICD Pipeline with SonarQube / GitLab Integration to perform a static analysis of the code to detect bugs, code smells, and security vulnerabilities on a sample Web / Java / Python application. | 13/09/2023 | 27/09/2023 | | |
9 | To Understand Continuous monitoring and Installation and configuration of Nagios Core, Nagios Plugins and NRPE (Nagios Remote Plugin Executor) on Linux Machine. | 27/09/2023 | 04/10/2023 | | |
10 | To perform Port, Service monitoring, Windows/Linux server monitoring using Nagios. | 04/10/2023 | 11/10/2023 | | |
11 | To understand AWS Lambda, its workflow, various functions and create your first Lambda functions using Python / Java / Nodejs. | 11/10/2023 | 18/10/2023 | | |
12 | To create a Lambda function which will log "An Image has been added" once you add an object to a specific bucket in S3. | 18/10/2023 | 23/10/2023 | | |

This is to certify that


Mr. Prema Shankar Semester: V
Division: TE/6 Roll No: 68 Batch: D
Subject: Advance DevOps LAB Academic Year: 2023-24

Name & Signature of Faculty In-charge Head of the Department


ACADEMIC YEAR 2023-2024
Program Outcomes (POs)

Engineering Graduates will be able to:

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of mathematics,
natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems
and design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and
research methods including design of experiments, analysis and interpretation of data, and
synthesis of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modelling to complex engineering
activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional
engineering solutions in societal and environmental contexts, and demonstrate the knowledge
of, and need for sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities
and norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and receive
clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to
engage in independent and life-long learning in the broadest context of technological change.
Program Specific Outcomes (PSOs)

By the end of the educational experience our students will be able to:
1. The Information Technology graduates are able to analyse, design, develop, test and
apply management principles, mathematical foundations in the development of IT
based solutions for real world and open-ended problems.
2. The Information Technology graduates are able to perform various roles in creating
innovative career paths: to be an entrepreneur, a successful professional, pursue higher
studies with realization of moral values & ethics.

Mapping of PSOs to POs:

PSO Number PO Number


PSO1 PO1, PO2, PO3, PO4, PO5, PO7, PO11
PSO2 PO6, PO8, PO9, PO10, PO12

Program Educational Objectives

Justification:
PSO-PO Mapping | Justification
PSO1-PO1 Engineering knowledge is a basic need for developing and solving IT based solutions for real
world and open-ended problems.
PSO1-PO2 After knowledge is identified and applied, problem analysis must be performed on a
problem in order to develop any solution.
PSO1-PO3 Solutions for identified problems are developed with appropriate consideration for
public health and safety, and for cultural, societal, and environmental
considerations.
PSO1-PO4 Research methods and analysis of data must be performed on the developed solution to
provide valid conclusions.
PSO1-PO5 Appropriate tools and techniques should be chosen to model a solution for a real world
problem.
PSO1-PO7 New approaches in IT must demonstrate the knowledge of, and need for, sustainable
development.
PSO1-PO11 Understanding engineering and management principles and applying these to
one's own work, as a member and leader in a team, to manage projects in
multidisciplinary environments is to be practiced when performing various roles in
team activity.
PSO2-PO6 Development of a solution must be relevant to professional engineering practice and
must account for societal, health, safety, legal and cultural issues.
PSO2-PO8 Ethical principles and commitment to professional ethics, responsibilities and norms of
engineering practice should be carried out in the analysis and testing of any new
approach.
PSO2-PO9 A knowledge based system functions effectively when one works as an individual, and
as a member or leader in diverse teams and in multidisciplinary settings.
PSO2-PO10 To lead a group in developing a system: effective reports, design documentation,
effective presentations, and giving and receiving clear instructions.
PSO2-PO12 Research, data analysis, and testing by applying new approaches to the field of IT ought
to recognize the need for life-long learning in the broadest context of technological
change.

Ms. Jalpa Mehta


Program Coordinator
Information Technology Program
Mahavir Education Trust's

Shah & Anchor Kutchhi Engineering College,


Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 1

Date of Performance: 12/07/2023
Date of Submission: 19/07/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO. 01

Aim: To understand the benefits of Cloud Infrastructure and Setup AWS Cloud9 IDE,
Launch AWS Cloud9 IDE and Perform Collaboration Demonstration.

Lab Outcome No: ITL504.1

Lab Outcome: To understand the fundamentals of Cloud Computing and be fully


proficient with Cloud based DevOps solution deployment options to meet your
business requirements.

Theory:

AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets
you write, run, and debug your code with just a browser. It includes a code editor,
debugger, and terminal. Cloud9 comes prepackaged with essential tools for popular
programming languages, including JavaScript, Python, PHP, and more, so you don't
need to install files or configure your development machine to start new projects. Since
your Cloud9 IDE is cloud-based, you can work on your projects from your office,
home, or anywhere using an internet-connected machine. Cloud9 also provides a
seamless experience for developing serverless applications, enabling you to easily
define resources, debug, and switch between local and remote execution of serverless
applications. With Cloud9, you can quickly share your development environment with
your team, enabling you to pair program in real time.

Benefits

CODE WITH JUST A BROWSER

AWS Cloud9 gives you the flexibility to run your development environment on a
managed Amazon EC2 instance or any existing Linux server that supports SSH.
This means that you can write, run, and debug applications with just a browser,
without needing to install or maintain a local IDE. The Cloud9 code editor and
integrated debugger include helpful, time-saving features such as code hinting,
code completion, and step-through debugging. The Cloud9 terminal provides a
browser-based shell experience enabling you to install additional software, do a git
push, or enter commands.
CODE TOGETHER IN REAL TIME

AWS Cloud9 makes collaborating on code easy. You can share your development
environment with your team in just a few clicks and pair program together. While
collaborating, your team members can see each other type in real time, and
instantly chat with one another from within the IDE.

BUILD SERVERLESS APPLICATIONS WITH EASE

AWS Cloud9 makes it easy to write, run, and debug serverless applications. It
preconfigures the development environment with all the SDKs, libraries, and plug-
ins needed for serverless development. Cloud9 also provides an environment for
locally testing and debugging AWS Lambda functions. This allows you to iterate
on your code directly, saving you time and improving the quality of your code.
START NEW PROJECTS QUICKLY

AWS Cloud9 makes it easy for you to start new projects. Cloud9’s development
environment comes prepackaged with tooling for over 40 programming languages,
including Node.js, JavaScript, Python, PHP, Ruby, Go, and C++. This enables you
to start writing code for popular application stacks within minutes by eliminating
the need to install or configure files, SDKs, and plug-ins for your development
machine. Because Cloud9 is cloud-based, you can easily maintain multiple
development environments to isolate your project’s resources.

Steps:

Step 1: Create an AWS account

Click on the URL : https://aws.amazon.com/console/

Create new AWS Account


Complete the AWS sign up process by filling account details.

Note: please keep record of AWS credentials and 12 digit account number

Step 2: Sign in to AWS root user by providing credentials and Go to My


Account and select AWS Management Console.
Step 3: Search for the IAM service and go to the IAM management console.

Step 4: Click on the Users tab and add a user.

Step 5: Provide the user details: user name, select AWS access type as AWS
Management Console access, and provide a custom password.
Step 6: Create User Group

Step 7: Review User Details


Step 8: Click on the group name which you have created and navigate to the
Permissions tab as shown in the figure.

Step 9: Now click on Add permissions and select Attach Policy. Then search for
Cloud9 related policies, select the AWSCloud9EnvironmentMember and
AWSCloud9Administrator policies, and add them.
Step 10: Go back to the AWS Management Console and sign out of the root account.

Step 11: Sign in as the IAM user created before by providing the 12-digit account
ID and credentials.
Step 12: Find the AWS Cloud9 service in the Services console.

Step 13: Create an environment and provide the details for the environment as
shown below.
Step 14: Keep all the default settings as given below.
Step 15: Review the settings and create the environment.

Step 16: It will take a few minutes to create the AWS instance for your Cloud9
environment.
Step 17: Open the Cloud9 IDE instance and see the welcome page.
Step 18: At the bottom of the window, the Cloud9 IDE also gives you an AWS CLI
terminal for command operations; here we checked the git version and the IAM user details.
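For reference, a few commands one might run in that Cloud9 terminal to confirm the toolchain and the signed-in identity (a minimal sketch; aws sts get-caller-identity simply prints the account ID and the IAM identity the CLI is currently using):

git --version                  # confirm git is preinstalled in the environment
aws --version                  # confirm the AWS CLI that ships with Cloud9
aws sts get-caller-identity    # show the account ID and IAM user/role in use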
Step 19: Upload the website folder by selecting Upload Local Files in the File menu.

Step 20: Edit the .html file and save the changes.
Step 21: See the preview of index.html by selecting the Preview button; explore it
in the browser as well.
Step 22: Create another IAM user by signing in to the root user account, adding a
user in the IAM management console, and following the procedure of Step 4 to Step 9.

A new IAM user is created and its credentials are saved. Sign out of the root account.

Step 23: Sign in to IAM user created initially and open the Cloud9
environment IDE

Step 24: Click on the share button and invite the new user by providing IAM
user name
Step 25: Allow RW access to the user and click on OK.
Step 26: Now open your browser's incognito window and log in with the new
IAM user created.

Step 27: Go to the Cloud9 service, open the environment shared with you and
open the IDE.
Step 28: Open both IAM users' Cloud9 IDEs together in the same window.

Step 29: Edit the code in both users' IDEs and see the changes.
Step 30: You can also do a group chat within the team.

Step 31: You can also explore the settings, where you can change a teammate's
permissions from RW to R only, or remove the user altogether.

Conclusion: Hence the AWS Cloud9 IDE has been set up, Cloud9 IDE has been
launched and collaboration demonstration has been performed.
Mahavir Education Trust's

Shah & Anchor Kutchhi Engineering


College, Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 2

Date of Performance: 19/07/2023
Date of Submission: 26/07/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 02

Aim: To Build Your Application using AWS CodeBuild and Deploy on


S3/SEBS using AWS CodePipeline, deploy Sample Application on
EC2 instance using AWS CodeDeploy

Lab Outcome No: ITL504.1

Lab Outcome: To understand the fundamentals of Cloud Computing and be


fully proficient with Cloud based DevOps solution deployment options to meet
your business requirements.

Theory: AWS CodePipeline is a continuous delivery service you can use to

model, visualize and automate the steps required to release your software.
AWS CodePipeline is a fully managed continuous delivery service that helps
you automate your release pipelines for fast and reliable application and
infrastructure updates. CodePipeline automates the build, test, and deploy
phases of your release process every time there is a code change, based on the
release model you define.

Benefits

● Rapid Delivery : AWS CodePipeline automates your software release


process, allowing you to rapidly release new features to your users. With
CodePipeline, you can quickly iterate on feedback and get new features
to your users faster.
● Configurable workflow: AWS CodePipeline allows you to model the
different stages of your software release process using the console
interface, the AWS CLI, AWS CloudFormation, or the AWS SDKs.
You can easily specify the tests to run and customize the steps to deploy
your application and its dependencies.
● Easy to integrate: AWS CodePipeline can easily be extended to adapt to
your specific needs. You can use our pre-built plugins or your own
custom plugins in any step of your release process. For example, you can
pull your source code from GitHub, use your on-premises Jenkins build
server, run load tests using a third-party service, or pass on deployment
information to your custom operations dashboard.
● Get started fast: With AWS CodePipeline, you can immediately begin
to model your software release process. There are no servers to
provision or set up. CodePipeline is a fully managed continuous delivery
service that connects to your existing tools and systems.
How it works: The following diagram shows an example release process
using CodePipeline.

CodePipeline can deploy applications to EC2 instances by using CodeDeploy,


AWS Elastic Beanstalk, or AWS OpsWorks Stacks. CodePipeline can also
deploy container-based applications to services by using Amazon ECS.
Set up a Continuous Deployment Pipeline using AWS CodePipeline

Step 1: Create an S3 bucket for your application

a. Sign in to the AWS Management Console and open the Amazon S3


console at https://console.aws.amazon.com/s3/.
b. Choose Create bucket.
c. In Bucket name, enter a name for your bucket
(for example, awscodepipeline-demobucket-example-date).
d. In Region, choose the Region where you intend to create your pipeline,
such as US West (Oregon), and then choose Create bucket.
e. After the bucket is created, a success banner displays. Choose Go to
bucket details.
f. On the Properties tab, choose Versioning. Choose Enable versioning,
and then choose Save.

g. Next, download a sample and save it into a folder or directory on your
local computer. Choose one of the following:
i. If you want to deploy to Amazon Linux instances using
CodeDeploy, download the sample application here:
SampleApp_Linux.zip.
ii. If you want to deploy to Windows Server instances using
CodeDeploy, download the sample application here:
SampleApp_Windows.zip. (This tutorial follows the steps for
Windows Server instances.)
h. Download the compressed (zipped) file. Do not unzip the file.
i. In the Amazon S3 console, for your bucket, upload the file:
a. Choose Upload.
b. Drag and drop the file or choose Add files and browse for the file.
c. Choose Upload.
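If you prefer the command line, the same bucket setup and upload can be sketched with the AWS CLI (assumptions: the bucket name below is only an example and must be globally unique, and the pipeline region is us-west-2):

# create the bucket in us-west-2 (Oregon)
aws s3api create-bucket --bucket awscodepipeline-demobucket-example-date --region us-west-2 --create-bucket-configuration LocationConstraint=us-west-2
# enable versioning, which CodePipeline requires for S3 source buckets
aws s3api put-bucket-versioning --bucket awscodepipeline-demobucket-example-date --versioning-configuration Status=Enabled
# upload the zipped sample application without unzipping it
aws s3 cp SampleApp_Windows.zip s3://awscodepipeline-demobucket-example-date/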

Step 2: Create Amazon EC2 Windows instances and install the


CodeDeploy agent
In this step, you create the Windows Server Amazon EC2 instances to which
you will deploy a sample application. As part of this process, you install the
CodeDeploy agent on the instances. The CodeDeploy agent is a software
package that enables an instance to be used in CodeDeploy deployments.
To create an instance role

1. Open the IAM console at https://console.aws.amazon.com/iam/.


2. From the console dashboard, choose Roles.
3. Choose Create role.
4. Under Select type of trusted entity, select AWS service. Under Choose
a use case, select EC2, and then choose Next: Permissions.
5. Search for and select the policy
named AmazonEC2RoleforAWSCodeDeploy, and then choose Next:
Tags.
6. Choose Next: Review. Enter a name for the role (for example,
EC2InstanceRole).
To launch instances

1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.


2. From the console dashboard, choose Launch instance, and select
Launch instance from the options that pop up.
3. On the Step 1: Choose an Amazon Machine Image (AMI) page, locate
the Microsoft Windows Server 2019 Base option, and then choose
Select. (This AMI is labeled "Free tier eligible" and can be found at the
top of the list.)

4. On the Step 2: Choose an Instance Type page, choose the free tier
eligible t2.micro type as the hardware configuration for your instance,
and then choose Next: Configure Instance Details.
5. On the Step 3: Configure Instance Details page, do the following:

 In Number of instances, enter 2.


 In Auto-assign Public IP, choose Enable.
 In IAM role, choose the IAM role you created in the previous
procedure (for example, EC2InstanceRole).

 Expand Advanced Details, and in User data, with As text selected, enter the following:

<powershell>
New-Item -Path c:\temp -ItemType "directory" -Force
powershell.exe -Command Read-S3Object -BucketName bucket-name/latest -Key codedeploy-agent.msi -File c:\temp\codedeploy-agent.msi
Start-Process -Wait -FilePath c:\temp\codedeploy-agent.msi -WindowStyle Hidden
</powershell>

bucket-name is the name of the S3 bucket that contains the CodeDeploy Resource Kit
files for your Region. For example, for the US West (Oregon) Region, replace
bucket-name with aws-codedeploy-us-west-2. For a list of bucket names, see
Resource Kit Bucket Names by Region.
This code installs the CodeDeploy agent on your instance as it is created. The
script is written for Windows instances only.
 Leave the rest of the items on the Step 3: Configure Instance
Details page unchanged. Choose Next: Add Storage.
6. Leave the Step 4: Add Storage page unchanged, and then choose Next:
Add Tags.
7. On the Add Tags page, choose Add Tag. Enter Name in the Key field,
enter MyCodePipelineDemo in the Value field, and then choose Next:
Configure Security Group.

Important

The Key and Value boxes are case sensitive.


8. On the Configure Security Group page, allow port 80 communication
so you can access the public instance endpoint.

9. Choose Review and Launch.


10.On the Review Instance Launch page, choose Launch. When
prompted for a key pair, choose Proceed without a key pair.

11.Choose View Instances to close the confirmation page and return to the
console.
12.You can view the status of the launch on the Instances page. When you
launch an instance, its initial state is pending. After the instance starts, its
state changes to running, and it receives a public DNS name. (If the
Public DNS column is not displayed, choose the Show/Hide icon, and
then select Public DNS.)
13.It can take a few minutes for the instance to be ready for you to connect
to it. Check that your instance has passed its status checks. You can
view this information in the Status Checks column.

Step 3: Create an application in CodeDeploy

In CodeDeploy, an application is an identifier, in the form of a name, for the code you want to
deploy. CodeDeploy uses this name to ensure the correct combination of
revision, deployment configuration, and deployment group are referenced
during a deployment.
To create an application in CodeDeploy
1. Open the CodeDeploy console at https://console.aws.amazon.com/codedeploy.
2. If the Applications page does not appear, on the AWS CodeDeploy
menu, choose Applications.
3. Choose Create application.
4. In Application name, enter MyDemoApplication.
5. In Compute Platform, choose EC2/On-premises.
6. Choose Create application.

To create a deployment group in CodeDeploy

1. On the page that displays your application, choose Create deployment


group.
2. In Deployment group name, enter MyDemoDeploymentGroup.
In Service Role, choose a service role that trusts AWS
CodeDeploy with, at minimum, the trust and permissions
described in Create a Service Role for CodeDeploy.
To create one, open the IAM console, choose Roles, and then choose
Create role; choose AWS service, common use case EC2, and then in the Choose
the service that will use this role list, choose CodeDeploy and select the use
case for EC2/On-Premises deployments.

3. Under Deployment type, choose In-place.


4. Under Environment configuration, choose Amazon EC2 Instances.
Choose Name in the Key field, and in the Value field, enter
MyCodePipelineDemo.
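Equivalent AWS CLI calls are shown below for reference (a sketch only; it assumes the CodeDeploy service role already exists, and the ARN placeholders must be replaced with your own values):

# register the application on the EC2/On-premises compute platform
aws deploy create-application --application-name MyDemoApplication --compute-platform Server
# create the deployment group that targets instances tagged Name=MyCodePipelineDemo
aws deploy create-deployment-group --application-name MyDemoApplication --deployment-group-name MyDemoDeploymentGroup --deployment-config-name CodeDeployDefault.OneAtATime --ec2-tag-filters Key=Name,Value=MyCodePipelineDemo,Type=KEY_AND_VALUE --service-role-arn arn:aws:iam::<account-id>:role/<codedeploy-service-role>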

Step 4: Create your first pipeline in CodePipeline

To create a CodePipeline automated release process:

1. Sign in to the AWS Management Console and open the CodePipeline


console at http://console.aws.amazon.com/codesuite/codepipeline/home.
2. On the Welcome page, Getting started page, or the Pipelines page,
choose Create pipeline.
3. In Step 1: Choose pipeline settings, in Pipeline
name, enter MyFirstPipeline.
4. In Service role, do one of the following:
1. Choose New service role to allow CodePipeline to create a new
service role in IAM. In Role name, the role and policy name both
default to this format: AWSCodePipelineServiceRole-region-
pipeline_name. For example, this is the service role created for
this tutorial: AWSCodePipelineServiceRole-eu-west-2-
MyFirstPipeline.
2. Choose Existing service role to use a service role already created
in IAM. In Role name, choose your service role from the list.
5. Leave the settings under Advanced settings at their defaults, and then
choose Next.
6. In Step 2: Add source stage, in Source provider, choose Amazon S3.
In Bucket, enter the name of the S3 bucket you created in Step 1. In S3 object
key, enter the object key with or without a file path, and remember to
include the file extension. For example, for SampleApp_Windows.zip,
enter the sample file name as shown in this example:

7. In Step 3: Add build stage, choose Skip build stage, and then accept
the warning message by choosing Skip again. Choose Next.
8. In Step 4: Add deploy stage, in Deploy provider, choose AWS
CodeDeploy. The Region field defaults to the same AWS Region as
your pipeline. In Application name, enter MyDemoApplication, or
choose the Refresh button, and then choose the application name from
the list. In Deployment group, enter MyDemoDeploymentGroup, or
choose it from the list, and then choose Next.
9. In Step 5: Review, review the information, and then choose Create
pipeline.

10.The pipeline starts to run. You can view progress and success and failure
messages as the CodePipeline sample deploys a webpage to each of the
Amazon EC2 instances in the CodeDeploy deployment.
11.After Succeeded is displayed for the action status, in the status area for
the Deploy stage, choose Details. This opens the AWS CodeDeploy
console.
12. In the Deployment group tab, under Deployment lifecycle events, choose
an instance ID. This opens the EC2 console.
13. On the Description tab, in Public DNS, copy the address, and then paste it
into the address bar of your web browser. View the index page for the sample
application you uploaded to your S3 bucket.
The following page is the sample application you uploaded to your S3 bucket.

Conclusion: Hence we built the application using AWS CodeBuild, deployed it on S3
using AWS CodePipeline, and also deployed the application on an EC2 instance using AWS CodeDeploy.
Mahavir Education Trust's

Shah & Anchor Kutchhi Engineering


College, Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 3

Date of Performance: 26/07/2023
Date of Submission: 02/08/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 03

Aim: To understand the Kubernetes Cluster Architecture, install and Spin Up a


Kubernetes Cluster on Linux Machines/Cloud Platforms.

Lab Outcome No.: ITL504.1, ITL504.2

Lab Outcome: To understand the fundamentals of Cloud Computing and be


fully proficient with Cloud based DevOps solution deployment options to meet
your business requirements.
To deploy single and multiple container applications and manage application
deployments with rollouts in Kubernetes

Theory:
Kubernetes is an open-source platform for deploying and managing containers.
It provides a container runtime, container orchestration, container-centric
infrastructure orchestration, self-healing mechanisms, service discovery and
load balancing. It's used for the deployment, scaling, management, and
composition of application containers across clusters of hosts.
It aims to reduce the burden of orchestrating underlying compute, network, and
storage infrastructure, and enable application operators and developers to focus
entirely on container-centric workflows for self-service operation. It allows
developers to build customized workflows and higher-level automation to
deploy and manage applications composed of multiple containers.
Kubernetes Architecture and Concepts:
From a high level, a Kubernetes environment consists of a control plane
(master), a distributed storage system for keeping the cluster state consistent
(etcd), and a number of cluster nodes (Kubelets).
Kubernetes Control Plane:
The control plane is the system that maintains a record of all Kubernetes
objects. It continuously manages object states, responding to changes in the
cluster; it also works to make the actual state of system objects match the desired
state. As the above illustration shows, the control plane is made up of three
major components: kube-apiserver, kube-controller-manager and kube-scheduler.
These can all run on a single master node, or can be replicated
across multiple master nodes for high availability.
The API Server provides APIs to support lifecycle orchestration (scaling,
updates, and so on) for different types of applications. It also acts as the
gateway to the cluster, so the API server must be accessible by clients from
outside the cluster. Clients authenticate via the API Server, and also use it as a
proxy/tunnel to nodes and pods (and services).
Most resources contain metadata, such as labels and annotations, desired state
(specification) and observed state (current status). Controllers work to drive the
actual state toward the desired state.
There are various controllers to drive state for nodes, replication (autoscaling),
endpoints (services and pods), service accounts and tokens (namespaces). The
Controller Manager is a daemon that runs the core control loops, watches the
state of the cluster, and makes changes to drive status toward the desired state.
The Cloud Controller Manager integrates into each public cloud for optimal
support of availability zones, VM instances, storage services, and network
services for DNS, routing and load balancing.
The Scheduler is responsible for the scheduling of containers across the nodes
in the cluster; it takes various constraints into account, such as resource
limitations or guarantees, and affinity and anti-affinity specifications.
Cluster Nodes:
Cluster nodes are machines that run containers and are managed by the master
nodes. The Kubelet is the primary and most important controller in Kubernetes. It's
responsible for driving the container execution layer, typically Docker.

Pods and Services:

Pods are one of the crucial concepts in Kubernetes, as they are the key construct that
developers interact with. The previous concepts are infrastructure-focused and internal
architecture.
This logical construct packages up a single application, which can consist of multiple
containers and storage volumes. Usually, a single container (sometimes with some
helper program in an additional container) runs in this configuration, as shown in
the diagram below.
A pod represents a running process on a cluster.
Kubernetes Networking:
Kubernetes has a distinctive networking model for cluster-wide,
pod-to-pod networking. In most cases, the Container Network Interface (CNI)
uses a simple overlay network (like Flannel) to obscure the underlying network
from the pod by using traffic encapsulation (like VXLAN); it can also use a
fully-routed solution like Calico. In both cases, pods communicate over a
cluster-wide pod network, managed by a CNI provider like Flannel or Calico.
Within a pod, containers can communicate without any restrictions. Containers
within a pod exist within the same network namespace and share an IP. This
means containers can communicate over localhost. Pods can communicate with
each other using the pod IP address, which is reachable across the cluster.
Moving from pods to services, or from external sources to services, requires
going through kube-proxy.
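A quick way to observe this behaviour on a running cluster (a hedged example; the pod name and pod IP are placeholders, and the curl call assumes the container image ships curl):

kubectl get pods -o wide                                       # shows the pod IP assigned from the cluster-wide pod network
kubectl exec -it <pod-name> -- curl http://<other-pod-ip>:80   # reach another pod directly by its pod IP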
Kubernetes Tooling and Clients:
Here are the basic tools you should know:
Kubeadm bootstraps a cluster. It's designed to be a simple way for new users to
build clusters (more detail on this is in a later chapter).
Kubectl is a tool for interacting with your existing cluster.
Minikube is a tool that makes it easy to run Kubernetes locally.

Step 1: Create two EC2 instances with Ubuntu OS, and attach the following security groups
to them. (Rename them as K8s-Master and K8s-Slave.)
1. All Traffic (IPV4)
2. All Traffic (IPV6)

Step 2: Create an IAM user/role with Route53, EC2, IAM and S3 full access.

Step 3: Attach the IAM role that we just created to the Ubuntu server.
Step 4: Connect to both the instances using Putty/WinSCP.

Steps to Install Kubernetes on Ubuntu

Set up Docker

Step 1: Install Docker


Kubernetes requires an existing Docker installation. If you do not have
Docker, install it by following these steps:
1. Update the package list with the command:

sudo apt-get update


2. Next, install Docker with the command:

sudo apt-get install docker.io


3. Repeat the process on each server that will act as a node.
4. Check the installation (and version) by entering the following:

docker --version

Step 2: Start and Enable Docker


1. Set Docker to launch at boot by entering the following:

sudo systemctl enable docker


2. Verify Docker is running:

sudo systemctl status docker


To start Docker if it’s not running:

sudo systemctl start docker


Repeat on all the other nodes.

Install Kubernetes

Step 3: Add Kubernetes Signing Key


Since you are downloading Kubernetes from a non-standard repository, it is
essential to ensure that the software is authentic. This is done by adding a signing
key.
1. Enter the following to add a signing key:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add

If you get an error that curl is not installed, install it with:
sudo apt-get install curl

2. Then repeat the previous command to install the signing keys. Repeat for each
server node.

Step 4: Add Software Repositories


Kubernetes is not included in the default repositories. To add them, enter the following:

sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

Repeat on each server node.

Step 5: Kubernetes Installation Tools


Kubeadm (Kubernetes Admin) is a tool that helps initialize a cluster. It fast-tracks
setup by using community-sourced best practices. Kubelet is the work
package, which runs on every node and starts containers. Kubectl gives you
command-line access to clusters.
1. Install Kubernetes tools with the command:

sudo apt-get install kubeadm kubelet kubectl


sudo apt-mark hold kubeadm kubelet kubectl

The apt-mark hold command makes sure that these packages do not get
auto-upgraded or deleted; they remain as they are.
Allow the process to complete.
2. Verify the installation with:

kubeadm version

Kubernetes Deployment

Step 6: Begin Kubernetes Deployment


Start by disabling the swap memory on each server:
sudo swapoff -a

Step 7: Assign Unique Hostname for Each Server Node

Decide which server to set as the master node. Then enter the command:

sudo hostnamectl set-hostname master-node


Next, set a worker node hostname by entering the following on the worker server:

sudo hostnamectl set-hostname worker01


If you have additional worker nodes, use this process to set a unique
hostname on each. Run sudo su to see whether the hostnames have been applied
successfully.

Step 8: Initialize Kubernetes on Master Node (should be run only on the master
node)

Switch to the master server node, and enter the following:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

You'll encounter an error regarding insufficient disk and CPU; ignore it by
using the following command:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all

Once this command finishes, it will display a kubeadm join message at the
end. Make a note of the whole entry. This will be used to join the worker nodes
to the cluster.
Next, enter the following to create a directory for the cluster:
kubernetes-master:~$ mkdir -p $HOME/.kube
kubernetes-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubernetes-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Step 9: Deploy Pod Network to Cluster

A Pod Network is a way to allow communication between different nodes in the

cluster. This tutorial uses the flannel virtual network.
Enter the following:

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Allow the process to complete.
Verify that everything is running and communicating:

kubectl get pods --all-namespaces


Step 10: Join Worker Node to Cluster

As indicated in Step 8, you can enter the kubeadm join command on each worker
node to connect it to the cluster.
Switch to the worker01 system and enter the command you noted from Step 8:

kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash ...

Replace the alphanumeric codes with those from your master server. Repeat for
each worker node on the cluster. Wait a few minutes; then you can check the
status of the nodes.
Switch to the master server, and enter:

kubectl get nodes

The system should display the worker nodes that you joined to the cluster.
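Two optional checks if a node shows NotReady (worker01 is the hostname set in Step 7):

kubectl get nodes -o wide        # adds internal IPs, OS image and container runtime versions
kubectl describe node worker01   # the Events section usually explains a NotReady state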

Conclusion:
The Kubernetes cluster has been installed and spun up successfully.
Mahavir Education Trust's

Shah & Anchor Kutchhi Engineering


College, Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 4

Date of Performance: 02/08/2023
Date of Submission: 09/08/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 04
Aim: To install Kubectl and execute Kubectl commands to manage the
Kubernetes cluster and deploy Your First Kubernetes Application.

Lab Outcome No.: ITL504.1, ITL504.2

Lab Outcome: To understand the fundamentals of Cloud Computing and be fully


proficient with Cloud based DevOps solution deployment options to meet your
business requirements.
To deploy single and multiple container applications and manage application
deployments with rollouts in Kubernetes.

Theory:

What is kubectl?

 Before learning how to use kubectl more efficiently, you should have a basic
understanding of what it is and how it works.
 From a user's point of view, kubectl is your cockpit to control Kubernetes. It
allows you to perform every possible Kubernetes operation.
 From a technical point of view, kubectl is a client for the Kubernetes API.
 The Kubernetes API is an HTTP REST API. This API is the real Kubernetes
user interface. Kubernetes is fully controlled through this API. This means that
every Kubernetes operation is exposed as an API endpoint and can be executed
by an HTTP request to this endpoint.
 Consequently, the main job of kubectl is to carry out HTTP requests to the Kubernetes
API:

 Kubernetes is a fully resource-centred system. That means Kubernetes
maintains an internal state of resources, and all Kubernetes operations are
CRUD operations on these resources. You fully control Kubernetes by
manipulating these resources (and Kubernetes figures out what to do based on
the current state of resources). For this reason, the Kubernetes API reference
is organized as a list of resource types with their associated operations.
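To make the point concrete that kubectl is simply an API client, it can issue a raw request against a REST endpoint directly, or proxy the API to localhost (an illustrative sketch, not part of the syllabus steps):

kubectl get --raw /api/v1/namespaces/default/pods   # call the REST endpoint for pods yourself
kubectl proxy --port=8001                            # in a separate terminal: expose the API locally
curl http://localhost:8001/api/v1/nodes              # then any HTTP client can query it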

The kubectl command line tool lets you control Kubernetes clusters. For configuration,
kubectl looks for a file named config in the $HOME/.kube directory. You can specify other
kubeconfig files by setting the KUBECONFIG environment variable or by setting the
--kubeconfig flag. This overview covers kubectl syntax, describes the command operations,
and provides common examples. For details about each command, including all the supported
flags and subcommands, see the kubectl reference documentation. For installation instructions
see installing kubectl.

Syntax:
Use the following syntax to run kubectl commands from your terminal window:
kubectl [command] [TYPE] [NAME] [flags]

where command, TYPE, NAME, and flags are:

• command: Specifies the operation that you want to perform on one or
more resources, for example create, get, describe, delete.

• TYPE: Specifies the resource type. Resource types are case-insensitive
and you can specify the singular, plural, or abbreviated forms. For example,
the following commands produce the same output:
kubectl get pod
kubectl get pods
kubectl get po
• NAME: Specifies the name of the resource. Names are case-sensitive.
If the name is omitted, details for all resources are displayed, for example
kubectl get pods.
• flags: Specifies optional flags. For example, you can use the -s or --server
flags to specify the address and port of the Kubernetes API server.
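A few concrete instances of the kubectl [command] [TYPE] [NAME] [flags] pattern (the names used here are only illustrative):

kubectl get pods                                  # command=get, TYPE=pods, NAME omitted, so all pods are listed
kubectl describe deployment nginx                 # command=describe, TYPE=deployment, NAME=nginx
kubectl delete pod my-pod -n my-namespace         # -n/--namespace is a flag selecting the namespace
kubectl get nodes -s https://<api-server>:6443    # -s/--server points kubectl at a specific API server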
Following are the commands that can be used for getting information about clusters:
1. kubectl cluster-info: Display addresses of the master
and services with label kubernetes.io/cluster-service=true

kubectl cluster-info

2. List all pods in all namespaces

kubectl get pods --all-namespaces


3. Print the supported API versions on the server, in the form of “group/version”.

kubectl api-versions

4. List the nodes in your cluster, along with their labels:

kubectl get nodes

5. List all services in the namespace:

kubectl get services

Deploy Your First Kubernetes Application:

On Master Node:

1. Create the Deployment by running the following command:

kubectl create deployment nginx --image=nginx

2. If you want to see all the existing deployments

kubectl get deployment

3. Each Pod has a unique IP address, but those IPs are not exposed outside the
cluster without a Service. Services allow your applications to receive traffic.
Services can be exposed in different ways by specifying a type in the
Service Spec:
 ClusterIP (default) - Exposes the Service on an internal IP in the cluster.
This type makes the Service only reachable from within the cluster.
 NodePort - Exposes the Service on the same port of each selected Node in
the cluster using NAT. Makes a Service accessible from outside the cluster
using <NodeIP>:<NodePort>. Superset of ClusterIP.
 LoadBalancer - Creates an external load balancer in the current cloud
(if supported) and assigns a fixed, external IP to the Service. Superset of
NodePort.
 ExternalName - Maps the Service to the contents of the externalName field
(e.g. foo.bar.example.com), by returning a CNAME record with its value. No
proxying of any kind is set up. This type requires v1.7 or higher of kube-dns.

kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort
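Once the Service is exposed, the NodePort that Kubernetes assigned can be looked up and tested from outside the cluster (a sketch; the node IP and port differ per cluster):

kubectl get service nginx                   # the PORT(S) column shows 80:<node-port>/TCP
curl http://<node-public-ip>:<node-port>    # should return the default nginx welcome page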
4. Scaling Up: In case of heavy traffic, you may need more instances
of an application; the following command will help to create replicas.

kubectl scale --current-replicas=1 --replicas=2 deployment/nginx

5. If you want more details about a particular pod, run the following command:

kubectl describe pods <pod name>

You can always check the status of your pod through this command; it helps to keep track of the pod's
status and the containers within that pod.

6. Similarly, you can check details of deployments that you have created:

kubectl describe deployment/<deployment name>

Ex. kubectl describe deployment/nginx
7. To check what kind of services are currently in use:
kubectl get services

8. To get details about all those services, such as what kind of services you
have configured, IPs, endpoints, and on which port the pod is running:

kubectl describe services <service name>

Ex. kubectl describe services nginx
9. kubectl delete: Delete resources by filenames, stdin, resources and names,
or by resources and label selector.
kubectl delete service <name>
kubectl delete deployment <name>

10. If a particular node is not functioning well, you can remove that node
with the following commands.
Drain: stop all the pods and containers running on that node.

kubectl drain <node name>

Delete: once all the services running on the node are stopped, you can delete
it by using the following command.
kubectl delete node <node name>

11. The node can re-join the cluster by using the join command noted earlier.

Conclusion:
Deployment of the first Kubernetes application and Kubectl commands to manage the Kubernetes
cluster have been executed successfully.
Mahavir Education Trust's

Shah & Anchor Kutchhi Engineering


College, Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 5

Date of Performance: 09/08/2023
Date of Submission: 23/08/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 05
Aim: To understand terraform lifecycle, core concepts/terminologies and install it
on a Linux Machine.

Lab Outcome No.: ITL504.1, ITL504.3

Lab Outcome: To understand the fundamentals of Cloud Computing and be


fully proficient with Cloud based DevOps solution deployment options to meet
your business requirements.
To apply best practices for managing infrastructure as code environments
and use terraform to define and deploy cloud infrastructure.

Theory:
Introduction to Terraform:
Terraform is an infrastructure as code (IaC) tool that allows you to build,
change, and version infrastructure safely and efficiently. This includes low-level
components such as compute instances, storage, and networking, as well as high-
level components such as DNS entries, SaaS features, etc. Terraform can manage
both existing service providers and custom in-house solutions.
What is Terraform used for:
On the one hand, Terraform is used for creating or provisioning new infrastructure
and for managing existing infrastructure.
On the other hand, it can be used to replicate infrastructure, e.g. when you
want to replicate the development setup for a staging or production
environment.
How does Terraform work:
Terraform's architecture has 2 main components:
1) CORE: Terraform's Core takes two input sources, which are your
configuration files (your desired state) and the current state
(which is managed by Terraform). With this information the Core then
creates a plan of what resources need to be created/changed/removed.
2) Provider: The second part of the architecture are providers. Providers
can be IaaS (like AWS, GCP, Azure), PaaS (like Heroku, Kubernetes) or
SaaS services (like Cloudflare). Providers expose resources, which makes
it possible to create infrastructure across all these platforms.
Terraform Lifecycle:
Writing Terraform Code:
The first thing in the terraform workflow is to start with writing your Terraform
configuration just like you write code: in your editor of choice. It’s common
practice to store your work in aversion-controlled repository even when you’re
just operating as an individual.
terraform init:
The first thing that you do after writing your code in Terraform is initializing the
code using thecommand terraform init. This command is used to initialize the
working directory containing Terraform configuration files. It is safe to run this
command multiple times.
You can use the init command for:
- Plugin Installation.
- Child Module Installation.
- Backend Initialization.
terraform plan:
After a successful initialization of the working directory and the
completion of the plugin download, we can create an execution plan
using the terraform plan command. This is a handy way to check whether the
execution plan matches your expectations without making any changes to
real resources or to the state.
If Terraform discovers no changes to resources, then terraform plan
indicates that no changes are required to the real infrastructure.
terraform apply:
After reviewing the execution plan, terraform apply executes the proposed
changes and provisions or updates the real infrastructure to match the
configuration.
Terraform also helps to save the plan to a file for later execution with terraform
apply, which can be useful while applying automation with Terraform.
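Put together, one pass through the lifecycle looks roughly like this (a sketch; tfplan is just an arbitrary name for the saved plan file):

terraform init                 # download providers and initialize the working directory
terraform plan -out=tfplan     # preview the changes and save the plan to a file
terraform apply tfplan         # apply exactly the saved plan
terraform destroy              # tear the managed infrastructure down when finished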

Installation of Terraform:
Step 1: Download terraform
$ wget <download link>
Ex. wget https://releases.hashicorp.com/terraform/1.0.7/terraform_1.0.7_linux_amd64.zip
Step 2: Unzip the downloaded folder
$ unzip <file name>
Ex. $ unzip terraform_1.0.7_linux_amd64.zip
Step 3: Check the version of terraform
$ terraform -v
Step 4: Check the various commands of terraform
$ terraform
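On most Linux machines the unzipped binary also needs to be on the PATH before terraform can be run from any directory (an additional, assumed step that is not part of the original list):

$ sudo mv terraform /usr/local/bin/    # place the binary on the PATH
$ terraform -v                         # should now print the installed version from anywhere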

Conclusion:
Terraform lifecycle and core concepts/terminologies were discussed, and Terraform was
installed on a Linux machine successfully.
Mahavir Education Trust's

Shah & Anchor Kutchhi Engineering


College, Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 6

Date of Performance: 23/08/2023
Date of Submission: 30/08/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 06

Aim: To build, change, and destroy AWS / GCP /Microsoft


Azure/ Digital Ocean infrastructure Using Terraform.

Lab Outcome No.: ITL504.1, ITL504.3

Lab Outcome: To understand the fundamentals of Cloud Computing and


be fully proficient with Cloud based DevOps solution deployment
options to meet your business requirements.
To apply best practices for managing infrastructure as
code environments and use terraform to define and deploy cloud
infrastructure.

Theory:
The terraform {} block contains Terraform settings, including the required
providers Terraform will use to provision your infrastructure. For each
provider, the source attribute defines an optional hostname, a namespace, and
the provider type. Terraform installs providers from the Terraform Registry
by default. In this example configuration, the AWS provider's source is
defined as hashicorp/aws, which is shorthand for
registry.terraform.io/hashicorp/aws. You can also set a version constraint
for each provider defined in the required_providers block. The version
attribute is optional, but we recommend using it to constrain the provider
version so that Terraform does not install a version of the provider that does
not work with your configuration. If you do not specify a provider version,
Terraform will automatically download the most recent version during
initialization.
Providers:
The provider block configures the specified provider, in this case aws.
A provider is a plugin that Terraform uses to create and manage your
resources.
The profile attribute in the AWS provider block refers Terraform to the AWS
credentials stored in your AWS configuration file, which you created when
you configured the AWS CLI. Never hard-code credentials or other secrets in
your Terraform configuration files. Like other types of code, you may share
and manage your Terraform configuration files using source control, so hard-
coding secret values can expose them to attackers.
You can use multiple provider blocks in your Terraform configuration to
manage resources from different providers. You can even use different
providers together. For example, you could pass the IP address of your
AWS EC2 instance to a monitoring resource from DataDog.
Resources:
Use resource blocks to define components of your infrastructure. A
resource might be a physical or virtual component such as an EC2 instance,
or it can be a logical resource such as a Heroku application.
Resource blocks have two strings before the block: the resource type and the
resource name.
In this example, the resource type is aws_instance and the name is app_server.
The prefix of the type maps to the name of the provider. In the configuration,
Terraform manages the aws_instance resource with the aws provider.
Together, the resource type and resource name form a unique ID for the
resource. For example, the ID for your EC2 instance is
aws_instance.app_server. Resource blocks contain arguments which you use
to configure the resource. Arguments can include things like machine sizes, disk
image names, or VPC IDs. Our providers reference documents the required
and optional arguments for each resource. For your EC2 instance, the example
configuration sets the AMI ID to an Ubuntu image, and the instance type to
t2.micro, which qualifies for AWS' free tier. It also sets a tag to give the
instance a name.

Initialize the directory:


When you create a new configuration or check out an existing
configuration from version control, you need to initialize the directory with
terraform init.
Initializing a configuration directory downloads and installs the
providers defined in the configuration, which in this case is the
aws provider.
Terraform downloads the aws provider and installs it in a hidden
subdirectory of your current working directory, named .terraform. The
terraform init command prints out which version of the provider was
installed. Terraform also creates a lock file named .terraform.lock.hcl which
specifies the exact provider versions used, so that you can control when you
want to update the providers used for your project.

Create infrastructure:
Apply the configuration now with the terraform apply command. Terraform
will print output similar to what is shown below. We have truncated some of
the output to save space.
Before it applies any changes, Terraform prints out the execution plan which
describes the actions Terraform will take in order to change your
infrastructure to match the configuration.
The output format is similar to the diff format generated by tools such as Git.
The output has a + next to aws_instance.app_server, meaning that Terraform
will create this resource. Beneath that, it shows the attributes that will be set.
When the value displayed is (known after apply), it means that the value will
not be known until the resource is created. For example, AWS assigns
Amazon Resource Names (ARNs) to instances upon creation, so Terraform
cannot know the value of the arn attribute until you apply the change and the
AWS provider returns that value from the AWS API. Terraform will now pause
and wait for your approval before proceeding. If anything in the plan seems
incorrect or dangerous, it is safe to abort here with no changes made to your
infrastructure. In this case the plan is acceptable, so type yes at the
confirmation prompt to proceed. Executing the plan will take a few minutes
since Terraform waits for the EC2 instance to become available.

Destroy
The terraform destroy command terminates resources managed by your
Terraform project. This command is the inverse of terraform apply in that
it terminates all the resources specified in your Terraform state. It does not
destroy resources running elsewhere that are not managed by the current
Terraform project.
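For example, a cautious teardown can preview the destruction first (both are standard Terraform CLI commands):

terraform plan -destroy    # show what would be removed, without removing anything
terraform destroy          # prompts for confirmation before deleting the resources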

Creating an EC2 instance:


Step 1: Write a script to create an EC2 instance:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  access_key = " "
  secret_key = " "
  region     = "us-east-1"
}

resource "aws_instance" "terraform-ec2" {
  ami           = " "
  instance_type = "t2.micro"
}

Step 2: Save the terraform script with a .tf extension.


Step 3: Execute ‘terraform init’ to initialize the resources.
Step 4: Execute ‘terraform plan’ to see the planned resources.
Step 5: Execute ‘terraform apply’ to apply the configuration, which
will automatically create an EC2 instance based on our configuration.
Step 6: Execute ‘terraform destroy’ to delete the configuration, which
will automatically delete the EC2 instance.
Creating an S3 Bucket using terraform:
Step 1: Write a Terraform script in Atom for creating an S3 bucket on Amazon AWS:

provider "aws" {
  access_key = " "
  secret_key = " "
  region     = "us-east-1"
}

resource "aws_s3_bucket" "name_resource" {
  bucket = "unique name of s3 bucket"
  acl    = "public-read"
  tags = {
    Name        = "My bucket"
    Environment = "Dev"
  }
  versioning {
    enabled = true
  }
}
Follow the terraform workflow same as above to create and destroy infrastructure.
Uploading a file into a bucket:
Step 1: Write a Terraform script in Atom for uploading a file into AWS:

resource "aws_s3_bucket_object" "object" {
  bucket = "your_bucket_name"
  key    = "new_object_key"
  source = "path/to/file"

  # The filemd5() function is available in Terraform 0.11.12 and later.
  # For Terraform 0.11.11 and earlier, use the md5() and file() functions:
  # etag = "${md5(file("path/to/file"))}"
  etag = filemd5("path/to/file")
}
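After terraform apply, the upload can be checked with the AWS CLI (your_bucket_name is the placeholder used in the script above):

aws s3 ls s3://your_bucket_name/    # the new_object_key object should be listed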

Conclusion:
We have successfully created, changed and destroyed AWS infrastructure
using Terraform.
Mahavir Education Trust's
Shah & Anchor Kutchhi Engineering College,
Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 7
Date of Performance: 30/08/2023
Date of Submission: 13/09/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 07
Aim: To understand Static Analysis SAST process and learn to integrate
Jenkins SAST to SonarQube/GitLab.

Lab Outcome No.: ITL504.1, ITL504.4

Lab Outcome: To understand the fundamentals of Cloud Computing and be


fully proficient with Cloud based DevOps solution deployment options to meet
your business requirements.
To identify and remediate application vulnerabilities earlier and help integrate
security in the development process using SAST Techniques

Theory:
SonarQube is a Code Quality Assurance tool that collects and analyzes source code, and provides reports for the code quality of your project. It combines static and dynamic analysis tools and enables quality to be measured continually over time. Everything from minor styling choices to design errors is inspected and evaluated by SonarQube. This provides users with a rich, searchable history of the code to analyze where the code is going wrong and determine whether it is styling issues, code defects, code duplication, lack of test coverage, or excessively complex code. The software analyzes source code from different aspects and drills down into the code layer by layer, moving from the module level down to the class level, with each level producing metric values and statistics that should reveal problematic areas in the source code that need improvement.
SonarQube also ensures code reliability and application security, and reduces technical debt by making your code base clean and maintainable. SonarQube provides support for 27 different languages, including C, C++, Java, JavaScript, PHP, Go, Python, and many more. SonarQube also provides CI/CD integration, and gives feedback during code review with branch analysis and pull request decoration.

A SonarQube instance comprises three components: the SonarQube server, the database that stores the configuration and analysis results, and one or more scanners that run on your build or CI machines.


Installation:
- Pre-requisites:
The only prerequisite for running SonarQube is to have Java (Oracle JRE 11 or OpenJDK 11) installed on your machine.
- Hardware Requirements:
A small-scale (individual or small team) instance of the SonarQube server requires at least 2GB of RAM to run efficiently and 1GB of free RAM for the OS. If you are installing an instance for a large team or an enterprise, please consider the additional recommendations below.
The amount of disk space you need will depend on how much code you
analyze with SonarQube.
SonarQube must be installed on hard drives that have excellent read &
write performance. Most importantly, the "data" folder houses the
Elasticsearch indices on which a huge amount of I/O will be done when
the server is up and running. Great read & write hard drive performance
will therefore have a great impact on the overall SonarQube server
performance.
SonarQube does not support 32-bit systems on the server side. SonarQube
does, however, support 32-bit systems on the scanner side.
- Download “SonarQube” (Community Edition):
https://www.sonarqube.org/downloads/
- Download “SonarQube Scanner” (download as per your machine OS):
https://docs.sonarqube.org/latest/analysis/scan/sonarscanner/

Setup for SonarQube Server:
Unzip both the downloaded files and keep them at a common place.
Add the following to the Path in “System Variables”:
<location>\sonar-scanner-cli-4.6.2.2472-windows\sonar-scanner-4.6.2.2472-windows\bin
Set some configuration inside the scanner config file: inside your “sonarqube-scanner” folder, go to the “conf” folder and find the “sonar-scanner.properties” file. Open it in edit mode.
Add these two basic properties in the “sonar-scanner.properties” file, or if they are already there but commented, uncomment them:
sonar.host.url=http://localhost:9000
sonar.sourceEncoding=UTF-8
Start the sonarqube server:
Open a Command Prompt and, from the terminal itself, go to the folder where we kept the first unzipped folder, i.e. the sonarqube folder > bin > the folder for your OS.
// for example, this is my path
D:\Exp7\sonarqube-9.1.0.47736\sonarqube-9.1.0.47736\bin\windows-x86-64
Here you will find the “sonar.sh” bash file (for Linux/macOS) and “StartSonar.bat” (for Windows). Run StartSonar.bat to start the SonarQube server.

If your terminal shows this output, that means your SonarQube Server is up and

running.
Open any browser, add the following address into address bar, and hit Enter.
http://localhost:9000
The default login and password are both admin. (You can change the password later.)

Setup for SonarQube-Scanner:


Go to the project folder which you want to scan. Create one new file inside your project's root folder with the name “sonar-project”. The extension of the file will be “.properties”:
sonar-project.properties
Add the following basic configuration inside the “sonar-project.properties” file:
sonar.projectKey=<any unique key>
sonar.projectName=<any unique name>
sonar.sourceEncoding=UTF-8
sonar.sources=<list of folders to scan>
sonar.exclusions=<list of folders to exclude from the scan>

The “sonar.sources” and “sonar.exclusions” property values are the lists of folders or files which you want to scan or exclude from the scan. The lists must be comma-separated. If you want to include all files and folders, just mention a dot (.).
Run SonarQube Scanner on your project:
Now you are all set for scanning your code. Open a Command Prompt and, from the Command Prompt, go to the folder path where your project code resides.
// for example, I kept my test project on this path
D:\JavaTest\src\main
Run this command to scan your code:
sonar-scanner        // start the scan
sonar-scanner -h     // to see other options
Once the scanning ends, it will show you the output of the scan along with the path where you can see the scanning details and dashboard data.

Integrate SonarQube with Jenkins:
Log in to Jenkins and install the SonarQube Scanner plugin:
Go to Manage Jenkins –> Manage Plugins –> Available –> SonarQube Scanner

Add the Credentials plugin to store your credentials in Jenkins.

Configure the SonarQube home path:
Go to Manage Jenkins –> Global Tool Configuration –> SonarQube Scanner
Name: sonar_scanner
SONAR_RUNNER_HOME: /opt/sonarqube (your SonarQube directory path)

Now, configure the SonarQube server in Jenkins.

For the integration, you need a SonarQube server authentication token in Jenkins. Log in to your SonarQube server and find the following under the user menu:
Go to My Account –> Security –> Generate Token

Go to Manage Jenkins –> Configure System –> SonarQube Servers
Name: SonarQube
Server URL: not required if it is the same as the default
Server authentication token: add the server authentication token generated above and select it as the server authentication token.

Save it. Now your SonarQube integration with Jenkins is complete. Create a job (following the Jenkins Continuous Integration setup) to test SonarQube and generate a report for your project.

Conclusion:
SonarQube and SonarQube Scanner are successfully installed and integrated with Jenkins.
Mahavir Education Trust's
Shah & Anchor Kutchhi Engineering College,
Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 8
Date of Performance: 13/09/2023
Date of Submission: 27/09/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 08
Aim: Create a Jenkins CICD Pipeline with SonarQube / GitLab Integration to
perform a static analysis of the code to detect bugs, code smells, and security
vulnerabilities on a sample Web
/ Java / Python application.
Lab Outcome No.: ITL504.1, ITL504.4
Lab Outcome: To understand the fundamentals of Cloud Computing and be
fully proficient with Cloud based DevOps solution deployment options to meet
your business requirements.
Create a Jenkins CICD Pipeline with SonarQube / GitLab Integration to perform a static analysis of the code to detect bugs, code smells, and security vulnerabilities on a sample Web / Java / Python application.

Theory:
Jenkins is a free and open-source automation server. It helps automate the parts
of software development related to building, testing, and deploying, facilitating
continuous integration and continuous delivery. It is a server-based system that
runs in servlet containers such as Apache Tomcat. It supports version control
tools, including AccuRev, CVS, Subversion, Git, Mercurial, Perforce,
ClearCase and RTC, and can execute Apache Ant, Apache Maven and sbt-based projects as well as arbitrary shell scripts and Windows batch commands.
SonarQube is an automatic code review tool to detect bugs, vulnerabilities, and code smells in your code. It can integrate with your existing workflow to enable continuous code inspection across your project branches and pull requests.
What is Jenkins Pipeline?
Jenkins Pipeline (or simply "Pipeline") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.
A continuous delivery (CD) pipeline is an automated expression of your process
for getting software from version control right through to your users and
customers. Every change to your software (committed in source control) goes
through a complex process on its way to being released. This process involves
building the software in a reliable and repeatable manner, as well as progressing
the built software (called a "build") through multiple stages of testing and
deployment.
Pipeline provides an extensible set of tools for modeling simple-to-complex
delivery pipelines "as code" via the Pipeline domain-specific language (DSL)
syntax.
The definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile) which in turn can be committed to a project's source control repository. This is the foundation of "Pipeline-as-code": treating the CD pipeline as a part of the application, to be versioned and reviewed like any other code.
Creating a Jenkinsfile and committing it to source control provides a number of immediate benefits:
1. Automatically creates a Pipeline build process for all branches and
pull requests.
2. Code review/iteration on the Pipeline (along with the remaining
source code).
3. Audit trail for the Pipeline.

4. Single source of truth for the Pipeline, which can be viewed and edited by
multiple members of the project.
While the syntax for defining a Pipeline, either in the web UI or with a
Jenkinsfile is the same, it is generally considered best practice to define the
Pipeline in a Jenkinsfile and check that into source control.
Pre-requisites:
• Make sure SonarQube is up and running and do the below steps:
• Make sure SonarQube plug-in installed in Jenkins.
Create a new job in Jenkins with following steps:
1. Click on New Item –> provide the item name –> click on Pipeline –> OK.
2. Scroll down and click on "Pipeline Syntax".
3. In the "Sample Step" section, select Git.
4. Provide the repository URL.
5. Provide the branch.
6. Add the credentials of your Git account, similar to how we added a token for SonarQube.
7. It will generate a pipeline snippet –> copy it and paste it into the pipeline script.
8. Write the following pipeline script and provide all the relevant details (the parts of the script shown in red need to be changed according to your project):
node {
  stage('Cloning from Git') {
    // Check out the sample project from the Git repository configured above.
    checkout([$class: 'GitSCM', branches: [[name: '*/master']], extensions: [],
      userRemoteConfigs: [[credentialsId: 'govindhivrale_git',
        url: 'https://github.com/Govindhivrale/jenk...']]])
  }
  stage('SonarQube Analysis') {
    // 'SonarQube' must match the scanner name configured under Global Tool Configuration.
    // scannerHome resolves to the scanner installation under the Jenkins tools directory
    // (e.g. /var/lib/jenkins/tools/hudson.plugins.sonar.SonarRunnerInstallation/SonarQube).
    def scannerHome = tool 'SonarQube'
    withSonarQubeEnv('SonarQube') {
      sh """${scannerHome}/bin/sonar-scanner \
        -D sonar.projectVersion=1.0-SNAPSHOT \
        -D sonar.login=admin \
        -D sonar.password=admin \
        -D sonar.projectBaseDir=/var/lib/jenkins/workspace/sonarqube_p/ \
        -D sonar.projectKey=project1 \
        -D sonar.sourceEncoding=UTF-8 \
        -D sonar.language=java \
        -D sonar.sources=project/src/main \
        -D sonar.tests=project/src/test \
        -D sonar.host.url=http://localhost:9000/"""
    }
  }
}

9. Click on apply and save the pipeline.


10. Build the pipeline.
11. Once the pipeline is successfully built, we can check the code analysis
on Sonar server.

Conclusion: A Jenkins CICD Pipeline is created with SonarQube / GitLab integration, and a static analysis of the code to detect bugs, code smells, and security vulnerabilities is performed successfully on a sample application.
Mahavir Education Trust's
Shah & Anchor Kutchhi Engineering College,
Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 9
Date of Performance: 27/09/2023
Date of Submission: 04/10/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 09

Aim: To understand Continuous Monitoring, and the installation and configuration of Nagios Core, Nagios Plugins and NRPE (Nagios Remote Plugin Executor) on a Linux machine.

Lab Outcome No.: ITL504.1, ITL504.5


Lab Outcome: To understand the fundamentals of Cloud Computing and be fully
proficient with Cloud based DevOps solution deployment options to meet your
business requirements.
To use Continuous Monitoring Tools to resolve any system errors (low memory,
unreachable server etc.) before they have any negative impact on the business
productivity.

Theory:

What is Nagios?
Nagios is an open source software for continuous monitoring of systems, networks, and infrastructure. It runs plugins stored on a server which is connected with a host or another server on your network or the Internet. In case of any failure, Nagios alerts about the issues so that the technical team can perform the recovery process immediately. Nagios is used for continuous monitoring of systems, applications, services and business processes in a DevOps culture.

Why We Need Nagios tool?


Here are the important reasons to use the Nagios monitoring tool:

 Detects all types of network or server issues
 Helps you to find the root cause of the problem, which allows you to get a permanent solution to the problem
 Active monitoring of your entire infrastructure and business processes
 Allows you to monitor and troubleshoot server performance issues
 Helps you to plan for infrastructure upgrades before outdated systems create failures
 You can maintain the security and availability of the service
 Automatically fixes problems as they are detected
History of Nagios
1996 - Ethan Galstad uses the ideas and architecture of his earlier work to begin building a new application which runs under the Linux OS.
1999 - The plugins that were originally distributed as a part of the NetSaint distribution are soon released as a separate Nagios Plugins project.
2002 - Ethan renames the project to "Nagios" because of trademark issues with the name "NetSaint".
2005 - Nagios becomes SourceForge.net Project of the Month in June.
2009 - Nagios Enterprises releases its first commercial version, Nagios XI.
2012 - Nagios is again renamed as Nagios Core.
2016 - Nagios Core surpasses 7,500,000 downloads directly from the SourceForge.net website.

Features of Nagios
Following are the important features of the Nagios monitoring tool:

 Relatively scalable, manageable, and secure
 Good log and database system
 Informative and attractive web interfaces
 Automatically sends alerts if a condition changes
 If the services are running fine, there is no need to check whether the host is alive
 Helps you to detect network errors or server crashes
 You can troubleshoot the performance issues of the server
 The issues, if any, can be fixed automatically as they are identified during the monitoring process
 You can monitor the entire business process and IT infrastructure with a single pass
 The product's architecture makes it easy to write new plugins in the language of your choice
 Nagios allows you to read its configuration from an entire directory, which helps you to decide how to define individual files
 Utilizes topology to determine dependencies
 Monitors network services like HTTP, SMTP, SNMP, FTP, SSH, POP, etc.
 Helps you to define the network host hierarchy using parent hosts
 Ability to define event handlers which run during service or host events, for proactive problem resolution
 Support for implementing redundant monitoring hosts

Nagios Architecture
Nagios has a client-server architecture. Usually, on a network, the Nagios server runs on one host, and plugins run on all the remote hosts which should be monitored.

Nagios Architecture

1. The scheduler is a component of the server part of Nagios. It sends a signal to execute the plugins on the remote host.
2. The plugin gets the status from the remote host.
3. The plugin sends the data to the process scheduler.
4. The process scheduler updates the GUI, and notifications are sent to admins.

Plugins
Nagios plugins provide low-level intelligence on how to monitor anything and everything with Nagios Core. Plugins operate as standalone applications, but they are designed to be executed by Nagios Core. Nagios Core connects to Apache, which uses CGI to display the results; moreover, a database can be connected to Nagios to keep log data.
How do plugins work?

Consider the above example:

 check_nt is a plugin to monitor a Windows machine; it is usually available on the monitoring server.
 NSClient++ should be installed on every Windows machine that you want to monitor.
 There is an SSL connection between the server and the host, which continuously exchange information with each other.

Likewise, the NRPE (Nagios Remote Plugin Executor) and NSCA plugins are used to monitor Linux and Mac OS X respectively.
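
As a rough illustration of the plugin contract described above, the following is a minimal Nagios-style plugin sketched in Python. Nagios Core relies only on the single status line printed to stdout and the exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN); the load-average metric and the thresholds used here are illustrative assumptions, not part of any specific installation step.

#!/usr/bin/env python3
# check_load_sketch.py - minimal Nagios-style plugin (illustrative sketch).
# Nagios Core only cares about the status line on stdout and the exit code:
#   0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN
import os
import sys

WARNING_LOAD = 2.0    # illustrative warning threshold (assumption)
CRITICAL_LOAD = 4.0   # illustrative critical threshold (assumption)

def main() -> int:
    try:
        load1 = os.getloadavg()[0]   # 1-minute load average (Linux/macOS only)
    except OSError:
        print("UNKNOWN - could not read load average")
        return 3

    # Text after '|' follows the usual plugin performance-data convention.
    perfdata = f"load1={load1:.2f};{WARNING_LOAD};{CRITICAL_LOAD}"
    if load1 >= CRITICAL_LOAD:
        print(f"CRITICAL - load average {load1:.2f} | {perfdata}")
        return 2
    if load1 >= WARNING_LOAD:
        print(f"WARNING - load average {load1:.2f} | {perfdata}")
        return 1
    print(f"OK - load average {load1:.2f} | {perfdata}")
    return 0

if __name__ == "__main__":
    sys.exit(main())

Placed in the plugins directory (for example /usr/local/nagios/libexec) and referenced from a command definition, such a script would be executed by Nagios Core on a schedule exactly like the bundled check_* plugins.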

Application of Nagios
The Nagios application monitoring tool is a health check and monitoring system for a typical data centre, comprising all types of equipment such as:

Server & Network Nodes


Application monitoring from a single console
Application Monitoring with transaction-level insights
Monitor Middleware & Messaging Components
Customizable Reports and Dashboards
UPS Backup System
Bio-Metric Identification System
Temperature & Humidity Control System (Sensing Mechanism)
CCTV/NVR System
Storage Subsystem (NAS&SAN)

Disadvantages of Using Nagios

 Important features like wizards or an interactive dashboard are only available in Nagios XI, which is quite an expensive tool
 Nagios Core has a confusing interface
 There are many configuration files which are very hard for users to configure
 Nagios can't monitor network throughput
 The tool does not allow you to manage the network; it only allows you to monitor the network
 Nagios makes no difference between various devices like servers, routers, or switches, as it treats every device as a host

Conclusion:
Continuous monitoring is a process to detect, report and respond to all the issues and attacks which occur in the infrastructure. Nagios offers effective monitoring of your entire infrastructure and business processes.
Mahavir Education Trust's
Shah & Anchor Kutchhi Engineering College,
Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 10
Date of Performance: 04/10/2023
Date of Submission: 11/10/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 10

Aim: To perform Port and Service monitoring, and Windows/Linux server monitoring using Nagios.

Lab Outcome No.: ITL504.1, ITL504.5

Lab Outcome: To understand the fundamentals of Cloud Computing and be fully


proficient with Cloud based DevOps solution deployment options to meet your
business requirements.
To use Continuous Monitoring Tools to resolve any system errors (low memory,
unreachable server etc.) before they have any negative impact on the business
productivity.

Theory:

What is Nagios?
Nagios is an open source software for continuous monitoring of systems, networks, and infrastructure. It runs plugins stored on a server which is connected with a host or another server on your network or the Internet. In case of any failure, Nagios alerts about the issues so that the technical team can perform the recovery process immediately.
Nagios is used for continuous monitoring of systems, applications, services and business processes in a DevOps culture.

Overview of Nagios

What should Nagios monitor? That depends on the role of the server. Every monitoring system should watch CPU, memory, disk space and network activity, as these core metrics combine to provide the overall health of the system. But beyond that, specific monitoring metrics can vary. For example, if an Internet Information Services (IIS) web server is installed, an admin probably would want to monitor the availability of the associated web site or application.

Nagios can support both these scenarios: It can monitor common core metrics, as
well as perform the more specific and customized checks that pertain to specific
roles and applications. The tool can also monitor across multi-OS environments.
The Nagios monitoring tool comes in two primary versions: Nagios Core and
Nagios XI. The former is a free, open source version, and the latter is a commercial
version that offers additional features around graphs and reporting, capacity
planning and more.

Below is an overview of how to monitor Windows Server with Nagios Core.

Use Nagios agents for Windows monitoring

To monitor Windows Server with Nagios, the Nagios monitoring server must be a
Linux system. Once admins install and configure this setup, they can create
monitors for Windows machines with the Nagios Remote Data Processor (NRDP)
agent.

Although the Nagios server itself installs on a Linux box, admins can install an agent on Windows systems to monitor those systems and report back to the main Nagios server. This agent, the Nagios Cross Platform Agent (NCPA), has a straightforward installation process, as detailed later in this section. Installation of the NCPA is one of the first steps to monitor Windows systems with Nagios -- but, before that, install the NRDP listener to support passive checks. Nagios XI pre-installs this, but for Nagios Core, admins must complete this step manually.

To perform these steps as outlined below, use the Nagios server.

Install the NRDP agent

In this simplified example, we want to install the listener onto the Nagios server, using an Ubuntu system. Replace {version} with the current version of the NRDP service:
apt-get update

apt-get install -y php-xml

cd /tmp

wget -O nrdp.tar.gz https://github.com/NagiosEnterprises/nrdp/archive/{version}.tar.gz

tar xzf nrdp.tar.gz

cd /tmp/nrdp-{version}/

sudo mkdir -p /usr/local/nrdp

sudo cp -r clients server LICENSE* CHANGES* /usr/local/nrdp

sudo chown -R na

sudo nano /usr/local/nrdp/server/config.inc.php

In the /usr/local/nrdp/server/config.inc.php file, generate a list of tokens permitted to send data. Define one or more tokens -- these are arbitrary and can be set in any way:

$cfg['authorized_tokens'] = array(
  "randomtoken1",
  "randomtoken2",
);

Finally, restart Apache to enable the changes to take effect:

sudo cp nrdp.conf /etc/apache2/sites-enabled/
sudo systemctl restart apache2.service

Test NRDP agent

Navigate to the Nagios server and the NRDP listener, such as http://10.0.0.10/nrdp. Use the token previously retrieved from the authorized_tokens section of the configuration file /usr/local/nrdp/server/config.inc.php to send the following JSON to test the listener:

{
  "checkresults": [
    {
      "checkresult": {
        "type": "host",
        "checktype": "1"
      },
      "hostname": "myhost",
      "state": "0",
      "output": "Success | perfdata=1;"
    },
    {
      "checkresult": {
        "type": "service",
        "checktype": "1"
      },
      "hostname": "myhost",
      "servicename": "myservice",
      "state": "1",
      "output": "Failure | perfdata=1;"
    }
  ]
}


Install the check_ncpa.py plugin

The check_ncpa.py plugin enables Nagios to monitor the installed NCPAs on the hosts. Follow these steps to install the plugin:

1. Download the plugin.

2. Add the file to the standard Nagios Core location,


/usr/local/nagios/libexec.

Apply these agent configurations

After the NRDP installation, install the NCPA. Download the installation files and
run the install.

For the listener configuration, follow these guidelines:

 API Token: Create an arbitrary token to query the API interface.

 Bind IP: Leave as default to listen on all addresses.

 Bind Port: Leave as default.

 SSL Version: Leave as default of TLSv1.2.

 Log Level: Leave as default.

For the NRDP passive configuration, apply the following:

 URL: Use the IP or host name of the Nagios server that hosts the
installed NRDP agent.

 NRDP Token: Use the token retrieved from


/usr/local/nrdp/server/config.inc.php.

 Hostname: Replace with the host name of the system.


 Check Interval and Log Level: Leave as default.

Implement Windows monitoring

After agent setup and configuration, the next step is to define the monitoring rules for the Windows Server -- a process that can vary, and can be extensive, depending on an enterprise's needs.

Determine which metrics matter and require monitoring. Then, define alerting
intervals and threshold values. Ultimately, monitoring rules will determine the
actions admins should take when metrics hit, or back off, from those thresholds.
Additionally, define contact groups to customize who receives the notifications.

Create a NCPA check

We need to create a simple command to use the check_ncpa.py plugin, and normally
it lives here:
/usr/local/nagios/etc/commands.cfg.

define command {
    command_name    check_ncpa
    command_line    $USER1$/check_ncpa.py -H $HOSTADDRESS$ $ARG1$
}

The final step to monitor Windows Server with Nagios is to create a simple CPU check in /usr/local/nagios/etc/ncpa.cfg.

define host {
    host_name                NCPA Host 1
    address                  10.0.0.100
    check_command            check_ncpa!-t 'mytoken' -P 5693 -M system/agent_version
    max_check_attempts       5
    check_interval           5
    retry_interval           1
    check_period             24x7
    contacts                 nagiosadmin
    notification_interval    60
    notification_period      24x7
    notifications_enabled    1
    icon_image               ncpa.png
    statusmap_image          ncpa.png
    register                 1
}

define service {
    host_name                NCPA Host 1
    service_description      CPU Usage
    check_command            check_ncpa!-t 'mytoken' -P 5693 -M cpu/percent -w 20 -c 40 -q 'aggregate=avg'
    max_check_attempts       5
    check_interval           5
    retry_interval           1
    check_period             24x7
    notification_interval    60
    notification_period      24x7
    contacts                 nagiosadmin
    register                 1
}

Challenges with Nagios Windows monitoring

Since Nagios was primarily designed for Linux, it does have some Windows monitoring limitations. However, the Nagios agent for Windows has been around and actively developed for a long time. While it might not cover all check use cases, especially around services, Nagios does have an extensive and evolving plugin ecosystem.
Protocols

The default protocols used by Nagios are as given below:

 http(s), ports 80 and 443 − The product interfaces are web-based in Nagios. Nagios agents can use http to move data.
 snmp, ports 161 and 162 − snmp is an important part of network monitoring. Port 161 is used to send requests to nodes and port 162 is used to receive results.
 ssh, port 22 − Nagios is built to run natively on CentOS or RHEL Linux. Administrators can log in to Nagios through SSH whenever they need to and perform checks.

Ports

The default ports used by common Nagios plugins are as given below:

 check_nt (NSClient++) 12489
 NRPE 5666
 NSCA 5667
 NCPA 5693
 MSSQL 1433
 MySQL 3306
 PostgreSQL 5432
 MongoDB 27017, 27018
 OracleDB 1521
 Email (SMTP) 25, 465, 587
 WMI 135, 445 / additional dynamically-assigned ports in the 1024-1034 range
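
To relate these ports back to the aim of this experiment, the Python sketch below shows roughly what a TCP port check such as check_tcp does: attempt a connection and report OK or CRITICAL. The host address and the selection of ports are illustrative assumptions.

# Rough sketch of what a TCP port/service check does under the hood.
# The host address and port selection below are illustrative assumptions.
import socket

HOST = "10.0.0.100"   # host being monitored (assumed)
PORTS = {"NRPE": 5666, "NCPA": 5693, "MySQL": 3306}

def check_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for service, port in PORTS.items():
    status = "OK" if check_port(HOST, port) else "CRITICAL"
    print(f"{status} - {service} port {port} on {HOST}")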

Conclusion:
Nagios allows application monitoring from a single console with transaction-level insights. Note that the tool does not let you manage the network; it only allows you to monitor it.
Mahavir Education Trust's
Shah & Anchor Kutchhi Engineering College,
Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 11
Date of Performance: 11/10/2023
Date of Submission: 18/10/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 11.

Aim: To understand AWS Lambda, its workflow and various functions, and create your first Lambda functions using Python / Java / Nodejs.

Lab Outcome No.: ITL504.1, ITL504.6

Lab Outcome: To understand the fundamentals of Cloud Computing and be fully


proficient with Cloud based DevOps solution deployment options to meet your
business requirements.
To understand AWS Lambda, its workflow, various functions and create your first
Lambda functions using Python / Java / Nodejs.

Theory:
What is Serverless?
Serverless is a term that generally refers to serverless applications. Serverless applications are ones that don't need any server provisioning and do not require you to manage servers.
What is a Lambda function?
The code you run on AWS Lambda is called a "Lambda function." After you create your Lambda function, it is always ready to run as soon as it is triggered, similar to a formula in a spreadsheet. Each function includes your code as well as some associated configuration information, including the function name and resource requirements.
Lambda functions are "stateless", with no affinity to the underlying infrastructure, so that Lambda can rapidly launch as many copies of the function as needed to scale to the rate of incoming events.
After you upload your code to AWS Lambda, you can associate your function with specific AWS resources, such as a particular Amazon S3 bucket, Amazon DynamoDB table, Amazon Kinesis stream, or Amazon SNS notification. Then, when the resource changes, Lambda will execute your function and manage the compute resources as needed to keep up with incoming requests.

How does AWS Lambda work?


Each Lambda function runs in its own container. When a function is created, Lambda packages it into a new container and then executes that container on a multi-tenant cluster of machines managed by AWS. Before the functions start running, each function's container is allocated its necessary RAM and CPU capacity. Once the functions finish running, the RAM allocated at the beginning is multiplied by the amount of time the function spent running. Customers are then charged based on the allocated memory and the amount of run time the function took to complete.
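
As a small worked example of that billing idea (allocated memory multiplied by execution time), the Python sketch below computes the GB-seconds for one hypothetical invocation; the per-GB-second price used is purely illustrative, since actual AWS Lambda pricing varies by region and changes over time.

# Worked example of the Lambda billing idea: allocated memory x execution time.
# The price per GB-second is an illustrative assumption, not current AWS pricing.
allocated_memory_gb = 512 / 1024      # function configured with 512 MB
execution_time_seconds = 1.2          # how long this invocation ran

gb_seconds = allocated_memory_gb * execution_time_seconds
illustrative_price_per_gb_second = 0.0000167

print(f"Compute used: {gb_seconds:.3f} GB-seconds")
print(f"Approximate compute charge: ${gb_seconds * illustrative_price_per_gb_second:.8f}")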

The entire infrastructure layer of AWS Lambda is managed by AWS. Customers don't get much visibility into how the system operates, but they also don't need to worry about updating the underlying machines, avoiding network contention, and so on; AWS takes care of this itself.

And since the service is fully managed, using AWS Lambda can save you time on operational tasks. When there is no infrastructure to maintain, you can spend more time working on the application code, even though this also means you give up the flexibility of operating your own infrastructure.

One of the distinctive architectural properties of AWS Lambda is that many instances of the same function, or of different functions from the same AWS account, can be executed concurrently. Moreover, the concurrency can vary according to the time of day or the day of the week, and such variation makes no difference to Lambda; you only get charged for the compute your functions use. This makes AWS Lambda a good fit for deploying highly scalable cloud computing solutions.

Advantages of AWS Lambda:


• Users can run their applications either from the web or on a mobile platform.
• Lambda makes use of the AWS Identity and Access Management (IAM) module to ensure that only the right users or groups get access to the application or function.
• Lambda speeds up the execution process and scales your application or code by executing the code in response to triggering events.
• Developers don't have to focus on infrastructure to run an application, allowing them to focus on business logic.
• Strong APIs enable user applications to easily integrate with innovative AWS services like AI and machine learning, to develop intelligent business applications or add intelligence into your applications.
Process For Creating AWS Lambda Function:
Step 1: Create an IAM Role for Lambda
Lambda requires you to assign an AWS Identity and Access Management (IAM) role when you create a Lambda function, in the same way Step Functions requires you to assign an IAM role when you create a state machine.
Attach the following policies to this IAM role.

Step 2: Create a Lambda Function


Your Lambda function receives input (a name) and returns a greeting that includes the input value.

1. Open the Lambda console and choose Create a function.

2. In the Create function section, choose Author from scratch.


3. In the Basic information section, configure your Lambda function:
a. For Function name, enter HelloFunction.
b. For Runtime, choose Node.js 12.x.
c. For Role, select Choose an existing role.
d. For Existing role, select the Lambda role that you created earlier.

Note: If the IAM role that you created doesn't appear in the list, the role might still need a few minutes to propagate to Lambda.

e. Choose Create function.

Step 3: Write your code in the indicated code editor section.

Note: Use a simple piece of code to test your function; try to avoid loops, or if you do execute any loops, be careful and make sure your code does not enter an infinite loop, otherwise your Lambda function will run endlessly. (You can write the code in any language supported by Lambda; a minimal Python sketch is given below.)
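
The console walkthrough above selects Node.js 12.x as the runtime; since the aim and the conclusion of this experiment also mention Python, the sketch below shows a minimal handler for a Python 3.x runtime instead. With the Python console defaults, the code lives in lambda_function.py and the handler is configured as lambda_function.lambda_handler; the greeting logic itself is just an illustrative assumption.

# Minimal Python Lambda handler sketch (for a Python 3.x runtime).
# With console defaults this file is lambda_function.py and the handler
# setting is lambda_function.lambda_handler.
import json

def lambda_handler(event, context):
    # 'event' carries the input; we assume an optional "name" key for the greeting.
    name = event.get("name", "World") if isinstance(event, dict) else "World"
    message = f"Hello, {name}!"
    print(message)   # print output appears in the CloudWatch log stream
    return {
        "statusCode": 200,
        "body": json.dumps({"message": message}),
    }

Deploying this and invoking it with a test event such as {"name": "DevOps"} should show the greeting both in the execution result tab and in the CloudWatch logs described in the next steps.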

Step 4: Deploy and test your code.

1. You can write any simple program instead of just a print statement.
2. Deploy and test your code.
3. You can see the execution result in the 'Execution result' tab.

Step5: Monitor Logs using CloudWatch event


1. Click on Monitor tab
2. Click on ‘View logs in CloudWatch’

Once you click on ‘View logs in CloudWatch’, a new tab will open with the name of your Lambda function along with the series of events that happened with that Lambda function. You can check everything which has been executed with that function.
Click on any of the log streams and you will be able to see all the activities related to that log.

Conclusion: We have studied AWS Lambda, its workflow and various functions, and created our first Lambda function using Python.
Mahavir Education Trust's
Shah & Anchor Kutchhi Engineering College,
Chembur, Mumbai 400 088.
UG Program in Information Technology

Experiment No: 12
Date of Performance: 18/10/2023
Date of Submission: 23/10/2023

Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 12

Aim: To create a Lambda function which will log “An Image has been added” once you add an object to a specific bucket in S3.

Lab Outcome No.: ITL504.1, ITL504.6

Lab Outcome: To understand the fundamentals of Cloud Computing and be fully


proficient with Cloud based DevOps solution deployment options to meet your
business requirements.
To create a Lambda function which will log “An Image has been added” once you add an object to a specific bucket in S3.

Theory:
The Amazon S3 service is used for file storage, where you can upload or remove files. We can trigger AWS Lambda on S3 when there are any file uploads in S3 buckets. AWS Lambda has a handler function which acts as the start point for the AWS Lambda function. The handler has the details of the events. In this experiment, let us see how to use AWS S3 to trigger an AWS Lambda function when we upload files to an S3 bucket.

Steps for Using AWS Lambda Function with Amazon S3

To start using AWS Lambda with Amazon S3, we need the following −

1. Create S3 Bucket
2. Create role which has permission to work with s3 and lambda
3. Create lambda function and add s3 as the trigger.
Let us see these steps with the help of an example which shows the basic interaction between Amazon S3 and AWS Lambda:
The user will upload a file to the Amazon S3 bucket.
Once the file is uploaded, it will trigger the AWS Lambda function in the background, which will display an output in the form of a console message that the file is uploaded.
The user will be able to see the message in CloudWatch logs once the file is uploaded. The block diagram that explains the flow of the example is shown here −
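
As an optional illustration of the "user uploads a file" step in this flow, the short boto3 sketch below uploads a local image to the bucket; the bucket name and the file paths are placeholder assumptions, and the same upload can equally be performed from the S3 console.

# Optional sketch: upload an image to the bucket that has the Lambda trigger.
# The bucket name and file paths are placeholders - replace with your own values.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="sample-image.jpg",         # local file to upload (assumed)
    Bucket="your-trigger-bucket-name",   # bucket configured in the S3 trigger
    Key="images/sample-image.jpg",       # object key created in the bucket
)
print("Upload complete - this should invoke the Lambda function.")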
Step 1: Create an S3 bucket.

Step 2: Create a role that works with S3 and Lambda.

Step 3: Create a Lambda function and add S3 as the trigger.

1. Navigate to the Lambda function dashboard and select the Lambda function on which you intend to set the trigger.
2. Click on Add trigger.
3. Select S3 from the dropdown list.
4. Fill in the details such as bucket name, event type, prefix and suffix, and click on Add.
5. Once the trigger has been added, you can see it along with the Lambda function.
6. Write the code for the Lambda function and deploy the changes (a minimal sketch is given after this list).
7. Now upload a file to the S3 bucket on which we have set the trigger.
8. You can check the logs of all the activities in CloudWatch.
9. Select the recent log stream.
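
A minimal sketch of the function code for step 6 is given below. It assumes a Python runtime and the standard S3 put-event structure (Records[*].s3.bucket.name and Records[*].s3.object.key), and it simply logs the required message for every object that arrives.

# Minimal Python sketch for step 6: log a message whenever an object is added.
# Assumes a Python runtime and the standard S3 event structure.
import urllib.parse

def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # This print statement is what appears in the CloudWatch log stream.
        print(f"An Image has been added: s3://{bucket}/{key}")
    return {"status": "logged"}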

Conclusion: A Lambda function which logs “An Image has been added” has been created successfully using an S3 bucket trigger.
