By the end of the educational experience our students will be able to:
1. The Information Technology graduates will be able to analyse, design, develop, test and apply management principles and mathematical foundations in the development of IT-based solutions for real-world and open-ended problems.
2. The Information Technology graduates will be able to perform various roles in creating innovative career paths: to be an entrepreneur or a successful professional, or to pursue higher studies, with realization of moral values and ethics.
Justification:
PSO-PO Mapping | Justification
PSO1-PO1 | Engineering knowledge is the basic need for developing and solving IT-based solutions for real-world and open-ended problems.
PSO1-PO2 | After knowledge has been identified and applied, problem analysis must be performed before any solution can be developed.
PSO1-PO3 | Solutions for identified problems are developed with appropriate consideration for public health and safety, and for cultural, societal, and environmental concerns.
PSO1-PO4 | Research methods and analysis of data must be performed on the developed solution to provide valid conclusions.
PSO1-PO5 | Appropriate tools and techniques should be chosen to model a solution for a real-world problem.
PSO1-PO7 | New IT approaches must demonstrate the knowledge of, and need for, sustainable development.
PSO1-PO11 | Understanding engineering and management principles and applying them to one's own work, as a member and leader of a team, to manage projects in multidisciplinary environments, is to be practiced when performing various roles in team activity.
PSO2-PO6 | Development of a solution must be relevant to professional engineering practice and must address societal, health, safety, legal and cultural issues.
PSO2-PO8 | Ethical principles, commitment to professional ethics, and the responsibilities and norms of engineering practice should be carried through the analysis and testing of any new approach.
PSO2-PO9 | Knowledge-based systems are built effectively when one works as an individual, as a member or leader of diverse teams, and in multidisciplinary settings.
PSO2-PO10 | Leading a group to develop a system requires effective reports, design documentation, effective presentations, and the ability to give and receive clear instructions.
PSO2-PO12 | Research, data analysis, and testing that apply new approaches to the field of IT ought to recognize the need for life-long learning in the broadest context of technological change.
Experiment No: 1
Date of Performance: 12/07/2023
Date of Submission: 19/07/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO. 01
Aim: To understand the benefits of cloud infrastructure, set up the AWS Cloud9 IDE, launch it, and perform a collaboration demonstration.
Theory:
Benefits
AWS Cloud9 gives you the flexibility to run your development environment on a
managed Amazon EC2 instance or any existing Linux server that supports SSH.
This means that you can write, run, and debug applications with just a browser,
without needing to install or maintain a local IDE. The Cloud9 code editor and
integrated debugger include helpful, time-saving features such as code hinting,
code completion, and step-through debugging. The Cloud9 terminal provides a
browser-based shell experience enabling you to install additional software, do a git
push, or enter commands.
CODE TOGETHER IN REAL TIME
AWS Cloud9 makes collaborating on code easy. You can share your development
environment with your team in just a few clicks and pair program together. While
collaborating, your team members can see each other type in real time, and
instantly chat with one another from within the IDE.
BUILD SERVERLESS APPLICATIONS WITH EASE
AWS Cloud9 makes it easy to write, run, and debug serverless applications. It
preconfigures the development environment with all the SDKs, libraries, and plug-
ins needed for serverless development. Cloud9 also provides an environment for
locally testing and debugging AWS Lambda functions. This allows you to iterate
on your code directly, saving you time and improving the quality of your code.
START NEW PROJECTS QUICKLY
AWS Cloud9 makes it easy for you to start new projects. Cloud9’s development
environment comes prepackaged with tooling for over 40 programming languages,
including Node.js, JavaScript, Python, PHP, Ruby, Go, and C++. This enables you
to start writing code for popular application stacks within minutes by eliminating
the need to install or configure files, SDKs, and plug-ins for your development
machine. Because Cloud9 is cloud-based, you can easily maintain multiple
development environments to isolate your project’s resources.
Steps:
Note: Please keep a record of your AWS credentials and 12-digit account number.
Step 9: Now click on Add permissions and select Attach policies; then search for the Cloud9-related policies, select the AWSCloud9EnvironmentMember and AWSCloud9Administrator policies, and add them.
Step 10: Go back to the AWS Management Console and sign out of the root account.
Step 11: Sign in as the IAM user created before by providing the 12-digit Account ID and credentials.
Step 12: Find the AWS Cloud9 service in the Services console.
Step 13: Create an environment and provide the details for the environment as shown below.
Step 14: Keep all the default settings as given below.
Step 15: Review the settings and create the environment.
Step 16: It will take a few minutes to create the AWS instance for your Cloud9 environment.
Step 17: Open the Cloud9 IDE instance and see the welcome page.
Step 18: At the bottom of the IDE, Cloud9 also gives you an AWS CLI terminal for command operations; here we checked the git version and IAM user details (see the sketch after these steps).
Step 19: Upload the website folder by selecting Upload Local Files in the File menu.
Step 20: Edit the .html file and save the changes.
Step 21: See the preview of index.html by selecting the Preview button; explore it in the browser as well.
Step 22: Create another IAM user by signing in to the root user account, add the user in the IAM Management Console, and follow the procedure of Step 4 to Step 9.
Step 23: Sign in to the IAM user created initially and open the Cloud9 environment IDE.
Step 24: Click on the Share button and invite the new user by providing the IAM user name.
Step 25: Allow RW access to the user and click on OK.
Step 26: Now open your browser's incognito window and log in with the newly created IAM user.
Step 27: Go to the Cloud9 service, open the environment shared with you, and open the IDE.
Step 28: Open both IAM users' Cloud9 IDEs together in the same window.
Step 29: Edit the code in both users' IDEs and see the changes.
Step 30: You can also do a group chat within the team.
Step 31: You can also explore the settings, where you can update the permissions of your teammates from RW to R only, or remove a user altogether.
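Step 18 above mentions checking the git version and IAM user details from the built-in terminal; a minimal sketch of those checks, assuming the AWS CLI preinstalled in Cloud9:

# Verify git is available in the Cloud9 terminal
git --version
# Show which IAM identity the environment is running as
aws sts get-caller-identity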
Conclusion: Hence the AWS Cloud9 IDE has been set up, Cloud9 IDE has been
launched and collaboration demonstration has been performed.
Experiment No: 2
Date of Performance: 19/07/2023
Date of Submission: 26/07/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 02
Aim: To build an application using AWS CodeBuild and deploy it on an EC2 instance using AWS CodePipeline and AWS CodeDeploy.
Steps:
4. On the Step 2: Choose an Instance Type page, choose the free tier
eligible t2.micro type as the hardware configuration for your instance,
and then choose Next: Configure Instance Details.
5. On the Step 3: Configure Instance Details page, do the following:
11. Choose View Instances to close the confirmation page and return to the console.
12. You can view the status of the launch on the Instances page. When you launch an instance, its initial state is pending. After the instance starts, its state changes to running, and it receives a public DNS name. (If the Public DNS column is not displayed, choose the Show/Hide icon, and then select Public DNS.)
13. It can take a few minutes for the instance to be ready for you to connect to it. Check that your instance has passed its status checks. You can view this information in the Status Checks column.
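The pending/running states in steps 12-13 can also be checked from the command line; a hedged sketch using the AWS CLI (the instance ID is a placeholder):

# Poll the instance state (i-0123456789abcdef0 is a placeholder)
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query "Reservations[].Instances[].State.Name"
# Block until both status checks pass
aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef0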
7. In Step 3: Add build stage, choose Skip build stage, and then accept
the warning message by choosing Skip again. Choose Next.
8. In Step 4: Add deploy stage, in Deploy provider, choose AWS
CodeDeploy. The Region field defaults to the same AWS Region as
your pipeline. In Application name, enter MyDemoApplication, or
choose the Refresh button, and then choose the application name from
the list. In Deployment group, enter MyDemoDeploymentGroup, or
choose it from the list, and then choose Next.
9. In Step 5: Review, review the information, and then choose Create
pipeline.
10. The pipeline starts to run. You can view progress and success and failure messages as the CodePipeline sample deploys a webpage to each of the Amazon EC2 instances in the CodeDeploy deployment.
11. After Succeeded is displayed for the action status, in the status area for the Deploy stage, choose Details. This opens the AWS CodeDeploy console.
12. In the Deployment group tab, under Deployment lifecycle events, choose an instance ID. This opens the EC2 console.
13. On the Description tab, in Public DNS, copy the address, and then paste it into the address bar of your web browser. View the index page for the sample application you uploaded to your S3 bucket.
The following page is the sample application you uploaded to your S3 bucket.
Conclusion: Hence, we built the application using AWS CodeBuild, deployed it on S3 using AWS CodePipeline, and also deployed the application on an EC2 instance using AWS CodeDeploy.
Experiment No: 3
Date of Performance: 26/07/2023
Date of Submission: 02/08/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 03
Aim: To install and configure a Kubernetes cluster on AWS EC2 instances.
Theory:
Kubernetes is an open-source platform for deploying and managing containers. It provides a container runtime, container orchestration, container-centric infrastructure orchestration, self-healing mechanisms, service discovery and load balancing. It's used for the deployment, scaling, management, and composition of application containers across clusters of hosts.
It aims to reduce the burden of orchestrating underlying compute, network, and storage infrastructure, and enable application operators and developers to focus entirely on container-centric workflows for self-service operation. It allows developers to build customized workflows and higher-level automation to deploy and manage applications composed of multiple containers.
Kubernetes Architecture and Concepts:
From a high level, a Kubernetes environment consists of a control plane (master), a distributed storage system for keeping the cluster state consistent (etcd), and a number of cluster nodes (Kubelets).
Kubernetes Control Plane:
The control plane is the system that maintains a record of all Kubernetes objects. It continuously manages object states, responding to changes in the cluster; it also works to make the actual state of system objects match the desired state. As the above illustration shows, the control plane is made up of three major components: kube-apiserver, kube-controller-manager and kube-scheduler. These can all run on a single master node, or can be replicated across multiple master nodes for high availability.
The API Server provides APIs to support lifecycle orchestration (scaling, updates, and so on) for different types of applications. It also acts as the gateway to the cluster, so the API server must be accessible by clients from outside the cluster. Clients authenticate via the API Server, and also use it as a proxy/tunnel to nodes and pods (and services).
Most resources contain metadata, such as labels and annotations, desired state (specification) and observed state (current status). Controllers work to drive the actual state toward the desired state.
There are various controllers to drive state for nodes, replication (autoscaling), endpoints (services and pods), and service accounts and tokens (namespaces). The Controller Manager is a daemon that runs the core control loops, watches the state of the cluster, and makes changes to drive status toward the desired state. The Cloud Controller Manager integrates into each public cloud for optimal support of availability zones, VM instances, storage services, and network services for DNS, routing and load balancing.
The Scheduler is responsible for the scheduling of containers across the nodes in the cluster; it takes various constraints into account, such as resource limitations or guarantees, and affinity and anti-affinity specifications.
Cluster Nodes:
Cluster nodes are machines that run containers and are managed by the master nodes. The Kubelet is the primary and most important controller in Kubernetes. It's responsible for driving the container execution layer, typically Docker.
Pods are one of the crucial concepts in Kubernetes, as they are the key construct that developers interact with. The previous concepts are infrastructure-focused and internal architecture. This logical construct packages up a single application, which can consist of multiple containers and storage volumes. Usually, a single container (sometimes with some helper program in an additional container) runs in this configuration, as shown in the diagram below.
A pod represents a running process on a cluster.
Kubernetes Networking:
Kubernetes has a distinctive networking model for cluster-wide, pod-to-pod networking. In most cases, the Container Network Interface (CNI) uses a simple overlay network (like Flannel) to obscure the underlying network from the pod by using traffic encapsulation (like VXLAN); it can also use a fully-routed solution like Calico. In both cases, pods communicate over a cluster-wide pod network, managed by a CNI provider like Flannel or Calico.
Within a pod, containers can communicate without any restrictions. Containers within a pod exist within the same network namespace and share an IP. This means containers can communicate over localhost. Pods can communicate with each other using the pod IP address, which is reachable across the cluster. Moving from pods to services, or from external sources to services, requires going through kube-proxy.
Kubernetes Tooling and Clients:
Here are the basic tools you should know:
Kubeadm bootstraps a cluster. It's designed to be a simple way for new users to build clusters (more detail on this is in a later chapter).
Kubectl is a tool for interacting with your existing cluster.
Minikube is a tool that makes it easy to run Kubernetes locally.
Step 1: Create two EC2 instances with Ubuntu OS, and attach the following security groups to them. (Rename them as K8s-Master and K8s-Slave.)
1. All Traffic (IPv4)
2. All Traffic (IPv6)
Step 2: Create an IAM user/role with Route53, EC2, IAM and S3 full access.
Step 3: Attach the IAM role that we just created to the Ubuntu server.
Step 4: Connect to both instances using Putty/WinSCP.
docker --version
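The Docker installation command itself is not shown in the text; on Ubuntu, a typical sequence is the following sketch, assuming the distribution's docker.io package:

sudo apt-get update
# Install Docker from the Ubuntu repositories
sudo apt-get install -y docker.io
# Confirm the installation
docker --version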
Install Kubernetes
1. Install the signing key on each server:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add
If you get an error that curl is not installed, install it with:
sudo apt-get install curl
2. Then repeat the previous command to install the signing keys. Repeat for each server node.
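The manual jumps from the signing key straight to checking the kubeadm version; the intermediate repository and package installation is typically the following sketch, using the legacy apt.kubernetes.io repository that the key above belongs to:

# Add the Kubernetes apt repository on every node
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
# Install the cluster tools on every node
sudo apt-get install -y kubeadm kubelet kubectl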
kubeadm version
Kubernetes Deployment
Decide which server to set as the master node. Then enter the command:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all
Once this command finishes, it will display a kubeadm join message at the end. Make a note of the whole entry. This will be used to join the worker nodes to the cluster.
Next, enter the following to create a directory for the cluster:
kubernetes-master:~$ mkdir -p $HOME/.kube
kubernetes-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
kubernetes-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
As indicated in Step 7, you can enter the kubeadm join command on each worker node to connect it to the cluster.
Switch to the worker01 system and enter the command you noted from Step 7. Replace the alphanumeric codes with those from your master server. Repeat for each worker node on the cluster. Wait a few minutes; then you can check the status of the nodes.
Switch to the master server and enter the node-listing command (see the sketch below). The system should display the worker nodes that you joined to the cluster.
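The exact commands appear only as screenshots in the original; their typical forms are sketched below (the token and hash are placeholders printed by kubeadm init):

# On each worker node, paste the join command noted in Step 7
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
# On the master, list the nodes that have joined
kubectl get nodes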
Conclusion:
The Kubernetes cluster has been installed and set up successfully.
Experiment No: 4
Date of Performance: 02/08/2023
Date of Submission: 09/08/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 04
Aim: To install kubectl and execute kubectl commands to manage the Kubernetes cluster, and deploy your first Kubernetes application.
Theory:
What is kubectl?
Before learning how to use kubectl more efficiently, you should have a basic understanding of what it is and how it works.
From a user's point of view, kubectl is your cockpit to control Kubernetes. It allows you to perform every possible Kubernetes operation.
From a technical point of view, kubectl is a client for the Kubernetes API.
The Kubernetes API is an HTTP REST API. This API is the real Kubernetes user interface. Kubernetes is fully controlled through this API. This means that every Kubernetes operation is exposed as an API endpoint and can be executed by an HTTP request to this endpoint. Consequently, the main job of kubectl is to carry out HTTP requests to the Kubernetes API.
The kubectl command line tool lets you control Kubernetes clusters. For configuration, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag. This overview covers kubectl syntax, describes the command operations, and provides common examples. For details about each command, including all the supported flags and subcommands, see the kubectl reference documentation. For installation instructions see installing kubectl.
Syntax:
Use the following syntax to run kubectl commands from your terminal window:
kubectl [command] [TYPE] [NAME] [flags]
kubectl cluster-info
kubectl api-versions
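A few examples of how the kubectl [command] [TYPE] [NAME] [flags] pattern reads in practice (the pod name is a placeholder):

# command=get, TYPE=pods: list all pods in the current namespace
kubectl get pods
# command=describe, TYPE=pod, NAME=nginx-pod
kubectl describe pod nginx-pod
# command=get, TYPE=nodes, with the -o wide flag for extra columns
kubectl get nodes -o wide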
On Master Node:
3. Each Pod has a unique IP address, but those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the Service spec:
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster.
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
ExternalName - Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This type requires v1.7 or higher of kube-dns.
5. If you want more details about a particular pod, run the following command. You can always check the status of your pod through this command; it helps you keep track of the pod's status and the containers within that pod.
6. Similarly, you can check the details of the deployments that you have created.
8. To get details about all those services, such as what kind of services you have configured, IPs, endpoints, and on which port the pod is running.
10. If a particular node is not functioning well, you can remove that node with the following commands (see the sketch after this list).
Drain: stops all the pods and containers running on that node.
Delete: once all the services running on the node are stopped, you can delete it by using the following command.
kubectl delete node <node name>
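The commands for steps 5-10 appear as screenshots in the original; hedged equivalents are sketched below (resource names are placeholders):

# Step 5: details and status of a particular pod
kubectl describe pod <pod-name>
# Step 6: details of the deployments you have created
kubectl get deployments
# Step 8: services with their types, cluster IPs, endpoints and ports
kubectl get services -o wide
# Step 10: drain a misbehaving node before deleting it
kubectl drain <node-name> --ignore-daemonsets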
Experiment No: 5
Date of Performance: 09/08/2023
Date of Submission: 23/08/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 05
Aim: To understand the Terraform lifecycle and core concepts/terminologies, and install Terraform on a Linux machine.
Theory:
Introduction to Terraform:
Terraform is an infrastructure as code (IaC) tool that allows you to build, change, and version infrastructure safely and efficiently. This includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc. Terraform can manage both existing service providers and custom in-house solutions.
What is Terraform used for:
On the one hand, Terraform is used for creating or provisioning new infrastructure and for managing existing infrastructure. On the other hand, it can be used to replicate infrastructure, e.g. when you want to replicate the development setup for a staging or production environment.
How does Terraform work:
Terraform's architecture has 2 main components:
1) Core: Terraform's Core takes two input sources: your configuration files (your desired state) and the current state (which is managed by Terraform). With this information the Core then creates a plan of what resources need to be created/changed/removed.
2) Providers: The second part of the architecture are providers. Providers can be IaaS (like AWS, GCP, Azure), PaaS (like Heroku, Kubernetes) or SaaS services (like Cloudflare). Providers expose resources, which makes it possible to create infrastructure across all these platforms.
Terraform Lifecycle:
Writing Terraform Code:
The first thing in the Terraform workflow is to write your Terraform configuration just like you write code: in your editor of choice. It's common practice to store your work in a version-controlled repository even when you're just operating as an individual.
terraform init:
The first thing that you do after writing your code in Terraform is initializing it using the command terraform init. This command is used to initialize the working directory containing Terraform configuration files. It is safe to run this command multiple times.
You can use the init command for:
- Plugin installation
- Child module installation
- Backend initialization
terraform plan:
After a successful initialization of the working directory and the completion of the plugin download, we can create an execution plan using the terraform plan command. This is a handy way to check whether the execution plan matches your expectations without making any changes to real resources or to the state.
If Terraform discovers no changes to resources, then terraform plan indicates that no changes are required to the real infrastructure.
terraform apply:
The terraform apply command executes the actions proposed in the execution plan and updates the real infrastructure and the state accordingly. Terraform also helps to save the plan to a file for later execution with terraform apply, which can be useful while applying automation with Terraform.
Installation of Terraform:
Step 1: Download Terraform
$ wget <download link>
Ex. $ wget https://releases.hashicorp.com/terraform/1.0.7/terraform_1.0.7_linux_amd64.zip
Step 2: Unzip the downloaded folder
$ unzip <file name>
Ex. $ unzip terraform_1.0.7_linux_amd64.zip
Step 3: Check the version of terraform
$ terraform -v
Step 4: Check the various commands of terraform
$ terraform
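To connect the lifecycle described above to the freshly installed binary, here is a minimal sketch of the full cycle, run inside a directory containing a .tf configuration file:

$ terraform init      # install providers and initialize the backend
$ terraform plan      # preview changes without touching real resources
$ terraform apply     # execute the plan (asks for confirmation)
$ terraform destroy   # tear down everything this configuration manages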
Conclusion:
The Terraform lifecycle and core concepts/terminologies were discussed, and Terraform was installed on a Linux machine successfully.
Experiment No: 6
Date of Performance: 23/08/2023
Date of Submission: 30/08/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 06
Aim: To create, change, and destroy AWS infrastructure using Terraform.
Theory:
The terraform {} block contains Terraform settings, including the required providers Terraform will use to provision your infrastructure. For each provider, the source attribute defines an optional hostname, a namespace, and the provider type. Terraform installs providers from the Terraform Registry by default. In this example configuration, the AWS provider's source is defined as hashicorp/aws, which is shorthand for registry.terraform.io/hashicorp/aws. You can also set a version constraint for each provider defined in the required_providers block. The version attribute is optional, but we recommend using it to constrain the provider version so that Terraform does not install a version of the provider that does not work with your configuration. If you do not specify a provider version, Terraform will automatically download the most recent version during initialization.
Providers:
The provider block configures the specified provider, in this case aws.
A provider is a plugin that Terraform uses to create and manage your
resources.
The profile attribute in the AWS provider block refers Terraform to the AWS credentials stored in your AWS configuration file, which you created when you configured the AWS CLI. Never hard-code credentials or other secrets in your Terraform configuration files. Like other types of code, you may share and manage your Terraform configuration files using source control, so hard-coding secret values can expose them to attackers.
You can use multiple provider blocks in your Terraform configuration to
manage resources from different providers. You can even use different
providers together. For example, you could pass the IP address of your
AWS EC2 instance to a monitoring resource from DataDog.
Resources:
Use resource blocks to define components of your infrastructure. A resource might be a physical or virtual component such as an EC2 instance, or it can be a logical resource such as a Heroku application.
Resource blocks have two strings before the block: the resource type and the resource name. In this example, the resource type is aws_instance and the name is app_server. The prefix of the type maps to the name of the provider. In the configuration, Terraform manages the aws_instance resource with the aws provider. Together, the resource type and resource name form a unique ID for the resource. For example, the ID for your EC2 instance is aws_instance.app_server.
Resource blocks contain arguments which you use to configure the resource. Arguments can include things like machine sizes, disk image names, or VPC IDs. The provider's reference documents the required and optional arguments for each resource. For your EC2 instance, the example configuration sets the AMI ID to an Ubuntu image, and the instance type to t2.micro, which qualifies for the AWS free tier. It also sets a tag to give the instance a name.
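The example configuration described above is not reproduced in the manual; a minimal sketch matching the description, where the region and AMI ID are placeholders:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }
}

provider "aws" {
  profile = "default"   # credentials from your AWS CLI configuration
  region  = "us-west-2" # placeholder region
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0" # placeholder Ubuntu AMI ID
  instance_type = "t2.micro"              # free-tier eligible

  tags = {
    Name = "ExampleAppServerInstance"
  }
}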
Create infrastructure:
Apply the configuration now with the terraform apply command. Terraform will print output similar to what is shown below. We have truncated some of the output to save space.
Before it applies any changes, Terraform prints out the execution plan which describes the actions Terraform will take in order to change your infrastructure to match the configuration. The output format is similar to the diff format generated by tools such as Git.
The output has a + next to aws_instance.app_server, meaning that Terraform will create this resource. Beneath that, it shows the attributes that will be set. When the value displayed is (known after apply), it means that the value will not be known until the resource is created. For example, AWS assigns Amazon Resource Names (ARNs) to instances upon creation, so Terraform cannot know the value of the arn attribute until you apply the change and the AWS provider returns that value from the AWS API.
Terraform will now pause and wait for your approval before proceeding. If anything in the plan seems incorrect or dangerous, it is safe to abort here with no changes made to your infrastructure. In this case the plan is acceptable, so type yes at the confirmation prompt to proceed. Executing the plan will take a few minutes since Terraform waits for the EC2 instance to become available.
Destroy
The terraform destroy command terminates resources managed by your
Terraform project. This command is the inverse of terraform apply in that
it terminates all the resources specified in your Terraform state. It does not
destroy resources running elsewhere that are not managed by the current
Terraform project.
Conclusion:
We have successfully created, changed and destroyed AWS infrastructure using Terraform.
Experiment No: 7
Date of Performance: 30/08/2023
Date of Submission: 13/09/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 07
Aim: To understand the Static Application Security Testing (SAST) process and learn to integrate Jenkins SAST with SonarQube/GitLab.
Theory:
SonarQube is a Code Quality Assurance tool that collects and analyzes source code, and provides reports on the code quality of your project. It combines static and dynamic analysis tools and enables quality to be measured continually over time. Everything from minor styling choices to design errors is inspected and evaluated by SonarQube. This provides users with a rich, searchable history of the code, to analyze where the code is going wrong and determine whether it is styling issues, code defects, code duplication, lack of test coverage, or excessively complex code. The software analyzes source code from different aspects and drills down through the code layer by layer, from the module level down to the class level, with each level producing metric values and statistics that should reveal problematic areas in the source code that need improvement.
SonarQube also ensures code reliability and application security, and reduces technical debt by making your code base clean and maintainable. SonarQube provides support for 27 different languages, including C, C++, Java, JavaScript, PHP, Go, Python, and much more. SonarQube also provides CI/CD integration, and gives feedback during code review with branch analysis and pull request decoration.
Server:
Unzip both the downloaded files and keep them at a common place. Add the following to the Path in "System Variables":
<location>\sonar-scanner-cli-4.6.2.2472-windows\sonar-scanner-4.6.2.2472-windows\bin
Set some configuration inside the sonar-scanner config file: inside your "sonar-scanner" folder, go to the "conf" folder and find the "sonar-scanner.properties" file. Open it in edit mode.
Add these two basic properties in the "sonar-scanner.properties" file, or if they are already there but commented, then uncomment them:
sonar.host.url=http://localhost:9000
sonar.sourceEncoding=UTF-8
Start the SonarQube server:
Open Command Prompt, and from the terminal itself, go to the same folder path where we kept the 1st unzipped folder, i.e., sonarqube folder > bin > respective OS folder.
//for example, this is my path
D:\Exp7\sonarqube-9.1.0.47736\sonarqube-9.1.0.47736\bin\windows-x86-64
Here, you will find the "sonar.sh" Bash file. Run "StartSonar.bat" to start the SonarQube server.
If your terminal shows this output, that means your SonarQube server is up and running.
Open any browser, add the following address into address bar, and hit Enter.
http://localhost:9000
Default login and Password is admin. (You can change the password later).
Save it. Now, your SonarQube integration is completed with Jenkins. Create a
job
(Follow Jenkins – Continuous Integration System) to test SonarQube and
generate a report ofyour project.
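Once the server is up, a project can also be analyzed straight from the command line; a hedged sketch (the project key and token are placeholders for values from your SonarQube instance):

sonar-scanner ^
  -Dsonar.projectKey=my-project ^
  -Dsonar.sources=. ^
  -Dsonar.host.url=http://localhost:9000 ^
  -Dsonar.login=<generated-token>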
Conclusion:
SonarQube and SonarScanner were successfully installed and integrated with Jenkins.
Experiment No: 8
Date of Performance: 13/09/2023
Date of Submission: 27/09/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 08
Aim: Create a Jenkins CICD Pipeline with SonarQube / GitLab Integration to
perform a static analysis of the code to detect bugs, code smells, and security
vulnerabilities on a sample Web
/ Java / Python application.
Lab Outcome No.: ITL504.1, ITL504.4
Lab Outcome: To understand the fundamentals of Cloud Computing and be
fully proficient with Cloud based DevOps solution deployment options to meet
your business requirements.
Theory:
Jenkins is a free and open-source automation server. It helps automate the parts of software development related to building, testing, and deploying, facilitating continuous integration and continuous delivery. It is a server-based system that runs in servlet containers such as Apache Tomcat. It supports version control tools, including AccuRev, CVS, Subversion, Git, Mercurial, Perforce, ClearCase and RTC, and can execute Apache Ant, Apache Maven and sbt-based projects as well as arbitrary shell scripts and Windows batch commands.
SonarQube is an automatic code review tool to detect bugs, vulnerabilities, and code smells in your code. It can integrate with your existing workflow to enable continuous code inspection across your project branches and pull requests.
What is Jenkins Pipeline?
Jenkins Pipeline (or simply "Pipeline") is a suite of plugins which supports implementing and integrating continuous delivery pipelines into Jenkins.
A continuous delivery (CD) pipeline is an automated expression of your process for getting software from version control right through to your users and customers. Every change to your software (committed in source control) goes through a complex process on its way to being released. This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment.
Pipeline provides an extensible set of tools for modeling simple-to-complex delivery pipelines "as code" via the Pipeline domain-specific language (DSL) syntax.
The definition of a Jenkins Pipeline is written into a text file (called a Jenkinsfile) which in turn can be committed to a project's source control repository. This is the foundation of "Pipeline-as-code": treating the CD pipeline as part of the application, to be versioned and reviewed like any other code.
Creating a Jenkinsfile and committing it to source control provides a number of immediate benefits:
1. Automatically creates a Pipeline build process for all branches and pull requests.
2. Code review/iteration on the Pipeline (along with the remaining source code).
3. Audit trail for the Pipeline.
4. Single source of truth for the Pipeline, which can be viewed and edited by multiple members of the project.
While the syntax for defining a Pipeline, either in the web UI or with a Jenkinsfile, is the same, it is generally considered best practice to define the Pipeline in a Jenkinsfile and check that into source control.
Pre-requisites:
• Make sure SonarQube is up and running, and do the below steps.
• Make sure the SonarQube plug-in is installed in Jenkins.
Create a new job in Jenkins with the following steps:
1. Click on New Item, provide an item name, click on Pipeline, then OK.
2. Scroll down and click on "Pipeline Syntax".
3. In the "Sample Step" section select Git.
4. Provide the repository URL.
5. Provide the branch.
6. Add the credentials of your Git account, similar to how we added a token for SonarQube.
7. It'll generate a pipeline script; copy it and paste it into the pipeline script.
8. Write the following pipeline script and provide all the relevant details (the repository-specific values need to be changed according to your project):
node {
    stage('Cloning from GIT') {
        checkout([$class: 'GitSCM', branches: [[name: '*/master']], extensions: [], userRemoteConfigs: [[credentialsId: 'govindhivrale_git', url: 'https://github.com/Govindhivrale/jenk...']]])
    }
    stage('SonarQube Analysis') {
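        // The manual cuts off here. A hedged completion of the stage,
        // assuming a scanner tool named 'sonar-scanner' and a server
        // configured as 'sonarqube' in Manage Jenkins (both names are
        // placeholders for your own setup):
        def scannerHome = tool 'sonar-scanner'
        withSonarQubeEnv('sonarqube') {
            sh "${scannerHome}/bin/sonar-scanner"
        }
    }
}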
Experiment No: 9
Date of Performance: 27/09/2023
Date of Submission: 04/10/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 09
Aim: To understand continuous monitoring and study the features, architecture, and plugins of Nagios.
Theory:
What is Nagios?
Nagios is an open source software for continuous monitoring of systems, networks, and infrastructures. It runs plugins stored on a server which is connected with a host or another server on your network or the Internet. In case of any failure, Nagios alerts about the issues so that the technical team can perform the recovery process immediately. Nagios is used for continuous monitoring of systems, applications, services and business processes in a DevOps culture.
Features of Nagios
Following are the important features of the Nagios monitoring tool:
Nagios Architecture
Nagios has a client-server architecture. Usually, on a network, a Nagios server is running on a host, and plugins are running on all the remote hosts which should be monitored.
Plugins
Nagios plugins provide low-level intelligence on how to monitor anything and everything with Nagios Core. Plugins act as standalone applications, but they are designed to be executed by Nagios Core. Nagios connects to Apache, which is controlled by CGI, to display the results. Moreover, a database is connected to Nagios to keep log files.
How do plugins work?
NRPE (Nagios Remote Plugin Executor) and NSCA plugins are used to monitor Linux and Mac OS X respectively.
Application of Nagios
The Nagios application monitoring tool is a health check & monitoring system for a typical data centre, and covers all types of equipment, such as:
Conclusion:
Continuous monitoring is a process to detect, report, and respond to all the attacks which occur in the infrastructure. Nagios offers effective monitoring of your entire infrastructure and business processes.
Experiment No: 10
Date of Performance: 04/10/2023
Date of Submission: 11/10/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 10
Aim: To monitor a Windows Server with Nagios using the NRDP and NCPA agents.
Theory:
What is Nagios?
Nagios is an open source software for continuous monitoring of systems, networks, and infrastructures. It runs plugins stored on a server which is connected with a host or another server on your network or the Internet. In case of any failure, Nagios alerts about the issues so that the technical team can perform the recovery process immediately.
Nagios is used for continuous monitoring of systems, applications, services and business processes in a DevOps culture.
Overview of Nagios
What should Nagios monitor? That depends on the role of the server. Every monitoring system should watch CPU, memory, disk space and network activity, as these core metrics combine to provide the overall health of the system. But beyond that, specific monitoring metrics can vary. For example, if an Internet Information Services (IIS) web server is installed, an admin probably would want to monitor the availability of the associated web site or application.
Nagios can support both these scenarios: it can monitor common core metrics, as well as perform the more specific and customized checks that pertain to specific roles and applications. The tool can also monitor across multi-OS environments.
The Nagios monitoring tool comes in two primary versions: Nagios Core and Nagios XI. The former is a free, open source version, and the latter is a commercial version that offers additional features around graphs and reporting, capacity planning and more.
To monitor Windows Server with Nagios, the Nagios monitoring server must be a Linux system. Once admins install and configure this setup, they can create monitors for Windows machines with the Nagios Remote Data Processor (NRDP) agent.
Although the Nagios server itself installs on a Linux box, admins can install an agent on Windows systems to monitor those systems and report back to the main Nagios server. This agent, the Nagios Cross Platform Agent (NCPA), has a straightforward installation process, as detailed later in this article. Installation of the NCPA is one of the first steps to monitor Windows systems with Nagios -- but, before that, install the NRDP listener to support passive checks. Nagios XI pre-installs this, but for Nagios Core, admins must complete this step manually.
In this simplified example, we want to install the listener onto the Nagios server, using an Ubuntu system. Replace {version} with the current version of the NRDP service:
apt-get update
cd /tmp
wget -O nrdp.tar.gz https://github.com/NagiosEnterprises/nrdp/archive/{version}.tar.gz
cd /tmp/nrdp-{version}/
Add your tokens to the authorized_tokens array in the configuration file:
$cfg['authorized_tokens'] = array(
    "randomtoken1",
    "randomtoken2",
);
Copy the Apache configuration into place, and finally restart Apache to enable the changes to take effect:
sudo cp nrdp.conf /etc/apache2/sites-enabled/
sudo systemctl restart apache2.service

Test NRDP agent
Navigate to the Nagios server and the NRDP listener, such as http://10.0.0.10/nrdp.
Use the token previously retrieved from the authorized_tokens section
of the configuration file /usr/local/nrdp/server/config.inc.php to send the
following JSON to test the listener:
"checkresults": [
"checkresult": {
"type":
"host",
"checktype": "1"
},
"hostname":
"myhost", "state":
"0",
},
"checkresult": {
"type":
"service",
"checktype": "1"
},
"hostname": "myhost",
"servicename": "myservice",
"state": "1",
The check_ncpa.py plugin enables Nagios to monitor the installed NCPAs on the hosts. Follow these steps to install the plugin:
After the NRDP installation, install the NCPA. Download the installation files and run the install.
URL: Use the IP or host name of the Nagios server that hosts the installed NRDP agent.
After agent setup and configuration, the next step is to define the monitoring rules for the Windows Server -- a process that can vary, and can be extensive, depending on an enterprise's needs.
Determine which metrics matter and require monitoring. Then, define alerting intervals and threshold values. Ultimately, monitoring rules will determine the actions admins should take when metrics hit, or back off from, those thresholds. Additionally, define contact groups to customize who receives the notifications.
Additionally, define contact groups to customize who receives the notifications.
We need to create a simple command to use the check_ncpa.py plugin, and normally it lives here: /usr/local/nagios/etc/commands.cfg.
define command {
    command_name check_ncpa
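    # The command_line was lost in extraction; a typical definition per the
    # standard NCPA documentation (paths and macros may differ in your setup):
    command_line $USER1$/check_ncpa.py -H $HOSTADDRESS$ $ARG1$
}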
The final step to monitor Windows Server with Nagios is to create a simple CPU check in /usr/local/nagios/etc/ncpa.cfg.
define host {
    max_check_attempts      5
    check_interval          5
    retry_interval          1
    check_period            24x7
    contacts                nagiosadmin
    notification_interval   60
    notification_period     24x7
    notifications_enabled   1
    icon_image              ncpa.png
    statusmap_image         ncpa.png
    register                1
}
    max_check_attempts      5
    check_interval          5
    retry_interval          1
    check_period            24x7
    notification_interval   60
    notification_period     24x7
    contacts                nagiosadmin
    register                1
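The attribute run above appears to belong to a definition whose opening line was lost; it matches the shape of a service check, for which a hedged sketch using the check_ncpa command is given below (the host name, token and thresholds are placeholders):

define service {
    host_name               windows-host  # placeholder host
    service_description     CPU Usage
    check_command           check_ncpa!-t 'mytoken' -P 5693 -M cpu/percent -w 80 -c 90
    max_check_attempts      5
    check_interval          5
    retry_interval          1
    check_period            24x7
    notification_interval   60
    notification_period     24x7
    contacts                nagiosadmin
    register                1
}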
Since Nagios was primarily designed for Linux, it does have some Windows monitoring limitations. However, the Nagios agent for Windows has been around and actively developed for a long time. While it might not cover all check use cases, especially around services, Nagios does have an extensive and evolving plugin ecosystem.
Protocols and Ports
The default ports used by common Nagios plugins are as given under -
Conclusion:
Nagios allows application monitoring from a single console with transaction-level insights. This tool does not allow you to manage the network; it only allows you to monitor it.
Experiment No: 11
Date of Performance: 11/10/2023
Date of Submission: 18/10/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 11.
Aim: To understand AWS Lambda, its workflow and various functions, and create your first Lambda function using Python / Java / Node.js.
Theory:
What is Serverless?
Serverless is a term that generally refers to serverless applications: applications that don't need any server provisioning and do not require you to manage servers.
What is a Lambda function?
The code you run on AWS Lambda is called a "Lambda function." After you create your Lambda function, it is always ready to run as soon as it is triggered, similar to a formula in a spreadsheet. Each function includes your code as well as some associated configuration information, including the function name and resource requirements.
Lambda functions are "stateless", with no affinity to the underlying infrastructure, so that Lambda can rapidly launch as many copies of the function as needed to scale to the rate of incoming events.
After you upload your code to AWS Lambda, you can associate your function with specific AWS resources, such as a particular Amazon S3 bucket, Amazon DynamoDB table, Amazon Kinesis stream, or Amazon SNS notification. Then, when the resource changes, Lambda will execute your function and manage the compute resources as needed to keep up with incoming requests.
And since the service is fully managed, using AWS Lambda can save you time on operational tasks. When there is no infrastructure to maintain, you can spend more time working on the application code -- even though this also means you give up the flexibility of operating your own infrastructure.
Note: If the IAM role that you created doesn't appear in the list, the role might still need a few minutes to propagate to Lambda.
1. You can write any simple program instead of just a print statement (see the sketch below).
2. Deploy and test your code.
3. You can see the execution result in the 'Execution result' tab.
Once you click on 'View logs in CloudWatch', a new tab will open with the name of your Lambda function along with the series of events that happened with that Lambda function. You can check everything which has been executed with that function. Click any of the log streams and you'll be able to see all the activities related to that log.
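A minimal sketch of a first Python Lambda function, assuming the default handler name lambda_function.lambda_handler:

import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload; 'context' holds runtime metadata
    print("Hello from Lambda!")  # this line appears in the CloudWatch logs
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from Lambda!")
    }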
Conclusion: We have studied AWS Lambda, its workflow and various functions, and created a first Lambda function using Python.
Experiment No: 12
Date of Performance: 18/10/2023
Date of Submission: 23/10/2023
Assessment: Program formation/Execution/ethical practices (07) | Timely Submission (03) | Documentation (02) | Viva Answer (03) | Experiment Marks (15) | Teacher Signature with date
EXPERIMENT NO 12
Aim: To create a Lambda function which will log "An Image has been added" once you add an object to a specific bucket in S3.
Theory:
Amazon S3 service is used for file storage, where you can upload or remove files. We can trigger AWS Lambda on S3 when there are any file uploads in S3 buckets. AWS Lambda has a handler function which acts as a start point for the AWS Lambda function. The handler has the details of the events. In this chapter, let us see how to use AWS S3 to trigger an AWS Lambda function when we upload files in an S3 bucket.
To start using AWS Lambda with Amazon S3, we need the following -
1. Create an S3 bucket.
2. Create a role which has permission to work with S3 and Lambda.
3. Create a Lambda function and add S3 as the trigger.
Let us see these steps with the help of an example which shows the basic interaction between Amazon S3 and AWS Lambda:
The user will upload a file to the Amazon S3 bucket.
Once the file is uploaded, it will trigger the AWS Lambda function in the background, which will display an output in the form of a console message that the file is uploaded.
The user will be able to see the message in CloudWatch logs once the file is uploaded. The block diagram that explains the flow of the example is shown here -
Step 1: Create S3 Bucket
Add S3 Trigger
4. Fill in the details such as bucket name, event type, prefix, suffix and click on Add.
5. Once the trigger has been added, you can see it along with the Lambda function (a sketch of the function body follows).
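A minimal sketch of the Lambda function body for this experiment, assuming the standard S3 event structure delivered to the handler:

def lambda_handler(event, context):
    # An S3 event can carry multiple records; log one line per object
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # This message shows up in the CloudWatch log stream for the function
        print(f"An Image has been added: s3://{bucket}/{key}")
    return "done"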
Conclusion: A Lambda function which logs "An Image has been added" has been created successfully using an S3 bucket trigger.