
A

REAL TIME RESEARCH PROJECT

REPORT ON

“GENERATING CLOUD MONITORS FROM MODELS TO
SECURE CLOUDS”

This Real Time Research Project report is submitted in partial fulfillment of the requirements

For the award of the degree of

BACHELOR OF TECHNOLOGY

IN

COMPUTER SCIENCE AND ENGINEERING

Submitted by

R. NAINA PATEL YADAV    22C71A05B0
M. JASWANTH             22C71A05C0
V. ROHAN                22C71A05C9

Under the guidance of


Dr. S. Swathi Rao
(Associate Professor)

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

ELLENKI COLLEGE OF ENGINEERING & TECHNOLOGY


PATELGUDA, AMEENPUR, SANGAREDDY DISTRICT – 502319
Affiliated to JNTU, Hyderabad
2023-2024
ELLENKI COLLEGE OF ENGINEERING & TECHNOLOGY
PATELGUDA, AMEENPUR, SANGAREDDY DISTRICT – 502319
Affiliated to JNTU Hyderabad.

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

CERTIFICATE

This is to certify that the dissertation entitled “GENERATING CLOUD
MONITORS FROM MODELS TO SECURE CLOUDS”, being submitted by
R. Naina Patel Yadav, M. Jaswanth, and V. Rohan, bearing Roll Nos.
22C71A05B0, 22C71A05C0, and 22C71A05C9, in partial fulfillment of the
requirements for the award of the degree of BACHELOR OF TECHNOLOGY
in COMPUTER SCIENCE AND ENGINEERING, under Jawaharlal Nehru
Technological University, Hyderabad, is a record of bonafide work carried out
by them under my guidance and supervision. The results embodied in this Real
Time Research Project report have not been submitted to any other university
or institute for the award of any degree.

Project Coordinator HEAD OF THE DEPARTMENT


Dr. S. Swathi Rao                  Dr. Shaik Mahaboob Sharief
(Associate Professor)              (Associate Professor)

EXTERNAL EXAMINER PRINCIPAL


Dr. P. JOHN PAUL
(Professor)
ACKNOWLEDGEMENT

We would like to thank Sri Dr. E. SADASHIVA REDDY, Chairman,
Ellenki Group of Institutions, for providing us a conducive environment for
carrying through our academic schedules and Real Time Research Projects with
ease.
We express wholehearted gratitude to Sri M. SAMBHA SHIVA
REDDY, Director, Ellenki College of Engineering and Technology, for
providing us a conducive environment for carrying through our academic
schedules and Real Time Research Projects with ease.
We would like to thank our esteemed Secretary Sri E. DAYAKAR
REDDY and Academic Director Sri E. SAI KIRAN REDDY for their
invaluable supervision, support, and guidance throughout our study in college.
We would like to thank our Principal Dr. P. JOHN PAUL, Ellenki
College of Engineering and Technology. His insight during the Real Time
Research Project and his regular guidance were invaluable to us, as was his
providing the right suggestions at every phase of the development of our Real
Time Research Project.

We would like to thank our Head of the Department (Computer
Science and Engineering), Dr. SHAIK MAHABOOB SHARIEF, for
providing seamless support and knowledge over the past year and for
providing the right suggestions at every phase of the development of our Real
Time Research Project.
We would like to thank our guide, Dr. S. Swathi Rao, Department of
Computer Science and Engineering, Ellenki College of Engineering and
Technology, for providing seamless support and knowledge over the past
year and for providing the right suggestions at every phase of the
development of the Real Time Research Project.
Our sincere thanks to all our faculty members for their guidance and
support throughout our study at college. We also thank all our
administrative staff and examination branch staff for responding to our
queries in a timely manner.
Finally, we thank our friends and parents for their patience and
support, which led to the successful completion of the Real Time
Research Project.

R.NAINA PATEL YADAV 22C71A05B0


M.JASWANTH 22C71A05C0
V.ROHAN 22C71A05C9
CHAPTER   TITLE

          ABSTRACT
1         INTRODUCTION
2         LITERATURE SURVEY
3         SYSTEM REQUIREMENTS
          3.1 Hardware Requirements
          3.2 Software Requirements
4         THEORETICAL BACKGROUND
          4.1 Introduction to Python
          4.2 Introduction to Django
5         SYSTEM ANALYSIS
          5.1 Existing System
          5.2 Disadvantages of Existing System
          5.3 Proposed System
          5.4 Advantages of Proposed System
6         SYSTEM DESIGN
          6.1 Introduction
          6.2 Modules
          6.3 Service Provider
          6.4 View and Authorize Users
          6.5 Remote Users
          6.6 System Architecture
          6.7 UML Diagrams
          6.8 Data Flow Diagram
          6.9 Use Case Diagram
          6.10 Class Diagram
          6.11 Sequence Diagram
          6.12 Activity Diagram
7         SYSTEM IMPLEMENTATION
          7.1 Logical Design
          7.2 Physical Design
8         SYSTEM TESTING
          8.1 Types of Testing
          8.2 White Box Testing
          8.3 Black Box Testing
9         OUTPUT SCREENS
          CONCLUSION
          REFERENCES
LIST OF FIGURES
1. DATA FLOW DIAGRAM
2. USE CASE DIAGRAM
3. CLASS DIAGRAM
4. SEQUENCE DIAGRAM
5. ACTIVITY DIAGRAM
GENERATING CLOUD MONITORS FROM MODELS TO
SECURE CLOUDS

ABSTRACT:

Authorization is an important security concern in cloud computing environments. It aims at
regulating the access of users to system resources. The large number of resources associated
with the REST APIs typical of clouds makes the implementation of security requirements
challenging and error-prone. To alleviate this problem, in this paper we propose an
implementation of a security cloud monitor. We rely on a model-driven approach to represent
the functional and security requirements. The models are then used to generate cloud monitors.
The cloud monitors contain contracts used to automatically verify the implementation. We
use the Django web framework to implement the cloud monitor and OpenStack to validate our
implementation.

CHAPTER -1
INTRODUCTION

In many companies, private clouds are considered to be an important element of data center
transformations. Private clouds are dedicated cloud environments created for the internal
use of a single organization. According to the Cloud Survey 2017, private clouds are
adopted by 72% of the cloud users, while hybrid cloud adoption (both public and
private) accounts for 67%. The companies adopting private clouds vary in size from 500
to more than 2000 employees. Therefore, designing and developing secure private cloud
environments for such a large number of users constitutes a major engineering challenge.
Usually, cloud computing services offer REST APIs (REpresentational State Transfer
Application Programming Interfaces) to their consumers. REST APIs, e.g., those of AWS,
Windows Azure, and OpenStack, define software interfaces allowing for the use of their
resources in various ways. The REST architectural style exposes each piece of information
with a URI, which results in a large number of URIs that can access the system. Data breach
and loss of critical data are among the top cloud security threats. The large number of URIs
further complicates the task of the security experts, who should ensure that each URI
providing access to their system is safeguarded to avoid data breaches or privilege
escalation attacks. Since the source code of Open Source clouds is often developed in a
collaborative manner, it is subject to frequent updates. The updates might introduce or
remove a variety of features and hence violate the security properties of the previous
releases. This makes it rather unfeasible to manually check the correctness of the API access
control implementation and calls for enhanced monitoring mechanisms. In this paper, we
present a cloud monitoring framework that supports a semi-automated approach to
monitoring a private cloud implementation with respect to its conformance to the functional
requirements and the API access control policy. Our work uses UML (Unified Modeling
Language) models with OCL (Object Constraint Language) to specify the behavioral
interface with security constraints for the cloud implementation. The behavioral interface of
a REST API provides information regarding the methods that can be invoked on it and the
pre- and post-conditions of those methods. In the current practice, the pre- and post-
conditions are usually given as textual descriptions associated with the API methods. In our
work, we rely on the Design by Contract (DbC) framework, which allows us to define
security and functional requirements as verifiable contracts.

CHAPTER-2
LITERATURE SURVEY
1. Model Driven Security for Web Services

AUTHORS: M. M. Alam et al.


Model driven architecture is an approach to increase the quality of complex software systems
based on creating high level system models that represent systems at different abstract levels
and automatically generating system architectures from the models. We show how this
paradigm can be applied to what we call model driven security for Web services. In our
approach, a designer builds an interface model for the Web services along with security
requirements using the object constraint language (OCL) and role based access control
(RBAC) and then generates from these specifications a complete configured security
infrastructure in the form of eXtensible Access Control Markup Language (XACML) policy
files. Our approach can be used to improve productivity during the development of secure
Web services and the quality of the resulting systems.
2. Run-time Generation, Transformation, and Verification of Access Control Models for
Self-Protection
AUTHORS: Chen, Bihuan; Peng, Xin; Yu, Yijun; Nuseibeh, Bashar and Zhao, Wenyun
(2014).
A self-adaptive system uses runtime models to adapt its architecture to the changing
requirements and contexts. However, there is no one-to-one mapping between the
requirements in the problem space and the architectural elements in the solution space.
Instead, one refined requirement may crosscut multiple architectural elements, and its
realization involves complex behavioral or structural interactions manifested as
architectural design decisions. In this paper we propose to combine two kinds of self-
adaptations: requirements-driven self-adaptation, which captures requirements as goal
models to reason about the best plan within the problem space, and architecture-based
self-adaptation, which captures architectural design decisions as decision trees to search
for the best design for the desired requirements within the contextualized solution space.
Following these adaptations, component-based architecture models are reconfigured using
incremental and generative model transformations. Compared with requirements-driven or
architecture-based approaches, the case study using an online shopping benchmark shows
promise that our approach can further improve the effectiveness of adaptation (e.g. system
throughput in this case study) and offer more adaptation flexibility.
3. Towards Development of Secure Systems Using UMLsec

AUTHOR: Jan Jürjens


We show how UML (the industry standard in object-oriented modelling) can be used to
express security requirements during system development. Using the extension mechanisms
provided by UML, we incorporate standard concepts from formal methods regarding multi-
level secure systems and security protocols. These definitions evaluate diagrams of various
kinds and indicate possible vulnerabilities. On the theoretical side, this work exemplifies use
of the extension mechanisms of UML and of a (simplified) formal semantics for it. A more
practical aim is to enable developers (who may not be security specialists) to make use of
established knowledge on security engineering through the means of a widely used notation.
4. Cloud Computing: The Business Perspective

AUTHORS: Sean Marston et al.


The evolution of cloud computing over the past few years is potentially one of the major
advances in the history of computing. However, if cloud computing is to achieve its potential,
there needs to be a clear understanding of the various issues involved, both from the
perspectives of the providers and the consumers of the technology. While a lot of research is
currently taking place in the technology itself, there is an equally urgent need for
understanding the business-related issues surrounding cloud computing. In this article, we
identify the strengths, weaknesses, opportunities and threats for the cloud computing
industry. We then identify the various issues that will affect the different stakeholders of
cloud computing. We also issue a set of recommendations for the practitioners who will
provide and manage this technology. For IS researchers, we outline the different areas of
research that need attention so that we are in a position to advise the industry in the years to
come. Finally, we outline some of the key issues facing governmental agencies who, due to
the unique nature of the technology, will have to become intimately involved in the
regulation of cloud computing.
5. An Extensive Systematic Review on Model-Driven Development of Secure Systems

AUTHORS: Phu H. Nguyen et al.

Model-Driven Security (MDS) is a specialised Model-Driven Engineering research area


for supporting the development of secure systems. Over a decade of research on MDS has
resulted in a large number of publications. Objective: To provide a detailed analysis of the
state of the art in MDS, a systematic literature review (SLR) is essential. Method: We
conducted an extensive SLR on MDS. Derived from our research questions, we designed a
rigorous, extensive search and selection process to identify a set of primary MDS studies
that is as complete as possible. Our three-pronged search process consists of automatic
searching, manual searching, and snowballing. After discovering and considering more than
a thousand relevant papers, we identified, strictly selected, and reviewed 108 MDS
publications. Results: The results of our SLR show the overall status of the key artefacts of
MDS, and the identified primary MDS studies. For example, regarding the security
modelling artefact,
we found that developing domain-specific languages plays a key role in many MDS
approaches. The current limitations in each MDS artefact are pointed out and corresponding
potential research directions are suggested. Moreover, we categorise the identified primary
MDS studies into 5 principal MDS studies, and other emerging or less common MDS
studies. Finally, some trend analyses of MDS research are given. Conclusion: Our results
suggest the need for addressing multiple security concerns more systematically and
simultaneously, for tool chains supporting the MDS development cycle, and for more
empirical studies on the application of MDS methodologies. To the best of our knowledge,
this SLR is the first in the field of Software Engineering that combines a snowballing strategy
with database searching. This combination has delivered an extensive literature study on
MDS.

CHAPTER-3

SYSTEM REQUIREMENTS

HARDWARE REQUIREMENTS:

❖ System           : Pentium IV, 2.4 GHz
❖ Hard Disk        : 40 GB
❖ Floppy Drive     : 1.44 MB
❖ Monitor          : 14″ Colour Monitor
❖ Mouse            : Optical Mouse
❖ RAM              : 512 MB

SOFTWARE REQUIREMENTS:

❖ Operating System : Windows 7 Ultimate
❖ Coding Language  : Python
❖ Front-End        : Python
❖ Back-End         : Django
❖ Designing        : HTML, CSS, JavaScript
❖ Database         : MySQL

SYSTEM STUDY

FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business proposal is put forth
with a very general plan for the project and some cost estimates. During system analysis,
the feasibility study of the proposed system is carried out. This is to ensure that the
proposed system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential.
Three key considerations are involved in the feasibility analysis:

ECONOMIC FEASIBILITY

This study is carried out to check the economic impact that the system will have on
the organization. The amount of funds that the company can pour into the research and
development of the system is limited. The expenditures must be justified. The developed
system was well within the budget, and this was achieved because most of the technologies
used are freely available. Only the customized products had to be purchased.

TECHNICAL FEASIBILITY

This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not have a high demand on the
available technical resources, as this would lead to high demands being placed on the
client. The developed system must have modest requirements, as only minimal or no
changes are required for implementing this system.

SOCIAL FEASIBILITY

This aspect of the study checks the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system; instead, they must accept it as a necessity. The level of acceptance
by the users solely depends on the methods that are employed to educate the users about the
system and to make them familiar with it. Their level of confidence must be raised so that
they are also able to offer constructive criticism, which is welcomed, as they are the final
users of the system.

CHAPTER-4
THEORETICAL BACKGROUND
PYTHON
Python is a general-purpose, interpreted, interactive, object-oriented, high-level
programming language. As an interpreted language, Python has a design philosophy that
emphasizes code readability (notably using whitespace indentation to delimit code
blocks rather than curly brackets or keywords), and a syntax that allows programmers to
express concepts in fewer lines of code than might be used in languages such
as C++ or Java. It provides constructs that enable clear programming on both small and large
scales. Python interpreters are available for many operating systems. CPython, the reference
implementation of Python, is open source software and has a community-based
development model, as do nearly all of its variant implementations. CPython is managed by
the non-profit Python Software Foundation. Python features a dynamic type system and
automatic memory management. It supports multiple programming paradigms,
including object-oriented, imperative, functional, and procedural, and has a large and
comprehensive standard library.
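
The following minimal snippet, written for this report as an illustration, shows two of the
properties mentioned above: indentation delimits code blocks, and the dynamic type system
lets the same variable hold values of different types.

    # Indentation delimits the function body; no braces are needed.
    def describe(item):
        if isinstance(item, str):
            return "text: " + item
        return "number: " + str(item)

    value = 42            # value holds an int here...
    print(describe(value))
    value = "monitor"     # ...and a str here; no type declaration is required
    print(describe(value))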

DJANGO

Django is a high-level Python web framework that encourages rapid development
and clean, pragmatic design. Built by experienced developers, it takes care of much of the
hassle of web development, so you can focus on writing your app without needing to
reinvent the wheel. It is free and open source.

Django's primary goal is to ease the creation of complex, database-driven websites. Django
emphasizes reusability and "pluggability" of components, rapid development, and the
principle of "don't repeat yourself". Python is used throughout, even for settings files and
data models.

Django also provides an optional administrative create, read, update, and delete interface
that is generated dynamically through introspection and configured via admin models.
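
As an illustration of this model-centric style, the sketch below shows a small model and its
one-line admin registration. The Volume model and its fields are our own example and
assume a configured Django app; they are not part of Django itself.

    # models.py -- an illustrative model inside a Django app.
    from django.db import models

    class Volume(models.Model):
        name = models.CharField(max_length=100)
        size_gb = models.PositiveIntegerField()
        status = models.CharField(max_length=20, default="available")

        def __str__(self):
            return self.name

    # admin.py -- one line is enough for Django to generate the dynamic
    # create/read/update/delete interface mentioned above.
    from django.contrib import admin
    admin.site.register(Volume)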

CHAPTER-5

SYSTEM ANALYSIS

EXISTING SYSTEMS

In many companies, private clouds are considered to be an important element of data center
transformations. Private clouds are dedicated cloud environments created for the internal use
of a single organization. Designing and developing secure private cloud environments for
such a large number of users constitutes a major engineering challenge. Usually, cloud
computing services offer REST APIs (REpresentational State Transfer Application
Programming Interfaces) to their consumers. The REST architectural style exposes each
piece of information with a URI, which results in a large number of URIs that can access
the system.

DISADVANTAGES OF EXISTING SYSTEM

➢ Data breach and loss of critical data are among the top cloud security threats.
➢ The large number of URIs further complicates the task of the security experts, who
should ensure that each URI, providing access to their system, is safeguarded to avoid
data breaches or privilege escalation attacks.
➢ Since the source code of Open Source clouds is often developed in a collaborative
manner, it is subject to frequent updates. The updates might introduce or remove a
variety of features and hence violate the security properties of the previous releases.

PROPOSED SYSTEM

We present a cloud monitoring framework that supports a semi-automated approach to
monitoring a private cloud implementation with respect to its conformance to the functional
requirements and the API access control policy. Our work uses UML (Unified Modeling
Language) models with OCL (Object Constraint Language) to specify the behavioral
interface with security constraints for the cloud implementation. The behavioral interface of
a REST API provides information regarding the methods that can be invoked on it and the
pre- and post-conditions of those methods. In the current practice, the pre- and post-
conditions are usually given as textual descriptions associated with the API methods. In our
work, we rely on the Design by Contract (DbC) framework, which allows us to define
security and functional requirements as verifiable contracts.
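
To give an idea of how such contracts can be made executable, the sketch below shows a
minimal Design-by-Contract wrapper in Python. The decorator and the volume-deletion
contract are illustrative assumptions of ours; in the actual approach, the monitors are
generated from the UML/OCL models rather than written by hand.

    # A minimal Design-by-Contract sketch: pre- and post-conditions are
    # attached to an API operation and checked on every call.
    def contract(pre, post):
        def wrap(func):
            def monitored(*args, **kwargs):
                assert pre(*args, **kwargs), "precondition violated"
                result = func(*args, **kwargs)
                assert post(result, *args, **kwargs), "postcondition violated"
                return result
            return monitored
        return wrap

    @contract(
        pre=lambda user, volume: user.get("authorized")
            and volume.get("status") != "in-use",
        post=lambda result, user, volume: result == 204,  # HTTP 204 No Content
    )
    def delete_volume(user, volume):
        return 204  # stand-in for the real REST call to the cloud

    print(delete_volume({"authorized": True}, {"status": "available"}))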

ADVANTAGES OF PROPOSED SYSTEM

➢ Our methodology enables creating a (stateful) wrapper that emulates the usage
scenarios and defines security-enriched behavioural contracts to monitor the cloud.
➢ The proposed approach also facilitates requirements traceability by ensuring the
propagation of the security specifications into the code. This also allows the security
experts to observe the coverage of the security requirements during the testing phase.
➢ The approach is implemented as a semi-automatic code generation tool in Django, a
Python web framework.

CHAPTER-6
SYSTEM DESIGN

INTRODUCTION

This design consists of modules which are well programmed and structured.
MODULES
USER
This module defines the access rights of the cloud users. Authorization is an important
security concern in cloud computing environments: it aims at regulating the access of users
to system resources. A volume can be created if the quota of permitted volumes has not
been exceeded and the user is authorized to create one; a POST request from the authorized
user on the volumes resource would then create a new volume. Similarly, a DELETE
request on the volume resource by an authorized user would delete the volume, if the user
of the service is authorized to do so and the volume is not attached to any instance.
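
A hypothetical precondition for the POST request described above could be expressed as
follows; the dictionaries stand in for the real project and user records.

    # Hypothetical precondition for creating a volume: the user must be
    # authorized and the quota of permitted volumes must not be exceeded.
    def may_create_volume(user, project):
        return user["authorized"] and len(project["volumes"]) < project["quota"]

    project = {"quota": 2, "volumes": ["vol-1"]}
    user = {"authorized": True}
    print(may_create_volume(user, project))  # True: one slot is still free
    project["volumes"].append("vol-2")
    print(may_create_volume(user, project))  # False: quota exhausted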

6.2.1 CLOUD

The cloud monitors contain contracts used to automatically verify the implementation. A
cloud developer uses IaaS to develop a private cloud for her/his organization that would be
used by different cloud users within the organization. In some cases, this private cloud may
be implemented by a group of developers working collaboratively on different machines.
We use the Django web framework to implement the cloud monitor and OpenStack to
validate our implementation.
CLOUD LOGIN

6.2.3 ADMIN

Projects are created by the cloud administrator using Keystone, and users or user groups
are assigned roles in these projects. This defines the access rights of the cloud users in a
project. A volume can be created if the project has not exceeded its quota of permitted
volumes and the user is authorized to create a volume in the project. Similarly, a volume
can be deleted if the user of the service is authorized to do so and the volume is not
attached to any instance, i.e., its status is not in-use.
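
For reference, role assignment of the kind described above could be scripted with
python-keystoneclient roughly as follows; the endpoint, credentials, and IDs are illustrative
placeholders, not values from this project.

    # A sketch of granting a role to a user in a project with
    # python-keystoneclient.
    from keystoneauth1.identity import v3
    from keystoneauth1.session import Session
    from keystoneclient.v3 import client

    auth = v3.Password(auth_url="http://controller:5000/v3",
                       username="admin", password="secret",
                       project_name="admin",
                       user_domain_id="default",
                       project_domain_id="default")
    keystone = client.Client(session=Session(auth=auth))

    # After this call, the access rights described above apply to the
    # user within the given project.
    keystone.roles.grant(role="<role-id>", user="<user-id>",
                         project="<project-id>")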

Admin login

Machine learning
Machine learning refers to a computer’s acquisition of the ability to make predictive
judgments and optimal decisions by analyzing and learning from a large amount of existing
data. Representative algorithms include deep learning, artificial neural networks, decision
trees, boosting algorithms, and so on. Machine learning is the key way for computers to
acquire artificial intelligence. Nowadays, machine learning plays an important role in
various fields of artificial intelligence. Whether in internet search, biometric
identification, autonomous driving, Mars robots, American presidential elections,
military decision assistants, and so on, basically, wherever there is a need for data
analysis, machine learning can play a role.

SYSTEM ARCHITECTURE

UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized general-


purpose modeling language in the field of object-oriented software engineering. The
standard is managed, and was created by, the Object Management Group.

The goal is for UML to become a common language for creating models of object-
oriented computer software. In its current form, UML comprises two major components:
a Meta-model and a notation. In the future, some form of method or process may also be
added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing,
constructing, and documenting the artifacts of a software system, as well as for business
modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven
successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the
software development process. The UML uses mostly graphical notations to express the
design of software projects.

GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that they can
develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations, frameworks,
patterns and components.
7. Integrate best practices.

DATA FLOW DIAGRAM


1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be
used to represent a system in terms of the input data to the system, the various processing
carried out on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to
model the system components. These components are the system process, the data used
by the process, an external entity that interacts with the system and the information flows
in the system.

3. The DFD shows how information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information flow and
the transformations that are applied as data moves from input to output.
A DFD may be used to represent a system at any level of abstraction and may be
partitioned into levels that represent increasing information flow and functional detail.

USE CASE DIAGRAM
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals
(represented as use cases), and any dependencies between those use cases. The main purpose
of a use case diagram is to show what system functions are performed for which actor. Roles
of the actors in the system can be depicted.

CLASS DIAGRAM

In software engineering, a class diagram in the Unified Modeling Language (UML) is a
type of static structure diagram that describes the structure of a system by showing the
system's classes, their attributes, operations (or methods), and the relationships among the
classes. It explains which class contains information.

SEQUENCE DIAGRAM

A sequence diagram in Unified Modeling Language (UML) is a kind of interaction


diagram that shows how processes operate with one another and in what order. It is a
construct of a Message Sequence Chart. Sequence diagrams are sometimes called event
diagrams, event scenarios, and timing diagrams.

ACTIVITY DIAGRAM

CHAPTER-7
SYSTEM IMPLEMENTATION
LOGICAL DESIGN
DEEP LEARNING
Deep learning algorithms refer to a class of machine learning algorithms that are
based on artificial neural networks with multiple layers (hence the term "deep").
These algorithms are designed to automatically learn hierarchical representations of
data by progressively extracting more abstract and complex features from raw input.
The most common and widely used deep learning algorithm is called the deep neural
network (DNN), specifically the convolutional neural network (CNN) for computer
vision tasks and the recurrent neural network (RNN) for sequential data such as text
or speech. However, there are several other types of deep learning algorithms, such
as generative adversarial networks (GANs), autoencoders, and transformers.
Here's a brief overview of some popular deep learning algorithms:
1. Convolutional Neural Networks (CNNs): CNNs are primarily used for image and
video analysis. They are composed of convolutional layers that extract local
features, followed by pooling layers that downsample the spatial dimensions, and
fully connected layers for classification or regression.
2. Recurrent Neural Networks (RNNs): RNNs are suitable for sequential data
processing, such as natural language processing and speech recognition. They
utilize recurrent connections to maintain internal memory, enabling the network to
capture temporal dependencies.
3. Generative Adversarial Networks (GANs): GANs consist of two neural networks—
the generator and the discriminator—that are trained in an adversarial manner. The
generator aims to generate realistic data samples, while the discriminator tries to
distinguish between real and fake data. GANs have been used for tasks like image
synthesis, style transfer, and data augmentation.
4. Autoencoders: Autoencoders are neural networks that aim to learn a compressed
representation of the input data. They consist of an encoder network that maps the
input to a lower-dimensional latent space and a decoder network that reconstructs
the input from the latent representation. Autoencoders are commonly used for data
denoising, dimensionality reduction, and anomaly detection.
5. Transformers: Transformers have gained significant attention for their success in
natural language processing tasks. They use self-attention mechanisms to capture
dependencies between different positions in the input sequence. Transformers have
been particularly effective in machine translation, text generation, and language
understanding.
It's important to note that deep learning algorithms require large amounts of labeled
data and significant computational resources to train effectively. Nonetheless, they
have achieved state-of-the-art results in various domains and continue to advance the
field of artificial intelligence.
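
As a concrete illustration of the CNN structure described in item 1 above (convolution,
pooling, then fully connected layers), here is a minimal tf.keras sketch; it assumes
TensorFlow is installed, and the layer sizes are arbitrary, not tuned for any task.

    # A minimal CNN: convolution -> pooling -> fully connected layers.
    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(16, (3, 3), activation="relu",
                      input_shape=(28, 28, 1)),    # extract local features
        layers.MaxPooling2D((2, 2)),               # downsample spatial dims
        layers.Flatten(),
        layers.Dense(64, activation="relu"),       # fully connected layer
        layers.Dense(10, activation="softmax"),    # 10-class classifier
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()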
ARTIFICIAL NEURAL NETWORK
Artificial Neural Networks (ANNs) are a class of machine learning algorithms
inspired by the structure and function of biological neural networks. ANNs consist of
interconnected artificial neurons, also known as nodes or units, organized into layers.
Each neuron receives input signals, performs a computation, and passes the result to
the next layer until the final output is generated.
Here's a general overview of the algorithmic steps involved in training and using an
artificial neural network:
1. Architecture Design: Decide on the architecture of the neural network, including
the number of layers, the number of neurons in each layer, and the type of
connections between neurons (e.g., feedforward, recurrent).
2. Initialize Weights: Initialize the weights (parameters) of the neural network with
small random values. These weights determine the strength of connections between
neurons and are adjusted during the training process.
3. Forward Propagation: Perform forward propagation to compute the output of the
neural network. Input data is passed through the network, and computations are
performed layer by layer using activation functions on the weighted sums of inputs.
4. Compute Loss: Compare the network's output with the desired output (ground
truth) and calculate a loss function that quantifies the mismatch between them.
Common loss functions include mean squared error (MSE) for regression problems
and cross-entropy loss for classification problems.
5. Backpropagation: Use the loss function to calculate the gradients of the weights
with respect to the loss. This is done through backpropagation, which involves
propagating the error backward from the output layer to the input layer while
applying the chain rule of calculus.
6. Update Weights: Adjust the weights of the neural network using an optimization
algorithm such as gradient descent or one of its variants. The goal is to minimize
the loss function by iteratively updating the weights in the direction that reduces the
loss.
7. Repeat Steps 3-6: Repeat steps 3 to 6 for multiple iterations or epochs until the
network's performance converges or reaches a satisfactory level.
8. Model Evaluation: Evaluate the trained model's performance on a separate
validation or test dataset to assess its generalization capabilities. Various metrics,
such as accuracy, precision, recall, or mean squared error, can be used depending
on the problem domain.
9. Prediction: Once the model is trained and evaluated, it can be used to make
predictions on new, unseen data by performing a forward pass through the network
using the learned weights.

Artificial neural networks can be applied to a wide range of tasks, including
classification, regression, pattern recognition, and time series analysis. They have
proven to be powerful tools in machine learning and have contributed to significant
advancements in various fields.
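
Steps 1-9 above can be condensed into a few lines of numpy; the sketch below trains a
one-hidden-layer network on a toy XOR task. The architecture, learning rate, and epoch
count are arbitrary choices for illustration, and convergence depends on the random seed.

    # Steps 2-7 of the procedure above on a toy XOR task.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(0, 0.5, (2, 4)), np.zeros(4)   # step 2: init weights
    W2, b2 = rng.normal(0, 0.5, (4, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for epoch in range(5000):                  # step 7: repeat
        h = sigmoid(X @ W1 + b1)               # step 3: forward propagation
        out = sigmoid(h @ W2 + b2)
        loss = np.mean((out - y) ** 2)         # step 4: mean squared error
        d_out = (out - y) * out * (1 - out)    # step 5: backpropagation
        d_h = d_out @ W2.T * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out                # step 6: gradient descent
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(3))  # step 9: predictions after training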
DECISION TREE
The decision tree algorithm is a supervised machine learning algorithm used for both
classification and regression tasks. It creates a tree-like model of decisions and their
possible consequences based on the features of the input data. The algorithm learns
decision rules by recursively partitioning the data into subsets based on feature
values and their associated target variables.
Here's a general overview of the decision tree algorithm:
1. Data Preparation: Prepare the dataset, ensuring it is in a format suitable for training
a decision tree. This typically involves cleaning the data, handling missing values,
and encoding categorical variables if necessary.
2. Tree Construction: The algorithm starts with the entire dataset and selects a feature
that best splits the data based on a certain criterion. The most common criterion for
classification problems is the Gini impurity or information gain, while for
regression problems, it can be based on mean squared error or other measures. The
dataset is partitioned into subsets based on the chosen feature.
3. Recursive Splitting: The process of selecting the best feature and splitting the data
is repeated recursively for each subset. This creates a tree-like structure where each
internal node represents a decision based on a specific feature, and each leaf node
represents a class label (in classification) or a predicted value (in regression).
4. Stopping Criteria: The recursion continues until a stopping criterion is met. This
criterion can be based on factors such as the maximum depth of the tree, the
minimum number of samples required to split a node, or the maximum number of
leaf nodes.
5. Tree Pruning (Optional): After the decision tree is constructed, pruning techniques
can be applied to reduce overfitting and improve generalization. Pruning involves
removing unnecessary branches or nodes that do not contribute significantly to the
overall accuracy or predictive power of the tree.
6. Prediction: Once the decision tree is built, it can be used to make predictions on
new, unseen data. The input data traverses the tree from the root to a leaf node,
following the decision rules at each internal node. The leaf node reached
determines the predicted class label or value.
Decision trees are popular due to their interpretability, as the resulting tree structure
can be easily visualized and understood. They can handle both categorical and
numerical data, and they can capture non-linear relationships between features and
targets. However, decision trees are prone to overfitting if not properly pruned or
regularized.

Various algorithms can be used to construct decision trees, such as ID3, C4.5, CART
(Classification and Regression Trees), and Random Forests (an ensemble of decision
trees). The choice of algorithm depends on the specific requirements and
characteristics of the problem at hand.
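
The whole procedure is available off the shelf in scikit-learn; the short sketch below fits a
CART-style tree with the Gini criterion and a depth-based stopping criterion on the bundled
iris dataset.

    # Fit and inspect a small decision tree with scikit-learn.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = DecisionTreeClassifier(criterion="gini",  # splitting criterion
                                 max_depth=3,       # stopping criterion
                                 random_state=0)
    clf.fit(X_train, y_train)                       # recursive splitting
    print(export_text(clf))                         # the learned decision rules
    print("test accuracy:", clf.score(X_test, y_test))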

PHYSICAL DESIGN
INPUT DESIGN
The input design is the link between the information system and the user. It
comprises the specifications and procedures for data preparation, i.e., the steps
necessary to put transaction data into a usable form for processing. This can be achieved
by instructing the computer to read data from a written or printed document, or by
having people key the data directly into the system. The design of input focuses on
controlling the amount of input required, controlling errors, avoiding delay, avoiding
extra steps, and keeping the process simple. The input is designed in such a way that it
provides security and ease of use while retaining privacy. Input design considered the
following things:
➢ What data should be given as input?
➢ How should the data be arranged or coded?
➢ The dialog to guide the operating personnel in providing input.
➢ Methods for preparing input validations, and steps to follow when errors occur.

OBJECTIVES
1. Input design is the process of converting a user-oriented description of the
input into a computer-based system. This design is important to avoid errors in the data input
process and to show the correct direction to the management for getting correct information
from the computerized system.
2. It is achieved by creating user-friendly screens for data entry to handle
large volumes of data. The goal of designing input is to make data entry easier and free
from errors. The data entry screen is designed in such a way that all the data manipulations
can be performed. It also provides record viewing facilities.
3. When the data is entered, it is checked for validity. Data can be entered with
the help of screens. Appropriate messages are provided as and when needed, so that the user
is never left guessing. Thus the objective of input design is to create an input layout that
is easy to follow.
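
In a Django project, the kind of validation described in objective 3 is typically expressed as
a form; the form below is an illustrative example of ours, not code from this system, and it
assumes a configured Django project.

    # Invalid entries are rejected with an appropriate message before
    # they reach the database; the field names are our own example.
    from django import forms

    class UploadForm(forms.Form):
        file_name = forms.CharField(max_length=50)
        size_mb = forms.IntegerField(min_value=1)

        def clean_file_name(self):
            name = self.cleaned_data["file_name"]
            if not name.isalnum():
                raise forms.ValidationError("Use letters and digits only.")
            return name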

OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users
and to other systems through outputs. In output design, it is determined how the information
is to be displayed for immediate need, as well as the hard copy output. It is the most
important and direct source of information for the user. Efficient and intelligent output
design improves the system’s relationship with the user and supports decision-making.

CHAPTER-8

SYSTEM TESTING
8.1 TYPES OF TESTING

UNIT TESTING
Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly and that program inputs produce valid outputs. All decision branches
and internal code flow should be validated. It is the testing of individual software units of
the application, and it is done after the completion of an individual unit before integration.
This is structural testing that relies on knowledge of the unit's construction and is invasive.
Unit tests perform basic tests at the component level and test a specific business process,
application, and/or system configuration. Unit tests ensure that each unique path of a
business process performs accurately to the documented specifications and contains clearly
defined inputs and expected results.
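
A unit test of this kind could look as follows; the precondition helper and the test cases are
illustrative examples of ours, not the project's actual test suite.

    # One business rule per test case, as described above.
    import unittest

    def may_delete_volume(user, volume):
        return user.get("authorized", False) and volume.get("status") != "in-use"

    class VolumePreconditionTests(unittest.TestCase):
        def test_authorized_user_can_delete_detached_volume(self):
            self.assertTrue(may_delete_volume({"authorized": True},
                                              {"status": "available"}))

        def test_attached_volume_is_protected(self):
            self.assertFalse(may_delete_volume({"authorized": True},
                                               {"status": "in-use"}))

    if __name__ == "__main__":
        unittest.main()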

INTEGRATION TESTING
Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components
were individually satisfactory, as shown by successful unit testing, the combination of
components is correct and consistent. Integration testing is specifically aimed at exposing
the problems that arise from the combination of components.

FUNCTIONAL TESTING
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system documentation,
and user manuals.
Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements,


key functions, or special test cases. In addition, systematic coverage pertaining to identified
business process flows, data fields, predefined processes, and successive processes must be
considered for testing. Before functional testing is complete, additional tests are identified
and the effective value of current tests is determined.

SYSTEM TESTING
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An example
of system testing is the configuration oriented system integration test. System testing is based
on process descriptions and flows, emphasizing pre-driven process links and integration
points.

WHITE BOX TESTING


White Box Testing is testing in which the software tester has knowledge of the
inner workings, structure, and language of the software, or at least its purpose. It is
used to test areas that cannot be reached from a black box level.

BLACK BOX TESTING


Black Box Testing is testing the software without any knowledge of the inner
workings, structure, or language of the module being tested. Black box tests, as most other
kinds of tests, must be written from a definitive source document, such as a specification or
requirements document. It is a testing in which the software under test is treated as a black
box: you cannot “see” into it. The test provides inputs and responds to outputs without
considering how the software works.
UNIT TESTING
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted
as two distinct phases.

Test strategy and approach

Field testing will be performed manually and functional tests will be written in detail.
Test objectives

• All field entries must work properly.

• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
Features to be tested

• Verify that the entries are of the correct format


• No duplicate entries should be allowed
• All links should take the user to the correct page.

Integration Testing
Software integration testing is the incremental integration testing of two or
more integrated software components on a single platform to produce failures caused by
interface defects.

The task of the integration test is to check that components or software applications, e.g.
components in a software system or – one step up – software applications at the company
level – interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

CHAPTER -9

OUTPUT SCREENS

User login

User register

This is the module where a user can register.

Admin login

Admin approve user

User app creation

Django rest

User app check

Cloud approve app

Edit file

User uploaded file

SAMPLE TEST CASES

S.No  Test Case           Expected Result                         Result  Remarks (if it fails)

1     USER REGISTER       USER is registered successfully.        Pass    USER is not registered.

2     USER LOGIN          If the USER name and password are       Pass    USER name or password is
                          correct, a valid page is shown.                 not correct.

3     ADMIN               USER rights are accepted here.          Pass    USER is not registered.

4     USER UPLOAD FILES   USER chooses or selects files.          Pass    USER does not select or
                                                                          send files.

5     CLOUD               USER app rights are accepted here.      Pass    USER app is not registered.

6     USER                User can edit the uploaded file.        Pass    File is not available.

7     USER                User can delete the uploaded file.      Pass    File is not available.

8     USER                User can download the uploaded file.    Pass    File is not available.

9     USER                User can upload a file.                 Pass    File is not available.
CONCLUSION

In this paper, we have presented an approach and an associated tool for monitoring security
in the cloud. We have relied on the model-driven approach to design APIs that exhibit REST
interface features. The cloud monitors, generated from the models, enable an automated
contract-based verification of the correctness of the functional and security requirements
implemented by a private cloud infrastructure. The proposed semi-automated approach is
aimed at helping cloud developers and security experts to identify security loopholes in the
implementation by relying on modelling rather than manual code inspection or testing.
It helps to spot the errors that might be exploited in data breaches or privilege escalation
attacks. Since open source cloud frameworks usually undergo frequent changes, the
automated nature of our approach allows the developers to relatively easily check whether
the functional and security requirements have been preserved in new releases.

REFERENCES
[1] Amazon Web Services. https://aws.amazon.com/. Accessed: 30.11.2017.
[2] Block Storage API V3. https://developer.openstack.org/api-ref/block-storage/v3/. Retrieved: 06.2017.
[3] Cloud Computing Trends: 2017 State of the Cloud Survey. https://www.rightscale.com/blog/cloud-industry-insights/. Accessed: 30.11.2017.
[4] cURL. http://curl.haxx.se/. Accessed: 20.08.2013.
[5] Extensible Markup Language (XML). https://www.w3.org/XML/. Accessed: 27.03.2018.
[6] Keystone Security and Architecture Review. Online at https://www.openstack.org/summit/openstack-summit-atlanta-2014/session-videos/presentation/keystonesecurity-and-architecture-review. Retrieved: 06.2017.
