
TOWARDS SHARED OWNERSHIP IN THE CLOUD

A PROJECT REPORT

Submitted For the Partial Fulfilment of the Requirement for the


Degree of Master of Science (Information Technology)

By
MEGALA M
A19101PIT6095

Under the Guidance of


M GANESH RAJA
MCA., M.Phil., (Ph.D.)
ASSISTANT PROFESSOR
DB JAIN COLLEGE, CHENNAI

INSTITUTE OF DISTANCE EDUCATION


UNIVERSITY OF MADRAS
CHENNAI - 600 005

APRIL-2021

BONAFIDE CERTIFICATE

This is to certify that the report entitled TOWARDS SHARED


OWNERSHIP IN THE CLOUD being submitted to the University of Madras, Chennai,
by A19101PIT6095 (Reg. No.), in partial fulfilment of the requirements for the award of the degree of
M.Sc (IT), is a bonafide record of work carried out by him/her under my guidance and
supervision.

Name and Designation of the Guide Co-Ordinator

Date: 10/04/2021

Submitted for the Viva-Voce Examination held on 10/04/2021


at centre IDE, University of Madras

Examiners

1.Name :
Signature :

2.Name :
Signature :
Acknowledgement

I would like to express my special thanks and gratitude to my professor M
GANESH RAJA, MCA., M.Phil., (Ph.D.), Assistant Professor, who gave me
the golden opportunity to work on this wonderful project on the topic TOWARDS
SHARED OWNERSHIP IN THE CLOUD. The project also helped me do a
great deal of research and learn about many new things, and I have done my
best to complete the project.
Index

S.NO CONTENT PAGE.NO


01. INTRODUCTION 1
1.1 COMPANY PROFILE 1
1.2 PROJECT OVERVIEW 2
02. SYSTEM ANALYSIS 3
2.1 FEASIBILITY STUDY 3
2.2 EXISTING SYSTEM 5
2.3 PROPOSED SYSTEM 6
03. SYSTEM CONFIGURATION 7
3.1 HARDWARE SPECIFICATION 7
3.2 SOFTWARE SPECIFICATION 7
3.3 ABOUT THE SOFTWARE 7
04. SYSTEM DESIGN 17
4.1 NORMALIZATION 18
4.2 TABLE DESIGN 20
4.3 INPUT DESIGN 22
4.4 SFD/DFD 23
05. SYSTEM DESCRIPTION 29
06. TESTING AND IMPLEMENTATION 33
07. CONCLUSION AND FUTURE SCOPE 40
08. FORMS AND REPORT 42
09. BIBLIOGRAPHY 57
01. INTRODUCTION

1.1 COMPANY PROFILE

TECHNO INFO SOLUTIONS is a Chennai-based company with a long standing since 2005 and a
nationwide reputation, not only for excellence in academic research and innovation that benefits
society, but also for commercially relevant research that helps companies achieve market
leadership. Its research also extends to the doctoral sector. Its goals are to enhance the user
experience on computing devices, reduce the cost of writing and maintaining software, and invent
novel computing technologies.

Techno Info Solutions also collaborates openly with colleges and universities worldwide to
broadly advance the field of computer science. Techno Info Solutions (TIS) is one of South
India's leading RPO companies; it envisioned and pioneered the adoption of the flexible global
business practices that today enable companies to operate more efficiently and produce more
value.

We are involved in software development, hardware integration, product development, and
research and development. Our major domains are Healthcare, Logistics, Defense, and RPO.

We are part of one of South India's growing conglomerates, the Techno Info Solutions Group,
which, with its interests in Financial Services, Agriculture, and Engineering & Technology
Development, gives us a grounded understanding of the specific business challenges facing
global companies.

As industry leaders, we introduced offshore development and product development and pioneered
development and support frameworks, ensuring compressed delivery timeframes. Today, our
solutions provide strategic advantage to several of the most admired organizations in the world.
We have long-standing and vibrant partnerships with many companies across the globe.

Real Time Projects

Real Time Projects are executed in the Research and Development wing of the TIS Group's
software development division, where thorough research is done for every project and each is
developed as per the MVC coding standard (CMM Level standard). Techno Info Solutions
specializes in innovative IT solutions that ease day-to-day operations through cost-effective
software development projects. We employ highly qualified software developers and creative
designers with years of experience to accomplish software projects with utmost satisfaction.
Techno Info Solutions helps clients design, develop, and integrate applications and solutions
based on various platforms such as Microsoft .NET, Java, and PHP.

1.2 PROJECT OVERVIEW

To achieve secure data sharing for dynamic groups in the cloud, we expect to combine
group signature and dynamic broadcast encryption techniques. Specifically, the group
signature scheme enables users to anonymously use the cloud resources, and the dynamic
broadcast encryption technique allows data owners to securely share their data files with
others, including newly joining users. Unfortunately, in the dynamic broadcast encryption
scheme each user has to compute revocation parameters to protect confidentiality from the
revoked users, which means that both the computation overhead of encryption and the size of
the ciphertext increase with the number of revoked users. Thus, the heavy overhead and large
ciphertext size may hinder the adoption of the broadcast encryption scheme by capacity-limited
users. To tackle this challenging issue, we let the group manager compute the revocation
parameters and make the result publicly available by migrating them into the cloud. Such a
design can significantly reduce the computation overhead of users to encrypt files as well as
the ciphertext size. Specifically, the computation overhead of users for encryption operations
and the ciphertext size are constant and independent of the number of revoked users. Secure
environments protect their resources against unauthorized access by enforcing access control
mechanisms, so when increased security is an issue, text-based passwords alone are not enough
to counter such problems. Using the instant messaging service available on the internet, the
user obtains a One Time Password (OTP) after image authentication. This OTP can then be used
by the user to access their personal accounts. In this project, a one-time password is used
to achieve a high level of security in authenticating the user over the internet.

In this project, we first formally define a notion of shared ownership within a file access
control model. We then propose two possible instantiations of our proposed shared ownership
model. Our first solution, called Commune, relies on secure file dispersal and collusion-
resistant secret sharing to ensure that all access grants in the cloud require the support of an
agreed threshold of owners. As such, Commune can be used in existing clouds without
modifications to the platforms. Our second solution, dubbed Comrade, leverages blockchain
technology in order to reach consensus on access control decisions. Unlike Commune, Comrade
requires that the cloud is able to translate access control decisions that reach consensus in
the blockchain into storage access control rules, thus requiring minor modifications to
existing clouds. We analyze the security of our proposals and compare/evaluate their
performance through implementations using Amazon S3.

02. SYSTEM ANALYSIS

2.1 FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth
with a very general plan for the project and some cost estimates. During system analysis, the
feasibility study of the proposed system is carried out. This is to ensure that the proposed
system is not a burden to the company. For feasibility analysis, some understanding of the
major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

• Economical feasibility

• Technical feasibility

• Social feasibility

ECONOMICAL FEASIBILITY:

This study is carried out to check the economic impact that the system will have on the
organization. The amount of funds that the company can pour into the research and development
of the system is limited. The expenditures must be justified. The developed system is well
within the budget, and this was achieved because most of the technologies used are freely
available; only the customized products had to be purchased.

TECHNICAL FEASIBILITY:

This study is carried out to check the technical feasibility, that is, the technical
requirements of the system. Any system developed must not place a high demand on the available
technical resources, as this would lead to high demands being placed on the client. The
developed system must have modest requirements, as only minimal or no changes are required for
implementing this system.

SOCIAL FEASIBILITY:

This aspect of the study checks the level of acceptance of the system by the user. This
includes the process of training the user to use the system efficiently. The user must not feel
threatened by the system, but must instead accept it as a necessity. The level of acceptance by
the users depends solely on the methods employed to educate the user about the system and to
make him familiar with it. His level of confidence must be raised so that he is also able to
offer constructive criticism, which is welcomed, as he is the final user of the system.

2.2 EXISTING SYSTEM

The existing system is the system that is already in place before the project is undertaken. In
this section we discuss the construction of baseline models of existing systems. This activity
relies on knowledge of the hardware, software, workload, and monitoring tools associated with
the system under study. We address the problem of distributed enforcement of shared ownership
within cloud storage providers. By distributed enforcement, we mean enforcement where access to
files in a shared repository is granted if and only if t out of n owners separately support the
grant decision. Therefore, we introduce the Shared-Ownership file access control Model (SOM) to
define our notion of shared ownership and to formally state the given enforcement problem. We
then propose two instantiations of the SOM model to enforce shared ownership policies in a
distributed fashion. Some of the limitations of the existing system are:

Data confidentiality: Unauthorized users should be prevented from accessing the plaintext of
the shared data stored in the cloud server. In addition, the cloud server, which is assumed to
be honest but curious, should also be deterred from learning the plaintext of the shared data.

Backward secrecy: Backward secrecy means that, when a user's authorization has expired or a
user's secret key has been compromised, he/she should be prevented from accessing the plaintext
of the subsequently shared data that are still encrypted under his/her identity.

Forward secrecy: Forward secrecy means that, when a user's authority has expired or a user's
secret key has been compromised, he/she should be prevented from accessing the plaintext of the
shared data that he/she could previously access.

2.3 PROPOSED SYSTEM

The proposed system overcomes the drawbacks of the existing system. We formalize the notion of
shared ownership within a file access control model named SOM, and use it to define a novel
access control problem of distributed enforcement of shared ownership in existing clouds. We
propose a first solution, called Commune, which distributively enforces SOM and can be deployed
on any cloud platform without platform support. Commune ensures that (i) a user cannot read a
file from a shared repository unless that user is granted read access by at least t of the
owners, and (ii) a user cannot write a file to a shared repository unless that user is granted
write access by at least t of the owners. We propose a second solution, dubbed Comrade, which
leverages functionality from blockchain technology in order to reach consensus on access
control decisions. Comrade improves the performance of Commune, but requires that the cloud is
able to translate access control decisions that reached consensus in the blockchain into
storage access control rules, thus requiring minor modifications to existing clouds. We build
prototypes of Commune and Comrade and evaluate their performance on Amazon S3 with respect to
file size and the number of users.
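
To make the t-out-of-n ownership rule concrete, the following minimal C# sketch shows how such an access check could look. The class, the in-memory grant store, and the fixed threshold are illustrative assumptions only; the actual Commune and Comrade solutions enforce the rule through secure file dispersal with secret sharing and through the blockchain, respectively.

using System;
using System.Collections.Generic;

class SharedOwnershipPolicy
{
    private readonly HashSet<string> owners;      // the n owners of the shared repository
    private readonly int threshold;               // t: number of owner grants required for access
    // file -> (user -> set of owners who granted that user access)
    private readonly Dictionary<string, Dictionary<string, HashSet<string>>> grants =
        new Dictionary<string, Dictionary<string, HashSet<string>>>();

    public SharedOwnershipPolicy(IEnumerable<string> owners, int threshold)
    {
        this.owners = new HashSet<string>(owners);
        this.threshold = threshold;
    }

    public void Grant(string owner, string file, string user)
    {
        if (!owners.Contains(owner)) throw new ArgumentException("Not an owner");
        if (!grants.TryGetValue(file, out var perFile))
            grants[file] = perFile = new Dictionary<string, HashSet<string>>();
        if (!perFile.TryGetValue(user, out var supporters))
            perFile[user] = supporters = new HashSet<string>();
        supporters.Add(owner);                    // each owner is counted at most once
    }

    public bool CanAccess(string file, string user) =>
        grants.TryGetValue(file, out var perFile) &&
        perFile.TryGetValue(user, out var supporters) &&
        supporters.Count >= threshold;            // access requires at least t distinct owner grants
}

class ThresholdDemo
{
    static void Main()
    {
        var policy = new SharedOwnershipPolicy(new[] { "owner1", "owner2", "owner3" }, threshold: 2);
        policy.Grant("owner1", "report.doc", "alice");
        Console.WriteLine(policy.CanAccess("report.doc", "alice")); // False: only 1 of 2 required grants
        policy.Grant("owner2", "report.doc", "alice");
        Console.WriteLine(policy.CanAccess("report.doc", "alice")); // True: threshold reached
    }
}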

03. SYSTEM SPECIFICATION

The purpose of the system requirement specification is to produce a specification and analysis
of the task and also to establish complete information about the requirements, behavior, and
other constraints such as functional performance. The goal of the system requirement
specification is to completely specify the technical requirements for the product in a concise
and unambiguous manner.

3.1 HARDWARE SPECIFICATION

• System : Pentium Dual Core.


• Hard Disk : 120 GB.
• Monitor : 15’’ LED
• Input Devices : Keyboard, Mouse
• Ram : 125 GB

3.2 SOFTWARE SPECIFICATION

• Operating system : Windows .


• Coding Language : .NET,C#
• Tool : MICROSOFT VISUAL STUDIO
• Database : SQL SERVER

3.3 ABOUT THE SOFTWARE

.NET Framework

The Microsoft .NET Framework is a software framework developed by Microsoft that runs
primarily on Microsoft Windows. It includes a large class library known as Framework
Class Library (FCL) and provides language interoperability across several programming
languages. Programs written for .NET Framework execute in a software environment
known as Common Language Runtime (CLR), an application virtual machine that provides
services such as security, memory management, and exception handling. FCL and CLR
together constitute .NET Framework.

FCL provides user interface, data access, database connectivity, cryptography, web
application development, numeric algorithms, and network communications. Programmers
produce software by combining their own source code with .NET Framework and other
libraries. .NET Framework is intended to be used by most new applications created
for Windows platform. Microsoft also produces an integrated development
environment largely for .NET software called Visual Studio.

COMMON LANGUAGE INFRASTRUCTURE:

The Common Language Infrastructure (CLI) provides a language-neutral platform for application


development and execution, including functions for exception handling, garbage collection,
security, and interoperability. By implementing the core aspects of .NET Framework within
the scope of CLI, this functionality will not be tied to a single language but will be available
across the many languages supported by the framework. Microsoft's implementation of CLI
is Common Language Runtime (CLR). It serves as the execution engine of .NET Framework.
All .NET programs execute under the supervision of CLR, guaranteeing certain properties
and behaviours in the areas of memory management, security, and exception handling.
For computer programs to run on CLI, they need to be compiled into Common Intermediate
Language (CIL) – as opposed to being compiled into machine code. Upon execution, an
architecture-specific Just-in-time compiler (JIT) turns the CIL code into machine code. To
improve performance, however, .NET Framework comes with Native Image
Generator (NGEN) that performs ahead-of-time compilation.

Figure 2: Visual overview of the Common Language Infrastructure (CLI)

CLASS LIBRARY

.NET Framework includes a set of standard class libraries. The class library is organized
in a hierarchy of namespaces. Most of the built-in APIs are part of
either System.* or Microsoft.* namespaces. These class libraries implement a large number
of common functions, such as file reading and writing, graphic rendering, database
interaction, and XML document manipulation, among others. .NET class libraries are
available to all CLI compliant languages. .NET Framework class library is divided into two
parts: Framework Class Library (FCL) and Base Class Library (BCL).
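
As a small, hedged illustration of using the class library, the C# sketch below reads and writes a text file through the System.IO namespace; the file name is an arbitrary example rather than anything used in this project.

using System;
using System.IO;

class FclDemo
{
    static void Main()
    {
        string path = "example.txt";                    // hypothetical file name
        File.WriteAllText(path, "Hello from the FCL");  // write a text file
        string contents = File.ReadAllText(path);       // read it back
        Console.WriteLine(contents);
    }
}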

.NET CORE

.NET Core is a free and open-source partial implementation of the .NET Framework. It consists
of CoreCLR and CoreFX, which are partial forks of the CLR and BCL respectively. .NET Core comes
with an improved JIT compiler, called RyuJIT.

ASSEMBLIES

Compiled CIL code is stored in CLI assemblies. As mandated by the specification, assemblies
are stored in Portable Executable (PE) file format, common on Windows platform for
all DLL and EXE files. Each assembly consists of one or more files, one of which must
contain a manifest bearing the metadata for the assembly. The complete name of an assembly
(not to be confused with the file name on disk) contains its simple text name, version number,
culture, and public key token. Assemblies are considered equivalent if they share the same
complete name, excluding the revision of the version number. A private key can also be used
by the creator of the assembly for strong naming. The public key token identifies which
private key an assembly is signed with. Only the creator of the keypair (typically .NET
developer signing the assembly) can sign assemblies that have the same strong name as a
previous version assembly, since the creator is in possession of the private key. Strong
naming is required to add assemblies to Global Assembly Cache.

LANGUAGE INDEPENDENCE

.NET Framework introduces a Common Type System (CTS) that defines all
possible datatypes and programming constructs supported by CLR and how they may or may
not interact with each other conforming to CLI specification. Because of this feature, .NET
Framework supports the exchange of types and object instances between libraries and
applications written using any conforming .NET language.

PORTABILITY

While Microsoft has never implemented the full framework on any system except
Microsoft Windows, it has engineered the framework to be platform-agnostic, and cross-
platform implementations are available for other operating systems. Microsoft submitted the
specifications for CLI (which includes the core class libraries, CTS, and CIL), and
C++/CLI to both ECMA and ISO, making them available as official standards. This makes it
possible for third parties to create compatible implementations of the framework and its
languages on other platforms.

SECURITY

.NET Framework has its own security mechanism with two general features: Code
Access Security (CAS), and validation and verification. CAS is based on evidence that is
associated with a specific assembly. Typically the evidence is the source of the assembly
(whether it is installed on the local machine or has been downloaded from the intranet or
Internet). CAS uses evidence to determine the permissions granted to the code. Other code
can demand that calling code be granted a specified permission. The demand causes CLR to
perform a call stack walk: every assembly of each method in the call stack is checked for the
required permission; if any assembly is not granted the permission a security exception is
thrown.

MEMORY MANAGEMENT

CLR frees the developer from the burden of managing memory (allocating and freeing up
when done); it handles memory management itself by detecting when memory can be safely
freed. Instantiations of .NET types (objects) are allocated from the managed heap; a pool of
memory managed by CLR. As long as there exists a reference to an object, which might be
either a direct reference to an object or via a graph of objects, the object is considered to be in
use. When there is no reference to an object, and it cannot be reached or used, it becomes
garbage, eligible for collection. .NET Framework includes a garbage collector which runs
periodically, on a separate thread from the application's thread, and enumerates all the
unusable objects and reclaims the memory allocated to them.

SIMPLIFIED DEPLOYMENT

.NET Framework includes design features and tools which help manage the installation of
computer software to ensure that it does not interfere with previously installed software, and
that it conforms to security requirements.

Features of .NET:

Microsoft .NET is a set of Microsoft software technologies for rapidly building and
integrating XML Web services, Microsoft Windows-based applications, and Web solutions.
The .NET Framework is a language-neutral platform for writing programs that can easily and
securely interoperate. There’s no language barrier with .NET: there are numerous languages
available to the developer including Managed C++, C#, Visual Basic and Java Script. The
.NET framework provides the foundation for components to interact seamlessly, whether
locally or remotely on different platforms. It standardizes common data types and
communications protocols so that components created in different languages can easily
interoperate.

“.NET” is also the collective name given to various software components built upon the
.NET platform. These will be both products (Visual Studio.NET and Windows.NET Server,
for instance) and services (like Passport, .NET My Services, and so on).

THE .NET FRAMEWORK

The .NET Framework has two main parts:

1. The Common Language Runtime (CLR).

2. A hierarchical set of class libraries.

The CLR is described as the “execution engine” of .NET. It provides the environment
within which programs run. The most important features are

➢ Conversion from a low-level assembler-style language, called Intermediate


Language (IL), into code native to the platform being executed on.
➢ Memory management, notably including garbage collection.
➢ Checking and enforcing security restrictions on the running code.
➢ Loading and executing programs, with version control and other such features.
The following features of the .NET Framework are also worth describing:

Managed Code:

Managed code is code that targets .NET and contains certain extra information - "metadata" - to
describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed
code contains the information that allows the CLR to guarantee, for instance, safe execution
and interoperability.

Managed Data

With managed code comes managed data. The CLR provides memory allocation and deallocation
facilities, and garbage collection. Some .NET languages use managed data by default, such as
C#, Visual Basic.NET and JScript.NET, whereas others, namely C++, do not. Targeting the CLR
can, depending on the language you're using, impose certain constraints on the features
available. As with managed and unmanaged code, one can have both managed and unmanaged data in
.NET applications - data that doesn't get garbage collected but instead is looked after by
unmanaged code.

Common Type System

The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety.
This ensures that all classes are compatible with each other, by describing types in a common
way. The CTS defines how types work within the runtime, which enables types in one language to
interoperate with types in another language, including cross-language exception handling. As
well as ensuring that types are only used in appropriate ways, the runtime also ensures that
code doesn't attempt to access memory that hasn't been allocated to it.

Common Language Specification

The CLR provides built-in support for language interoperability. To ensure that you can
develop managed code that can be fully used by developers using any programming
language, a set of language features and rules for using them called the Common Language
Specification (CLS) has been defined. Components that follow these rules and expose only
CLS features are considered CLS-compliant.

THE CLASS LIBRARY

.NET provides a single-rooted hierarchy of classes, containing over 7000 types. The root of
the namespace is called System; this contains basic types like Byte, Double, Boolean, and
String, as well as Object. All objects derive from System.Object. As well as objects, there are
value types. Value types can be allocated on the stack, which can provide useful flexibility.
There are also efficient means of converting value types to object types if and when
necessary.

The set of classes is pretty comprehensive, providing collections, file, screen, and network
I/O, threading, and so on, as well as XML and database connectivity.

The class library is subdivided into a number of sets (or namespaces), each providing
distinct areas of functionality, with dependencies between the namespaces kept to a
minimum.

OVERLOADING

Overloading is another feature in C#. Overloading enables us to define multiple procedures


with the same name, where each procedure has a different set of arguments. Besides using
overloading for procedures, we can use it for constructors and properties in a class.
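
As a brief, hedged illustration of overloading in C# (the class and method names here are arbitrary examples), the following defines three methods with the same name but different argument lists:

using System;

class Printer
{
    public void Print(int value) => Console.WriteLine($"int: {value}");
    public void Print(string value) => Console.WriteLine($"string: {value}");
    public void Print(int value, string label) => Console.WriteLine($"{label}: {value}");
}

class OverloadDemo
{
    static void Main()
    {
        var p = new Printer();
        p.Print(42);          // resolves to Print(int)
        p.Print("hello");     // resolves to Print(string)
        p.Print(7, "count");  // resolves to Print(int, string)
    }
}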

MULTITHREADING:

C#.NET also supports multithreading. An application that supports multithreading can handle
multiple tasks simultaneously; we can use multithreading to decrease the time taken by an
application to respond to user interaction.
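
The sketch below shows one common way to run work concurrently in C#, using the Task Parallel Library; the workload is purely illustrative and is not taken from this project.

using System;
using System.Threading.Tasks;

class ThreadingDemo
{
    static void Main()
    {
        // Start two pieces of work that run concurrently on thread-pool threads.
        Task<int> sumTask = Task.Run(() =>
        {
            int sum = 0;
            for (int i = 1; i <= 1000; i++) sum += i;
            return sum;
        });
        Task printTask = Task.Run(() => Console.WriteLine("Working in the background..."));

        Task.WaitAll(sumTask, printTask);    // wait for both tasks to finish
        Console.WriteLine($"Sum computed on another thread: {sumTask.Result}");
    }
}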

STRUCTURED EXCEPTION HANDLING

C#.NET supports structured exception handling, which enables us to detect and handle errors at
runtime. In C#.NET, we use Try…Catch…Finally statements to create exception handlers. Using
Try…Catch…Finally statements, we can create robust and effective exception handlers to improve
the performance of our application.
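
A minimal illustration of a Try…Catch…Finally handler in C# follows; the divide-by-zero case is a deliberately simple example, not code from this project.

using System;

class ExceptionDemo
{
    static void Main()
    {
        int[] values = { 10, 0 };
        try
        {
            int result = values[0] / values[1];           // throws DivideByZeroException
            Console.WriteLine(result);
        }
        catch (DivideByZeroException ex)
        {
            Console.WriteLine($"Handled: {ex.Message}");  // handle the specific error
        }
        finally
        {
            Console.WriteLine("Cleanup always runs here.");
        }
    }
}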

THE .NET FRAMEWORK

The .NET Framework is a new computing platform that simplifies application development in
the highly distributed environment of the Internet.

OBJECTIVES OF. NET FRAMEWORK

1. To provide a consistent object-oriented programming environment, whether object code is
stored and executed locally, executed locally but Internet-distributed, or executed remotely.

2. To provide a code-execution environment that minimizes software deployment conflicts and
guarantees safe execution of code.

3. To eliminate performance problems.

There are different types of application, such as Windows-based applications and Web-
based applications.

MICROSOFT SQL SERVER

Microsoft SQL Server is a relational database management system developed by Microsoft.


As a database, it is a software product whose primary function is to store and retrieve data as
requested by other software applications, be it those on the same computer or those running
on another computer across a network (including the Internet). There are at least a dozen
different editions of Microsoft SQL Server aimed at different audiences and for workloads
ranging from small single-machine applications to large Internet-facing applications with
many concurrent users. Its primary query languages are T-SQL and ANSI SQL.
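
As a small, hedged illustration of how a .NET application can retrieve data from SQL Server through ADO.NET, the sketch below uses placeholder connection, table, and column names; none of them are this project's actual configuration.

using System;
using Microsoft.Data.SqlClient;   // on the classic .NET Framework, use System.Data.SqlClient

class SqlDemo
{
    static void Main()
    {
        // Placeholder connection string and table; adjust for a real environment.
        string connectionString = "Server=localhost;Database=CloudShare;Integrated Security=true;";
        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var command = new SqlCommand("SELECT ipAddress, [File] FROM FileStore", connection);
        using SqlDataReader reader = command.ExecuteReader();
        while (reader.Read())
        {
            Console.WriteLine($"{reader.GetString(0)} -> {reader.GetString(1)}");
        }
    }
}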

HISTORY:

GENESIS

Prior to version 7.0, the code base for MS SQL Server originated from Sybase SQL Server, and it
was Microsoft's entry to the enterprise-level database market, competing against Oracle, IBM,
and, later, Sybase itself. Microsoft, Sybase and Ashton-Tate originally worked together to
create and market the first version, named SQL Server 1.0 for OS/2 (about 1989), which was
essentially the same as Sybase SQL Server 3.0 on Unix, VMS, etc.

Since the release of SQL Server 2000, advances have been made in performance, the client
IDE tools, and several complementary systems that are packaged with SQL Server 2005.
These include:

• an extract-transform-load (ETL) tool (SQL Server Integration Services or SSIS)
• a Reporting Server
• an OLAP and data mining server (Analysis Services)

Common Language Runtime (CLR) integration was introduced with this version, enabling
one to write SQL code as Managed Code by the CLR. For relational data, T-SQL has been
augmented with error handling features (try/catch) and support for recursive queries with
CTEs (Common Table Expressions). SQL Server 2005 has also been enhanced with new
indexing algorithms, syntax and better error recovery systems.

FEATURES SQL SERVER:

The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server
2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis
Services. Analysis Services also includes a new data mining component. The Repository
component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000
Meta Data Services. References to the component now use the term Meta Data Services. The
term repository is used only in reference to the repository engine within Meta Data Services

A SQL Server database consists of the following types of objects:

1. TABLE

2. QUERY

3. FORM

4. REPORT

5. MACRO

TABLE:

A table is a collection of data about a specific topic.

VIEWS OF TABLE:

We can work with a table in two views:

1. Design View

2. Datasheet View

Design View

To build or modify the structure of a table, we work in the table design view. We can specify
what kind of data the table will hold.

Datasheet View

To add, edit, or analyse the data itself, we work in the table's datasheet view mode.

QUERY:

A query is a question asked of the data. Access gathers data that answers the question from one
or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a
snapshot (which cannot be edited). Each time we run a query, we get the latest information in
the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an
action on it, such as deleting or updating.

04. SYSTEM DESIGN

SYSTEM MODEL

We consider a cloud computing architecture, illustrated with an example in which a company uses
a cloud to enable its staff in the same group or department to share files. The system model
consists of three different entities: the cloud, a group manager (i.e., the company manager),
and a large number of group members (i.e., the staff). The cloud is operated by CSPs and
provides priced, abundant storage services. However, the cloud is not fully trusted by users,
since the CSPs are very likely to be outside of the cloud users' trusted domain. Similar to
previous work, we assume that the cloud server is honest but curious.

That is, the cloud server will not maliciously delete or modify user data, due to the
protection of data auditing schemes, but will try to learn the content of the stored data and
the identities of cloud users. The group manager takes charge of system parameter generation,
user registration, user revocation, and revealing the real identity of a disputed data owner.
In the given example, the role of the group manager is played by the administrator of the
company. Therefore, we assume that the group manager is fully trusted by the other parties.
Group members are a set of registered users that will store their private data in the cloud
server and share it with others in the group. In our example, the staff play the role of group
members. Note that the group membership changes dynamically, due to staff resignations and new
employees joining the company.

4.1 NORMALIZATION
It is a process of converting a relation to a standard form. The process is used to handle the
problems that can arise due to data redundancy, i.e., repetition of data in the database, to
maintain data integrity, and to handle problems that can arise due to insertion, update, and
deletion anomalies.

Decomposition is the process of splitting relations into multiple relations to eliminate
anomalies and maintain data integrity. To do this we use normal forms, or rules for structuring
relations.

• Insertion anomaly: inability to add data to the database due to the absence of other data.
• Deletion anomaly: unintended loss of data due to deletion of other data.
• Update anomaly: data inconsistency resulting from data redundancy and partial update.
• Normal forms: rules for structuring relations that eliminate anomalies.

FIRST NORMAL FORM:

A relation is said to be in first normal form if the values in the relation are atomic for every
attribute in the relation. By this we mean simply that no attribute value can be a set of values
or, as it is sometimes expressed, a repeating group.

Column Name      Data Type        Default Value
ip Address       varchar(20)      Primary key
File             varchar(2000)

SECOND NORMAL FORM:

A relation is said to be in second normal form if it is in first normal form and it satisfies
any one of the following rules.

1) The primary key is not a composite primary key.
2) No non-key attributes are present.
3) Every non-key attribute is fully functionally dependent on the full set of the primary key.

Column Name      Data Type
Node1            varchar(10)
Node2            varchar(10)
Node3            varchar(10)
Node4            varchar(10)
Node5            varchar(10)

4.2 TABLE DESIGN

Column Name      Data Type        Default Value
ip Address       varchar(20)      Primary key
File             varchar(2000)    Not null

4.3 INPUT DESIGN

The input design is the link between the information system and the user. It comprises
developing specifications and procedures for data preparation, and the steps necessary to put
transaction data into a usable form for processing. This can be achieved by having the computer
read data from a written or printed document, or by having people key the data directly into
the system. The design of input focuses on controlling the amount of input required,
controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The
input is designed in such a way that it provides security and ease of use while retaining
privacy. Input design considered the following things:

➢ What data should be given as input?


➢ How the data should be arranged or coded?
➢ The dialog to guide the operating personnel in providing input.
➢ Methods for preparing input validations and steps to follow when error occur.

OBJECTIVES

1. Input design is the process of converting a user-oriented description of the input into a
computer-based system. This design is important to avoid errors in the data input process and
to show the correct direction to the management for getting correct information from the
computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large volumes of
data. The goal of designing input is to make data entry easier and free from errors. The data
entry screen is designed in such a way that all data manipulations can be performed. It also
provides record viewing facilities.

3. When the data is entered, it is checked for validity. Data can be entered with the help of
screens. Appropriate messages are provided when needed so that the user is not left in
confusion. Thus the objective of input design is to create an input layout that is easy to
follow.

4.4 OUTPUT DESIGN

A quality output is one which meets the requirements of the end user and presents the
information clearly. In any system, the results of processing are communicated to the users and
to other systems through outputs. In output design, it is determined how the information is to
be displayed for immediate need, as well as the hard copy output. It is the most important and
direct source of information to the user. Efficient and intelligent output design improves the
system's relationship with the user and helps in decision-making.

1. Designing computer output should proceed in an organized, well thought out manner; the right
output must be developed while ensuring that each output element is designed so that people
will find the system easy to use effectively. When analysts design computer output, they should
identify the specific output that is needed to meet the requirements.

2.Select methods for presenting information.

3.Create document, report, or other formats that contain information produced by the
system.

The output form of an information system should accomplish one or more of the following
objectives:

❖ Convey information about past activities, current status, or projections of the future.
❖ Signal important events, opportunities, problems, or warnings.
❖ Trigger an action.
❖ Confirm an action.

4.4 SFD/DFD

System Flow Diagram:

System architecture is a conceptual model that defines the structure, behavior, and more
views of a system. An architecture description is a formal description and representation of a
system, organized in a way that supports reasoning about the structures and behaviors of the
system.

Fig. 3.1: A natural RIBE-based data sharing system

The data provider (e.g., David) first decides the users (e.g., Alice and Bob) who can share
the data. Then, David encrypts the data under the identities Alice and Bob, and uploads the
ciphertext of the shared data to the cloud server.

When either Alice or Bob wants to get the shared data, she or he can download and decrypt
the corresponding ciphertext. However, for an unauthorized user and the cloud server, the
plaintext of the shared data is not available.

In some cases, e.g., Alice’s authorization gets expired, David can download the ciphertext of
the shared data, and then decrypt-then-re-encrypt the shared data such that Alice is prevented
from accessing the plaintext of the shared data, and then upload the re-encrypted data to the
cloud server again.

USE CASE DIAGRAM:

To model a system, the most important aspect is to capture its dynamic behaviour. To clarify in
a bit more detail, dynamic behaviour means the behaviour of the system when it is
running/operating. Static behaviour alone is not sufficient to model a system; dynamic
behaviour is more important than static behaviour.

In UML there are five diagrams available to model dynamic nature, and the use case diagram is
one of them. Since the use case diagram is dynamic in nature, there should be some internal or
external factors for making the interaction. These internal and external agents are known as
actors. Use case diagrams thus consist of actors, use cases, and their relationships.

The diagram is used to model the system/subsystem of an application. A single use case diagram
captures a particular functionality of a system, so to model the entire system a number of use
case diagrams are used. A use case diagram, at its simplest, is a representation of a user's
interaction with the system, depicting the specifications of a use case. A use case diagram can
portray the different types of users of a system and the various ways they interact with it,
and will often be accompanied by other types of diagrams as well.

[Use case diagram — actors: Sender, Key Authority, Cloud Server, User. Use cases: View Key
Request, Request Encryption Key, Send Encrypted Message, Key Sent on Mail, Register, Login,
Upload File, View File, Decrypt File, Download File.]
CLASS DIAGRAM:

In software engineering, a class diagram in the Unified Modeling Language (UML) is a type
of static structure diagram that describes the structure of a system by showing the system's
classes, their attributes, operations (or methods), and the relationships among the classes. It
explains which class contains information.

05. SYSTEM DESCRIPTION

An earlier work proposed a secure way for customers to store and share their sensitive data in
cryptographic cloud storage. It provides basic encryption and decryption for providing
security. However, the revocation operation is a sure performance killer in a cryptographic
access control system. To optimize the revocation procedure, the authors present a new
revocation scheme which is efficient, secure, and unassisted. In this scheme, the original data
are first divided into a number of slices and then published to the cloud storage. When a
revocation occurs, the data owner needs only to retrieve one slice, and re-encrypt and
re-publish it. Thus, the revocation process is accelerated by affecting only one slice instead
of the whole data. They have applied the efficient revocation scheme to ciphertext-policy
attribute-based encryption based cryptographic cloud storage. The security analysis shows that
the scheme is computationally secure. SiRiUS is a secure file system designed to be layered
over insecure network and P2P file systems such as NFS, CIFS, OceanStore, and Yahoo! Briefcase.
SiRiUS assumes the network storage is untrusted and provides its own read-write cryptographic
access control for file-level sharing. Key management and revocation are simple, with minimal
out-of-band communication. File system freshness guarantees are supported by SiRiUS using hash
tree constructions. SiRiUS contains a novel method of performing file random access in a
cryptographic file system without the use of a block server.

Extensions to SiRiUS include large-scale group sharing using the NNL key revocation
construction. The implementation of SiRiUS performs well relative to the underlying file system
despite using cryptographic operations. SiRiUS contains a novel method of performing file
random access in a cryptographic file system without the use of a block server. It uses only
its own read-write cryptographic access control, and file-level sharing is done only through
cryptographic access. Another work proposed a system on a multicast communication framework, in
which various types of security threats occur. As a result, the construction of secure group
communication that protects users from intrusion and eavesdropping is very important. In that
paper, the authors propose an efficient key distribution method for secure group communication
over a multicast communication framework. In this method, they use an IP multicast mechanism to
shorten rekeying time and minimize the adverse effect on communication. In addition, they
introduce a proxy mechanism for replies from group members to the group manager to reduce the
traffic generated by rekeying.

They define a new type of batching technique for rekeying in which a new key is generated for
both leaving and joining members. The rekeying process waits for 30 seconds so that the number
of key generations is reduced. Security is one of the most often-cited objections to cloud
computing; analysts and sceptical companies ask who would trust their essential data "out there
somewhere"? There are also requirements for auditability, in the sense of Sarbanes-Oxley; one
can imagine a cloud provider spying on the contents of virtual machine memory, a hard disk
being disposed of without being wiped, or a permissions bug making data visible improperly.
There's an obvious defense, namely user-level encryption of storage. This is already common for
high-value data outside the cloud, and both tools and expertise are readily available. This
approach was successfully used by TC3, a healthcare company with access to sensitive patient
records and healthcare claims, when moving their HIPAA-compliant application to AWS [9].
Similarly, auditability could be added as an additional layer beyond the reach of the
virtualized guest OS, providing facilities arguably more secure than those built into the
applications themselves and centralizing the software responsibilities related to
confidentiality and auditability into a single logical layer.

Such a new feature reinforces the cloud computing perspective of changing our focus from
specific hardware to the virtualized capabilities being provided. Another work focused on a
Hierarchical Identity Based Encryption (HIBE) system where the ciphertext consists of just
three group elements and decryption requires only two bilinear map computations, regardless of
the hierarchy depth. Encryption is as efficient as in other HIBE systems. They prove that the
scheme is selective-ID secure in the standard model and fully secure in the random oracle
model. The system has a number of applications: it gives very efficient forward-secure public
key and identity-based cryptosystems (with short ciphertexts), it converts the NNL broadcast
encryption system into an efficient public key broadcast system, and it provides an efficient
mechanism for encrypting to the future. The system also supports limited delegation, where
users can be given restricted private keys that only allow delegation to bounded depth. The
HIBE system can be modified to support sublinear-size private keys at the cost of some
ciphertext expansion.

In this section, we describe the main design goals of the proposed scheme including access
control, data confidentiality, anonymity and traceability, and efficiency as follows:

Access control

The requirement of access control is twofold. First, group members are able to use the cloud
resource for data operations. Second, unauthorized users cannot access the cloud resource at
any time, and revoked users will be incapable of using the cloud again once they are revoked.

Data confidentiality requires that unauthorized users, including the cloud, are incapable of
learning the content of the stored data. An important and challenging issue for data
confidentiality is to maintain its availability for dynamic groups. Specifically, new users
should be able to decrypt the data stored in the cloud before their participation, and revoked
users should be unable to decrypt the data moved into the cloud after the revocation.

Anonymity and traceability

Anonymity guarantees that group members can access the cloud without revealing their real
identity. Although anonymity represents an effective protection for user identity, it also
poses a potential inside-attack risk to the system. For example, an inside attacker may store
and share mendacious information to derive substantial benefit. Thus, to tackle the inside
attack, the group manager should have the ability to reveal the real identities of data owners.

Efficiency

Efficiency is defined as follows: any group member can store and share data files with others
in the group through the cloud. User revocation can be achieved without involving the remaining
users; that is, the remaining users do not need to update their private keys or perform
re-encryption operations. Newly granted users can learn the content of all data files stored
before their participation without contacting the data owner.

MODULE DESCRIPTIONS

Admin or Group Owner

Group Creation

Groups are created by the admin. A company allows its staff in the same group or department to
store and share files in the cloud. Any member in a group should be able to fully enjoy the
data storing and sharing services provided by the cloud, which is defined as the multiple-owner
manner.

User Registration

For the registration of user i with identity IDi, the group manager randomly selects a number
and characters to generate a random key. Then, the group manager adds the user into the group
user list, which will be used in the traceability phase. After registration, user i obtains a
private key, which will be used for group signature generation and file decryption.
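
A minimal sketch of how such a random registration key could be generated in C# is shown below; the key length and character set are illustrative assumptions rather than the project's actual parameters, and RandomNumberGenerator.GetInt32 requires .NET Core 3.0 or later.

using System;
using System.Security.Cryptography;
using System.Text;

class KeyGenerator
{
    private const string Alphabet =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";

    public static string NewKey(int length = 16)   // 16 characters is an arbitrary example
    {
        var sb = new StringBuilder(length);
        for (int i = 0; i < length; i++)
        {
            int index = RandomNumberGenerator.GetInt32(Alphabet.Length); // unbiased random index
            sb.Append(Alphabet[index]);
        }
        return sb.ToString();
    }

    static void Main() => Console.WriteLine($"Generated key: {NewKey()}");
}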

Group Access Control

When a data dispute occurs, the tracing operation is performed by the group manager to identify
the real identity of the data owner. The employed group signature scheme can be regarded as a
variant of the short group signature, which inherits the inherent unforgeability property,
anonymous authentication, and tracking capability. The requirement of access control is
twofold. First, group members are able to use the cloud resource for data operations. Second,
unauthorized users cannot access the cloud resource at any time, and revoked users will be
incapable of using the cloud again once they are revoked.

File Deletion

Files stored in the cloud can be deleted by either the group manager or the data owner (i.e.,
the member who uploaded the file to the server). To delete a file with identifier IDdata, the
group manager computes a signature on IDdata and sends the signature along with IDdata to the
cloud.

Revoke User

User revocation is performed by the group manager via a publicly available revocation list RL,
based on which group members can encrypt their data files and ensure confidentiality against
the revoked users. Only the admin has permission to revoke a user and to remove a revocation.

User or Group Member

Group members are a set of registered users that will store their private data in the cloud
server and share it with others in the group.

File Upload

To store and share a data file in the cloud, a group member checks the revocation list and
verifies the group signature. First, we check whether the marked date is fresh; second, we
verify the contained signature. The member then uploads the data to the cloud server and adds
the file identifier to the local shared data list maintained by the manager. On receiving the
data, the cloud first checks its validity: if the check returns true, the group signature is
valid; otherwise, the cloud rejects the data. In addition, if several users have been revoked
by the group manager, the cloud also performs revocation verification; the data file is stored
in the cloud only after successful group signature and revocation verifications.

File Download

Signature and key verification: in general, a group signature scheme allows any member of the
group to sign messages while keeping the identity secret from verifiers. Besides, the
designated group manager can reveal the identity of the signature's originator when a dispute
occurs, which is denoted as traceability.

OTP (One Time Password)

OTPs avoid a number of shortcomings that are associated with traditional passwords. The most
important shortcoming that is addressed by OTPs is that, in contrast to static passwords, they
are not vulnerable to replay attacks. This means that a potential intruder who manages to
record an OTP that was already used to log into a service or to conduct a transaction will not
be able to abuse it, since it will no longer be valid. On the downside, OTPs are difficult for
human beings to memorize. An OTP can be used to authenticate a user in a system via an
authentication server. Also, if some more steps are carried out (the server calculates the
subsequent OTP value and sends/displays it to the user, who checks it against the subsequent
OTP value calculated by his token), the user can also authenticate the validation server.

Generation of the OTP value. The algorithm can be described in 3 steps:

Step 1: Generate the HMAC-SHA value. Let HMK = HMAC-SHA(Key, T) // HMK is a 20-byte string

Step 2: Generate a hex code of the HMK. HexHMK = ToHex(HMK)

Step 3: Extract the 8-digit OTP value from the string. OTP = Truncate(HexHMK). The Truncate
function in Step 3 does the dynamic truncation and reduces the OTP to 8 digits.
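
The C# sketch below follows these three steps using HMAC-SHA1; the shared key, the counter value, and the simple modulus-based truncation are illustrative assumptions (in particular, the sketch does not reproduce the exact dynamic truncation used by standard OTP algorithms).

using System;
using System.Security.Cryptography;
using System.Text;

class OtpDemo
{
    static string GenerateOtp(byte[] key, long counter)
    {
        byte[] t = BitConverter.GetBytes(counter);           // T: the moving factor
        if (BitConverter.IsLittleEndian) Array.Reverse(t);   // use big-endian byte order

        using var hmac = new HMACSHA1(key);
        byte[] hmk = hmac.ComputeHash(t);                    // Step 1: 20-byte HMAC-SHA value

        string hexHmk = BitConverter.ToString(hmk).Replace("-", "");   // Step 2: hex code of HMK

        // Step 3: truncate to an 8-digit OTP (simplified truncation for illustration)
        ulong number = Convert.ToUInt64(hexHmk.Substring(0, 15), 16);
        return (number % 100_000_000UL).ToString("D8");
    }

    static void Main()
    {
        byte[] key = Encoding.UTF8.GetBytes("shared-secret-key");   // placeholder shared key
        Console.WriteLine(GenerateOtp(key, counter: 1));
    }
}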

AES Encryption

AES, or the Advanced Encryption Standard, is a cipher, i.e., a method for encrypting and
decrypting information. AES is a block cipher with a block length of 128 bits. Whenever you
transmit files over secure file transfer protocols like HTTPS, FTPS, SFTP, WebDAVS, OFTP, or
AS2, there's a good chance your data will be encrypted by some flavor of AES cipher - either
AES 256, 192, or 128. The 16-byte plaintext input can be arranged as a 4×4 square matrix.
Notice that the first four bytes of a 128-bit input block occupy the first column in the 4×4
array of bytes; the next four bytes occupy the second column, and so on. AES encryption
consists of four different stages: (1) byte substitution, (2) shift rows, (3) mix columns, and
(4) add round key.

We can take the following AES steps of encryption for a 128-bit block:

1. Derive the set of round keys from the cipher key.

2. Initialize the state array with the block data (plaintext).

3. Add the initial round key to the starting state array.

4. Perform nine rounds of state manipulation.

5. Perform the tenth and final round of state manipulation.

6. Copy the final state array out as the encrypted data (ciphertext).

AES Decryption

The Decryption algorithm makes use of the key in the reverse order. However, the decryption
algorithm is not identical to the encryption algorithm.
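
As a minimal, hedged illustration of AES encryption and decryption in C#, the sketch below uses the framework's Aes class with its default settings rather than implementing the round stages described above; the sample plaintext and the in-memory key handling are placeholders, not this project's actual scheme.

using System;
using System.Security.Cryptography;
using System.Text;

class AesDemo
{
    static void Main()
    {
        using Aes aes = Aes.Create();   // generates a random key and IV (CBC mode by default)
        byte[] plaintext = Encoding.UTF8.GetBytes("Shared file contents");   // placeholder data

        // Encrypt
        byte[] ciphertext;
        using (ICryptoTransform encryptor = aes.CreateEncryptor())
            ciphertext = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);

        // Decrypt with the same key and IV
        byte[] decrypted;
        using (ICryptoTransform decryptor = aes.CreateDecryptor())
            decrypted = decryptor.TransformFinalBlock(ciphertext, 0, ciphertext.Length);

        Console.WriteLine(Convert.ToBase64String(ciphertext));
        Console.WriteLine(Encoding.UTF8.GetString(decrypted));   // prints the original text
    }
}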

06. TESTING AND IMPLEMENTATION

SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying to discover
every conceivable fault or weakness in a work product. It provides a way to check the
functionality of components, sub-assemblies, assemblies, and/or a finished product. It is the
process of exercising software with the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an unacceptable manner. There are
various types of tests; each test type addresses a specific testing requirement.

TYPES OF TESTS:
Testing is the process of trying to discover every conceivable fault or weakness in a work
product. The different types of testing are given below:

UNIT TESTING:

Unit testing involves the design of test cases that validate that the internal program logic is
functioning properly, and that program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of individual software units of the
application, and it is done after the completion of an individual unit and before integration.

This is a structural testing, that relies on knowledge of its construction and is invasive. Unit
tests perform basic tests at component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined inputs and
expected results.

INTEGRATION TESTING:

Integration tests are designed to test integrated software components to determine if they
actually run as one program. Testing is event driven and is more concerned with the basic
outcome of screens or fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the combination of components
is correct and consistent. Integration testing is specifically aimed at exposing the problems
that arise from the combination of components.

FUNCTIONAL TEST:

Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.


Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/ Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or
special test cases. In addition, systematic coverage pertaining to identifying business process
flows, data fields, predefined processes, and successive processes must be considered for
testing. Before functional testing is complete, additional tests are identified and the
effective value of current tests is determined.

SYSTEM TEST:
System testing ensures that the entire integrated software system meets requirements. It
tests a configuration to ensure known and predictable results. An example of system testing is
the configuration oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and integration points.

WHITE BOX TESTING:


White box testing is testing in which the software tester has knowledge of the inner workings,
structure, and language of the software, or at least its purpose. It is used to test areas that
cannot be reached from a black box level.

BLACK BOX TESTING:


Black box testing is testing the software without any knowledge of the inner workings,
structure, or language of the module being tested. Black box tests, like most other kinds of
tests, must be written from a definitive source document, such as a specification or
requirements document. It is testing in which the software under test is treated as a black
box: you cannot "see" into it. The test provides inputs and responds to outputs without
considering how the software works.

UNIT TESTING:
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as
two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.

Features to be tested
• Verify that the entries are of the correct format
• No duplicate entries should be allowed
• All links should take the user to the correct page.
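
As a hedged illustration of how one such check could be automated, the sketch below writes a unit test with the NUnit framework; the EntryValidator class and its format rule are hypothetical stand-ins for the project's real validation logic.

using NUnit.Framework;

public static class EntryValidator
{
    // Hypothetical rule: an IP-address entry must be non-empty and at most 20 characters.
    public static bool IsValidIpEntry(string value) =>
        !string.IsNullOrWhiteSpace(value) && value.Length <= 20;
}

[TestFixture]
public class EntryValidatorTests
{
    [Test]
    public void ValidEntry_IsAccepted() =>
        Assert.That(EntryValidator.IsValidIpEntry("192.168.1.10"), Is.True);

    [Test]
    public void EmptyEntry_IsRejected() =>
        Assert.That(EntryValidator.IsValidIpEntry(""), Is.False);
}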

INTEGRATION TESTING:
Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects. The
task of the integration test is to check that components or software applications (e.g.,
components in a software system or, one step up, software applications at the company level)
interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

ACCEPTANCE TESTING:
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional
requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

IMPLEMENTATION

Implementation is the stage of the project when the theoretical design is turned into a
working system. Thus it can be considered the most critical stage in achieving a
successful new system and in giving the user confidence that the new system will work and
be effective.
The implementation stage involves careful planning, investigation of the existing system and
its constraints on implementation, design of methods to achieve the changeover, and
evaluation of the changeover methods.
MODULES:
A module is a part of a program. Programs are composed of one or more independently
developed modules that are not combined until the program is linked. A single module can
contain one or several routines.
Our project modules are given below:
• Data Provider
• Key Authority
• Storage Server
• Cloud User

DATA PROVIDER
The data provider (e.g., David) first decides the users (e.g., Alice and Bob) who can share the
data. Then, David encrypts the data under the identities of Alice and Bob, and uploads the
ciphertext of the shared data to the cloud server.
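
A minimal sketch of this flow is given below, assuming an RS-IBE object exposing encrypt(pp, identity, period, message) and a cloud store exposing put(key, value); both objects and their method names are assumptions for illustration rather than the project's actual API.

    # Sketch only: 'rsibe' and 'cloud' are assumed interfaces, not the project's API.
    def share_data(rsibe, cloud, pp, data: bytes, recipients, period: int) -> None:
        """The data provider (David) encrypts the shared data under each recipient
        identity (e.g. Alice and Bob) and uploads the ciphertexts to the cloud."""
        for identity in recipients:
            ciphertext = rsibe.encrypt(pp, identity, period, data)
            cloud.put(f"shared/{identity}/{period}", ciphertext)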

CLOUD USER
In this module, if either Alice or Bob wants to get the shared data, she or he can download and
decrypt the corresponding ciphertext. However, for an unauthorized user and the cloud
server, the plaintext of the shared data is not available. In some cases, e.g., when Alice's
authorization expires, David can download the ciphertext of the shared data,
decrypt-then-re-encrypt the shared data such that Alice is prevented from accessing the
plaintext, and then upload the re-encrypted data to the cloud server again.
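
A sketch of the download/decrypt path and the decrypt-then-re-encrypt step follows, using the same assumed rsibe and cloud interfaces as above; how the provider obtains a decryption capability for the old ciphertext is scheme-dependent and is simply taken as a parameter here.

    # Sketch only: interfaces and key handling are assumptions for illustration.
    def read_shared_data(rsibe, cloud, pp, identity: str, dk, period: int) -> bytes:
        """An authorised user (Alice or Bob) downloads and decrypts the ciphertext."""
        ciphertext = cloud.get(f"shared/{identity}/{period}")
        return rsibe.decrypt(pp, ciphertext, dk)

    def refresh_after_expiry(rsibe, cloud, pp, identity: str, provider_dk,
                             old_period: int, new_period: int) -> None:
        """When an authorisation expires, the provider downloads the old ciphertext,
        decrypts it, re-encrypts it for a new time period (for which the expired
        user holds no decryption key), and uploads it again."""
        old_ct = cloud.get(f"shared/{identity}/{old_period}")
        plaintext = rsibe.decrypt(pp, old_ct, provider_dk)
        new_ct = rsibe.encrypt(pp, identity, new_period, plaintext)
        cloud.put(f"shared/{identity}/{new_period}", new_ct)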

KEY AUTHORITY
By delegating the generation of the re-encryption key to the key authority, the ciphertext size of
their scheme also remains constant. However, to this end, the key authority has to maintain a
data table for each user to store the user's secret key for all time periods, which brings additional
storage cost to the key authority.
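
The per-user data table mentioned here can be pictured as a simple mapping from (user, time period) to secret key, as in the sketch below; the layout and method names are assumptions used only to make the storage cost visible.

    from collections import defaultdict

    # Assumed layout of the key authority's per-user key table.
    class KeyTable:
        def __init__(self):
            self._table = defaultdict(dict)  # user_id -> {time_period: secret_key}

        def store(self, user_id: str, period: int, secret_key: bytes) -> None:
            self._table[user_id][period] = secret_key

        def lookup(self, user_id: str, period: int):
            return self._table[user_id].get(period)

        def storage_cost(self) -> int:
            """The number of stored keys grows with users x time periods."""
            return sum(len(periods) for periods in self._table.values())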

STORAGE SERVER:
A cloud service provider has huge storage space, computation resources, and shared services to
provide to its clients. It is responsible for storing the data, controlling outside users' access to it,
and providing the corresponding contents.

METHODOLOGIES

Methodology is the systematic, theoretical analysis of the methods applied to a field of study.
A methodology does not set out to provide solutions - it is, therefore, not the same as a
method. Instead, a methodology offers the theoretical underpinning for understanding which
method, set of methods, or best practices can be applied to a specific case, for example, to
calculate a specific result. It has been defined also as follows:
[1] "the analysis of the principles of methods, rules, and postulates employed by a
discipline"
[2] "the systematic study of methods that are, can be, or have been applied within a
discipline"
[3] "the study or description of methods"

REVOCABLE-STORAGE IDENTITY-BASED ENCRYPTION:

A revocable-storage identity-based encryption scheme with message space M, identity space
I, and total number of time periods T is comprised of the following polynomial-time algorithms:

Setup(1λ, T, N): The setup algorithm takes as input the security parameter λ, the time bound
T, and the maximum number of system users N, and it outputs the public parameter PP and
the master secret key MSK, associated with the initial revocation list RL = ∅ and state st.

PKGen(PP, MSK, ID): The private key generation algorithm takes as input PP, MSK, and
an identity ID ∈ I, and it generates a private key SKID for ID and an updated state st.

KeyUpdate(PP, MSK, RL, t, st): The key update algorithm takes as input PP, MSK, the
current revocation list RL, the key update time t ≤ T, and the state st, and it outputs the key
update KUt.

DKGen(PP, SKID, KUt): The decryption key generation algorithm takes as input PP,
SKID, and KUt, and it generates a decryption key DKID,t for ID with time period t, or a
symbol ⊥ to indicate that ID has been previously revoked.

Encrypt(PP, ID, t, M): The encryption algorithm takes as input PP, an identity ID, a time
period t ≤ T, and a message M ∈ M to be encrypted, and it outputs a ciphertext CTID,t.

CTUpdate(PP, CTID,t, t′): The ciphertext update algorithm takes as input PP, CTID,t, and a
new time period t′ ≥ t, and it outputs an updated ciphertext CTID,t′.

Decrypt(PP, CTID,t, DKID,t′): The decryption algorithm takes as input PP, CTID,t, and
DKID,t′, and it recovers the encrypted message M or a distinguished symbol ⊥ indicating
that CTID,t is an invalid ciphertext.

Revoke(PP, ID, RL, t, st): The revocation algorithm takes as input PP, an identity ID ∈ I to
be revoked, the current revocation list RL, a state st, and the revocation time period t ≤ T, and it
updates RL to a new one.
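
The definitions above can be summarised as an interface skeleton, shown below in Python; this is only an illustrative sketch of the algorithm signatures, and the concrete key, ciphertext, revocation list, and state types are left abstract.

    from abc import ABC, abstractmethod

    # Illustrative skeleton mirroring the algorithm definitions above.
    class RSIBE(ABC):
        @abstractmethod
        def setup(self, security_param: int, time_bound: int, max_users: int):
            """Return (PP, MSK) with initial revocation list RL = set() and state st."""

        @abstractmethod
        def pk_gen(self, pp, msk, identity):
            """Return a private key SK_ID for the identity and an updated state st."""

        @abstractmethod
        def key_update(self, pp, msk, rl, period, st):
            """Return the key update KU_t for time period t <= T."""

        @abstractmethod
        def dk_gen(self, pp, sk_id, ku_t):
            """Return the decryption key DK_{ID,t}, or None if ID has been revoked."""

        @abstractmethod
        def encrypt(self, pp, identity, period, message):
            """Return the ciphertext CT_{ID,t}."""

        @abstractmethod
        def ct_update(self, pp, ct, new_period):
            """Return an updated ciphertext CT_{ID,t'} for a new period t' >= t."""

        @abstractmethod
        def decrypt(self, pp, ct, dk):
            """Return the plaintext message, or None for an invalid ciphertext."""

        @abstractmethod
        def revoke(self, pp, identity, rl, period, st):
            """Add the identity to the revocation list and return the updated RL."""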

07. CONCLUSION AND FUTURE SCOPE

CONCLUSION
Even though existing cloud platforms are used as shared repositories, they do not support any
notion of shared ownership. We consider this a severe limitation because contributing parties
cannot jointly decide how their resources are used. In this paper, we introduced a novel
concept of shared ownership and we described it through a formal access control model,
called SOM. We then proposed two possible instantiations of our shared ownership
model. Our first solution, called Commune, relies on secure file dispersal and collusion-
resistant secret sharing to ensure that all access grants in the cloud require the support of an
agreed threshold of owners. As such, Commune can be used in existing agnostic clouds
without modifications to the platforms. Our second solution, dubbed Comrade, leverages
blockchain technology in order to reach consensus on access control decisions. Unlike
Commune, Comrade requires that the cloud is able to translate access control decisions that
have reached consensus in the blockchain into storage access control rules. Comrade, however,
shows better performance than Commune. Given the rise of personal clouds, we argue that
Commune and Comrade find direct applicability in setting up shared repositories that are
distributively managed atop the various personal clouds owned by users. We therefore
hope that our findings motivate further research in this area.

FUTURE SCOPE

Recently, many companies have been migrating from their own infrastructure to the cloud; this
migration should not compromise the performance of the cloud. So, in our future work we
introduce the concept of load balancing for cloud partitions. It has been observed that centralized
allocation is not efficient for balancing load across all nodes in a system, so a partitioning
approach is required that balances the load across the network.

This application can be easily implemented under various situations. We can add new
features as and when we require them. Reusability is possible as and when required in this
application. There is flexibility in all the modules.

Software scope

Extensibility:

The software is extendable in ways that its original developers may not have expected. The
following principles enhance extensibility: hide data structures, avoid traversing multiple links
or methods, avoid case statements on object types, and distinguish public from private operations.

Reusability:

Reusability is possible as and when required in this application, and we can update it in the next
version. Reusable software reduces design, coding, and testing costs by amortizing effort over
several designs. Reducing the amount of code also simplifies understanding, which increases the
likelihood that the code is correct. We follow both types of reusability: sharing of newly
written code within a project and reuse of previously written code on new projects.

08. FORMS AND REPORT

HOME: [screenshot]

USER REGISTRATION: [screenshot]

Registered Successfully: [screenshot]

LOGIN PAGE: [screenshot]

CLOUD LOGIN: [screenshot]

VIEW ALL USERS: [screenshot]

GENERATE KEY: [screenshot]

OTP SEND: [screenshot]

FILE UPLOAD: [screenshot]

VIEW ALL DATA: [screenshot]

ENCRYPTING DATA: [screenshot]

DOWNLOADING DATA: [screenshot]

PASSWORD NOTEPAD: [screenshot]

09. BIBLIOGRAPHY

[1] L. M. Vaquero, L. Rodero-Merino, J. Caceres, and M. Lindner, “A break in the clouds:
towards a cloud definition,” ACM SIGCOMM Computer Communication Review, vol. 39,
no. 1, pp. 50–55, 2008.

[2] iCloud. (2014) Apple storage service. [Online]. Available: https://www.icloud.com/

[3] Azure. (2014) Azure storage service. [Online]. Available: http://www.windowsazure.com/

[4] Amazon. (2014) Amazon simple storage service (amazon s3). [Online]. Available:
http://aws.amazon.com/s3/

[5] K. Chard, K. Bubendorfer, S. Caton, and O. F. Rana, “Social cloud computing: A vision
for socially motivated resource sharing,” Services Computing, IEEE Transactions on, vol. 5,
no. 4, pp. 551–563, 2012.

[6] C. Wang, S. S. Chow, Q. Wang, K. Ren, and W. Lou, “Privacy-preserving public
auditing for secure cloud storage,” Computers, IEEE Transactions on, vol. 62, no. 2, pp. 362–
375, 2013.

[7] G. Anthes, “Security in the cloud,” Communications of the ACM, vol. 53, no. 11, pp.
16–18, 2010.

[8] K. Yang and X. Jia, “An efficient and secure dynamic auditing protocol for data storage
in cloud computing,” Parallel and Distributed Systems, IEEE Transactions on, vol. 24, no. 9,
pp. 1717–1726, 2013.

[9] B. Wang, B. Li, and H. Li, “Public auditing for shared data with efficient user revocation
in the cloud,” in INFOCOM, 2013 Proceedings IEEE. IEEE, 2013, pp. 2904–2912.

[10] S. Ruj, M. Stojmenovic, and A. Nayak, “Decentralized access control with anonymous
authentication of data stored in clouds,” Parallel and Distributed Systems, IEEE Transactions
on, vol. 25, no. 2, pp. 384–394, 2014.

[11] X. Huang, J. Liu, S. Tang, Y. Xiang, K. Liang, L. Xu, and J. Zhou, “Cost-effective
authentic and anonymous data sharing with forward security,” Computers, IEEE Transactions
on, 2014, doi:10.1109/TC.2014.2315619.

[12] C.-K. Chu, S. S. Chow, W.-G. Tzeng, J. Zhou, and R. H. Deng, “Key-aggregate
cryptosystem for scalable data sharing in cloud storage,” Parallel and Distributed Systems,
IEEE Transactions on, vol. 25, no. 2, pp. 468–477, 2014.

[13] A. Shamir, “Identity-based cryptosystems and signature schemes,” in Advances in
Cryptology. Springer, 1985, pp. 47–53.

[14] D. Boneh and M. Franklin, “Identity-based encryption from the Weil pairing,” SIAM
Journal on Computing, vol. 32, no. 3, pp. 586–615, 2003.

[15] S. Micali, “Efficient certificate revocation,” Tech. Rep., 1996.

[16] W. Aiello, S. Lodha, and R. Ostrovsky, “Fast digital identity revocation,” in Advances
in Cryptology–CRYPTO 1998. Springer, 1998, pp. 137–152.

[17] D. Naor, M. Naor, and J. Lotspiech, “Revocation and tracing schemes for stateless
receivers,” in Advances in Cryptology–CRYPTO 2001. Springer, 2001, pp. 41–62.

[18] C. Gentry, “Certificate-based encryption and the certificate revocation problem,” in
Advances in Cryptology–EUROCRYPT 2003. Springer, 2003, pp. 272–293.

[19] V. Goyal, “Certificate revocation using fine grained certificate space partitioning,” in
Financial Cryptography and Data Security. Springer, 2007, pp. 247–259.

[20] A. Boldyreva, V. Goyal, and V. Kumar, “Identity-based encryption with efficient
revocation,” in Proceedings of the 15th ACM Conference on Computer and Communications
Security. ACM, 2008, pp. 417–426.

