
RAAC: Robust and Auditable Access Control with Multiple Attribute

Authorities for Public Cloud Storage

ABSTRACT:

Data access control is a challenging issue in public cloud storage systems.


Ciphertext-policy attribute-based encryption (CP-ABE) has been adopted as a
promising technique to provide flexible, fine-grained, and secure data access
control for cloud storage with honest-but-curious cloud servers. However, in the
existing CP-ABE schemes, the single attribute authority must execute the time-
consuming user legitimacy verification and secret key distribution, and hence, it
results in a single-point performance bottleneck when a CP-ABE scheme is
adopted in a large-scale cloud storage system. Users may be stuck in the waiting
queue for a long period to obtain their secret keys, thereby resulting in low
efficiency of the system. Although multi-authority access control schemes have
been proposed, these schemes still cannot overcome the drawbacks of single-point
bottleneck and low efficiency, due to the fact that each of the authorities still
independently manages a disjoint attribute set. In this paper, we propose a novel
heterogeneous framework to remove the problem of single-point performance
bottleneck and provide a more efficient access control scheme with an auditing
mechanism. Our framework employs multiple attribute authorities to share the load
of user legitimacy verification. Meanwhile, in our scheme, a central authority is
introduced to generate secret keys for legitimacy-verified users. Unlike other multi-
authority access control schemes, each of the authorities in our scheme manages
the whole attribute set individually. To enhance security, we also propose an
auditing mechanism to detect which attribute authority has incorrectly or
maliciously performed the legitimacy verification procedure. Analysis shows that
our system not only guarantees the security requirements but also achieves a
significant performance improvement in key generation.

CHAPTER 1

INTRODUCTION

Cloud storage is a promising and important service paradigm in cloud computing.


Benefits of using cloud storage include greater accessibility, higher reliability,
rapid deployment and stronger protection, to name just a few. Despite the
mentioned benefits, this paradigm also brings forth new challenges on data access
control, which is a critical issue to ensure data security. Since cloud storage is
operated by cloud service providers, who are usually outside the trusted domain of
data owners, the traditional access control methods in the Client/Server model are
not suitable in the cloud storage environment. The data access control in the cloud storage
environment has thus become a challenging issue.

To address the issue of data access control in cloud storage, there have been quite a
few schemes proposed, among which Ciphertext-Policy Attribute-Based
Encryption (CP-ABE) is regarded as one of the most promising techniques. A
salient feature of CP-ABE is that it grants data owners direct control power based
on access policies, to provide flexible, fine-grained and secure access control for
cloud storage systems.

In CP-ABE schemes, the access control is achieved by using cryptography, where


an owner’s data is encrypted with an access structure over attributes, and a user’s
secret key is labelled with his/her own attributes. Only if the attributes associated
with the user’s secret key satisfy the access structure, can the user decrypt the
corresponding ciphertext to obtain the plaintext. So far, the CP-ABE based access
control schemes for cloud storage have been developed into two complementary
categories, namely, the single-authority scenario [5–9] and the multi-authority
scenario [10–12].
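
To make the decryption condition concrete, the following is a minimal Java sketch of the attribute-matching step only. The class and method names (AccessStructure, satisfiedBy) are invented for illustration; the real CP-ABE construction enforces this condition cryptographically with pairing-based operations rather than a plain set test.

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative model of a threshold policy such as "at least k of these
// attributes". Real CP-ABE embeds the policy cryptographically in the
// ciphertext; this sketch only shows the satisfaction test that
// decryption implicitly performs.
class AccessStructure {
    private final List<String> requiredAttributes;
    private final int threshold; // minimum number of attributes that must match

    AccessStructure(List<String> requiredAttributes, int threshold) {
        this.requiredAttributes = requiredAttributes;
        this.threshold = threshold;
    }

    boolean satisfiedBy(Set<String> userAttributes) {
        int matches = 0;
        for (String attr : requiredAttributes) {
            if (userAttributes.contains(attr)) {
                matches++;
            }
        }
        return matches >= threshold;
    }
}

public class PolicyCheckDemo {
    public static void main(String[] args) {
        AccessStructure policy = new AccessStructure(
                Arrays.asList("doctor", "cardiology", "senior"), 2);
        Set<String> keyAttributes =
                new HashSet<>(Arrays.asList("doctor", "cardiology"));
        // The user can decrypt only if the attributes in his/her secret
        // key satisfy the access structure in the ciphertext.
        System.out.println(policy.satisfiedBy(keyAttributes)); // prints true
    }
}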

Although existing CP-ABE access control schemes have a lot of attractive features,
they are neither robust nor efficient in key generation. Since there is only one
authority in charge of all attributes in single-authority schemes, offline/crash of
this authority makes all secret key requests unavailable during that period. A
similar problem exists in multi-authority schemes, since each of the multiple
authorities manages a disjoint attribute set.

In single-authority schemes, the only authority must verify the legitimacy of users’
attributes before generating secret keys for them. As the access control system is
associated with data security, and the only credential a user possesses is his/her
secret key associated with his/her attributes, the process of key issuing must be
cautious. However, in the real world, the attributes are diverse. For example, to
verify whether a user is able to drive may need an authority to give him/her a test
to prove that he/she can drive. Thus he/she can get an attribute key associated with
driving ability [13]. To deal with the verification of various attributes, the user may
be required to be present in person to confirm them. Furthermore, the process to
verify/assign attributes to users is usually difficult, so administrators are normally
employed to manually handle the verification; as [14] has mentioned, the
authenticity of registered data must be achieved by out-of-band (mostly manual)
means. To make a careful decision, the unavoidable participation of human beings
makes the verification time-consuming, which causes a single-point bottleneck.
Especially, for a large system, there are always large numbers of users requesting
secret keys. The inefficiency of the authority’s service results in single-point
performance bottleneck, which will cause system congestion such that users often
cannot obtain their secret keys quickly, and have to wait in the system queue. This
significantly degrades users' experience of real-time services. On the other hand, if
only one authority issues secret keys for particular attributes, and the verification
requires users' presence, another kind of long service delay arises, since the
authority may be too far away from a user's home or workplace. As a result, the single-point
performance bottleneck problem affects the efficiency of secret key generation
service and immensely degrades the utility of the existing schemes to conduct
access control in large cloud storage systems. Furthermore, in multi-authority
schemes, the same problem also exists due to the fact that multiple authorities
separately maintain disjoint attribute subsets and issue secret keys associated with
users’ attributes within their own administration domain. Each authority performs
the verification and secret key generation as a whole in the secret key distribution
process, just like what the single authority does in single authority schemes.
Therefore, the single-point performance bottleneck still exists in such multi-
authority schemes.

A straightforward idea to remove the single-point bottleneck is to allow multiple


authorities to jointly manage the universal attribute set, in such a way that each of
them is able to distribute secret keys to users independently. By adopting multiple
authorities to share the load, the influence of the single-point bottleneck can be
reduced to a certain extent.

However, this solution brings forth new security threats. Since there are
multiple functionally identical authorities performing the same procedure, it is hard
to find the responsible authority if mistakes have been made or malicious
behaviors have been implemented in the process of secret key generation and
distribution. For example, an authority may falsely distribute secret keys beyond
a user's legitimate attribute set. Such a weak point in security makes this
straightforward idea hard to meet the security requirements of access control for
public cloud storage. Our recent work, TMACS [15], is a threshold multi-authority
CP-ABE access control scheme for public cloud storage, where multiple
authorities jointly manage a uniform attribute set. It effectively addresses the single-
point bottleneck of performance and security, but introduces some additional
overhead. Therefore, in this paper, we present a feasible solution which not only
promotes efficiency and robustness, but also guarantees that the new solution is as
secure as the original single-authority schemes.

CHAPTER 2

LITERATURE REVIEW

Title: Enabling Personalized Search over Encrypted Outsourced Data


with Efficiency Improvement

Author: Z. Fu, K. Ren, J. Shu, X. Sun, and F. Huang,

Abstract: In cloud computing, searchable encryption scheme over outsourced


data is a hot research field. However, most existing works on encrypted search
over outsourced cloud data follow the model of “one size fits all” and ignore
personalized search intention. Moreover, most of them support only exact keyword
search, which greatly affects data usability and user experience. So how to design a
searchable encryption scheme that supports personalized search and improves user
search experience remains a very challenging task. In this paper, for the first time,
we study and solve the problem of personalized multi-keyword ranked search over
encrypted data (PRSE) while preserving privacy in cloud computing. With the help
of semantic ontology WordNet, we build a user interest model for individual user
by analyzing the user's search history, and adopt a scoring mechanism to express
user interest smartly. To address the limitations of the "one size fits all" model
and exact keyword search, we propose two PRSE schemes for different search
intentions. Extensive experiments on real-world dataset validate our analysis and
show that our proposed solution is very efficient and effective.

Title: Towards efficient content-aware search over encrypted outsourced data in


cloud

Author: Z. Fu, X. Sun, S. Ji, and G. Xie,

Abstract: With the increasing adoption of cloud computing, a growing number of


users outsource their datasets into the cloud. The datasets are usually encrypted before
outsourcing to preserve the privacy. However, the common practice of encryption
makes the effective utilization difficult, for example, search the given keywords in
the encrypted datasets. Many schemes are proposed to make encrypted data
searchable based on keywords. However, keyword-based search schemes ignore
the semantic information of users' retrieval requests, and cannot completely
meet users' search intention. Therefore, how to design a content-based search
scheme and make semantic search more effective and context-aware is a difficult
challenge. In this paper, we propose an innovative semantic search scheme based
on the concept hierarchy and the semantic relationship between concepts in the
encrypted datasets. More specifically, our scheme first indexes the documents and
builds trapdoor based on the concept hierarchy. To further improve the search
efficiency, we utilize a tree-based index structure to organize all the document
index vectors. Our experiment results based on the real world datasets show the
scheme is more efficient than previous schemes. We also study the threat model of
our approach and prove it does not introduce any security risk.

Title: Towards A dynamic secure group sharing framework in public cloud


computing

Author: K. Xue and P. Hong,

Abstract: With the popularity of group data sharing in public cloud computing,
the privacy and security of group sharing data have become two major issues. The
cloud provider cannot be treated as a trusted third party because of its semi-trust
nature, and thus the traditional security models cannot be straightforwardly
generalized into cloud based group sharing frameworks. In this paper, we propose
a novel secure group sharing framework for public cloud, which can effectively
take advantage of the cloud servers' help but have no sensitive data being exposed
to attackers and the cloud provider. The framework combines proxy signature,
enhanced TGDH and proxy re-encryption together into a protocol. By applying the
proxy signature technique, the group leader can effectively grant the privilege of
group management to one or more chosen group members. The enhanced TGDH
scheme enables the group to negotiate and update the group key pairs with the help
of cloud servers, which does not require all of the group members to be online all
the time. By adopting proxy re-encryption, most computationally intensive
operations can be delegated to cloud servers without disclosing any private
information. Extensive security and performance analysis shows that our proposed
scheme is highly efficient and satisfies the security requirements for public cloud
based secure group sharing.


Title: Attribute-based access to scalable media in cloud-assisted content sharing

Author: Y. Wu, Z. Wei, and H. Deng,

Abstract: This paper presents a novel Multi-message Ciphertext Policy


Attribute-Based Encryption (MCP-ABE) technique, and employs the MCP-ABE to
design an access control scheme for sharing scalable media based on data
consumers' attributes (e.g., age, nationality, or gender) rather than an explicit list of
the consumers' names. The scheme is efficient and flexible because MCP-ABE
allows a content provider to specify an access policy and encrypt multiple
messages within one ciphertext such that only the users whose attributes satisfy the
access policy can decrypt the ciphertext. Moreover, the paper shows how to
support resource-limited mobile devices by offloading computationally intensive
operations to cloud servers without compromising data privacy.

Architecture diagram: (figure not reproduced in this text)
CHAPTER 3

EXISTING SYSTEM:
Cloud computing is Internet-based computing which enables
sharing of services. Many users place their data in the cloud. However, the fact that
users no longer have physical possession of the possibly large size of outsourced
data makes the data integrity protection in cloud computing a very challenging and
potentially formidable task, especially for users with constrained computing
resources and capabilities. So correctness of data and security is a prime concern.
This article studies the problem of ensuring the integrity and security of data
storage in Cloud Computing. Security in cloud is achieved by signing the data
block before sending to the cloud. Using Cloud Storage, users can remotely store
their data and enjoy the on-demand high quality applications and services from a
shared pool of configurable computing resources, without the burden of local data
storage and maintenance. However, the fact that users no longer have physical
possession of the outsourced data makes the data integrity protection in Cloud
Computing a formidable task, especially for users with constrained computing
resources. Moreover, users should be able to just use the cloud storage as if it is
local, without worrying about the need to verify its integrity. Thus, enabling public
auditability for cloud storage is of critical importance so that users can resort to a
third party auditor (TPA) to check the integrity of outsourced data and be worry-
free. To securely introduce an effective TPA, the auditing process should bring in
no new vulnerabilities towards user data privacy, and introduce no additional
online burden to user. In this paper, we propose a secure cloud storage system
supporting privacy-preserving public auditing. We further extend our result to
enable the TPA to perform audits for multiple users simultaneously and efficiently.
Extensive security and performance analysis show the proposed schemes are
provably secure and highly efficient.
Disadvantage:
Dynamic data operations, especially block insertion, are missing in most existing
schemes.
PROPOSED SYSTEM:

Traditional access control methods in the Client/Server model are not suitable in the cloud storage environment.


The data access control in cloud storage environment has thus become a
challenging issue. To address the issue of data access control in cloud storage,
there have been quite a few schemes proposed, among which
Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is regarded as one of the
most promising techniques. A salient feature of CP-ABE is that it grants data
owners direct control power based on access policies, to provide flexible,
fine-grained and secure access control for cloud storage systems. In CP-ABE
schemes, the access control is achieved by using cryptography, where an owner’s
data is encrypted with an access structure over attributes, and a user’s secret key is
labelled with his/her own attributes. Only if the attributes associated with the user’s
secret key satisfy the access structure, can the user decrypt the corresponding
ciphertext to obtain the plaintext. So far, the CP-ABE based access control
schemes for cloud storage have been developed into two complementary
categories, namely, the single-authority scenario and the multi-authority scenario.
Although existing CP-ABE access control schemes have a lot of attractive features,
they are neither robust nor efficient in key generation. Since there is only one
authority in charge of all attributes in single-authority schemes, offline/crash of
this authority makes all secret key requests unavailable during that period. A
similar problem exists in multi-authority schemes, since each of the multiple
authorities manages a disjoint attribute set.
ADVANTAGES:

• Public and private keys are generated for encryption and decryption.

• A chart based on the number of downloads can be viewed.

Module description:

Number of Modules:

After careful analysis, the system has been identified to have the following modules:
1. User Module
2. Owner Module
3. Attribute authority Module
4. Central authority Module
5. Chart Module

User Module:
The data consumer (User) is assigned a global user identity Uid by CA. The user
possesses a set of attributes and is equipped with a secret key associated with
his/her attribute set. The user can freely get any interested encrypted data from the
cloud server. However, the user can decrypt the encrypted data if and only if
his/her attribute set satisfies the access policy embedded in the encrypted data.
Owner module:
The data owner (Owner) defines the access policy about who can get access to
each file, and encrypts the file under the defined policy. First of all, each owner
encrypts his/her data with a symmetric encryption algorithm. Then, the owner
formulates access policy over an attribute set and encrypts the symmetric key
under the policy according to public keys obtained from CA. After that, the owner
sends the whole encrypted data and the encrypted symmetric key (denoted as
ciphertext CT) to the cloud server to be stored in the cloud.
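
This hybrid pattern can be sketched in Java as follows, using the standard crypto API for the symmetric part. The cpAbeEncrypt helper is a hypothetical placeholder for the pairing-based CP-ABE step, which no standard JDK API provides; the policy string and file contents are likewise illustrative.

import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch of the owner-side flow: encrypt the file with a fresh symmetric
// key, then encrypt that key under the access policy.
public class OwnerEncryptDemo {

    public static void main(String[] args) throws Exception {
        byte[] file = "patient record".getBytes(StandardCharsets.UTF_8);

        // 1. Encrypt the data itself with a symmetric algorithm (plain AES
        //    here; a real deployment should use an authenticated mode).
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey dataKey = kg.generateKey();
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, dataKey);
        byte[] encryptedFile = aes.doFinal(file);

        // 2. Encrypt the symmetric key under an access policy using CP-ABE
        //    public keys obtained from CA (stubbed below).
        String policy = "(doctor AND cardiology) OR admin";
        byte[] encryptedKey = cpAbeEncrypt(dataKey.getEncoded(), policy);

        // 3. The ciphertext CT (encrypted file plus encrypted symmetric key)
        //    is uploaded to the cloud server.
        System.out.println(encryptedFile.length + " byte file, "
                + encryptedKey.length + " byte key blob");
    }

    // Hypothetical placeholder for the pairing-based CP-ABE encryption step.
    static byte[] cpAbeEncrypt(byte[] symmetricKey, String policy) {
        return symmetricKey; // illustrative only; NOT real encryption
    }
}
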
Admin module:
Admin is a super user who can view all user and owner details. The admin can
view a chart based on the most frequently searched words, and can add related
words so that users can easily map related search terms, for example ambiguous
words such as "apple" (fruit & company) and "jaguar" (animal & company).
Attribute Authority module:
The attribute authorities (AAs) are responsible for
performing user legitimacy verification and generating intermediate keys for
legitimacy verified users. Unlike most of the existing multi-authority schemes
where each AA manages a disjoint attribute set respectively, our proposed scheme
involves multiple authorities to share the responsibility of user legitimacy
verification and each AA can perform this process for any user independently.
When an AA is selected, it will verify the users’ legitimate attributes by manual
labor or authentication protocols, and generate an intermediate key associated with
the attributes that it has legitimacy-verified. The intermediate key is a new
concept introduced to assist the CA in generating secret keys.
Central Authority module:
The central authority (CA) is the administrator of the entire system. It is
responsible for the system construction by setting up the system parameters and
generating public key for each attribute of the universal attribute set. In the system
initialization phase, it assigns each user a unique Uid and each attribute authority a
unique Aid. For a key request from a user, CA is responsible for generating secret
keys for the user on the basis of the received intermediate key associated with the
user’s legitimate attributes verified by an AA. As an administrator of the entire
system, CA has the capacity to trace which AA has incorrectly or maliciously
verified a user and has granted illegitimate attribute sets.

The cloud server provides a public platform for owners to store and share their
encrypted data. The cloud server does not conduct data access control for owners.
The encrypted data stored in the cloud server can be downloaded freely by any user.
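
The division of labor between the AAs and the CA sketched above can be illustrated in Java as follows. All class and field names here are invented, the key material is reduced to plain strings, and the auditing cryptography of the actual scheme is replaced by a simple log; only the message flow and the audit idea are shown.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;

// Invented record standing in for the scheme's intermediate key.
class IntermediateKey {
    final String uid;        // global user identity assigned by CA
    final String aid;        // identity of the AA that performed verification
    final Set<String> attrs; // attributes the AA has legitimacy-verified

    IntermediateKey(String uid, String aid, Set<String> attrs) {
        this.uid = uid;
        this.aid = aid;
        this.attrs = attrs;
    }
}

class AttributeAuthority {
    final String aid;

    AttributeAuthority(String aid) {
        this.aid = aid;
    }

    // Verify the user's claimed attributes (manual checks or authentication
    // protocols in practice) and emit an intermediate key for the verified set.
    IntermediateKey verify(String uid, Set<String> claimed) {
        Set<String> verified = new HashSet<>(claimed); // stub: accepts all claims
        return new IntermediateKey(uid, aid, verified);
    }
}

class CentralAuthority {
    // Audit log: which AA vouched for which attributes of which user.
    private final List<IntermediateKey> auditLog = new ArrayList<>();

    // Generate the final secret key from an AA's intermediate key.
    String issueSecretKey(IntermediateKey ik) {
        auditLog.add(ik); // retained so a misbehaving AA can be traced later
        return "SK(" + ik.uid + ", " + ik.attrs + ")"; // placeholder key
    }

    // Auditing: find the AA responsible for granting a given attribute.
    Optional<String> traceGrantor(String uid, String attribute) {
        for (IntermediateKey e : auditLog) {
            if (e.uid.equals(uid) && e.attrs.contains(attribute)) {
                return Optional.of(e.aid);
            }
        }
        return Optional.empty();
    }
}

Because every AA manages the whole attribute set, any available AA can serve a key request, while the CA's log lets it trace which AA approved a disputed attribute.
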
Chart module:
The chart module reports the number of file downloads per user, so the central
authority can easily find out which files are downloaded most.
System configuration:
HARDWARE REQUIREMENTS:
Hardware - Pentium
Speed - 1.1 GHz
RAM - 1GB
Hard Disk - 20 GB
Key Board - Standard Windows Keyboard
Mouse - Two or Three Button Mouse
Monitor - SVGA
SOFTWARE REQUIREMENTS:
Operating System : Windows
Technology : Java and J2EE
Web Technologies : Html, JavaScript, CSS
IDE : NetBeans
Web Server : Tomcat
Database : MySQL
Java Version : J2SDK1.8
Software Description
INTRODUCTION TO JAVA:

Java is a general-purpose computer programming language that is concurrent,


class-based, object-oriented, and specifically designed to have as few
implementation dependencies as possible. It is intended to let application
developers "write once, run anywhere" (WORA), meaning that compiled Java code
can run on all platforms that support Java without the need for recompilation. Java
applications are typically compiled to bytecode that can run on any Java virtual
machine (JVM) regardless of computer architecture. As of 2016, Java is one of the
most popular programming languages in use, particularly for client-server web
applications, with a reported 9 million developers. Java was originally developed
by James Gosling at Sun Microsystems (which has since been acquired by Oracle
Corporation) and released in 1995 as a core component of Sun Microsystems' Java
platform. The language derives much of its syntax from C and C++, but it has
fewer low-level facilities than either of them. The original and reference
implementation Java compilers, virtual machines, and class libraries were
originally released by Sun under proprietary licences. As of May 2007, in
compliance with the specifications of the Java Community Process, Sun relicensed
most of its Java technologies under the GNU General Public License. Others have
also developed alternative implementations of these Sun technologies, such as the
GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries),
and IcedTea-Web (browser plugin for applets). The latest version is Java 8, which
is the only version currently supported for free by Oracle, although earlier versions
are supported both by Oracle and other companies on a commercial basis.

INTRODUCTION TO JSP/SERVLET:

From Static to Dynamic WebPages

In the early days of the Web, most business Web pages were simply advertising
tools, with content providers offering customers useful manuals, brochures and
catalogues. In other words, these business web pages had static content and were
called "Static WebPages".

Later, the dot-com revolution enabled businesses to provide online services for
their customers; and instead of just viewing inventory, customers could also
purchase it. This new phase of the evolution created many new requirements; Web
sites had to be reliable, available, secure, and, if it was at all possible, fast. At this
time, the content of the web pages could be generated dynamically, and were
called "Dynamic WebPages".

The Static Web Pages contents are placed by the professional web developer
himself when developing the website; this is also called "design-time page
construction". Any changes needed have to be done by a web developer, this
makes Static WebPages expensive in their maintenance, especially when frequent
updates or changes are needed, for example in a news website.

On the other hand, dynamic Web pages have dynamically generated content that
depends on requests sent from the client's browser; this is called "constructed on
the fly". Updates and maintenance can be done by the client himself (doesn't need
professional web developers), this makes Dynamic WebPages considered to be the
best choice for web sites experiencing frequent changes.
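
For illustration, here is a minimal JSP page sketch of the kind this project uses: the content is generated afresh on every request ("constructed on the fly"), and the "name" parameter is hypothetical.

<%-- Minimal dynamic-page sketch: the response is generated per request,
     unlike a static page whose content is fixed at design time. --%>
<html>
<body>
<p>Hello, <%= request.getParameter("name") %>!</p>
<p>Server time: <%= new java.util.Date() %></p>
</body>
</html>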

Client-Side vs. Server-Side scripting

To generate dynamic content, scripting languages are needed. There are two types
of scripting languages:

1. Client-Side Scripting:

• It refers to the classes or programs embedded in a website and executed at
the client side by the client's browser. These scripts can be embedded in
HTML code or placed in a separate file and referred to by the document
using it. On request, these files are sent to the user's browser where they get
executed and the appropriate content is displayed.
• In client-side scripting, the user is able to view the source code (as it is
placed on the client's browser).
• Client-side scripts do not require additional software or an interpreter on the
server; however, they require that the user's web browser understands the
scripting language in which they are written. It is therefore important for the
developer to write scripts in a language that is supported by the web
browsers used by a majority of his or her users.
• Ideas of when to use client-side scripts:
  • Complementary form pre-processing (should not be relied upon!)
  • To get data about the user's screen or browser.
  • Online games.
  • Customizing the display (without reloading the page).
• Examples: JavaScript, VBScript

2. Server-Side Scripting:

• It refers to the technology in which the user's request runs a script, or a
program, on the web server, generating customized web content that is sent
to the client in a format understandable by web browsers (usually HTML).
• Users are not able to display the source code of these scripts (the scripts are
placed on the server); they can only display the customized HTML sent to
the browser.
• Server-side scripts require their language's interpreter to be installed on the
server, and produce the same output regardless of the client's browser,
operating system, or other system details.
• Server-side scripting, unlike client-side scripting, has access to databases,
which makes it more flexible and secure.
• Ideas for when to use server-side scripts:
  • Password protection.
  • Browser sniffing/customization.
  • Form processing.
  • Building and displaying pages created from a database.
• Examples: ASP, JSP, PHP, Perl

Introduction to JDBC:

Java Database Connectivity (JDBC) is an Application Programming Interface


(API) used to connect a Java application with a database. JDBC is used to interact
with various types of databases such as Oracle, MS Access, MySQL and SQL
Server.
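
As a minimal, hedged sketch of JDBC usage in this project's setting: the URL, database name and credentials below are placeholders, the attrireg table and column names are patterned after the sample code later in this report, and the MySQL driver jar must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal JDBC usage: connect, run a parameterized query, read rows.
public class JdbcDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/raacdb"; // placeholder DB
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = con.prepareStatement(
                     "select id, aaemailid from attrireg where id = ?")) {
            ps.setInt(1, 1); // bind the id parameter safely
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " "
                            + rs.getString("aaemailid"));
                }
            }
        } // try-with-resources closes the connection automatically
    }
}
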
Introduction of JavaScript

• JavaScript is an object-based scripting language.

• It gives the user more control over the browser.

• It handles dates and times.

• It detects the user's browser and OS.

• It is lightweight.

• JavaScript is a scripting language and it is not Java.

• JavaScript is an interpreter-based scripting language.

JAVA Virtual Machine

Machine language consists of very simple instructions that can be executed
directly by the CPU of a computer. Almost all
programs, though, are written in high-level programming languages such as Java,
Pascal, or C++. A program written in a high-level language cannot be run directly
on any computer. First, it has to be translated into machine language. This
translation can be done by a program called a compiler. A compiler takes a high-
level-language program and translates it into an executable machine-language
program. Once the translation is done, the machine-language program can be run
any number of times, but of course it can only be run on one type of computer
(since each type of computer has its own individual machine language). If the
program is to run on another type of computer it has to be re-translated, using a
different compiler, into the appropriate machine language.

There is an alternative to compiling a high-level language program. Instead of


using a compiler, which translates the program all at once, you can use an
interpreter, which translates it instruction-by-instruction, as necessary. An
interpreter is a program that acts much like a CPU, with a kind of fetch-and-
execute cycle. In order to execute a program, the interpreter runs in a loop in which
it repeatedly reads one instruction from the program, decides what is necessary to
carry out that instruction, and then performs the appropriate machine-language
commands to do so.
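
To make this cycle concrete, here is a toy Java sketch of an interpreter for an invented two-instruction stack machine; it is illustrative only and unrelated to any real machine language or to Java bytecode.

// Toy fetch-and-execute loop: read one instruction, decide what it
// means, and carry it out, exactly as described above.
public class TinyInterpreter {
    static final int PUSH = 1; // push the next program value onto the stack
    static final int ADD  = 2; // pop two values, push their sum

    public static void main(String[] args) {
        int[] program = {PUSH, 4, PUSH, 38, ADD};
        int[] stack = new int[16];
        int sp = 0; // stack pointer
        int pc = 0; // program counter

        while (pc < program.length) {
            int instruction = program[pc++]; // fetch
            switch (instruction) {           // decode and execute
                case PUSH:
                    stack[sp++] = program[pc++];
                    break;
                case ADD: {
                    int b = stack[--sp];
                    int a = stack[--sp];
                    stack[sp++] = a + b;
                    break;
                }
                default:
                    throw new IllegalStateException("unknown opcode");
            }
        }
        System.out.println(stack[sp - 1]); // prints 42
    }
}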

One use of interpreters is to execute high-level language programs. For example,


the programming language Lisp is usually executed by an interpreter rather than a
compiler. However, interpreters have another purpose: they can let you use a
machine-language program meant for one type of computer on a completely
different type of computer. For example, there is a program called “Virtual PC”
that runs on Mac OS computers. Virtual PC is an interpreter that executes
machine-language programs written for IBM-PC-clone computers. If you run
Virtual PC on your Mac OS, you can run any PC program, including programs
written for Windows. (Unfortunately, a PC program will run much more slowly
than it would on an actual IBM clone. The problem is that Virtual PC executes
several Mac OS machine-language instructions for each PC machine-language
instruction in the program it is interpreting. Compiled programs are inherently
faster than interpreted programs.)

The designers of Java chose to use a combination of compilation and


interpretation. Programs written in Java are compiled into machine language, but it
is a machine language for a computer that doesn’t really exist. This so-called
“virtual” computer is known as the Java Virtual Machine, or JVM. The machine
language for the Java Virtual Machine is called Java bytecode. There is no reason
why Java bytecode couldn’t be used as the machine language of a real computer,
rather than a virtual computer. But in fact the use of a virtual machine makes
possible one of the main selling points of Java: the fact that it can actually be used
on any computer. All that the computer needs is an interpreter for Java bytecode.
Such an interpreter simulates the JVM in the same way that Virtual PC simulates a
PC computer. (The term JVM is also used for the Java bytecode interpreter
program that does the simulation, so we say that a computer needs a JVM in order
to run Java programs. Technically, it would be more correct to say that the
interpreter implements the JVM than to say that it is a JVM.)

Of course, a different Java bytecode interpreter is needed for each type of


computer, but once a computer has a Java bytecode interpreter, it can run any Java
bytecode program. And the same Java bytecode program can be run on any
computer that has such an interpreter. This is one of the essential features of Java:
the same compiled program can be run on many different types of computers.

Why, you might wonder, use the intermediate Java bytecode at all? Why not just
distribute the original Java program and let each person compile it into the machine
language of whatever computer they want to run it on? There are many reasons.
First of all, a compiler has to understand Java, a complex high-level language. The
compiler is itself a complex program. A Java bytecode interpreter, on the other
hand, is a fairly small, simple program. This makes it easy to write a bytecode
interpreter for a new type of computer; once that is done, that computer can run
any compiled Java program. It would be much harder to write a Java compiler for
the same computer.

Furthermore, many Java programs are meant to be downloaded over a network.


This leads to obvious security concerns: you don’t want to download and run a
program that will damage your computer or your files. The bytecode interpreter
acts as a buffer between you and the program you download. You are really
running the interpreter, which runs the downloaded program indirectly. The
interpreter can protect you from potentially dangerous actions on the part of that
program.

When Java was still a new language, it was criticized for being slow: Since Java
bytecode was executed by an interpreter, it seemed that Java bytecode programs
could never run as quickly as programs compiled into native machine language
(that is, the actual machine language of the computer on which the program is
running). However, this problem has been largely overcome by the use of just-in-
time compilers for executing Java bytecode. A just-in-time compiler translates Java
bytecode into native machine language. It does this while it is executing the
program. Just as for a normal interpreter, the input to a just-in-time compiler is a
Java bytecode program, and its task is to execute that program. But as it is
executing the program, it also translates parts of it into machine language. The
translated parts of the program can then be executed much more quickly than they
could be interpreted. Since a given part of a program is often executed many times
as the program runs, a just-in-time compiler can significantly speed up the overall
execution time.

I should note that there is no necessary connection between Java and Java
bytecode. A program written in Java could certainly be compiled into the machine
language of a real computer. And programs written in other languages could be
compiled into Java bytecode. However, it is the combination of Java and Java
bytecode that is platform-independent, secure, and network-compatible while
allowing you to program in a modern high-level object-oriented
language. (In the past few years, it has become fairly common to create new
programming languages, or versions of old languages, that compile into Java
bytecode. The compiled bytecode programs can then be executed by a standard
JVM. New languages that have been developed specifically for programming the
JVM include Groovy, Clojure, and Processing. Jython and JRuby are versions of
older languages, Python and Ruby, that target the JVM. These languages make it
possible to enjoy many of the advantages of the JVM while avoiding some of the
technicalities of the Java language. In fact, the use of other languages with the
JVM has become important enough that several new features have been added to
the JVM in Java Version 7 specifically to add better support for some of those
languages.)


SYSTEM ANALYSIS
JAVA
The Java architecture consists of:
• A high-level object-oriented programming language,
• A platform-independent representation of a compiled class,
• A pre-defined set of run-time libraries,
• A virtual machine.
This section is mainly concerned with the language aspects of Java and the
associated java.lang library package. Consequently, the remainder of this section
provides a brief introduction to the language. Issues associated with the other
components will be introduced as and when needed in the relevant chapters. The
introduction is broken down into the following components:
• identifiers and primitive data types
• structured data types
• reference types
• blocks and exception handling
• control structures
• procedures and functions
• object-oriented programming, packages and classes
• inheritance
• interfaces
• inner classes
Identifiers and primitive data types
Identifiers
Java does not restrict the lengths of identifiers. Although the language
does allow the use of a “_” to be included in identifier names, the emerging style is
to use a mixture of upper and lower case characters. The following are example
identifiers:
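(Illustrative examples, since the original list is not reproduced here: totalCount, isEmpty, MAX_VALUE, readyQueue, _buffer.)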

FIGURE 1. Part of the Java Predefined Throwable Class Hierarchy

SQL Server 2005 Enterprise Edition

Enterprise Edition includes the complete set of SQL Server data management and
analysis features and is uniquely characterized by several features that make it the
most scalable and available edition of SQL Server 2005. It scales to the
performance levels required to support the largest Web sites, Enterprise Online
Transaction Processing (OLTP) systems and Data Warehousing systems. Its
support for failover clustering also makes it ideal for any mission critical line-of-
business application.

Top 10 Features of SQL Server 2005


1. T-SQL (Transaction SQL) enhancements
T-SQL is the native set-based RDBMS programming language offering high-
performance data access. It now incorporates many new features including error
handling via the TRY and CATCH paradigm, Common Table Expressions (CTE),
which return a record set in a statement, and the ability to shift columns to rows
and vice versa with the PIVOT and UNPIVOT commands.
2. CLR (Common Language Runtime)
The next major enhancement in SQL Server 2005 is the integration of a .NET
compliant language such as C#, ASP.NET or VB.NET to build objects (stored
procedures, triggers, functions, etc.). This enables you to execute .NET code in the
DBMS to take advantage of the .NET functionality. It is expected to replace
extended stored procedures in the SQL Server 2000 environment as well as expand
the traditional relational engine capabilities.
3. Service Broker
The Service Broker handles messaging between a sender and receiver in a loosely
coupled manner. A message is sent, processed and responded to, completing the
transaction. This greatly expands the capabilities of data-driven applications to
meet workflow or custom business needs.

4. Data encryption
SQL Server 2000 had no documented or publicly supported functions to encrypt
data in a table natively. Organizations had to rely on third-party products to
address this need. SQL Server 2005 has native capabilities to support encryption of
data stored in user-defined databases.

5. SMTP mail
Sending mail directly from SQL Server 2000 is possible, but challenging. With
SQL Server 2005, Microsoft incorporates SMTP mail to improve the native mail
capabilities. Say "see-ya" to Outlook on SQL Server!
6. HTTP endpoints
You can easily create HTTP endpoints via a simple T-SQL statement exposing an
object that can be accessed over the Internet. This allows a simple object to be
called across the Internet for the needed data.
7. Multiple Active Result Sets (MARS)
MARS allow a persistent database connection from a single client to have more
than one active request per connection. This should be a major performance
improvement, allowing developers to give users new capabilities when working
with SQL Server. For example, it allows multiple searches, or a search and data
entry. The bottom line is that one client connection can have multiple active
processes simultaneously.
8. Dedicated administrator connection
If all else fails, stop the SQL Server service or push the power button. That
mentality is finished with the dedicated administrator connection. This
functionality will allow a DBA to make a single diagnostic connection to SQL
Server even if the server is having an issue.
9. SQL Server Integration Services (SSIS)
SSIS has replaced DTS (Data Transformation Services) as the primary ETL
(Extraction, Transformation and Loading) tool and ships with SQL Server free of
charge. This tool, completely rewritten since SQL Server 2000, now has a great
deal of flexibility to address complex data movement.
10. Database mirroring
It's not expected to be released with SQL Server 2005 at the RTM in November,
but I think this feature has great potential. Database mirroring is an extension of
the native high-availability capabilities. So, stay tuned for more details….

INFORMATION SUPER HIGHWAY:


 
A set of computer networks, made up of a large number of smaller networks, using
different networking protocols. The world's largest computing network, consisting
of over two million computers supporting over 20 million users in almost 200
different countries. The Internet is growing at a phenomenal rate of between 10 and
15 percent, so any size estimates are quickly out of date.
 
The Internet was originally established to meet the research needs of the U.S.
defence industry. But it has grown into a huge global network serving universities,
academic researchers, commercial interests and government agencies, both in the
U.S. and overseas. The Internet uses TCP/IP protocols and many of the Internet
hosts run the Unix operating system.
SOFTWARE REQUIREMENT SPECIFICATION
A software requirements specification (SRS) is a complete description of the
behavior of the software to be developed. It includes a set of use cases that
describe all of the interactions that the users will have with the software. In
addition to use cases, the SRS contains functional requirements, which define the
internal workings of the software: that is, the calculations, technical details, data
manipulation and processing, and other specific functionality that shows how the
use cases are to be satisfied. It also contains nonfunctional requirements, which
impose constraints on the design or implementation (such as performance
requirements, quality standards or design constraints).

The SRS phase consists of two basic activities:


1) Problem/Requirement Analysis:
This process is the harder and more nebulous of the two; it deals with
understanding the problem, the goals and the constraints.
2) Requirement Specification:
Here, the focus is on specifying what has been found during analysis; issues such
as representation, specification languages and tools, and checking the
specifications are addressed during this activity.
The requirement phase terminates with the production of the validated SRS
document. Producing the SRS document is the basic goal of this phase.
Role of SRS:
The purpose of the Software Requirement Specification is to reduce the
communication gap between the clients and the developers. Software Requirement
Specification is the medium through which the client and user needs are accurately
specified. It forms the basis of software development. A good SRS should satisfy
all the parties involved in the system.
CHAPTER 4

SYSTEM DESIGN

UML DIAGRAM

UML stands for Unified Modeling Language. UML is a standardized


general-purpose modeling language in the field of object-oriented software
engineering. The standard is managed, and was created, by the Object
Management Group.
The goal is for UML to become a common language for creating models of
object-oriented computer software. In its current form UML comprises two
major components: a meta-model and a notation. In the future, some form of
method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying,
visualizing, constructing and documenting the artifacts of a software system, as
well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have
proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software
and the software development process. The UML uses mostly graphical notations
to express the design of software projects.

GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language so that
they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development
processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.

USE CASE DIAGRAM:

A use case diagram in the Unified Modeling Language (UML) is a type of


behavioral diagram defined by and created from a Use-case analysis. Its purpose is
to present a graphical overview of the functionality provided by a system in terms
of actors, their goals (represented as use cases), and any dependencies between
those use cases. The main purpose of a use case diagram is to show what system
functions are performed for which actor. Roles of the actors in the system can be
depicted.
Class Diagram

In software engineering, a class diagram in the Unified Modeling Language


(UML) is a type of static structure diagram that describes the structure of a system
by showing the system's classes, their attributes, operations (or methods), and the
relationships among the classes. It explains which class contains information.
Data Flow Diagram

1. The DFD is also called a bubble chart. It is a simple graphical formalism
that can be used to represent a system in terms of input data to the system,
various processing carried out on this data, and the output data generated
by this system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It
is used to model the system components. These components are the system
process, the data used by the process, an external entity that interacts with
the system and the information flows in the system.
3. DFD shows how the information moves through the system and how it is
modified by a series of transformations. It is a graphical technique that
depicts information flow and the transformations that are applied as data
moves from input to output.
DFD is also known as bubble chart. A DFD may be used to represent a system at
any level of abstraction. DFD may be partitioned into levels that represent
increasing information flow and functional detail.

DATA FLOW DIAGRAM

ER diagram:

An entity–relationship model (ER model for short) describes interrelated things


of interest in a specific domain of knowledge. A basic ER model is composed of
entity types (which classify the things of interest) and specifies relationships that
can exist between instances of those entity types.
Activity Diagram

Activity diagrams are graphical representations of workflows of stepwise


activities and actions with support for choice, iteration and concurrency. In the
Unified Modeling Language, activity diagrams are intended to model both
computational and organizational processes (i.e., workflows), as well as the data
flows intersecting with the related activities. Although activity diagrams primarily
show the overall flow of control, they can also include elements showing the flow
of data between activities through one or more data stores.
SEQUENCE DIAGRAM

A sequence diagram in Unified Modeling Language (UML) is a kind of


interaction diagram that shows how processes operate with one another and in
what order. It is a construct of a Message Sequence Chart. Sequence diagrams are
sometimes called event diagrams, event scenarios, and timing diagrams.
Data Flow Diagram / Use Case Diagram / Flow Diagram

The DFD is also called a bubble chart. It is a simple graphical formalism that can
be used to represent a system in terms of the input data to the system, various
processing carried out on these data, and the output data generated by the
system.

SYSTEM TESTING

The purpose of testing is to discover errors. Testing is the process of trying
to discover every conceivable fault or weakness in a work product. It provides a
way to check the functionality of components, subassemblies, assemblies and/or a
finished product. It is the process of exercising software with the intent of ensuring
that the software system meets its requirements and user expectations and does not
fail in an unacceptable manner. There are various types of test. Each test type
addresses a specific testing requirement.

TYPES OF TESTS

Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is the
testing of individual software units of the application; it is done after the
completion of an individual unit before integration. This is a structural testing that
relies on knowledge of its construction and is invasive. Unit tests perform basic
tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined
inputs and expected results.
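
As an illustrative sketch of such a unit test (assuming JUnit 4 on the classpath; the validator is a hypothetical helper, shown inline so the test is self-contained, mirroring the kind of format check this project tests):

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

// Illustrative JUnit 4 unit test for a single program path.
public class RegistrationValidatorTest {

    // Hypothetical unit under test: a simple e-mail format check.
    static boolean isValidEmail(String s) {
        return s != null && s.matches("[\\w.+-]+@[\\w-]+\\.[\\w.]+");
    }

    @Test
    public void acceptsWellFormedEmail() {
        assertTrue(isValidEmail("owner@example.com"));
    }

    @Test
    public void rejectsMalformedEmail() {
        assertFalse(isValidEmail("not-an-email"));
    }
}
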

SUCCESSFUL IMPORT OF CODE TO THE SYSTEM (ECLIPSE):

1. File
2. Import
3. Existing Project into Workspace
4. Browse
5. Import the location
6. Submit
7. Project imported successfully

Functional test

Functional tests provide systematic demonstrations that functions tested are


available as specified by the business and technical requirements, system
documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input : identified classes of valid input must be accepted.

Invalid Input : identified classes of invalid input must be rejected.

Functions : identified functions must be exercised.

Output : identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.


Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage pertaining to
identified business process flows, data fields, predefined processes, and successive
processes must be considered for testing. Before functional testing is complete,
additional tests are identified and the effective value of current tests is determined.

System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.

CHECKING OF SYSTEM SPECIFICATION AND SOFTWARE


IMPLEMENTATION
Unit Testing:

Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.

Test strategy and approach


Field testing will be performed manually and functional tests will be written
in detail.

• User Registration
• Admin
• File upload
• File download

Test objectives

• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.
Features to be tested

• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or
more integrated software components on a single platform to produce failures
caused by interface defects.

The task of the integration test is to check that components or software


applications, e.g. components in a software system or – one step up – software
applications at the company level – interact without error.

MySQL and Eclipse are connected through JDBC (Java Database Connectivity), with
the required driver JAR files added to the project's build path.
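The JSP pages in the Sample Coding section obtain their connection from a DB.Dbconnection helper whose source is not reproduced in this report. A minimal sketch of what such a helper might look like is given below; the database name (raacdb), user, and password are placeholders for the local MySQL installation:

package DB;

import java.sql.Connection;
import java.sql.DriverManager;

public class Dbconnection {

    // Returns a fresh JDBC connection to the project's MySQL database.
    public static Connection getConnection() throws Exception {
        // Register the MySQL Connector/J driver (the JAR on the build path).
        Class.forName("com.mysql.jdbc.Driver");
        return DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/raacdb", "root", "password");
    }
}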


Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets the
functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects
encountered.

Sample Coding
Activation page (oactivate.jsp)
<%--
Document : oactivate
Created on : Sep 27, 2017, 4:12:34 PM
Author : welcome
--%>

<%@page import="java.security.SecureRandom"%>
<%@page import="java.util.Random"%>
<%@page import="DB.mail1"%>
<%@page import="DB.Dbconnection"%>
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.Connection"%>
<%
    // The id of the attribute authority to activate arrives as the raw query string.
    String uid = request.getQueryString();
    Integer id = Integer.parseInt(uid);
    Connection con = null;
    Statement st = null;
    Statement st1 = null;
    ResultSet rs = null;
    try {
        con = Dbconnection.getConnection();
        st = con.createStatement();
        rs = st.executeQuery("select * from attrireg where id='" + id + "'");
        if (rs.next()) {
            // Build a 25-character activation key from the allowed character set.
            Random skey = new SecureRandom();
            int skeyl = 25;
            String auto = "abcABCDEdefghFGHIJKopLMNOPQqrsRSTUVWXYZtuvwxyz@#$%&*?";
            String key1 = "";
            for (int i = 0; i < skeyl; i++) {
                int index = (int) (skey.nextDouble() * auto.length());
                key1 += auto.substring(index, index + 1);
            }

            String emaild = rs.getString("aaemailid");
            System.out.println("mail" + emaild);

            // Mark the authority as activated and persist its key.
            st1 = con.createStatement();
            int i = st1.executeUpdate("update attrireg set aastatus='Yes',aakey='"
                    + key1 + "' where id='" + id + "'");
            if (i != 0) {
                // Mail the activation key to the attribute authority.
                mail1 m = new mail1();
                String kk = "AAs_Activation_key";
                m.secretMail(key1, kk, emaild);
                response.sendRedirect("centattri.jsp?msgg=success");
            } else {
                response.sendRedirect("centattri.jsp?nmsg=failed");
            }
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
%>
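Note that the scriptlet above assembles its SQL by string concatenation. As a hardening sketch (not part of the original code), the same SELECT and UPDATE can be written with PreparedStatement, which keeps user input out of the SQL text and guards against injection; this fragment assumes the same con, id, and key1 variables plus an additional java.sql.PreparedStatement page import:

// Parameterized replacements for the Statement-based queries above.
PreparedStatement select =
        con.prepareStatement("select * from attrireg where id = ?");
select.setInt(1, id);
ResultSet rs = select.executeQuery();

PreparedStatement update = con.prepareStatement(
        "update attrireg set aastatus = 'Yes', aakey = ? where id = ?");
update.setString(1, key1);
update.setInt(2, id);
int rows = update.executeUpdate();  // non-zero on successful activation

The markup that follows belongs to the central authority's owner-details page, which lists registered owners and links each record to the activation page.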

<%@page import="DB.Dbconnection"%>
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.Connection"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>RAAC_Robust and Auditable</title>
<link href='http://fonts.googleapis.com/css?family=Droid+Serif' rel='stylesheet'
type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Oswald' rel='stylesheet'
type='text/css'>
<link href="css/styles.css" rel="stylesheet" type="text/css" />
</head>
<body>
<%
if (request.getParameter("umsgg") != null) {%>
<script>alert('User verified successfully');</script>
<%}
%>
<%
if (request.getParameter("uumsgg") != null) {%>
<script>alert('User not verified ');</script>
<%}
%>

<div class="menu-wrap">
<center><h1 style="font-size: 35px;color:#e68019;font-family: Algerian">RAAC: Robust and Auditable Access Control with Multiple</h1></center>
<center><h1 style="font-size: 35px;color:#e68019;font-family: Algerian">Attribute Authorities for Public Cloud Storage</h1></center>
<div class="menu">
<ul>
<li><a href="index.jsp" class="active">HOME</a></li>
<li><a href="attruuserdet.jsp"> USER DETAIL</a></li>
<li><a href="attowner.jsp"> OWNER DETAIL</a></li>
<li><a href="attrifilerequest.jsp"> FILE REQUEST</a></li>
<li><a href="attrilogin.jsp"> LOGOUT</a></li>

</ul>
</div>
</div>
<!--------menu wrap ends--------->
<div class="clearing"></div>
<div class="header">

<div class="logo">

</div>

</div>
<!--------header wrap ends--------->
<div class="banner" style="background-image:url(images/vieww.jpg); ">
<h1>Central Authorities </h1>
<h2>OWNER DETAILS</h2>
</div>
<!--------banner end-------->
<style>
.textboxb {
width: 230px;
height: 30px;
border: 5px solid burlywood;
border-left: 12px solid burlywood;
}
.button {
background-color: #0B3B2E; /* Green */
border-radius: 8px;
color: white;
padding: 15px 32px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 16px;
}
</style>
<div class="page" style="background-color: #d9ffb3;">
<div class="primary-col">
<div class="generic">
<div class="panel" style="background-color:#d9ffb3; ">
<div style="border-style: dashed;border-width: 5px;border-color: darkmagenta;overflow: auto;width: 610px">
<div class="title">

<center><u><h2 style="margin-left: 50px;font-size: 38px;font-family: Colonna MT;color:#0B3B2E">Owner Details</h2></u></center>
<center>
<table border="1" style="margin-left: 30px;text-align: center">
<tr >
<th style="color:blueviolet;font-size: 20px">Owner ID</th>
<th style="color:blueviolet;font-size: 20px">Name</th>

<th style="color:blueviolet;font-size: 20px">Email</th>


<th style="color:blueviolet;font-size: 20px">Birth Day</th>
<th style="color:blueviolet;font-size: 20px">Gender</th>
<th style="color:blueviolet;font-size: 20px">Phone Number</th>
<th style="color:blueviolet;font-size: 20px">Location</th>
<th style="color:blueviolet;font-size: 20px">Status</th>
<th style="color:blueviolet;font-size: 20px">Action</th>

</tr>
<%
    Connection con = null;
    Statement st = null;
    ResultSet rs = null;
    try {
        con = Dbconnection.getConnection();
        st = con.createStatement();
        rs = st.executeQuery("select * from ownerreg");
        // One table row per registered owner; the last cell links to the
        // activation page with the owner's record id as the query string.
        while (rs.next()) {%>
<tr>
<td style="color:black;font-size: 15px"><%=rs.getString("owerid")%></td>
<td style="color:black;font-size: 15px"><%=rs.getString("ousernmae")%></td>
<td style="color:black;font-size: 15px"><%=rs.getString("oemailid")%></td>
<td style="color:black;font-size: 15px"><%=rs.getString("odateb")%></td>
<td style="color:black;font-size: 15px"><%=rs.getString("ogender")%></td>
<td style="color:black;font-size: 15px"><%=rs.getString("ophone")%></td>
<td style="color:black;font-size: 15px"><%=rs.getString("olocation")%></td>
<td style="color:black;font-size: 15px"><%=rs.getString("ostatus")%></td>
<td style="color:black;font-size: 15px"><a href="oactivate.jsp?<%=rs.getString("id")%>" style="color: lightcoral">Activate</a></td>
</tr>
<%      }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
%>
</table>
</center>

</div>
<div class="content">

</div>
</div>
</div>
</div>
<div class="block float-left mar-top30">

</div>

</div>
<!----primary end--->
<div class="side-bar">
<div class="search">

Sample Screenshots

Home
User Registration
User Login
Attribute Authority Registration
Central Authority Login
Attribute Authority verification and key sending
Attribute Authority Login
Attribute key checking
User activation key
User Login
User key checking
Owner Registration
Owner verification key sending
Owner Login
Key checking
File upload
User file request
Intermediate key generation and sending to CA
File key sent to user
CONCLUSION AND FUTURE WORK

In this paper, we proposed a new framework, named RAAC, to eliminate the
single-point performance bottleneck of the existing CP-ABE schemes. By
effectively reformulating the CP-ABE cryptographic technique into our novel
framework, our proposed scheme provides fine-grained, robust and efficient
access control with one CA and multiple AAs for public cloud storage. Our
scheme employs multiple AAs to share the load of the time-consuming legitimacy
verification and to stand by for serving new arrivals of users' requests. We
also proposed an auditing method to trace an attribute authority's potential
misbehavior. We conducted detailed security and performance analysis to verify
that our scheme is secure and efficient. The security analysis shows that our
scheme can effectively resist individual and colluding malicious users, as well
as honest-but-curious cloud servers. Besides, with the proposed auditing and
tracing scheme, no AA can deny its misbehaved key distribution. Further
performance analysis based on queuing theory showed the superiority of our
scheme over the traditional CP-ABE based access control schemes for public
cloud storage.
Future enhancement:

In the future, we will further develop our algorithm in the following aspects:
strengthening its resistance to individual and colluding malicious users as well
as honest-but-curious cloud servers, refining the auditing and tracing scheme so
that no AA can deny its misbehaved key distribution, and extending the
queuing-theory-based performance analysis against traditional CP-ABE based
access control schemes for public cloud storage.
References

[1] P. Mell and T. Grance, “The NIST definition of cloud computing,” National
Institute of Standards and Technology, Gaithersburg, 2011.

[2] Z. Fu, K. Ren, J. Shu, X. Sun, and F. Huang, “Enabling personalized search
over encrypted outsourced data with efficiency improvement,” IEEE Transactions
on Parallel & Distributed Systems, vol. 27, no. 9, pp. 2546–2559, 2016.

[3] Z. Fu, X. Sun, S. Ji, and G. Xie, “Towards efficient content-aware search
over encrypted outsourced data in cloud,” in Proceedings of 2016 IEEE Conference
on Computer Communications (INFOCOM 2016). IEEE, 2016, pp. 1–9.

[4] K. Xue and P. Hong, “A dynamic secure group sharing framework in public
cloud computing,” IEEE Transactions on Cloud Computing, vol. 2, no. 4, pp.
459–470, 2014.

[5] Y. Wu, Z. Wei, and H. Deng, “Attribute-based access to scalable media in
cloud-assisted content sharing,” IEEE Transactions on Multimedia, vol. 15,
no. 4, pp. 778–788, 2013.

[6] J. Hur, “Improving security and efficiency in attribute-based data sharing,”
IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 10, pp.
2271–2282, 2013.

[7] J. Hur and D. K. Noh, “Attribute-based access control with efficient
revocation in data outsourcing systems,” IEEE Transactions on Parallel and
Distributed Systems, vol. 22, no. 7, pp. 1214–1221, 2011.

[8] J. Hong, K. Xue, W. Li, and Y. Xue, “TAFC: Time and attribute factors
combined access control on time-sensitive data in public cloud,” in Proceedings
of 2015 IEEE Global Communications Conference (GLOBECOM 2015). IEEE, 2015,
pp. 1–6.

[9] Y. Xue, J. Hong, W. Li, K. Xue, and P. Hong, “LABAC: A location-aware
attribute-based access control scheme for cloud storage,” in Proceedings of 2016
IEEE Global Communications Conference (GLOBECOM 2016). IEEE, 2016, pp. 1–6.

[10] A. Lewko and B. Waters, “Decentralizing attribute-based encryption,” in
Advances in Cryptology–EUROCRYPT 2011. Springer, 2011, pp. 568–588.

[11] K. Yang, X. Jia, K. Ren, and B. Zhang, “DAC-MACS: Effective data access
control for multi-authority cloud storage systems,” in Proceedings of 2013 IEEE
Conference on Computer Communications (INFOCOM 2013). IEEE, 2013, pp. 2895–2903.

[12] J. Chen and H. Ma, “Efficient decentralized attribute-based access control
for cloud storage with user revocation,” in Proceedings of 2014 IEEE
International Conference on Communications (ICC 2014). IEEE, 2014, pp. 3782–3787.

[13] M. Chase and S. S. Chow, “Improving privacy and security in multi-authority
attribute-based encryption,” in Proceedings of the 16th ACM Conference on
Computer and Communications Security (CCS 2009). ACM, 2009, pp. 121–130.

[14] M. Lippert, E. G. Karatsiolis, A. Wiesmaier, and J. A. Buchmann, “Directory
based registration in public key infrastructures,” in Proceedings of the 4th
International Workshop for Applied PKI (IWAP 2005), 2005, pp. 17–32.

[15] W. Li, K. Xue, Y. Xue, and J. Hong, “TMACS: A robust and verifiable
threshold multi-authority access control system in public cloud storage,” IEEE
Transactions on Parallel & Distributed Systems, vol. 27, no. 5, pp. 1484–1496,
2016.
