RAAC: Robust and Auditable Access Control With Multiple Attribute Authorities for Public Cloud Storage
ABSTRACT:
CHAPTER 1
INTRODUCTION
To address the issue of data access control in cloud storage, there have been quite a
few schemes proposed, among which Ciphertext-Policy Attribute-Based
Encryption (CP-ABE) is regarded as one of the most promising techniques. A
salient feature of CP-ABE is that it grants data owners direct control power based
on access policies, to provide flexible, fine-grained and secure access control for
cloud storage systems.
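To make the idea of policy-based access concrete, the following sketch (illustrative only; the real CP-ABE construction enforces the policy cryptographically inside the ciphertext) models how an access policy over attributes can be evaluated against a user's attribute set. The class and attribute names are hypothetical.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: an AND-style access policy over attributes.
// Real CP-ABE enforces the policy inside the ciphertext; here we only
// model the "attribute set satisfies policy" check.
public class AccessPolicyDemo {

    // Policy: the user must hold ALL of the listed attributes.
    static boolean satisfiesAndPolicy(Set<String> userAttributes, Set<String> required) {
        return userAttributes.containsAll(required);
    }

    public static void main(String[] args) {
        Set<String> policy = new HashSet<>(Arrays.asList("doctor", "cardiology"));
        Set<String> alice = new HashSet<>(Arrays.asList("doctor", "cardiology", "hospital-A"));
        Set<String> bob = new HashSet<>(Arrays.asList("nurse", "cardiology"));

        System.out.println("Alice may decrypt: " + satisfiesAndPolicy(alice, policy)); // true
        System.out.println("Bob may decrypt:   " + satisfiesAndPolicy(bob, policy));   // false
    }
}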
Although existing CP-ABE access control schemes have many attractive features, they are neither robust nor efficient in key generation. Since there is only one authority in charge of all attributes in single-authority schemes, if this authority goes offline or crashes, all secret key requests are unavailable during that period. A similar problem exists in multi-authority schemes, since each of the multiple authorities manages a disjoint attribute set.
In single-authority schemes, the only authority must verify the legitimacy of users' attributes before generating secret keys for them. As the access control system is associated with data security, and the only credential a user possesses is his/her secret key associated with his/her attributes, the process of key issuing must be cautious. However, in the real world, attributes are diverse. For example, to verify whether a user is able to drive, an authority may need to give him/her a test to prove that he/she can drive; only then can he/she get an attribute key associated with driving ability [13]. To deal with the verification of various attributes, the user may be required to be present to confirm them. Furthermore, the process of verifying/assigning attributes to users is usually so difficult that it normally requires administrators to handle the verification manually; as [14] has mentioned, the authenticity of registered data must be achieved by out-of-band (mostly manual) means. To make a careful decision, the unavoidable participation of human beings makes the verification time-consuming, which causes a single-point bottleneck. Especially in a large system, there are always large numbers of users requesting secret keys. The inefficiency of the authority's service results in a single-point performance bottleneck, which causes system congestion: users often cannot obtain their secret keys quickly and have to wait in the system queue. This significantly reduces users' satisfaction when they expect real-time services. On the other hand, if there is only one authority that issues secret keys for some particular attributes, and if the verification requires users' presence, it brings about another type of long service delay for users, since the authority may be too far away from his/her home/workplace. As a result, the single-point performance bottleneck affects the efficiency of the secret key generation service and immensely degrades the utility of the existing schemes for conducting access control in large cloud storage systems. Furthermore, in multi-authority schemes, the same problem also exists, because multiple authorities separately maintain disjoint attribute subsets and issue secret keys associated with users' attributes within their own administration domains. Each authority performs the verification and secret key generation as a whole in the secret key distribution process, just as the single authority does in single-authority schemes. Therefore, the single-point performance bottleneck still exists in such multi-authority schemes.
A straightforward idea to remove this bottleneck is to deploy multiple functionally identical authorities that share the load of verification and key generation. However, this solution brings forth threats on security. Since there are multiple functionally identical authorities performing the same procedure, it is hard to find the responsible authority if mistakes have been made or malicious behaviors have occurred in the process of secret key generation and distribution. For example, an authority may falsely distribute secret keys beyond a user's legitimate attribute set. Such a weak point on security makes this straightforward idea unable to meet the security requirement of access control for public cloud storage. Our recent work, TMACS [15], is a threshold multi-authority CP-ABE access control scheme for public cloud storage, in which multiple authorities jointly manage a uniform attribute set. It addresses the single-point bottleneck of both performance and security, but introduces some additional overhead. Therefore, in this paper, we present a feasible solution which not only improves efficiency and robustness, but also guarantees that the new solution is as secure as the original single-authority schemes.
CHAPTER 2
LITERATURE REVIEW
Abstract: With the popularity of group data sharing in public cloud computing,
the privacy and security of group sharing data have become two major issues. The
cloud provider cannot be treated as a trusted third party because of its semi-trust
nature, and thus the traditional security models cannot be straightforwardly
generalized into cloud based group sharing frameworks. In this paper, we propose
a novel secure group sharing framework for public cloud, which can effectively
take advantage of the cloud servers' help but have no sensitive data being exposed
to attackers and the cloud provider. The framework combines proxy signature,
enhanced TGDH and proxy re-encryption together into a protocol. By applying the
proxy signature technique, the group leader can effectively grant the privilege of
group management to one or more chosen group members. The enhanced TGDH
scheme enables the group to negotiate and update the group key pairs with the help
of cloud servers, which does not require all of the group members to be online all
the time. By adopting proxy re-encryption, most computationally intensive
operations can be delegated to cloud servers without disclosing any private
information. Extensive security and performance analysis shows that our proposed
scheme is highly efficient and satisfies the security requirements for public cloud
based secure group sharing.
Architecture diagram:
CHAPTER 3
EXISTING SYSTEM:
Cloud computing is Internet-based computing which enables sharing of services. Many users place their data in the cloud. However, the fact that users no longer have physical possession of the possibly large amount of outsourced data makes data integrity protection in cloud computing a very challenging and potentially formidable task, especially for users with constrained computing resources and capabilities. So the correctness and security of data is a prime concern.
This article studies the problem of ensuring the integrity and security of data
storage in Cloud Computing. Security in cloud is achieved by signing the data
block before sending to the cloud. Using Cloud Storage, users can remotely store
their data and enjoy the on-demand high quality applications and services from a
shared pool of configurable computing resources, without the burden of local data
storage and maintenance. However, the fact that users no longer have physical
possession of the outsourced data makes the data integrity protection in Cloud
Computing a formidable task, especially for users with constrained computing
resources. Moreover, users should be able to just use the cloud storage as if it is
local, without worrying about the need to verify its integrity. Thus, enabling public
auditability for cloud storage is of critical importance so that users can resort to a
third party auditor (TPA) to check the integrity of outsourced data and be worry-
free. To securely introduce an effective TPA, the auditing process should bring in
no new vulnerabilities towards user data privacy, and introduce no additional
online burden to user. In this paper, we propose a secure cloud storage system
supporting privacy-preserving public auditing. We further extend our result to
enable the TPA to perform audits for multiple users simultaneously and efficiently.
Extensive security and performance analysis show the proposed schemes are
provably secure and highly efficient.
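As a rough illustration of "signing the data block before sending to the cloud", the following sketch uses the standard java.security API to sign a block with an RSA private key so that a verifier holding the public key (for example, a TPA) can later check its integrity. This is a generic digital-signature example, not the privacy-preserving auditing protocol itself.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class BlockSigningDemo {
    public static void main(String[] args) throws Exception {
        // Owner's key pair (in a real system this would be persisted securely)
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        byte[] block = "data block #1 contents".getBytes(StandardCharsets.UTF_8);

        // Owner signs the block before uploading it to the cloud
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(block);
        byte[] signature = signer.sign();

        // A TPA holding the owner's public key can verify the stored block later
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(block);
        System.out.println("Block intact: " + verifier.verify(signature));
    }
}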
Disadvantage:
Dynamic data operations, especially block insertion, are not supported in most existing schemes.
PROPOSED SYSTEM:
• Public and private keys are generated for the encryption and decryption work.
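A minimal sketch of this step, using standard RSA from the JDK as a stand-in for the scheme's attribute-based keys: generate a public/private key pair, encrypt with the public key, and decrypt with the private key.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class KeyPairDemo {
    public static void main(String[] args) throws Exception {
        // Generate a public/private key pair
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        // Encrypt with the public key
        Cipher enc = Cipher.getInstance("RSA");
        enc.init(Cipher.ENCRYPT_MODE, kp.getPublic());
        byte[] ciphertext = enc.doFinal("secret".getBytes(StandardCharsets.UTF_8));

        // Decrypt with the private key
        Cipher dec = Cipher.getInstance("RSA");
        dec.init(Cipher.DECRYPT_MODE, kp.getPrivate());
        System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}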
Module description:
Number of Modules:
After careful analysis the system has been identified to have the following:
Modules:
1. User Module
2. Owner Module
3. Attribute authority Module
4. Central authority Module
5. Chart Module
User Module:
The data consumer (User) is assigned a global user identity Uid by CA. The user
possesses a set of attributes and is equipped with a secret key associated with
his/her attribute set. The user can freely get any interested encrypted data from the
cloud server. However, the user can decrypt the encrypted data if and only if
his/her attribute set satisfies the access policy embedded in the encrypted data.
Owner module:
The data owner (Owner) defines the access policy about who can get access to
each file, and encrypts the file under the defined policy. First of all, each owner
encrypts his/her data with a symmetric encryption algorithm. Then, the owner
formulates access policy over an attribute set and encrypts the symmetric key
under the policy according to public keys obtained from CA. After that, the owner
sends the whole encrypted data and the encrypted symmetric key (denoted as
ciphertext CT) to the cloud server to be stored in the cloud.
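A minimal sketch of this hybrid-encryption step is given below, assuming AES for the file contents; the attribute-based encryption of the symmetric key under the access policy is left as a placeholder comment, since it requires a CP-ABE library.

import java.nio.charset.StandardCharsets;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class OwnerEncryptDemo {
    public static void main(String[] args) throws Exception {
        // 1. Encrypt the file with a fresh symmetric (AES) key
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey fileKey = kg.generateKey();

        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, fileKey);
        byte[] encryptedFile = aes.doFinal("file contents...".getBytes(StandardCharsets.UTF_8));

        // 2. Encrypt fileKey under the access policy using the CP-ABE public keys
        //    obtained from the CA (placeholder -- requires a CP-ABE library).
        // byte[] encryptedKey = cpabe.encrypt(fileKey.getEncoded(), "doctor AND cardiology", caPublicKeys);

        // 3. Upload (encryptedFile, encryptedKey) as the ciphertext CT to the cloud server.
        System.out.println("Encrypted file length: " + encryptedFile.length);
    }
}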
Admin module:
Admin is a super user who can view all user and owner details. The admin can view a chart based on the most frequently searched words, and can add related words so that users can easily map related words. For example, ambiguity level 2 refers to instances that most people regard as ambiguous. Such instances carry two or more unrelated senses, such as "apple" (fruit & company) and "jaguar" (animal & company). In this work, we only focus on disambiguation of such instances.
Attribute Authority module:
The attribute authorities (AAs) are responsible for
performing user legitimacy verification and generating intermediate keys for
legitimacy verified users. Unlike most of the existing multi-authority schemes
where each AA manages a disjoint attribute set respectively, our proposed scheme
involves multiple authorities to share the responsibility of user legitimacy
verification and each AA can perform this process for any user independently.
When an AA is selected, it verifies the user's legitimate attributes by manual labor or authentication protocols, and generates an intermediate key associated with the attributes that it has legitimacy-verified. The intermediate key is a new concept introduced to assist the CA in generating secret keys.
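The following Java sketch shows one plausible, simplified shape of such an intermediate key (an assumption for illustration, not the actual construction): the AA binds the user id, its own Aid and the verified attribute set with an HMAC under a key shared with the CA, so the CA can derive the user's secret key and later trace which AA performed the verification. All names are hypothetical.

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class IntermediateKeyDemo {
    // AA side: produce an intermediate key (token) for the verified attributes
    static String intermediateKey(String uid, String aid, String verifiedAttrs,
                                  byte[] aaCaSharedKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(aaCaSharedKey, "HmacSHA256"));
        String payload = uid + "|" + aid + "|" + verifiedAttrs;
        byte[] tag = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        // The CA later recomputes the tag to check which AA verified which attributes
        return payload + "|" + Base64.getEncoder().encodeToString(tag);
    }

    public static void main(String[] args) throws Exception {
        byte[] shared = "demo-shared-key-between-AA1-and-CA".getBytes(StandardCharsets.UTF_8);
        System.out.println(intermediateKey("Uid42", "Aid1", "doctor,cardiology", shared));
    }
}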
Central Authority module:
The central authority (CA) is the administrator of the entire system. It is
responsible for the system construction by setting up the system parameters and
generating public key for each attribute of the universal attribute set. In the system
initialization phase, it assigns each user a unique Uid and each attribute authority a
unique Aid. For a key request from a user, CA is responsible for generating secret
keys for the user on the basis of the received intermediate key associated with the
user's legitimate attributes verified by an AA. As the administrator of the entire system, the CA has the capacity to trace which AA has incorrectly or maliciously verified a user and has granted an illegitimate attribute set.
The cloud server provides a public platform for owners to store and share their encrypted data. The cloud server does not conduct data access control for owners. The encrypted data stored in the cloud server can be downloaded freely by any user.
Chart module:
The chart module builds a chart based on the number of file downloads per file and per user, so the central authority can easily find out which files are downloaded most.
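Assuming a download-log table (a hypothetical table named downloadlog with columns filename and uid; the actual schema may differ), a simple JDBC query like the following could produce the per-file download counts that the chart is built from.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class ChartQueryDemo {
    public static void printDownloadCounts(Connection con) throws Exception {
        Statement st = con.createStatement();
        // Count downloads per file, most downloaded first (table/columns are assumed)
        ResultSet rs = st.executeQuery(
            "select filename, count(*) as downloads from downloadlog " +
            "group by filename order by downloads desc");
        while (rs.next()) {
            System.out.println(rs.getString("filename") + " : " + rs.getInt("downloads"));
        }
    }
}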
2.5 System configuration:
HARDWARE REQUIREMENTS:
Hardware - Pentium
Speed - 1.1 GHz
RAM - 1GB
Hard Disk - 20 GB
Key Board - Standard Windows Keyboard
Mouse - Two or Three Button Mouse
Monitor - SVGA
SOFTWARE REQUIREMENTS:
Operating System : Windows
Technology : Java and J2EE
Web Technologies : Html, JavaScript, CSS
IDE : NetBeans
Web Server : Tomcat
Database : MySQL
Java Version : J2SDK1.8
Algorithm
Software Description
INTRODUCTION TO JAVA:
INTRODUCTION TO JSP/SERVLET:
In the early days of the Web, most business Web pages were simply forms of
advertising tools and content providers offering customers useful manuals,
brochures and catalogues. In other words, these business-web-pages had static
content and were called "Static WebPages".
Later, the dot-com revolution enabled businesses to provide online services for
their customers; and instead of just viewing inventory, customers could also
purchase it. This new phase of the evolution created many new requirements; Web
sites had to be reliable, available, secure, and, if it was at all possible, fast. At this
time, the content of the web pages could be generated dynamically, and were
called "Dynamic WebPages".
The Static Web Pages contents are placed by the professional web developer
himself when developing the website; this is also called "design-time page
construction". Any changes needed have to be done by a web developer, this
makes Static WebPages expensive in their maintenance, especially when frequent
updates or changes are needed, for example in a news website.
On the other hand, dynamic Web pages have dynamically generated content that
depends on requests sent from the client's browser; this is called "constructed on
the fly". Updates and maintenance can be done by the client himself (doesn't need
professional web developers), this makes Dynamic WebPages considered to be the
best choice for web sites experiencing frequent changes.
To generate dynamic content, scripting languages are needed. There are two types
of scripting languages:
1. Client-Side Scripting:
2. Server-Side Scripting:
Introduction to JDBC:
JDBC (Java Database Connectivity) is a lightweight Java API that allows Java programs to connect to relational databases, execute SQL statements, and process the results.
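Below is a minimal sketch in the spirit of the Dbconnection helper used later in the sample code; the driver class, URL, user and password are placeholders and must match the actual MySQL installation.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Dbconnection {
    public static Connection getConnection() throws Exception {
        // Placeholder driver/URL/credentials -- adjust to the real database
        Class.forName("com.mysql.jdbc.Driver");
        return DriverManager.getConnection("jdbc:mysql://localhost:3306/raac", "root", "password");
    }

    public static void main(String[] args) throws Exception {
        try (Connection con = getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("select 1")) {
            if (rs.next()) {
                System.out.println("Database reachable: " + rs.getInt(1));
            }
        }
    }
}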
The Java Virtual Machine
Machine language consists of very simple instructions that can be executed directly by the CPU of a computer. Almost all
programs, though, are written in high-level programming languages such as Java,
Pascal, or C++. A program written in a high-level language cannot be run directly
on any computer. First, it has to be translated into machine language. This
translation can be done by a program called a compiler. A compiler takes a high-
level-language program and translates it into an executable machine-language
program. Once the translation is done, the machine-language program can be run
any number of times, but of course it can only be run on one type of computer
(since each type of computer has its own individual machine language). If the
program is to run on another type of computer it has to be re-translated, using a
different compiler, into the appropriate machine language.
Why, you might wonder, use the intermediate Java bytecode at all? Why not just
distribute the original Java program and let each person compile it into the machine
language of whatever computer they want to run it on? There are many reasons.
First of all, a compiler has to understand Java, a complex high-level language. The
compiler is itself a complex program. A Java bytecode interpreter, on the other
hand, is a fairly small, simple program. This makes it easy to write a bytecode
interpreter for a new type of computer; once that is done, that computer can run
any compiled Java program. It would be much harder to write a Java compiler for
the same computer.
When Java was still a new language, it was criticized for being slow: Since Java
bytecode was executed by an interpreter, it seemed that Java bytecode programs
could never run as quickly as programs compiled into native machine language
(that is, the actual machine language of the computer on which the program is
running). However, this problem has been largely overcome by the use of just-in-
time compilers for executing Java bytecode. A just-in-time compiler translates Java
bytecode into native machine language. It does this while it is executing the
program. Just as for a normal interpreter, the input to a just-in-time compiler is a
Java bytecode program, and its task is to execute that program. But as it is
executing the program, it also translates parts of it into machine language. The
translated parts of the program can then be executed much more quickly than they
could be interpreted. Since a given part of a program is often executed many times
as the program runs, a just-in-time compiler can significantly speed up the overall
execution time.
I should note that there is no necessary connection between Java and Java
bytecode. A program written in Java could certainly be compiled into the machine
language of a real computer. And programs written in other languages could be
compiled into Java bytecode. However, it is the combination of Java and Java
bytecode that is platform-independent, secure, and network-compatible while
allowing you to program in a modern high-level object-oriented language.
Sensor networks are used in many application domains, such as cyber-physical infrastructure, environmental monitoring, weather monitoring, and power grids. Data are produced at a large number of sensor node sources and processed in-network on their way to a Base Station (BS) that performs decision making. The trustworthiness of the information matters in the decision process. Data provenance is an effective method to assess data trustworthiness, since it records the history of the data and the actions performed on them. However, provenance in sensor networks has not been properly addressed. We investigate the problem of secure and efficient provenance transmission and handling for sensor networks, and we use provenance to detect packet-loss attacks staged by malicious sensor nodes. In a multi-hop sensor network, the purpose of data provenance is to allow the Base Station to trace the source and forwarding path of a specific data packet. Provenance must be recorded for each and every packet, but important challenges arise, first of all from the tight storage, energy and bandwidth constraints of sensor nodes. It is therefore necessary to devise a light-weight provenance solution with low overhead. Sensors also operate in untrusted environments, where they may be subject to attacks; that is why it is necessary to address security requirements such as privacy, integrity and freshness of provenance. Our goal is to design a provenance encoding and decoding mechanism that satisfies such security and performance needs. We propose a provenance encoding strategy in which each node on the path of a data packet securely embeds provenance information within a Bloom filter that is conveyed along with the data. On receiving the packet, the Base Station extracts and verifies the provenance information. The provenance encoding scheme also allows the Base Station to detect whether a packet-drop attack was staged by a malicious node. We use fast Message Authentication Codes and Bloom filters (BF), which are fixed-size data structures that efficiently represent provenance.
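As a rough illustration of such a fixed-size Bloom filter (not the exact provenance encoding of the scheme), the following Java sketch inserts node identifiers into a small bit set using two simple hash functions and then tests membership.

import java.util.BitSet;

public class BloomFilterDemo {
    private final BitSet bits;
    private final int size;

    BloomFilterDemo(int size) {
        this.size = size;
        this.bits = new BitSet(size);
    }

    // Two simple hash functions derived from hashCode (illustrative only)
    private int h1(String s) { return Math.floorMod(s.hashCode(), size); }
    private int h2(String s) { return Math.floorMod(31 * s.hashCode() + 17, size); }

    void add(String item) { bits.set(h1(item)); bits.set(h2(item)); }
    boolean mightContain(String item) { return bits.get(h1(item)) && bits.get(h2(item)); }

    public static void main(String[] args) {
        BloomFilterDemo provenance = new BloomFilterDemo(64);
        provenance.add("node-7");
        provenance.add("node-12");
        System.out.println(provenance.mightContain("node-7"));  // true
        System.out.println(provenance.mightContain("node-99")); // probably false (false positives possible)
    }
}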
Recent developments in micro-sensor technology and low-power analog and digital electronics have led to the development of distributed wireless networks of sensor devices. Sensor networks of the future are intended to consist of hundreds of cheap nodes that can be readily deployed in physical environments to collect useful information. Our motivation builds on the subset of distributed networking applications that use packet-header-size Bloom filters to share some state between network nodes. The specific state carried in the Bloom filter differs from application to application, ranging from secure credentials to IP prefixes and link identifiers, with the shared requirement of a fixed-size packet-header data structure to verify set memberships efficiently. Bloom filters make effective use of bandwidth, and they yield low error rates in practice. Our specific contributions are:
SYSTEM ANALYSIS
JAVA
The Java architecture consists of:
• A high-level object-oriented programming language,
• A platform-independent representation of a compiled class,
• A pre-defined set of run-time libraries,
• A virtual machine.
This book is mainly concerned with the language aspects of Java and the
associated java.lang library package. Consequently, the remainder of this section
provides a brief introduction to the language. Issues associated with the other
components will be introduced as and when needed in the relevant chapters. The
introduction is broken down into the following components:
• Identifiers and primitive data types
• Structured data types
• Reference types
• Blocks and exception handling
• Control structures
• Procedures and functions
• Object-oriented programming, packages and classes
• Inheritance
• Interfaces
• Inner classes.
Identifiers and primitive data types
Identifiers
Java does not restrict the length of identifiers. Although the language does allow the use of "_" in identifier names, the emerging style is to use a mixture of upper- and lower-case characters. The following are example identifiers:
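The original example identifiers are not reproduced here; the following are typical identifiers written in the mixed-case style described above (illustrative only).

// Illustrative identifiers in the conventional mixed-case Java style
public class IdentifierExamples {
    int studentCount;                  // field
    double averageScore;               // field
    boolean isKeyVerified;             // field
    static final int MAX_USERS = 100;  // constant, upper case by convention

    void computeAverageScore() { }     // method
}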
Enterprise Edition includes the complete set of SQL Server data management and
analysis features and is uniquely characterized by several features that make it the
most scalable and available edition of SQL Server 2005. It scales to the
performance levels required to support the largest Web sites, Enterprise Online
Transaction Processing (OLTP) systems and Data Warehousing systems. Its
support for failover clustering also makes it ideal for any mission critical line-of-
business application.
4. Data encryption
SQL Server 2000 had no documented or publicly supported functions to encrypt
data in a table natively. Organizations had to rely on third-party products to
address this need. SQL Server 2005 has native capabilities to support encryption of
data stored in user-defined databases.
5. SMTP mail
Sending mail directly from SQL Server 2000 is possible, but challenging. With
SQL Server 2005, Microsoft incorporates SMTP mail to improve the native mail
capabilities. Say "see-ya" to Outlook on SQL Server!
6. HTTP endpoints
You can easily create HTTP endpoints via a simple T-SQL statement exposing an
object that can be accessed over the Internet. This allows a simple object to be
called across the Internet for the needed data.
7. Multiple Active Result Sets (MARS)
MARS allow a persistent database connection from a single client to have more
than one active request per connection. This should be a major performance
improvement, allowing developers to give users new capabilities when working
with SQL Server. For example, it allows multiple searches, or a search and data
entry. The bottom line is that one client connection can have multiple active
processes simultaneously.
8. Dedicated administrator connection
If all else fails, stop the SQL Server service or push the power button. That
mentality is finished with the dedicated administrator connection. This
functionality will allow a DBA to make a single diagnostic connection to SQL
Server even if the server is having an issue.
9. SQL Server Integration Services (SSIS)
SSIS has replaced DTS (Data Transformation Services) as the primary ETL
(Extraction, Transformation and Loading) tool and ships with SQL Server free of
charge. This tool, completely rewritten since SQL Server 2000, now has a great
deal of flexibility to address complex data movement.
10. Database mirroring
It's not expected to be released with SQL Server 2005 at the RTM in November,
but I think this feature has great potential. Database mirroring is an extension of
the native high-availability capabilities. So, stay tuned for more details….
SYSTEM DESIGN
UML DIAGRAM
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development
process.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.
ER diagram:
DFD (Data Flow Diagram):
The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test. Each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application. It is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic
tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined
inputs and expected results.
Unit test items: File, Import, Browse, Submit.
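As an illustration of component-level unit testing, the following JUnit 4 sketch tests a small, hypothetical key-generation helper modelled on the random-key logic in the sample code (JUnit 4 is assumed to be on the classpath).

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.security.SecureRandom;
import org.junit.Test;

public class KeyGeneratorTest {
    // Hypothetical helper mirroring the random-key logic used in the activation page
    static String generateKey(int length) {
        String alphabet = "abcABCDEdefghFGHIJKopLMNOPQqrsRSTUVWXYZtuvwxyz@#$%&*?";
        SecureRandom rnd = new SecureRandom();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < length; i++) {
            sb.append(alphabet.charAt(rnd.nextInt(alphabet.length())));
        }
        return sb.toString();
    }

    @Test
    public void keyHasRequestedLength() {
        assertEquals(25, generateKey(25).length());
    }

    @Test
    public void keyUsesOnlyAllowedCharacters() {
        String alphabet = "abcABCDEdefghFGHIJKopLMNOPQqrsRSTUVWXYZtuvwxyz@#$%&*?";
        for (char c : generateKey(25).toCharArray()) {
            assertTrue(alphabet.indexOf(c) >= 0);
        }
    }
}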
Functional test
System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.
Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.
Test objectives
• User Registration
• Admin
• File upload
• File download
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets the
functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
Sample Coding
Activity
<%--
Document : oactivate
Created on : Sep 27, 2017, 4:12:34 PM
Author : welcome
--%>
<%@page import="java.security.SecureRandom"%>
<%@page import="java.util.Random"%>
<%@page import="DB.mail1"%>
<%@page import="DB.Dbconnection"%>
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.Connection"%>
<%
String uid = request.getQueryString();
Integer id=Integer.parseInt(uid);
Connection con = null;
Statement st = null;
Statement st1 = null;
ResultSet rs = null;
try {
    // Look up the attribute authority record that is being activated
    con = Dbconnection.getConnection();
    st = con.createStatement();
    rs = st.executeQuery("select * from attrireg where id='" + id + "'");
    if (rs.next()) {
        // Generate a 25-character random activation key
        Random skey = new SecureRandom();
        int skeyl = 25;
        String auto = "abcABCDEdefghFGHIJKopLMNOPQqrsRSTUVWXYZtuvwxyz@#$%&*?";
        String key1 = "";
        for (int i = 0; i < skeyl; i++) {
            int index = (int) (skey.nextDouble() * auto.length());
            key1 += auto.substring(index, index + 1);
        }
        String emaild = rs.getString("aaemailid");
        System.out.println("mail" + emaild);
        // Mark the attribute authority as activated and store its key
        st1 = con.createStatement();
        int i = st1.executeUpdate("update attrireg set aastatus='Yes',aakey='" + key1 + "' where id='" + id + "'");
        if (i != 0) {
            // Mail the activation key to the attribute authority
            mail1 m = new mail1();
            String kk = "AAs_Activaction_key";
            m.secretMail(key1, kk, emaild);
            response.sendRedirect("centattri.jsp?msgg=success");
        } else {
            response.sendRedirect("centattri.jsp?nmsg=failed");
        }
    }
} catch (Exception ex) {
    ex.printStackTrace();
}
%>
<%@page import="DB.Dbconnection"%>
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.Connection"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<title>RAAC_Robust and Auditable</title>
<link href='http://fonts.googleapis.com/css?family=Droid+Serif' rel='stylesheet'
type='text/css'>
<link href='http://fonts.googleapis.com/css?family=Oswald' rel='stylesheet'
type='text/css'>
<link href="css/styles.css" rel="stylesheet" type="text/css" />
</head>
<body>
<%
if (request.getParameter("umsgg") != null) {%>
<script>alert('User verified successfully');</script>
<%}
%>
<%
if (request.getParameter("uumsgg") != null) {%>
<script>alert('User not verified ');</script>
<%}
%>
<div class="menu-wrap">
<center><h1 style="font-size: 35px;color:#e68019;font-family:
Algerian">RAAC: Robust and Auditable Access Control with
Multiple</h1></center>
<center><h1 style="font-size: 35px;color:#e68019;font-family: Algerian">Attribute Authorities for Public Cloud Storage</h1></center>
<div class="menu">
<ul>
<li><a href="index.jsp" class="active">HOME</a></li>
<li><a href="attruuserdet.jsp"> USER DETAIL</a></li>
<li><a href="attowner.jsp"> OWNER DETAIL</a></li>
<li><a href="attrifilerequest.jsp"> FILE REQUEST</a></li>
<li><a href="attrilogin.jsp"> LOGOUT</a></li>
</ul>
</div>
</div>
<!--------menu wrap ends--------->
<div class="clearing"></div>
<div class="header">
<div class="logo">
</div>
</div>
<!--------header wrap ends--------->
<div class="banner" style="background-image:url(images/vieww.jpg); ">
<h1>Central Authorities </h1>
<h2>OWNER DETAILS</h2>
</div>
<!--------banner end-------->
<style>
.textboxb {
width: 230px;
height: 30px;
border: 5px solid burlywood;
border-left: 12px solid burlywood;
}
.button {
background-color: #0B3B2E; /* Green */
border-radius: 8px;
color: white;
padding: 15px 32px;
text-align: center;
text-decoration: none;
display: inline-block;
font-size: 16px;
}
</style>
<div class="page" style="background-color: #d9ffb3;">
<div class="primary-col">
<div class="generic">
<div class="panel" style="background-color:#d9ffb3; ">
<div style="border-style: double;border-width: 5px;border-color:
darkmagenta;border-style: dashed;overflow: auto;width: 610px">
<div class="title">
</tr>
<tr>
<%
Connection con = null;
Statement st = null;
ResultSet rs = null;
try {
con = Dbconnection.getConnection();
st = con.createStatement();
rs = st.executeQuery("select * from ownerreg");
while (rs.next()) {%>
</tr>
<%}
} catch (Exception ex) {
ex.printStackTrace();
}
%>
</table>
</center>
</div>
<div class="content">
</div>
</div>
</div>
</div>
<div class="block float-left mar-top30">
</div>
</div>
<!----primary end--->
<div class="side-bar">
<div class="search">
Sample Screenshot
Home
User Registration
User Login
Attribute Authority Registration
Central authority Login
Attribute authority verification and send key
Attribute authority Login
Attribute key checking
User activation key
User Login
User key checking
Owner Registration
Owner Verification Key send
Owner login
Key checking
File upload
User file request
Intermediate key generated and sent to CA
File key sent to user
CONCLUSION & FUTURE WORK
[2] Z. Fu, K. Ren, J. Shu, X. Sun, and F. Huang, “Enabling personalized search
over encrypted outsourced data with efficiency improvement,” IEEE Transactions
on Parallel & Distributed Systems, vol. 27, no. 9, pp. 2546–2559, 2016.
[3] Z. Fu, X. Sun, S. Ji, and G. Xie, “Towards efficient content-aware search over encrypted outsourced data in cloud,” in Proceedings of 2016 IEEE Conference on Computer Communications (INFOCOM 2016). IEEE, 2016, pp. 1–9.
[4] K. Xue and P. Hong, “A dynamic secure group sharing framework in public
cloud computing,” IEEE Transactions on Cloud Computing, vol. 2, no. 4, pp. 459–
470, 2014.
[7] J. Hur and D. K. Noh, “Attribute-based access control with efficient revocation
in data outsourcing systems,” IEEE Transactions on Parallel and Distributed
Systems, vol. 22, no. 7, pp. 1214–1221, 2011.
[8] J. Hong, K. Xue, W. Li, and Y. Xue, “TAFC: Time and attribute factors combined access control on time-sensitive data in public cloud,” in Proceedings of 2015 IEEE Global Communications Conference (GLOBECOM 2015). IEEE, 2015, pp. 1–6.
[11] K. Yang, X. Jia, K. Ren, and B. Zhang, “DAC-MACS: Effective data access
control for multi-authority cloud storage systems,” in Proceedings of 2013 IEEE
Conference on Computer Communications (INFOCOM 2013). IEEE, 2013, pp.
2895–2903.
[12] J. Chen and H. Ma, “Efficient decentralized attribute-based access control for cloud storage with user revocation,” in Proceedings of 2014 IEEE International Conference on Communications (ICC 2014). IEEE, 2014, pp. 3782–3787.