
FILE INTEGRITY CHECKING AND MONITORING

IN CLOUD

A PROJECT REPORT

Submitted by

SIVAPRAKASH S (822219104035)
VISWESWARAN R (822219104039)

in partial fulfillment for the award of the degree


of

BACHELOR OF ENGINEERING

IN

COMPUTER SCIENCE AND ENGINEERING

UNIVERSITY COLLEGE OF ENGINEERING THIRUKKUVALAI

ANNA UNIVERSITY: CHENNAI 600 025

APRIL/MAY 2023
ANNA UNIVERSITY: CHENNAI – 600 025

BONAFIDE CERTIFICATE

Certified that this project report titled “FILE INTEGRITY CHECKING AND MONITORING IN CLOUD” is the bonafide work of “SIVAPRAKASH S (822219104035) and VISWESWARAN R (822219104039)” of Computer Science and Engineering, who carried out this project work under my supervision.

SIGNATURE SIGNATURE

Dr. K. L. NEELA, M.E., Ph.D., Mrs. T. MAHESHSELVI, M.E.,

Assistant professor, Assistant professor,

HEAD OF THE DEPARTMENT, SUPERVISOR,

Department of CSE, Department of CSE,

University College of Engineering, University College of Engineering,

Thirukkuvalai – 610204. Thirukkuvalai – 610204.

Submitted for the Project Viva Voce held on: ……………….

INTERNAL EXAMINER EXTERNAL EXAMINER

ACKNOWLEDGEMENT
We would like to express our gratitude to all the people who have devoted their valued time and effort to help us complete this project, without whom it would not have been possible for us to understand and examine the project.

First of all, we express our sincere gratitude to our dear parents, who have been the major source of inspiration and encouragement to us throughout our career. We express our heartfelt thanks to our honourable Dean, Dr. G. ELANGOVAN, M.E., Ph.D., for allowing us to do this project.

We express our sincere thanks to the Head of the Department, Dr. K. L. NEELA, M.E., Ph.D., for providing the necessary support in all respects in carrying out this project.

We express our deep sense of reverence and offer our thanks with profound gratitude to our project guide, Dr. K. L. NEELA, M.E., Ph.D., for the motivation and encouragement infused in us. Her readiness and eagerness to counsel at all times, her educative inputs, and her concern and assistance have been invaluable.

We express our sincere thanks to the project committee co-ordinator, Mr. S. MADHAN, B.E., M.Tech., Department of Computer Science and Engineering, University College of Engineering, Thirukkuvalai, for his invaluable guidance and technical support. We thank all the faculty members of the Department of Computer Science and Engineering for their valuable help and guidance.

Finally, we thank God Almighty for His blessings, without which we could not have completed this project and made it a success.

PROJECT TEAM
SIVAPRAKASH S (822219104035)
VISWESWARAN R (822219104039)

ABSTRACT

Blockchain technology has become one of the most important emerging technologies of this period. Its characteristics of decentralization, non-tampering, traceability and high security can effectively solve the problems of traditional cloud audit data in central storage: vulnerability to tampering, incomplete transmission and unsafe data flow. Secure sharing of dynamic audit data is performed automatically through the consensus strategy and highly programmable smart contracts, and the use of blockchain for cloud audit data can shorten compiling time and improve the quality of audit reports. In this model, a consortium blockchain is built for different auditors; cloud auditing data can be shared only by users who have identity compliance through the node admission mechanism. Off-chain asynchronous secure storage is used for heterogeneous audit data with different ownership patterns and sensitivity degrees. In the proposed work, the auditor is required to create a new transaction after every verification, where the information corresponding to the verification is integrated into the transaction, and the auditor conducts the transaction. After the transaction is recorded into the blockchain, the user is able to verify the time when the auditor performed the verification by checking the generation time of the transaction.

TABLE OF CONTENTS

CHAPTER NO.  TITLE

ABSTRACT
LIST OF FIGURES
LIST OF ABBREVIATIONS

1 INTRODUCTION
1.1 CLOUD COMPUTING
1.2 CHALLENGES OF CLOUD COMPUTING
1.3 CLOUD SECURITY
1.4 APPLICATIONS OF CLOUD COMPUTING

2 LITERATURE SURVEY
2.1 Privacy preserving cloud data auditing with efficient key update
2.2 An efficient public auditing protocol with novel dynamic structure for cloud data
2.3 Remote data possession checking with privacy preserving authenticators for cloud storage
2.4 Light-weight and privacy-preserving secure cloud auditing scheme for group users via the third party medium
2.5 Privacy preserving auditing protocol for remote data storage
2.6 A blockchain-based multi-cloud storage data auditing scheme to locate faults
2.7 A blockchain-based secret-data sharing framework for personal health records in emergency condition
2.8 Blockchain-based public auditing for big data in cloud storage
2.9 A privacy-preserving and untraceable group data sharing scheme in cloud computing
2.10 Privacy preserving cloud data auditing with efficient key update

3 SYSTEM ANALYSIS
3.1 EXISTING SYSTEM
3.1.1 DISADVANTAGES
3.2 PROPOSED SYSTEM
3.2.1 ADVANTAGES
3.3 ALGORITHM
3.3.1 BLOCKCHAIN TECHNOLOGY
3.3.2 HASHING
3.3.3 BLOCK AND HASH GENERATION
3.3.4 FEATURES OF BLOCKCHAIN TECHNOLOGY

4 IMPLEMENTATION
4.1 MODULE LIST
4.2 MODULE DESCRIPTION

5 SYSTEM SPECIFICATION
5.1 HARDWARE REQUIREMENTS
5.2 SOFTWARE REQUIREMENTS
5.3 SOFTWARE DESCRIPTION
5.3.1 FRONT END SOFTWARE
5.3.2 SQL SERVER 8.0

6 SYSTEM DESIGN
6.1 UML DIAGRAMS
6.2 USE CASE DIAGRAM
6.3 CLASS DIAGRAM
6.4 SEQUENCE DIAGRAM
6.5 ACTIVITY DIAGRAM
6.6 DATA FLOW DIAGRAM

7 SYSTEM TESTING
7.1 TYPES OF TESTING
7.2 UNIT TESTING
7.3 INTEGRATION TESTING
7.4 ACCEPTANCE TESTING

8 SNAPSHOTS
9 APPENDIX
10 CONCLUSION
11 REFERENCES

LIST OF FIGURES

FIGURE NO.  TITLE

1.1 Cloud Security
3.1 Architecture Diagram
3.2 Block Creation
3.3 Block Creation with Hash Link
6.1 Use Case Diagram
6.2 Class Diagram
6.3 Sequence Diagram
6.4 Activity Diagram
6.5 Data Flow Level 0
6.6 Data Flow Level 1
6.7 Data Flow Level 2
6.8 Data Flow Level 3
6.9 Data Flow Level 4
8.1 Home Page
8.2 Owner Login
8.3 Owner Registration
8.4 Owner File Uploading
8.5 TPA Approval
8.6 User Accessing the File

LIST OF ABBREVIATIONS

DEX - Decentralized Exchanges
CEX - Centralized Exchanges
IaaS - Infrastructure-as-a-Service
PaaS - Platform-as-a-Service
SaaS - Software-as-a-Service
DR - Disaster Recovery
PKI - Public Key Infrastructure
CPVPA - Certificateless Public Verification scheme against
Procrastinating Auditors
SHA - Secure Hashing Algorithm
MD - Message-Digest algorithm

CHAPTER 1
INTRODUCTION
1.1 CLOUD COMPUTING

Cloud computing technology consists of the use of computing resources that are delivered as a service over a network. In the cloud computing model, users have to give access to their data for storage and for performing the desired business operations. Hence, the cloud service provider must ensure trust and security, as a huge amount of valuable and sensitive data is stored on clouds. There are also concerns about flexible, scalable and fine-grained access control in cloud computing.

Cloud computing is growing consistently, and there are many major cloud computing providers, including Amazon, Google, Microsoft, Yahoo and others, offering solutions such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Storage-as-a-Service and Infrastructure-as-a-Service (IaaS). In addition, considering the possibility of substantially minimizing expenses through optimization and of maximizing operational as well as economic effectiveness, cloud computing is an excellent technology. Furthermore, cloud computing can tremendously boost cooperation, speed, and reach, thus empowering a truly worldwide computing model on the Internet infrastructure. On top of that, cloud computing has advantages in delivering more scalable, fault-tolerant services.

Cloud computing handles resource management in a better way, since the user no longer needs to be responsible for identifying resources for storage. If users want to store more data, they request it from the cloud provider, and once they are finished they can either release the storage by simply stopping its use, or move the data to a long-term, lower-cost storage resource. This further allows users to use dynamic resources effectively, because they no longer need to concern themselves with the storage and cost that accompany new and old resources.

Cloud computing service models all reside in the cloud, and laptops, desktops, phones and tablets act as clients that obtain services from the cloud. Servers provide services to clients on request or on a pay-per-use basis. Cloud computing provides a shared pool of configurable IT resources on demand, which requires minimal management effort to obtain better services. Services are based on a Service Level Agreement (SLA) between service providers and consumers.

1.2 CHALLENGES OF CLOUD COMPUTING

The following are some of the notable challenges associated with cloud
computing, and although some of these may cause a slowdown when
delivering more services in the cloud, most also can provide opportunities, if
resolved with due care and attention in the planning stages.

Security and Privacy:

Perhaps two of the more “hot button” issues surrounding cloud computing relate to storing and securing data, and monitoring the use of the cloud by the service providers. These issues are generally blamed for slowing the deployment of cloud services. These challenges can be addressed, for example, by storing the information internal to the organization but allowing it to be used in the cloud. For this to occur, though, the security mechanisms between the organization and the cloud need to be robust; a hybrid cloud could support such a deployment.

Lack of Standards:

Clouds have documented interfaces; however, no standards are associated with these, and thus it is unlikely that most clouds will be
interoperable. The Open Grid Forum is developing an Open Cloud Computing
Interface to resolve this issue and the Open Cloud Consortium is working on
cloud computing standards and practices. The findings of these groups will
need to mature, but it is not known whether they will address the needs of the
people deploying the services and the specific interfaces these services need.
However, keeping up to date on the latest standards as they evolve will allow
them to be leveraged, if applicable.

Continuously Evolving:

User requirements are continuously evolving, as are the requirements for interfaces, networking, and storage. This means that a “cloud,” especially a
public one, does not remain static and is also continuously evolving.

Compliance Concerns:

The Sarbanes-Oxley Act (SOX) in the US and Data Protection directives in the EU are just two among many compliance issues affecting cloud
computing, based on the type of data and application for which the cloud is
being used. The EU has a legislative backing for data protection across all
member states, but in the US data protection is different and can vary from
state to state. As with security and privacy mentioned previously, these
typically result in Hybrid cloud deployment with one cloud storing the data
internal to the organization.

1.3 CLOUD SECURITY

Security has many facets. Security is the combination of confidentiality, the prevention of unauthorized disclosure of information; integrity, the prevention of unauthorized amendment or deletion of information; and availability, the prevention of unauthorized withholding of information.

The major issues in cloud computing include resource security, resource management, and resource monitoring. Currently, there are no standard rules and regulations for deploying applications in the cloud, and there is a lack of standardization control in the cloud. Numerous novel techniques have been designed and implemented in the cloud; however, these techniques fall short of ensuring total security due to the dynamics of the cloud environment.

This section discusses the inherent issues of data security, governance, and management with respect to control in cloud computing, along with the key security, privacy, and trust issues in the existing cloud computing environment, to help users recognize the tangible and intangible threats related to its use. According to the literature, there are three major potential threats in cloud computing, namely security, privacy, and trust. Security plays a critical role in the current era of the long-dreamed vision of computing as a utility. It can be divided into four subcategories: safety mechanisms, cloud server monitoring or tracing, data confidentiality, and avoiding malicious insiders' illegal operations and service hijacking.

A data security framework for cloud computing networks has been proposed, mainly discussing the security issues related to cloud data storage. There are also some patents on data storage security techniques, and surveys on secure cloud computing for critical infrastructure. A security and privacy framework for RFID in cloud computing has also been proposed, integrating RFID technology with cloud computing and thereby combining cloud computing with the Internet of Things.

Fig 1.1 Cloud Security

Data Integrity

Data integrity is one of the most critical elements in any information system. Generally, data integrity means protecting data from unauthorized deletion, modification, or fabrication. Managing an entity's admittance and rights to specific enterprise resources ensures that valuable data and services are not abused, misappropriated, or stolen.

Data integrity is easily achieved in a standalone system with a single database. Data integrity in a standalone system is maintained via database constraints and transactions, usually handled by a database management system (DBMS). Transactions should follow the ACID (atomicity, consistency, isolation, and durability) properties to ensure data integrity. Most databases support ACID transactions and can preserve data integrity.

Authorization is used to control access to data. It is the mechanism by which a system determines what level of access a particular authenticated user should have to secured resources controlled by the system.

Data Confidentiality
Data confidentiality is important for users to store their private or
confidential data in the cloud. Authentication and access control strategies are
used to ensure data confidentiality. The data confidentiality, authentication,
and access control issues in cloud computing could be addressed by increasing
the cloud reliability and trustworthiness.
Because users do not trust cloud providers, and it is virtually impossible for cloud storage service providers to eliminate potential insider threats, it is very dangerous for users to store their sensitive data in cloud storage directly. Simple encryption faces the key management problem and cannot support complex requirements such as querying, parallel modification, and fine-grained authorization.
Data Availability
Data availability means the following: when accidents such as hard disk damage, IDC fire, or network failures occur, the extent to which users' data can be used or recovered, and how users can verify their data by technical means rather than depending on the credit guarantee of the cloud service provider alone. The issue of storing data on trans-border servers is a serious concern of clients, because cloud vendors are governed by local laws and, therefore, cloud clients should be cognizant of those laws. Moreover, the cloud service provider should ensure data security, particularly data confidentiality and integrity. The cloud provider should share all such concerns with the client and build a trust relationship in this connection. The cloud vendor should provide guarantees of data safety and explain the jurisdiction of local laws to the clients. The main focus here is on those data issues and challenges that are associated with data storage location and its relocation, cost, availability, and security.

Data Privacy
Privacy is the ability of an individual or group to seclude themselves, or information about themselves, and thereby reveal themselves selectively. In the cloud, privacy means that when users access their sensitive data, the cloud services can prevent a potential adversary from inferring the user's behavior from the user's access pattern (not direct data leakage). Researchers have focused on Oblivious RAM (ORAM) technology, which accesses several copies of data to hide the real access targets of users. ORAM has been widely used in software protection, and has been applied to protecting privacy in the cloud as a promising technology.

1.4 APPLICATIONS OF CLOUD COMPUTING

Here are a few situations where cloud computing is used to enhance the
ability to achieve business goals.

1. Infrastructure as a service (IaaS) and platform as a service (PaaS)


When it comes to IaaS, using an existing infrastructure on a pay-per-use
scheme seems to be an obvious choice for companies saving on the cost of
investing to acquire, manage and maintain an IT infrastructure. There are also
instances where organizations turn to PaaS for the same reasons while also
seeking to increase the speed of development on a ready-to-use platform to
deploy applications.
2. Private cloud and hybrid cloud
Among the many incentives for using cloud, there are two situations
where organizations are looking into ways to assess some of the applications
they intend to deploy into their environment through the use of a cloud
(specifically a public cloud). While in the case of test and development it may
be limited in time, adopting a hybrid cloud approach allows for testing
application workloads, therefore providing the comfort of an environment
without the initial investment that might have been rendered useless should the
workload testing fail.
Another use of hybrid cloud is also the ability to expand during periods
of limited peak usage, which is often preferable to hosting a large
infrastructure that might seldom be of use. An organization would seek to have
the additional capacity and availability of an environment when needed, on a pay-as-you-go basis.

3. Test and development


Probably the best scenario for the use of a cloud is a test and
development environment. This entails securing a budget, setting up your
environment through physical assets, significant manpower and time. Then
comes the installation and configuration of your platform. All this can often
extend the time it takes for a project to be completed and stretch your
milestones.
With cloud computing, there are now readily available environments tailored
for your needs at your fingertips. This often combines, but is not limited to,
automated provisioning of physical and virtualized resources.

4. Big data analytics


One of the aspects offered by leveraging cloud computing is the ability
to tap into vast quantities of both structured and unstructured data to harness
the benefit of extracting business value. Retailers and suppliers are now
extracting information derived from consumers’ buying patterns to target their
advertising and marketing campaigns to a particular segment of the population.
Social networking platforms are now providing the basis for analytics on
behavioral patterns that organizations are using to derive meaningful
information.

5. File storage
Cloud can offer you the possibility of storing your files and accessing,
storing and retrieving them from any web-enabled interface. The web services
interfaces are usually simple. At any time and place you have high availability,
speed, scalability and security for your environment. In this scenario,
organizations are only paying for the amount of storage they are actually
consuming, and do so without the worries of overseeing the daily maintenance
of the storage infrastructure.
There is also the possibility to store the data either on or off premises
depending on the regulatory compliance requirements. Data is stored in
virtualized pools of storage hosted by a third party based on the customer
specification requirements.

6. Disaster recovery
This is yet another benefit of using the cloud, based on the cost-effectiveness of a disaster recovery (DR) solution that provides faster recovery from a mesh of different physical locations, at a much lower cost than a traditional DR site with fixed assets, rigid procedures and a much higher cost.

7. Backup
Backing up data has always been a complex and time-consuming
operation. This included maintaining a set of tapes or drives, manually
collecting them and dispatching them to a backup facility with all the inherent
problems that might happen in between the originating and the backup site.
This way of ensuring a backup is performed is not immune to problems, such as running out of backup media, and there is also the time needed to load the backup devices for a restore operation, which is slow and prone to malfunctions and human errors. Cloud-based backup, while not a panacea, is certainly a far cry from what it used to be. You can now automatically dispatch data to any location across the wire with the assurance that neither security, availability nor capacity are issues.
While the above list of cloud computing uses is not exhaustive, it certainly gives an incentive to use the cloud, compared with more traditional alternatives, to increase IT infrastructure flexibility, as well as to leverage big data analytics and mobile computing.

CHAPTER 2

LITERATURE SURVEY

1. TITLE: Privacy preserving cloud data auditing with efficient key update

AUTHORS: Li, Yannan, Yong Yu, Bo Yang, Geyong Min, and Huai Wu

YEAR: 2019

DESCRIPTION:

The authors propose a key-updating and authenticator-evolving mechanism with zero-knowledge privacy of the stored files for secure cloud data auditing, which incorporates zero-knowledge proof systems, proxy re-signatures and homomorphic linear authenticators. They instantiate the proposal with the state-of-the-art Shacham-Waters auditing scheme. When the cloud user needs
to update his key, instead of downloading the entire file and re-generating all
the authenticators, the user can simply download one single file tag, work out a
re-signing key with the new private key and upload the new file tag together
with some verification information to the cloud server, in which the user
undertakes the least amount of the workload in the updating phase. Three kinds
of entities are involved in the scenario, namely cloud users or data owners, the
cloud server and a third party auditor (TPA). A cloud user generates data files
and stores large amount of data on the remote cloud server without keeping a
local copy. TPA can be an organization managed by the government, which
has expertise and capabilities that cloud users do not have and is trusted to
check the integrity of the hosted data on behalf of cloud users upon request.
TPA is responsible for checking the integrity of the cloud data on behalf of the cloud users in case they have no time, resources or feasibility to monitor their data, and returns the auditing report to the cloud user.
2. TITLE: An efficient public auditing protocol with novel dynamic structure
for cloud data

AUTHORS: Shen, Jian, Jun Shen, Xiaofeng Chen, Xinyi Huang, and Willy
Susilo

YEAR: 2020

DESCRIPTION:

The authors propose an efficient public auditing protocol with global and sampling blockless verification as well as batch auditing, where data dynamics are supported substantially more efficiently than is the case with the state of the art. Note that the novel dynamic structure in this protocol consists of a doubly
linked info table and a location array. The doubly linked info table (DLIT) is a
two-dimensional data structure employed by the TPA to store data information
concerning auditing, differing from the one-dimensional Index Hash Table
(IHT). Data information in the DLIT is divided into two types: file information
and block information. The left part is the file information, including user ID
and file ID. In previous works by other researchers, the file information simply
consists of the file ID, therein making the length of the file ID long. Moreover,
when the total number of files increases over time, it becomes more difficult to
make each file ID unique. However, even if a user ID of only one bit is added
into the process of identifier generation, the number of identifiers can be
doubled to 32. In the real world, both the file ID bit and the user ID bit will be
larger, and the number of identifiers will consequently be increased. The right
part is the block information, including the current version number and the
time stamp, which are generated when a given block is uploaded or updated.
With such a double linking data structure, the insertion and deletion of a file or
data block will no longer cause a change in other records in the DLIT.
Moreover, the advantages of the DLIT will be reflected in batch operations at
lower costs when searching for a certain element.

3. TITLE: Remote data possession checking with privacy-preserving
authenticators for cloud storage

AUTHORS: Shen, Wenting, Guangyang Yang, Jia Yu, Hanlin Zhang, Fanyu
Kong, and Rong Hao

YEAR: 2020

DESCRIPTION:

The authors propose a new paradigm named remote data possession checking with privacy-preserving authenticators for cloud storage. In this new paradigm, neither the cloud service provider nor the public verifier has access to the real authenticators (signatures) for the cloud data. Meanwhile, the integrity of cloud
data is still able to be efficiently checked. It is potentially useful in some
special situations where electronic checks and contracts are outsourced. To
securely protect the privacy of the authenticator, we design a new authenticator
called Homomorphic Invisible Authenticator (HIA), which protects the privacy
of authenticator and supports the blockless verification. Based on HIA, we
construct the first remote data possession checking scheme with privacy-
preserving authenticators for cloud storage. There are four types of entities
involved in the framework: the cloud user, the cloud, the third party auditor
(TPA) and the trusted authority (TA). The cloud user owns large amount of
data that will be outsourced into the cloud. The cloud provides enormous
storage space for the cloud user, which is supervised by cloud service
providers (CSPs). The TA is an organization trusted by both the cloud and the
cloud user, which can be a public institution or a non-government organization.
The public key of the TA is used by the cloud user to generate privacy-
preserving authenticators for his outsourced data. Under rational requirements,
the TA can recover the real authenticators from the privacy-preserving ones.

4. TITLE: Light-weight and privacy-preserving secure cloud auditing scheme
for group users via the third party medium

AUTHORS: Shen, Wenting, Jia Yu, Hui Xia, Hanlin Zhang, Xiuqing Lu, and
Rong Hao

YEAR: 2019

DESCRIPTION:

The authors propose a cloud storage auditing scheme for group users, which greatly reduces the computation burden on the user side. The scheme introduces a Third Party Medium (TPM) to perform time-consuming operations on behalf of users. The TPM is in charge of generating authenticators for users and verifying data integrity on their behalf. In order to protect data privacy against the TPM, the data are blinded using simple operations in the data uploading and data auditing phases. The user does not need to perform time-consuming decryption operations when using cloud data. In a group, there are
multiple group users and one original user who is the original owner of data
and can create shared data to the cloud. After the original user uploads data to
the cloud, other users in the group can access these shared cloud data. The
original data owner can play the role of the group manager. When the user
wants to upload data to the cloud, he needs to blind the data, and then sends
them to the TPM. After receiving the blinded data from the user, the TPM will
generate the corresponding authenticators for the blinded data, and then
uploads the blinded data and the corresponding authenticators to the cloud
together. The cloud recovers the real data and the real authenticators from the
blinded data and the corresponding authenticators, and stores them. When the
TPM wants to verify the integrity of cloud data, he will send an auditing
challenge to the cloud. After receiving this challenge, the cloud will respond to
this TPM with a proof of data possession. And then, the TPM will check the
correctness of the proof to verify the integrity of cloud data.

5. TITLE: Privacy preserving auditing protocol for remote data storage

AUTHORS: Suguna, M., and S. Mercy Shalinie

YEAR: 2019

DESCRIPTION:

A third-party trusted verifier is introduced in the proposed work, which maintains dynamic metadata stored locally for the verification process. Bilinear mapping is used to enable verification without retrieving the original data, called a blockless process. The verification proof generated using the proposed method is a small signature, which reduces the auditing overhead at the client side compared to existing solutions. A data auditing protocol is proposed for ensuring the privacy of outsourced data at semi-trusted storage by verifying data intactness through blockless verification. The third-party trusted verifier (TV) is assigned the verification process on behalf of the client, thereby reducing the computation and storage overhead at the client side. In order to ensure the privacy of data against the TV, a blockless process is ensured, where the challenge involves a metadata proof from the server that is verified locally by the TV without the actual data. Even when the TV tries to get the data from the encrypted file, the proof construction using a bilinear map ensures the data privacy. In provable data possession schemes, the auditing process ensures data intactness through a challenge-response method. The use of a trusted third-party verifier ensures low computation and storage overhead for the client. However, the privacy of the data becomes a question, as the verifier could derive the data from the history of challenges and proofs in hand. Using blockless verification, the proposed P-DAP protocol ensures the privacy of the user's data against the auditor's verification process. The user signing process in every dynamic data operation also ensures the authenticity of updates. The storage server also benefits by proving its trustworthiness when it responds with the proof to the challenge information sent by the verifier.

6. TITLE: A blockchain-based multi-cloud storage data auditing scheme to
locate faults

AUTHORS: Zhang, Cheng, Yang Xu, Yupeng Hu, Jiajing Wu, Ju Ren, and
Yaoxue Zhang

YEAR: 2021

DESCRIPTION:

The authors introduce the blockchain to record the interactions among users, service providers, and organizers in the data auditing process as evidence, and also employ the smart contract to detect service disputes, so as to force the untrusted organizer to honestly identify malicious service providers. Before outsourcing data to CSPs, U and CSPs jointly generate the HVT of the data. Both parties confirm
that their HVT is consistent with the help of blockchain. During the data
auditing process, U generates a challenge nonce and requires CSPs to respond.
After all CSPs calculate the response based on the challenged data blocks, O will
aggregate the results into one integrity proof and send it to U through the
blockchain for audit. When there is a data integrity dispute, the smart contract
can judge whether the dispute exists based on the records on the blockchain,
thereby preventing the framing behavior of malicious U. If the smart contract
determines that there is a problem with the service provider side, it will ask O
to find the malicious CSP within the specified time; otherwise it will consider
O to be malicious. At this point, the rational O must honestly find out the
malicious entity and cannot frame other CSPs, because the framed CSP can
prove to the smart contract that he is honest through the interaction records on
the blockchain. In the event of a dispute, the smart contract can identify
malicious service providers based on these records without the need for trusted
TPA. We also use the credible and traceable feature of the blockchain to force
the untrusted organizer to honestly report the malicious behavior of service
providers.

7. TITLE: A blockchain-based secret-data sharing framework for personal
health records in emergency condition

AUTHORS: Rajput, Ahmed Raza, Qianmu Li, and Milad Taleby Ahvanooey

YEAR: 2021

DESCRIPTION:

The authors present a healthcare management framework that employs blockchain technology to provide a tamper-protection application by considering safe policies. These policies involve extensible access control, auditing, and tamper resistance in an emergency scenario. The framework utilizes several consensus algorithms to reach approval on new events for the blockchain. In general, the blockchain applies the aforementioned security policies to ensure the reliability of generated records, containing events, termed blocks. Besides, it empowers authoritative participants' entry and access control and supports accountability. Auditing is a significant property of the blockchain.
When the transaction is performed, the current block records the transaction
with a timestamp, and the participant of the system trails the previous event
actions. It records a history of all transactions. This strategy is beneficial for
individual persons or medical organizations that require obtaining tamper-
proof account records. The proposed mechanism's access rules essentially concentrate on the purpose, the data object, and the activities to be performed. In this framework, the patient predefines access-permission rules, such as read, write, update, delete, and the period for which to share their PHR, via smart contracts on the blockchain, without losing control. Smart contracts can be executed on the blockchain network once all the conditions are met. A patient can thus grant access to his/her PHR only under predefined conditions of an appropriate type and for a provided time limit.

8. TITLE: Blockchain-based public auditing for big data in cloud storage
AUTHORS: Li, Jiaxing, Jigang Wu, Guiyuan Jiang, and Thambipillai
Srikanthan

YEAR: 2020

DESCRIPTION:

The authors develop a novel public auditing scheme for verifying data integrity in cloud storage. In the proposed scheme, unlike existing works that involve three participating entities, only two predefined entities (i.e. the data owner and the cloud service provider), who may not trust each other, are involved, and the third-party auditor for data auditing is removed. Specifically, data
owners store the lightweight verification tags on the blockchain and generate a
proof by constructing the Merkle Hash Tree using the hash tags to reduce the
overhead of computation and communication for integrity verification. The
DO outsources its data to the cloud and retrieves them when necessary.
Between the DO and the CSP, the data flow should be secure and encrypted, as the two distrust each other. In conventional schemes, the TPA, delegated by the DOs, is responsible for auditing on their behalf. Upon receiving an auditing request, the TPA sends a challenge to the CSP, i.e., the TPA asks the CSP for a storage proof of the specified data file in the cloud. Then the TPA checks the returned proof and finally returns the auditing report to the DO. Typically,
the challenge procedure of the auditing scheme works in the following way: A
DO generates an auditing request regarding to the specified data blocks it
wants to verify, and sends the request to the TPA. When the CSP receives the
challenge, it performs cryptographic operations on the verified data blocks,
generates a cryptographic proof and sends the proof to the TPA. After receiving the proof, the TPA checks whether the proof is correct, and sends the result to the DO.

9. TITLE: A privacy-preserving and untraceable group data sharing scheme in
cloud computing

AUTHORS: Shen, Jian, Huijie Yang, Pandi Vijayakumar, and Neeraj Kumar.

YEAR: 2021

DESCRIPTION:

The authors propose a privacy-preserving and untraceable scheme, based on proxy re-encryption and oblivious random access memory (ORAM), to support multiple users in sharing data in cloud computing. The ciphertext obtained according to
the proxy re-encryption phase enables group members to implement access
control and store data, thereby completing secure data sharing. On the other
hand, this paper realizes data untraceability and a hidden data access pattern
through a one-way circular linked table in a binary tree (OCLT) and
obfuscation operation. Additionally, based on the designed structure and
pointer tuple, malicious users are identified and data tampering is prevented.
Based on key exchange, the proposed approach can efficiently generate the
user’s conference key, which can be used to protect the security of shared data
and prevent malicious user collusion with other users. In addition, security of
shared group data in the cloud and access control is achieved with respect to
the proxy re-encryption technique. Moreover, according to the operation
algorithms and the novel OCLT storage structure, our OCLT-ORAM protocol
can support untraceability of address sequences and efficiency in data storage.
Fault tolerance and tamper protection are accomplished by means of the pointer tuples. A sufficient security proof indicates the security of the protocol, and the experimental comparison results validate its performance.

10. TITLE: Privacy preserving cloud data auditing with efficient key update

AUTHORS: Li, Yannan, Yong Yu, Bo Yang, Geyong Min, and Huai Wu.

YEAR: 2019

DESCRIPTION:

The authors propose a key-updating and authenticator-evolving mechanism with zero-knowledge privacy of the stored files for secure cloud data auditing, which incorporates zero-knowledge proof systems, proxy re-signatures and
homomorphic linear authenticators. When the cloud user needs to update his
key, instead of downloading the entire file and re-generating all the
authenticators, the user can simply download one single file tag, work out a re-
signing key with the new private key and upload the new file tag together with
some verification information to the cloud server, in which the user undertakes
the least amount of the workload in the updating phase. This approach
dramatically reduces the communication and computation cost while
maintaining the desirable security. Each entity has its own obligations and benefits. The cloud server may be self-interested and, for its own benefit, such as maintaining its reputation, might even decide to hide data corruption incidents from others. TPA is responsible for checking the integrity of the cloud data on behalf of the cloud users in case they have no time, resources or feasibility to monitor their data, and returns the auditing report to the cloud user. In an auditing scheme with zero-knowledge privacy, the TPA cannot learn any information about the stored data during the auditing process.

CHAPTER 3
SYSTEM ANALYSIS

3.1 EXISTING SYSTEM

In public verification schemes, after data outsourcing, the user sets a verification period (i.e., the frequency at which the auditor performs the
verification). Then the auditor verifies the outsourced data integrity at the
corresponding time. In practice, the auditor generates a verification report
containing multiple verification results (corresponding to multiple periods, we
call these periods an epoch). If in any period the verification result is “Reject”,
it means that the data may be corrupted and the auditor needs to inform the
user at once. Otherwise, the auditor generates a verification log and provides
the user with the log at the end of each epoch. Since the auditor is able to
verify the data integrity without the user’s participation, the user can assign the
auditor to perform the verification with any period as needed. Although cloud
auditing brings some convenience to auditors, its problems in internal and
external data security cannot be ignored. At present, the collection, storage,
transmission, sharing and analysis of audit data are all under the centralized
management of the "cloud audit platform". Most public verification schemes
are built on the public key infrastructure (PKI), where the auditor needs to
manage the user’s certificate to choose the correct public key for verification.
Consequently, these schemes suffer from the certificate management problem
including certificate revocation, storage, distribution, and verification, which is
very costly and cumbersome.

3.1.1 DISADVANTAGES

• Cannot resist a procrastinating auditor who may not perform the data integrity verification on schedule.
• Deviates from the original objective of public verification schemes.
• It might be too late to recover from data loss or damage if the auditor procrastinates on the verification.
• A procrastinating auditor cannot be detected in existing public verification schemes, even though malicious auditors can be.

3.2 PROPOSED SYSTEM

In this proposed work, the first Certificateless Public Verification scheme against Procrastinating Auditors (CPVPA) is implemented, which resists malicious and procrastinating auditors. The key idea behind CPVPA is to use blockchain technology, which provides a tamper-proof and distributed way to conduct transactions without a central authority (e.g., a bank). In CPVPA, the auditor is
required to create a new transaction after each verification, where the
information corresponding to the verification is integrated into the transaction,
and the auditor conducts the transaction. After the transaction is recorded into
the blockchain, the user is able to verify the time when the auditor performs the
verification by checking the generation time of the transaction. We stress that
for a blockchain system, the more participants in it, the stronger security
guarantee it can provide. Therefore, we construct CPVPA on a well-
established and widely-used blockchain system, rather than a newly created
one. The proposed scheme also supports public auditing using a TPA (Third
Party Auditor) to help low-powered clients. The proposed scheme satisfies all
fundamental security requirements, and is more efficient than the existing
schemes that are designed to support deduplication and public auditing at the
same time.
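Though the report's full implementation is not reproduced here, the following VB.NET sketch (hypothetical class and field names, not the project's actual code) illustrates the idea: each verification result is wrapped in a transaction whose hash commits to its generation time, so a procrastinating auditor cannot backdate a check.

Imports System
Imports System.Security.Cryptography
Imports System.Text

Public Class VerificationTransaction
    Public Property PreviousHash As String    ' hash of the prior transaction
    Public Property VerifiedAt As DateTime    ' when the auditor performed the check
    Public Property Result As String          ' "Accept" or "Reject"
    Public Property ProofDigest As String     ' hash of the integrity proof

    ' The transaction hash commits to every field above, so the recorded
    ' verification time cannot be altered once the transaction is on the chain.
    Public Function ComputeHash() As String
        Dim payload As String = PreviousHash & VerifiedAt.ToString("o") & Result & ProofDigest
        Using sha As SHA256 = SHA256.Create()
            Dim digest As Byte() = sha.ComputeHash(Encoding.UTF8.GetBytes(payload))
            Return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant()
        End Using
    End Function
End Class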

Fig 3.1 Architecture Diagram

3.2.1 ADVANTAGES

• Allows the cloud server to prove that the outsourced data is well
maintained.
• The communication and computation overhead should be as efficient as
possible.
• Collusion between any two participants cannot break the security of the
proposed scheme.

3.3 ALGORITHM

3.3.1 BLOCKCHAIN TECHNOLOGY

Blockchain builds on the idea of P2P networks and provides a universal data set that every actor can trust, even though they might not know or trust each other. It provides a shared and trusted ledger of transactions, where immutable and encrypted copies of information are stored on every node in the network. Economic incentives in the form of native network tokens are applied to make the network fault tolerant, and attack and collusion resistant.

Blockchain and derived technologies provide a universal and transparent accounting and governance layer for the Internet. All network participants have equal access to the same data in real time. Transactions running over the network are transparent to all actors and can be traced back to their origin. Blockchain can also be described as a distributed accounting machine, or a supranational governance machine that is public and transparent. When the network validates a transaction by majority consensus, the transaction is permanently written to the blockchain; otherwise, the transaction is rejected and does not go through. Only transactions that have been included in the blockchain are considered valid and final.
A blockchain protocol operates on top of the Internet, on a P2P network of computers that all run the protocol and hold an identical copy of the ledger of transactions, enabling P2P value transactions without a middleman through machine consensus. The blockchain itself is a file: a shared, public ledger of transactions that records all transactions from the genesis block (the first block) until today.

Blockchain is a shared, trusted, public ledger of transactions that everyone can inspect but which no single user controls. It is a distributed database that maintains a continuously growing list of transaction data records, cryptographically secured from tampering and revision. Blockchains come in three types: public, private, and consortium. Bitcoin and Ethereum are examples of public blockchains; anyone, from anywhere, can join them and leave whenever they wish, which is made safe by complex mathematical functions. A private blockchain is the internal public ledger of a company, and joining it is granted by the company that owns the blockchain. Block construction and mining are far faster in a private blockchain than in a public one, due to the limited number of nodes. A consortium blockchain, however, exists among a group of companies; instead of open consensus, membership principles are designated to govern the blockchain transactions more effectively. This work uses a consortium blockchain, as the blockchain is to be governed by a national authority in the country.

A block is the primary component of the blockchain. A block consists of a header and a body; the body contains the transactions being written to the system, while the header contains information about the block, including the previous hash, the nonce value and difficulty, and the timestamp of the block and its transactions. The length of a block is variable, typically between 1 and 8 MB. The header uniquely identifies the block to be placed.

Fig 3.2 Block Creation

3.3.2 HASHING
Hashing is the process of converting an arbitrary, variable-size input into a fixed-size output. Different hash functions provide different levels of security. The MD5 algorithm is widely used for hashing purposes and produces a 128-bit hash value, written as 32 hexadecimal symbols. MD5 is the latest algorithm in the MD series, which previously included MD2 and MD4. The algorithm was designed as a cryptographic hashing algorithm, but problems that reduce the uniqueness of its hash values have been found, and hence it has known vulnerabilities. SHA (Secure Hashing Algorithm) is another cryptographic hash function family; SHA-1 yields a 160-bit hash value consisting of 40 hexadecimal characters. That algorithm could not resist collision attacks against it, and its usage has declined. Several newer algorithms have since been proposed, including SHA-3 and SHA-256. The SHA-2 set of algorithms was designed by the US National Security Agency. SHA-256 and SHA-512 are newer hash functions that have no known collision problems and are otherwise deemed secure, at least as yet.
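To make this concrete, here is a minimal VB.NET sketch (VB.NET being the project's stated front-end language) that computes the SHA-256 digest of a file; file integrity checking amounts to comparing a stored digest against a freshly computed one. The file name is illustrative.

Imports System
Imports System.IO
Imports System.Security.Cryptography

Module FileHashDemo
    ' Compute the SHA-256 digest of a file as a 64-character hex string.
    Function ComputeFileHash(path As String) As String
        Using sha As SHA256 = SHA256.Create()
            Using stream As FileStream = File.OpenRead(path)
                Dim digest As Byte() = sha.ComputeHash(stream)
                Return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant()
            End Using
        End Using
    End Function

    Sub Main()
        ' Any change to the file, however small, yields a completely different hash.
        Console.WriteLine(ComputeFileHash("report.docx"))
    End Sub
End Module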

In a blockchain, each block consists of the following header fields.

Previous Hash:
This hash address locates the previous block.
Transaction Details:
Details of all the transactions that need to occur.
Nonce:
An arbitrary number, varied by miners, used to differentiate the block's hash address.
Hash Address of the Block:
All of the above (i.e., the preceding hash, the transaction details, and the nonce) are passed through a hashing algorithm. This gives a 256-bit output, 64 characters long, which is called the unique ‘hash address’ and is consequently referred to as the hash of the block. Numerous people around the world try to figure out the right hash value that meets a pre-determined condition, using computational algorithms. The transaction completes when the predetermined condition is met. To put it more plainly, blockchain miners attempt to solve a mathematical puzzle, which is referred to as a proof-of-work problem. Whoever solves it first gets a reward.
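As a hedged illustration of the header fields above, the following VB.NET sketch derives a block's hash address from the preceding hash, the transaction details, and the nonce (parameter names are illustrative):

Imports System
Imports System.Security.Cryptography
Imports System.Text

Module BlockHashDemo
    ' A block's hash address is derived from the preceding hash, the
    ' transaction details, and the nonce, as described above.
    Function ComputeBlockHash(previousHash As String, transactions As String, nonce As Long) As String
        Dim header As String = previousHash & transactions & nonce.ToString()
        Using sha As SHA256 = SHA256.Create()
            Dim digest As Byte() = sha.ComputeHash(Encoding.UTF8.GetBytes(header))
            Return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant()
        End Using
    End Function
End Module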
Mining
In Blockchain technology, the process of adding transactional details to
the present digital/public ledger is called ‘mining.’ Though the term is
associated with Bitcoin, it is used to refer to other Blockchain technologies as
well. Mining involves generating the hash of a block transaction, which is
tough to forge, thereby ensuring the safety of the entire Blockchain without
needing a central system.
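A toy proof-of-work loop, reusing the ComputeBlockHash sketch above with a purely illustrative four-zero difficulty target, might look like this:

' Search for a nonce whose block hash meets a toy difficulty target.
Function MineBlock(previousHash As String, transactions As String) As Long
    Dim nonce As Long = 0
    ' Keep trying nonces until the hash starts with four zeros.
    While Not ComputeBlockHash(previousHash, transactions, nonce).StartsWith("0000")
        nonce += 1
    End While
    Return nonce   ' this nonce is the proof of work for the block
End Function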

Fig 3.3 Block Creation with Hash Link

3.3.3 BLOCK AND HASH GENERATION

1. A block contains information about current transactions.

2. Each piece of data generates a hash.

3. A hash is a string of numbers and letters.

4. Transactions are entered in the order in which they occurred.

5. The hash depends not only on the transaction but also on the previous transaction's hash.

6. Even a small change in a transaction creates a completely new hash.

7. The nodes check that a transaction has not been changed by inspecting the hash.

8. If a transaction is approved by a majority of the nodes, it is written into a block.

9. Each block refers to the previous block, and together they make the blockchain.

10. A blockchain is effective because it is spread over many computers, each of which has a copy of the blockchain.
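The steps above amount to a simple chain walk: recompute each block's hash and compare it with the next block's stored previous-hash pointer. A minimal sketch, assuming the illustrative Block fields below, the ComputeBlockHash function from the earlier sketch, and an Imports System.Collections.Generic statement:

' Minimal block type for the integrity walk (illustrative fields only).
Public Class Block
    Public Property PreviousHash As String
    Public Property Transactions As String
    Public Property Nonce As Long
End Class

' Walk the chain and verify every link, as in steps 5 to 9 above.
Function ChainIsValid(chain As List(Of Block)) As Boolean
    For i As Integer = 1 To chain.Count - 1
        ' Recompute the previous block's hash with ComputeBlockHash above.
        Dim expected As String = ComputeBlockHash(chain(i - 1).PreviousHash,
                                                  chain(i - 1).Transactions,
                                                  chain(i - 1).Nonce)
        ' A tampered transaction changes the hash and breaks this link.
        If chain(i).PreviousHash <> expected Then Return False
    Next
    Return True
End Function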

3.3.4 FEATURES OF BLOCKCHAIN TECHNOLOGY

Better Transparency
Transparency is one of the big issues in the current industry. To improve
transparency, organizations have tried to implement more rules and
regulations. But there is one thing that doesn’t make any system 100%
transparency, i.e., centralization. With blockchain, an organization can go for a
complete decentralized network where there is no need for a centralized
authority, improving the system’s transparency. A blockchain consists of peers
who are responsible for carrying out transactions and validating them. Not
every peer takes part in the consensus method, but they are free to choose if
they want to participate in the validation process. To provide validation
through decentralization, the consensus method is used. Once validated, each
node keeps a copy of the transaction record. This way, the blockchain network
28
handles transparency. Transparency has bigger implications when it comes to
organizations. As mentioned earlier, governments can also utilize transparency
in building government processes or even conduct voting.

Enhanced Security
Blockchain technology utilizes more advanced security than other platforms or record-keeping systems. Any transaction that is recorded must be agreed upon according to the consensus method. Also, each transaction is encrypted and properly linked to the previous transaction using a hashing method.

Security is also enhanced by the fact that each node holds a copy of all the transactions ever performed on the network. So, if a malicious actor ever wanted to change a transaction, he would not be able to do so, as the other nodes will reject his request to write transactions to the network.

Blockchain networks are also immutable, which means that data, once written, cannot be reverted by any means. This is also the right choice for systems that thrive on immutable data, such as systems that record citizens' ages.

Reduced Costs
Right now, businesses spend a lot of money to manage and improve their current systems. That's why they want to reduce costs and divert the money into building something new or improving current processes. By using blockchain, organizations can bring down a lot of the costs associated with third-party vendors. As blockchain has no inherent centralized player, there is no need to pay for any vendor costs. On top of that, less interaction is needed when it comes to validating a transaction, further removing the need to spend money or time on basic tasks.

True Traceability
With blockchain, companies can focus on creating a supply chain that works with both vendors and suppliers. In the traditional supply chain, it is hard to trace items, which can lead to multiple problems, including theft, counterfeiting, and loss of goods. With blockchain, the supply chain becomes more transparent than ever. It enables every party to trace the goods and ensure that they are not replaced or misused during the supply chain process. Organizations can also make the most of blockchain traceability by implementing it in-house.

Improved Speed and Highly Efficient


The last industrial benefit that blockchain brings is improved efficiency and speed. Blockchain streamlines time-consuming processes and automates them to maximize efficiency. It also eradicates human errors with the help of automation. The digital ledger makes all this possible by providing a single place to store transactions. The streamlining and automation of processes also mean that everything becomes highly efficient and fast.

CHAPTER 4

IMPLEMENTATION

4.1 MODULE LIST

• Block Chain Storage Framework

• User Enrolment

• Data Block Creation

• Data Auditing

• Secure Data Sharing

4.2 MODULE DESCRIPTION

BLOCK CHAIN STORAGE

The data storage scheme uses blockchain-based cloud storage technology to achieve safe storage and sharing. In this module, a local cloud is created that provides priced, abundant storage services. Data storage and access control are the main transactions in the blockchain, and it would be optimal to be able to hold all the data on the blockchain. Once users get space from the cloud, they can upload data to share in the cloud. In this work, cloud storage is implemented with high security using blockchain technology. The cloud server is subject to the cloud service provider and provides cloud storage services. It has not only significant storage space, but also a massive amount of computing power.

USER ENROLMENT

Before a user can access the system, he has to register with the system for the first time. So, a new user has to register with the system and then be authenticated before he can make requests to the server. In a basic authentication process, a user presents credentials such as a user ID to prove that he is the true owner of that user ID. The data owner can then upload files to the cloud; once a file is stored in the cloud, it is encrypted.
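The report does not list its encryption routine; as an illustrative stand-in only, the following VB.NET sketch encrypts an uploaded file with AES before it is handed to cloud storage (key and IV management are deliberately simplified):

Imports System.IO
Imports System.Security.Cryptography

Module UploadEncryptionDemo
    ' Encrypt a file with AES before it is handed to cloud storage.
    ' Key and IV handling are deliberately simplified; a real deployment
    ' would protect the key with the owner's credentials or a key service.
    Sub EncryptFile(inputPath As String, outputPath As String, key As Byte(), iv As Byte())
        Using aes As Aes = Aes.Create()
            aes.Key = key   ' 32 bytes for AES-256
            aes.IV = iv     ' 16 bytes
            Using input As FileStream = File.OpenRead(inputPath),
                  output As FileStream = File.Create(outputPath),
                  crypto As New CryptoStream(output, aes.CreateEncryptor(), CryptoStreamMode.Write)
                input.CopyTo(crypto)
            End Using
        End Using
    End Sub
End Module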

DATA BLOCK CREATION

A blockchain is a digital concept for storing data. Blocks of data are chained together, and this makes their data immutable: when a block of data is chained to the other blocks, its data can never be changed again. It will be publicly available to anyone who wants to see it, exactly as it was when it was added to the blockchain. Blockchain technology relies on a cryptographic hashing method, which creates an adequately strong hash code by converting data of arbitrary size into a fixed-size string of characters. The transactions proposed in a blockchain are hashed together before being placed in a block, and hash pointers connect each block to the next by holding the previous block's hash, which cannot be disputed. Therefore, any change to a transaction or to the hashing function results in a different hash string and affects all the involved blocks.

DATA AUDITING

The TPA works on behalf of the user. It feeds the verification results
back to the user and the cloud server, and detects data corruption as soon
as possible. The communication between the TPA and the other entities is
authenticated, and the verification period is determined by the user. At a
point in time when data integrity should be verified, the TPA first
extracts the hash values of the k latest successive blocks confirmed on
the blockchain, where k denotes the number of blocks deep used to confirm
a transaction, and sends a challenge message to the cloud server. Upon
receiving the challenge message, the cloud server computes the
corresponding proof. The TPA checks the validity of the proof to verify
the data integrity. If the check fails, the TPA informs the user that the
data may be corrupted.
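
As a simplified sketch of this challenge-response exchange (reusing the
Block class above; serverProof stands in for the cloud server's proof
computation and is an illustrative name):

using System;
using System.Collections.Generic;

static class Auditor
{
    // The TPA challenges randomly chosen block indices; the server returns
    // a freshly computed hash for each, which is compared against the hash
    // recorded on the blockchain.
    public static bool Audit(IList<Block> chain, Func<int, string> serverProof,
                             int challengeCount)
    {
        var rnd = new Random();
        for (int i = 0; i < challengeCount; i++)
        {
            int idx = rnd.Next(chain.Count);
            if (serverProof(idx) != chain[idx].Hash)
                return false; // proof mismatch: data may be corrupted
        }
        return true;
    }
}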

DATA SHARING

In the data sharing concept, the storage server is the most important
module. It stores a huge amount of data securely, including the encrypted
data and the key used for encryption. When a user requires his data, he
sends a request to the storage server. Two keys are used, one for
encryption and one for decryption, so data sharing can be done in a secure
manner.
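
One plausible realization of this two-key arrangement is hybrid
encryption: a symmetric AES key encrypts the file, and the recipient's RSA
public key wraps that AES key. The sketch below assumes that design; it is
not the project's actual scheme.

using System;
using System.Security.Cryptography;

static class SecureSharing
{
    // Encrypt the file with a fresh AES key, then wrap the AES key with
    // the recipient's RSA public key so only that user can unwrap it.
    public static void Share(byte[] fileData, RSACryptoServiceProvider recipient,
                             out byte[] cipher, out byte[] iv, out byte[] wrappedKey)
    {
        using (var aes = Aes.Create())
        {
            using (var enc = aes.CreateEncryptor())
                cipher = enc.TransformFinalBlock(fileData, 0, fileData.Length);
            iv = aes.IV;                                   // needed for decryption
            wrappedKey = recipient.Encrypt(aes.Key, true); // OAEP padding
        }
    }
}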

CHAPTER 5

SYSTEM SPECIFICATION

5.1 HARDWARE REQUIREMENTS

• CPU type : Intel Pentium 4


• Clock speed : 3.0 GHz
• RAM size : 512 MB
• Hard disk capacity : 40 GB
• Monitor type : 15 Inch color monitor
• Keyboard type : internet keyboard

5.2 SOFTWARE REQUIREMENTS

• Operating System : Windows 7


• Language : ASP.NET
• IDE : Microsoft Visual Studio 2010
• Back End : SQL Server 2008

5.3 SOFTWARE DESCRIPTION

5.3.1 FRONT END SOFTWARE

.NET Overview: Visual Studio .NET

VB.NET stands for Visual Basic.NET, and it is a computer


programming language developed by Microsoft. It was first released in 2002 to
replace Visual Basic 6. VB.NET is an object-oriented programming language.
This means that it supports the features of object-oriented programming which
include encapsulation, polymorphism, abstraction, and inheritance.

Visual Basic .NET runs on the .NET Framework, which means that it has
full access to the .NET libraries. It is a very productive tool for rapid creation
of a wide range of Web, Windows, Office, and Mobile applications that have
been built on the .NET framework.

The language was designed in such a way that it is easy to understand for


both novice and advanced programmers. Since VB.NET relies on the .NET
framework, programs written in the language run with much reliability and
scalability. With VB.NET, you can create applications that are fully object-
oriented, similar to the ones created in other languages like C++, Java, or C#.
Programs written in VB.NET can also interoperate well with programs written
in Visual C++, Visual C#, and Visual J#. VB.NET treats everything as an
object.

It is true that VB.NET is an evolved version of Visual Basic 6, but it's


not compatible with it. If you write your code in Visual Basic 6, you cannot
compile it under VB.NET.

Visual Studio .NET is the rapid application development tool for

BASIC. Visual Studio .NET offers complete integration with ASP.NET and
enables you to drag and drop server controls and design Web Forms as they
should appear when the user views them. Some of the other advantages of
creating BASIC applications in Visual Studio .NET are:

• Visual Studio .NET is a Rapid Application Development (RAD) tool. Rather
than adding every control to the Web Form programmatically, it helps to
add these controls by using the toolbox, saving programming effort.

• Visual Studio .NET supports custom and composite controls. You can
create custom controls that encapsulate common functionality that may need
to be used in multiple applications.

• Visual Studio .NET does an excellent job of simplifying the creation and
consumption of Web Services. Much of the programmer-unfriendly work
(creating all the XML-based documents) happens automatically, without much
effort on the developer's side.

• Attribute-based programming is a powerful concept that enables Visual
Studio .NET to automate a great deal of tedious programming work.

.NET programming languages

The .NET Framework provides a set of tools that help to build code
that works with the .NET Framework. Microsoft also supplies a set of
languages that are already .NET compatible; BASIC is one of those
languages.

The following reasons make VB.Net a widely used professional language

• Modern, general purpose.

• Object oriented.

• Component oriented.

• Easy to learn.

• Structured language.

• It produces efficient programs.

• It can be compiled on a variety of computer platforms.

• Part of .Net Framework.

VB.NET Features

VB.NET comes loaded with numerous features that have made it a


popular programming language amongst programmers worldwide. These
features include the following:
• VB.NET is not case sensitive, unlike languages such as C++ and Java.
• It is an object-oriented programming language. It treats everything as an
object.
• Automatic code formatting, XML designer, improved object browser
etc.
• Garbage collection is automated.
• Support for Boolean conditions for decision making.
• Simple multithreading, allowing your apps to deal with multiple tasks
simultaneously.
• Simple generics.
• A standard library.
• Events management.
• References. An external object that is to be used in a VB.NET
application must be referenced.
• Attributes, which are tags for providing additional information regarding
elements that have been defined within a program.
• Windows Forms- you can inherit your form from an already existing
form.

Advantages of VB.NET

The following are the pros/benefits you will enjoy for coding in VB.NET:

• Code will be formatted automatically.


• Use object-oriented constructs to create an enterprise-class code.
• Can create web applications with modern features like performance
counters, event logs, and file system.
• Can create your web forms with much ease through the visual forms
designer. You will also enjoy drag and drop capability to replace any
elements that you may need.

• Can connect your applications to other applications created in languages
that run on the .NET framework.
• Will enjoy features like docking, automatic control anchoring, and an
in-place menu editor, all useful for developing web applications.

ASP.NET environment

Active Server Pages (ASP) was released by Microsoft to enable the

creation of dynamic pages based on user input and interaction with a Web
site. ASP.NET improves on the original ASP by providing code-behind. With
ASP.NET and code-behind, the code and the HTML can be separated.

ASP.NET Web services are XML-based services that are exposed on the

Internet and can be accessed by other Web services and by Web service
clients.

ASP.NET

ASP.NET is more than the next version of Active Server Pages

(ASP); it is a unified Web development platform that provides the services
necessary for developers to build enterprise-class Web applications. While
ASP.NET is largely syntax compatible with ASP, it also provides a new
programming model and infrastructure for more secure, scalable, and stable
applications.

ASP.NET is a compiled, .NET-based environment; you can author

applications in any .NET compatible language, including Visual Basic .NET,
BASIC, and JScript .NET. Furthermore, the entire .NET Framework is
available to any ASP.NET application. Developers can easily access the
benefits of these technologies, which include the managed common language
runtime environment, type safety, inheritance, and so on.

ASP.NET has been designed to work seamlessly with WYSIWYG

HTML editors and other programming tools, including Microsoft Visual
Studio .NET. Not only does this make Web development easier, it also
provides all the benefits that these tools have to offer, including a GUI
that developers can use to drop server controls onto a Web page and fully
integrated debugging support. Developers can choose from the following two
features when creating an ASP.NET application, Web Forms and Web services,
or combine these in any way they see fit.

• Web Forms allow you to build powerful forms-based Web

pages. When building these pages, you can use ASP.NET server controls to
create common UI elements and program them for common tasks. These
controls allow you to rapidly build a Web Form out of reusable built-in or
custom components, simplifying the code of a page.

• An XML Web service provides the means to access server

functionality remotely.

FEATURES

➢ Intuitive C++ based Language

Use a language modeled on C++ syntax, immediately familiar to C++ and

Java developers, along with intuitive new language constructs that greatly
simplify development tasks.

➢ Reliable Interoperability

Use code to call native Windows APIs, use pre-built COM

components, and leverage existing ActiveX controls to seamlessly integrate
existing applications and components.

➢ Advanced, Component-Oriented Language

Take advantage of built-in support for properties, indexers, delegates,
single and multidimensional arrays, advanced inheritance, attributes,
versioning, and XML comments.

➢ Capable Debugging and Testing Tools

ASP.NET includes a capable remote and multi-language debugger,

enabling developers to test applications and build reliable multi-tier
solutions that span process boundaries and are written in multiple
programming languages.

.NET Framework class library

Gain mature and capable built-in functionality, including a

rich set of collection classes, networking support, multithreading
support, string and regular expression classes, and wide support for XML,
XML schemas, XML namespaces, XSLT, XPath, and SOAP.

Powerful Web Development Environment:

Create Web-based solutions in C# using the shared Web Forms

Designer and XML Designer. Developers can also use IntelliSense features
and tag completion, or choose the WYSIWYG editor for drag-and-drop
authoring, to build interactive Web applications.

.NET Framework

Microsoft designed VB from the beginning to take advantage of its new .NET

Framework. The .NET Framework is made up of four parts: the Common
Language Runtime, a set of class libraries, a set of programming
languages, and the ASP.NET environment. The .NET Framework was designed
with three goals in mind. First, it was intended to make Windows
applications considerably more reliable, while also providing applications
with a greater degree of security.

Second, it was intended to simplify the development of Web
applications and services that work not only in the traditional sense, but
on mobile devices as well. Finally, the framework was designed to provide
a single set of libraries that would work with multiple languages. The
.NET Framework is the base for the new Microsoft .NET Platform.
Furthermore, it is a common environment for building, deploying, and
running Web applications and Web services. The .NET Framework contains a
common language runtime and common class libraries - like ADO.NET, ASP.NET
and Windows Forms - to provide advanced standard services that can be
integrated into a variety of computer systems. The .NET Framework provides
a feature-rich application environment, simplified development and easy
integration between a number of different development languages. The .NET
Framework is language neutral. At present it supports C++, C#, Visual
Basic, and JScript. Microsoft's Visual Studio .NET is a common development
environment for the new .NET Framework.

Integrating with IIS

IIS is the web server used here. IIS 5.0 or above is essential for the
ASP.NET environment. This release of ASP.NET uses IIS 5.0 as the primary
host environment. IIS always assumes that a set of credentials maps to a
Windows NT account and uses them to authenticate a user. There are three
different types of authentication available in IIS 5.0: BASIC, DIGEST, and
INTEGRATED WINDOWS authentication (NTLM or Kerberos). You can select the
type of authentication to use in the IIS administrative services.

If you request a URL containing an ASP.NET application, the request
and authentication information are handed off to the application. ASP.NET
provides two additional types of authentication, Forms authentication and
Passport authentication.

Web Service

Web services are arguably the most exciting and innovative

features of Microsoft's .NET initiative, and they are likely to profoundly
influence the way businesses interact using computer applications. The
list of possible Web services is as varied as the list of possible
business opportunities. A Web service would typically perform a core
business function such as user authentication, credit card validation,
pricing a derivative security, submitting a purchase order for a stock, or
pricing a same-day shipment.

A web service is a component that performs a function or

service. A component is a piece of software that has a well-defined
interface, hidden internals, and the capability of being discovered.
"Discovered" means that you can determine what the component does without
needing to see the code inside of it. A component is similar to a method,
since we can call it with arguments that fit a set of parameters, and it
has the capability of returning results.

A web service may also return information to the caller. This

service resides somewhere on the Web and can be accessed from other
locations on the Web. For this service to be called, a number of elements
must be in place. First, the caller must know how to call the service.
Second, the call must be made across the Web. Finally, the web service
must know how to respond.

Database Management System

A database management system is a software application that is used

for managing different databases. It helps us to create and manage
databases. With the help of a DBMS, we can take care of the following
tasks:
1. Data Security

2. Data Backup

3. Manages huge amount of data

4. Data export & import

5. Serving multiple concurrent database requests

6. Gives us a way to manage the data using programming languages.

5.3.2 SQL SERVER 8.0

SQL stands for Structured Query Language. SQL is the language used to
create, edit and manipulate a database. In other words, SQL is used to manage
data held within a relational database management system (RDBMS).

Because this phase focuses on database design, we will not work with

SQL directly here, but will design our database to work with SQL in the
future (once it is completely designed and ready to be programmed).

SQL is the general language used to communicate with relational

database management systems. This means that we use SQL to communicate
with MySQL, Oracle, SQL Server, and so on, so learning SQL is broadly
useful. An RDBMS takes SQL and uses it to do something with the database.
The SQL can come directly from hand-typed input, or from another source
(such as a PHP script).

Relational database systems are the most important database systems

used in the software industry today. One of the most notable systems is
Microsoft SQL Server. SQL Server is a database management system developed
and marketed by Microsoft. It runs exclusively under Windows NT and
Windows 95/98.

➢ The most important features of SQL Server 8 are:

• SQL Server is easy to use.

• SQL Server scales from a mobile laptop to symmetric multiprocessor

systems.

• SQL Server provides data warehousing features that until recently

have only been available in Oracle and other more expensive DBMSs.

A database system is an overall collection of different database

software components and databases, containing the following parts:
database application programs, front-end components, database management
systems, and databases.

➢ A database system must provide the following features:

• A variety of user interfaces

• Physical data independence

• Logical data independence

• Query optimization

• Data integrity

• Concurrency control

• Backup and recovery

• Security and authorization

SQL Server is a Relational Database Management System. The SQL

Server relational language is called Transact-SQL. SQL is a set-oriented
language. This means that SQL can query many rows from one or more tables
using just one statement. This feature allows the language to be used at a
logically higher level than procedural languages. Another important
property of SQL is that it is non-procedural. SQL contains two
sublanguages, DDL and DML.

SQL Server functions as a natural extension of Windows NT

and Windows 95/98. SQL Server is relatively simple to manage through the
use of a graphical computing environment for almost every task of system
and database administration. SQL Server uses services of Windows NT to
offer new or extended database capabilities, such as sending and receiving
messages and managing login security.

The SQL Server administrator's primary tool for interacting with the
system is Enterprise Manager. The Enterprise Manager has two main
purposes: administration of the database server and management of database
objects.

• SQL Server Query Analyzer provides a graphical presentation of the

execution plan of a query and an automated component that suggests which
index should be used for a selected query. This interactive component of
SQL Server performs tasks like:

• Generating and executing Transact-SQL statements

• Storing the generated Transact-SQL statements in a file

• Analyzing execution plans for generated queries

• Graphically displaying the execution plan for a selected

query.

A stored procedure is a special kind of batch written in Transact-

SQL, using the SQL language and SQL extensions. It is saved on the
database server to improve the performance and consistency of repetitive
tasks. SQL Server supports stored procedures and system procedures. Stored
procedures can be used for the following purposes: to control access
authorization, to create an audit trail of activities in database tables,
and to separate data definition and data manipulation statements
concerning a database from all corresponding applications.
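
For illustration, the snippet below invokes a stored procedure from C#
through ADO.NET; the procedure name "usp_LogFileAccess" and its parameters
are hypothetical, not objects defined by this project.

using System.Data;
using System.Data.SqlClient;

static class AuditTrail
{
    // Call a stored procedure that records which user accessed which file.
    public static void LogFileAccess(string connectionString, int userId, int fileId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("usp_LogFileAccess", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@UserId", userId);
            cmd.Parameters.AddWithValue("@FileId", fileId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}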

A database view can be used for:

• Restricting the use of particular columns and rows of tables, that is,

controlling access to a particular part of one or more tables,

• Hiding the details of complicated queries, and restricting

inserted and updated values to certain ranges.

The Query Optimizer is the part of SQL Server that decides how to
best perform a query. It generates several query execution plans for the
given query and selects the plan with the lowest cost.

SQL Server can work in one of two security modes:

• Windows NT

• Mixed

Windows NT security mode exclusively uses Windows NT user accounts

to log in to the SQL Server system. Mixed mode allows users to connect to
SQL Server using either the Windows NT security system or the SQL Server
security system. In addition, SQL Server provides three security
facilities for controlling access to database objects:

• The Transact-SQL statements GRANT, DENY, and REVOKE.

• Views.

• Stored procedures.

A Windows NT user account or a SQL Server login name allows a
user to log in to the SQL Server system. A user who then needs to access a
database of the system needs a database user account to work in that
database. Users must therefore have a database user account for each
database they want to use. If there is no such account, the user may be
allowed to work in the database under the guest account.

Stored procedures can also be used to restrict data access.

The restriction of data access using stored procedures is based on the
property that the permission to execute a stored procedure is independent
of any permissions for the database objects referenced by the stored
procedure.

SQL Server provides a mechanism called a trigger for enforcing

procedural integrity constraints.

A DBMS handles two kinds of integrity constraints:

➢ Declarative integrity constraints, defined using CREATE and ALTER

TABLE statements.

➢ Procedural integrity constraints, handled by triggers.

A trigger is a mechanism that is invoked when a particular action

occurs on a particular table. Every trigger has three general parts:

• A name

• The action

• The execution

SQL Server keeps a record of each change it makes to the database

during a transaction. This is essential in case an error occurs during the
execution of the transaction. In that case, all previously executed
statements within the transaction must be rolled back. SQL Server keeps
all of these records, in particular the before and after values, in one or
more files called the transaction log. Each database of the SQL Server
system has its own transaction log. Concurrency in multi-user systems such
as SQL Server has a marked effect on performance. When access to the data
is handled such that only one program at a time can use the data,
processing slows significantly. SQL Server, like all other DBMSs, solves
this problem using transactions. All statements inside a transaction form
an atomic unit. This means that either all statements are executed or, in
the case of failure, all statements are cancelled.

Features of SQL Server

Microsoft SQL Server supports a full set of features that result in

the following benefits. SQL Server includes a set of administrative and
development tools that improve our ability to install, deploy, manage and
use SQL Server across several sites.

➢ Scalability

The same database engine can be used across platforms ranging from
laptop computers running Microsoft Windows 95 to large multiprocessor
servers running Microsoft Windows NT, Enterprise Edition.

➢ Ease in building data warehouses

SQL Server includes tools for extracting and analyzing

summary data for online analytical processing (OLAP). SQL Server also
includes tools for visually designing databases and analyzing data using
English-based queries.

SQL API (SQL Application Programming Interface)

Embedded SQL applications use the DB-Library DLL to access SQL

Server. Clients of the SQL Server ODBC driver do not access Microsoft SQL
Server directly; they use an application written to access the data in SQL
Server. SQL Server can also be accessed through COM, Microsoft ActiveX, or
Windows DNA (Windows Distributed interNet Applications Architecture)
components. Applications are written to access SQL Server through a
database Application Programming Interface (API).

Web Clients

A Web client consists of two parts:

• Dynamic Web pages containing various types of markup language, which

are generated by Web components running in the Web tier.

• A Web browser, which renders the pages received from the server.

A Web client is sometimes called a thin client. Thin

clients usually do not query databases, execute complex business rules, or
connect to legacy applications.

Within SQL, we have two forms of languages. These forms differ in that
one is used to build and edit the structure of the database while the other is
used to create and edit the actual data within the database. These two languages
are known as data definition language and data manipulation language.

Data Definition Language (DDL)

Data definition language is one of the subcategories of SQL. It is used


to define and work with the database schema (structure). This includes the
attributes (columns) within each table, the name of each table, the name of the
database, and the connection of keys between tables. Here are general
explanations of the types of commands in DDL:
CREATE – used to create the database, the tables, and the columns
within each table. Within the create statement we also define the data type of
each column. A data type is literally the type of data we are supposed to store
within each column, whether it be an integer, a date, or a string.

ALTER – used to alter existing database structures. This includes adding


columns and more.

RENAME – This is used to rename existing database objects.

DROP – This is used to delete a database or table.

Data Manipulation Language (DML)

Data manipulation language is used to work with the actual data within
the database. If we look at an example with a users table, the table is
created with DDL, while a value such as "Caleb Curry" is entered using
DML.

The main statements in DML are:

SELECT – this is used to select data from our database. We first say
SELECT and then we say what columns to select. After we say what columns,
we specify what tables using FROM. After we select what columns and what
tables we can limit our results using a WHERE clause.

INSERT INTO – This is used to insert new values.

UPDATE – This is used to change values.

DELETE – this is used to delete values (the database structure stays the
same, only inserted values are removed).
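
The snippet below shows both sublanguages issued from C# via ADO.NET, as a
sketch only; the Files table and its columns are illustrative names, not
the project's actual schema.

using System;
using System.Data.SqlClient;

static class SqlDemo
{
    public static void Run(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // DDL: define the structure of the table.
            new SqlCommand(
                "CREATE TABLE Files (Id INT PRIMARY KEY, Name VARCHAR(100), Hash CHAR(64))",
                conn).ExecuteNonQuery();

            // DML: insert a row, then read it back with a parameterized query.
            var insert = new SqlCommand(
                "INSERT INTO Files (Id, Name, Hash) VALUES (@id, @name, @hash)", conn);
            insert.Parameters.AddWithValue("@id", 1);
            insert.Parameters.AddWithValue("@name", "report.pdf");
            insert.Parameters.AddWithValue("@hash", new string('0', 64));
            insert.ExecuteNonQuery();

            var select = new SqlCommand("SELECT Name FROM Files WHERE Id = @id", conn);
            select.Parameters.AddWithValue("@id", 1);
            Console.WriteLine(select.ExecuteScalar());
        }
    }
}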

HTML

HTML is a markup language for describing web documents (web pages).

• Hyper is the opposite of linear. It used to be that computer programs had


to move in a linear fashion. This before this, this before this, and so on.
HTML does not hold to that pattern and allows the person viewing the
World Wide Web page to go anywhere, any time they want.
• Text is what you will use. Real, honest to goodness English letters.
• Mark up is what you will do. You will write in plain English and then
mark up what you wrote.
• Language because HTML is a language with its own rules and syntax,
although the vocabulary it uses is plain English.

HTML stands for Hyper Text Markup Language. It is a simple text

formatting language used to create hypertext documents. It is a
platform-independent language, unlike most other programming languages.
HTML is neutral and can be used on many platforms or desktops. It is this
feature of HTML that makes it popular as the standard on the WWW.

This versatile language allows the creation of hypertext links,

also known as hyperlinks. These hyperlinks can be used to connect
documents on different machines, on the same network or on a different
network, or can even point to a particular piece of text within the same
document.

HTML is used for creating documents where the emphasis is on the

appearance of the document. It is also used for desktop publishing (DTP).
Documents created using HTML can contain text with different sizes,
weights and colours. They can also contain graphics to make the document
more effective.

CHAPTER 6
SYSTEM DESIGN

6.1 UML DIAGRAM

UML stands for Unified Modeling Language. UML is a standardized


general-purpose modeling language in the field of object-oriented software
engineering. The standard is managed, and was created by, the Object
Management Group.

The goal is for UML to become a common language for creating models
of object-oriented computer software. In its current form UML is comprised
of two major components: a meta-model and a notation. In the future, some
form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying,
visualizing, constructing and documenting the artifacts of a software
system, as well as for business modeling and other non-software systems.

The UML represents a collection of best engineering practices that have


proven successful in the modeling of large and complex systems.

The UML is a very important part of developing object-oriented

software and the software development process. The UML uses mostly
graphical notations to express the design of software projects.

GOALS:

The Primary goals in the design of the UML are as follows:

1. Provide users a ready-to-use, expressive visual modeling language so
that they can develop and exchange meaningful models.

2. Provide extendibility and specialization mechanisms to extend the core
concepts.

3. Be independent of particular programming languages and development
processes.

4. Provide a formal basis for understanding the modeling language.

5. Encourage the growth of the OO tools market.

6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.

7. Integrate best practices.

6.2 USECASE DIAGRAM

A use case is a list of steps, typically defining interactions between a


role (known in Unified Modeling Language (UML) as an "actor") and a
system, to achieve a goal. The actor can be a human, an external system, or
time. In systems engineering, use cases are used at a higher level than within
software engineering, often representing missions or stakeholder goals.

[Figure: the actors Data Owner, Server, TPA and User interact with the
use cases Registration, Approval, File Upload, Block Creation using SHA,
File Storage, File Auditing, Audit Response, File Request and File
Access.]

Fig 6.1 Usecase Diagram

6.3 CLASS DIAGRAM

The class diagram is the main building block of object-

oriented modeling. It is used both for general conceptual modeling of the
structure of the application, and for detailed modeling translating the
models into programming code. Class diagrams can also be used for data
modeling. The classes in a class diagram represent both the main elements
and interactions in the application, and the classes to be programmed.

[Figure: four classes. Data Owner (Owner Details, File Details;
Enrolment, Upload Files, Send Audit Request, View Audit Result, Key
Sharing), Server (Data; Data Storage, Block Creation using SHA, File
Verification), Data User (Register Details, File Details; Enrolment, File
Request, File Access) and TPA (File Details; Get Audit Request, File
Audit, Send Response).]

Fig 6.2 Class Diagram

6.4 SEQUENCE DIAGRAM

A sequence diagram is an interaction diagram that shows how processes

operate with one another and in what order. It is a construct of a Message
Sequence Chart. A sequence diagram shows object interactions arranged in
time sequence. Sequence diagrams are sometimes called event trace
diagrams, event scenarios, or timing diagrams. A sequence diagram shows,
as parallel vertical lines, the different processes that live
simultaneously, and, as horizontal arrows, the messages exchanged between
them.

[Figure: the lifelines Data Owner, Server, TPA and User exchange the
messages 1 Data Owner Registration, 2 Enrolment, 3 Allocate Space, 4
Upload File, 5 Block Creation using SHA, 6 File Storage, 7 Send Audit
Request, 8 File Audit, 9 Audit Response, 10 Send Audit Response, 11 Send
File Request and 12 File Access.]

Fig 6.3 Sequence Diagram


6.5 ACTIVITY DIAGRAM

Activity diagrams are graphical representations of workflows of

stepwise activities and actions, with support for choice, iteration and
concurrency. In the Unified Modeling Language, activity diagrams are
intended to model both computational and organizational processes.
Activity diagrams show the overall flow of control. An activity diagram
has an initial and a final state, and the activities are placed between
those states.

[Figure: flow from Owner Registration to File Upload, Block Creation
using SHA, File Storage, File Auditing and finally File Sharing.]

Fig 6.4 Activity Diagram

6.6 DATA FLOW DIAGRAM

1. The DFD is also called a bubble chart. It is a simple graphical
formalism that can be used to represent a system in terms of the input
data to the system, the various processing carried out on this data, and
the output data generated by the system.

2. The data flow diagram (DFD) is one of the most important modeling
tools. It is used to model the system components. These components are the
system process, the data used by the process, the external entities that
interact with the system, and the information flows in the system.

3. The DFD shows how information moves through the system and how it is
modified by a series of transformations. It is a graphical technique that
depicts information flow and the transformations that are applied as data
moves from input to output.

4. A DFD may be used to represent a system at any level of abstraction,
and may be partitioned into levels that represent increasing information
flow and functional detail.

Level 0

[Figure: the Data Owner, TPA and Data User interact with the framework,
which connects to the blockchain data storage.]

Fig 6.5 Data Flow level 0

Level 1

[Figure: the Data Owner registers, gets approval, and accesses the cloud
backed by blockchain data storage.]

Fig 6.6 Data Flow level 1

Level 2

[Figure: the Data Owner logs in and uploads data; the data is encrypted
and blocks are created in the blockchain cloud storage.]

Fig 6.7 Data Flow level 2

Level 3

[Figure: the Data Owner sends a data audit request; the TPA issues an
audit challenge to the cloud storage and receives the audit proof.]

Fig 6.8 Data Flow level 3

Level 4

[Figure: the Data User sends a file request, receives access permission,
and retrieves the data from cloud storage.]

Fig 6.9 Data Flow level 4

CHAPTER 7

SYSTEM TESTING

7.1 TYPES OF TESTING

Software testing is a method of assessing the functionality of a software


program. There are many different types of software testing but the two main
categories are dynamic testing and static testing. Dynamic testing is an
assessment that is conducted while the program is executed; static testing, on
the other hand, is an examination of the program's code and associated
documentation. Dynamic and static methods are often used together.

Testing is a set of activities that can be planned and conducted

systematically. Testing begins at the module level and works towards the
integration of the entire computer-based system. Nothing is complete
without testing, as it is vital to the success of the system.
Testing Objectives:

There are several rules that can serve as testing objectives; they are:
1. Testing is a process of executing a program with the intent of
finding an error
2. A good test case is one that has high probability of finding an
undiscovered error.
3. A successful test is one that uncovers an undiscovered error.

If testing is conducted successfully according to the objectives stated

above, it will uncover errors in the software. Testing also demonstrates
that the software functions appear to be working according to the
specification and that the performance requirements appear to have been
met.
There are three ways to test a program

1. For Correctness
2. For Implementation efficiency
3. For Computational Complexity.

Tests for correctness are supposed to verify that a program does exactly
what it was designed to do. This is much more difficult than it may at first
appear, especially for large programs. It is a code-refining process, which
reexamines the implementation phase of algorithm development. Tests for
computational complexity amount to an experimental analysis of the
complexity of an algorithm or an experimental comparison of two or more
algorithms, which solve the same problem.
The testing is carried in the following steps,

1. Unit Testing
2. Integration Testing
3. Validation Testing
4. System Testing
5. Acceptance Testing
6. Functional Testing

1. Unit Testing

Unit testing refers to testing each individual program. This is

sometimes called program testing. This test should be carried out during
the programming stage in order to find errors in the coding and logic of
each program in each module. Unit testing focuses verification effort on
the smallest unit of software design, the module. In this project, the
user must fill in each field; otherwise, the user is prompted to enter
values.

• Reduces Defects in the newly developed features or reduces bugs when


changing the existing functionality.
• Reduces Cost of Testing as defects are captured in very early phase.
• Improves design and allows better refactoring of code.
• Unit tests, when integrated with the build, indicate the quality of the
build as well.
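
As an illustrative sketch (using MSTest, and assuming the Block class
sketched in Chapter 4), a unit test for the hashing logic could look like
this:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class BlockTests
{
    // Tampering with a block's data must change its computed hash,
    // otherwise corrupted files could pass the integrity check.
    [TestMethod]
    public void Hash_Changes_When_Data_Changes()
    {
        var block = new Block { Index = 1, Data = "file-A", PreviousHash = "0" };
        string original = block.ComputeHash();

        block.Data = "file-B"; // simulate tampering
        Assert.AreNotEqual(original, block.ComputeHash());
    }
}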

2. Integration Testing

Software integration testing is the incremental integration testing of two


or more integrated software components on a single platform to produce
failures caused by interface defects. The task of the integration test is to check
that components or software applications, e.g. components in a software
system or – one step up – software applications at the company level – interact
without error.

3. Validation Testing

Valid and invalid data should be created, and the program should be
made to process this data to catch errors. Each module's user enters the
page through the login page using a user ID and password. If the user
gives a wrong password or user ID, a message such as "you must enter user
id and password" is shown. Here the inputs given by the user are
validated: password validation, correct date formats, and textbox
validation. Any changes needed are made based on the results of this
testing.

4. System Testing

System testing is used to test the entire system (Integration of all the
modules). It also tests to find the discrepancies between the system and the
original objective, current specification and system documentation. The entire
system is checked to correct deviation to achieve correctness.

5. Acceptance Testing

Acceptance testing can be defined in many ways, but a simple definition

is that it succeeds when the software functions in a manner that can
reasonably be expected by the customer. After the acceptance test has been
conducted, one of two possible conditions exists. This test checks whether
the inputs are accepted by the database and other validations, for example
accepting only numbers in numeric fields, date-format data in date fields,
and null checks for not-null fields. If any error occurs, error messages
are shown. Either the function or performance characteristics conform to
specification and are accepted, or a deviation from specification is
uncovered and a deficiency list is created. User acceptance testing is a
critical phase of any project and requires significant participation by
the end user. It also ensures that the system meets the functional
requirements.

6. Functional Testing

Functional test can be defined as testing two or more modules together


with the intent of finding defects, demonstrating that defects are not present,
verifying that the module performs its intended functions as stated in the
specification and establishing confidence that a program does what it is
supposed to do.

7.2 UNIT TESTING:

Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding
and unit testing to be conducted as two distinct phases. In the test
strategy and approach, field testing will be performed manually and
functional tests will be written in detail.

Test objectives

• All field entries must work properly.


• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.

Features to be tested

• Verify that the entries are of the correct format


• No duplicate entries should be allowed
• All links should take the user to the correct page.

Test Results: All the test cases mentioned above passed successfully. No
defects encountered.

7.3 INTEGRATION TESTING:

Software integration testing is the incremental integration testing of two


or more integrated software components on a single platform to produce
failures caused by interface defects. The task of the integration test is to check
that components or software applications, e.g. components in a software
system or – one step up –software applications at the company level – interact
without error.

Test Results: All the test cases mentioned above passed successfully. No
defects encountered.

7.4 ACCEPTANCE TESTING

User Acceptance Testing is a critical phase of any project and requires


significant participation by the end user. It also ensures that the system meets
the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No

defects encountered.

CHAPTER 8

SNAPSHOTS

Fig 8.1 Home Page

Fig 8.2 Owner Login

Fig 8.3 Owner Registration

Fig 8.4 Owner File Uploading

Fig 8.5 TPA approval

Fig 8.6 User Accessing the File

CHAPTER 9

APPENDIX

Home.aspx

<%@ Page Language="C#" AutoEventWireup="true"


CodeFile="Home.aspx.cs" Inherits="Home" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">

<head runat="server">

<meta charset="utf-8">

<meta http-equiv="X-UA-Compatible" content="IE=edge">

<title>Home</title>

<meta content="width=device-width, initial-scale=1, maximum-


scale=1, user-scalable=no" name="viewport">

<link rel="stylesheet"
    href="https://cdnjs.cloudflare.com/ajax/libs/fontawesome/4.5.0/css/font-awesome.min.css">

<link rel="stylesheet" href="scss/main.css">

<link rel="stylesheet" href="scss/skin.css">


<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js"></script>

<script src="http://netdna.bootstrapcdn.com/bootstrap/3.3.1/js/bootstrap.min.js"></script>

<script src="./script/index.js"></script>

</head>

<body id="wrapper">

<section id="top-header">

</div>

</section>

<header>

<nav class="navbar navbar-inverse">

<div class="container">

<div class="row">

<div class="navbar-header">

<a class="navbar-brand" href="#">

<h1>BlockChain</h1>

<span>Data Storage</span></a>

</div>

<div id="navbar" class="collapse navbar-collapse navbar-right">


<ul class="nav navbar-nav">

<li><ahref="Home.aspx">Home</a></li>
<li><ahref="OwnerLogin.aspx">OwnerLogin</a></li>
<li><ahref="NewOwner.aspx">NewOwner</a></li>
<li><a href="UserLogin.aspx">UserLogin</a></li>

<li><ahref="userReg.aspx">NewUser</a></li>
<li><ahref="TPAlogin.aspx">TPALogin</a></li>

<li><ahref="ServerLogin.aspx">ServerLogin</a></li>

</ul>

</div>

<!--/.nav-collapse -->

</div>

</div>

</nav>

</header>

<!--/.nav-ends -->

<div id="myCarousel" class="carousel slide">

<!-- Indicators -->

<ol class="carousel-indicators">

<li data-target="#myCarousel" data-slide-to="0"


class="active"></li>

<li data-target="#myCarousel" data-slide-to="1"></li>

<li data-target="#myCarousel" data-slide-to="2"></li>

</ol>

<!-- Wrapper for slides -->

<div class="carousel-inner">

<div class="item active">

<divclass="fill"style="background-image:url('img/banner-
slide-1.jpg');"></div>

</div>

<div class="item">

<div class="fill" style="background-


image:url('img/banner-slide-2.jpg');"></div>

</div>

<div class="item">

<div class="fill" style="background-


image:url('img/banner-slide-3.jpg');"></div>

CHAPTER 10

CONCLUSION

This work proposed a certificateless public verification scheme

against the procrastinating auditor, namely CPVPA. The proposed work
utilizes on-chain currencies, where the verification performed by the
auditor is integrated into a transaction on the blockchain of the on-chain
currency. Furthermore, the proposed system is free from the certificate
management problem. The security analysis demonstrates that CPVPA provides
the strongest security guarantee compared with existing schemes.

For future work, we will investigate how to construct CPVPA on

other blockchain systems. Since the main drawback of proofs of work (PoW)
is energy consumption, constructing CPVPA on other blockchain systems
(e.g., proof-of-stake-based blockchain systems) can save energy. However,
it requires an elaborate design to achieve the same security guarantee
while retaining high efficiency. This remains an open research issue that
should be explored further.

