
EFFICIENT AND PRIVACY-PRESERVING WITH

BLOCKCHAIN DATA SECURED COMPUTING

BY

PRASHANTH G – 71381902078
THANISHKAA P – 71381902122
VARNISH K – 71381902125

Report submitted in partial fulfilment


of the requirements for the
Degree of Bachelor of Engineering in
Computer Science and Engineering

Sri Ramakrishna Institute of Technology


Coimbatore – 641010
May, 2023.
ACKNOWLEDGEMENT

“A well-educated, sound and motivated workforce is the bedrock of social and
economic progress of our nation”. Our heartfelt thanks to everyone who helped us
to bring out this project in a successful manner.
We would like to express our sincere gratitude and hearty thanks to our
Principal Dr.M.PAULRAJ, for providing all kinds of technological resources to help
us excel.
We express our profound thanks to Dr.P.S.PRAKASH, Head of the
Department, Computer Science and Engineering, for extending his help in using all the
lab facilities.
We express our sincere thanks and deep regards to our project coordinator
Dr.N.S.KAVITHA, Associate Professor, Department of Computer Science and
Engineering for being supportive during our project.
We take immense pleasure to thank our project guide Ms.V.KAVIYA DEVI,
Assistant Professor, Department of Computer Science and Engineering, for spending her
valuable time in guiding us and for her constant encouragement throughout the success of
this project.
Finally, we would like to express our heartfelt thanks to our beloved parents for
their blessing, our friends for their help and wishes for the successful completion of this
project.

ii
APPROVAL AND DECLARATION

This project report titled “EFFICIENT AND PRIVACY-PRESERVING WITH


BLOCKCHAIN DATA SECURED COMPUTING” was prepared and submitted
by PRASHANTH.G(71381902078), THANISHKAA.P(71381902122), VARNISH.K
(71381902125) and has been found satisfactory in terms of scope, quality and
presentation as partial fulfilment of the requirement for the Bachelor of
Engineering (Computer Science Engineering) in Sri Ramakrishna Institute of
Technology, Coimbatore (SRIT).

Checked and Approved by

Ms.V.Kaviya Devi
Project Supervisor
Assistant Professor

Department of Computer Science Engineering


Sri Ramakrishna Institute of Technology, Coimbatore -10.
May, 2023.

iii
BONAFIDE CERTIFICATE

Certified that this project report “EFFICIENT AND PRIVACY-PRESERVING


WITH BLOCKCHAIN DATA SECURED COMPUTING” is the bonafide work of
“PRASHANTH.G(71381902078), THANISHKAA.P(71381902122), VARNISH.K
(71381902125)” who carried out the project work under my supervision.

SIGNATURE SIGNATURE
Ms.V.KAVIYA DEVI Dr.P.S PRAKASH
ASSISTANT PROFESSOR PROFESSOR AND HEAD
Department of Computer Science and Department of Computer Science and
Engineering Engineering
Sri Ramakrishna Institute of Technology, Sri Ramakrishna Institute of Technology,
Coimbatore-10. Coimbatore-10.

Submitted for viva-voce examination held on………………...

INTERNAL EXAMINER EXTERNAL EXAMINER

iv
EFFICIENT AND PRIVACY-PRESERVING WITH
BLOCKCHAIN DATA SECURED COMPUTING

ABSTRACT

Because of concerns about cloud security, data is encrypted before being outsourced to
the cloud, which complicates the efficient execution of queries on ciphertexts. Current
cryptographic solutions offer weak generality, limited verifiability, significant
computational complexity, and limited operation support. Data owners may share their
outsourced data with a large number of users in the cloud, and those users may want to
retrieve only the data files that correspond to the queries they are interested in. The most
commonly used method for doing this is keyword-based retrieval. This work suggests a
new searchable encryption system that makes use of cutting-edge cryptographic
techniques such as homomorphic encryption. In the suggested technique, the data owner
uses homomorphic encryption to protect the searchable index. A multi-keyword query is
sent to the cloud server, which computes the scores from the encrypted index stored
there and then sends the encrypted scores of the files to the data user. The data user then
decrypts the scores, chooses the top-k files with the highest scores, and sends a request
for their identifiers to the cloud server. The cloud server and the data user must
therefore communicate twice in order to retrieve the data.

v
TABLE OF CONTENTS

TITLE PAGE NO.

ACKNOWLEDGEMENT ii

APPROVAL AND DECLARATION iii

BONAFIDE CERTIFICATE iv

ABSTRACT v

TABLE OF CONTENTS vi

LIST OF FIGURES viii

LIST OF ABBREVIATIONS ix

CHAPTER 1 INTRODUCTION 11
1.1 Background History 12
1.2 Problem Statement 13
1.3 Applications 13
1.3.1 Security by the blocks 15
1.3.2 Asymmetric key cryptography 15
1.3.3 Public Key Infrastructure – PKI 17
1.3.4 The role of cryptography in blockchain security 18
1.3.5 Fog Computing 18
1.4 Scope of the Project 19
1.5 Existing System 20
1.6 Proposed system 21

vi
CHAPTER 2 LITERATURE SURVEY 22

CHAPTER 3 REQUIREMENT SPECIFICATIONS 27


3.1 Software Requirements 27
3.2 Hardware Requirements 27

CHAPTER 4 SYSTEM SPECIFICATIONS 28


4.1 Homomorphic Encryption 28
4.1.1 Homomorphic Message Authenticators 28
4.2 Flowchart 30
4.3 About the Software 31

CHAPTER 5 RESULT AND DISCUSSION 34

CHAPTER 6 CONCLUSION 41
6.1 Summary 41
6.2 Future Work 41

CHAPTER 7 REFERENCES 42

APPENDIX 43

PLAGIARISM CERTIFICATE 61

JOURNAL PUBLISHED 63

vii
LIST OF FIGURES

4.1. Blockchain connection 15

4.2. Homomorphic Encryption 29

4.3. Flowchart 30

4.4. Java Compilation 31

5.1.1. Home Page 1 34

5.1.2. Home Page 2 34

5.1.3. User Signup Page 1 35

5.1.4. Admin Login Page 1 35

5.1.5. Admin Approval Page 1 36

5.1.6. Approval Successful Page 1 36

5.1.7. User File Upload Page 1 37

5.1.8. User File Upload Successful Page 1 37

5.1.9. Download Page 1 38

5.1.10. File Damaged Page 38

5.1.11. Verifier Page 1 39

5.1.12. Primary Backup Page 1 39

5.1.13. Backup Page 2 40

5.1.14. Backup Page 3 40

viii
LIST OF ABBREVIATIONS

PKI Public Key Infrastructure

POW Proof of Work

SHA Secure Hash Algorithm

TPA Third Party Auditor

IoT Internet of Things

QoS Quality of Service

FNC Fog Node Clusters

FN Fog Nodes

SDN Software Defined Network

BFNC Blockchain based fog node cluster

SCN SDN Core Network

ACL Access Control List

BH Block Hash

CPU Central Processing Unit

NFV Network Functions Virtualization

MAC Message Authentication Code

ix
FC Fog Computing

CC Cloud Computing

BC Block Chain

RCC Remote Cloud Centre

CS Cloud Server

CSI Channel State Information

F-RAN Fog Radio Access Network

C-RAN Cloud Radio Access Network

D-RAN Distributed Radio Access Network

CSP Cloud Service Provider

CCE Cloud Computing Environment

EHR Electronic Health Record

JDBC Java Data Base Connectivity

JSP Java Server Page

ODBC Open Data Base Connectivity

RRS Remote Radio System

CA Certificate Authority

x
CHAPTER 1

INTRODUCTION

Fog computing compensates for the drawbacks of cloud computing. It has numerous
benefits, but several aspects must be understood simultaneously, including security,
resource management, and storage. The suggested model uses the blockchain's reward
and punishment system to encourage active resource contribution from fog nodes. To
create a transparent, open, and tamper-free service, the fog node's actions while
providing resources, and the extent to which a task was completed when contributing
resources, are bundled into blocks and recorded in the blockchain system. The literature
offers a five-category breakdown of threat models against IoT-based fog models,
covering attacks on privacy, authentication, confidentiality, availability, and integrity.
The fog computing environment can host privacy-focused blockchain-based solutions as
well as consensus algorithms for IoT applications, suitably modified for Trust Chain.
Several fog nodes (FNs) are part of the Software Defined Network (SDN) core network
and make up the fog computing system. Fog computing refers to an architecture that
places FNs and mobile devices (MDs) at the edge in order to provide a significant
quantity of computing power and storage space. Most blockchain applications require a
significant amount of computing power and storage space. We suggest a Blockchain-
based fog node cluster (BFNC) to reduce the amount of computational power and
storage space needed. To conserve storage capacity, a blockchain in a BFNC has a
static length restriction. Additionally, a BFNC, being a small-scale P2P network,
generates blockchains with less CPU usage than a large-scale P2P network. A group of
blockchain-secured fog nodes (BFNs) at the edge of an SDN core network (SCN) is
known as a BFNC. Only essential data is sent to BFNCs by an SCN's central controller.
A BFNC can therefore be thought of as a P2P network.
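The static length restriction can be illustrated with a short sketch. This is a toy model
written for this report, not the project's implementation; the names (BoundedChain,
MAX_LEN) and the choice of SHA-256 are our own assumptions.

```python
import hashlib
from collections import deque

MAX_LEN = 5  # illustrative static length restriction of a BFNC blockchain

class BoundedChain:
    """Toy fixed-length blockchain: the oldest block is erased automatically
    whenever the chain reaches its upper length limit."""
    def __init__(self):
        self.blocks = deque(maxlen=MAX_LEN)  # deque drops the oldest entry itself

    def add_block(self, data: str) -> str:
        prev_hash = self.blocks[-1][1] if self.blocks else "0" * 64
        block_hash = hashlib.sha256((prev_hash + data).encode()).hexdigest()
        self.blocks.append((data, block_hash))
        return block_hash

chain = BoundedChain()
for i in range(8):
    chain.add_block(f"ACL update {i}")

# Only the newest MAX_LEN blocks are retained; earlier ones were pruned.
print(len(chain.blocks))    # 5
print(chain.blocks[0][0])   # "ACL update 3"
```

Because each block still hashes over its predecessor, the retained suffix of the chain
remains tamper-evident even though older blocks have been discarded.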

11
1.1 Background history:

In order to conduct tasks closer to end users, fog computing (FC) extends cloud
computing (CC) from the centre of the internet architecture to the edge of the network.
It has been demonstrated that this extension improves security and lowers latency and
energy use [1]. On the other hand, blockchain (BC) is the cryptocurrency's fundamental
technology and is used in a broad variety of applications. Because of its security and
dependability, the research community was motivated to merge BC's distributed trust
management criteria with FC in order to take a step towards developing a distributed
and trusted data, payment, reputation, and identity management system [2]. A new
paradigm in computing and storage resource provisioning for Internet of Things (IoT)
devices is called fog computing. All devices in a fog computing system can offload data
or computationally heavy tasks to close-by fog nodes rather than the far-off cloud. Fog
computing can dramatically reduce the transmission time between IoT devices and
computer servers in comparison to cloud computing. The existing fog system is,
however, quite open to malevolent intrusions. [1] We suggest dividing the fog system
into fog node clusters (FNCs) in order to boost security, with fog nodes (FNs) inside
each cluster sharing a single access control list (ACL) that is secured by a blockchain.
Blockchain generation needs a lot of computer power and can quickly deplete FNs'
computational resources. In this work, we first modify the blockchain for FNC to cut
down on the amount of storage space and CPU power needed. [3] Second, a brand-new
method is created for the blockchain-based FNC (BFNC) to automatically recover
ACL. Additionally, we suggest a heuristic technique to shorten the time needed to
compute block hash values by working with all available devices. The simulation
findings showed that compared to noncooperative techniques, computing a block hash
takes less time when utilising a cooperative computing strategy.

12
1.2 Problem statement

The majority of the auditing protocols in use today are built on PKI (Public Key
Infrastructure). For PKI, managing certificates is a complicated task. PKI-based
auditing protocols, however, are particularly cumbersome for batch auditing in the
context of multiple users since it requires certificate verification, which might add to the
auditor's workload. We suggest an ID-based public auditing approach to address this
issue by fusing ID-based encryption with the homomorphic authenticator mechanism.

1.3 Applications

Although it can seem complex, blockchain's fundamental idea is actually fairly straightforward.


A database is one sort of blockchain. It is helpful to first comprehend what a database is
in order to understand blockchain.
A database is a group of data that is electronically stored on a computer system.
Databases frequently have information, or data, organised in table style to make it
simpler to search for and filter out specific information. What makes utilising a
spreadsheet as opposed to a database for information storage different? Spreadsheets are
made to store and provide access to a modest amount of data for one person or a small
number of individuals. A database, on the other hand, is made to hold substantially
more data so that it may be rapidly and readily accessed, filtered, and changed by
multiple users at once.

o Ledger: It's a constantly expanding file.


o Permanent: Once a transaction is recorded in a blockchain, it remains there
indefinitely.
o Secure: Blockchain stores data securely. To make sure that the information is
locked inside the blockchain, it uses very sophisticated cryptography.
o Chronological: Each transaction is recorded in order, following the one before it.
o Immutable refers to the fact that as transactions are added to the blockchain, this
ledger cannot ever be altered.

13
A blockchain is made up of a series of information-containing blocks. For the
blockchain, each block acts as a permanent database that stores all of the most recent
transactions. Every time a block is finished, a new block is generated. Blockchain, a
distributed ledger system, is regarded by its proponents as one of the best methods for
transaction security. Any changes to the contents require a change in the block hash
because the block hash depends on the data contained within. As a result, the data in
each block and the hash of the block preceding it are both used to produce the hash of
each block.
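The dependence of each block's hash on its own data and on the previous block's hash
can be sketched as follows; the function name and transaction strings are illustrative,
and SHA-256 stands in for whatever hash function a particular chain uses.

```python
import hashlib

def block_hash(data: str, prev_hash: str) -> str:
    """A block's hash covers its own data and the hash of the previous block."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a small chain of three blocks.
h0 = block_hash("tx: A pays B", "0" * 64)   # genesis block, no real predecessor
h1 = block_hash("tx: B pays C", h0)
h2 = block_hash("tx: C pays D", h1)

# Tampering with the first block's data changes h0, which in turn invalidates
# every later hash in the chain, so the change is detected.
h0_tampered = block_hash("tx: A pays X", "0" * 64)
print(h0_tampered != h0)                              # True
print(block_hash("tx: B pays C", h0_tampered) != h1)  # True
```

This is exactly why altering one record forces an attacker to recompute every
subsequent block as well.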

1.3.1 Security by the blocks

As the name implies, a blockchain is a network of digital "blocks" that houses


transaction records. Every block that comes before and after a specific block is
connected. This makes it challenging to change a single record because a hacker would
have to change both the block that contains the record and those linked to it in order to
avoid detection. On its own, this might not seem like much of a deterrent, but several
more inherent properties of blockchain provide additional security precautions. Records
on a blockchain are encrypted for security. Each network participant has a private key
they use to sign transactions with a distinct digital signature. The peer network will be
alerted right away if a record is altered because the signature will no longer be reliable.
Early notice is crucial for preventing further damage. Unfortunately for those
enthusiastic hackers, blockchains are scattered across peer-to-peer networks that are
constantly updated and kept in sync. Because they are not held in a single location,
blockchains do not have a single point of failure and cannot be changed by a single
machine. Massive computer power would be required to simultaneously access every
instance (or at least a 51 percent majority) of a certain blockchain and change them all.
The possible attack susceptibility of smaller blockchain networks has been explored,
but no firm conclusion has been reached. In any case, as your network grows, your blockchain
will become more tamper-proof. In summary, blockchains have a few positive
characteristics that would help to protect your transaction data. However, when using a
blockchain for business, there are extra considerations and requirements.

14
Fig. 4.1 Blockchain connection

Figure 4.1 shows the blocks connected from login to the database.

1.3.2 Asymmetric key cryptography

Asymmetric key cryptography, or public key cryptography, is a type of encryption that


employs two separate keys for encryption and decryption. Although there is a
mathematical connection between the two keys, one key is kept private and the other is
made available. Data is encrypted using the public key, while decrypting it requires the
private key. Key generation: for asymmetric key cryptography to work, a pair of
keys (a public key and a private key) must first be generated. A key pair's owner must
keep the private key hidden while making the public key available to anyone who needs
to communicate encrypted data. Encryption: asymmetric key cryptography is used to encrypt data
by using the recipient's public key to convert plaintext into ciphertext. Only the
recipient's private key can be used to decipher the ciphertext, making it safe to send
across an unsecured communication channel. Decryption: To convert the ciphertext
back into plaintext, the recipient must utilise their private key. The decrypted plaintext

15
is kept secret because it can only be accessed by the receiver using their private key.
Authentication: To confirm the sender's or recipient's identity in a message, asymmetric
key cryptography can also be used for authentication. In this scenario, the sender signs a
message using their private key, and the recipient verifies the signature using the
sender's public key. Since only the sender has access to their private key, the recipient
may be sure that the message was sent by them. Many different applications, such as
secure communication, digital signatures, and key exchange, use asymmetric key
cryptography. Stronger security guarantees, the absence of a secure key exchange, and
support for non-repudiation are only a few of its advantages over symmetric key
cryptography. However, it is not suited for encrypting huge volumes of data because it
is often slower and more computationally costly than symmetric key cryptography.

Asymmetric key cryptography uses keys that are theoretically related to one another,
but it is computationally infeasible to deduce the private key from the public key. This
trait is based on the fact that discrete logarithm computation and factoring of huge
integers are both extremely challenging mathematical tasks.
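The key-pair relationship can be demonstrated with textbook RSA. The sketch below
uses deliberately tiny primes so the arithmetic is visible; it is completely insecure and
for illustration only (real systems use 2048-bit keys with padding schemes).

```python
# Toy textbook RSA with tiny primes: illustration only, insecure in practice.
p, q = 61, 53
n = p * q                  # public modulus, part of both keys
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

public_key, private_key = (n, e), (n, d)

def encrypt(m: int, key) -> int:
    n, e = key
    return pow(m, e, n)    # anyone holding the public key can encrypt

def decrypt(c: int, key) -> int:
    n, d = key
    return pow(c, d, n)    # only the private-key owner can decrypt

message = 42
ciphertext = encrypt(message, public_key)
recovered = decrypt(ciphertext, private_key)
print(recovered)  # 42
```

Deriving d from (n, e) would require factoring n; for realistic key sizes this is exactly
the hard problem the paragraph above refers to.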

Key management: Since the private key must always remain a secret, asymmetric key
cryptography necessitates rigorous key pair management. Key production, storage,
distribution, and revocation are all included in key management. In some
circumstances, the management and distribution of public keys is handled by a trusted
third party, such as a certificate authority.

Asymmetric key cryptography and symmetric key cryptography are frequently


combined to form the hybrid encryption method. Using a quick symmetric key
algorithm, a symmetric key is produced specifically for each message or session in this
method. The symmetric key is then sent with the encrypted message after being
encrypted with the recipient's public key. The recipient first decrypts the symmetric key
with their private key before using it to decrypt the message.

Asymmetric key cryptography is also used for digital signatures, which are a technique
of signing and authenticating electronic documents. In this method, the sender signs a
message with their private key to produce a distinctive digital signature. The recipient

16
can be sure that the message was not altered in transit by using the sender's public key
to verify the signature.
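The sign-with-private-key, verify-with-public-key pattern described above can be
sketched with the same toy RSA parameters; the digest reduction modulo n is an
artifact of the tiny key size and, like the prime choices, is our own illustrative
assumption rather than a real signature scheme.

```python
import hashlib

# Toy RSA signature sketch (tiny insecure primes, illustration only).
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

def digest(message: str) -> int:
    # Reduce the SHA-256 digest modulo n so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message: str) -> int:
    return pow(digest(message), d, n)       # uses the PRIVATE key

def verify(message: str, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)  # uses the PUBLIC key

sig = sign("transfer 10 coins to Bob")
forged = (sig + 1) % n                      # any altered signature fails
print(verify("transfer 10 coins to Bob", sig))      # True
print(verify("transfer 10 coins to Bob", forged))   # False
```

Since only the sender holds d, a valid signature convinces the recipient that the
message came from the sender and was not substituted in transit.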

1.3.3 Public Key Infrastructure – PKI

A system called Public Key Infrastructure (PKI) uses public key cryptography to
provide safe communication across a public network. Digital certificates and public
keys that are used for authentication, encryption, and digital signatures are managed by
PKI. PKI is made up of various parts, such as:

The certificate authority (CA) issues and manages the digital certificates used to
confirm the identities of people, devices, and organisations. The CA is often a
dependable third party in charge of confirming the legitimacy of the certificate holder.
Verifying the identities of persons and devices making requests
for digital certificates falls under the purview of the Registration Authority (RA).
Between the CA and the end user or device, the RA serves as a middleman. Digital
certificates and other data pertaining to the PKI system are kept in a database known as
the certificate repository. Certificate Revocation List (CRL): a list of digital
certificates that have been revoked and invalidated before their expiration date is
contained in the CRL. A digital certificate's validity can be instantly verified via the
Online Certificate Status Protocol (OCSP). Users no longer need to download the
complete CRL in order to check the status of a certificate thanks to this protocol. Digital
signatures, email encryption, and secure web browsing are just a few of the many
applications that use PKI. Since PKI uses public key cryptography to encrypt data and
confirm the identities of users and devices, it offers high security guarantees. PKI also
offers non-repudiation, which implies that a user cannot dispute having sent a message
or signed a document because anyone with access to their public key may verify their
digital signature. PKI can, however, be difficult to set up and manage, and it requires
significant money and expertise. Since a breach of the CA could lead to the
compromising of the entire PKI system, PKI also necessitates a high level of trust in the
CA and other system components.

17
1.3.4 The role of cryptography in blockchain security

Blockchains primarily rely on cryptography to protect their data. In this context, the
so-called cryptographic hashing functions are of utmost importance. Hashing is the act of
taking data of any size as input and producing a hash with a predetermined and fixed
size (or length). Any input size will result in the same length of the output. But if the
input is changed, the outcome will be very different. If the input stays the same, no
matter how many times you run the hash function, the output hash will remain constant.
Blockchains use these output numbers, also known as hashes, to uniquely identify data
blocks. The hash of each block is generated in relation to the hash of the block before it,
creating a chain of connected blocks. Any changes to the contents require a change in
the block hash because the block hash depends on the data contained within. As a result,
the data in each block and the hash of the block preceding it are both used to produce
the hash of each block. These hash identifiers greatly improve the security and
immutability of the blockchain. Hashing is also employed in consensus methods for
transaction verification. For instance, the SHA-256 hash function is used by the Proof
of Work (PoW) algorithm on the Bitcoin network. According to its name, SHA-256
creates a hash from the input data that is 256 bits, or 64 hexadecimal characters, long. In addition to
safeguarding transaction records on ledgers, cryptography helps ensure the security of
the wallets used to store cryptocurrency units. The associated public and private keys
that permit users to receive and transmit payments are created using asymmetric or
public-key cryptography. Private keys are used to produce digital signatures for
transactions, making it possible to confirm who is the rightful owner of the sent funds.
The specifics are outside the scope of this article, but the nature of asymmetric
cryptography prevents anyone but the owner of the private key from accessing the funds
stored in a cryptocurrency wallet, keeping those funds secure until the owner decides to
spend them (as long as the private key is not shared or compromised).
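The three properties described above (fixed output length, determinism, and the
avalanche effect of small input changes) can be checked directly with Python's
standard hashlib implementation of SHA-256; the variable names are ours.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest of SHA-256: always 256 bits, i.e. 64 hex characters."""
    return hashlib.sha256(data).hexdigest()

short = sha256_hex(b"hi")
long_ = sha256_hex(b"x" * 1_000_000)

# Output length is fixed regardless of input size.
print(len(short), len(long_))        # 64 64

# The function is deterministic: same input, same hash, every time.
print(sha256_hex(b"hi") == short)    # True

# A one-character change produces a completely different hash (avalanche).
print(sha256_hex(b"hello world") == sha256_hex(b"hello worle"))  # False
```

These are exactly the properties that let a chain use one 64-character hash as a
unique, tamper-evident identifier for an arbitrarily large block of data.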

1.3.5 Fog Computing

Fog computing, often referred to as edge computing, entails distributed computing using
a large number of "peripheral" devices linked to a cloud. IoT devices are referred to as
the "fog" in this context since they are closer to the "ground" than clouds. Instead of
sending all of this data (for example, from sensors) to cloud-based servers for

18
processing, the idea behind fog computing is to execute as much processing as possible
utilizing computer units placed nearby the data-generating devices. Thus, bandwidth
requirements are decreased and processed data rather than raw data is delivered.
Processing close to the source of the data rather than remotely is preferable because the
time between input and response is decreased and it is likely that the same devices that
gave the data will need the processed data. This idea is not entirely novel; for example,
signal processing chips that do Fast Fourier Transforms have long been used in non-
cloud computing environments to reduce latency and lessen the strain on a CPU.

A data plane and a control plane make up fog networking. For instance, fog computing
on the data plane enables computing services to exist at the network's edge rather than
on servers in a data center. Fog computing, in contrast to cloud computing, places an
emphasis on close proximity to end users and client objectives (such as operational
costs, security policies, and resource exploitation), dense geographical distribution and
context-awareness (for what concerns computational and IoT resources), latency
reduction and backbone bandwidth savings to achieve better quality of service (QoS),
and edge analytics/stream mining, resulting in superior user-experience and redundancy
in case of failure. The Internet of Things (IoT) idea, in which the majority of the gadgets
used by people on a daily basis will be connected to one another, is supported by fog
networking. Phones, wearable health monitors, networked cars, and augmented reality
gadgets like Google Glass are among examples. IoT devices frequently have limited
resources and limited computational power to carry out cryptographic computations. In
its place, a fog node can offer security for IoT devices by carrying out these
cryptographic operations.

1.4 Scope of the Project

The main objective of this project is to develop and put into use a system that lets
customers hire a third-party auditor (TPA) to check the integrity of their data while it is
stored in the cloud. The TPA will be responsible for checking the integrity of the users'
data on their behalf. Because the TPA should not have access to the substance of the
users' data during the auditing process, the privacy of the users' data will be
safeguarded. The system will also protect the privacy of user data by limiting access to
it while it is stored in the cloud.

19
1.5 Existing System

Blockchain was used in earlier research to lower the amount of storage space and
computational power needed for fog node clusters (FNC). Second, a brand-new
method is created for the blockchain-based FNC (BFNC) to automatically restore the
access control list. Additionally, we suggest a heuristic technique to shorten the time
needed to compute block hash values by working with all available devices. Most
blockchain applications, like bitcoin, require a significant amount of computer power
and storage space. We suggest a Blockchain-based fog node cluster (BFNC) to reduce
the need for computational power and storage space. To conserve storage capacity, a
blockchain in a BFNC has a static length restriction. Additionally, because BFNC is a
small-scale P2P network, creating blockchains there uses less computational power
than it would in a larger P2P network. The length of the blockchain is limited to a
fixed number of blocks due to the limited storage space that each BFN in a BFNC has
(previous blocks are automatically erased when the length hits the upper limitation).
BFNs in a BFNC are not compensated once they have finished computing BHs, in
contrast to bitcoin miners. As a result, rather than being competitive, the ties between
BFNs are cooperative. That is, all BFNs in a single BFNC pool their computing
resources and attempt to obtain the first BH that satisfies the system requirements as
quickly as possible.
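The cooperative relationship among BFNs can be sketched as a partition of the nonce
search space: each node scans a disjoint range, so the pool as a whole covers candidates
num_nodes times faster than any single node. The difficulty rule, function names, and
sequential simulation below are all our own illustrative assumptions, not the actual
BFNC algorithm.

```python
import hashlib

DIFFICULTY_PREFIX = "00"   # illustrative target: hash must start with "00"

def meets_target(block_data: str, nonce: int, prefix: str = DIFFICULTY_PREFIX) -> bool:
    h = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return h.startswith(prefix)

def cooperative_search(block_data: str, num_nodes: int, per_node: int,
                       prefix: str = DIFFICULTY_PREFIX):
    """Each cooperating BFN is assigned a disjoint nonce range; the winning
    (node, nonce) pair is shared with the whole cluster. Real BFNs would
    search their ranges in parallel; we simulate the partition sequentially."""
    for node in range(num_nodes):
        start = node * per_node
        for nonce in range(start, start + per_node):
            if meets_target(block_data, nonce, prefix):
                return node, nonce
    return None

result = cooperative_search("ACL v7 for cluster 3", num_nodes=4, per_node=1000)
print(result)
```

Because the ranges are disjoint, no hash is ever computed twice, which is the source of
the speed-up over non-cooperative strategies reported in the simulation findings.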

Disadvantages

o Small-scale FNCs are still not suitable due to the high computational difficulty.
o Even with considerably reduced computing power requirements, the time FNs
need to acquire a new block hash (BH) is still too long for a fog system.
o PKI-based auditing protocols, on the other hand, are particularly cumbersome for
batch auditing in the multi-user environment since certificate verification can add
to the auditor's workload.

20
1.6 PROPOSED SYSTEM:

The method utilizes homomorphic authenticators (HAs). Using HAs, a client can
outsource a sizable group of data items, together with the accompanying authenticators,
to an untrusted server. Functions on authenticated data can then be evaluated using
homomorphic authenticators. Constructions exist in both the public key setting and the
secret key setting, taking the form of homomorphic signatures and homomorphic
message authentication codes (MACs). The public key infrastructure (PKI), a key
management system that supports public-key cryptography, can help with the most
important need of "assurance of public key." PKI provides public key assurance and
enables public key distribution and identification.
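The homomorphic property, that tags of individual data items can be aggregated into a
tag for a function of those items, can be illustrated with a minimal toy additively
homomorphic MAC. This is not the project's scheme: the construction (tag_i = alpha *
m_i + F_k(i) mod P), the SHA-256-based PRF, and all names are our own simplifying
assumptions, shown only to make the idea concrete.

```python
import hashlib
import secrets

# Toy additively homomorphic MAC (illustration only).
P = 2_147_483_647                 # public prime modulus (2^31 - 1)
alpha = secrets.randbelow(P - 1) + 1  # secret key component, nonzero
k = secrets.token_bytes(16)           # secret PRF key

def prf(i: int) -> int:
    """Pseudorandom one-time pad per index, instantiated with SHA-256."""
    return int.from_bytes(hashlib.sha256(k + i.to_bytes(8, "big")).digest(), "big") % P

def tag(i: int, m: int) -> int:
    return (alpha * m + prf(i)) % P

def verify_sum(indices, claimed_sum: int, aggregated_tag: int) -> bool:
    expected = (alpha * claimed_sum + sum(prf(i) for i in indices)) % P
    return aggregated_tag == expected

# The data owner tags each item before outsourcing it.
data = {1: 10, 2: 25, 3: 7}
tags = {i: tag(i, m) for i, m in data.items()}

# The untrusted server adds the tags; the sum of tags authenticates the
# sum of the data items, without the server ever holding the secret key.
agg = sum(tags.values()) % P
print(verify_sum(data.keys(), sum(data.values()), agg))      # True
print(verify_sum(data.keys(), sum(data.values()) + 1, agg))  # False
```

The same principle, verifying a computed result against aggregated authenticators,
underlies the auditing protocol proposed above.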

Advantages:

o On behalf of the users, the TPA will carry out the task of examining the accuracy
of the users' data.
o The system will protect user data confidentiality by preventing other parties from
accessing it while it is stored in the cloud.
o By combining a homomorphic token with distributed erasure-coded data
verification, this strategy combines storage correctness insurance with data
localization.
o Considering the time, computation resources, and even the related online burden
of users, we also provide the extension of the proposed main scheme to support
third-party auditing.
o Highly efficient and resilient to Byzantine failure, malicious data modification
attacks, and even server colluding attacks.

21
CHAPTER 2

LITERATURE SURVEY

1. A Cooperative Computing Strategy for Blockchain-Secured Fog Computing,


July 2020.
Di Wu and Nirwan Ansari, IEEE Internet of Things Journal, proposed that a new
paradigm in computing and storage resource provisioning for Internet of Things (IoT)
devices is called fog computing. All devices in a fog computing system can offload data
or computationally heavy tasks to close-by fog nodes rather than the far-off cloud. Fog
computing can dramatically reduce the transmission time between IoT devices and
computer servers in comparison to cloud computing. The existing fog system is,
however, quite open to malevolent intrusions. We suggest dividing the fog system into
fog node clusters (FNCs) in order to boost security, with fog nodes (FNs) inside each
cluster sharing a single access control list (ACL) that is secured by a blockchain.
Blockchain generation needs a lot of computer power and can quickly deplete FNs'
computational resources. In this article, we first modify the blockchain for FNC to cut
down on the amount of storage space and CPU power needed. Second, a brand-new
method is created for the blockchain-based FNC (BFNC) to automatically recover
ACL. Additionally, we suggest a heuristic technique to shorten the time needed to
compute block hash values by working with all available devices. The simulation
findings showed that compared to noncooperative techniques, computing a block hash
takes less time when utilising a cooperative computing strategy.

2. Control-Data Separation with Decentralized Edge Control in Fog-Assisted


Uplink Communications, Feb 2019.
W.Shen, J.Qin, J.Yu, R.Hao, and J.Hu, IEEE Transaction on Information Forensics and
Security, suggested that remote radio systems (RRSs), which are wireless edge nodes,
and remote cloud centre (RCC) processors, which are coupled to the RRSs through a
fronthaul access network, are both included in fog-aided network topologies for 5G
systems. Network Functions Virtualization (NFV), which enables a flexible division of
network functionalities that responds to network characteristics like fronthaul latency
and capacity, is used to operate RRSs and RCC. This study examines the cloud-edge
allocation of two crucial network functions, namely the control functionality of rate

22
selection and the data-plane function of decoding, with an emphasis on uplink
communications. There are three different functional splits that are taken into
consideration: (i) Distributed Radio Access Network (D-RAN), where both functions
are carried out in a decentralised manner at the RRSs; (ii) Cloud RAN (C-RAN), where both
functions are instead carried out centrally at the RCC; and (iii) a new functional split
called Fog RAN (F-RAN), with separate decentralised edge control and centralised
cloud data processing. The paradigm under consideration entails a time-varying uplink
channel in which the RCC possesses local but more timely channel state information
(CSI) whereas the RRSs have global but delayed CSI due to fronthaul latency. It is
determined that the F-RAN design can deliver significant benefits in the presence of
user mobility using the adaptive sum-rate as the performance criterion. A prospective
guiding principle for the implementation of functional splits between edge and cloud,
made possible by NFV, in fog-aided 5G systems is provided by the control-data
separation architecture. In this study, using the outage adaptive sum-rate criterion, we
have evaluated the relative benefits of functional splits where rate selection and data
decoding are performed either at the edge or in the cloud.
One of the primary takeaways from this article was that the completely centralised
design that was favoured in the initial implementation of the C-RAN architecture
should only be chosen if the fronthaul latency is low or the channel's time-variability is
constrained. Otherwise, a fog-based system could result in large benefits because joint
data decoding is done in the cloud while rate selection control is handled at the edge.
This conclusion shows the advantages of decentralised but timely CSI over centralised
but delayed CSI for scheduling purposes.

3. Decentralized and Privacy-Preserving Public Auditing for Cloud Storage Based on Blockchain, July 2020.
Y. Miao, Q. Huang, M. Xiao, and H. Li, IEEE Access, proposed that systems for cloud
storage give customers a flexible, practical, and friendly option to outsource their data.
However, once consumers outsource their data to the cloud, they no longer have control
over it. To maintain data integrity, public auditing was developed, in which third-party
auditors (TPAs) are given the authority to carry out auditing responsibilities. Typically,
TPA creates and delivers challenge data to the cloud server (CS), which confirms data
ownership as necessary. But the TPA might not follow the public auditing protocol

honestly, or it might even work with CS to trick consumers. Some currently used public
auditing techniques make use of blockchain to fend off a malicious TPA. However, user
information may still be disclosed to the TPA during the auditing process, since the
CS may guess the challenge messages. In this paper, we propose a blockchain-based
decentralised and privacy-preserving public auditing scheme, in which the auditor is
required to log the audit process onto the blockchain and the blockchain is used as an
unpredictable source for the generation of challenge information. Users can view the
audit results publicly due to the properties of blockchain.

Additionally, zero-knowledge proof is utilised in DBPA to safeguard user privacy
during the audit process so that the data related to the user's account is not revealed in
the response information supplied by the CS. Analysis of security and performance
reveals that DBPA is both secure and effective. A decentralised, privacy-preserving
public auditing system that is protected from hostile cloud servers and tardy third-party
auditors. Our method uses two elements to provide unexpected challenge messages. The
other is made up of a succession of decentralised block hashes, while the first is
generated by the auditor. Our plan was able to withstand the tardy auditor, and a hostile
cloud server was unable to retrieve or guess the challenge message beforehand.
Additionally, our system offers improved user privacy protection while verifying the
audit answer from the cloud server. We examined our system to demonstrate its
security, and we carried out a thorough performance analysis to demonstrate its
efficiency in terms of compute overhead and minimal communication overhead.
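The idea of using the blockchain as an unpredictable source of challenge information can be sketched as follows. This is an illustrative reconstruction, not the paper's exact construction: the SHA-256-based derivation rule, the sample block hash, and the block counts are all assumptions made for the example.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch: the auditor derives which file blocks to challenge from the latest
// blockchain block hash. The same hash always yields the same indices, so the
// audit is publicly reproducible, but before the block is mined the cloud
// server cannot guess the challenge.
public class ChallengeDemo {
    // Derive `count` distinct block indices in [0, totalBlocks) from blockHash.
    static int[] deriveChallenge(String blockHash, int totalBlocks, int count) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        Set<Integer> picked = new LinkedHashSet<>();
        int ctr = 0;
        while (picked.size() < count) {
            // Hash the block hash together with a counter to stretch it into
            // as many indices as the audit needs.
            byte[] d = sha.digest((blockHash + ":" + ctr++).getBytes(StandardCharsets.UTF_8));
            int v = ((d[0] & 0x7f) << 24) | ((d[1] & 0xff) << 16) | ((d[2] & 0xff) << 8) | (d[3] & 0xff);
            picked.add(v % totalBlocks);
        }
        return picked.stream().mapToInt(Integer::intValue).toArray();
    }

    public static void main(String[] args) throws Exception {
        // "exampleblockhash" stands in for a real chain's latest block hash.
        int[] challenge = deriveChallenge("exampleblockhash", 1000, 5);
        for (int idx : challenge) System.out.println(idx);
    }
}
```

A dishonest auditor cannot bias the selection either, since anyone can recompute the indices from the public block hash and check them against the logged audit.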

4. Enabling Identity-Based Integrity Auditing and Data Sharing with Sensitive Information Hiding for Secure Cloud Storage, Feb 2019.
W. Shen, J. Qin, J. Yu, R. Hao, and J. Hu, IEEE Transactions on Information Forensics
and Security, suggested that users can remotely store their data to the cloud and
enable data sharing with others via cloud storage services. To ensure the integrity of the
data saved in the cloud, remote data integrity auditing is suggested. The cloud file may
contain potentially sensitive data in some popular cloud storage systems, such as the
Electronic Health Records (EHRs) system.
When the cloud file is shared, the sensitive information should not be made available to
others. The sensitive information can be concealed by encrypting the entire shared file,

but it will prevent others from using it. It has not yet been determined how to implement
data sharing with sensitive information concealed in remote data integrity audits. We
suggest a remote data integrity auditing approach that realises data sharing with
sensitive information hidden in this study to solve this issue. A sanitizer is employed in
this method to turn the data blocks' signatures into valid ones for the sanitised file while
also sanitising the data blocks that correspond to the file's sensitive information. During
the integrity auditing process, these signatures are used to confirm the accuracy of the
sanitised file. Therefore, our method enables the cloud-stored file to be shared and
utilised by others under the condition that the sensitive material is hidden, while the
remote data integrity auditing is still able to be successfully carried out. The suggested
approach, meanwhile, simplifies the challenging certificate management because it is
based on identity-based cryptography. The proposed method is efficient and secure,
according to the performance analysis and security analysis. In this research, we
presented an identity-based data integrity auditing technique for secure cloud storage
that supports data sharing with sensitive information hiding. Under this scheme, the
cloud-stored file may be shared and utilised by others provided that the file's sensitive
information is safeguarded. Additionally, remote data integrity audits can still be carried
out effectively. The experimental analysis and security proof show that the suggested
approach achieves the desired efficiency and security.

5. Framework for Auditing in Cloud Computing Environment, 2018.


O. Novo, IEEE Internet of Things Journal, proposed that in the internet era, a new field
called cloud computing has emerged where users of the cloud can access information
and physical resources on a pay-per-use basis. The primary issues in the Cloud
Computing Environment (CCE) are security and privacy. The Third party Service
Provider manages all of the physical resources, including computing resources,
sensitive user data, and processes, to address these concerns. The audit feature in CCE
gives cloud service providers a mechanism to easily make performance and security
data accessible to cloud users. Auditing is therefore incorporated within Third Party
Service Provider in order to maintain data confidentiality, privacy, integrity, and
availability. In addition to proposing a framework that aids the designer in including
auditing elements in CCE, this paper aims to examine various challenges in auditing
that have been encountered in practice. Due to the necessity of maintaining the security,

privacy, integrity, and availability of data in the field of rapid usage of cloud, web
server, application server, and database server, auditing is required at every stage of
cloud architecture. Recently, data has been transported, processed, and stored outside of
the business or organisation. The organisation does not physically own the raw data,
and shared computing environments are also making it available to the general public.
More privacy and security are needed to prevent these kinds of flaws. No restrictions
have been put in place to limit data alteration with regard to data access, and events
such as data access, transfer, or modification have not been logged or monitored. Other
drawbacks of cloud architecture include provider viability and limited change control
capabilities. One further factor is that the Cloud Service Provider (CSP) manages and
maintains all logical and physical accesses. Therefore, auditing is crucial to maintaining
the confidentiality of sensitive data, limiting access to computing and physical
resources, and ensuring integrity.

6. Practical Homomorphic Message Authenticators for Arithmetic Circuits, 2018.


Catalano and Fiore, Journal of Cryptology, suggested that homomorphic message
authenticators enable the holder of an evaluation key to execute computations over
previously authenticated data in a way that the generated tag may be used to validate the
computation's authenticity. More specifically, by using the secret key that was used to
authenticate the original data, a user can confirm that the tag authenticates the correct
output of the calculation. Gennaro and Wichs have formalised this primitive and demonstrated
how to realise it using fully homomorphic encryption. In this research, we provide
novel designs of this primitive that, while supporting a narrower set of functions (i.e.,
polynomially bounded arithmetic circuits as opposed to Boolean ones), are significantly
more effective and simpler to use. Furthermore, our techniques can withstand countless
(malicious) verification inquiries. Our initial construction has the limitation that the size
of the generated tags increases with the degree of the circuit, even though it allows for
arbitrary composition. This construction is based solely on the assumption that one-way
functions exist. The D-Diffie-Hellman Inversion assumption underlies our second
method, which provides some orthogonal characteristics by allowing for very short tags
(a single group element), but has some limitations in terms of composition.

CHAPTER 3

REQUIREMENT SPECIFICATIONS

3.1 SYSTEM REQUIREMENTS

Software Requirements
➢ Language : Java, J2EE
➢ Technology : JSP, Servlet
➢ Database : MySQL 5.0
➢ Backend Tool : SQL Yog
➢ Developing Tool : NetBeans IDE 7.2.1
➢ Web Server : Apache Tomcat 5.5
➢ Build Tool : Apache Ant

Hardware Requirements
➢ Processor : Any Processor above 2 GHz
➢ RAM : 1 GB
➢ Hard Disk : 80 GB

CHAPTER 4

SYSTEM SPECIFICATIONS

4.1 Homomorphic Encryption

Homomorphic encryption is a form of encryption that allows calculations to be
performed directly on encrypted data without first decrypting it. When decrypted, the
results of these calculations are identical to what they would have been if performed on
the unencrypted data, and the results themselves remain in encrypted form. Using
homomorphic encryption, computation and storage can be outsourced privately, which
makes it possible to keep data encrypted while it is processed in a commercial cloud
environment. It also enables users to perform further analyses on encrypted material
without needing to know the secret key; the outcome of such a calculation is itself
encrypted. Homomorphic encryption can be seen as a progression of public-key
cryptography: the encryption and decryption operations can be viewed as
homomorphisms between the plaintext and ciphertext spaces, and the word
"homomorphic" refers to this algebraic concept of homomorphism.
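The additive form of this property can be illustrated with a small Paillier-style sketch; Paillier is one well-known partially homomorphic scheme in which multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The class name, key size, and messages below are illustrative only, and the key size is far too small for real use.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Minimal Paillier sketch: decrypt(E(m1) * E(m2) mod n^2) == m1 + m2.
public class PaillierDemo {
    static final SecureRandom rnd = new SecureRandom();
    final BigInteger n, nSquared, lambda, mu;

    PaillierDemo(int bits) {
        BigInteger p = BigInteger.probablePrime(bits / 2, rnd);
        BigInteger q = BigInteger.probablePrime(bits / 2, rnd);
        n = p.multiply(q);
        nSquared = n.multiply(n);
        BigInteger p1 = p.subtract(BigInteger.ONE), q1 = q.subtract(BigInteger.ONE);
        lambda = p1.multiply(q1).divide(p1.gcd(q1)); // lcm(p-1, q-1)
        mu = lambda.modInverse(n);                   // valid with generator g = n + 1
    }

    // Encrypt m as c = (n+1)^m * r^n mod n^2 with random r in Z*_n.
    BigInteger encrypt(BigInteger m) {
        BigInteger r;
        do { r = new BigInteger(n.bitLength(), rnd); }
        while (r.signum() == 0 || r.compareTo(n) >= 0 || !r.gcd(n).equals(BigInteger.ONE));
        BigInteger g = n.add(BigInteger.ONE);
        return g.modPow(m, nSquared).multiply(r.modPow(n, nSquared)).mod(nSquared);
    }

    // Decrypt c as m = L(c^lambda mod n^2) * mu mod n, where L(x) = (x-1)/n.
    BigInteger decrypt(BigInteger c) {
        BigInteger l = c.modPow(lambda, nSquared).subtract(BigInteger.ONE).divide(n);
        return l.multiply(mu).mod(n);
    }

    // Homomorphic addition: the product of ciphertexts encrypts the sum.
    BigInteger addCiphertexts(BigInteger c1, BigInteger c2) {
        return c1.multiply(c2).mod(nSquared);
    }

    public static void main(String[] args) {
        PaillierDemo ph = new PaillierDemo(512);
        BigInteger c1 = ph.encrypt(BigInteger.valueOf(5));
        BigInteger c2 = ph.encrypt(BigInteger.valueOf(7));
        // The sum is computed entirely on ciphertexts and revealed only on decryption.
        System.out.println(ph.decrypt(ph.addCiphertexts(c1, c2))); // 12
    }
}
```

This is exactly the outsourcing scenario described above: the cloud can combine the two ciphertexts without ever holding the secret key.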

4.1.1 Homomorphic Message Authenticators.

Homomorphic message authenticators enable the holder of an evaluation key to execute


computations over previously authenticated data in a way that the generated tag may be
used to validate the computation's authenticity. More specifically, by using the secret
key that was used to authenticate the original data, a user can confirm that the
computation's correct output is authenticated. Gennaro and Wichs have formalised this primitive and
demonstrated how to realise it using fully homomorphic encryption. In this research, we
provide novel designs of this primitive that, while allowing for a narrower set of
functions (i.e., polynomially bounded arithmetic circuits as opposed to Boolean ones),
are significantly more effective and simpler to use. Furthermore, our techniques can
withstand countless (malicious) verification inquiries. Our initial construction has the
limitation that the size of the generated tags increases with the degree of the circuit,

even though it allows for arbitrary composition. This construction is based solely on the
assumption that one-way functions exist. The D-Diffie-Hellman Inversion assumption
underlies our second method, which provides some orthogonal characteristics by
allowing for very short tags (a single group element), but has some limitations in
terms of composition.
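A toy version of such a construction can be sketched over a prime field. It follows the general shape of the degree-1 tag idea, where a tag for message m under label tau is a polynomial y with y(0) = m and y(alpha) = F_K(tau), and circuit evaluation is plain polynomial arithmetic, so the tag size grows with the circuit degree, as noted above. The modulus, the SHA-256-based PRF, and all parameters below are illustrative simplifications, not the actual instantiation from the paper.

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Toy homomorphic MAC over Z_p: tags are polynomial coefficient vectors.
public class HomomorphicMacDemo {
    static final BigInteger P = new BigInteger("2305843009213693951"); // 2^61 - 1, prime
    final BigInteger alpha;   // secret evaluation point
    final byte[] prfKey;      // secret PRF key

    HomomorphicMacDemo(BigInteger alpha, byte[] prfKey) { this.alpha = alpha; this.prfKey = prfKey; }

    // Illustrative PRF F_K(tau): SHA-256(key || tau) reduced mod p.
    BigInteger prf(String tau) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        sha.update(prfKey);
        sha.update(tau.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, sha.digest()).mod(P);
    }

    // Tag [y0, y1] with y0 = m and y1 = (F_K(tau) - m) / alpha, so y(alpha) = F_K(tau).
    BigInteger[] authenticate(BigInteger m, String tau) throws Exception {
        BigInteger y1 = prf(tau).subtract(m).multiply(alpha.modInverse(P)).mod(P);
        return new BigInteger[]{ m.mod(P), y1 };
    }

    static BigInteger[] add(BigInteger[] a, BigInteger[] b) {
        BigInteger[] r = new BigInteger[Math.max(a.length, b.length)];
        for (int i = 0; i < r.length; i++) {
            BigInteger x = i < a.length ? a[i] : BigInteger.ZERO;
            BigInteger y = i < b.length ? b[i] : BigInteger.ZERO;
            r[i] = x.add(y).mod(P);
        }
        return r;
    }

    static BigInteger[] multiply(BigInteger[] a, BigInteger[] b) {
        BigInteger[] r = new BigInteger[a.length + b.length - 1]; // degree grows here
        java.util.Arrays.fill(r, BigInteger.ZERO);
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < b.length; j++)
                r[i + j] = r[i + j].add(a[i].multiply(b[j])).mod(P);
        return r;
    }

    static BigInteger evalAt(BigInteger[] y, BigInteger x) {
        BigInteger acc = BigInteger.ZERO;
        for (int i = y.length - 1; i >= 0; i--) acc = acc.multiply(x).add(y[i]).mod(P);
        return acc;
    }

    // Verify a claimed result m for the fixed circuit f(m1, m2) = m1 * m2 + m1.
    boolean verify(BigInteger m, BigInteger[] y, String tau1, String tau2) throws Exception {
        BigInteger r1 = prf(tau1), r2 = prf(tau2);
        BigInteger rho = r1.multiply(r2).add(r1).mod(P); // f applied to PRF values
        return y[0].equals(m.mod(P)) && evalAt(y, alpha).equals(rho);
    }

    public static void main(String[] args) throws Exception {
        HomomorphicMacDemo mac = new HomomorphicMacDemo(BigInteger.valueOf(123456789L),
                "demo-key".getBytes(StandardCharsets.UTF_8));
        BigInteger[] t1 = mac.authenticate(BigInteger.valueOf(3), "a");
        BigInteger[] t2 = mac.authenticate(BigInteger.valueOf(4), "b");
        BigInteger[] result = add(multiply(t1, t2), t1);       // tag for f = m1*m2 + m1
        System.out.println(mac.verify(BigInteger.valueOf(15), result, "a", "b")); // true
    }
}
```

The evaluator never needs the secret key: it only adds and multiplies coefficient vectors, and the verifier's single evaluation at the secret point alpha catches any tampering with the result.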

fig.4.2. Homomorphic Encryption


Figure 4.2 illustrates homomorphic encryption within the cryptographic scheme.

4.2 FLOWCHART:

fig.4.3. Flowchart
The flowchart in fig. 4.3 describes the cryptography in fog computing, which is
connected to the cloud database.

4.3 About the Software

Java (programming language)

In the Java programming language, all source code is first written in plain text files
ending with the .java extension. Those source files are then compiled into .class files by
the javac compiler. A .class file does not contain code that is native to your processor; it
instead contains bytecodes — the machine language of the Java Virtual Machine (Java
VM). The java launcher tool then runs your application with an instance of the Java
Virtual Machine.

fig.4.4. Java compilation

Figure 4.4 shows the Java compilation process in the system.

The same .class files can be run on Microsoft Windows, the Solaris TM Operating
System (Solaris OS), Linux, or Mac OS due to the Java VM's availability on a variety of
operating systems. Some virtual machines, like the Java Hotspot virtual machine, carry
out extra tasks at runtime to improve the performance of your program. Finding speed
bottlenecks and recompiling commonly used code parts (to native code) are a few
examples of the jobs that fall under this category. The same application can run on
various platforms thanks to the Java VM.

JSP – FRONT END

In response to a Web client request, Java Server Pages (JSP), a Java technology,
enables software developers to dynamically generate HTML, XML, or other types of
documents. The technique enables the embedding of Java code and specific pre-defined
actions into static content. Additional XML-like tags, referred to as JSP actions, are
added to the JSP syntax and can be used to call built-in functionality. The technology
also enables the development of JSP tag libraries, which function as extensions to the

basic HTML or XML tags. A platform independent method of expanding a Web
server's functionality is through tag libraries. A JSP compiler converts JSPs into Java
Servlets. A JSP compiler may provide byte code for the servlet directly or it may
produce a servlet in Java code that is subsequently built by the Java compiler. JSPs can
also be interpreted instantly, which speeds up reloading of updates. The Java Server
Pages (JSP) technology offers a straightforward, efficient method of producing dynamic
online content. Rapid creation of server- and platform-independent web applications is
made possible by JSP technology.

SERVLETS – FRONT END

Servlets are CGI programming's replacement in Java technology. They are applications
that create Web pages and run on a web server. For a variety of reasons, creating Web
pages on the fly is beneficial (and frequently done):

o The user-submitted data form the basis of the Web page. Programs that process
orders for e-commerce sites and search-engine results pages, for instance, generate
pages in this manner.
o The data is often updated. For instance, a weather-information or news-headlines
page might be built dynamically, perhaps returning a previously built page if it is
still up to date.
o The website makes use of data from business databases and other similar sources.
You might use this to create a Web page for an online store that provides the
current prices and quantity of inventory, for instance.

MYSQL – BACK END

Most MySQL use cases are covered in the MySQL Reference Manual. Both MySQL
Community Server and MySQL Enterprise Server are covered by this manual. If the
manual does not contain the solution(s), MySQL Enterprise, which offers extensive
support and services, can be purchased to receive assistance. Additionally, MySQL
Enterprise offers a large knowledge base library with hundreds of technical articles that
address complex issues related to well-known database topics including performance,
replication, and migration.

An economical family of high-performance database products is created and supported
by MySQL AB. The company's premier product is "MySQL Enterprise," a
comprehensive collection of software that has been proven in production and comes
with proactive monitoring tools and first-rate support services. The most widely used
open source database program worldwide is MySQL. Several of the largest and
fastest-growing businesses in the world, including market leaders like Yahoo!, Alcatel-
Lucent, Google, Nokia, YouTube, and Booking.com, utilise MySQL to power their
high-volume Web sites, mission-critical systems, and packaged software. With activities
across the globe including offices in the US and Sweden, MySQL AB addresses both
the demands of business clients and open source principles.

JDBC

Java programmers can use the Java Database Connectivity (JDBC) framework to create
applications that can access data held in databases, spreadsheets, and flat files. No
matter what database management system is employed to maintain the database, JDBC
is frequently used to link a user program to a "behind the scenes" database. JDBC is
cross-platform in this sense. This article gives an overview and sample code that
shows database access from Java programs that utilize the JDBC API classes, which can
be downloaded for free from Sun's website. A data source is a database that another
program links to. Open Database Connectivity (ODBC), a widely adopted data source
standard, is already implemented in many data sources, including Microsoft and Oracle
software. Many old C and Perl programs link to data sources using ODBC. Many of the
similarities between database management systems were unified through ODBC. JDBC
advances the level of abstraction by building on this capability. Java programs can now
connect to ODBC-capable database programs thanks to JDBC-ODBC bridges.

CHAPTER 5
RESULT AND DISCUSSION
5.1 Output screen

5.1.1 Home page 1

fig. 5.1.1 Home page 1


5.1.2 Home page 2

fig.5.1.2 Home page 2


The home page of the project website shows a brief description of the project.

5.1.3 User signup page 1

fig.5.1.3 User signup page 1


A new user can sign up on this page so that a new account is created.

5.1.4 Admin login page 1

fig.5.1.4 Admin login page 1

5.1.5 Admin approval page 1

fig.5.1.5 Admin approval page 1


5.1.6 Approval successful page 1

Fig.5.1.6 Approval successful page 1


The admin has to approve the new user before the user can access the storage.
In this way, the admin allows only known users to access the storage.

5.1.7 User file upload page 1

Fig.5.1.7 User file upload page 1

5.1.8 User file upload successful page 1

fig.5.1.8 User file upload successful page 1

Once the user is granted access, files can be stored using the upload option: a local file
is chosen and saved to the cloud.

5.1.9 Download page 1

fig.5.1.9 Download page 1

This is the download page, where the file can be downloaded after a later login.
Encryption and decryption are performed at each login; homomorphic encryption is
used as the encryption algorithm.

5.1.10 File damaged page (information theft)

fig.5.1.10 File damaged page

If the uploaded file is deleted or stolen, or some of its information is leaked, the system
shows the message "File downloaded gets affected or damaged, Recovery process going
on…..".

5.1.11 Verifier page 1

fig.5.1.11 Verifier page 1

The verifier page keeps the file as a backup so that the user can get the file back without
any loss; the third-party auditor backs up the required file stored in the cloud.

5.1.12 Primary backup page 1

fig.5.1.12 Primary backup page 1

5.1.13 Backup page 2

fig.5.1.13 Backup page 2

5.1.14 Backup page 3

fig.5.1.14 Backup page 3


The third-party auditor provides the particular file needed so that the user can get the
original file back.

CHAPTER 6

CONCLUSION

6.1 Summary

In this method, the issue of data security in cloud data storage, which is essentially a
distributed storage system, has been researched and examined. We provide an efficient
and adaptable distributed architecture with explicit dynamic data support, including
block update, delete, and append, in order to ensure the integrity and availability of
cloud data and enforce the quality of dependable cloud storage service for customers. In
the file distribution preparation, we rely on erasure-correcting code to deliver
redundancy parity vectors and ensure the dependability of the data. This technique
integrates storage correctness assurance and data error localisation by combining the
homomorphic token with distributed verification of erasure-coded data.
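The redundancy idea above can be illustrated with a minimal single-parity sketch: one XOR parity block, stored separately, lets the system recover any one lost data block. The block contents and layout below are invented for illustration; the actual scheme uses proper erasure-correcting codes, of which this is only the simplest special case.

```java
// Minimal erasure-coding sketch: XOR parity over three data blocks.
public class ParityDemo {
    static byte[] xor(byte[] a, byte[] b) {
        byte[] r = new byte[a.length];
        for (int i = 0; i < a.length; i++) r[i] = (byte) (a[i] ^ b[i]);
        return r;
    }

    public static void main(String[] args) {
        byte[] b0 = "block-0!".getBytes();
        byte[] b1 = "block-1!".getBytes();
        byte[] b2 = "block-2!".getBytes();
        byte[] parity = xor(xor(b0, b1), b2); // stored on a separate server

        // Suppose the server holding b1 fails: recover it from the
        // surviving blocks plus the parity vector.
        byte[] recovered = xor(xor(b0, b2), parity);
        System.out.println(new String(recovered)); // block-1!
    }
}
```

Because any single missing block equals the XOR of the survivors and the parity, the data stays available through one server failure; tolerating more failures requires more parity vectors, which is what the erasure-correcting code in the scheme provides.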

6.2 FUTURE WORK

In order to facilitate third-party audits, taking into consideration the time, computation
resources, and even the related internet load of users, we also propose an extension of
the core scheme that is presented. Through stringent security analysis and in-depth
experimental findings, we show that this strategy is highly effective and resistant to
Byzantine failure, malicious data modification attack, and even server collusion attack. The blockchain
application's ongoing experimental development will lead to its wider implementation.

CHAPTER 7

REFERENCES

1. A Cooperative Computing Strategy for Blockchain-secured Fog Computing, Di Wu
and Nirwan Ansari, IEEE Internet of Things Journal, Volume: 7, Issue: 7, Page(s):
6603-6609, July 2020.
2. Enabling Identity-Based Integrity Auditing and Data Sharing With Sensitive
Information Hiding for Secure Cloud Storage, W. Shen, J. Qin, J. Yu, R. Hao, and
J. Hu, IEEE Transactions on Information Forensics and Security, Volume: 14,
Issue: 2, Page(s): 331-346, Feb 2019.
3. Decentralized and Privacy-Preserving Public Auditing for Cloud Storage Based on
Blockchain, Y. Miao, Q. Huang, M. Xiao, and H. Li, IEEE Access, Volume: 8,
Page(s): 139813-139826, July 2020.
4. Control-Data Separation with Decentralized Edge Control in Fog-Assisted Uplink
Communications, J. Kang, O. Simeone, J. Kang, and S. Shamai Shitz, IEEE
Transactions on Wireless Communications, Volume: 17, Issue: 6, Page(s):
3686-3696, June 2018.
5. Blockchain meets IoT: an architecture for scalable access control in IoT, O. Novo,
IEEE Internet of Things Journal, Volume: 6 Page(s): 99, 2018.
6. Practical Homomorphic Message Authenticators for Arithmetic Circuits, Catalano
and Fiore, Journal of Cryptology, volume 3, page 37, 2018.

APPENDICES

<%@page import="java.util.List"%>
<%@page import="java.util.Iterator"%>
<%@page import="java.util.ArrayList"%>
<%@page import="java.sql.Connection"%>
<%@page import="java.sql.ResultSet"%>
<%@page import="java.sql.Statement"%>
<%@page import="java.sql.DriverManager"%>
<%@page import="java.sql.PreparedStatement"%>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>SecureBackupRecovery</title>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<link href="style.css" rel="stylesheet" type="text/css" />
<script type="text/javascript" src="js/cufon-yui.js"></script>
<script type="text/javascript" src="js/georgia.js"></script>
<script type="text/javascript" src="js/cuf_run.js"></script>
<style type="text/css">
<!--
.style4 {font-size: large; font-weight: bold; }
.style14 {font-size: medium; font-weight: bold; }
.style15 {color: #993366} .style5 {color: #E1EAF1; font-size: 24px;
}
-->
</style>
</head>
<body>
<!-- START PAGE SOURCE -->
<div class="main">
<div class="header">
<div class="header_resize">
<div class="logo">
<h1><a href="index.html"><small>The</small><br />
SecureBackupRecovery</a></h1>
</div>
<div class="logo_text"><a href="#"></a></div>
<div class="clr"></div>
</div>
<div class="headert_text_resize">
<div class="menu">
<ul>
<li><a href="index.jsp">Home</a></li>
<li><a href="#" class="active">Verifier</a></li>
<li><a href="index.jsp">Logout</a></li>
<li></li>

</ul>
</div>
<div class="clr"></div>
<h2 class="bigtext"><span>Data Backup &amp; Recovery </span><br />
Protective Technique </h2>
<div class="headert_text">
<p>&nbsp;</p>
</div>
<div class="clr"></div>
</div>
</div>
<div class="body">
<div class="body_resize">
<div class="left">
<h2>Verifier Page </h2>
<form action="vprocess" method="post" name="form1" id="form1">
<table width="461" height="224" border="0" >
<tr>
<td height="52" colspan="3" bgcolor="#666666"><div align="center"><span
class="style5">Recover the data </span></div></td>
</tr> <tr>
<td width="134"><div align="center" class="style14"><span class="style15">Choose
Option </span></div></td>
<td><div align="center" class="style14"><span class="style15">File
id</span></div></td>
<td width="219"><div align="center" class="style14"><span class="style15">File
Name</span></div></td>
</tr> <% try {
ResultSet rs;
Statement st;
Class.forName("com.mysql.jdbc.Driver");
Connection con =
DriverManager.getConnection("jdbc:mysql://localhost:3306/securebackup","root","password");
String query = "select * from audit";
st = con.createStatement();
rs = st.executeQuery(query);

while (rs.next()) {
String id = rs.getString(1);
//String ownerName = rs.getString(2);
String fileName = rs.getString(3); %>
<tr>
<td><div align="center">
<input type="radio" name="filSelect" value="<%=id%>" />
</div></td>
<td><div align="center"><%=id%></div></td>
<td><div align="center"><%=fileName%></div></td>
</tr>

<%
}
con.close();
} catch (Exception e) { out.println(e);
}

%>
<tr>
<td colspan="3"><div align="center">
<p>
<input type="submit" name="Submit" value="Recover" />
</p>
</div></td>
</tr>
</table>
</form>
<p>&nbsp;</p>
<a href="#"></a>
<div class="bg"></div>
</div>
<div class="right">
<h2>Sidebar Menu</h2>
<ul>
<li><a href="index.jsp">Home</a></li>
<li><a href="index.jsp">Logout</a><a href="verifierlogin.jsp"></a></li>
<li></li>
</ul>
<div class="bg"></div>
<h2>&nbsp;</h2>
<div class="bg"></div>
</div>
<div class="clr"></div>
</div>
</div>
<div class="FBG">
<div class="FBG_resize">
<div class="blok">
<h2>&nbsp;</h2>
<p>&nbsp;</p>
</div>
<div class="clr"></div>
</div>
</div>
<div class="footer">
<div class="footer_resize">
<p class="lf">Copyright &copy; - All Rights Reserved</p>
<p class="rf"><a href="http://all-free-download.com/free-website-templates/"></a></p>
<div class="clr"></div>
</div>

<div class="clr"></div>
</div>
</div>
<!-- END PAGE SOURCE -->
<div align=center></div>
</body>
</html>
Homomorphic encryption:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
public class HOMO {
/**
* S-BOX table used for Key Expansion and Sub-Bytes.
*/
public static final String newline = System.getProperty("line.separator"); // The newline for whatever system you choose to run in.
public static enum Mode { ECB, CBC };
public static final int[][] sbox = {{0x63, 0x7c, 0x77, 0x7b, 0xf2, 0x6b, 0x6f, 0xc5, 0x30, 0x01, 0x67, 0x2b,
0xfe, 0xd7, 0xab, 0x76}, {0xca, 0x82, 0xc9, 0x7d, 0xfa, 0x59, 0x47, 0xf0, 0xad, 0xd4, 0xa2,
0xaf, 0x9c, 0xa4, 0x72, 0xc0}, {0xb7, 0xfd, 0x93, 0x26, 0x36, 0x3f, 0xf7, 0xcc, 0x34, 0xa5,
0xe5, 0xf1,
0x71, 0xd8, 0x31, 0x15}, {0x04, 0xc7, 0x23, 0xc3, 0x18, 0x96, 0x05, 0x9a, 0x07, 0x12, 0x80,
0xe2,
0xeb, 0x27, 0xb2, 0x75}, {0x09, 0x83, 0x2c, 0x1a, 0x1b, 0x6e, 0x5a, 0xa0, 0x52, 0x3b, 0xd6,
0xb3,
0x29, 0xe3, 0x2f, 0x84}, {0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b, 0x6a, 0xcb, 0xbe,
0x39,
0x4a, 0x4c, 0x58, 0xcf}, {0xd0, 0xef, 0xaa, 0xfb, 0x43, 0x4d, 0x33, 0x85, 0x45, 0xf9, 0x02,
0x7f, 0x50,
0x3c, 0x9f, 0xa8}, {0x51, 0xa3, 0x40, 0x8f, 0x92, 0x9d, 0x38, 0xf5, 0xbc, 0xb6, 0xda, 0x21,
0x10, 0xff,
0xf3, 0xd2}, {0xcd, 0x0c, 0x13, 0xec, 0x5f, 0x97, 0x44, 0x17, 0xc4, 0xa7, 0x7e, 0x3d, 0x64,
0x5d,
0x19, 0x73}, {0x60, 0x81, 0x4f, 0xdc, 0x22, 0x2a, 0x90, 0x88, 0x46, 0xee, 0xb8, 0x14, 0xde,
0x5e,
0x0b, 0xdb}, {0xe0, 0x32, 0x3a, 0x0a, 0x49, 0x06, 0x24, 0x5c, 0xc2, 0xd3, 0xac, 0x62, 0x91,
0x95,
0xe4, 0x79}, {0xe7, 0xc8, 0x37, 0x6d, 0x8d, 0xd5, 0x4e, 0xa9, 0x6c, 0x56, 0xf4, 0xea, 0x65,
0x7a,
0xae, 0x08}, {0xba, 0x78, 0x25, 0x2e, 0x1c, 0xa6, 0xb4, 0xc6, 0xe8, 0xdd, 0x74, 0x1f, 0x4b,
0xbd,
0x8b, 0x8a}, {0x70, 0x3e, 0xb5, 0x66, 0x48, 0x03, 0xf6, 0x0e, 0x61, 0x35, 0x57, 0xb9, 0x86,
0xc1,
0x1d, 0x9e}, {0xe1, 0xf8, 0x98, 0x11, 0x69, 0xd9, 0x8e, 0x94, 0x9b, 0x1e, 0x87, 0xe9, 0xce,
0x55,
0x28, 0xdf}, {0x8c, 0xa1, 0x89, 0x0d, 0xbf, 0xe6, 0x42, 0x68, 0x41, 0x99, 0x2d, 0x0f, 0xb0,
0x54, 0xbb, 0x16}};
/**
* Inverse SBOX table used for invSubBytes
*/

public static final int[][] invsbox = {{0x52, 0x09, 0x6a, 0xd5, 0x30, 0x36, 0xa5, 0x38, 0xbf,
0x40, 0xa3, 0x9e, 0x81, 0xf3, 0xd7, 0xfb}, {0x7c, 0xe3, 0x39, 0x82, 0x9b, 0x2f, 0xff, 0x87,
0x34, 0x8e, 0x43,
0x44, 0xc4, 0xde, 0xe9, 0xcb}, {0x54, 0x7b, 0x94, 0x32, 0xa6, 0xc2, 0x23, 0x3d, 0xee, 0x4c,
0x95,
0x0b, 0x42, 0xfa, 0xc3, 0x4e}, {0x08, 0x2e, 0xa1, 0x66, 0x28, 0xd9, 0x24, 0xb2, 0x76, 0x5b,
0xa2, 0x49, 0x6d, 0x8b, 0xd1, 0x25}, {0x72, 0xf8, 0xf6, 0x64, 0x86, 0x68, 0x98, 0x16, 0xd4,
0xa4, 0x5c, 0xcc, 0x5d, 0x65, 0xb6, 0x92}, {0x6c, 0x70, 0x48, 0x50, 0xfd, 0xed, 0xb9, 0xda,
0x5e, 0x15, 0x46,
0x57, 0xa7, 0x8d, 0x9d, 0x84}, {0x90, 0xd8, 0xab, 0x00, 0x8c, 0xbc, 0xd3, 0x0a, 0xf7, 0xe4,
0x58,
0x05, 0xb8, 0xb3, 0x45, 0x06}, {0xd0, 0x2c, 0x1e, 0x8f, 0xca, 0x3f, 0x0f, 0x02, 0xc1, 0xaf,
0xbd, 0x03,
0x01, 0x13, 0x8a, 0x6b}, {0x3a, 0x91, 0x11, 0x41, 0x4f, 0x67, 0xdc, 0xea, 0x97, 0xf2, 0xcf,
0xce, 0xf0,
0xb4, 0xe6, 0x73}, {0x96, 0xac, 0x74, 0x22, 0xe7, 0xad, 0x35, 0x85, 0xe2, 0xf9, 0x37, 0xe8,
0x1c, 0x75, 0xdf, 0x6e}, {0x47, 0xf1, 0x1a, 0x71, 0x1d, 0x29, 0xc5, 0x89, 0x6f, 0xb7, 0x62,
0x0e, 0xaa,
0x18, 0xbe, 0x1b}, {0xfc, 0x56, 0x3e, 0x4b, 0xc6, 0xd2, 0x79, 0x20, 0x9a, 0xdb, 0xc0, 0xfe,
0x78,
0xcd, 0x5a, 0xf4}, {0x1f, 0xdd, 0xa8, 0x33, 0x88, 0x07, 0xc7, 0x31, 0xb1, 0x12, 0x10, 0x59,
0x27,
0x80, 0xec, 0x5f}, {0x60, 0x51, 0x7f, 0xa9, 0x19, 0xb5, 0x4a, 0x0d, 0x2d, 0xe5, 0x7a, 0x9f,
0x93, 0xc9,
0x9c, 0xef}, {0xa0, 0xe0, 0x3b, 0x4d, 0xae, 0x2a, 0xf5, 0xb0, 0xc8, 0xeb, 0xbb, 0x3c, 0x83,
0x53,
0x99, 0x61}, {0x17, 0x2b, 0x04, 0x7e, 0xba, 0x77, 0xd6, 0x26, 0xe1, 0x69, 0x14, 0x63, 0x55,
0x21, 0x0c, 0x7d}};

/**
* Galois table used for mixColumns
*/
public static final int[][] galois = {{0x02, 0x03, 0x01, 0x01},
{0x01, 0x02, 0x03, 0x01},
{0x01, 0x01, 0x02, 0x03},
{0x03, 0x01, 0x01, 0x02}};
/**
* Inverse Galois table used for invMixColumns
*/
public static final int[][] invgalois = {{0x0e, 0x0b, 0x0d, 0x09},
{0x09, 0x0e, 0x0b, 0x0d},
{0x0d, 0x09, 0x0e, 0x0b},
{0x0b, 0x0d, 0x09, 0x0e}};
/**
* RCon array used for Key Expansion
*/
public static final int[] rcon = {0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b,
0x36,
0x6c, 0xd8, 0xab, 0x4d, 0x9a,

0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d, 0xfa, 0xef, 0xc5, 0x91,
0x39,
0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f, 0x25, 0x4a, 0x94, 0x33, 0x66, 0xcc, 0x83, 0x1d,
0x3a,
0x74, 0xe8, 0xcb, 0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c,
0xd8,
0xab, 0x4d, 0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d, 0xfa,
0xef,
0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f, 0x25, 0x4a, 0x94, 0x33, 0x66,
0xcc,
0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, 0x8d, 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80,
0x1b,
0x36, 0x6c, 0xd8, 0xab, 0x4d, 0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97, 0x35, 0x6a, 0xd4,
0xb3,
0x7d, 0xfa, 0xef, 0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2, 0x9f, 0x25, 0x4a,
0x94,
0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, 0x8d, 0x01, 0x02, 0x04, 0x08, 0x10,
0x20,
0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab, 0x4d, 0x9a, 0x2f, 0x5e, 0xbc, 0x63, 0xc6, 0x97,
0x35,
0x6a, 0xd4, 0xb3, 0x7d, 0xfa, 0xef, 0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3, 0xbd, 0x61, 0xc2,
0x9f,
0x25, 0x4a, 0x94, 0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8, 0xcb, 0x8d, 0x01, 0x02,
0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1b, 0x36, 0x6c, 0xd8, 0xab, 0x4d, 0x9a, 0x2f, 0x5e,
0xbc, 0x63,
0xc6, 0x97, 0x35, 0x6a, 0xd4, 0xb3, 0x7d, 0xfa, 0xef, 0xc5, 0x91, 0x39, 0x72, 0xe4, 0xd3,
0xbd, 0x61, 0xc2, 0x9f, 0x25, 0x4a, 0x94, 0x33, 0x66, 0xcc, 0x83, 0x1d, 0x3a, 0x74, 0xe8,
0xcb};
static String key = "";
static String iv = "";
static String ftw = "";
static BufferedReader keyreader;
static BufferedReader input;
static Mode mode;
static FileWriter out;
static int keyFileIndex = 1; //Index where the keyFile argument should be. Used to determine the index of other arguments.
/**
 * Empty HOMO constructor.
 */
public HOMO() {
    //Nothing to initialize here.
}
/**
 * Main method with which we run the HOMO algorithm.
 * Usage: java HOMO e|d [-length] [-mode] keyFile inputFile
 * @param args Array of command line arguments.
 */
public static void main(String args[]) throws IOException
{
/*
 * args[0] should be either "e" or "d"
 * args[1] and args[2] should correspond to the following:
 * -length => "128" or "256"
 * -mode => "ecb" or "cbc"
 * neither -length nor -mode: args[1] should be the keyFile, and args[2] should be the inputFile
 *
 * args[3] and args[4] should exist only if -length was specified.
 */
try
{
    int keysizecheck = 128; //User's intended key size.
    if (!args[1].equals("-length")) //No optional length argument given.
    {
        if(!args[1].equals("-mode")) //No optional mode given either;
        {
            //Defaults to 128-bit key size and ECB.
        }
else //Mode option was given;
{
mode = args[2].equals("ecb") ? Mode.ECB : Mode.CBC;
keyFileIndex += 2;
}
}
else //-length was explicitly given.
{
keyFileIndex+=2;
keysizecheck = Integer.parseInt(args[keyFileIndex-1]);
if(args[3].equals("-mode")) //Both -length and -mode options were given
{
mode = args[4].equals("ecb") ? Mode.ECB : Mode.CBC;
keyFileIndex+=2;
}
}
keyreader = new BufferedReader(new FileReader(args[keyFileIndex]));
key = keyreader.readLine();
if(key.length() * 4 != keysizecheck) //Check to see if user's intended key size matches the size of the key in the file.
{
    throw new Exception("Error: Attempting to use a " + key.length() * 4 + "-bit key with HOMO-" + keysizecheck);
}
input = new BufferedReader(new FileReader(args[keyFileIndex+1]));
if(mode == Mode.CBC)
{
iv = keyreader.readLine();
if(iv == null)
{
throw new Exception("Error: Initialization Vector required for CBC Mode.");
}
else if(iv.length() != 32)
{
    throw new Exception("Error: Initialization Vector must be 32 hex characters (16 bytes).");
}
}
ftw += args[keyFileIndex+1];
}
catch (Exception e)
{
System.err.println(e.getMessage() + newline);

System.exit(1);
}
HOMO HOMO = new HOMO();
if (args[0].equalsIgnoreCase("e"))
{
out = new FileWriter(ftw + ".enc");
int numRounds = 10 + (((key.length() * 4 - 128) / 32));
String line = input.readLine();
int[][] state, initvector = new int[4][4];
int[][] keymatrix = HOMO.keySchedule(key);
if(mode == Mode.CBC)
{
for (int i = 0; i < 4; i++)
{
for (int j = 0; j < 4; j++) {
initvector[j][i] = Integer.parseInt(iv.substring((8 * i) + (2 * j), (8 * i) + (2 * j + 2)),
16);
}
}
}
while (line != null) {
    if (line.matches("[0-9A-F]+")) //If line is valid (i.e. contains valid hex characters), encrypt. Otherwise, skip line.
{
if (line.length() < 32) {
line = String.format("%032x",Integer.parseInt(line, 16));
}
state = new int[4][4];
for (int i = 0; i < 4; i++) //Parses line into a matrix
{
for (int j = 0; j < 4; j++) {
state[j][i] = Integer.parseInt(line.substring((8 * i) + (2 * j), (8 * i) + (2 * j + 2)),
16);
}
}
if(mode == Mode.CBC)
{
HOMO.addRoundKey(state, initvector);
}
HOMO.addRoundKey(state, HOMO.subKey(keymatrix, 0)); //Starts the addRoundKey with the first part of Key Expansion
for (int i = 1; i < numRounds; i++) {
HOMO.subBytes(state); //implements the Sub-Bytes subroutine.
HOMO.shiftRows(state); //implements Shift-Rows subroutine.
HOMO.mixColumns(state);
HOMO.addRoundKey(state, HOMO.subKey(keymatrix, i));
}
HOMO.subBytes(state); //implements the Sub-Bytes subroutine.
HOMO.shiftRows(state); //implements Shift-Rows subroutine.
HOMO.addRoundKey(state, HOMO.subKey(keymatrix, numRounds));
if(mode == Mode.CBC)
{
initvector = state;
}

out.write(MatrixToString(state) + newline); //If all systems could just use the same newline, I'd be set.
line = input.readLine();
} else
{
line = input.readLine();
}
}
input.close();
out.close();
}
else if (args[0].equalsIgnoreCase("d")) //Decryption Mode
{
out = new FileWriter(ftw + ".dec");
int numRounds = 10 + (((key.length() * 4 - 128) / 32));
String line = input.readLine();
int[][] state = new int[4][4];
int[][] initvector = new int[4][4];
int[][] nextvector = new int[4][4];
int[][] keymatrix = HOMO.keySchedule(key);
if(mode == Mode.CBC) //Parse Initialization Vector
{
for (int i = 0; i < 4; i++)
{
for (int j = 0; j < 4; j++) {
initvector[j][i] = Integer.parseInt(iv.substring((8 * i) + (2 * j), (8 * i) + (2 * j + 2)),
16);
}
}
}
while (line != null) {
    state = new int[4][4];
for (int i = 0; i < state.length; i++) //Parses line into a matrix
{
for (int j = 0; j < state[0].length; j++) {
state[j][i] = Integer.parseInt(line.substring((8 * i) + (2 * j), (8 * i) + (2 * j + 2)),
16);
}
}
if(mode == Mode.CBC)
{
HOMO.deepCopy2DArray(nextvector,state);
}
HOMO.addRoundKey(state, HOMO.subKey(keymatrix, numRounds));
for (int i = numRounds - 1; i > 0; i--) {
HOMO.invShiftRows(state);
HOMO.invSubBytes(state);
HOMO.addRoundKey(state, HOMO.subKey(keymatrix, i));
HOMO.invMixColumns(state);
}
HOMO.invShiftRows(state);
HOMO.invSubBytes(state);
HOMO.addRoundKey(state, HOMO.subKey(keymatrix, 0));
if(mode == Mode.CBC)

{
HOMO.addRoundKey(state, initvector);
HOMO.deepCopy2DArray(initvector,nextvector);
}
out.write(MatrixToString(state) + newline);
line = input.readLine();
}
input.close();
out.close();
} else {
System.err.println("Usage for Encryption: java HOMO e keyFile inputFile");
System.err.println("Usage for Decryption: java HOMO d keyFile encryptedinputFile");
}
}
//Helper method which executes a deep copy of a 2D array. (dest,src)
private void deepCopy2DArray(int[][] destination, int[][] source)
{
assert destination.length == source.length && destination[0].length == source[0].length;
for(int i = 0; i < destination.length;i++)
{
System.arraycopy(source[i], 0, destination[i], 0, destination[0].length);
}
}
/**
* Pulls out the subkey from the key formed from the keySchedule method
* @param km key formed from HOMO.keySchedule()
* @param begin index of where to fetch the subkey
* @return The chunk of the scheduled key based on begin.
*/
private int[][] subKey(int[][] km, int begin) {
int[][] arr = new int[4][4];
for (int i = 0; i < arr.length; i++) {
    for (int j = 0; j < arr.length; j++) {
arr[i][j] = km[i][4 * begin + j];
}
}
return arr;
}
/**
 * Replaces all elements in the passed array with values in sbox[][].
 * @param arr Array whose values will be replaced in place.
 */
public void subBytes(int[][] arr) {
for (int i = 0; i < arr.length; i++) //Sub-Byte subroutine
{
for (int j = 0; j < arr[0].length; j++) {
    int hex = arr[j][i];
arr[j][i] = sbox[hex / 16][hex % 16];
}
}
}
/**
 * Inverse rendition of subBytes. The operations of invSubBytes are the reverse of subBytes.
 * @param arr the array whose values are replaced via invsbox.
 */
public void invSubBytes(int[][] arr) {
for (int i = 0; i < arr.length; i++) //Inverse Sub-Byte subroutine
{
for (int j = 0; j < arr[0].length; j++) {
int hex = arr[j][i];
arr[j][i] = invsbox[hex / 16][hex % 16];
}
}
}
/**
 * Performs a left shift on each row of the matrix.
 * Left shifts row n by n positions (row 0 is left unshifted).
 * @param arr the reference of the array to perform the rotations.
 */
public void shiftRows(int[][] arr) {
    for (int i = 1; i < arr.length; i++) {
        arr[i] = leftrotate(arr[i], i);
    }
}
/**
 * Left rotates a given array. The size of the array is assumed to be 4.
 * If the number of times to rotate the array is divisible by 4, return the array as it is.
 * @param arr The passed array (assumed to be of size 4)
 * @param times The number of times to rotate the array.
 * @return the rotated array.
 */
private int[] leftrotate(int[] arr, int times)
{
    assert(arr.length == 4);
    if (times % 4 == 0) {
return arr;
}
while (times > 0) {
    int temp = arr[0];
for (int i = 0; i < arr.length - 1; i++) {
arr[i] = arr[i + 1];
}
arr[arr.length - 1] = temp;
--times;
}
return arr;
}
/**
* Inverse rendition of ShiftRows (this time, right rotations are used).
* @param arr the array to compute right rotations.
*/
public void invShiftRows(int[][] arr) {
    for (int i = 1; i < arr.length; i++) {
        arr[i] = rightrotate(arr[i], i);
    }
}
/**
 * Right rotates the array in a similar fashion as leftrotate.
 * @param arr The passed array (assumed to be of size 4)
 * @param times The number of times to rotate the array.
 * @return the rotated array.
 */

private int[] rightrotate(int[] arr, int times) {
    if (arr.length == 0 || arr.length == 1 || times % 4 == 0) {
        return arr;
    }
while (times > 0) {
    int temp = arr[arr.length - 1];
    for (int i = arr.length - 1; i > 0; i--) {
arr[i] = arr[i - 1];
}
arr[0] = temp;
--times;
}
return arr;
}
/**
 * Performed by mapping each element in the current matrix to the value
 * returned by its helper function.
 * @param arr the array which we calculate against the galois field matrix.
 */
public void mixColumns(int[][] arr) //method for mixColumns
{
int[][] tarr = new int[4][4];
for(int i = 0; i < 4; i++)
{
System.arraycopy(arr[i], 0, tarr[i], 0, 4);
}
for (int i = 0; i < 4; i++) {
    for (int j = 0; j < 4; j++) {
arr[i][j] = mcHelper(tarr, galois, i, j);
}
}
}
/**
* Helper method of mixColumns in which compute the mixColumn formula on each element.
* @param arr passed in current matrix
* @param g the galois field
* @param i the row position
* @param j the column position
* @return the computed mixColumns value
*/
private int mcHelper(int[][] arr, int[][] g, int i, int j)
{
int mcsum = 0;
for (int k = 0; k < 4; k++) {
    int a = g[i][k];
    int b = arr[k][j];
    mcsum ^= mcCalc(a, b);
}
return mcsum;
}
private int mcCalc(int a, int b) //Helper method for mcHelper: multiplies a and b in GF(2^8)
{
    //Shift-and-add multiplication modulo the AES polynomial 0x11b, standing in
    //for the commented-out MCTables lookups (without them, this always returned 0).
    int p = 0;
    while (b != 0) {
        if ((b & 1) != 0) { p ^= a; }
        a = ((a << 1) ^ (((a & 0x80) != 0) ? 0x1b : 0)) & 0xff;
        b >>= 1;
    }
    return p;
}

public void invMixColumns(int[][] arr) {
    int[][] tarr = new int[4][4];
    for(int i = 0; i < 4; i++)
    {
        System.arraycopy(arr[i], 0, tarr[i], 0, 4);
    }
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
arr[i][j] = invMcHelper(tarr, invgalois, i, j);
}
}
}
private int invMcHelper(int[][] arr, int[][] igalois, int i, int j) //Helper method for invMixColumns
{
int mcsum = 0;
for (int k = 0; k < 4; k++) {
    int a = igalois[i][k];
    int b = arr[k][j];
mcsum ^= invMcCalc(a, b);
}
return mcsum;
}
/**
 * Helper computing method for inverse mixColumns: multiplies a and b in GF(2^8).
 *
 * @param a coefficient from the inverse galois matrix (0x09, 0x0b, 0x0d or 0x0e).
 * @param b byte from the state matrix.
 * @return the product of a and b in GF(2^8).
 */
private int invMcCalc(int a, int b) //Helper method for invMcHelper
{
    //Same shift-and-add GF(2^8) multiplication as mcCalc, covering the
    //0x09/0x0b/0x0d/0x0e coefficients instead of the commented-out MCTables.
    int p = 0;
    while (b != 0) {
        if ((b & 1) != 0) { p ^= a; }
        a = ((a << 1) ^ (((a & 0x80) != 0) ? 0x1b : 0)) & 0xff;
        b >>= 1;
    }
    return p;
}
/**
 * The keyScheduling algorithm to expand a short key into a number of separate round keys.
*
* @param key the key in which key expansion will be computed upon.
* @return the fully computed expanded key for the HOMO encryption/decryption.
*/
public int[][] keySchedule(String key)
{
int binkeysize = key.length() * 4;
int colsize = binkeysize + 48 - (32 * ((binkeysize / 64) - 2)); //size of key scheduling is based on the binary size of the key.
int[][] keyMatrix = new int[4][colsize / 4]; //creates the matrix for key scheduling
int rconpointer = 1;
int[] t = new int[4];

final int keycounter = binkeysize / 32;
int k;
for (int i = 0; i < keycounter; i++) //the first 1 (128-bit key) or 2 (256-bit key) set(s) of 4x4 matrices are filled with the key.
{
for (int j = 0; j < 4; j++) {
keyMatrix[j][i] = Integer.parseInt(key.substring((8 * i) + (2 * j), (8 * i) + (2 * j + 2)),
16);
}
}
int keypoint = keycounter;
while (keypoint < (colsize / 4)) {
    int temp = keypoint % keycounter;
    if (temp == 0) {
        for (k = 0; k < 4; k++) {
t[k] = keyMatrix[k][keypoint - 1];
}
t = schedule_core(t, rconpointer++);
for (k = 0; k < 4; k++) {
keyMatrix[k][keypoint] = t[k] ^ keyMatrix[k][keypoint - keycounter];
}
keypoint++;
} else if (temp == 4) {
    for (k = 0; k < 4; k++) {
int hex = keyMatrix[k][keypoint - 1];
keyMatrix[k][keypoint] = sbox[hex / 16][hex % 16] ^ keyMatrix[k][keypoint -
keycounter];
}
keypoint++;
} else {
int ktemp = keypoint + 3;
while (keypoint < ktemp) {
    for (k = 0; k < 4; k++) {
keyMatrix[k][keypoint] = keyMatrix[k][keypoint - 1] ^ keyMatrix[k][keypoint -
keycounter];
}
keypoint++;
}
}
}
return keyMatrix;
}
/**
 * For every (binary key size / 32)th column in the expanded key, we compute a special column
 * using sbox and an XOR of an rcon number with the first element in the passed array.
*
* @param in the array in which we compute the next set of bytes for key expansion
* @param rconpointer the element in the rcon array with which to XOR the first element in 'in'
* @return the next column in the key scheduling.
*/
public int[] schedule_core(int[] in, int rconpointer) {
    in = leftrotate(in, 1);
    int hex;
    for (int i = 0; i < in.length; i++) {
        hex = in[i];
        in[i] = sbox[hex / 16][hex % 16];
}
in[0] ^= rcon[rconpointer];

return in;
}
/**
 * In the AddRoundKey step, the subkey is combined with the state. For each round, a chunk of
 * the key schedule is pulled; each subkey is the same size as the state. Each element in the byte
 * matrix is XOR'd with the corresponding element in the chunk of the expanded key.
 *
 * @param bytematrix reference of the state matrix in which addRoundKey will be computed.
 * @param keymatrix chunk of the expanded key
 */
public void addRoundKey(int[][] bytematrix, int[][] keymatrix)
{
for (int i = 0; i < bytematrix.length; i++) {
    for (int j = 0; j < bytematrix[0].length; j++) {
        bytematrix[j][i] ^= keymatrix[j][i];
}
}
}

/**
* ToString() for the matrix (2D array).
*
* @param m reference of the matrix
* @return the string representation of the matrix.
*/

public static String MatrixToString(int[][] m) //takes in a matrix and converts it into a line of 32 hex characters.
{
String t = "";
for (int i = 0; i < m.length; i++) {
    for (int j = 0; j < m[0].length; j++) {
        String h = Integer.toHexString(m[j][i]).toUpperCase();
        if (h.length() == 1) {
            t += '0' + h;
        } else {
            t += h;
        }
    }
}
return t;
}
}
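The galois and invgalois coefficient matrices above are inverses of one another under GF(2^8) arithmetic with the AES reduction polynomial 0x11b. The standalone sketch below is illustrative only (the class name GFDemo and method gmul are ours, not part of the project code); it shows the shift-and-add multiplication that the commented-out MCTables would otherwise tabulate:

```java
// Standalone sketch of GF(2^8) multiplication, the arithmetic behind the
// galois/invgalois coefficient tables used by mixColumns and invMixColumns.
public class GFDemo {
    // Multiply a and b in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11b).
    static int gmul(int a, int b) {
        int p = 0;
        while (b != 0) {
            if ((b & 1) != 0) p ^= a; // addition in GF(2^8) is XOR
            int hi = a & 0x80;
            a = (a << 1) & 0xff;      // multiply a by x
            if (hi != 0) a ^= 0x1b;   // reduce modulo the AES polynomial
            b >>= 1;
        }
        return p;
    }
    public static void main(String[] args) {
        // Entry (0,0) of galois x invgalois: row {02,03,01,01} dotted with
        // column {0e,09,0d,0b} should be 1 if the matrices are inverses.
        int dot = gmul(0x02, 0x0e) ^ gmul(0x03, 0x09) ^ gmul(0x01, 0x0d) ^ gmul(0x01, 0x0b);
        System.out.println(dot); // prints 1
    }
}
```

The same check passes for every entry of the product, which is why invMixColumns undoes mixColumns exactly.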

System.out.println(filepath);

System.out.println(filepath1);

File file = new File(filepath);


if (!file.exists() || !file.isFile()) {
System.out.println("File doesn\'t exist");
rd=request.getRequestDispatcher("failure2.jsp");

rd.forward(request,response);

}
Long tmp2 = file.length();

File file1 = new File(filepath1);


if (!file1.exists() || !file1.isFile()) {
    System.out.println("File2 doesn\'t exist");
    rd=request.getRequestDispatcher("failure2.jsp");

    rd.forward(request,response);
}
Long tmp3 = file1.length();

// String tmp = sb.toString();

// String tmp1 = sb1.toString();

if(!tmp2.equals(tmp3))
{
System.out.println("\n File corrupted");
String sql = "insert into audit values('" + primary + "','" + filename2 + "','" +
        originalName + "','" + fname + "','" + s1 + "','" + k1 + "')";
PreparedStatement ps = con.prepareStatement(sql);
int noOfRows = ps.executeUpdate();

rd=request.getRequestDispatcher("failure3.jsp");

rd.forward(request,response);
} else {
System.out.println(originalName);
File f = new File ("D:\\UploadedFiles\\Dropbox\\"+fname+"\\"+originalName+"");
String my=f.getPath();
System.out.println(my);
String filename=f.getName();
System.out.println(filename);
String type=getMimeType("file:"+my);
System.out.println("pass 1");

response.setContentType(type);
System.out.println("pass 2");


response.setHeader ("Content-Disposition", "attachment; filename=\""+filename+"\"");
System.out.println("pass 3");
String name = f.getName().substring(f.getName().lastIndexOf("/") + 1,f.getName().length());
System.out.println("pass 4");
InputStream in = new FileInputStream(f);
System.out.println("pass 5");
ServletOutputStream outs = response.getOutputStream();
System.out.println("pass 6");

int bit;
try {
    while ((bit = in.read()) >= 0) { //read() returns -1 at end of stream; the old loop wrote that -1 as a spurious trailing byte.
        outs.write(bit);
    }
} catch (IOException ioe) {
    ioe.printStackTrace(System.out);
}
outs.flush();
outs.close();
in.close();
//update the count by
/*count += 1;
query = "update upload set counts=" + count +" where fileid=" + primary; st
= con.createStatement();
int update = st.executeUpdate(query);*/
}

} else
{
rd = request.getRequestDispatcher("failure.jsp");
rd.forward(request, response);
}

}
catch (Exception ex) {
    System.out.print(ex);
} finally {
    // out.close();
}
}
// <editor-fold defaultstate="collapsed" desc="HttpServlet methods. Click on the + sign on the left to edit the code.">
/**
 * Handles the HTTP <code>GET</code> method.
 * @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
processRequest(request, response);
}
/**
* Handles the HTTP <code>POST</code> method.
* @param request servlet request
* @param response servlet response
* @throws ServletException if a servlet-specific error occurs
* @throws IOException if an I/O error occurs
*/
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
processRequest(request, response);

}
/**
* Returns a short description of the servlet.
* @return a String containing servlet description
*/
@Override
public String getServletInfo() {
    return "Short description";
}// </editor-fold>
public static String getMimeType(String fileUrl) throws java.io.IOException, MalformedURLException
{
String type = null;
URL u = new URL(fileUrl);
URLConnection uc = u.openConnection();
type = uc.getContentType();
return type;
}
}
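The servlet above streams the requested file back one byte at a time; InputStream.read() signals end of stream by returning -1, which must not be written to the output. A buffered variant of that copy, as a standalone sketch (class name CopyDemo and the sample data are illustrative, not from the project):

```java
import java.io.*;

// Standalone sketch: buffered stream copy with conventional EOF handling,
// the pattern behind the servlet's file-download loop.
public class CopyDemo {
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) >= 0) { // read() returns -1 only at end of stream
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }
    public static void main(String[] args) throws IOException {
        byte[] data = "hello blockchain".getBytes("UTF-8");
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(data), sink);
        System.out.println(copied); // prints 16
    }
}
```

Besides fixing the EOF handling, the buffer avoids one read() call per byte, which matters for large files.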

