Securing Data with BlockChain and AI

A project report submitted in partial fulfillment


of the requirements for the award of the degree of

Master of Computer Application
Submitted by
STUDENT_NAME
ROLL_NO
Under the esteemed guidance of
GUIDE_NAME
Assistant Professor

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


ST. MARY’S GROUP OF INSTITUTIONS, GUNTUR
(Affiliated to JNTU Kakinada, Approved by AICTE, Accredited by NBA)
CHEBROLU - 522 212, A.P., INDIA
2014-16
ST. MARY’S GROUP OF INSTITUTIONS, CHEBROLU, GUNTUR
(Affiliated to JNTU Kakinada)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

CERTIFICATE

This is to certify that the project report entitled “PROJECT_NAME” is the bonafide record of
project work carried out by STUDENT_NAME, a student of this college, during the academic
year 2014 - 2016, in partial fulfillment of the requirements for the award of the degree of Master
of Computer Application from St. Mary’s Group of Institutions Guntur of Jawaharlal Nehru
Technological University, Kakinada.

GUIDE_NAME, HOD_NAME,
Asst. Professor (Project Guide) Associate Professor (Head of Department, CSE)
DECLARATION

We hereby declare that the project report entitled “PROJECT_NAME” is an original work
done at St. Mary’s Group of Institutions Guntur, Chebrolu, Guntur, and submitted in fulfillment
of the requirements for the award of Master of Computer Application, to St. Mary’s Group of
Institutions Guntur, Chebrolu, Guntur.

STUDENT_NAME
ROLL_NO
ACKNOWLEDGEMENT

We consider it a privilege to thank all those who helped us in the successful completion of
the project “PROJECT_NAME”. We extend special gratitude to our guide GUIDE_NAME,
Asst. Professor, whose stimulating suggestions and encouragement helped us to coordinate
our project, especially in writing this report, and whose valuable guidance and comprehensive
assistance helped us greatly in presenting it.

We would also like to acknowledge with much appreciation the crucial role of our
co-ordinator GUIDE_NAME, Asst. Professor, in helping us complete our project. Thank you
for being such a wonderful educator and person.

We express our heartfelt thanks to HOD_NAME, Head of the Department, CSE,
for his spontaneous sharing of knowledge, which helped us in bringing up this project
through the academic year.

STUDENT_NAME
ROLL_NO
ABSTRACT:

Data is the input from which artificial intelligence (AI) algorithms mine valuable features, yet data on the Internet
is scattered everywhere and controlled by different stakeholders who do not trust each other, and the usage of data
in complex cyberspace is difficult to authorize or validate. As a result, it is very difficult to enable data sharing in
cyberspace to form truly big data, and hence a truly powerful AI. In this paper, we propose SecNet, an architecture
that enables secure data storing, computing, and sharing in the large-scale Internet environment, aiming at a more
secure cyberspace with real big data and thus enhanced AI with plenty of data sources, by integrating three key
components: 1) blockchain-based data sharing with ownership guarantee, which enables trusted data sharing in the
large-scale environment to form real big data; 2) an AI-based secure computing platform that produces more
intelligent security rules, which helps to construct a more trusted cyberspace; and 3) a trusted value-exchange
mechanism for purchasing security services, providing a way for participants to gain economic rewards when
giving out their data or services, which promotes data sharing and thus achieves better AI performance. Moreover,
we discuss a typical usage scenario of SecNet as well as potential alternative deployment options, and analyze its
effectiveness from the perspectives of network security and economic revenue.

TABLE OF CONTENTS

TITLE PAGE NO.
1. ABSTRACT 6

2. INTRODUCTION 8

2.1 SYSTEM ANALYSIS 8

2.2 PROPOSED SYSTEM 9

2.3 OVERVIEW OF THE PROJECT 9

3. LITERATURE SURVEY 11

3.1 REQUIREMENT SPECIFICATIONS 13

3.2 HARDWARE AND SOFTWARE SPECIFICATIONS 15

3.3 TECHNOLOGIES USED 18

3.4 INTRODUCTION TO JAVA 20

3.5 MACHINE LEARNING 24

3.6 SUPERVISED LEARNING 26

4. DESIGN AND IMPLEMENTATION CONSTRAINTS 27

4.1 CONSTRAINTS IN ANALYSIS 30

4.2 CONSTRAINTS IN DESIGN 34

5. DESIGN AND IMPLEMENTATION 38

6. ARCHITECTURE DIAGRAM 43

7. MODULES 45

8. CODING AND TESTING 50

9. APPENDICES 52

CHAPTER 1

SYNOPSIS
INTRODUCTION

With the development of information technologies, the trend of integrating cyber, physical and social (CPS)
systems into a highly unified information society, rather than just a digital Internet, is becoming increasingly
obvious [1]. In such an information society, data is an asset of its owner, and its usage should be under the owner's
full control, although this is not yet the common case [2], [3]. Since data is undoubtedly the oil of the information
society, almost every big company wants to collect as much data as possible for its future competitiveness [4], [5].
An increasing amount of personal data, including location information, web-searching behavior, user calls, and
user preferences, is being silently collected by the built-in sensors of these companies' products, which brings a
huge risk of privacy leakage for data owners [6], [7]. Moreover, the usage of this data is out of its owners' control,
since there is currently no reliable way to record how the data is used and by whom, and thus few methods exist to
trace or punish violators who abuse it [8]. In other words, the lack of an ability to effectively manage data makes it
very difficult for individuals to control the potential risks associated with their collected data [9]. For example,
once data has been collected by a third party (e.g., a big company), the lack of access to that data prevents the
individual from understanding or managing the associated risks. Meanwhile, the lack of immutable records of the
data's usage increases the risk of abuse [10].

If there were an efficient and trusted way to collect and merge the data scattered across the whole CPS to form real
big data, the performance of artificial intelligence (AI) would improve significantly, since AI can process massive
amounts of data simultaneously; this would bring great benefits (e.g., enhanced data security) and could even
enable AI to exceed human capabilities in more areas [11]. According to the research in [12], given data at orders
of magnitude greater scale, even the simplest current AI algorithms (e.g., perceptrons from the 1950s) can
outperform many state-of-the-art techniques of today. The key lies in making data sharing trusted and secure [13].
Fortunately, blockchain technologies are a promising way to achieve this goal: network-wide consensus
mechanisms guarantee tamper-proof data sharing, embedded with economic incentives [14], [15]. Thus, AI can be
further empowered by blockchain-protected data sharing [16]-[18], and in turn, enhanced AI can provide better
performance and security for data.

In this paper, we aim at securing data by combining blockchain and AI, and we design a Secure Networking
architecture (termed SecNet) to significantly improve the security of data sharing, and thereby the security of the
whole network and even the whole CPS. In SecNet, one of the biggest challenges in protecting data is where and
how to store it, because users have to hand their data to service providers if they want to use certain services or
applications [1], [3]. This is caused by the inherent coupling of user data and applications in current service
mechanisms, which significantly hinders both data protection and application innovation. Inspired by the concept
of the Personal Data Store (PDS) from openPDS [5] and the Private Data Center (PDC) from HyperNet [1], SecNet
adopts the PDC rather than the PDS, as the PDC is easier to deploy and better suited to this problem: it provides a
more secure and intelligent data storage system via physical entities instead of the software-based algorithms used
in openPDS. Each PDC serves as a secured, centralized physical space in which each SecNet user's data lives.
Embedding PDCs into SecNet allows users to monitor and reason about what their data is used for, why, and by
whom, meaning users can truly control every operation on their own data and achieve fine-grained management of
data-access behaviors. Besides the PDC, other options can also be applied for data storage in SecNet according to
specific requirements (see Section V).

The trust-less relationships between different data stakeholders significantly thwart data sharing across the whole
Internet, so the data available for AI training or analysis is limited in amount and partial in variety. Fortunately, the
rise of blockchain technologies brings a hopeful, efficient, and effective way to enable trusted data sharing in
trustless environments, which can help AI make more accurate decisions thanks to real big data collected from
more places in the Internet. SecNet leverages these emerging blockchain technologies to prevent the abuse of data
and to enable trusted data sharing in trustless or even untrusted environments. For instance, it can enable
cooperation between different edge-computing paradigms to improve the overall performance of edge networks
[19]. The reason blockchain can enable trusted mechanisms is that it provides a transparent, tamper-proof metadata
infrastructure to reliably record all usage of data [17]. Thus, SecNet introduces a blockchain-based data-sharing
mechanism with ownership guarantees, in which any data ready for sharing must be registered in a blockchain,
named the Data Recording Blockchain (DRB), to announce its availability for sharing. Each access to the data by
parties other than the owner must also be validated and recorded in this chain, and the authenticity and integrity of
the data can only be validated by the DRB as well. In addition, SecNet enables economic incentives between
entities that share data or exchange security services, by embedding smart contracts on the data to trigger
automatic, tamper-proof value exchange. In this way, SecNet guarantees data security and encourages data sharing
throughout the CPS.

Furthermore, data is the fuel of AI [11], and the performance of AI algorithms improves greatly when data is
efficiently networked and properly fused. Enabling data sharing across multiple service providers is a way to
maximize the utilization of data scattered among separate entities with potential conflicts of interest, which enables
a more powerful AI. Given enough data and blockchain-based smart contracts [20] for secure data sharing, it is not
surprising that AI can become one of the most powerful technologies and tools for improving cybersecurity, since
it can check huge amounts of data quickly, identify and mitigate threats rapidly, and give more accurate predictions
and decision support for the security rules a PDC should deploy. Moreover, with Machine Learning [21] embedded,
AI can constantly learn patterns from existing data or from artificial data generated by GANs [22] to improve its
strategies over time, strengthening its ability to identify deviations in data or behavior on a 24/7/365 basis. SecNet
can apply these advanced AI technologies in its Operation Support System (OSS) to adaptively identify suspicious
data-related behaviors, even ones never seen before. In addition, swarm intelligence can be used in SecNet to
further improve data security, by collecting security knowledge from the huge number of intelligent agents
scattered throughout the CPS, with the help of trusted exchange mechanisms for incentive tokens.
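
As a concrete illustration of the DRB idea described above, the following sketch chains data-usage records with SHA-256 hashes so that tampering with any earlier record invalidates the whole chain. This is a minimal single-node model, not the SecNet implementation; the class and method names (DataRecordChain, append, isValid) are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Sketch of a Data Recording Blockchain (DRB): every data-sharing event is
// appended as a record whose hash covers the previous record's hash, so any
// tampering with an earlier entry breaks validation of the chain.
public class DataRecordChain {

    static String sha256(String s) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] h = md.digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : h) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static class Record {
        final String dataId, accessor, prevHash, hash;
        Record(String dataId, String accessor, String prevHash) throws Exception {
            this.dataId = dataId;
            this.accessor = accessor;
            this.prevHash = prevHash;
            this.hash = sha256(dataId + "|" + accessor + "|" + prevHash);
        }
    }

    final List<Record> chain = new ArrayList<>();

    // Register one access event; the new record commits to the previous one.
    void append(String dataId, String accessor) throws Exception {
        String prev = chain.isEmpty() ? "GENESIS" : chain.get(chain.size() - 1).hash;
        chain.add(new Record(dataId, accessor, prev));
    }

    // Recompute every hash and link; false if any record was altered.
    boolean isValid() throws Exception {
        String prev = "GENESIS";
        for (Record r : chain) {
            if (!r.prevHash.equals(prev)) return false;
            if (!r.hash.equals(sha256(r.dataId + "|" + r.accessor + "|" + r.prevHash))) return false;
            prev = r.hash;
        }
        return true;
    }
}
```

A real DRB would add consensus across nodes and signatures on each record; the hash chaining above is only the tamper-evidence layer.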

CHAPTER 2

SYSTEM ANALYSIS

2.1 EXISTING SYSTEM

Data is undoubtedly the oil of the information society, and almost every big company wants
to collect as much data as possible for its future competitiveness [4], [5]. An increasing
amount of personal data, including location information, web-searching behavior, user calls,
and user preferences, is being silently collected by the built-in sensors of these companies'
products, which brings a huge risk of privacy leakage for data owners [6], [7]. Moreover, the
usage of this data is out of its owners' control, since there is currently no reliable way to
record how the data is used and by whom, and thus few methods exist to trace or punish
violators who abuse it [8]. In other words, the lack of an ability to effectively manage data
makes it very difficult for an individual to control the potential risks associated with the
collected data [9]. For example, once the data has been collected by a third party (e.g., a big
company), the lack of access to this data prevents the individual from understanding or
managing the risks related to the collected data. Meanwhile, the lack of immutable records of
the data's usage increases the risk of abuse [10].

2.2 PROPOSED SYSTEM

We aim at securing data by combining blockchain and AI, and we design a Secure
Networking architecture (termed SecNet) to significantly improve the security of data
sharing, and thereby the security of the whole network and even the whole CPS. In SecNet,
one of the biggest challenges in protecting data is where and how to store it, because users
have to hand their data to service providers if they want to use certain services or
applications [1], [3]. This is caused by the inherent coupling of user data and applications in
current service mechanisms, which significantly hinders both data protection and application
innovation. Inspired by the concept of the Personal Data Store (PDS) from openPDS [5] and
the Private Data Center (PDC) from HyperNet [1], SecNet adopts the PDC rather than the
PDS, as the PDC is easier to deploy and better suited to this problem: it provides a more
secure and intelligent data storage system via physical entities instead of the software-based
algorithms used in openPDS. Each PDC serves as a secured, centralized physical space in
which each SecNet user's data lives. Embedding PDCs into SecNet allows users to monitor
and reason about what their data is used for, why, and by whom, meaning users can truly
control every operation on their own data and achieve fine-grained management of
data-access behaviors. Besides the PDC, other options can also be applied for data storage in
SecNet according to specific requirements.
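
The fine-grained, owner-controlled access management attributed to the PDC can be sketched as follows. This is an illustrative in-memory model, not the HyperNet PDC implementation; all names (PrivateDataCenter, grant, read) are assumptions made for the example.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

// Sketch of a Private Data Center: data is released only against an explicit
// owner grant, and every access attempt -- allowed or denied -- is logged,
// giving the owner an audit trail of who touched which data item.
public class PrivateDataCenter {
    private final Map<String, String> data = new HashMap<>();        // dataId -> content
    private final Map<String, Set<String>> grants = new HashMap<>(); // dataId -> permitted parties
    final List<String> auditLog = new ArrayList<>();

    void store(String dataId, String content) {
        data.put(dataId, content);
    }

    // The owner grants a specific party access to a specific data item.
    void grant(String dataId, String party) {
        grants.computeIfAbsent(dataId, k -> new HashSet<>()).add(party);
    }

    // Returns the data only if the owner granted access; logs the attempt either way.
    Optional<String> read(String dataId, String party) {
        boolean allowed = grants.getOrDefault(dataId, new HashSet<>()).contains(party);
        auditLog.add(party + (allowed ? " READ " : " DENIED ") + dataId);
        return allowed ? Optional.ofNullable(data.get(dataId)) : Optional.empty();
    }
}
```

The audit log here plays the role that the DRB plays in SecNet proper: a record of every access behavior, including denied attempts.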

2.3 IMPLEMENTATION

Modules Information: This project consists of two modules.

1) Patient: Each patient first creates a profile with all disease details and then selects the
hospitals with which he or she wishes to share data. While creating the profile, the application
creates a Blockchain object with the allowed permissions, so that only those hospitals can
access the data. Patient Login: a patient can log in to the application with his or her profile ID
and check the total rewards earned from sharing data.

2) Hospital: Hospital1 and Hospital2 are used in this application as two organizations with
which a patient can share data. Any hospital can log in to the application and enter a disease
name as the search string. The AI algorithm takes the disease string as input, searches all
patients for similar diseases, and then checks whether the hospital has permission to access
each matching patient's data; only the records of patients who granted access are displayed
to that hospital.
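
The hospital-side search flow above can be sketched roughly as below. This is a simplified stand-in for the project's actual module: the "AI algorithm" is reduced to plain string matching plus a permission filter, and the Patient class and searchByDisease method are hypothetical names.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Sketch of the hospital search: match patients by disease name, then keep
// only those who listed the querying hospital among their permitted parties.
public class PatientSearch {

    static class Patient {
        final String id, disease;
        final Set<String> permittedHospitals;
        Patient(String id, String disease, Set<String> permittedHospitals) {
            this.id = id;
            this.disease = disease;
            this.permittedHospitals = permittedHospitals;
        }
    }

    static List<String> searchByDisease(List<Patient> patients, String disease, String hospital) {
        List<String> result = new ArrayList<>();
        for (Patient p : patients) {
            // Disease match first, then the Blockchain-style permission check.
            if (p.disease.equalsIgnoreCase(disease) && p.permittedHospitals.contains(hospital)) {
                result.add(p.id);
            }
        }
        return result;
    }
}
```

In the real application the permission set would be read from the patient's Blockchain object rather than held in memory.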

CHAPTER 3

REQUIREMENT SPECIFICATIONS

Functional Requirements:

1. Blockchain Integration: The integration of a blockchain framework is a foundational requirement,
providing the system with a decentralized and tamper-resistant ledger for secure data storage. By
adopting a blockchain infrastructure, the system ensures the immutability of data and establishes a
transparent and secure environment for all transactions.

2. Smart Contracts: The system's capability to support the creation and execution of smart contracts is
crucial. Smart contracts automate security protocols, allowing the enforcement of predefined rules. This
feature significantly enhances the system's responsiveness to security events in real-time, contributing to a
dynamic and adaptive security architecture.
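
A smart contract in this sense pairs a predefined rule with an action that fires automatically when the rule holds. The toy Java analogue below is not real chaincode; RuleContract and its methods are hypothetical names chosen to show the shape: a reward is credited only while the access-quota rule is satisfied.

```java
import java.util.function.Predicate;

// Sketch of smart-contract-style rule enforcement: the contract couples a
// predefined condition with an automatic action, so no manual intervention
// decides whether the value exchange happens.
public class RuleContract {
    private final Predicate<Integer> rule; // e.g., access count still below quota
    private int rewardBalance = 0;

    RuleContract(Predicate<Integer> rule) {
        this.rule = rule;
    }

    // Execute one sharing event: the reward is credited iff the rule passes.
    boolean execute(int accessCountToday) {
        if (!rule.test(accessCountToday)) return false;
        rewardBalance += 1; // automatic value exchange on success
        return true;
    }

    int balance() {
        return rewardBalance;
    }
}
```

On a real blockchain the rule and balance would live on-chain (e.g., Fabric chaincode or a Solidity contract), making both the condition and the transfer tamper-proof.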

3. AI-Powered Threat Detection: The implementation of machine learning algorithms for AI-powered
threat detection is an essential functional requirement. These algorithms analyze historical data patterns,
identify anomalies, and adapt security measures to evolving cyber threats. The incorporation of AI-driven
threat detection enhances the system's ability to proactively identify and respond to emerging security
risks.
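
A minimal statistical stand-in for such threat detection is a deviation test against historical behavior: flag an observation that lies more than k standard deviations from the historical mean. A production system would use trained models; this sketch, with hypothetical names, only illustrates the anomaly-flagging step.

```java
// Sketch of anomaly detection on a behavioral metric (e.g., access requests
// per minute): compare a new observation against the mean and standard
// deviation of the history, and flag it if it deviates by more than k sigma.
public class AnomalyDetector {

    static boolean isAnomalous(double[] history, double observed, double k) {
        double mean = 0;
        for (double v : history) mean += v;
        mean /= history.length;

        double var = 0;
        for (double v : history) var += (v - mean) * (v - mean);
        double std = Math.sqrt(var / history.length);

        return Math.abs(observed - mean) > k * std;
    }
}
```

For example, with a stable history around 10 requests per minute, an observation of 50 is flagged while 10.5 is not.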

4. Predictive Analytics: The utilization of predictive analytics is a key requirement for the system. By
leveraging historical data to assess and mitigate potential security risks, the system adopts a proactive
approach to security. Predictive analytics enable the identification of patterns indicative of emerging
threats and vulnerabilities, empowering the system to pre-emptively address potential risks.

5. Decentralized Identity Management: Facilitating decentralized identity management is a pivotal
requirement. This feature empowers users to control and authenticate their identities securely. By
decentralizing identity management, the system mitigates the risk of identity theft and unauthorized
access, contributing to an enhanced level of overall system security.
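
Decentralized identity of this kind is commonly built on public-key cryptography: the user's public key acts as the identifier and authentication is a signature check, with no central identity database. The sketch below uses the JDK's standard ECDSA support; the class name SelfSovereignId is illustrative, not from the project.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.Signature;

// Sketch of decentralized identity: the user holds a key pair, the public key
// serves as the identifier, and proving identity means signing a challenge
// that anyone can verify -- no central identity provider involved.
public class SelfSovereignId {
    final KeyPair keys;

    SelfSovereignId() throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("EC");
        gen.initialize(256); // NIST P-256 curve
        keys = gen.generateKeyPair();
    }

    // Sign an authentication challenge with the private key.
    byte[] sign(byte[] msg) throws Exception {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initSign(keys.getPrivate());
        s.update(msg);
        return s.sign();
    }

    // Anyone holding the public identifier can verify the response.
    static boolean verify(PublicKey id, byte[] msg, byte[] sig) throws Exception {
        Signature s = Signature.getInstance("SHA256withECDSA");
        s.initVerify(id);
        s.update(msg);
        return s.verify(sig);
    }
}
```

Anchoring the public key (or its hash) on the blockchain is what makes the identifier tamper-resistant and globally resolvable.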

6. Encryption and Privacy Preservation: The robust implementation of encryption techniques for secure
data storage and AI-driven algorithms for privacy preservation is a critical requirement. Encryption
ensures the confidentiality of stored data, while AI-driven privacy measures add an extra layer of
protection against unauthorized access. This combination safeguards sensitive information from potential
security breaches.
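
The encryption-at-rest requirement can be illustrated with authenticated encryption (AES-GCM) from the standard JDK, as sketched below. Key management and the AI-driven privacy measures mentioned above are out of scope of this fragment; names like AtRestCrypto are hypothetical.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Arrays;

// Sketch of encryption at rest with AES-GCM: each record is encrypted under
// a fresh random IV, and GCM's authentication tag makes any tampering with
// the stored ciphertext detectable at decryption time.
public class AtRestCrypto {

    static byte[] encrypt(SecretKey key, byte[] plain) throws Exception {
        byte[] iv = new byte[12]; // 96-bit IV, the recommended size for GCM
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plain);
        byte[] out = new byte[iv.length + ct.length]; // store IV alongside ciphertext
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        byte[] iv = Arrays.copyOfRange(blob, 0, 12);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        return c.doFinal(Arrays.copyOfRange(blob, 12, blob.length)); // throws if tampered
    }
}
```

The hard problems in practice are key custody and rotation, which this fragment deliberately leaves out.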

7. Real-time Monitoring and Incident Response: The provision of real-time monitoring capabilities
powered by AI is an imperative functional requirement. This feature enables the immediate detection of
security incidents, allowing the system to trigger automated incident response protocols. Real-time
monitoring and automated responses significantly reduce the dwell time of potential threats, minimizing
the overall impact on data security.

Non-Functional Requirements:

1. Scalability: Ensuring the system's design allows for horizontal scalability is a paramount
non-functional requirement. This design characteristic is crucial to accommodate a growing volume of
data transactions and users over time. Scalability ensures the system's effectiveness and responsiveness
even as the user base and data load expand.

2. Performance: The system's performance is a critical non-functional requirement, encompassing
transaction speed, response time, and computational efficiency. The system must demonstrate high
performance levels to meet user expectations and provide a seamless experience. A performant system
enhances user satisfaction and ensures efficient data processing.

Software Requirements:
1. Java Development Kit (JDK): The system requires JDK for compiling and executing Java code.
2. Blockchain Libraries: Libraries such as Web3j or Hyperledger Fabric Java SDK for integrating Blockchain
functionality.
3. AI Libraries: TensorFlow or PyTorch for implementing machine learning algorithms.
4. Database Management System: MySQL or MongoDB for storing and managing data.
5. Integrated Development Environment (IDE): Eclipse, IntelliJ IDEA, or NetBeans for Java development.
6. Web Server: Apache Tomcat or similar server for hosting web-based components.

Hardware Requirements:
1. Processor: Multi-core processor with adequate processing power for executing AI algorithms and Blockchain
consensus mechanisms.
2. Memory: Sufficient RAM to support concurrent processing and memory-intensive operations.
3. Storage: Sizable storage space for storing Blockchain data, AI models, and system logs.
4. Network Interface: Reliable network connectivity for communication with Blockchain networks and data
sources.
5. Backup Systems: Backup storage and redundancy mechanisms to ensure data integrity and availability.
6. Scalability: Ability to scale hardware resources to accommodate increasing data and user loads.

These software and hardware requirements are essential for deploying and running the system effectively to secure
data using Blockchain and AI technologies in a Java environment.

Hardware Specifications:
Server Infrastructure:
The robustness of the server infrastructure is pivotal for the seamless integration of blockchain and AI components.
Optimal processing power, memory, storage, and network connectivity are crucial for ensuring the system's
responsiveness.
Servers should be equipped with multicore processors, such as Intel Xeon or AMD EPYC, to efficiently handle
concurrent transactions. A minimum of 32GB RAM is recommended to support the execution of blockchain nodes,
smart contracts, and AI algorithms simultaneously. High-speed SSDs with ample storage capacity, preferably 1TB
or more, are essential for storing blockchain data and AI models. Network connectivity should be high-speed and
redundant to facilitate smooth communication between nodes.
Edge Devices and Clients:
Edge devices and client systems, interacting with the blockchain network, should possess adequate processing
power, memory, and storage. These devices play a crucial role in ensuring a seamless user experience.
Devices should be equipped with modern CPUs, such as Intel Core i5 or equivalent, to handle blockchain
interactions efficiently. A minimum of 8GB RAM is recommended for smooth interaction with the blockchain
network. Storage space should be sufficient for client applications, with SSDs preferred for faster data access.
Software Specifications:
Operating System:
The choice of a robust and secure operating system is foundational for the proper functioning of the system.
Different components of the system may require specific operating systems for optimal performance.
For server infrastructure, Linux distributions such as Ubuntu Server or CentOS are recommended for their stability,
security features, and compatibility with Java-based applications. Edge devices and clients can run popular
operating systems like Windows, macOS, or Linux based on user preferences and application compatibility.
Blockchain Framework:
The blockchain framework serves as the core of the system, providing the decentralized and tamper-resistant
ledger. The selection of an appropriate blockchain framework is critical for the successful implementation of the
project.
For a Java-based project, options include Hyperledger Fabric and Ethereum with Java SDKs. Hyperledger Fabric's
Java SDK facilitates interactions with the blockchain network, smart contract creation, and transaction
management. Alternatively, Ethereum can be used with Java libraries like Web3j, enabling seamless integration
with Ethereum smart contracts.

Smart Contract Development:

Smart contracts, written in Java, are integral to the system. Development tools and frameworks supporting smart
contract creation need to be carefully chosen to streamline the development process.
For Hyperledger Fabric, the system can leverage the Java SDK for chaincode development, which enables
developers to write smart contracts in the Java programming language. Ethereum, on the other hand, requires the
use of Solidity for smart contract development, with tools like Web3j facilitating integration with Java.
AI Frameworks and Libraries:
The integration of AI capabilities into the system demands the use of Java-compatible frameworks and libraries.
TensorFlow and Deeplearning4j are prominent choices for building and training machine learning models within a
Java environment.
TensorFlow provides Java APIs for building and training machine learning models, allowing for seamless
integration into Java applications. Deeplearning4j, as a Java-based deep learning library, facilitates the
development of neural networks and machine learning models within the Java ecosystem.
Database Management System (DBMS):
Efficient storage and retrieval of data require the use of a reliable DBMS. The choice of DBMS depends on the
nature of the data and the specific requirements of the project.
MongoDB, a NoSQL database, is suitable for flexible and scalable storage of blockchain data, AI model
parameters, and related information. For relational data storage, PostgreSQL is a robust choice, providing ACID
compliance and excellent support for Java applications.
Web Application Framework:
The development of a user-friendly interface and interaction with blockchain and AI components necessitate the
use of a web application framework. Spring Boot, a popular Java-based framework, is recommended for its
simplicity and seamless integration capabilities.
Spring Boot simplifies the development of web applications, RESTful APIs, and microservices, providing easy
integration with the chosen database and blockchain components. Its versatility makes it well-suited for creating
responsive and scalable web interfaces.
Development Tools:
Selecting appropriate development tools is crucial for enhancing the efficiency of Java development throughout the
project lifecycle.
Integrated Development Environments (IDEs) like Eclipse or IntelliJ IDEA are widely used for Java development.
These IDEs offer features such as code completion, debugging, and project management, streamlining the
development process. Version control using Git ensures collaboration and tracking changes in the project's source
code.
Security Measures:
The implementation of robust security measures is paramount to safeguarding the integrity of the system. Several
security measures, including SSL certificates, firewalls, intrusion detection systems, and access control
mechanisms, should be integrated to ensure a secure environment.
SSL certificates enable secure data transmission between clients and servers by enabling HTTPS communication.
Firewalls and intrusion detection systems monitor and protect the network from potential threats, enhancing the
overall security posture. Access control mechanisms and user authentication mechanisms restrict unauthorized
access to sensitive data, adding layers of protection.

Feasibility study

A feasibility study serves as a crucial initial step in determining the viability and potential success of a proposed
project. In the context of developing a system to secure data with Blockchain and AI in Java, conducting a
thorough feasibility study is imperative to assess various aspects such as technical, economic, operational, and
scheduling feasibility. Below is an extensive analysis covering each of these dimensions:

Technical Feasibility:
The technical feasibility of the project revolves around evaluating whether the proposed system can be developed
using existing technology and resources. Given the availability of mature Blockchain and AI frameworks, such as
Hyperledger Fabric, TensorFlow, and PyTorch, it is technically feasible to implement the proposed system.
Additionally, the Java programming language provides robust support for integrating these technologies into a
cohesive solution. Furthermore, the presence of skilled Java developers and ample documentation and community
support enhances the technical feasibility of the project.

Economic Feasibility:
Economic feasibility entails assessing whether the proposed system is financially viable and offers a favorable
return on investment. Developing a system to secure data with Blockchain and AI involves upfront costs associated
with acquiring hardware, software licenses, and development resources. However, the potential benefits, such as
improved data security, enhanced decision-making capabilities, and operational efficiency, outweigh the initial
investment. Moreover, the cost-effectiveness of using open-source Blockchain and AI frameworks mitigates
financial risks and ensures long-term sustainability.

Operational Feasibility:
Operational feasibility examines whether the proposed system aligns with the organization's operational processes
and can be effectively integrated into existing workflows. Implementing a data security system based on
Blockchain and AI requires collaboration between various stakeholders, including IT personnel, data scientists, and
business users. Conducting thorough training and change management initiatives can facilitate the smooth adoption
of the new system. Additionally, establishing clear governance policies and compliance measures ensures that the
system complies with regulatory requirements and industry standards.

Scheduling Feasibility:
Scheduling feasibility evaluates whether the project can be completed within the specified time frame and aligns
with organizational objectives and priorities. Developing a secure data system with Blockchain and AI involves
multiple phases, including requirements gathering, system design, development, testing, and deployment. Creating
a detailed project plan with clear milestones, timelines, and resource allocations is essential for managing project
schedules effectively. Moreover, adopting agile development methodologies enables iterative development and
facilitates timely feedback and adjustments.

Conclusion:
The feasibility study indicates that developing a system to secure data with Blockchain and AI in Java is
technically, economically, operationally, and scheduling feasible. By leveraging existing technology, skilled
resources, and proven development practices, organizations can successfully implement the proposed system to
enhance data security, integrity, and reliability. However, it is essential to continuously monitor and evaluate
project progress to address any potential challenges and ensure project success.


CHAPTER 4

4.1 Design and Implementation Constraints

1. Blockchain Framework Limitations:


1.1 Smart Contract Complexity:
While smart contracts are integral to the system's functionality, their complexity can present a challenge. Designing
and implementing sophisticated smart contracts that encapsulate complex security protocols may result in
increased execution time and resource consumption. Striking a balance between functionality and efficiency is
crucial.
1.2 Scalability Challenges:
Blockchain networks, despite their decentralized nature, can face scalability challenges as the number of
transactions increases. Ensuring efficient scaling, especially in large-scale deployments, requires careful
consideration of consensus algorithms, network topology, and storage mechanisms. Balancing decentralization
with scalability is a constraint that demands strategic design decisions.
1.3 Transaction Speed:
The inherently distributed nature of blockchain introduces latency in transaction processing. Achieving real-time
transaction speed, especially in high-throughput scenarios, may be a constraint. Innovative consensus mechanisms
and optimization strategies must be implemented to address latency and enhance transaction processing speed.
2. AI Integration Challenges:
2.1 Training Data Availability:
The effectiveness of AI models heavily relies on the availability and quality of training data. In the context of
securing data, obtaining diverse and representative datasets for training machine learning models can be
challenging. Ensuring the AI system is robust and capable of generalizing from limited data sources is a design
constraint.
2.2 Computational Resources:
Training and deploying sophisticated AI models demand significant computational resources. The availability of
powerful hardware infrastructure, including GPUs or TPUs, is crucial for optimal AI performance. This constraint
may impact the accessibility and affordability of the solution for certain deployment environments.
2.3 Adversarial Attacks:
AI models, including those for threat detection, are susceptible to adversarial attacks. The design must consider
techniques to enhance the robustness of AI against adversarial attempts to manipulate or deceive the system.
Implementation constraints involve finding a balance between model accuracy and resilience to adversarial inputs.
3. Integration Challenges:
3.1 Interoperability:
Integrating diverse technologies, such as blockchain and AI frameworks, requires careful consideration of
interoperability challenges. Ensuring seamless communication between different components and addressing
compatibility issues may be a constraint. Standardization efforts and well-defined APIs can mitigate these
challenges.
3.2 Data Privacy Concerns:

The project involves handling sensitive data, and ensuring compliance with data privacy regulations is a critical
constraint. Designing systems that adhere to privacy laws, such as GDPR, may impose limitations on data storage,
processing, and sharing. Implementing robust encryption and access control mechanisms becomes essential to
address these constraints.
3.3 User Adoption and Education:
User adoption of a system that combines blockchain and AI technologies may face resistance due to unfamiliarity
or perceived complexity. Designing intuitive user interfaces and providing educational resources to users becomes
a crucial aspect of system implementation. Ensuring a user-friendly experience and clear documentation is
imperative to overcome adoption constraints.
4. Security Considerations:
4.1 Key Management:
The secure management of cryptographic keys used in blockchain transactions and AI model encryption is a
critical constraint. Designing a robust key management system to prevent unauthorized access and key compromise
is essential for maintaining the integrity and security of the entire system.
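The key-management concern above can be made concrete with a minimal JDK-only sketch: generating an EC key pair and signing and verifying a message via java.security. The class and method names are illustrative; a production system would also need secure key storage (e.g., a KeyStore or HSM) and key-rotation policies.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

// Minimal sketch of key generation and digital signing with the JDK's
// java.security API. Only the in-memory lifecycle is shown here.
public class KeySketch {
    public static boolean signAndVerify(String message) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("EC");
        gen.initialize(256);                       // P-256 curve
        KeyPair pair = gen.generateKeyPair();

        // Sign with the private key.
        Signature signer = Signature.getInstance("SHA256withECDSA");
        signer.initSign(pair.getPrivate());
        signer.update(message.getBytes(StandardCharsets.UTF_8));
        byte[] sig = signer.sign();

        // Verify with the public key.
        Signature verifier = Signature.getInstance("SHA256withECDSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message.getBytes(StandardCharsets.UTF_8));
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("signature valid: " + signAndVerify("tx-payload"));
    }
}
```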
4.2 Immutable Record Challenges:
While the immutability of blockchain is a strength, it can also be a constraint when errors or vulnerabilities are
discovered post-implementation. Designing mechanisms to address and rectify issues in an immutable record
without compromising the integrity of the entire system is a considerable challenge.
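A toy hash chain illustrates the immutability property discussed above: each block embeds the SHA-256 hash of its predecessor, so tampering with any earlier record invalidates every later link. All names here are illustrative; this is not the project's actual data structure.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Toy hash chain: each entry is "prevHash|data". Altering any earlier
// record breaks verification of every later link.
public class ChainSketch {
    static String sha256(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256")
                .digest(s.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    // Builds a chain whose first block links to a fixed genesis marker.
    static List<String> build(String... records) throws Exception {
        List<String> chain = new ArrayList<>();
        String prev = "GENESIS";
        for (String r : records) {
            String block = sha256(prev) + "|" + r;
            chain.add(block);
            prev = block;
        }
        return chain;
    }

    // Returns true only if every stored prevHash matches the real one.
    static boolean verify(List<String> chain) throws Exception {
        String prev = "GENESIS";
        for (String block : chain) {
            String[] parts = block.split("\\|", 2);
            if (!parts[0].equals(sha256(prev))) return false;
            prev = block;
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        List<String> chain = build("a", "b", "c");
        System.out.println("intact: " + verify(chain));
        chain.set(1, chain.get(1).replace("b", "B")); // tamper with a record
        System.out.println("tampered: " + verify(chain));
    }
}
```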
4.3 Regulatory Compliance:
Adhering to regulatory frameworks and compliance standards poses design and implementation constraints. The
integration of blockchain and AI must navigate legal frameworks governing data protection, financial transactions,
and emerging regulations in the field. Ensuring compliance without compromising the system's security and
functionality is a complex consideration.
5. Resource Allocation and Optimization:
5.1 Resource Efficiency:
Optimizing resource usage, both in terms of computational power and storage, is a constraint that affects the
system's efficiency. Designing algorithms and data structures that minimize resource requirements while
maximizing performance is crucial. Implementing strategies for efficient resource allocation, especially in
resource-constrained environments, becomes a significant consideration.
5.2 Energy Consumption:
The energy consumption associated with blockchain consensus mechanisms, especially in public blockchains, is a
well-known constraint. Balancing the environmental impact of energy-intensive proof-of-work mechanisms with
the security and decentralization they provide requires thoughtful design decisions. Exploring alternative consensus
mechanisms or hybrid approaches becomes essential to address this constraint.
6. Usability and Accessibility:
6.1 Accessibility Constraints:
Ensuring that the system is accessible to users with diverse needs, including those with disabilities, is a usability
constraint. Designing interfaces that adhere to accessibility standards and providing alternative means of
interaction is essential for inclusive usability.
6.2 Cross-Platform Compatibility:
Designing the system to be compatible with various platforms and devices adds complexity. Ensuring cross-
platform compatibility involves addressing differences in operating systems, screen sizes, and input methods.
Implementing responsive design and leveraging cross-platform development frameworks can alleviate this
constraint.

SYSTEM CONSTRAINTS

When considering the implementation of a system utilizing Blockchain and AI technologies, various constraints
must be taken into account to ensure successful deployment and operation. One significant constraint is the
computational overhead associated with both Blockchain and AI algorithms. Blockchain, by its nature, requires
substantial computational resources for consensus mechanisms, transaction verification, and data storage.
Similarly, AI algorithms, especially deep learning models, demand significant computational power for training
and inference tasks. These computational requirements may pose challenges for resource-constrained environments
or systems with limited processing capabilities.

Another constraint is the scalability of the system. While Blockchain offers decentralized and transparent data
storage, its scalability remains a concern, particularly in public Blockchain networks where transaction throughput
and latency can become bottlenecks as the network grows. Similarly, the scalability of AI algorithms, particularly
in real-time applications, may be limited by the size and complexity of the models and the availability of
computational resources.

Additionally, interoperability and integration with existing systems and frameworks pose constraints on system
development. Ensuring seamless interaction between Blockchain and AI components, as well as compatibility with
legacy systems and data formats, requires careful planning and design considerations. Furthermore, regulatory and
compliance constraints, such as data privacy regulations and industry standards, may impact the implementation
and deployment of the system, necessitating adherence to legal requirements and best practices.

Overall, addressing these constraints requires a comprehensive understanding of the technological, operational, and
regulatory aspects of the system, as well as careful planning and optimization to ensure optimal performance and
compliance with relevant standards and regulations.

Constraints in Analysis

The analysis phase of a complex project like "Securing Data with Blockchain and AI" is a critical stage where
various constraints shape the project's direction, influencing decision-making and setting boundaries for design and
implementation. These constraints emerge from diverse factors, including technological limitations, regulatory
considerations, and practical challenges inherent in the integration of blockchain and AI technologies. This section
explores the key constraints in the analysis phase that guide the subsequent development of the system.

1. Data Privacy and Regulatory Compliance:

1.1 GDPR and Data Protection Regulations:

Adhering to data privacy regulations, notably the General Data Protection Regulation (GDPR), poses a significant
constraint. The analysis must consider the implications of collecting, processing, and storing sensitive user data.
Ensuring compliance with GDPR's principles of data minimization, purpose limitation, and transparency becomes
a fundamental aspect of the project.

1.2 Cross-Border Data Transfer:

The project's global nature may encounter challenges related to cross-border data transfer regulations. Analysis
must address legal constraints associated with transferring data across jurisdictions and ensure that the system's
design aligns with international data protection laws.

1.3 Encryption and Decryption Processes:

Integrating robust encryption mechanisms into the system is essential for data security. However, the analysis phase must carefully consider the computational overhead and key-management challenges associated with encryption and decryption processes. Striking a balance between data security and system efficiency becomes a critical consideration.
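As a hedged sketch of the encryption machinery referred to above, the following Java example performs an AES-GCM encrypt/decrypt round trip with javax.crypto. Key handling is deliberately simplified (the key lives only in memory), and all identifiers are illustrative.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Authenticated encryption with AES-GCM: a fresh random IV is required
// per message; GCM also authenticates the ciphertext, so tampering is
// detected at decryption time.
public class AesSketch {
    public static String roundTrip(String plaintext) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[12];                  // 96-bit IV for GCM
        new SecureRandom().nextBytes(iv);
        GCMParameterSpec spec = new GCMParameterSpec(128, iv);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, spec);
        byte[] ct = enc.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, spec);
        return new String(dec.doFinal(ct), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("sensitive record"));
    }
}
```

The computational overhead mentioned above shows up directly in the extra cipher initialization and per-message IV generation.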

2. Technological Constraints:

2.1 Compatibility and Interoperability:

Analyzing the compatibility of diverse technologies, such as blockchain frameworks and AI libraries, presents a
considerable constraint. Ensuring seamless interoperability between these technologies is crucial for the project's
success. The analysis must identify potential integration challenges and formulate strategies to address them.

2.2 Resource Intensiveness:

The resource-intensive nature of both blockchain and AI technologies imposes constraints on the choice of
hardware infrastructure. The analysis phase must carefully evaluate the computational and storage requirements of
blockchain consensus mechanisms and AI model training. Optimizing resource usage without compromising
performance becomes a significant challenge.

2.3 Technology Stack Selection:

Choosing the right technology stack for the project is a constraint that requires careful analysis. The selection of
blockchain frameworks, AI libraries, and database management systems must align with the project's goals and
constraints. The analysis must consider factors such as developer expertise, community support, and the long-term
viability of chosen technologies.

3. Security and Resilience Constraints:

3.1 Vulnerability Analysis:

Conducting a thorough vulnerability analysis is a constraint that demands meticulous attention. Identifying
potential security vulnerabilities in both blockchain and AI components is crucial for preemptive risk mitigation.
The analysis must encompass aspects such as smart contract vulnerabilities, AI model robustness, and protection
against adversarial attacks.

3.2 Immutable Record Challenges:

The immutability of blockchain, while a strength, introduces challenges in addressing errors or vulnerabilities
discovered post-implementation. The analysis must explore strategies to rectify issues in an immutable record
without compromising the integrity of the entire system. Implementing mechanisms for secure updates and patches
becomes a vital consideration.

3.3 Key Management:

The secure management of cryptographic keys is a constraint that affects both blockchain transactions and AI
model encryption. The analysis phase must evaluate key management practices to prevent unauthorized access and
key compromise. Implementing secure key storage and distribution mechanisms becomes essential for maintaining
the integrity of the entire system.

4. Usability and User Adoption:

4.1 User Interface Design Constraints:

The project's success hinges on user adoption, and the analysis phase must address constraints related to user
interface design. Ensuring a user-friendly and intuitive interface while accommodating the complexity of
blockchain and AI functionalities presents a unique challenge. The analysis must consider strategies to streamline
user interactions and provide effective user education.

4.2 User Education and Training:

Overcoming user resistance to the adoption of advanced technologies like blockchain and AI is a constraint that
requires a comprehensive analysis. Designing effective user education and training programs becomes crucial for
ensuring that users understand the system's benefits and functionalities. The analysis must explore strategies to
facilitate a smooth onboarding process and ongoing user support.

5. Scalability and Performance Constraints:

5.1 Blockchain Scalability:

Scalability constraints in blockchain networks pose challenges to the system's performance. The analysis must
evaluate scalability solutions, such as sharding or sidechains, to ensure efficient transaction processing. Balancing
decentralization with scalability becomes a crucial aspect of the project's design considerations.
5.2 AI Model Training:

The analysis of AI integration must address constraints related to model training. The computational intensity of
training large AI models demands significant resources. Strategies for optimizing model architectures, exploring
transfer learning, or leveraging pre-trained models become essential considerations to overcome scalability
challenges in AI.

6. Legal and Ethical Constraints:

6.1 Smart Contract Legality:

Ensuring the legality of smart contracts, especially in contractual agreements, is a constraint that requires careful
legal analysis. The analysis phase must identify potential legal challenges associated with smart contract execution
and propose mechanisms to align smart contract functionality with existing legal frameworks.

6.2 Ethical AI Usage:

The ethical use of AI, including considerations of bias and fairness, imposes constraints on the analysis phase.
Evaluating potential biases in AI models and designing mechanisms for fair and ethical AI usage are crucial. The
analysis must explore ethical AI frameworks and guidelines to inform the project's design choices.

7. Operational and Maintenance Constraints:

7.1 Continuous Monitoring:

Continuous monitoring of the blockchain network and AI components is a constraint that demands analysis.
Establishing mechanisms for real-time monitoring and alerting is crucial for identifying security incidents
promptly. The analysis must consider tools and protocols for continuous monitoring to ensure the system's
resilience against emerging threats.

7.2 Upgrades and Patches:

The immutability of blockchain introduces constraints regarding system upgrades and patches. The analysis must
explore strategies for implementing updates without compromising the integrity of existing records. Designing
mechanisms for secure upgrades and patches becomes crucial for maintaining the system's relevance and security.

Conclusion:

The analysis phase of "Securing Data with Blockchain and AI" in Java involves navigating through a myriad of
constraints that shape the project's trajectory. From legal and regulatory considerations to technological challenges,
each constraint poses unique demands on the project's design and implementation. Addressing these constraints
requires a multidisciplinary approach, involving legal experts, technologists, and domain specialists. The
subsequent phases of the project will build upon the insights gained during the analysis, incorporating strategic
solutions to overcome these constraints and deliver a robust, secure, and effective system.

4.1.2 Constraints in Design for "Securing Data with Blockchain and AI" in Java

In the design phase of the "Securing Data with Blockchain and AI" project, specific constraints
shape the architectural decisions and guide the creation of a system that seamlessly integrates
blockchain and AI technologies. These constraints encompass various aspects, from technological
limitations to regulatory requirements, influencing the design choices and the overall system
architecture.

1. Technological Constraints:

1.1 Blockchain Framework Compatibility:

The selection of a specific blockchain framework imposes constraints on the overall system
design. Ensuring compatibility with the chosen blockchain framework, whether it's Hyperledger
Fabric or Ethereum, becomes pivotal. The design must align with the features and constraints
inherent in the selected blockchain technology, influencing how smart contracts are created,
executed, and managed.

1.2 AI Library Integration:

Integrating AI libraries such as TensorFlow or Deeplearning4j introduces constraints related to model deployment and execution within the Java environment. The design must accommodate the nuances of these libraries, considering their compatibility with Java, model serialization, and inference speed. Striking a balance between the chosen AI frameworks and the overall system architecture is essential for optimal performance.

1.3 Data Storage Mechanisms:

The design phase must contend with constraints related to data storage mechanisms. Choosing
between traditional relational databases and NoSQL databases like MongoDB introduces trade-
offs in terms of scalability, data retrieval speed, and schema flexibility. Designing a storage
solution that aligns with the system's needs and constraints is a crucial aspect of the overall
architecture.

2. Security and Privacy Constraints:

2.1 Smart Contract Security:

Smart contracts are integral to the system's security, but they come with their own set of
constraints. Ensuring the security of smart contracts against vulnerabilities such as reentrancy or
integer overflow requires careful design considerations. The architecture must include
mechanisms for thorough code audits, testing, and secure coding practices to mitigate these
constraints.
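The integer-overflow vulnerability mentioned above has a direct Java analogue. A defensive sketch using Math.addExact, which throws on overflow rather than silently wrapping around, might look like this; the balance/credit scenario is a made-up example:

```java
// Integer overflow is a classic smart-contract bug; the same defensive
// pattern in Java uses Math.addExact instead of unchecked '+'.
public class OverflowSketch {
    // Returns the new balance, rejecting any deposit that would overflow.
    public static long credit(long balance, long amount) {
        if (amount < 0) throw new IllegalArgumentException("negative amount");
        try {
            return Math.addExact(balance, amount);
        } catch (ArithmeticException e) {
            // Unchecked 'balance + amount' would silently wrap negative here.
            throw new IllegalStateException("balance overflow rejected");
        }
    }

    public static void main(String[] args) {
        System.out.println(credit(100, 50));
        try {
            credit(Long.MAX_VALUE, 1);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```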

2.2 Privacy-Preserving AI:

Incorporating privacy-preserving techniques into the design of AI models is a constraint that arises from the need to protect sensitive data. The architecture must integrate methods such as federated learning or homomorphic encryption to ensure that AI processes do not compromise user privacy. Balancing the utility of AI models with privacy constraints becomes a crucial aspect of the overall system design.

2.3 Key Management:

The design must address constraints related to key management for both blockchain transactions
and AI model encryption. Ensuring secure key generation, storage, and distribution mechanisms
becomes crucial for maintaining the confidentiality and integrity of data. The architecture must
incorporate robust key management practices to mitigate the constraints associated with
cryptographic keys.

3. Interoperability and Integration Constraints:

3.1 API Design for Interoperability:

Designing APIs for interoperability between different system components is a constraint that
arises from the need for seamless integration. The architecture must include well-defined APIs
that facilitate communication between blockchain and AI modules. Ensuring standardization and
compatibility between components is essential to overcome interoperability constraints.

3.2 Cross-Platform Compatibility:

The design must contend with constraints related to cross-platform compatibility. Ensuring that
the system works seamlessly across different operating systems, devices, and browsers requires
thoughtful architecture. Designing responsive interfaces and utilizing cross-platform development
frameworks becomes essential to address these constraints.

4. Scalability and Performance Constraints:

4.1 Blockchain Scalability:

Scalability constraints in blockchain networks necessitate careful consideration in the system design. The architecture must include strategies such as sharding or sidechains to ensure efficient transaction processing. Balancing decentralization with scalability constraints becomes a focal point in designing a system that can handle a growing user base and transaction volume.

4.2 AI Model Inference Speed:

Ensuring real-time AI model inference introduces constraints related to model size, complexity,
and computational requirements. The design must optimize AI model architectures for efficient
inference within the constraints of available hardware resources. Implementing techniques like
model quantization or edge computing becomes essential for addressing performance constraints.

5. Legal and Ethical Constraints:

5.1 Smart Contract Legality:


Ensuring the legality of smart contracts introduces constraints related to contractual agreements
and regulatory compliance. The design must align smart contract functionality with existing legal
frameworks, accounting for potential constraints imposed by contract law and industry-specific
regulations. Legal expertise becomes crucial in navigating these constraints.

5.2 Ethical AI Usage:

The design must incorporate mechanisms for ethical AI usage, addressing constraints related to
bias, fairness, and transparency. Implementing fairness-aware algorithms and designing AI
models that adhere to ethical guidelines becomes essential. The architecture must include checks
and balances to mitigate ethical constraints associated with AI deployment.

Constraints in Implementation

A hierarchical structuring of relations may result in more classes and a more complicated structure to implement. It is therefore advisable to transform the hierarchical relation structure into a simpler structure, such as a classical flat one. It is rather straightforward to transform the developed hierarchical model into a bipartite, flat model consisting of classes on the one hand and flat relations on the other. Flat relations are preferred at the design level for reasons of simplicity and ease of implementation. There is no identity or functionality associated with a flat relation; a flat relation corresponds to the relation concept of entity-relationship modeling and of many object-oriented methods.

4.2 Other Nonfunctional Requirements

4.2.1 Performance Requirements

The application on the client side controls and communicates with the following main components:

⮚ an embedded browser in charge of navigation and access to the web service;

⮚ Server Tier: the server side contains the main parts of the functionality of the proposed architecture. The components at this tier are the Web Server, Security Module, Server-Side Capturing Engine, Preprocessing Engine, Database System, Verification Engine, and Output Module.

4.2.2 Safety Requirements

1. The software may be safety-critical. If so, there are issues associated with its integrity level.
2. The software may not be safety-critical although it forms part of a safety-critical system. For example, software may simply log transactions.
3. If a system must be of a high integrity level and if the software is shown to be of that integrity level, then the hardware must be at least of the same integrity level.
4. There is little point in producing 'perfect' code in some language if the hardware and system software (in the widest sense) are not reliable.
5. If a computer system is to run software of a high integrity level, then that system should not at the same time accommodate software of a lower integrity level.
6. Systems with different requirements for safety levels must be separated.
7. Otherwise, the highest level of integrity required must be applied to all systems in the same environment.

CHAPTER 5

5.1 Architecture Diagram:

5.2 Sequence Diagram:

A sequence diagram is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, or timing diagrams.

5.3 Use Case Diagram:

Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of software engineering. The standard is managed, and was created, by the Object Management Group. UML includes a set of graphic notation techniques to create visual models of software-intensive systems. This language is used to specify, visualize, modify, construct, and document the artifacts of an object-oriented software-intensive system under development.

5.3.1. USE CASE DIAGRAM

A use case diagram is used to present a graphical overview of the functionality provided by a system in terms of actors, their goals, and any dependencies between those use cases. A use case diagram consists of two parts:

Use case: A use case describes a sequence of actions that provides something of measurable value to an actor, and is drawn as a horizontal ellipse.

Actor: An actor is a person, organization, or external system that plays a role in one or more interactions with the system.

5.4 Activity Diagram:

An activity diagram is a graphical representation of workflows of stepwise activities and actions, with support for choice, iteration, and concurrency. An activity diagram shows the overall flow of control.
The most important shape types:

● Rounded rectangles represent activities.

● Diamonds represent decisions.


● Bars represent the start or end of concurrent activities.

● A black circle represents the start of the workflow.

● An encircled circle represents the end of the workflow.

5.5 Collaboration Diagram:


UML collaboration diagrams illustrate the relationships and interactions between software objects. They require use cases, system operation contracts, and a domain model to already exist. The collaboration diagram illustrates messages being sent between classes and objects.

CHAPTER 6

6.1 MODULES

⮚ Dataset collection

⮚ Machine Learning Algorithm

⮚ Prediction

6.2 MODULE EXPLANATION:

6.2.1 Dataset collection:

The dataset is collected from kaggle.com. It contains fields such as gender, marital status, self-employment status, and monthly income. The dataset also records whether each previous loan was approved or not, based on the customer's information. This data is preprocessed and passed on to the next step.

6.2.2 Machine Learning Algorithm:

In this stage, the collected data is given to the machine learning algorithms for the training process. We use multiple algorithms to obtain a high prediction accuracy. The preprocessed dataset is run through several machine learning algorithms, each of which yields an accuracy score, and the results are compared.

✔ Logistic Regression

✔ K-Nearest Neighbors

✔ Decision Tree Classifier

6.2.3 Prediction:

The preprocessed data is used for training, and input supplied by the user is evaluated against the trained model. The trained Logistic Regression model is used to predict whether the loan for a particular applicant should be approved or not.
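At prediction time, a trained logistic regression model reduces to a weighted sum followed by a sigmoid and a threshold. The sketch below illustrates only that mechanism; the weights, bias, and feature encoding are hypothetical placeholders, not values from the actual training run.

```java
// Hypothetical sketch of the logistic-regression prediction step.
public class LoanPredictor {
    // Made-up weights and bias; a real model would learn these from data.
    static final double[] WEIGHTS = {0.8, -0.5, 1.2};
    static final double BIAS = -0.3;

    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // Features might encode income, credit history, etc., scaled to 0..1.
    public static boolean approve(double[] features) {
        double z = BIAS;
        for (int i = 0; i < WEIGHTS.length; i++) {
            z += WEIGHTS[i] * features[i]; // weighted sum of features
        }
        return sigmoid(z) >= 0.5;          // threshold the probability
    }

    public static void main(String[] args) {
        System.out.println(approve(new double[]{0.9, 0.1, 0.8}));
        System.out.println(approve(new double[]{0.1, 0.9, 0.0}));
    }
}
```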
CHAPTER 7

CODING AND TESTING

7.1 CODING

Once the design of the system is finalized, the system enters the coding and testing phase. The coding phase brings the actual system into action by converting the design of the system into code in a given programming language. A good coding style must therefore be adopted so that, whenever changes are required, they can easily be incorporated into the system.
7.2 CODING STANDARDS

Coding standards are guidelines for programming that focus on the physical structure and appearance of the program. They make the code easier to read, understand, and maintain. This phase of the system actually implements the blueprint developed during the design phase. The coding should be specified in such a way that any programmer is able to understand the code and can make changes whenever necessary. Programs should be simple, clear, and easy to understand. Some of the standards needed to achieve the above-mentioned objectives are as follows:

⮚ Naming conventions

⮚ Value conventions

⮚ Script and comment procedure

⮚ Message box format

⮚ Exception and error handling
7.2.1 NAMING CONVENTIONS

Naming conventions for classes, data members, member functions, procedures, etc., should be self-descriptive; the meaning and scope of a variable should be evident from its name. These conventions are adopted so that the intended meaning is easily understood, and it is customary to follow them. The conventions are as follows:

Class names

Class names correspond to problem-domain entities; they begin with a capital letter and use mixed case.

Member function and data member names

Member function and data member names begin with a lowercase letter, with the first letter of each subsequent word in uppercase and the rest of the letters in lowercase.
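A small illustrative class following these conventions (class name capitalized with mixed case, data members and member functions in camelCase); the names are invented for demonstration:

```java
// Illustrative naming conventions: PascalCase class, camelCase members.
public class LoanAccount {                  // class name: capital, mixed case
    private double outstandingBalance;      // data member: camelCase

    public LoanAccount(double openingBalance) {
        this.outstandingBalance = openingBalance;
    }

    public double getOutstandingBalance() { // member function: camelCase
        return outstandingBalance;
    }
}
```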
7.2.2 VALUE CONVENTIONS

Value conventions ensure that variables hold proper values at any point in time.

This involves the following:

⮚ Proper default values for the variables.

⮚ Proper validation of values in the field.

⮚ Proper documentation of flag values.

7.2.3 SCRIPT WRITING AND COMMENTING STANDARD

Script writing is an art in which indentation is of utmost importance. Conditional and looping statements must be properly aligned to facilitate easy understanding. Comments are included to minimize the number of surprises that could occur when going through the code.

7.2.4 MESSAGE BOX FORMAT :

When a message is prompted to the user, it must be clearly understandable. To achieve this, a specific format has been adopted for displaying messages to the user, as follows:
⮚ X – User has performed illegal operation.

⮚ ! – Information to the user.

7.3 TEST PROCEDURE


SYSTEM TESTING
Testing is performed to identify errors and is used for quality assurance. Testing is an integral part of the entire development and maintenance process. The goal of testing during this phase is to verify that the specification has been accurately and completely incorporated into the design, as well as to ensure the correctness of the design itself. For example, any logic fault in the design must be detected before coding commences; otherwise, the cost of fixing the fault later will be considerably higher. Detection of design faults can be achieved by means of inspections as well as walkthroughs.

Testing is one of the important steps in the software development phase. Testing checks for errors; for the project as a whole, it involves the following test cases:

⮚ Static analysis is used to investigate the structural properties of the Source code.

⮚ Dynamic testing is used to investigate the behavior of the source code by executing
the program on the test data.

7.4 TEST DATA AND OUTPUT

7.4.1 UNIT TESTING

Unit testing is conducted to verify the functional performance of each modular component of the software. Unit testing focuses on the smallest unit of the software design, i.e., the module. White-box testing techniques were heavily employed for unit testing.
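A minimal sketch of what such a unit test might look like for a single module-level method; the validator, its rule, and the chosen inputs are all hypothetical, and a real project would typically use JUnit rather than a plain main method:

```java
// Hypothetical unit under test and its white-box unit test in one class.
public class ValidatorTest {
    // Unit under test: validates a roll number (6-12 uppercase letters/digits).
    static boolean isValidRollNo(String rollNo) {
        return rollNo != null && rollNo.matches("[A-Z0-9]{6,12}");
    }

    // Tiny assertion helper so the test fails loudly without a framework.
    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("failed: " + name);
        System.out.println("passed: " + name);
    }

    public static void main(String[] args) {
        // Nominal, boundary, and special values, as in functional testing.
        check(isValidRollNo("MCA123"), "nominal 6-char input");
        check(isValidRollNo("ABCDEF123456"), "upper boundary (12 chars)");
        check(!isValidRollNo("abc12"), "too short / lowercase rejected");
        check(!isValidRollNo(null), "null rejected");
    }
}
```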

7.4.2 FUNCTIONAL TESTS

Functional test cases involve exercising the code with nominal input values for which the expected results are known, as well as boundary values and special values, such as logically related inputs, files of identical elements, and empty files.
Functional testing includes three types of tests:

⮚ Performance Test

⮚ Stress Test

⮚ Structure Test

7.4.3 PERFORMANCE TEST

It determines the amount of execution time spent in various parts of the unit, program throughput, response time, and device utilization by the program unit.
7.4.4 STRESS TEST

Stress tests are designed to intentionally break the unit. A great deal can be learned about the strengths and limitations of a program by examining the manner in which a program unit breaks.
7.4.5 STRUCTURED TEST

Structure tests are concerned with exercising the internal logic of a program and traversing particular execution paths. A white-box test strategy was employed to ensure that the test cases guarantee that all independent paths within a module have been exercised at least once, and to:

⮚ Exercise all logical decisions on their true and false sides.

⮚ Execute all loops at their boundaries and within their operational bounds.

⮚ Exercise internal data structures to assure their validity.

⮚ Check attributes for their correctness.

⮚ Handle end-of-file conditions, I/O errors, buffer problems, and textual errors in output information.
7.4.6 INTEGRATION TESTING

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing; i.e., integration testing is the complete testing of the set of modules that make up the product. The objective is to take unit-tested modules and build a program structure. The tester should identify critical modules, and critical modules should be tested as early as possible. One approach is to wait until all the units have passed testing, and then combine and test them together; this approach evolved from the unstructured testing of small programs. Another strategy is to construct the product in increments of tested units: a small set of modules is integrated and tested, another module is added and tested in combination, and so on. The advantage of this approach is that interface discrepancies can be found and corrected easily.
The major error faced during the project was a linking error: when all the modules were combined, the links to the supporting files were not set properly. We then checked the interconnections and the links, so errors were localized to the new module and its intercommunications. Product development can be staged, with modules integrated as they complete unit testing. Testing is completed when the last module is integrated and tested.
7.5 TESTING TECHNIQUES / TESTING STRATEGIES

7.5.1 TESTING

Testing is a process of executing a program with the intent of finding an error. A good test case is one that has a high probability of finding an as-yet-undiscovered error, and a successful test is one that uncovers such an error. System testing is the stage of implementation aimed at ensuring that the system works accurately and efficiently as expected before live operation commences. It verifies that the whole set of programs hangs together. System testing consists of several key activities and steps, program, string, and system testing, and it is important in adopting a successful new system. This is the last chance to detect and correct errors before the system is installed for user acceptance testing.
The software testing process commences once the program is created and the documentation and related data structures are designed. Software testing is essential for correcting errors; without it, the program or the project cannot be said to be complete. Software testing is the critical element of software quality assurance and represents the ultimate review of specification, design, and coding. Any engineering product can be tested in one of two ways:
7.5.1.1 WHITE BOX TESTING

White-box testing is also called glass-box testing. Knowing the internal operation of a product, tests can be conducted to ensure that "all gears mesh", that is, that the internal operation performs according to specification and all internal components have been adequately exercised. It is a test case design method that uses the control structure of the procedural design to derive test cases. Basis path testing is a white-box testing technique.
Basis path testing:

⮚ Flow graph notation

⮚ Cyclomatic complexity

⮚ Deriving test cases

⮚ Graph matrices
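Basis path testing can be illustrated on a small method. The hypothetical classify() below contains two binary decisions, giving a cyclomatic complexity of 3, so three test cases suffice to cover all independent paths. The method and its thresholds are invented for this sketch and are not part of the project code.

```java
public class BasisPathSketch {
    // Two predicates => cyclomatic complexity V(G) = 2 + 1 = 3,
    // hence three independent paths to exercise.
    static String classify(int score) {
        if (score < 0) return "invalid";   // path 1
        if (score >= 50) return "pass";    // path 2
        return "fail";                     // path 3
    }

    public static void main(String[] args) {
        // One test case per independent path.
        System.out.println(classify(-1));  // exercises path 1
        System.out.println(classify(75));  // exercises path 2
        System.out.println(classify(10));  // exercises path 3
    }
}
```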

7.5.1.2 BLACK BOX TESTING


Knowing the specified functions that a product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational while simultaneously searching for errors in each function. Black-box testing fundamentally focuses on the functional requirements of the software.
The steps involved in black box test case design are:

⮚ Graph based testing methods

⮚ Equivalence partitioning
⮚ Boundary value analysis

⮚ Comparison testing
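Equivalence partitioning and boundary value analysis can be sketched on a hypothetical age validator (the valid range of 18 to 60 is an assumption made for this example): one representative value is drawn from each equivalence class, plus values at and just outside each boundary.

```java
public class BlackBoxSketch {
    // Hypothetical unit under test: valid ages are 18..60 inclusive.
    static boolean isValidAge(int age) {
        return age >= 18 && age <= 60;
    }

    public static void main(String[] args) {
        // Equivalence partitioning: one representative per class.
        int[] partitions = {5, 40, 90};        // below, inside, above the range
        // Boundary value analysis: values at and around each edge.
        int[] boundaries = {17, 18, 60, 61};
        for (int v : partitions) System.out.println(v + " -> " + isValidAge(v));
        for (int v : boundaries) System.out.println(v + " -> " + isValidAge(v));
    }
}
```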

7.5.2 SOFTWARE TESTING STRATEGIES:

A software testing strategy provides a road map for the software developer. Testing is a set of activities that can be planned in advance and conducted systematically. For this reason, a template for software testing, a set of steps into which specific test case design methods can be placed, should be defined. A testing strategy should have the following characteristics:

⮚ Testing begins at the module level and works “outward” toward the integration
of the entire computer based system.

⮚ Different testing techniques are appropriate at different points in time.

⮚ The developer of the software and an independent test group conducts testing.

⮚ Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.

7.5.2.1 INTEGRATION TESTING:

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. Individual modules, which are highly prone to interface errors, should not be assumed to work correctly the moment they are put together. The problem, of course, is "putting them together", that is, interfacing: data can be lost across an interface; sub-functions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; and global data structures can present problems.

7.5.2.2 PROGRAM TESTING:

Program testing points out logical and syntax errors. A syntax error is an error in a program statement that violates one or more rules of the language in which it is written; improperly defined field dimensions and omitted keywords are common syntax errors, and they are reported through error messages generated by the compiler. A logic error, on the other hand, deals with incorrect data fields, out-of-range items, and invalid combinations. Since the compiler will not detect logical errors, the programmer must examine the output. Condition testing exercises the logical conditions contained in a module; the possible types of elements in a condition include a Boolean operator, a Boolean variable, a pair of Boolean parentheses, a relational operator, or an arithmetic expression. The condition testing method focuses on testing each condition in the program; its purpose is to detect not only errors in the conditions of a program but also other errors in the program.
7.5.2.3 SECURITY TESTING:

Security testing attempts to verify that the protection mechanisms built into a system will, in fact, protect it from improper penetration. The system's security must be tested for invulnerability from frontal attack and also for invulnerability from rear attack. During security testing, the tester plays the role of an individual who desires to penetrate the system.

7.5.2.4 VALIDATION TESTING

At the culmination of integration testing, the software is completely assembled as a package. Interfacing errors have been uncovered and corrected, and a final series of software tests, validation testing, begins. Validation testing can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that is reasonably expected by the customer. Software validation is achieved through a series of black-box tests that demonstrate conformity with requirements. After a validation test has been conducted, one of two conditions exists:
* The function or performance characteristics conform to specifications and are accepted.

* A deviation from specification is uncovered and a deficiency list is created.


Deviations or errors discovered at this step were corrected prior to the completion of the project, with the help of the user, by negotiating a method for resolving deficiencies. Thus the proposed system under consideration has been tested using validation testing and found to be working satisfactorily. Though there were deficiencies in the system, they were not catastrophic.
7.5.2.5 USER ACCEPTANCE TESTING
User acceptance of the system is a key factor in the success of any system. The system under consideration was tested for user acceptance by constantly keeping in touch with prospective system users at the time of developing it and making changes whenever required. This was done with regard to the following points:

● Input screen design.

● Output screen design.

FUTURE ENHANCEMENTS

Future enhancements for the system aimed at securing data with Blockchain and AI in Java can significantly
contribute to its effectiveness, scalability, and adaptability. These enhancements encompass various aspects,
including technological advancements, integration with emerging technologies, and expansion of functionalities.
Below are extensive considerations for potential future enhancements:

1. Integration with Advanced AI Algorithms:


To enhance the system's predictive capabilities and decision-making processes, integrating advanced AI
algorithms, such as reinforcement learning, natural language processing (NLP), and deep reinforcement learning
(DRL), can be beneficial. These algorithms can enable the system to analyze and interpret complex data patterns,
extract actionable insights, and automate decision-making processes more effectively.

2. Enhanced Blockchain Consensus Mechanisms:


Leveraging alternative consensus mechanisms, such as proof-of-stake (PoS), delegated proof-of-stake (DPoS),
and practical Byzantine fault tolerance (PBFT), can improve the scalability, efficiency, and security of the
Blockchain network. By implementing these consensus mechanisms, the system can achieve higher transaction
throughput, reduce energy consumption, and enhance network resilience, thereby accommodating the growing
demands of data processing and transaction validation.

3. Interoperability with Other Blockchain Networks:


Facilitating interoperability between different Blockchain networks, protocols, and platforms can enable
seamless data exchange and collaboration across disparate systems. Implementing interoperability standards, such
as the Interledger Protocol (ILP) and Atomic Swaps, can enable the system to communicate with external
Blockchain networks and facilitate cross-chain transactions, asset transfers, and data sharing, thereby expanding
the scope and reach of the system's functionalities.

4. Integration with IoT Devices and Edge Computing:


Integrating the system with Internet of Things (IoT) devices and edge computing infrastructure can enhance data
collection, processing, and analysis capabilities at the network edge. By deploying lightweight AI models and
Blockchain nodes on IoT devices and edge servers, the system can perform real-time data analytics, anomaly
detection, and decision-making at the edge, thereby reducing latency, bandwidth usage, and reliance on centralized
data processing facilities.

5. Implementation of Privacy-Preserving Techniques:


Incorporating privacy-preserving techniques, such as homomorphic encryption, zero-knowledge proofs, and
differential privacy, can enhance data privacy and confidentiality within the system. These techniques enable
secure computation and data sharing while preserving the privacy of sensitive information, ensuring compliance
with data protection regulations, such as GDPR and HIPAA, and building trust among users and stakeholders.
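A minimal related primitive that can be sketched in plain Java is a salted SHA-256 commitment, which lets a party prove later that a value was fixed in advance without revealing it up front. This is far simpler than the homomorphic encryption and zero-knowledge techniques named above, and the helper names below are illustrative, not part of the project code.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class CommitmentSketch {
    // Commit to a secret by hashing it together with a random salt.
    static String commit(String secret, byte[] salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);
            md.update(secret.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(md.digest());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Verification recomputes the digest from the revealed secret and salt.
    static boolean verify(String commitment, String secret, byte[] salt) {
        return commitment.equals(commit(secret, salt));
    }

    public static void main(String[] args) {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        String c = commit("patient-record-42", salt);
        System.out.println("opens correctly: " + verify(c, "patient-record-42", salt));
        System.out.println("rejects wrong value: " + !verify(c, "patient-record-43", salt));
    }
}
```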

6. Adoption of Quantum-Safe Cryptography:


With the emergence of quantum computing technologies, ensuring the long-term security and resilience of the
system against quantum threats is essential. Adopting quantum-safe cryptography algorithms, such as lattice-based
cryptography, hash-based cryptography, and multivariate cryptography, can safeguard the system's cryptographic
protocols and data integrity against quantum attacks, ensuring long-term security and sustainability.

7. Continuous Monitoring and Threat Intelligence:


Implementing robust monitoring, logging, and threat intelligence mechanisms can enhance the system's resilience

against evolving cyber threats and vulnerabilities. By continuously monitoring network activities, detecting
anomalies, and correlating security events, the system can proactively identify and mitigate potential security
breaches, unauthorized access attempts, and data manipulation incidents, thereby safeguarding critical assets and
ensuring business continuity.

8. Scalability and Performance Optimization:


Optimizing the system's scalability, performance, and resource utilization is essential to accommodate the
increasing volume, velocity, and variety of data generated and processed by the system. Implementing horizontal
and vertical scaling strategies, load balancing techniques, and performance tuning mechanisms can ensure optimal
resource allocation, minimize latency, and improve overall system efficiency and responsiveness, enabling
seamless scalability and handling of peak workloads.

In conclusion, implementing these future enhancements can empower the system aimed at securing data with
Blockchain and AI in Java to adapt to evolving technological trends, address emerging challenges, and unlock new
opportunities for innovation and growth. By embracing continuous improvement and staying abreast of
technological advancements, organizations can leverage the full potential of the system to achieve their strategic
objectives and drive digital transformation initiatives effectively.

CONCLUSION

The development of a robust system for securing data with Blockchain and AI in Java represents a significant step towards addressing the evolving challenges of data security, privacy, and integrity in modern digital environments. Through the integration of cutting-edge technologies, innovative approaches, and robust security measures, the system aims to provide organizations with the tools and capabilities needed to safeguard their critical assets, mitigate cyber threats, and maintain regulatory compliance.
The system's utilization of Blockchain technology offers several distinct advantages, including immutability,
transparency, and decentralization, which enhance data integrity, auditability, and trustworthiness. By leveraging
Blockchain's distributed ledger technology, the system ensures that data transactions are securely recorded, tamper-
resistant, and verifiable, thereby reducing the risk of data manipulation, unauthorized access, and fraudulent
activities.

Additionally, the incorporation of Artificial Intelligence (AI) enables the system to enhance its threat detection
capabilities, automate security operations, and adapt to dynamic cyber threats in real-time. By leveraging AI-driven
analytics, machine learning algorithms, and predictive modeling techniques, the system can identify anomalous
behavior, detect emerging threats, and proactively respond to security incidents, thereby strengthening the
organization's cyber resilience and incident response capabilities.

Furthermore, the system's focus on interoperability, scalability, and performance optimization ensures seamless
integration with existing IT infrastructure, accommodating evolving business requirements, and scaling to meet the
growing demands of data processing and transaction validation. By adopting open standards, modular
architectures, and cloud-native design principles, the system provides organizations with the flexibility, agility, and
scalability needed to adapt to changing business environments and technological landscapes.

Moreover, the continuous monitoring, threat intelligence, and compliance management capabilities embedded
within the system enable organizations to maintain a proactive security posture, mitigate risks, and demonstrate
regulatory compliance effectively. By integrating security information and event management (SIEM) capabilities,
threat intelligence feeds, and compliance frameworks, the system empowers organizations to identify, assess, and
respond to security threats and vulnerabilities promptly, ensuring the confidentiality, integrity, and availability of
sensitive data assets.

In conclusion, the development and implementation of a comprehensive system for securing data with Blockchain
and AI in Java represent a critical imperative for organizations seeking to safeguard their digital assets, mitigate
cyber risks, and maintain trust and confidence among stakeholders. By embracing innovative technologies,
adopting best practices, and fostering a culture of security awareness and vigilance, organizations can strengthen
their resilience against evolving cyber threats and position themselves for long-term success in today's digital-first
world.

APPENDICES
A. DATA DICTIONARY

A data dictionary is a comprehensive documentation that provides detailed descriptions of the data elements,
attributes, and relationships within a database or information system. It serves as a valuable reference guide for
understanding the structure, organization, and meaning of the data stored within the system. In the context of the
proposed system for securing data with Blockchain and AI in Java, the data dictionary plays a crucial role in
facilitating data management, analysis, and interpretation. Below is a detailed overview of the data dictionary for
the system:

1. User Profile:
- Description: This data element includes information about the users of the system, including their usernames,
passwords, roles, and access privileges.
- Attributes:
- Username: Unique identifier for each user.
- Password: Securely encrypted password for user authentication.
- Role: Specifies the role or permission level of the user (e.g., administrator, standard user).
- Access Privileges: Defines the specific actions or functions that the user is authorized to perform within the
system.

2. Blockchain Transactions:
- Description: This data element captures all transactions recorded on the Blockchain, including details such as
transaction IDs, timestamps, sender/receiver addresses, and transaction amounts.
- Attributes:
- Transaction ID: Unique identifier for each transaction.
- Timestamp: Date and time when the transaction was initiated.
- Sender Address: Blockchain address of the sender.
- Receiver Address: Blockchain address of the recipient.
- Transaction Amount: Quantity or value of assets transferred in the transaction.
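The attributes above might map onto a plain Java value class as follows; the class and accessor names are illustrative, not part of the project code.

```java
import java.time.Instant;

// Value class mirroring the "Blockchain Transactions" data dictionary entry.
public class TransactionRecord {
    private final String transactionId;    // Transaction ID
    private final Instant timestamp;       // Timestamp
    private final String senderAddress;    // Sender Address
    private final String receiverAddress;  // Receiver Address
    private final double amount;           // Transaction Amount

    public TransactionRecord(String transactionId, Instant timestamp,
                             String senderAddress, String receiverAddress,
                             double amount) {
        this.transactionId = transactionId;
        this.timestamp = timestamp;
        this.senderAddress = senderAddress;
        this.receiverAddress = receiverAddress;
        this.amount = amount;
    }

    public String getTransactionId()   { return transactionId; }
    public Instant getTimestamp()      { return timestamp; }
    public String getSenderAddress()   { return senderAddress; }
    public String getReceiverAddress() { return receiverAddress; }
    public double getAmount()          { return amount; }
}
```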

3. Smart Contracts:
- Description: This data element represents the smart contracts deployed on the Blockchain, which contain
programmable logic for executing predefined actions or conditions autonomously.
- Attributes:
- Contract ID: Unique identifier for each smart contract.
- Contract Name: Descriptive name or label for the smart contract.
- Contract Address: Blockchain address where the smart contract is deployed.
- Contract Source Code: Code snippet or bytecode defining the logic and rules of the smart contract.

4. Security Events:
- Description: This data element logs security-related events and incidents detected by the system, such as
unauthorized access attempts, malware infections, or suspicious activities.
- Attributes:
- Event ID: Unique identifier for each security event.
- Event Type: Classification of the security event (e.g., intrusion attempt, data breach).
- Event Timestamp: Date and time when the event occurred.
- Event Description: Detailed description of the event, including relevant context and implications.

5. Machine Learning Models:


- Description: This data element encompasses the machine learning models used by the system for threat
detection, anomaly detection, and predictive analytics.
- Attributes:
- Model ID: Unique identifier for each machine learning model.
- Model Name: Descriptive name or label for the model.
- Model Type: Classification of the model (e.g., supervised learning, unsupervised learning).
- Model Parameters: Configuration settings and hyperparameters defining the behavior and performance of the
model.

6. Audit Logs:
- Description: This data element contains a chronological record of all system activities, including user
interactions, configuration changes, and administrative actions.
- Attributes:
- Log ID: Unique identifier for each log entry.
- Log Timestamp: Date and time when the log entry was generated.
- Log Type: Classification of the log entry (e.g., user login, system error).
- Log Details: Detailed information about the event or activity recorded in the log.

7. Compliance Regulations:
- Description: This data element encompasses regulatory requirements, industry standards, and internal policies
governing data security, privacy, and compliance.
- Attributes:
- Regulation ID: Unique identifier for each compliance regulation.
- Regulation Name: Name or title of the regulation (e.g., GDPR, HIPAA, PCI DSS).
- Regulatory Requirements: Specific rules, guidelines, or obligations imposed by the regulation.
- Compliance Status: Indicates whether the organization is compliant with the regulation (e.g., compliant, non-compliant, in progress).

8. System Configuration:
- Description: This data element captures the configuration settings and parameters of the system, including
network settings, encryption algorithms, and access controls.
- Attributes:
- Configuration ID: Unique identifier for each configuration profile.
- Configuration Name: Descriptive label for the configuration profile.
- Configuration Parameters: Settings and parameters defining the behavior and functionality of the system.
- Configuration Value: Specific values assigned to each configuration parameter.

This comprehensive data dictionary provides a structured overview of the data elements, attributes, and
relationships within the proposed system for securing data with Blockchain and AI in Java. By documenting these
key components, stakeholders can gain a deeper understanding of the system's data assets, facilitate data
management and governance, and ensure consistency and accuracy in data processing and analysis.

B. OPERATIONAL MANUAL

The operational manual for the system titled "Securing Data with Blockchain and AI in Java" serves as a
comprehensive guide for users and administrators on how to effectively operate and manage the system. This
manual outlines the necessary steps and procedures for installing, configuring, and utilizing the system to ensure
optimal performance and security. Below is a detailed overview of the operational manual:

1. System Installation:
- To install the system, users must first download the installation package from the designated repository or
source.
- Once downloaded, users should extract the contents of the package to a designated directory on their local
machine or server.
- Users can then execute the installation script included in the package to initiate the installation process.
- During installation, users will be prompted to specify configuration settings such as database credentials,
network parameters, and security options.
- Upon successful installation, users can proceed to configure the system according to their specific requirements.

2. System Configuration:
- After installation, users must configure the system settings to customize its behavior and functionality.
- Configuration options may include setting up user accounts and access privileges, defining data retention
policies, and configuring integration with external systems.
- Users can access the system's configuration interface through a web-based administration console or command-
line interface.
- Configuration changes should be carefully reviewed and tested to ensure compatibility and compliance with
organizational policies and requirements.

3. User Management:
- The system includes functionality for managing user accounts, roles, and permissions.
- Administrators have the authority to create, modify, and delete user accounts, as well as assign roles and access
privileges.
- Users should adhere to best practices for password management and account security, including the use of
strong, unique passwords and regular password updates.
- Access to sensitive system features and data should be restricted to authorized personnel only, with appropriate
authentication mechanisms in place to verify user identities.

4. Data Protection:
- The system employs advanced encryption techniques and cryptographic algorithms to safeguard sensitive data
from unauthorized access and tampering.
- Users should adhere to data protection policies and guidelines when handling and processing confidential
information within the system.
- Data backups should be performed regularly to prevent data loss in the event of system failures or security
incidents.
- Administrators are responsible for monitoring data access and usage patterns to detect and mitigate potential
security threats or breaches.
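One common way to realize the encryption mentioned above in Java is authenticated encryption with AES-GCM from the standard javax.crypto API. The sketch below is a minimal example under that assumption, not the project's actual configuration; key storage and rotation are out of scope, and a fresh IV must be generated for every message encrypted under the same key.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class AesGcmSketch {
    // Encrypt with AES-GCM using a 128-bit authentication tag.
    static byte[] encrypt(SecretKey key, byte[] iv, byte[] plaintext) {
        try {
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            return cipher.doFinal(plaintext);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Decrypt and verify the authentication tag in one step.
    static byte[] decrypt(SecretKey key, byte[] iv, byte[] ciphertext) {
        try {
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            return cipher.doFinal(ciphertext);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[12];          // 96-bit nonce, recommended for GCM
        new SecureRandom().nextBytes(iv);

        byte[] ct = encrypt(key, iv, "confidential record".getBytes(StandardCharsets.UTF_8));
        byte[] pt = decrypt(key, iv, ct);
        System.out.println(new String(pt, StandardCharsets.UTF_8));
    }
}
```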

5. System Monitoring and Maintenance:


- Regular system monitoring and maintenance activities are essential to ensure optimal performance and
reliability.
- Administrators should routinely monitor system logs, audit trails, and performance metrics to identify and
address any issues or anomalies.
- Proactive maintenance tasks such as software updates, patches, and system backups should be performed
regularly to mitigate security risks and maintain system integrity.
- In the event of system failures or disruptions, administrators should follow established incident response
procedures to minimize downtime and restore normal operations as quickly as possible.

6. Troubleshooting and Support:


- Users encountering technical issues or difficulties with the system should refer to the system documentation and
knowledge base for troubleshooting guidance.
- If further assistance is required, users can contact the system administrator or technical support team for prompt
resolution of issues.
- Administrators should maintain open channels of communication with users and provide timely updates and
notifications regarding system changes, updates, and maintenance activities.

This operational manual provides users and administrators with a comprehensive overview of the procedures and
best practices for operating and managing the system effectively. By following these guidelines, users can ensure
the secure and reliable operation of the system while minimizing the risk of security incidents and data breaches.

SAMPLE CODE

import java.security.*;
import java.util.ArrayList;

// Class representing a block in the Blockchain


class Block {
    private String previousHash;
    private String data;
    private String hash;

    // Constructor
    public Block(String data, String previousHash) {
        this.data = data;
        this.previousHash = previousHash;
        this.hash = calculateHash();
    }

    // Method to calculate the hash of the block
    public String calculateHash() {
        return StringUtil.applySha256(previousHash + data);
    }

    // Accessors used by Blockchain and Main
    public String getHash() { return hash; }
    public String getPreviousHash() { return previousHash; }
    public String getData() { return data; }
}

// Class representing the Blockchain


class Blockchain {
    private ArrayList<Block> blockchain;

    // Constructor
    public Blockchain() {
        this.blockchain = new ArrayList<>();
        // Genesis block
        blockchain.add(new Block("Genesis Block", "0"));
    }

    // Accessor used by Main to iterate over the chain
    public ArrayList<Block> getBlockchain() {
        return blockchain;
    }

    // Method to add a new block to the Blockchain
    public void addBlock(String data) {
        Block previousBlock = blockchain.get(blockchain.size() - 1);
        blockchain.add(new Block(data, previousBlock.getHash()));
    }

    // Method to validate the integrity of the Blockchain
    public boolean isChainValid() {
        for (int i = 1; i < blockchain.size(); i++) {
            Block currentBlock = blockchain.get(i);
            Block previousBlock = blockchain.get(i - 1);

            // The recomputed hash must match the stored hash
            if (!currentBlock.calculateHash().equals(currentBlock.getHash())) {
                return false;
            }
            // The stored link must match the previous block's hash
            if (!currentBlock.getPreviousHash().equals(previousBlock.getHash())) {
                return false;
            }
        }
        return true;
    }
}

// Utility class for cryptographic operations
class StringUtil {
    // Applies the SHA-256 hashing algorithm to a string
    public static String applySha256(String input) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            byte[] hash = digest.digest(input.getBytes("UTF-8"));
            StringBuilder hexString = new StringBuilder(); // hash as hexadecimal
            for (byte b : hash) {
                String hex = Integer.toHexString(0xff & b);
                if (hex.length() == 1) {
                    hexString.append('0');
                }
                hexString.append(hex);
            }
            return hexString.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}

public class Main {

    public static void main(String[] args) {
        // Creating a Blockchain instance
        Blockchain blockchain = new Blockchain();

        // Adding blocks to the Blockchain
        blockchain.addBlock("Transaction 1");
        blockchain.addBlock("Transaction 2");

        // Displaying the Blockchain
        for (Block block : blockchain.getBlockchain()) {
            System.out.println("Previous Hash: " + block.getPreviousHash());
            System.out.println("Data: " + block.getData());
            System.out.println("Hash: " + block.getHash());
            System.out.println();
        }

        // Validating the Blockchain
        System.out.println("Is Blockchain valid? " + blockchain.isChainValid());
    }
}

