UNIT 3 Reference Notes

1. Internet of things, smart sensors, and pervasive systems: Enabling connected and pervasive
healthcare?

1.1 Introduction to Digitization in Healthcare:


The digitization of healthcare data has ushered in transformative changes, making
data more accessible and fostering transparency among stakeholders. Connected medical
devices, including smartphones, tablets, and wearables, are increasingly prevalent, paving the
way for pervasive healthcare. The goal is not just to deliver effective healthcare services but
also to enhance patients' quality of life through continuous health monitoring.

1.2 Pervasive Healthcare:


Pervasive healthcare expands healthcare services beyond traditional settings, enabling
remote monitoring and support through smart sensors, IoT, and wearables. This approach
empowers patients to track their medical records, perform basic analytics, and seek
consultation from healthcare professionals. The integration of IoT and pervasive computing
creates a platform for ubiquitous access to medical data through user-friendly mobile
interfaces.

1.3 IoT, Smart Sensors, and Pervasive Computing:


• IoT Overview: IoT extends internet connectivity to everyday objects, facilitating
pervasive access to information.
• Smart Sensors: Enhance IoT by intelligently gathering and processing relevant data,
transforming devices into smarter entities.
• Pervasive Systems: Enable computation and information access anywhere, anytime,
integrating digital functionalities into common objects.
• Differences and Complementarity: While IoT and pervasive systems serve distinct
purposes, they complement each other, with IoT providing connectivity and pervasive
systems enabling ubiquitous computation and access.

1.4 Challenges in Traditional Healthcare Systems:


• Aging Population: The increasing senior population and prevalence of chronic
conditions strain healthcare resources, necessitating continuous monitoring and
treatment.
• Shortage of Medical Professionals: Hospitals face shortages of qualified medical staff,
impacting the quality and efficiency of healthcare delivery.
• Rising Costs: Healthcare services become increasingly expensive, posing financial
challenges for patients and healthcare systems.
• Changing Disease Patterns: Unhealthy lifestyles contribute to the emergence of new
disease patterns, requiring adaptation in healthcare delivery.
• Disjointed Care: Fragmented diagnosis and treatment processes lead to high rates of
hospital readmissions and prolonged hospital stays, exacerbating patient suffering and
increasing healthcare costs.
• Lack of Integration: Disconnected healthcare facilities result in inefficient
coordination, redundant processes, and suboptimal patient care.

Addressing these challenges requires the development of an integrated healthcare
system that connects patients, providers, and stakeholders. By leveraging IoT, smart sensors,
and pervasive computing, healthcare delivery can be transformed to meet the evolving needs
of patients and society.

1.5 Mobile and Pervasive Healthcare:


The integration of IoT into healthcare services has revolutionized patient care by
enabling pervasiveness. This pervasive approach combines mobility, adaptability, and context
awareness to deliver timely and relevant information to healthcare providers and patients. For
instance, mobile devices equipped with health monitoring sensors allow for real-time tracking
of vital signs, such as heart rate and blood pressure, regardless of the patient's location. This
continuous monitoring not only facilitates early intervention in emergencies but also
promotes proactive management of chronic conditions.
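The threshold-based monitoring described above can be sketched in a few lines. The vital-sign ranges, field names, and sample readings below are hypothetical illustrations, not clinical guidance:

```python
# Hypothetical normal ranges for two vital signs (adult, resting)
THRESHOLDS = {
    "heart_rate": (60, 100),   # beats per minute
    "systolic_bp": (90, 120),  # mmHg
}

def check_vitals(reading):
    """Return a list of alerts for values outside their normal range."""
    alerts = []
    for vital, (low, high) in THRESHOLDS.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{vital}={value} outside [{low}, {high}]")
    return alerts

# A wearable would stream readings continuously; here we simulate two samples.
stream = [
    {"heart_rate": 72, "systolic_bp": 118},   # within range
    {"heart_rate": 131, "systolic_bp": 145},  # should trigger alerts
]
for reading in stream:
    for alert in check_vitals(reading):
        print("ALERT:", alert)
```

In a real deployment the alert rule would be far richer (trends, patient-specific baselines, sensor noise handling), but the structure — sensor stream in, rule evaluation, notification out — is the same.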

1.6 Adaptability and Context-Awareness:


In pervasive healthcare systems, adaptability and context-awareness are essential for
delivering personalized care tailored to each patient's unique needs. By leveraging patient
contextual information, such as medical history, lifestyle factors, and environmental
conditions, healthcare providers can make informed decisions regarding diagnosis and
treatment. For example, wearable devices equipped with AI algorithms can analyze a patient's
physiological data in real-time and provide personalized health recommendations based on
their current context.

1.7 Connected Healthcare:


The concept of connected healthcare extends beyond remote monitoring to encompass
seamless data sharing and collaboration among healthcare stakeholders. IoT-enabled medical
devices facilitate the collection and transmission of patient data to centralized platforms,
enabling healthcare providers to access comprehensive health records and collaborate on
patient care regardless of geographical barriers. Moreover, connected healthcare promotes
patient engagement by empowering individuals to actively participate in their own health
management through access to personalized health insights and interventions.

1.8 Pervasive Healthcare vs. Telemedicine:


While both pervasive healthcare and telemedicine leverage technology to improve
access to healthcare services, they differ in their approach and scope. Telemedicine primarily
focuses on delivering clinical services remotely, such as virtual consultations and diagnosis,
to overcome geographical barriers and enhance access to specialized care. In contrast,
pervasive healthcare encompasses a broader spectrum of services, including remote
monitoring, predictive analytics, and personalized health interventions, aimed at promoting
holistic wellbeing and preventive care throughout the patient's lifetime.

1.9 Role of IoT in Healthcare:


IoT plays a pivotal role in transforming healthcare delivery by facilitating real-time
data collection, analysis, and decision-making. In clinical care, IoT-enabled devices enable
rapid diagnosis and treatment by providing healthcare providers with instant access to critical
patient data. Remote monitoring powered by IoT empowers patients to manage their health
proactively and enables healthcare providers to intervene promptly in case of abnormalities.
Furthermore, the integration of IoT with medical robotics enhances the precision and
efficiency of surgical procedures, leading to better patient outcomes and recovery.

1.10 Different Healthcare Sensors:


Health sensors, ranging from wearable devices to implantable biosensors, are
instrumental in capturing vital physiological data for pervasive healthcare applications. These
sensors utilize various technologies, such as optical, electromechanical, and biochemical, to
detect and measure biological signals accurately. For example, wearable ECG monitors can
detect abnormal heart rhythms, while implantable glucose sensors provide continuous
monitoring for diabetic patients. The advancements in sensor technology have paved the way
for personalized medicine, where treatment strategies are tailored to individual patient
profiles based on real-time health data.
2. Migration of healthcare relational database to NoSQL cloud database for healthcare
analytics and management ?
2.1 Introduction to Big Data and Database Management Challenges:
The exponential growth of data, termed Big Data, presents challenges for traditional
database management systems (DBMSs). With data volumes projected to grow roughly 40-fold
by 2020, managing this growth has become increasingly difficult. Database administrators
(DBAs) face challenges in managing the growing variety and volume of structured and
unstructured data across multiple platforms.

• Rise of NoSQL Databases:
o Traditional relational database management systems (RDBMSs) face
scalability issues, leading to the emergence of NoSQL databases. NoSQL
databases offer horizontal scaling, making them more cost-effective and
flexible for online and mobile applications.

• Healthcare Databases:
o Healthcare databases aggregate data from various sources like electronic
medical records (EMRs) and electronic health records (EHRs). Analyzing
healthcare data presents complex challenges due to the diversity and volume
of data.

• Data Migration Techniques:
o Data migration involves shifting data between storage systems or formats for
various reasons, such as system upgrades or performance issues. Techniques
like storage migration, database migration, and application migration facilitate
the seamless transfer of data from legacy systems to modern platforms.

• Challenges of Data Migration:
o Data migration poses challenges such as data modeling, data loss,
denormalization, and performance issues. Ensuring data integrity and
synchronization between source and target systems is crucial to mitigate these
challenges and minimize disruptions during the migration process.
2.2 NoSQL Cloud-Based Technology for Healthcare:
In the contemporary era of burgeoning data volumes, traditional relational databases
face challenges in accommodating the scalability, flexibility, and performance demands posed
by cloud computing. This has catalyzed the emergence and ascension of NoSQL databases in
the healthcare domain. NoSQL, or "Not Only SQL," encompasses a diverse array of database
management systems that depart from the rigid structure of relational databases, offering
more flexible data models suitable for the dynamic and distributed nature of cloud
environments.
In the healthcare sector, where data volumes are escalating rapidly due to
advancements in medical technology and the proliferation of digital health records, the need
for scalable, agile, and resilient database solutions has become paramount. NoSQL databases
address these needs by providing specialized data storage models such as column family
stores, graph databases, document stores, and key-value stores. These models offer enhanced
flexibility in handling complex and interconnected healthcare data, ranging from patient
records and medical imaging to genomic sequences and clinical trial data.
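As a minimal sketch of one of these models, a document store can hold a patient record as a single nested, schema-flexible document rather than as rows spread across normalized tables. The field names and the tiny in-memory "collection" below are hypothetical:

```python
import json

# A hypothetical patient document, shaped the way a document-oriented
# NoSQL store would hold it: nested and self-contained, no joins needed.
patient_doc = {
    "_id": "patient:1001",
    "name": "A. Example",
    "conditions": ["type 2 diabetes"],
    "encounters": [
        {"date": "2023-04-01", "type": "consultation",
         "vitals": {"heart_rate": 78, "systolic_bp": 122}},
        {"date": "2023-05-10", "type": "lab",
         "results": {"hba1c_percent": 6.9}},
    ],
}

# A toy in-memory "collection": document id -> document
collection = {patient_doc["_id"]: patient_doc}

def find_encounters(coll, patient_id, enc_type):
    """Filter a patient's embedded encounters by encounter type."""
    doc = coll.get(patient_id, {})
    return [e for e in doc.get("encounters", []) if e["type"] == enc_type]

labs = find_encounters(collection, "patient:1001", "lab")
print(json.dumps(labs, indent=2))
```

The point of the model is that everything about one patient travels together, which is what makes horizontal partitioning (sharding) by patient straightforward.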

2.3 Applications of NoSQL Cloud-Based Technology in Healthcare:


The applications of NoSQL cloud databases in healthcare are multifaceted and
impactful. These databases serve as the backbone for a myriad of healthcare initiatives,
including real-time analytics, mobile application development, Internet of Things (IoT)
integration, user profile management, fraud detection, content management, personalization,
customer 360-degree view, and digital communication platforms.
Real-time analytics powered by NoSQL databases enable healthcare organizations to
extract actionable insights from large and diverse datasets, facilitating data-driven decision-
making and predictive analytics in areas such as patient care, disease management, and
resource allocation. Mobile application development leverages NoSQL databases to deliver
seamless and responsive user experiences, with features like offline support and automatic
synchronization ensuring continuous access to critical healthcare information.
IoT integration involves aggregating and analyzing vast streams of sensor data
generated by medical devices, wearables, and environmental sensors. NoSQL databases
provide the scalability and performance required to process and store this data in real-time,
enabling applications such as remote patient monitoring, predictive maintenance, and
personalized healthcare interventions.
User profile management systems powered by NoSQL databases serve as repositories
for maintaining comprehensive and secure user information, enabling personalized healthcare
services and targeted communication strategies. Fraud detection algorithms leverage the
speed and agility of NoSQL databases to monitor transactions in real-time, detect anomalies,
and mitigate financial risks for healthcare organizations and patients alike.
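The real-time anomaly flagging mentioned above can be sketched with a simple z-score rule over recent billing amounts. The cutoff and the data are hypothetical; production fraud systems use far richer models:

```python
import statistics

def is_anomalous(history, amount, z_cutoff=3.0):
    """Flag an amount that sits far from the mean of recent transactions."""
    if len(history) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_cutoff

# Hypothetical recent claim amounts for one provider
history = [120.0, 95.0, 110.0, 105.0, 130.0]
print(is_anomalous(history, 112.0))   # a typical claim
print(is_anomalous(history, 5000.0))  # an outlier that should be flagged
```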

2.4 Cloud-Based Databases for Healthcare:


Cloud-based databases have emerged as indispensable tools for modernizing
healthcare infrastructure and unlocking the full potential of digital health initiatives. By
leveraging the scalability, elasticity, and accessibility of cloud computing, healthcare
organizations can overcome geographical barriers, streamline data management processes,
and enhance collaboration among stakeholders.
The architecture of cloud-based databases is characterized by distributed data storage,
virtualized infrastructure, and centralized management systems. This architecture enables
seamless integration with existing healthcare systems and applications, while also facilitating
rapid deployment and provisioning of database resources.
2.5 Technology/Services Used for Storage and Retrieval of Cloud Databases:
Cloud database services offer a range of storage and retrieval technologies tailored to
the diverse needs of healthcare organizations. Database as a Service (DBaaS) providers such
as Amazon RDS, Google Cloud SQL, and Microsoft SQL Azure offer scalable, managed
database solutions that eliminate the need for on-premises infrastructure and administrative
overhead.
These services provide features such as automatic replication, data backup, and
disaster recovery to ensure high availability and data integrity. Additionally, they offer
flexible pricing models based on usage metrics such as storage capacity, data transfer volume,
and compute resources, enabling healthcare organizations to optimize costs while meeting
performance requirements.

2.6 Challenges in Accessing Cloud Databases for Healthcare:


Despite the numerous benefits offered by cloud databases, healthcare organizations
face several challenges in accessing and leveraging these technologies effectively. Key
challenges include:

1. Internet Speed and Connectivity: The performance of cloud databases is heavily dependent
on internet speed and network connectivity. In regions with limited bandwidth or unreliable
internet connections, accessing cloud databases may be slow or intermittent, hindering real-
time data access and analysis.
2. Data Security and Privacy: Healthcare organizations must adhere to strict regulatory
requirements regarding the security and privacy of patient data. Storing sensitive healthcare
information in the cloud raises concerns about data breaches, unauthorized access, and
compliance with regulations such as HIPAA (Health Insurance Portability and Accountability
Act) and GDPR (General Data Protection Regulation).
3. Performance Optimization: Optimizing the performance of cloud databases requires
careful tuning of configuration parameters, resource allocation, and data management
strategies. Balancing the trade-offs between performance, cost, and scalability can be
challenging, especially for large-scale healthcare applications with complex data processing
requirements.
4. Data Migration and Integration: Migrating existing healthcare data from on-premises
systems to cloud databases involves complex data transformation and integration processes.
Ensuring data consistency, integrity, and compatibility across heterogeneous data sources
requires careful planning, testing, and validation to minimize disruptions to clinical
workflows and patient care.
5. Vendor Lock-In and Interoperability: Healthcare organizations must carefully evaluate
vendor lock-in risks and ensure interoperability between cloud database platforms and
existing IT systems. Standardizing data formats, APIs (Application Programming Interfaces),
and integration protocols can mitigate the risks of vendor lock-in and facilitate seamless data
exchange across disparate systems and applications.
2.7 Relational Database Migration to NoSQL Cloud Databases:
The migration from traditional relational databases to NoSQL cloud databases
represents a paradigm shift in healthcare data management. While relational databases excel
in structured data storage and transaction processing, they often struggle to accommodate the
diverse and unstructured nature of healthcare data generated by electronic health records
(EHRs), medical imaging systems, genomic sequencing platforms, and IoT devices.

2.8 General Guidelines Involved in Migration from SQL to NoSQL Databases:

Successful migration from SQL to NoSQL databases requires careful planning, evaluation,
and execution. General guidelines for migration include:

1. Understanding Application Requirements: Identify the specific requirements and use cases
driving the migration, such as scalability, performance, flexibility, and cost-effectiveness.
2. Selecting Appropriate NoSQL Offerings: Evaluate and select the most suitable NoSQL
database technology based on factors such as data model, scalability, consistency, availability,
and durability.
3. Conducting Proof of Concepts: Design and implement proof of concept projects to validate
the feasibility and performance of NoSQL databases for targeted use cases and workloads.
4. Modeling and Schema Design: Develop data models and schema designs optimized for
NoSQL databases, taking into account factors such as denormalization, partitioning, and
indexing strategies.
5. Data Migration Tools and Techniques: Utilize data migration tools and techniques such as
Apache Sqoop, AWS Database Migration Service, or custom ETL (Extract, Transform, Load)
pipelines to transfer data from relational databases to NoSQL formats.
6. Performance Tuning and Optimization: Fine-tune the performance of NoSQL databases
through workload profiling, query optimization, and resource allocation adjustments to meet
application SLAs (Service Level Agreements) and performance requirements.
7. Monitoring and Maintenance: Implement monitoring, alerting, and maintenance
procedures to ensure the ongoing health, performance, and reliability of NoSQL database
deployments, including backup and recovery strategies, security audits, and software updates.

By following these guidelines and best practices, healthcare organizations can successfully
transition from legacy relational databases to modern NoSQL cloud databases, unlocking new
opportunities for innovation, collaboration, and value creation in the digital healthcare
ecosystem.
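Guideline 5 above (a custom ETL pipeline) can be sketched end to end: extract rows from a relational source, transform joined rows into denormalized documents, and load them in the JSON shape a document store would ingest. SQLite stands in for the legacy RDBMS here, and all table and field names are illustrative:

```python
import json
import sqlite3

# Extract: a tiny relational source with two normalized tables.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE visits (patient_id INTEGER, visit_date TEXT, dx TEXT);
    INSERT INTO patients VALUES (1, 'A. Example');
    INSERT INTO visits VALUES (1, '2023-04-01', 'hypertension');
    INSERT INTO visits VALUES (1, '2023-05-10', 'follow-up');
""")

def extract_transform(conn):
    """Denormalize patients + visits into one document per patient."""
    docs = []
    for pid, name in conn.execute("SELECT id, name FROM patients"):
        visits = [
            {"date": d, "diagnosis": dx}
            for d, dx in conn.execute(
                "SELECT visit_date, dx FROM visits WHERE patient_id = ?",
                (pid,))
        ]
        docs.append({"_id": f"patient:{pid}", "name": name,
                     "visits": visits})
    return docs

documents = extract_transform(src)
# Load: a real pipeline would write these documents to the NoSQL target.
print(json.dumps(documents[0], indent=2))
```

Note how the transform step performs the denormalization discussed in guideline 4: the join that a relational query would do at read time is done once, at migration time.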
3. Decision support systems (DSS) ?

• In healthcare systems, there are efforts to integrate IoT with decision support systems
(DSSs) that aid patients and doctors in emergency clinical situations.
• If structured and unstructured data, collectively called big data, are managed
appropriately, an organization's operations team can make intelligent decisions, discover
temporarily hidden value, and remove limitations related to language and methodology.
• A DSS is an interactive, flexible, and adaptable computer-based information system
that utilizes decision rules, models, and a model base coupled with a comprehensive
database and the decision maker's own insights, leading to specific, implementable
decisions for problems that are not amenable to management science models. Thus, a
DSS supports complex decision making and increases its effectiveness.
MATHEMATICAL MODEL FOR COST ALLOCATION
• In spite of extensive planning for cost allocation in the healthcare sector, there are
considerable injustices and inequalities in health service accessibility across the provinces.
The actual budgeting method of the Ministry of Health and Medical Education is still a
traditional one, which is very inefficient and widens the gap among the provinces. The
Ministry allocates the budget among its main deputies, including the health, treatment,
education, research and technology, and management deputies.
• In the next step, the mentioned deputies allocate the received budget among medical
universities all over the country. Then each university feeds its departments, including
hospitals, health centres, etc. This budgeting structure is presented in Fig. 3.2.

In this system, the goals are set based on the constitutional agenda of the Ministry and the
national 5-year plan. The programs in some way contribute to the goals. The weights at each
level are elicited from the managers and the experts in the Ministry. The goals are allocated to
the activities of the programs for each province, considering the constraints of the available
resources and the requirement in the provinces.

For cost estimating, one province has been selected as baseline and the costs of all activities
have been determined for it. Then the costs of activities have been calculated for other
provinces based on their drivers. These drivers can be deprivation, population dispersion,
weather condition, distance from the capital, etc. Therefore, a large amount of data collection
for other provinces may not be necessary.
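The driver-based estimation above reduces to a simple scaling computation: measure activity costs once for the baseline province, then multiply by each province's driver index. All costs and indices below are hypothetical:

```python
# Cost of each activity in the baseline province (monetary units)
baseline_costs = {"vaccination": 100.0, "screening": 250.0}

# Composite driver index per province (deprivation, dispersion, weather,
# distance, ...), expressed relative to the baseline province (= 1.0).
driver_index = {"baseline": 1.0, "province_A": 1.3, "province_B": 0.9}

def estimate_costs(baseline, drivers):
    """Scale baseline activity costs by each province's driver index."""
    return {
        prov: {act: cost * idx for act, cost in baseline.items()}
        for prov, idx in drivers.items()
    }

estimates = estimate_costs(baseline_costs, driver_index)
print(estimates["province_A"]["vaccination"])  # 100.0 * 1.3 = 130.0
```

This is why the chapter notes that large-scale data collection for the other provinces may be unnecessary: only the driver indices must be elicited, not full activity costings.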

SYSTEM DESIGN

The system is composed of three main components, namely input, inference engine
mechanism, and output subsystems. Fig. 3.4 provides an overview of the designed DSS.
• The input section contains the database (DB) that covers all internal and external
financial data of the Ministry.
• The engine mechanism incorporates a multiobjective model for budget allocation and
provides estimation and what-if analysis for better decision making.
• In the output section, reports are presented for management use in terms of tables and
charts. The users have access to DSS through the user interface (UI). They are able to
perform changes in all parts of the system and carry out what-if analysis and,
consequently, produce various reports according to their choices.
• An information security menu has been designed in this DSS that not only
prevents nonorganizational personnel from logging into the system, but also
provides different levels of access for users in the Ministry. The designed
security menu of the system is shown in Fig. 3.7.

• The designed DSS in this chapter consists of three tabs, labeled “info,” “manage,” and
“report.” In the information tab, all internal and external data of the organization are
retrieved and saved.
• They include the goals, programs, provinces, etc. For example, the budgeting-required
indicators of provinces are population, dispersion, health services level, and deprivation
level, which are retrieved from internal and external databases of the organization and
saved in the province sector of this tab. The information tab is shown in Fig. 3.8.
• After importing data using the information tab, in the management tab the relationship
among goals, plans, provinces, activities, and resources must be determined. Moreover,
the budget constraints related to each link must be specified through determination of
the relationship to utilize in the budgeting model. The schema of the management tab
has been indicated in Fig. 3.9.

• Characterization of the actions that should be taken to increase the likelihood of
achieving the expected goal is difficult in organizations such as the Ministry of Health.
Due to unpredictability, there is no assurance that the outcome of an action or function
will be the best one.
• A suitable decision comes from a decision-making process that deliberates over all
the essential factors and is clear about decision changes, options, and uncertainty.
• Once data have been collected, they must be turned into organized, evaluated, and
conclusive information for a report that supports users' final decisions.
• Most users who operate a DSS require further analyses and specific, decision-relevant,
brief reports.
• In the designed model, after linking the goals, programs, and other factors of the
budgeting problem, managers can perform sensitivity analysis on the parameters in the
report tab (Fig. 3.10) and evaluate the consequences of the changes. The main task of
this tab is sensitivity analysis of the model.

4. Securing large datasets involving fast-performing key bunch matrix block cipher ?

4.1 Introduction:
The exponential growth of industries and enterprises in the 1980s and 1990s catalyzed
an unprecedented surge in data generation. This led to a pressing need for advanced storage
solutions and computing tools to manage the burgeoning volumes of data effectively.
Procedural languages like COBOL emerged to facilitate manual navigation of linked datasets,
reducing inaccuracies introduced by human error during data extraction. However, the absence
of a robust search facility spurred the development of relational database management systems
(RDBMS) like System R and SQL, enabling efficient data retrieval. These advancements laid
the foundation for modern database systems and query languages, revolutionizing data
management practices across organizations of all sizes.

4.2 Database Security Threats:


The escalating frequency and severity of data breaches underscore the pervasive nature
of database security threats. High-profile incidents involving companies like AOL, NHS, and
Apple highlight the multifaceted challenges faced by organizations in safeguarding sensitive
data. Privilege escalation, denial of service (DoS) attacks, database injections (such as SQL
and NoSQL), malware infections, and exposure of storage media represent just a few of the
myriad threats confronting modern databases. Moreover, the negligence of personnel handling
data further exacerbates the risk of breaches, emphasizing the critical importance of robust
security measures and proactive risk mitigation strategies.

4.3 Database Security Measures:


In response to evolving threats, organizations have implemented a range of security
measures to protect their databases and mitigate risks effectively. Data protection using
encryption stands out as a fundamental strategy, encompassing various techniques such as
encrypting data at rest, column-level encryption, and encrypting data in motion. Additionally,
user rights management, database compliance and activity monitoring (DCAM), and data
masking play pivotal roles in enhancing database security. These measures collectively aim to
fortify access controls, monitor database activity, and mask sensitive information, thereby
bolstering defenses against internal and external threats.
4.4 Fast Dataset Block Cipher Development:
Addressing the growing demand for robust encryption techniques capable of securing
large datasets efficiently, researchers have developed innovative block ciphers based on key
bunch matrices. These ciphers offer enhanced security for data at rest and in motion, leveraging
advanced encryption algorithms to safeguard sensitive information effectively. By encrypting
entire datasets or specific data elements, organizations can mitigate the risk of unauthorized
access and data breaches. Moreover, the adoption of standardized encryption protocols ensures
interoperability and compatibility across diverse platforms and systems, further strengthening
the security posture of database environments.

4.5 Cryptanalysis:
Cryptanalysis evaluates a cryptosystem's strength against attacks, crucial for
encryption techniques. Common attacks include ciphertext only, known plaintext, chosen
plaintext, and chosen ciphertext attacks. A comparison table of block ciphers highlights
parameters like key size, block size, operations, rounds, and key generation time.
The proposed fast dataset block cipher aims to secure large datasets at rest and during
transmission. It uses a key bunch matrix and iterative method for encryption. The cipher aims
to withstand ciphertext only and known plaintext attacks. Theoretical proofs support its
resistance to these attacks.
For a ciphertext only attack, brute force is impractical due to the large key space.
Known plaintext attack analysis shows breakability in one round but resistance with multiple
rounds. Chosen plaintext and chosen ciphertext attacks are deemed ineffective against the
cipher.
This emphasizes the cipher's efficiency in securing large datasets with fast-performing
operations. Its robustness against cryptanalytic attacks is comparable to that of the Hill
cipher. The cipher is also noted to be suitable for securing images in transmission or at rest,
following appropriate digitization.
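As an illustration of matrix-based block encryption — not the chapter's actual key bunch matrix cipher, just a minimal Hill-style sketch of the same idea — the following encrypts fixed-size byte blocks by multiplying with an invertible key matrix modulo 256. The 2×2 key is hypothetical; its determinant (11) is odd, hence invertible modulo 256:

```python
import numpy as np

KEY = np.array([[3, 2], [5, 7]])
KEY_INV = np.array([[117, 186], [209, 233]])  # precomputed inverse mod 256

def crypt(block, matrix):
    """Multiply a length-2 byte block by a key matrix, modulo 256."""
    return (matrix @ np.asarray(block)) % 256

plaintext = [72, 105]                  # the bytes of "Hi"
ciphertext = crypt(plaintext, KEY)
recovered = crypt(ciphertext, KEY_INV)
print(ciphertext, recovered)           # recovered equals the plaintext
```

The chapter's own cryptanalysis explains why a sketch like this is not secure on its own: a single linear round is breakable under a known-plaintext attack, which is why the actual scheme iterates the key bunch matrix operation over multiple rounds.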

5. Comparative analysis of semantic frameworks in healthcare ?

5.1 INTRODUCTION :
Interoperability is vital in various fields, including healthcare. Achieving it is
challenging due to diverse data formats. In India, interoperability could save billions, but
adoption remains low. Semantic frameworks are crucial for meaningful data exchange.

5.2 BACKGROUND WORK:


• Data, Information, and Knowledge: Data becomes information when captured, and
knowledge is built upon it.
• Semantic Web Overview: Tim Berners-Lee's principles emphasize open, machine-
readable data formats.
• Linked Data Principles: Key principles include availability, machine-readability, and
use of nonproprietary formats.
• RDF: Utilized for healthcare information representation in a triple format.
• SPARQL: Query language for RDF data, enabling semantic querying.
• Ontology: Essential for interoperability, providing a common vocabulary and
semantic description.
• Multiagent Systems: Facilitate communication and information exchange between
agents using ontologies.
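The RDF triple and SPARQL ideas above can be sketched without any RDF library: triples are subject–predicate–object tuples, and a SPARQL-like query is a pattern match where unbound positions act as variables. The URIs and facts below are hypothetical:

```python
# A tiny triple store: each fact is a (subject, predicate, object) tuple.
triples = {
    ("patient:1001", "hasCondition", "snomed:Diabetes"),
    ("patient:1001", "treatedBy", "doctor:77"),
    ("doctor:77", "specialty", "Endocrinology"),
}

def query(store, s=None, p=None, o=None):
    """Match triples against a pattern; None plays the role of a variable."""
    return [
        t for t in store
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Roughly: SELECT ?o WHERE { patient:1001 hasCondition ?o }
for _, _, condition in query(triples, s="patient:1001", p="hasCondition"):
    print(condition)
```

Real systems would use an RDF library and a SPARQL engine, but the triple-plus-pattern model is exactly what makes heterogeneous healthcare data mergeable: two sources that share a vocabulary (ontology) can simply union their triple sets.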

5.3 HEALTHCARE SEMANTIC FRAMEWORKS AND SOFTWARE:


• Agent and Ontology-based Information Sharing (AOIS): Integrates multiagent and
peer-to-peer systems for semantic data sharing.
• Statistics and Collaborative Knowledge Exchange (SCKE): Lightweight system for
secure information exchange between hospitals.
• Multiagent Semantic-driven Framework (MASE): Supports epidemiological analysis
and resource allocation.
• MET4: Manages clinical workflows, focusing on chronic kidney disease.
• Role of Existing Semantic Software: Tools like PACS, RIS, and DICOM streamline
radiology workflows and data management in healthcare.
5.4 RESEARCH ISSUES:
Challenges in Semantic Web Technologies in Healthcare:
• Data Format: Health organizations generate vast amounts of data in various formats,
requiring mapping tools for conversion to standard formats.
• Ontology Diversity: Multiple ontologies in healthcare pose challenges in
standardization and vocabulary alignment within domains.
• Continuous Data Updates: Data description using diverse vocabularies necessitates
strategies and tools for ontology integration.
• Data Mining Integration: Integrating data mining with Semantic Web tech offers
opportunities but requires high-quality data due to its critical nature in healthcare.

5.5 EXISTING INFORMATION RETRIEVAL METHODS IN SEMANTIC WEB:

Approaches:
1. Graph Traversal: Utilizes search algorithms on graphs to extract correlations between
entities.
2. Query Expansion: Manipulates user queries by adding synonyms or hyponyms for
improved retrieval.
3. Spread Activation: Considers both edge weights and incoming connections to discover
relationships among documents.
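Approach 1 (graph traversal) can be sketched as a breadth-first search that extracts a relationship path between two entities. The entity graph below is hypothetical:

```python
from collections import deque

# Hypothetical directed entity graph: entity -> related entities
graph = {
    "Aspirin": ["Pain", "Blood thinning"],
    "Pain": ["Inflammation"],
    "Blood thinning": ["Stroke prevention"],
    "Inflammation": [],
    "Stroke prevention": [],
}

def find_path(start, goal):
    """Return one shortest entity path from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path("Aspirin", "Stroke prevention"))
```

Spread activation (approach 3) builds on the same traversal but weights edges and accumulates activation at nodes instead of returning a single path.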

5.6 INTEROPERABILITY IN HEALTHCARE


• Types: Standard and translation interoperability.
• Challenges: Diverse standards, evolving use cases, and complexity hinder universal
standardization efforts.
• Solution: Focus on translation-oriented methods and frameworks for practical
information interoperability.
5.7 PROPOSED FRAMEWORK:
Components:
• Context Manager: Converts healthcare data into standard RDF format.
• Process Manager: Selects u-healthcare services based on patient risk level.
• Repository System: Stores process information for execution.

5.8 DATA MINING TECHNIQUES FOR INTEROPERABILITY:


• Various techniques like K-NN classification and neural networks aid in healthcare
data interoperability.
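As a minimal sketch of the K-NN classification mentioned above, applied to hypothetical normalized patient features (a real system would use a validated clinical dataset and proper feature scaling):

```python
import math
from collections import Counter

# (feature vector, label) pairs — e.g. normalized [age, blood_sugar]
training = [
    ((0.2, 0.1), "low risk"),
    ((0.3, 0.2), "low risk"),
    ((0.8, 0.9), "high risk"),
    ((0.7, 0.8), "high risk"),
]

def knn_predict(x, k=3):
    """Return the majority label among the k nearest training points."""
    nearest = sorted(training, key=lambda t: math.dist(x, t[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict((0.75, 0.85)))  # close to the "high risk" examples
```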

5.9 IMPLEMENTATION OF MULTIAGENT SYSTEM:


• Requires Protégé for ontologies, JADE or WADE for multiagent platform, and
Apache Nutch plugin for efficient data retrieval.
• Advocates translation interoperability in healthcare due to diverse standards and use
cases.
• Discusses the proposed framework's components and the role of data mining
techniques in improving data quality and automation.

6. A reversible and secure electronic patient record embedding technique using histogram bin
shifting and RC6 encryption ?
6.1 Introduction:
The rapid advancement of electronic healthcare, facilitated by the internet, has led to
concerns regarding the security of patient data. Ensuring the integrity, confidentiality, and
security of this data during electronic transfer is crucial. Reversible data embedding, also
known as lossless data embedding, offers a solution by hiding patient data within digital
cover images in a reversible manner. This technique minimizes image degradation while
providing high embedding capacity and security. The proposed reversible Electronic Patient
Record (EPR) hiding scheme encrypts EPR using the RC6 encryption algorithm before
embedding, adding an extra layer of security. The chapter discusses related work, explains the
proposed scheme, presents experimental results, and concludes with a summary of outcomes.

6.2 Proposed Work :


In this section, reversible data embedding combined with encryption is explained. The
embedding technique is described with the help of various diagrams and examples. One of the
distinguishing features of the proposed work is "reversibility," made possible by several
preprocessing steps that mainly involve the cover image: RC6 encryption, block division, and
histogram shifting. These are followed by data hiding, data extraction, and decryption. Block
division is explained in (i) below, the RC6 encryption technique in (ii), and the embedding
and extraction procedures in Section 6.3. Fig. 1 presents the general framework of the
proposed system.

Fig 1 : General framework of proposed scheme (Input Data → RC6 Encryption → Data
Embedding → Output Image, with the Cover Image as the second input to the embedding stage).

(i) BLOCK DIVISION:
As stated earlier, in the HBS technique the peak point is also known as the embed point,
because the embedding capacity is directly proportional to the frequency of this point. To
exploit this property, the cover image is divided into blocks so as to obtain more peak points
and hence increased capacity. Fig. 2 gives a general idea of block division. After dividing a
cover image into its constituent blocks, we find a histogram for each block and apply the bin
shifting technique. By doing so, we obtain multiple peak points, which results in an increased
payload compared to other bin shifting techniques. The method shown in Fig. 2 can be better
understood with an example: consider an 8 × 8 matrix of an image that can be subdivided into
four 4 × 4 blocks using block division. We can then work on these smaller blocks in order to
hide the data.
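The block-division step can be sketched as follows, assuming the image is represented as a simple list of pixel rows (a stand-in for real image data); each block then yields its own peak point.

```python
# Sketch of block division: split an 8x8 grayscale image into four
# non-overlapping 4x4 blocks and find each block's peak point (the
# pixel value with maximum frequency). Pixel values are synthetic.
from collections import Counter

def split_blocks(img, n):
    """Split a square matrix (list of rows) into n x n blocks."""
    size = len(img)
    return [[row[c:c + n] for row in img[r:r + n]]
            for r in range(0, size, n) for c in range(0, size, n)]

def peak_point(block):
    """Pixel value with maximum frequency in the block's histogram."""
    counts = Counter(p for row in block for p in row)
    return counts.most_common(1)[0][0]

img = [[122] * 8 for _ in range(8)]      # trivially uniform 8x8 image
blocks = split_blocks(img, 4)            # four 4x4 blocks
peaks = [peak_point(b) for b in blocks]  # one peak (embed point) per block
```

With one peak point per block instead of one for the whole image, the total embedding capacity grows with the number of blocks.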

(ii) RC6 ALGORITHM:


RC6, short for Rivest Cipher 6, is similar in structure to RC5. RC6 is a symmetric-key
block cipher designed by Ron Rivest, Matt Robshaw, Ray Sidney, and Yiqun Lisa Yin. RC6 is
usually specified as RC6-l/m/n, where l specifies the word size in bits, m specifies the
nonnegative number of rounds, and n gives the length of the encryption key in bytes. The
algorithm works on a block size of 128 bits and supports key sizes ranging from 128 bits
(e.g., 128, 192, and 256 bits) up to 2040 bits. To support a large variety of word lengths,
key sizes, and numbers of rounds, it may be parameterized [49]. Compared with RC5, RC6 can
be viewed as two RC5 encryption processes running in parallel, with an extra multiplication
operation that is not present in the RC5 technique [27–31]. The advantage of this additional
multiplication is that it makes the rotation operation dependent on every single bit in a
word, instead of only a few least significant bits. The operations employed in the RC6
algorithm are:
• p + q: addition of two integers modulo 2^l.
• p − q: subtraction of two integers modulo 2^l.
• p ⊕ q: bitwise exclusive-or of l-bit words.
• p × q: multiplication of two integers modulo 2^l.
• p <<< q: rotation of the l-bit word p to the left by the amount given by the least
significant log2(l) bits of q (p >>> q denotes the corresponding right rotation).
For encryption, four l-bit registers W, X, Y, Z are used. These registers hold both the
input plaintext and, at the end of encryption, the output ciphertext. The first byte of the
plaintext is placed in the least significant byte of W, while the last byte of the plaintext
is placed in the most significant byte of Z. The process is shown in the flowchart in Fig. 2.

Fig 2 : Encryption steps in RC6 algorithm.
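The data-dependent rotation described above can be sketched in Python. This is only the rotation primitive for 32-bit words, not a full RC6 implementation; the point is that the rotation amount comes from the low log2(W) bits of the other operand.

```python
# Sketch of RC6's data-dependent left rotation for W-bit words: the
# rotation amount is the least significant log2(W) bits of the second
# operand, so (after the multiplication step) every bit of a word can
# influence how far the other word is rotated.

W = 32                     # word size in bits (the "l" parameter)
MASK = (1 << W) - 1        # keeps results within W bits

def rotl(x, y):
    """Rotate the W-bit word x left by the low log2(W) bits of y."""
    r = y & (W - 1)        # only the 5 least significant bits of y matter
    return ((x << r) | (x >> (W - r))) & MASK
```

For example, `rotl(1, 1)` gives 2, and rotating the top bit left by one wraps it around to the bottom bit.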


6.3 EMBEDDING PROCEDURE
This subsection presents the data embedding procedure after the cover image has
been divided into a number of blocks. The complete block diagram showing the
embedding procedure is given in Fig. 3. The embedding procedure has been followed
in steps:
1. Divide the cover image into nonoverlapping N × N blocks, where N represents the
block size.
2. For each block, obtain histogram.
3. On analyzing the histogram, the peak point (which is the pixel value having
maximum frequency) and the zero point (i.e., pixel value whose frequency is
zero) are selected. In case the zero point is far from the peak point, then pixel
value with low frequency is chosen in order to reduce the distortion. Here, the
pixel value used to embed the watermark is the peak point, so it is also known by
the name “embed point.”
4. For a certain pair of peak and zero points, the image block is scanned and certain
conditions are checked, which are:
(a) If the peak point lies ahead of the zero point (P > Z), then the gray values of the
pixels between Z + 1 and P − 1 are reduced by 1 (shifting that range of the histogram
to the left). This creates an empty bin adjacent to gray level P. The image block is
then rescanned: if the corresponding secret bit is 1, the value of the pixel with
gray value P is decremented by 1; otherwise it is not modified.
Fig 3 : EPR hiding using the proposed approach.

(b) If the zero point lies ahead of the peak point (Z > P), then the gray values of the
pixels between P + 1 and Z − 1 are incremented by 1. As a result, an empty bin is
created at gray value P + 1. The image block is then rescanned: if the corresponding
bit to be embedded is 1, the pixel whose gray value is P is incremented by 1;
otherwise, no modification is made.
The EPR embedding process using histogram bin shifting has been further explained
in Example 1.
Example 1
1. Consider the same 8 × 8 matrix as shown in Fig. 3, to which we have applied the
block division technique in order to form small blocks of 4 × 4 dimensions.
2. Plot histogram for each block. The histogram for block 1 is plotted in Fig. 7A.
From the histogram, we locate the peak as well as zero points.
3. Since the point with minimum/zero frequency lies to the right of the point with
maximum frequency, we shift it circularly towards the left, so that it occupies the
position at pixel value 123, as shown in Fig. 7B.
4. Scan the image in row (or column) order and whenever the peak pixel value
“122” is encountered then check the bit value, which needs to be embedded from
the EPR stream. If it is “1” then the value of the pixel is increased by 1; else for bit
“0,” it is kept unchanged.

Fig : (A) Histogram with P = 2 and Z = 7 (Z > P); (B) Histogram formed after bin shifting;
(C) Histogram after embedding has been done.
Fig : (A) Histogram of Block 1; (B) Shifted histogram.
5. Let the EPR to be hidden be “1001”; thus, while scanning, the value of the peak point is
incremented twice, and the frequency of the formerly empty bin is correspondingly increased
by 2.

Fig : Histogram after embedding of data.


The changes made to the matrix are better understood from the matrix shown.
As can be observed, the pixel values “122” corresponding to embedded bit “1” are
incremented by 1, whereas those corresponding to bit “0” are left untouched.

6. Repeat the steps for all the corresponding blocks to obtain the watermarked
image containing the EPR information.
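The embedding and extraction steps for the Z > P case (step 4(b) above) can be sketched on a single flattened block. This toy version assumes the number of secret bits equals the number of peak pixels; the round trip demonstrates the reversibility property.

```python
# Histogram bin shifting for one flat block, Z > P case: shift the
# gray values in (P, Z) right by one to open an empty bin at P + 1,
# then embed bits at the peak pixels (bit 1 -> P + 1, bit 0 -> P).

def embed(block, bits, P, Z):
    """Embed a list of 0/1 bits into a flat pixel list; returns new list."""
    out, i = [], 0
    for p in block:
        if P < p < Z:                # shift the range right to open a bin
            out.append(p + 1)
        elif p == P and i < len(bits):
            out.append(p + bits[i])  # peak pixel carries one secret bit
            i += 1
        else:
            out.append(p)
    return out

def extract(block, P, Z):
    """Recover the bits and restore the original pixels (reversibility)."""
    bits, orig = [], []
    for p in block:
        if p == P:
            bits.append(0); orig.append(P)
        elif p == P + 1:             # can only come from P plus a 1-bit
            bits.append(1); orig.append(P)
        elif P + 1 < p <= Z:         # undo the shift
            orig.append(p - 1)
        else:
            orig.append(p)
    return bits, orig

stego = embed([2, 3, 2, 5, 2, 2], [1, 0, 0, 1], P=2, Z=7)
bits, restored = extract(stego, P=2, Z=7)  # round trip: bits and cover back
```

Note that the cover block contains no pixel equal to Z, matching the definition of the zero point as a value with zero frequency.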

7. Clinical Prediction Models ?

7.1 Introduction
Purpose: Discuss supervised learning methods used in clinical prediction tasks.
Key methods: Linear regression, logistic regression, decision trees, neural networks, Bayesian
models, survival models.
Outcomes predicted: Continuous, binary, categorical, ordinal, and survival outcomes.

7.2 Basic Statistical Prediction Models


7.2.1 Linear Regression :
• Application: Predict continuous outcomes like medical costs and inspection
estimations.
• Model: The dependent variable ( Y ) is modeled as a linear combination of the
predictors ( X ).
• Mathematical Expression: Y = β0 + β1X1 + β2X2 + … + βpXp + ε, where the βi are
regression coefficients and ε is the error term.
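As a minimal illustration, a linear model with one predictor can be fitted with the closed-form ordinary least squares estimates; the data points below are synthetic and lie exactly on a line.

```python
# Ordinary least squares for a single predictor, using the closed-form
# estimates b1 = cov(x, y) / var(x) and b0 = mean(y) - b1 * mean(x).
# The data are synthetic (they lie exactly on y = 1 + 2x).

def ols_fit(xs, ys):
    """Return (intercept, slope) minimizing the sum of squared errors."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

b0, b1 = ols_fit([1, 2, 3, 4], [3, 5, 7, 9])  # recovers intercept 1, slope 2
```

With several predictors, the same idea generalizes to solving the normal equations, typically via a linear-algebra routine rather than by hand.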
7.2.2 Generalized Additive Model (GAM) :
• Application: Model continuous outcomes using non-linear functions of predictors.
• Model: Allows each predictor a unique smooth function f(i).
• Mathematical Expression: Y = β0 + f1(X1) + f2(X2) + … + fp(Xp) + ε, where each fi is
a smooth function of the predictor Xi.

• Learning: Initially used backfitting, but now commonly estimated with penalized
regression splines.
• Evaluation:
- Focus on the underlying relationship between features and outcomes using
appropriate statistical or machine learning methods based on the type of
outcome.
- The formulas given provide the mathematical basis for parameter estimation
and model fitting essential for understanding and implementing these prediction
models in clinical settings.
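The backfitting procedure mentioned under "Learning" can be sketched with a deliberately crude bin-average smoother, chosen only to keep the sketch dependency-free; this illustrates the iterative loop, not how modern penalized-regression-spline GAMs are fitted.

```python
# Toy backfitting for an additive model y ~ alpha + f1(x1) + f2(x2):
# repeatedly smooth the partial residuals of each predictor in turn.
# The smoother is a crude bin average, purely for illustration.

def bin_smooth(xs, rs, nbins=4):
    """Return a function x -> mean residual of x's bin (crude smoother)."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / nbins or 1.0
    def bin_of(x):
        return min(int((x - lo) / width), nbins - 1)
    sums, counts = {}, {}
    for x, r in zip(xs, rs):
        b = bin_of(x)
        sums[b] = sums.get(b, 0.0) + r
        counts[b] = counts.get(b, 0) + 1
    means = {b: sums[b] / counts[b] for b in sums}
    return lambda x: means.get(bin_of(x), 0.0)

def backfit(X, y, iters=10):
    """X: list of feature tuples; y: responses. Returns (alpha, [f1..fp])."""
    n, p = len(y), len(X[0])
    alpha = sum(y) / n
    fs = [lambda x: 0.0 for _ in range(p)]
    for _ in range(iters):
        for j in range(p):          # smooth residuals left over by the
            resid = [y[i] - alpha   # other components against predictor j
                     - sum(fs[k](X[i][k]) for k in range(p) if k != j)
                     for i in range(n)]
            fs[j] = bin_smooth([row[j] for row in X], resid)
    return alpha, fs
```

On a balanced additive dataset, this loop converges to the exact componentwise fit after the first sweep.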
7.3 Alternative Clinical Prediction Models:
8. Visual Analytics for Healthcare ?

8.1. Introduction to Visual Analytics in Healthcare:


• Visual analytics integrates interactive visual interfaces with analytical
techniques to manage complex healthcare data effectively.
• Challenges like information overload in healthcare data necessitate
innovative approaches to support data interpretation and decision-making.

8.2. Challenges Posed by Information Overload:


• Healthcare data overload can lead to errors, overlooked changes, and
confusion due to the complexity of data types and the sheer volume of
information.
• Human cognitive limits, such as the capacity to reason about only around four variables
at once, exacerbate the challenge of interpreting extensive clinical datasets.

8.3. Types of Data in Clinical Settings:


• Clinical data encompasses structured (quantitative, interval, ordinal,
categorical, hierarchical) and unstructured (free text) data, requiring
sophisticated analysis techniques.
• Effective analysis often involves converting unstructured data into a
standardized, computable format using Natural Language Processing
(NLP) techniques.

FIGURE 8.1: The most common technique for showing structured clinical data within EHRs is the table.
(a) Table with a colormap showing the pain scale values for a given patient that went through an
intensive 20-day treatment. (b) Graphical illustration showing the pain scale for a given patient that
went through an intensive 20-day treatment. (c) Illustration of the DoD/VA pain rating scale shown to
patients to better standardize pain assessments.
8.4. Visualization Techniques in Healthcare:
• Common visualization methods in healthcare include tables, heatmaps,
line charts, and scatter plots, facilitating data comprehension and analysis.
• Advanced visual analytics tools offer interactive interfaces to explore
complex datasets effectively, enhancing clinicians' ability to derive
insights and make informed decisions.

8.5. Impact and Future Directions:


• Visual analytics tools play a vital role in validating clinical data, improving
workflows, and promoting transparency within healthcare institutions.
• Future advancements aim to refine visual analytics tools to handle higher
data complexities, fostering intuitive data interaction and enhancing
diagnostic accuracies and healthcare delivery.

8.6 Visual Analytics in Healthcare :


Medicine is a field driven by data. Basic research is conducted by designing
experiments, collecting data, and analyzing the results. Scientists studying human health
gather a wide variety of different types of data, and they do it at scales that range from an
individual’s genomic fingerprint to large-scale surveys of global populations. Data is often
collected over time, then analyzed, summarized, and inspected to draw out clinically
significant findings. These insights translate to the bedside where healthcare providers gather
yet more data about their patients, examine this data in light of the known literature, and
make treatment decisions. Policy makers define guidelines and regulations, develop
economic models, and design clinical workflows to match this information. The system is
monitored for quality assurance by observing, analyzing the gathered data, and—ideally—
feeding the insights back to the healthcare system for continuous improvement.

8.7 Visual Analytics for Clinical Workflow


As clinical centers continue to embrace new health information technology (HIT),
hospital administrators and leadership groups are interested in ways to better understand the
overall workflow of their organizations including billings, coding patterns, waiting times,
patient outcomes, differences/similarities between providers, frequencies of specific
diagnoses, effectiveness of particular treatments, and many other details that could be used to
obtain insight about the organization.
Effective methods to obtain that information, generate reports, and identify trends can
help hospital administrators better justify resources, determine areas of improvement,
increase transparency between providers and patients, compare the performance between
different departments, and reduce the overall cost of treatment.
8.8 Visual Analytics for Clinicians
In contrast to the applications described earlier in this chapter, clinical use cases for
visual analytics typically focus on understanding data about an individual patient. Such index
patients are often visualized in the context of a larger background population to demonstrate
deviations (or lack thereof) from a peer group, but the goal is to provide a clinician with
individualized insights and—potentially—support personalized-care decisions.

Fig 8.2 : LifeLines, which emerged in the late 1990s, was an early example of visualization
applied to personal medical records.

8.9 Visual Analytics for Patients


Interfaces that are helpful to patients may be very different from the ones that are
helpful to clinicians. At the outset, the intended uses may be different. Major uses include
assisting patients to comprehend health information, to manage their health condition, and to
serve as a communication platform within particular healthcare contexts.
FIGURE 8.3: The individual summary visualization used within PatientsLikeMe.

Visual analytics plays a central role in healthcare by helping to manage and interpret
complex data through interactive visual interfaces. These tools are particularly useful in
applications such as public health monitoring, medical research, clinical workflow, and
patient engagement. Visual analytics combines human cognitive skills with computational
power, aiding tasks such as clinical decision-making and medical research. While significant
progress has been made in making complex healthcare data more accessible, challenges remain,
and ongoing research aims to refine visual analytic techniques to keep pace with increasing
data complexity and volume. This is crucial for advancing a more evidence-based healthcare
system.
