Irena Roterman-Konieczna (Ed.)
Simulations in Medicine
Editor
Prof. Dr. Irena Roterman-Konieczna
Jagiellonian University
Medical College, Department of Bioinformatics and Telemedicine
Ul. Sw. Lazarza 16, 31-530 Krakow, Poland
e-mail: myroterm@cyf-kr.edu.pl

This book has 247 figures and 5 tables.

The publisher, together with the authors and editors, has taken great pains to ensure that all
information presented in this work (programs, applications, amounts, dosages, etc.) reflects the
standard of knowledge at the time of publication. Despite careful manuscript preparation and
proof correction, errors can nevertheless occur. The authors, editors and publisher
disclaim all responsibility for any errors or omissions, and any liability for the results
obtained from use of the information, or parts thereof, contained in this work.

The citation of registered names, trade names, trademarks, etc. in this work does not imply, even
in the absence of a specific statement, that such names are exempt from laws and regulations
protecting trademarks etc. and therefore free for general use.

ISBN 978-3-11-040626-9
e-ISBN (PDF) 978-3-11-040634-4
e-ISBN (EPUB) 978-3-11-040644-3

Library of Congress Cataloging-in-Publication Data


A CIP catalog record for this book has been applied for at the Library of Congress.

Bibliographic information published by the Deutsche Nationalbibliothek


The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed
bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2015 Walter de Gruyter GmbH, Berlin/Boston


Cover image: Eraxion/istock/thinkstock
Typesetting: PTP-Berlin, Protago-TEX-Production GmbH, Berlin

www.degruyter.com
Preface
“Simulations in medicine” – typing these words into Google produces a long list
of institutions that train students in all practical aspects of medicine using
phantoms. The student may learn to perform a variety of procedures and surgical
interventions by interacting with a simulated patient. Such centers perform a
great range of tasks related to medical education; however, medical simulations
are not limited to manual procedures.
The very word “simulation” is closely tied to computer science. It involves
recreating a process which occurs over a period of time. The process may
include actions performed manually by a student but it can also comprise events
occurring in virtual space, under specific conditions and in accordance with
predetermined rules – including processes occurring on the molecular (Chapter
1) or cellular (Chapter 2) level, at the level of a communication system (Chapter
3) or organs (Chapters 4 and 5) or even at the level of the complete organism –
musculoskeletal relations (Chapter 6). “Simulations in medicine” also involve
recreating the decision-making process in the context of diagnosis (Chapters 7,
8, 9), treatment (Chapters 10 and 11), therapy (Chapter 12), as supported by large-
scale telecommunication (Chapter 13) and finally in patient support (Chapter
14).
This interpretation of the presented concept – focusing on understanding of
phenomena and processes observed in the organism – is the core subject of our
book and can, in fact, be referred to as “PHANTOMLESS medical simulations”.
The list of problems which can be presented in the form of simulations is
vast. Some selection is therefore necessary. While our book adopts a selective
approach to simulations, each simulation can be viewed as a specific example of
a generic phenomenon: indeed, many biological events and processes can be
described using coherent models and assigned to individual categories. This
pattern-based approach broadens the range of interpretations and facilitates
predictions based on the observable analogies. As a result, simulation results
become applicable to a wide category of models, permitting further analysis.
One such universal pattern which we will refer to on numerous occasions is
the concept of an “autonomous entity”. The corresponding definition is broad,
encompassing all systems capable of independent operation, ensuring their own
survival and homeostasis. This includes individual organisms, but also complex
social structures such as ant colonies, beehives or even factories operating under
market conditions. The structures associated with the autonomous operation of
these entities share certain common characteristics – they include e.g.
construction structures which fulfill the role of “building blocks” (Fig. 1(a)),
function-related structures which act in accordance with automation principles
(Fig. 1 (b)) and, finally, structures responsible for sequestration of materials,
making them compact and durable while also ensuring that they can be easily
accessed when needed (Fig. 1 (c)).

Fig. 1: Symbolic depiction of the structural and functional characteristics of the organism as an
autonomous entity, comprising three basic types of components (a, b, c) corresponding to specific
aims: (a) construction; (b) function; (c) storage.

Living organisms conform to the above described model. Each problem, when
expressed in the form of a simulation, has its place in a coherent system – much
like a newly acquired book in a library collection.
The division presented also helps explain common issues and problems
relevant to each group of models. Afflictions of the skeletal system, metabolic
diseases or storage-related conditions can all be categorized using the above
presented schema (although some of them may affect more than one category).
Even randomly selected simulations follow this generalized model,
contributing to proper categorization of biological phenomena. This fact
underscores the importance of simulation-based imaging.
The journal “Bio-Algorithms and Med-Systems”, published by De Gruyter, invites
all readers to submit papers concerning the broadly understood spectrum of
PHANTOMLESS simulations in medicine.
You are invited to visit: http://www.degruyter.com/view/j/bams

Krakow, March 2015 Irena Roterman-Konieczna


Contents
Preface

List of authors
Part I: Molecular level

Monika Piwowar and Wiktor Jurkowski


1 Selected aspects of biological network analysis
1.1 Introduction
1.2 Selected biological databases
1.2.1 Case study: Gene Expression Omnibus
1.2.2 RegulonDB
1.3 Types of biological networks
1.3.1 Relations between molecules and types of networks
1.3.2 Biochemical pathways
1.4 Network development models
1.4.1 Selected tools for assembling networks on the basis of gene expression data
1.4.2 Selected tools for reconstruction of networks via literature mining
1.5 Network analysis
1.5.1 Selected tools
1.5.2 Cytoscape analysis examples
1.6 Summary
Part II: Cellular level

Jakub Wach, Marian Bubak, Piotr Nowakowski, Irena Roterman, Leszek Konieczny, and
Katarzyna Chłopaś
2 Negative feedback inhibition – Fundamental biological regulation in cells and
organisms
2.1 Negative feedback-based systems simulations
2.1.1 Introduction
2.1.2 Glossary of Terms
2.1.3 Software model
2.1.4 Application manual
2.1.5 OS model example
2.1.6 Simulation algorithm

Irena Roterman-Konieczna
3 Information – A tool to interpret the biological phenomena
Part III: Organ level

Anna Sochocka and Tomasz Kawa


4 The virtual heart

Marc Ebner and Stuart Hameroff


5 Modeling figure/ground separation with spiking neurons
5.1 Introduction
5.2 Figure/ground separation
5.3 Spiking neural networks
5.4 Lateral connections via gap junctions
5.5 Simulation of a sheet of laterally connected neurons
5.6 Basis of our model
5.7 Conclusion
Part IV: Whole body level

Ryszard Tadeusiewicz
6 Simulation-based analysis of musculoskeletal system properties
6.1 Introduction
6.2 Components of a motion simulation model
6.2.1 Simulating the skeleton
6.2.2 Bone model simulations
6.2.3 Muscle models
6.2.4 Velocity-dependent simulations of the muscle model
6.3 Summary
6.4 Simulation software available for download
Part V: Diagnostics procedure

Andrzej A. Kononowicz and Inga Hege


7 The world of virtual patients
7.1 Introduction
7.2 What are virtual patients?
7.3 Types of virtual patient
7.4 The motivation behind virtual patients
7.5 Theoretical underpinnings of virtual patients
7.5.1 Experiential learning theory
7.5.2 Theory of clinical reasoning
7.6 The technology behind virtual patients
7.6.1 Virtual patient systems
7.6.2 Components of virtual patients
7.6.3 Standards
7.7 How to use virtual patients?
7.7.1 Preparation for or follow-up of face-to-face teaching
7.7.2 Integration into a face-to-face session
7.7.3 Assessment
7.7.4 Learning-by-teaching approach
7.8 The future of virtual patients

Dick Davies, Peregrina Arciaga, Parvati Dev, and Wm LeRoy Heinrichs


8 Interactive virtual patients in immersive clinical environments: The potential for
learning
8.1 Introduction
8.2 What are virtual worlds?
8.3 Immersive Clinical Environments (Virtual Clinical Worlds)
8.4 Virtual patients
8.5 Interactive virtual patients in immersive clinical environments
8.6 Case study: Using immersive clinical environments for Inter-Professional Education at
Charles R. Drew University of Medicine
8.6.1 Introduction to case study
8.6.2 The case study
8.6.3 Assessment
8.6.4 Summary and lessons learned
8.7 The potential for learning
8.7.1 Why choose immersive clinical environments?
8.7.2 Decide
8.7.3 Design
8.7.4 Develop
8.7.5 Deploy
8.8 Conclusion: “Learning by Doing… Together”

Joanna Jaworek-Korjakowska and Ryszard Tadeusiewicz


9 Melanoma thickness prediction
9.1 Introduction
9.2 Motivation
9.3 Clinical definition and importance
9.4 Algorithm for the determination of melanoma thickness
9.5 Melanoma thickness simulations
9.6 Conclusions
Part VI: Therapy

Ryszard Tadeusiewicz
10 Simulating cancer chemotherapy
10.1 Simulating untreated cancer
10.2 Enhanced model of untreated cancer
10.3 Simulating chemotherapy
10.4 Simulation software available for the reader

Piotr Dudek and Jacek Cieślik


11 Introduction to Reverse Engineering and Rapid Prototyping in medical
applications
11.1 Introduction
11.2 Reverse Engineering
11.2.1 Phase one – Inputs of medical RE
11.2.2 Phase two – Data acquisition
11.2.3 Phase three – Data processing
11.2.4 Phase four – Biomedical applications
11.3 Software for medical RE
11.3.1 Mimics Innovation Suite
11.3.2 Simpleware ScanIP
11.3.3 3D-DOCTOR
11.3.4 Amira
11.3.5 Other software for 3D model reconstruction
11.3.6 RE and dimensional inspection
11.3.7 Freeform modeling
11.3.8 FEA simulation and CAD/CAM systems
11.4 Methods of Rapid Prototyping for medical applications – Additive Manufacturing
11.4.1 Liquid-based RP technology
11.4.2 Stereolithography (SLA)
11.4.3 Polymer printing and jetting
11.4.4 Digital Light Processing (DLP)
11.4.5 Solid sheet materials
11.4.6 Fused Deposition Modeling (FDM)
11.4.7 Selective Laser Sintering (SLS)
11.4.8 Selective Laser Melting (SLM)
11.4.9 Electron Beam Melting (EBM)
11.4.10 Tissue engineering
11.5 Case studies
11.5.1 One-stage pelvic tumor reconstruction
11.5.2 Orbital reconstruction following blowout fracture
11.6 Summary

Zdzisław Wiśniowski, Jakub Dąbroś, and Jacek Dygut


12 Computer simulations in surgical education
12.1 Introduction
12.2 Overview of applications
12.2.1 Gray’s Anatomy Student Edition, Surgical Anatomy – Student Edition, digital editions
of anatomy textbooks for the iOS (free) and Android (paid)
12.2.2 Essential Skeleton 4, Dental Patient Education Lite, 3D4Medical Images and
Animations, free educational software by 3D4Medical.com, available for iOS, Android
(Essential Skeleton 3 – earlier version; paid editions of Essential Anatomy 3 and
iMuscle 2)
12.2.3 SpineDecide – An example of point of care patient education for healthcare
professionals, available for iOS
12.2.4 iSurf BrainView – Virtual guide to the human brain, available for iOS
12.2.5 Monster Anatomy Lite – Knee – Orthopedic guide, available for iOS (Monster Minds
Media)
12.2.6 AO Surgery Reference – Orthopedic guidebook for diagnosis and trauma treatment,
available for iOS and Android
12.2.7 iOrtho+ – Educational aid for rehabilitationists, available for iOS and Android
12.2.8 DrawMD – Based on General Surgery and Thoracic Surgery by Visible Health Inc.,
available for iOS
12.2.9 MEDtube, available for iOS and Android
12.3 Specialized applications
12.3.1 Application description
12.4 Simulators
12.4.1 Selected examples of surgical simulators
12.5 Summary
Part VII: Support of therapy

Łukasz Czekierda, Andrzej Gackowski, Marek Konieczny, Filip Malawski, Kornel Skałkowski,
Tomasz Szydło, and Krzysztof Zieliński
13 From telemedicine to modeling and proactive medicine
13.1 Introduction
13.2 ICT-driven transformation in healthcare
13.2.1 Overview of telemedicine
13.2.2 Traditional model of healthcare supported by telemedicine
13.2.3 Modeling as knowledge representation in medicine
13.2.4 Towards a personalized and proactive approach in medicine
13.2.5 Model of proactive healthcare
13.3 Computational methods for models development
13.3.1 Computational methods for imaging data
13.3.2 Computational methods for parametric data
13.4 TeleCARE – telemonitoring framework
13.4.1 Overview
13.4.2 Contribution to the model-based proactive medicine concept
13.4.3 Case study
13.5 TeleDICOM – system for remote interactive consultations
13.5.1 Overview
13.5.2 Contribution to the model-based proactive medicine concept
13.6 Conclusions

14 Serious games in medicine


Paweł Węgrzyn
14.1 Serious games for health – Video games and health issues
14.1.1 Introduction
14.1.2 Previous surveys
14.1.3 Evidence review
14.1.4 Conclusions

Ewa Grabska
14.2 Serious game graphic design based on understanding of a new model of visual
perception – computer graphics
14.2.1 Introduction
14.2.2 A new model of perception for visual communication
14.2.3 Visibility enhancement with the use of animation
14.2.4 Conclusion

Irena Roterman-Konieczna
14.3 Serious gaming in medicine
14.3.1 Therapeutic support for children
14.3.2 Therapeutic support for the elderly

Index
List of authors

Dr. Peregrina Arciaga


Charles R. Drew/UCLA University,
School of Medicine
1731 East 120th Street,
CA 90059 Los Angeles, USA
e-mail: peregrinaarciaga@cdrewu.edu
Chapter 8

Dr. Marian Bubak


AGH – Cyfronet
Nawojki 11, 30-950 Krakow, Poland
e-mail: bubak@agh.edu.pl
Chapter 2

Katarzyna Chłopaś – Student


Jagiellonian University – Medical College
Sw. Anny 12, 31-008 Krakow, Poland
email: katarzyna.chlopas@uj.edu.pl
Chapter 2

Prof. Jacek Cieślik


AGH – University of Science and Technology
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
e-mail: cieslik@agh.edu.pl
Chapter 11

Dr. Łukasz Czekierda


AGH – University of Science and Technology
Kawiory 21, 30-055 Krakow, Poland
e-mail: luke@agh.edu.pl
Chapter 13

Jakub Dąbroś
AGH – University of Science and Technology
Łazarza 16, 30-530 Krakow, Poland
e-mail: kubadabros@gmail.com
Chapter 12

Dr. Dick Davies


Ambient Performance
43 Bedford Street, Suite 336,
Covent Garden, London WC2E 9HA, UK
e-mail: dick.davies@ambientperformance.com
Chapter 8

Dr. Parvati Dev


Innovation in Learning Stanford, USA
12600 Roble Ladera Rd,
CA 94022 Los Altos Hills, USA
e-mail: parvati@parvatidev.org
Chapter 8

Dr. Piotr Dudek


AGH – University of Science and Technology
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
e-mail: pdudek@agh.edu.pl
Chapter 11

Jacek Dygut MD
Canton Hospital – Wojewodzki Hospital
Monte Casino 18, 37-700 Przemysl, Poland
e-mail: jacekdygut@gmail.com
Chapter 12

Prof. Marc Ebner


Ernst-Moritz-Arndt University Greifswald
Institute for Mathematics and Informatics
Walther-Rathenau-Str. 47,
17487 Greifswald, Germany
e-mail: marc.ebner@uni-greifswald.de
Chapter 5

Prof. Andrzej Gackowski


Jagiellonian University – Medical College,
Cardiology Hospital
Prądnicka 80, 31-202 Krakow, Poland
e-mail: agackowski@szpitaljp2.krakow.pl
Chapter 13

Prof. Ewa Grabska


Jagiellonian University
Łojasiewicza 11, 30-348 Krakow, Poland
e-mail: ewa.grabska@uj.edu.pl
Chapter 14

Prof. Stuart Hameroff


Departments of Anesthesiology and Psychology
and Center for Consciousness Studies
The University of Arizona Tucson
Arizona 85724, USA
e-mail: hameroff@u.arizona.edu
Chapter 5

Dr. Inga Hege


Ludwig-Maximilians-University München
Ziemssenstr. 1, 80336 München, Germany
e-mail: inga.hege@med.uni-muenchen.de
Chapter 7

Dr. LeRoy Heinrichs


Stanford University School of Medicine, USA
8 Campbell Lane, CA 94022 Menlo Park, USA
e-mail: whl@stanford.edu
Chapter 8

Dr. Joanna Jaworek-Korjakowska


AGH – University of Science and Technology
Al. A. Mickiewicza 30, 30-059 Krakow
e-mail: jaworek@agh.edu.pl
Chapter 9

Dr. Wiktor Jurkowski


The Genome Analysis Centre,
Norwich Research Park
Norwich NR4 7UH, UK
e-mail: wiktor.jurkowski@tgac.ac.uk
Chapter 1

Tomasz Kawa MSc


Jagiellonian University
Łojasiewicza 11, 30-348 Krakow, Poland
Chapter 4

Prof. Leszek Konieczny


Jagiellonian University – Medical College
Kopernika 7, 31-034 Krakow, Poland
e-mail: mbkoniec@cyf-kr.edu.pl
Chapter 2

Marek Konieczny
AGH – University of Science and Technology
Kawiory 21, 30-055 Krakow, Poland
e-mail: marekko@agh.edu.pl
Chapter 13

Dr. Andrzej Kononowicz


Jagiellonian University – Medical College
Łazarza 16, 31-530 Kraków, Poland
e-mail: mykonono@cyf-kr.edu.pl
Chapter 7

Filip Malawski
AGH – University of Science and Technology
Kawiory 21, 30-055 Krakow, Poland
e-mail: fmal@agh.edu.pl
Chapter 13

Piotr Nowakowski MSc


AGH – University of Science and Technology
Nawojki 11, 30-950 Krakow, Poland
e-mail: ymnowako@cyf-kr.edu.pl
Chapter 2

Dr. Monika Piwowar


Jagiellonian University – Medical College
Łazarza 16, 31-530 Krakow, Poland
e-mail: myroterm@cyf-kr.edu.pl
Chapter 1

Prof. Irena Roterman-Konieczna


Jagiellonian University – Medical College
Łazarza 16, 31-530 Krakow, Poland
e-mail: myroterm@cyf-kr.edu.pl
Chapters 2, 3, and 14

Kornel Skałkowski MSc


AGH – University of Science and Technology
Kawiory 21, 30-055 Krakow, Poland
e-mail: skalkow@agh.edu.pl
Chapter 13

Dr. Anna Sochocka


Jagiellonian University
Łojasiewicza 11, 30-348 Krakow, Poland
e-mail: a.sochocka@uj.edu.pl
Chapter 4

Dr. Tomasz Szydło


AGH – University of Science and Technology
Kawiory 21, 30-055 Krakow, Poland
e-mail: tszydlo@agh.edu.pl
Chapter 13

Prof. Ryszard Tadeusiewicz


AGH – University of Science and Technology,
Chair of Automatics and Bioengineering
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
e-mail: rtad@agh.edu.pl
Chapters 6, 9, and 10

Jakub Wach MSc


AGH – Cyfronet
Nawojki 11, 30-950 Krakow, Poland
e-mail: j.wach@cyfronet.pl
Chapter 2

Prof. Paweł Węgrzyn


Jagiellonian University
Łojasiewicza 11, 30-348 Krakow, Poland
e-mail: pawel.wegrzyn@uj.edu.pl
Chapter 14

Zdzisław Wiśniowski MSc


Jagiellonian University – Medical College
Łazarza 16, 30-530 Krakow, Poland
e-mail: mywisnio@cyf-kr.edu.pl
Chapter 12

Prof. Krzysztof Zieliński


AGH – University of Science and Technology,
Informatics Institute
Kawiory 21, 30-055 Krakow, Poland
e-mail: kz@ics.agh.edu.pl
Chapter 13
Part I: Molecular level
Monika Piwowar and Wiktor Jurkowski
1 Selected aspects of biological network analysis

1.1 Introduction
Much has been made of the Human Genome Project’s potential to unlock the
secrets of life [1,2]. Mapping the entire human DNA was expected to provide
answers to unsolved problems of heredity, evolution, protein structure and
function, disease mechanisms and many others. The actual outcome of the
project, however, differed from expectations. It turned out that coding fragments
– genes – constitute only a minute fraction (approximately 2 %) of human DNA.
Furthermore, comparative analysis of human and chimpanzee genomes revealed
that despite profound phenotypic differences the DNA of these species differs by
only 1.5 %. Despite being an undisputed technological tour de force, the Human
Genome Project did not live up to the far-reaching hopes of the scientific
community. It seems that genes alone do not convey sufficient information to
explain phenotypic uniqueness – indeed, additional sources of information are
required in order to maintain a coherent system under which the expression of
individual genes is strictly regulated [3].
Cellular biology has historically been dominated by the reductionist
(“bottom-up”) approach. Researchers studied specific components of the cell
and drew conclusions regarding the operation of the system as a whole [4, 5].
Structural and molecular biology reveals the sequential and structural
arrangement of proteins, DNA and RNA chains. In recent years efficient
technologies have emerged, enabling analysis of entire genomes (genomics) [6,
7], regulation of transcription processes (transcriptomics) [8], quantitative and
qualitative properties of proteins (proteomics) [9] as well as the chemical
reactions which form the basis of life (metabolomics) [10, 11]. Specialist
literature is replete with breadth-first data analysis studies which are often jointly
referred to as “omics” (e.g. lipidomics) [12]. The common factor of all these
disciplines is the application of modern experimental methods to study changes
which occur in a given cell or tissue [12].
The ongoing evolution of IT methodologies enables efficient processing of
vast quantities of data and, as a result, many specialist databases have emerged.
Progressive improvements in the computational sciences facilitate increasingly
accurate analysis of the structure and function of individual components of
living cells. Yet, despite the immense effort invested in this work, it has become
evident that biological function cannot – in most cases – be accurately modeled
by referring to a single molecule or organelle. In other words, the cell is more
than merely a sum of its parts and it is not possible to analyze each part
separately and then to assemble them together (like a bicycle). The fundamental
phenomena and properties of life fade from focus when such a reductionist
approach is applied. While an organism can be said to “operate” as determined
by the laws of physics, and while it is composed of a wide variety of chemical
elements, it cannot be analyzed using the same tools which are successfully
applied in other disciplines (e.g. linearization, extrapolation etc.) where our
knowledge of the target system is complete [13, 14]. Molecules interact with one
another forming a fantastically complex web of relationships. Hundreds of
thousands of proteins are encoded by genes which themselves fall under the
supervision of additional proteins. Genes and proteins act together to drive
innumerable processes on the level of individual cells, tissues, organs and entire
organisms. The end result is an enormously complicated, elastic and dynamic
system, exhibiting a multitude of emergent phenomena which cannot be
adequately explained by focusing on its base components [15].
The knowledge and data derived from efficient experimentation allow us to
begin explaining how such components and their interactions affect the
processes occurring in cells – whether autonomous or acting within the scope of
a given tissue, organ or organism. This approach, usually referred to as “systems
biology” has been gaining popularity in recent years. It is based on a holistic
(“top-down”) approach which attributes the properties of biological units to the
requirements and features of systems to which they belong [3]. While a
comprehensive description of the mechanism of life – even on the basic cellular
level – is still beyond our capabilities, ongoing developments in systems biology
and biomedicine supply ample evidence in support of this holistic methodology.
Barabási et al. [16] have conducted several studies which indicate that biological
networks conform to certain basic, universal laws. Accurately describing
individual modules and pathways calls for a marriage between experimental
biology and other modern disciplines, including mathematics and computer
science, which supply efficient means for the analysis of vast experimental
datasets. This formal (mathematical) approach can be applied to biological
processes, yielding suitable methods for modeling the complex
interdependencies which play a key role in cells and organisms alike [17]. Such
a “network-based” view of cellular mechanisms provides an entirely new
framework for studies of both normal and pathological processes observed in
living organisms [16, 18].
Network analysis is a promising approach in systems biology and produces
good results when the target system has already been accurately described (e.g.
metabolic reactions in mitochondria; well-studied signaling pathways etc.).
While such systems are scarce – as evidenced by the interpretation of available
results – network methods are also good at supplying hypotheses or singling out
candidates for further study (e.g. interesting genes).
Existing mathematical models that find application in biology can be roughly
divided into two classes based on their descriptive accuracy: continuous models,
where the state of a molecule (its concentration, degree of activation etc.) and its
interaction with other molecules (chemical reactions) can be formally described
using ordinary differential equations (ODEs) [19, 20] under a specific kinetic
model, and discrete models, where molecules exist in a limited number of states
(typically two) interlinked in a directionless or directed graph. This second class
includes Boolean networks, where each vertex assumes a value of 0 or 1
depending on the assumed topology and logic [21, 22], and Bayesian networks,
where the relations between molecules are probabilistic [23, 24]. As networks
differ in terms of computational complexity, selecting the appropriate tool
depends on the problem we are trying to solve. Boolean networks are well suited
to systems which involve “on/off” switches, such as gene transcription factors
which can be either present or absent, while continuous models usually provide a
more accurate description of reaction kinetics where the quantities of substrates
and products vary over time.
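
To make the distinction concrete, the sketch below (parameters and the two-gene circuit are invented, not taken from any cited tool) simulates the same toy system twice: first as a Boolean network in which each gene is simply ON or OFF, then as a pair of ordinary differential equations with Hill-type kinetics, using Python with scipy.

```python
# A toy two-gene circuit: A activates B, B represses A.
from scipy.integrate import solve_ivp

# --- Discrete (Boolean) model: each gene is ON (1) or OFF (0) ---
def boolean_step(state):
    a, b = state
    return (int(not b), a)   # A is ON unless repressed by B; B copies A

state = (1, 0)
for t in range(5):
    print(f"t={t}: A={state[0]} B={state[1]}")
    state = boolean_step(state)   # the trajectory oscillates with period 4

# --- Continuous (ODE) model of the same circuit, Hill-type kinetics ---
def rhs(t, y, k=1.0, gamma=0.5, n=2):
    a, b = y
    da = k / (1.0 + b**n) - gamma * a          # B represses production of A
    db = k * a**n / (1.0 + a**n) - gamma * b   # A activates production of B
    return [da, db]

sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0])
print("concentrations at t=30:", sol.y[:, -1])
```

The Boolean variant needs no kinetic parameters at all, while the ODE variant tracks how the quantities of both products vary continuously over time – the trade-off described above.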

1.2 Selected biological databases


Formulating more and more precise theoretical descriptions of protein/protein or
protein/gene interactions would not have been possible without experimental
data supplied by molecular biology studies such as sequencing, dihybrid
crossing, mass spectrometry and microarray experiments. From among these,
particular attention has recently been devoted to the so-called vital stain
techniques. Their application in the study of cellular processes is thought to hold
great promise since they enable analysis of dynamic changes occurring in a
living cell without disrupting its function. As a result, this approach avoids the
complications associated with cell death and its biochemical consequences. Vital
stains provide a valuable source of information which can be exploited in
assembling and annotating relation networks. Such efforts are often
complemented by microarray techniques which “capture” the state of the cell at
a particular point in its life cycle. Microarray experiments carried out at
predetermined intervals, while imperfect, provide much information regarding
the relations between individual components of a cell, i.e. proteins. Such detailed
data describing specific “members” of interaction networks along with their
mutual relations is typically stored in specialized repositories, including:
– genomes
– Ensembl (http://www.ensembl.org/index.html)
– UCSC (http://genome.ucsc.edu/)
– protein data
– Protein (http://www.ncbi.nlm.nih.gov/protein/)
– Uniprot (http://www.uniprot.org/)
– PDB (http://www.rcsb.org)
– microarray and NGS data
– GEO (http://www.ncbi.nlm.nih.gov/geo/)
– ArrayExpress (http://www.ebi.ac.uk/arrayexpress/)

1.2.1 Case study: Gene Expression Omnibus

GEO (Gene Expression Omnibus; http://www.ncbi.nlm.nih.gov/geo/) is a database
which aggregates publicly available microarray data as well as data
provided by next generation sequencing and other high-throughput genomics
experiments. GEO data is curated and annotated so that users do not need to
undertake complex preprocessing steps (such as noise removal or normalization)
when they wish to e.g. review gene expression levels in patients with various
stages of intestinal cancer. Additionally, the database provides user-friendly
query interfaces and supports a wide range of visualization and data retrieval
tools to ensure that gene expression profiles can be readily located and accessed.
Owing to its structure, GEO permits comparative analysis of results e.g. for
different patients, applying statistical methods such as Student’s t-test
(comparison of average values in two groups) or ANOVA (comparison of a
larger number of groups). Graphical representation of microarray data with color
maps or charts depicting the expression of selected genes in several different
experiments facilitates preliminary assessment and enables researchers to
pinpoint interesting results. The database also hosts supplementary data: primary
datasets obtained directly from scanning microarrays and converting
fluorescence intensity into numerical values, as well as raw microarray scans
(see Gene Expression Omnibus info; http://www.ncbi.nlm.nih.gov/geo/info/.)
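
As a simple illustration of the group comparisons mentioned above, the following sketch applies both tests with scipy.stats; the expression values are invented, not taken from GEO.

```python
from scipy import stats

# Hypothetical expression values of one gene in two patient groups.
control = [5.1, 4.8, 5.3, 5.0, 4.9]
disease = [6.2, 6.5, 5.9, 6.4, 6.1]

# Student's t-test: comparison of average values in two groups.
t, p = stats.ttest_ind(control, disease)
print(f"t-test: t={t:.2f}, p={p:.4f}")

# ANOVA: comparison of a larger number of groups.
stage1 = [5.5, 5.8, 5.6]
stage2 = [6.3, 6.1, 6.4]
f, p = stats.f_oneway(control, stage1, stage2)
print(f"ANOVA: F={f:.2f}, p={p:.4f}")
```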
The information present in the GEO database may be retrieved using several
types of identifiers; specifically:
– GPLxxx: requests a specific platform. Platform description files contain data
on matrices or microarray sequencers. Each platform may include multiple
samples.
– GSMxxx: requests a specific sample. The description of a sample comprises
the experiment’s free variables as well as the conditions under which the
experiment was performed. Each sample belongs to one platform and may be
included in multiple series.
– GSExxx: requests a specific series. A series is a sequence of linked samples
supplemented by a general description of the corresponding experiment.
Series may also include information regarding specific data items and analysis
steps, along with a summary of research results.

The identifiers of samples, series and platforms are mutually linked – thus, by
querying for a specific microarray sample we may also obtain information on the
platforms and series to which it belongs.
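
Programmatic retrieval is also possible through the NCBI E-utilities interface. The sketch below is a minimal example of looking up the GDS3027 dataset discussed later in this section; the [ACCN] field tag is assumed from the general Entrez query syntax and should be checked against the E-utilities documentation.

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
params = urlencode({
    "db": "gds",               # Entrez database holding GEO records
    "term": "GDS3027[ACCN]",   # look up a dataset by accession (assumed field tag)
    "retmode": "json",
})

with urlopen(f"{BASE}?{params}") as response:
    result = json.load(response)

# Internal Entrez IDs of the matching GEO records
print(result["esearchresult"]["idlist"])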
The GEO homepage offers access to gene expression profiles as well as sets
of individual microarray samples obtained using identical platforms and under
identical conditions. The repository also publishes its data via the National
Center of Biotechnology Information (http://www.ncbi.nlm.nih.gov), with two
distinct collections: GEO DataSet and GEO Gene Profiles. This division is due to
practical reasons and a brief summary of the NCBI databases which aggregate
GEO data is presented below.

GEO DataSet
The GEO DataSet database comprises data from curated microarray experiments
carried out with the use of specific platforms under consistent conditions. It can
be queried by supplying dataset identifiers (e.g. GDSxxx), keywords or names of
target organisms. ID-based queries produce the most accurate results – keywords
and names are ambiguous and may result in redundant data being included in the
result set. An example of a microarray dataset (comprising a number of samples)
is GDS3027 which measures gene expression levels in patients suffering from
early-stage Duchenne muscular dystrophy. The study involved a control group as
well as a group of patients of varying age (measured in months) (Fig. 1.1).
Fig. 1.1: Results of a microarray experiment involving a group of patients afflicted with Duchenne
muscular dystrophy, along with a control group. GSMxxx identifiers refer to specific samples.

Graphical representation of GDS3027 results reveals the expression levels of


individual genes (Fig. 1.2). Purple markers indicate high expression, green
markers correspond to poor expression and grey areas indicate that no
expression could be detected.
In addition, the repository aggregates data in clusters depending on the
correlation between expression profiles with regard to specific samples
(columns) and genes (rows).
Fig. 1.2: Graphical representation of gene expression levels in the GDS3027 microarray dataset.
The inset frame shows a magnified fragment of the GDS matrix. Colors correspond to expression
levels: purple – high expression; green – poor expression; grey – no expression.

GEO Gene Profiles


Unlike GEO DataSet, this repository deals with expression of specific genes
across a number of microarray experiments.
Gene expression levels may be “observed” under a given set of experimental
conditions (such as time of study, gender or other concomitant variables) to
quickly determine whether there is a connection between expression levels and
any of these variables. Additionally, the database supplies links to genes with
similar expression profiles. Queries can be forwarded to other databases
aggregated by NCBI, e.g. to obtain additional data regarding the target sequence
or protein structure. GEO Gene Profile search interfaces are roughly similar to
those provided by GEO DataSet.

Fig. 1.3: MYH8 (myosin, heavy chain) expression profile. As shown, the expression levels of this
gene are higher in the test group than in the control group.

The GDS3027 dataset includes (among others) myosin, whose expression in the
test group is higher than in the control group. The corresponding GEO Gene
Profile data is presented as a bar graph (Fig. 1.3).
Similar techniques can be applied to other genes. The database enables
researchers to quickly discover homologues and genes with similar expression
profiles (referred to as “profile neighbors”). Links to GEO DataSet profiles are
also provided.

1.2.2 RegulonDB
RegulonDB is a database that focuses on the gene regulatory network of E. coli
– arguably the most studied regulatory network [25]. The database portal
provides a range of online (browser accessible) tools that can be used to query
the database, analyze data and export results including DNA sequences and
biological interdependence networks.
In conjunction with the E. coli microarray experiment results (which can be
obtained from GEO), RegulonDB supports validation of regulatory network
simulation algorithms.
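
In the simplest case, such validation reduces to comparing an inferred edge list against a curated gold standard. A minimal sketch (the edge lists below are hypothetical, not real RegulonDB content):

```python
# Each edge is a (regulator, target) pair.
gold = {("crp", "araC"), ("araC", "araB"), ("fnr", "narG")}
inferred = {("crp", "araC"), ("araC", "araB"), ("crp", "narG")}

tp = len(gold & inferred)          # correctly recovered edges
precision = tp / len(inferred)     # fraction of predictions that are correct
recall = tp / len(gold)            # fraction of true edges that were found

print(f"precision={precision:.2f} recall={recall:.2f}")
```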

Using RegulonDB to determine the efficiency of network construction algorithms

The main page of RegulonDB (http://regulondb.ccg.unam.mx/index.jsp) provides links to a set of search
engines facilitating access to gene regulation data. The most popular engines are briefly characterized
below.
– Gene: this interface returns data on a given gene, its products, Shine-Dalgarno sequences, regulators,
operons and all transcription units associated with the gene. It also supplies a graphical depiction of all
sequences present in the gene’s neighborhood, including promoters, binding sites and terminators (in
addition to loci which do not affect regulation of the target gene).
– Operon: the operon is commonly defined as a set of neighboring genes subject to cotranscription. The
database introduces a further distinction between operons and transcription units, treating the operon as
a set of transcription units that are shared by many genes. In RegulonDB a gene may not belong to
more than one operon. A transcription unit (TU) is a set of one or more genes which are transcribed
from a common promoter. TU may also provide binding loci for regulatory proteins, affecting its
promoter and terminator. The search engine returns all information related to a given operon, its
transcription units and the regulatory elements present in each unit. Graph visualization is provided,
showing the placement of all regulatory elements within the target region. A complete set of known
TUs (with detailed descriptions) is also listed below each operon.
– Regulon: this search interface provides basic and detailed information concerning regulons, i.e. groups
of genes regulated by a single, common transcription factor. In addition to such “simple” regulons,
RegulonDB introduces the notion of a complex regulon where several distinct transcription factors
regulate a set of genes, with each factor exerting equal influence upon all genes from its set. The
Regulon interface also shows binding sites and promoters grouped by function.

1.3 Types of biological networks

1.3.1 Relations between molecules and types of networks

Biological networks are composed of molecules: proteins, genes, cellular
metabolites etc. These building blocks are linked by various types of chemical
reactions. Among the simplest biological networks is the gene regulatory
network (GRN) showing which genes activate or inhibit other genes. Networks
are usually depicted as graphs (see inset); however this representation should not
be confused with the graphical layout of networks stored in KEGG databases or
wikipathways.

Graphs as a representation of networks

A graph is a collection of elements (called vertices) linked by mutual relationships (called edges). The
interpretation of vertices and edges may vary – in gene regulatory networks vertices represent genes
while edges correspond to activation/inhibition effects.

In a simple graph there are no loops (edges which connect a vertex with itself) and only one edge may
appear between each pair of vertices. The maximum number of edges in a simple graph with N vertices
is N(N -1)/2. In a directed graph each edge has a specific direction but there is no limit on the number
of edges between each pair of vertices.

Protein-protein interaction networks are represented by simple graphs while signaling networks and
gene regulatory networks usually rely on directed graphs. Metabolic networks describing reversible
chemical reactions may use graphs with weighed edges – in these types of graphs each edge carries a
numerical value which corresponds e.g. to its reaction rate constant. Graphs have many applications in
information technology: for example they can be used for traffic modeling or Internet routing.
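
To make the graph representation concrete, here is a minimal sketch using the networkx library; the genes, effects and the weight are invented for illustration.

```python
import networkx as nx

# Gene regulatory networks are directed: edges carry an effect.
grn = nx.DiGraph()
grn.add_edge("geneA", "geneB", effect="activation")
grn.add_edge("geneB", "geneC", effect="inhibition")
grn.add_edge("geneC", "geneA", effect="activation")

for u, v, data in grn.edges(data=True):
    print(f"{u} -{data['effect']}-> {v}")

# A weighed, undirected variant, e.g. a metabolic reaction with a rate constant:
met = nx.Graph()
met.add_edge("glucose", "G6P", weight=0.8)

# Maximum edge count of a simple graph with N vertices: N*(N-1)/2
n = grn.number_of_nodes()
print("max simple-graph edges:", n * (n - 1) // 2)
```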

The most common types of vertices are genes, proteins and other molecules
which participate in biochemical processes. Some networks also include cellular
organelles (e.g. mitochondria, vacuoles etc.) viewed as “targets” of specific
processes. The set of potential elements may be further extended with abstract
concepts: UV radiation intensity, pH, ROS and any other phenomena which need
to be taken into account when performing network analysis.
Relations between elements can be direct – e.g. a simple chemical reaction
between two molecules – or indirect where a number of intervening reactions
are necessary. An example of an indirect relationship is mutual regulation of
genes. Simply observing that “gene A regulates the expression of gene B”
conceals the existence of a complicated chain where the product of gene A acts
upon the transcription factor or other mechanisms which, in turn, regulate the
expression of gene B.
When the character of the relation is unknown, the relation is said to be
directionless, i.e. we cannot determine which of the two interacting elements is
the effector and which one is the receptor. This phenomenon occurs in many
nonspecific protein-protein interactions: we may know that two proteins bind to
each other but the purpose of the reaction is not known – unlike, for example,
directed activation of adrenergic receptors via hormone complexation leading to
release of protein G which, in turn, binds to its dedicated receptor. In some cases
we possess knowledge not just of the relation’s direction but also of its positive
or negative effects.
A positive effect may involve upregulation of a chemical reaction by an
enzyme, activation of gene expression or an increase in the concentration of
some substrate. A negative effect indicates inhibition or simply a reduction in
the intensity of the above mentioned processes.
This complex interplay of directionless and directed reactions underscores
the fundamental difference between protein-protein interaction (PPI) networks
which focus on nonspecific interactions between proteins, and signaling
networks (SN) which provide detailed insight into biochemical processes
occurring in the cell. As shown, the types of network elements and their mutual
relations are directly related to the scope of our knowledge regarding biological
mechanisms and the accuracy of experimental data.

1.3.2 Biochemical pathways

Several online databases store manually-validated process relationship data and
visualize it by means of interaction diagrams:
– KEGG (http://www.genome.jp/kegg/)
– Reactome Pathways Database (http://www.reactome.org)
– Wikipathways (http://www.wikipathways.org/)

KEGG (Kyoto Encyclopedia of Genes and Genomes – GenomeNet;
http://www.kegg.jp/kegg/) is a database dedicated to researchers who study the
properties of molecular interaction networks on the level of cells, organisms or
even entire ecosystems [26, 27].
Among the most popular features of KEGG is the presentation of molecular
interactions as activity pathways (KEGG PATHWAY). The relationships
between individual molecules (typically proteins) are represented as block
diagrams with directed or directionless links indicating the flow of information.
The number of activity pathways has grown so large that attempts are currently
being made to assemble a global network consisting of various interlinked
pathways (Fig. 1.4).
KEGG also includes a set of relations between individual pathway
components (KEGG BRITE). This database is a set of hierarchical
classifications representing our knowledge regarding various aspects of
biological systems. KEGG DISEASE is an interesting database that stores
molecular interaction data associated with various pathological processes in
humans (http://www.genome.jp/kegg/) (Fig. 1.5).
The ability to visualize individual proteins and other molecules, along with
references to detailed information regarding their properties, provides substantial
help in creating network models for analysis of disease-related processes.
All KEGG databases are interlinked, permitting easy navigation between
datasets.
Although KEGG is popular as a source of gene-centric information, applied
for instance to overrepresentation and Gene Set Enrichment analysis, it has
limited applicability for network analysis. The main hurdle is the heterogeneity
in the style applied to represent particular pathways arising from the
incompleteness of available knowledge and missing annotations. Interactions
represented as a graph are often accompanied by disjoined boxes describing
phenotypes or states. Some pathways are described by a set of chemical
reactions and some are just lists of genes.

Fig. 1.4: Global activity network consisting of multiple pathways. Each dot indicates (in most
cases) a single pathway. A more detailed view of a representative pathway is shown in the central
part of the image, indicating stages of fructose and mannose metabolism.
Fig. 1.5: KEGG interaction diagram corresponding to Alzheimer’s disease. The red-framed inset
contains detailed information concerning the protein labeled “PSEN”.

Both Wikipathways and Reactome focus on gathering information that can be
described in the form of biochemical reactions, thereby escaping the
abovementioned problems. They are much more straightforward in defining
simulation models or interaction graphs.

1.4 Network development models

1.4.1 Selected tools for assembling networks on the basis of gene expression data

Assembling gene regulatory networks remains an open problem. Existing
methods are not equally efficient in processing diverse datasets and it is often
difficult to select the optimal algorithm for a given task. As few regulatory
networks have been experimentally validated, assessment of the accuracy of
hypothetical networks also poses a significant challenge. The DREAM
(Dialogue for Reverse Engineering Assessments and Methods) consortium
attempts to address these issues by organizing regular events where the
efficiency of various network construction algorithms is independently validated
(see http://www.the-dream-project.org/). This section discusses the fundamental
aspects of the construction of regulatory networks based on gene expression
data.
Gene Network Weaver – gene regulatory network processing software
Gene Network Weaver (GNW) provides an efficient way to determine the
validity of gene regulatory network construction algorithms. This software
package can read input datasets created for the purposes of the DREAM project.
The first analysis step involves construction of a realistic regulatory network
from known fragments of real-life interaction networks. This is followed by
generation of simulated gene expression data. GNW is bundled with a number of
preassembled datasets (Escherichia coli and Staphylococcus gene regulation
networks along with several customized DREAM databases). The program
enables users to select subnetworks in order to carry out operations on smaller
and more convenient sets of data. In addition to providing its own datasets,
GNW can import and parse user-generated networks [28].

Cytoscape
While Cytoscape will be presented further on in this chapter, we should note that
it includes the CyniToolbox extension which can derive gene regulation
networks from gene expression data [29]. Data analysis proceeds by detecting
simple correlations or by applying information-theoretic measures such as mutual
information, as well as Bayesian networks. Additionally, CyniToolbox can fill in
missing data and perform input discretization (as required by most processing
algorithms). Similar tasks are handled by another Cytoscape plug-in – MONET
(http://apps.cytoscape.org/apps/monet). Each new version of Cytoscape comes
with a range of plug-ins – up-to-date information can always be obtained on the
toolkit’s homepage.
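
The simplest of these inference strategies – correlation-based co-expression analysis – can be sketched in a few lines of Python with numpy. This illustrates the idea only, not the CyniToolbox implementation, and the data are random:

```python
import numpy as np

rng = np.random.default_rng(0)
genes = ["g1", "g2", "g3", "g4"]
expr = rng.normal(size=(4, 10))      # 4 genes x 10 samples
expr[1] = expr[0] + rng.normal(scale=0.1, size=10)  # make g2 track g1

corr = np.corrcoef(expr)             # pairwise Pearson correlations
threshold = 0.8                      # arbitrary cut-off for drawing an edge

edges = [(genes[i], genes[j], round(corr[i, j], 2))
         for i in range(len(genes)) for j in range(i + 1, len(genes))
         if abs(corr[i, j]) >= threshold]
print(edges)   # expected: an edge between g1 and g2
```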

GenePattern – gene expression analysis features


GenePattern is a comprehensive toolset for analyzing genomics data. Its features
include analysis of genetic sequences, gene expression, proteomics and flow
cytometry data. Tools can be chained into workflows to automate complex analyses.
GenePattern is an open-source project and can be used free of charge for
scientific purposes. User registration is required. Tools can be downloaded from
the project’s website and users may either set up local copies of the software or
connect to one of the available public servers.
ARACNE – one of many GenePattern modules – can reconstruct cellular
networks by applying the ARACNE algorithm. A thorough description of the
data input format is available and data can also be imported from other modules
using appropriate converters. The GEOImporter tool can download data directly
from the GEO database (see Section 1.2.1). GenePattern also provides a server
which recreates gene regulatory networks on the basis of selected DREAM
methods, and implements a meta-algorithm assembled from the three highest
ranked algorithms submitted to the most recent edition of DREAM.

1.4.2 Selected tools for reconstruction of networks via literature mining

Networks can be reconstructed by analyzing peer-reviewed publications. This
process involves specification of target elements (e.g. gene symbols) and relation
types (genetic regulation, protein complexation, etc.) The resulting network can
be exported to a file which may then serve as input for another software
package, or visualized with a GUI to enable further analysis of a specific graph
edge or to prepare a presentation. The methods described in this section can be
roughly divided into two groups. The first group comprises event-centric
methods, e.g. searching for information on physical interactions between two
proteins. This approach offers a great advantage since by focusing on the
description of a biological event we avoid potentially incorrect interpretation of
experiment results – although on the other hand the interpretation task is left
entirely to the user. The second group covers methods which attempt to
determine causative factors in intermolecular relations. This approach offers a
shortcut to useful results since – in most cases – correct interpretations may have
already been obtained and can aid in the reconstruction of cellular networks.
In both cases we should be mindful of the limitations inherent in combining
the results of experiments carried out in various models (animals, tissues, cell
lines) under differing conditions and with the use of dissimilar experimental
techniques. The final outcome of the process should be viewed with caution until
it can be independently validated by a consistent series of experiments (e.g.
differential gene expression analysis).

IntAct and MINT


IntAct (http://www.ebi.ac.uk/intact/) and MINT
(http://mint.bio.uniroma2.it/mint/Welcome.do) contain validated interaction data
for a broad set of proteins, genes and other biomolecules in various
organisms, including humans. All data is traced to peer-reviewed publications
presenting experimental results, and the databases only provide information on
direct interactions without attempting to interpret their outcome.
The databases can be queried by publication, author credentials and proteins
set, and additionally by the quality of the applied experimental methods and
target organisms. Networks can be displayed or saved in one of the popular
network file formats. Each relation can be traced by supplying the corresponding
PubMed ID.

Pathway Studio and Ingenuity Pathway Analysis


Pathway Studio (Ariadne Genomics, http://ariadnegenomics.com) and Ingenuity
Pathway Analysis (Ingenuity Systems, http://www.ingenuity.com) represent a
different approach to literature mining: they subject publications to lexical
analysis and submit preliminary results to a panel of experts in order to reduce
the likelihood of mistakes.
Query results indicate which publications discuss the specific relation and
provide information on the organisms, tissues and cells analyzed in the context
of these publications.

1.5 Network analysis


The typical systems biology research process is a cycle: preliminary
bioinformatics analysis generates new hypotheses concerning the operation of a
given system, which are then verified by experiments whose results can be
subjected to further analysis. The analysis of biological
networks may be approached from various angles such as pathway analysis,
which concerns itself with assembling rich gene ontology datasets and finding
genes or biological processes overrepresented in the data under study; analysis
of the flow of substrates in chemical reaction chains that allows precise
quantification of perturbation; graph analysis, which seeks vertices of particular
importance for a given process or cellular phenotype. Many software packages
support the interpretation of biochemical data with the use of network analysis
tools. This section introduces some of the most popular tools.

1.5.1 Selected tools


From among the multitude of open-source and commercial network analysis
packages, the following tools are particularly noteworthy: Cytoscape
(www.cytoscape.org/), COPASI (www.copasi.org), Cell Illustrator
(www.cellillustrator.com), and igraph (http://igraph.org/redirect.html). They
permit the user to trace (among others) metabolic pathways, signaling cascades,
gene regulatory networks and many other types of interactions between
biologically active molecules (DNA, RNA and proteins). They also support
statistical analysis and visualization of results as well as of the networks
themselves. COPASI and Cell Illustrator base their simulations on a broad
knowledge base which describes many important reactions in terms of
differential equations. In Cytoscape and igraph biological networks are
represented by graphs – in these cases the underlying reactions are not described
in detail and simulations are instead based on the existence (or lack of) directed
links between various molecules.

COPASI
COPASI is a noncommercial software package capable of analyzing and
simulating biochemical reactions as well as any other processes which can be
expressed in terms of mutual relations between entities [30]. It supports the
SBML model description standard and can perform simulations using ordinary
differential equations (ODEs) or Gillespie’s stochastic algorithms
acknowledging arbitrary discrete events.
COPASI can be used to simulate and study the kinetics of chemical reactions
occurring in various zones (e.g. organelles) of the cell (Fig. 1.6). Biochemical
processes are expressed as sets of reactions, using a standardized notation, with
parameters such as reaction rate, stoichiometry and location taken into account.
This functionality enables users to integrate various processes – chemical
reaction, molecular aggregation, transport etc. The software comes with a rich
set of metadata describing common reactions and, in most cases, the user only
needs to select a given reaction from a list. In more complex scenarios users can
define custom biochemical functions describing nonstandard reactions, along
with a kinetic model expressing the relation between reagent concentrations and
reaction rate. The tool also enables the user to determine where a given element
can be found, which reactions it participates in and which kinetic models should
be applied when simulating these reactions. Finally, COPASI can be used to
define entirely new models describing phenomena other than chemical reactions.
Each reaction is assigned to a differential equation which will be used to
simulate its progress. In theory this permits the user to simulate highly complex
processes comprising many different reactions. In practice, however, dealing
with a large set of differential equations forces the user to provide the values for
many distinct parameters (e.g. on the basis of experimental data) and incorrect
values may lead to nonsensical results. For this reason it is recommended to limit
each simulation to not more than several dozen reactions.
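
To give a flavor of the machinery COPASI hides behind its reaction lists, the sketch below integrates a single enzymatic reaction S → P under Michaelis-Menten kinetics with scipy; the rate constants are assumed, not taken from any COPASI model.

```python
from scipy.integrate import solve_ivp

Vmax, Km = 1.0, 0.5   # hypothetical kinetic parameters

def rhs(t, y):
    s, p = y
    v = Vmax * s / (Km + s)   # reaction rate from the kinetic model
    return [-v, v]            # stoichiometry: one S consumed, one P produced

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0])   # [S]=1, [P]=0 at t=0
print("final [S], [P]:", sol.y[0, -1], sol.y[1, -1])
```

In a full model each reaction contributes such a term to the system of differential equations, which is why the number of parameters grows quickly with the number of reactions.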

Fig. 1.6: Defining chemical reactions in COPASI [http://www.copasi.org/tiki-index.php?page=screenshots].

COPASI is a popular tool with an active user community offering assistance
in the construction of new metabolic models and reuse of existing ones. The
system’s appeal is also evidenced by the number of scientific publications which
apply COPASI e.g. in the analysis of lactic acid fermentation [31], studying the
TGF-beta 1 signal cascade in the context of 3D simulations of the epidermis [32]
or modeling lipids which form actin microfilaments with a view towards
validating hypothetical mechanisms proposed on the basis of experimental
results [33].

Cell Illustrator
Cell Illustrator [34] provides the so-called Biopathway Modeling Mode, which
uses a modified version of the Petri net paradigm known as Hybrid Petri Net
with Extensions (HFPNE). Unlike classic Petri nets which model discrete events,
HFPNEs can be used to simulate continuous processes [35].
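
For readers unfamiliar with the paradigm, here is a minimal sketch of a classic, discrete Petri net in plain Python. It illustrates the base formalism only, not Cell Illustrator's HFPNE extensions, and the places and transition are invented:

```python
# Places hold tokens; a transition fires when all input places are marked.
marking = {"mRNA": 1, "ribosome": 1, "protein": 0}

# transition: (input places, output places)
translate = (["mRNA", "ribosome"], ["protein", "ribosome"])

def fire(marking, transition):
    inputs, outputs = transition
    if all(marking[p] >= 1 for p in inputs):      # transition enabled?
        for p in inputs:
            marking[p] -= 1                       # consume input tokens
        for p in outputs:
            marking[p] += 1                       # produce output tokens
    return marking

print(fire(marking, translate))   # {'mRNA': 0, 'ribosome': 1, 'protein': 1}
```

HFPNE generalizes this picture by allowing places to hold continuous quantities and transitions to fire at continuous rates.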
In the Gene Net mode Cell Illustrator can analyze and explore gene
regulatory networks, however without the ability to directly simulate such
networks. Once a gene interaction network has been set up the tool can be
switched to the Biopathway Modeling Mode which provides a starting point for
the development of simulation models.
The tool provides a robust graphical user interface where users may either
carry out simulations of custom pathways or reuse models available from
specialized databases (Fig. 1.7).

Fig. 1.7: Angiogenesis model (Transpath: endothelial_cell_angiogenesis_30.csml). The inset
frame shows a magnified version of the network, revealing individual vertices as well as quantified
relations.

Fig. 1.8: Technical and graphical symbols used by Cell Illustrator {Entity: quantity/concentration of
biomolecule/object; Process: Rate/conditions/mechanism of interaction/reaction/transfer between
connected biomolecules/objects (arrows); Connector: Process/interaction (complexation)/
inhibition}.

Cell Illustrator is known for its user friendliness. It provides a clear and intuitive
menu facilitating easy access to all of its features. Another distinct advantage of
Cell Illustrator is its support for graphical representations of proteins, DNA
chains and other structures, including entire organelles (Fig. 1.8).
In addition to an efficient simulation panel the program also provides a tool
for generating high-quality diagrams and video files presenting simulation
results.

Examples of simple chemical reaction models implemented using Cell Illustrator

– Translocation process (p53 protein; nucleus to cytoplasm): increase of p53 concentration in the cell cytoplasm and the corresponding decrease of its concentration in the nucleus.
– Degradation process: three separate variants simulating discrete/continuous changes in the quantity/concentration of the protein which undergoes degradation; visualization of the rate of degradation in relation to the process type and the properties of the degraded molecule.
– Protein complexation (p53 and mdm2): complexation process as a function of the quantity/concentration of substrates in the cell.

Cell Illustrator enables:
– drag-and-drop construction (in a manner similar to the examples shown above) of biological pathway
models consisting of molecular components;
– selection of mathematical formulae to simulate the biochemical reactions which comprise biological
pathways. Simulations may be carried out directly in the workspace (interactively) or uploaded to a
remote server called the Cell Illustrator Server;
– storing simulation results in graphical files and assembling eye-catching animations using the Cell
Animator plug-in;
– analyzing gene interaction networks (static analysis only);
– importing networks created in other programs (support for SBML and CellML) or downloaded from
specialized libraries/databases such as Transpath or KEGG.
Cytoscape
Cytoscape enables visualization and analysis of network topologies. Each
network, regardless of the complexity of the underlying process, is always
depicted as a graph. The graph is either provided by the user or generated
with text mining or coexpression analysis approaches provided by the multiple
plug-ins available. In addition, integration with Wikipathways allows the easy
import of biochemical pathways into the graph representation.
Cytoscape provides a rich set of visualization options for vertices, edges and
labels. The network is stored as a tabularized list of source and target vertices
along with the properties of their relations and any additional user-defined
attributes. In addition to basic topological analysis, users may access a set of
plug-ins for advanced bioinformatics studies. As many of the available plug-ins
have been contributed by community members the system’s authors take care to
enforce coherent standards for code development, data sharing and integration
with the common graphical user interface.

1.5.2 Cytoscape analysis examples

Upon launch, Cytoscape displays its main window and a toolbar. The description
of each tool is provided in the form of mouseover tooltips. In addition to the
network display panel the main window also contains a control panel with
several tabs, and a data panel which can be used to select individual vertices and
edges, and display their attributes. The default control panel tab provides an
overview of the available networks, showing how many vertices and edges they
comprise (each imported network is available at all times – even when not
currently displayed).
Another popular tab contains a selection of graphical widgets, providing
different styles and layouts for network elements. All program features are
comprehensively described in the user manual. Much like COPASI, Cytoscape
boasts an active user community that provides assistance and helps troubleshoot
technical problems.

Identifying communication hubs
Identifying hubs enables us to determine which components of a biochemical
process play a key role in the functioning of the cell. Accordingly, this is often
the first operation carried out on a newly constructed network. For complex
networks with hundreds of vertices and edges visual inspection alone does not
reveal obvious hubs (Fig. 1.9 (a)). Instead, a formal algorithm is applied to
determine the “centrality” of each vertex. The simplest measure is the degree of
the vertex, i.e. its number of incoming and outgoing edges. This value, however,
only indicates the local density of the network and does not acknowledge
network-wide communication. A better indicator of centrality is the so-called
closeness centrality criterion, which takes network topology into account.
Closeness centrality is defined as the inverse of the aggregate length of the shortest paths connecting the given vertex to all other vertices. It can be interpreted as a
measure of effectiveness with which the given vertex can propagate information
through the network. A similar measure, known as betweenness centrality,
depends on the degree of the target vertex’s involvement in communication
between each pair of vertices for which the shortest connecting path includes the
target vertex. While closeness centrality expresses the vertex’s overall capability
for communication, betweenness centrality indicates its involvement in
mediating communication between other vertices. Similar measures are applied
in social network analysis. In our example the network hubs are proteins which
play a key role in regulating the expression of other proteins: ubiquitin (UBC)
and the CREB-binding protein (CREBBP). Another important vertex
corresponds to the PPARG transcription factor which is intimately involved in
regulating metabolism and whose activity is modulated by dietary fat intake
[36].

Fig. 1.9: Network hub analysis example. (a) Lipid metabolism pathway network consisting of over
100 genes and several hundred relations; (b) results of the analysis step where the centrality of
each vertex is indicated by the radius and color of the corresponding circle (small/yellow – low
centrality; large/red – high centrality).
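For readers who prefer to experiment programmatically, the same centrality measures can be computed with the NetworkX library; the edge list below is invented for illustration, whereas Cytoscape performs the equivalent analysis through its plug-ins.

```python
# Degree, closeness and betweenness centrality on a toy interaction graph.
import networkx as nx

edges = [("UBC", "CREBBP"), ("UBC", "PPARG"), ("CREBBP", "PPARG"),
         ("PPARG", "FABP4"), ("PPARG", "CD36"), ("UBC", "TP53")]
g = nx.Graph(edges)

print(nx.degree_centrality(g))       # local measure: normalized vertex degree
print(nx.closeness_centrality(g))    # based on inverted shortest-path distances
print(nx.betweenness_centrality(g))  # share of shortest paths through a vertex
```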

1.6 Summary
Experimental methods continue to undergo rapid development and every decade
seems to bring forth a new revolution in the scope of biological information
processing. For example, rapid progress in sequencing tools and algorithms –
Sanger sequencing, synthesis sequencing (Illumina), pyrosequencing (454) or
real-time sequencing (Pacific Bioscience) to name just a few – has driven a
thousand-fold decrease in the cost of sequencing DNA while simultaneously
increasing its accuracy and reliability. As a result, recent years have witnessed
the emergence of many specialized databases aggregating vast quantities of
biological data. On the other hand it has also become apparent that amassing
data cannot, by itself, explain the complexity and function of cells since it takes
its constituent units (proteins, DNA, RNA etc.) out of context. The traditional
reductionist (bottom-up) approach is therefore insufficient – instead, systems
biology should rely on a holistic (top-down) strategy which takes into account
the relations between components and a multilayered model of cell complexity
and dynamics. Classification and analysis of specific types of elements in
controlled conditions is slowly yielding to qualitative analysis of cells and
organisms as a whole.
This progress would not have been possible without the aid of technical
sciences, and particularly of computer science which facilitates accurate
simulations of complex systems and events. New models are being formulated to
describe metabolic pathways, signaling cascades, gene regulatory networks and
other types of relationships between metabolites, proteins and nucleic acids. This
evolution calls for new data storage and exchange standards. One of the projects
which address this issue is BioPAX (Biological Pathway Exchange;
http://www.biopax.org/) – a language which permits integration, exchange,
visualization and analysis of biological pathways. BioPAX provides a common
format in which biological pathways can be expressed and stored. Work on a
comprehensive description of specific data types and visualization of biological
networks is currently ongoing (Tab. 1.1). While it is, in principle, possible to
apply such common standards to the description of arbitrary biological data,
practical implementations are lagging behind – possibly due to the fact that
interdisciplinary studies aggregating various datasets are still somewhat
infrequent and a common vision of data integration in the scope of molecular
biology is yet to emerge.
Tab. 1.1: Selected standards for storing biochemical pathway data.

Name     Type                                        Website
BioPAX   Pathway representation standard (RDF/OWL)   http://www.biopax.org/
CSML     Pathway representation standard (XML)       http://www.csml.org/
SBML     Pathway representation standard (XML)       http://sbml.org
SBGN     Network diagram creation                    http://www.sbgn.org
PSI      Proteomics                                  http://www.psidev.info/

The popularity of biological pathway and network analysis is expected to increase in the future. Access to growing datasets will enable researchers to
assemble more complex networks which describe cellular biochemistry and, as a
result, conduct advanced simulations focusing e.g. on cell-wide metabolism in
search for factors which affect phenotypic differences.

References
[1] Venter JC et al. The sequence of the human genome. Science. 2001 Feb
16;291(5507):1304–51. Erratum in: Science 2001 Jun 5;292(5523):1838.
[2] Lander ES et al. Initial sequencing and analysis of the human genome. Nature. 2001 Feb 15;
409(6822):860–921. Erratum in: Nature 2001 Aug 2;412(6846):565. Nature 2001 Jun 7;
411(6838):720.
[3] Keller EF. The century beyond the gene. J Biosci. 2005 Feb;30(1):3–10.
[4] Mazzocchi F. Complexity and the reductionism-holism debate in systems biology. Wiley
Interdiscip Rev Syst Biol Med. 2012 Sep–Oct;4(5):413–27.
[5] Wolfe CT. Chance between holism and reductionism: tensions in the conceptualisation of
Life. Prog Biophys Mol Biol. 2012 Sep;110(1):113–20.
[6] Alföldi J and Lindblad-Toh K. Comparative genomics as a tool to understand evolution and
disease. Genome Res. 2013 Jul;23(7):1063–8.
[7] Chain P, Kurtz S, Ohlebusch E and Slezak T. An applications-focused review of comparative
genomics tools: capabilities, limitations and future challenges. Brief Bioinform. 2003
Jun;4(2):105–23. Review.
[8] Tseng GC, Ghosh D and Feingold E. Comprehensive literature review and statistical
considerations for microarray meta-analysis. Nucleic Acids Res. 2012 May;40(9):3785–3799.
doi: 10.1093/nar/gkr1265. Epub 2012 Jan 19. Review.
[9] Anderson NL and Anderson NG. Proteome and proteomics: new technologies, new
concepts, and new words. Electrophoresis. 1998 Aug;19(11):1853–61. Review.
[10] Astarita G and Langridge J. An emerging role for metabolomics in nutrition science. J
Nutrigenet Nutrigenomics. 2013 Aug 31;6(4):179–198.
[11] Bouatra S, Aziat F, Mandal R, Guo AC, Wilson MR, Knox C, Bjorndahl TC, Krishnamurthy R,
Saleem F, Liu P, Dame ZT, Poelzer J, Huynh J, Yallou FS, Psychogios N, Dong E, Bogumil
R, Roehring C and Wishart DS. The human urine metabolome. PLoS One. 2013 Sep
4;8(9):e73076.
[12] Tomescu OA, Mattanovich D and Thallinger GG. Integrative analysis of -omics data: a
method comparison. Biomed Tech (Berl). 2013 Sep 7.
[13] Cramer F. Gene technology in humans: can the responsibilities be borne by scientists,
physicians, and patients? Interdisciplinary Science Review. 2001;26:1–4.
[14] Lazebnik Y. Can a biologist fix a radio? – Or, what I learned while studying apoptosis. Cancer Cell. 2002 Sep;2(3):179–82. Reprinted in: Biochemistry (Mosc). 2004 Dec;69(12):1403–6.
[15] Luisi PL. Emergence in chemistry: chemistry as the embodiment of emergence. Foundations of Chemistry. 2002;4:183–200.
[16] Barabási AL and Oltvai ZN. Network biology: understanding the cell’s functional organization.
Nat Rev Genet. 2004 Feb;5(2):101–13. Review. PubMed PMID: 14735121.
[17] Sharma A, Gulbahce N, Pevzner S, Menche J, Ladenvall C, Folkersen L, Eriksson P, Orho-
Melander M and Barabási AL. Network based analysis of genome wide association data
provides novel candidate genes for lipid and lipoprotein traits. Mol Cell Proteomics. 2013 Jul
23; 12(11):3398–3408.
[18] Gohlke JM, Thomas R, Zhang Y, Rosenstein MC, Davis AP, Murphy C, Becker KG, Mattingly CJ and Portier CJ. Genetic and environmental pathways to complex diseases. BMC Syst Biol. 2009 May 5;3:46. Basak S, Behar M and Hoffmann A. Lessons from mathematically modeling the NF-kB pathway. Immunol Rev. 2012 Mar;246(1):221–38.
[19] Bogdał MN, Hat B, Kochańczyk M and Lipniacki T. Levels of pro-apoptotic regulator Bad and anti-apoptotic regulator Bcl-xL determine the type of the apoptotic logic gate. BMC Syst Biol. 2013 Jul 24;7:67.
[20] Wang RS, Saadatpour A and Albert R. Boolean modeling in systems biology: an overview of
methodology and applications. Phys Biol. 2012 Oct;9(5):055001.
[21] Berestovsky N, Zhou W, Nagrath D and Nakhleh L. Modeling integrated cellular machinery
using hybrid petri-boolean networks. PLoS Comput Biol. 2013 Nov;9(11):e1003306.
[22] Kim SY, Imoto S and Miyano S. Inferring gene networks from time series microarray data
using dynamic Bayesian networks. Brief Bioinform. 2003 Sep;4(3):228–35.
[23] Logsdon BA, Hoffman GE and Mezey JG. Mouse obesity network reconstruction with a
variational Bayes algorithm to employ aggressive false positive control. BMC Bioinformatics.
2012 Apr 2;13:53.
[24] Salgado H, Peralta M et al. RegulonDB (version 8.0): Omics data sets, evolutionary
conservation, regulatory phrases, cross-validated gold standards and more. Nucleic Acids
Research 2013 Nov; doi: 10.1093/nar/gks1201.
[25] Kanehisa M, Goto S, Furumichi M, Tanabe M and Hirakawa M. KEGG for representation and
analysis of molecular networks involving diseases and drugs. Nucleic Acids Res. 2010;38:
D355-D360.
[26] Kanehisa M, Goto S, Sato Y, Furumichi M and Tanabe M. KEGG for integration and interpretation of large-scale molecular datasets. Nucleic Acids Res. 2012;40:D109–D114. Schaffter T, Marbach D and Floreano D. GeneNetWeaver: in silico benchmark generation and performance profiling of network inference methods. Bioinformatics. 2011 Aug 15;27(16):2263–70.
[27] Shannon P, Markiel A, Ozier O, Baliga NS, Wang JT, et al. Cytoscape: a software environment for integrated models of biomolecular interaction networks. Genome Res. 2003;13:2498–2504. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=403769&tool=pmcentrez&rendertype=abstract. Accessed 9 July 2014.
[28] Mendes P, Hoops S, Sahle S, Gauges R, Dada J and Kummer U. Computational modelling
of biochemical networks using COPASI. Methods Mol Biol. 2009;500:17–59.
[29] Oh E, Lu M, Park C, Oh H Bin, Lee SY, et al. Dynamic modeling of lactic acid fermentation
metabolism with Lactococcus lactis. J Microbiol Biotechnol. 2011;21:162–169. Available:
http://www.ncbi.nlm.nih.gov/pubmed/21364298. Accessed 18 August 2014.
[30] Adra S, Sun T, MacNeil S, Holcombe M, Smallwood R. Development of a three dimensional
multiscale computational model of the human epidermis. PLoS One 2010;5: e8511.
Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?
artid=2799518&tool=pmcentrez&rendertype=abstract. Accessed 15 August 2014.
[31] Kühnel M, Mayorga LS, Dandekar T, Thakar J, Schwarz R, et al. Modelling phagosomal lipid networks that regulate actin assembly. BMC Syst Biol. 2008;2:107. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2628873&tool=pmcentrez&rendertype=abstract. Accessed 18 August 2014.
[32] Nagasaki M, Saito A, Jeong E, Li C, Kojima K, Ikeda E and Miyano S. Cell illustrator 4.0: a
computational platform for systems biology. Stud Health Technol Inform. 2011;162:160–81.
[33] Nagasaki M, Doi A, Matsuno H and Miyano S. A versatile Petri net based architecture for
modeling and simulation of complex biological processes. Genome Informatics. 2004;15(1):
180–197.
[34] Choi JH, Banks AS, Estall JL, Kajimura S, Boström P, et al. Anti-diabetic drugs inhibit
obesity-linked phosphorylation of PPARγ by Cdk5. Nature. 2010;466:451–456. Available:
http://www.nature.com/doifinder/10.1038/nature09291. Accessed 21 July 2010.
[35] Doi A, Nagasaki M, Fujita S, Matsuno H, Miyano S. Genomic Object Net: II. Modeling
biopathways by hybrid functional Petri net with extension. Applied Bioinformatics.
2004;2:185–188.
[36] Jurkowski W, Roomp K, Crespo I, Schneider JG, Del Sol A. PPARγ population shift produces
disease-related changes in molecular networks associated with metabolic syndrome. Cell
Death Dis. 2011 Aug 11;2:e192.
Part II: Cellular level
Jakub Wach, Marian Bubak, Piotr Nowakowski, Irena Roterman, Leszek Konieczny, and Katarzyna Chłopaś
2 Negative feedback inhibition – Fundamental
biological regulation in cells and organisms
The functioning of biological systems is governed by the laws of physics and
chemistry. Biological processes occur spontaneously and are subjected to
thermodynamic regulation.
Thermodynamics introduces a distinction between open and closed systems.
Reversible processes occurring in closed systems tend to a state of equilibrium.
This state is achieved when the reaction proceeds at an identical rate in both
directions (from substrate to product and from product to substrate) – a situation
which can be denoted v1 = v2. In terms of energy transfer this state is inert, with
nil spontaneity (ΔG = 0) (Fig. 2.1(a)). In thermodynamically open systems
(including all biological systems) equilibrium is not achieved spontaneously – it
can, however, be maintained by continually replenishing the required substrates.
This so-called stationary state is characterized by nonzero spontaneity (ΔG ≠ 0)
(Fig. 2.1 (b)).
Maintaining a stationary state requires regulation. In nonsentient systems,
such as the interior of a cell, regulation must be automatic and is typically based
on negative feedback loops, as shown in Fig. 2.2.
The negative feedback loop may be symbolically represented as a closed
circuit in which a detector monitors the state of a controlled process while an
effector counteracts detected changes. The function of the effector is dependent
on signals received from the detector (see Fig. 2.2).
In cells and organisms detectors are usually referred to as “receptors”. Most
receptors are proteins which react with the products of a controlled process.
Their genetically-conditioned affinity to a specific product enables them to
control its concentration. Such structures can be found inside the cell, as well as
in the cellular membrane, with receptors protruding into the environment and
capable of registering external signals (Fig. 2.3 (a), (b) and (c)).
Receptors are usually allosteric proteins, i.e. proteins which can adopt two
structurally distinct forms, depending on interaction with a ligand. This
interaction causes the receptor component of the feedback loop to flip to its
alternative conformation, triggering a signal which is recognized by the effector.
Intracellular receptors often form complexes with subunits responsible for
effector processes (Fig. 2.3).
Detector-effector complexation enables the signal to be conveyed directly to
the effector by means of allosteric effects. Proteins that perform this function are
called regulatory enzymes or proteins. Their receptor components are referred to
as regulatory subunits while their effectors are called catalytic subunits (as they
typically exhibit enzymatic properties).

Fig. 2.1: Two types of stability: equilibrium (closed system, (a)) and stationary state (open system,
(b)).

Fig. 2.2: Symbolic representation of a negative feedback loop with its principal components.
Fig. 2.3: The function of an intracellular regulatory enzyme: (a) and (b) allosteric detector subunits
responsible for binding the product in complex with effector (catalytic) subunits; (c) detectors built
into the cellular membrane and capable of registering external signals.

Fig. 2.4: Inverse relationship between the receptor’s affinity for a controlled product and the
product’s target concentration. Low affinity results in high concentration and vice versa.

The degree of affinity of the receptor for the controlled ligand determines that
ligand’s target concentration (Fig. 2.4). Lower affinity permits greater
concentration and vice versa.
Affinity depends on the structure of the receptor protein and therefore on its
genetic blueprint. This phenomenon explains the differing concentrations of
various chemicals inside cells and organisms. Receptor affinity may also change
as a result of additional structure-altering reactions, e.g. phosphorylation. This is
often observed in hormonal signaling pathways which force the cell to alter its
behavior (Fig. 2.5).

Fig. 2.5: Downregulation of receptor affinity by phosphorylation. Arrows indicate receptor activation and signal initiation. This effect is often triggered by hormones.

The negative feedback loop is an abstraction of a biological process. Its constituent elements – receptors and effectors – provide answers to two key
questions: “how much?” and “how?” respectively. The receptor addresses the
“how much?” aspect since it determines the level of activity or product
concentration. For its part, the effector initiates action when triggered by the
detector; therefore it embodies the “how?” aspect – as shown in Fig. 2.6.
Fig. 2.6: Information conveyed by a negative feedback loop consisting of a receptor and an
effector.

Since almost all processes occurring in the cell and in the organism are
subject to automatic control, we can assume that all cellular and organism-related
structures belong to regulatory circuits.
This supports the conclusion that the negative feedback loop can be treated
as a basic structural and functional unit of biological systems. Evidently, the goal
of the cell and the organism is to maintain a steady state of chemical reactions
and product concentrations.
Signals originating beyond the regulatory circuits for a given process and
altering the sensitivity of its receptor are called steering signals. They facilitate
coordination of various biological processes, which, in turn, ensures targeted
action and stabilizes the cell’s environment (see Fig. 2.7).

Fig. 2.7: Negative feedback loop with indication of steering modifying the affinity of the receptor
for the controlled product.

Coordination is effected by ensuring that the product of one process acts upon
the receptor of another process, modifying its affinity. These types of
relationships aggregate cellular processes into higher-order chains (Fig. 2.8).
Signals sent by the organism can also exert a coordinating influence, which is
strictly hierarchical in nature, overriding cell-level control. In this way the
organism coordinates the function of various cell types, ensuring overall
homeostasis.

Fig. 2.8: Symbolic representation of coupling between regulatory circuits: (a) cooperation (the
product of one circuit is used by the effector of another circuit as a substrate); (b) coordination (the
product of one circuit modifies the receptor of another circuit, altering its affinity). Dashed line – signal; continuous line – product/substrate.

Such action can be equated to a command: coordinating signals derived from the
organism are usually based on covalent reactions and subject to strong
amplification. In contrast, intracellular signals can be compared to suggestions:
they manifest themselves as changes in concentrations of various substrates,
made possible by the limited volume of the cell (this also explains why cells are
universally small). Changes in the quantity of product of one regulated process
act as a signal that can be recognized by the receptor of another process. The
organism works by coordinating the function of various specialized tissues and
cells. The relation between the organism and an individual cell can therefore be
likened to the relation between the state and a single individual (Fig. 2.9). The
principal task of the organism is to ensure homeostasis by counteracting
external stimuli. Signals issued by the organism must be transmitted over long
distances (via the bloodstream, other body fluids, nerves etc.). As a result, they
require efficient encoding and strong amplification in the target area.
Fig. 2.9: Schematic representation of the relation (coupling) between the organism and its cells.
Signals issued by the organism modify the function of specific cells by altering the affinity of
receptors (taking effect as a change in the concentration/activity setpoint).

The principal benefit of encoding is that the signal may retain relatively low
intensity at its source. This enables rapid transmission since fewer signal carriers
need to be synthesized. A fitting analogy is communication by radio: the
encoded signal is highly specific and stands out against the background. It can be
readily detected and amplified inside the target cell, and is not drowned out by
ambient noise. Both the receptor and the intra-cellular signal pathway are
elements of the decoding mechanism, while the product of the process triggered by the signal emerges as the decoded message (see Fig. 2.10).
Amplification is typically provided by a signaling cascade: a multilayer
system where the signal is amplified at each stage by dedicated enzymes (Fig.
2.11).
The positive feedback loop is another very potent amplifier used in situations
where amplification must be unusually strong (e.g. in extracellular processes).
Examples of this mechanism include blood coagulation and complement system
activation; it can also be used to modulate incoming signals (Fig. 2.12).
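The arithmetic behind cascade amplification is simple: if each stage activates many downstream molecules, the per-stage gains multiply. The figures below are illustrative, not measured values.

```python
# Back-of-envelope cascade amplifier: n stages with per-stage gain g
# multiply the input signal by the product of the gains.
gains = [100, 50, 1000]   # illustrative amplification factor of each stage
signal = 1                # one hormone molecule arriving at the receptor
for g in gains:
    signal *= g
print(signal)             # 5000000 effector molecules activated downstream
```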
Due to the commanding nature of organism-derived signals, cells require a way
to switch them off when necessary. This protects the cell from dangers which
may result from prolonged activation of signals issued by the organism. As it
turns out, cells are equipped with specific signal breakers – G proteins. Each G
protein complex comprises a receptor which undergoes structural rearrangement
when triggered by an input signal, activating an intracellular signaling pathway.
At the same time the complex acts as an inhibitory enzyme, terminating the
signal after a short period (see Fig. 2.13).
The signaling cycle may recur as needed, producing the desired action while
at the same time protecting the cell from uncontrolled activation.
Organism-derived signals which do not require an immediate effect are
transmitted in an “economic” manner via body fluids – typically through the
blood stream.

Fig. 2.10: Encoding and decoding signals sent by the organism to the cell (hormonal and nerve
transmission).

Fig. 2.11: Cascade amplifier – principle of operation.

Fig. 2.12: The positive feedback loop as an amplifier (a) and the negative feedback loop as a
regulator (b). Positive feedback amplifies the signal while retaining its directionality. Negative
feedback counteracts the signal’s effect and therefore opposes its direction.
Fig. 2.13: The signal breaker as part of a hormonal signaling pathway. The hormonally activated
receptor binds the signal inhibitor (Grb2), permitting brief activation of the enzyme which initiates
signaling inside the cell.

When rapid reaction is required (such as in the case of muscle contraction), nerves are used instead and the signal is only converted into a humoral form
immediately prior to penetrating into the target cell, where it can be decoded and
amplified.
In summary, negative feedback loops can be described as regulatory systems
while their stabilizing influence can be characterized as a manifestation of
regulation. Regulatory action may only occur when the loop is a closed one.
Signals which affect the initial stabilization program are referred to as steering
signals. By applying the concepts of control engineering we may further divide
such signals into three groups.
1. Tracking (Fig. 2.14).
In this type of mechanism the effect closely tracks the input signal. Tracking
control is typically observed in metabolic processes and can be compared to
the action of a mechanical servo.
Fig. 2.14: The principle of tracking signalization. Dashed line – steering signal. Black line – effect.

2. Extremal control – control mode in which the signal acts as a trigger, unambiguously initiating some complex process such as blood coagulation or
cell division (Fig. 2.15).

Fig. 2.15: The principle of extremal control. The signal initiates (dashed line) the process which
then proceeds to its expected conclusion (black line).

3. Sequential control – typical for developmental processes. Here, signals are produced sequentially according to a predetermined algorithm, each stage following the completion of the previous one (Fig. 2.16).

Fig. 2.16: The principle of sequential control. The effect is achieved in stages. Each stage ends
with a signal which triggers the subsequent stage (dashed line). Schematic depiction of sequential
control – black line.

The negative feedback loop can operate in an automatic manner and is therefore
autonomous (Fig. 2.17 (a)); however, its autonomy may be affected by external
stimuli which we refer to as steering signals. When the steering signal originates
in another feedback loop a coupling can be said to exist between both loops.
Coupling influences regulation by subordinating the controlled circuit to the
circuit which controls its receptor (see Fig. 2.17 (b)).

Fig. 2.17: Hierarchical coupling between feedback loops: controlling (dark) and controlled (light)
components. (a) independent loop, (b) coupled loops with hierarchical dependence, (c) and (d) coupled loops with non-hierarchical dependence.

New regulation effects appear when mutual coupling between a number of circuits produces a closed loop and the (formerly) autonomous circuit begins acting in accordance with signals sent by the final circuit in the chain. As a result, each component of the control chain is, in itself, subject to external control. An interesting problem emerges at this stage: since none of the coupled circuits retains its autonomy, the system as a whole loses its point of reference and may become unstable. The emergent regulatory “supercircuit” (Fig. 2.17 (c)
and (d)) is very flexible and hence requires access to “memory” which enables it
to perform its regulatory function.
Coupling is not the only way in which efficient regulation can be established.
A different approach to control relies on correct arrangement of uncoupled
feedback loops. Biological systems frequently employ autonomous loops which
counteract each other in order to reduce undesirable fluctuations in the
controlled process (Fig. 2.18).

Fig. 2.18: Regulatory circuits linked by a common program – amplitude control. Black arrows indicate that each receptor sends out signals triggering a process which is able to counteract only decreases or only increases in the observed quantity.

Each feedback loop can only counteract changes occurring in one specific
direction. An example is the blood sugar level control mechanism which releases
glucagon as its receptor can only react to low glucose levels. Counteracting
excessive concentrations of glucose requires a different mechanism which
triggers the release of insulin – if this mechanism is not present or if it
malfunctions, diabetes occurs. The interplay between both circuits ensures a
steady glucose concentration, with limited fluctuations. Such mechanisms are
quite common on the level of the organism. Inside cells, where reaction products
can be quickly consumed or expelled, there is no need for such systems –
instead, efficient regulation may be provided by receptors which detect low
concentrations of a given substance and trigger increased production.
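A toy numerical sketch shows how two such opposing loops hold a quantity inside a band; the thresholds and rates below are invented for illustration and do not model real glucose physiology.

```python
# Two opposing feedback loops: one counteracts excess, the other deficit.
glucose = 90.0
for t in range(200):
    glucose += 2.0                        # steady inflow
    if glucose > 110.0:                   # "insulin" loop reacts to excess
        glucose -= 0.2 * (glucose - 110.0)
    if glucose < 80.0:                    # "glucagon" loop reacts to deficit
        glucose += 0.2 * (80.0 - glucose)
    glucose -= 0.01 * glucose             # baseline consumption
print(round(glucose, 1))                  # settles just above the 110 threshold
```

Removing either loop lets the level drift far outside the band, which is the numerical counterpart of the insulin-deficiency example above.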
It is worth noting that not all cellular processes need to be tightly controlled.
Metabolic chains often involve many intermediates, with only the final product
subject to control. The effector of a regulatory circuit may perform a number of
successive actions limited only by the availability of substrates. The products of
metabolic pathways are genetically conditioned and regulated. Likewise, genes
determine the affinity of receptors to their target substances, thus ensuring proper
functioning of the cell.

2.1 Negative feedback-based systems simulations

2.1.1 Introduction

In order to simulate biological systems modeled using negative feedback loops, a specialized piece of software had to be created. The application is capable of
running a simulation of user-defined organized systems, represented by a model
and composed of multiple negative feedback inhibition systems (NFIS) and
external receptors, connected via cooperation or coordination.
It is available courtesy of Jagiellonian University – Medical College, at the
following address: http://crick.cm-uj.krakow.pl:8080/nfs/

2.1.2 Glossary of Terms

– NFIS: negative feedback inhibition system. It is composed of a coupled receptor and effector in a closed loop (see Fig. 2.19).
– Effector: component of NFIS – responsible for delivering a product (see Fig.
2.20).
– Receptor: component of NFIS responsible for controlling an effect. If the
programmed value of the regulation threshold is reached, the receptor issues a
control signal, disabling the effector (see Fig. 2.21).
– External receptor: receptor coupled with another NFIS (considered here as
the secondary NFIS). Regulation thresholds of NFISs may be affected by
signals from such external receptors, coupling them. Interconnection is mostly
realized by products of NFISs which thus play the role of coupling signals
(see Fig. 2.22).
– Organized systems (OS): all interconnected negative feedback inhibition
systems.
– State: concentration of products delivered by effectors.
– Old/previous state: state of the OS, as defined above, during the previous
time step used to describe a simulation algorithm from a dynamic perspective.

Fig. 2.19: NFIS.

Fig. 2.20: Effector – the effector of the initially considered NFIS.

Fig. 2.21: Receptor – the receptor of the initially considered NFIS.

Fig. 2.22: External receptor.


2.1.3 Software model

The model represents a negative feedback-based biological system. The full model defines the organized system’s (OS) structure with connections between NFISs and values of the control parameters.
The model is expressed in a language called JSON. The application provides two editors. One allows for changes in both OS structure and parameters, but requires knowledge of the JSON format. The other editor is more user-friendly but limited to changes in the defined parameters.
The general model structure is as follows:

OS (organized systems)
A single organized system (OS) entry is composed of:
– NFISs: list of coupled effectors and receptors.
– Receptors: list of external receptors coupled with other NFISs. Each such
receptor can modify threshold values of coupled NFISs.
– Init: initial state of the OS. Defines concentration of products for the first run
of the simulation. This is especially important for NFISs with the substrate
parameter defined. In such cases, some substrate is required for any
production. The initial concentration of each substrate should be defined in
this section.
An OS also has the following properties:
– Name: user-defined identification of an OS. Users of the application can load an OS using the name.
– Description: user-defined description of an OS. The description should elaborate on the purpose of the OS or the biological phenomena it models.

NFIS
One Negative Feedback Inhibition System (NFIS) is composed of the following
components:
– effector
– receptor
An NFIS has an id property. The id defines a logical name of an NFIS. It is used
by the application in order to uniquely identify an NFIS through a simulation.

Effector
A single effector is characterized by the following properties:
– Product: mandatory – name of the product delivered by this particular effector.
– Substrate: optional – defines a product that should be used as a substrate for
this particular effector. The product is delivered by another effector. It is
defined as the product parameter’s value. Substrate product exchange means
that NFIS systems are connected by cooperation.
– If the parameter is not defined, the substrate is considered as “always
available”. This means that the effector always delivers the exact amount of
product molecules as defined by the production parameter.
– Production: mandatory – maximum production rate of the effector defined by the number of molecules per time unit. “Maximum rate” means that if a substrate is available in an amount greater than or equal to the value of this parameter, the effector delivers exactly that amount of product.
– Outflow: optional – product outflow rate defined as the number of molecules
per time unit. Outflow models diffusion as occurring in biological systems.

Receptor
A single receptor is characterized by the following properties:
– Delay: optional – time required for this particular receptor to become active. If
a product threshold is exceeded, a signal of this receptor (coupled) affects the
effector only after a defined number of time units. This parameter was
introduced to model communication in biological systems, which is rarely
immediate.
– Thresholds: mandatory – list of threshold levels (values) defined for this
particular receptor. One of the defined values will be used to activate the
receptor during simulation. Each threshold value configuration is
characterized by the following properties:
– Signal: optional – defines which external receptor becomes active in
response to the surpassed threshold value. If this parameter is not defined,
the threshold value will always be considered for the receptor’s activation.
In other words, this parameter defines the condition under which a value
will or will not be considered.
– Product: mandatory – defines which product is considered when this
configuration is active. The considered product’s concentration is checked
versus the threshold value. For a proper negative feedback loop there has to
be at least one threshold configuration with the value of this parameter
equal to the product of the coupled effector. If the value of this parameter refers
to any other product, it means that there is coordination between this NFIS
and the NFIS delivering the product.
– Value: mandatory – defines the threshold concentration of the product. If it
is exceeded for the amount of time defined by the value of the delay
parameter, the signal of this receptor becomes active.

External receptor
Single external receptor – receptor coupled with some other NFISs. It differs
from a regular receptor by only one added property: id. This property gives the
receptor a logical name, which can be referenced by other receptors.

2.1.4 Application manual

The application allows the user to select a predefined model, modify it and
simulate for a chosen period of time. The user can modify either the structure of
the OS or just control the parameters. Modifications of the first kind are
facilitated by a JSON-based editor. The second type of modifications can be
conducted with a more user-friendly parameter editor.
After loading the application, the main screen looks as shown in Fig. 2.23.

Fig. 2.23: Main screen on start-up.


Main screen
The majority of the screen is occupied by three sections, all empty at the
moment because no OS has been selected. The sections are:
1. OS definition: introduces the user to the software. As previously mentioned, this section allows the user to modify the model using one of two editors: the JSON editor (text-based) or the property editor (easy to use, graphic-based).
2. OS graph: a graphical representation of the model is displayed. All NFISs, receptors, effectors and external receptors are presented using icons as depicted in Figs. 2.19 to 2.22. Additionally, connections between parties are captured on the graph. This includes cooperation (product – substrate), coordination (product to receptor) and regulation threshold change (external receptor to NFIS receptor). Both the product (substance) and the signal (receptor) connections have distinct graphical representations, as shown in Figs. 2.24 and 2.25.

Fig. 2.24: Product connection type.

Product is denoted by a solid line. The color of the line is the same as the
color of the product on the simulation’s concentration chart (described later).
Additionally, the line is annotated with the name of the product as defined in
the model. The signal is denoted by a dashed line.
3. Simulation chart: represents the concentration of all products present in the
model. The chart is only available after a simulation run.

Fig. 2.25: Signal connection type.

OS picker
The most interesting part of the screen at this point is the OS picker, allowing the
user to choose an OS to be simulated. Upon a right mouse click, a list of the
available OS models will appear (see Fig. 2.26).

Fig. 2.26: List of OS models in the OS picker.

Once the user selects an OS, its model is loaded into the application and the two
topmost sections are populated with data.

OS definition
The left part of the OS definition section is occupied by the structure editor.
Once an OS model is loaded, the editor is populated with the exact JSON
representation. The format has already been described in the “OS Model
Example” part. The editor appears as in Fig. 2.27. There are two buttons
available for this editor.
The Submit button should be used to let the application know that the user has made
changes to the model. Upon clicking, the current JSON will be sent to the
application. First, the model will be validated. In case of any errors within the
model, it will be rejected with an appropriate error message displayed in the
main window (see Fig. 2.28). If there are no errors within the model, it will be
remembered by the application as the current one. From now on, each simulation
run will use the submitted model.
Fig. 2.27: Structure editor.

System definition validation error. Following errors were found: – Product ‘first-productdw’ not found for Receptor with ID ‘external’

Fig. 2.28: Validation error example.

Fig. 2.29: Property editor.

The Format code button plays an auxiliary role. It can be used to format the model JSON into a clean and readable form, so a user making changes to the model does not need to worry about formatting manually.
The right part of the OS definition section is occupied by the property editor.
The editor is also initialized right after a model is selected or submitted (submit
button). Upon initialization, each editable property of the model is turned into a
small integer value editor (Fig. 2.29).
Each NFIS and external receptor has its own box with a heading containing
its name (“id” property). The NFIS’s heading also contains the name of the
product delivered by the effector. For example, a heading that reads “NFIS ‘first’
(produces: ‘first- product’)” means that the box presents the properties of a
negative feedback NFIS with the “id” property equal to “first”, delivering a
product named “first-product”.
The first row of the box contains editors for basic effector and receptor
parameters, that is the effector’s “production” and “outflow” along with the
receptor’s “delay”. The second row, preceded with the heading that reads
“Thresholds”, contains editors for each receptor’s threshold defined in the model
for this particular NFIS. Each threshold editor has a name which can take one of
two forms:
– if the “signal” parameter is defined, the form is “signal/product”,
– if the “signal” parameter is not defined (threshold is always active), the form
is “product”.
Parameter values set in the editor are applied with the Set parameters button.
Once a user has changed a value of any parameter and is ready to use the new
settings for a simulation, the button should be clicked. Upon clicking the
application will save the new model for simulation and will also update the
structure editor.

OS graph
This section is devoted exclusively to the OS model graph. This graphical
representation of the model does not contain any property values as it aims to
help the user understand the model’s structure. Let us consider the example in
Fig. 2.30.
The example depicts an OS consisting of two negative feedback NFISs,
called “first” and “second”. Effectors and receptors are labeled as “first-effector”, “second-effector” and “first-receptor”, “second-receptor”,
respectively. There are also connections depicting the structure. Between the
receptor and the effector of the same NFIS there is a pair of connections –
product and signal, as the feedback loop is closed. There is also a product
connection between the NFIS “first” and the receptor of the NFIS “second”.
Such a connection means that a threshold of the latter NFIS depends on the
product of the first one. In other words, there is coordination between the two
NFISs. Two similar connections between empty space and effectors depict
substrate that flows directly from the environment.

Fig. 2.30: Example OS graph.

Simulation run
Once a model has been loaded into the application and all of the properties have
been set to appropriate values, it is time to simulate the OS. In the top right-hand
corner of the screen, the user can find a simulation run box (Fig. 2.31).

Fig. 2.31: Simulation run box.

The box allows the user to enter the desired simulation length (in arbitrary time
units). The default value is 100 time units. Clicking the Simulate button will
cause the simulation algorithm to be run.

Simulation chart
Once the simulation has been run, the product concentration data is returned to
the browser and displayed to the user. An example simulation chart is shown in
Fig. 2.32.
The result of each simulation is presented as a separate panel. The panel is
composed of two parts – left and right.
The right-hand part is a chart, presenting the concentration of two products
called “first-product” and “second-product”. The colors of the products are
exactly the same as on the OS graph. On the horizontal axis we have time steps
of the simulation. On the vertical axis, the concentration in molecules is given.
The user can export the contents of the chart as an image by using the small icon
in the top right-hand corner of the chart.

Fig. 2.32: Simulation run chart.

The left-hand part summarizes the model that was used to run the simulation.
The summary is composed of the model’s name (“cooperation example” in this
case) and all of the control parameters with their values.
The user can run as many simulations as desired. Each will be presented as a
separate panel. A panel can be removed from the screen by using the small “X”
button located in the top right-hand corner of each panel.
The following section should be considered as optional, only for interested
readers. It concerns technical details of the simulation software – in particular
the OS model in JSON language and the algorithm used.

2.1.5 OS model example

Definition
The following box contains an example of an OS model, written in the JSON
language, as supported by the application.
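Since the original box is not reproduced here, the listing below is a plausible reconstruction assembled from the description that follows; the exact key names (e.g. "nfiss", "effector", "receptor") are assumptions and may differ from the application's actual schema.

```json
{
  "name": "cooperation example",
  "description": "Two cooperating NFISs",
  "receptors": [],
  "nfiss": [
    {
      "id": "first",
      "effector": { "product": "first-product", "production": 3, "outflow": 1 },
      "receptor": {
        "delay": 5,
        "thresholds": [ { "product": "first-product", "value": 6 } ]
      }
    },
    {
      "id": "second",
      "effector": { "product": "second-product", "substrate": "first-product",
                    "production": 2, "outflow": 1 },
      "receptor": {
        "delay": 1,
        "thresholds": [ { "product": "second-product", "value": 12 } ]
      }
    }
  ],
  "init": { "first-product": 5 }
}
```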
Description
The example presents an OS composed of the following:
1. No external receptors defined (note that the “receptors” array is empty).
2. Negative feedback NFIS named “first” (id parameter). The NFIS effector
delivers product named “first-product” with a production rate of three
molecules per time unit and outflow of one molecule per time unit. There is
no substrate defined, therefore the software assumes that the substrate is
always available in unlimited quantity. The NFIS receptor has a delay of five
time units. The NFIS has only one threshold value defined. The programmed
threshold value activates the receptor if the concentration of the product
called “first-product” exceeds six molecules. Additionally, since the “signal”
parameter is not defined, the value of six will always be used for activating
the receptor.
3. NFIS with id “second”. The NFIS effector delivers a product called “second-product”. The maximum production rate is set to two molecules per
time unit, with an outflow of one molecule per time unit. The substrate is set
to the product called “first-product”. We can observe that it is exactly the
same product as delivered by the NFIS called “first”. Therefore, it can be
said that NFISs “first” and “second” are cooperating. The receptor has its
delay property set to one time unit. There is also a threshold value defined for the product called “second-product”. The threshold value is set to 12 molecules. As in the previous case, the property called “signal” is not defined for the receptor, so the threshold is always considered by the simulation software.
4. Initiation of the state. There is an entry setting the concentration of the
product called “first-product” to five molecules.
In summary, there are two cooperating NFISs in the model. The simulation will
start with the first NFIS’s product concentration set to five molecules. There are
a total of eight control parameters that can be changed by the user without
changing the structure of the OS.

2.1.6 Simulation algorithm

The application uses a very simple, discrete-time algorithm to simulate the model. For each point in time (also called a time step), the software simulates the
new state taking into account the model, which is constant, and the previous
state.
A single iteration of the algorithm computing product concentrations and
signal activations (the state) is described in the following.

Phase I – Outflow
This phase models diffusion of products (substances), naturally occurring in a
biological system. It starts with concentrations as in the previous state. If this is
the first iteration, then the initial state is considered. If a product is not set at the
initial state, the concentration is assumed to be zero.
For each NFIS, the previous concentration of the product delivered by the
NFIS is decreased by the value of the outflow parameter, as defined for the
NFIS. If the result of this subtraction is negative, the resulting concentration is
set to zero.
The phase ends with a new state. It is defined as follows:
1. Product concentrations are decreased by outflow, but cannot be less than
zero molecules,
2. Receptor signals are the same as in the previous state.
It is worth mentioning that the old state is also modified. Product
concentrations are decreased by outflow. This step is necessary because the old
state will later be used in the production phase to obtain substrate concentrations.

Phase II – Production
This phase encompasses the delivery of all products present in a model.
Therefore, for each effector of the model, the following sequence of events
occurs.
The activation status of the coupled receptor is taken from the previous state.
If the signal is active, no processing is done. We could say that this effector is in
fact “turned off”. If there is no active signal, the effector is in a working state and
can deliver product.
First, the algorithm calculates the available substrate. If the parameter called
“substrate” is not defined for the effector, it is assumed that the required
substrate is delivered from the environment and that there is a sufficient quantity
of it. In this case, effector production is at its maximum, equal to the value of the “production” parameter.
If the “substrate” parameter is defined, the available substrate concentration
is retrieved from the old state. The quantity of delivered product is then
calculated as the minimum of two values – available substrate and “production”
parameter. If there is less substrate available than the maximum production
capability of the effector, the output quantity is equal to the substrate quantity. In
other words, the effector is not able to deliver more molecules of the product
than there is substrate available.
In the last step, the concentration of the substrate is decreased by the quantity
of the product. Subtraction is performed for both the new and the old state. The
old state has to be updated since there can be more than one effector using the
same substrate.
The sequence of events ends with the new state updated once again. The
concentration of the product is increased by the delivered value. The old state
remains the same.
It is noteworthy that when a “substrate” parameter is defined, the product
delivery for an effector is quite complex. This is because the software has to take
into account that multiple effectors can use the same product as the substrate.
Therefore, if one NFIS uses a substrate, the quantity available for other effectors
during the same time step (iteration) has to be decreased by the amount of the
substrate used. On the other hand, multiple effectors could be cooperating,
creating a chain or even closed cycle of product-substrate dependency. If, during
one iteration, an effector delivers some product, the product can play the role of
substrate for other effectors. In such a situation, the amount of delivered product
is not available as a substrate in the same time step. Therefore, both the old and
the new state are constantly updated in each time step. Most importantly, in both
cases substrate concentration is always decreased by the amount used.
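A condensed sketch of Phases I and II in Python may make the bookkeeping easier to follow. The dictionary layout mirrors the reconstructed JSON model above; the field and helper names are ours, not the application's, and Phases III–IV (signal handling) are omitted.

```python
# One time step, Phases I-II: outflow then production for every NFIS.
def step_phases_1_and_2(model, old_state, prev_signals):
    # Phase I - outflow: each product decays by its effector's outflow,
    # clamped at zero molecules.
    new = dict(old_state)
    for nfis in model["nfiss"]:
        eff = nfis["effector"]
        p = eff["product"]
        new[p] = max(0, new.get(p, 0) - eff.get("outflow", 0))
    old = dict(new)  # outflow-corrected copy used for substrate look-ups

    # Phase II - production: a signaled effector is "turned off"; otherwise
    # it delivers up to its production rate, limited by available substrate.
    for nfis in model["nfiss"]:
        eff = nfis["effector"]
        if prev_signals.get(nfis["id"], False):
            continue                            # receptor signal active: skip
        qty = eff["production"]
        if "substrate" in eff:
            qty = min(qty, old.get(eff["substrate"], 0))
            old[eff["substrate"]] -= qty        # both copies shrink so later
            new[eff["substrate"]] -= qty        # effectors see less substrate
        new[eff["product"]] = new.get(eff["product"], 0) + qty
    return new
```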

Phase III – Signal activation
In this phase, all of the receptors coupled with effectors as NFISs are processed
in order to calculate signal activation. As the production phase has already been
conducted, the state of signals calculated in this phase will be used only in the
next time step (iteration).
The important thing to note is that every receptor has a delay property
defined. Therefore, in spite of reaching the programmed threshold level, the
receptor still remains silent. The product concentration has to stay above the
threshold for the defined number of time units (delay parameter value) to start
signalization. The process is called “receptor charging”. The same applies to a
situation where product concentration drops below the threshold value. The
receptor is not deactivated immediately, but fades out over time, lasting exactly
the number of time units defined. Therefore, signal state per receptor is more
complex than a simple Boolean on/off flag. It consists of the following data, per
each product bound to any threshold configuration of the receptor:
– active on/off Boolean flag
– charge time, expressed in time units. This property is used for both charging
and discharging (fading) of receptor
For each receptor the following sequence is executed:
First, the value of the regulation threshold needs to be calculated out of all
available configurations. Some threshold configurations are only active
conditionally when the appropriate signal is active. Therefore, out of all
available threshold configurations those with an active signal or without any set
signal property are taken into consideration. As the model allows for multiple
threshold values to be defined for the same product, a list of threshold values to
be considered may contain more than one value for a product. However, in order
to calculate whether the concentration of a product exceeds the threshold and the
receptor should be activated, only one value per product has to be chosen. In
order to resolve this issue, the algorithm will arbitrarily choose the highest
threshold per each product for further processing. For each product per active
threshold list, the following algorithm runs:
– The receptor checks if the new concentration of the product is above the
threshold value, i.e. whether the receptor should signal now and whether its
signal was active in the previous iteration.
– If the receptor should signal, but was not previously active, and the receptor
has been charging for a time shorter than the delay defined, the signal is set as
not active and the charge time is increased. The receptor is charging.
– If the receptor should not signal but was previously active, and the receptor
was charging for a time shorter than the delay defined, the signal is set as
active and the charge time is increased. The receptor is discharging, or in
other words – the signal is fading.
– In all other cases (the current and the previous signals are the same) the signal
is set as computed (active, not active) and the charge time is reset to zero.
There is no charging or discharging in progress.
At the end of the sequence process the new state is updated with signal
activation and charge times for each product present in this receptor threshold
configuration. The old state remains the same.
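The charging/discharging rules can be condensed into a small state-update function; the record layout (an "active" flag plus a "charge" counter per product) follows the description above, while the names themselves are ours.

```python
# Phase III signal update for one product of one receptor.
def update_signal(sig, above_threshold, delay):
    was_active = sig["active"]          # signal state from the previous step
    if above_threshold and not was_active and sig["charge"] < delay:
        sig["charge"] += 1              # receptor is charging; stays silent
    elif not above_threshold and was_active and sig["charge"] < delay:
        sig["charge"] += 1              # receptor is discharging; still active
    else:
        sig["active"] = above_threshold # state settles on the computed value
        sig["charge"] = 0               # no charging or discharging in progress
    return sig
```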

Phase IV – Higher-level NFISs (external receptors)
Receptors considered external, that is, coupled with other NFISs, can be used to
select an alternative configuration of thresholds defined for those NFISs. This is
done via the already described “signal” property of a receptor.
The new state of such receptors is calculated in exactly the same way as for
receptors coupled with effectors. One important observation here is that since the
algorithm always uses the previous state for the new state calculations, the order
of receptor execution does not matter.
Irena Roterman-Konieczna
3 Information – A tool to interpret the biological
phenomena
Information management is an oft-neglected aspect of biology. Biologists
usually study information in the context of DNA-related processes; however, the
relevance of information processing and storage goes far beyond this narrow
scope. Before we attempt a closer look at the problem we must first introduce
some basic concepts derived from information theory. The first is information
quantity, defined by Shannon as follows:

I = -log2(p)

Here, p is the likelihood of a specific event. Clearly, the lower the probability of
occurrence of an event the more information it carries. By the same token – if an
event is certain to occur (p = 1.0), it carries no information whatsoever.
We can also easily define the basic unit of information: the bit. By definition,
one bit corresponds to the quantity of information carried by an event whose
probability of occurrence is p = 1/2. In the classic coin flip experiment each outcome
(heads or tails) is equally likely and therefore the result of the coin flip carries 1
bit of information. In biology, a similar situation occurs at childbirth: excluding
some fringe cases, the child is equally likely to be a boy or a girl – thus 1 bit of
information is required to describe the child’s sex (of course this discussion in no
way touches upon the emotional consequences of a given event).
The above mentioned formula permits us to compute the quantity of
information carried by an elementary event. Similar considerations apply to
events with more than two possible outcomes, such as a fair die roll where each
of the six possible results is equally likely. In this case the quantity of
information carried e.g. by rolling a 6 is I = -log2(1/6) ≈ 2.58 bits.
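These elementary quantities are easy to verify numerically; the short Python sketch below simply recomputes the examples discussed in this section.

```python
import math

def information_bits(p: float) -> float:
    """Shannon information quantity I = -log2(p) for an event of probability p."""
    return -math.log2(p)

print(information_bits(1.0))     # certain event: 0.0 bits
print(information_bits(0.5))     # coin flip: 1.0 bit
print(information_bits(1 / 6))   # rolling a 6: ~2.585 bits
print(information_bits(1 / 20))  # one of 20 equally frequent amino acids: ~4.322 bits
```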
Why would a biologist require such knowledge? In order to answer this
question we must acknowledge the growing importance of information
technology for society at large, and – more specifically – for scientific research.
We can, for example, inquire about the information quantity carried by a
single amino acid. Biology tells us that protein chains are comprised of 20
common amino acids. If we were to assume that each of these amino acids
occurs with identical frequency, the information content of a single amino acid
would be 4.32 bits. Since not all amino acids are equally common,
their actual information content varies from 3.661 bits [Ala] to 6.091 bits [Trp].
As already noted, the arrangement of amino acids determines the 3D
structure of protein chains which, in turn, determines their biological properties.
Many areas of biological research, such as drug design, require detailed
knowledge of 3D protein structures in order to predict their function. For each
protein the tertiary structure is determined by a list of (Φ, Ψ) angle pairs. These
can be plotted on the Ramachandran plot which spans the full conformational
space (0–360 degrees) for each angle separately. Assuming a 1-degree step, the
likelihood of correctly identifying an angle pair is 1/(360 · 360), which calls
for 16.98 bits of information. Alternatively, if the Ramachandran plot is
subdivided into 5 · 5 degree sections, the amount of information required to
identify the correct section for a given angle pair is 12.34 bits. It is, however,
important to note that the actual distribution of angle pairs is not uniform and
that some areas are more heavily populated than others, in accordance with the
intrinsic properties of each amino acid. Taking this diversity into account and
based on the definition of information entropy we may predict the average
quantity of information required to pinpoint the correct pair of conformational
angles for each amino acid:

H = -Σi pi · log2(pi)

H corresponds to our degree of ignorance with regard to assignment of each (Φ,
Ψ) angle pair to an appropriate 5 · 5 degree area of the Ramachandran plot.
Comparing these H values for selected amino acids (8.33 bits for Pro and 10.6
bits for Gly) with their information “payloads” indicates an overall information
shortage. It seems that the amino acid sequence alone does not carry enough
information to enable us to predict the correct set of (Φ, Ψ) angles for each
residue. Additional information may be provided by mutual interactions between
amino acids, which, in turn, depend on their specific locations along the chain
and on inter-residue distances.
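The entropy computation itself is straightforward; the following sketch recovers the 12.34-bit figure for the uniform case. For a real residue one would substitute the observed bin frequencies, which we do not reproduce here.

```python
import math

def entropy_bits(probabilities) -> float:
    """Shannon entropy H = -sum(p * log2(p)): the average number of bits
    needed to identify one outcome of the distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Uniform occupancy of all 72 x 72 five-degree Ramachandran bins:
bins = 72 * 72
print(entropy_bits([1 / bins] * bins))  # ~12.34 bits
# Any non-uniform occupancy (as observed for real residues) yields a lower H.
```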
Event probability estimates
Another important question in biology revolves around the likelihood of
achieving success (interpreted as the occurrence of a favorable event) given a
large enough number of attempts at some task. As an example, let us consider
the need to deliver a message to a recipient under the assumption that the
message itself may be copied and dispatched many times. Successful receipt is
conditioned by two factors: (1) p, the probability that the addressee will receive
any particular copy of the message (this depends e.g. on population density); (2)
k, the number of copies sent out. Clearly, the more copies are dispatched the
greater the likelihood that at least one of them will reach the intended recipient.
In mathematical terms this may be expressed as follows:

P = 1 – (1 – p)^k

Fig. 3.1 graphically presents the influence of k (the number of attempts) on P,
while Fig. 3.2 shows the influence of p (the probability of an elementary event).
The overall probability of contacting the recipient may be maximized by
creating additional copies of the message or by trying to increase the value of p,
e.g. by precisely specifying the recipient’s address in order to restrict the search
to a narrow area.
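Numerically, the trade-off between k and p is easy to explore; the following minimal Python sketch, with arbitrary illustrative values, computes P for many cheap copies and for a single well-addressed one.

```python
def success_probability(p: float, k: int) -> float:
    """P = 1 - (1 - p)^k: probability that at least one of k independent
    copies, each with elementary probability p, reaches the recipient."""
    return 1 - (1 - p) ** k

# Many copies, each with a low probability of arriving:
print(success_probability(p=0.01, k=500))  # ~0.993
# A single, precisely addressed copy:
print(success_probability(p=0.99, k=1))    # 0.99
```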

Fig. 3.1: One way to approach the goal (P = 1.0) – by making more attempts, i.e. increasing k.
http://crick.cm-uj.krakow.pl:8080/nfs/.
Fig. 3.2: Second way to approach the goal (P = 1.0) – by increasing p (probability of elementary
event). http://crick.cm-uj.krakow.pl:8080/nfs/.

Is the above phenomenon relevant to biologists? It seems so, since nature itself
makes use of the presented mechanisms. Multiplying the number of attempts
leads to a “solution by energy”, since copying the message requires an
expenditure of resources. This type of situation occurs e.g. in farming where
many seeds are sown but only some of them encounter conditions which
promote germination.
Another way to approach the problem is by increasing p, a process which
can be called “solution by information”. In this case, instead of expending
energy we must first amass a certain quantity of information – for example by
inquiring about the recipient’s address. If sufficient information is available we
can reasonably hope that a single copy of the message will suffice.
Another way to visualize the differences between both approaches is to
compare a machine gun (which fires many “informationless” projectiles) with a
guided missile (which requires a large quantity of information in order to find its
mark).
Analysis of the presented examples suggests one additional interpretation.
When dispatching multiple messages to unknown recipients we cannot predict
who will actually receive and read them. Our situation can therefore be termed
“unpredictable” – it may happen that nobody will act upon our message. The
alternative solution, i.e. by amassing information, can be useful when the goal of
our task is precisely determined.
Fig. 3.3: “Investment in information” – construction of a protein synthesis initiation complex as a
multistage process (conjunction of conditions – all conditions must be met in order to ensure the
desired outcome).

When do living organisms make use of each of the presented methods? In
fact, we have already indirectly answered this question.
If the process plays a decisive role in proper functioning of the organism (or
cell) and if its outcome is predictable (i.e. we have detailed knowledge of the
desired result) the cell constructs something similar to a “guided missile”,
investing in an information-rich carrier. In most cases this is done by
synthesizing a complex consisting of various molecules, each with a specific
function. An example of this approach is protein synthesis initiation – one of the
most fundamental processes in life (Fig. 3.3). Each component of the ribosome
carries a certain quantity of information while the overall structure of the
complex ensures that all these pieces come together giving rise to a complex
biological phenomenon. Here, maximizing P is done by increasing the value of p
since only a carefully designed structure can successfully initiate protein
synthesis.
The opposite approach, i.e. an energy-based solution, can succeed when the
favorable course of action is difficult to predict in advance. Such situations often
emerge in response to unexpected external stimuli. For example, the organism
may be invaded by an unknown bacterial pathogen, necessitating an effective
response despite the inherently low value of p (there are many different bacteria
so it is difficult to select the proper antibody) (Fig. 3.4). The organism reacts by
producing a great variety of “missiles” (antibodies), assuming that at least one of
them will prove effective in combating the invader (antigen). Changes in
antibody specificity are introduced through recombination and mutations in a
specific fragment of the polypeptide chain comprising the protein’s active site.
Despite the relatively low value of p, the massive quantity of antibodies (high k)
ensures that P remains close to 1, i.e. the organism is very likely to “stumble
upon” the correct solution despite having no prior knowledge of the threat.
Naturally, synthesizing so many antibodies consumes a great deal of energy.

Fig. 3.4: Antibody synthesis as an example of an “energy-based” solution where it is hoped that at
least one of many possible antigen binding sites will match the given antigen.

An online application which shows the dependence of P on the values of p
and k is available at: http://crick.cm-uj.krakow.pl:8080/ip/.
Part III: Organ level
Anna Sochocka and Tomasz Kawa
4 The virtual heart
Professional medical imaging is no longer about analogue film and
negatoscopes: these have been replaced by more advanced tools which continue
to revolutionize the work of medical professionals and students alike. Modern
technology has drastically altered the way in which we view the human being –
the primary focus of our research. Undergraduate courses at medical schools
now rarely involve live patients; instead students can test their skills on
indestructible virtual entities, without worrying about mistakes resulting from
insufficient theoretical knowledge or lack of experience. The development of
new tools and solutions has been greatly assisted by progress in computer
graphics. An entirely separate branch of CGI has emerged specifically to serve
the needs of medical science. Its primary features include manipulation of 3D
models such as surgical tools, organs and tissues which comprise virtual
operating theaters. Rather than depict vast sceneries such tools focus on accurate
modeling of the interior of the human body. New models – both static and
dynamic – are being developed for individual organs or for entire organisms,
enabling users to visualize and simulate specific physical and biological
phenomena [1].
Medicine and healthcare are among the main consumers of novel CGI
technologies. There are many applications which assist students in learning
about the structure and function of the human body as well as in formulating
accurate diagnoses. Virtualization enables the student to repeat a given procedure
many times without causing irreversible harm to actual patients.
In addition to professional computer models, ongoing improvements in
medical software also involve animations which, together with static CGI,
represent an invaluable aid for patients, helping them understand complex
concepts and issues in medicine.
It is often said that a picture is worth a thousand words. Accurate images of
the human body can explain physiological processes even to laymen: for
instance, we can teach a student of economics how the organism works by
invoking economic laws. Likewise, an architect can learn about the properties of
the musculoskeletal system by referring to architectural concepts. The world
operates in accordance with universal principles; the only challenge is to present
them in a suitable manner. Clearly, computerized animations and advanced
graphics are very successful at exercising imagination and conveying
knowledge.
More and more healthcare institutions, including hospitals and research
laboratories, are turning to medical visualization as a way to train students,
educate patients and assist doctors. Ideally, we would like to equip every medical
practice in Poland with suitable teaching aids. Doctors are able to make good use
of such solutions, knowing how important it is for patients to understand the
nature of their afflictions – both on the cellular and organic level.
This chapter presents organ simulation techniques, using the human heart as
a case study.
The heart is among the most important organs in the body, maintaining
circulation and ensuring that oxygen is properly distributed throughout the
organism. It is located in the mediastinum behind the breastbone, enclosed in the
pericardial sac. In humans the heart consists of four chambers: two upper
chambers called the left and right atria, and two lower chambers called the left
and right ventricles (Fig. 4.1). The left and right sides of the heart are divided by
a thick muscular wall – the interatrial and interventricular septum. The right atrium and
right ventricle are usually referred to as the right heart, while the left atrium and
left ventricle are known as the left heart [2]. In principle, the heart operates like a
dual pump. The right heart collects deoxygenated blood from the organism and
pushes it into the lungs, while the left heart receives oxygenated blood from the
lungs. Contraction of the left atrium opens the mitral valve, forcing blood into
the left ventricle. Moments later the ventricle itself contracts, opening the
tricuspid valve and pumping blood into the aorta, which, in turn, distributes it
throughout the organism.

Fig. 4.1: Principal components of the heart [3].


Heart contractions are mediated by an electrical impulse originating in the
sinoatrial node. The node generates an electrical potential which spreads to the
atria and then to other parts of the heart. In the atria the pacemaking impulse is
communicated from cell to cell, while in ventricles it is instead carried by
specialized tissues known as Purkinje fibers. Fig. 4.2 presents the structure of the
heart’s conduction system [4]. The electrical activity of the heart muscle can be
traced with an ECG diagram which illustrates each phase of the cycle:
contraction, relaxation and pause.

Fig. 4.2: The heart’s conduction system superimposed on the image of the heart [4].

In constructing the presented model of the heart we relied on anatomical atlases
and online publications, as well as on CT scans published on the University of
Michigan website, presenting various cross-sections of the human body [5].
Finally, we also made use of patient autopsy photos and 3D CAD models.
Below we present a list of websites which host interactive animations created
using Adobe Flash, WebGL and Java [6-11].
Fig. 4.3: Interactive animation screenshot taken from explania.com [6].

The following description is quoted from thevirtualheart.org [7]: “The heart is a
powerful muscle that pumps blood throughout the body by means of a coordinated
contraction. The contraction is generated by an electrical activation, which is
spread by a wave of bioelectricity that propagates in a coordinated manner
throughout the heart. In this web site you will find information about the heart’s
function and structure along with information about some arrhythmias with many
movies and interactive java applets.”

Fig. 4.4: Screenshot from thevirtualheart.org [7].


The animation shown in Fig. 4.3 presents the anatomy and operation of the
human heart. The “blood flow” control enables the user to visualize each stage
of the cycle [6].
thevirtualheart.org is an online knowledge base which contains animations
and several Java applets visualizing the heart in 3D (Fig. 4.4). Transparency
features enable users to observe each component of the heart – ventricles, atria,
valves and conduction system – from any angle. The website also explains the
structure and action of the heart and provides information on arrhythmias and
other cardiac pathologies. A separate applet displays scrollable cross-sections of
the heart obtained using NMR imaging. In addition to the human heart users can
also visualize hearts from other organisms, including dog and horse [7].
Your Practice Online is another source of interactive content presenting the
structure and function of the heart, including cardiac circulation (Fig. 4.5). The
website also includes a database of conditions affecting the heart along with the
available treatment options [8].
The Internet is home to many videos and animations which depict the heart’s
structure and common pathologies – it is even possible to obtain video
recordings of actual human hearts filmed during autopsies (Fig. 4.6). The
references section includes links to some online sources which the authors drew
upon when developing the simulations discussed below [9-11].

Fig. 4.5: Screenshot from Your Practice Online [8].


Fig. 4.6: Screenshots from various educational videos available on YouTube [9-11].

The above mentioned sources, along with the open-source Blender rendering
software, enabled us to construct a set of 3D models and animations depicting
the heart (Fig. 4.7). The resulting model should help the user understand the way
in which the heart performs its function.
Our model is divided into several components which can be independently
displayed by clicking the mouse button:
– upper heart (atria, aorta, pulmonary artery),
– cover of the right atrium,
– cover of the left atrium,
– lower heart (ventricles),
– left ventricle,
– right ventricle,
– valves.
For each of the above mentioned components separate “contracted” and
“relaxed” models were created and then assembled into animations (technical
aspects of animating the models will be discussed later on in this chapter). A set
of custom bitmap textures was applied to the base 3D model. Texturing is a
popular CGI technique which visualizes object surface details, either as a
collection of images superimposed onto the model (bitmap texturing) or via the
application of mathematical functions (procedural texturing). Bitmap texturing
typically relies on photographs of real-life objects such as walls, fabrics, tree
bark, wooden planks etc. – in our case we used photos of actual hearts. Once
prepared, the texture must be mapped to the model: the CGI artist must determine
how to assign representative pixels (now referred to as texels) to specific
features of the model’s surface.
Fig. 4.7: Screenshot of the heart model prepared using Blender. Left: individual components; right:
complete organ.

Heart textures were recreated using several software packages – first Blender
and then Adobe Photoshop CS5. The initial step was to prepare a UV map from
a three-dimensional model of the organ. UV mapping enables a 3D shape to be
rendered as a 2D surface (see the leftmost image in Fig. 4.8). It produces a
bitmap texture which is then mapped to a 3D object. In our case textures were
prepared using Photoshop and then imported into the rendering tool. The UV
map tells the program which fragment (pixel or texel) should be assigned to each
element of the 3D model surface. Fig. 4.8 presents the UV map and texture for
the upper heart.

Fig. 4.8: UV map of the upper heart (left), the corresponding texture (center) and a combination of
the two (right).

Altogether, four textures were created – one each for the upper and lower heart
and for the left and right ventricle. Atrial covers were not textured but instead
assigned a uniform color using the OpenGL library. To differentiate between
deoxygenated and oxygenated blood, veins and arteries were colored in blue and
red respectively.
Animation of the heart’s contractions required a set of two models for the
upper and lower part of the organ, representing their contraction and relaxation
phases. Using these model pairs our software can process the contraction cycle
on a frame-by-frame basis, generating the output animation (Fig. 4.9). This
algorithm considers each vertex separately and calculates its momentary position
by referring to both limit cases (full contraction and full relaxation). When
rendering contraction, a vector is computed from the relaxed to the contracted
phase and the target vertex is translated along this vector for each frame of the
animation. During relaxation the vertex traverses the opposite path, with its
per-frame offset determined by the length of the vector, the contraction interval
(provided by ECG readings) and the animation frame rate. The same algorithm
is applied to each vertex of the 3D model, producing an animation which
visualizes the heart’s action. Once the animation reaches the end of each phase
(full contraction or full relaxation) the source and target model are swapped and
a new phase begins, in accordance with the ECG readout [12].
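The interpolation itself reduces to a few lines of arithmetic. The following Python/NumPy sketch is a simplified rendition of the scheme just described (the actual application performs the equivalent computation in its OpenGL rendering code); the array shapes and names are our own.

```python
import numpy as np

def morph_vertices(relaxed: np.ndarray, contracted: np.ndarray,
                   frame: int, frames_per_phase: int,
                   contracting: bool) -> np.ndarray:
    """Translate every vertex along the relaxed->contracted vector.
    frames_per_phase follows from the ECG interval length and the frame rate."""
    t = frame / frames_per_phase      # fraction of the phase completed, 0..1
    if not contracting:               # relaxation traverses the opposite path
        t = 1.0 - t
    return relaxed + t * (contracted - relaxed)

# relaxed and contracted are (n_vertices, 3) arrays holding the two limit models;
# at the end of each phase the source and target models are swapped.
```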
The presented model can also be used to visualize real ECG data which
indicates the beginning and end of each interval (contraction and relaxation of
atria and ventricles), presenting each phase of the heart’s action in detail.
Our application can be run on any computer equipped with the Windows OS
(Windows 7 or later), an OpenGL-compliant graphics adapter (version 3.2 or
later) and pixel shaders (version 1.50 or later). The program does not require an
installer – instead, the entire Heart directory should be copied to the local hard
drive and the Heart.exe executable launched. This will display the program’s
main window shown in Fig. 4.10.
The left-hand panel shows the current heart rate (in beats per minute) along
with a set of control buttons (starting/stopping the animation, frame-by-frame
scrolling and enabling/disabling automatic rotation). The central part of the
window visualizes the heart at the current point in the ECG cycle. Raw ECG
data is displayed to the right, with the red vertical line indicating the current
frame. The heart can be manually rotated by clicking and dragging its model in
the desired direction or by using the arrow keys (←, ↑, →, ↓) (Fig. 4.11).
Fig. 4.9: Vertex translation algorithm as applied by the authors.

The user may toggle visualization of selected components of the heart by
clicking them with the left mouse button. Restoring a disabled component
requires clicking the empty space it normally occupies (Fig. 4.12).

Fig. 4.10: Main window of the presented application [13].

Fig. 4.11: Rotated model of the heart [13].

Fig. 4.12: Hiding the lower heart section reveals the internal structure of the ventricles [13].

The animation is initially suspended. To begin playback users should click the
play button located right below the heart rate display. Once clicked, the play
button is replaced by a pause button which can suspend the animation at any
point. The same actions can be performed by pressing the “P” key on the
keyboard (Fig. 4.13).

Fig. 4.13: Play and pause controls used to start or stop the animation [13].
While paused, the animation can be advanced frame by frame by clicking the
forward/backward step button or by using the “Z” and “X” keys (Fig. 4.14).

Fig. 4.14: Backward and forward step controls used to advance the animation frame by frame
[13].

Clicking the round arrow button will cause the heart model to rotate about its
axis automatically. This feature can also be toggled by pressing the spacebar
(Fig. 4.15).

Fig. 4.15: Automatic rotation control [13].

Fig. 4.16 presents the heart model during ventricular systole, with open aortic
and pulmonary valves. It also indicates which phase in the ECG cycle
corresponds to ventricular ejection. The valves separating atria from ventricles
open in only one direction, ensuring proper blood flow.
Fig. 4.17 presents the ventricular diastole where aortic and pulmonary valves
remain closed.
Fig. 4.18 illustrates the right atrial systole, showing the open tricuspid valve.
Finally, Fig. 4.19 presents the right atrial diastole – during this phase the
tricuspid valve remains closed.

Fig. 4.16: The heart during ventricular systole, with open aortic and pulmonary valves [13].
Fig. 4.17: The heart during ventricular diastole, with closed aortic and pulmonary valves [13].

Fig. 4.18: Right atrial systole with open tricuspid valve [13].

Fig. 4.19: Right atrial diastole with closed tricuspid valve [13].

The presented application shows how computer-aided modeling and
visualization tools assist medical research. The program can be used as a
teaching aid at medical schools and it may also help medical practitioners
explain cardiac problems to their patients. In fact, the applicability of CGI goes
much further: for example, 3D models of the heart may be used for live
diagnostics, revealing the loci of cardiac anomalies.

References
[1] brasil.cel.agh.edu.pl. Medycyna | Teoria Nowoczesnej Grafiki Komputerowej [Internet]. 2015
[cited 9 June 2015]. Available from: http://brasil.cel.agh.edu.pl/~12sustrojny/przykladowe-
zastosowania-nowoczesnej-grafiki-komputerowej/medycyna/.
[2] Wikipedia. Heart [Internet]. 2015 [cited 9 June 2015]. Available from:
http://en.wikipedia.org/wiki/Human_heart.
[3] pl.wikipedia.org. Budowa serca człowieka [Internet]. 2015 [cited 9 June 2015]. Available
from: http://pl.wikipedia.org/wiki/Budowa_serca_cz%C5%82owieka.
[4] pl.wikipedia.org. Układ bodźcotwórczo-przewodzący serca [Internet]. 2015 [cited 9 June
2015]. Available from:
http://pl.wikipedia.org/wiki/Uk%C5%82ad_bod%C5%BAcotw%C3%B3rczo-
przewodz%C4%85cy_serca.
[5] thefree3dmodels.com. Human Heart – 3d model – .3ds, .obj [Internet]. 2015 [cited 9 June
2015]. Available from: http://thefree3dmodels.com/stuff/anatomy/human_heart_2/20-1-0-
1077.
[6] Explania – Animated Explanations. The Human Heart [Internet]. 2015 [cited 9 June 2015].
Available from: http://www.explania.com/en/channels/health/detail/the-human-heart.
[7] thevirtualheart.org. The Virtual Heart | Cardiac Arrhythmia Modeling [Internet]. 2015 [cited 9
June 2015]. Available from: http://www.thevirtualheart.org/.
[8] yourpracticeonline.com.au. Heart Anatomy 3D Animation Video – Animated Human Heart
Anatomy Multimedia [Internet]. 2015 [cited 9 June 2015]. Available from:
http://www.yourpracticeonline.com.au/heart-anatomy-3dvideo.xhtml#.
[9] YouTube. 3D Simulation of a Human Heart [Internet]. 2015 [cited 9 June 2015]. Available
from: http://www.youtube.com/watch?v=RoNbB9qV5tU.
[10] YouTube. 3D Medical Animation – Congestive Heart Failure [Internet]. 2015 [cited 9 June
2015]. Available from: http://www.youtube.com/watch?v=GnpLm9fzYxU.
[11] YouTube. Human Heart – 3D app (i3D S.A.) [Internet]. 2015 [cited 9 June 2015]. Available
from: http://www.youtube.com/watch?v=4wtD0yUcyh4.
[12] Haubold L. NeHe Productions: Morphing & Loading Objects From A File [Internet].
nehe.gamedev.net. 2015 [cited 9 June 2015]. Available from:
http://nehe.gamedev.net/tutorial/morphing loading_objects_from_a_file/16003/.
[13] Kawa, T. Wizualizacja i symulacja pracy serca (Visualization and simulation of the working
heart). MSc Thesis, Jagiellonian University, ZTG UJ.
Marc Ebner and Stuart Hameroff
5 Modeling figure/ground separation with
spiking neurons
The human brain routinely performs figure/ground separation in tasks such as
picking up a cup from a table or grasping an apple from a tree. Figure/ground
separation is used not only in vision, but also in separating voice from chatter,
in touch and in other
modalities, and remains a challenge in artificial intelligence and machine
learning. To approach this problem we’ve shown figure/ground separation in a
laterally-connected sheet of “integrate-and-fire” spiking neurons. Biologically,
the lateral connections and the sheet are based on “gap junction” electrical
synapses between dendrites and soma of brain neurons, e.g. cortical layer 5
pyramidal cells and interneurons. The method is based on locally computing the
average integration potential for each neuron using lateral connection inputs as
well as direct sensory inputs. We show results for artificial as well as real world
images in which the model adaptively extracts a figure from background,
irrespective of actual numerical value of the figure’s pixels. Lateral modulation
enables collective integration and synchronized firing/spiking of large groups of
neurons. As lateral connections open and close, a zone of integration moves
through the larger system. With further development, such “mobile zones” offer
models for executive agency, causal action, attention and correlates of
consciousness in intelligent systems.

5.1 Introduction
The human brain excels at many tasks involving “figure/ground separation”,
e.g. recognizing salience and meaning in an object or pattern from its background.
Understanding how the brain discriminates “figure” from less important
“ground” may help design computer systems for artificial intelligence (AI).
In the brain, figure/ground separation in conscious perception depends on
sensory inputs reaching the thalamus, and then on to cerebral cortex in a 3-step
hierarchical process. For example, in visual processing, optical inputs are mapped
onto (“retino-topic”) sheets of rod and cone cells in the retina, and then relayed
to the thalamus. From the thalamus, the first wave of activity relays sensory
inputs to the primary cortex, e.g. for vision in V1 in the very back of the brain’s
occipital lobe, still maintaining retino-topic representation. From there,
secondary “associative” processes project in feed-forward pathways to frontal
areas including the prefrontal “executive” cortex. Finally, from there, a third
wave of “recursive” feedback projects to other cortical and subcortical areas and
correlates with consciousness [1]. This third wave matches a philosophical
approach termed “higher-order thought” (HOT, [2, 3]), which suggests that
recursive feedback projection from the prefrontal cortex to other brain regions
correlates with consciousness. Lee et al. [4] showed this third wave activity is
selectively sensitive to anesthetics of all types, i.e. gas molecules, propofol and
ketamine.
Third wave activities terminate at the apex of the brain’s hierarchy, cortical
layer V giant pyramidal neurons which are unique in several ways. Their apical
dendrites extend vertically toward the cortical surface, and are primarily responsible for
EEG signals. Basilar dendrites of pyramidal neurons spread laterally, parallel to
the brain surface, forming, along with interneurons, sheet-like networks
throughout the cortex. They are likely biological sites for conscious perceptions,
and figure/ground separation.
But how are certain activities within the sheet recognized as figure rather
than ground, and assigned meaning? Various “spotlight” attentional theories
require a spotlight operator. We approach the problem with a self-organizing
mobile zone of integrative activity within a larger sheet of integrate-and-fire
spiking neurons.
Neurons consist of multiple dendrites and cell body, or soma, which receive
and integrate synaptic inputs by chemical messengers to a threshold potential for
all-or-none firing, or spiking, along a single long axon, giving unidirectional
information flow. However, the brain also utilizes electrical synapses, or “gap
junctions” which physically fuse neuronal membranes, create window-like
connections between adjacent neuronal interiors, and synchronize membrane
depolarizations, e.g. in gamma synchrony EEG. Although gap junctions are
prevalent in the brain, more primitive than chemical synapses and highly
conserved in evolution, their true function is not understood.
In networks of integrate-and-fire (spiking) neurons, inputs from
dendritic-dendritic (and dendritic-somatic) gap junctions have been proposed to modulate
integration and regulate firing/spiking. Gap junction lateral connections may
enable groups of neurons to function collectively, and more intelligently. We
applied this principle in a computational model system for figure/ground
separation.
We previously described figure/ground processing using a sheet of spiking
neurons with lateral connections [5–7]. It is assumed that the neurons’
“pre-spiking” dendrites are laterally connected through gap junctions to neurons in
their immediate vicinity. This is an extension to the traditional spiking neuron
model which does not include lateral connections.
In our model, if an open gap junction exists between two neurons then the
neurons are resistively coupled, allowing for a current flow, or voltage gradient,
from the neuron having a higher activation to the neuron having a lower
activation. As a result, neurons connected via an open gap junction are able to
integrate collectively and synchronize their firing behavior. Neurons cause gap
junctions to open and close based on activation or other factors. If a gap junction
closes, the lateral connection between two neurons is severed, and they integrate
individually and fire asynchronously.
While gap junctions are either open or closed, we assume an additional
permanent network of lateral connections. This may occur biologically (e.g. in
layer 5 cortical pyramidal neurons) via cortical interneurons, and is used to
compute a spatial average of the input. This average is then used to determine
whether neurons close or open their gap junctions. If a neuron’s activation is
above the spatial average, then this neuron will open its gap junctions. Neurons
connected by lateral gap junctions can collectively respond to a given stimulus,
and fire synchronously with a specific frequency which can be used to discern
different stimuli. Neurons which do not respond may fire asynchronously.
Our model uses visual information from a sheet of neurons whose output
corresponds basically to a topological representation of the retina, e.g. retinal
outputs serve as inputs to our model which is comparable to neurons found in
V1. Our sheet of laterally connected neurons then performs figure/ground
segmentation.
The brighter part of the image is extracted if neurons whose activation is
above the spatial average open their gap junctions, irrespective of the actual
maximum output of a single input neuron. The neurons adjust themselves such
that whichever region appears brighter than the surrounding will be extracted as
figure while the remaining area is considered as background. We could extract
arbitrary types of regions (e.g. regions having a certain color or texture) by
transforming the input through additional sheets of neurons placed in front of the
sheet that we describe here.
Below, we fully describe our theoretical model of a spiking neuron including
its lateral connections. We show how a sheet of such neurons is able to separate
figure from ground for several real-world images. We show only results for
visual input data. However, the method can also be used to separate figure from
ground in any kind of spatially distributed signal.
For example, suppose we take a topological map of neurons representing
different regions of the body and pressure, touch or proprioception are the
relevant signals. Our method would be able to locate the signal irrespective of
the amount of pressure exerted. The corresponding neurons of a certain region
would only have to fire, or fire with a larger frequency, when this area is
touched. All neurons responding to the same touch would fire in synchrony.

5.2 Figure/ground separation


The ability to separate figure from ground is very important to many daily tasks.
If one grasps a cup on a table, the brain determines which part of the scene
corresponds to the cup and which part corresponds to the table. A similar ability
is probably required by all animals processing visual information in nature.
Locating red apples on a tree or strawberries in a field requires a similar ability.
Fig. 5.1 illustrates how different objects can be extracted based on their
color. The first column shows the input image. First, squared differences are
computed. The second column shows the results after computing squared
differences with the colors [0.82, 0.43, 0], [0.63, 0.02, 0.04], [0.90, 0.78, 0.62]
and applying Gaussian blur with a standard deviation of 10 pixels. Let a(x, y) be
the output of the blur operator, i.e. a local spatial average and let a be the global
average of this output. The third column of images shows areas for which a(x, y)
≥ fā holds with f = {1.4, 1.16, 1.7}. Extracted objects are shown in Fig. 5.1 (c).
This rather simple example shows how different colored objects (red flowers,
yellow flowers or a butterfly) can be separated from their background based on
their color.
Fig. 5.1: (a) Input image. (b) Gaussian blur of squared color differences highlighting areas which
have the colors (from top to bottom): red, yellow, white, (c) extracted areas: red flowers, yellow
flowers, butterfly.
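The pipeline of Fig. 5.1 can be sketched in a few lines of Python (NumPy/SciPy). Note one assumption on our part: since a squared color difference is low where the color matches, the sketch inverts it into a similarity score before blurring and thresholding.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_by_color(image: np.ndarray, color, f: float) -> np.ndarray:
    """image: H x W x 3 array with values in [0, 1]; color: reference RGB triple.
    Returns a boolean mask of the extracted region."""
    sq_diff = ((image - np.asarray(color)) ** 2).sum(axis=2)
    similarity = sq_diff.max() - sq_diff       # high where the color matches
    a = gaussian_filter(similarity, sigma=10)  # local spatial average a(x, y)
    return a >= f * a.mean()                   # keep pixels with a(x, y) >= f * a-bar

# e.g. the red flowers: mask = extract_by_color(img, [0.82, 0.43, 0.0], f=1.4)
```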

A cup on a table may be more difficult to extract. However, we can of course
take additional information such as depth or texture into account. Then we
would extract areas which have a unique texture and a depth different from the
surrounding area.
Below, we will show how a figure can be separated from ground using a
network of laterally connected neurons. But first, we will describe the basic
model of a spiking neuron followed by our extension which also includes lateral
connections.

5.3 Spiking neural networks


The brain consists of a large set of “integrate-and-fire” spiking neurons
interconnected by variable strength chemical synapses in feed-forward networks
[8].
These neurons exchange electrochemical signals, resulting in network
patterns of activity which correlate with sensory perception, motor control and
learning. Each neuron sends out sequences of spikes along its axon to other
neurons, the sequence depending largely (but not entirely) on integration of
electrochemical membrane signals the neuron receives from other neurons.
Each neuron receives and integrates synaptic inputs to its dendrites and cell
body/soma in the form of an integrated activation potential. When this potential
reaches a threshold potential, an all-or-none firing, or spike, is triggered down the
axon. However, in brains of awake animals, the threshold for firing varies
spike-to-spike, suggesting some other factor contributes to integration, possibly via
gap junctions [9].
In standard computational models, only algorithmic integration and a fixed
spiking threshold are considered. Other potentially relevant aspects are usually
not included, for example lower level synaptic and cytoskeletal interactions and
lateral gap junction influences. Using a spiking neural model based on
membrane capacitance, resting potential, threshold potential, a recovery variable,
dendritic and synaptic currents and other parameters, a large-scale
approximation of mammalian thalamocortical systems can be constructed [10].
In simulations, we model a large network of neurons. Consider a single
neuron i with activation potential Vi. The change of the activation potential
dVi/dt can be described by (modified from [11]):

C · dVi/dt = -gi (Vi – Ei) + Itonic + Ii + Σj wij Kj

where C is the capacitance of the neuron, gi is the leakage conductance, Ei is the
resting potential and Itonic is a tonic current. The leakage conductance will
determine the speed with which the activation potential will reach the resting
potential if no input is received from other neurons. External input to the neuron
can be modeled using Ii while the input received from other neurons is modeled
by Kj. The input received from other neurons will depend on the strength of the
synaptic connection wij between the two neurons i and j. Neural learning is
thought to basically modify the connection strengths wij.
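For illustration, a single forward-Euler integration step of this equation might look as follows in Python; the function and parameter names are placeholders rather than part of the published model.

```python
def lif_step(V: float, dt: float, C: float, g: float, E: float,
             I_tonic: float, I_ext: float, synaptic_inputs) -> float:
    """One Euler step of C * dV/dt = -g*(V - E) + I_tonic + I_ext + sum(w_ij * K_j).
    synaptic_inputs: iterable of (w_ij, K_j) pairs from presynaptic neurons."""
    I_syn = sum(w * K for w, K in synaptic_inputs)
    dV = (-g * (V - E) + I_tonic + I_ext + I_syn) / C
    return V + dt * dV

# A spike is emitted (and V reset) whenever V crosses the firing threshold.
```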
This standard model only considers input from other neurons received via
synaptic connections, with learning achieved by tuning the connective weights.
Lateral connections created by gap junctions are not included in this standard
model. Below, we will show that these lateral connections enable exchange of
activation between neighboring neurons, allowing them to tune their response to
a given stimulus. Thus neurons connected via open gap junctions can
collectively integrate inputs and memory, fire synchronously and efficiently
separate figure from ground.
Fig. 5.2: Gap junctions between adjacent neurons give rise to two resistive networks: a fixed
resistive network and a modifiable resistive network defined by the state of the gap junction (open
or closed).

5.4 Lateral connections via gap junctions


Our model neuronal network also includes lateral connections via gap junctions.
Neurons in our model network receive and temporally integrate incoming spikes
as in the standard model, the activation potential slowly rises toward the
threshold. And like the standard model, once the activation potential reaches a
particular threshold, the neuron “fires”, or “spikes”, and sends a spike along its
axon to the next synapse. Our model adds another factor during integration and
activation which regulates firing/ spiking: inputs from lateral “gap junctions”
and interneuron connections.
Laterally-connected neurons are able to exchange part of this activation via
open gap junctions. A current will flow, i.e. a voltage gradient exists, from a
neuron with a higher activation potential to a neuron with a lower activation
potential. In our model, this current flow/gradient between neighboring neurons
can be turned on or off by opening or closing the gap junction. Only neurons
which perform a similar function, i.e. which respond to the same type of signals,
are assumed to be connected. Such neurons interconnected by open gap
junctions will collectively respond to the same stimulus.
It can be envisioned that initially, i.e. during early development, many lateral
connections randomly form, and are then pruned to form subnetworks able to
collectively respond to a stimulus. Gap junction-defined subnetworks then
specialize themselves to respond to different types of stimuli by tuning their
synaptic weights.
A gap junction can be treated as a resistive connection between neurons [12,
13]. The set of all neurons interconnected via gap junctions forms a resistive grid
where the node points are the neurons and the gap junctions the resistors. If a
gap junction exists, it is always there, i.e. a permanent connection between two
neurons, though it may be open or closed. We assume that this connection forms
an unconditional resistive grid. Therefore, some information is always
exchanged between these laterally connected neurons. This network could be
used to compute a global average among interconnected neurons.
Traub et al. [14] used a voltage dependent conductance of gap junctions. In
our model, each gap junction is also able to change its state. It can open or close,
i.e. the conductance depends on an internal state of the neuron. This creates a
second network whose connections can be modified over time. The two
networks are illustrated in Fig. 5.2. If a gap junction is open between two
neurons then these neurons will synchronize their firing behavior. It is well
known that chaotic or nonlinear electrical circuits will synchronize their
behavior if they are resistively connected [15–17]. The same will happen with
neurons whose firing pattern is chaotic whenever they are resistively connected
to other neurons.
The unconditional network is assumed to spatially average the input spikes.
This allows each neuron to become aware of the global average firing pattern of
all other laterally connected neurons. This signal, which we call the
sync-threshold, is used to adjust the adaptive resistive network. Each neuron can of
course also compute a temporal average of its firing pattern. If the average firing
pattern is above the global average, i.e. the sync-threshold, then the neuron will
open its gap junctions. Otherwise it will close its gap junctions. This results in
synchronized firing of all neurons responding to the figure, i.e. which have an
average firing rate above the average.
Fig. 5.3 shows our model neuron which is also laterally connected to four
other neurons. Of course, neurons may also have fewer or more gap junctions.
Similarly, neurons may have another number of dendritic inputs and the
connections will not be uniformly distributed. In Fig. 5.3, the lateral connections
extrude from the neuron body. This is meant to illustrate that information is
exchanged through this connection on the same level. For actual neurons, gap
junctions are generally located on the dendrites, shown on the left in the
illustration. The dendrites lead up to the neuron soma, or cell body which also
receives and temporally integrates inputs.
Inputs received and temporally integrated are indicated by the box labeled
with ∫ dt. This input is also temporally and then spatially integrated as indicated
by the boxes labeled with ∫ dt and ∫ dx. Gap junctions allow for an exchange of
this signal with laterally connected neurons, provided these gap junctions are
open. Two connections are shown for each gap junction. The connection
extending from the sphere illustrates the conditional connection between
neighboring neurons. The small sphere on this connection indicates that the gap
junction can be opened or closed. This connection is controlled via the input
from the sync-threshold which in turn is determined by the spatially averaged
input signal obtained from the unconditional network (formed by the lateral
connections protruding in between the dendrites and the large sphere, and
biologically by interneurons). The neuron will fire if its activation potential is
above the threshold. This operation is indicated by the threshold box. Whenever
the neuron fires, a spike (as indicated by the spike shown on the last box on the
right) is sent along the axon.
It may also be that some of the functions just described are spread over two
or three interconnected neurons inside a cortical column. Mountcastle [18] gives
a review of columnar organization of the neocortex. These neurons would then
have different shapes due to the different function performed by the neuron. A
lateral network formed by such neurons will extract any kind of spatial signal
which is above the average. This is more useful than using a fixed threshold per
neuron. A set of neurons which extracts a signal with a fixed threshold will fail
to detect a faint signal. It will also fail to generate synchronized firing behavior.

5.5 Simulation of a sheet of laterally connected neurons


We evaluate our model using real world photographs as visual stimuli. We
simulate a sheet of laterally connected neurons. Each neuron is randomly
connected to other neurons in its immediate surroundings. All neurons of this
sheet receive their input from a virtual retina. This input can also be viewed as
coming from V1 with some additional transformation of the visual stimulus.
The sheet of neurons is assumed to extend over an area of 1000 × 1000 × 2
units, i.e. a square area of nonzero height. We simulate 4000 neurons inside this
area. The nonzero height is modeled because actual neurons are not perfectly
arranged on a two-dimensional plane inside the cortex. In our simulation, each
neuron i is located at position (xi, yi, zi) inside the three-dimensional sheet. It is
laterally connected to its six nearest neighbors.
The human visual system processes visual information using three channels,
bright–dark, red–green and yellow–blue [19]. Our sheet of neurons only
processes information from the bright-dark channel (also called lightness). Data
is extracted from computer images which are stored using red, green, blue
components. This data is transformed to lightness for each image pixel. Let (R,
G, B) be the red, green and blue nonlinear pixel intensities, then lightness is
given by L = 0.299R + 0.587G + 0.114B [20].
A topological mapping is used to determine the input received by neuron i.
Let the coordinates xi, yi, zi of neuron i be normalized to the range [0,1]. For each
input signal, we determine offset coordinates (xr,i, yr,i) randomly selected from
{-1, 0, 1}. The neuron j from which neuron i receives its input is given by
(w·xi + xr,i, h·yi + yr,i), where w and h are the width and height of the input image.
Each neuron i receives input from three neurons of a lower sheet. The output
provided by neuron j from a lower sheet is taken as Oj = L(xj, yj), where L is
simply the lightness of the corresponding image pixel at position (xj, yj).
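A minimal Python sketch of the lightness conversion and the randomized topological mapping described above (border clipping is omitted for brevity, and the helper names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def lightness(image_rgb: np.ndarray) -> np.ndarray:
    """L = 0.299 R + 0.587 G + 0.114 B for an H x W x 3 image in [0, 1]."""
    return image_rgb @ np.array([0.299, 0.587, 0.114])

def input_pixel(x_i: float, y_i: float, w: int, h: int):
    """Map neuron coordinates (normalized to [0, 1]) to an input pixel,
    jittered by a random offset drawn from {-1, 0, 1}; each neuron performs
    three such draws, one per input connection."""
    xr, yr = rng.integers(-1, 2, size=2)
    return int(w * x_i) + xr, int(h * y_i) + yr
```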
Our sheet of neurons is probably located in some higher visual area which is
in charge of figure/ground separation. For our experiments, we use the input
directly from the image. Since weighted inputs from all three color bands are
used, this amounts to using only the black and white channel as input. Any type
of color could be extracted by simply computing squared differences to certain
colors or by correlating the input signal with a given signal.
Neurons found in V3 respond to different types of oriented lines [21, 22].
Hence, neurons receiving input from V3 could be used to separate figure from
ground based on texture. Neurons found in V5 respond to lines moving in a
certain type of direction. Some neurons also respond to the global motion of an
object. This contrasts with the response of neurons found in V1 which only
respond to local image motion. Thus, neurons receiving input from V5 could be
used to separate a figure moving in a certain direction while the background is
either stationary or moving in a different direction.
In our computer model, the state of our laterally connected neuron i is
described by the following variables: ai activation, ti fire threshold, oi output
voltage, ãi temporal average of incoming spikes, āi spatial average of temporal
average. The spatial average is computed using the unconditional network
formed by the gap junctions. The model illustrated in Fig. 5.3 shows temporal
averaging per dendrite. Since these temporally averaged signals are eventually
spatially averaged, we use only one variable ãi for the neuron i to speed up the
simulation. The update equations which modify these state variables are shown
in Fig. 5.4. Compared to our earlier model [6], we have modified the algorithm
slightly in that the activation is not distributed to neighboring neurons but simply
averaged. In other words, a current flows, or voltage gradient exists from the
neuron having a higher activation to a neuron having a lower activation.
Fig. 5.3: Artificial neuron, laterally connected to four other neurons via gap junctions.

The state variables can be initialized with random values. The exact values used
to initialize the simulation are not relevant. Due to the leakage factors the
simulation will converge to the same type of response if the input is kept
constant. The following values were used for our experiments: decay of
activation potential αa = 0.9995, decay of output voltage αo = 0.5, temporal
averaging factor of gap junction at = 0.001, spatial averaging factor of gap
junction input as = 0.0001, leakage to adjacent neurons upon firing ε = 0.0001,
reduction of fire threshold γ = 0.0005, factor for over-relaxation ω = 1.999,
refractory period of neuron Δtr = 10.
All weights between neurons i and j are set to wij = 1. Learning could be
used to tune these weights. For our experiments, we only process the luminance
signal to extract objects with maximum luminance. Hence, learning is not
necessary. Simple Hebbian learning [23] could tune these weights in a way that
objects having certain colors are extracted. Alternatively, learning and memory
could occur within each neuron, e.g. in the cytoskeleton. Negative weights could
also be used. Such weights would model inhibitory signals.
The behavior of a single neuron is fully described by the algorithm shown in
Fig. 5.4. Each line of the algorithm describes one small state change. If all
updates are completed, then the algorithm starts over. First, the output voltage
decays (01), then the weighted input is computed (02). If we were to add
learning, we would have to tune the weights wij.
Each neuron computes the temporal average of the input using the parameter
αa (03). Next, another temporal average is computed using the parameter αt (04).
The spatial average of ā is computed in lines (05–08) using the unconditional
network formed by the gap junctions. Over-relaxation is used in line (09) to
speed up the simulation. If the temporal average ã is above the spatial average ā,
then we open all gap junctions, otherwise we close all gap junctions (10–11).
This spatial average is an adaptive threshold. It depends on the average spatial
output produced by the previous sheet of neurons. Only neurons whose input is
above the spatial average will fire vigorously and also connect with adjacent
neurons which allows for figure/ground separation.

Fig. 5.4: Update equations which change the state variables of neuron i over time.

Here, all gap junctions are modified at the same time. This is probably not
the case for actual neurons. We used this simplification to allow for faster
simulation of the model. Currently, it takes 3.7 ms to simulate one update of a
sheet of 4000 neurons on a 2.8 GHz Intel Core i7. In real neurons, gap junctions
probably open and close independently of the other gap junctions. The state of
the gap junction is probably controlled by the signal passing through its dendrite.
We are done if the neuron fired recently (12). Next, the neuron computes the
spatial average of activation across open gap junctions (13–17). This operation
allows activation to flow from a neuron having a higher activation to a
neighboring neuron with a lower activation. Then the fire threshold is computed
(18–19). This threshold depends on the number of neurons in the subnetwork
formed by neurons connected through open gap junctions. For larger networks
the threshold will be lowered such that neurons belonging to a large set of
neurons will fire with a higher frequency. An adaptive firing threshold is not
strictly required; however, it allows larger figures to be distinguished from smaller
figures in higher visual areas.
Whenever the neuron fires (20), i.e. the activation is above the fire threshold,
then the activation is reset (21) and the output rises to 1 minus some leakage
(22). This leakage is distributed to adjacent neurons (23).
It should be clear that our sheet of neurons is capable of adaptive
figure/ground separation irrespective of the actual lightness values. Fig. 5.5
shows how the sheet of neurons responds to images of a gray square on a
slightly darker background. The input images are shown on the left-hand side.
Noise with a mean of zero and a standard deviation of 0.05 has been applied to
these images, as real-world signals always contain some noise. The noisy input
signal is shown in the middle. The sheet of neurons is shown overlayed on this
input signal. Each neuron is represented by a dot. The brightness of the dot is
proportional to its activation. Open gap junctions between two neurons are
represented by thick lines. The color of the lines is chosen in a way that each
subnetwork has a unique color.
On the right-hand side, the lightness histograms are shown. It should be
noted that the lightness of the background in case (d) is higher than the lightness
of the square in cases (a) and (b). Nevertheless, the square is correctly separated
from the background in case (d). A single neuron which may be capable of
simple bright/dark classification based on a fixed threshold would not be able to
correctly separate the figure from ground in all four cases shown in Fig. 5.5.
Fig. 5.5: Results for a brighter square in front of a darker background. Noise has been applied with
zero mean and standard deviation 0.05 (pixel range [0, 1]). Background lightness Lb and figure
lightness Lf were chosen as follows: (a) Lb = 0.1, Lf = 0.4; (b) Lb = 0.2, Lf = 0.5; (c) Lb = 0.5, Lf =
0.9; (d) Lb = 0.65, Lf = 1.0.
Fig. 5.6: Sequence of output spikes generated by individual neurons. (a) Input stimulus; (b)
behavior of six different neurons (marked). Neurons 1–3 fire in synchrony. They belong to the
subnetwork formed by neurons extracting the figure. Neurons 4–6 fire out of sync.

Fig. 5.7: Sequence of output spikes generated by individual neurons. (a) Input stimulus; (b)
behavior of six different neurons (marked). Neurons 1–3 fire in synchrony. They belong to the
subnetwork formed by neurons extracting the figure. Neurons 4–6 fire out of sync.

Figs. 5.6 and 5.7 show the response of individual neurons for two different types
of input stimuli. The input stimulus is shown on the left-hand side. Selected
neurons are marked with a circle and a number. Three neurons are located on the
figure (bright square) while three other neurons are located on the background.
The spiking sequences are shown on the right-hand side. Neurons 1 to 3 whose
receptive fields lie on the bright square fire in sync, while neurons 4 to 6 whose
receptive fields lie on the background fire out of sync. All neurons located on the
figure are connected via open gap junctions to neighboring neurons. It can also
be seen that the neurons located on the larger square fire with a higher frequency
compared to the neurons located on the smaller square. This behavior is due to
the adaptive firing threshold used in line (19) of Fig. 5.4. Higher visual areas
may make use of the different firing rates for tracking of different objects. It may
also be used for visual servoing [24, 25].
A simulation of the network responding to an image sequence is shown in
Fig. 5.8. This sequence simulates a sweep of the eye from one photograph to
another. The object in the foreground is extracted by a network of neurons
connected via open gap junctions. Over the course of time, different neurons
respond to the object. The firing rate of neurons whose receptive fields lie above
the object will stay constant as long as the object does not change in size. It is
known that the mapping from the retina to the visual cortex can be viewed as a
complex-logarithmic mapping [26–28], which transforms the visual input into a
size-invariant representation.
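The size invariance of the complex-logarithmic mapping is easy to verify numerically. The following short Matlab fragment (an illustration, not part of the model) shows that scaling a pattern merely shifts its log-polar representation:

z  = 0.5 * exp(1i * linspace(0, 2*pi, 8)); % points on a circle of radius 0.5
w1 = log(z);                               % complex-logarithmic image
w2 = log(2 * z);                           % the same shape scaled by a factor of 2
max(abs((w2 - w1) - log(2)))               % a pure shift by log(2): prints ~0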
Fig. 5.8: Results for a moving stimulus. (a–c) Snapshots from an image sequence. (d–f) Neurons
with a receptive field above the figure respond vigorously. They collectively extract the figure. The
subnetwork formed by neurons with open gap junctions tracks the figure.
5.6 Basis of our model


As we have just shown, a sheet of laterally connected neurons is able to segment
figure from ground. Zhao and Breve [29] have shown that chaotic
synchronization in a 2D lattice can be used for scene segmentation. In their
work, they used Wilson–Cowan neural oscillators [30] to segment letters, with
static input only. Quiles et al. [31] have developed a visual selection mechanism
using a network of integrate-and-fire neurons with short-range excitatory
connections and long-range inhibitory connections; they also used only static
images as input. Eckhorn et al. [32] simulated feature linking via
synchronization among distributed neural assemblies. They simulated two one-
dimensional layers from the visual cortex of the cat including feedforward and
backward connections, with the backward connections modulating the inputs; a
moving stimulus was used as input. Our model addresses synchronization within
a sheet of neurons and relies only on local connections. As gap junctions open
and close, a moving zone of collective, synchronized activity tracks the stimulus
and moves with it. Such a zone of activity in the brain is asserted to correlate
with conscious perception and control. Self-organizing mobile zones are also
possible.
Neurons that have their receptive field above the stimulus will fire in sync
while all other neurons will fire out of sync. In the brain, synchronized firing
behavior is seen in the electroencephalogram (EEG), especially in the frequency
range from 40 to 80 Hz [33, 34]. This type of synchronous activity includes
gamma synchrony EEG. Singer [35] reviews how gamma synchrony correlates
with perception and motor control. Gamma synchrony arises because of
inter-dendritic gap junctions [36–39].
According to the so-called “conscious pilot” model proposed by Hameroff
[40], gap junctions open and close, creating self-organizing synchronized zones
of activity moving through the brain. In our model, these zones move because
the input stimulus moves. In the “conscious pilot”, the zone self-organizes,
converting nonconscious “auto-pilot” cognition into conscious cognition. Self-
organizing mobile zones are candidates for consciousness in the brain, and for an
executive “causal agent” in computer systems.
Kouider [41] has reviewed several different theories of consciousness.
Tononi and Edelman [42] have developed the re-entrant dynamic core
hypothesis, which assumes recurrent processing of information. Another theory,
the local recurrence theory developed by Lamme [43], also assumes recurrent
processing of information. Both are consistent with the “third wave” of recurrent
activity, which has also been described philosophically by the higher-order
thought (HOT) theory.
In the context of theories of consciousness, our model may be seen as a
moving zone within the third wave target, e.g. laterally connected cortical layer
5 pyramidal neurons.
As noted by Crick and Koch [44], humans appear not to be aware of
processing occurring inside V1. In this view, V1 is merely in charge of
preprocessing visual input, and conscious processing starts in higher visual areas.
Zeki [45] suggests that multiple consciousnesses are distributed across different
processing sites. He calls the conscious processing of individual aspects –
attributes such as color, form and motion, which are later bound together –
microconsciousness. Woolf and Hameroff [46] proposed feed-forward cascades
through the visual cortex that accumulate shape, color, motion and meaning into
a conscious visual gestalt. With further unification across other modalities,
including linguistic and communication skills, visual scenes merge into global
consciousness. Our model addresses only a single percept, but gap
junction-mediated zones may coalesce into larger zones to accommodate full
conscious awareness.
So far, we have not modeled synchronization across different sheets, but we
could do so in the future; this would correspond to macroconsciousness in Zeki’s
terms. Wang [47] showed that purely local connections can lead to global
synchrony. König and Schillen [48] used long-range excitatory delay connections
to achieve synchronization within two-dimensional layers of oscillators. Terman
and Wang [49] have developed an architecture of oscillator networks in which a
global inhibitor is used to achieve desynchronization. In our model, neurons
connected via open gap junctions are resistively coupled. They synchronize their
behavior, and the connected zone moves through the larger sheet of neurons.
With further development, our mobile zone model can serve as a theory for the
neural correlate of consciousness, and for executive causal agents in artificial
intelligence.
5.7 Conclusion
We extend standard network models of “integrate-and-fire” spiking neurons to
include lateral connections between neurons in a two-dimensional feed-forward
sheet. The lateral connections are based biologically on gap junctions between
dendrites and cell bodies of brain neurons. Neurons within the laterally
connected network are shown to perform figure/ground separation by collective
integration and synchronized outputs.
Here we implement an artificial system based on laterally connected neurons
and show examples of its ability to discriminate figure from ground. Gap
junctions resistively couple adjacent neurons. A gap junction may be in one of
two states: open or closed. If a gap junction connects two neurons then it creates
a physical connection between them. A network of resistively coupled neurons
forms an unconditional network. In addition to this unconditional network,
another network is formed which may change over time as gap junctions open
or close. We use the unconditional network to compute a spatial average of the
temporally averaged input. This spatially averaged signal allows the neuron to
respond adaptively to the input. If the temporal average of the input signal
received by the neuron is above the spatial average, then the neuron opens its
gap junctions; otherwise it closes them. Neurons connected via open gap
junctions synchronize their firing behavior because a current always flows from
the neuron having a higher activation to adjacent neurons having a lower
activation. Neurons with synchronized firing collectively respond to the figure,
whereas the remaining neurons fire out of sync and respond to the ground. Our
model is shown to separate figure from ground for artificial as well as real-world
images.
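The gating rule summarized above can be expressed compactly. In the following minimal Matlab sketch the window size, the smoothing factor and the variable names are assumptions made for illustration:

input = rand(64);                           % input image, range [0, 1]
tAvg  = input;                              % temporal average, initialized to the input
for t = 1:50
    input = rand(64);                       % next input frame
    tAvg  = 0.9 * tAvg + 0.1 * input;       % leaky temporal average per neuron
end
sAvg    = conv2(tAvg, ones(5)/25, 'same');  % spatial average over a 5 x 5 neighborhood
gapOpen = tAvg > sAvg;                      % open gap junctions where the local input
                                            % exceeds the spatial average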
References
[1] Lamme VAF and Roelfsema PR. The distinct modes of vision offered by feed-forward and
recurrent processing. Trends in Neurosciences. 2000;23:571–579.
[2] Gennaro RJ. Consciousness and Self-consciousness: A Defense of the Higher-Order
Thought Theory of Consciousness. Amsterdam: John Benjamins Publishers, 1996.
[3] Rosenthal D. Consciousness and higher-order thought. In: Nadel L, ed. Macmillan
Encyclopedia of Cognitive Science. London: Macmillan, 2002, pp. 717–726.
[4] Lee U, Ku S, Noh G, Baek S, Choi B and Mashour GA. Disruption of frontal-parietal
communication by ketamine, propofol, and sevoflurane. Anesthesiology. 2013;118:1264–
1275.
[5] Ebner M and Hameroff S. Lateral information processing by spiking neurons – a theoretical
model of the neural correlate of consciousness. Computational Intelligence and
Neuroscience. 2011, Article ID 247879, 17 pages.
[6] Ebner M and Hameroff S. Computational modeling of robust figure/ground separation. In:
Proceedings of the Third International Conference on Bioinformatics, Biocomputational
Systems and Biotechnologies, Venice/Mestre, Italy, 2011, pp. 67–72.
[7] Ebner M and Hameroff S. A computational model for conscious visual perception and figure/
ground separation. In: Proceedings of the International Conference on Bio-Inspired Systems
and Signal Processing, Rome, Italy. Portugal: Science and Technology Publications, 2011,
pp. 112–118.
[8] Gerstner W and Kistler W. Spiking Neuron Models. Cambridge, UK: Cambridge University
Press, 2002.
[9] Naundorf B, Wolf F and Volgushev M. Unique features of action potential initiation in cortical
neurons. Nature. 2006 April;440(7087):1060–1063.
[10] Izhikevich EM and Edelman GM. Large-scale model of mammalian thalamocortical systems.
Proceedings of the National Academy of Sciences USA. 2008;105(9):3593–3598.
[11] Thivierge JP and Cisek P. Nonperiodic synchronization in heterogeneous networks of spiking
neurons. The Journal of Neuroscience. 2008 Aug;28(32):7968–7978.
[12] Herault J. A model of colour processing in the retina of vertebrates: From photoreceptors to
colour opposition and colour constancy phenomena. Neurocomputing. 1996;12:113–129.
[13] Veruki ML and Hartveit E. AII (rod) amacrine cells form a network of electrically coupled
interneurons in the mammalian retina. Neuron. 2002 Mar;33:935–946.
[14] Traub RD, Kopell N, Bibbig A, Buhl EH, LeBeau FEN and Whittington MA. Gap junctions
between interneuron dendrites can enhance synchrony of gamma oscillations in distributed
networks. The Journal of Neuroscience. 2001 Mar;21(23):9478–9486.
[15] Carroll TL and Pecora LM. Synchronizing chaotic circuits. IEEE Transactions on Circuits and
Systems. 1991 Apr;38(4):453–456.
[16] Pecora LM and Carroll TL. Synchronization in chaotic systems. Physical Review Letters.
1990 Feb;64(8):821–824.
[17] Volos CK, Kyprianidis IM and Stouboulos IN. Experimental synchronization of two resistively
coupled Duffing-type circuits. Nonlinear Phenomena in Complex Systems. 2008;11(2):187–
192.
[18] Mountcastle VB. The columnar organization of the neocortex. Brain. 1997;120:701–722.
[19] Tovée MJ. An Introduction to the Visual System. Cambridge: Cambridge University Press,
1996.
[20] Poynton C. Digital Video and HDTV. Algorithms and Interfaces. San Francisco, CA: Morgan
Kaufmann Publishers, 2003.
[21] Zeki SM. Review Article: Functional specialisation in the visual cortex of the rhesus monkey.
Nature. 1978 Aug;274:423–428.
[22] Zeki S. A Vision of the Brain. Oxford: Blackwell Science, 1993.
[23] Hebb DO. The Organization of Behavior. New York: Wiley, 1949.
[24] Chaumette F and Hutchinson S. Visual servo control part I: Basic approaches. IEEE
Robotics & Automation Magazine. 2006 Dec;13(4):82–90.
[25] Chaumette F and Hutchinson S. Visual servo control part II: Advanced approaches. IEEE
Robotics & Automation Magazine. 2007 Mar;14(1):109–118.
[26] Schwartz EL. Spatial mapping in the primate sensory projection: Analytic structure and
relevance to perception. Biological Cybernetics. 1977;25:181–194.
[27] Schwartz EL. Computational anatomy and functional architecture of striate cortex: A spatial
mapping approach to perceptual coding. Vision Research. 1980;20:645–669.
[28] Cavanagh P. Image transforms in the visual system. In: Dodwell PC and Caelli T, eds. Figural
Synthesis. Hillsdale, NJ: Erlbaum, 1984, pp. 185–218.
[29] Zhao L and Breve FA. Chaotic synchronization in 2D lattice for scene segmentation.
Neurocomputing. 2008;71:2761–2771.
[30] Wilson HR and Cowan JD. Excitatory and inhibitory interactions in localized populations of
model neurons. Biophysical Journal. 1972;12:1–24.
[31] Quiles MG, Zhao L, Breve FA and Romero RAF. A network of integrate and fire neurons for
visual selection. Neurocomputing. 2009;72:2198–2208.
[32] Eckhorn R, Reitboeck HJ, Arndt M and Dicke P. Feature linking via synchronization among
distributed assemblies: Simulations of results from cat visual cortex. Neural Computation.
1990;2:293–307.
[33] Gray CM and Singer W. Stimulus-specific neuronal oscillations in orientation columns of cat
visual cortex. Proceedings of the National Academy of Sciences USA. 1989 Mar;86:1698–
1702.
[34] Ribary U, Ioannides AA, Singh KD, Hasson R, Bolton JPR, Lado F, Mogilner A and Llinas R.
Magnetic field tomography of coherent thalamocortical 40-Hz oscillations in humans.
Proceedings of the National Academy of Sciences USA. 1991 Dec;88:11037–11041.
[35] Singer W. Neuronal synchrony: A versatile code for the definition of relations? Neuron. 1999;
24:49–65.
[36] Dermietzel R. Gap junction wiring: a ‘new’ principle in cell-to-cell communication in the
nervous system? Brain Research Reviews. 1998;26:176–183.
[37] Draguhn A, Traub RD, Schmitz D and Jefferys JGR. Electrical coupling underlies high-
frequency oscillations in the hippocampus in vitro. Nature. 1998 Jul;394:189–192.
[38] Hormuzdi SG, Filippov MA, Mitropoulou G, Monyer H and Bruzzone R. Electrical synapses:
a dynamic signaling system that shapes the activity of neuronal networks. Biochimica et
Biophysica Acta. 2004;1662:113–137.
[39] Bennett MVL and Zukin RS. Electrical coupling and neuronal synchronization in the
mammalian brain. Neuron. 2004;41:495–511.
[40] Hameroff S. The “conscious pilot” – dendritic synchrony moves through the brain to mediate
consciousness. Journal of Biological Physics. 2010;36:71–93.
[41] Kouider S. Neurobiological theories of consciousness. In: Banks WP, ed. Encyclopedia of
Consciousness. Oxford: Elsevier, 2009, pp. 87–100.
[42] Tononi G and Edelman GM. Consciousness and complexity. Science. 1998 Dec;282:1846–
1851.
[43] Lamme VAF. Towards a true neural stance on consciousness. Trends in Cognitive Sciences.
2006;10(11):494–501.
[44] Crick F and Koch C. Are we aware of neural activity in primary visual cortex? Nature. 1995
May; 375:121–123.
[45] Zeki S. A theory of micro-consciousness. In: Velmans M and Schneider S, eds. The
Blackwell Companion to Consciousness. Malden, MA: Blackwell Publishing, 2007, pp. 580–
588.
[46] Woolf NJ and Hameroff SR. A quantum approach to visual consciousness. Trends in
Cognitive Sciences. 2001;5(11):472–478.
[47] Wang D. Emergent synchrony in locally coupled neural oscillators. IEEE Transactions on
Neural Networks. 1995 Jul;6(4):941–948.
[48] König P and Schillen TB. Stimulus-dependent assembly formation of oscillatory responses: I.
synchronization. Neural Computation. 1991;3:155–166.
[49] Terman D and Wang D. Global competition and local cooperation in a network of neural
oscillators. Physica D. 1995;81:148–176.
Part IV: Whole body level
Ryszard Tadeusiewicz
6 Simulation-based analysis of musculoskeletal
system properties
6.1 Introduction
The motion of living organisms is among the most obvious targets for computer-
aided simulations in biology, for several reasons. First, the outcome of
such simulations is usually self-evident, requiring no further intellectual effort or
sophisticated methodologies to interpret the simulation model and determine its
accuracy. Second, the motion of real-life organisms is easily observed in great
detail, permitting comparisons between simulation data and natural phenomena.
The overall quality of the simulation model is easy to judge: simply speaking,
the model either correctly reflects biological mechanisms or it does not.
Assuming the former, the model can be applied in all areas where simulations
are typically needed – as a source of scientific inspiration, as a support tool in
diagnostics, therapy or rehabilitation, or as a teaching aid (Fig. 6.1).
Fig. 6.1 Various applications of biocybernetic models (the figure has been assembled from MS
ClipArt objects).
Qualitative assessment of motion simulation models can be easily extended with
quantitative aspects. For instance, it is possible to measure (in a noninvasive
manner) the displacement of various points on the surface of the human body.
Without going into detail, it should be noted that modern motion capture
techniques are capable of rendering motion with great accuracy – e.g. by
aggregating images from several cameras in order to accurately track the
position of 3D markers embedded in the model’s clothing (Fig. 6.2).
Fig. 6.2: Modern tools can accurately capture human motion.
As the physical properties of motion (such as the forces and moments
involved) are also easily measurable, it becomes relatively easy to place motion
simulation algorithms on a solid theoretical footing.
The structure of the presented simulation models is, for the most part,
determined by clear and logical cause-and-effect chains founded upon the basic
laws of physics. This does not, however, mean that such models are easy to
construct. Highly accurate simulations of the musculoskeletal system and its
motion are often very complicated and costly. In contrast to state-of-the-art
professional tools – developed e.g. to aid medical diagnostics or patient
rehabilitation – the models presented in this book constitute far-reaching
simplifications of the biological phenomena which they seek to represent.
Indeed, they should be treated as mockups of actual models, permitting only
rudimentary simulations. In spite of their obvious drawbacks, such “rough”
models offer one important advantage: they enable the reader to quickly become
familiar with their underlying principles and architecture without getting bogged
down in minute implementation details.
At this point we should emphasize one additional difference between
professional modeling software (applicable in medical practice) and the
simplistic tools described in this chapter: professional models usually render
their subjects in great detail, immersing them in photorealistic scenery. The
range of actions which can be simulated by such systems closely reflects real-
world situations and events.
Medical simulators are not unique in this regard: demand for realistic user
experience is evident (among others) in flight simulators and driving simulators
– even those used for entertainment (i.e. computer games).
Speaking of computer games, they have also matured over the years and
often feature realistic objects participating in complex events. Most games are –
at their core – simulation programs, presenting imaginary entities whose
behavior is governed by artificial scenarios. This book concerns itself with
medical simulations and therefore assumes that the subject of the simulation is a
real person or a specific fragment of the body (e.g. an organ, a tissue affected by
some disease etc.). This fact introduces certain additional restrictions (not seen
in gaming) but it also enables us to achieve far more ambitious goals.
Developing professional simulation software is an enormously time-
consuming task, requiring millions of lines of code and involving entire teams of
graphical designers, animation specialists and experts in the field of human-
computer interaction. We must acknowledge that such a system could not be
presented in our book – its complexity would distract us from our goal and
discussing the intricacies of its user interface could very well obscure the
underlying principles and simulation algorithms. See for instance Fig. 6.3. The
relative size of each box represents a rough estimate of the amount of work
required to develop each component: the core program which implements the
simulation model (yellow box), control and interaction logic (light brown box)
and visualization mechanism (teal box). Clearly, the effort involved in making
the system approachable and user-friendly dwarfs the workload associated with
constructing simulation models. When quality of experience (QoE) is a primary
concern (such as in the case of videogames), the developer will usually spare no
expense in creating a visually appealing product – these costs can be recouped
by increasing the product’s unit price and sales volume. In contrast, our goal is to
focus on the core aspects of simulations, without concerning ourselves with
state-of-the-art visuals or control schemes. We hope this explains why the
systems presented below only provide basic end-user tools.
Fig. 6.3: Schematic depiction of the workload involved in developing the simulation model and
end-user features.
6.2 Components of a motion simulation model
Having described the presentation and control elements of simulation software,
let us now return to its core functionality. In this chapter we focus on the action
of the musculoskeletal system. Motion biodynamics is a complex field of study,
with each of its aspects (skeletal kinematics, muscle action, energy supply via
the bloodstream etc.) requiring advanced mathematical formulations and careful
observation of real-life phenomena in order to permit precise modeling. In this
book we do not attempt to formulate an accurate model of a complex biological
mechanism, i.e. the musculoskeletal system of a specific patient undergoing
examination and/or treatment. Rather, we present a number of demonstrations
showing that motion can be simulated by computers.
In our presentation we will deal with two components which together
comprise the musculoskeletal system:
– bones, i.e. building blocks of a kinematic system,
– muscles, i.e. effectors facilitating motion.
Each of these components will be discussed on the basis of simple simulation
models implemented in the Matlab environment and available free of charge to
all readers at http://www.uci.agh.edu.pl/english/tad/.

6.2.1 Simulating the skeleton

Bones are fairly complex anatomical structures. Their morphology – at least in
the case of the human skeleton – has been thoroughly studied and described in
dozens of handbooks and atlases. The anatomy of a bone (as seen from a
histological perspective) is also rather complicated, with Haversian canals,
osteons, etc. On the other hand, if we focus on the mechanical properties of bone, it is
difficult to imagine a simpler biological construct! The motor capabilities of any
vertebrate are intrinsically linked to the strength of its bones. We will therefore
construct a model which calculates and presents stresses induced in bone tissue
as a result of external load. Exceeding load thresholds results in fractures – a
dynamic process which is also outside the scope of our presentation. We will
instead limit ourselves to a static model which assesses the load (i.e. input data)
and determines the internal stress field for a given bone (the output of the
simulation). Since most presentations of the skeletal system focus on long bones,
for the sake of this discussion let us adopt the human femur as our principal
study subject (Fig. 6.4).
The first step in constructing a model is to carefully determine its subject.
Obviously, focusing on the femur means discarding all other anatomical
elements of the human thigh (muscles, fatty tissue, blood vessels, nerves, skin
etc.). It should also be noted that the extremities of the femur are quite complex,
with the head, neck and trochanters in the upper extremity, and two condyles
spanned by an intercondylar fossa in the lower extremity. It is, of course,
possible to simulate such complex structures – the relevant computational
algorithms apply the so-called finite element method in which the original shape
is subdivided into small, standardized pieces (“finite elements”), with the
required quantities (forces, stresses, displacements etc.) calculated for each
element separately, acknowledging the shape and influence of adjacent elements.

Fig. 6.4: The subject of our study: the human femur (source:
http://www.biologycorner.com/anatomy/skeletal/printables_files/ Femur_c_sm.jpg; accessed
September 2014).

This type of model allows stresses to be accurately computed and visualized,
as shown in Fig. 6.5.
Fig. 6.5: Stress pattern in the upper extremity of the human femur. Courtesy professor Romuald
Bedzinski.

As our goal is to present a simple model, we will omit these complicated
extremities – instead, our analysis will focus solely on the body of the femur
highlighted in Fig. 6.4. To further simplify matters, we will approximate its
actual shape using a pipe with a circular cross-section, described by three
parameters: length l, diameter d and cross-section surface area S (Fig. 6.6).

Fig. 6.6: Approximating the body of the femur with a pipe.

Actual bones are subject to a complex interplay of forces. In our sample model
this pattern will be replaced by two forces: the longitudinal force (Fo) and the
lateral force (Fp), as shown in Fig. 6.7.
Fig. 6.7: Bone model acted upon by two forces (longitudinal and lateral).

The effect of the longitudinal force is to generate compressive stress which can
be mathematically expressed as:

$$\sigma_o = \frac{F_o}{S} \qquad (6.1)$$
This is schematically depicted in Fig. 6.8. The force acting upon the bone
provides the input for our model; the resulting compressive stress (which has the
potential to induce mechanical failure in the bone) represents the model’s
output. It should be noted that a more sophisticated analysis of compressive forces
acting upon a long, narrow shaft should also acknowledge the potential damage
mode known as buckling – however, for the sake of simplicity, we will omit this
aspect in our analysis.

Fig. 6.8: Biocybernetic bone model subjected to compressive forces.

Lateral forces induce shear (bending) stress which, assuming a thin-walled pipe,
can be expressed as:

$$\sigma_\rho = \frac{4\, F_p\, l}{S\, d} \qquad (6.2)$$
This phenomenon is schematically depicted in Fig. 6.9.

Fig. 6.9: Biocybernetic bone model subjected to shearing forces.

Combining both aspects yields a two-dimensional model with two inputs and
two outputs, shown in Fig. 6.10.
Fig. 6.10: Two-dimensional model of the femur.
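To make the two-input/two-output model concrete, the following minimal Matlab sketch evaluates equations (6.1) and (6.2). The parameter values, and the thin-walled-pipe form assumed here for equation (6.2), are illustrative and need not match those used in the downloadable tool:

l  = 0.45;   % bone length [m]
d  = 0.03;   % bone diameter [m]
S  = 6e-4;   % cross-section surface area [m^2]
Fo = 700;    % longitudinal force [N]
Fp = 100;    % lateral force [N]

sigma_o   = Fo / S;                % compressive stress, equation (6.1)
sigma_rho = 4 * Fp * l / (S * d);  % transverse stress, assumed form of equation (6.2)
fprintf('sigma_o = %.2f MPa, sigma_rho = %.2f MPa\n', sigma_o / 1e6, sigma_rho / 1e6);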

6.2.2 Bone model simulations

The model presented above is extremely simple; indeed, it could be
characterized as “primitive”. Yet even this rudimentary formal apparatus enables
us to make nontrivial observations.
The first such observation concerns the model itself. As the length of the bone is
greater than its width, the torsional stress σρ (caused by the lateral force – see
equation (6.2)) usually exceeds the compressive stress σo (caused by the
longitudinal force – see equation (6.1)).

Comparing the values of σρ and σo suggests that, at least in the case of the femur,
transverse fractures should occur with greater frequency than compression
fractures (Fig. 6.11).

Fig. 6.11: The femur is more frequently affected by transverse fractures (left) than by compression
fractures (right) (source: http://trialx.com/curetalk/wp-content/blogs.dir/7/files/2011/09/Broken-
Bone.jpg, http://www.houstonspineandjoint.com/images/compression-fracture.jpg; accessed
September 2014).
This conclusion is readily validated by visiting an ER facility.
Our model, while simple, can serve as the basis for computer-aided
simulations. Such simulations are efficiently performed in the Matlab
environment by using the software available at
http://www.uci.agh.edu.pl/english/tad/. The first simulation concerns various
stress patterns present in the bone – as well as the corresponding risk of fracture
– for animals of varying size (Fig. 6.12). The user interface provides a selection
of tools which can be used to adjust model parameters (i.e. dimensions of the
simulated bone), select an animal species, determine the input forces and
observe the resulting stress patterns.
The simulation model described above is somewhat abstract since our target
bone is subject to “virtual” forces which do not correspond to any specific real-
life situations. On the other hand, computer-aided modeling and simulations are
a branch of applied computer science, which suggests that they should reflect
physical reality. Thus, we have decided to extend our model with a variable load
pattern associated with a common activity: crouching. The corresponding tool is
available for download from our website and is depicted in Fig. 6.13.

Fig. 6.12: User interface of the bone simulation tool available for download.
Fig. 6.13: Crouching simulator. The femur, which is the subject of our model, is drawn as a red
line.

Let us observe the model in action. Upon launch the application displays the
image shown in Fig. 6.13. The diagram shows a simplified human figure which
can be made to crouch by dragging its body (either down or up) with the mouse.
By dragging the figure downwards, we perform a “crouch” and can observe
the changing compressive and torsional stresses affecting the highlighted femur.
These stresses are visualized using a bar chart shown in the left-hand part of the
main window (Fig. 6.14).
Fig. 6.14: Changes in compressive and torsional stresses which affect the model femur during a
simulated crouch.

The sliders displayed in the control panel enable us to change the properties and
dimensions of the simulated bone. Fig. 6.15 shows the outcome of changing the
femur’s dimensions: the bone has become narrower and shorter, with
simultaneous reduction in the model’s weight (thus simulating e.g. a petite
woman instead of a grown man). Our model reveals that these changes lead to a
dangerous increase in torsional stress, possibly precipitating a fracture.
Due to the way in which the model has been constructed, the crouch may
extend all the way to the ground, resulting in near-zero compressive stress but
further increasing torsional stress (Fig. 6.16).
Fig. 6.15: Repeated simulation following changes in the dimensions of the model femur.

Fig. 6.16: Stress pattern observed during a full crouch.

6.2.3 Muscle models

The models described above provide insight into the capability of the skeletal
system to withstand mechanical loads, although only in a static loading scenario.
At this point it is worth investigating the motion produced by muscle action.
Motion is one of the principal characteristics of life. The simplest and – quite
often – most accurate means of determining whether a creature is alive is to
check for motion. While crude, this criterion is universally applied in a number
of scenarios (including ones which involve humans – although, of course, formal
determination of death requires multiple tests and involves specialized criteria).
It should be noted that even the most familiar manner of checking whether
someone is alive – palpating the carotid artery – is, in fact, tantamount to seeking
motion (in this case – motion of the heart which, if present, produces an
identifiable pulse).
The human body supports an enormous range of highly diverse motion
patterns – slow and deliberate (watchmaker) or fast and forceful (boxer). We
owe this freedom to a system of approximately 500 muscles (Fig. 6.17) which
act as drives, guiding the motion of various parts of our body.

Fig. 6.17: Several dozen of the most important muscles in the human body (out of approximately
500 in total; source: http://www.endoszkop.com/wp-content/uploads/2014/03/muscles-of-the-
body-diagram-to-label.jpg; accessed September 2014).

The relatively imprecise reference to “approximately 500 muscles” is due to the
fact that, despite over 300 years of research, differences of opinion persist
regarding the classification of specialized anatomical structures in which bundles
of muscle fibers spread out in different directions, exerting force upon several
different joints. Should such structures be treated as a single muscle or as several
separate muscles (and if so – how many)?
Human anatomy, while self-evidently complex, is by no means exceptional.
Fig. 6.18 illustrates the muscular system of a horse – a case study which will
prove useful in our later presentations. One would be hard pressed to refer to this
biomechanical system as “simple”!

Fig. 6.18: The muscular system of a horse.

The horse has four legs and a complex array of muscles which are well known to
veterinarians; however for the purposes of our discussion we will rely on a
greatly simplified model, with a single leg and a single muscle (see Fig. 6.19).
That single muscle will be the subject of our analysis. We will not delve into
its anatomical details (see Fig. 6.20 for a depiction of muscle fiber anatomy),
although the correlation between the diameter of the muscle and the number of
individual fibers which exert force via contractions will matter in our further
study.
Muscles serve a number of purposes, the most important of them being
locomotion. This is a fundamental requirement for most animals, enabling them
to seek food, mate and flee from predators. Nature has come up with a variety of
ways to address this requirement. For example, aquatic animals employ four
different means of locomotion: thrust, waving, paddling and skidding. Land-
based locomotion is much more challenging and required the evolutionary
development of specialized organs which not only permit organisms to move
around and change direction as required, but also ensure balance and enable the
body’s center of mass to be raised or lowered. Thus, for most land animals the
preferred mode of locomotion is walking, and this is the type of motion which
we will focus on throughout the remainder of the chapter.
Fig. 6.19: Simplified animal motor model used in our study.

Fig. 6.20: Schematic depiction of muscle tissue.

Fig. 6.21: Horse gait kinematics.

Real kinematics of animal gait are very complex (Fig. 6.21), as is the
anatomy of the legs and their constituent muscles (in fact, can any biological
mechanism be truly called simple?).
For simplicity’s sake we will assume that our model animal (Fig. 6.19)
moves as a result of circular motion of a single leg which contacts the ground.
As the “leg” is, in fact, a rigid, unarticulated rod, the model animal’s body would
need to rise slightly when the leg rests against the surface, and then fall once the
leg is lifted. Our model ignores this complication and asserts that the animal’s
vector of motion remains parallel to the ground at all times (Fig. 6.22).

Fig. 6.22: Simplified motion model for the simulated animal.

We also ignore the fact that at the end of the stance phase the leg is angled to the
rear and should be swung to the front again (this is impossible in our model as it
would require the body’s mass to temporarily rest on other legs, which are not
present). Nevertheless, despite these drastic simplifications, our model still
enables us to draw interesting conclusions.
As already suggested, motion is driven by a single simulated muscle. Let us
now focus on the characteristics of muscle tissue. The muscle contracts in
response to signals sent from the brain by means of nerves (which are not part of
our model). In this process the muscle’s length decreases by xm and a force of Fm
is exerted (Fig. 6.23). Fig. 6.24 presents a simplified formal model of the
muscle, with a single input (representing control signals from the brain) and two
outputs: shortening (xm) and force (Fm).

Fig. 6.23: Muscle contraction.


Fig. 6.24: Simplified model of the muscle.

The presented model is not particularly useful as the input and output signals
cannot be directly observed – thus, we have no way of knowing whether the
model is correct. In general, it is difficult to track signals sent from the brain to
any particular muscle, and moreover there are no useful techniques for
correlating external stimuli (nerve signals) with the forces exerted by individual
muscles. We will therefore extend our model with an effector component. The
effect of the muscle’s action is easy to detect, enabling us to validate the model
and draw verifiable conclusions.
Muscle action typically produces motion. Let us assume that the model
animal is running and that the measure of this phenomenon is its velocity (V).
This is schematically depicted in Fig. 6.25.
Our model of the animal’s gait will, of course, be highly simplified. We will
refer to the correlation between the kinetic energy attained by an animal with
mass M moving at velocity V and the measure of work performed by the muscle.
The kinetic energy of an animal in motion is given by the following formula:

$$E = \frac{1}{2} M V^2 \qquad (6.4)$$
The animal must acquire this energy (assuming it starts from a standstill) as a
result of muscle action. In reality, accelerating to velocity V calls for a complex
interplay of muscles powering several legs – a process which typically requires
the animal to take several steps before peak velocity is attained. However, in our
model all kinetic energy is supplied by a single “effective leg” powered by a
single “effective muscle”. As shown in Fig. 6.24 the muscle exerts a force of Fm
while reducing its length by xm. The work performed by the muscle in this
process is given by:

$$W = F_m\, x_m \qquad (6.5)$$
Fig. 6.25: The presented simplified structure enables us to design a formal model of a running
animal. The image presents the displacement of the animal’s body as a result of muscle action (cf.
Fig. 6.22).

By comparing equations (6.4) and (6.5) we obtain:

$$\frac{1}{2} M V^2 = F_m\, x_m \qquad (6.6)$$

Solving equation (6.6) for V gives the calculated velocity:

$$V = \sqrt{\frac{2\, F_m\, x_m}{M}} \qquad (6.7)$$
This formula can be used to develop a model in which the muscle, acting upon
the simulated body of the animal, enables us to investigate the causative link
between the input signal (i.e. the brain’s decision to initiate motion) and the
resulting phenomenon (velocity). This model is depicted in Fig. 6.26. While
studying the figure it is worth noting the effect of M, i.e. the animal’s mass.
Clearly, this parameter cannot be ignored as it significantly influences the output
(velocity), but on the other hand it should not be treated as input either – rather,
it reflects a property of the system. Thus, we have placed it in a separate frame
above the block to which it applies and used a different type of arrow to indicate
that the given block depends on the mass parameter.
Fig. 6.26: Gait model which acknowledges two separate components: the muscle and the body.
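As a quick illustration, equation (6.7) can be evaluated directly in Matlab; the parameter values below are assumed for the example:

Fm = 2000;                   % force exerted by the effective muscle [N]
xm = 0.05;                   % muscle shortening [m]
M  = 70;                     % mass of the animal [kg]
V  = sqrt(2 * Fm * xm / M);  % resulting velocity [m/s], equation (6.7)
fprintf('V = %.2f m/s (%.1f km/h)\n', V, 3.6 * V);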

In real life running requires the given creature (human, animal, insect etc.) to use
its legs in order to maintain the desired velocity over a period of time. As our
simplified model involves a single leg powered by a single muscle, we focus on
the initial phase of locomotion where the animal (or human) begins to move and
builds up speed. Once the target velocity is attained it must be maintained, which
involves additional leg swings and further energy expenditures; however these
aspects are beyond the scope of our presentation.

6.2.4 Velocity-dependent simulations of the muscle model

The presented model may be used in simulations which attempt to determine the
velocity attainable by animals of varying size (dependent on the xm parameter)
and mass (M). Our online simulation tool, depicted in Fig. 6.27, also enables us
to conduct virtual “races” between various types of animals.
While carrying out the presented simulations, an interesting phenomenon can
be noted: maintaining a fixed ratio between the animal’s size and mass results in
a relatively stable velocity parameter. This can be explained by referring to the
formal model: assuming that the size of the animal is scaled by a factor of β, we
obtain the following muscle shortening and mass coefficients:

$$x_{mw} = \beta\, x_m \qquad (6.8)$$

$$M_w = \beta^3 M \qquad (6.9)$$

Fig. 6.27: Muscle simulator capable of calculating the velocity attainable by running animals.
Both formulae are rather straightforward and require no further clarification. A
more interesting question is how the force parameter should be scaled to
produce the desired value of Fmw. In trying to determine how much stronger an
enlarged muscle will become, let us refer again to Fig. 6.20. The figure shows
that the source of the muscle’s strength lies in its fibers, each of which has a
fixed diameter and contributes a (relatively) fixed quantum of force. Clearly, the
number of available fibers depends on the cross-section area of the muscle –
therefore force increases in proportion to the square of the scaling factor β:

$$F_{mw} = \beta^2 F_m \qquad (6.10)$$
Substituting equations (6.8), (6.9) and (6.10) in equation (6.7) we obtain the
following result:

$$V_w = \sqrt{\frac{2\, F_{mw}\, x_{mw}}{M_w}} = \sqrt{\frac{2\, \beta^2 F_m \cdot \beta\, x_m}{\beta^3 M}} = \sqrt{\frac{2\, F_m\, x_m}{M}} = V \qquad (6.11)$$
Equation (6.11) suggests that velocity is independent of the scaling factor and
that animals characterized by a fixed ratio between size and mass can run at the
same speed. At first glance this conclusion may appear counterintuitive – one
would expect a larger animal to be capable of greater velocity. Nevertheless, the
mathematical result supplied by equation (6.11) is confirmed by experimental
data summarized in Tab. 6.1. It turns out that similarly proportioned animals do
indeed run at a similar speed!
Tab. 6.1: Top speed attainable by animals of varying mass and size.

Species     Mass [kg]   Velocity [km/h]
Dog              60          60
Horse           700          60
Kangaroo         70          45
Gerbil            0.1        50
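The scale invariance predicted by equation (6.11) is also easy to check numerically. The following short Matlab loop (an illustration with assumed baseline values) prints the same velocity for every scaling factor:

Fm = 2000; xm = 0.05; M = 70;        % assumed baseline animal
V  = @(F, x, m) sqrt(2 * F * x / m); % equation (6.7)
for beta = [0.1 0.5 1 2 10]
    fprintf('beta = %4.1f -> V = %.3f m/s\n', ...
            beta, V(beta^2 * Fm, beta * xm, beta^3 * M));
end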

6.3 Summary
The musculoskeletal models presented in this chapter, while simplistic, show the
way in which observations of a biological system may lead to the formulation of
a mathematical model and then to computerized simulations. Readers should
now hopefully be prepared to create and use more complex simulation models
when necessary. The response of bones to load patterns and the action of various
components of the muscular system are all interesting research topics, revealing
how the use of computers can benefit biological research as well as everyday
medical practice.

6.4 Simulation software available for download


As already mentioned, the Matlab scripts powering our simulation software are
freely available for download. We invite all interested users to visit:
http://www.uci.agh.edu.pl/english/tad/ – a site which hosts our sample
applications.
Part V: Diagnostics procedure
Andrzej A. Kononowicz and Inga Hege
7 The world of virtual patients

7.1 Introduction
This chapter is about virtual patients, broadly defined as computer simulations of
clinical scenarios involving patients and health professionals. Our writing is
intended for those who would like a quick overview of the topic before starting
to use, develop and perhaps also research in this area. We start with a brief
delineation of how the term “virtual patient” is understood and what different
types of virtual patients there are. It is our belief that for a good understanding of
the topic a familiarity with both the pedagogical and technical aspects which
underpin virtual patients is important. We will give examples of how virtual
patients are used at medical schools and finish by pointing to some new,
innovative developments in the field.

7.2 What are virtual patients?


It might be helpful to consider first what we can glean about virtual patients
from the name itself. The word “virtual” sounds modern, “digital”, a bit science
fiction, as if it may be related to the popular term “virtual reality”. For some this
brings the connotation of artificially constructed immersive environments, but
the term may be analyzed beyond the digital era. It can be understood as
something very close to being something without actually being it. The use of
the term “patient” sets the context to health sciences. We simulate a person who
receives medical care or treatment, and we are interested in the whole person,
not just parts. The compound noun “virtual patient” still leaves a lot of room for
interpretation, depending on how we emphasize and interpret the lexical
constituents of the term.
As part of a study we have systematically reviewed the existing literature in
order to analyze the usage of the term [1]. The term “virtual patient” is used in
education, clinical research, medical procedure planning and e-health. What
seems to be common across the fields is the desire to construct an object which
could be used to experiment safely. This could be to intuitively understand a
theory to be learnt, test a drug or medical documentation system, or explore the
consequences of a surgical intervention without putting somebody at risk. While
fully appreciating this diversity, we would like to focus in this chapter on the
educational use of virtual patients. This is where our expertise lies, but at the
same time it is the most common use of the term.
One way of constraining meaning is by setting definitions. Even though
several definitions can be found in the literature, two of them are clearly most
prevalent. The first, proposed by Ellaway et al. circa 2006 in relation to their
work with the MedBiquitous standardization organization, defines virtual
patients as “an interactive computer simulation of real-life clinical scenarios for
the purpose of medical training, education, or assessment. Users may be
learners, teachers, or examiners” [2]. The second, to be found in a 2007
document by the Association of American Medical Colleges (AAMC), defines
virtual patients as “a specific type of computer-based program that simulates
real-life clinical scenarios; learners emulate the roles of health care providers to
obtain a history, conduct a physical exam, and make diagnostic and therapeutic
decisions” [3]. Both have in common the object of a simulation being a “real-life
clinical scenario”. This stresses the fact that a process is mimicked so as to be
authentic and relevant to a medical or healthcare practice. It is clear from both
definitions that they relate to a computer program. The first definition is broader
and aims to explain the context as training, education, or assessment whereas the
AAMC authors were more interested in the actual functionality of a system
which was expected to simulate concrete tasks of healthcare providers such as
history taking, physical examination and clinical decision-making.
Even though both definitions appeared around 2005, the idea of virtual
patients is much older. Some might be surprised to discover that pioneers of the
idea worked on virtual patients as far back as the mid-sixties and early seventies.
Bitzer, for instance, published a study in 1966 describing the use of the PLATO
computer system to teach first year nursing students the care of patients with
angina pectoris and myocardial infarction [4]. Other examples soon followed [5,
6]. Initially these types of computer programs were known under more
descriptive names, such as “a computer-assisted system for learning diagnosis”; “a
computer-aided simulation of a clinical encounter”; “simulated laboratory”. It
was not until 1990 that the name “virtual patient” appeared in a research paper
for the first time [7]. The term entered common use around 2005. Even now it is
important to remember that a lot of interesting research on the topic is published
using different terms, such as “Interactive Case Simulation” or “Patient
Simulator”.
7.3 Types of virtual patient
Constraining ourselves to the educational sector does not solve the problem of
the ambiguity of the term. For instance, Talbot et al. distinguished seven classes
of virtual patients used in education encompassing the following: Case
Presentations, Interactive Patient Scenarios, Virtual Patient Games, High Fidelity
Software Simulations, Human Standardized Patients, High Fidelity Manikins
and Virtual Standardized Patients [8]. Huwendiek et al. presented a typology of
virtual patients with several instructional design and technical factors to consider
[9]. Le Beux and Fieschi point to three different uses of virtual patients in the
editorial to a special issue of the International Journal of Medical Informatics
[10]: in simulation for training, virtual reality and medical virtual universities.
We will go more into the details of features differentiating virtual patients later
in this chapter.
The crude division resulting from our literature review shows that the four
most common types of virtual patient are Interactive Patient Scenarios, High
Fidelity Software Simulations, Virtual Standardized Patients, and Virtual Patient
Games [1]. Fig. 7.1 presents these types, described in terms of targeted
competency and the technology used.

Fig. 7.1: The four most popular virtual patient types.

Virtual patients of the type Interactive Patient Scenario are technically the most
basic. These include interactive web pages or multimedia presentations on CD-
ROM consisting of simple combinations of text, images, animations and video
recordings. The target competency is clinical reasoning understood as
“application of knowledge to collect and integrate information from various
sources to arrive at a diagnosis and management plan” [11]. Plenty of examples
of virtual patients of this type can be found in the eViP project repository [12].
Computer systems for the display and authoring of such cases include Web-SP
[13], CASUS [14], and OpenLabyrinth [15].
The High Fidelity Software Simulation virtual patient type uses advanced
numerical models of anatomy and physiology to teach procedural and basic
skills. Some may have specialized devices connected to a computer to increase
the simulation fidelity or train psychomotor skills. High Fidelity simulators are
used, for instance, in training laparoscopy or endoscopy skills [16].
Virtual Standardized Patients focus on training interpersonal communication
skills. Learners are able to communicate with virtual patients using natural
language. The persona of a virtual patient is often visualized as a 3D animated
character which may gesticulate or show emotions in a dynamic way. The
simulation is possible thanks to the use of natural language analysis software,
artificial intelligence technology and facial expression simulation. Such virtual
patients are used, for instance, in training management of Post-Traumatic Stress
Disorder (PTSD) [17] or the communication skills needed for sensitive
procedures like breast examination [18].
The characters of Virtual Patient Games are able to move in an artificially
created environment (for instance a virtual hospital) and interact with the learner
who also has a representation (avatar) in the virtual world (Fig. 7.2). This is one
of the ways in which Virtual Patient Games differ from Virtual Standardized
Patients, who are more stationary in their behavior. The educational focus of
Virtual Patient Games is predominantly team-based (with elements of clinical
reasoning) to practice, for instance, how an emergency team should work
together during mass casualty events [19] or emergency situations [20]. Popular
online multi-user virtual environments like Second Life [20] or game engines
like Unreal 3 [21] are used to implement this type of virtual patient.
Fig. 7.2: CliniSpace™ – A Virtual Patient Game [22] (Image used with permission from Innovation
in Learning, Inc.).

7.4 The motivation behind virtual patients


The use of virtual patients can be motivated in various ways, one of which is the
need for an intermediate step between basic sciences and the practice of clinical
subjects [23]. Clinical clerkship and the wealth of experience it offers often do
not leave space for reflection or reference to textbooks. Group pressure and
anxiety about revealing a lack of knowledge in front of patients, peers or
teachers may prevent active learning processes. Virtual patients provide a safe
environment for training and give more structure to the observations made
during bedside teaching. Considering the limited timespan of clerkships, virtual
patients also offer a unique opportunity to follow the progress of a case over
time [24].
Specialization of teaching hospitals, the shift of clinical care from inpatient
to ambulatory settings, and the seasonal character of certain conditions may limit
learning opportunities [25]. To remedy this situation, the Liaison Committee for
Medical Education (LCME) in the United States recommends that if a student is
not able to encounter patients with a particular clinical condition, the gap may be
filled by online cases [26]. Virtual patients can improve assessment
procedures by standardizing clinical exposure and objectively recording
examinees’ performance.
Heightened patient safety procedures require a basic level of skills from
trainees before allowing them access to real patients. Even though the
authenticity of virtual patient simulations may be inferior to real practice
conditions, they may help reduce the learning curve in clinical training.
Finally, virtual patients share all the inherent advantages of e-learning
technologies, such as flexible learning time, pace and place [27]. Online access
to content facilitates easy updates and low publication costs. The learning
experience can scale substantially, which has been demonstrated in Massive
Open Online Courses (MOOCs), such as those offered by organizations like
edX, Coursera or Udacity which enroll thousands of participants worldwide [28].
Learning analytics techniques process the large volumes of data logged from
learners’ activity in such systems and offer many possibilities for evaluating how
effective computer-aided instruction can be designed [29]. Last, but not least,
are students’ expectations that universities
should follow new trends in technology [30].
7.5 Theoretical underpinnings of virtual patients
Hands-on experience with virtual patients, combined with an intuitive
understanding of their advantages might not be enough to plan and motivate new
research or profoundly interpret the results. To deepen their understanding of
virtual patients, researchers use educational theories and conceptual frameworks
which guide the inquiry process [31]. Below we briefly present two theories
which are often used in the context of virtual patients.

7.5.1 Experiential learning theory

Experiential learning theory (ELT) is grounded in the work of theorists such as
John Dewey, Kurt Lewin and Jean Piaget [32, 33]. The underpinning assumption
is that knowledge is constructed individually and gradually by the learner, with
the role of the teacher being to provide the right stimulus for development. This
is in contrast to more traditional approaches exemplified by behavioral learning
theories which focus on transmission of fixed knowledge from a knowledgeable
teacher to a receptive learner. Another important feature of ELT is that there is
more emphasis on the learning process itself than on the content or measurable
outcome of the educational activity. The view of the learning process is holistic,
meaning it does not focus on individual senses or exclude some elements of
learning like affective aspects.
The hallmark of ELT is the pivotal role of the “experience”, which is an
impulse to transform and extend our understanding [32]. The learning process is
often presented as a cycle (or spiral) of consecutive phases triggered by concrete
experiences in our environment. In the popular depiction of this process
attributed to David Kolb (Fig. 7.3), experience is followed by a reflective
observation phase during which the external impulse is carefully analyzed and
compared by the learners to their current understanding of a given phenomenon
[33]. The next abstract conceptualization step enables the learner to generalize
the here-and-now observations and assimilate the conclusions into their existing
knowledge. This is the foundation for the fourth step where the added
knowledge is verified with active experimentation by planning and executing
actions in the environment. This leads to new experiences which deepen the
understanding.
Fig. 7.3: Experiential learning cycle.

The learning in ELT is enabled by the constructive tension between opposing
types of intellectual activities. The concrete experience contrasts with the
abstract conceptualization and the detached inner reflection opposes the active
external experimentation. The transition between the steps challenges and
modifies our existing knowledge structures and adds new content to them. The
postponement brought by reflection, conceptualization and planning of further
actions prevents the learner from mindlessly leaping from one experiential impulse to
another. The resolution of conflicts between the theoretical reflection and
generalization as well as observed reality and planned action prevents the
detachment of the learner from the environment [32].
Looking at virtual patients from the perspective of ELT helps us to appreciate
the intrinsic advantages of this form of learning. The first encounter with a
virtual patient case may be viewed as the triggering experience. Admittedly, it
has clear disadvantages in terms of authenticity when compared with the
experience of meeting a real patient during bedside teaching. However, virtual
patients also have advantages in terms of support in the subsequent phases of the
ELT cycle. The haste of clinical practice makes it difficult to find time and space
to reflect immediately after an experience [23], and the flood of new information
may be overwhelming, especially in the early stages of education. The integrated
tools in virtual patients are instrumental in ordering observations and making
sense of experiences. The abstract conceptualization phase of the experiential
learning cycle can be supported by giving students the opportunity to use reference
material while diagnosing patients, which is not possible or feasible during
bedside practice. Finally, it is self-evident that patient safety makes the active
experimentation phase with real patients problematic, whereas this is not a
barrier with computer simulations [34].

7.5.2 Theory of clinical reasoning


Clinical reasoning is a difficult concept to define and for students to grasp. In
general, this term describes the problem-solving approach taken by clinicians
when making decisions on the diagnosis and management plan of particular
patients. It involves the application of medical knowledge in order to collect and
integrate information from various sources [11]. According to a model proposed
by Gruppen and Frohna [35] the clinical reasoning process involves using prior
knowledge to summarize a patient’s situation and characteristics into a consistent
problem representation. The initial representation is iteratively evaluated and
refined under partly subconscious consideration of the case’s context. The
judgment of an acceptable fit between the problem representation and the
patient’s situation sets up the action to be taken (treatment/management) [35].
Early attempts to formulate a universal clinical reasoning algorithm largely
failed due to the content specificity of the process. Success with one problem
was often a poor predictor of success with a second problem [36, 37]. Due to
this, experts shifted the focus of research in clinical reasoning to models of
knowledge organization [36]. This has led to descriptions of many potential
forms of storing prior knowledge as problem instances and prototypes, illness
scripts or concept networks, but no single optimal knowledge model has been
established [35, 38]. According to the currently much-discussed Dual Process
Theory, clinical reasoning can be decomposed into two separate but
interconnected processes. The first is a nonanalytical, intuitive process which
instantly leads experts to recognize a solution based on previously observed
patterns (System 1 process). The second is an analytical, conscious process
which produces and verifies solutions by the application of formal rules (System
2 process) [39]. There is disagreement in the current literature about which of
these two processes is more efficient. The nonanalytic process is rapid and often
observed in practice; however, it may be susceptible to cognitive biases [39].
On the other hand, the analytical method can be applied to new and challenging
problems but is slow, cognitively demanding and error-prone [40]. It is suggested
that to gain efficient clinical reasoning skills it is crucial to build an extensive
theoretical knowledge base and switch between the intuitive and analytical
processes as necessary [41].
Virtual patients support the acquisition of nonanalytical clinical reasoning
skills by teaching pattern recognition mechanisms. The exposure to a large
number of cases which virtual patients facilitate is well-suited to adding problem
instances to the memory structures used for recognition while reasoning [11]. In
addition, Monteiro and Norman recommend improvement of reasoning skills by
mixed practice, which involves presenting easily confusable examples together
in order to focus attention on discriminatory features [41]. This is often not
achieved by the natural distribution of medical problems in clinics, but can be
easily simulated with virtual patients.
It is argued that analytic reasoning processes could be supported by
virtual patients by adding specific types of questions that support differential
diagnosis (e.g. network questions in CASUS, which are described later in this
chapter). Posel et al. have recently published a list of twelve tips on
how to support development of clinical reasoning skills in virtual patients [42].
One of their recommendations is that tools should be provided for the better
collection and organization of clinical data and formulation of differential
diagnosis by the learner. A focus on communication, the use of proper language,
provision of opportunities to learn from errors, and ways to receive substantial
feedback within virtual patients are all important [42]. Clinical reasoning models
(such as that of Gruppen and Frohna [35]) are used as a foundation for
evaluation tools of virtual patient design [43].

7.6 The technology behind virtual patients


The technology of virtual patients is complex and grounded in disciplines
including internet technologies, computational modeling, computer graphics and
artificial intelligence [44]. The technology used depends also on the discussed
type of virtual patient. Considering the confines of this chapter we have to be
selective in our choice and focus on some of the typical technical issues
encountered primarily in the most popular category of virtual patients, namely
Interactive Patient Scenarios [1].

7.6.1 Virtual patient systems

Gone are the days when virtual patients were designed with generic web
authoring tools or multimedia presentation software. To make the use of virtual
patients efficient and sustainable, dedicated virtual patient systems should be used.
A virtual patient system usually comprises a player, an editor, a user
management module and an examination tool. The player displays the content of a
virtual patient case. Virtual patient players were previously distributed on
CD-ROMs (e.g. [45, 46]); however, it is now standard practice to develop virtual
patients as internet services, either as part of a web portal [13] or as immersive
online medical environments [47]. Examples of virtual patient systems are
Campus, CASUS, Decision Sim, Inmedea Simulator, Open Labyrinth and Web-
SP.
A virtual patient editor enables the authoring of new virtual patients or the
updating of existing ones (Fig. 7.4). Editors are designed to enable teachers
in the healthcare sciences to create and maintain virtual patients autonomously,
without the need for substantial computer skills [13]. They are accessible as
part of virtual patient system web portals, and authoring is achieved with simple
click-and-drag mechanisms that require no programming skills.

Fig. 7.4: Authoring tool in CASUS™ virtual patient system [48].

The task of the user management module within a virtual patient system is to
control who is allowed to access the virtual patient cases, and which functions
those users may use. Users may be granted read-only access to a case, or act as
authors or reviewers. Virtual patients can be grouped in
courses to be displayed to a selected group of users. Access can additionally be
time controlled (“spaced activation” [49]) or enabled by certain prerequisites
(e.g. completion of other virtual patient cases). In some systems it is possible to
generate user certificates which attest to the accomplishment of specific goals.
Additionally, it is often possible to display course usage statistics, giving
teachers insight into the performance of individual course participants. Virtual
patients may also be used for competency assessment [50]. For such systems it is
very important to provide network stability, detailed logging of user actions, and
mechanisms to prevent cheating.
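
The access rules described above are straightforward to express in code. The following minimal sketch in Python is illustrative only – the class and method names (CaseAccess, can_open) are our own invention, not the API of any existing virtual patient system – but it shows how roles, spaced activation [49] and prerequisite cases might be combined into a single access decision.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, Set

@dataclass
class CaseAccess:
    """Hypothetical access rule for one virtual patient case."""
    case_id: str
    allowed_roles: Set[str]                      # e.g. {"learner", "author", "reviewer"}
    opens_at: Optional[datetime] = None          # start of a "spaced activation" window
    closes_at: Optional[datetime] = None         # end of the window
    prerequisites: Set[str] = field(default_factory=set)  # case ids to complete first

    def can_open(self, role: str, completed: Set[str], now: datetime) -> bool:
        # role check, then time window check, then prerequisite check
        if role not in self.allowed_roles:
            return False
        if self.opens_at is not None and now < self.opens_at:
            return False
        if self.closes_at is not None and now > self.closes_at:
            return False
        return self.prerequisites <= completed   # all prerequisite cases done

A rule such as CaseAccess("vp-02", {"learner"}, opens_at=datetime(2015, 3, 2), prerequisites={"vp-01"}) would then grant a learner access to the second case only from 2 March 2015, and only after the first case has been completed.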
Finally, many years of virtual patient development have resulted in large
collections of cases. Such collections are sometimes
published in online repositories and made available for public use. In a recent
review Küfner et al. [51] found 17 repositories of virtual patients. One of them is
the eViP repository of 320 virtual patients [12] which has been developed by
eight participating medical universities across Europe as part of a European
Commission co-funded project. An interesting feature of the virtual patients from
the eViP project is that they have not been developed from scratch, but
repurposed from existing virtual patients to adapt them to the needs of new
contexts.

7.6.2 Components of virtual patients

Navigation structure
Virtual patients are designed so that their content is not revealed all at once;
instead, cases unfold gradually in response to learners’ actions. This design
feature can be implemented in various ways, one of them being the (semi-)linear
model, sometimes also called the “string of pearls” model [9]. The content of the
virtual patient is divided into parts called screen cards, which are always
displayed in the same order as the learner progresses through the case.
The semi-linearity comes from the fact that some of the screen cards have a
complex structure with additional options to choose from, such as expert
comments, detailed descriptions of patient images or new terminology, self-
assessment modules, etc. However, these options do not influence the next steps.
Screen cards may be template-based, meaning that certain elements of the
screen are already set (e.g. places to display images, questions, etc.). In some
systems the templates consist of a set of standard answers to questions that might
be put to the virtual patient, or results of examinations [13]. Designing such
cases requires replacing the default values with those characteristic of a
particular clinical condition.
An alternative to the linear model is the branched model, which allows the
learner to follow different paths through the case. This seems to be especially
helpful in training decision-making skills [52]. The learner is given a set of
options to choose from, with the selections influencing what happens later with
the virtual patient. This means that cases can have different outcomes depending
on the performance of the student [53].
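
Both navigation models can be captured by one simple data structure: a set of screen cards, each with named options pointing to successor cards. The sketch below, in Python, uses our own illustrative names (ScreenCard, play) rather than those of any particular system; a linear “string of pearls” case is simply the special case in which every card has at most one option.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class ScreenCard:
    """One unit of virtual patient content shown to the learner."""
    card_id: str
    text: str
    options: Dict[str, str] = field(default_factory=dict)  # option label -> next card id

def play(cards: Dict[str, ScreenCard], start: str,
         choose: Callable[[ScreenCard], str]) -> List[str]:
    """Walk through a case; `choose` picks one option label per card."""
    path, current = [], start
    while current is not None:
        card = cards[current]
        path.append(card.card_id)
        current = card.options[choose(card)] if card.options else None
    return path

# A short branched case: the learner's choice at "exam" changes the outcome.
cards = {
    "intro": ScreenCard("intro", "A 58-year-old man presents with chest pain.",
                        {"continue": "exam"}),
    "exam": ScreenCard("exam", "Examination findings ...",
                       {"order ECG": "ecg", "discharge": "bad_end"}),
    "ecg": ScreenCard("ecg", "ST elevation in leads II, III and aVF ..."),
    "bad_end": ScreenCard("bad_end", "The patient deteriorates at home."),
}
print(play(cards, "intro", lambda card: next(iter(card.options))))  # ['intro', 'exam', 'ecg']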

Interactive elements
To keep the learner focused and provide stimuli for reflection or triggers for
feedback, some virtual patients contain questions or interactive tasks. Beyond
the well-known and self-evident types such as multiple choice questions or free
text fields, there are also specific types designed to assess, for instance, specific
aspects of clinical reasoning, or to support specific phases of the learning
process.
A good example is the interactive element type called Network Answer in
the CASUS virtual patient system, which resembles script concordance test
items [54]. In the context of a virtual patient scenario the learner is provided
with a set of hypotheses (e.g. about diagnosis, investigations or treatments)
which are reasonable in a given situation, and a set of new findings that may (or
may not) have an effect on the given hypotheses. The task of the student is to
select on a Likert-type response scale (usually ranging from –2 to +2) how likely
the hypothesis is in the light of the new information. The goal of the exercise is
to probe a specific aspect of clinical reasoning, this being the ability to interpret
clinical data under conditions of uncertainty.
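
Like script concordance test items, such questions are typically scored against a reference panel of experts rather than a single “correct” answer. The following Python sketch illustrates the aggregate scoring scheme commonly described for script concordance testing [54] – credit proportional to the number of panelists who made the same choice – applied to the –2 to +2 scale; it shows the general scoring idea, not the scoring code of CASUS or any other system.

from collections import Counter
from typing import List

def concordance_score(learner_choice: int, panel_choices: List[int]) -> float:
    """Aggregate scoring: the modal expert answer earns full credit,
    defensible minority answers earn partial credit."""
    counts = Counter(panel_choices)        # choices on the -2..+2 Likert-type scale
    modal = max(counts.values())
    return counts.get(learner_choice, 0) / modal

# Ten panelists judged how a new finding changes the likelihood of a hypothesis:
panel = [2, 2, 2, 1, 1, 1, 1, 0, 0, -1]
print(concordance_score(1, panel))   # 1.0  -> matches the modal expert answer
print(concordance_score(2, panel))   # 0.75 -> partial credit
print(concordance_score(-2, panel))  # 0.0  -> no expert chose this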
In an attempt to make virtual patients more interactive, authors sometimes
include simulations of physiological or pharmacological models [55]. Recently,
we have proposed a framework illustrating this process in Interactive Patient
Scenarios using the example of a model developed within the Virtual Human
Project [56]. One of the proposed integration models is the addition of virtual
patient screen cards dubbed “What-If Nodes”, in which learners are given access
to a set of biomedical parameters to manipulate in order to experience the
reaction of the model to the change. This corresponds to the active
experimentation phase in Kolb’s learning cycle [32].
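
The effect of a “What-If Node” can be illustrated with a deliberately toy model. The sketch below uses the textbook relation that mean arterial pressure is approximately cardiac output times systemic vascular resistance; it is in no way the model used in the Virtual Human Project [56], but it shows the interaction pattern: the learner perturbs one parameter and observes the response.

def mean_arterial_pressure(cardiac_output: float, svr: float) -> float:
    """Toy haemodynamics: MAP (mmHg) ~ CO (L/min) * SVR (Wood units).
    Real physiological models are far more elaborate."""
    return cardiac_output * svr

baseline = mean_arterial_pressure(5.0, 18.6)        # ~93 mmHg
what_if = mean_arterial_pressure(5.0, 18.6 * 0.7)   # learner: what if SVR falls by 30%?
print(f"MAP falls from {baseline:.0f} to {what_if:.0f} mmHg")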

Game-informed elements
It has been noticed that younger learners are especially attracted to learning
activities which resemble game play [57]. We have already discussed a type of
virtual patient which uses general purpose or dedicated game engines to
represent virtual patients [8, 19, 20]. However, this approach has its drawbacks:
development costs are substantial; the academic community is generally
prejudiced against games; there are risks that the engine itself will distract from
the subject of learning with too much emphasis on the gaming aspect. To address
some of these issues the concept of game-informed learning is advocated [58].
The idea is to design education activities and tools which employ gaming
elements but do not directly use the same delivery mechanisms as games [58].
Examples of such elements include counters, rankings, scores, conditional
blocks and randomization, all of which have been added to virtual patient
systems such as Open Labyrinth [59]. It is also important to design the narratives of
virtual patients in a lively, engaging way with lots of detail including names,
faces, histories, etc. [58].
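
Randomization, in particular, is cheap to implement and multiplies the number of case variants, as Tworek et al. demonstrated by randomizing physiological data [59]. A minimal Python sketch of the idea – with purely illustrative ranges, not clinical reference values – might look as follows.

import random
from typing import Dict, Optional, Tuple

def randomized_vitals(profile: Dict[str, Tuple[float, float]],
                      seed: Optional[int] = None) -> Dict[str, float]:
    """Draw one set of vital signs from per-scenario plausible ranges,
    so that every play-through meets a slightly different patient."""
    rng = random.Random(seed)
    return {name: round(rng.uniform(lo, hi), 1) for name, (lo, hi) in profile.items()}

# Illustrative ranges for a septic patient scenario:
sepsis = {"HR": (105, 135), "SBP": (80, 100), "Temp": (38.3, 40.0), "RR": (22, 30)}
print(randomized_vitals(sepsis, seed=1))  # a fresh, plausible variant on each call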
7.6.3 Standards
Virtual patients and virtual patient systems can both be standardized. The ANSI
MedBiquitous Virtual Patient (MVP) data standard [60] specifies elements and
their syntax which can be used when transferring virtual patient content between
systems. The format specifies three types of mark-up files. Virtual Patient Data
contains a structured description of the patient’s clinical data. The Data Availability
Model provides a mechanism to describe how the media resources and virtual
patient data elements can be combined and displayed. Finally, the
Activity Model defines the navigational structure through the content of the
virtual patient and the order in which these elements are displayed, as well as
implementing branching. As part of the eViP project the MVP standard was
implemented in four virtual patient systems [61]. Over 320 cases from the
project are available in this format through the eViP repository [12].
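
The separation of concerns in the MVP format can be sketched programmatically. The Python fragment below builds a skeleton of the three mark-up components with xml.etree.ElementTree; the element names are simplified placeholders of our own, chosen only to show the division of roles – the normative element names and syntax are defined by the MedBiquitous specification itself [60].

import xml.etree.ElementTree as ET

# Virtual Patient Data: the structured clinical content.
vp_data = ET.Element("VirtualPatientData")
ET.SubElement(vp_data, "Finding", id="f1").text = "BP 85/50 mmHg"

# Data Availability Model: which data and media are shown together.
dam = ET.Element("DataAvailabilityModel")
ET.SubElement(dam, "Availability", id="a1", refs="f1")

# Activity Model: the navigation structure, including branches.
activity = ET.Element("ActivityModel")
node = ET.SubElement(activity, "Node", id="n1", dam="a1")
ET.SubElement(node, "Branch", target="n2", label="Give IV fluids")

print(ET.tostring(activity, encoding="unicode"))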
Another area of use of virtual patient standards concerns the integration of
virtual patient systems with other elements of the technical e-learning
infrastructure of universities. For instance a mechanism to provide access to
virtual patient cases from the CASUS system in the Moodle virtual learning
environment can be implemented using the SCORM/AICC-HACP interface
[62]. More recently, Stathakarou et al. demonstrated how to integrate the virtual
patient system Open Labyrinth with the Open edX platform using the IMS LTI
specification [63].

7.7 How to use virtual patients?


When introducing virtual patients, curricular integration and context are
important for learner acceptance and learning success. Virtual
patients should not be implemented as isolated add-ons, but well integrated into
the overall course and curriculum design [64]. According to Cook and Dupras
[65], one cannot simply assume that learners will use available e-learning
resources if they are not motivated to do so. This is in line with many other
studies (e.g. [14, 66, 67]).
As for technology-based learning resources in general, there are also
numerous ways to integrate virtual patients into a curriculum.
7.7.1 Preparation for or follow-up of face-to-face teaching

As mentioned before, virtual patients are suitable teaching activities to expose
learners to many “patients” in a safe environment in which learners can train and
make errors without causing any harm. In this way learners can be prepared for
real life (face-to-face) activities, for example in the form of bedside teaching
with real patients or clerkships. This approach can also be applied in an inverted
(flipped) classroom setting: learners could, for example, prepare for a seminar
in which questions or details of the prepared cases, or similar cases, can be discussed
[68]. Edelbring et al. showed in their study that a follow-up seminar can enhance
the perceived benefit to students [64]. But virtual patients can also be a valuable
activity to follow up a face-to-face course, e.g. to apply knowledge learnt during
the course or to explore variants of a disease encountered in a real patient.

7.7.2 Integration into a face-to-face session

Virtual patients are also sometimes directly integrated into face-to-face teaching
sessions such as problem-based learning (PBL) tutorials. Poulton et al.
conducted a study in which traditional paper-based cases in a PBL tutorial were
replaced with branched web-based virtual patients [15]. They showed that with
such branched virtual patients they could overcome the inflexibility of paper
cases, and students and teachers found the virtual patients more engaging than
paper cases.

7.7.3 Assessment

In recent years the introduction of virtual patients for assessment purposes has
increased. Courteille et al. [69] conducted a pilot study on the use of an
interactive patient simulation for an Objective Structured Clinical Examination
(OSCE)-based exam. They showed that the virtual patient could reliably
differentiate between the performances of the students. Another implementation
scenario is the key feature approach [70]. A key feature (KF) is defined as “a
critical step in the resolution of a problem” and can be in the form of a short
virtual patient scenario that focuses on a specific problem or decision. Two
facets were suggested by Page and Bordage: (1) a KF should focus on a step in
which examinees are most likely to make errors in the resolution of the problem,
and (2) a KF is a difficult aspect in the identification and management of a
problem in practice [71]. A study by Fischer et al. introduced a 15-item key
feature exam for assessing clinical decision-making skills in undergraduate
students [72]. They found that their modified key feature exam was a reliable,
valid and feasible method for assessing clinical decision-making skills. A
computer-based case simulation (CCS) was introduced in 1999 [73] for the
United States Medical Licensing Examination (USMLE) Step 3 examination.

7.7.4 Learning-by-teaching approach

In such an approach learner activity consists of creating a virtual patient instead
of working through one. Learners usually work in groups with tutorial guidance
to develop a virtual patient, including the patient story, interactive elements and
multimedia material. After an expert (content and pedagogy) review such virtual
patients can be integrated into the curriculum as learning resources. This
approach has been described, for example, by Ellaway et al. [74] and Fleiszer et
al. [75].

7.8 The future of virtual patients


Systematic reviews of the effectiveness of virtual patients consistently show that
virtual patients improve competencies such as clinical reasoning when compared
with no intervention, and have the potential to be as effective as other types of
learning [76, 77]. Still to be discovered are the circumstances under which it is
most effective to use virtual patients. We also know little about the cost-effectiveness
of virtual patients. Solutions are being sought to make the
authoring and maintenance process even simpler and quicker than it is now. It is
also still uncertain which virtual patient elements are the most beneficial and
important to focus on when developing new cases.
Inspired by research on the social aspects of learning, Ellaway advocates the
use of activity theory when the future agenda is set for virtual patient research
[78]. Other developments are related to the use of virtual patients on a massive
scale in courses involving thousands of participants (MOOCs). Simultaneous
learning with so many peer learners worldwide provides new educational
opportunities that did not exist in the past [79]. Rossen and Lok have an inventive
idea for reducing the cost of virtual patients which involves engaging Internet
users (so-called crowdsourcing) in their development [80]. New
opportunities for virtual patients are presented by augmented and mixed-reality
technologies (e.g. using mobile devices, haptic technology or the display of
virtual patients on mannequins using specialized projectors) [81, 82].

Acknowledgment
The authors wish to thank Michael Timberlake for his comments on the chapter.

References
[1] Kononowicz AA, Zary N, Edelbring S, Corral J and Hege I. Virtual Patients – What are we
talking about? A framework to classify the meanings of the term in healthcare education. BMC
Med Educ. 2015;15:11.
[2] Ellaway R, Candler C, Greene P and Smothers V. An Architectural Model for MedBiquitous
Virtual Patients: MedBiquitous White Paper [Internet]. Baltimore, MD; 2006. Available from:
http://groups.medbiq.org/medbiq/display/VPWG/MedBiquitous+Virtual+Patient+Architecture
(accessed January 7, 2015)
[3] AAMC. Colloquium on Educational Technology: Recommendations and Guidelines for
Medical Educators. Washington, D.C.; 2007.
[4] Bitzer M. Clinical nursing instruction via the PLATO simulated laboratory. Nurs Res. 1966;
15(2):144–150.
[5] De Dombal FT, Hartley JR and Sleeman DH. A computer-assisted system for learning clinical
diagnosis. Lancet. 1969;1(7586):145–8.
[6] Harless W, Drennon G, Marxer J, Root J and Miller G. CASE: A computer-aided simulation of
the clinical encounter. Acad Med. 1971;46(5):443–8.
[7] Davis TL and Mark RG. Teaching physiology through simulation of hemodynamics. Proc
Comput Cardiol IEEE Comput Soc Press. 1990;649–652.
[8] Talbot T, Sagae K, John B and Rizzo A. Sorting out the Virtual Patient: How to exploit
Artificial Intelligence, Game Technology and Sound Educational Practices to create engaging
role-playing simulations. Int J Gaming Comput Simulations. 2012;4(3):1–19.
[9] Huwendiek S, De Leng BA, Zary N, Fischer MR, Ruiz JG and Ellaway R. Towards a typology
of virtual patients. Med Teach. 2009;31(8):743–8.
[10] Beux P and Fieschi M. Virtual biomedical universities and e-learning. Int J Med Inform. 2007;
76(5–6):331–335.
[11] Cook DA and Triola MM. Virtual patients: a critical literature review and proposed next steps.
Med Educ. 2009;43(4):303–11.
[12] eViP Project. Virtual Patient Repository [Internet]. Available from:
http://www.virtualpatients.eu/referatory/ (accessed January 7, 2015)
[13] Zary N, Johnson G, Boberg J and Fors UGH. Development, implementation and pilot
evaluation of a web-based Virtual Patient case simulation environment – Web-SP. BMC Med
Educ. 2006;6:10.
[14] Hege I, Kopp V, Adler M, Radon K, Mäsch G, Lyon H, et al. Experiences with different
integration strategies of case-based e-learning. Med Teach. 2007;29(8):791–7.
[15] Poulton T, Conradi E, Kavia S, Round J and Hilton S. The replacement of “paper” cases by
interactive online virtual patients in problem-based learning. Med Teach. 2009;31(8):752–8.
[16] Parikh SS, Chan S, Agrawal SK, Hwang PH, Salisbury CM, Rafii BY, et al. Integration of
patient-specific paranasal sinus computed tomographic data into a virtual surgical
environment. Am J Rhinol Allergy. 2013;23(4):442–7.
[17] Kenny P, Parsons TD, Gratch J and Rizzo AA. Evaluation of Justina: a virtual patient with
PTSD. Lect Notes Comput Sci. 2008;5208:394–408.
[18] Deladisma AM, Gupta M, Kotranza A, Bittner JG, Imam T, Swinson D, et al. A pilot study to
integrate an immersive virtual patient with a breast complaint and breast examination
simulator into a surgery clerkship. Am J Surg. 197(1):102–6.
[19] Heinrichs WL, Youngblood P, Harter P, Kusumoto L and Dev P. Training healthcare personnel
for mass casualty incidents in a virtual emergency department: VED II. Prehosp Disaster
Med. 2010;25(5):424–431.
[20] Conradi E, Kavia S, Burden D, Rice A, Woodham L, Beaumont C, et al. Virtual patients in a
virtual world: Training paramedic students for practice. Med Teach. 2009;31(8):713–20.
[21] LeFlore JL, Anderson M, Zielke MA, Nelson KA, Thomas PE, Hardee G, et al. Can a virtual
patient trainer teach student nurses how to save lives – teaching nursing students about
pediatric respiratory diseases. Simul Healthc. 2012;7(1):10–7.
[22] CliniSpace – Immersive Learning Environments for Healthcare. Available from:
http://www.clinispace.com/ (accessed January 7, 2015)
[23] Edelbring S, Dastmalchi M, Hult H, Lundberg IE and Dahlgren LO. Experiencing virtual
patients in clinical learning: a phenomenological study. Adv Health Sci Educ Theory Pract.
2011; 16(3):331–45.
[24] Tan A, Ross S and Duerksen K. Death is not always a failure: outcomes from implementing
an online virtual patient clinical case in palliative care for family medicine clerkship. Med Educ
Online. 2013;1:1–5.
[25] Fall LH, Berman NB, Smith S, White CB, Woodhead JC and Olson AL. Multi-institutional
development and utilization of a computer-assisted learning program for the pediatrics
clerkship: The CLIPP Project. Acad Med. 2005;80(9):847–855.
[26] LCME. Standards for Accreditation of Medical Education Programs Leading to the M.D.
Degree [Internet]. 2013. Available from: http://www.lcme.org/publications/2014–2015-
functions-and-structure-june-2013.doc (accessed January 7, 2015)
[27] Cook DA and Triola MM. What is the role of e-learning? Looking past the hype. Med Educ.
2014;48(9):930–7.
[28] Liyanagunawardena TR and Williams SA. Massive open online courses on health and
medicine: review. J Med Internet Res. 2014;16(8):e191.
[29] Ellaway RH, Pusic MV, Galbraith RM and Cameron T. Developing the role of big data and
analytics in health professional education. Med Teach. 2014;36(3):216–22.
[30] Proaño A and Ruiz EF. What medical students need: Virtual patients or real patients? Med
Teach. 2014;43(4):1.
[31] Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ. 2009;43(4):312–9.
[32] Kolb DA. Experiential Learning: Experience as the Source of Learning and Development.
Englewood Cliffs, N.J.: Prentice-Hall; 1984.
[33] Yardley S, Teunissen PW and Dornan T. Experiential learning: AMEE Guide No. 63. Med
Teach. 2012;34(2):e102–15.
[34] Ziv A, Wolpe P, Small S and Glick S. Simulation-based medical education: an ethical
imperative. Acad Med. 2003;78(8):783–788.
[35] Gruppen LD and Frohna AZ. Clinical reasoning. In: Norman G, Van der Vleuten CPM,
Newble DI, eds. International Handbook of Research in Medical Education. Dordrecht: Kluwer
Academic Publishers; 2002, pp. 205–230.
[36] Norman G. Research in clinical reasoning: past history and current trends. Med Educ.
2005;39(4):418–27.
[37] Bearman M, Cesnik B and Liddell M. Random comparison of “virtual patient” models in the
context of teaching clinical communication skills. Med Educ. 2001;35(9):824–832.
[38] Charlin B, Boshuizen HP, Custers EJ and Feltovich PJ. Scripts and clinical reasoning. Med
Educ. 2007;41(12):1178–84.
[39] Croskerry P. A universal model of diagnostic reasoning. Acad Med. 2009;84(8):1022–8.
[40] Norman GR and Eva KW. Diagnostic error and clinical reasoning. Med Educ. 2010;44(1):
94–100.
[41] Monteiro SM and Norman G. Diagnostic reasoning: where we’ve been, where we’re going.
Teach Learn Med. 2013;25 Suppl 1:S26–32.
[42] Posel N, McGee JB and Fleiszer DM. Twelve tips to support the development of clinical
reasoning skills using virtual patient cases. Med Teach. 2014 Dec 19:1–6. [Epub ahead of
print]
[43] Huwendiek S, De Leng BA, Kononowicz AA, Kunzmann R, Muijtjens AMM, Van Der Vleuten
CPM, et al. Exploring the validity and reliability of a questionnaire for evaluating virtual patient
design with a special emphasis on fostering clinical reasoning. Med Teach. 2014 Oct 14:1–8.
[Epub ahead of print]
[44] Tadeusiewicz R. Application in VP systems individualization of distance learning process
using student’s psychological profiles obtained by means of artificial intelligence methods.
Bio-Algorithms and Med-Systems. 2009;5(9):47–50.
[45] Wahlgren C-F, Edelbring S, Fors U, Hindbeck H and Ståhle M. Evaluation of an interactive
case simulation system in dermatology and venereology for medical students. BMC Med
Educ. 2006;6:40.
[46] Sanders C, Kleinert H and Free T. Developmental disabilities: improving competence in care
using virtual patients. J Nurs Educ. 2008;47(2):66–73.
[47] Dev P, Heinrichs WL and Youngblood P. CliniSpace™: A Multiperson 3D Online Immersive
Training Environment Accessible through a Browser. Stud Health Technol Inform.
2011;163:173–179.
[48] CASUS Authoring System [Internet]. Available from: http://www.instruct.eu/?q=en/node/106
(accessed January 7, 2015)
[49] Maier EM, Hege I, Muntau AC, Huber J and Fischer MR. What are effects of a spaced
activation of virtual patients in a pediatric course? BMC Med Educ. 2013;13:45.
[50] Round J, Conradi E and Poulton T. Improving assessment with virtual patients. Med Teach.
2009;31(8):759–63.
[51] Küfner J, Kononowicz A and Hege I. Virtual patient repositories – a comparative analysis.
Stud Health Technol Inform. 2014;205:788–792.
[52] Poulton T, Ellaway RH, Round J, Jivram T, Kavia S and Hilton S. Exploring the efficacy of
replacing linear paper-based patient cases in problem-based learning with dynamic web-
based virtual patients: randomized controlled trial. J Med Internet Res. 2014;16(11):e240.
[53] Round J, Conradi E and Poulton T. Training staff to create simple interactive virtual patients:
the impact on a medical and healthcare institution. Med Teach. 2009;31(8):764–9.
[54] Lubarsky S, Dory V, Duggan P, Gagnon R and Charlin B. Script concordance testing: from
theory to practice: AMEE guide no. 75. Med Teach. 35(3):184–93.
[55] Cendan J and Lok B. The use of virtual patients in medical school curricula. Adv Physiol
Educ. 2012;36(1):48–53.
[56] Kononowicz A, Narracott AJ, Manini S, Bayley MJ, Lawford PV, McCormack K, et al. A
framework for different levels of integration of computational models into web-based virtual
patients. J Med Internet Res. 2014;16(1):e23.
[57] Prensky M. Digital natives, digital immigrants Part 1. Horiz. 2001 Sep;9(5):1–6.
[58] Begg M. Leveraging game-informed healthcare education. Med Teach. 2008;30(2):155–8.
[59] Tworek J, Paget M, McLauglin K and Wright B. How Dungeons & Dragons made us better
VPs: randomizing physiological data to rapidly produce 97 clinically realistic VPs. Bio-
Algorithms and Med-Systems. 2010;6(11):41–45.
[60] Smothers V, Greene P, Ellaway R and Detmer DE. Sharing innovation: the case for
technology standards in health professions education. Med Teach. 2008;30(2):150–4.
[61] Zary N, Hege I, Heid J, Woodham L, Donkers J and Kononowicz AA. Enabling
interoperability, accessibility and reusability of virtual patients across Europe – design and
implementation. Stud Health Technol Inform. 2009;150:826–830.
[62] Kononowicz AA, Hege I, Adler M, De Leng BA, Donkers J and Roterman I. Integration
scenarios of virtual learning environments with Virtual Patients Systems. e-mentor.
2010;5(37):82–86.
[63] Stathakarou N, Zary N and Kononowicz AA. Beyond xMOOCs in healthcare education:
study of the feasibility in integrating virtual patient systems and MOOC platforms. PeerJ.
2014;2:e672.
[64] Edelbring S, Broström O, Henriksson P, Vassiliou D, Spaak J, Dahlgren LO, et al. Integrating
virtual patients into courses: follow-up seminars and perceived benefit. Med Educ. 2012;
46(4):417–25.
[65] Cook DA and Dupras DM. A practical guide to developing effective web-based learning. J
Gen Intern Med. 2004;19(6):698–707.
[66] Baumlin KM, Bessette MJ, Lewis C and Richardson LD. EMCyberSchool: an evaluation of
computer-assisted instruction on the Internet. Acad Emerg Med. 2000;7(8):959–62.
[67] Edelbring S. Measuring strategies for learning regulation in medical education: scale
reliability and dimensionality in a Swedish sample. BMC Med Educ. 2012;12(1):76.
[68] Sharma N, Lau CS, Doherty I and Harbutt D. How we flipped the medical classroom. Med
Teach. 2015;37(4):327–30.
[69] Courteille O, Bergin R, Stockeld D, Ponzer S and Fors U. The use of a virtual patient case in
an OSCE-based exam – a pilot study. Med Teach. 2008;30(3):e66–76.
[70] Bordage G, Brailovsky C, Carretier H and Page G. Content validation of key features on a
national examination of clinical decision-making skills. Acad Med. 1995;70(4):276–81.
[71] Page G and Bordage G. The Medical Council of Canada’s key features project: a more valid
written examination of clinical decision-making skills. Acad Med. 1995;70(2):104–10.
[72] Fischer MR, Kopp V, Holzer M, Ruderich F and Jünger J. A modified electronic key feature
examination for undergraduate medical students: validation threats and opportunities. Med
Teach. 2005;27(5):450–5.
[73] Dillon G, Boulet J, Hawkins R and Swanson D. Simulations in the United States Medical
Licensing Examination™ (USMLE™). Qual Saf Health Care. 2004;13(suppl 1):i41–i45.
[74] Ellaway R, Poulton T, Fors U, McGee JB and Albright S. Building a virtual patient commons.
Med Teach. 2008;30(2):170–4.
[75] Fleiszer DM, Posel N, Masterson J and Peterson L. Students as primary virtual patient case
authors: The University of British Columbia Model. AMEE 2010 Abstr B [Internet] Glasgow,
UK; 2010. p. 450. Available from: http://www.amee.org/getattachment/Conferences/AMEE-
Past-Conferences/AMEE-Conference-2010/AMEE-2010-Abstract-book.pdf (accessed
January 7, 2015)
[76] Cook DA, Erwin PJ and Triola MM. Computerized virtual patients in health professions
education: a systematic review and meta-analysis. Acad Med. 2010;85(10):1589–602.
[77] Consorti F, Mancuso R, Nocioni M and Piccolo A. Efficacy of virtual patients in medical
education: A meta-analysis of randomized studies. Comput Educ. 2012;59(3):1001–1008.
[78] Ellaway RH. Virtual patients as activities: exploring the research implications of an activity
theoretical stance. Perspect Med Educ. 2014;3(4):266–77.
[79] Stathakarou N, Zary N and Kononowicz AA. Virtual patients in massive open online courses-
design implications and integration strategies. Stud Health Technol Inform. 2014;205:793–7.
[80] Rossen B and Lok B. A crowdsourcing method to develop virtual human conversational
agents. Int J Hum Comput Stud. 2012;70(4):301–319.
[81] Coles T, John N, Gould D and Caldwell D. Integrating haptics with augmented reality in a
femoral palpation and needle insertion training simulation. IEEE Transactions on Haptics.
2011;4(3):199–209.
[82] Rivera-Gutierrez D, Welch G, Lincoln P, Whitton M, Cendan J, Chesnitt DA, et al. Shader
lamps virtual patients: the physical manifestation of virtual patients. Stud Health Technol
Inform. 2012;173:372–378.
Dick Davies, Peregrina Arciaga, Parvati Dev, and Wm LeRoy
Heinrichs
8 Interactive virtual patients in immersive
clinical environments: The potential for
learning
Whilst structured and managed interaction with real patients in clinical settings
is in principle the ideal medium for learning, the reality is that access is often
constrained and unsatisfactory. One solution is to explore the deployment of
online immersive environments as these are already used widely by the
generations now undertaking clinical education. A case-study-centric approach
shows how a merger of two technologies – immersive clinical environments and
interactive virtual patients, with both offline and real-time modifiable virtual
patient models at its core – was designed, developed and is now being deployed
in clinical education to teach Inter-Professional Education (IPE). Immersive
environments were chosen because they provided an affordable, manageable
solution to the problems encountered in delivering IPE in face-to-face sessions.
It was demonstrated that interactive virtual patients in immersive clinical
environments offer a specific set of social and clinical experiences that are
appropriate for learning by doing … together. The approach developed
responded to the issues raised in providing clinically realistic learning
experiences such as tutor and learner availability, standardization, engagement,
the learning experience and cost. The case study is complemented by recent and
ongoing research that abstracts from the case study’s specific experiences to
offer a set of referenced, more general conclusions on simulation in medicine
covering choices of learning approach and immersive technologies.

8.1 Introduction
At first sight, a consumer ‘street’ technology such as virtual worlds, developed
for computer gaming, would not seem an ideal candidate as a
serious tool for learning in medicine. As we will see, however, practice and
current research indicate otherwise.
Since the early 2000s medical faculty have been exploring the use of
immersive environments for learning. The forerunner in the field was Stanford
University School of Medicine in the US. Having previously developed
mathematical models of disease that could be run on a computer, it was a
visionary choice, when the visualization technologies of the early virtual worlds
arrived, to explore them by building patient avatars powered by these algorithms
and then placing them in virtual clinical environments. These interactive virtual
patients ran in real-time, could be treated in real-time and responded in real-time.
This was a true breakthrough. For the first time a nonlinear 3D virtual patient
could be both seen and managed by healthcare professionals.
This chapter starts by exploring, using a real-life clinical case study as an
example, how virtual world technologies – also sometimes called immersive
environments – coupled with interactive virtual patients are increasingly being
deployed in medicine. In fact, virtual worlds are being deployed in medicine in a
whole variety of ways: from providing a simple familiarization walkthrough of a
new medical facility, through to interactive demonstrations of complex clinical
procedures, through to deeply immersive clinical environments featuring
treatable real-time interactive virtual patients’ pre-hospital situations. After the
case study the question, ‘What potential do they have for learning in medicine?’
is examined in some depth before a final set of conclusions is drawn.
One of the major recent advances is the development of interactive
virtual patients. How these ‘patients’ are developed and then deployed in clinical
virtual worlds is therefore a key focus of this chapter as they are beginning to
bring a genuine sense of realism to immersive clinical environments. In
particular, how and what can we learn by deploying interactive virtual patients in
immersive environments? This introductory section first describes virtual
worlds and then virtual patients, before moving on to examine how they
are being jointly deployed in medicine.

8.2 What are virtual worlds?


Virtual worlds are now an accepted technology, deployed widely in consumer
gaming and with which most of the millennial generation, i.e. those born post-
1980, are familiar. To underpin this statement it is worth stating a couple of
facts:
– The average age of a computer gamer is now 34 [1].
– The revenue of the computer gaming industry is larger than that of the film
industry [2].
Known also as immersive environments, virtual worlds can be defined as a
synchronous, persistent network of people, represented by avatars, facilitated by networked computers
[3].

This has been further developed to:


a virtual world operates in real time, exists whether the participants are in the world or not, is a social
space in which people are digitally represented and so can interact, and is underpinned by networked
computers that manage the world and its interactions [4].

There is a wide range of variations, but virtual worlds are computer simulations
offering some or all of the following [5]:
– 3-dimensional spaces
– people represented by avatars
– objects in the world are persistent and can be interacted with, e.g. moveable
chairs and vehicles
– communication is usually in real-time: voice, text and gesture.
Virtual worlds, however, do not have to represent the real world. Virtual world
technologies have developed rapidly thanks to the vast video game market,
whose products are, in the main, fantasy-style environments, e.g. World of Warcraft.

8.3 Immersive Clinical Environments (Virtual Clinical Worlds)
A Serious Virtual World – a term coined by one of the authors of this chapter
(DD) and subsequently used in a substantial UK study [6] – can, on the other hand,
be defined as a virtual world developed and deployed for professional purposes.
A systematic literature review of the use of 3D virtual worlds in the healthcare
sector concluded that “pedagogical uses are the most important applications of
3D VWs in healthcare” [7].
Where virtual worlds are used specifically for clinical purposes they could be
termed Immersive Clinical Environments or Virtual Clinical Worlds [4].
Heinrichs et al. went on to propose that in clinical practice virtual worlds are
being deployed because they offer the following attributes [4]:
– Presence
– Immersion i.e. engagement
– Team-based activities
– Real workplace settings
– Safe ‘play spaces’
– Relatively low cost (compared to custom healthcare game development)
We now explore virtual patients, before moving on to examine how virtual
patients can work within virtual worlds.

8.4 Virtual patients


The genesis of virtual patients is the “human standardised patient” developed by
Dr. Howard Barrows in 1963. Whilst computer simulations of patients started in
the 1960s, it is only recently, with the widespread uptake of computers in medical
education, that virtual patients, as they are now understood, have expanded. The
term “virtual patients”, however, is an umbrella term for a range of distinct
approaches.
Such approaches include case presentations, interactive patient scenarios, virtual patient games, human
standardised patients, high fidelity software simulations, high fidelity manikins and virtual
conversational agents [8].

The authors above explore these approaches in more detail. This categorization
is summarized below.
– Case presentations
Text or simple computer-based presentations followed by a series of
multiple-choice questions. Easy to author.
– Interactive patient scenarios
Usually an online text-based multimedia interactive presentation that is
similar to text-based adventure games. If the user cannot control the patient it
is called a linear case scenario. If the progress of the case is altered following
the user’s decisions it is called a branching scenario. There is a MedBiquitous
national virtual patient standard that offers universal playability and
assessment reporting [9]. Often difficult to author.
– Virtual patient “games”
Classically these are developed using commercially available game engines
from the video game world and consist of patient avatars in virtual clinical
worlds. Typically, they offer a 3D clinical environment, a human physiology
engine, 3D avatars, data displays, medical procedure capability and reporting
assessment capabilities. Traditionally they have been difficult and expensive
to develop scenarios for, but as we will see in this chapter this is changing.
– High fidelity software simulations
Offer real-time simulation of human physiology e.g. anesthesia simulators.
The interface in this case shows vital signs such as breathing, blood pressures,
etc. A patient avatar may or may not be present. Expensive to develop.
– High fidelity manikins
The manikins have realistic anatomy and use electronic actuators to offer
physical effects such as breathing, eye movement, pulses, etc. Most are
controlled by human operators. They are not generally described as virtual
patients. Resource intensive both in terms of physical space, staffing and
equipment.
– Human standardized patients
These are paid actors who pretend to be patients. HSPs provide an unrivalled
experience, in particular for practicing social skills, communication skills,
judgment and diagnostic acumen. When these are combined as interviewing
skills they offer real-time clinical problem-solving experiences. Expensive due
to cost of hiring and training the actors.
– Virtual human conversation agents
These are artificial intelligence agents capable of natural language
conversations. They can be represented in virtual environments and may offer
speech and facial movement and body gestures. When deployed in clinical
settings they can be termed virtual standardized patients. Advancing rapidly,
but still in the research stages of development.

The boundaries between the virtual patient approaches above are not formal. As
technologies and practical implementations rapidly develop so the approaches
bleed over or merge into one another. Examples are Human Conversational
Agents being deployed in Virtual World Games [10–12] and Virtual World
Games in High Fidelity Software Simulations [8].

8.5 Interactive virtual patients in immersive clinical environments
The focus of this chapter is on interactive virtual patients in virtual worlds.
These would seem to span the approaches, “Interactive Virtual Patients”,
“Virtual Patient Games” and “High Fidelity Software Simulations” as
categorized by Talbot [8] above. Given the frivolous connotation often attached to
the term “games” in professional contexts, the terms immersive
clinical environments (ICE) and virtual clinical worlds (VCW) are preferred and
used here.
From Talbot [8] above we can see the characteristics of the interactive virtual
patient are:
– 3D clinical environment,
– a human physiology engine,
– 3D avatars,
– data displays,
– medical procedure capability,
– reporting or assessment capabilities.
In practice this would mean that on entering the virtual environment (which in
fact need not be in a medical facility as it could be a pre-hospital setting), the
patient avatar(s) will be evident. If conscious, he or she can be ‘interviewed’ in
some restricted way, either by text or potentially by voice. The patient avatar can
also be examined, e.g. by auscultating the heart or chest; physiological
monitors, e.g. a BP cuff, can be attached and data displays appear. Finally, the
clinician will be able to treat or order treatments for the patient on the basis of a
diagnosis (normally recorded on an in-world medical record), e.g. IV, oral
medication, etc. In response to the intervention the virtual patient may respond
physiologically, e.g. with a drop in heart rate. During this time the virtual environment will
be capturing clinical activity for reporting or assessment. However,
Most of the few virtual patient models available for deployment in virtual environments are execute
only, i.e. they run from start to finish and cannot be interrupted or even, in most cases, parameterised simply
at start time. Two types of virtual patient designs can be distinguished: a ‘narrative’ or passive structure
and a ‘problem-solving’ or active structure. In the narrative/passive cases, the simulation represents a
single medical state, often in considerable detail, and with relevant graphics, audio and visual media
displaying the patient’s medical condition. Fewer simulations support the evolution of the ‘problem-
solving’/active patient’s state, both with and without medical intervention. In the problem-
solving/active model, one specifies both gradual changes in physiologic variables as well as a number
of discrete important ‘states’, with the patient moving from state to state based on the virtual patient’s
condition and on the actions taken by the learner [13].
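
The ‘problem-solving’/active design described in this quotation maps naturally onto a state machine whose transitions depend both on learner actions and on time. The Python sketch below is a minimal illustration under our own simplifying assumptions (invented state names, a single drifting variable), not the design of any particular product.

from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class ActiveVirtualPatient:
    """Discrete clinical states plus gradual physiological drift; both
    learner actions and inaction move the patient between states."""
    state: str = "compensated_shock"
    heart_rate: float = 128.0
    transitions: Dict[Tuple[str, str], str] = field(default_factory=lambda: {
        ("compensated_shock", "iv_fluids"): "stabilizing",
        ("compensated_shock", "wait"): "decompensated_shock",
        ("stabilizing", "wait"): "stable",
    })

    def step(self, action: str) -> str:
        self.state = self.transitions.get((self.state, action), self.state)
        # gradual change of a physiologic variable within the new state
        self.heart_rate += {"stabilizing": -8, "stable": -4}.get(self.state, +6)
        return self.state

vp = ActiveVirtualPatient()
print(vp.step("iv_fluids"), vp.heart_rate)  # stabilizing 120.0
print(vp.step("wait"), vp.heart_rate)       # stable 116.0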

What is clear is that at the center of this environment is the underlying human
physiology engine. Poulton [14] has argued that
It is now possible to consider the extension of the current relatively lightweight VP into a truly
interactive patient simulation, an ‘e-human’ or ‘digital avatar’. At this stage the simulation takes on new
capabilities, offering authentic patient management, clinical and communication skills training; the
potential capability to mimic the health or disease of any citizen.

One hope that does come through strongly in the literature is that:
Future progress must be made to develop rich, guided authoring tools that will allow medical educators,
not just computer scientists, to develop virtual patients … [8].

We will be addressing that requirement in more detail in this chapter.


This section has introduced the powerful combination of immersive clinical
environments and interactive virtual patients. The chapter explores how this
combination is deployed in practice at the Charles R. Drew/UCLA School of
Medicine in the US with an in-depth case study, before moving on to ask “what
potential do they have for learning in medicine”?

8.6 Case study: Using immersive clinical environments for Inter-Professional Education at Charles R. Drew University of Medicine

8.6.1 Introduction to case study

This is a case study of a simulation using an immersive environment to replicate
some or nearly all of the essential aspects of a specific clinical situation, in order
that the situation can be more readily understood and managed when it occurs
for real in clinical practice. The case study took place at the Charles R. Drew
University of Medicine, an affiliate of the UCLA School of Medicine
(Drew/UCLA School of Medicine).
In practice-based healthcare professions, methods of teaching and learning
focus on enabling students to assimilate clinical knowledge and skills. Medical
and nursing students need to learn how to apply classroom learning in a clinical
context. Importantly, in this case study where these concepts were previously
taught in silos they were integrated to mirror how they worked inter-
professionally in clinical practice.
There is an increased recognition of the importance of inter-professional
training for professional accreditations. Inter-Professional Education (IPE) is
usually taught as a combination of didactic lectures and case scenarios.
Challenges encountered with case simulation include temporal, geographic and
logistical constraints, limited resources, difficulty in setting up scenarios, and the need for
students and facilitator to be physically present at the same location. Immersive
simulation would seem to be an ideal educational modality to teach and train
students in IPE and, in addition, help to alleviate some of the challenges of
implementing case scenarios.
The presentation of the case study is set in the context of the clinical skills
required for the preclinical/undergraduate learners.
– Technical skills (practical clinical skills including literacy and comfort in
computer technology) – in clinical practice, you cannot thrive without being
computer and technology literate.
– Cognitive (basic standard of care) – students must know, for example,
anatomy and pathophysiology
– Nontechnical/Soft Skills (teamwork, collaboration, communication)
The above three skillsets are the basic prerequisites that learners needed
prior to joining a simulation session in order to have a meaningful encounter
with the virtual patient. These are the same basic skills that we need to nurture
during the training of our healthcare professionals in real life.
The case study was conducted over the period June to July 2014 by the faculty of
the College of Medicine and the School of Nursing at the Drew/UCLA School of
Medicine. The participants were 59 medical students and 68 nursing students, 1
faculty facilitator, 4 IT personnel, 2 IRB student advocates, and 2 virtual
standardized patient actors/gamer virtual tech support staff. The case study was
funded by the Medical Student Affairs Funds of the Drew/UCLA College of
Medicine.

8.6.2 The case study

Entering into a project of the complexity of “Using immersive clinical
environments to teach Inter-Professional Education” should be done using a
staged project management approach to avoid pitfalls. Initially, a decision has to
be made as to whether an immersive environment can deliver IPE and then, if
the belief is that it can, management support and associated funding and other in-
house resources, e.g. staff, have to be found. Once that is in place, the case study
must be designed with a particular emphasis on the clinical and educational
standpoints: how will the simulation assist students in learning the skills for IPE?
The design is now framed and the hard work of developing the components for
delivering the simulation must now begin: detailed learning and teaching
objectives have to be developed together with any associated assessment; the
simulation scenarios have to be written, the immersive environment configured
and the interactive virtual patients authored; finally the training courses for the
students, facilitators and assessors have to be written. The last part of the case
study, its delivery to medical and nursing students using the developed
resources, can now, at last, take place. The following sections now take this
journey.

The challenges of Inter-Professional Education


There is a range of approaches available for teaching IPE, including problem-
based learning discussions, didactic lectures, web-based PowerPoint lectures,
workshops on IPE, and mannequin/standardized patient hybrid IPE simulation. The
challenges faced in delivering IPE are as follows: temporal, geographical and
logistical constraints, limited resources, difficulty in scenario set-up, faculty availability and
together with how the decision to develop and deploy an immersive simulation
was used to address them.
Temporal: Students are usually in classrooms or shadowing mentors during the
day. They can participate in the virtual sessions when they go home from
school/clinic rotation. The time is flexible since we can reschedule according to
availability of students. The simulations can be more frequent since preparation
is not as extensive as traditional case simulation.
Geographic: The students can be on campus in a classroom, in the library, at
home, at a public place, or in a different state/country, as long as internet
connectivity is adequate and they are in a quiet place that allows them to talk,
be heard and focus during the session.
Logistical: The process of planning and organizing a virtual session to make sure
that all the required resources are in place prior to each session – i.e. the
availability of back-up headsets and computers with the preinstalled or
downloaded CliniSpace application, the availability of standardized patient
actors, the availability of trained assessors to view the streamed video-captured
sessions, the scheduling of learners … – is very tedious at first. However, once
everything is set up, multiple sessions can be
repeated relatively simply. This is not possible with the traditional face-to-face
simulations.
Limited resources: Virtual simulation has some overhead expenses during
set-up, but once everything is in place, multiple frequent sessions can be run at
lower cost to the university than face-to-face sessions.
Difficulty in set-up of scenarios: It is very challenging to set up virtual case
scenarios that deliver the IPE objectives, given that the paper case scenarios
usually produced must be further dissected to fit the virtual dynamics and make
the simulation as interactive and realistic as possible. The time invested in a
well-built virtual IPE scenario is worthwhile, as once it is in place the scenarios
can be varied in complexity to adjust to the levels of different learners. As soon
as the virtual scenarios are set up, it is up to the imagination of the faculty
facilitator to enrich the environment to meet learners’ needs.
Faculty availability and schedule: Faculty training is always a challenge.
Training university faculty to use immersive simulations can be more so. The
main hurdle has been training our medical and nursing faculty together, due to
persistently busy schedules. Virtual simulation, however, can run with as few as
one or even zero faculty members, with students doing the simulation on their
own. Although the dynamics of the CliniSpace virtual environment allow
changes in the modes of interaction inside the world, faculty facilitation through
debriefing after each session is essential. This is a very new training modality,
and at this early stage the faculty want the IPE objectives of the virtual session
to be fully addressed and taken home by the students. Having said this, the
project was able to use a single faculty member to facilitate all the virtual
sessions, versus the 16 faculty members needed to facilitate the traditional IPE
simulation sessions!
The need for students and facilitator to be present at the same location: The
virtual simulation allows us to undertake sessions anywhere there is good
internet connectivity. It is very convenient as people do not have to travel to be
physically present in the same location.
For IPE, the use of an immersive environment to teach clinical and
teamwork skills overcame the challenges above. Designed to run on personal
computers, interactive immersive simulation platforms using games technology
can be developed to captivate, engage and introduce learners to new knowledge
or skills such as IPE. A decision was therefore made to undertake a case study of
using immersive simulation with multiple participants to teach IPE teamwork
and communication.
Why choose CliniSpace? CliniSpace [15] was chosen because, of all the virtual
gaming technologies available during the planning phase for the IPE
simulation, it offered appropriate clinical content applicable to both medical
and nursing students. Some exciting and fun virtual games were available, but
they had minimal clinical context: some emphasized only the nursing
curriculum, others only the medical student curriculum, and on analysis not
one of them met the interactivity requirements of team Inter-Professional
Education for both medical and nursing students. In addition, the ability to
customize the interactive virtual patients and scenarios was central, as this
enabled reuse of cases that had already been developed for our traditional IPE
mannequin/standardized patient based simulations. The cooperative nature of
the environment, the ability to extend it imaginatively as needed for the
case, and constant collaboration between faculty and programmers/developers to
find opportunities to improve the student experience made us choose
CliniSpace.
CliniSpace is a 3D Virtual Simulation Center (VSC) immersive clinical
environment. It aims to provide hands-off training for nurses and other allied
healthcare personnel using real-world simulations and customized training
modules. Its realistic environment, avatars and virtual patients satisfy
learning objectives for junior and advanced learners. Accurate portrayal of
the medical environment, procedures, scenarios and patient pathophysiology
prepares learners for real-world practice, while tracking and recording of
performance supports proper analysis of learning objectives. It gives a
feeling of collaboration in the multi-user environment via text and voice
chat. It is a practical tool for nurses and other healthcare personnel,
focusing on patient assessment, patient management and communication
protocols, and collaboration with physicians, the charge nurse and other team
members.

The design of the case study


The design of case studies using new technologies is by definition more
complex but, as has already been mentioned, this upfront investment can
deliver almost immediate benefits. In addition to the clinical design and the
related educational design, the chosen technologies have to be deployed so
that they integrate to meet the requirements of the clinical and educational
design and of any assessment. From these designs a top-level project and
resourcing design needs to be developed: what resources are needed, and who
does what and when?
The clinical design for IPE required acute cases with early-onset patient
clinical crises needing immediate decision-making, so that the learners work
together as a team to handle a patient with an acute problem or situation.
Traditionally, text-based case scenarios of this kind have been used to teach
medical and nursing students to diagnose cases early in their training, as
well as to evaluate their diagnostic and management capabilities.
Having made the decision to use immersive environments to deliver the
clinical design, for it to be deployed fully would require:
– Interactive virtual patients. These respond to students’ inquiries and
actions. Representing both the simulated patient and the students themselves
as avatars in the virtual world permits question-response interactions between
the patient and the learners in training.
– Real-time clinical setting. At Drew/UCLA, new incoming third and fourth year
medical students and second and third semester entry-level nursing students
are introduced to a real-time first-day rotation on the hospital
medical/surgical floor.
– Student collaboration. The virtual simulation application allows students
and teachers to interact and practice medical scenarios. Students work in two
pairs (one pair as primary physician and primary nurse in the hot seat, the
other as secondary physician and charge nurse, observing and assisting as
needed) and then exchange roles as they move from one case scenario to the
next.
The educational design for learning outcomes and associated teaching
strategies usually also includes assessment design. One of the main rationales
for the use of immersive environments was that in real clinical situations
clinicians experience not just one case at a time, but multiple cases
simultaneously. The technology can be used to deliver multitasking within a
team, with two or more patient encounters requiring real-time intervention
during a single one-hour session.
The assessment design targeted both the evaluation of the virtual modality of
IPE simulation itself and the evaluation of inter-professional teamwork and
communication, the main objective of the simulation.
The resourcing design covered the funding from the Medical Student Affairs
Funds of the Drew/UCLA College of Medicine, the CliniSpace application,
computers, headsets, and manpower (IT personnel, tech support, faculty,
virtual standardized actors), as well as assessor training, faculty training
and virtual standardized patient/actor training.

Developing the case study resources


Following on from the design exercise above, the case study resources then
needed to be developed before being available for deployment. These resources
included: the detailed teaching and learning objectives, the simulation scenarios,
the interactive virtual patients, the assessment, and student, facilitator and
assessor training.
The simulation scenarios: To fulfill the clinical design requirement for multiple
simultaneous acute cases, two simulation scenarios for Inter-Professional
Education were used. The scenarios were taken from MedEdPORTAL (citations
follow each case).
CASE 1: “Red in the Face” is a human patient simulation IPE case scenario
applicable for medical students, nursing students, new residents, or new graduate
nurses. In the scenario, the patient experiences Red Man syndrome from a rapid
intravenous vancomycin infusion. The participants are expected to recognize
signs and symptoms of Red Man syndrome, prioritize treatment modalities,
implement the treatments collaboratively, and communicate inter-professionally
using crew resource management principles [20].
Patient 1: The first virtual patient is a 68 year old male, Manuel Cervantez,
who was admitted to the med/surgical unit last night for MRSA infection of a
previously lanced right leg wound. He has had fever, a slight cough and
malaise for three days. The patient has a right antecubital IV running at 125
cc per hour, and IV vancomycin has been ordered by the admitting care
provider, Dr. Jones. He has poorly controlled diabetes for which he takes
insulin and oral antidiabetic medications, has asthmatic bronchitis managed
with steroid bronchodilators, and is a smoker. His wife, Adela Cervantez,
takes care of his home health needs. It is 7 am and the outgoing nurse has
just finished her shift. In a hurry to take her child to school, she hands the
patient over to the incoming student nurse and MD, who are the incoming team
of learners for the session. The patient will have acute changes in vital
signs: oxygen saturation drops to 92%, heart rate increases to 135, BP 110/60,
accompanied by the development of an erythematous rash on his face, chest and
back [16].
Learning objectives for MD provider learners:
1. identify the appropriate vancomycin infusion rate for an adult patient
2. identify the potential complications related to rapid vancomycin infusion
3. recognize signs and symptoms of Red Man syndrome/allergic reaction
4. prioritize treatments and interventions for Red Man syndrome
5. order treatments and interventions for Red Man syndrome
6. collaborate with the interdisciplinary healthcare team using SBAR
communication and crisis resource management principles
Learning objectives for RN learners:
1. identify the appropriate vancomycin infusion rate for an adult patient
2. identify the potential complications related to rapid vancomycin infusion
3. recognize signs and symptoms of Red Man syndrome/allergic reaction
4. prioritize treatments and interventions for Red Man syndrome
5. implement treatments and interventions for Red Man syndrome
6. collaborate with the interdisciplinary healthcare team using SBAR
communication and crisis resource management principles
CASE 2: Sepsis in a postpartum patient – This simulation scenario for IPE is a
case developed to educate inter-professional teams of students about teamwork
and communication in healthcare. Implemented with medical, nursing, physical
therapy, occupational therapy, respiratory therapy, and physician assistant
students, the participants are expected to work together and use teamwork and
communication concepts to effectively manage a decompensating patient while
also identifying their roles on the patient care team.
Patient 2: The second virtual patient is a 29 year old female, Ashley Greene,
who was admitted to the Labor and Delivery Unit two days ago for operative
delivery. A baby boy was delivered without complication and is currently in
the nursery. Mrs. Greene has a history of multiple sclerosis, currently in
remission, and is allergic to penicillin and sulfa. She is taking prenatal
vitamins. The obstetrician, Dr. Williams, ordered postpartum physical therapy
exercises so that she can regain her muscle strength promptly after her
cesarean section. She is married, lives with her husband, Charles Greene, and
they have a two year old child at home. It is 9 am and the incoming shift team
of learners arrive to find a patient complaining of body malaise and an
inability to get up to do her morning exercises with the physical therapist.
Her heart rate is 120 (sinus tachycardia), BP 100/40, oxygen saturation 92%,
temperature 101.4 °F, and she is warm and diaphoretic [17].

Learning objectives
Cognitive:
At the conclusion of the case, the learner will be able to
– identify a postpartum patient in sepsis
– identify roles and responsibilities during a high acuity patient scenario
– appropriately manage the care of a decompensating patient
– identify appropriate mode of intervention to prevent deterioration of patient’s
condition
– effectively communicate with physician using SBAR regarding patient’s
decompensation
Behavioral:
At the conclusion of this case the learner will be able to:
– appropriately prioritize interventions for patient
– call for help early using SBAR communication
– collaborate with the interdisciplinary healthcare team using TeamSTEPPS™
tools
– recognize the importance of teamwork and communication in healthcare in
improving patient safety

Authoring the cases for the interactive virtual patient in CliniSpace


The virtual patients above were created by modifying existing DynaPatients™
[18] in the CliniSpace Virtual Sim Center. The Sim Center includes both
offline and online CliniSpace Case Authoring Tools. Cases can be created and
saved using the Offline (Flash) Case Authoring App and can then be uploaded
and edited in the Online (in-world) Case Authoring App. Both authoring tools
have similar features and User Interface (UI) layout; the cases were created,
edited and saved offline and then uploaded for in-world editing.
The CliniSpace Virtual Sim Center offers the cultural sensitivity feature of
virtual patients of different ethnicities/skin colors. The patient and
hospital room can also be set up using the authoring tool: e.g. the BP cuff
and IV cannula options can be set as connected to or disconnected from the
patient, and the patient’s bed can be placed in different positions as the
author desires.
The Case Authoring App contains 13 tabs, the same tabs present in the Online
(in-world) Case Authoring App. The 13 tabs (see Fig. 8.1), which have to be
adjusted by the facilitator/author, consist of the following:
1. learning objectives
2. patient set-up
3. patient state
4. room set-up
5. learner-patient dialogue Q&A
6. differential diagnosis
7. recommended case protocol
8. imaging
9. labs
10. medications
11. intravenous options
12. other interventions
13. event authoring

Fig. 8.1: CliniSpace Virtual Sim Center Offline Authoring Tool – Patient (CliniSpace Virtual Sim
Center Offline Authoring Tool, Patient Setup screen © 2013, Innovation in Learning, Inc. [19]).
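
To make this structure concrete, a case saved by the authoring tools can be
thought of as one record with a section per tab. The following minimal sketch
(in Python) mirrors the 13 tabs above; all field names and values are
illustrative assumptions, not the actual CliniSpace file format:

    # Illustrative sketch of a case definition mirroring the 13 authoring
    # tabs; field names and values are hypothetical, not CliniSpace's format.
    case = {
        "learning_objectives": ["recognize Red Man syndrome", "use SBAR"],
        "patient_setup": {"name": "Manuel Cervantez", "age": 68, "sex": "M"},
        "patient_state": {"hr": 135, "spo2": 92, "bp": "110/60"},
        "room_setup": {"bed_position": "head elevated", "iv_connected": True},
        "dialogue_qa": {"How do you feel?": "My face is burning and itchy."},
        "differential_diagnosis": ["Red Man syndrome", "anaphylaxis", "sepsis"],
        "recommended_protocol": ["stop infusion", "reassess vital signs"],
        "imaging": ["chest X-ray"],
        "labs": ["CBC", "blood cultures"],
        "medications": ["vancomycin", "diphenhydramine"],
        "iv_options": ["normal saline at 125 cc/h"],
        "other_interventions": ["oxygen therapy"],
        "event_authoring": [],  # events, flags and transitions (discussed next)
    }

    import json
    print(json.dumps(case, indent=2))  # one form in which it could be saved/uploaded
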

The most challenging authoring component is event authoring (Fig. 8.2), in
which you can author and edit events, transitions and flags. The event
authoring tab can largely be divided into two sections:
1. The left-hand panel containing four tabs: all procedures, attributes, flags
and transitions:
(a) The all procedures tab is a static, “for-reference” list of actions,
including all general procedures, medications and intravenous options
that can be executed in-world.
(b) The attributes tab contains a static, “for-reference” list of all
in-world options with editable values.
(c) The flags tab contains a list of flags that have already been created;
from this tab you can also create new flags or edit and delete existing
ones.
(d) The transitions tab is similar to the flags tab. The top part of the
transitions tab contains a list of transitions that have already been
created, while the bottom part of the tab contains the “create/edit” panel.
2. The right-hand panel containing two tabs: procedure setup and event
authoring:
(a) The procedure setup tab is used to link a particular procedure to an
existing flag by creating procedure-flag entries.
(b) The event authoring tab is used to create events and link flags and
transitions to an event.

Fig. 8.2: CliniSpace Virtual Sim Center Offline Authoring Tool – Event Authoring Screen (© 2013,
Innovation in Learning, Inc. [19]).
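
Conceptually, flags and transitions form a small event-driven state machine:
procedures executed in-world raise flags, and a transition fires once all of
its required flags have been raised. The sketch below (Python) illustrates the
idea; all names are hypothetical, as CliniSpace’s internal implementation is
not published:

    # Sketch of flag/transition-style event authoring (names hypothetical).
    class Case:
        def __init__(self, initial_state):
            self.state = initial_state
            self.flags = set()            # flags raised so far
            self.transitions = []         # (required flags, target state)
            self.procedure_flags = {}     # procedure -> flag it raises

        def link(self, procedure, flag):
            # "Procedure setup": link an in-world procedure to a flag.
            self.procedure_flags[procedure] = flag

        def add_transition(self, required_flags, target_state):
            self.transitions.append((set(required_flags), target_state))

        def perform(self, procedure):
            # A learner executes a procedure in-world: raise its flag,
            # then fire any transition whose required flags are all set.
            flag = self.procedure_flags.get(procedure)
            if flag:
                self.flags.add(flag)
            for required, target in self.transitions:
                if required <= self.flags:
                    self.state = target

    case = Case(initial_state="red_man_reaction")
    case.link("stop_vancomycin", "infusion_stopped")
    case.link("give_antihistamine", "antihistamine_given")
    case.add_transition({"infusion_stopped", "antihistamine_given"}, "recovering")

    case.perform("stop_vancomycin")     # state unchanged: one flag missing
    case.perform("give_antihistamine")  # both flags raised -> transition fires
    print(case.state)                   # recovering
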

Developing the facilitator and student training program for CliniSpace

A university has a range of human resources, especially students, available on its
doorstep. The human resources needed by the case study in addition to clinical
faculty were people who could act as standardized patients inside the CliniSpace
immersive environment. The two main skills required were a familiarity with
gaming environments and the ability to give standardized patient responses, act
as patient relatives and take other similar roles. University student gamers fitted
the requirement exactly and were recruited and used successfully.
Assumptions are often made that the millennial generation or young people
in general are fully IT savvy and so do not need training. The reality is that
medical students, as with the general student population, have a wide range of IT
skills. It is therefore important to develop a student training package to ensure
that all are briefed to understand the case study.

Developing the assessment


Assessment requirements:
– The post-virtual simulation survey assessments are submitted by the learners
electronically by email as soon as each session finishes.
– The assessors’ NTCS (Non-Technical and Cognitive Skills) scores are made
possible by capturing the session on video using X-Split Broadcaster and
streaming it to assessors using Twitch; both are free applications for video
capture and video streaming. The NTCS rating forms are submitted
electronically to the faculty facilitator in charge as soon as the session is
done.

Delivering IPE at Drew/UCLA using the CliniSpace immersive clinical environment

The IPE simulation has been designed, developed and can now be delivered to
the students at Drew/UCLA School of Medicine.
Learners are given IT training on computer basics: how to download the
CliniSpace application onto their personal computer, how to log in, and how to
check their audio capability so that they can talk and be heard in-world,
enabling them to communicate interactively with the other avatars. Note:
Students faced fewer issues if Dropbox was used when downloading the
application. Each learner is then oriented to the basic and clinical
interactions in the virtual environment to prepare them for the virtual
simulation sessions. Learners are given enough time to practice in-world on
their own prior to the actual one-hour virtual IPE session.
Learners are briefed with a critical action checklist which includes:
1. Standard of care – sanitizing the hands; infection control by wearing gowns
and gloves before approaching the patient; disposing of the gloves in the
proper garbage receptacle; introducing self/team; recognizing abnormal vital
signs and physical findings; assessing lung sounds; assessing the wound;
checking the antibiotic infusion rate; stopping or correcting the infusion
rate as necessary; continually reassessing the patient and vital signs and
recognizing changes throughout the simulation.
2. Teamwork – leadership and followership, where learners use and maintain
closed-loop communication (the 1st team member calls out a request or action,
the 2nd team member uses a check-back to confirm the request or action, then
the 2nd team member confirms the request fulfilled/action performed); calling
for help early; knowing when to call the contact provider for help.
3. Communication – using SBAR (situation, background, assessment,
recommendation) to communicate between the team and the consultant provider;
communicating with the pharmacist as needed; and informing the relatives of
patient updates and status.
Each virtual case includes a medical crisis that the student team has to
respond to. Following a scenario briefing, the student team is asked to enter
the patient room and manage the virtual patient, handling the crisis as a
collaborative team of medical and nursing students using teamwork and
communication. The event has to be managed with all team members observing
role clarity, communication and personnel support, knowing the resources
(supplies and equipment) available to them, making a global assessment, and
then stepping back to reassess the situation. Crises are presented to student
learners so that they can apply the crisis resource management principles
according to Rall and Gaba in Miller’s Anesthesia [20]:
1. know the environment
2. anticipate and plan
3. call for help early
4. exercise leadership and followership
5. distribute the workload
6. mobilize all available resources
7. communicate effectively
8. use all available information
9. prevent and manage fixation errors
10. cross (double) check
11. use cognitive aids
12. re-evaluate repeatedly
13. use good teamwork
14. allocate attention wisely
15. set priorities dynamically
The in-world facilitator makes sure that the students are on track and in line
with the objective of the session, and is available to cue the students if
they become stuck, so that learners can reassess and move on. The facilitator
also manages the end-of-session debriefing, where learners are given a chance
to “air out” and self-reflect on their actions.
At the start of the case study, learners receive instructions and details of the
problem, which is presented as introductory information as they log into the
virtual world. In the opening scene, the student team consisting of two nursing
and two medical students are identified as the team taking care of the patient for
the shift. The learners are expected to wash their hands before entering the room,
knock at the door and inform the patient that they are coming in, introduce
themselves to the patient, look at the patient’s name/ID band (Fig. 8.3).
On entry into the immersive environment (Fig. 8.3) the learners ask the
patient how they are doing and perform a history and physical exam to find out
the nature/etiology and severity of their problem(s). They interact with other
team members and work together to handle and take care of the patients.
The virtual standardized patient is expected to respond to every student
inquiry with signs and symptoms in lay terms, with relevant family history
included. The learner is directed to various sections of the problem either
through the introductory briefing or by the facilitator, and in each must make
at least one decision in order to progress. To expand: the learner is directed
to various sections of the medical problem based on his actions, where the
result of every anticipated action has been preprogrammed by the case author.
At times, when learners stray from the objective of the case, the facilitator
will give cues to realign them towards it. The problem incorporates branch
points so as to allow the learner various pathways, although some prove less
desirable than others.

Fig. 8.3: Learners in CliniSpace cooperating in treating a patient (© [19] and Drew/UCLA).

Patient management consists of a series of decisions, hopefully based on a
balanced assessment of the information available to the learners. Although,
for practical reasons, the opportunity to practice clinical decision-making
skills is usually not available until the latter part of most medical/nursing
courses, even new students are aware of the necessity of acquiring such
skills: during their early years of training they shadow clinical mentors and
so can appreciate the skills required to perform the art of taking care of
patients.
Responses to each choice or decision are converted to real-time actions
executed by each learner. These are recorded in real time as a scrolling
display at the top of the CliniSpace screen, giving the facilitator an
overview of everything happening in the virtual environment. At the end of the
scenario, the summary of all actions captured by CliniSpace is shown to the
learners and compared with the expected actions for that scenario. Also at the
end of each scenario, a team debrief is given, since the key objective is
working together as a team. During the team debrief, individual members of the
team are allowed to discuss their reactions and take-home messages from the
session.
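
In effect, debriefing can start from a simple comparison between the captured
action log and the expected critical actions. A minimal sketch of such a
comparison (Python; the action names are hypothetical illustrations, not
CliniSpace identifiers):

    # Comparing a captured action log with the expected critical actions.
    expected = [
        "sanitize_hands", "don_gown_and_gloves", "introduce_team",
        "check_id_band", "check_infusion_rate", "stop_vancomycin",
    ]

    captured = [  # the scrolling-display log for one (hypothetical) team
        "sanitize_hands", "introduce_team", "check_id_band",
        "check_infusion_rate", "stop_vancomycin", "order_antihistamine",
    ]

    missed = [a for a in expected if a not in captured]
    extra = [a for a in captured if a not in expected]
    print("Missed critical actions:", missed)  # ['don_gown_and_gloves']
    print("Additional actions:", extra)        # ['order_antihistamine']
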
A typical patient encounter. In the opening scene, the virtual patient starts
to have unstable vital signs, presenting as hypotension, tachycardia and
oxygen desaturation, accompanied by redness of the face, chest and back. The
learner formulates differential diagnoses, which include Red Man syndrome,
anaphylaxis, sepsis, pneumonia and diabetic ketoacidosis. The opening scene of
the second case scenario is a post-cesarean section virtual patient presenting
with weakness, body malaise, fever, hypogastric pains and unstable vital
signs. The learner formulates differential diagnoses of postpartum sepsis,
which include mastitis, pneumonia, UTI, wound infection or endometritis
causing septic shock. The team starts working together to manage the
presenting problems to the best of their ability and must know when to call
for help. During the process there is a virtual pharmacist, a patient relative
and an attending or consultant, all part of the inter-professional team with
which the learner will need to communicate, work and collaborate to manage the
patient to safety. All the actions are performed by the learner using an
avatar in the virtual world. Actions are executed by clicking on interactive
objects such as the EMR, IV pole, telephone, glove box, garbage disposal,
patient monitor, headboard with the oxygen source, BP monitor, pulse oximeter
and medication cart. Learners can perform a physical exam by clicking on
patient body parts, which show images of the specific physical features and
the heart, lung and bowel sound auscultatory findings, all of which are preset
in the case authoring tool. Interactivity is manifested by the ability of the
learner to execute orders by clicking on the patient, and to look at the
ordered results in the EMR.
Each ten-minute scenario is followed by a 15-minute debriefing (Fig. 8.4). The
faculty-facilitated debriefing occurs outside the virtual patient’s room in
the immersive clinical environment. This is the time when the students air out
how they feel the simulation went, what their reactions are, what is on their
minds, why they did what they did when presented with such a situation, what
they will do differently next time, the lessons learned and the take-home
messages they got from the simulation. Also discussed are their challenges
with the use of the virtual environment, which may have affected how they
performed during the session.
Fig. 8.4: Debriefing outside virtual patient room in CliniSpace (© [19] and Drew/UCLA).

The session is video captured and streamed to trained assessors who are
assigned to rate the student performance using the NTCS (Non-Technical and
Cognitive Skills) Evaluation Tool. After the session, students are asked to
electronically submit a post-virtual simulation survey which consists of
questions regarding technical issues with installation, navigation, immersion,
content, usability and satisfaction.

8.6.3 Assessment

Two assessment measures were used:
– the post-virtual simulation survey,
– the NTCS Evaluation Tool, adapted from MedEdPORTAL [21].
The post-virtual simulation survey was undertaken by the learner immediately
after the virtual session and submitted electronically to the facilitator in
charge. It covered comments on technology and software, navigation and
learning, installation, immersion, content and usability. An analysis of
results showed that the students had a positive experience with the virtual
simulation, with the exception of audio, where some students were unable to
communicate due to local audio problems with their computers. The questions
asked were:

Navigation: It is easy to navigate and learn the virtual simulation.
Installation: Mean of the following questions: the installation instructions
are clear and easy to follow; the introductory training was helpful; the
instructions for access to the virtual simulation are clear; once installed,
it is easy to navigate, to learn and to use.
Immersion: Mean of the following questions: I felt immersed in the case when I
was the only avatar navigating the virtual world; I felt immersed when two or
more avatars were navigating the virtual world; I was successful in
communicating with my partner in the virtual simulation.
Content: Mean of the following questions: I realized that the case was getting
worse or better as I was executing actions; I felt comfortable managing
virtual patients.
Usability: Mean of the following questions: I was able to use the interactive
control with ease; I felt comfortable using the interactive control.
Satisfaction: I was able to manage the patient to my satisfaction. (Fig. 8.5
shows the survey results.)

Fig. 8.5: Post-virtual simulation survey of Drew/UCLA IPE cohort.
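
The composite category scores in Fig. 8.5 are, as described above, means of
their constituent Likert items. A minimal sketch of that aggregation (Python;
the item groupings follow the text, but the response values are invented for
illustration):

    from statistics import mean

    # One learner's Likert responses, grouped by survey category
    # (values hypothetical).
    responses = {
        "Installation": [4, 5, 4, 4],  # four installation items
        "Immersion":    [4, 3, 4],     # three immersion items
        "Content":      [5, 4],
        "Usability":    [4, 4],
    }

    composites = {category: mean(items) for category, items in responses.items()}
    for category, score in composites.items():
        print(f"{category}: {score:.2f}")
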

The NTCS evaluation is undertaken by trained assessors who watch the session
video live, captured with X-Split Broadcaster and streamed to each assessor
using Twitch. The website or link for streaming the virtual simulation session
[22] is given to the assigned assessors. The assessors are given an electronic
copy of the NTCS evaluation tool to use for rating, which they electronically
submit to the faculty upon completion. The NTCS tool consists of 17 questions
scored on 1–7 Likert scales: 1 being very poor, 4 marginal and 7 excellent.
The NTCS addresses the overall performance of the team, the ability to deliver
the standard of care, teamwork, and communication.
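
A minimal sketch of how an assessor’s 17 ratings roll up into an overall score
(Python; the ratings are invented, and the labels between the stated anchors
of 1 = very poor, 4 = marginal and 7 = excellent are assumptions):

    from statistics import mean

    # Scale anchors from the text: 1 = very poor, 4 = marginal, 7 = excellent.
    # The intermediate labels are assumed for illustration.
    labels = {1: "very poor", 2: "poor", 3: "fair", 4: "marginal",
              5: "good", 6: "very good", 7: "excellent"}

    ratings = [4, 4, 3, 5, 4, 4, 4, 5, 3, 4, 4, 5, 4, 4, 4, 5, 4]  # 17 items
    overall = mean(ratings)
    print(f"Overall NTCS score: {overall:.1f} ({labels[round(overall)]})")
    # -> Overall NTCS score: 4.1 (marginal)
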
The accumulated NTCS results show that the learners had overall scores between
3.9 and 4.3, indicating that NTCS performance was marginal (between fair and
good). This could be due to the fact that these are neophyte learners being
introduced to IPE teamwork and communication training for the first time. In
summary, a total of 97 students were rated by the evaluators. An average of
10% did poorly: 9% did poorly on overall performance, 13% on the initiation of
treatment and 8% on the standard of care. Only 1–4% scored excellent. The
good, very good and excellent scores combined totaled 66% (Fig. 8.6).
Fig. 8.6: Non-Technical and Cognitive Skills Evaluation of Drew/UCLA IPE cohort.

Typical student comments about the virtual experience are as follows:


Thank you for taking the time to do this experience for us, I really did find it helpful in teaching us to
communicate in the clinical setting.

I apologize for my inability to support my team members in the simulation today. I have over 60%
hearing loss and even with hearing aids speech over a microphone is very difficult for me to
comprehend. I am very embarrassed about the situation and once again apologize. Thank you for
attempting to work with me during the process and remaining patient with me.

I really enjoyed the session and find this to be extremely helpful in learning to get comfortable in the
clinical environment as well as working with a team.

8.6.4 Summary and lessons learned

This case study used a sophisticated approach developed for teaching patient
management through virtually simulated problems. In these problem scenarios,
the student is presented with a series of diagnostic and management decisions
that they must make as they proceed through the exercise. They must select the
kind of data they need and its sources, which may include the presented facts,
their senses, their memory and books, the patient and their family, other
medical and paramedical personnel, laboratories and imaging. They have to sort
relevant from distracting information, entertain various diagnostic hypotheses
and revise them as incoming data are evaluated, and at some point commit
themselves to specific diagnostic and therapeutic decisions, some of which may
prove to be mistakes – mistakes which are in fact at the heart of learning
during the actual process of debriefing. During the virtual simulation the
student is frequently faced with the necessity of acting when they do not feel
ready to do so, and at the same time they may be tempted simply to refer the
problem to a consultant.
Within the virtual world, they can perform without the embarrassment of making
a mistake, since they are not interacting in person but instead act through
their avatar. Though they may feel inadequate, they are still willing to try,
since it is their avatar acting and not exactly them. Lack of readiness is a
feeling most neophyte students have; they want to do their best no matter
what.
Prior to the immersive session, the facilitator always starts the introductory
briefing with the basic assumption: making sure that the students realize that
the faculty believes that everyone participating in the simulation is
intelligent, well trained, cares about doing their best and wants to improve.
This understanding, despite their lack of readiness, empowers them to perform
at their best during the virtual sessions. During the debriefing, students may
express concern at their level of professional competence during task
completion, although, given the limited clinical knowledge and skills of
neophyte medical and nursing students, they are positively expected to adapt
to this technique of recognizing danger and knowing when to call for help.
The immersive simulation offers some substantial advantages over conventional
simulation methodology. Namely, the project was able to use a minimum of one
faculty member to facilitate all the virtual sessions, versus the 16 faculty
needed to facilitate the traditional IPE simulation sessions. In addition to
these substantially lower staffing costs, other advantages include the
possibility of remote training and the relative ease of organizing training
sessions. However, these advantages come with the challenges of organizing
schedules of sessions for students from different disciplines, finding
opportune meeting times across different time zones, offering dynamic
realistic responses, simulating the look and feel of a variety of patients
(cultural and ethnic variations), and simulating subtle expressions and
affective signals.
As mentioned, the challenges in IPE simulation include a range of issues:
temporal, geographic, logistical, limited resources, difficulty in set-up of
scenarios, faculty availability and schedule, the need for students and facilitator
to be physically present in the same location. The use of the immersive
environments to teach clinical and teamwork skills in this case study has shown
that these challenges can be overcome.
In addition and importantly, immersive environments provide the
opportunity to study clinicians as they engage with the virtual patient in the
context of diagnostic and therapeutic tasks, as well as communicating and
working together as a team to manage and generate a solution to the problem at
hand. Interestingly, one of the main features that the students requested was
affective facial expressions for the virtual patient as well as for the avatars of
team members.
Virtual simulation is a new and unique educational tool that uses
computer-aided simulation of virtual reality to introduce material and improve
participants’ understanding of a topic – in our case, the actual hands-on
simulation.

8.7 The potential for learning


The Drew/UCLA IPE case study illustrates two technologies being brought
together: interactive virtual patients and immersive clinical environments.
There is research into learning using each technology separately but, because
offering the two together is novel, as yet little research on the combination.
Part of the rationale for this chapter is to explore whether the effect of
bringing the two together is more than simply additive.
This section explores at a more general level the potential that interactive
virtual patients in immersive clinical environments have for learning. Using the
IPE case study from Drew/UCLA School of Medicine in the last section as an
exemplar we explore the issues that it raises. The questions asked will include:
– Why choose virtual worlds over other technologies for learning?
– What expertise do you need to develop and deploy virtual worlds?
– What issues are there in running a virtual world scenario?
– How do you assess and validate activity in virtual worlds?

8.7.1 Why choose immersive clinical environments?

The answer to this question is not straightforward, because the decision is
not made simply on a technical basis, i.e. which technology is cheaper,
faster, etc. Clearly certain technologies will have certain niches within an
educational program: the classroom for this area, e-learning for another,
high-fidelity simulation here and streaming video there. An analogy can be
made to holding a hand of cards: not all cards can be used in every game
situation. A single card may be appropriate in one case; in another, a subset
of the cards you hold. The point is that it depends on the context and the
range of cards (technologies) that you have at your disposal.
Virtual worlds, however, are not all created equal. Kim et al., whilst
researching the use of virtual worlds in education, found:
While doing the literature review and coding the articles included in the sample, we realized that in
many of the studies, the authors did not explain why they used a certain virtual world platform for the
specific contexts [23].
The early implementation of new technologies in organizations is often carried
out by enthusiastic “intrapreneurs” who simply use what they can get their
hands on at zero or little cost – hence the widespread use of Second Life in
education.
So, while it is theoretically possible for a VW platform to support all types of educational initiatives,
designers typically make choices that lead technologies to be more or less suited for different teaching
and learning purposes.

(Dickey, 2003, 2005 quoted in [24])

A more measured approach is offered by Robbins and Butler [24]. They recognize
both that technologies evolve rapidly and that they are suited to certain
niches. To address this they have developed a virtual world taxonomy based on
generalized attributes and a pedagogy taxonomy based on a learner/instructor
dichotomy. They then propose matching pedagogy types to virtual world types.
Having done this, they apply this approach to their own field – information
systems – before offering a set of guidelines for the implementation of
virtual worlds in other domains.
They categorize virtual worlds along two axes: purpose and access to
knowledge resources. By access to knowledge resources they mean access to a
wider online community e.g. the World of Warcraft player base of some 7.5
million. This gives four virtual world types (Fig. 8.7).

Fig. 8.7: Four types of virtual worlds and examples (adapted from [24]).

Pedagogies are similarly categorized along two axes: learning objectives and
instructional approaches.
The learning objectives dimension is divided into procedural skills and
declarative knowledge. This distinction comes from the computer science
division into programs and data. Procedural skills are about how to do
something, both manually (e.g. ride a bike) and with others, implementing a
sequence to deliver a result (e.g. move an unconscious patient). Learning may
be tacit, in that the learner cannot explain how to do it. Declarative
knowledge, on the other hand, is knowing that something is the case, e.g. that
C is the third letter of the alphabet; it also covers knowledge about
knowledge, so I have to know what the alphabet is and perhaps also that there
are numerous alphabets. In reality, responding to a procedural task in the
clinical space requires access to declarative knowledge, so the two forms work
together. In the context of medicine, decision-making and problem solving
require both procedural skills and declarative knowledge, i.e. both notions of
sequence and the underlying knowledge and associated cognitive skills required
to implement and develop it.
The instructional dimension is divided into objectivist and constructivist
approaches. Objectivist approaches hold that there is an objective reality
“out there” that can be taught and learned. Constructivist approaches, on the
other hand, teach as if reality or meaning is developed by the learner (often
with the help of the teacher and other students), i.e. meaning is literally
constructed by the learner from their experiences and so becomes unique and
belongs to them. This then gives the pedagogical (i.e. both learning and
teaching) taxonomy in Fig. 8.8.

Fig. 8.8: Taxonomy of pedagogies [24].

These two grids are now overlaid; from the intersections it is possible to map
the appropriate virtual world onto the chosen learning and teaching approach.
This is shown below, following a discussion of learning and teaching
approaches.

Also of interest in this discussion of the potential that virtual worlds have
for learning are the implementation guidelines proposed by Robbins and Butler
[24]. These are:
1. Articulate your learning objectives
2. Choose your instructional approach
3. Choose your virtual world
4. Pilot for utility, viability and sustainability
Whilst these guidelines are useful, we would propose a more generic
project-based approach. This is known as the 4D approach – Decide, Design,
Develop and Deploy [25] – and wraps Robbins and Butler’s guidelines above into
it, viz:
– Decide/strategic level: learning objectives, instructional approach…
– Design: choose your virtual world, resourcing requirements…
– Develop: develop your technical and human resources. Recruit participants.
– Deploy: launch your virtual world service; deliver the project.
This 4D approach is now explored in the context of the Drew/UCLA IPE case
study.

8.7.2 Decide

Learning in medicine (the learning objectives)


Becoming a skilled professional requires the acquisition of theoretical knowledge and the practice of
skills under the guidance of an expert. The idea of learning-through- apprenticeship is long accepted in
medicine and, more generally, in the health sciences, where practicum courses are an essential part of
most curricula [26].

The Drew/UCLA IPE case study had the following set of learning objectives for
the Red Man syndrome:
Learning objectives for MD provider learners:
– identify the appropriate vancomycin infusion rate for an adult patient
– identify the potential complications related to rapid vancomycin infusion
– recognize signs and symptoms of Red Man syndrome/allergic reaction
– prioritize treatments and interventions for Red Man syndrome
– order treatments and interventions for Red Man syndrome
– collaborate with the interdisciplinary healthcare team using SBAR
communication and crisis resource management principles.
Learning objectives for RN learners:
– identify the appropriate vancomycin infusion rate for an adult patient
– identify the potential complications related to rapid vancomycin infusion
– recognize signs and symptoms of Red Man syndrome/allergic reaction
– prioritize treatments and interventions for Red Man syndrome
– implement treatments and interventions for Red Man syndrome
– collaborate with the interdisciplinary healthcare team using SBAR
communication and crisis resource management principles.
In the “Red Man” case study it seems that a complex, “situated” set of skills
is being evaluated:
– some technical (procedural) skills e.g. implement treatments and interventions
– cognitive (declarative knowledge) skills e.g. recognize signs and symptoms
– nontechnical skills (behavioral mixing of declarative knowledge and
procedural skills) e.g. team communication and collaboration.
This is similar for the post-partum sepsis example.
It is the teaching approach however that will provide the context in which
these skill sets are to be realized.

Teaching in medicine (the instructional approach)


It is evident from the discussion above that the instructional approach should
vary with the type of knowledge being learned. To reiterate briefly,
objectivist approaches hold that there is an objective reality “out there”
that can be taught and learned. Constructivist approaches, on the other hand,
teach as if reality or meaning is developed by the learner (often with the
help of the teacher or other students).
For example, as a medical educator, one teaches technical (procedural) skills
by demonstration, through practice and by observation, in classrooms, labs or
in situ. Declarative (cognitive) skills in the medical domain are taught on
the one hand by lecture and on the other by mentoring the student’s
development of complex hypotheses about real clinical cases, often in a
real-world clinical context.
The teaching approach developed in the case study: Inter-Professional
Education (IPE) is usually taught as a combination of didactic lectures and
case scenarios. Challenges encountered with case simulation include temporal,
geographic and logistical issues, limited resources, difficulty in the set-up
of scenarios, and the need for students and facilitator to be physically
present at the same location. In addition, in practice-based healthcare
professions, methods of teaching and learning focus on enabling students to
assimilate clinical knowledge and skills. Medical and nursing students need to
learn how to apply classroom learning in clinical contexts; most importantly,
concepts that used to be taught in silos will be integrated as they work
inter-professionally in clinical practice. There is increased recognition of
the importance of inter-professional training for professional accreditations.
This would appear therefore to fit, although not entirely neatly, the
constructivist instructional approach.

8.7.3 Design

Virtual worlds
From the above learning and teaching approaches and using the Drew/UCLA
IPE case study as an example this would then give the grid shown in Fig. 8.9.

Fig. 8.9: CliniSpace located in the virtual world/ pedagogy taxonomy grid (adapted from [24]).

This appears to put the Drew/UCLA IPE case study on the “multipurpose,
constructivist” side of the grid and not quite fully in either the
“procedural” or “declarative” parts of the grid. (Taxonomies, although often
informative, are always abstractions, so reality sometimes has to be massaged
to fit.) If the higher-order, team-centric communication and collaboration
skills are addressed, then this requires the practice of procedural skills
with a constructivist teaching approach. This fits into the top left quadrant,
and this of course is what was chosen in the case study, i.e. the CliniSpace
immersive clinical environment with few external knowledge sources.
Since Drew/UCLA, however, is deploying an immersive clinical environment
merged with an interactive virtual patient, the reality is more complex.

Interactive virtual patients


The question therefore arises: “Where does virtual patient technology fit
within this taxonomy?” Interactive virtual patients are examples of
objectivist learning requiring procedural teaching and a specialized virtual
world with few (external) knowledge sources. This places interactive virtual
patients in the upper right quadrant (Fig. 8.10).

Fig. 8.10: CliniSpace virtual world (VWQ4) with interactive virtual patient (VWQ1) functions
located in virtual worlds/pedagogies taxonomy grid (adapted from [24]).
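
The overlay of the two grids can be read as a simple lookup from a pedagogy
cell to a virtual world type. The sketch below (Python) encodes the two cells
that this chapter locates explicitly – the IPE case study and the interactive
virtual patient – while the remaining two cells are illustrative assumptions
only:

    # The overlay as a lookup: (learning objective, instructional approach)
    # -> virtual world type. Only the first two entries are taken from the
    # discussion above; the other two are illustrative guesses.
    overlay = {
        ("procedural", "constructivist"):
            "multipurpose world, few external knowledge sources",   # IPE case study
        ("procedural", "objectivist"):
            "specialized world, few external knowledge sources",    # virtual patients
        ("declarative", "constructivist"):
            "multipurpose world, many external knowledge sources",  # assumption
        ("declarative", "objectivist"):
            "specialized world, many external knowledge sources",   # assumption
    }

    print(overlay[("procedural", "constructivist")])
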

8.7.4 Develop

Developing realistic immersive clinical environments and developing
interactive virtual patients from scratch are both major development tasks.
The graphics for the environment have to be prepared, as do the interactive
and persistent medical objects in that environment, e.g. beds, IV fluids,…
The virtual patient models
objects in that environment e.g. beds, IV fluids,… The virtual patient models
have to be developed into such a form that they can then power the in-world
patient avatars. Finally, the links between the medical objects and the patient
avatar responses have to be made.
Fortunately, there are now platforms arriving on the market that are starting
to offer “out-of-the-box” configuration of both the immersive environments and
the virtual patient. These allow medical educators and not just technical experts
to develop virtual patients and set up virtual environments. The Drew/UCLA
case study used the CliniSpace Virtual Sim Center.
At this point in the project, having obtained the commitment to proceed and
prepared the detailed design, we are at the stage prior to deployment:
– prepare the simulation scenarios
– modify the 3D elements in the virtual world – graphics, interactive objects,
access, etc.
– author the interactive virtual patients
– develop the facilitators
– develop any assessment
This section now focuses on developing interactive virtual patients as these are
the novel elements in this case study. Virtual world modification, although
nontrivial technically, has been addressed elsewhere [27].

Interactive virtual patients


Virtual Patients cannot replace real patients, but they can be of great assistance in areas where there are
no other suitable learning tools, such as clinical problem solving… and arguably therefore should be an
essential element of every undergraduate course [14].

This section now develops in more detail how the CliniSpace Virtual Sim Center
platform explored in the Drew/UCLA case study delivers configurable clinical
environments containing interactive virtual patients.
In CliniSpace each virtual patient (DynaPatient) has an underlying
pathophysiological model:
– a set of states (healthy, deteriorating, fibrillation, sepsis, severe sepsis, shock,
asystole, death)
– mathematical equations that define the smooth changes within that state
– conditions that define transitions between states
– the physiology model interacts with the environment
– each model has a “driver” or independent variable
– some of the variables are “observable”, for example, as the vital signs
appearing on a monitor
– the AI of the virtual patient is linked to the AI of the environment. Each object
in the world has its own rules: e.g. administering fluids at the IV stand
temporarily decreases the heart rate and increases the blood pressure. An
important lesson to be learned is that acute blood-loss must be stopped before
administration of fluids will become helpful (Fig. 8.11).
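
To make the idea concrete, the following minimal sketch (Python) implements a
state-based patient model of this general kind; the states, equations,
thresholds and the fluids rule are all illustrative assumptions, not the
published DynaPatient model:

    # Sketch of a state-based pathophysiological model (all values assumed).
    class DynaPatientSketch:
        def __init__(self):
            self.state = "deteriorating"   # one of the defined states
            self.time = 0.0                # the "driver" independent variable
            self.blood_volume = 4.5        # hidden variable, in liters
            self.bleeding = True           # environment-linked condition

        def step(self, dt=1.0):
            # Smooth change within the current state...
            self.time += dt
            if self.bleeding:
                self.blood_volume -= 0.05 * dt
            # ...observable vital signs derived from hidden variables...
            deficit = max(0.0, 4.5 - self.blood_volume)
            self.heart_rate = 70 + 40 * deficit
            self.systolic_bp = 120 - 30 * deficit
            # ...and a condition defining the transition between states.
            if self.blood_volume < 3.0 and self.state == "deteriorating":
                self.state = "shock"

        def give_iv_fluids(self, liters):
            # Environment rule: fluids only help once bleeding is stopped.
            if not self.bleeding:
                self.blood_volume += liters

    patient = DynaPatientSketch()
    for _ in range(10):
        patient.step()
    patient.bleeding = False        # intervention: hemorrhage controlled
    patient.give_iv_fluids(0.5)
    patient.step()                  # vitals improve on the next update
    print(patient.state, round(patient.heart_rate), round(patient.systolic_bp))
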
The patient can be examined in real time (Fig. 8.12) and will respond in real
time to appropriate medication (Fig. 8.13). Authoring is via a simple-to-use
interface, subject of course to the required medical expertise (Fig. 8.14).
The summary graphic in Fig. 8.15 shows the interactions between the
various elements of the CliniSpace Virtual Sim Center.
It places the physiology of real patients at the center. In detail:
– the pathophysiology of real patients is analyzed and the key elements
enumerated as a set of algorithmic models;
– these are then modeled as a “DynaPatient” with a complex, dynamic
pathophysiology e.g. myocardial infarction;
– the physiological variables of this DynaPatient are then modified by the
medical educator using a simple-to-use authoring tool and at the same time the
clinical environment is modified, e.g. medications made available, equipment
made available in the ward, bed placed in a certain position, etc.;
– the modified variables are then loaded into a virtual patient avatar model in
the immersive clinical environment and the case started;

Fig. 8.11: Graphical representation of a pathophysiological model for a patient with traumatic
hemorrhage without intervention (© [19]).
Fig. 8.12: Interactive virtual patient in CliniSpace – points of interaction (© [19]).
Fig. 8.13: Virtual patient physical responses to interventions in CliniSpace (© [19]).

– the clinician avatar then enters the virtual environment and examines,
diagnoses and/or treats the patient using the available interactive medical
objects e.g. BP cuff, medications, intubation, imaging,… ;
– depending on the intervention this may alter the underlying medical model
(DynaPatient) which then in turn can be viewed as signs and symptoms in the
virtual patient avatar or available data displays e.g. skin color change, urine
output up, SPO2 steady,…;
– this process of examination, diagnosis and treatment then continues for the
duration of the scenario with the interactive virtual patient responding in real
time;
– finally an “invisible” facilitator can simultaneously view the actions of the
clinician avatar during their session and can be available to assist or intervene.

Fig. 8.14: Overview of elements of Clini-Space DynaPatient authoring interface (© [19]).


Fig. 8.15: Summary graphic showing interactions within CliniSpace [13].

If we now move on to look at the role of virtual patients in learning, a
comprehensive review and meta-analysis concluded:
We found that virtual patients, in comparison to no intervention, are consistently associated with higher
learning outcomes [28]

and that whilst


Virtual Patients cannot replace real patients, but they can be of great assistance in areas where there are
no other suitable learning tools, such as clinical problem solving… and arguably therefore should be an
essential element of every undergraduate course [14].

However, at this stage little in-depth research has been done, in particular,
The dearth of research and an evidence-based framework for case utilisation and design was addressed
by all the study participants [29].

And finally in terms of resource usage,


Although the randomized controlled study did not show that the virtual patient simulation was superior
to the mannequin-based simulation, both simulations have demonstrated to be effective refresher
learning strategies for improving nursing students’ clinical performance. Given the greater resource
requirements of mannequin-based simulation, the virtual patient simulation provides a more promising
alternative learning strategy to mitigate the decay of clinical performance over time [30].

8.7.5 Deploy

Deployment of immersive environments with interactive virtual patients
requires a number of coordinated elements. This is demonstrated in the case
study in this chapter. In summary:
– students and faculty have to be scheduled where real-time in-world interaction
is to occur
– access has to be scheduled to the online resources
– technical support has to be in place
– application training has to be given to faculty and students
– individual mentoring may need to be available to assist
– assessment systems have to be in place – debriefing, surveys, evaluation,
action analysis, video analysis, etc.

Assessment and validation in immersive clinical environments


It would seem that the assessment of teamwork in medicine is still a work in
progress:
Quality of care and patient safety depend on effective teamwork, … yet there is no validated method of
assessing teamwork [31].

And when team training is moved into immersive clinical environments:


Assessment methods used for manikin-based simulation training and virtual world training studies were
developed initially to assess the individual’s performance. They have now been extended to assess team
performance. As multiple simultaneous teams interact in the virtual world, observer-based rating
methods need to be augmented with other objective, automated measurement tools. Training in these
more complex simulation environments requires new assessment tools to measure individual and team
processes as well as patient outcome variables [32].

There are, however, some attempts to create measures of team functioning, such
as the Team Climate Inventory (TCI) developed by Anderson and West and quoted
in a review of assessment measures for nontechnical skills in medical
emergency care [33]. So at this stage standard tools such as:
– participants – debriefing and surveys
– observers – in-world evaluation, post-simulation testing, post-simulation
analysis
– tools – action tracking analysis and other automated tools
are deployed until such time as a validated method for assessing teamwork is
available.

8.8 Conclusion: “Learning by Doing… Together”


There is a wider context to the discussion in this chapter on using
interactive virtual patients in immersive clinical environments. Cook argues:
Diagnostic errors represent a significant source of patient morbidity, and cognitive errors represent the
most common cause of diagnostic error. Cognitive errors also lead to suboptimal treatment decisions.
Evidence suggests that most cognitive errors arise from faulty interpretation, synthesis, and judgment
rather than insufficient data gathering or fund of knowledge and that decreasing the incidence of
cognitive error will require that health care providers experience multiple, varied patient cases. Yet even
as the rapid growth of medical information and expectations for quality care have increased the
complexity of medical decision making, we see decreased time for education and heightened concerns
regarding patients as educational subjects [28].

To expand this, on the one hand:


– medical knowledge is increasing exponentially
– patient expectations are following the same curve
– medical technology is being introduced at an unprecedented rate.
However, on the other hand:
– training “face time” with patients is and has been declining for some time.
Graafland has proposed:
The current commitment to reduce error in clinical practice has led to recognition of team training in
managing crisis situations, such as anesthesia crisis resource management and emergency medicine
crisis resource management, and may also be of use to surgical residents. Crisis resource management
is derived from aviation, and focuses on nurses and physicians together in crisis situations. Serious
games allow such training in a relatively cheap, readily available environment with a large variety of
cases, providing an alternative to expensive high-fidelity simulators [34].

Serious games in this context include the new virtual simulation technologies
and so are here proposed as a partial solution to these training “face time”
issues.
The opening question to this chapter was “how and what can we learn by
deploying interactive virtual patients in immersive clinical environments?”
Starting with the innovative Drew/UCLA IPE case study in Section 8.1, this
chapter has demonstrated conclusively [35] that it is possible to develop and
deploy
interactive virtual patients in immersive environments in a mainstream medical
training environment. This was a complex deployment and clearly would not
have been done without the Drew/UCLA faculty, who otherwise have major
clinical responsibilities, recognizing the potential for learning with this
technology combination. The rationale for the use of virtual patient and virtual
world technologies in inter-professional medical education was specific and spelt
out, namely
Medical and nursing students need to learn how to apply classroom learning in clinical contexts and, more importantly, that these concepts that used to be taught in silos will be integrated as they work inter-professionally in clinical practice.

From the perspective of using these technologies, two points are of particular interest here:
– application of classroom knowledge in clinical contexts
– concepts previously taught in silos being integrated in the context of practice.
The immersive clinical environment deployed, CliniSpace, was seen as being
capable of providing a realistically simulated clinical context to enable the
learning of specific sets of integrated concepts that would otherwise be difficult
to impart.
Lessons learned from the Drew/UCLA project were in these areas:
– scheduling and deployment of the immersive environment and virtual patient
– overcoming the challenges of teaching IPE in face-to-face environments
– the opportunity to study collaborating clinicians in a clinical team context
– resource advantages: cost, scheduling, remote participation.
In the analysis in Section 8.2 a number of questions were posed. These are
repeated below with a summary of the answers offered:
– Why choose virtual worlds over other technologies for learning?
Realistic clinical context; resourcing advantages;
– What expertise do you need to develop and deploy virtual worlds?
Design, development and deployment expertise at a number of levels: project
management, technology – virtual patient and virtual world – development
and deployment; simulation management; team lesson development and
delivery;
– What issues are there in running a virtual world scenario?
Technology access; coordination logistics: faculty availability, patient actor
availability, assessor availability; assessment;
– How do you assess and validate activity in virtual worlds?
Assessment: novel and “complex simulation environments require new tools to
measure individual and team processes as well as patient outcomes” [32].
In conclusion, it is not the new technologies per se that are the main subject of
this chapter. Rather it is what they can offer to medicine, hence the subtitle of
this chapter, ‘The potential for learning’. Immersive environments were used
because they provided an affordable, manageable solution to the problems
encountered in delivering IPE. They can be seen as places for authentic,
participatory experiences where experts can support learners and learners can
support other learners in moving to their next level in learning and practice.
A word of caution, however, is appropriate at this stage. Owens et al. [36],
looking at the operation of teams in non-clinical settings, identified “behaviors
of interest” in their study of virtual world project teams. These were: coordination – “where people and technology resources work together to carry out specified activities in order to accomplish stated goals”; role clarity – “state of understanding individual roles… knowing one’s responsibilities and knowing what is expected”; and shared understanding – “mutual knowledge, mutual beliefs, and mutual assumptions that team members develop”. These behaviors
are one part of a three part model of interaction in virtual worlds that consists of
the technical components – virtual world visualization and communication
capabilities; the social components – the behaviors listed above by people-
controlled avatars; and the outcomes – group effectiveness in terms of both task
and team-related outcomes. And, in an earlier study [37] which deployed the
same model as Owens et al. [36], Davis et al. conclude that virtual worlds have a
richness that can “enhance team-building and cohesiveness”, but that “positive
outcomes are never guaranteed”. Importantly,
managers should not assume that people will behave and look like their real world counterparts…
Teams in these environments should not be managed just like traditional virtual teams, as people and
behaviours may be fundamentally different.

Turning to effective resource use, one of the heartening facts from the Drew/UCLA case study was the sharply reduced demand on faculty:
The project was able to use a minimum of one faculty to facilitate all the virtual sessions, versus the 16
faculty to facilitate the traditional IPE simulation sessions.

There is a further, novel aspect of immersive environments that was identified in the Drew/UCLA case study: their use as spaces for research, modeling and testing, as noted earlier by Heinrichs et al.
Further, as immersive spaces capable of exhibiting ‘real work place’ characteristics, they will
increasingly be developed as a ‘safe play space’ for the modelling and testing of complex team-based
health related activities [38].

This chapter has demonstrated that interactive virtual patients in immersive
clinical environments offer a specific set of social and clinical experiences that
are appropriate for team-based learning and beyond. Further it has argued that
there are a number of internal and external drivers in medicine that will propel the
introduction of these two new technologies. The internal drivers include: rapid
change in medicine, constrained budgets, new equipment and techniques, less
training “face time”, and a rising interest in simulation whilst the external drivers
include: lower costs and the development of simulation technologies tailored for
medicine. These drivers will assist the adoption of both virtual patient and
virtual world technologies to deliver a powerful new set of tools for professional
development in medicine. They offer complex, immersive, cooperative,
engaging experiences for learning by doing … together.

References
[1] ESRB. (2014). Video Games Industry Statistics. Retrieved from ESRB:
www.esrb.org/about/video-game-industry-statistics.jsp
[2] Wikia. (2014). Video Games Sales Wiki. Retrieved from Wikia:
http://vgsales.wikia.com/wiki/Video_game_industry
[3] Bell MW. Towards a definition of virtual worlds. Journal of Virtual Worlds Research. 2008;1(1)
http://dx.doi.org/10.4101/jvwr.v1i1.283
[4] Heinrichs WL, Fellander-Tsai L, Davies D. Clinical Virtual Worlds: The Wider Implications for
Professional Development in Healthcare. In: Bredl K, Bösche W, eds. Serious Games and Virtual Worlds in Education, Professional Development and Healthcare. Hershey, USA: IGI Global; 2013. pp. 221–240.
[5] Wikipedia. (2014). Virtual Worlds. Retrieved from Wikipedia:
http://en.wikipedia.org/wiki/Virtual_world
[6] De Freitas S. (2008). Serious virtual worlds. A scoping guide. UK: JISC e-Learning
Programme, The Joint Information Systems Committee (JISC).
[7] Ghanbarzadeh R, Ghapanchi AH, Blumenstein M, Talaei-Khoei A. A decade of research on
the use of three-dimensional virtual worlds in health care: a systematic literature review. J.
Med. Internet Res. 2014;16(2):e47.
[8] Talbot TB, Sagae K, John B and Rizzo AA. Sorting out the virtual patient: how to exploit
artificial intelligence, game technology and sound educational practices to create engaging
role-playing simulations. Int.J. of Gaming and Computer-Mediated Simulations. 2012;4(3):1–
19.
[9] Medbiq.org. MedBiquitous Consortium – Advancing the Health Professions Through Technology Standards [Internet]. ANSI/MEDBIQ VP.10.1-2010. [cited 7 June 2015].
Available from: http://www.medbiq.org/
[10] Danforth DR, Procter M, Chen R, Johnson M and Heller R. Development of virtual patient
simulations for medical education. Journal for Virtual Worlds Research. 2009;2(2).
[11] Bracegirdle L and Chapman S. Programmable patients: Simulation of consultation skills in a
virtual environment. Bio-Algorithms and Med-Systems. 2010;6(11):111–115.
[12] Rizzo AA. An intelligent virtual human system for providing healthcare information and
support. Medicine Meets Virtual Reality. 2011;18:503–509.
[13] Heinrichs WL, Dev P, Davies D. Patients should not be passive! Creating and managing
active virtual patients in virtual clinical environments. In: Schoten B, Fedtke S, Schijven M,
Vosmeer M, Gekker A, eds. Games for Health. Wiesbaden: Springer Vieweg; 2014, pp. 56–
61.
[14] Poulton T and Balasubramaniam C. Virtual patients: a year of change. Medical Teacher.
2011; 33(11):933–937.
[15] Clinispace.com. CliniSpace – Immersive Learning Environments for Healthcare [Internet].
2015 [cited 7 June 2015]. Available from: http://www.clinispace.com/
[16] Ng G and Kutzin J. Red in the Face. Interprofessional Simulation Case Scenario. MedEdPortal Publications. 2011. doi: http://dx.doi.org/10.15766/mep_2374-8265.8408
[17] Kutzin J, Rosenthal M, Byrnes M, Mal F, Paparella-Pitzel S and Lo V. Sepsis in a Postpartum
Patient – A Simulation Scenario for Interprofessional Education. MedEdPortal Publications.
2012. doi: http://dx.doi.org/10.15766/mep_2374-8265.9236
[18] Innovationinlearning.com. Innovation in Learning, Inc.: Products [Internet]. 2015 [cited 7 June
2015]. Available from: http://www.innovationinlearning.com/products.html
[19] Innovation in Learning I. Virtual Sim Center [Internet]. virtualsimcenter.clinispace.com. 2015
[cited 7 June 2015]. Available from: http://virtualsimcenter.clinispace.com/
[20] Miller R. Miller’s Anesthesia. 6th Edition. Philadelphia: Elsevier, 2005.
[21] Stiegler M, Dhillon A, Huang Y, Uijdehaage S, Stiner J, Zacharia S et al. Non-Technical and
Cognitive Skills (NTCS) Self-Reflection and Faculty Evaluation Tools. MedEdPORTAL
Publications. 2011; https://www.mededportal.org/publication/9024
[22] Twitch. Parciaga [Internet]. 2015 [cited 7 June 2015]. Available from:
http://www.twitch.tv/parciaga
[23] Kim SH, Lee JL and Thomas MK. Between purpose and method: a review of educational
research on 3D virtual worlds. Journal for Virtual Worlds Research. 2012 5(1).
[24] Robbins RW and Butler BS. Selecting a virtual world platform for learning. Journal of
Information Systems Education. 2009;20(2):199–210.
[25] AmbientPerformance.com [Internet]. 2015 [cited 7 June 2015]. Available from:
http://www.ambientperformance.com
[26] Chodos D, Stroulia E, Boechler P, King S, Kuras P, Carbonaro M et al. Healthcare education
with virtual-world simulations. In: Clarke WJ, ed. Proceedings of the 2010 ICSE Workshop on Software Engineering in Health Care, SEHC’10. ACM. 2010, pp. 89–99.
[27] Bartle RA. Designing Virtual Worlds. San Francisco: New Riders Publishing, 2004.
[28] Cook DA. Computerized virtual patients in health professions education: a systematic review
and meta-analysis. Academic Medicine. 2010;85(10):1589–1602.
[29] Posel NS. Virtual patient cases: a qualitative study of the requirements and perceptions of
authors. International Journal of Medical Education. 2012;3:175–182.
[30] Liaw SY, Chan SW, Chen FG, Hooi SC and Siau C. Comparison of virtual patient simulation
with mannequin-based simulation for improving clinical performances in assessing and
managing clinical deterioration: randomized controlled trial. Journal of Internet Medical
Research. 2014;16(9).
[31] Epstein RM. Assessment in medical education. New England Journal of Medicine. 2007;
356(4):387–396.
[32] Heinrichs WL. Simulation for team training and assessment: case studies of online training with virtual worlds. World Journal of Surgery. 2008;32(2):161–170.
[33] Cooper SE. Measuring non-technical skills in medical emergency care: a review of
assessment measures. Open Access Emerg Med. 2010;2:7–16.
[34] Graafland MS. Systematic review of serious games for medical education and surgical skills
training. British Journal of Surgery. 2012;99(10):1322–1330.
[35] Arciaga P, Windokun A, Calmes D, Dev P and Shaheen M. Technology Innovations abstract
initial experience with the use of virtual simulation to teach students interprofessional
education: the Charles R. Drew University (CDU) experience. Simulation in Healthcare. 2013
Dec;8(6).
[36] Owens D, Mitchell A, Khazanchi D and Zigurs I. An empirical investigation of virtual world
projects and metaverse technology capabilities. SIGMIS Database. 2010;42(1):74–101.
[37] Davis AM. Avatars, people, and virtual worlds: Foundations for research in metaverses.
Journal of the Association for Information Systems. 2009;10(2):90-117.
[38] Heinrichs WL, Davies D and Davies J. Virtual worlds in healthcare: applications and
implications. In: Arnab S, Dunwell I and Debattista K, eds. Serious Games for Healthcare:
Applications and Implications. Hershey: IGI Global; 2013, pp. 1–22.
Joanna Jaworek-Korjakowska and Ryszard Tadeusiewicz
9 Melanoma thickness prediction

9.1 Introduction
Cancer is a generic term for a large group of diseases that can affect any part of
the body. Other terms which are used are malignant tumors and neoplasms. One
defining feature of cancer is the rapid creation of abnormal cells that grow
beyond their usual boundaries, and which can then invade the adjoining parts of
the body and spread to other organs. Cancer arises from one single cell. The
transformation from a normal cell into a tumor cell is a multistage process, typically involving a progression from a precancerous lesion to a malignant tumor. These changes result from the interaction between a person’s genetic factors and three categories of external agents: physical, chemical and biological carcinogens [1]. Cancer is a complex group of diseases with many
possible causes, including genetic factors, lifestyle factors, such as tobacco use,
diet, and physical activity, as well as certain types of infections and
environmental exposures to different types of chemicals and radiation. Aging is
another fundamental factor in the development of cancer. The incidence of cancer increases dramatically with age, mostly because the risks for specific cancers accumulate over a lifetime, a build-up compounded by the tendency of cellular repair mechanisms to become less effective as a person grows older. Cancer is a leading cause of death worldwide,
accounting for 8.2 million deaths in 2012 [1]. Fig. 9.1 presents the incidence of
selected cancers in Australia during the last 20 years.
Fig. 9.1 Incidence rate of leading cancers in Australia [2].

In this chapter we describe the prediction of melanoma thickness, one of the most important parameters in skin cancer diagnosis and treatment. The chapter is organized in six sections as follows. Section 9.2, Motivation gives an overall
description of the importance of the skin cancer diagnosis issue and also presents
alarming statistics. Section 9.3, Clinical definition and importance presents the
most significant medical aspects of melanoma such as melanoma disease stages
and diagnostic algorithms. Section 9.4, Determination of melanoma thickness
describes the algorithm of melanoma thickness prediction including dermoscopic
criteria (global and local features) as well as most predictive criteria for thin and
thick melanoma differentiation. Section 9.5, Melanoma thickness simulations
specifies the overview of the software and presents the proposed application.
Section 9.6 closes this chapter with conclusions, discusses the results and
highlights future possibilities.

9.2 Motivation
In the last few years an increasing rate of melanoma incidence has been observed worldwide, and diagnoses and deaths are rising more rapidly than for any other form of skin cancer. Moreover, melanomas are fast growing and highly
malignant tumors, often spreading to nearby lymph nodes, lungs and brain. One
of the major contributors to the development of melanoma is ultraviolet radiation
(long-term sun exposure and sunburn) that causes damage to the DNA in cells.
Also, the negative influence of certain lifestyle behaviors is of great importance.
Malignant melanoma is likely to become one of the most common malignant tumors in the future, possibly reaching a ten times higher incidence rate. Since the
early 1970s, malignant melanoma incidence has increased significantly, for
example in the USA it is growing by approximately 4 percent every year [3].

Fig. 9.2: Malignant melanoma incidence and mortality rate statistics during the last 50 years.
Based on [25].

Current estimates show that by 2015 one in fifty Americans will develop melanoma in their lifetime; indeed the statistics may be even worse [4]. The
country with the highest rates of skin cancer in the world is Australia where over
750 000 people are treated for skin cancer every year [10]. However, even in
Poland, where the authors of this chapter are from, the number of occurrences of
this disease is alarming. Fig. 9.2 presents the growth of incidence and mortality
rate for malignant melanoma during the last 50 years.
Due to the high skin cancer incidence, dermatological oncology has become
a quickly developing branch of medicine. Nowadays the progress is visible both
in primary research concerning pathogenesis of tumors (the role of genes or
viruses in tumor development) and in the development of new, more efficient
methods of diagnosis [5]. One of the main tasks of modern dermatology is the
detection of melanoma in its early stage of development, because the survival
rate after identification of a less than 0.75 mm thick melanoma is nearly 100%
[6, 7]. This can be achieved by combining new instruments with computer-aided
diagnosis. Most of these automated systems are based on the widely used
melanoma diagnosis method: the ABCD rule of dermoscopy. In general, the
systems are based on medical image processing techniques, feature analysis
algorithms, as well as classification methods. Their potential benefits are very
promising, but the implementation of the diagnostic algorithms, development
and their use in clinical practice still poses a big challenge [8, 9]. It has been
widely stated that they can achieve the same accuracy range as dermoscopic
diagnosis performed by experts or they can even be more accurate [10]. Another
interesting and necessary goal of modern dermatology, and the topic of this chapter, is to assess the thickness of the skin mole as accurately as possible; the thickness is then used to establish the size of the surgical margin.

9.3 Clinical definition and importance


The skin is the body’s largest organ; it covers the entire body and protects it against infection, sunlight and injury. As in other parts of the body, cancer may also occur in the skin. Skin cancer is the most commonly diagnosed type of
cancer in all people, regardless of age, gender or race. The most malignant type
of skin cancer is melanoma, which is a dangerous proliferation of melanocytes.
Malignant melanoma (Latin: melanoma malignum) originates in pigment
producing cells called melanocytes, which derive from neural crest (Fig. 9.3).
This tumor occurs mainly in skin, but it can also be found in mucous membranes
of the gastrointestinal tract and even in the eye [6, 7, 11, 12].
The examination of small moles is possible through digital epiluminescence microscopy (ELM, also dermoscopy or dermatoscopy), a noninvasive, in vivo technique which, by employing the optical phenomenon of oil immersion, makes subsurface structures of the skin accessible for examination and thus provides additional criteria for the clinical diagnosis of pigmented skin lesions [6, 14]. ELM uses optical magnification to visualize the features
that are otherwise invisible to the naked eye (as shown in Fig. 9.4).

Fig. 9.3 Skin cancer begins in the cells that make up the outer layer (epidermis) of the skin.
Melanoma begins in the pigment layer which contains melanocytes. Basal cell carcinoma, another
type of skin cancer, arises in the basal cells of the outer part of the skin [13].

Fig. 9.4 Lesion observed with the naked eye in comparison to the dermoscopy examination. The
global and local features become visible (based on [6]).
Diagnostic instruments commonly used for dermoscopic examination like
the dermatoscope and the videodermatoscope (digital dermatoscope) are
frequently connected to a personal computer or tablet. The videodermatoscope
enables medical image processing and analysis, image storage and indirect
visualization of the skin lesion on a monitor in high quality. The dermoscopic
diagnosis of a pigmented skin lesion is based on various analytic methods and
algorithms that have been developed in the last 25 years. The most important and
widely used ones include: pattern analysis, ABCD rule of dermoscopy, 7-point
checklist and the Menzies method [6]. The common denominator of all these
diagnostic methods are the particular dermoscopic features, called dermoscopic
criteria, that represent the rationale for the morphological diagnosis of
pigmented skin lesions [6]. Physicians assess the dermoscopic image based on
the presence or absence of different global and local features.
Besides diagnostic algorithms, another crucial issue is the determination of
melanoma thickness. A melanoma’s disease stage is determined by the thickness,
depth of penetration and the degree to which the melanoma has spread to lymph
nodes and distant parts in the body. Doctors all around the world use a number of
different systems and scales to describe the stages of melanoma. In this work we
will shortly describe a few of them, including the Breslow Scale, the Clark Scale
and the TNM staging system developed and maintained by the American Joint
Committee on Cancer (AJCC) [15].
The depth of invasion was first reported as a prognostic factor in melanoma
by the pathologist Alexander Breslow, M.D. at George Washington University in
1970 [16]. For the Breslow scale, a pathologist measures the thickness of the
melanoma with a small ruler, called a micrometer. It measures in millimeters
(mm) how far the melanoma cells have reached down through the skin from the
surface. In recognition of his contribution, the depth of invasion of melanoma is
referred to by the eponym Breslow’s depth.
The Clark scale measures how deeply the melanoma has grown into the skin
and which levels of the skin are affected. In Clark’s scale five anatomical levels
are recognized, and higher levels worsen prognostic implications. These days, it
is more common for doctors to use the Breslow scale rather than the Clark scale
when they stage melanoma. Clark’s level has a lower predictive value, is
less reproducible, and is more operator-dependent in comparison with Breslow’s
method [16].
Nowadays, the most popular staging method is the TNM guideline. The
TNM system is based on three pieces of information: Tumor, Node, and
Metastases. This staging system describes the size of a primary tumor (T),
determines if any of the lymph nodes contain cancer cells (N) and whether the
cancer has spread to another part of the body (M). Each of the parameters of
the TNM system can be classified into different staging groups; therefore we will
concentrate only on the tumor parameter. The T category is assigned a number
from 0 to 4 according to the tumor’s thickness [15–17].
In general, the thicker the tumor appears, the more serious the disease is
likely to be (Fig. 9.5). In stage 0, the abnormal melanocytes are found in the
epidermis and this stage is also called melanoma in situ. Stage I refers to the
phase when cancer has already been formed and is less than 2 mm deep with no
ulceration. In stage II the tumor is about 4 mm thick with ulceration. In stage III
cancer spreads to one or more lymph nodes and becomes increasingly deeper. In
the last stage, IV, the cancer spreads to other parts of the body, such as the lung,
liver, brain, bone, soft tissue, or gastrointestinal (GI) tract. At this stage, cancer
may have already spread to places in the skin which are far away from where it
first started [18, 19].

Fig. 9.5: Presentation of five stages in malignant melanoma evolution process [20].

In the next section we will describe the thickness algorithm based on the
analysis of clinical and dermoscopic criteria.

9.4 Algorithm for the determination of melanoma thickness


The stages of melanoma, described in the previous section, show how deeply it
has grown into the skin and whether it has spread to lymph nodes, other organs,
or not. In this section we will describe the melanoma thickness algorithm
proposed by Argenziano in 1999 [6]. To determine the melanoma thickness,
both clinical and dermoscopic criteria are taken into account. It has been proved
that the combination of both criteria increases the accuracy of the melanoma
thickness prediction.
The clinical criteria include melanoma elevation, which can be categorized
as flat, palpable, or nodular. Fig. 9.6 presents the differences in the appearance of a mole during the examination.

Fig. 9.6: Melanoma elevation: (a) flat, (b) palpable and (c) nodular [6].

Another clinical criterion which is important in the evaluation of melanoma thickness is the largest diameter of the skin mole.
In 1997 Argenziano described the most significant dermoscopic criteria
including pigment network, blue-whitish veil and vascular pattern that can be
associated with the particular melanoma thickness category [6].

Pigment network. The pigment network appears as a delicate, regular grid of brownish lines over a diffuse light-brown background, as shown in Fig. 9.7 [6]. A
typical pigment network is regularly distributed throughout the lesion, while an
atypical pigment network is irregularly meshed and dispersed.
Histopathologically the pigment network correlates to the dermal papillae with
pigmented and elongated rete ridges.

Fig. 9.7: Comparison of pigmented network observed during a dermoscopy examination and
histopathological appearance [6].

Blue-whitish veil. The blue-whitish veil is described as an irregular structureless area of confluent blue to gray pigment. This local structure is mostly found in malignant melanomas and Spitz/Reed nevi. In histopathological images the
blue-whitish veil corresponds to an acanthotic epidermis with compact
orthokeratosis and clear hypergranulosis usually overlying a large melanin area,
such as confluent nests of heavily pigmented melanocytes in the upper dermis
[6] (see Fig. 9.8).
Fig. 9.8: Comparison of blue-whitish veil observed during a dermoscopy examination and
histopathological appearance [6].

Vascular structures. The vascular structures appear as red dots or lines. In dermoscopic images a broad spectrum of different types of vascular patterns can
be found. Atypical vascular patterns which are crucial for the detection of
melanomas appear as irregularly distributed structures within the skin mole (see
Fig. 9.9).

Fig. 9.9 Different types of vascular structures [6].

Before we present the melanoma thickness algorithm we would like to describe the three main categories of melanoma thickness proposed in [6]:
– Thin melanoma: is a skin lesion with expected thickness less than 0.76 mm.
– Intermediate melanoma: is a skin lesion with expected thickness between 0.76
and 1.5 mm.
– Thick melanoma: is a skin lesion with expected thickness above 1.5 mm.
Fig. 9.10: Melanoma thickness examples: (a) thin, (b) intermediate, (c) thick [6].

Tab. 9.1: Most predictive criteria for each category of melanoma thickness [6].

Melanoma thickness           Criteria

Thin (< 0.76 mm)             – flat lesion (near 100%)
                             – palpable lesion (near 62%)
                             – absent blue-whitish veil
                             – absent pigment network
                             – absent atypical vascular structure

Intermediate (0.76–1.5 mm)   – palpable or nodular lesion
                             – present blue-whitish veil
                             – present pigment network
                             – present atypical vascular structure
                             – diameter < 15 mm

Thick (> 1.5 mm)             – nodular lesion (near 50%)
                             – present blue-whitish veil
                             – present pigment network
                             – present atypical vascular structure
                             – diameter > 15 mm

Tab. 9.1 presents the most predictive criteria for each of the categories of
melanoma thickness.
In [6] Argenziano proposed an algorithm for the preoperative evaluation of
melanoma thickness which is presented in Fig. 9.11. In cases other than those
listed in the algorithm we are not able to determine whether the melanoma is
thin, intermediate or thick. In Fig. 9.10 we present three examples of melanoma
thickness. The first example represents a thin melanoma: a palpable lesion without a blue-whitish veil or atypical vascular pattern, but with a pigment network. The second example is an intermediate melanoma. It is also a
palpable lesion but additionally contains a blue-whitish veil and vascular
structures. The last example is a thick melanoma. This is a nodular case with
diameter greater than 15 mm and a blue-whitish veil.
Fig. 9.11: Algorithm for the determination of melanoma thickness (based on [6]).
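The decision rules of Tab. 9.1 and Fig. 9.11 can be expressed compactly in code. The following Python fragment is a minimal illustrative sketch, not the authors’ implementation: the function name, the encoding of the criteria and the exact order of the checks are our assumptions, and cases not covered by the algorithm are reported as indeterminate, as noted above.

def predict_thickness(elevation, diameter_mm, veil, network, atypical_vessels):
    """Rule-based sketch of the thickness categories from Tab. 9.1.
    elevation: 'flat' | 'palpable' | 'nodular'; diameter in millimeters;
    the remaining arguments are booleans for the dermoscopic criteria."""
    dermoscopic_absent = not (veil or network or atypical_vessels)
    dermoscopic_present = veil and network and atypical_vessels
    if elevation in ("flat", "palpable") and dermoscopic_absent:
        return "thin (< 0.76 mm)"
    if elevation in ("palpable", "nodular") and dermoscopic_present \
            and diameter_mm < 15:
        return "intermediate (0.76-1.5 mm)"
    if elevation == "nodular" and dermoscopic_present and diameter_mm > 15:
        return "thick (> 1.5 mm)"
    return "indeterminate"  # cases not listed in the algorithm

# Example call for a hypothetical nodular lesion with all three
# dermoscopic criteria present and a diameter above 15 mm:
print(predict_thickness("nodular", 17.0, veil=True, network=True,
                        atypical_vessels=True))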

9.5 Melanoma thickness simulations


The automatic diagnostic system has been designed to enable the classification
of the skin lesion with the determination of melanoma thickness algorithm. The
overview of the algorithm is presented in Fig. 9.12. The system is divided into
four main stages, including preprocessing (image enhancement), segmentation,
dermoscopic criteria detection and classification. Each stage consists of several smaller steps, mostly presented as a single algorithm.

Fig. 9.12: System overview.

Now, we would like to describe the main goal of each stage presented in Fig.
9.12. The first step in every medical image processing system is image
acquisition, which aims at obtaining an image of the best quality. In most cases,
the next stage is called preprocessing and it is responsible for reducing the
amount of artifacts and noise in the image, which results both in better border
detection, segmentation and the classification stage. After the proper
preprocessing of the image the next step, called segmentation, which focuses on
the precise detection of the region of interest, is applied. The outcome of the
segmentation stage is of great importance. Parameters and features are calculated
on the basis of this region which, in turn, affects the next step – classification. In
many cases it is quite difficult to obtain high accuracy in medical imaging due to
the complexity of these kinds of images. After the segmentation stage, the
feature detection and analysis can take place. The acquired parameters influence
and determine the last stage, which is classification.
For dermoscopy images the preprocessing step is obligatory, because of
extraneous artifacts, such as skin lines, air bubbles and hairs which appear in
almost every image. The preprocessing stage consists of three parts, because of
the specific features of the dermoscopy images. The first step is the removal of
black frames that are introduced during the digitization process. The second step
is a simple smoothing filter. The last step is connected with dark and thick hairs
which need to be detected and inpainted before the segmentation process and local feature
extraction (Fig. 9.13). The detailed description of the presented algorithms can
be found in [21].
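To make the hair-removal step of Fig. 9.13 concrete, a minimal sketch using OpenCV is shown below. The black top-hat transform highlights thin dark structures, which are then thresholded into a hair mask and inpainted; the kernel size, threshold value and file names are illustrative assumptions, not the parameters of the algorithm in [21].

import cv2

img = cv2.imread("dermoscopy.jpg")             # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # step (c): gray scale
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
# Step (d): black top-hat transform emphasizes thin dark structures (hairs).
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
_, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
# Step (f): fill the detected hair pixels from their surroundings.
clean = cv2.inpaint(img, hair_mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("dermoscopy_clean.jpg", clean)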
The segmentation process is one of the most important and challenging steps
in dermoscopy image processing. It has to be fast and accurate, because the
subsequent steps crucially depend on it. Furthermore, the analysis of clinical
local features, such as pigment network, streaks, dots and globules, depends on
the accuracy of the border detection. The segmentation process for dermoscopic
images is extremely difficult due to the low contrast between the healthy skin
and the mole. The skin lesion mostly has varied coloring, and the surrounding skin may retain traces of the preprocessing step, which makes the process even harder to carry out. Because of the difficulties described above, numerous
methods have been implemented and tested. The detailed description of the
chosen region-growing segmentation algorithm can be found in works [22–24].
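To give a flavor of the approach, the sketch below implements a deliberately simplified region growing: starting from a manually chosen seed pixel, it adds 4-connected neighbors whose gray-level intensity stays within a fixed tolerance of the seed. The algorithm of [22–24] is considerably more elaborate; the tolerance value and interface here are our assumptions.

import numpy as np
from collections import deque

def region_grow(gray, seed, tol=25):
    """Return a boolean mask of pixels connected to `seed` whose
    intensity differs from the seed intensity by at most `tol`."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = float(gray[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(float(gray[ny, nx]) - ref) <= tol:
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask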
Fig. 9.13: Black frame and hair removal process: (a) input image, (b) after frame removal, (c) gray
scale, (d) result of top-hat transform and threshold, (e) hair detection, (f) inpainting.

The clinical criteria, including melanoma elevation and maximal diameter, should be entered by the user of the application, as their assessment is difficult for an automatic diagnostic system.
The detection of the pigment network consists of two steps. Firstly, we
perform the extraction of the dark local structures including pigment network as
well as dots and globules on the segmented area with the adaptive threshold
algorithm on the RGB images. In the second step we have to differentiate
globules from the pigment network. In our research we use the shape parameter
solidity to distinguish the pigment network from dots and globules. Solidity
refers to the extent to which the shape is convex or concave. The solidity of
convex shapes is always 1. We calculate the solidity parameter for each labeled
region. If the parameter is close to 1 we remove the object from the image. In
Fig. 9.14 we show the result of each step.
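A minimal sketch of the second step, using scikit-image, is given below; the 0.9 cut-off is an illustrative assumption, not the threshold used in our experiments.

import numpy as np
from skimage.measure import label, regionprops

def keep_pigment_network(binary_mask, solidity_cutoff=0.9):
    """Discard compact (convex) blobs such as dots and globules,
    keeping the net-like components of the pigment network."""
    labeled = label(binary_mask)
    keep = np.zeros_like(binary_mask, dtype=bool)
    for region in regionprops(labeled):
        # Solidity = region area / convex hull area; values close to 1
        # indicate convex blobs, which are removed from the image.
        if region.solidity < solidity_cutoff:
            keep[labeled == region.label] = True
    return keep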
A blue-whitish veil appears as a diffuse gray-blue area. Detection of
homogeneous areas is mostly based on the analysis of different color spaces like
RGB, HSI or HSV. Through experimental studies we have obtained individual
values for each color. For every pixel in the lesion the nearest reference color is
found by calculating the Euclidean distance to the pixel color.
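The nearest-reference-color idea can be sketched as follows; the reference RGB values below are placeholders, not the experimentally obtained values mentioned above.

import numpy as np

REFERENCE = {                       # hypothetical reference colors (RGB)
    "blue-gray": (120, 135, 160),
    "brown": (120, 80, 50),
    "skin": (220, 180, 160),
}

def veil_mask(rgb_image):
    names = list(REFERENCE)
    refs = np.array([REFERENCE[n] for n in names], dtype=float)
    pixels = rgb_image.reshape(-1, 3).astype(float)
    # Euclidean distance from every pixel to every reference color.
    dists = np.linalg.norm(pixels[:, None, :] - refs[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    mask = np.array(names)[nearest] == "blue-gray"
    return mask.reshape(rgb_image.shape[:2])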
The detection of atypical vascular structures is a challenging problem in
dermoscopic image analysis. In dermoscopic images, up to seven different
vascular structures can be found. Vascular structures that are relatively common
in melanomas include irregularly distributed dotted vessels and irregular linear
vessels. The evaluation of vascular structures can be achieved in two steps.
Firstly, the red areas lying on the segmented area of the skin mole have to be
marked. Secondly, based on different experimental results the detected areas
have to be classified as typical or atypical.
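A sketch of the first step only – marking reddish pixels inside the segmented lesion – is shown below; the thresholds are illustrative, and the typical/atypical classification of the second step is omitted, as it rests on experimentally derived criteria.

import numpy as np

def red_area_mask(rgb_image, lesion_mask):
    """Mark reddish pixels lying inside the segmented skin mole."""
    r, g, b = (rgb_image[..., i].astype(float) for i in range(3))
    reddish = (r > 120) & (r > 1.3 * g) & (r > 1.3 * b)
    return reddish & lesion_mask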
Fig. 9.14: Pigment network detection: (a) medical image, (b) the first step (dark local structure detection), (c) the second step (removal of dots and globules).

Fig. 9.15: Example for blue-whitish veil detection: (a) dermoscopy image, (b) result.

The implemented Thickness App is based on the described algorithm for the
determination of melanoma thickness. After launching the application the user
can select an image from a proposed database (Fig. 9.16).
After this step the user is asked to choose the means of evaluating the
melanoma thickness (automatic diagnosis or self-assessment). If the user
chooses the automatic diagnosis button, the options in Fig. 9.17 are
automatically opened. If the user selects the self-assessment option, Fig. 9.17 is
empty and ready to be marked by the user. After choosing the clinical and
dermoscopic criteria based on the dermoscopic image (Fig. 9.18) the system
calculates the final decision which is presented in Fig. 9.19.
Fig. 9.16: Thickness App: the user selects the image to be assessed.

Fig. 9.17: Thickness App: the user selects the way the image is evaluated.

Fig. 9.18: Thickness App: the user selects the clinical and dermoscopic criteria.
Fig. 9.19: Thickness App: result for the evaluation of melanoma thickness.

9.6 Conclusions
In the last few years significant progress has been made in the fields of
electronics, computer science and biomedical engineering. A very promising tool
which combines all of these achievements is the implementation of automatic
medical diagnostic systems that are becoming crucial in diagnostic processes and
in healthcare systems today. Radiology services including X-ray, CT and MRI,
which are extremely important for patients and their physicians for accurate
diagnosis, would be impossible without innovative diagnostic technology.
Undoubtedly, this also applies to dermatology. Screening systems can be used
not only by young, inexperienced dermatologists but first and foremost by family
physicians, which can contribute to the early detection of melanoma. Regular
check-ups play a vital role in allowing for the early detection of melanoma.
Screening systems also provide a new opportunity for people living in remote
and rural areas, outside regional centers and thus facing difficulties in making an
appointment with a dermatologist. The importance of diagnosing melanoma in
the early stage and the reduction of the melanoma-related mortality rate can be
achieved by precise computer-aided diagnostic systems.
The proposed and described melanoma thickness application, based on publication [6], is another opportunity to help young and inexperienced
dermatologists who might have problems with the diagnosis of pigmented skin
lesions. The aim of the presented automatic diagnostic system is to increase the
correct evaluation of melanoma thickness and reduce the number of unnecessary
biopsies.
You are more than welcome to familiarize yourself with these issues as well
as experiment with the application on the website:
home.agh.edu.pl/jaworek/ThicknessApp.

Acknowledgment
Scientific work partly supported by the AGH University of Science and
Technology as project number 11.11.120.612. The work of Joanna Jaworek-
Korjakowska was funded by the National Science Center, Poland based on the
decision number DEC-2011/01/N/ST7/06783.
References
[1] World Health Organization, “Cancer – Key Facts,” 2014.
www.who.int/mediacentre/factsheets/fs297/en/.
[2] Australian Institute of Health and Welfare, “Cancer in Australia in brief,” 2014. http://www.aihw.gov.au/cancer-in-australia/in-brief/.
[3] World Health Organization, “Health effects of UV radiation,” 2014.
http://www.who.int/uv/health/uv_health2/en/index1.html.
[4] Sun Smart, “Skin cancer facts and stats,” 2014. www.sunsmart.com.au/about/skin-cancer-facts-stats.
[5] Celebi ME, Stoecker WV and Moss RH. Advances in skin cancer image analysis.
Computerized Medical Imaging and Graphics. 2011;35(2):83–84.
[6] Argenziano G, Soyer H, De Giorgi V et al. Interactive Atlas of Dermoscopy (Book and CD-
ROM). Milano: Edra Medical Publishing and New Media, 2000.
[7] Jaworek-Korjakowska J. Automatic detection of melanomas: An application based on the
ABCD criteria. In: Piętka E and Kawa J, eds. Information Technologies in Biomedicine
Proceedings Lecture Notes in Computer Science. Berlin: Springer; 2012, pp. 67–76.
[8] Scharcanski J and Celebi ME, eds. Computer Vision Techniques for the Diagnosis of Skin
Cancer. Berlin: Springer Verlag, 2014.
[9] Korotkov K and Garcia R. Computerized analysis of pigmented skin lesions: a review.
Artificial Intelligence in Medicine. 2012;56(2):69–90.
[10] Fabbrocini G, De Vita V, Cacciapuoti G, Di Leo G, Liguori C, Paolillo A, Pietrosanto A and
Sommella P. Automatic diagnosis of melanoma based on the 7-Point Checklist. In:
Scharcanski J and Celebi ME, eds. Computer Vision Techniques for the Diagnosis of Skin
Cancer. Berlin: Springer Verlag; 2014, pp. 71–109.
[11] Jaworek-Korjakowska J and Tadeusiewicz R. Assessment of asymmetry in dermoscopic
colour images of pigmented skin lesions. In: Boccaccini AR, ed. Proceedings of 10th IASTED
International Conference on Biomedical Engineering, (Biomed 2013) Innsbruck, Austria,
2013.
[12] Cancer Research UK, “Skin cancer Key Facts,” 2012. www.cancerresearchuk.org/.
[13] Human skin diagram, 2015, http://en.wikipedia.org/wiki/Skin.
[14] Binder M, Schwarz M, Winkler A, Steiner A, Kaider A, Wolff K and Pehamberger H.
Epiluminescence microscopy: a useful tool for the diagnosis of pigmented skin lesions for
formally trained dermatologists. Archives of Dermatology. 1995;131:286–291.
[15] Melanoma of the skin. In: Edge SB, Byrd DR, Compton CC, et al., eds. AJCC Cancer
Staging Manual. 7th ed. New York, NY: Springer; 2010, pp. 325–44.
[16] Stages of melanoma, 2014. http://www.cancerresearchuk.org/about-
cancer/type/melanoma/treatment/stages-of-melanoma#bres.
[17] Melanoma, 2014, http://www.drugs.com/mcd/melanoma.
[18] National Cancer Institute, “Stages of melanoma,” 2014. www.cancer.gov.
[19] Melanoma Foundation of New Zealand, “About melanoma – key information,” 2014.
http://www.melanoma.org.nz/About-Melanoma/Key-Information/.
[20] Melanoma stages, 2015, http://en.wikipedia.org/wiki/Melanoma.
[21] Jaworek-Korjakowska J and Tadeusiewicz R. Hair removal from dermoscopic colour images.
Bio-Algorithms and Med Systems. 2013;9(2):53–58.
[22] Jaworek-Korjakowska J, Analiza i detekcja struktur lokalnych w czerniaku zlosliwym
(Detection and analysis of local structures in malignant melanoma). PhD thesis, AGH
University of Science and Technology, Krakow, Poland, 2013.
[23] Jaworek-Korjakowska J and Tadeusiewicz R. Determination of border irregularity in
dermoscopic color images of pigmented skin lesions. Engineering in Medicine and Biology
Society (EMBC), 2014 36th Annual International Conference of the IEEE. 26–30 Aug. 2014,
pp. 6459–6462,
[24] Jaworek-Korjakowska J and Tadeusiewicz R. Assessment of dots and globules in
dermoscopic color images as one of the 7-point checklist criteria. In: Proceedings of IEEE
International Conference on Image Processing (ICIP 2013), Melbourne, Australia, 2013, pp.
1456–1460.
[25] National Cancer Register, “Onkologia – czerniak złośliwy (Oncology – malignant melanoma),” 2014. http://www.onkologia.org.pl/.
Part VI: Therapy
Ryszard Tadeusiewicz
10 Simulating cancer chemotherapy

10.1 Simulating untreated cancer


As is commonly known, cancer occurs when healthy tissue mutates, forming
cells which proliferate without bound and ignore external signals commanding
them to stop multiplying. This spontaneous proliferation causes a progressive
increase in the number of anomalous cells until they begin to disrupt the function
of their host organ. At later stages of the process (which will not be considered
in our simulation) individual “rogue” cells may detach from the primary tumor,
penetrate into the bloodstream and invade the lymphatic system. These cells then
cause metastasis by lodging in other organs, continuing to multiply and
eventually causing the failure of those organs as well. Fig. 10.1 provides a
schematic depiction of cancer development.

Fig. 10.1: Schematic depiction of the development of cancer.

Let us try to model the presented process. In this section, as well as the
following two sections, we will consider a specific type of childhood cancer
known as neuroblastoma – although it should be noted that any other type of
cancer may be modeled in a similar fashion. The state of the disease may be
described by referring to the number of anomalous (cancerous) cells. This
number changes over time: as cells continue to divide, the overall rate of
proliferation increases in proportion to the number of existing cancerous cells. If
we designate the number of proliferating cells as P and acknowledge the temporal variability of this number by using the notation P(t), the value of this function is given by a differential equation which relates the increase in the number of cells (dP/dt) to the current number of such cells, P(t), multiplied by the proliferation rate constant γ:

dP(t)/dt = γ · P(t)    (10.1)
Equation (10.1) describes a type 1 dynamic object which corresponds to the
model presented in Fig. 10.2.

Fig. 10.2: Cancer development model as a type 1 dynamic object.

The model has one input – the proliferation rate constant γ – which is arbitrarily set but remains fixed throughout the simulation phase:

X(t) = γ = const    (10.2)
We are also dealing with a single time-dependent process: the production of additional cancerous cells as a result of proliferation, P(t), which corresponds to the output function Y(t):

Y(t) = P(t)    (10.3)
It is easy to prove that for any initial value P(0) greater than 0, P(t) exhibits exponential growth: P(t) = P(0) · e^(γt). Such growth is initially slow but eventually accelerates. Fig. 10.3 presents sample solutions to equation (10.1) for various values of γ. In
practice, the proliferation rate depends on the patient’s age: in infants and young
children whose organisms are developing rapidly, γ tends to be greater than in
the elderly. The consequences – from the point of view of disease development –
are illustrated in Fig. 10.3 which shows various time curves corresponding to
approximate patient age.
Fig. 10.3: Solutions to equation (10.1) for various values of γ.

Fig. 10.4: The model explains the dynamics of the disease and predicts its conclusion.

It is evident that cancer progresses more quickly in children than in older patients (note that while older people are more likely to develop cancer in the
first place, this correlation is beyond the scope of our sample model).
We should note that solutions to equation (10.1) are characterized by a rapid
increase in the rate of P(t) growth for high values of t, especially when γ is also
large. This creates very unfavorable conditions for patients: early stages of the
disease can go entirely unnoticed but the proliferation curve eventually becomes
steep and the patient’s state will rapidly deteriorate, leading to inevitable death
unless the cancer can be eradicated surgically or pharmacologically (Fig. 10.4).
Further analysis of the curves plotted in Fig. 10.3 reveals another important
aspect: solutions to equation (10.1) are strongly dependent on initial conditions.
For low initial values of P(t), i.e. when the initial number of proliferating cells is
small, cancer develops more slowly and is easier to control. On the other hand,
starting with a large value of P(t) produces a much steeper curve, reducing the
efficacy of treatment (see Fig. 10.5). This fact corresponds to the basic rule in
oncology which emphasizes early detection as a prerequisite of successful
treatment.
The above mentioned properties of our biocybernetic model of a dynamic
system (cancer development) follow from analysis of the time curves given by
equation (10.1) for various combinations of initial conditions and proliferation
rate constants. While the model is quite primitive and does not – by itself – yield
much insight, the reader is encouraged to try out the simulation software
available at www.tadeusiewicz.pl. Fig. 10.6 presents the main window of the
application. The program is user-friendly and may lead to a number of
interesting observations.
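Readers who prefer a few lines of code to the packaged application can tabulate the closed-form solution of equation (10.1) directly. The sketch below uses arbitrary illustrative values for the time axis, the initial cell count and the rate constants; it reproduces the qualitative behavior of the curves in Fig. 10.3.

import numpy as np

t = np.linspace(0.0, 10.0, 101)      # arbitrary time axis
P0 = 100.0                           # initial number of proliferating cells
for gamma in (0.2, 0.4, 0.6):        # illustrative proliferation rates
    P = P0 * np.exp(gamma * t)       # closed-form solution of (10.1)
    print(f"gamma = {gamma}: P(10) = {P[-1]:.0f}")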

Fig. 10.5: Time curves given by equation (10.1) for various initial conditions.

Fig. 10.6: User interface of the cancer proliferation model application.


10.2 Enhanced model of untreated cancer
In order to explain the principles of constructing advanced biocybernetic systems
let us return to our sample model and observe that actual tumors often consist of
rapidly proliferating cells (usually located near the tumor’s surface) as well as
“quiescent” cells which have stopped dividing and are found deep inside the
tumor. Assuming that P(t) denotes the number of proliferating cells and Q(t)
corresponds to the number of quiescent cells, Fig. 10.7 presents the enhanced
model of an affected organ.

Fig. 10.7: Enhanced cancer development model.

For the sake of convenience we will divide the model into two parts which
correspond to P(t) and Q(t) respectively, as shown in Fig. 10.8. The composition
of both groups changes with time: some cells remain in their original group
while others move to the other group. Let us define two coefficients α and β such
that αP(t) proliferating cells become quiescent while βQ(t) quiescent cells
resume proliferation. In Fig. 10.8 α and β represent the gain coefficients of
blocks which “shift” cells from the proliferating group to the quiescent group
and the other way around.

Fig. 10.8: Cancer development as a biocybernetic model describing a complex object.

Comparing Figs. 10.7 and 10.8 reveals how the introduction of an internal structure mirrors the properties of a complex system. The formal (mathematical) definitions of the objects described as “proliferating” and “quiescent” in Fig. 10.8 are given by equations (10.4) and (10.5):

dP(t)/dt = γ · P(t) - α · P(t) + β · Q(t)    (10.4)

dQ(t)/dt = α · P(t) - β · Q(t)    (10.5)
Resolving these equations produces time curves similar to the ones depicted in
Figs. 10.3, 10.4 and 10.5, but instead of individual curves we are now dealing
with pairs of curves.
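A minimal numerical sketch of equations (10.4) and (10.5), using SciPy’s solve_ivp, is shown below; the rate constants and initial conditions are illustrative choices, not the defaults of the simulation application.

from scipy.integrate import solve_ivp

gamma, alpha, beta = 0.5, 0.3, 0.1         # illustrative rate constants

def model(t, y):
    P, Q = y
    dP = gamma * P - alpha * P + beta * Q  # equation (10.4)
    dQ = alpha * P - beta * Q              # equation (10.5)
    return [dP, dQ]

# Start with 100 proliferating and no quiescent cells.
sol = solve_ivp(model, (0.0, 20.0), [100.0, 0.0])
P, Q = sol.y
print(f"final P = {P[-1]:.0f}, final Q = {Q[-1]:.0f}")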
While a competent mathematician might be able to draw a number of
conclusions simply by looking at equations (10.4) and (10.5), biocyberneticians
should instead refer to the simulation model: merely imagining the dynamic
interplay of two separate cell populations in a general case is neither simple nor
reliable. To avoid falling into the trap of unsupported correlations and false
conclusions, we will now proceed with the analysis of the simulated behavior of
our object. To facilitate computer-aided simulations it might be useful to begin
by modifying the model once again, striking a compromise between the general case shown in Fig. 10.7 and the rather complex system seen in Fig. 10.8. The resulting enhanced formal model is visualized in Fig. 10.9.

Fig. 10.9: Schematic depiction of the cancer model adapted for simulations.
As shown, the model remains a type 1 dynamic object, with one input (γ)
and two output signals: P(t) and Q(t). Additionally, the model has two
parameters (α and β) which may affect its behavior. This object can be simulated
with the application available at www.tadeusiewicz.pl, as shown in Fig. 10.10.
Fig. 10.10: User interface of the cancer proliferation and quiescence model application.

10.3 Simulating chemotherapy


Dynamic systems which comprise many interlinked processes – such as the ones
described above – are usually more interesting than static models. Even so, they
provide only a limited set of controls: having specified the initial values for the
model’s parameters the user may only observe the simulation without being able
to intervene.
Such models of cancer (or of lethal epidemics) may be frustrating: while we
can observe the dynamics of a disease, we can do nothing to affect its spread. As
a result, modern biocybernetic models are usually developed in such a way as to
provide the user with a measure of control over the simulated process. This step
marks the transition from a type 1 (spontaneous) to a type 2 (controllable) dynamic model.
Let us observe this transition on the basis of our cancer cell proliferation
process which we aim to control (obviously by arresting its progress) through
treatment. One of the available treatment options is chemotherapy via
intravenous administration of Topotecan (TPT). This drug becomes active at a
specific stage of cell division: it interferes with DNA replication, leading to cell
death. Since cancer cells divide more rapidly than healthy cells, TPT affects
cancer cells more strongly and therefore may prove effective in the treatment of
cancer.
We will now attempt to create a model which accepts the effect of TPT
injections as its input signal. Note that we specifically refer to the concentration
of TPT in blood plasma rather than to the actual injections as the drug only
becomes effective if it can reach its target cells via the bloodstream. The
correlation between the injection volume and the cellular uptake of the drug is a
complicated one, and while we will consider it in more detail later on, for now it
is enough to assume that we can accurately control the concentration of TPT in
blood plasma, treating it as an input signal in our model. This signal will be
designated X(t) since intravenous delivery of the drug (by means of an
intravenous drip) occurs intermittently, producing time-dependent changes in
TPT concentration in blood plasma.
The updated model will retain the same output signals as its predecessor: the
number of proliferating cells P(t) and the number of quiescent cells Q(t). The
proliferation rate constant (γ) – formerly the model’s input signal – is now
“relegated” to a configurable parameter, much like the still-present migration
coefficients α and β.

Fig. 10.11: Cancer model acknowledging the treatment process.

As it turns out, the enhanced model requires one additional parameter (see Fig.
10.11). We assume that the number of cells killed off by TPT is proportional to
the drug’s concentration in blood plasma, X(t), and that the model is linear, i.e.
the number of killed cells is proportional to the concentration of the drug and to
the number of proliferating cells. A coefficient must therefore be added to the
model to quantify this relation – we will designate it δ.
To obtain a mathematical description of the model visualized in Fig. 10.11, let us extend equations (10.4) and (10.5) with components expressing the destructive impact of TPT on proliferating cells. Equations (10.6) and (10.7) formally describe the updated model, while Fig. 10.12 provides a graphical representation:

dP(t)/dt = γ · P(t) - α · P(t) + β · Q(t) - δ · X(t) · P(t)    (10.6)

dQ(t)/dt = α · P(t) - β · Q(t)    (10.7)
Of particular note in Fig. 10.12 is the change in input signal. Despite some
superficial similarities, Figs. 10.8 and 10.12 describe two fundamentally
different models: the former is a type 1 dynamic model while the latter is a type
2 (controllable) model.

Fig. 10.12: Controllable cancer model suitable for simulating the treatment process.

Analysis of the model described by equations (10.6) and (10.7) calls for a
preliminary specification of the properties of the control signal X(t). Let us
assume that, following initial work-up, the patient begins to receive TPT
intravenously 48 hours after cancer is detected. Since the treatment process is
spread over time, we will try to maintain the IV drip for as long as possible,
ensuring constant concentration of the drug in blood plasma at a level of 5
ng/ml. During this time TPT will destroy proliferating cells, hence P(t) will
decrease rapidly (as opposed to exponential growth in the absence of treatment).
More importantly, decreases in P(t) also cause corresponding decreases in Q(t),
thus causing the entire tumor to shrink.
Unfortunately, the presence of TPT in the patient’s bloodstream also causes
undesirable side effects (which will be more thoroughly described in the next
section) – hence the intravenous drip is only maintained until the end of the 7th
day (exactly 168 hours after detection). The subsequent pause enables the
patient’s organism to begin repairing the damage caused by the toxic drug. 216
hours after detection, i.e. at the beginning of the 10th day, the TPT drip is
resumed and continues until the end of the 14th day (336 hours). Our simulation
shows that at this point the tumor will have shrunk by more than 50 % compared
to its initial volume. The patient is not yet cured but their condition will have
improved significantly, with hopes for total remission.
Fig. 10.13: Time curve describing the control signal. Refer to chapter text for an in-depth
description.

The above described process can be conveniently simulated with the aid of a
computer.
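A minimal sketch of such a simulation, integrating equations (10.6) and (10.7) under the dosing schedule described above (5 ng/ml of TPT between hours 48–168 and 216–336), is shown below; the rate constants and initial cell counts are illustrative assumptions, not the application’s defaults.

from scipy.integrate import solve_ivp

gamma, alpha, beta, delta = 0.02, 0.01, 0.005, 0.03  # per hour, illustrative

def X(t):
    """TPT plasma concentration in ng/ml (the schedule of Fig. 10.13)."""
    return 5.0 if 48 <= t <= 168 or 216 <= t <= 336 else 0.0

def model(t, y):
    P, Q = y
    dP = gamma * P - alpha * P + beta * Q - delta * X(t) * P  # (10.6)
    dQ = alpha * P - beta * Q                                 # (10.7)
    return [dP, dQ]

# max_step keeps the solver from stepping over the drip on/off switches.
sol = solve_ivp(model, (0.0, 336.0), [1e6, 5e5], max_step=1.0)
P, Q = sol.y
change = 1.0 - (P[-1] + Q[-1]) / (P[0] + Q[0])
print(f"tumor cell count change after 14 days: {change:.0%}")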
The corresponding application (available at www.tadeusiewicz.pl) is shown
in Fig. 10.14.
Our application supports a range of simulations, for various drug
administration regimens, tumor growth characteristics and tumor volume at
detection. The treatment process whose control signal is visualized in Fig. 10.13
produces results shown in Fig. 10.15. We encourage the reader to experiment
with the model’s settings in order to become familiar with the ways in which
biocybernetic simulations can be fine-tuned and controlled.

Fig. 10.14: GUI of the presented cancer treatment simulator.


Fig. 10.15: Results of simulated treatment administered over a two-week period (with IV drips
between days 3-7 and 10-14).

10.4 Simulation software available for the reader


As already mentioned, the authors have developed a series of Matlab-based
simulation applications. All these applications are available free of charge at
www.tadeusiewicz.pl. We invite all interested readers to download them for their
personal use.
Piotr Dudek and Jacek Cieslik
11 Introduction to Reverse Engineering and
Rapid Prototyping in medical applications

11.1 Introduction
As the applications of Reverse Engineering and Rapid Prototyping grow more
widespread and diverse, it is worth outlining the technology and reviewing its
development. Biomedical engineering is an emerging field in which much research is devoted to applying engineering technologies to solving medical problems.
Biomedical engineering is a technological field with great potential for future
advances. This field encompasses medical treatment engineering, tissue
engineering, genetic technology, and medicine engineering. Building sets of
medical information, medical images, and biomedical materials and applying
these sets of medical data assists in the development of all aspects of biomedical
engineering. In recent years, Computer-Aided Design (CAD) has been
increasingly applied in biomedical design [1].
Knowledge related to medical information, medical images and the materials used in medical science is much easier to obtain with the help of biomedical engineering. Reverse Engineering has been
successfully used in engineering applications to recover virtual models from
existing components, for which a knowledge base does not exist. Recent
developments in additive manufacturing processes have made it easy to create
prototypes of complex objects, also with biocompatible materials. Reverse
Engineering in medical applications is necessary for the following reasons:
– the digital model does not exist,
– the shapes of medical objects are complex.
This chapter presents methods of acquiring and processing data in Reverse Engineering and of preparing that data for use in medical, dental, orthodontic and other applications.
11.2 Reverse Engineering
Reverse Engineering (RE) techniques encompass many engineering approaches in which an existing product is investigated either prior to or during the reconstruction process. Reverse Engineering is generally defined as the process of analyzing an object or existing system to identify its components and their interrelationships and to investigate how it works, in order to fix it, redesign it, or produce a copy without access to the design from which it was originally produced [2]. It can be used in three areas, presented in Tab. 11.1.
Tab. 11.1: Types of Reverse Engineering.

Industrial: Industrial RE is used to reconstruct 3D models of physical objects for Engineering Design, CAD/CAM/CNC, Product Development, Quality Control and Dimensional Inspection. For 3D models a high accuracy is required, from ±20 to ±50 μm. In some areas, such as mold and tooling or micro-manufacturing, the accuracy requirement tightens to 1–5 μm. In other industries, such as shipbuilding or aerospace, the accuracy requirement is quite flexible, depending on the size of the objects and their functions.

Architecture and Art: Artistic and architectural RE is used for 3D modeling of objects for architectural and fine art applications. The size of the objects varies widely, from 10 × 10 × 10 mm (jewelry) to very large ones, including statues, architectural prototypes, houses and buildings. The accuracy requirement is normally low, although for some applications it must be higher. Here the outside appearance, including the general shape and form of the objects, is more important than dimensional accuracy.

Medical: Medical RE is normally applied to patients' data or biomedical objects to reconstruct 3D models of anatomical structures or objects of interest for the development of medical products and applications and for biomedical research. The accuracy requirements depend on the specific application. For personalized cranio-maxillofacial implants, biomodels and training models, an accuracy of about 100 μm is sufficient, which is not high compared to Industrial RE. For surgical tools and functional implants such as spine, hip and knee implants, however, the accuracy requirements are very high.

Reverse Engineering has been widely applied in recent years in medical and dental applications. The process of Reverse Engineering involves turning a physical product back into a three-dimensional virtual model from which the conceptual design can be obtained. The final target of all RE processes is to obtain 3D data representing the geometries of the objects. Two types of end-use data representation are commonly used: polygon or triangle meshes and Non-Uniform Rational B-Splines (NURBS).
A polygon or triangle mesh consists of vertices and edges that define the shape of an object, together with a normal for each triangle that distinguishes its inside from its outside. This type of data is the simplest way of representing the geometry of an object; however, it is only an approximate representation. NURBS surfaces are the ultimate output of the RE process for applications where accuracy requirements are high. NURBS are essentially an accurate way to define free-form curves and surfaces.
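As a concrete illustration of the second representation, the snippet below evaluates a clamped cubic B-spline curve through five invented control points using SciPy. A full NURBS adds a weight to each control point (the "rational" part); this non-rational sketch is enough to show how a compact set of coefficients defines a smooth free-form curve.

import numpy as np
from scipy.interpolate import BSpline

k = 3                                   # cubic
ctrl = np.array([[0, 0], [1, 2], [3, 3], [4, 0], [6, 1]], float)
# clamped knot vector: the curve starts and ends at the end control points
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], float)
curve = BSpline(knots, ctrl, k)
points = curve(np.linspace(0, 1, 50))   # 50 points on the smooth curve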
Additionally, it is possible to produce CAD data from the obtained data, although for medical applications this is normally difficult or impossible due to the very complicated shapes involved [3, 4].
Reverse Engineering for medical applications can be described as four
phases presented in Fig. 11.1.

Fig. 11.1: Scheme of data processing and information flow in Reverse Engineering in medical
applications.

11.2.1 Phase one – Inputs of medical RE

This is a very important phase: it determines not only the techniques and methods of data acquisition, but also the subsequent data processing and analysis.
Depending on the end-user application, there are different types of inputs for
medical RE, which need to meet the technical requirements and clinical
regulations. State-of-the-art end-use applications include personalized implants for bone reconstruction, dental implants and simulations, surgical tools, medical training, orthopedics, ergonomics, orthoses, prostheses, and tissue engineering [5].

11.2.2 Phase two – Data acquisition

The different methods of data acquisition can be divided into two main groups: contact (tactile) and noncontact methods. Contact methods use sensing devices
such as a hard probe, to obtain a cloud of points describing the measured object
(Fig. 11.2).
In the tactile approach a touch probe is used in conjunction with a robotic
mechanism such as a coordinate measurement machine (CMM), an articulated
arm or computer numerical control (CNC) devices to determine the position of
the object (Cartesian coordinates). Accuracy is considered the main advantage of
the tactile approach, but the digitization process is quite slow and it is difficult to
digitize a complex geometry. A wide range of objects can be probed with this
approach regardless of color, shininess and transparency. This approach is not
appropriate for deformable materials.
Fig. 11.2: Methods of digitalization.

In the noncontact approach, a medium is used to measure the physical object using the principle of reflection or penetration of radiation. Many three-dimensional scanners rely on the principle of reflection, such as laser beam or white/blue light scanners. Laser scanners can also be used in conjunction with an articulated arm for improved accuracy. Laser beams and white light have the advantage of fast digitization and continuous data, but objects that are too shiny or transparent complicate data acquisition. The medium travels from the generator to the object before it is reflected and transmitted to the receiver unit. The geometry is determined from at least one two-dimensional image combined with optical parameters such as reflection angle, distance and time of flight. The initial geometry is presented in the form of a cloud of points or a polygon model.
Although laser beam and white light systems are applied in many medical applications, the most efficient noncontact devices are based on the principle of penetration. These systems use a medium that penetrates through the object to capture both internal and external geometries. The most popular such devices are computed tomography scanners, which use X-rays.
The digitization process starts with the transmission of X-rays through the object. Data acquisition is performed at constant intervals throughout the entire object, which yields a series of slice images. Each slice contains information on the object's position and the value of the Hounsfield unit (HU), which is proportional to tissue density. A higher Hounsfield value indicates a high-density material such as enamel or bone; a lower Hounsfield value indicates a low-density material such as fat or soft tissue. In order to reconstruct a three-dimensional model of the human body, suitable Hounsfield values must be selected using thresholds, although manual selection or semi-automated steps are also needed. The selected regions of each slice are then combined to construct the volumetric model (Figs. 11.3 and 11.4).
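A minimal sketch of this thresholding step, assuming a directory of CT slices readable with the open-source pydicom package (not named in the chapter), might look as follows; the 300 HU bone threshold is an illustrative choice.

import numpy as np
import pydicom
from pathlib import Path

# Read all slices of a CT series and sort them along the patient axis.
slices = [pydicom.dcmread(p) for p in Path("ct_series").glob("*.dcm")]
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Convert raw pixel values to Hounsfield units and stack into a volume.
volume = np.stack([s.pixel_array * float(s.RescaleSlope)
                   + float(s.RescaleIntercept) for s in slices])
bone_mask = volume >= 300   # keep voxels at least as dense as ~300 HU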

Fig. 11.3: Creating a surface model from DICOM data using InVesalius software.
Nowadays, a CT scan is a fairly routine procedure. However, a number of criteria must be borne in mind to ensure the acquisition of useful data, especially for biomedical RE applications, for example:
– type of scanner (axial or helical)
– slice thickness (recommended maximum 1 mm)
– scan spacing: 0.5 mm, or no more than half the smallest dimension of interest
– X-ray strength in the case of CT, pulse sequence in the case of MR
– resolution: for RE the highest resolution is usually the best option
– field of view (FOV): the imaged object should fill the field of view without extending beyond it
– X–Y dimensions of a single pixel: together with the scan spacing, these determine the resolution of the coordinate system for reconstruction
– artifacts: if significant variations in material density exist within the scanned object, distortion can occur; in the case of metal artifacts, the distortion can be severe.
Images reconstructed at 512 × 512 pixels with 16 bits/pixel require about 0.5 MB (megabytes) of memory per slice. Average datasets can be expected to be in the range of 25 to 100 MB, but high-resolution datasets can require gigabytes of storage and powerful computers for data processing [6, 7].
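The arithmetic behind these figures is easy to verify, e.g. in Python:

# 512 x 512 pixels, 16 bits (2 bytes) per pixel
bytes_per_slice = 512 * 512 * 2
print(bytes_per_slice / 2**20)        # 0.5 MB per slice
print(200 * bytes_per_slice / 2**20)  # a 200-slice dataset: 100 MB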
The slice or scan spacing is critical for 3D model reconstruction and should not be confused with slice thickness. Anything over 3 mm is generally not acceptable for complex structures. Slice spacing determines spatial accuracy: accuracy in the z-axis is determined by the spacing, which, if possible, should be no more than half the size of the smallest feature to be reconstructed.
Lastly, medical reconstruction requires a good understanding of anatomy,
which can only come with experience, and an understanding of the types of
tissue that are preferentially imaged by CT and MR scanners.

11.2.3 Phase three – Data processing

This stage operates on the two types of raw data produced by the acquisition process: point clouds or 2D slice images. Different data processing approaches and workflows are used to obtain correct 3D models of the anatomical structures or objects of interest for the development of medical applications and research.
When point clouds are the input for medical RE, the object must be scanned from different views in order to capture the entire geometry or the area of interest. The resulting clouds of points must therefore be aligned, registered and merged. Usually, if the clouds contain points from regions that are not of interest, they must be cleaned of these points before the clouds are aligned in their proper orientation in a common coordinate system.
In addition, some amount of error is always introduced into the scan data, and points may fall in undesirable regions or overlap, because areas of complex shapes are often scanned more than once. Moreover, after point cloud alignment the registered scan data normally contains overlapping points, and scanners often produce noise points or artifacts. Therefore, data cleaning and optimization are required. In this step two functions are used: reduction of noise and redundant points, and down-sampling to minimize the number of points in the cloud so that it is easier to work with and well structured. Sometimes, to produce smoother shapes, noise reduction and smoothing operations are also performed at this stage.
Finally, the optimized point cloud data is triangulated to create a 3D triangle mesh or polygon model of the object. Triangulation gives a discrete representation of an object via a set of polygons that defines the whole object without deviating from the collected points by more than a given tolerance. For visual and later computational purposes, this wireframe mesh approximates the shape and size of an object using triangles; smaller triangles give a better approximation, but inevitably increase the file size and slow down processing.
3D triangle mesh models are then cleaned, optimized, manipulated and controlled. Imperfections are corrected: holes are filled, unwanted elements removed, and so on. After this improvement the polygon mesh can be converted into 3D NURBS or CAD models to meet the requirements of the end-use applications.
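The whole pipeline from a merged point cloud to an exportable mesh can be sketched with the open-source Open3D library (our choice here, not one used in the chapter); the file names and parameters are placeholders.

import open3d as o3d

pcd = o3d.io.read_point_cloud("merged_scan.ply")       # aligned, merged cloud
pcd = pcd.voxel_down_sample(voxel_size=0.5)            # thin out redundant points
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
pcd.estimate_normals()                                 # needed for reconstruction
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
mesh.compute_triangle_normals()                        # required by the STL writer
o3d.io.write_triangle_mesh("reconstructed.stl", mesh)  # e.g. for further CAD work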
For CT/MRI scanners, the images are normally stored in the DICOM
(Digital Imaging and Communications in Medicine) format. However, with
applications that use micro-CT imaging systems, different data formats such as
BMP or PNG can be used. The image resolutions achievable with these micro-
CT systems extend into the range of light microscopy, down to one or a few
microns.
This type of data requires specialized image processing tools and packages for 3D reconstruction of the hard and soft tissues or objects of interest. Two basic steps are used for 3D reconstruction from 2D slice
images: image segmentation, and Region of Interest (ROI) growing.
Segmentation by threshold techniques is used to define the region of interest that
presents the object for 3D reconstruction; it is based on the gray scale value of
image pixels. ROI can be defined by a lower and a higher threshold from the
Hounsfield scale or only by a lower threshold. In the former case the pixel value
must be in between both threshold values to be part of the segmentation object.
In the latter case, the segmentation object will contain all pixels in the images
with a value higher than or equal to the threshold value. ROI growing provides
the capability to split the segmentation into separate objects; it is useful for the
separation of anatomical structures, especially bone and soft tissues. Segmentation can also make use of 2D image processing techniques such as smoothing, noise filtering, cavity filling, island removal and morphological filters.
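A hedged sketch of these two steps, double-threshold segmentation followed by a connected-component split approximating the ROI growing described above, can be written with NumPy and SciPy (tools of our choosing; the HU limits are illustrative):

import numpy as np
from scipy import ndimage

volume = np.load("hu_volume.npy")           # HU volume, e.g. from the pydicom sketch
mask = (volume >= 300) & (volume <= 3000)   # lower and higher thresholds
mask = ndimage.binary_fill_holes(mask)      # cavity fill
labels, count = ndimage.label(mask)         # split into separate connected objects
sizes = ndimage.sum(mask, labels, range(1, count + 1))
bone = labels == (np.argmax(sizes) + 1)     # e.g. keep the largest structure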
The outputs of image segmentation and ROI growing are 3D triangle mesh models or 2D contours of the ROI or anatomical structures. As in the case of point clouds, a triangle mesh derived from DICOM data is processed in the same way. 2D contours are used to create the appropriate CAD or NURBS models to meet the requirements of the end-use biomedical applications.

11.2.4 Phase four – Biomedical applications

The resulting object in the form of a 3D mesh can be used directly by a Rapid
Prototyping application or by 3D graphics applications.
However, for biomedical applications that require high accuracy for further complex geometrical modeling, design or analysis, it is necessary to transform the triangle mesh into CAD or NURBS models, which then serve as the reference for medical product development and research in which CAD/CAM/CAE systems are used.

Fig. 11.4: Scheme of data processing and information flow from CT/MRI images to CAD model.
11.3 Software for medical RE
Simulating a surgical operation on a virtual computer model requires specific functions in the software to simulate surgical actions and to calculate certain parameters: volume, distance, bone density, etc. Beyond pure simulation of surgical actions, integration with other software packages (CAD, FEA, CFD, etc.) can facilitate, or even be necessary to complete, the surgery simulation.
Additionally, no single software package can completely satisfy the requirements of data processing and geometrical modeling, so the selection of software depends on the end-use application, especially the complexity of the geometrical modeling processes. The following list includes typical software and tools needed to implement medical RE applications: medical image processing, Rapid Prototyping, Finite Element Analysis and simulation, Reverse Engineering and dimensional inspection, freeform modeling, CAD/CAM, and specialized dental applications.

11.3.1 Mimics Innovation Suite

One of the best-known software packages for RE biomedical applications is Mimics Innovation Suite by Materialise NV (Leuven, Belgium), which provides high-quality solutions supporting clinicians in diagnosis and decision-making. Materialise's interactive medical image control system (Mimics) is an interactive tool for the visualization and segmentation of CT as well as MRI images and for 3D rendering of objects. A very flexible interface to rapid prototyping systems is included for building distinctive segmentation objects.
The different modules link Mimics to various application fields: Rapid
Prototyping (RP), Finite Element Analysis (FEA) and Computer-Aided Design
(CAD). Selecting the hard tissues (bones) and leaving behind the soft tissue is
performed by applying the “thresholding” function and selecting a certain range
of high density threshold values, then using appropriate masking techniques to
generate a 3D object. These 3D voxel masks can be used for visualization,
consultation, planning of surgical procedures, design of the implant, and finally
can be exported for the RP process. The model can also be exported as an Initial Graphics Exchange Specification (IGES) file, or in another format that can be manipulated as a CAD model or analyzed with FEA, e.g. in ANSYS software. Mimics supports all kinds of engineering operations, from medical imaging data up to an error-free STL file for Rapid Prototyping models.
Mimics goes beyond visualization of the anatomical data and has three main engineering applications. Using the 3-matic package included in Mimics it is possible to:
– perform 3D measurements and engineering analyses
– design patient-specific implants or surgical guides
– prepare anatomical data and/or implants for Finite Element Method
simulations.

11.3.2 Simpleware ScanIP

This is a software system providing functionality for medical RE, consisting of four modules: ScanIP (image processing, measurement and visualization); the +FE module (volume/surface mesh generation for FE/CFD); +NURBS (NURBS model generation for CAD); and +CAD (integration of CAD models within image data). ScanIP provides an image processing software environment for rapidly converting 3D scan data (MRI, CT, micro-CT, FIB-SEM, etc.) into computational models. The
software offers image segmentation and quantification tools, enabling easy
visualization and analysis of image data. Segmented images can be exported as
surface models and meshes to CAD packages and for 3D printing. Additional
module options are available for generating volume meshes for FEA and CFD,
for integrating image and CAD data, and for exporting NURBS-based models
(Fig. 11.5).
Fig. 11.5: ScanIP, courtesy of Simpleware Ltd., Exeter.

11.3.3 3D-DOCTOR

A product of AbleSoftware Corp., 3D-DOCTOR is an advanced 3D modeling, image processing and measurement package for MRI, CT, PET, microscopy, scientific, and industrial imaging applications. It supports both gray scale and color images stored in DICOM, TIFF, Interfile, GIF, JPEG, PNG, BMP, PGM, MRC, RAW and other image file formats, and creates 3D surface models and volume renderings from 2D cross-section images in real time (Fig. 11.6).

Fig. 11.6: 3D-DOCTOR (source: http://www.ablesw.com/3d-doctor/rapid.html).

11.3.4 Amira

A product of Visage Imaging GmbH, Amira is a multifaceted 3D software platform for visualizing, manipulating, and understanding data from computed tomography, microscopy, MRI, and many other imaging modalities. Amira enables advanced 3D imaging workflows for specialists in research areas ranging from molecular and cellular biology to neuroscience and bioengineering.
Fig. 11.7: 3D skull model created in InVesalius.

11.3.5 Other software for 3D model reconstruction

If we only need to reconstruct 3D models of anatomical structures from CT/MRI data for further development, free and open-source medical image processing packages can be useful. Examples include 3D Slicer (Slicer), the Julius framework (CAESAR Research Center), MedINRIA (INRIA Sophia Antipolis) and InVesalius (Figs. 11.7 and 11.8).

Fig. 11.8: 3D Slicer.

In comparison to commercial solutions, the open-source software 3D Slicer provides many modules for medical applications and for RP technologies. 3D Slicer is a free, open-source software package for visualization and image analysis, natively designed to be available on multiple platforms, including Windows, Linux and Mac OS X. The Editor module provides selection, thresholding, ROI growing and similar functions for creating 3D models of structures of interest. The final objects can be saved as mesh objects using the Model Maker module.
Most of the Rapid Prototyping packages allow basic operations for
manipulating the STL files as well as editing, dividing, and repairing 3D models.
The typical RP packages include Magics (Materialise NV), VisCAM RP
(Marcam Engineering GmbH) and NetFabb.

11.3.6 RE and dimensional inspection

This class of software provides powerful freeform modeling tools, especially triangle mesh control and manipulation functions that are not commonly available in RP and CAD packages. Typical RE packages are Geomagic Verify, Geomagic Control, Geomagic Studio, Polyworks, GOM Inspect, and more (Fig. 11.9).

11.3.7 Freeform modeling

Freeform modeling tools such as Geomagic Claytools or Sculpt modeling systems and ZBrush (Pixologic, Inc.) can be used for modeling implants or anatomical structures for simulation or for the development of medical training models.

11.3.8 FEA simulation and CAD/CAM systems

There are many programs for finite element modeling and simulation tasks. FEA
simulation packages are needed for optimizing the design as well as the
biomedical engineering aspects of the applications. CAD/CAM packages are
very powerful for 3D geometrical modeling tasks and are commonly used to
implement the final CAD operations of the design tasks.
There are also specialized programs for dental and orthodontic applications, such as Trios 3Shape or 3D Reshaper Dental (Fig. 11.10). These provide comprehensive solutions for creating implants, prostheses, etc., from scanning through build-up to the control and inspection of manufactured elements.
Fig. 11.9: Geomagic Studio v2013.

Fig. 11.10: 3D Reshaper Dental (source: http://www.3dreshaper-dental.com/en/3dreshaper-dental/dental-model-creator/).

11.4 Methods of Rapid Prototyping for medical applications – Additive Manufacturing
The terms Rapid Prototyping or Additive Manufacturing are used in a variety of
industries to describe a process for rapidly creating a system or part
representation before final release or commercialization.
Additive Manufacturing (AM) is a layer-based automated fabrication process
for making scaled three-dimensional physical objects directly from 3D-CAD
data without using part-depending tools. It was originally called 3D Printing and
is still frequently called that. Often used terms for this technology include
Additive Layer Manufacturing, Rapid Prototyping, Direct Manufacturing, etc.
The technical realization of AM is based solely on layers, and therefore it is called "layer-based technology", "layer-oriented technology", or even "layered technology".
AM involves a number of steps leading from the virtual CAD description to the physical part: creating a CAD model, converting it to STL format, transferring the file to the machine, setting up the 3D printer, building the part, removing support elements if necessary, and post-processing [6].
The STL file format was made public domain to allow all CAD vendors to
access it easily and hopefully integrate it into their systems. STL is now a
standard output for nearly all solid modeling CAD systems and has also been
adopted by AM system vendors. STL uses triangles to describe the surfaces to be
built. Each triangle is described as three points and a facet normal vector
indicating the outward side of the triangle, in a manner similar to the following:
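facet normal nx ny nz
  outer loop
    vertex x1 y1 z1
    vertex x2 y2 z2
    vertex x3 y3 z3
  endloop
endfacet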

There are numerous ways to classify AM technologies. One method is to classify by the state of the material before the part is manufactured: the prototype material can be a liquid, a molten material, a solid sheet or a powder. Additive Manufacturing has been used for medical applications almost from the moment this technology was first commercialized. The most
significant technologies for medical applications are presented in the following.
There are several types of 3D printer available. They may use different
materials, but all involve the same basic approach for “printing” an object:
spraying or otherwise transferring a substance in multiple layers onto a building
surface, beginning with the bottom layer.
Before printing can begin, a 3D model of the item to be printed must first be created using a computer-aided design (CAD) program. The object is then sliced into hundreds or thousands of horizontal layers, which are deposited one on top of another until the completed object emerges.
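Slicing itself is a purely geometric operation: intersect the triangle mesh with a stack of horizontal planes. A minimal sketch using the open-source trimesh library (an assumption of ours, with placeholder file name and layer height):

import numpy as np
import trimesh

mesh = trimesh.load("implant.stl")             # model exported from CAD as STL
z_min, z_max = mesh.bounds[:, 2]
layer_height = 0.1                             # assumed 0.1 mm layers
sections = mesh.section_multiplane(
    plane_origin=[0, 0, z_min],
    plane_normal=[0, 0, 1],
    heights=np.arange(0, z_max - z_min, layer_height))
# each entry is a planar path holding the closed contours of one layer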

11.4.1 Liquid-based RP technology

Liquid polymers are a popular material. The first commercial system was 3D Systems' stereolithography process based on liquid photopolymers. The
selective solidification of liquid monomeric resin (of the epoxy, acrylate, or vinyl
ether type) by ultraviolet radiation is called (photo)-polymerization. There are
various processes that differ only in the way the UV-radiation is generated and
by the way the contouring is done.

11.4.2 Stereolithography (SLA)

It is not only the oldest but also still the most detailed AM process. A laser
stereolithography machine consists of a build chamber filled with the liquid
build material and a laser scanner unit mounted on top of it which generates the
x–y contour. The build chamber is equipped with a build platform which can be
moved in the build (z) direction. The laser beam simultaneously does the
contouring and the solidification of each layer as well as the bonding to the
preceding layer. The motion of the beam is controlled by the slice data of each
layer and directed by the laser scanner (galvo), see Figs. 11.11 and 11.12.
After solidification of one layer, the build platform, including the partially finished part, is lowered by one layer thickness and, using a recoater, a new layer of resin is applied. The process continues until the part is finished. This technology requires supports. Such printers are manufactured by 3D Systems and DWS.

Fig. 11.11: Part creating process by SLA.


Fig. 11.12: SLA printed parts with support structures.

11.4.3 Polymer printing and jetting

If the curable build material is applied by print heads, the process is called
polymer printing or polymer jetting. The process is commercialized by Objet,
Israel (now Stratasys). It can be regarded as a 3D printing process. However, due
to the part building by UV curing of liquid monomers it is a polymerization or
stereolithography process (Fig. 11.13).
The build material is directly applied to the build platform through a multi-
nozzle piezoelectric print head, like an ink-jet printer. Solidification is done
simultaneously by a light curtain. The parts need supports during the build
process and the supports are applied simultaneously by a second set of nozzles
so that each layer consists either of build or of support material. Consequently,
the supports are solid and use a large amount of material. The support material
can be washed out without leaving marks in a mostly automated post-process, so
the part is very smooth. Using a proprietary technology called Poly-Jet Matrix
together with a family of fabricators called Connex offers the unique ability to
print parts and assemblies made of multiple model materials, with different
mechanical or physical properties, all in a single build. This opens up the future
possibility of composing multi-material parts. Typical parts are thin walled and
detailed. They exhibit precise interior hollow structures (Fig. 11.14).
Fig. 11.13: The Objet PolyJet process.

Fig. 11.14: Objet Connex examples (a, b) by Stratasys (Euromold 2014).

11.4.4 Digital Light Processing (DLP)

This variation of the photo-polymerization process works with a commercial Digital Light Processing (DLP) projector as the UV light source. It projects the
complete contour of a cross-section of the actual layer and initiates the
solidification simultaneously (Figs. 11.15 and 11.16). The process was
commercialized by Envisiontec and is continued by Formlabs, DWS or
Solidator.
This method uses a projector for printing objects, like the kind used for
office presentations, to project the image of the cross-section of an object into a
container of photopolymer. The light selectively hardens only the area specified
in that image. The most recently printed layer is then repositioned to leave room
for unhardened photopolymer to fill the newly created space between the print
and the projector. Repeating this process builds up the object one layer at a time.
DLP is known for its high resolution, typically reaching a layer thickness of under 30 μm, a fraction of the thickness of a sheet of copy paper. Objects printed using DLP show less visible layering than those produced with other techniques, but they require support structures, as in stereolithography (SLA). A digital
micromirror device is the core component of DLP printers and projects a light
pattern of each cross-sectional slice of the object through an imaging lens and
onto the photopolymer resin. The projected light causes the resin to harden and
form the corresponding layer which fuses it to the adjacent layer of the model.
Compared with SLA, DLP can have relatively faster build speeds. This is
because a single layer is created in one digital image, as opposed to SLA’s laser
process which must scan the container with a single point.

Fig. 11.15: Digital Light Processing 3D printing.

Fig. 11.16: Models created from intraoral capture devices, as a replacement of the traditional
physical impression (source: http://www.dwssystems.com/printers/dental-biomedical).

11.4.5 Solid sheet materials

The cutting of contours out of prefabricated foils or sheets of even layer thickness according to the sliced 3D CAD file and the subsequent bonding on
top of the preceding layer is called laminated object manufacturing (LOM). A
laser, a knife, or a milling machine can be used as a cutting device. The bonding
of adjacent layers is done by glue, ultrasound, soldering, or diffusion welding.
MCOR Technologies use a tungsten carbide drag blade instead of a laser in
their machines. The process is based on loose sheets of office paper that are
glued using standard white polyvinyl acetate (PVA) glue. Unfortunately, the
printing process is very slow. Due to the possibility of printing in color, this
machine can be used for creating teaching and training models (Fig. 11.17).

Fig. 11.17: Object printed on an MCOR machine (source: http://mcortechnologies.com/).

11.4.6 Fused Deposition Modeling (FDM)

FDM rapid prototyping systems, developed by Stratasys Ltd., can fabricate parts in a range of materials including elastomers, ABS (acrylonitrile butadiene styrene), polycarbonate (PC), and investment casting wax (Fig. 11.18).
In the physical process of model fabrication, a filament is fed through a
heated element and becomes molten or semi-molten. The liquefied filament is
fed through a nozzle, using a solid filament as a piston, and deposited onto the
partially constructed part. The newly-deposited material fuses with adjacent
material that has already been deposited. The head moves on the X–Y plane and
deposits material according to the geometry of the currently printed layer. After
finishing a layer, the platform holding the part moves vertically in the z direction
to begin depositing a new layer on top of the previous one. The process is similar
to building a model with a very small hot-glue gun. The production system
possesses a second nozzle in the head that extrudes support material. By default, it builds support for any structure with an overhang angle of less than 45° from the horizontal. Support materials can be broken away or dissolved.
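The 45° rule can be checked directly on the mesh before printing. A small sketch, again using trimesh as an assumed tool: a facet overhangs, and therefore needs support, when its outward normal points steeply downward.

import numpy as np
import trimesh

mesh = trimesh.load("part.stl")
down = np.array([0.0, 0.0, -1.0])
# normal within 45 degrees of straight down <=> face less than 45 degrees
# from horizontal, i.e. an overhang that needs support by the default rule
needs_support = mesh.face_normals.dot(down) > np.cos(np.radians(45))
print(needs_support.sum(), "of", len(mesh.faces), "facets need support")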
Fig. 11.18: Fused Deposition Modelling.

11.4.7 Selective Laser Sintering (SLS)

The method uses a laser to sinter powder-based materials together, layer by layer, to form a solid model. The system consists of a laser, part chamber, and control system. The part chamber consists of a build platform, powder cartridge, and leveling roller. A thin layer of build material is spread across the platform, where the laser traces a two-dimensional cross section of the part, sintering the material together. The platform then descends by one layer thickness, and the leveling roller pushes material from the powder cartridge across the build platform, where the next layer is sintered to the previous one. This continues until the part is completed (Fig. 11.19).

Fig. 11.19: Selective Laser Sintering system.

SLS does not require additional supports to hold the object together while it is being printed, because the part being constructed is surrounded by unsintered powder at all times; this allows the construction of previously impossible geometries. SLS machines can print objects in a variety of materials, such as plastics, glass, ceramics and even metal (in a related process known as Direct Metal Laser Sintering). The properties and possibilities of the SLS method make it a popular process for creating both prototypes and final products. The physical process can be full melting, partial melting, or
liquid-phase sintering. Depending on the material, up to 100% density can be
achieved with material properties comparable to those from conventional
manufacturing methods. In many cases a large number of parts can be packed
together within the powder bed, allowing very high productivity.

11.4.8 Selective Laser Melting (SLM)

The method uses a laser to melt powdered metal in a chamber of inert gas. When
a layer is finished, the powder bed moves down and an automated roller adds a
new layer of material which is melted to form the next layer of the model. SLM
is ideal for applications where high strength or high temperatures are required as
it results in extremely dense and strong parts that match characteristics of the
target material. SLM is a metal additive manufacturing technique similar to SLS.
The main difference between methods is that SLS sinters the material, while
SLM melts the material, creating a melt pool in which material is consolidated
before cooling to form a solid structure. Both SLM and DMLS (Direct Metal
Laser Sintering) require support structures.

Fig. 11.20: Removable partial dentures created on EOS M270 using cobalt chrome material
(Euromold 2014).

Fig. 11.21: Custom made porous cranial implant using EOSINT M280/EOS titanium 64 (Euromold
2014).
The types of materials available for this process include stainless steel, tool steel, cobalt chrome, titanium and aluminum. The technology is used to manufacture direct parts for a variety of industries, including aerospace, dental and medical, and others that need small to medium-sized, highly complex parts, as well as in the tooling industry, where it is used to make direct tooling inserts (Figs. 11.20 and 11.21).

11.4.9 Electron Beam Melting (EBM)

EBM is an additive manufacturing technique capable of fabricating surgical implants with solid, porous, or hybrid (solid combined with porous) geometries.
The EBM process is a layered manufacturing technique capable of producing
fully dense metal parts starting from metal powder. In this technique, a thin layer
of loose metal powder is spread on a build plate, followed by melting of select
areas using a finely focused electron beam. The prepared CAD models are sliced into layers, and the system then determines which areas of the loose titanium powder will be melted by the electron beam. The process is repeated layer by layer to
achieve three-dimensional titanium parts. The equipment operates in a vacuum
at a high temperature, typically 700 °C. The result is stress-free parts without the
oxidation issues that can occur with other metal AM approaches. EBM allows
for excellent control over part chemistry, mechanical properties and geometry
with its ability to build unique structures for a wide range of applications. One
example is a porous-coated surgical implant (Fig. 11.22). Using EBM, parts with an integrated porous structure can be produced in a single step, with greater control over the three-dimensional geometry of the pores. In this way, pore size and the relative density of the surface can be deliberately tailored to facilitate bone ingrowth. This is because medical implants with a porous
structure present an increased surface area to which new bone can attach,
promoting stable osteointegration. Cast or machined implants require a
secondary process to etch a porous surface that is bonded in place. By creating
an integral porous structure, EBM prevents the part from debonding over time
[8].
Fig. 11.22: Acetabular cups using Arcam EBM technology (Euromold 2014).

11.4.10 Tissue engineering

This technology has become highly important to the future of medicine for many reasons. The difficulty of obtaining organs for transplantation leaves many patients on lengthy waiting lists for life-saving treatment. Being able to produce organs using a patient's own cells could not only alleviate this shortage but also address issues related to the rejection of donated organs. Such capabilities will also be
important for developing therapies and testing drugs. Tissue engineering
provides a good practical means for researchers to study cell behavior, such as
cancer cell resistance to therapy, and to test new drugs or combinations of drugs
to treat many diseases.
Tissue engineering is the use of a combination of cells, engineering and materials methods, and suitable biochemical and physicochemical factors to improve or replace biological functions. While it was once categorized as a subfield of biomaterials, it has grown in scope and importance and can now be considered a separate field in bioscience.
Compared with nonbiological printing, 3D bioprinting involves additional
complexities, such as the choice of materials, cell types, growth and
differentiation factors, and technical challenges related to the sensitivities of
living cells and the construction of tissues. Addressing these complexities
requires the integration of technologies from the fields of engineering,
biomaterials science, cell biology, physics and medicine. Tissue engineering has
already been used for the generation and transplantation of several tissues,
including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue
and cartilaginous structures [9].
Fig. 11.23: Envisiontec 3D-Bioplotter® (Euromold 2014).

11.5 Case studies


Technologies of Reverse Engineering and Rapid Prototyping can be used in
many medical applications. The most obvious application is as a means to design
and develop medical implants, devices and instrumentation. Examples of
medical instruments designed using RE and RP technologies include retractors,
scalpels, surgical fasteners, display systems and many other devices (Tab. 11.2).

11.5.1 One-stage pelvic tumor reconstruction

Reverse Engineering techniques and 3D printing were used to create custom implants for part of the pelvis of a 64-year-old patient with a pelvic chondrosarcoma (Fig. 11.24), according to a clinical case presented by Mobelife.
No standard implant would allow a partial acetabular resection of the anterior column and superior
pubic ramus. Therefore, a custom implant was requested to fill up the bone defect and cover the
remaining acetabular bone, in order to reconstruct normal hip joint functionality. Based on the aMace®
technology, Mobelife designed patient-specific cutting guides for a 1-stage resection and reconstruction
surgery. The tumor was resected exactly according to plan and the custom implant – designed for
perfect fit – positioned easily and fixed stably. Long spongious screws were pre-drilled exactly
according to optimal patient-specific fixation planned, using the custom drill guides provided.
Postoperative X-rays show extremely accurate reconstruction with retention of the posterior column
[17].

According to Mobelife, one week after surgery the patient was already up and
walking.
Tab. 11.2: Some RE and RP medical applications.

Surgical tools: Drilling guides for spine and knee surgery (arthroscopy), the jigs to assist the process of removing tumors in bone reconstruction surgery, etc.

Surgical training: Medical models for surgeons to enhance surgical skills, learn and practice physical examination, general medical procedures and clinical skills [10]. Virtual 3D models for medical simulation, biomedical analysis and study [11].

Personalized implants: Implants for bone reconstruction for patients with skull defects due to traffic accidents [12] or bone tumors; pelvic, acetabular and other revisions. Personalized implants for cosmetic cranio-maxillofacial surgery [13, 14].

Orthopedics: Development of hip and knee implants as well as surgical tools such as orthopedic plates, fixation tools and screws. 3D models for biomedical analysis and study.

Dental applications: Implants for bone reconstruction of the mandible and for tooth reconstruction and replacement, drilling guides, digital impression models, long-term temporary crowns and bridges [15]. Simulation of an implant position on 2D and 3D models, identification of the mandibular canal, calculation of the bone density and surgical planning [16].

Prostheses: Design and manufacturing of personalized prostheses, personalized orthoses and ergonomic products such as shoes, sport products, etc.

Fig. 11.24: Creating custom implants for a 64-year-old patient with a pelvic chondrosarcoma (a–
e).

11.5.2 Orbital reconstruction following blowout fracture

This example presents the case of a 29-year-old male who sustained trauma to
the right orbit. Orthoptic examination revealed limited supra- and infraduction of
the right eye (Figs. 11.25 and 11.26).
Fig. 11.25: Process of traditional orbital reconstruction (a–c) [18].

Surgeons first made CT scans of the patient's eye sockets and used them to create a 3D virtual model. The model was mirrored to determine the correct position of the orbital bones, and then 3D printed. An appropriately shaped titanium mesh implant was formed using the printed model. Pre- or intraoperatively shaped titanium meshes shorten operating times and decrease the number of attempts required to position the implant in the orbital cavity and assess its shape and fit. This significantly reduces the risk of damage to the inferior rectus muscle. As the implant is tailored to the shape of the orbit, the whole area of the bony defect can be covered with the mesh. All of the above factors influence the long-term results, which are better than with the standard method [19].
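The mirroring step is a simple geometric transformation. A sketch with the trimesh library (our assumed tool, with placeholder file names): reflect the model across the sagittal plane and restore the outward orientation of the triangles, which a reflection flips.

import trimesh

skull = trimesh.load("skull_from_ct.stl")   # surface model reconstructed from CT
mirrored = skull.copy()
mirrored.vertices[:, 0] *= -1               # reflect across the sagittal (x = 0) plane
mirrored.invert()                           # a reflection flips triangle winding; undo it
mirrored.export("mirror_template.stl")      # template for shaping the titanium mesh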

Fig. 11.26: Orbital reconstruction (a–c) (image courtesy of Dr. M. Elgalal).

11.6 Summary
In broad terms, Reverse Engineering and Rapid Prototyping are used extensively
today in a variety of industries and fields, from more obvious applications (such
as product design) to less intuitive ones (such as art or archaeology).
These technologies are now widespread in many fields of medicine and show great potential in medical applications. The varied uses of RE and RP in surgical planning, simulation, training, the production of models of hard tissue, prostheses and implants, biomechanics, tissue engineering and many other cases open up a new chapter in medicine. Thanks to these technologies, doctors, and especially surgeons, can do things which previous generations could only have imagined. Imaging modalities such as CT and MRI have provided the foundations for computerized data visualization and for preprocedural planning tools.
Rapid Prototyping techniques have developed quickly; they provide the means to fabricate solid replicas of anatomy as revealed by medical image data and can be very helpful in bridging the gaps in current digital planning and treatment delivery techniques. RP technology makes it possible to produce biocompatible implants with complicated shapes and structures, and in the future perhaps also organs, using tissue engineering and bioprinters.

References
[1] Cieślik J and Kasperkowicz J. Zastosowanie systemów CAE do planowania operacji
chirurgicznych: (Application of CAE systems for planning surgical procedures) Mechatronika,
red. Lucyna Leniowska. Rzeszów: Uniwersytet Rzeszowski, 2011. Nauka dla Gospodarki;
2/2011. pp. 57–70.
[2] Pham DT and Hieu LC. Reverse Engineering hardware and software. In: Raja V and
Fernandes KJ, eds. Reverse Engineering: An Industrial Perspective. London: Springer-Verlag
London Ltd. 2006. pp. 33–70.
[3] Zwierzyński AJ and Cieślik J. Model kinematyczny redundantnego narzędzia
laparoskopowego oraz wybrane zagadnienia planowania trajektorii (Kinematic model of
redundant laparoscopic tool and selected aspects of trajectory planning). Prace Naukowe
Politechniki Warszawskiej. Elektronika ; 2012 z. 182, Postępy robotyki, T. 1 pp. 129–138.
[4] Zwierzyński AJ and Cieślik J. Opis geometrii ciała dla celów planowania trajektorii
redundant-nych narzędzi laparoskopowych (Description of the geometry of the body for the
purpose of trajectory planning of redundant laparoscopic instruments). Prace Naukowe
Politechniki Warszawskiej. Elektronika, 2012 z. 182, Postępy robotyki, T. 1 pp. 139–148.
[5] Rajendra JS, Raut LB and Kakandikar GM. Analysis of integration of Reverse Engineering
and generative manufacturing processes for medical science – a review. International Journal
of Mechanical Engineering and Robotics Research. 2013;2(4).
[6] Gibson I. Advanced Manufacturing Technology for Medical Applications. Reverse
Engineering, Software Conversion and Rapid Prototyping. Chichester: John Wiley & Sons
Ltd. 2005.
[7] Hieu LC, Sloten JV, Hung LT, Khanh L, Soe S, Zlatov N, Phuoc LT and Trung PD. Medical Reverse Engineering applications and methods. 2nd International Conference on Innovations, Recent Trends and Challenges in Mechatronics, Mechanical Engineering and New High-Tech Products Development, MECAHITECH '10, Bucharest, 23–24 September 2010.
[8] Renovis-surgical.com. Bone Ingrowth into Tesera Trabecular Technology™ Porous Structure
A Weight-Bearing Ovine Study. Renovis Surgical [Internet]. 2015 [cited 7 June 2015].
Available from: http://www.renovis-surgical.com.
[9] Sun W, Starly B, Nam J and Darling A. Bio-CAD modeling and its applications in computer-
aided tissue engineering. Computer-Aided Design. 2005;37:1097–1114.
[10] D’Urso PS, Barker TM, Earwaker WJ, Bruce LJ, Atkinson RL, Lanigan MW, Arvier JF and
Effeney DJ. Stereolithographic biomodelling in cranio-maxillofacial surgery: a prospective
trial. Journal of Cranio-maxillo-facial Surgery. 1999;27:30–37.
[11] McDonald JA, Ryall CJ and Wimpenny DI. Medical models, the ultimate representations of a
patient-specific anatomy. In: Wouters K, ed. Rapid Prototyping Casebook. Trowbridge:
Professional Engineering Publishing Limited. 2001, pp. 177–182.
[12] Ciocca L, Fantini M, De Crescenzio F, Corinaldesi G and Scotti R. Direct metal laser
sintering (DMLS) of a customized titanium mesh for the prosthetically guided bone
regeneration of atrophic maxillary arches. Medical & Biological Engineering & Computing.
2011;49:1347–1352.
[13] Lethaus B, Poort M, Laeven P, Beerens M, Koper D, Poukens J and Kessler P. A treatment
algorithm for patients with large skull bone defects and first results. Journal of Cranio-maxillo-
facial Surgery. 2011;39:435–440.
[14] Maravelakis E, David K, Antoniadis A, Manios A, Bilalis N and Papaharilaou Y. Reverse engineering techniques for cranioplasty: a case study. Journal of Medical Engineering & Technology. 2007;32(2):115–121.
[15] Metzger MC, Hohlweg-Majert B, Schwarz U, Teschner M, Hammer B and Schmelzeisen R.
Manufacturing splints for orthognathic surgery using a three-dimensional printer. Oral
Surgery, Oral Medicine, Oral Pathology, Oral Radiology and Endodontics. 2008;105:1–7.
[16] Jiang N, Hsu Y, Khadka A, Hu J, Wang D, Wang Q and Li J. Total or partial inferior border ostectomy for mandibular contouring: indications and outcomes. Journal of Cranio-maxillo-facial Surgery. 2012;40:277–284.
[17] Mobelife.be. Clinical cases [Internet]. 2015 [cited 7 June 2015]. Available from:
http://www.mobelife.be/clinical-cases/case/list/.
[18] Elgalal M, Walkowiak B, Stefańczyk L and Kozakiewicz M. Design and fabrication of patient
specific implants using Rapid Prototyping techniques. http://www.euris-programme.eu.
[19] Kozakiewicz M, Elgalal M, Loba P, Broniarczyk-Loba A and Stefanczyk L. Treatment with individual orbital wall implants in humans – 1-year ophthalmologic evaluation. Journal of Cranio-maxillo-facial Surgery. 2011 Jan;39(1):30–36.
Zdzisław Wiśniowski, Jakub Dąbroś, and Jacek Dygut
12 Computer simulations in surgical education

12.1 Introduction
“Practice makes perfect” – goes the old saying. Unfortunately, due to the
peculiar nature of surgery, prospective surgeons cannot follow the traditional
“try until you succeed” approach. To attain professional excellence a surgeon
must undertake many exercises, which – obviously – need to take place outside
of the operating theater. In the early days of medicine such exercises were
carried out on cadavers or live animals. As the centuries went by, the steady pace of scientific progress, coupled with the emergence of modern digital technologies, has led to the replacement of live organisms with inorganic objects, both physical (including phantoms and models of various organs) and virtual.
Rapid developments in IT have turned computers into an indispensable
attribute of medical science [1]. The concept of virtual reality, coined at the end
of the 20th century as an amalgam of 3D graphics and motion detection, presents
an interesting alternative to traditional training methods. Virtual reality is based on computer models, simulations and visualization, particularly of 3D objects. The model describes elements of the virtual space in terms of
mathematical equations. Such elements are then assembled – with the use of
suitable algorithms – to realistically visualize the model’s reaction to external
stimuli, along with its various possible states.
The main goal of this chapter is to present the fundamental aspects of in
silico surgical simulations [2, 3]. Many of the presented tools are already being
applied in medical training curricula, with more to follow in the near future [4].
The term in silico represents an analogy to the well-known concepts of in
vivo and in vitro. It is used to describe models, simulations and visualization of
medical phenomena with the use of computers and electronic hardware. In silico
medicine is the fruit of major advances which have occurred in IT and computer
science over the years. It applies to computer software as well as to any
technologies based on computerized data processing – including algorithms,
systems, data mining tools, specialized numerical hardware, visualization
platforms etc.
12.2 Overview of applications
We will now proceed with an overview of publicly available applications which
assist medical students in their training. The status presented is valid as of the
end of 2014. Most of the presented applications come with a broad set of
features and have many possible uses – the common trait is that they all provide
support for surgical education [5]. In its most basic form this education is
passive and focuses on familiarizing students with the details of the human
anatomy, various pathologies and video recordings of surgical procedures.

12.2.1 Gray’s Anatomy Student Edition, Surgical Anatomy – Student


Edition, digital editions of anatomy textbooks for the iOS
(free) and Android (paid)

Archibald Industries has released mobile digital editions of two famous anatomy
textbooks: Gray’s Anatomy by Henry Gray and Surgical Anatomy by Joesph
Maclise. Both are available free of charge to medical students, doctors and
nurses.
The applications feature photos and descriptions which closely correspond to
the original contents of both books. Gray’s Anatomy, originally published in
1858, contains illustrations and descriptions of human anatomy, while Surgical
Anatomy (1856) is a set of descriptive drawings presenting the most important
parts of the human organism.
Both publications share a common interface with identical navigation
features. Gray’s Anatomy comprises 1247 illustrations while Surgical Anatomy
includes 180 figures. In addition to manual browsing tools the applications
contain textual descriptions, along with introductory and summary sections.
Each book is subdivided into chapters. The menu bar seen at the bottom of the
screen provides thumbnail shortcuts, displays the title of the current chapter and
of the illustration currently being displayed, and can help readers locate related
content in Google and Wikipedia. The user may create custom bookmarks and
access a concise help system. All illustrations are stored in very high definition,
with no quality loss when zooming in on an interesting detail (within limits set
by the application’s designers).

12.2.2 Essential Skeleton 4, Dental Patient Education Lite, 3D4Medical Images and Animations, free educational software by 3D4Medical.com, available for iOS and Android (Essential Skeleton 3 – earlier version; paid editions of Essential Anatomy 3 and iMuscle 2)

Out of the twenty-seven educational applications offered by 3D4Medical, three are available free of charge. They all focus on human anatomy and can be installed in the same way as SpineDecide (Section 12.2.3).

Essential Skeleton 4
High-quality three-dimensional, fully controllable and scalable model of the
human skeleton. Individual bones and bone clusters may be highlighted,
rendered semitransparent (Fig. 12.1) or hidden entirely (Fig. 12.2).

Fig. 12.1: Essential Skeleton 4 – different modes of bone presentation.


Fig. 12.2: Essential Skeleton 4 – hiding selected bone structures.

Users may add markings and labels to the model (Fig. 12.3), select portions of
the skeleton using tabs or hide selected structures. All features are available from
a sidebar which can be minimized to free up more space for the main display
window. The tool provides an invaluable aid in learning anatomy and comes
with a customizable quiz where the user must match labels (in English or Latin)
to specific skeletal structures (Fig. 12.4). Upon completing the quiz the
application displays the user’s score along with any previous records, enabling
students to track their progress.

Dental Patient Education Lite


A feature-limited version of a commercial application which provides an in-
depth look at human dentition. Much like Essential Skeleton, it is controlled by a
sidebar menu and supports adding markings and tags to the model. The starting
screen shows a human head. By using the layer management tool we can remove
individual muscle layers, finally uncovering the dental structure (Fig. 12.5),
whose elements can be double-tapped to obtain a more detailed view. After
selecting a tooth we can split it in two planes, revealing its internal structure. The
navigation menu (bottom left-hand corner of the screen) contains links to
animations presenting dental conditions, diagnoses, treatment and prophylaxis
(Fig. 12.6). The Lite edition of the application is restricted to twelve sample animations, while the commercial version contains nearly two hundred.

Fig. 12.3: Essential Skeleton 4 – adding labels to the model.


Fig. 12.4: Essential Skeleton 4 – naming specific skeletal structures.

Fig. 12.5: Dental Patient Education Lite – presenting the dental structure.

Fig. 12.6: Dental Patient Education Lite – animation of the dental treatment.
3D4Medical Images and Animations
A repository of high-quality 3D visualizations (Fig. 12.7) and animations
covering various aspects of medicine and related disciplines (e.g. physical
fitness), freely available for noncommercial use with the option to purchase
commercial licenses. These assets can be used to create presentations, posters,
animations etc. The repository contains over 500 images divided into seven
categories and 42 videos divided into five categories [6].

Fig. 12.7: 3D4Medical Images & Animations – initial screen, choosing image category.

Other applications from 3D4Medical


As already noted, 3D4Medical also offers commercial applications – 23 of them
– in various price ranges, from € 1.79 for Student Muscle System (a counterpart
of Essential Skeleton focusing on muscle tissue) to € 44.99 for the full version of
Dental Patient Education and Orthopedic Patient Education (a set of 141
animations detailing orthopedic mechanisms, common injuries and treatment
options). The average cost of an application license is € 10.16; however, six
applications are available at € 6.99. Of particular note is Essential Anatomy 4 –
an enhanced version of Essential Skeleton, with a broader thematic range and
good educational value. This software package is available at € 21.99.

12.2.3 SpineDecide – An example of point-of-care patient education for healthcare professionals, available for iOS
The Decide – Point Of Care Patient Education for Healthcare Professionals
series is a collection of nine applications developed by Orca Health and targeted
at doctors who wish to explain medical concepts to their patients. It can also
augment medical training courses, informing students about the various
structures of the human body, known pathologies and treatment methods. The
software is distributed free of charge via the mobile device’s online store;
tapping “Free/Install” downloads all necessary files and installs the application.
As already noted, the software is subdivided into three segments. The first of
these is devoted to 3D visualizations of the human spine or its individual parts
(regions, vertebrae and intervertebral discs). The model can be rotated, zoomed
and translated using a touchpad-like interface in the bottom part of the screen.
Additionally, some screens present a scrollbar which decouples elements of the
model. The model itself is richly detailed and comes with a range of color-coded
profiles. All textures are based on high-resolution images.
The second visualization module focuses on pathological changes such as
thoracic hyperkyphosis and provides a selection of videos or alternating displays
presenting the normal and pathological spine, supplemented by 2D photographs
and MRI scans, as well as in-depth textual descriptions of symptoms, diagnostic
criteria, recommended treatments, post-operative care and prevention methods.
The final module visualizes a range of surgical procedures using the previously
described 3D models.
The user is also provided with a set of drawing tools which can be used to
add markings to any static image – for example to direct the patient’s attention to
a pathological change. Users may upload their own photos and create video
sequences (which, however, cannot be graphically annotated). The Decide –
Point of Care Patient Education for Healthcare Professionals also includes other
modules: knee, hand, shoulder, eye, foot, heart, dental and ear-nose-throat.

12.2.4 iSurf BrainView – Virtual guide to the human brain, available for iOS

The application is billed by its authors as “a great tool for teaching brain MRI
and for learning neuroanatomy”. It is available free of charge from the AppStore
with a similar installation procedure to the applications described previously.
iSurf Brain View, developed by Netfilter (netfilter.com), features a “model”
brain composed of a great number of MRI scans in all three anatomical planes
(coronal, sagittal and transverse) [7]. The visualization plane can be selected
from the menu placed at the bottom of the viewport. A separate scrollbar is used
to translate the model along the currently selected plane. While the scans
themselves cover the entire head, any bones, muscles and sensory organs are
skipped. The application provides a full range of views, from the tip of the head
all the way to the base of the jaw.
Despite pixelation artifacts, images are crisp and it is easy to discern
individual characteristics. In order to enhance the user’s experience, structures of
the brain are color-coded and can be highlighted as needed (Fig. 12.8). The
application also provides an online interface which can download descriptions of
selected structures from Wikipedia (Fig. 12.9).
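
The general idea of such an online lookup is easy to sketch. The snippet below is only an illustrative assumption – it queries Wikipedia’s public REST summary endpoint using the Python requests library and is not the actual mechanism implemented in iSurf Brain View:

```python
# Illustrative sketch (not iSurf Brain View's actual code): fetch a short
# description of a named brain structure from Wikipedia's REST summary API.
import requests

def fetch_structure_summary(structure_name: str) -> str:
    """Return the lead-paragraph summary for a named structure."""
    url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
           + structure_name.replace(" ", "_"))
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    # The endpoint returns JSON; the "extract" field holds the plain-text summary.
    return response.json().get("extract", "No description available.")

print(fetch_structure_summary("Hippocampus"))
```

An application built this way stays small, since descriptions are fetched on demand rather than bundled with the scans, at the cost of requiring a network connection.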

Fig. 12.8: iSurf Brain View – highlighting the brain structures.

Fig. 12.9: iSurf Brain View – presenting descriptions of the brain structures.

iSurf Brain View provides a rotatable 3D model of the left hemisphere, a quiz
feature which asks the user to name specific structures of the brain, and a set of
scientific articles which provide further in-depth information on the human brain.

12.2.5 Monster Anatomy Lite – Knee – Orthopedic guide, available for iOS (Monster Minds Media)

Monster Anatomy Lite – Knee is very similar to iSurf Brain View but instead
contains MRI scans (with 4–5 mm slice spacing) of the knee joint and its
surroundings. Additionally, Monster Minds Media SAS offers two paid (€ 16.99)
extended editions: Lower Limb and Upper Limb, both developed under the
supervision of Professor Alain Blum from the Centre Hospitalier Universitaire in
Nancy, France [8].
In addition to browsing scans in the coronal, sagittal and transverse planes,
Monster Anatomy Lite enables the user to highlight and observe the structure of
bones, tendons, muscles (Fig. 12.10), blood vessels and nerves (Fig. 12.11).
Navigation is somewhat different than in iSurf Brain View – the user may click
and drag a miniature view of the knee joint as well as change the current layer by
one step (in either direction) to accurately observe areas of interest.
Both applications can be supplemented by other similar software available
from the AppStore. For example, Monster Anatomy Lite – Knee meshes well
with the previously mentioned Decide toolkit, as well as with Essential Anatomy
4 and AO Surgery Reference (which is also available for Android devices). For
its part, iSurf Brain View can be assisted by Brain Anatomy (a free, simple and
user-friendly app by Gianluca Musumeci [9] showing frontal and transverse
cross-sections of the human brain and enabling fragments to be selected and
contrasted with MRI scans – see Fig. 12.12) or by 3D Brain (a 3D model of the
brain where individual structures can be highlighted to provide a clear view of
parts of the cerebrum not visible from outside – see Fig. 12.13).
Fig. 12.10: Monster Anatomy Lite – Knee: cross-section view along the knee.

Fig. 12.11: Monster Anatomy Lite – Knee: cross-section view across the knee.
Fig. 12.12: Brain Anatomy – scan selection.

Fig. 12.13: Brain Anatomy – coloring brain structures.

12.2.6 AO Surgery Reference – Orthopedic guidebook for diagnosis and trauma treatment, available for iOS and Android

Published by the AO Foundation, this free mobile application for Android and
iOS devices can be downloaded from the Android Market or from the AppStore
[10]. It is designed to serve as a handy database/catalogue of fractures, helping
expedite diagnosis and ensure that the selected treatment is the right one.
The application follows a “top-down” approach, taking its user through all
stages of the fracture treatment process, starting with proper diagnosis. The
initial screen shows a complete skeleton (Fig. 12.14) and asks the user to select
an area of interest by tapping the corresponding element. Further intermediate
screens may then be displayed to fine-tune the analysis (Fig. 12.15). The user
indicates the fracture zone, selects its type (transverse, longitudinal, compound
etc.) and describes its complexity (Fig. 12.16). Once the selection process is
done, the application displays a list of possible treatment options, with scrollable
descriptions accessed by clicking icons on the right-hand side of the screen. The
list is often subdivided into surgeries, noninvasive procedures and general
recommendations (Fig. 12.17).
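
The described “top-down” selection can be thought of as walking down a decision tree, with each choice narrowing the next menu until a list of treatment options is reached. The toy sketch below only illustrates the idea – the nested categories and treatment lists are invented placeholders, not content from the AO catalogue:

```python
# Toy decision-tree sketch of the "top-down" selection flow described above.
# Every entry below is an illustrative placeholder, not actual AO content.
CATALOGUE = {
    "femur": {
        "transverse": {
            "simple": ["intramedullary nailing", "plate fixation"],
            "complex": ["intramedullary nailing", "external fixation"],
        },
    },
}

def treatment_options(region: str, fracture_type: str, complexity: str) -> list:
    """Each successive choice narrows the menu until options are reached."""
    return CATALOGUE[region][fracture_type][complexity]

print(treatment_options("femur", "transverse", "simple"))
```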
The next stage involves surgical preparation and surgery itself – the system
shows how to position the patient on the operating table (if required), how to
expose the surgical site and how to carry out the procedure (Fig. 12.18). All of
these steps are shown using realistic drawings or photographs. The application
then moves on to postoperative care (Fig. 12.19), including positioning
protocols, activities to be avoided by the patient, recommended medication and
further check-ups. The application has a robust, user-friendly interface complete
with pictures and links to external sources of information. All descriptions are
clear and concise.

Fig. 12.14: AO Surgery Reference – initial screen, skeleton parts (© AO Foundation, Switzerland).

Fig. 12.15: AO Surgery Reference – skeleton part selection (© AO Foundation, Switzerland).
Fig. 12.16: AO Surgery Reference – diagnosis of the fracture (© AO Foundation, Switzerland).

Fig. 12.17: AO Surgery Reference – list of general recommendations (© AO Foundation, Switzerland).
Fig. 12.18: AO Surgery Reference – surgery procedure description (© AO Foundation,
Switzerland).

Fig. 12.19: AO Surgery Reference – post-operative care description (© AO Foundation, Switzerland).

12.2.7 iOrtho+ – Educational aid for rehabilitationists, available for iOS and Android

A simple, user-friendly application by Therapeutic Articulations intended as a
therapeutic and educational aid for rehabilitationists. It includes a set of reflex
and reaction tests together with information on expected results and literature
references [11].
Tests are launched in a way similar to AO Surgery Reference – by pointing
to a part of the skeleton (Fig. 12.20). The free version of the application covers
the foot and the ankle joint. Tests are divided into categories which correspond
to selected parameters or system components (e.g. the function of a specific joint
or tendon). Each test (Fig. 12.21) comes with a photograph, a video recording
showing the applicable medical procedure, a description of the procedure’s
course and aims, and a result interpretation sheet.

Fig. 12.20: iOrtho+ – initial screen, choosing the part of the skeleton.
Fig. 12.21: iOrtho+ – one of the tests for ankle and foot description.

The menu bar displayed at the bottom of the screen includes a set of
rehabilitation exercises (“mobilizations”), showing how and where to apply
force (Fig. 12.22). Like most of the applications discussed in this chapter,
iOrtho+ has a straightforward interface, although the free version is limited to
two content packs out of a total of 16 (specifically – ankle tests and knee joint
rehabilitation exercises).

Fig. 12.22: iOrtho+ – presentation of rehabilitation exercises.

12.2.8 DrawMD – Based on General Surgery and Thoracic Surgery by Visible Health Inc., available for iOS

The next application in this study comprises a set of visualization boards, with
sixteen thematic packages available from the publisher’s website. In addition to
the two sets discussed here, users can download the following packages:
urology, anesthesia and critical care, cardiology, orthopedics, OBGYN, vascular,
female pelvic surgery, ENT (ear-nose-throat), pediatrics, transplant surgery,
ophthalmology, breast health, pulmonology and speech-language pathology. The
visualizations focus on selected tissues and organs, enabling doctors to better
explain medical problems and treatment options to their patients [12].
Each board supports freehand markings and textual annotations; additionally
the program provides a set of custom “stamps” representing important organs,
pathological changes and surgical instrumentation (Fig. 12.23). These can be
used e.g. to present the placement of a stent, the location of a tumor or even the
function of ion channels. In keeping with the latest AppStore trends, DrawMD
can import custom pictures and video recordings (Fig. 12.24), such as X-ray, CT
or USG scans. Annotated boards can be saved (Fig. 12.25) and accessed from
the application’s starting screen (Fig. 12.26).

Fig. 12.23: DrawMD – visualization of selected organs (©Visible Health, Inc.).

The user may also create a custom account with which they can later log into the
application, post in forums and submit image sharing requests to other users. The
application includes an English-language tutorial complete with screenshots,
enabling new users to familiarize themselves with its features.
Fig. 12.24: DrawMD – choosing video recording or pictures (©Visible Health, Inc.).

Fig. 12.25: DrawMD – making and saving annotations (©Visible Health, Inc.).
Fig. 12.26: DrawMD – starting screen, accessing saved annotations (©Visible Health, Inc.).

12.2.9 MEDtube, available for iOS and Android

Likely inspired by the popular YouTube video sharing service, MEDtube [13]
provides a mobile application with recordings of actual surgeries (Fig. 12.27).
According to the distributor, over 10 thousand HD video files are available, each
accompanied by a brief description of the corresponding case. The dataset is
divided into categories corresponding to areas of medical practice (Fig. 12.28).
MEDtube is also available as a web-based video education portal (Fig. 12.29)
where medical students can find an online library of high-quality multimedia
content of clear practical and educational value, containing over ten thousand
professional medical videos, likewise divided into specific categories (Fig. 12.30).
MEDtube is free to use (only free registration is required). The resources are
provided by physicians, medical societies, healthcare centers and universities
from all over the world and before publication all the uploaded materials are
accepted by medical experts (Figs. 12.31 and 12.32).
Fig. 12.27: MEDtube – video recording of the surgery.
Fig. 12.28: MEDtube – choosing medical category.

Fig. 12.29: MEDtube – educational medical video portal.


Fig. 12.30: MEDtube – choosing specialty.

Fig. 12.31: MEDtube – choosing video recording of the surgery.


Fig. 12.32: MEDtube – presentation of the video recording of the surgery.

12.3 Specialized applications


One of the most interesting and advanced applications discussed in this chapter
is the Touch Surgery system [14]. It enables the user to conduct virtual surgery,
taking them through all stages of the process and describing the procedures and
instruments required at each stage. The authors’ (Jean Nehme, Andre Chow)
goal is to promote best practices in surgery in order to improve the overall
quality of medical care [14]. Realistic visualizations created by the developers in
collaboration with renowned universities (including Duke University, Stanford
University, University of Toronto, Imperial College London and Dartmouth
College) prepare the student for participation in actual surgeries.
Touch Surgery is an interactive mobile simulator (a smartphone application)
which provides a step-by-step walkthrough of the entire surgical
procedure, allowing the user to make decisions at each stage. Crucially, it fits in
a pocket and can be run at any time and place of the user’s choosing [15]. It can
serve young surgical trainees as well as professionals interested in furthering
their knowledge of orthopedic surgery, trauma surgery, neurosurgery, general
surgery, plastic surgery, cardiosurgery, vascular surgery etc. – all in accordance
with the time-tested concept of repetitio est mater studiorum. Advanced 3D
visualization techniques recreate the work of actual surgeons, immersing the user
in a lifelike virtual operating theater. All presentations are richly detailed and
mimic real-world conditions and procedures. A dynamic flow of scenes closely
tracks the movements of the surgeon, OR staff, the operating table, support
equipment etc. This fluid motion is arguably the most important feature of the
application, strengthening the illusion of participating in real surgery.
Touch Surgery can assist trainees in becoming accustomed to the procedures
required in a real clinical setting. It is clear that any hand motions and other
actions should be well practiced prior to operating on real patients. “Decision
before incision” – goes the surgical mantra. Computerized simulations are an
invaluable training tool, enabling the student to obtain practical knowledge
which will help them make the right decisions in the future. An important aspect
of training is selection of the appropriate methodology. Here, we can
differentiate between two broad approaches: teacher-assisted learning and
individual learning. Touch Surgery unifies both concepts as on the one hand the
student may individually perform certain actions (especially in the Test mode),
while on the other hand the system plays the role of a tutor in the Learn mode,
providing the student with hints and recommendations.
Medical simulations can help students develop appropriate “reflexes” and
ensure that these are effectively applied in practice. This, in turn, improves the
quality of medical care as the trainee does not need to painstakingly recall each
sequence of actions prior to making an incision or separating tissue layers. An
undeniable advantage of Touch Surgery is that it implements the popular
“edutainment” concept and can be freely experimented with at no risk to actual
patients – note that a relaxed student can assimilate knowledge much faster than
an anxious one, regardless of age.
Individual modules of Touch Surgery are based on cognitive analysis of
various surgical procedures. Each of them is divided into a number of stages and
decision points. The application can assess the skills of prospective surgeons,
tracking and grading their progress [14]. Each procedure can be practiced in the
Test mode after being explained to the student in the Learn mode. The
application also keeps track of the student’s results and maintains an
individualized library of surgeries. According to Jean Nehme (who co-developed
the application with several colleagues, including the orthopedic surgeon Andre
Chow) Touch Surgery responds to the pressing need to train specialists without
interfering with their everyday medical practice.

12.3.1 Application description

Having created an account, the user is presented with a list of available surgeries
(Fig. 12.33). While the basic version of Touch Surgery contains only a few
representative procedures, additional ones can be downloaded from the Internet
free of charge. The full list includes (among others) neurosurgery, orthopedics,
plastic surgery and general surgery, with 74 modules (including 32 orthopedic
procedures) currently available and new ones being introduced on a regular
basis.
The following description details a sample orthopedic surgery, presenting
first-time users with a walkthrough of its individual stages: I – patient
preparation, II – femoral preparation, III – proximal locking and IV – distal
locking.

Patient preparation
The femoral nailing simulation was prepared by the MSK Lab at Imperial
College London in collaboration with the developers of Touch Surgery. The
procedure itself consists of four stages, each of which is presented in detail,
enabling users to repeat and memorize the necessary steps so that they can be
smoothly performed in a real OR setting. In the authors’ opinion, advanced,
visually appealing animations can provide an invaluable aid in understanding
and memorizing surgical procedures. In this case, the first stage, called “patient
preparation”, shows how to position the patient and otherwise prepare them for
insertion of an intramedullary nail into the right femur (Fig. 12.34).
The next step enables the user to select one of two content presentation
modes. This is done by clicking buttons displayed above the main text field. The
Learn button replays the procedure in guided learning mode, while the Test
button allows the user to try their own hand at performing the necessary actions.
The learning mode is primarily aimed at trainees (students or doctors seeking
specialization). Activating it takes the user to a new screen which presents a
detailed overview of the required actions (e.g. “drag the green ring onto the pink
ring”). Once a given step is complete, the user may proceed to the next step,
where the patient is shown sitting on the operating table. The application will
now monitor the patient’s status as the user goes through application of traction,
alignment of limbs, insertion of the perineal post etc. (Figs. 12.35 and 12.36).
The learning mode concludes with surgical site cleanup. The operating table is
assembled by dragging the green ring displayed in the visualization window. The
direction of motion conforms to the natural movement performed in an OR
setting (as indicated by the red arrow in Fig. 12.37).
Fig. 12.33: Library module – the list of operation types. Selecting an item (arrow) starts
installation (via the Internet) of the corresponding module (Patient preparation).

Fig. 12.34: The explanation of correct patient preparation.


Figs. 12.35 and 12.36: The sequence of steps in preparing the operating table, and the outcome
of the user’s action (in this case placement of the perineal post). Dragging the green ring onto the
pink ring in the direction of the patient’s perineum (as indicated by the arrow) advances the
visualization to the next step, i.e. placement of the perineal post, which provides countertraction
while operating on the upper extremity of the femur.

Subsequently the application presents – in more than a dozen steps – the process
of setting up the operating table and positioning the patient’s upper and lower
limbs in order to enable the surgery. Each stage is supplemented by multimedia
presentations showing the correct course of action (see Fig. 12.37 – patient is
prepared for right-side femoral intramedullary nailing, with axial stability
facilitating fixation of the fractured femur). The course is based on the simple
“cause and effect” principle (Figs. 12.39 and 12.40).
Another important feature offered by Touch Surgery as part of the patient
preparation phase is a selection of intra-operative radiographs showing how the
fractured femur is fixated on the operating table (Fig. 12.38).
The presented module encourages surgeons to take scans of not just the
fractured bone, but also of the proximate joints – in this case, the hip joint and
the knee joint. This step minimizes malpractice risks by revealing additional
joint injuries, should any exist. The Test mode of the application enables the user
to apply their newly acquired knowledge by performing unassisted virtual
surgery on the femur. At each step the user is presented with a multiple choice
question with four possible answers only one of which is correct (Figs. 12.39
and 12.40). The presented procedure consists of 20 steps and therefore
comprises 20 questions.
Fig. 12.37: Patient ready for the next step of the operation.

Fig. 12.38: Intra-operative radiograph.


Figs. 12.39 and 12.40: Test module – the correct decision at the step shown in Fig. 12.39 leads
to the next step, as shown in Fig. 12.40.

In order to answer each question the user taps the selected field (in this case –
“increase femoral traction”), following which the application displays the
resulting position of the patient (Fig. 12.40). Users may also tap other answers
(some of which may be incorrect) – this does not cause the application to revert
to the beginning of the test; however, any incorrect answers are tallied up at the
end and affect the user’s final score (Figs. 12.41 and 12.42).

Figs. 12.41 and 12.42: The results of the Test module run shown in Fig. 12.41 – the attempt is
graded “failed” due to the low score (40 points, with 70 expected), with a graphic presentation of
the validation of each step of the procedure.
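
The scoring mechanics described above are simple to sketch. The formula below is purely hypothetical – the actual scoring algorithm used by Touch Surgery is not published in this chapter; only the pass threshold of 70 points is taken from the example in Fig. 12.41:

```python
# Hypothetical scoring sketch for a test mode of the kind described above.
# The per-step penalty and scoring formula are assumptions; only the pass
# threshold (70 points) follows the example in Fig. 12.41.
def grade_test(wrong_attempts_per_step, pass_threshold=70):
    """Score out of 100; each wrong tap before the correct answer halves
    the credit earned for that step."""
    per_step = 100.0 / len(wrong_attempts_per_step)
    score = sum(per_step * 0.5 ** wrong for wrong in wrong_attempts_per_step)
    return round(score), round(score) >= pass_threshold

# A 20-step procedure with a few wrong taps on steps 3 and 4.
score, passed = grade_test([0, 0, 2, 1] + [0] * 16)
print(score, "passed" if passed else "failed")
```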

Femoral preparation
The second part of Touch Surgery, i.e. femoral preparation, begins in the Learn
mode as a continuation of the previously described preparatory step. The first
task is to select the right scalpel. This is followed by a 3–5 cm incision proximal
and posterior to the greater trochanter, extending 4 cm distally (Fig. 12.43).
We are presented with an image of the operating field immediately prior to
the first incision. Two rings are visible. Dragging the green ring onto the pink
ring performs the incision and visualizes its results in the form of an animation
(Fig. 12.44).
The next step is to incise the fat layer. Again, two rings are displayed and the
user is asked to drag the green ring onto the pink ring, simulating the incision.
The actual procedure is too complex to cover in detail – instead we will
briefly outline the remaining phases. The next phase, i.e. proximal locking,
consists of 29 modules which together simulate closed surgery of a “freshly”
fractured femur. With this surgical technique osteogenic cells are retained in the
fracture hematoma, reducing the risk of nonunion and iatrogenic infections (e.g.
by S. aureus). The second module of Touch Surgery focuses on precise,
controlled setting of the broken bone using intraoperative imaging, with
particular attention devoted to rotational, axial and lateral fractures. Initial
imaging is used to locate the area between the greater trochanter and the
piriformis fossa, which is where the intramedullary nail will be inserted. A
surgical awl is used to create an entry portal to the intramedullary canal (Fig.
12.45). A guide wire is then inserted all the way into the canal, and its placement
verified with a radiograph to ensure correct implantation of the intramedullary
nail (Figs. 12.46 and 12.47).

Fig. 12.43: Second step of the operation module. The correct movement of the green ring toward
the pink one opens a short animation of a skin incision 3–5 cm in length.
Fig. 12.44: The result of the action shown in Fig. 12.43. Continuing the incision makes the cut
deeper.

The guide wire is threaded through the entry portal between the greater
trochanter and the piriformis fossa. This is followed by application of several
cannulated reamers with progressively broader heads, until the diameter of the
head exceeds that of the intramedullary nail by approximately 1.0–1.5 mm. This
preparation of the intramedullary canal concludes the second stage of treatment
and the application again presents the user with a choice between carrying on in
the Learn mode or switching to the Test mode.

Figs. 12.45, 12.46 and 12.47: The insertion point for the awl and guide wire, with radiographic
verification of the actions shown in Figs. 12.46 and 12.47 and comments on each action.

Proximal locking
The third phase of the surgery involves proximal locking of the intramedullary
nail. The nail is attached to a targeting guide with a locking pin, then introduced
into the intramedullary cavity in such a way that its head protrudes from the
entry portal. This phase comprises 28 animated steps (Fig. 12.48).

Fig. 12.48: The visualization of the third step of operation module – nail fixation.

The illustration shows how to use the targeting guide to pinpoint the placement
of the proximal locking canal (here, the user is asked to select a scalpel in order
to expose the bone). Installation of two locking screws in the proximal part of
the femur concludes the third phase of the surgery.

Distal locking
The final part of the application explains the distal locking procedure, with a
total of 47 animated and richly detailed visualizations (Fig. 12.49).
Fig. 12.49: The fourth step of the distal locking procedure.

Here, another locking canal must be drilled to house the distal locking pin. This
is done with the use of the “freehand locking” technique. The drill guide must be
centered over the locking canal; to ensure this, the canal is drilled
with the help of image-enhanced radiography. The surgeon is reminded that
before distal locking begins the length and rigidity of the femur must be verified.
Much like the previously described modules, phase 4 of the surgery can be
accessed in Learn or Test modes, enabling users to learn the procedure with help
from an interactive tutorial or try their own hand at performing the required
actions.

12.4 Simulators
Surgical simulators represent a common ground between advanced IT systems
and modern surgical practice. The concept has emerged from the world of
videogames [16, 17], which – owing to 3D visualization and sensitive motion
detection technologies – can accurately track gestures, translating them into
actions performed by the player’s avatar in virtual space. Clearly, virtual reality
carries great promise for medical training [18], particularly in the area of
minimally invasive surgery where detailed visualization and accurate eye-hand
coordination is of utmost importance.
Collections of CAT and NMR scans can be assembled into realistic virtual
environments detailing any organ in the human body. Virtual controllers coupled
to motion detectors and 3D visualization hardware can then simulate real-life
surgeries. Modern simulation tools include feedback loops where the controller
reacts differently depending on the force applied by the user and the position of
the surgical implement in the patient’s body.
Arguably the most technologically advanced surgical robot currently on the
market is the Da Vinci Surgical System. Designed to advance the state of the art
in minimally invasive surgery and introduced into clinical practice at the
beginning of the 21st century, by 2012 it had been used in over 200 000
procedures carried out throughout the world. Its operation can be witnessed,
among others, in the following videos (all available on YouTube):
“Live robotic da Vinci radical prostatectomy during EAU congress” by the
European Urological Association, “India’s 1st Da Vinci Robotic Live Surgery”
by the Muljibhai Patel Urological Hospital, “Da Vinci Robotic
Hysterectomies/Uterine Fibroids”.

12.4.1 Selected examples of surgical simulators

LapSim by Surgical Science


Surgical Science is a Swedish company founded in the 1990s. Its mission is to
explore the application of virtual reality in developing minimally invasive
surgery simulators [19]. In 2001 the company released LapSim (Figs. 12.50 and
12.51) – a surgical simulator targeted at students and interns specializing in
cholecystectomy, hysterectomy, nephrectomy and other MI procedures. LapSim
trains users in operating cameras and surgical instruments, developing eye-hand
coordination, rapid and accurate grasping, incising and clamping, wound closure
etc. A detailed description of the simulator along with a variety of video
presentations is available on the company website [20].

dV-trainer by Mimic
Mimic Technologies Inc., an American company founded in the 1990s,
specializes in VR medical training with the use of surgical robots [21]. In 2003
the company initiated collaboration with Intuitive Surgical, developer of the Da
Vinci Surgical System and a global leader in robot-assisted minimally invasive
surgery. This collaboration bore fruit in 2007 with the release of dV-trainer (Figs.
12.52 and 12.53), parts of which were copied over from the most up-to-date
version of Da Vinci and subsequently updated in step with further development
work on the robot itself. The operator has access to a console which closely
corresponds to the controls of the actual robot. All input is processed by the
simulator and its results visualized by a customized display, mimicking a real-
life operating theater.

Fig. 12.50: LapSim – surgical simulator.

Fig. 12.51: LapSim – surgical simulator console.

Fig. 12.52: dV-trainer – surgical robot.


Fig. 12.53: dV-trainer – surgical robot, screen shot.

The Skills Simulator by Intuitive Surgical


The Skills Simulator (co-developed with Mimic Technologies) [22] simulates a
range of basic and advanced procedures preparing the user for interaction with
the Da Vinci robot (Figs. 12.54 and 12.55). The list of exercises covers the
following categories:
– operating the EndoWrist device designed to enhance the natural agility of the
human hand
– operating surgical cameras and forceps with 3D visualization feedback
– resections carried out with various instruments and varying force
– applying surgical needles in a variety of situations, closing various types of
wounds
– solving the so-called fourth hand problem, i.e. coping with situations which
require sudden application of an additional instrument

12.5 Summary
Innovative technical solutions – whether assuming the form of advanced
customized hardware or computer applications available for mobile devices –
create fresh opportunities for surgical trainees [23]. Modern-day students,
interns and doctors specializing in specific fields of surgery can obtain
convenient, rapid access to vast volumes of information and knowledge.
Simulation techniques will therefore continue to play a prominent role in
medical education [24–28]. Many of the tools described in this chapter are
already part of academic curricula at medical schools where students learn how
to deal with medical issues and perform surgeries [29–31]. The same tools are
also being used by specialists to further hone their surgical skills.
Fig. 12.54: Da Vinci – surgical robot (© 2015 Intuitive Surgical Inc.).

Fig. 12.55: Da Vinci – surgical robot, screen shots (© 2015 Intuitive Surgical Inc.).

Dedicated applications such as Touch Surgery are an example of how
modern simulation techniques can help surgeons practice their trade in a
nonclinical setting. The ability to repeat a procedure many times and to do so at
the time and place of the user’s choosing paves the way to effective assimilation
of knowledge. While computer applications offer limited opportunities to
practice manual skills, they benefit greatly from their wide availability and ease
of use [14].
Although “real” surgical simulators remain very expensive, we can hope that
ongoing progress in IT and computer hardware development will eventually
make them more accessible to medical practitioners [32].

References
[1] Lenoir T. In: Thurtle P. ed. Semiotic Flesh: Information and the Human Body, Seattle, WA:
University of Washington Press, 2002, pp. 28–51.
[2] Sutherland LM, Middleton PF, Anthony A, Hamdorf J, Cregan P, Scott D, et al. Surgical
simulation: a systematic review. Ann Surg. 2006 Mar;243(3):291–300.
[3] Cooper JB and Taqueti VR. A brief history of the development of mannequin simulators for
clinical education and training. Postgrad Med J. 2008;84(997):563–570. doi:
10.1136/qshc.2004.009886.
[4] Petty MD and Windyga PS. A high level architecture-based medical simulation system.
SIMULATION. 1999;73:281–287.
[5] Ziv A, Ben-David S and Ziv M. Simulation based medical education: an opportunity to learn
from errors. Medical Teacher. 2005;27:193–199.
[6] 3d4medical.com. Award Winning apps for medical students, educators and professionals –
3d4medical.com [Internet]. 2015 [cited 1 June 2015]. Available from:
http://applications.3d4medical.com/apps_home.
[7] netfilter.com.br. Mobile Platforms NetFilter [Internet]. 2015 [cited 1 June 2015]. Available
from: http://www.netfilter.com.br/mobile-platforms/?lang=en.
[8] Monster Minds Media. [Internet]. 2015 [cited 1 June 2015]. Available from:
http://monstermindsmedia.fr/?page_id=12.
[9] Musumeci G. Brain Anatomy, Available: https://sensortower.com/ios/us/gianluca-
musumeci/app/brain-anatomy/548219833.
[10] AO Foundation. Mobile Apps [Internet]. 2015 [cited 1 June 2015]. Available from:
https://aotrauma.aofoundation.org/Structure/education/self-directed-learning/mobile-
apps/Pages/mobile-apps.aspx#null.
[11] Therapeutic Articulations. iOrtho+ Mobile App for iPhone, iPad, & Android – Therapeutic
Articulations [Internet]. 2015 [cited 1 June 2015]. Available from:
http://www.therapeuticarticulations.com/iPhone___iPad_App.php.
[12] Roth M, Roth M, Cox J, Roth M, Cox J, LaCava C, et al. drawMD Archives – Visible Health
[Internet]. Visible Health. 2015 [cited 1 June 2015]. Available from:
http://www.visiblehealth.com/category/drawmd/.
[13] MEDtube.net. Medical Videos, Surgery, Procedures Videos [Internet]. 2015 [cited 1 June
2015]. Available from: https://medtube.net/.
[14] Touch Surgery Surgical Simulator. The Mobile Surgical Simulator App [Internet]. 2015 [cited
1 June 2015]. Available from: https://www.touchsurgery.com/.
[15] Al-Hadithy N, Gikas PD and Al-Nammari SS. Smartphones in orthopaedics. Int. Orthop. 2012
Aug;36(8):1543–7.
[16] Bradley H. Can video games be used to predict or improve laparoscopic skills? Journal of
Endourology 2005;19(3): 372–376. doi: 10.1089/end.2005.19.372.
[17] Curet MJ. The impact of video games on training surgeons in the 21st century – Invited
Critique. Arch Surg. 2007;142(2):181–186. doi: 10.1001/archsurg.142.2.186.
[18] Gurusamy KS, Aggarwal R, Palanivelu L and Davidson BR. Virtual reality training for surgical
trainees in laparoscopic surgery. Cochrane Database Syst Rev. 2009 Jan;(1):CD006575.
[19] Grantcharov T, Bardram L, Funch-Jensen P and Rosenberg J. Learning curves and impact of
previous operative experience on performance on a virtual reality simulator test laparoscopic
surgical skills. Am J Surg. 2003;185(2):146–149. doi: 10.1016/s0002-9610(02)01213-8.
[20] Surgical Science. LapSim The Proven Laparoscopic Training System, simulation [Internet].
2015 [cited 1 June 2015]. Available from: http://www.surgical-science.com/lapsim-the-proven-
training-system/.
[21] Mimic Simulation.com. Mimic Simulation |dV-Trainer [Internet]. 2015 [cited 1 June 2015].
Available from: http://www.mimicsimulation.com/products/dv-trainer/.
[22] Intuitive Surgical.com. Da Vinci [Internet]. 2015 [cited 1 June 2015]. Available from:
http://www.intuitivesurgical.com/company/media/.
[23] Grunwald T, Krummel T and Sherman R. Advanced technologies in plastic surgery: how new
innovations can improve our training and practice. Plast. Reconstr. Surg. 2004 Nov;114(6):
1556–67.
[24] McGaghie WC, Issenberg SB, Petrusa ER and Scalese RJ. A critical review of simulation-
based medical education research: 2003–2009. Med Educ. 2010 Jan;44(1):50–63. doi:
10.1111/j.1365-2923.2009.03547.x.
[25] Akaike M, Fukutomi M, Nagamune M, Fujimoto A, Tsuji A, Ishida K and Iwata T. Simulation-
based medical education in clinical skills laboratory. J Med Invest. 2012;59(1–2):28–35.
[26] Milburn JA, Khera G, Hornby ST, Malone PSC and Fitzgerald JEF. Introduction, availability
and role of simulation in surgical education and training: Review of current evidence and
recommendations from the Association of Surgeons in Training. International Journal of
Surgery. 2012;10(8):393–398. doi: 10.1016/j.ijsu.2012.05.005.
[27] Dygut J and Płonka S. Student medical education with the real orthopedic case presented as
interactive computer simulation. Bio-Algorithms and Med-Systems. 2014;10(5).
[28] Dygut J, Płonka S and Roterman-Konieczna I. Involvement of medical experts in legal
proceedings: an e-learning approach. Bio-Algorithms and Med-Systems, 2014 Jan;10(3).
[29] Betz R, Ghuysen A and D’Orio V. The current state of simulation in medical education. Rev
Med Liege. 2014 Mar;69(3):132–8. French. PubMed PMID: 24830212.
[30] Sesam-web.org. SESAM – Society in Europe for Simulation applied to medicine [Internet].
2015 [cited 1 June 2015]. Available from: http://www.sesam-web.org/.
[31] ssih.org. The Society for Simulation in Healthcare [Internet]. 2015 [cited 1 June 2015].
Available from: http://www.ssih.org/.
[32] Atesok K, Mabrey JD, Jazrawi LM and Egol KA. Surgical simulation in orthopaedic skills
training. J. Am. Acad. Orthop. Surg. 2012 Jul;20(7):410–22.
Part VII: Support of therapy
Łukasz Czekierda, Andrzej Gackowski, Marek Konieczny, Filip
Malawski, Kornel Skałkowski, Tomasz Szydło, and Krzysztof
Zieliński
13 From telemedicine to modeling and proactive
medicine

13.1 Introduction
Contemporary healthcare generates vast amounts of data. Telemedical systems
contribute considerably to this, greatly broadening the spectrum of available
information to include the following (a schematic data sketch follows the list):
– general medical and social profiles
– history of previous therapies
– laboratory and imaging examination results
– records of everyday activities of patients and prescribed rehabilitation
– records of food calories and times of meals
– basic body parameters such as blood pressure, heart rate, body weight,
temperature, blood glucose level, etc.
– records of drug intake
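
The sketch below shows how such a heterogeneous record might be represented in software. All field names are illustrative assumptions made for this example; a real system would rather build on a standard such as HL7 FHIR:

```python
# Illustrative data structure aggregating the categories listed above.
# Field names are assumptions for this sketch, not a telemedical standard.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class VitalSample:
    timestamp: datetime
    parameter: str          # e.g. "heart_rate", "blood_glucose"
    value: float
    unit: str               # e.g. "bpm", "mg/dL"

@dataclass
class PatientProfile:
    patient_id: str
    social_profile: dict = field(default_factory=dict)   # general medical/social profile
    medical_history: list = field(default_factory=list)  # previous therapies
    examinations: list = field(default_factory=list)     # laboratory and imaging results
    activity_log: list = field(default_factory=list)     # everyday activities, rehabilitation
    meals: list = field(default_factory=list)            # food calories and meal times
    vitals: list = field(default_factory=list)           # VitalSample records
    drug_intake: list = field(default_factory=list)
```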

The sensing and monitoring of our life activity is going to be pervasive [1]. It
will be a source of very valuable information which may considerably improve
healthcare operation leading to personalized and proactive medicine, currently
perceived as a very promising research area. At the same time demand for
healthcare will become more and more widespread due to commonly observed
trends:
– better living conditions (including medical care) and general societal progress
mean that people live longer than before. An ageing population is, however, at
greater risk of cognitive impairment and frailty
– chronic diseases like obesity and diabetes affect a considerable percentage of
the population in highly developed countries
– intensive and successful promotion of a healthy lifestyle and awareness of the
importance of self-management of health and diseases

Efficient utilization of this information requires employing advanced ICT
technologies such as big data processing, machine learning, cognitive
computing, predictive analytics etc. One of the interesting areas of medical data
processing is modeling of selected aspects of human body behavior, supporting
medical diagnoses and proactive treatment.
This chapter presents how existing telemedical systems may contribute to this
vision of future medicine. In Section 13.2 the traditional model of healthcare
supported by telemedicine is described, and a new model of personalized,
proactive medicine to enhance it is specified. Section 13.3 contains an overview
of computational methods which can be utilized in the new approach. Sections
13.4 and 13.5 characterize existing TeleCARE and TeleDICOM systems
developed by the authors and their role in proactive data-driven medicine.

13.2 ICT-driven transformation in healthcare


There are a few key factors contributing to the ICT-driven transformation of
healthcare systems: remote pervasive access to medical data; the collection of
large amounts of medical data which can be used for knowledge extraction and
modeling; and the development of new methods for real-time analysis of vast
streams of data which can be exploited in medical decision support systems. The
existing and future possibilities in these areas are elaborated in the following
points.

13.2.1 Overview of telemedicine

Continuing technical progress and the increasing adoption of the Internet have
positioned it as the basic communication medium and infrastructure for
providing services in today’s world. Applications of the Internet include
telemedicine, which has become a very broad term, encompassing many
remotely performed processes in medicine, such as [2]:
– Telemonitoring of patients’ vital parameters (physical activity, weight, blood
pressure, heart rate, etc.). If there is feedback on the basis of the parameters
gathered, the term telecare is more suitable.
– Telerehabilitation – observation and supervision of the process of
rehabilitation and its progress.
– Telediagnosis and teletreatment – facilitating remote cooperation between the
patient and medical staff with the aim of obtaining clinical information that
may help to introduce and optimize medical or paramedical aid.
– Tele-education of patients – fostering medical consciousness and helping to
promote a healthy lifestyle.
– Tele-education of medical staff (medical students, nurses, postgraduate
training of physicians, etc.).
– Teleconsultation – involves cooperation between medical practitioners and
experts in order to discuss particularly difficult cases, exchange ideas and
make therapeutic decisions. This process is sometimes called tele-expertise. It
may also be a part of tele-education of medical staff observing the
teleconsultations.

A variety of telemedical systems have already been successfully deployed in
many countries around the world. Rapid progress in medical research and
ubiquitous access to the Internet mean that such systems will become
pervasive in the near future.

13.2.2 Traditional model of healthcare supported by telemedicine

The processing model of medical data generally continues to be traditional even
if telemedical tools are utilized. In this approach, the data (symptoms, results of
various examinations or information gathered from telemonitoring or
telerehabilitation systems) are the input for the reasoning process performed by
medical doctors. Their knowledge and experience are employed to diagnose the
disease or medical problem, to evaluate the effectiveness of previous therapy and
(possibly) to modify it appropriately. In some cases it may be possible for them
to predict the progress of the disease in the future. When the knowledge and
experience of the doctor taking care of a patient are not sufficient to diagnose
and properly treat a case, a consultation within a local team of experts or remote
consultation using appropriate telemedical systems can be organized.
The described process is illustrated in Fig. 13.1. Patient-related data acquired
from various sources is gathered in the medical patient profile module (which
can be identified with the electronic health record, although it can also be
provided with some paramedical information, e.g. everyday activity).
The quantity and complexity of medical information associated with a single
patient are growing considerably. As stated and justified in the introduction,
healthcare will become more and more widespread in the near future. As a
consequence, although the described approach employs modern telemedical
tools, it cannot be sustained in this form for much longer [1]:

Fig. 13.1: Traditional healthcare model supported with telemedicine.

– It is poorly scalable and not cost-effective – to increase the scale of such a
healthcare system, employing more medical personnel is inevitable.
– It is generally reactive, i.e. medical doctors react to reported ailments and
observed symptoms. Therapeutic processes are frequently initiated by patients
themselves, who visit their doctors when they feel unwell.

A solution to these issues is much better utilization of the potential of ICT
technologies in the area of medical data processing. For efficient, large-scale
operation, this process must be automated at least to some degree. This could
help physicians with simple repetitive tasks, allowing them to concentrate on
more complicated issues. Efficiently utilized techniques of medical data mining
and its intelligent analysis [3] may yield a more proactive approach to the
treatment. These issues will be discussed in the following subsections.

13.2.3 Modeling as knowledge representation in medicine

Mathematical and computer modeling has been practiced for many years as a
very effective method of representing very complex relations and processes in
medicine. Although it is impossible at the moment to create a full model of a
human body, due to insufficient knowledge, many precise models of particular
diseases (cardiovascular, diabetes etc.) [4, 5], single organs (heart, pancreas, etc.)
[6, 7] or systems (cardiovascular, respiratory etc.) [8] exist. In general, modeling
in medicine is a very broad area and cannot be discussed within a single book.
This chapter focuses on models which can be developed using medical data
stored in telemedical systems as part of teletreatment, telemonitoring and
teleconsultation activities (see Sections 13.4 and 13.5). Considering these kinds
of systems as a source of valuable diagnostic information, various models that
can support physicians can be developed, for example:
– Teletreatment and telemonitoring systems may analyze the received data
related to a patient’s health state and use data mining methods to build
personalized models of disease progression or therapeutic efficacy. Based on
such models the systems can undertake various actions, e.g. recommend
certain activities to a patient, report certain signs to their referring doctor or
generate alerts to appropriate medical personnel on duty (nurses, general
practitioners or specialists).
– In imaging medicine various Computer-Aided Diagnosis (CAD) systems may
be employed, which can build models from available data and suggest a
diagnosis, or at least indicate some findings which are suspected to be atypical
or potentially dangerous. The final decision is taken in this case by medical
personnel.
In both cases the obtained models result in better understanding of the disease
itself but also contribute to monitoring positive or adverse reactions to therapy.
Of course, the knowledge represented in the form of models always has to be
applied with care and any actions suggested by such models should be verified
by medical professionals.
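
As a minimal illustration of the first bullet above, a telemonitoring system might fit a per-patient trend to incoming measurements and alert the referring doctor when the projected value crosses a clinical threshold. The sketch below uses a plain least-squares line and synthetic glucose readings; real systems would apply far more robust data mining methods:

```python
# Minimal sketch: fit a personal trend to telemonitored readings and alert
# when the extrapolated value crosses a threshold. Data are synthetic and
# the method (a straight least-squares line) is deliberately simplistic.
import numpy as np

def projected_value(days, values, horizon_days):
    """Fit a linear trend and extrapolate `horizon_days` past the last reading."""
    slope, intercept = np.polyfit(days, values, deg=1)
    return slope * (days[-1] + horizon_days) + intercept

days = np.arange(14)                       # two weeks of daily readings
glucose = np.array([108, 110, 109, 113, 112, 115, 117,
                    116, 119, 121, 120, 124, 123, 126])  # fasting glucose, mg/dL

if projected_value(days, glucose, horizon_days=7) > 126:
    print("Report to referring doctor: projected fasting glucose exceeds 126 mg/dL")
```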

13.2.4 Towards a personalized and proactive approach in medicine

An obvious prerequisite for automatic support of treatment is having a very
reliable, continuously verified and improved computer model describing the
operation of the human body.
The construction of each model is a long and complex process aiming at
creating the best possible representation of a fragment of the real world. The
same is true in the case of modeling in medicine. A crucial issue in this area is
the selection of data representative of the considered disease and subpopulation
which will be used for model development. This process is an iterative one with
many feedback loops in preclinical and clinical phases, during which the model
may be modified and improved.
The already mentioned trends in healthcare operation may drastically
facilitate acquisition of data essential for the development and verification of the
models:
– Recently, access to real medical data and its processing has become
considerably simpler due to its digital representation.
– A patient’s health state can be described more and more precisely. Patients’
medical records may contain information gathered not only during traditional
diagnostic processes (laboratory, imaging, etc.) but also thanks to pervasive
sensing. Some parameters can already be obtained noninvasively with
available and relatively cheap sensors. Implantable devices are already able to
provide unique data suitable for frequent or continuous monitoring. In the
future, probably more sensor implants will be available providing not only
physical but also biochemical parameters. Medical records may be
supplemented with paramedical data (such as social conditions, physical
activity etc.) which can be important in the treatment of some diseases.

It is possible that advanced methods of automatic knowledge extraction and
processing will soon be employed in the process of data acquisition and model
verification supporting medical specialists whose expert knowledge in the given
domain is so far indispensable in developing the model and evaluating the
correctness of its operation.
These two above-mentioned factors mean that medical models can be
developed faster and are more reliable than before. As a consequence, they can
be responsibly applied in support of the treatment process: automatically taking
simple decisions and only referring the more important and toughest issues to
medical doctors. Almost immediate feedback gathered through telemonitoring of
patients for whom the model has been run may facilitate the process of the
model’s improvement or even self-improvement (the same relates to clinical
trials).
Individuality of human organisms makes it necessary to “personalize” the
models. Models obtained using individual medical data of patients take into
account their individual features which can be significant in the model utilization
phase.
With a reliable model of human body operation in some aspects (an organ,
the course of a disease, etc.) it is possible to predict the body’s behavior in these
aspects under certain conditions or in the future. This opens the door to new
applications (the first two are particularly important for this chapter’s discussion
and thus will be elaborated in a few following paragraphs):
– designing and verifying a strategy of therapy prior to applying it
– preventive treatment
– planning surgical operations or interventions (dental, beauty treatment etc.)
– research

It may be possible to choose the best possible therapy prior to applying it. When
a therapy has been prescribed, its efficacy can be verified by comparing the
compliance of its progress with previous assumptions. A significant discrepancy
can trigger its modification.
It may also be possible to predict whether and how a patient’s disease will
evolve in the near future (note that ethical issues are beyond the scope of this
chapter). In consequence, such an approach may allow implementing preventive
treatment. For example, by detecting susceptibility to some diseases and
triggering earlier interventions it may be possible to either avoid development of
a disease at all or to minimize its negative consequences. Earlier detection of
risks associated with the progress of chronic diseases, ageing etc. is generally
more efficient and cheaper than reactive treatment. Even if the prediction
algorithms generate false positive hypotheses, additional examinations which
may also be recommended by this automated procedure operation can determine
the correctness of the hypothesis.
Appropriate software tools made available to patients, visualizing the
progress of the prescribed therapy, may improve treatment efficacy by
increasing their motivation to actively participate in the management of their own
disease.
It should be noted and emphasized that the goal of treatment automation is
not to eliminate the need for medical doctors in the therapeutic process – their
approach to the case is, and probably always will be, more holistic. Moreover,
even the best state-of-the-art medical equipment cannot make a diagnosis
without the knowledge, experience and intuition of medical doctors. Biology is
also much more complex than technical science and there may be many different
atypical situations making the modeling very difficult. Biological systems’
reactions may depend on multifactorial unpredictable reasons. However, the
model-based approaches not only aim at supporting medical staff by taking over
processing of simple and routine issues. Cognitive functions of new generation
systems (according to IBM’s [9] definition of cognitive computing) allow them
to learn and efficiently penetrate the complexity of huge amounts of data to help
human experts make better decisions. In the discussed area this means, among
other things, indicating important facts to be considered by medical doctors and
suggesting appropriate actions to take.

13.2.5 Model of proactive healthcare


A new model of healthcare, consistent with the above discussion, is presented
in Fig. 13.2. The previously presented model (Fig. 13.1) has been complemented
with a decision system and knowledge representation modules.

Fig. 13.2: Model of proactive, knowledge-based healthcare.

The knowledge representation module contains models; since creation of a
holistic model is still impossible, multiple models are used to describe a single
patient suffering from multiple diseases (e.g. diabetes and respiratory system
problems). A new model is instantiated for a patient when a new disease or
disability is diagnosed. The new model does not usually start out devoid of
knowledge: in most cases it can be derived from models that refer to the same
disease in patients with a similar medical profile.
Extraction of such generic models is performed using cross-patient data analysis
methods. These generic models are next fed with specific patient data stored in a
medical patient profile – depending on the disease or treatment (arrow A). The
data passed to the model comprises not only results of examinations performed
during a treatment or acquired from telemonitoring systems, but also basic
information characterizing a patient (such as: age, weight, gender, comorbidities,
etc.). Methods of extracting the knowledge in the form of models, as well as
their further personalization are described in the next section.
The decision system module is fed with the data stored in the knowledge
representation module (arrow B) and is able to take decisions automatically. The
decisions are of various categories:
– diagnosis
– diagnostic hypothesis, which is the request for gathering more data
– providing a patient education module selected according to the diagnosis
– initiation of a new therapy
– optimization of the current therapy

The module can operate in a fully automatic mode or in an advisory mode. In the
latter mode it only suggests certain steps, which then need to be performed by
dedicated medical personnel (nurses, medical doctors, medical experts). A
blended mode is probably optimal, in which the more important decisions require
approval. Selection of the best option in a particular case is up to the medical
doctor and should depend on the complexity of the case, confidence in the
automatic reasoning, etc. Sometimes the decision can be made solely by medical
personnel (arrow F) using data stored in the medical patient profile – in
exactly the same way as in the traditional model.
The decision is announced to the patient either using traditional means of
patient-healthcare contact or via the feedback channel of a teletreatment system.
At the same time, it is stored in the medical patient profile for evidence
purposes. The decision can also influence the parameters of the model (e.g. a
new therapy strategy is run) (arrows D and E).

13.3 Computational methods for model development


Various computational methods have been successfully applied in modeling the
progression of the following categories of diseases:
– neurological diseases such as Parkinson’s disease, Alzheimer’s disease,
schizophrenia or dementia [10, 11]
– cardiovascular diseases, e.g. arterial hypertension [12, 13], coronary artery
disease [14, 15] or pulmonary hypertension [16]
– metabolic diseases, e.g. diabetes [17]
– neoplasms [5]
– AIDS [18]

These models allow for automatic extraction of selected aspects or features that
support physicians in making decisions regarding the applied therapy.
Such models are particularly useful in the case of chronic diseases (e.g. diabetes,
Parkinson's disease, Alzheimer's disease), which advance slowly over time.
Modeling of these diseases’ progress is challenging due to various factors such
as possible incompleteness of data, irregularity in observations and considerable
heterogeneity of patients’ conditions [19, 20].
Besides pure modeling of a disease’s progression, the models obtained from
data amassed by telemedical systems can also be applied to simulations of the
course of a disease when a certain therapy is applied. In this way they are often
used for improving therapies, e.g. warfarin or insulin dosing [21],
immunotherapy [22] or chemotherapy [23, 24]. Personalized models of a
therapy's effects support physicians in the therapeutic process, so their
accuracy and comprehensibility are essential.
Among the computational methods used for developing models of disease
progression and therapy effects, the following principal groups of methods can
be distinguished:
– regression methods [10, 17, 24, 25]
– supervised learning methods [10, 12, 15, 17]
– unsupervised learning methods [11, 13]
– Markov Decision Process (MDP)-based methods [16, 23]
– Monte-Carlo methods [18]

The first group of methods, i.e. regression-based prediction, roughly speaking
relies on fitting a multidimensional function h(x), called a hypothesis, to a
given dataset, so that its values are as close as possible to the values in the
dataset within a specific functional form. The fitted function can then be used
either to predict future values (in this case time is one of the function's
dimensions) or to classify patterns. From the medical modeling viewpoint, the
first approach applies when a disease's progress or a therapy response is
simulated [17, 24, 25], whereas the second mainly concerns CAD software
[10].
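
As an illustration, the short sketch below fits a quadratic hypothesis h(t) to a
made-up series of follow-up measurements and extrapolates it one year ahead. It
is only a minimal sketch in Python, assuming NumPy; all values and variable
names are hypothetical, not taken from the cited studies.

import numpy as np

# Hypothetical biomarker values measured at yearly follow-ups.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # years since diagnosis
y = np.array([5.1, 5.6, 6.4, 7.5, 8.9])   # e.g. a severity score

# Fit a quadratic hypothesis h(t) to the data by least squares.
coeffs = np.polyfit(t, y, deg=2)
h = np.poly1d(coeffs)

# Use the fitted hypothesis to forecast progression one year ahead.
print("predicted score at t = 5:", h(5.0))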
The supervised learning methods constitute a generalization of the regression
methods. The generalization relies on applying more sophisticated hypothesis
functions (e.g. based on composed functions – logistic regression [26], SVM
[27]) or on generative modeling of the function's structure (e.g. neural networks
[28], Bayesian networks [26]). Application of these methods in most cases
improves the accuracy of prediction; nonetheless, the fitting algorithms for
these functions usually require larger training datasets.
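
A minimal sketch of such a supervised classifier is shown below, assuming the
scikit-learn library; the patient features and labels are purely synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features: [age, systolic blood pressure, BMI] for 200 patients.
X = rng.normal([60, 130, 27], [10, 15, 4], size=(200, 3))
# Synthetic diagnosis label derived from a noisy linear risk score.
risk = 0.04 * X[:, 0] + 0.02 * X[:, 1] + 0.05 * X[:, 2]
y = (risk + rng.normal(0, 0.5, 200) > 6.5).astype(int)

# Fit a logistic-regression hypothesis and estimate the disease
# probability for a previously unseen patient.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("P(disease):", clf.predict_proba([[65, 145, 31]])[0, 1])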
In contrast to the regression and supervised learning methods, the
unsupervised learning methods do not require a dataset with example prediction
values, since they are devised to find structures in datasets. The structures
returned by these methods can be identified with the prediction values, which in
the case of supervised learning methods are known a priori. With regard to
medical modeling these methods are most commonly used in CAD systems to
extract certain information from complex medical data (especially imaging data),
which is next used to make predictions about a disease's progress. For example,
in [11] the Linear Discriminant Analysis (LDA) method was applied to identify
regions in MR images that facilitated discrimination between patients with
Alzheimer's disease and healthy controls.
The methods based on MDP, like reinforcement learning [29], are used to
model time series, i.e. processes over time. In the context of medical applications
they are particularly useful for modeling interactions between diseases [16] or
the influence of the drugs taken on a therapy [23]. In general, these methods
constitute generalizations of the supervised learning methods to cases which
require different classifications depending on the point in time.
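
The toy value-iteration sketch below illustrates the MDP idea on a two-state,
two-action therapy problem; all states, rewards and transition probabilities
are invented for illustration and have no clinical meaning.

import numpy as np

states = ["stable", "deteriorating"]
actions = ["wait", "treat"]
# P[a][s, s'] – hypothetical transition probabilities between states.
P = {"wait":  np.array([[0.90, 0.10], [0.30, 0.70]]),
     "treat": np.array([[0.95, 0.05], [0.60, 0.40]])}
# R[a][s] – immediate reward of action a in state s (treatment has a cost).
R = {"wait":  np.array([1.0, -1.0]),
     "treat": np.array([0.8, -1.2])}

gamma, V = 0.95, np.zeros(2)
for _ in range(500):  # value iteration: converge to long-run state values
    V = np.max([R[a] + gamma * P[a] @ V for a in actions], axis=0)

# Read off the best action per state from the converged values.
policy = {states[s]: max(actions, key=lambda a: R[a][s] + gamma * P[a][s] @ V)
          for s in range(2)}
print(policy)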
Finally, the Monte-Carlo methods constitute algorithms which return
probabilistic predictions based on historical data. Their usage in medical
simulations is only sensible in cases where limited data are available or when the
other methods fail [18].
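
A minimal Monte-Carlo sketch: bootstrap resampling of (made-up) historical
yearly progression increments to estimate the probability of crossing a
threshold within five years.

import numpy as np

rng = np.random.default_rng(2)
# Hypothetical yearly progression increments observed in past patients.
historical_increments = np.array([0.2, 0.5, 0.1, 0.8, 0.3, 0.6, 0.4])

# Simulate 10 000 five-year trajectories by resampling the history.
paths = rng.choice(historical_increments, size=(10_000, 5)).sum(axis=1)
print("P(progression > 2.0 within 5 years):", (paths > 2.0).mean())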
All the aforementioned methods were originally developed for the purpose of
analyzing relatively small datasets, i.e. they are usually sufficient for predictions
concerning a single patient. Although personalized models obtained from a
single patient’s data are sufficient in many kinds of diseases, the rapid
development of telemedical software and the ubiquity of medical sensors open
new possibilities for cross-patient data analysis and screening. Such analyses are
particularly useful for the correct diagnosis and treatment of diseases such as
glaucoma [30, 31] or cancer [32], which can be detected early based on massive
comparative analysis of results from medical examinations. The demand for
massive data processing methods has yielded a new branch of analysis methods
– Big Data – which has been widely studied in recent years. The big data
methods constitute counterparts of traditional data processing methods
(including those described above), however, they are designed for processing
very large datasets (in the order of tera- or exabytes). Employing these
methods in telemedical systems for anonymized analysis of the amassed datasets
can be beneficial in several cases, for example, in early risk detection and
assignment of patients to risk groups [33, 34]. It should be noted, however,
that many of these applications are currently in an early research stage, thus their
clinical impact is unknown.
Another novel area of computational methods, which can support physicians but
requires huge datasets, is cognitive computing [9]. This group of methods is
used to make decisions in situations characterized by high ambiguity and
uncertainty. With regard to proactive healthcare, cognitive computing systems
such as IBM's Watson have been successfully applied in surgery support [35, 36]
and CAD systems [37]. Although initial research results seem very promising,
more extensive use of cognitive computing in medicine requires further study,
since the existing approaches are still immature.
A conclusion from the above considerations is that many computational
methods may be used for modeling various types of diseases and therapies in
medicine. A common trait of these methods is the necessity of feeding them with
medical data. As stated at the beginning of this section, telemedical systems
constitute a potential source of medical data. Due to their functionality such
systems inherently collect large volumes of medical data. Taking into account
the selected categories of telemedical systems, i.e. telemonitoring, teletreatment
and teleconsultation, the data stored in them can be roughly divided into two
types:
– Imaging data such as X-ray (RTG), MR or computed tomography results. This
kind of data is commonly exchanged in teleconsultation systems.
– Parametric data, i.e. sets of parameters describing a patient’s health state in
discrete time moments. This type of data is gathered by all telemonitoring and
teletreatment systems.

The following two subsections focus on applications of the described
computational methods to these two types of medical data.

13.3.1 Computational methods for imaging data

Medical images constitute a very important and rich source of medical
information, and extracting knowledge from them in an automatic or
semiautomatic way is technically challenging.
Medical imaging data is provided by dedicated acquisition equipment
(computer tomographs, ultrasound devices, radiographs) and represented in the
DICOM [38] format. DICOM documents contain not only the imaging
information itself but also many parameters describing a patient and their
examination conditions. Such metadata are added automatically during the
examination (some of them, such as the patient's personal data, are taken from
the HIS – Hospital Information System). The imaging data is typically stored in specialized
repositories called PACS (Picture Archiving and Communication System). Most
medium and large hospitals have their own PACS repositories. In a typical
processing flow the data is accessed from a PACS repository and analyzed by a
radiologist or another expert optionally supported with CAD software [39]. As a
result of this process a diagnosis is created and stored in the PACS or HIS
system.
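
For illustration, the snippet below shows how such a DICOM document could be
inspected programmatically. It assumes the open-source pydicom package and a
hypothetical local file name; access to the PACS itself is omitted.

import pydicom

ds = pydicom.dcmread("example.dcm")      # hypothetical DICOM file
print("Patient ID:", ds.PatientID)       # metadata added at acquisition
print("Modality:", ds.Modality)          # e.g. CT, MR, US
pixels = ds.pixel_array                  # the image matrix for analysis
print("Image shape:", pixels.shape)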
Since medical images are usually complemented with additional data, the
supervised learning methods can be applied in order to model a disease's
progress or to support diagnosis, as was done in [10, 17, 25]. Moreover, if the
modeling functionality takes a specific patient's therapy and treatment history
into account, it yields a simulation tailored to that patient's conditions
(e.g. drugs taken, operations completed, etc.). When the applied therapy
constitutes a simulation parameter, the MDP-based methods can be more
supportive than simple regression algorithms.
In many cases model development requires a preliminary step devoted to
recognition and extraction of certain structures in the medical data (e.g.
abnormal areas of medical images) or of the dimensions which contribute most to
changes (taking into account too many dimensions which do not contribute to the
actual model leads to the curse of dimensionality – see [40, 41]). This step is
often necessary since making predictions based on analysis of raw imaging data
is often impossible [42, 43]. It can be implemented using the unsupervised
learning methods (see [11]) or the recently developed deep learning methods
[44]. Finally, the accuracy of existing models can sometimes be improved by
application of boosting methods, e.g. AdaBoost [45], or by initial screening
with big data analysis methods.
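
A short sketch of such a preliminary dimensionality-reduction step, assuming
scikit-learn; the 64-dimensional "image features" are synthetic stand-ins for
values extracted from medical images.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 64))   # e.g. flattened image-patch features

# Keep only the five directions that explain the most variance.
pca = PCA(n_components=5).fit(X)
X_reduced = pca.transform(X)
print("reduced shape:", X_reduced.shape)
print("variance explained:", pca.explained_variance_ratio_.sum())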

13.3.2 Computational methods for parametric data

In contrast to teleconsultation systems, telemonitoring and teletreatment
systems mostly collect numerical data describing a patient's health state over a longer
time horizon (see Section 13.4), such as blood pressure, weight, pulse, etc. The
longer time horizon over which the data is collected calls for model
development methods based on time series analysis or for probabilistic methods,
i.e. reinforcement learning and the Monte-Carlo methods. These models are
particularly useful, for example, to predict dosages of drugs which have to be
taken by patients in order to maintain a specific health condition.
For instance, the reinforcement learning algorithms can be applied to
diabetics in order to predict personalized insulin dosages allowing the
maintenance of an appropriate glucose level (see the case study for the
TeleCARE system – Section 13.4.3). Another simulation type that can be
achieved using models developed from the data collected by telemonitoring and
teletreatment systems is simulation of future changes in a patient’s health state
with specific treatment conditions assumed. Such simulations are made based on
historical parameters describing the patient’s state, as well as on information
about their therapy and medical history. As a consequence, the simulation
outcome is personalized for a specific patient and thus constitutes a good basis
for instructing medical students and practitioners. By using such simulations the
students can observe how an example therapy would influence a particular
patient if it were applied.
It should be noted, however, that in the case of short-term predictions which
include only historical data from a limited time period, the standard regression
and supervised learning methods can have better accuracy of prediction than the
reinforcement learning methods. Moreover, training a model from scratch can be
accelerated by initially assigning the patient to a risk group and starting
from an average model calculated for that group. In such a case, however, some
screening methods would have to be applied in order to recognize the risk
groups [33].
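
The fragment below sketches this warm-start idea: a personal linear model is
initialized as the average of models already fitted for similar patients and is
then refined with the patient's own measurements. All coefficients and
measurements are hypothetical.

import numpy as np

# Coefficients [intercept, slope] of models fitted for similar patients.
group_models = np.array([[0.8, 1.1], [0.9, 1.0], [0.7, 1.2]])
theta = group_models.mean(axis=0)          # risk-group average model

# Refine the average model with the new patient's own data as it arrives,
# here with one gradient step per measurement (x, y).
lr = 0.05
for x, y in [(1.0, 1.9), (2.0, 2.8)]:      # hypothetical measurements
    error = (theta[0] + theta[1] * x) - y
    theta -= lr * error * np.array([1.0, x])
print("personalized model:", theta)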

13.4 TeleCARE – telemonitoring framework

13.4.1 Overview

The TeleCARE system is a telemonitoring framework that can be used for
monitoring patients' health parameters. In addition, it can provide feedback
based on automatic analysis of the gathered data which means that it can also be
classified as a teletreatment solution. The main goal of the TeleCARE
framework is to deliver personalized medical care to a large group of patients.
It is especially useful for continuing patient care in the home environment, in
particular for:
– patients who require analysis of health parameters after a period of
hospitalization (for instance, associated with prior surgery);
– elderly patients who require continuous monitoring of health parameters;
– people with chronic diseases who require continuous monitoring of a
therapeutic process.

Not only patients, but also medical personnel can benefit from using TeleCARE.
The framework delivers functionality that allows physicians to create detailed
personalized health plans. The plans include information such as schedules of
medical data monitoring and daily dosing of medications. They can be used in
short- as well as long-term treatment processes. In addition, physicians can
proactively adjust the treatment process based on the current data gathered
from the patient. The health plans complement medical patient profiles with the
ability to cooperate with different decision systems. As a result, the
framework facilitates delivery of personalized healthcare that incorporates the
knowledge and experience of many medical experts (see Fig. 13.2). Therefore, the framework allows for the
monitoring and supervision of individual therapeutic processes with respect to
the individual needs of patients. Notifications about changes in the treatment
process are sent directly to the patients, without the need for scheduling medical
office appointments. This is particularly important for patients with disabilities
or poor exercise tolerance who may have mobility problems. The framework
also allows for the identification of patients’ health conditions requiring
immediate notification of medical personnel. With the support of the system,
patients can have continuous contact with the medical specialist responsible for
the patient’s treatment and, as a result, both safety and patient comfort associated
with the therapeutic process are improved. The framework also delivers the
functionality required for conducting statistical, cross-cutting surveys related to
the progress of the medical treatment.
The system is currently at the stage of pilot studies carried out in
collaboration with medical practitioners from the John Paul II Hospital in
Kraków. It is mainly used for treating heart failure patients. Heart failure (HF) is
a chronic disease that decreases physical capacity, limiting the patient's ability to
visit the cardiologist's office. If not managed properly, HF may lead to
decompensation requiring urgent hospitalization or causing death. Monitoring of
the patient’s weight, heart rate and blood pressure as well as some invasive
parameters from the implantable sensors (for example, pulmonary pressure
sensor) allows therapy to be adjusted before decompensation occurs. The
TeleCARE system was tested as a tool for such remote care. Thanks to such
monitoring it was possible to up-titrate the drug doses to the maximum tolerated
and in several cases to modify the diuretic therapy and avoid decompensation.
The changes in the patient’s weight, modification of diuretic doses and the
resulting weight decrease could be analyzed and used for modeling the behavior
of the volume status. However, such an automatic medication advisory model
should be deployed with caution until the efficacy of this approach has been
proven in randomized trials. The system proved to be a good tool to follow up
the patients and optimize their treatment.
Fig. 13.3: TeleCARE modules.

As shown in Fig. 13.3, the system consists of three main parts (from left): (i)
data acquisition subsystem, (ii) data processing subsystem, and (iii) web
subsystem. The first is responsible for collecting data from patients' medical
devices and sending them for further processing. This subsystem is implemented
as a native application executed on patients' smartphones. The medical data can
be collected automatically (directly from devices, which requires
Bluetooth-capable medical equipment) or manually (the patient reads individual
measurements from a device and submits them using an input form in the mobile
application).
The second part of the framework is responsible for receiving measured data
from patients and later processing, analyzing and visualizing it in accordance
with the rules previously individually defined by physicians [46]. The third part
– the web subsystem – is implemented as a web interface for the users allowing
them to interact with the system through a web browser.
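
For illustration, a single parametric measurement passed from the acquisition
subsystem to the processing subsystem might resemble the record sketched below;
the field names are hypothetical and do not describe TeleCARE's actual message
format.

import json
import datetime

measurement = {
    "patient_id": "P-0042",                       # hypothetical identifier
    "parameter": "blood_pressure",
    "value": {"systolic": 128, "diastolic": 82},
    "unit": "mmHg",
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "source": "bluetooth_hdp",                    # automatic vs. manual entry
}
print(json.dumps(measurement, indent=2))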

13.4.2 Contribution to the model-based proactive medicine concept

The TeleCARE system can not only utilize the knowledge provided by
practitioners, but can also support the development of medical models through
medical data acquisition.

Support for medical data acquisition


Similar to existing data acquisition solutions, such as Microsoft HealthVault [47]
or Apple HealthKit [48], the TeleCARE data acquisition module can be
integrated with many different medical devices. However, new devices and new
medical data sources can be added to the TeleCARE framework easily, whereas in
the above-mentioned commercial solutions this may require a full certification
process for each new device. Based on current research [49], it is
presently possible to acquire a huge range of medical data from available body
sensor networks (for example ECG, EEG, SpO2, heart and respiration rates,
blood pressure, body temperature, glucose level or spatial location) as well as
built-in smartphone sensors (for example camera, GPS or accelerometers). In
addition, the framework supports devices compatible with the Bluetooth Health
Device Profile (HDP) [50]. During the pilot studies, data from a blood pressure
monitor, precision weight scales and a custom-made pillbox, as well as from the
smartphone's built-in GPS sensor, were gathered. All medical data describing a
patient’s health state were collected over a long time span (up to a few years)
and stored in a relational database.

Supporting disease management


The basic interaction between patients and physicians in the TeleCARE system
has been presented in the previous subsection. Computational methods and
modeling may extend the concept in terms of patient-physician interaction in
remote monitoring systems. Model-based modules installed on mobile devices
can improve the treatment process by shortening the time needed to receive
feedback in the system on the patient’s actual condition. Such a module may, for
example, proactively suggest the medicine dosages that patients should take,
with respect to the type of activity they have started.
The modeling modules on the server side of the system might be used in two
ways. The first usage is similar to the previous scenario; however, these
modules can be more complex because servers offer more computational resources
than mobile devices. Unfortunately, such usage requires constant Internet
access to the server side of the system. The second
usage assumes that the physician may predict the changes in the patient’s
condition using models and thus can proactively personalize the treatment
process to avoid a deterioration in the patient’s condition.

Supporting education
Apart from the patient-physician interaction, computational modeling might be
used for educational purposes. In this case, models may mimic the behavior of
the virtual patient reacting to different modifications of treatment plans.
Telemedicine solutions with built-in computational modeling methods can be
effectively used for prediction of disease progress or therapy effects. The
accuracy of prediction achievable in this case is high, thanks to the large
volumes of data amassed by telemedical systems. Results can be applied in
miscellaneous scenarios, e.g. therapeutic, diagnostic, research, educational and
others.

13.4.3 Case study

Diseases of affluence have vastly increased in prevalence in recent years. They
include obesity, cardiovascular diseases, arterial hypertension, type 2 diabetes,
osteoporosis and others. As mentioned in the previous section, the TeleCARE
system was initially designed for patients with cardiovascular diseases who are
being supervised by physicians. It can be extended to treat other diseases by
acquiring relevant health data, e.g. by recording the blood glucose concentration.
Enriching the TeleCARE system with some intelligent support system might
improve disease management.
Diabetes management concentrates on lowering blood glucose to as close to
normal as possible, without causing hypoglycemia. This can usually be
accomplished with diet, exercise, and the use of insulin and oral medications.
The main problem is setting proper insulin doses because they are influenced by,
among others, the nutritional content of meals, physical activity and the time
when these activities took place. The TeleCARE system can be used for diabetes
management by achieving the following objectives:
1. The system should record all the elements that might influence the required
insulin dose.
2. The system should remind the patient of measurements, physical activities
and other tests.
3. The system should allow for bidirectional communication between
physician and patient.
4. The system should be able to analyze the data about the glucose level,
insulin doses, diet and physical activity and, based on the recorded
information, propose modifications to the insulin dose.

The first and second objectives, related to recording measurements and
reminding patients of activities, are covered by the TeleCARE system thanks to
its open architecture and the possibility of adding new medical devices and
parameters. The third objective is related to the idea of telecare systems
presented in the previous sections. The last objective can be achieved by the
model-based supporting system.
Intensive insulinotherapy is an approach used mainly in young individuals
that mimics the natural production of insulin by the pancreas. The goal of this
treatment method is to keep the glucose level normal in spite of relatively
flexible physical activity and meals. It requires several insulin injections daily
(or insulin pump infusion) with many adjustments of the doses [51]. The
parameters of the models predicting the blood glucose level and the
proposed insulin doses could be provided by the physician remotely. Such a
method can extend the mobile part of the system [52]. In this case, a mobile
application suggests an insulin dose that the patient should take based on the
nutritional content of planned meals.
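
A deliberately simplified sketch of such a bolus suggestion is given below (cf.
the bolus-calculator concept in [51]). The parameters would in practice be set
by the physician, and a clinical calculator must additionally account for
insulin already on board; the numbers here are illustrative only.

def suggest_bolus(carbs_g, glucose_mgdl, target_mgdl=110,
                  carb_ratio=10.0, correction_factor=40.0):
    """carb_ratio: grams of carbohydrate covered by 1 U of insulin;
    correction_factor: expected glucose drop (mg/dL) per 1 U."""
    meal_bolus = carbs_g / carb_ratio
    correction = max(0.0, (glucose_mgdl - target_mgdl) / correction_factor)
    return round(meal_bolus + correction, 1)

# E.g. a 60 g meal at a glucose level of 180 mg/dL:
print(suggest_bolus(60, 180))   # 6.0 U for the meal + 1.75 U correction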
The data collected during treatment – insulin doses taken and the other
parameters such as meals, level of blood sugar, activities, blood pressure – can
be used for modeling patients’ responsiveness to various types of insulin and
their reaction to nutrition, e.g. machine learning methods might be used [4,
53–56]. The results might be used by the physician for personalizing the
treatment process. In the case of diabetes management, these methods might be
more accurate than the simple bolus calculation performed on the mobile phone,
but as the computations are performed on the server side, an Internet connection
is necessary to send results remotely to the mobile device.
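
As a sketch of such server-side modeling, the fragment below fits a support
vector regression model (in the spirit of [53]) to synthetic meal, insulin and
activity data; it assumes scikit-learn and is not a validated clinical model.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
# Hypothetical inputs: [carbohydrates (g), insulin dose (U), activity (min)];
# target: blood glucose two hours later (mg/dL), generated synthetically.
X = rng.uniform([0, 0, 0], [100, 10, 60], size=(300, 3))
y = 100 + 0.8 * X[:, 0] - 6.0 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 8, 300)

model = SVR(kernel="rbf", C=100.0).fit(X, y)
print("predicted glucose:", model.predict([[60, 6, 20]])[0])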

13.5 TeleDICOM – system for remote interactive consultations

13.5.1 Overview

TeleDICOM [57] is one of the teleconsultation systems for imaging medicine
which are currently used worldwide. Simple solutions offer medical file sending
capabilities [58, 59], while more sophisticated ones provide various communication
channels (audio, video, chat, telepointer) and image processing tools [60–63].
TeleDICOM is a complete system for remote consultation of medical data stored
in DICOM format as well as in other general purpose graphical formats. It offers
interactive teleconsultations with full synchronization of workspace and a set of
advanced measurement and image manipulation tools, based on professional
DICOM viewers. Its features make it a useful tool not only for remote
consultations of routine cases but also for difficult, complicated cases requiring
discussion in a multidisciplinary team of experts.
TeleDICOM has already been in use for eight years as a regular and
emergency hospital service in two separate teleconsultation networks (covering
ca. 40 hospitals located in southern Poland), currently reaching over 13 000
diagnosed cases. The TeleDICOM system is also regularly used during remote
meetings of partners in the European Project of Rare Cardiovascular Diseases
led by John Paul II Hospital in Krakow. This project aims to network leading
experts in European countries to discuss and propose solutions to the problems
of patients with very rare cardiovascular conditions.
A typical usage scenario in TeleDICOM is as follows (see Fig. 13.4). A
medical doctor who wants to consult an imaging examination with one or
more experts selects the appropriate data, which is usually stored in the medical
institution's PACS archive. The physician then optionally provides the data with
an appropriate description and initial annotations and invites the other experts to
participate in an interactive or noninteractive consultation session. During the
session the data are analyzed, and the specialist tools can be used where
necessary. This process typically results in a diagnosis or a request for another
iteration with additional data or in another team. The conclusions can be passed
on orally or using an appropriate report (including DICOM SR document [64])
developed by the session participants. Moreover, the session can be recorded and
all participants’ actions (annotations, measurements performed using specialized
TeleDICOM tools) can be documented for the session. The record of the
consultation session can be used not only for evidence but also as educational
material for other doctors or students.
Fig. 13.4: A collaboration view of a sample medical network using TeleDICOM (South Poland
region).

Currently, a second version of the system, TeleDICOM II [2], is being
developed, focusing on further improvements to doctors' cooperation and
providing a flexible and extensible architecture.

13.5.2 Contribution to the model-based proactive medicine concept

Medical imaging data is a very important source of information about the
patient's condition for doctors, but also a very challenging one in terms of
interpretation for both humans and computers. Advanced algorithms are
employed in CAD systems in order to help doctors in their analysis of medical
images. TeleDICOM can not only be enhanced with CAD-like functionality, but
can also improve the development of medical models through their validation and
by supporting knowledge extraction.

Supporting diagnostics
TeleDICOM has been developed as a tool for user-to-user interaction. It can be
utilized by users in various roles: medical doctors seeking expert advice, medical
experts offering such advice or discussing very complicated cases with other
experts, and medical students learning how to efficiently use the tools of
contemporary telemedicine. In general, expert knowledge in TeleDICOM is the
knowledge of the user (medical doctor) and thanks to the real-time collaboration
functionality the system provides efficient access to it. Nevertheless, it can also
be beneficial to utilize the knowledge gathered thanks to machine learning
processes – regarding diagnostics of some diseases or prediction of their
progress. Provided that its utilization for medical analysis is reasonable,
(semi)automatic diagnostics functionality can be implemented. Such an approach
has been tested for mammography [65] and computed tomography analysis [66].
TeleDICOM II employs Service Oriented Architecture (SOA) paradigms (i.e.
its functionality is built from many interoperating services), so extending its
functionality is a relatively simple task (compared to TeleDICOM I, which has a
rather monolithic architecture).
In TeleDICOM II terminology, scheduled consultation sessions are directed
to so-called consultation service instances which are currently implemented as a
standalone graphical application operated by a user. The consultation service can
however be implemented in a different way – by realizing the analysis
functionality independently or by being the proxy to a third-party system which
performs such an analysis. Another option is to add an appropriate software
module as an additional plug-in for the user application. Whichever way is
chosen, computer-aided diagnosis (CAD) functionality able to interactively
support detection of certain features in medical images can easily be
achieved and does not require any changes to the TeleDICOM II architecture.
Once the image analysis systems are mature enough, the diagnosis process could
be completely automated (performed entirely by a computer) by providing an
appropriate implementation of the consultation service. However, such a
scenario is still in the rather distant future – despite the fact that CAD systems
are present in many medical domains [67], they still cannot be safely employed
without human supervision.
Having access to a large database of imaging data, an appropriate
TeleDICOM II service could find similar cases to the currently analyzed one,
providing the doctors with additional information on how to diagnose and treat
the patient. Such techniques are called Content-based Image Retrieval (CBIR)
[68].
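
A toy illustration of the CBIR idea: retrieve the archived cases most similar
to the current one by nearest-neighbour search over image feature vectors
(synthetic here; real systems use carefully engineered or learned features
[68]).

import numpy as np

rng = np.random.default_rng(5)
database = rng.normal(size=(1000, 32))   # feature vectors of archived images
query = rng.normal(size=32)              # features of the image under review

# Euclidean distance to every archived case; report the three closest.
dists = np.linalg.norm(database - query, axis=1)
print("most similar archived cases:", np.argsort(dists)[:3])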
Supporting model development and validation
In the previous sections the position of medical models in proactive medicine
was discussed and techniques of knowledge extraction and processing leading to
model construction were presented. An important issue is the reliability of model
operation. In the area of imaging medicine a selection of representative data
showing the progress of the analyzed diseases is crucial. Equally important is the
ability to properly extract significant features from the data [69] and later verify
the correctness of the model [70]. Unsupervised learning methods discussed in
Section 13.3 can be applied in this area.
Model development and verification require specialists in the given domain.
Those specialists may be in distant locations, which makes arranging
appointments with them problematic. TeleDICOM, as a sophisticated collaboration
framework can be particularly useful in this scenario by providing: scheduling of
teleconsultations, which facilitate making virtual appointments; real-time
communication essential for discussion; fully synchronized interactive tools for
comparison of various imaging data.

Supporting knowledge extraction


As has already been mentioned, the results of a teleconsultation session in
TeleDICOM can take various forms. If annotations and measurements are
performed precisely, such a diagnosis can be extremely useful not only for the
diagnosed patient but also as a valuable source of knowledge and good practices
in imaging data analysis. The importance of a proper database has been pointed
out in papers regarding building CAD systems [71, 72]. A special consultation
session can even be organized which aims at selecting certain features in the
imaging data. Such research has been conducted with the utilization of
TeleDICOM with the goal of determining the shape of the heart’s left ventricle
on echocardiographic images using a large set of reference data which was next
used to develop an FPGA circuit [73].

13.6 Conclusions
As a new and complementary paradigm of healthcare, proactive medicine opens
up exciting system design opportunities. Properly harnessed communication and
computing technologies supported by modeling techniques pave the way to
broad utilization of rich medical data sets. Such an approach could have a great
impact on the transformation of medical services and on motivating lifestyle
changes that prevent disease. It also stimulates changes in medical practice
regarding care for those who are already sick. To take full advantage of these
new opportunities, researchers must carefully consider the methodological,
epistemological, and
technical requirements. It should also be noted that the described concepts open
new discussions on ethical and legal aspects of semiautomated, and more
importantly automated, medical care.

References
[1] Mukhopadhyay S and Postolache OA. Pervasive and Mobile Sensing and Computing for
Healthcare: Technological and Social Issues. Berlin: Springer Science & Business Media,
2012.
[2] Czekierda L, Masternak T and Zielinski K. Evolutionary approach to development of
collaborative teleconsultation system for imaging medicine. Information Technology in
Biomedicine, IEEE Transactions. 2012;16(4): 550–560.
[3] Brown DE. Introduction to data mining for medical informatics. Clin Lab Med. 2008
Mar;28(1):9–35, v. doi: 10.1016/j.cll.2007.10.008.
[4] Lehmann ED and Deutsch T. A physiological model of glucose-insulin interaction in type 1
diabetes mellitus. Journal of Biomedical Engineering. 1992;14(3):235–242.
[5] Edelman EJ, Guinney J, Chi JT, Febbo PG and Mukherjee S. Modeling cancer progression
via pathway dependencies. PLoS Comput Biol. 2008;4(2):e28. doi:
10.1371/journal.pcbi.0040028.
[6] Sun H, Avants BB, Frangi AF, Sukno F, Geel JC and Yushkevich PA. Cardiac medial
modeling and time-course heart wall thickness analysis. Med Image Comput Comput Assist
Interv. 2008; 11(Pt 2):766–73.
[7] Tziampazis E and Sambanis A. Tissue engineering of a bioartificial pancreas: modeling the
cell environment and device function. Biotechnol Prog. 1995 Mar–Apr;11(2):115–26.
[8] Science Buddies Staff. Modeling the human cardiovascular system: the factors that affect
blood flow rate. Science Buddies. 2014,18 Dec; Web. 2015,10 Jan.
http://www.sciencebuddies.org/science-fair-projects/project_ideas/HumBio_p032.shtml.
[9] IBM Research. What is cognitive computing? http://www.research.ibm.com/cognitive-
computing/index.shtml#fbid=BrUXYNtK6-r.
[10] Cheng B, Wee CY, Liu M, Zhang D and Shen D. Brain Disease Classification and
Progression Using Machine Learning Techniques. In: Suzuki K, ed. Computational
Intelligence in Biomedical Imaging, New York: Springer, 2014, pp. 3–32, doi: 10.1007/978–1-
4614–7245-2_1.
[11] McEvoy LK, Fennema-Notestine C, Roddey JC, Hagler DJ Jr, Holland D, Karow DS, Pung
CJ, Brewer JB and Dale AM. Alzheimer disease: quantitative structural neuroimaging for
detection and prediction of clinical and structural changes in mild cognitive impairment.
Radiology. 2009;251:195–205.
[12] Hudson FG, Amaral LSdB, Duarte SFP, et al. Predicting increased blood pressure using
machine learning. Journal of Obesity. 2014, Article ID 637635, 12 pages. doi:
10.1155/2014/637635.
[13] Zhang G. An improved hypertension prediction model based on RS and SVM in the Three
Gorges Area. Computer Science and Software Engineering, 2008 International Conference.
Vol. 1, 12–14 Dec. 2008, pp. 810–814. doi: 10.1109/CSSE.2008.664 URL:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4721873&isnumber=4721668.
[14] Lee KL, McNeer JF, Starmer CF, Harris PJ, Rosati RA. Clinical judgment and statistics.
Lessons from a simulated randomized trial in coronary artery disease. Circulation. 1980
Mar;61(3):508–15.
[15] Exarchos K, Exarchos T, Bourantas C, Papafaklis M, Naka K, Michalis L, Parodi O and
Fotiadis D. Prediction of coronary atherosclerosis progression using dynamic Bayesian
networks. Conf Proc IEEE Eng Med Biol Soc. 2013:3889–92.
[16] Van Haaren J, Davis J, Lappenschaar M and Hommersom A. Exploring Disease Interactions
Using Markov Networks, Expanding the Boundaries of Health Informatics Using Artificial
Intelligence: Papers from the AAAI 2013 Workshop.
[17] Farran B, Channanath AM, Behbehani K and Thanaraj TA. Predictive models to assess risk
of type 2 diabetes, hypertension and comorbidity: machine-learning algorithms and validation
using national health data from Kuwait—a cohort study, Health Informatics, BMJ Open. 2013;
3:e002457 doi: 10.1136/bmjopen-2012-002457.
[18] Paltiel AD, Scharfstein JA, Seage GR 3rd, Losina E, Goldie SJ, Weinstein MC, Craven DE
and Freedberg KA. A Monte Carlo simulation of advanced HIV disease: application to
prevention of CMV infection. Med Decis Making. 1998 Apr–Jun;18(2 Suppl), pp. 93–105.
[19] Wang X, Sontag D and Wang F. Unsupervised learning of disease progression models. In:
Proceedings of the 20th ACM SIGKDD international conference on knowledge discovery and
data mining (KDD ’14). ACM, New York, NY, USA, 2014., pp. 85–94. doi:
10.1145/2623330.2623754 http://doi.acm.org/10.1145/2623330.2623754.
[20] De Winter W, DeJongh J, Post T, Ploeger B, Urquhart R, Moules I, Eckland D and Danhof M.
A mechanism-based disease progression model for comparison of long-term effects of
pioglitazone, metformin and gliclazide on disease processes underlying type 2 diabetes
mellitus. Journal of Pharmacokinetics and Pharmacodynamics. 2006;33(3):313–343.
[21] Vadher B, Patterson DLH and Leaning M. Prediction of the international normalized ratio and
maintenance dose during the initiation of warfarin therapy. Br J Clin Pharmacol. 1999 Jul;
48(1):63–70.
[22] Kronik N, Kogan Y, Vainstein V and Agur Z. Improving alloreactive CTL immunotherapy for
malignant gliomas using a simulation model of their interactive dynamics. Cancer
Immunology, Immunotherapy. 2008 Mar;57(3):425–439.
[23] Hassani A and Naghibi MB. Reinforcement learning based control of tumor growth with
chemotherapy, System Science and Engineering (ICSSE), 2010 International Conference. 1–
3 July 2010, pp. 185–189. doi: 10.1109/ICSSE.2010.5551776, URL:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5551776&isnumber=5551700.
[24] Mani S, Chen Y, Li X, Arlinghaus L, Chakravarthy AB, Abramson V, Bhave SR, Levy MA, Xu
H and Yankeelov TE. Machine learning for predicting the response of breast cancer to
neoadjuvant chemotherapy. J Am Med Inform Assoc. 2013 Jul–Aug;20(4):688–95. doi:
10.1136/amiajnl-2012-001332. Epub 2013 Apr 24.
[25] Corrigan B (Pfizer Global Research). A Comprehensive Clinical Trial Simulation Tool for
Alzheimer’s Disease: Lessons for Model Collaboration, On behalf of the CAMD M&S
Workgroup, September 26, 2013, Washington DC.
[26] James G, Witten D, Hastie T and Tibshirani R. An Introduction to Statistical Learning. New
York: Springer, 2013.
[27] Cristianini N and Shawe-Taylor J. An Introduction to Support Vector Machines and other
Kernelbased Learning Methods. Cambridge: Cambridge University Press, 2000. ISBN 0-521-
78019-5
[28] Werbos PJ. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral
Sciences. PhD thesis, Harvard University, 1975.
[29] Sutton RS and Barto AG. Reinforcement Learning: An Introduction. Cambridge, MA: MIT
Press, 1998.
[30] Perna G. Research: Mobile App Can Help Detect Early Signs of Glaucoma [Internet]. 2015
[cited 8 Jan 2015]. Available from: http://www.healthcare-informatics.com/news-
item/research-mobile-app-can-help-detect-early-signs-glaucoma.
[31] Chen TC. Spectral domain optical coherence tomography in glaucoma: qualitative and
quantitative analysis of the optic nerve head and retinal nerve fiber layer (An AOS Thesis).
Transactions of the American Ophthalmological Society 2009;107:254–281.
[32] Ridley EL. Big data in radiology will drive personalized patient care. Online:
http://www.auntminnie.com/ (accessed 2015-01-08).
[33] Groves P, Kayyali B, Knott D and Van Kuiken S. The ‘big data’ revolution in healthcare.
Center for US Health System Reform Business Technology Office. January 2013.
[34] Pentland A, Reid TG and Heibeck T. Revolutionizing Medicine and Public Health, Raport of
the Big Data and Health Working Group 2013.
[35] Taylor RH, Funda J, Joskowicz L, Kalvin AD, Gomory SH, Gueziec AP and Brown LMG. An
overview of computer-integrated surgery at the IBM Thomas J. Watson Research Center, IBM
Journal of Research and Development. 1996 Mar;40(2):163–183. doi: 10.1147/rd.402.0163,
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?
tp=&arnumber=5389453&isnumber=5389444.
[36] Gantenbein RE. Watson, come here! The role of intelligent systems in health care. World
Automation Congress (WAC), 2014. 3–7 Aug. 2014, pp. 165–168. doi: 10.1109/
WAC.2014.6935748, URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?
tp=&arnumber=6935748&isnumber=6935633.
[37] Steadman I. IBM’s Watson is better at diagnosing cancer than human doctors, Technology,
11 February 13, Online: http://www.wired.co.uk/news/archive/2013-02/11/ibm-watson-
medical-doctor (accessed: 2015-01-09).
[38] Graham RNJ, Perriss RW and Scarsbrook AF. DICOM demystified: A review of digital file
formats and their use in radiological practice. Clinical Radiology. 2005 Jun;60:1133–1140.
[39] Doi K. Computer-aided diagnosis in medical imaging: historical review, current status and
future potential. Computerized Medical Imaging and Graphics. 2007;31(4–5):198–211.
[40] Bellman RE. Dynamic Programming. Newburyport: Courier Dover Publications, 2003.
[41] Radovanovic M, Nanopoulos A and Ivanovic M. Hubs in space: Popular nearest neighbors in
high-dimensional data. Journal of Machine Learning Research. 2010;11:2487–2531.
[42] Hnin WK. Data mining based fragmentation and prediction of medical data, Computer
Research and Development (ICCRD), 2011 3rd International Conference, vol. 2, 11–13 March
2011, pp. 480–485. doi: 10.1109/ICCRD.2011.5764179, URL:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5764179&isnumber=5764069.
[43] Paul R and Hoque ASML. Clustering medical data to predict the likelihood of diseases.
Digital Information Management (ICDIM), 2010 Fifth International Conference, 5–8 July 2010,
pp. 44–49. doi: 10.1109/ICDIM.2010.5664638 URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?
tp=&arnumber=5664638&isnumber=5662242.
[44] Bengio Y. Learning Deep Architectures for AI. Foundations and Trends in Machine
Learning. 2009;2(1):1–127.
[45] Freund Y and Schapire RE. A Short Introduction to Boosting. In: Dean T, ed. Proceedings of
the Sixteenth International Joint Conference on Artificial Intelligence. San Francisco: Morgan
Kaufmann Publishers; 1999, pp. 1401–1406.
[46] Skałkowski K and Zielinski K. Applying formalized rules for treatment procedures to data
delivered by personal medical devices. Journal of Biomedical Informatics. 2013;46(3):530–540.
[47] Microsoft HealthVault, https://www.healthvault.com/.
[48] Apple HealthKit, https://developer.apple.com/healthkit/.
[49] Chiarini G, Ray P, Akter S, Masella C and Ganz A. mHealth technologies for chronic
diseases and elders: A systematic review. IEEE Journal on Selected Areas in
Communications. 2013; 31(9):6–18.
[50] Bluetooth Health Device Profile,
https://developer.bluetooth.org/TechnologyOverview/Pages/HDP.aspx.
[51] Pańkowska E and Błazik M. Bolus calculator with nutrition database software, a new concept
of prandial insulin programming for pump users. Journal of Diabetes Science and Technology.
2010 May 1;4:571–576.
[52] Kardyś B. Mobile phones in the management of affluence diseases, AGH Master Thesis,
2013.
[53] Georga EI, Protopappas VC and Polyzos D. Prediction of glucose concentration in type 1
diabetic patients using support vector regression, Information Technology and Applications in
Biomedicine (ITAB), 2010 10th IEEE International Conference. Nov. 2010, pp. 1–4.
[54] Tarin C, Teufel E, Pico J, Bondia J and Pfleiderer HJ. Comprehensive pharmacokinetic
model of insulin glargine and other insulin formulations. Biomedical Engineering, IEEE
Transactions. 2005 Dec;52(12):1994–2005.
[55] Roy A and Parker RS. Dynamic modeling of exercise effects on plasma glucose and insulin
levels. Journal of Diabetes Science and Technology. 2007 May;1(3):338–347.
[56] Kowalski P. Machine learning for the management of affluence diseases, AGH Master
Thesis, 2013.
[57] Gackowski A et al. Development, implementation, and multicenter clinical validation of the
TeleDICOM—advanced, interactive teleconsultation system. Journal of Digital Imaging.
2011;24(3):541–551.
[58] Gennari JH et al. Asynchronous communication among clinical researchers: A study for
systems design. International Journal of Medical Informatics. 2005;74(10):797–807.
[59] Lasierra N et al. Lessons learned after a three-year store and forward teledermatology
experience using internet: Strengths and limitations. International Journal of Medical
Informatics. 2012;81(5):332–343.
[60] Lee JS et al. A real time collaboration system for teleradiology consultation. Int J MedInform.
2003;72(1–3):73–9.
[61] Chang T, Lee J and Wu S. The telemedicine and teleconsultation system application in
clinical medicine. Conf Proc IEEE Eng Med Biol Soc. 2004;5:3392–5.
[62] Okuyama F et al. Telemedicine imaging collaboration system with virtual common
information space. In: Computer and Information Technology, 2006. CIT’06. The Sixth IEEE
International Conference. IEEE. 2006.
[63] Hsu YC et al. Design and implementation of teleconsultation system for instant treatment. In:
Bioinformatics and Biomedical Engineering, 2008. ICBBE 2008. The 2nd International
Conference. IEEE. 2008; pp. 1355–1358.
[64] Noumeir R. Benefits of the DICOM structured report. Journal of Digital Imaging. 2006;19(4):
295–306.
[65] Sovio U, Aitken Z, Humphreys K, Czene K et al. Comparison of fully and semi-automated
area-based methods for measuring mammographic density and predicting breast cancer risk.
British Journal of Cancer. 2014 Apr;110:1908–1916.
[66] Folio LR, Sandouk A, Huang J, Solomon JM, Apolo AB. Consistency and efficiency of CT
analysis of metastatic disease: semiautomated lesion management application within a
PACS. AJR Am J Roentgenol. 2013 Sep;201(3):618–25.
[67] Suzuki K. Machine Learning in Computer-Aided Diagnosis: Medical Imaging Intelligence.
Hershey, PA: Medical Information Science Reference, 2012.
[68] Akgül CB et al. Content-based image retrieval in radiology: current status and future
directions. Journal of Digital Imaging. 2011;24(2):208–222.
[69] Doi K. Current status and future potential of computer-aided diagnosis in medical imaging.
The British Journal of Radiology. 2005;78(1):3–19.
[70] Faust O, Acharya UR and Tamura T. Formal design methods for reliable computer-aided
diagnosis: a review. Biomedical Engineering, IEEE Reviews. 2012;5:15–28.
[71] Yuan K et al. Brain CT image database building for computer-aided diagnosis using content-
based image retrieval. Information Processing & Management. 2011;47(2):176–185.
[72] Horikoshi H et al. Computer-aided diagnosis system for bone scintigrams from Japanese
patients: importance of training database. Annals of Nuclear Medicine. 2012;26(8):622–626.
[73] Świerczak ZB, Kasperek J and Rajda P. Skeletonization hardware implementation in FPGA
device. PAK. 2007;53(7):75–77.
14 Serious games in medicine
Video games are of growing interest to many health professionals. In this
chapter, we review the most interesting areas of research and scientific activity.
Serious games are still in an early stage of development: progress is noticeable,
but there is still a need for better cooperation between specialists from different
fields of healthcare and game designers and developers. In addition, we focus on
the role of graphic design tools applied to the development of serious games.
Finally, we present several serious games for health designed for children and
older adults.

Paweł Węgrzyn
14.1 Serious games for health – video games and health issues

14.1.1 Introduction

For current generations of children and youth, video games are a popular and
ubiquitous form of entertainment. Young people are exposed to video games
throughout their entire lives. A game platform may be a computer, a TV console,
a portable console, a tablet, a cell phone or any other smart device with a
video display and processor. For several years there has been a wide-ranging
discussion about the advantages and detrimental effects of playing video games.
The concept of serious games is a popular topic nowadays. A Google search
on “serious games” renders about 114 000 000 hits [2015-02-21]. This
oxymoron is usually associated with the popular book by Clark Abt [1]. Over
forty years ago, the author was curious about how interactive games, simulations
and other gamelike activities could facilitate education for various life roles in a
highly technological and complex society. Although Abt referred rather to games
in general, today serious games are mainly considered as computer games
designed for educational purposes.
Serious games are primarily educational games, but this term is usually
interpreted much more broadly. Serious games also include applications such as
computer games to support medical therapy and rehabilitation, to explore the
world, to promote leading a healthy lifestyle or to open possibilities for new
artistic creations. The most useful and common definition is that a serious game
is a game that is about something more than just fun. Thus, we can define
a serious game more precisely as “any form of interactive computer-based game
software for one or multiple players to be used on any platform and that has been
developed with the intention to be more than entertainment” [2]. In particular,
following this definition we should take into account all products developed
primarily for interactive entertainment that can also be used for interactive
education, interactive training or interactive simulation [3]. Actually, there are so
many different definitions of serious games in the scientific literature that there
is a good deal of confusion about what this notion involves.
An important area of serious game applications is using video game
technologies to improve healthcare [4]. Therefore, serious games for health
define the intersection of serious games and healthcare issues. As a precise
definition of health we may adopt that given by the World Health Organization:
“a state of complete physical, mental, and social well-being and not merely the
absence of disease or infirmity” [5].
Video games for health are increasingly incorporating various input sensors
like 3D cameras, 3D glasses, accelerometers, balance boards, gloves or medical
diagnostic devices. The part of the machine that handles the human–machine
interaction is called the user interface. A Natural User Interface (NUI) is a user
interface that is effectively invisible, so that a user operates the device through
intuitive actions related to natural, everyday human behavior. The user interface
is also responsible for communication between a patient and a healthcare
professional through a machine. User interface design may have considerable
influence on the efficacy of serious games for health.
Recently, serious games for health have been the subject of numerous
scientific research projects and conferences. Some studies suggest that serious
games may be useful in healthcare and health professional education and
training. However, there are few methodological studies on the evaluation and
assessment of this type of applications. This situation is expected to improve in
the near future.
In this review, we will survey scientific studies on video games in the context
of health issues. Therefore, our review includes papers on serious games for
health as well as papers on health problems related to the usage of video games.
First and foremost, we include papers that are review reports and meta-analyses.

14.1.2 Previous surveys


In this section, we mention previous review articles on serious games for health.
We assume that it is enough to take into consideration recent publications over
the past few years.
Let us start with the review by Kato [6], where older reviews are also
mentioned. The author surveys positive examples of using video games in
healthcare. The notion of serious games is used there, but with a narrower
definition in which serious games are understood as video games that have been
intentionally designed for training and education [7]. Various examples of video
games for health education and health training are then reviewed. Nevertheless,
examples are also given of commercial entertainment games used for certain
goals in healthcare, like health improvement or surgical training. The main
purpose of the review was to prove that playing video games may be considered
a positive and effective intervention in the area of healthcare.
Serious games in women’s healthcare are surveyed by de Wit-Zuurendonk
and Oei [8]. They adopt another definition of serious games from Stokes [9]:
games that are designed to entertain players as they educate, train, or change
behavior. The authors searched the scientific literature and selected 30 relevant
papers. The studies reported in the selected papers indicate that serious gaming
is a stimulating learning method and that students are enthusiastic about its use.
The authors also comment that there is a lack of studies proving the clinical
effectiveness of serious gaming.
Many papers examine the well-recognized potential of games for learning
(game-based learning). Learning is usually defined as the acquisition,
modification or reinforcement of knowledge, skills or attitudes (KSA
components).
Connolly et al. [10] review the literature on game-based learning. In
particular, the authors have identified 129 papers reporting empirical evidence
about the impacts and outcomes of computer games and serious games with
respect to learning and student engagement. They developed a multidimensional
framework for categorizing such games. Any game can be placed with respect to
five dimensions: digital or nondigital game, primary purpose of the game, game
genre, subject discipline and a platform for delivery of the game. They suggest
eight relevant subject disciplines: health, society, mathematics, language,
engineering, general knowledge, geography and science. Of the 129 empirical
papers included 21 are on the subject of health (12 of them classified by the
reviewers as higher quality papers). According to the conclusions of the review,
there is much research about game-based learning, various positive and negative
impacts and outcomes associated with playing digital games have been identified
and there is quite a body of empirical evidence concerning the effectiveness of
games-based learning. However, this evidence is not rigorous and well proven:
there is, e.g., a lack of randomized controlled trial (RCT) studies on the
effectiveness of games-based learning and a poor understanding of the nature of
engagement in games.
A survey of 108 serious games for health is given by Wattanasoontorn et
al. [11]. Their taxonomy of serious games for health is based on three
dimensions: serious game characteristics, player characteristics and health
problem characteristics. The serious game dimension is described by game
purpose (focus on entertainment, focus on health or focus on acquiring health
and medical skills) and by game functionalities (twelve technological
characteristics). A player can be categorized as patient/nonpatient or
professional/nonprofessional. The health dimension of serious games for patients
refers to the stage of the disease that is the subject of the game (susceptibility,
presymptomatic, clinical disease, recovery/disability) together with the relevant
purpose of the serious game (health monitoring, health detection, treatment or
therapy, rehabilitation, education). The health dimension of serious games for
nonpatients refers to three categories: health and wellness, training and
simulation for professionals and training and simulation for nonprofessionals.
Finally, the authors classify and briefly describe the 108 surveyed games using
fifteen characteristics (author, disease, purpose, application area, interactive tool,
interface, players, genre, adaptability, progress monitoring, feedback, portability,
engine, platform, connectivity).
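Such a multidimensional taxonomy lends itself naturally to a machine-readable encoding. The following minimal sketch (in Python, with class and field names of our own invention rather than anything taken from [11]) shows how the three dimensions could be captured so that a catalogue of games can be filtered and compared:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class GamePurpose(Enum):
    # The three purposes named in the taxonomy of [11]
    ENTERTAINMENT = "focus on entertainment"
    HEALTH = "focus on health"
    HEALTH_SKILLS = "focus on acquiring health and medical skills"

class DiseaseStage(Enum):
    # Health dimension for patient-oriented games [11]
    SUSCEPTIBILITY = "susceptibility"
    PRESYMPTOMATIC = "presymptomatic"
    CLINICAL_DISEASE = "clinical disease"
    RECOVERY_DISABILITY = "recovery/disability"

@dataclass
class SeriousGameForHealth:
    """One record in a catalogue of serious games for health."""
    title: str
    purpose: GamePurpose                    # serious game dimension
    player_is_patient: bool                 # player dimension
    player_is_professional: bool            # player dimension
    disease_stage: Optional[DiseaseStage] = None  # health dimension (patient games)

# Hypothetical example record, not one of the 108 surveyed games:
example = SeriousGameForHealth(
    title="BalanceQuest",
    purpose=GamePurpose.HEALTH,
    player_is_patient=True,
    player_is_professional=False,
    disease_stage=DiseaseStage.RECOVERY_DISABILITY,
)
```

Encoding the dimensions as enumerations keeps catalogue records consistent and makes queries (e.g. all patient-oriented games in the recovery stage) straightforward.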
Primack et al. [12] present the results of their investigation of the scientific
literature on using video games to improve health outcomes. The authors do not
use the concept of serious games; they canvass computer applications useful for
health outcomes and focus instead on whether a computer application is actually
a video game. They follow the American Heritage Dictionary, which defines a
video game as an electronic or computerized game played by manipulating
images on a video display or television screen. They assume that, in order to be
a game, an interactive or competitive application should have a system of
reward and should include fun elements. The authors also apply inclusion
criteria for selecting admissible
scientific studies. An eligible study must be an RCT (observational studies are
excluded), use a video game as the intervention and test its effect on a health-
promoting, clinically relevant health outcome. Of 1452 surveyed relevant
articles, only 38 (2.6%) met the inclusion criteria described above. The excluded
articles usually lack a health-promoting, clinically relevant health outcome
(46%), involve computer applications that are not video games (38%) or are not
RCTs (14%). Among the 38 included studies, the authors
identified 195 examined health outcomes. The global conclusion is that
purposeful video game-based interventions improved 69% of psychological
therapy outcomes, 59% of physical therapy outcomes, 50% of physical activity
outcomes, 46% of clinician skills outcomes, 42% of health education outcomes,
42% of pain distraction outcomes, and 37% of disease self-management
outcomes. This can be considered evidence of the positive impact of video
game-based interventions, but the studies are generally acknowledged to be of
poor quality and to have relatively brief follow-up periods. The total number
of study participants assessed in all 38 studies was only 2662.
Graafland, Schraagen and Schijven [13] also conducted a review of serious
games for health based on the examination of scientific literature. They define
serious games as digital games for computers, game consoles, smartphones or
other electronic devices, directed at or associated with improvement of
competence of professionals in medicine. This definition is the narrowest one
and it excludes games for patients or nonprofessionals. The authors have
identified 25 articles describing a total of 30 serious games. The games fall into
two categories: games developed for specific medical education purposes (17)
and commercial games useful for developing skills relevant to medical personnel
(13). Only six serious games were identified that had a process of validation
(three games for team training in critical care and triage and three commercial
games useful to train laparoscopic psychomotor skills). None of the serious
games had completed a full validation process for their intended purpose of use.
In the recent review by Horne-Moyer et al. [14], the authors collected known
electronic games that have been used for interventions for a variety of health
goals. They reviewed electronic games for health-related problems and designed
for therapeutic purposes, electronic games developed primarily for
psychotherapy and commercial entertainment games used for psychotherapy.
The general conclusion of the survey is that therapies with electronic games are
generally equivalent, but not superior, in efficacy to traditional treatments.
patients, therapies with electronic games may be more enjoyable or acceptable.
The lack of suitable RCT studies has again been acknowledged.
Ricciardi and De Paolis [15] review serious games for education and training
of healthcare professionals. The authors confirm that serious gaming is useful for
healthcare professionals, but serious games for health are generally not
widespread. There are only a few health-related fields where serious gaming can
be found (surgery, odontology, nursing, cardiology, first aid, dietetics and
diabetes care, psychology). The main advantage of serious games is that they are
cheaper than traditional training methods.
When using serious games for health outcomes, we must properly assess
whether an individual game is safe and effective. The
issue of assessment of serious games for health is discussed by Graafland,
Dankbaar et al. [16]. They define serious games as digital applications
instigating a specific behavioral change in their users, in the form of skills,
knowledge, or attitudes useful to reality [4]. This definition excludes in
particular games for health with only informational purposes. The assessment
framework provides 62 items in five main themes: game description (metadata,
development, sponsoring/advertising, potential conflicts of interest), rationale
(purpose, medical device, user group, setting), functionality (purposes/didactic
features, content management, potentially undesirable effects), validity (design
process, user testing, stability, validity/effectiveness), data protection (data
processing, data security, privacy). This assessment framework may be useful for
developers of serious games for health. However, it has some drawbacks. For
instance, it does not include visual perception mechanisms, user experience,
immersion and flow. Perhaps more importantly, this assessment framework does
not touch on pure game elements (e.g. fun, entertainment, challenge). These
factors are not negligible: their influence on the effectiveness of game-based
interventions is well recognized and reasonably well understood. The
entertainment factor may improve or degrade user performance.
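For orientation, the skeleton of this framework can be written down as a simple checklist structure. The sketch below (Python, mirroring only the theme and item groups listed above, not the full 62-item instrument of [16]) illustrates how a developer might organize an assessment:

```python
# Skeleton of the assessment framework of Graafland, Dankbaar et al. [16]:
# five main themes with their item groups; the published instrument refines
# these into 62 individual items (not reproduced here).
ASSESSMENT_THEMES = {
    "game description": ["metadata", "development", "sponsoring/advertising",
                         "potential conflicts of interest"],
    "rationale": ["purpose", "medical device", "user group", "setting"],
    "functionality": ["purposes/didactic features", "content management",
                      "potentially undesirable effects"],
    "validity": ["design process", "user testing", "stability",
                 "validity/effectiveness"],
    "data protection": ["data processing", "data security", "privacy"],
}

def unassessed_items(answers: dict) -> list:
    """Return the (theme, item) pairs not yet covered by `answers`,
    a mapping from (theme, item) to a free-text assessment."""
    return [(theme, item)
            for theme, items in ASSESSMENT_THEMES.items()
            for item in items
            if (theme, item) not in answers]

print(len(unassessed_items({})))  # 18 item groups in this skeleton
```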

14.1.3 Evidence review

We are going to review the literature on video games with respect to health
issues. Most of the selected topics can be recognized as serious gaming for
health, but our review is not restricted to any specific definition of serious
games.

14.1.3.1 Health education for nonprofessionals


Many video games designed for educating nonprofessionals about health topics
have been developed. These games are not intended to support any healthcare
interventions. Their only purpose is to inform. Most of them are simple casual
games one can play online without payment. However, there are also games that
can be used in academic education, for instance an educational game for
learning human immunology [17]. A review of scientific literature on using
video games for health education (and also for physical education) is given in
[18]. This review presents the literature on various projects, on empirical
evidence on the educational effectiveness of video games and on future research
perspectives concerning the subject. The main conclusion of the review is that
video games as educational tools may effectively improve KSA components of
young students.

14.1.3.2 Physical well-being


Video games can support some actions taken to improve physical well-being.
Taking into account the evidence, we select the most relevant areas of physical
health issues: therapy, pain distraction, rehabilitation, disease self-management
and life support.
14.1.3.2.1 Therapy

Video gaming (or serious video gaming) may support medical treatment and
therapy.
The video game Re-Mission was developed to actively involve adolescents
and young people with cancer in their own treatment [19]. Playing the serious
game was a psychoeducational intervention during the therapy process to
stimulate patients’ interest in understanding illness issues. The objective of the
game was to change players’ illness representations in order to promote
adherence to self-care during treatment and to teach self-care skills and related
cancer knowledge. A randomized trial was conducted at 34 medical centers in
the United States, Canada and Australia [20]. It showed that video game
intervention significantly improved treatment adherence and indicators of
cancer-related self-efficacy and knowledge in adolescents and young adults who
were undergoing cancer therapy. A psychotherapeutic game for children with
brain tumor is presented in [21]. The results showed a significant improvement
regarding the behaviors of young patients.
The treatment of amblyopia (visual impairment without apparent organic
pathology) can be supported by a complex video game that trains contrast
sensitivity [22]. The authors tested the game on 20 amblyopic subjects (10
children and 10 adults), and contrast thresholds improved more in adults than in
children.
Improvement for patients with multiple sclerosis was observed after training
with a video game balance board on Nintendo Wii [23]. The clinical tests
indicated that the microstructure of the superior cerebellar peduncles was modified.
The suggestion is that high-intensity serious game playing could induce
favorable microstructural changes in the brains of patients with multiple
sclerosis.
14.1.3.2.2 Pain management

Video games can also be used as a technique for pain management.
A video game-based system was used to distract adolescent patients from
high levels of pain during wound care [24]. Immersion in a virtual world helps
draw attention away from the real world, so patients can better tolerate painful
procedures. Serious gaming for pain and discomfort distraction at the dentist’s is
presented in [25].
14.1.3.2.3 Prevention, rehabilitation and disease self-management

Effective rehabilitation must be early, intensive and repetitive. Effective
prevention or proper self-management must be regular. The real challenge is to
maintain patient motivation and engagement. Using video games can be
effective in building motivation, adhering to training or treatment regimens,
leading exercises for motor recovery, and managing the impact of disability or
disease on functioning, emotions and interpersonal relationships. The games
often incorporate novel input sensors and natural user interfaces.
A survey and a classification of serious games for rehabilitation are given in
[26]. The authors put forward their own definition of serious games: games that
engage a user and contribute to the achievement of a defined purpose other than
pure entertainment (whether or not the user is consciously aware of it). Different
rehabilitation systems based on serious games have been identified and
discussed. Serious games in prevention and rehabilitation are examined in [27].
Serious games for enhancement of upper limb stroke rehabilitation are
discussed in [28]. Several examples of games are discussed and some general
game design principles are postulated. They state that there are positive results
associated with using such games (playability, usability) in both healthy users
and users with varying degrees of impairment caused by stroke. A single-blinded
clinical trial of serious games with two parallel groups of stroke patients is
presented in [29]: serious game-based therapy proved feasible and safe, and its
efficacy compared favorably with standard recreational therapy. A
virtual reality system for stroke rehabilitation is presented in [30]. A significant
improvement in dynamic balance in chronic stroke patients with this system has
been demonstrated using RCT. Another virtual reality rehabilitation system
(Gesture Therapy) is presented in [31]. The authors postulate some design
principles for rehabilitation games: promote repetition, task-oriented training,
appropriate feedback, motivating environment.
Four small serious games with input sensors for physical (neuromuscular)
rehabilitation are described in [32]. To meet different requirements for various
therapies, the authors put forward a specialized configurable architecture that enables
therapists to define game controls depending on individual patient needs. A
system based on a serious game for balance rehabilitation of adults with cerebral
palsy is presented in [33]. A 24-week physiotherapy intervention program was
conducted with nine adults with cerebral palsy and the results were promising.
A review of various Kinect applications in elderly care (fall detection and fall
risk reduction) and stroke rehabilitation is given in [34]. The current state of
Kinect-based rehabilitation methods and their limitations are discussed. The
main conclusion is that Kinect already shows notable potential in making
therapy and alert systems financially accessible and medically beneficial to a
large population of elderly and stroke patients. Feasibility, safety and outcomes
of serious gaming with Kinect for patients with Parkinson’s disease are
examined in [35].
The use of Wii-based video games in patients with cystic fibrosis is presented in
[36]. Exercise training increases exercise capacity, decreases dyspnea and
improves health-related quality of life. The video game-based training is well
tolerated by patients, but its effectiveness remains to be examined in future
studies.
Management of chronic pediatric diseases with interactive health games is
discussed in [37]. An interactive web application for education about asthma is
described in [38]. Serious games for children with diabetes are presented in [39]
and [40]. A serious video game to prevent type 2 diabetes and obesity among
youth is presented in [41].
14.1.3.2.4 Life support

Multiplayer video games and augmented and virtual reality systems can be very
useful for creating new means of teaching and training the treatments and
life-support techniques performed in emergency situations.
A massively multiplayer online video game (MMOG) has been created for
repeated team training of cardiopulmonary resuscitation procedures [42]. There
are four scenarios for virtual world team training with avatars. Self-efficacy,
concentration and mental strain are measured. The design, implementation, and
evaluation of a multiplayer video game for advanced cardiac life support (ACLS)
training are described in [43]. The efficacy and performance outcomes are
compared with traditional ACLS training. The main conclusion is that virtual
training can provide a learning experience similar to face-to-face training.
Serious games directed at children and young adults that serve as a tool to create
awareness of cardiac arrest and cardiopulmonary resuscitation are described in
[44].

14.1.3.3 Mental well-being


The challenge for modern psychotherapy is to use novel information and
communication technologies (such as web-based technologies, mobile
technologies, social media, virtual reality, virtual humans and video games) to
address behavioral and mental health outcomes [45].
14.1.3.3.1 Perceptual and cognitive training

Humans have the ability to learn, to acquire KSA components and change
behavior as a result of experience. Skill learning can be related to improvement
of perceptual, cognitive or motor performance. A review of ideas and papers
about the effects of playing video games on perceptual or cognitive skills is
given in [46]. A review of various studies on whether video game players
outperform nongamers on measures of perception and cognition is given in
[47]. The authors of the review conclude that the effects of gaming on
perception and cognition seem to be confirmed. However, past studies have
many methodological shortcomings; to develop validated methods of game
intervention, future studies should follow clinical trial best practices and rule
out alternative explanations for gaming effects.
Playing action video games can modify visual skills, such as visual selective
attention [48]. Green and Bavelier performed four experiments with visual
performance tests for habitual video game players and nonplayers. The results
show that video game players perform measurably better on several aspects of
visual attention. Moreover, the authors show that game players have altered
spatial resolution of vision in tasks in which the location and time of arrival of
the stimulus are unknown/surprising to players as well as known/foreseen [49].
Video game playing improves the ability to
inhibit attention from returning to previously attended locations and the
efficiency of visual search in easy and more demanding search environments
[50]. The enhancement of visual skills has been confirmed and extended to a
wider range of cognitive abilities, including attention, memory and executive
control in [51]. Li et al. [52] claim an improvement of the contrast sensitivity
function through action video game training. The contrast sensitivity function
(CSF) measures the ability to detect objects of different sizes at low contrast.
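For readers less familiar with the CSF, it is commonly summarized by analytic approximations. One classical example, given here only for orientation and not taken from [52], is the Mannos–Sakrison model, which expresses relative contrast sensitivity $A$ as a function of spatial frequency $f$ in cycles per degree:

$$A(f) = 2.6\,\bigl(0.0192 + 0.114\,f\bigr)\,\exp\!\bigl[-(0.114\,f)^{1.1}\bigr]$$

Under this model, sensitivity peaks near 8 cycles per degree and falls off at both lower and higher frequencies; an improvement such as that reported in [52] corresponds to an upward shift of this curve.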
Video gaming may provide an optimal regimen to increase self-control and
choice skills [53]. The speed of processing may be increased and perceptual
reaction times may be reduced. The act of playing action video games
significantly reduces reaction times without sacrificing accuracy [54]. The effect
of video game playing on the efficiency of attention allocation is examined in
[55]. Action video game play to train navigation and spatial cognition skills in
blind adolescents is presented in [56].
Multimedia learning is a form of learning supported by different sources of
information. Multimedia learning theory was elaborated by Richard Mayer [57].
Optimal learning occurs when visual and verbal materials are presented together
simultaneously [58]. A review of the literature about serious games for
multimedia learning is presented in [59]. Design principles for educational
serious games are discussed in [7]. The personalization principle, one of the
most important design principles of multimedia learning, is discussed in [60].
Serious games are supposed to improve learning as they instigate engagement
connected with a positive affective state and a high flow state. The study of the
relationship between learning and engagement in serious games is reported in
[61]. Designing serious games for elementary science education is discussed in
[62]. The article presents the design of the educational game “Crystal Island”
and promising results of the evaluation trial for classroom-based science
education. Boyle et al. [63] provide a narrative literature review of games,
animations and simulations to teach research methods and statistics. The final
conclusion of the review is that the evidence indicates the great potential of
game-based learning, but there are currently few rigorous evaluations (only 26
papers of 4040 met the inclusion criteria defined by the reviewers).
Transmedia learning is a form of learning based on transmedia storytelling.
Henry Jenkins defined a transmedia story as one that unfolds across multiple
media platforms, with each new text making a distinctive and valuable
contribution to the whole [64]. Transmedia learning is the sustained experience that results in
measurable behavior change. The behavior can be physical and overt,
intellectual, attitudinal, or a combination of all [65]. In transmedia learning,
students need to actively seek out content across multiple media platforms.
An artificial neural network model for simulating student cognition is
described by Lamb et al. [66]. The authors simulated a cognitive training
intervention using a randomized controlled trial design with 100 000 simulated students. Results
suggest that it is possible to increase levels of student success using a targeted
cognitive attribute approach and that computational modeling provides a means
to test educational theory for future education research.
It is crucial for serious game designers and developers to know how to apply
cognitive neuroscientific principles to validate proposed games. Such a cognitive
evaluation method is proposed in [67].
14.1.3.3.2 Mental aging

Training on complex video games may be very useful for the mental well-being
of older adults. The decline of various cognitive abilities may be traced from
feedback provided by such games. Traditionally, experimental tests of mental
health are based on studying repetitive performance of individual perceptual and
cognitive tasks. Serious games for use with older adults are usually very
different from serious games for young persons (see for instance [68]).
The question whether video game-based training may attenuate the decline
of cognitive abilities in older adults is discussed in [69]. Some older adults were
trained in a real-time strategy video game for 23.5 hours. Cognitive functions
such as task switching, working memory, visual short-term memory, and mental
rotation improved significantly. A video game-based training improving five
cognitive functions (visuospatial memory, attention, memory for prose, working
memory and prospective memory) is discussed in [70]. The training includes
several serious games [71].
14.1.3.3.3 Mental disorders

The video game “PlayMancer” as a complementary therapy tool in mental
disorders is described and evaluated in [72]. The main results of clinical tests
are: the acceptance of the game by mental disorder patients, possibilities to work
out problems that are difficult to treat (such as impulsiveness, emotional
regulation, frustration), possibilities to apply techniques that tend to be difficult
to apply in mental disorder patients (such as controlled intensive exposure,
immediate positive and negative reinforcing, complex biofeedback approach,
real-time monitoring of physiological-emotional reactions).
Serious games may increase participation of young adults with physical and
intellectual disabilities in leisure activities [73]. The games with their associated
physical and cognitive environment provide an opportunity for physical exercise
in an enjoyable and motivating manner for people with intellectual disabilities. A
serious game addressing money management skills for people with intellectual
disability is described in [74]. The results of a qualitative evaluation were also
satisfactory and very promising.
Serious games seem to be a promising technology for autistic children. A
review of recent applications and publications is given in [75]. A video game
with a multisensory environment called JeStiMulE for autistic children is
presented in [76]. A game for teaching emotion identification skills to
preschoolers is described in [77]. A serious game called ECHOES for fostering
social communication in children with autism is presented in [78].
A serious game for treating ADHD is presented in [79]. Future serious games
for ADHD, and perhaps for other mental disorders, will probably make use of
brain-computer interfaces [80]. A game-based neuropsychological
intervention on patients with alcohol dependence is reported in [81].
Treating cockroach phobia using a serious game on a mobile phone and
augmented reality exposure is studied in [82]. An approach to the treatment of
schizophrenia with computerized cognitive training and video game playing is
summarized in [83].
Serious games may also be very useful for people with Alzheimer’s disease,
which is one of the most important challenges for healthcare systems with an
aging population [84]. The design of a serious game with haptic technology for
diagnosis and management of Parkinson’s disease is reported in [85]. There is
some evidence for striatal dopamine release while playing video games [86].

14.1.3.4 Social well-being


Online video gaming actually appears to play a role in the socialization of game
players, particularly for those who play massively multiplayer online role-
playing games (MMORPGs). Cole and Griffiths [87] explore the social
interactions that occur both within and outside of MMORPGs. Their study
demonstrates that players desire and appreciate the
social interactions in online gaming. Moreover, virtual gaming allows players to
express themselves in ways they may not feel comfortable doing in real life
because of their appearance, gender, sexuality, and age. Game players are not
antisocial loners: gaming does not eliminate social interaction but supplements
it [88]. The impact of video gaming on relationship quality with parents and
friends is examined in [89]. It is suggested that family therapists must learn
about video games, their contextual impacts, addictive aspects and possible uses
in the therapeutic setting, while rejecting the myth that all gaming is harmful [90].
Some authors have defined prosocial games in which game characters help
and support each other in nonviolent ways [91]. Video gaming of that type is
capable of increasing both short-term and long-term prosocial behaviors.
As far as psychosocial adjustment is concerned, playing video games for up
to an hour a day can be beneficial for children aged 10 to 15 years [92]. When
children play for more than three hours a day, they are less well adjusted.
Nevertheless, the impact of video games on children is rather small when
compared with other factors.
Video games can increase the realism of role plays used to help college
women resist sexual attacks [93]. This may be a novel approach to assessment
and intervention in the area of protecting women from sexual assault and
coercion.
Simulations of child sexual abuse interviews with computer-generated child
avatars could improve interview quality [94].

14.1.3.5 Healthcare professionals


Serious games and virtual/augmented reality systems are considered useful
tools for education and expert training of healthcare professionals. Some surveys
about games and scientific literature can be found in [95] and [13].
There are serious games for various health professionals, like physicians,
dentists, nurses, surgeons, dietitians, therapists, psychologists, social workers,
audiologists, speech pathologists, optometrists, or emergency medical
technicians.
14.1.3.5.1 Medical education

A serious game that is an interactive web-based application for breast imaging in
radiology for medical students is described and evaluated in [96]. The benefits of
learning with games include interactivity, novelty, flexible scheduling,
instructors being relieved of the need to deliver repetitious lectures, and greater
consistency in quality. The disadvantages include the lack of human interaction
and mentoring, material presented in a format that is less pleasant to read than in
a textbook, and the possibility of a student being left with unanswered questions.
Serious games can be an additional teaching method, not a replacement for other
methods.
A web-based game to teach pediatric content to medical students is presented
in [97]. The authors point to an enjoyable and motivating method of learning,
enhanced by group interactions, competition, and fun, as the advantages of
game-based education.
14.1.3.5.2 Medical modeling and simulation

In computer graphics, 3D modeling is the process of developing a mathematical
representation of an object (a 3D model), that is, a collection of numerical data,
mathematical rules and algorithms describing the visual appearance of the object
(shapes and colors). Then, the 3D model of the object can be equipped with
some key characteristics and algorithms for simulating its functions and
behaviors (physics and artificial intelligence algorithms). Medical modeling,
defined accordingly as the process of developing 3D models of objects in the
medical domain, is a compelling business. The challenge is to incorporate data
from medical imaging tools used for diagnosis into virtual 3D models.
Medical simulation can be defined as the imitation of the operation of a real-
world system over time, for the purpose of medical training, education, or
assessment. If a simulation deals with patients and real-life clinical scenarios,
we speak of virtual patients. Obviously, simulations and virtual patients need 3D
models to be developed in the first place.
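To make the definition above concrete, the following minimal sketch (our own simplified construction, not drawn from any cited system) represents a 3D model as triangle-mesh data plus a rule that simulates behavior over time:

```python
import math
from dataclasses import dataclass

@dataclass
class TriangleMesh3D:
    """Numerical data for shape (vertices, faces) and appearance (color)."""
    vertices: list          # list of (x, y, z) coordinate tuples
    faces: list             # list of (i, j, k) index triples into `vertices`
    color: tuple = (1.0, 0.8, 0.7)

@dataclass
class SimulatedOrgan:
    """A 3D model equipped with a rule simulating its function over time."""
    base_mesh: TriangleMesh3D
    beat_hz: float = 1.2    # illustrative physiological parameter

    def shape_at(self, t: float) -> list:
        # Crude periodic "beating": scale the base geometry over time.
        s = 1.0 + 0.05 * math.sin(2 * math.pi * self.beat_hz * t)
        return [(s * x, s * y, s * z) for (x, y, z) in self.base_mesh.vertices]

# A single tetrahedron standing in for geometry derived from medical imaging:
mesh = TriangleMesh3D(
    vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
    faces=[(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)],
)
organ = SimulatedOrgan(mesh)
print(organ.shape_at(t=0.25)[1])  # position of vertex 1 a quarter-second in
```

In a real system the vertex data would come from segmented imaging volumes and the behavioral rule from physics or physiological models, but the separation of geometry from simulation rules is the same.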
The literature on medical modeling and simulation is not extensive.
Moreover, some medical simulations cannot be regarded as (serious) games. A
review of the literature is given in [95]. There are many papers on individual
simulations, for instance a simulation of acute coronary syndrome [98] or a
simulation of post-partum hemorrhage [99]. Actually, most simulations are
not games, but the relevant technologies and models used there may be
applicable for future game development. A game-based learning environment for
virtual patients in Second Life is presented in [100].
Recently, a new kind of simulation game, called biotic games [101], has been
engineered. Biotic games operate on real biological processes; for instance,
real paramecia are guided by players using electric fields. The use of living
things in games has posed some ethical questions.
14.1.3.5.3 Medical professional training

Surgical training in the clinical setting is constrained by the complexity of
procedures and by legal and ethical concerns, and it is also time-consuming and
expensive.
Therefore, many programs have been launched to teach trainees surgical skills
outside the operating room using simulators and video games [102–105]. A
literature review on using video games for training surgical abilities is given in
[106]. Video game-based training has been studied in relationship to
laparoscopic, gastrointestinal endoscopic, endovascular, and robotic surgery. A
more recent review is to be found in [107]. The overall conclusion is that video
gaming seems useful in helping surgeons acquire or retain certain skills, but
there is no standardized method to assess video game experience.
An educational game that teaches medical students how to carry out a
neurological examination is described in [108]. In fact, it is a multimodal
training with a natural user interface (Kinect). The users are tested with respect
to both theoretical expertise and motor performance.
14.1.3.5.4 Medical decision-making

Serious games can be used for training in complex medical decision-making.
Some randomized controlled trials with games show that students are able to
improve their decision-making abilities, professional expertise and cost
consciousness of medical care.
We can mention serious games for clinical decision-making in surgery [109],
surgeons’ response to equipment failure in the laparoscopic environment [110],
physician decision-making in the context of trauma triage [111] and geriatric
medical decision-making that weighs patient preferences against the
appropriateness and costs of medical care [112].

14.1.3.6 Healthy lifestyle


Video games and stories are useful in promoting health-related behavior change.
It is still unclear how characteristics of the games are related to outcomes, but
immersive, attention-maintaining properties of stories and fantasy, the engaging
properties of interactivity, tailored messages and goal setting are the principal
arguments advanced for game-based technologies [113]. A meta-analysis of
serious digital games for healthy lifestyle promotion is given in [114]. The
authors analyze 54 serious digital game studies for healthy lifestyle promotion.
They conclude that serious games have positive effects on healthy lifestyles and
their determinants (especially for knowledge) and on clinical outcomes.
14.1.3.6.1 Persuasive games

Persuasive technologies are designed to change users’ attitudes or behaviors
through persuasion. Ian Bogost claims that video games open a new
domain for persuasion [115]. Persuasive games for health are designed to alter
human behavior or attitudes using various persuasive technology strategies. A
review of the subject can be found in [116]. Recently, quite a lot of persuasive
games for health have been developed, targeted at modifying one or more
aspects of users’ behaviors and promoting healthy behavior change.
Nutrition is one of the most popular targets of persuasive games for health.
Obesity and overweight are serious problems nowadays. A serious game called
Squire’s Quest! has been designed to enable children to increase fruit, juice and
vegetable consumption [113]. Use of the game proved quite successful. A
platform of serious games for nutritional education is presented in [118].
Tailored health communication (tailored messages, tailoring) is any
combination of information and behavior change strategies intended to reach one
specific person based on information unique to that person, related to the
outcome of interest, and derived from an individual assessment [119]. A review
of computer-tailored health interventions delivered over the web is given in
[120]. The authors have screened 503 studies and selected 30 eligible ones. They
claim that message tailoring is achieved through a combination of three main
mechanisms: feedback (individual recommendations based on an
expert assessment of the individual’s needs or characteristics related to the
targeted behaviors), personalization (inclusion of specific and personally
identifiable information within the content gathered during the assessment
phase) and adaptation (creating content packages that are pertinent to an
individual and selected based on known determinants of the targeted behavior).
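These three mechanisms can be pictured as successive transformations applied to a single message. The sketch below is our own illustration of that pipeline, with invented profile fields, and is not an implementation from [120]:

```python
def tailor_message(profile: dict) -> str:
    """Compose a tailored physical-activity message from an individual
    assessment; the profile fields here are invented for illustration."""
    minutes = profile["minutes_active_per_day"]

    # Adaptation: select a content package pertinent to this individual,
    # based on a known determinant of the targeted behavior.
    if minutes < 30:
        package = "Short walks after meals are an easy way to add activity."
    else:
        package = "You are already close to the recommended activity level."

    # Feedback: an individual recommendation based on an expert assessment
    # of this person's needs related to the targeted behavior.
    feedback = (f"you reported {minutes} active minutes per day; "
                f"guidelines suggest at least 30.")

    # Personalization: weave personally identifiable information gathered
    # during the assessment phase into the content itself.
    return f"{profile['name']}, {feedback} {package}"

print(tailor_message({"name": "Anna", "minutes_active_per_day": 20}))
```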
14.1.3.6.2 Exergames

Exergames, also called fitness games, are video games that combine play and
exercise [121]. The effectiveness of exergaming is examined in [122–124]. The
evidence from different trials is rather mixed. Personalized exergames [124]
combine methods and concepts of exergames and tailored health
communication.

14.1.3.7 Video game-related health problems


There is a long scientific debate about the detrimental effects of video gaming
that has focused mainly on the adolescent population of gamers. Video gaming is
suspected of causing addiction and dependency, lower academic achievements,
and psychosocial and physical impairments. Recent studies show that most of
these suspicions are rather groundless.
An instrument for measuring problem video game playing in adolescents is
suggested in [125]. The addictive potential of gaming, as well as the relationship
between excessive gaming and aggressive attitudes, is examined in [126].
Excessive video game playing and its negative consequences
on players’ cognitive characteristics are examined in [127]. Associations
between playing video games and substance use problems are discussed in [128].
Spending time playing video games does not in itself entail negative consequences, but
adolescents who experience problems related to video games are likely to also
experience problems in other facets of life [129].
The effects of playing violent video games are indicated in [130], and meta-
analytic procedures have been used to test the effects of violent video games on
aggressive behavior, aggressive cognition, aggressive affect, physiological
arousal, empathy/desensitization, and prosocial behavior [131]. A meta-analytic
review of positive and negative effects of violent video games is given in [132].
Results do not support the conclusion that violent video game playing leads to
aggressive behavior. However, violent video game playing is associated with
higher visuospatial cognition. It may be advisable to reframe the violent video
game debate in reference to potential costs and benefits of this medium.
There is a correlation between video game usage and academic performance
markers, but it is not clear that a causal relation exists [133].
Effects of playing video games on sleep quality are investigated in [134]. No
significant effects of computer games are found on slow-wave sleep. Playing an
exciting computer game affects sleep latency and REM sleep but a bright display
does not affect sleep variables.
Relationships between video gaming, obesity, academic performance and
psychosocial functioning of adolescents are studied in [135]. The authors
investigated 219 college-aged males. Current game players reported a weekly average of 9.73
hours of game play, with almost 10% of current players reporting an average of
35 hours of play per week. The frequency of play is not significantly related to
body mass index or grade point average. The results suggest that gaming may
provide a healthy source of socialization, relaxation, and coping.
The stereotype of the typical online player as a socially withdrawn young
male with limited sex-role identity is refuted by the report [136]. Associations
between violent video gaming, empathic responding, and prosocial behavior
enacted toward strangers, friends, and family members are investigated in [137].
Nakamuro et al. report on a longitudinal experiment [138] that examined
the causal effect of hours of TV watched or of video games played on school-
aged children’s problem behavior, orientation to school, and obesity. The results
suggest that the magnitude of the effect is sufficiently small to be considered
negligible.
Principles for the wise use of computers by children are discussed in [139].

14.1.4 Conclusions

Serious games related to health issues are growing rapidly in number and
spreading into new areas of application, although coverage remains sparse. The numbers of
research studies and scientific articles are growing as well, but most of the
research studies only weakly fulfil criteria for high-quality scientific
investigations. Research studies suffer mainly from poor knowledge about game
design, user experience, human-computer interaction, and assessment of medical
interventions by randomized controlled trials.
It is rather well recognized that video game technologies can provide
effective, efficient, appealing and low-cost interventions for a variety of health
issues. In particular, we can use advanced graphic and audiovisual performance,
multiplayer mode, Internet communication and social media. We can build
multisensory systems with novel input devices and virtual and augmented
realities.
Many studies demonstrated that serious game interventions had similar or
even better results than traditional medical or educational interventions.
However, their assessment is usually very weak. Progress will be made by
investigating in more detail how characteristics of games lead to more effective
health interventions.
Serious games for health are still in their infancy: both in terms of game
development and scientific studies. In order to have more effective games and
higher quality research, we need to link professional knowledge about video
game design and analysis with professional knowledge about health issues.
Video games are not merely platforms for well-established healthcare
interventions; intrinsic features of game technologies (i.e. interactivity,
engagement, challenge, avatar cooperation) should encourage clinicians and
technologists to establish various novel healthcare interventions.

References
[1] Abt C. Serious Games. New York: Viking Press; 1970.
[2] Ritterfeld U, Cody M and Vorderer P (Editors). Serious Games. Mechanisms and Effects.
New York: Routledge, Taylor & Francis; 2009.
[3] Zyda M. From visual simulation to virtual reality to games. Computer. 2005;38: 25–32.
[4] Michael D and Chen S. Serious Games: Games that Educate, Train, and Inform. Mason,
USA: Course Technology, Cengage Learning; 2006.
[5] Grad F. The Preamble of the Constitution of the World Health Organization. Bulletin of the
World Health Organization. 2002;80:981–984.
[6] Kato PM. Video games in health care: closing the gap. Review of General Psychology. 2010;
14:113–121.
[7] Annetta LA. The “I’s” have it: a framework for serious educational game design. Review of
General Psychology. 2010;14:105–112.
[8] de Wit-Zuurendonk LD and Oei SG. Serious gaming in women’s health care. BJOG: An
International Journal of Obstetrics and Gynaecology. 2011;118(SI, suppl.):3.
[9] Stokes B. Video games have changed: time to consider “serious games”. The Development
Education Journal. 2005;11:108.
[10] Connolly TM, Boyle EA, MacArthur E, Hainey T and Boyle JM. A systematic literature review
of empirical evidence on computer games and serious games. Computers & Education.
2012;59:661–686.
[11] Wattanasoontorn V, Boada I, García R and Sbert M. Serious games for health. Entertainment
Computing. 2013;4:231–247.
[12] Primack BA, Carroll MV, McNamara M, Klem ML, King B, Rich M et al. Role of video games
in improving health-related outcomes. A systematic review. American Journal of Preventive
Medicine. 2012;42:630–638.
[13] Graafland M, Schraagen JM and Schijven MP. Systematic review of serious games for
medical education and surgical skills training. British Journal of Surgery. 2012;99:1322–1330.
[14] Horne-Moyer HL, Moyer BH, Messer DC and Messer ES. The use of electronic games in
therapy: a review with clinical implications. Current Psychiatry Reports. 2014;16:1–9.
[15] Ricciardi F and De Paolis T. A comprehensive review of serious games in health professions.
International Journal of Computer Games Technology. 2014, art. no 787968.
[16] Graafland M, Dankbaar M, Mert A, Lagro J, De Wit-Zuurendonk L, Schuit S et al. How to
systematically assess serious games applied to health care. Journal of Medical Internet
Research. 2014;16:1–8.
[17] Cheng MT, Su T, Huang WY and Chen JH. An educational game for learning human
immunology: What do students learn and how do they perceive? British Journal of
Educational Technology. 2014;14:820–833.
[18] Papastergiou M. Exploring the potential of computer and video games for health and
physical education: A literature review. Computers & Education. 2009;53:603–622.
[19] Beale IL, Kato PM, Marin-Bowling VM, Guthrie N and Cole SW. Improvement in cancer-
related knowledge following use of a psychoeducational video game for adolescents and
young adults with cancer. Journal of Adolescent Health. 2007;41:263–270.
[20] Kato PM, Cole SW, Bradlyn AS and Pollock BH. A video game improves behavioral
outcomes in adolescents and young adults with cancer: A randomized trial. Pediatrics.
2008;122: E305-E317.
[21] Sajjad S, Abdullah AH, Sharif M and Mohsin S. Psychotherapy through video game to target
illness related problematic behaviors of children with brain tumor. Current Medical Imaging
Reviews. 2014;10:62–72.
[22] Hussain Z, Astle AT, Webb BS and McGraw PV. The challenges of developing a contrast-
based video game for treatment of amblyopia. Frontiers in Psychology. 2014;5:art. no 1210.
[23] Prosperini L, Fanelli F, Petsas N, Sbardella E, Tona F, Raz E et al. Multiple sclerosis:
changes in microarchitecture of white matter tracts after training with a video game balance
board. Radiology. 2014;273:529–538.
[24] Hoffman HG, Doctor JN, Patterson DR, Carrougher GJ and Furness TA. Virtual reality as
an adjunctive pain control during burn wound care in adolescent patients. Pain. 2000;85:305–
309.
[25] Bidarra R, Gambon D, Kooij R, Nagel D, Schutjes M and Tziouvara I. Gaming at the dentist’s
– serious game design for pain and discomfort distraction. Games for Health. 2013:207–215.
[26] Rego P, Moreira PM and Reis LP. Serious games for rehabilitation: A survey and a
classification towards a taxonomy. Sistemas y Tecnologias de Informacion. 2010:349–354.
[27] Wiemeyer J. Gaming for health – serious games in prevention and rehabilitation. Deutsche
Zeitschrift fur Sportmedizin. 2010;61:252–257.
[28] Burke JW, McNeill MDJ, Charles DK, Morrow PJ, Crosbie JH and McDonough SM.
Optimising engagement for stroke rehabilitation using serious games. Visual Computer.
2009;25: 1085–1099.
[29] Saposnik G, Teasell R, Mamdani M, Hall J, McIlroy W, Cheung D et al. Effectiveness of
virtual reality using wii gaming technology in stroke rehabilitation a pilot randomized clinical
trial and proof of principle. Stroke. 2010;41:1477–1484.
[30] Cho KH, Lee KJ and Song CH. Virtual-reality balance training with a video-game system
improves dynamic balance in chronic stroke patients. Tohoku Journal of Experimental
Medicine. 2012;228:69–74.
[31] Sucar LE, Orihuela-Espina F, Velazquez RL, Reinkensmeyer DJ, Leder R and Hernandez-
Franco J. Gesture Therapy: an upper limb virtual reality-based motor rehabilitation platform.
IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2014:634–643.
[32] Omelina L, Jansen B, Bonnechere B, Van Sint Jan S and Cornelis J. Serious games for
physical rehabilitation: designing highly configurable and adaptable games. In: Proc. 9th Intl
Conf. Disability, Virtual Reality & Associated Technologies: Laval, France. 2012; pp. 195–201.
[33] Jaume-i-Capo A, Martinez-Bueso P, Moya-Alcover B and Varona J. Interactive
rehabilitation system for improvement of balance therapies in people with cerebral palsy.
IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2014:419–427.
[34] Webster D and Celik O. Systematic review of Kinect applications in elderly care and stroke
rehabilitation. Journal of Neuroengineering and Rehabilitation. 2014;11:1–24 art. no 108.
[35] Pompeu JE, Arduini LA, Botelho AR, Fonseca MBF, Pompeu SMAA, Torriani-Pasin C et al.
Feasibility, safety and outcomes of playing Kinect Adventures!(TM) for people with
Parkinson’s disease: a pilot study. Physiotherapy. 2014;100:162–168.
[36] del Corral T, Percegona J, Seborga M, Rabinovich RA and Vilaro J. Physiological response
during activity programs using Wii-based video games in patients with cystic fibrosis (CF).
Journal of Cystic Fibrosis. 2014;13:706–711.
[37] Lieberman DA. Management of chronic pediatric diseases with interactive health games:
theory and research findings. The Journal of Ambulatory Care Management. 2001;24:26–38.
[38] Krishna S, Francisco BD, Balas EA, Konig P, Graff GR and Madsen RW. Internet-enabled
interactive multimedia asthma education program: A randomized trial. Pediatrics. 2003;111:
503–510.
[39] Brown SJ, Lieberman DA, Gemeny BA, Fan YC, Wilson DM and Pasta DJ. Educational
video game for juvenile diabetes: Results of a controlled trial. Medical Informatics.
1997;22:77–89.
[40] Fuchslocher A, Niesenhaus J and Kramer N. Serious games for health: An empirical study of
the game “Balance” for teenagers with diabetes mellitus. Entertainment Computing. 2011;
2:97–101.
[41] Thompson D, Baranowski T, Buday R, Baranowski J, Thompson V, Jago R et al. Serious
video games for health: how behavioral science guided the development of a serious video
game. Simulation & Gaming. 2010;41:587–606.
[42] Creutzfeldt J, Hedman L, Medin C, Heinrichs WL and Fellander-Tsai L. Exploring virtual
worlds for scenario-based repeated team training of cardiopulmonary resuscitation in medical
students. Journal of Medical Internet Research. 2010;12:art. no e38.
[43] Khanal P, Vankipuram A, Ashby A, Vankipuram M, Gupta A, Drumm-Gurnee D et al.
Collaborative virtual reality based advanced cardiac life support training simulator using
virtual reality principles. Journal of Biomedical Informatics. 2014;51:49–59.
[44] Semeraro F, Frisoli A, Ristagno G, Loconsole C, Marchetti L, Scapigliati A et al. Relive: A
serious game to learn how to save lives. Resuscitation. 2014;85:E109-E110.
[45] Mohr DC, Burns MN, Schueller SM, Clarke G and Klinkman M. Behavioral intervention
technologies: evidence review and recommendations for future research in mental health.
General Hospital Psychiatry. 2013;35:332–338.
[46] Green CS, Bavelier D. Exercising your brain: a review of human brain plasticity and training-
induced learning. Psychology & Aging.2008;23:692–701.
[47] Boot WR, Blakely DP and Simons DJ. Do action video games improve perception and
cognition? Frontiers in Psychology 2011;2:art. no 226.
[48] Green CS and Bavelier D. Action video game modifies visual selective attention. Nature.
2003;423:534–537.
[49] Green CS and Bavelier D. Action-video-game experience alters the spatial resolution of
vision. Psychological Science. 2007;18:88–94.
[50] Castel AD, Pratt J and Drummond E. The effects of action video game experience on the
time course of inhibition of return and the efficiency of visual search. Acta Psychologica.
2005; 119:217–230.
[51] Boot WR, Kramer AF, Simons DJ, Fabiani M and Gratton G. The effects of video game
playing on attention, memory, and executive control. Acta Psychologica. 2008;129:387–398.
[52] Li RJ, Polat U, Makous W and Bavelier D. Enhancing the contrast sensitivity function through
action video game training. Nature Neuroscience. 2009;12:549–551.
[53] Millar A. Self-control and choice in humans: Effects of video game playing as a positive
reinforcer. Learning and Motivation. 1984;15:203–218.
[54] Dye MWG, Green CS and Bavelier D. Increasing speed of processing with action video
games. Current Directions in Psychological Science. 2009;18:321–326.
[55] Dye MWG, Green CS and Bavelier D. The development of attention skills in action video
game players. Neuropsychologia. 2009;47:1780–1789.
[56] Connors EC, Chrastil ER, Sanchez J and Merabel LB. Action video game play and transfer
of navigation and spatial cognition skills in adolescents who are blind. Frontiers in Human
Neuroscience. 2014;8:1–8.
[57] Mayer RE. Multimedia Learning. New York: Cambridge University Press; 2001.
[58] Mayer RE. The Cambridge Handbook of Multimedia Learning. Cambridge, UK: Cambridge
University Press; 2005.
[59] Mitchell A and Savill-Smith C. The use of computer and video games for learning: A review of
the literature. London: Learning and Skills Development Agency; 2004.
[60] Brom C, Bromova E, Dechterenko F, Buchtova M and Pergel M. Personalized messages in a
brewery educational simulation: Is the personalization principle less robust than previously
thought? Computers & Education. 2014;72:339–366.
[61] Brom C, Buchtova M, Sisler V, Dechterenko F, Palme R and Glenk LM. Flow, social
interaction anxiety and salivary cortisol responses in serious games: A quasi-experimental
study. Computers & Education. 2014;79:69–100.
[62] Lester JC, Spires HA, Nietfeld JL, Minogue J, Mott BW and Lobene EV. Designing game-
based learning environments for elementary science education: A narrative-centered learning
perspective. Information Sciences. 2014;264:4–18.
[63] Boyle EA, MacArthur EW, Connolly TM, Hainey T, Manea M, Karki A, et al. A narrative
literature review of games animations and simulations to teach research methods and
statistics. Computers & Education. 2014;74:1–14.
[64] Jenkins H. Convergence Culture: Where Old and New Media Collide. New York: New York
University Press; 2006.
[65] Raybourn EM. A new paradigm for serious games: Transmedia learning for more effective
training and education. Journal of Computational Science. 2014;5:471–481.
[66] Lamb RL, Vallett DB, Akmal T and Baldwin K. A computational modeling of student cognitive
processes in science education. Computers & Education. 2014;79:116–125.
[67] Lee S, Baik Y, Nam K, Ahn J, Lee Y, Oh S, et al. Developing a cognitive evaluation method
for serious game engineers. Cluster Computing–The Journal of Networks Software Tools and
Applications. 2014;17:757–766.
[68] Studenski S, Perera S, Hile E, Keller V, Spadola-Bogard J and Garcia J. Interactive video
dance games for healthy older adults. Journal of Nutrition Health & Aging. 2010;14:850–852.
[69] Basak C, Boot WR, Voss MW and Kramer AF. Can training in a real-time strategy video
game attenuate cognitive decline in older adults? Psychology and Aging. 2008;23:765–777.
[70] Laskowska I, Zając-Lamparska L, Wiłkość M, Malicki M, Szałkowska A, Jurgielewicz A, et al.
A serious game – a new training addressing particularly prospective memory in the elderly.
Bio-Algorithms and Med-Systems. 2013;9:155–165.
[71] Jurgielewicz A, Lewandowski P. Serious game – diagnosis in elderly patients – house and
shop. Jagiellonian Univ. MSc Thesis. 2012 (in Polish); Buczkowski K. Serious game in
medicine – psychological disorders diagnostics – trip. Jagiellonian Univ. MSc Thesis. 2014 (in
Polish).
[72] Fernandez-Aranda F, Jimenez-Murcia S, Santamaria JJ, Gunnard K, Soto A, Kalapanidas E,
et al. Video games as a complementary therapy tool in mental disorders: PlayMancer, a
European multicentre study. Journal of Mental Health. 2012;21:364–374.
[73] Yalon-Chamovitz S and Weiss PL. Virtual reality as a leisure activity for young adults with
physical and intellectual disabilities. Research in Development Disabilities. 2007;29:273.
[74] Lopez-Basterretxea A, Mendez-Zorrilla A and Garcia-Zapirain B. A Telemonitoring tool based
on serious games addressing money management skills for people with intellectual disability.
International Journal of Environmental Research and Public Health. 2014;11:2361–2380.
[75] Boucenna S, Narzisi A, Tilmont E, Muratori F, Pioggia G and Cohen D, et al. Interactive
technologies for autistic children: a review. Cognitive Computation. 2014;6:722–740.
[76] Serret S, Hun S, Iakimova G, Lozada J, Anastassova M, Santos A, et al. Facing the
challenge of teaching emotions to individuals with low- and high-functioning autism using a
new Serious game: a pilot study. Molecular Autism. 2014;5:art. no 37.
[77] Christinaki E, Vidakis N, Triantafyllidis G. A novel educational game for teaching emotion
identification skills to preschoolers with autism diagnosis. Computer Science and Information
Systems. 2014;11:723–743.
[78] Bernardini S, Porayska-Pomsta K and Smith TJ. ECHOES: An intelligent serious game for
fostering social communication in children with autism. Information Sciences. 2014;264:41–
60.
[79] Roh CH and Lee WB. A study of the attention measurement variables of a serious game as a
treatment for ADHD. Wireless Personal Communications. 2014;79:2485–2498.
[80] Liarokapis F, Debattista K, Vourvopoulos A, Petridis P and Ene A. Comparing interaction
techniques for serious games through brain–computer interfaces: A user perception
evaluation study. Entertainment Computing. 2014;5:391–399.
[81] Gamito P, Oliveira J, Lopes P, Brito R, Morais D, Silva D, et al. Executive functioning in
alcoholics following an mHealth cognitive stimulation program: randomized controlled trial.
Journal of Medical Internet Research. 2014;16:269–281.
[82] Botella C, Breton-Lopez J, Quero S, Banos RM, Garcia-Palacios A, Zaragoza I, et al.
Treating cockroach phobia using a serious game on a mobile phone and augmented reality
exposure: a single case study. Computers in Human Behavior. 2011;27:217–227.
[83] Subramaniam K, Luks TL, Fisher M, Simpson GV, Nagarajan S and Vinogradov S.
Computerized cognitive training restores neural activity within the reality monitoring network
in schizophrenia. Neuron. 2012;73:842–853.
[84] Robert PH, Konig A, Amieva H, Andrieu S, Bremond F, Bullock R, et al. Recommendations
for the use of Serious Games in people with Alzheimer’s Disease, related disorders and
frailty. Frontiers in Aging Neuroscience. 2014;6:art. no 54.
[85] Atkinson S and Narasimhan V. Design of an introductory medical gaming environment for
diagnosis and management of Parkinson’s disease. Trends in Information Sciences and
Computing. 2010:94–102.
[86] Koepp MJ, Gunn RN, Lawrence AD, Cunningham VJ, Dagher A, Jones T, et al. Evidence for
striatal dopamine release during a video game. Nature. 1998;393:266–268.
[87] Cole H, Griffiths MD. Social interactions in massively multiplayer online role-playing gamers.
Cyberpsychology. 2007;10:575–583.
[88] Taylor N, Jenson J, de Castell S and Dilouya B. Public displays of play: studying online
games in physical settings. Journal of Computer Mediated Communication. 2014;19:763–
779.
[89] Padilla-Walker LM, Nelson LJ, Carroll JS and Jensen AC. More than just a game: video
game and internet use during emerging adulthood. Journal of Youth and Adolescence. 2010;
39:103–113.
[90] Jordan NA. Video games: support for the evolving family therapist. Journal of Family
Therapy. 2014;36:359–370.
[91] Gentile DA, Anderson CA, Yukawa S, Ihori N, Saleem M, Ming LK, et al. The effects of
prosocial video games on prosocial behaviors: international evidence from correlational,
longitudinal, and experimental studies. Personality and Social Psychology Bulletin.
2009;35:752–763.
[92] Przybylski AK. Electronic gaming and psychosocial adjustment. Pediatrics. 2014;134:1–7.
[93] Jouriles EN, McDonald R, Kullowatz A, Rosenfield D, Gomez GS and Cuevas A. Can virtual
reality increase the realism of role plays used to teach college women sexual coercion and
rape-resistance skills? Behavior Therapy. 2009;40:337–345.
[94] Pompedda F, Zappala A and Santtila P. Simulations of child sexual abuse interviews using
avatars paired with feedback improves interview quality. Psychology Crime & Law. 2015;
21:28–52.
[95] Hansen MM. Versatile, immersive, creative and dynamic virtual 3-D healthcare learning
environments: a review of the literature. Journal of Medical Internet Research. 2008;10: art.
no e26.
[96] Roubidoux MA, Chapman CM and Piontek ME. Development and evaluation of an
interactive web-based breast imaging game for medical students. Academic Radiology.
2002;9: 1169–1178.
[97] Sward KA, Richardson S, Kendrick J and Maloney C. Use of a web-based game to teach
pediatric content to medical students. Ambulatory Pediatrics. 2008;8:354–359.
[98] Amine EM, Pasquier P, Rosencher J, Steg G, Carli P, Varenne O, et al. Simulation modeling
and computer simulation (serious game)? The case of acute coronary syndrome. Annales
Francaises d Anesthesie et de Reanimation. 2014;33:A202-A202.
[99] Galland A, Pasquier P, Kerneis MA, Monneins N, Chassard D, Ducloy-Bouthors AS, et al.
Simulation modeling and computer simulation (serious game)? The example of post-partum
hemorrhage (Hemosims). Annales Francaises d Anesthesie et de Reanimation. 2014;33:
A203-A203.
[100] Imperial College London. Game-based Learning for Virtual Patients in Second Life. 2008.
http://www.imperial.ac.uk/edudev/flyers/technologies/Game_Based_Learning.pdf.
[101] Harvey H, Havard M, Magnus D, Cho MK and Riedel-Kruse IH. Innocent fun or
“microslavery”? An ethical analysis of biotic games. Hastings Center Report. 2014;44:38–46.
[102] Stefanidis D, Korndorffer JR, Sierra R, Touchard C, Dunne JB and Scott DJ. Skill retention
following proficiency-based laparoscopic simulator training. Surgery. 2005;138:165–170.
[103] Stefanidis D, Scerbo MW, Sechrist C, Mostafavi A and Heniford BT. Do novices display
auto- maticity during simulator training? The American Journal of Surgery. 2008;195:210–
213.
[104] Hogle NJ, Widmann WD, Ude AO, Hardy MA and Fowler DL. Does training novices to
criteria and does rapid acquisition of skills on laparoscopic simulators have predictive validity
or are we just playing video games? Journal of Surgical Education. 2008;65:431–435.
[105] Verdaasdonk EGG, Dankelman J, Schijven MP, Lange JF, Wentink M and Stassen LPS.
Serious gaming and voluntary laparoscopic skills training: A multicenter study. Minimally
Invasive Therapy & Allied Technologies. 2009;18:232–238.
[106] Lynch J, Aughwane P and Hammond TM. Video games and surgical ability: a literature
review. Journal of Surgical Education. 2010;67:184–189.
[107] Jalink MB, Goris J, Heineman E, Pierie JP and ten Cate Hoedemaker HO. The effects of
video games on laparoscopic simulator skills. The American Journal of Surgery.
2014;208:151–156.
[108] Rybarczyk Y, Carrasco G, Cardoso T and Pavao Martins I. A serious game for multimodal
training of physician novices. In: 6th International Conference of Education, Research and
Innovation-ICERI2013; 2013; Seville, Spain. p. 18–20.
[109] Graafland M, Vollebergh MF, Lagarde SM, van Haperen M, Bemelman WA and Schijven MP.
A serious game can be a valid method to train clinical decision-making in surgery. World
Journal ofSurgery. 2014;38:3056–3062.
[110] Graafland M, Bemelman WA and Schijven MP. Prospective cohort study on surgeons’
response to equipment failure in the laparoscopic environment. Surgical Endoscopy and
Other Interventional Techniques. 2014;28:2695–2701.
[111] Mohan D, Angus DC, Ricketts D, Farris C, Fischhoff B, Rosengart MR, et al. Assessing the
validity of using serious game technology to analyze physician decision making. Plos One.
2014;9:art. no e105445.
[112] Lagro J, van de Pol MHJ, Laan A, Huijbregts-Verheyden FJ, Fluit LCR and Rikkert MGMO. A
randomized controlled trial on teaching geriatric medical decision making and cost
consciousness with the serious game GeriatriX. Journal of the American Medical Directors
Association. 2015;15:art. no 957.e1. 2015.
[113] Baranowski T, Buday R, Thompson D and Baranowski J. Playing for real – video games and
stories for health-related behavior change. American Journal of Preventive Medicine.
2008;34:74–82.
[114] DeSmet A, Van Ryckeghem D, Compemolle S, Baranowski T, Thompson D, Crombez G, et
al. A meta-analysis of serious digital games for healthy lifestyle promotion. Preventive
Medicine. 2014;69:95–107.
[115] Bogost I. Persuasive Games. Cambridge, MA: The MIT Press; 2007.
[116] Orji R, Vassileva J and Mandryk RL. Modeling the efficacy of persuasive strategies for
different gamer types in serious games for health. User Modeling and User-Adapted
Interaction. 2014;24:453–498.
[117] Baranowski T, Baranowski J, Cullen KW, Marsh T, Islam N, Zakeri I, et al. Squire’s Quest!
Dietary outcome evaluation of a multimedia game. American Journal of Preventive Medicine.
2003;24:52–61.
[118] Barreira GJ, Carrascosa R and Segovia P. Nutritional serious-games platform. eChallenges.
2010:1–8.
[119] Rimer BK and Kreuter MW. Advancing tailored health communication: a persuasion and
message effects perspective. Journal of Communication. 2006;56:S184–201.
[120] Lustria MLA, Cortese J, Noar SM and Glueckaluf RL. Computer-tailored health interventions
delivered over the web: Review and analysis of key components. Patient Education and
Counseling. 2009;74:156–173.
[121] Bogost I. The Rhetoric of Exergaming. Paper presented at the Digital Arts and Cultures
conference, Copenhagen Denmark, December 2005; [cited 7 June 2015]. Available from:
http://bogost.com/writing/the_rhetoric_of_exergaming/2005.
[122] Daley AJ. Can exergaming contribute to improving physical activity levels and health
outcomes in children? Pediatrics. 2009;124:763–771.
[123] Laikari A. Exergaming–Gamingfor health: A bridge between real world and virtual
communities. In: IEEE 13th International Symposium on Consumer Electronics; 2009; Kyoto.
pp. 665–668.
[124] Göbel S, Hardy S, Wendel V and Steinmetz R. Serious games for health – personalized
exergames. Proceedings ACM Multimedia. 2010:1663–1666.
[125] Tejeiro Salguero RA and Bersabe Moran RM. Measuring problem video game playing in
adolescents. Addiction. 2002;97:1601–1606.
[126] Grusser SM, Thalemann R and Griffiths MD. Excessive computer game playing: Evidence
for addiction and aggression? Cyberpsychology & Behavior. 2007;10:290–292.
[127] Sun DL, Ma N, Bao M, Chen XC and Zhang DR. Computer games: a double-edged sword?
Cyberpsychology & Behavior. 2008;11:545–548.
[128] Ream GL, Elliott LC and Dunlap E. Playing video games while using or feeling the effects of
substances: associations with substance use problems. International Journal of
Environmental Research and Public Health. 2011;8:3979–3998.
[129] Brunborg GS, Mentzoni RA and Froyland LR. Is video gaming, or video game addiction,
associated with depression, academic achievement, heavy episodic drinking, or conduct
problems? Journal of Behavioral Addictions. 2014:27–32.
[130] Anderson CA. An update on the effects of playing violent video games. Journal of
Adolescence. 2004;27:113–122.
[131] Anderson CA, Shibuya A, Ihori N, Swing EL, Bushman BJ, Sakamoto A, et al. Violent video
game effects on aggression, empathy, and prosocial behavior in Eastern and Western
countries: a meta-analytic review. Psychological Bulletin. 2010;136:151–173.
[132] Ferguson CJ. The Good, The Bad and the Ugly: a meta-analytic review of positive and
negative effects of violent video games. Psychiatric Quarterly. 2007;78:309–316.
[133] Anand V. A study of time management: The correlation between video game usage and
academic performance markers. Cyberpsychology & Behavior. 2007;10:552–559.
[134] Higuchi S, Motohashi Y, Liu Y and Maeda A. Effects of playing a computer game using a
bright display on presleep physiological variables, sleep latency, slow wave sleep and REM
sleep. Journal of Sleep Research. 2005:267–273.
[135] Wack E and Tantleff-Dunn S. Relationships between electronic game play, obesity, and
psychosocial functioning in young men. Cyberpsychology. 2009;12:241–244.
[136] Griffiths MD, Davies MNO and Chappell D. Breaking the stereotype: the case of online
gaming. Cyberpsychology & Behavior. 2003;6:81–91.
[137] Fraser AM, Padilla-Walker LM, Coyne SM, Nelson LJ and Stockdale LA. Associations
between violent video gaming, empathic concern, and prosocial behavior toward strangers,
friends, and family members. Journal of Youth and Adolescence. 2012;41:636–649.
[138] Nakamuro M, Inui T, Senoh W and Hiromatsu T. Are television and video games really
harmful for kids? Contemporary Economic Policy. 2015;33:29–43.
[139] Straker LM, Pollock C and Maslen B. Principles for the wise use of computers by children.
Ergonomics. 2009;52:1386–1401.

Ewa Grabska
14.2 Serious game graphic design based on understanding of a new model of visual perception – computer graphics
14.2.1 Introduction

The purpose of serious games is to improve an individual’s knowledge, skills, or attitude in the real world. This section focuses on the role of graphic design tools
applied to serious games in improving the ability to coordinate and share visual
attention. The lack of this ability is one of the social communication problems observed, for example, in children with Attention Deficit/Hyperactivity Disorder (ADHD) or Autism Spectrum Condition (ASC). The challenge is to invent a method of helping children with ADHD or ASC without drug treatment. Studies of various media and techniques have shown that experience of playing action video games can influence visual selective attention [1].
Games in healthcare have been employed as tools to support a wide range of activities, from therapy to the training of specific skills. One such tool, described by Bernardini, Porayska-Pomsta and Smith [2], is ECHOES – a serious game designed to help young children with ASC. Its interactive learning activities take place in a two-dimensional magic sensory garden containing various objects and the ECHOES virtual agent, who acts as a partner to the child. The agent has a child-like physical appearance, established through a combination of research studies. The focus of ECHOES is on supporting children’s social communication by inviting the child to practice basic skills, such as responding to bids for interaction and initiating bids for interaction.
Frequent playing of serious games which promote the principles and techniques of effective visual communication has profound effects on the visual system and on motor responses; such games are therefore incorporated into rehabilitation techniques. Many computer tests exist that measure the visual attention of children with ADHD responding to visual stimulation. For instance, the test proposed by Roh and Lee [3] contains three objects, each presented inside a square: a triangle, a circle, and a rectangle. The triangle is the target object, while the circle and the rectangle are nontarget stimuli. A response should be made only when the target object appears on the monitor screen. Research on variables that measure the visual attention of children with ADHD has led to the development of computer games which improve children’s attention. Kim and Hong [4] give another reason for the positive effect of games on attention, namely their association with voluntary participation and motivation.
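By way of illustration, the scoring logic of such a go/no-go test can be sketched in a few lines of Python. This is a hypothetical sketch of our own, not the software used by Roh and Lee; the stimulus names, trial counts and response probabilities are assumptions.

```python
import random
from dataclasses import dataclass
from typing import Optional

TARGET = "triangle"              # a triangle in a square is the target
NONTARGETS = ["circle", "rectangle"]

@dataclass
class Trial:
    stimulus: str
    responded: bool
    reaction_ms: Optional[float]  # None when no response was made

def score(trials):
    """Aggregate the classic attention-measurement variables."""
    hits = [t for t in trials if t.stimulus == TARGET and t.responded]
    omissions = sum(t.stimulus == TARGET and not t.responded for t in trials)
    commissions = sum(t.stimulus != TARGET and t.responded for t in trials)
    mean_rt = sum(t.reaction_ms for t in hits) / len(hits) if hits else None
    return {"hits": len(hits), "omission_errors": omissions,
            "commission_errors": commissions, "mean_reaction_ms": mean_rt}

# Simulate one session of 20 trials with plausible response behavior.
random.seed(1)
trials = []
for _ in range(20):
    s = random.choice([TARGET] + NONTARGETS)
    responded = random.random() < (0.9 if s == TARGET else 0.1)
    trials.append(Trial(s, responded,
                        random.uniform(350, 700) if responded else None))

print(score(trials))
```

Omission errors (missed targets), commission errors (responses to nontargets) and mean reaction time are the kinds of variables such tests report.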
The problems of effective control and allocation of visual attention are essential in visual communication. Learning their principles is an intensely personal venture, so it is necessary to create conditions conducive to personal exploration of these rules. Understanding how to create such conditions is a formidable challenge, which is why it is advisable to develop theories and models of graphic design, grounded in visual perception, that are useful to serious games.

14.2.2 A new model of perception for visual communication

A new model of perception as a dynamic process has emerged over the last decade. Our attentional capacity is very limited, and information unrelated to the current task is quickly replaced with whatever is needed right now. According to Ware [5], we are conscious of the field of information to which we have rapid access, rather than being immediately conscious of the world; only very limited pre-processing is used to direct attention. A new perspective for serious games is to develop graphic designs based on a scientific understanding of visual attention. The player usually tries to solve some kind of cognitive problem. In the framework of the new model of perception, playing a game consists, for the player, of a series of acts of attention called visual queries, which drive eye movements and pattern finding. Understanding how visual queries work could allow the game designer to influence these movements. One direction of research into this mechanism has been to study the properties of simple patterns that make them easy to find. Some things seem to pop out from the monitor screen at the player. According to Treisman and Gormican [6], the relationship of a visual search target to the other objects that surround it plays an essential role in pop-out effects. The target becomes the center of fixation if it is distinct in some feature channel of the primary visual cortex. In other words, graphic design can help program an eye movement. Visual properties that can be used in planning the next eye movement are called tunable. According to Ware [5], an object that pops out can be seen in a single eye fixation, and the processing needed to separate it from its surroundings takes less than a tenth of a second, whereas objects that do not pop out require several eye movements to find, and the processing needed to notice them takes between one and a few seconds.
Let us consider a configuration of objects on the monitor screen. The strongest pop-out effects occur when a single target object differs in some feature from all other objects, which are identical to one another. Pop-out effects are explained in terms of the basic features processed in the primary visual cortex: color, shape, orientation, shadow and blinking all lead to pop-out. Examples of patterns that show the pop-out effect are presented in Fig. 14.1.

Fig. 14.1: Patterns showing the pop-out effect (a–d).

So far we have only considered target objects that differ from their surroundings in a single feature. Fig. 14.2 shows a target object which differs from all other objects in a greater number of features. It is easy to see that the pop-out effect for this pattern is stronger than in the case of the similar pattern in Fig. 14.1.
Fig. 14.2: Another pattern showing the pop-out effect.

Let us consider the more complex problem of focusing attention on several target objects defined by a combination of two features. Searching for such objects is called visual conjunctive search. Fig. 14.3 presents a number of white and gray flower pots with round or conical shapes.

Fig. 14.3: An example of the visual conjunctive search.

There are six round white flower pots. They do not show a pop-out effect, because the primary visual cortex can be tuned for shape or for color, but not for both attributes at once.
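The difference between the two kinds of search is easy to reproduce on screen. The sketch below is our own illustration, not the code behind the book’s figures; it assumes the matplotlib library and draws a single-feature display next to a conjunction display.

```python
import random
import matplotlib.pyplot as plt

random.seed(0)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))

# Feature search: one gray disc among white discs; the target pops out.
for i in range(24):
    ax1.scatter(random.random(), random.random(), s=200,
                c="gray" if i == 0 else "white", edgecolors="black")
ax1.set_title("Single-feature target (pop-out)")

# Conjunctive search: the target is gray AND round, while every distractor
# shares one of those two features with it; no pop-out occurs.
shapes = [("o", "gray")] + [random.choice([("o", "white"), ("s", "gray"),
                                           ("s", "white")]) for _ in range(23)]
for marker, color in shapes:
    ax2.scatter(random.random(), random.random(), s=200,
                c=color, marker=marker, edgecolors="black")
ax2.set_title("Conjunction target (no pop-out)")

for ax in (ax1, ax2):
    ax.set_xticks([])
    ax.set_yticks([])
plt.show()
```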
The study also included an analysis of the corresponding feature space diagrams, whose axes represent different feature channels and which allow one to understand what makes objects distinct. Solutions to more complex design problems, for instance making several objects easily searchable at the same time, are based on this analysis.
Fig. 14.4: The graphic and corresponding feature space diagram.

Fig. 14.4 presents a graphic composed of visual elements (segments) and the corresponding diagram, with three feature channels represented by axes of length, color and orientation. The diagram characterizes the graphic by the number of congruent visual elements it contains for each combination of values of the three features. The graphic in Fig. 14.5 has equipotent sets of congruent segments, i.e. it consists of 48 segments forming 16 equally sized sets of congruent elements.

Fig. 14.5: The graphic with 16 sets of congruent elements.
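The bookkeeping behind such a diagram amounts to grouping elements by their coordinates in feature space. The following minimal sketch uses illustrative feature values of our own, chosen only to reproduce the 48-segment, 16-class count of Fig. 14.5.

```python
from collections import Counter
from itertools import product

# Each segment is a point (length, color, orientation) in a three-channel
# feature space; "congruent" elements share all three coordinates.
lengths = ("short", "long")
colors = ("black", "gray")
orientations = ("0deg", "45deg", "90deg", "135deg")

# 16 feature combinations x 3 copies each = 48 segments, as in Fig. 14.5.
segments = [combo for combo in product(lengths, colors, orientations)
            for _ in range(3)]

classes = Counter(segments)
print(len(segments), "segments in", len(classes), "congruence classes")
# -> 48 segments in 16 congruence classes (equipotent: 3 elements each)
assert set(classes.values()) == {3}
```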

The designer usually creates a complex visual object whose elements can differ in color, size, shape, texture, and orientation. The challenge for designers is to make a visual object with more than three elements rapidly show the pop-out effect. In Ware’s opinion, creating a display containing more than eight visual elements which all support pop-out searching is probably impossible. Fig. 14.6 presents seven visual elements, each of which is independently searchable.
Fig. 14.6: Seven visual elements with pop-out effect.

14.2.3 Visibility enhancement with the use of animation

Computers offer new possibilities in generating and manipulating graphics. If we consider the differences between text and hypertext, we see that graphic design on the computer need not be just a straight conversion of graphic design on paper. As discussed above, human perception is described as “active vision”, meaning that perception is a dynamic process of constructing images. It is thus not restricted to perceiving objects and is often aided by computer tools which enhance and extend the user’s mind. A computer screen is an example of such a tool, providing a platform for mental and external manipulation [7]. The viewer’s attention is the goal of visual communication. Animation effectively supports an orienting response, and animating objects is therefore a method of visibility enhancement.
In ECHOES, the serious game considered here, the virtual agent who partners the child with ASC uses Makaton, a language programme of signs and symbols, to support spoken language. For example, “Yes” is accompanied by a head nod, and “Good job” by a thumbs-up. More complex signs require sequences of agent gestures; for instance, “Your turn” is indicated by the hand held in a fist with the base of the hand pointing towards the person being addressed. The agent can perform a number of positive facial expressions, implemented by changes in its lips and eyebrows and accompanied by body gestures corresponding to emotions.
It should be noted here that we rapidly become habituated to simple motion [5]. According to Hillstrom and Yantis [8], the objects which most powerfully elicit the orienting response are not objects which move, but objects which emerge into the visual field. An example of key frames for an emergent object is shown in Fig. 14.7.
Fig. 14.7: Key frames for an emergent object.
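The book does not prescribe an implementation, but key-frame animation of an emergent object can be sketched as interpolation of opacity and scale, so that the object appears in the visual field rather than merely moving within it. The key-frame values below are assumptions for illustration.

```python
# Hypothetical key frames for an "emergent" object: (time, state) pairs.
KEYFRAMES = [
    (0.0, {"opacity": 0.0, "scale": 0.0}),   # not yet visible
    (0.3, {"opacity": 0.6, "scale": 0.8}),   # emerging
    (1.0, {"opacity": 1.0, "scale": 1.0}),   # fully present
]

def interpolate(t):
    """Linear interpolation of the object's state at time t in [0, 1]."""
    for (t0, s0), (t1, s1) in zip(KEYFRAMES, KEYFRAMES[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return {k: s0[k] + u * (s1[k] - s0[k]) for k in s0}
    return KEYFRAMES[-1][1]

for frame in range(5):              # sample five frames of the animation
    print(round(frame / 4, 2), interpolate(frame / 4))
```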

14.2.4 Conclusion

Effective visual communication is an integral part of designing serious games, one that requires logical thinking and strategies for problem solving. This section is an attempt to present some visual communication issues from the perspective of the new model of perception. The goal of visual communication is the viewer’s attention, and a necessary condition is a high level of continuous interest. Capturing it is also the first step towards opening players up to previously unexplored avenues of problem solving in real life.

References
[1] Castel AD, Pratt J and Drummond E. The effects of action video game experience on the time course of inhibition of return and the efficiency of visual search. Acta Psychologica. 2005;119:217–230.
[2] Bernardini S, Porayska-Pomsta K and Smith TJ. ECHOES: An intelligent serious game for
fostering social communication in children with autism. Information Sciences. 2014;264:41–
60.
[3] Roh CH and Lee WB. A study of the attention measurement variables of a serious game as a treatment for ADHD. Wireless Personal Communications. 2014;79:2485–2498.
[4] Kim MY and Hong YG. A single case study on a child with ADHD by the application of
attention improvement training composed of play and games. The Korean Journal of
Elementary Counseling. 2009;8(1):15–32.
[5] Ware C. Visual Thinking for Design. Burlington, MA: Elsevier; 2008.
[6] Treisman A and Gormican S. Feature analysis in early vision: Evidence from search asymmetries. Psychological Review. 1988;95(1):15–48.
[7] Grabska E, Ślusarczyk G and Szłapak M. Animation in art design. In: Gero JS, ed. Design
Computing and Cognition’04. Dordrecht, Netherlands: Springer; 2004, pp. 317–335.
[8] Hillstrom AP and Yantis S. Visual motion and attentional capture. Perception & Psychophysics. 1994;55(4):399–411.

Irena Roterman-Konieczna
14.3 Serious gaming in medicine
This subsection on serious gaming is included in the chapter devoted to simulations which support therapy. A review of online therapeutic games has been presented above.
Here, we focus on two separate issues. The first concerns support for burdensome therapies in children (e.g. those requiring dialysis or frequent collection of blood samples for analysis), while the second concerns training memorization skills in elderly patients.
To begin with, we need to note that therapeutic gaming can be useful when
the outcome of therapy depends on close cooperation between the patient and the
physician, or when the therapy requires the patient to submit to a strict treatment
regimen (e.g. dietary requirements). A classic example involves weight loss
strategies which critically depend on the patient’s attitude and perseverance. The
type of gaming discussed in this chapter cannot be applied when the course of
the disease does not depend on the patient’s actions – even though any form of
emotional support and encouragement may bring medical benefits.

14.3.1 Therapeutic support for children

Treating children – especially in an inpatient scenario – seems quite straightforward. Children like to win, and most of them play video games, hence the language of therapeutic gaming should be immediately familiar to them.
The outcome of therapy – especially when self-restraint is called upon –
depends to a great extent on the patient’s eagerness to defeat the “opponent” (in
this instance – the disease). When dealing with a child the need for restraint
should be supported by emotional arguments since appealing to reason alone is
often insufficient. A game which, by its nature, appeals to emotions – such as the
need to defeat one’s opponent – promotes a positive approach to the entire
therapeutic process. The goal of serious gaming is attained when clinical
indicators (concentrations of certain substances in the patient’s blood or urine)
approach or reach their respective normal ranges. These ranges define the
conditions for a “full victory” (Fig. 14.8). The child patient usually begins in a
state which is distant from the desired outcome. Successful therapy requires the
patient to overcome behavioral obstacles. Whenever progress is observed, the
child is rewarded by access to a game whose duration corresponds to the
magnitude of medical improvements. Upon reaching the final goal, the child is
offered a wide range of games which can be played over an arbitrarily long
period of time. One typical example is Formula 1 racing, with the duration of the gaming session dependent on the outcome of therapy. Having reached their goal, the player may freely select a car model and continue playing the game for as long as they wish. The diagram presented in Fig. 14.8 (a) is not
particularly interesting to a child. In order to maximize effectiveness,
gamification of therapy must appeal to the child’s imagination – hence the
simple diagram (Fig. 14.8 (a)) is transformed into a scene (Fig. 14.8 (b)) with the
“home” acting as the desired final state (this concept has been suggested by
Kinga Juszka – a student of applied computer science at the Department of
Physics, Astronomy and Applied Computer Science of the Jagiellonian
University). The player’s icon, representing the current state, traces a path on the
meadow, keeping the child’s mind off the hardships associated with medical
therapy. Positive results of such games are particularly evident in hospital wards
where the competitive attitude is reinforced by group dynamics – each success
attained by one patient mobilizes other patients to increase their own efforts.

Fig. 14.8: Example of serious gaming aimed at child patients. (a) Simple graph presenting the
current status, along with the patient’s history in a coordinate system (results of clinical
examinations). (b) The same chart presented in a form which appeals to the child’s imagination.
The goal is to reach “home” (which reflects the natural wishes of children being treated on a
hospital ward). Original concept by Kinga Juszka.
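The reward rule described above translates naturally into code. The following is a hypothetical sketch, not the actual implementation; the clinical indicator, its normal range and the time limits are illustrative assumptions only.

```python
NORMAL_RANGE = (3.5, 5.0)   # assumed normal range of a blood indicator
BASELINE = 9.0              # the patient's starting (worst) value
MAX_MINUTES = 30            # session length granted just before full victory

def session_minutes(value):
    """Map the current indicator value to the allowed gaming time."""
    low, high = NORMAL_RANGE
    if low <= value <= high:
        return None  # "full victory": unrestricted access to the game
    # Distance from the normal range, normalized by the starting distance.
    distance = min(abs(value - low), abs(value - high))
    start = min(abs(BASELINE - low), abs(BASELINE - high))
    progress = max(0.0, 1.0 - distance / start)
    return round(MAX_MINUTES * progress)

for v in (9.0, 7.0, 5.5, 4.2):
    print(v, "->", session_minutes(v))
# 9.0 -> 0 minutes, 7.0 -> 15, 5.5 -> 26, 4.2 -> unlimited (None)
```

The design choice is simply that play time grows monotonically with clinical progress, which is what makes each improvement immediately tangible to the child.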

Another interesting concept involving gaming as a form of reward has been proposed by Anna Chyrek (a student at the Department of Biotechnology of the JU). In this case, the game assumes the form of a jigsaw puzzle (see Fig. 14.9).

Fig. 14.9: Example of a puzzle game where the duration of the session depends on the observed
progress of therapy. Left-hand image: initial chaotic state; central image: result of a brief gaming
session; right-hand image: result of unhindered access to the game, awarded for reaching the
desired medical result (expressed by the appropriate indicators). Original concept by Anna
Chyrek.

14.3.2 Therapeutic support for the elderly

A team of experts at the Institute of Psychology, Kazimierz Wielki University in Bydgoszcz, has devised a therapy which aims to increase memorization skills in the elderly as they go about their daily activities. Here, memory exercises focus on sets of questions associated with a museum trip, a meeting with friends, and preparations for a visit by one’s grandchildren. The questions themselves concern cooking recipes, places visited during a walk, or interpersonal relations between the invited friends. “Success” depends on the
final score, which is displayed at the end of the session and reflects the patient’s
memory skills on a given day. One concern raised with regard to this strategy is that good results may not necessarily reflect improved memory but simply follow from the repetitive execution of similar tasks. Nevertheless, any
form of memorization helps train the patient’s mental faculties and may have
beneficial effects. Fig. 14.10 presents the selection of foodstuffs required by a
given recipe. Clicking on a product displays a pop-up form where the user is
asked to provide the required quantity. Correct answers (e.g. the correct number
of tomatoes) increase the user’s final score.
Fig. 14.10: Purchasing products needed to prepare a dish (vegetable salad). (a) product
selection; (b) quantity input box.
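The scoring behind such an exercise is straightforward. The sketch below is our own illustration; the recipe and the quantities are assumed, not taken from the actual software.

```python
# Hypothetical recipe for the vegetable salad of Fig. 14.10.
RECIPE = {"tomatoes": 3, "cucumbers": 2, "onions": 1}

def quiz(answers):
    """Score a session: one point per correctly remembered quantity."""
    return sum(answers.get(item) == qty for item, qty in RECIPE.items())

# A patient's answers for the day, e.g. collected from the pop-up forms:
answers = {"tomatoes": 3, "cucumbers": 1, "onions": 1}
print(f"Score: {quiz(answers)}/{len(RECIPE)}")  # -> Score: 2/3
```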

Both presented examples should be seen as a very brief introduction to the immense potential of gaming strategies in the treatment of medical conditions, such as mental disabilities, and in any therapy which critically depends on cooperation between the patient and the physician. A classic example involves weight loss strategies, where taming one’s food cravings may be rewarded with games appropriate to the patient’s age, gender, and preferences.
Any serious therapeutic gaming should include a physician’s interface
presenting aggregate results. The physician should be able to review the progress
of therapy and generate concise reports, including charts and tables.
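Such an interface could be backed by a simple aggregation step. The following sketch is a hypothetical illustration; the field names and the data are assumptions, not part of any of the systems described above.

```python
from statistics import mean

sessions = [  # per-day game results for one patient (illustrative data)
    {"day": 1, "score": 2, "indicator": 8.1},
    {"day": 2, "score": 3, "indicator": 7.4},
    {"day": 3, "score": 3, "indicator": 6.2},
]

def report(sessions):
    """Condense a patient's session log into an aggregate summary."""
    return {
        "days": len(sessions),
        "mean_score": mean(s["score"] for s in sessions),
        "indicator_trend": sessions[-1]["indicator"] - sessions[0]["indicator"],
    }

print(report(sessions))  # negative trend = indicator approaching normal range
```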
Index
3D graphics 1, 2, 3, 4, 5, 6, 7

Additive Manufacturing 1
aggregate results 1
anatomical atlas 1
animation 1–2, 3, 4
applications of biocybernetic models 1
automatic control 1, 2, 3

big data processing 1
biochemical communication hubs 1
biochemical pathways 1
biological network databases 1
biological networks 1
biomedical applications 1
biotic games 1
bones 1, 2–3, 4

cancer 1, 2, 3
cardiac anomalies 1
chemotherapy 1, 2
clinical inter-professional education 1
clinical reasoning 1
cognitive computing 1
cognitive training with games 1
compressive stress 1–2
computer-assisted design 1
computer animation 1, 2
computer model 1
computer simulation 1
conformational space 1
conscious perceptions 1, 2, 3
cooperation 1, 2, 3, 4
coordination 1, 2, 3, 4, 5
crouching 1, 2
curricular integration 1

data acquisition 1
data processing 1
decision systems 1
DICOM 1, 2, 3, 4
Digital Light Processing 1
digitalization 1
disease self-management with games 1
DNA replication 1
drug administration regime 1, 2

E-learning 1
electrical impulse 1
Electron Beam Melting 1
elementary event 1, 2, 3
encoding of signal 1, 2
equilibrium 1, 2
exergames 1
Experiential learning theory 1
exponential growth 1, 2, 3, 4

feedback inhibition 1, 2, 3
feedback loop 1–2, 3–4, 5, 6, 7, 8
Figure/ground separation 1–2, 3, 4, 5
fitness games 1
full victory 1

game-based learning 1
game-informed learning 1
gamification 1
gamma oscillations 1, 2, 3
gap junctions 1–2, 3–4, 5, 6, 7–8
gene expression sets 1
gene profiles 1
graph 1
graphic design 1–2

heart contractions 1
heart rate 1
heart textures 1
hormonal signalling 1, 2, 3

immersive clinical environments 1
in silico 1
information entropy 1
informationless 1
initial conditions 1, 2, 3, 4
integrate and fire neurons 1, 2, 3, 4, 5
Interactive patient scenario 1
interactive virtual patients 1
Internal and external geometries 1
investment in information 1
investment in energy 1

jigsaw puzzle as the award 1


knowledge representation 1

lateral coupling 1–2, 3–4, 5–6
liquid-based technology 1

machine learning 1
medical applications 1
medical education 1
medical image processing 1, 2, 3
medical imaging 1
medical professional training with games 1
medical simulations 1
memorization skills 1, 2
message delivery 1
mobile application 1, 2, 3, 4, 5, 6
mobile zone 1, 2, 3, 4
motion 1, 2, 3
motion capture 1
multimedia learning 1
muscles 1, 2, 3, 4–5

network analysis 1
network models 1
networks via literature mining 1

one bit 1
opponent 1
organ simulation 1
organism and cell relation 1, 2–3, 4
pain management with games 1
patient’s medical record 1
pedagogy 1
pelvic tumor reconstruction 1
persuasive games 1
pervasive sensing and monitoring 1
phase of the cycle 1
pop-out effect 1, 2
probability 1
proliferating cells 1–2, 3, 4
prosocial games 1

quiescent cells 1, 2, 3, 4

Ramachandran plot 1
Rapid Prototyping packages 1, 2
reconstruction following blowout fracture 1
rehabilitation with games 1
resistive coupling 1, 2, 3, 4
reverse engineering 1, 2
script concordance test 1
Selective Laser Melting 1
Selective Laser Sintering 1
self-restraint 1
serious games 1
serious games for health 1
Shannon – information quantity 1
shear stress 1
signal enhancement 1
skin mole 1, 2, 3, 4
solution by energy 1
solution by information 1, 2
spiking neurons 1–2, 3
standards 1
static and dynamic models 1
steady state 1, 2
steering signals 1, 2, 3, 4
stereolithography 1
structure-function 1, 2–3, 4, 5, 6
subject 1–2, 3, 4, 5, 6
surgical education 1
surgical simulator 1, 2–3, 4

tailored health communication 1
team assessment 1
teleconsultation 1
telemedicine 1
telemonitoring 1
therapeutic support 1
therapies in children 1
therapy with games 1, 2
thickness evaluation 1–2
tissue engineering 1
TNM system 1
Topotecan 1
transmedia learning 1
treatment regimen 1
tumor 1, 2, 3, 4

vascular structures 1, 2, 3
velocity 1–2
video game-related health problems 1
virtual agent 1, 2
virtual clinical worlds 1
virtual patient system 1
virtual patients 1, 2, 3
virtual reality 1, 2
Virtual worlds 1
visual attention 1–2, 3
visual communication 1–2, 3
visual perception 1, 2
visual queries 1
visual search strategy 1–2
visual space 1
visualization 1, 2, 3, 4, 5, 6
Footnotes
6 Simulation-based analysis of musculoskeletal system properties

1 This software has been developed by Joanna Jaworek-Korjakowska, Eliasz Kańtoch, Janusz Miller, Tomasz Pięciak and Jaromir Przybyło.
2 This model (as well as other models of simple biological structures, described
later in this book) is based on John Maynard Smith’s Mathematical Ideas in
Biology (Cambridge University Press, 1968).
3 For a running animal, the gait cycle consists of the stance phase, where the leg
rests against the ground and muscles exert a force which accelerates the body
in a slightly upward direction, and the swing phase in which the body of the
running animal moves in a ballistic arc, much like a bullet fired from a gun.
4 Legs must be swung forward, which requires energy. Once the leg contacts the
ground, muscle action displaces the body in a forward direction (which also
entails an energy expenditure). Furthermore, locomotion requires the body to
overcome external forces, such as air drag, increasing energy consumption.

10 Simulating cancer chemotherapy

5 The authors of the simulation software are Joanna Jaworek-Korjakowska, Eliasz Kańtoch, Janusz Miller, Tomasz Pięciak and Jaromir Przybyło.
