
STUDIES IN HEALTH TECHNOLOGY AND INFORMATICS 145

Advanced Technologies in Rehabilitation
Empowering Cognitive, Physical, Social and Communicative
Skills through Virtual Reality, Robots, Wearable Systems and
Brain-Computer Interfaces

Editors: Andrea Gaggioli, Emily A. Keshner, Patrice L. (Tamar) Weiss, Giuseppe Riva

ISBN 978-1-60750-018-6
ISSN 0926-9630
Studies in Health Technology and Informatics

This book series was started in 1990 to promote research conducted under the auspices of the EC programmes Advanced Informatics in Medicine (AIM) and Biomedical and Health Research (BHR), bioengineering branch. A driving aspect of international health informatics is that telecommunication technology, rehabilitative technology, intelligent home technology and many other components are converging to form one integrated world of information and communication media. The complete series has been accepted in Medline. Volumes from 2005 onwards are available online.

Series Editors:
Dr. O. Bodenreider, Dr. J.P. Christensen, Prof. G. de Moor, Prof. A. Famili, Dr. U. Fors,
Prof. A. Hasman, Prof. E.J.S. Hovenga, Prof. L. Hunter, Dr. I. Iakovidis, Dr. Z. Kolitsi,
Mr. O. Le Dour, Dr. A. Lymberis, Prof. J. Mantas, Prof. M.A. Musen, Prof. P.F. Niederer,
Prof. A. Pedotti, Prof. O. Rienhoff, Prof. F.H. Roger France, Dr. N. Rossing,
Prof. N. Saranummi, Dr. E.R. Siegel and Dr. P. Wilson

Volume 145
Recently published in this series
Vol. 144. B.K. Wiederhold and G. Riva (Eds.), Annual Review of Cybertherapy and
Telemedicine 2009 – Advanced Technologies in the Behavioral Social and
Neurosciences
Vol. 143. J.G. McDaniel (Ed.), Advances in Information Technology and Communication
in Health
Vol. 142. J.D. Westwood, S.W. Westwood, R.S. Haluck, H.M. Hoffman, G.T. Mogel,
R. Phillips, R.A. Robb and K.G. Vosburgh (Eds.), Medicine Meets Virtual
Reality 17 – NextMed: Design for/the Well Being
Vol. 141. E. De Clercq et al. (Eds.), Collaborative Patient Centred eHealth – Proceedings of the
HIT@HealthCare 2008 joint event: 25th MIC Congress, 3rd International Congress
Sixi, Special ISV-NVKVV Event, 8th Belgian eHealth Symposium
Vol. 140. P.H. Dangerfield (Ed.), Research into Spinal Deformities 6
Vol. 139. A. ten Teije, S. Miksch and P. Lucas (Eds.), Computer-based Medical Guidelines and
Protocols: A Primer and Current Trends
Vol. 138. T. Solomonides et al. (Eds.), Global Healthgrid: e-Science Meets Biomedical
Informatics – Proceedings of HealthGrid 2008
Vol. 137. L. Bos, B. Blobel, A. Marsh and D. Carroll (Eds.), Medical and Care Compunetics 5
Vol. 136. S.K. Andersen, G.O. Klein, S. Schulz, J. Aarts and M.C. Mazzoleni (Eds.), eHealth
Beyond the Horizon – Get IT There – Proceedings of MIE2008 – The XXIst
International Congress of the European Federation for Medical Informatics

Advanced Technologies
in Rehabilitation
Empowering Cognitive, Physical, Social and Communicative
Skills through Virtual Reality, Robots, Wearable Systems and
Brain-Computer Interfaces

Edited by
Andrea Gaggioli
Catholic University of Milan, Milan, Italy
Istituto Auxologico Italiano, Milan, Italy

Emily A. Keshner
Temple University, Philadelphia, USA

Patrice L. (Tamar) Weiss
University of Haifa, Haifa, Israel
and
Giuseppe Riva
Catholic University of Milan, Milan, Italy
Istituto Auxologico Italiano, Milan, Italy

Amsterdam • Berlin • Tokyo • Washington, DC


© 2009 The authors and IOS Press.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system,
or transmitted, in any form or by any means, without prior written permission from the publisher.

ISBN 978-1-60750-018-6
Library of Congress Control Number: 2009927468

Publisher
IOS Press BV
Nieuwe Hemweg 6B
1013 BG Amsterdam
Netherlands
fax: +31 20 687 0019
e-mail: order@iospress.nl

Distributor in the UK and Ireland:
Gazelle Books Services Ltd.
White Cross Mills
Hightown
Lancaster LA1 4XS
United Kingdom
fax: +44 1524 63232
e-mail: sales@gazellebooks.co.uk

Distributor in the USA and Canada:
IOS Press, Inc.
4502 Rachael Manor Drive
Fairfax, VA 22032
USA
fax: +1 703 323 3668
e-mail: iosbooks@iospress.com

LEGAL NOTICE
The publisher is not responsible for the use which might be made of the following information.

PRINTED IN THE NETHERLANDS


Advanced Technologies in Rehabilitation
A. Gaggioli et al. (Eds.)
IOS Press, 2009
© 2009 The authors and IOS Press. All rights reserved.

INTRODUCTION
The proportion of the world population over 65 years of age is climbing. Life expectancy in this age group is increasing, and disabling illnesses now occur later in life, so the burden on the working-age population to support the health care costs of aging populations continues to increase. These demographic shifts portend progressively greater demands for cost-effective health care, including long-term care and rehabilitation. The most influential change in physical rehabilitation practice over the past few decades has been the rapid development of new technologies that enable clinicians to provide more effective therapeutic interventions.

New rehabilitation technologies can provide more responsive treatment tools or augment the therapeutic process. However, a lack of education about technological advances, together with clinicians' apprehensions about the role of technology in treatment delivery, puts us at risk of losing the benefit of an essential partner in achieving successful outcomes with the physically disabled and aging population.
There are two reasons that may explain why rehabilitation practitioners do not play an
integral role in the development and evaluation of these new technologies. First, the
engineers who develop these technologies do not recognize the value they could derive
by consulting with rehabilitation professionals in order to make their machine-user
interfaces more efficient, user friendly, and effective for specific disabilities. Second,
many rehabilitation professionals are uncomfortable with technology and fear that it
may take the place of individualized interactions with patients.

Funding challenges, a lack of public awareness about technology's potential, a shortage of trained experts, and poor collaboration among researchers, clinicians, and users often explain the absence of clinical trials that demonstrate the value of near-term and future rehabilitation applications. For technology transfer to succeed, we need to establish collaborative interactions in which the goals of each discipline overlap with the skills and goals of the other fields of endeavor and of the consumer. The rapid rise of technological development is pushing the marketplace, and it is essential that rehabilitation specialists oversee the quality and validity of these new applications before they reach the consumer.

It is clear from the chapters in this book that improvements in technology depend on
interdisciplinary cooperation among neuroscientists, engineers, computer programmers,
psychologists, and rehabilitation specialists, and on adoption and widespread applica-
tion of objective criteria for evaluating alternative methods. The goal of this book is to
bring ideas from several different disciplines in order to examine the focus and aims
that drive rehabilitation intervention and technology development.

Specifically, the chapters in this book address the questions of what research is currently taking place to further develop applied rehabilitation technology and how we have been able to modify and measure responses in both healthy and clinical populations using these technologies. In the following sections we highlight some of the issues raised about emergent technologies and briefly describe the chapters in this book dedicated to addressing these issues.

1. Does Training with Technology Add to Functional Gains?

Before we can develop a successful intervention, we need to determine what the end
goal is. A number of different therapeutic technologies are already available for use in
clinics, but their value to the treatment program is not well defined. Developers and
clinicians must consider whether a technological device better targets diagnostic or
therapeutic interventions. Does it serve as an extension or repetition of conventional
therapeutic interventions? Do we want it to perfectly replicate the actions of a therapist
or to assist or augment the actions of the therapist? For example, as stated by Reinkensmeyer in his chapter, there has been a rapid increase in the number of robotic devices being developed to assist in movement rehabilitation, yet it is still not well understood how these devices enhance movement recovery, or whether they have inherent therapeutic value attributable to their robotic properties. Chapters by Frisoli et al. and Piron et al. present results of clinical trials demonstrating improvements in functional outcomes on standard clinical scales compared with more traditional clinical interventions, which suggests the value of adding technology to therapeutic interventions.

2. Are There Rules That Govern Recovery of Function?

Are learning rules for recovery similar to those for skill acquisition? In particular,
should we be concerned mostly with error reduction or feedback enhancement? If we
are concerned with recognition of movement error, do we try to increase or decrease
that error for learning? How do we instruct patients to attend not only to the error, but
also to their own kinematics? If functional recovery depends on plasticity of the central
nervous system, can the use of technology enhance this plasticity? If we are attempting to promote plastic changes in the nervous system, then motor learning principles most likely should be adhered to, and rules for learning need to be defined, including the optimal length and frequency of the intervention and the role that interference plays in learning. Cameirão et al. use virtual reality to engage patients in task-specific training scenarios that adapt to their performance, thereby allowing individualized training of graded difficulty and complexity. Deutsch provides an overview of virtual reality gaming-based technologies to improve the motor and cognitive elements required for ambulation and mobility in different patient populations. Levin et al. and Merians et al. demonstrate how movement retraining can be optimized by combining virtual reality with haptic devices when important motor learning elements such as repetition, varied task practice, performance feedback and motivation are incorporated. Riva et al. discuss the development of a new open-source system that uses the principles of motor learning within a real-life context in order to increase generalization of recovered motor and
cognitive behaviours. Using a combination of robotics and virtual reality, Sanguineti et al. demonstrated functional gains by tailoring their intervention to the different degrees of impairment and adapting the intervention as performance changed, thereby exploiting the nervous system's capacity for sensorimotor adaptation.

3. Using the Body’s Own Signals to Augment Therapeutic Gains

Another rapidly advancing area of technology for rehabilitation is the application of the
individual’s own residual sensory and motor signals to augment function. Although
wheelchairs are still the most popular assistive device for patients with spinal cord inju-
ries and disabling neurological conditions, many users encounter difficulties in control-
ling their powered wheelchairs. The wheelchair represents an assistive device that, in
large part, requires the person to adapt to the technology rather than having the tech-
nology fit the abilities of the individual. Bonato discusses the emerging use of minia-
ture sensors that can be worn by the patient to measure and transmit information about
physiologic and motor functions. Carabalona et al. explore research on brain-computer interfaces and discuss how technologies that are driven by, or access, the signals initiated by each patient can support activity in their environments.

4. Technology Incorporates Cognition and Action

Clinicians often voice concerns about using technological interventions because they
appear to replace the human interaction which is believed to be a prime factor in the
success of rehabilitation programs. Rehabilitation clinicians work with patients using a
combination of verbal, visual, and physical interaction as well as a variety of treatment
tools and techniques. Delivering equivalent interventions to patients through techno-
logical devices presents significant obstacles, but also presents numerous opportunities
to enhance the quality, consistency, and documentation of care received. Several chap-
ters in this book explore how rehabilitation technology offers the capacity to individu-
alize treatment approaches by monitoring the specificity and frequency of feedback,
providing standardization of assessment and training, and presenting treatment within a
functional, purposeful and motivating context. Antonietti presents the field of music
therapy as a tool of the mind, using cognition and emotion as the avenue towards ac-
complishing goals for rehabilitation. Gaggioli et al. demonstrate how virtual reality can
be successfully used to support motor imagery techniques for mental practice in stroke
rehabilitation. Keshner and Kenyon discuss how cognitive processes such as perception
and spatial orientation can be accessed through virtual reality for the assessment and
rehabilitation of perceptual-motor disorders.

5. Technology Enhances the Impact of Rehabilitation Programs

One of the greatest challenges for healthcare in the coming decade will be providing access to care for the increasing numbers of individuals who are unable to travel to rehabilitation facilities or who do not have local rehabilitation facilities that provide the health maintenance and extended care they require. Additionally, most of the responsibility for caring for individuals with physical or psychological disabilities will fall on their families or on health care aides who do not have the training to provide wellness and rehabilitation
interventions. The chapters in this book that address improved access to care and extending the reach of medical rehabilitation service delivery all emphasize the importance of human factors and user-centered design in the planning, development, and implementation of their systems. Brennan et al. present a brief history of telerehabilitation and telecare and offer an overview of the technology used to provide
these remote rehabilitation services. Mataric et al. demonstrate how combining the
technology of non-contact socially assistive robotics and the clinical science of neu-
rorehabilitation and motor learning can promote home-based rehabilitation programs
for stroke and traumatic brain injury. Weiss and Klinger discuss the practical and ethi-
cal considerations of using virtual reality for multiple users in co-located settings, sin-
gle users in remote locations, and multiple users in remote locations.

6. Summary

Although new technologies and applications are rapidly emerging in the area of rehabilitation, there are still issues that must be addressed before they can be used both effectively and economically. First, we need to demonstrate through clinical trials that these devices are effective. Second, we must determine how to build devices cheaply
enough for mass use. Lastly, we need sufficiently educated physicians and therapists to
drive the technology development and applications. Although considerable engineering
knowledge is required to understand the potential capabilities of the various technolo-
gies, engineering alone will not determine the usefulness of these systems. The chapters we have included in this book clearly demonstrate that in order to design appropriate system features and successful interventions, developers and users need to be familiar with the scientific rationale for motor learning and motor control, as well as the motor impairments presented by different clinical populations. Ultimately, the impact of these new technologies will depend very much on mutual communication and collaboration among clinicians, engineers, scientists, and the people with disabilities whom the technology will most directly affect.

Emily A. Keshner
Temple University
Philadelphia, PA, USA

W. Zev Rymer
Northwestern University
Chicago, Illinois, USA

CONTRIBUTORS
Sergei V. ADAMOVICH
Department of Biomedical Engineering, New Jersey Institute of Technology, NJ, USA
Sergei Adamovich received his Ph.D. degree in physics and mathematics from the Moscow Institute of Physics and Technology. He is currently with the Department of Biomedical Engineering at the New Jersey Institute of Technology, USA. His research is funded by the National Institutes of Health and by the National Institute on Disability and Rehabilitation Research.

Michela AGOSTINI
Laboratory of Robotics and Kinematics, I.R.C.C.S. San Camillo Venezia, Padova, Italy
Michela Agostini obtained degrees in Motor Science and in Physical Therapy at the University of Padova. Her studies focus on the clinical application of virtual reality and telerehabilitation systems for motor recovery after neurological injury, with specific interest in the motor learning principles involved in human–machine interaction.

Alessandro ANTONIETTI
Department of Psychology, Catholic University of the Sacred Heart, Milan, Italy
Alessandro Antonietti is Full Professor of Cognitive Psychology and head of the Department of Psychology at the Catholic University of the Sacred Heart in Milan. He has investigated the role played by media in thinking processes and is interested in the application of cognitive issues in the field of education and rehabilitation.

Massimo BERGAMASCO
PERCRO Laboratory, Scuola Superiore Sant’Anna, Pisa, Italy
Massimo Bergamasco is Full Professor of Applied Mechanics at Scuola Superiore
Sant’Anna and the current coordinator of the IP EU project SKILLS. His research ac-
tivity deals with the study and development of haptic interfaces for the control of the
interaction between humans and Virtual Environments.

Sergi BERMÚDEZ I BADIA
Institute of Audiovisual Studies, Universitat Pompeu Fabra, Barcelona, Spain
Dr. Sergi Bermúdez i Badia is a PostDoc and head of the Robotic Systems laboratory of SPECS at the Institute of Audiovisual Studies of the Universitat Pompeu Fabra. He received his Master's degree in telecommunications engineering from the Universitat Politècnica de Catalunya (UPC) and his PhD from the Swiss Federal Institute of Technology Zürich (ETHZ).

David BRENNAN
Center for Applied Biomechanics and Rehabilitation Research, National Rehabilitation
Hospital, Washington DC, USA
David Brennan, MBE, is a Senior Research Engineer at the National Rehabilitation Hospital in Washington, DC. He has worked for over 10 years on telerehabilitation research and development projects with funding from the National Institutes of Health and the United States Departments of Education and Defense.

Simon BROWNSELL
School of Health and Related Research, University of Sheffield Regent Court, Sheffield,
UK
Dr. Brownsell is a Research Fellow at the University of Sheffield, UK. He has 12 years' experience working in telecare and telehealth and a particular interest in developing evidence-based services for older people. He has written over 50 articles, two books, and three book chapters.

Mónica S. CAMEIRÃO
SPECS-Institut Universitari de l’Audiovisual (IUA), Universitat Pompeu Fabra, Barce-
lona, Spain
Mónica Cameirão is a PhD student in the SPECS group at the Universitat Pompeu Fabra in Barcelona. Her main interest is the application of new technologies to rehabilitation, and she is currently working on the development and clinical assessment of interactive systems for the neurorehabilitation of motor impairments such as those caused by stroke.

Roberta CARABALONA
Biomedical Technology Department (Polo Tecnologico), Fondazione Don C. Gnocchi,
Milan, Italy
Roberta Carabalona received her B.Sc. in Biomedical Engineering (1996) from the Politecnico di Milano and her M.Sc. in Biostatistics and Experimental Statistics (2005) from the Università degli Studi di Milano-Bicocca. She is a researcher in the Biosignal Analysis Area at the Biomedical Technology Department of Fondazione Don Carlo Gnocchi (Milan, Italy). Her research interests include bio-signal analysis and brain-computer interfaces.

Maria Chiara CARBONCINI
Department of Neurosciences, University of Pisa, Pisa, Italy
Maria Chiara Carboncini (MD) is responsible for upper limb rehabilitation and kinesiology at the Neurorehabilitation Unit of the University Hospital of Pisa.

Maura CASADIO
Department of Informatics, Systems and Telematics, University of Genoa, Genoa, Italy
Maura Casadio received her Master's degree in Electronic Engineering (2002) from the University of Pisa, Italy, and her Master's degree in Biomedical Engineering and Ph.D. degree in Bioengineering, Material Science and Robotics (2006) from the University of Genoa, Italy. She is now a postdoctoral fellow at the Rehabilitation Institute of Chicago, USA.

Paolo CASTIGLIONI
Biomedical Technology Department (Polo Tecnologico), Fondazione Don C.Gnocchi,
Milan, Italy
Paolo Castiglioni received his Ph.D. in biomedical engineering (1993) from the Politecnico di Milano, Italy. He is coordinator of the Biosignal Analysis Area at the Biomedical Technology Department of Fondazione Don Carlo Gnocchi (Milan, Italy). His research interests include bio-signal analysis, physiological mechanisms of cardiovascular control, gravitational physiology, and brain-computer interfaces.

Mauro DAM
Department of Neurosciences, University of Padova, Padova, Italy
Mauro Dam received a specialization in Neurology in 1979. From 1980 to 1982 he was a Visiting Fellow at the National Institute on Aging, N.I.H., Bethesda, USA. He is currently Associate Professor of Neurology and Scientific Vice President of the Italian Scientific Institutes for Research Hospitalization and Health Care, S. Camillo Hospital, Venice. His research interests include brain metabolism, neuropharmacology, dementia, stroke, and neurorehabilitation.

Judith E. DEUTSCH
Department of Rehabilitation and Movement Sciences, University of Medicine and
Dentistry of New Jersey, USA
Judith E. Deutsch is Professor and Director of Rivers Lab. Her research focuses on the
development and testing of gaming and virtual reality to improve mobility for indi-
viduals post-stroke.

Esther DUARTE OLLER


Servei de Medicina Física i Rehabilitació, Hospital de L’Esperança, Barcelona, Spain
Esther Duarte Oller, MD, is a specialist in Physical Medicine and Rehabilitation since
1987. She is currently the head of the neurological rehabilitation unit in the Physical
Medicine and Rehabilitation Department of the Institut Municipal d’Assistència
Sanitària (IMAS), Hospitals del Mar i de l’Esperança in Barcelona, Spain.
xii

Jon ERIKSSON
Computer Science Department, University of Southern California, Los Angeles, USA
Jon Eriksson is a Master's student at the Computer Science Department, University of Southern California.

Antonio FRISOLI
PERCRO Laboratory, Scuola Superiore Sant’Anna, Pontedera (Pisa), Italy
Antonio Frisoli (Eng., PhD) is Assistant Professor of Applied Mechanics at Scuola Superiore Sant'Anna. He is Associate Editor of the journals IEEE Transactions on Haptics and Presence: Teleoperators and Virtual Environments. His research interests are in the field of robot-assisted rehabilitation, robotics, virtual reality and haptic interfaces.

Andrea GAGGIOLI
Faculty of Psychology, Catholic University of Milan, Milan, Italy
Andrea Gaggioli received an MSc in Psychology from the University of Bologna and a
Ph.D. from the Faculty of Medicine of the University of Milan. He is a researcher at the
Faculty of Psychology of the Catholic University of Milan and senior researcher at the
Applied Technology for Neuro-Psychology Lab of Istituto Auxologico Italiano (Milan,
Italy). He is the founder of Positive Technology, a field that studies how technology
can be used to promote mental and physical wellbeing.

Psiche GIANNONI
School of Medicine, Master program in physiotherapy, University of Genoa, Genoa,
Italy
Psiche Giannoni is a trained physiotherapist, IBITA Advanced Course Bobath Instructor and EBTA Senior Bobath Instructor (country representative). She teaches and organizes basic and advanced courses on the treatment of adults with hemiplegia and children with cerebral palsy. She is a Professor at the University of Genoa Physiotherapy School and an author of one book and about 30 scientific publications.

Furio GRAMATICA
Polo Tecnologico – Biomedical Technology Department, Fondazione Don Carlo Gnocchi ONLUS, Milano, Italy
Furio Gramatica, physicist, is the coordinator of the Biomedical Technology Depart-
ment at Fondazione Don Gnocchi, where he also leads a biophysics and nanomedicine
team. His main scientific interest is the application of nanotechnology to diagnosis and
targeted drug delivery. Formerly, he served as researcher and project manager at CERN
(European Laboratory for Particle Physics, Geneva).

Giovanni GREGGIO
School of Physical Medicine and Rehabilitation, University of Padua, Rovigo, Italy
Giovanni Greggio graduated in Medicine at the University of Padua in 2004, and specialized in Physical Medicine and Rehabilitation in 2009. He took part in the European project "I-Learning" on upper limb rehabilitation after stroke.

Robert KENYON
Department of Computer Science, University of Illinois, Chicago, USA
Robert Kenyon received his Ph.D. in Physiological Optics from the University of Cali-
fornia, Berkeley and is a Professor of Computer Science at the University of Illinois at
Chicago. His research spans the areas of sensory-motor adaptation, effects of micro-
gravity on vestibular development, visuo-motor and posture control, flight simulation,
Tele-immersion, sensory/motor integration for navigation and wayfinding, virtual envi-
ronments, and the melding of robots and virtual reality for rehabilitation.

Emily A. KESHNER
Department of Physical Therapy and Department of Electrical and Computer Engi-
neering, Temple University, Philadelphia, PA, USA
Emily Keshner is Professor and Chair of the Department of Physical Therapy, a Profes-
sor in the Department of Electrical Engineering and Computer Science, and Director of
the Virtual Environment and Postural Orientation Laboratory at Temple University.
She is currently President of the International Society for Virtual Rehabilitation. Her
research focuses on how the CNS integrates multiple sensory demands with the biome-
chanical constraints of postural and spatial orientation tasks.

Evelyne KLINGER
LAMPA, Arts et Metiers ParisTech Angers, Laval, France
Evelyne Klinger, PhD, Eng, is a researcher at Arts et Métiers ParisTech in Laval, France. Her work is dedicated to the design of virtual reality-based methods, concepts and systems for cognitive rehabilitation assessment and intervention. She created the VAP-S, a virtual supermarket for the exploration of executive functions.

Luiz Alberto Manfré KNAUT
School of Rehabilitation, University of Montreal, PR, Brazil
Mr. Knaut is a physiotherapist (B.Sc.) who graduated from the Universidade Tuiuti do
Parana (Brazil) in 2003. He obtained his M.Sc. in Biomedical Sciences from University
of Montreal in 2008. He is currently affiliated with the Hospital Center of Rehabilita-
tion Ana Carolina Xavier and the Coritiba Foot Ball Club (Curitiba-Brazil).

Mindy F. LEVIN
School of Physical and Occupational Therapy, McGill University, Montreal, Quebec,
Canada
Mindy Levin is a researcher and neurological physiotherapist (McGill, 1996). She obtained an MSc (Clinical Sciences, University of Montreal, 1985) and a PhD (Physiology, McGill, 1990). She was a Professor in the School of Rehabilitation (UdeM, 1992-2004) and Director of the Physical Therapy Program (McGill, 2004-08). She holds a Canada Research Chair in Motor Recovery and Rehabilitation.

Eliane C. MAGDALON
Department of Biomedical Engineering, University of Campinas, Campinas, SP, Brazil
Eliane Magdalon obtained a B.Sc. in Physical Therapy from the Methodist University
of Piracicaba in 2000 and her Master’s degree from the University of Campinas in
2004. She is currently completing her PhD in the Department of Biomedical Engineer-
ing (Rehabilitation Engineering) of University of Campinas, Campinas, SP, Brazil.

Maja MATARIĆ
Computer Science Department, University of Southern California, Los Angeles, USA
Maja J. Mataric is Professor of Computer Science and Neuroscience, Director of the
Center for Robotics and Embedded Systems (CRES), and the Viterbi School of Engi-
neering Senior Associate Dean for Research at the University of Southern California.
She received her Ph.D. in Computer Science and Artificial Intelligence at MIT in 1994.
With the goal of getting robots to help people, her research interests include human–
robot interaction and robot control and learning in complex environments.

Sue MAWSON
Center for Health and Social Care Research, Sheffield Hallam University, Sheffield,
UK
Sue Mawson is a Professor of Rehabilitation at Sheffield Hallam University, UK. Her
research focuses on improving quality of life of people with neurological problems,
particularly through exploration of the effectiveness of rehabilitative interventions. She
is a partner in the SMART trial, investigating benefits of technology for stroke rehabili-
tation.

Andrea MENEGHINI
Advanced Technology in Rehabilitation Lab, Padua Teaching Hospital, Rehabilitation Unit, University of Padua, Padua, Italy
Andrea Meneghini, MD, is a physiatrist specialized in Orthopedics and Traumatology.
He is head and founder of the Advanced Technology in Rehabilitation Lab at Padua
Teaching Hospital. He has more than 25 years of clinical and research experience. He
has been studying the use of virtual reality in the rehabilitation of hemiplegia since the
early ’90s.

Alma S. MERIANS
Department of Rehabilitation and Movement Science, University of Medicine and Den-
tistry of New Jersey, NJ, USA
Dr. Alma Merians is Professor and Chairperson of the Department of Rehabilitation
and Movement Sciences. The major focus of her lab is to study basic mechanisms un-
derlying neuromuscular control of human movement and sensorimotor learning, both in
healthy populations and in people with neurological diseases like stroke or cerebral
palsy.

Pietro MORASSO
Dept. of Informatics, Systems, Telematics, University of Genoa, Genova, Italy
Pietro Morasso is full professor of Bioengineering at the University of Genoa. Since 1970 he has been associated with the Neurophysiological laboratory of Emilio Bizzi (MIT). His scientific interests include neural control of movement, motor learning, anthropomorphic robotics, and rehabilitation engineering. He is author and co-author of 7 books and over 300 papers (44 indexed in Medline).

Francesca MORGANTI
Department of Human Science, University of Bergamo, Bergamo, Italy
Francesca Morganti received an MSc in Psychology from Padua University, where she
took a specialization in Neuropsychology and Clinical Psychophysiology. She also
obtained a PhD in Cognitive Science from the University of Turin. Her research fo-
cuses on the application of interactive technologies to experimental psychology and
neuroscience, as well as the study of intersubjectivity from the perspectives of neuro-
science, cognitive science and social cognition.

Francesco PICCIONE
Department of Neurorehabilitation, IRCCS Hospital “San Camillo” Alberoni, Venice,
Italy
Francesco Piccione holds a degree in Medicine and Surgery and completed residency training in
Neurology and Neurophysiopathology. He is currently Director of the Unit of Neurodegenerative
Disorders and Neurophysiopathology at San Camillo Hospital, Venice. He is an expert in EMG,
EEG and evoked potentials, and a scientific researcher in the field of neurophysiology
applied to disability improvement.

Maurizia PIGATTO
Dipartimento di Specialità Medico Chirurgiche, University of Padua, Padua, Italy
Maurizia Pigatto is a chartered Physiotherapist with over 25 years of clinical experience.
She has collaborated with the School of Physiotherapy and the Master of Music Therapy
program at Padua University. She serves as a senior research collaborator at the Advanced
Technology in Rehabilitation Lab at Padua Teaching Hospital.

Lamberto PIRON
Neurorehabilitation Department, I.R.C.C.S. San Camillo Hospital, Venice, Italy
Lamberto Piron is a neurologist. He is the director of the “Cerebro-vascular diseases”
Operative Unit and of the “Kinematics and Robotics” laboratory at I.R.C.C.S. San
Camillo Hospital. His research focuses on the use of virtual environments, robotics and
telerehabilitation for training patients with arm motor impairment after neurological
lesions.

Ilaria POZZATO
Rehabilitation Unit, University of Padua, Padua, Italy
Dr. Ilaria Pozzato is currently undertaking postgraduate training at the Medical School of
Specialization in Physical Medicine and Rehabilitation at Padua University. She graduated
in Medicine at the University of Padua with a thesis on the application of virtual reality
and motor imagery training for upper limb rehabilitation of hemiplegic patients.

David J. REINKENSMEYER
Department of Mechanical and Aerospace Engineering, University of California at
Irvine, CA, USA
David J. Reinkensmeyer received his B.S. degree from the Massachusetts Institute of
Technology and his M.S. and Ph.D. degrees from the University of California at Berke-
ley. He was a research associate at the Rehabilitation Institute of Chicago before join-
ing the University of California at Irvine.

Giuseppe RIVA
Department of Psychology, Catholic University of Milan, Milan, Italy
Giuseppe Riva, Ph.D. is Associate Professor of General Psychology and Communica-
tion Psychology at the Catholic University of Milan, Italy; Director of the Interactive
Communication and Ergonomics of NEw Technologies – ICE-NET – Lab. at the
Catholic University of Milan, Italy, and Head Researcher of the Applied Technology
for Neuro-Psychology Laboratory – Istituto Auxologico Italiano (Milan, Italy). His
research activities focus on methods and assessment tools in psychology and the use
virtual reality in assessment and therapy.

Bruno ROSSI
Neurorehabilitation Unit, Department of Neurosciences, University of Pisa, Pisa, Italy
Bruno Rossi (MD) is Head of the Neurorehabilitation Unit, Department of Neurosci-
ence University Hospital Pisa, and Full Professor of Physical Medicine and Rehabilita-
tion. His research interests include clinical neurophysiology, EMG in neuromuscular
disorders, brain-stem and spinal reflexology, muscle fatigue analysis, clinical neurol-
ogy, psychophysiology of consciousness disorders and neurorehabilitation.

William Zev RYMER


Rehabilitation Institute of Chicago/Department of Physical Medicine and Rehabilita-
tion, Northwestern University, Chicago, Illinois, USA
Dr. W. Zev Rymer was trained in Medicine at Melbourne University, Australia, and
received a PhD from Monash University. After postdoctoral training at NIH and Johns
Hopkins, he was appointed as a Physiology Professor at Northwestern University in
1977, and moved to the RIC in 1989 as the Searle Director of Research.

Vittorio SANGUINETI
Dept Informatics Systems Telematics, University of Genoa and Italian Institute of
Technology, Genoa, Italy
Vittorio Sanguineti was born in Genova, Italy, in 1964. He received a Master's degree in
Electronic Engineering in 1989 and a PhD in Robotics in 1994, both from the University
of Genova. Until 1998 he worked as a post-doctoral fellow at the Institut de la
Communication Parlée, INPG (Grenoble, France); at the Department of Psychology,
McGill University (Montreal, Canada); and at the Department of Physiology, North-
western University (Chicago, USA). Since 1999 he has been an assistant professor at
the Dipartimento di Informatica, Sistemistica e Telematica (DIST) of the University of
Genova.

Valentina SQUERI
Dept Informatics Systems Telematics, University of Genoa and Italian Institute of
Technology, Genoa, Italy
Valentina Squeri received a Master’s Degree in Bioengineering at the University of
Genova in 2006. She is currently a PhD student at the University of Genoa and the Ital-
ian Institute of Technology. Her areas of interest include motor control, motor learning
and their application to robot therapy.

Sandeep SUBRAMANIAN
School of Physical and Occupational Therapy, McGill University, Quebec, Canada
Sandeep Subramanian, MSc, PT is currently enrolled in the PhD program in Rehabili-
tation Sciences at the School of Physical and Occupational Therapy, McGill University.
His research focuses on the use of feedback for motor learning in patients with chronic
stroke and the use of different environments to maximize motor recovery post-stroke.

Adriana TAPUS
Computer Science Department, University of Southern California, Los Angeles, CA, USA
Dr. Adriana Tapus is a research associate at the University of Southern California (USC,
USA) in the Interaction Lab/Robotics Research Lab, Computer Science Department.
She received her Ph.D. in Computer Science from the Swiss Federal Institute of Technol-
ogy, Lausanne (EPFL) in 2005, her M.S. in Computer Science from University Joseph
Fourier, Grenoble, France, in 2002, and her degree of Engineer in Computer Science and
Engineering from the “Politehnica” University of Bucharest, Romania. Her current re-
search interests are socially assistive robotics for post-stroke patients and people suffer-
ing from cognitive impairment and/or Alzheimer's disease, humanoid robotics, ma-
chine learning, and computer vision.

Paolo TONIN
Department of Neurorehabilitation, IRCCS San Camillo S. Polo, Venice, Italy
Paolo Tonin is a neurologist and a physiatrist. He has carried out research in the reha-
bilitation of stroke, Parkinson's disease, multiple sclerosis and traumatic brain injury, with
particular reference to the role of emerging technologies in neurorehabilitation. Dr.
Tonin is a member of the Board of the Italian Society of Neurorehabilitation and of the
Management Committee of the World Federation of Neurorehabilitation.

Eugene TUNIK
Department of Rehabilitation and Movement Science, University of Medicine and Den-
tistry of New Jersey, Newark, NJ, USA
Dr. Tunik completed his degrees in Physical Therapy at Northeastern University and
doctorate at the Center for Molecular and Behavioral Neuroscience at Rutgers Univer-
sity. He studies brain mechanisms involved in motor control and learning in health and
disease and how this information can guide therapeutic interventions.

Andrea TUROLLA
Laboratory of Robotics and Kinematics, I.R.C.C.S. San Camillo Venezia, Noventa Pa-
dovana, Italy
Andrea Turolla is a Physical Therapist. He obtained a Master's Degree in Science of
Rehabilitation Health Profession at the University of Padua. His research focuses on
the application of virtual reality and robotic systems in motor rehabilitation, with spe-
cific interest in the motor learning principles involved in human-machine interaction.

Elena VERGARO
Department of Informatics, Systems and Telematics, University of Genoa, Genoa, Italy
Elena Vergaro received a Master’s degree in Biomedical Engineering (2006), from the
University of Genoa, Italy. She is now a Ph.D. student in Bioengineering at the same
university. Her area of interest is motor control and motor skill learning.

Paul VERSCHURE
Institute of Audiovisual Studies, Universitat Pompeu Fabra, Barcelona, Spain
Paul Verschure is a research professor with the Catalan Institute of Advanced Studies
(ICREA) and the Universitat Pompeu Fabra. Paul uses synthetic and experimental
methods to find a unified theory of mind and brain and applies the outcomes to novel
real-world technologies and quality of life enhancing applications.

Patrice L. (Tamar) WEISS


Department of Occupational Therapy, University of Haifa, Haifa, Israel
Prof. Weiss, an occupational therapist with graduate training in kinesiology, physiology
and biomedical engineering, founded the Laboratory for Innovations in Rehabilitation
Technology with the objective of providing a conceptual and experimental environ-
ment for the formulation and implementation of research related to the development
and evaluation of innovative technologies for rehabilitation.

Carolee J. WINSTEIN
Division of Biokinesiology and Physical Therapy at the School of Dentistry, University
of Southern California, Los Angeles, USA
Carolee J. Winstein, PhD, PT, FAPTA is Professor and Director of Research in Bioki-
nesiology and Physical Therapy at the University of Southern California. She runs an
interdisciplinary research program focused on understanding control, rehabilitation and
recovery of goal-directed movements that emerge from a dynamic brain-behavior sys-
tem in brain-damaged conditions.

Carla Silvana ZUCCONI


Laboratory of Robotics and Kinematics, I.R.C.C.S. San Camillo, Venice, Italy
Carla S. Zucconi is a Physical Therapist. She received a Master’s Degree in Science of
Rehabilitation Health Profession from the University of Padova. Her research focuses
on the application of virtual reality and robotic systems in motor rehabilitation, with
specific interest in the motor learning principles involved in human-machine interac-
tion.

CONTENTS
Introduction, Emily A. Keshner and W. Zev Rymer v
Contributors ix

Section I. Advanced Technologies in Rehabilitation: An Introduction

Chapter 1. Rehabilitation as Empowerment: The Role of Advanced Technologies 3


G. Riva and A. Gaggioli

Section II. Training and Technology as an Aid in Functional Gains

Chapter 2. Robotic Assistance for Upper Extremity Training after Stroke 25


D.J. Reinkensmeyer
Chapter 3. Robotic Assisted Rehabilitation in Virtual Reality with the L-EXOS 40
A. Frisoli, M. Bergamasco, M.C. Carboncini and B. Rossi
Chapter 4. Assessment and Treatment of the Upper Limb by Means of Virtual
Reality in Post-Stroke Patients 55
L. Piron, A. Turolla, M. Agostini, C. Zucconi, P. Tonin, F. Piccione
and M. Dam

Section III. Rules that Govern Recovery of Function

Chapter 5. The Rehabilitation Gaming System: A Review 65


M.S. Cameirão, S. Bermúdez i Badia, E. Duarte Oller and
P.F.M.J. Verschure
Chapter 6. Virtual Reality and Gaming Systems to Improve Walking and
Mobility for People with Musculoskeletal and Neuromuscular
Conditions 84
J.E. Deutsch
Chapter 7. Virtual Reality Environments to Enhance Upper Limb Functional
Recovery in Patients with Hemiparesis 94
M.F. Levin, L.A.M. Knaut, E.C. Magdalon and S. Subramanian
Chapter 8. Virtual Reality to Maximize Function for Hand and Arm
Rehabilitation: Exploration of Neural Mechanisms 109
A.S. Merians, E. Tunik and S.V. Adamovich
Chapter 9. Robot Therapy for Stroke Survivors: Proprioceptive Training and
Regulation of Assistance 126
V. Sanguineti, M. Casadio, E. Vergaro, V. Squeri, P. Giannoni
and P.G. Morasso

Section IV. Using the Body’s Own Signals to Augment Therapeutic Gains

Chapter 10. Advances in Wearable Technology for Rehabilitation 145


P. Bonato
Chapter 11. Brain-Computer Interfaces and Neurorehabilitation 160
R. Carabalona, P. Castiglioni and F. Gramatica

Section V. Technology Incorporates Cognition and Action

Chapter 12. Why Is Music Effective in Rehabilitation? 179


A. Antonietti
Chapter 13. Computer-Guided Mental Practice in Neurorehabilitation 195
A. Gaggioli, F. Morganti, A. Meneghini, I. Pozzato, G. Greggio,
M. Pigatto and G. Riva
Chapter 14. Postural and Spatial Orientation Driven by Virtual Reality 209
E.A. Keshner and R.V. Kenyon

Section VI. Technology Enhances the Impact of Rehabilitation Programs

Chapter 15. Telerehabilitation: Enabling the Remote Delivery of Healthcare,
Rehabilitation, and Self Management 231
D.M. Brennan, S. Mawson and S. Brownsell
Chapter 16. Socially Assistive Robotics for Stroke and Mild TBI Rehabilitation 249
M. Matarić, A. Tapus, C. Winstein and J. Eriksson
Chapter 17. Moving Beyond Single User, Local Virtual Environments for
Rehabilitation 263
P.L. Weiss and E. Klinger

Subject Index 279


Author Index 281
Rehabilitation as Empowerment:
The Role of Advanced Technologies
Giuseppe RIVA a,b and Andrea GAGGIOLI a,b
a Applied Technology for Neuro-Psychology Lab., Istituto Auxologico Italiano, Milan, Italy
b ICE-NET Lab., Catholic University of Sacred Heart, Milan, Italy

Abstract. Rehabilitation is placing increasing emphasis on the construct of
empowerment as the final goal of any treatment approach. This reflects a shift in
focus from deficits and dependence to assets and independence. According to this
approach, rehabilitation should aim to improve the quality of the life of the
individual by means of effective support to his/her activity and interaction. Here
we suggest that advanced technologies can play a significant role in this process.
By enhancing the experienced level of “Presence” - the non-mediated perception
of successfully transforming intentions into action - these emerging technologies
can foster optimal experiences (Flow) and support the empowerment process.
Finally, we describe the “NeuroVR” system (http://www.neurovr.org) as an
example of how advanced technologies can be used to support Presence and Flow
in the rehabilitation process.

Keywords. Empowerment, Rehabilitation, Presence, Virtual Reality, NeuroVR

Introduction

The field of rehabilitation is placing increasing emphasis on the construct of
empowerment as a critical element of any treatment strategy. This construct integrates
perceptions of personal control, participation with others to achieve goals and an
awareness of the factors that hinder or enhance one’s efforts to exert control in one's
life [1, 2]. The emphasis on empowerment reflects a critical shift in rehabilitation: from
a focus on deficits and dependence toward an emphasis on assets and independence.
The International Classification of Functioning, Disability and Health (ICF) of the
World Health Organization [3] defines disability as a “condition in which people are
temporarily or definitively unable to perform an activity in the correct manner and/or at
a level generally considered ‘normal’ for the human being.” In this definition the focus
is not on deficits but on assets: a person is disabled when he/she is not able to fully
exploit his/her relationship with everyday contexts [4].
In this chapter we suggest that the new emerging technologies discussed in the
book – with particular reference to robotics and virtual reality - have the right features
for improving the rehabilitation process. These technologies can improve the quality of
life of the disabled individual through an effective support of his/her activity and
interaction [5].
1. Empowerment in Rehabilitation

“Empowerment” is a term that is becoming very popular in rehabilitation services.


More and more rehabilitation programs claim to “empower” their clients. However, in
practice, few researchers and clinicians have specifically targeted aspects of
empowerment in rehabilitation programs. The main issue up until now has been the
lack of guidelines to assess and enhance empowerment during the rehabilitation
process.
In general, empowerment refers to processes and outcomes relating to issues of
control, critical awareness, and participation [2]. How does this apply to rehabilitation?
According to Zimmerman and Warschausky [6], empowerment in rehabilitation
should provide both a sense of control and the motivation to exercise it, together with
the knowledge and skills that help the patient adapt to and influence his/her own environment.
underlines the role of participation and control, supporting wellness versus illness, and
competence versus deficiency. In this view, the final goal of rehabilitation is to help
patients to become as independent as possible, by developing skills for changing
conditions that pose barriers in their lives.
To put this approach into practice, the next step is the definition of clear
empowerment outcomes. Table 1 provides a brief comparison of empowering
processes, goals and outcomes across the different levels of analysis (intrapersonal,
interactional and social) involved in a typical rehabilitation program.
Our analysis will focus on the first two levels – intrapersonal and interactional. We
believe that it is at these levels that emerging technologies can play a critical role.
The intrapersonal component refers to how patients think about themselves [6].
At the intrapersonal level, the main goals of the rehabilitation process are to help the
individual in gaining control over his/her life. Specifically, the patient needs to recover
his/her decision-making power through full access to information and resources.
How is it possible to evaluate the success of an intrapersonal rehabilitation
strategy? According to the psychological literature, the key outcome variables are [6]:
• self-efficacy: this refers to perceptions about one's ability to achieve the
desired outcomes;
• sense of control: this refers to perceptions about one's ability to regulate
and manage the different domains of one's personal experience.
Table 1. Empowerment outcomes in rehabilitation

Patient (Intrapersonal)
  Process:  Receiving help from the therapist to gain control over his/her life
  Goals:    To have decision-making power; to have access to information and resources
  Outcomes: Self-efficacy; sense of control

Therapist/Caregiver (Interactional)
  Process:  Helping patients and their family to evaluate/understand their actual
            skills/situation; helping patients gain control over their lives
  Goals:    To change perceptions of the patient's competency and capacity to act;
            not to feel alone, to feel part of a group
  Outcomes: Critical awareness; participatory behaviors

Health Care Institution/System (Social)
  Process:  Providing opportunities for patients to develop and practice skills
  Goals:    To effect change in one's life and one's community
  Outcomes: Effective resource management
The interactional component refers to how people think about and relate to their
social environment. This component of any empowering rehabilitation strategy
involves the transactions between people and the environments (family, clinical setting,
work, etc.) that they are involved in. On the one hand, it includes the decision-making
and problem-solving skills necessary to actively engage in one's environment. On the
other, it includes the ability to mobilize and obtain resources.
Again, how is it possible to evaluate the success of an interactional rehabilitation
strategy? According to the psychological literature, the key outcome variables are [6]:
• critical awareness: this refers to one's understanding of the resources
needed to achieve a desired goal, knowledge of how to acquire those
resources, and skills for managing resources once they are obtained;
• participatory behaviors: this refers to one’s social activities affording the
opportunity for individual participation.
An increasing number of empirical studies are addressing empowerment in
rehabilitation. These studies focus on a variety of participatory programs targeting a
broad range of population groups and goals. Few authors, however, have investigated
the role of technology in this process.
In this chapter, we argue that the advanced technologies presented in this book can
enhance this process by supporting the experience of “Presence”, defined as the
“feeling of being there” [7]. The creation of a feeling of Presence can help patients to
cope with their context in an effective and transparent way.
In this view, technologies are used for triggering a broad empowerment process
within the optimal experience induced by a high sense of Presence [8].

2. Advanced Technologies in Rehabilitation: The Role of Presence

In recent years it has been possible to identify a clear trend in the design and
development of rehabilitation technologies: the shift from a general user-centered
approach to a specific activity-centered approach. In this last perspective, the goal of
technology should be the improvement of the quality of life of the individual, through
an effective support of his/her activity and interaction [4]. In this vision,
“…if a person is able to write a paper with a pen and another person is limited in
the pen use but is able to write the same paper using a computer keyboard, none of
them is defined as disabled. On the contrary if both of them will be in a condition in
which the tool, that allows them to write the paper, is not available in a specific
moment they will be both disabled in performing the activity.” (p. 286).
This “compensatory” approach in rehabilitation is usually divided [9] into person-
oriented and environmentally oriented interventions (see Figure 1).
Figure 1. The role of advanced technologies in rehabilitation
Person-oriented interventions include the recruitment of alternate cognitive or
physical resources to achieve a desired outcome. Environmentally oriented
interventions offer external cues to the subject in order to improve his/her handling of
the activity [10]. As noted by Crosson and colleagues [10], environmentally oriented
interventions may be:
“the only practical means for dealing with neurologically based deficits. Although
not ideal, external modification can be effective in many circumstances” (p. 53).
This viewpoint stresses the need to develop technological tools for providing
alternative affordances in planning specific activities. Moreover, as noted by Kirsk and
colleagues [9], any rehabilitation device has to support activity in a transparent way:
“In regard to device features, an ideal intervention will be one that is minimally
intrusive, provides assistance without assuming unnecessary control, and does not
demand of the user an uncharacteristic level of comfort with technological aids.” (p.
201).
In summary, rehabilitation technologies become empowerment tools when they
help people in coping with their context in an effective and transparent way. But how
can we assess whether rehabilitation technologies meet these requirements? A possible
answer to this question is “through Presence”. We will detail this point in the next
section.

2.1 Presence: A First Definition

The term “Presence” entered the general scientific debate in 1992 when Sheridan and
Furness used it in the title of a new journal dedicated to the study of virtual reality
systems and teleoperations: Presence: Teleoperators and Virtual Environments. In the
first issue, Sheridan clearly refers to Presence as an experience elicited by technology
use [11]: the effect felt when controlling real world objects remotely as well as the
effect people feel when they interact with and immerse themselves in virtual
environments.
However, as remarked by Biocca [12], and agreed upon by most researchers in the
area, “while the design of virtual reality technology has brought the theoretical issue of
Presence to the fore, few theorists argue that the experience of Presence suddenly
emerged with the arrival of virtual reality.” Rather, as suggested by Loomis [13],
Presence may be described as a basic state of consciousness: the attribution of
sensation to some distal stimulus, or more broadly to some environment. Due to the
complexity of the topic, and the interest in this concept, different conceptualizations of
Presence have been proposed in the literature.
A first definition of “Presence” is introduced by the International Society of
Presence Research (ISPR). ISPR researchers define “Presence” (a shortened version of
the term “telePresence”) as:
“a psychological state in which even though part or all of an individual’s current
experience is generated by and/or filtered through human-made technology, part or all
of the individual’s perception fails to accurately acknowledge the role of the
technology in the experience” [14].
This definition suggests that rehabilitation technology should provide a strong
feeling of Presence: the more the user experiences Presence in using a rehabilitation
technology, the more transparent it is to the user, and the more it helps the user cope
with his/her context in an effective way.
Nevertheless, the above definition has two limitations. First, what is Presence for?
Why do we experience Presence? As underlined by Lee [15]:
“Presence scholars may find it surprising and even disturbing that there have
been limited attempts to explain the fundamental reason why human beings can feel
Presence when they use media and/or simulation technologies.” (p. 496).
Second, is Presence related to media only? As commented by Biocca [12], and
agreed by most researchers in the area:
“while the design of virtual reality technology has brought the theoretical issue of
Presence to the fore, few theorists argue that the experience of Presence suddenly
emerged with the arrival of virtual reality.”
(online: http://jcmc.indiana.edu/vol3/issue2/biocca2.html)
Recent insights from cognitive sciences suggest that Presence is a
neuropsychological process that results in a sense of agency and control [16-18]. For
instance, Slater suggested that Presence is a selection mechanism that organizes the
stream of sensory data into an environmental gestalt, or perceptual hypothesis, about
the current environment [19, 20].
Within this framework, supported by ecological/ethnographic studies [21-28], any
rehabilitation technology, virtual or real, does not provide undifferentiated information
or ready-made objects in the same way for everyone. It offers different opportunities
and creates different levels of Presence according to its ability to support the users'
intentions.

2.2 Presence: A Second Definition

Recent findings in cognitive science suggest that Presence is a neuropsychological
phenomenon, evolved from the interplay of our biological and cultural inheritance,
whose goal is the enaction of volition: Presence is the perception of successfully
transforming intentions into action (enaction).
Recent research by Haggard and Clark [29, 30] on voluntary and involuntary
movements, provides direct support for the existence of a specific cognitive process
binding intentions with actions. In their words [30]:
“Taken as a whole, these results suggest that the brain contains a specific
cognitive module that binds intentional actions to their effects to construct a coherent
conscious experience of our own agency.” (p. 385).
Varela and colleagues [31] define “enaction” in terms of two intertwined and
reciprocal factors: first, the historical transformations which generate emergent
regularities in the actor's embodiment; second, the influence of an actor's embodiment
in determining the trajectory of behaviors. As suggested by Whitaker [32] these two
aspects reflect two different usages of the English verb “enact”. On the one hand is “to
enact” in the sense of “to specify, to legislate, to bring forth something new and
determining of the future”, as in a government enacting a new law. On the other is “to
enact” in the sense of “to portray, to bring forth something already given and
determinant of the present”, as in a stage actor enacting a role. In line with these two
meanings, Presence has a dual role:
- First, Presence "locates" the self in an external physical and/or cultural space: the
Self is "present" in a space if he/she can act in it.
- Second, Presence provides feedback to the Self about the status of its activity: the
Self perceives the variations in Presence and tunes its activity accordingly.
First, we suggest that the ability to feel “present” in the interaction with a
rehabilitation technology - an artifact - basically does not differ from the ability to feel
“present” in our body. Within this view, “being present” during agency means that (1)
the individual is able to successfully enact his/her intentions, and (2) the individual is able to
locate him/herself in the physical and cultural space in which the action occurs. When
the subject is present during a mediated action (that is, an action supported by a tool),
he/she incorporates the tool in his/her peri-personal space, extending the action
potential of the body into virtual space [33]. In other words, through the successful
enaction of the actor’s intentions using the tool, the subject becomes “present” in the
tool.
The process of Presence can be described as a sophisticated but covert form of
monitoring action and experience, transparent to the self but critical for its existence.
The result of this process is a sense of agency: the feeling of being both the author and
the owner of one’s own actions. The more intense the feeling of Presence, the higher
the quality of experience perceived during the action [34]. However, the agent directly
perceives only the variations in the level of Presence: breakdowns and optimal
experiences [16].
Why do we monitor the level of Presence? Our hypothesis is that this high-level
process has evolved to control the quality of action and behaviors.
According to Csikszentmihalyi [35, 36], individuals preferentially engage in
opportunities for action associated with a positive, complex and rewarding state of
consciousness, defined by him as “optimal experience” or “Flow”. The key feature of
this experience is the perceived balance between great environmental opportunities for
action (challenges) and adequate personal resources in facing them (skills). Additional
characteristics are deep concentration, clear rules for and unambiguous feedback from
the task at hand, loss of self-consciousness, control of one’s actions and environment,
positive affect and intrinsic motivation. Displays of optimal experience can be
associated with various daily activities, provided that individuals perceive them as
complex opportunities for action and involvement. An example of Flow is the case
where a professional athlete is playing exceptionally well (positive emotion) and
achieves a state of mind where nothing else is attended to but the game (high level of
Presence). From the phenomenological viewpoint, both Presence and Flow are
described as absorbing states, characterized by a merging of action and awareness, loss
of self-consciousness, a feeling of being transported into another reality, and an altered
perception of time. Further, both Presence and optimal experience are associated with
high involvement, focused attention and high concentration on the ongoing activity.
Starting from these theoretical premises, can we design rehabilitation technologies that
elicit a state of Flow by activating a high level of Presence (maximal Presence) [4, 37,
38]? This question will be addressed in the following section.

2.3 The Presence Levels

How can we achieve a high level of Presence during interaction with a rehabilitation
technology? The answer to this question requires a better understanding of what
intentions are.
According to folk psychology, the intention of an agent performing an action is
his/her specific purpose in doing so. However, the latest cognitive studies clearly show
that any action is the result of a complex intentional chain that cannot be analyzed at a
single level [39-41].
Pacherie identifies three different “levels” or “forms” of intentions, characterized
by different roles and contents: distal intentions (D-intentions), proximal intentions (P-
intentions) and motor intentions (M-intentions):
• D-intentions (Future-directed intentions). These high-level intentions act both
as intra- and interpersonal coordinators, and as prompters of practical
reasoning about means and plans: “helping my elderly father” is a D-intention,
the object that drives the subject's activity of “finding a nurse” (see Figure 2).
• P-intentions (Present-directed intentions). These intentions are responsible for
high-level (conscious) forms of guidance and monitoring. They have to ensure
that the imagined actions become current through situational control of their
unfolding: “posting a request for a nurse” is a P-intention driving the action “going to the hospital’s bulletin board” (see Figure 2);
• M-intentions (Motor intentions). These intentions are responsible for low-level
(covert) forms of guidance and monitoring: we may not be aware of them and
have only partial access to their content. Further, their contents are not
propositional: in the operation “putting the post on the board” (see Figure 2),
the motor representations required to move the arm are M-intentions.
Each intentional level has its own role: the rational (D-intentions), situational (P-intentions) and motor (M-intentions) guidance and control of action. They form an
intentional cascade [40, 41] in which higher intentions generate lower intentions.
Figure 2. The intentional cascade
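The intentional cascade can be sketched as a simple recursive data structure, using the chapter's own example. This is purely illustrative: the class and field names are ours, not part of any cited model's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Intention:
    level: str                 # "D" (distal), "P" (proximal) or "M" (motor)
    content: str               # what the intention is directed at
    children: List["Intention"] = field(default_factory=list)

    def cascade(self, depth=0):
        """Walk the cascade top-down: higher intentions yield first,
        then the lower intentions they generate."""
        yield depth, self.level, self.content
        for child in self.children:
            yield from child.cascade(depth + 1)

# The chapter's example: a D-intention generates a P-intention,
# which in turn generates an M-intention.
d = Intention("D", "helping my elderly father", [
    Intention("P", "posting a request for a nurse", [
        Intention("M", "moving the arm to put the post on the board"),
    ]),
])

for depth, level, content in d.cascade():
    print("  " * depth + f"{level}-intention: {content}")
```

The nesting makes the key property of the cascade explicit: lower intentions exist only as children of the higher intentions that generated them.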
We previously defined Presence as the perception of successfully transforming
intentions into action (enaction). However, even if we experience a single feeling of
Presence during the enaction of our intentions, the three-level structure of the
intentional cascade suggests that Presence - on the process side - can be divided into
three different layers or sub-processes (for a broader and more in-depth description see
[21, 42]), described in Figure 3:
- Extended Presence (D-Intentions/Activities): The role of “Extended Presence”
is to verify the relevance to the Self of possible/future events in the external
world (Self vs. possible/future external world). The more the Self is able to
identify mediated affordances (that cannot be enacted directly) in the external
world, the higher the level of extended Presence will be.
- Core Presence (P-Intentions/Actions): This can be described as the activity of
selective attention made by the Self on perceptions (Self vs. present external
world). The more the Self is able to identify direct affordances (that can be
enacted directly with a movement of the body) in the external world, the
higher the level of core Presence will be.
- Proto Presence (M-Intentions/Operations): This is the process of
internal/external separation related to the level of perception-action coupling
(Self vs. non-Self). The more the Self is able to use the body for enacting direct
affordances in the external world, the higher the level of proto Presence will
be.
As underlined by Dillon and colleagues [43], converging lines of evidence from
diverse perspectives and methodologies support this three-layered view of Presence. In
their analysis they identify three dimensions common to all the different perspectives: a "spatial" dimension (M-intentions), a "naturalness" dimension relating to how consistent the media experience is with the real world (P-intentions), and an "engagement" dimension (D-intentions).
Figure 3. Activity and Presence
The role of the different layers is related to the complexity of the activity performed: the more complex the activity, the more layers are needed to produce a high level of Presence (Figure 3).
At the lower level – operations – proto Presence is enough to induce a satisfying
feeling of Presence. At the higher level – activity – the media experience has to support
all three layers.
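This rule can be restated as a toy lookup: operations need only proto Presence, full activities need all three layers. The middle row (actions requiring proto plus core Presence) is our inference from the hierarchy, not an explicit claim of the chapter.

```python
# Layers of Presence a medium must support at each level of activity.
LAYERS_REQUIRED = {
    "operation": {"proto"},                      # M-intentions
    "action":    {"proto", "core"},              # + P-intentions (assumed)
    "activity":  {"proto", "core", "extended"},  # + D-intentions
}

def supports_maximal_presence(level, supported_layers):
    """True if the medium supports every layer the level requires."""
    return LAYERS_REQUIRED[level] <= set(supported_layers)

# A medium offering only perception-action coupling (proto Presence)
# is enough for operations, but not for a full activity.
print(supports_maximal_presence("operation", {"proto"}))          # True
print(supports_maximal_presence("activity", {"proto", "core"}))   # False
```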
As suggested by Juarrero [44], high-level intentions (Future Intentions/Objects) channel future deliberation by narrowing the scope of alternatives to be subsequently considered (cognitive reparsing). In practice, once the subject forms an intention, not every logical or physically possible alternative remains open, and those that do are encountered differently: once I decide to do A, non-A is no longer a viable alternative and, should it happen, I will consider it a breakdown [45].

1.4 How to design rehabilitation technologies that foster Presence and Flow

This perspective allows us to predict the mediated situations in which the feeling of Presence will be enhanced or reduced.
First, minimal Presence results from an almost complete lack of integration of the
three layers discussed above, such as is the case when attention is mostly directed
towards contents of extended consciousness that are unrelated to the present external
environment (e.g., I’m in the office trying to write a letter but I’m thinking about how
to find a nurse for my father). By the same reasoning, maximal Presence arises when
proto Presence, core Presence and extended Presence are focused on the same external
situation or activity [28]. Maximal Presence thus results from the combination of all
three layers with a tight focus on the same content. This experience is supported by a
rehabilitation technology that offers an optimal combination of form and content, able
to support the activity of the user in a meaningful way.
The concepts described above are summarized by the following points:
1) The lower the level of activity, the easier it is to induce maximal Presence. The object of an activity is wider and less targeted than the goal of an action, so its identification and support are more difficult for the designer of a rehabilitation technology. The easiest level to support is the operation: its conditions are more “objective” and predictable, being related to the characteristics (constraints and affordances) of the artifact used. It is easier to automatically open a door in a virtual environment than to help the user find the right path to the exit. At the lower level – operations – proto Presence is enough to induce a satisfying feeling of Presence; at the higher level – activity – the media experience has to support all three layers.
2) We have maximal Presence when the environment is able to support the full
intentional chain of the user: this can explain i) the success of the Nintendo
Wii over competing consoles (it is the only one to fully support M-intentions);
ii) the need for a long-term goal to induce a high level of Presence after many
experiences of the same rehabilitation technology.
3) Subjects with different intentions will not experience the same level of
Presence, even when using the same rehabilitation technology: this means that
understanding and supporting the intentions of the user will improve his/her
Presence during the interaction with the technology.
4) Action is more important than perception: I’m more present in a perceptually
poor virtual environment (e.g. a textual MUD) where I can act in many
different ways than in a real-like virtual environment where I cannot do
anything.

2. Transformation of Flow in Rehabilitation using Advanced Technologies

As we have seen previously, authentic rehabilitation implies the active participation of patients in their contexts, their exposure to opportunities for action and development, and their freedom to select the opportunities which they perceive as most challenging and meaningful [46, 47]. According to this vision, a critical asset potentially offered by advanced technologies to the rehabilitation process is that they can foster optimal (Flow) experiences triggering the empowerment process [48].
Optimal experiences promote individual development. As underlined by
Massimini and Delle Fave, [49]:
“To replicate it, a person will search for increasingly complex challenges in the
associated activities and will improve his or her skill, accordingly. This process has
been defined as cultivation; it fosters the growth of complexity not only in the
performance of Flow activities but in individual behavior as a whole.” (p. 28).
This process can be also activated after a major trauma. As noted by Delle Fave
[50], to cope with dramatic changes in daily life and to access environmental
opportunities for action, individuals may develop a strategy defined as transformation
of Flow: the ability of the subject to use an optimal experience for identifying and
exploiting new and unexpected resources and sources of involvement.
Figure 4. Transformation of Flow
We hypothesize that it is possible to use advanced technologies to activate a
transformation of Flow to be used for rehabilitative purposes [8]. The proposed approach is the following (Figure 4): first, identify an enriched environment that contains functional real-world demands; second, use the technology to enhance the level of Presence of the subject in the environment and to induce an optimal experience; third, allow cultivation by linking this optimal experience to the actual experience of the subject.
It is well known that the sequential development of the brain, and of brain function, is guided by experience. The brain develops and
modifies itself in response to experience. Neurons and neuronal connections (synapses)
change in an activity-dependent fashion. Thanks to specific experiences, the brain can
even relocate functions to new areas if the primary site is destroyed [51, 52]. For
example, stroke victims can regain control over movements through therapy that restrains the abler limb (Constraint-Induced Movement therapy), forcing the brain to establish new circuits to control the limb over which there is little or no control [53, 54]. The only
continuing limitation seems to be that some areas of the brain are only open to
maximum flexibility during short periods of life.
The transformation of Flow approach may thus be able to open new phases of plasticity, improving the subject's possibility of recovery. Some examples of technology-driven transformation of Flow are reported below.

2.1 Multi-Sensory Environment

A first example of the proposed approach is the Multi-Sensory Environment (MSE) method used in the rehabilitation of neurological disabilities, learning disabilities and
older people with dementia [55-57]. The concept of multi-sensory environments
(Snoezelen) was developed in the 1980s at the Haarendael Institute, Holland: MSEs
are purpose-built units or rooms using advanced sensory stimulating equipment that
targets the five senses of sight, hearing, touch, taste and smell. Their goal is the
stimulation of the primary senses to generate pleasurable sensory experiences in an
atmosphere of trust and relaxation without the need for intellectual activity. Exposure
to an MSE occurs through the agency of the caregiver, nurse or therapist who facilitates
the development of a relaxing and supportive environment [58].
The results from a randomized controlled trial (N = 50) showed the efficacy of this
approach in the treatment of older people with dementia [59]. In particular, the use of a
Multi-Sensory Environment appeared to have a greater influence on aspects of
communication in comparison to one-to-one activity and led to improvements in
behavior and mood at a four-week follow-up.
Moreover, positive results were obtained in the treatment of children recovering
from severe brain injury [60] and in the management of Rett disorder [61].
As underlined by Collier [55], the best results in using MSEs are achieved under
transformation of Flow (p. 364):
“…the MSE should include an appropriate level of stimulation that challenges the
individual to reach their maximum potential (sensory stimulation versus sensory
deprivation). The activity should be designed to address individual sensory needs, such
as offering a stronger stimulus if initial attempts are unnoticed, and be offered
alongside familiar activities and routines to enhance sensory awareness. The activity
should occur on a regular basis and offer a ‘just right challenge’ as the person with
brain injury will find it easier to cope with the demands of the environment if adequate
stimulation is provided… Finally, if the complexity of the activity, individual needs, and
MSE demands are matched, engagement in this activity may be achieved.”

2.2 Robots

The development of robots that interact socially with people and assist them in
everyday life has been a long-term goal of modern science [62, 63]. Within this broad
area of research, robotic psychology and robotherapy focus on the psychological
meaning of person–robotic creature communication and its intertwining with psycho-
physiological and social elements. As suggested by Libin & Libin [63]:
“Robotherapy is defined as a framework of human–robot interactions aimed at the
reconstruction of a person’s negative experiences through the development of coping
skills, mediated by technological tools in order to provide a platform for building new
positive experiences.” (p.370).
Recent research suggests that, after more than 25 years of work, low-level information, such as animacy, contingency, and visual appearance, can trigger long-term bonding and socialization both in children [64] and in the elderly [65]: rather than users losing interest, the interaction between users and the robot improved over time.
Interestingly, the results highlighted the particularly important role that haptic
behaviors (motor intentions) played in the socialization process [64]: the introduction
of a simple touch-based contingency had a breakthrough effect in the development of
social behaviors toward the robot.
Also, as predicted by our model, the ability to address all levels of Presence in the interaction with the rehabilitative robot helps keep patients' interest high during execution of the assigned tasks [66].

2.3 Virtual Reality

The basis of the Virtual Reality (VR) idea is that a computer can synthesize a three-
dimensional (3D) graphical environment from numerical data [67]. Using visual, aural
or haptic devices, the human operator can experience the environment as if it were a
part of the world. A VR system is the combination of the hardware and software that
enables developers to create VR applications. The hardware components receive input
from user-controlled devices and convey multi-sensory output to create the illusion of a
virtual world. The software component of a VR system manages the hardware that makes up the VR system.
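The hardware/software split just described boils down to a per-frame loop: poll the user-controlled devices, let the software update the virtual world, and convey multi-sensory output. A minimal, engine-agnostic sketch (all function names are ours, not from any VR toolkit):

```python
def vr_loop(poll_inputs, update_scene, render_outputs, n_frames):
    """One simulation cycle per frame: read the user-controlled
    devices (hardware), update the 3D world (software), then convey
    visual, aural or haptic output (hardware)."""
    for _ in range(n_frames):
        events = poll_inputs()        # input from user-controlled devices
        state = update_scene(events)  # software manages the virtual world
        render_outputs(state)         # output creates the illusion

# Stand-in callables for illustration; a real system would plug in
# tracker polling, a scene-graph update, and display/audio/haptics.
log = []
vr_loop(poll_inputs=lambda: {"head": (0, 0, 0)},
        update_scene=lambda ev: log.append(ev) or len(log),
        render_outputs=lambda state: None,
        n_frames=3)
```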
Many studies using VR underline the link between this technology and optimal
experiences. However, given the limited space available, we focus on the ones that are
most relevant to the contents of this chapter.
A first set of results comes from the work of Gaggioli [46, 47], who compared the experience reported by a user immersed in a virtual environment with the
experience reported by the same individual during other daily situations. To assess the
quality of experience the author used a procedure called Experience Sampling Method
(ESM), which is based on repeated on-line assessments of the external situation and
personal states of consciousness [47]. Results showed that the VR experience was the
activity associated with the highest level of optimal experience (22% of self-reports).
Reading, TV viewing and using other media – both in the context of learning and of
leisure activities – obtained lower percentages of optimal experiences (15%, 8% and
19% of self-reports respectively).
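As a hedged illustration of how such percentages can be derived: in the ESM/Flow literature, a self-report is commonly classed as an optimal experience when both perceived challenge and perceived skill are above the respondent's own mean. The sample data below are invented; only the classification rule reflects the method.

```python
def percent_optimal(reports):
    """reports: list of (activity, challenge, skill) self-reports.
    Returns, per activity, the percentage of all reports that were
    optimal experiences (challenge AND skill above the person's mean)."""
    mean_c = sum(c for _, c, _ in reports) / len(reports)
    mean_s = sum(s for _, _, s in reports) / len(reports)
    counts = {}
    for activity, c, s in reports:
        if c > mean_c and s > mean_s:
            counts[activity] = counts.get(activity, 0) + 1
    return {a: 100 * n / len(reports) for a, n in counts.items()}

# Invented sample: five ESM beeps answered during different activities.
sample = [("VR", 8, 8), ("TV", 2, 3), ("VR", 7, 9),
          ("reading", 5, 5), ("TV", 1, 2)]
print(percent_optimal(sample))
```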
To verify the link between advanced technologies and optimal experiences, the
“V-STORE Project” investigated the quality of experience and the feeling of Presence
in a group of 10 patients with Frontal Lobe Syndrome involved in VR-based cognitive
rehabilitation [68]. They used the ITC-Sense of Presence Inventory [69] to evaluate the
feeling of Presence induced by the VR sessions. Findings highlighted the association of
VR sessions with both positive affect and a high level of Presence.
Miller and Reid [70] investigated the personal experiences of children with
cerebral palsy engaging in a virtual reality play intervention program. The results show
that participants experienced a sense of control and mastery over the virtual
environment. Moreover, they reported experiencing Flow, while both peers and family perceived physical changes and increased social acceptance. These results
were confirmed in two later studies with the same population group [71, 72].
The other hypothesis we suggested in this chapter is that the transformation of Flow may also exploit the plasticity of the brain, producing some form of functional reorganization [73]. Optale and his team [74-76] investigated the experience of subjects
with male erectile disorders engaging in a virtual reality rehabilitative experience. The
results obtained - 30 out of 36 patients with psychological erectile dysfunction and 28 out of 37 patients with premature ejaculation maintained a partial or complete positive response at 6-month follow-up - showed that this approach was able to hasten the
healing process and reduce dropouts. However, the most interesting part of the work is
the PET analysis carried out in the study. Optale used PET scans to analyze regional
brain metabolism changes from baseline to follow-up in the experimental sample [77].
The analysis of the scans showed, after the VR protocol, different metabolic changes in
specific areas of the brain connected with the erection mechanism.
Recent experimental results from the work of Hoffman and his group in the
treatment of chronic pain [78-81] might also be considered as fostering this vision.
Hoffman and colleagues verified the efficacy of VR as an advanced distraction tool
[82] in different controlled studies. The results showed dramatic drops in pain ratings
during VR compared to controls [83]. Further, using a functional magnetic resonance
imaging (fMRI) scanner they measured pain-related brain activity for each participant
when virtual reality was not present and when virtual reality was present (order
randomized). The team studied five regions of the brain known to be associated with
pain processing - the anterior cingulate cortex, primary and secondary somatosensory
cortex, insula, and thalamus - and found that during VR the activity in all regions
showed significant reductions [84]. In particular, the results showed direct modulation
of human brain pain responses by VR distraction: the amount of reduction in pain-
related brain activity ranged from 50 percent to 97 percent.
Interestingly, as predicted by our model, the level of pain reduction was directly
correlated to the level of Presence experienced in VR [79, 85]: the more the Presence,
the less the pain.

3. Transformation of Flow in Virtual Reality: The NeuroVR project

Although VR certainly has potential as a rehabilitation technology [86-88], most of the actual applications in this area are still in the laboratory or at the investigation stage.
In a recent review [89], Riva identified four major issues that limit the use of VR in this
field:
• the lack of standardization in VR hardware and software, and the limited
possibility of tailoring virtual environments (VEs) to the specific requirements
of the clinical or experimental setting;
• the low availability of standardized protocols that can be shared by the
community of researchers;
• the high costs (up to 200,000 US$) required for designing and testing a
clinical VR application;
• most VEs in use today are not user-friendly; expensive technical support or
continual maintenance is often required.
To address these challenges, we developed NeuroVR (http://www.neurovr.org) in
2007 – a free virtual reality platform based on open-source elements [90]. The software
allows non-expert users to adapt the content of 14 pre-designed virtual environments to
the specific needs of the clinical or experimental setting. The key characteristics that
make NeuroVR suitable as rehabilitation tool are the high level of control over
interaction with the tool, and the enriched experience provided to the patient.
These features transform NeuroVR into an “empowering environment”, a special,
sheltered setting where patients can start to explore and act without feeling threatened.
Nothing the patient fears can “really” happen to them in VR. With such assurance, they
can freely explore, experiment, feel, live, and experience feelings and/or thoughts.
Following feedback from over 700 users who downloaded the first version, we developed a new version – NeuroVR 1.5 – that improves the therapist's ability to enhance the patient's feeling of familiarity and intimacy with the virtual scene by using external sounds, photos or videos. The NeuroVR Editor is built using
Python scripts that create a custom graphical user interface for Blender. The Python-
based GUI allows all the richness and complexity of the Blender suite to be hidden,
thus revealing only the controls needed to customize existing scenes and to create the
proper files to be viewed in the player. NeuroVR Player leverages two major open-
source projects in the VR field: Delta3D (http://www.delta3d.org) and
OpenSceneGraph (http://www.openscenegraph.org). Both are building components that the NeuroVR player integrates with ad-hoc code to handle the simulations.
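The editor/player split can be pictured along these lines. This is a rough sketch under stated assumptions: the JSON file format, the function names and the set of allowed customizations are our invention for illustration, not NeuroVR's actual file format or API.

```python
import json

def edit_scene(base_scene, customizations):
    """Editor side: expose only a few safe controls over a
    pre-designed scene; everything else stays hidden, as the
    Python-based GUI hides the full Blender suite."""
    allowed = {"sounds", "photos", "videos", "objects"}
    scene = dict(base_scene)
    for key, value in customizations.items():
        if key in allowed:                 # ignore anything else
            scene[key] = value
    return scene

def save_scene(scene, path):
    """Editor side: write the file the player will load."""
    with open(path, "w") as f:
        json.dump(scene, f)

def load_scene(path):
    """Player side: read the scene file and hand it to the rendering
    engine (Delta3D/OpenSceneGraph in the real NeuroVR player)."""
    with open(path) as f:
        return json.load(f)
```

A therapist-facing GUI would sit on top of `edit_scene`, so that only the customization controls, and not the underlying authoring tool, are ever visible to the non-expert user.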
NeuroVR software was designed with the goal of enabling therapists to create
virtual environments that can enhance the feeling of Presence and support the
transformation of Flow. To accomplish this goal, the design process followed the
requirements derived from the three-layered theory of Presence summarized in section 1.4
and developed in [8] and [19]:
1) The lower the level of activity, the easier it is to induce maximal Presence. The
object of an activity is wider and less targeted than the goal of an action. The
virtual exercises developed with NeuroVR can simulate a number of fine-grained activities, such as opening the fridge, grabbing the water and closing the fridge. These activities may in turn be broken down to an even finer level, depending on the goals and the complexity of the exercise.
2) We have maximal Presence when the environment is able to support the full
intentional chain of the user. The virtual environments developed using
NeuroVR support the three hierarchical levels indicated by the Presence
theory [19]:
- Extended Presence: NeuroVR allows the presentation of mediated affordances that support the Self in generating complex action plans;
- Core Presence: the VE can be programmed to present the patient with direct affordances. For instance, it is possible to program the appearance/disappearance of virtual objects/images that trigger the attention of the user. These objects/images can be activated by the user's actions and behavior or by the therapist's commands.
- Proto Presence: the combined use of sensors and actuators supports
perception-action coupling and permits the patient to use his/her
body for enacting direct affordances in the virtual environment.
Movements can in turn be captured and recorded by means of different input devices and wearable sensors (e.g., head tracking).
3) Subjects with different intentions will not experience the same level of
Presence, even when using the same rehabilitation technology. Since the
reduction of psychomotor performance can vary significantly among patients suffering from neurological damage, the complexity of virtual exercises can be tailored to match the level of impairment of each patient. In this way, even
patients with a low level of cognitive functioning can successfully accomplish
virtual exercises, thereby increasing their feeling of presence, empowerment
and motivation for therapy.
4) Action is more important than perception: NeuroVR was explicitly designed
to find an optimal trade-off between perceptual realism and naturalness of
interaction. Whilst finding this trade-off was not an easy task, the level of
realism supported by the player is at least adequate to provide patients with
the feeling of “being there”. As several Presence scholars have pointed out
[24], [25], [32] the experience of Presence depends to a greater extent on the
ability of a medium to support users’ action in a transparent and natural way,
and is affected to a lesser extent by the quantity and quality of realism cues
depicted in the simulated environment.
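Point 3 above might be operationalized along these lines; the impairment scale, the step-dropping rule and all names are entirely our assumption, offered only to make the idea of tailoring concrete.

```python
# Illustrative only: match exercise complexity to a patient's level of
# impairment so that even low-functioning patients can accomplish the
# task (thresholds and names are assumptions, not NeuroVR's API).
def tailor_exercise(impairment, exercise):
    """Return a version of the exercise matched to the patient.

    impairment: 0.0 (none) .. 1.0 (severe)
    exercise: dict with steps ordered from essential to optional,
    e.g. open fridge -> grab water -> close fridge.
    """
    steps = exercise["steps"]
    # Keep at least one step; drop optional steps as impairment grows.
    n_keep = max(1, round(len(steps) * (1.0 - impairment)))
    return {"name": exercise["name"], "steps": steps[:n_keep]}

fridge = {"name": "get water",
          "steps": ["open fridge", "grab water", "close fridge"]}
print(tailor_exercise(0.7, fridge)["steps"])  # fewer steps when impairment is severe
```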

4. Conclusions

The field of rehabilitation is placing increasing emphasis on the construct of empowerment as a critical element in any treatment approach. This construct integrates
perceptions of personal control, participation with others to achieve goals, and a critical
awareness of the factors that hinder or enhance one's efforts to exert control in one's
life [1, 2].
In this chapter we suggested that the new emerging technologies discussed in the
book – from Virtual Reality to Robotics – have the right features to improve the course
of rehabilitation. Specifically, we claim that they are able to improve the quality of life
of the individual, by improving his/her level of “Presence”.
To be precise, by enhancing the experienced level of Presence, emerging
technologies can foster optimal (Flow) experiences triggering the empowerment
process (transformation of Flow). The vision underlying this concept arises from
“Positive Psychology” [91]. According to this vision, rehabilitation technologies should
include positive peak experiences because they serve as triggers for a broader process
of motivation and empowerment. Within this context, the transformation of Flow can
be defined as a person's ability to draw upon an optimal experience and use it to
marshal new and unexpected psychological resources and sources of involvement.
Although different technologies can be used to achieve this goal, one of the most
promising is Virtual Reality. On the one hand, it can be described as an advanced form
of human–computer interface that allows the user to interact with and become
immersed in a computer-generated environment in a naturalistic fashion. On the other,
VR can also be considered as an advanced imaginal system: an experiential form of
imagery that is as effective as reality in inducing emotional responses.
To this end, we developed NeuroVR, an “empowering rehabilitation tool” that
allows the creation of virtual environments where patients can start to explore and act
without feeling threatened [92, 93]. Nothing the patient fears can “really” happen to
them in VR. With such assurance, they can freely explore, experiment, feel, live, and
experience feelings and/or thoughts. VR thus becomes a very useful intermediate step
between the therapist’s office and the real world [94].
Clearly, further improving NeuroVR and building new virtual environments is
important so that therapists will continue to investigate the application of these tools in
their day-to-day clinical practice. In fact, in most circumstances, the clinical skills of
the rehabilitator remain the key factor in the successful use of VR systems.
Future research should also deepen analysis of the link between cognitive
processes, motor activities, Presence and Flow. This will allow the creation of a new
generation of rehabilitation technologies which are truly able to support the
empowerment process.

References

[1] M.A. Zimmerman, Taking aim on empowerment research: On the distinction between individual and psychological conceptions. American Journal of Community Psychology, (1990), 18(1): p. 169-177.
[2] D.D. Perkins and M.A. Zimmerman, Empowerment theory: Research and applications. American
Journal of Community Psychology, (1995), 23: p. 569–579.
[3] WHO, International Classification of Functioning, Disability and Health. 2004, World Health
Organization.
[4] F. Morganti and G. Riva, Ambient Intelligence in Rehabilitation, in Ambient Intelligence: The
evolution of technology, communication and cognition towards the future of the human-computer
interaction, G. Riva, F. Davide, F. Vatalaro, and M. Alcañiz, Editors. 2004, IOS Press. On-line:
http://www.emergingcommunication.com/volume6.html: Amsterdam. p. 283-295.
[5] R.L. Glueckauf, J.D. Whitton, and D.W. Nickelson, Telehealth: The new frontier in rehabilitation and
health care, in Assistive technology: Matching device and consumer for successful rehabilitation, M.J.
Scherer, Editor. 2002, American Psychological Association: Washington, DC. p. 197-213.
[6] M.A. Zimmerman and S. Warschausky, Empowerment Theory for Rehabilitation Research: Conceptual
and Methodological Issues. Rehabilitation Psychology, (1998), 43(1): p. 3-16.
[7] G. Riva, F. Davide, and W.A. IJsselsteijn, eds. Being There: Concepts, effects and measurements of
user presence in synthetic environments. Emerging Communication: Studies on New Technologies and
Practices in Communication, ed. G. Riva and F. Davide. 2003, Ios Press. Online:
http://www.emergingcommunication.com/volume5.html: Amsterdam.
[8] G. Riva, G. Castelnuovo, and F. Mantovani, Transformation of flow in rehabilitation: the role of
advanced communication technologies. Behavior Research Methods, (2006), 38(2): p. 237-44.
[9] L.N. Kirsch, M. Shenton, E. Spirl, J. Rowan, R. Simpson, D. Schreckenghost, and E.F. LoPresti, Web-
Based Assistive Technology Interventions for Cognitive Impairments After Traumatic Brain Injury:
Selective Review and Two Case Studies. Rehabilitation Psychology, (2004), 49(3): p. 200-212.
[10] B. Crosson, P. Barco, C. Velozo, M.M. Bolesta, P.V. Cooper, D. Wefts, and T.C. Brobeck, Awareness
and compensation in post-acute head injury rehabilitation. Journal of Head Trauma Rehabilitation,
(1989), 4: p. 46-54.
[11] T.B. Sheridan, Musing on telepresence and virtual presence. Presence, Teleoperators, and Virtual
Environments, (1992), 1: p. 120-125.
[12] F. Biocca, The Cyborg's Dilemma: Progressive embodiment in virtual environments. Journal of
Computer Mediated-Communication [On-line], (1997), 3(2): Online:
http://jcmc.indiana.edu/vol3/issue2/biocca2.html.
[13] J.M. Loomis, Distal attribution and presence. Presence, Teleoperators, and Virtual Environments,
(1992), 1(1): p. 113-118.
[14] International Society for Presence Research, The concept of presence: explication statement. 2000.
[15] K.M. Lee, Why Presence Occurs: Evolutionary Psychology, Media Equation, and Presence. Presence,
(2004), 13(4): p. 494-505.
[16] G. Riva, Being-in-the-world-with: Presence meets Social and Cognitive Neuroscience, in From
Communication to Presence: Cognition, Emotions and Culture towards the Ultimate Communicative
Experience. Festschrift in honor of Luigi Anolli, G. Riva, M.T. Anguera, B.K. Wiederhold, and F.
Mantovani, Editors. 2006, IOS Press. Online: http://www.emergingcommunication.com/volume8.html:
Amsterdam. p. 47-80.
[17] G. Riva, F. Mantovani, and A. Gaggioli, Are robots present? From motor simulation to “being there”.
Cyberpsychology & Behavior, (2008), 11: p. 631-636.
[18] G. Riva, Virtual Reality and Telepresence. Science, (2007), 318(5854): p. 1240-1242.
[19] M. Slater, Presence and the sixth sense. Presence: Teleoperators, and Virtual Environments, (2002),
11(4): p. 435–439.
[20] M.V. Sanchez-Vives and M. Slater, From presence to consciousness through virtual reality. Nature
Review Neuroscience, (2005), 6(4): p. 332-9.
[21] G. Riva, J.A. Waterworth, and E.L. Waterworth, The Layers of Presence: a bio-cultural approach to
understanding presence in natural and mediated environments. Cyberpsychology & Behavior, (2004),
7(4): p. 405-419.
[22] G. Mantovani and G. Riva, "Real" presence: How different ontologies generate different criteria for
presence, telepresence, and virtual presence. Presence, Teleoperators, and Virtual Environments,
(1999), 8(5): p. 538-548.
[23] G. Mantovani and G. Riva, Building a bridge between different scientific communities: on Sheridan's
eclectic ontology of presence. Presence: Teleoperators and Virtual Environments, (2001), 8: p. 538-548.
[24] J.J. Gibson, The ecological approach to visual perception. 1979, Hillsdale, NJ: Erlbaum.
[25] A. Spagnolli and L. Gamberini, A Place for Presence. Understanding the Human Involvement in
Mediated Interactive Environments. PsychNology Journal, (2005), 3(1): p. 6-15. On-line:
www.psychnology.org/article801.htm.
[26] A. Spagnolli, D. Varotto, and G. Mantovani, An ethnographic action-based approach to human
experience in virtual environments. International Journal of Human-Computer Studies, (2003), 59(6): p.
797-822.
[27] L. Gamberini and A. Spagnolli, On the relationship between presence and usability: a situated, action-
based approach to virtual environments, in Being There: Concepts, Effects and Measurement of User
Presence in Synthetic Environments, G. Riva, W.A. IJsselsteijn, and F. Davide, Editors. 2003, IOS
Press: Amsterdam. p. 97-107. Online: http://www.emergingcommunication.com/volume5.html.
[28] J.A. Waterworth and E.L. Waterworth, Presence as a Dimension of Communication: Context of Use
and the Person, in From Communication to Presence: Cognition, Emotions and Culture towards the
Ultimate Communicative Experience, G. Riva, M.T. Anguera, B.K. Wiederhold, and F. Mantovani,
Editors. 2006, IOS Press: Amsterdam. p. 80-95. Online:
http://www.emergingcommunication.com/volume8.html.
[29] P. Haggard and S. Clark, Intentional action: conscious experience and neural prediction. Conscious
Cogn, (2003), 12(4): p. 695-707.
[30] P. Haggard, S. Clark, and J. Kalogeras, Voluntary action and conscious awareness. Nat Neurosci,
(2002), 5(4): p. 382-5.
[31] F.J. Varela, E. Thompson, and E. Rosch, The embodied mind: Cognitive science and human
experience. 1991, Cambridge, MA: MIT Press.
[32] R. Whitaker, Self-Organization, Autopoiesis, and Enterprises. ACM SIGOIS Illuminations series,
(1995). Online: http://www.acm.org/sigs/siggroup/ois/auto/Main.html.
[33] A. Clark, Natural Born Cyborgs: Minds, technologies, and the future of human intelligence. 2003,
Oxford: Oxford University Press.
[34] P. Zahorik and R.L. Jenison, Presence as being-in-the-world. Presence, Teleoperators, and Virtual
Environments, (1998), 7(1): p. 78-89.
[35] M. Csikszentmihalyi, Beyond Boredom and Anxiety. 1975, San Francisco: Jossey-Bass.
[36] M. Csikszentmihalyi, Flow: The psychology of optimal experience. 1990, New York: HarperCollins.
[37] G. Riva, The psychology of Ambient Intelligence: Activity, situation and presence, in Ambient
Intelligence: The evolution of technology, communication and cognition towards the future of the
human-computer interaction, G. Riva, F. Davide, F. Vatalaro, and M. Alcañiz, Editors. 2004, IOS
Press. On-line: http://www.emergingcommunication.com/volume6.html: Amsterdam. p. 19-34.
[38] E.L. Waterworth, M. Häggkvist, K. Jalkanen, S. Olsson, J.A. Waterworth, and W. H., The
Exploratorium: An environment to explore your feelings. PsychNology Journal, (2003), 1(3): p. 189-
201. On-line:
http://www.psychnology.org/File/PSYCHNOLOGY_JOURNAL_1_3_WATERWORTH.pdf.
[39] J. Searle, Intentionality: An essay in the philosophy of mind. 1983, New York: Cambridge University
Press.
[40] E. Pacherie, Toward a dynamic theory of intentions, in Does consciousness cause behavior?, S. Pockett,
W.P. Banks, and S. Gallagher, Editors. 2006, MIT Press: Cambridge, MA. p. 145-167.
[41] E. Pacherie, The phenomenology of action: A conceptual framework. Cognition, (2008), 107(1): p.
179-217.
[42] G. Riva, Enacting Interactivity: The Role of Presence, in Enacting Intersubjectivity: A cognitive and
social perspective on the study of interactions, F. Morganti, A. Carassa, and G. Riva, Editors. 2008, IOS
Press: Online: http://www.emergingcommunication.com/volume10.html: Amsterdam. p. 97-114.
[43] C. Dillon, J. Freeman, and E. Keogh. Dimension of Presence and components of emotion. in Presence
2003. 2003. Aalborg, Denmark: ISPR.
[44] A. Juarrero, Dynamics in action: Intentional behavior as a complex system. 1999, Cambridge, MA: MIT Press.
[45] M.E. Bratman, Shared cooperative activity. Philosophical Review, (1992), 101: p. 327-341.
[46] A. Gaggioli, M. Bassi, and A. Delle Fave, Quality of Experience in Virtual Environments, in Being
There: Concepts, effects and measurement of user presence in synthetic environment, G. Riva, W.A.
IJsselsteijn, and F. Davide, Editors. 2003, IOS Press. Online:
http://www.emergingcommunication.com/volume5.html: Amsterdam. p. 121-135.
[47] A. Gaggioli, Optimal Experience in Ambient Intelligence, in Ambient Intelligence: The evolution of
technology, communication and cognition towards the future of human-computer interaction, G. Riva,
F. Vatalaro, F. Davide, and M. Alcañiz, Editors. 2004, IOS Press. On-line:
http://www.emergingcommunication.com/volume6.html: Amsterdam. p. 35-43.
[48] J.A. Waterworth, Virtual Realisation: Supporting creative outcomes in medicine and music.
PsychNology Journal, (2003), 1(4): p. 410-427.
http://www.psychnology.org/pnj1(4)_waterworth_abstract.htm.
[49] F. Massimini and A. Delle Fave, Individual development in a bio-cultural perspective. American
Psychologist, (2000), 55(1): p. 24-33.
[50] A. Delle Fave, Il processo di trasformazione di Flow in un campione di soggetti medullolesi [The
process of flow transformation in a sample of subjects with spinal cord injuries], in La selezione
psicologica umana, F. Massimini, A. Delle Fave, and P. Inghilleri, Editors. 1996, Cooperativa Libraria
IULM: Milan. p. 615-634.
[51] N. Doidge, The Brain that Changes Itself: Stories of Personal Triumph from the frontiers of Brain
Science. 2007, New York: Penguin Books.
[52] S. Begley, The Plastic Mind. 2008, London: Constable & Robinson.
[53] S.L. Wolf, C.J. Winstein, J.P. Miller, E. Taub, G. Uswatte, D. Morris, C. Giuliani, K.E. Light, and D.
Nichols-Larsen, Effect of constraint-induced movement therapy on upper extremity function 3 to 9
months after stroke: the EXCITE randomized clinical trial. JAMA, (2006), 296(17): p. 2095-104.
[54] L.V. Gauthier, E. Taub, C. Perkins, M. Ortmann, V.W. Mark, and G. Uswatte, Remodeling the brain:
plastic structural brain changes produced by different motor therapies after stroke. Stroke, (2008),
39(5): p. 1520-5.
[55] L. Collier and J. Truman, Exploring the multi-sensory environment as a leisure resource for people with
complex neurological disabilities. NeuroRehabilitation, (2008), 23(4): p. 361-7.
[56] S.B.N. Thompson and S. Martin, Making sense of multi-sensory rooms for people with learning
disabilities. British Journal of Occupational Therapy, (1994), 57: p. 341-344.
[57] K.W. Hope, The effects of multi-sensory environments on older people with dementia. Journal of
Psychiatric and Mental Health Nursing, (1998), 5: p. 377-385.
[58] K.W. Hope and H.A. Waterman, Using Multi-Sensory Environments (MSEs) with people with
dementia. Dementia, (2004), 3(1): p. 45-68.
[59] R. Baker, S. Bell, E. Baker, S. Gibson, J. Holloway, R. Pearce, Z. Dowling, P. Thomas, J. Assey, and
L.A. Waering, A randomized controlled trial of the effects of multi-sensory stimulation (MSS) for
people with dementia. British Journal of Clinical Psychology, (2001), 40(1): p. 81-96.
[60] G.A. Hotz, A. Castelblanco, I.M. Lara, A.D. Weiss, R. Duncan, and J.W. Kuluz, Snoezelen: a
controlled multi-sensory stimulation therapy for children recovering from severe brain injury. Brain Inj,
(2006), 20(8): p. 879-88.
[61] M. Lotan and J. Merrick, Rett syndrome management with Snoezelen or controlled multi-sensory
stimulation. A review. Int J Adolesc Med Health, (2004), 16(1): p. 5-12.
[62] M.M. Behrmann and L. Lahm, Babies and robots: technology to assist learning of young multiple
disabled children. Rehabil Lit, (1984), 45(7-8): p. 194-201.
[63] E. Libin and A. Libin, New diagnostic tool for robotic psychology and robotherapy studies.
Cyberpsychol Behav, (2003), 6(4): p. 369-74.
[64] F. Tanaka, A. Cicourel, and J.R. Movellan, Socialization between toddlers and robots at an early
childhood education center. Proc Natl Acad Sci U S A, (2007), 104(46): p. 17954-8.
[65] M.R. Banks, L.M. Willoughby, and W.A. Banks, Animal-assisted therapy and loneliness in nursing
homes: use of robotic versus living dogs. J Am Med Dir Assoc, (2008), 9(3): p. 173-7.
[66] R. Colombo, F. Pisano, A. Mazzone, C. Delconte, S. Micera, M.C. Carrozza, P. Dario, and G. Minuco,
Design strategies to improve patient motivation during robot-aided rehabilitation. J Neuroeng Rehabil,
(2007), 4: p. 3.
[67] G. Riva and A. Gaggioli, Virtual clinical therapy. Lecture Notes in Computer Sciences, (2008), 4650: p.
90-107.
[68] G. Castelnuovo, C. Lo Priore, D. Liccione, and G. Cioffi, Virtual Reality based tools for the
rehabilitation of cognitive and executive functions: the V-STORE. PsychNology Journal, (2003), 1(3):
p. 311-326. Online:
http://www.psychnology.org/pnj1(3)_castelnuovo_lopriore_liccione_cioffi_abstract.htm.
[69] J. Lessiter, J. Freeman, E. Keogh, and J. Davidoff, A Cross-Media Presence Questionnaire: The ITC-
Sense of Presence Inventory. Presence: Teleoperators, and Virtual Environments, (2001), 10(3): p. 282-
297.
[70] S. Miller and D. Reid, Doing play: competency, control, and expression. Cyberpsychol Behav, (2003),
6(6): p. 623-32.
[71] D. Reid, The influence of virtual reality on playfulness in children with cerebral palsy: a pilot study.
Occup Ther Int, (2004), 11(3): p. 131-44.
[72] K. Harris and D. Reid, The influence of virtual reality play on children's motivation. Can J Occup Ther,
(2005), 72(1): p. 21-9.
[73] B.B. Johansson, Brain plasticity and stroke rehabilitation. The Willis lecture. Stroke, (2000), 31(1): p.
223-30.
[74] G. Optale, A. Munari, A. Nasta, C. Pianon, J. Baldaro Verde, and G. Viggiano, Multimedia and virtual
reality techniques in the treatment of male erectile disorders. International Journal of Impotence
Research, (1997), 9(4): p. 197-203.
[75] G. Optale, F. Chierichetti, A. Munari, A. Nasta, C. Pianon, G. Viggiano, and G. Ferlin, PET supports
the hypothesized existence of a male sexual brain algorithm which may respond to treatment combining
psychotherapy with virtual reality. Studies in Health Technology and Informatics, (1999), 62: p. 249-
251.
[76] G. Optale, Male Sexual Dysfunctions and multimedia Immersion Therapy. CyberPsychology &
Behavior, (2003), 6(3): p. 289-294.
[77] G. Optale, F. Chierichetti, A. Munari, A. Nasta, C. Pianon, G. Viggiano, and G. Ferlin, Brain PET
confirms the effectiveness of VR treatment of impotence. International Journal of Impotence Research,
(1998), 10(Suppl 1): p. 45.
[78] H.G. Hoffman, T.L. Richards, B. Coda, A.R. Bills, D. Blough, A.L. Richards, and S.R. Sharar,
Modulation of thermal pain-related brain activity with virtual reality: evidence from fMRI.
Neuroreport, (2004), 15(8): p. 1245-1248.
[79] H.G. Hoffman, T. Richards, B. Coda, A. Richards, and S.R. Sharar, The illusion of presence in
immersive virtual reality during an fMRI brain scan. CyberPsychology & Behavior, (2003), 6(2): p.
127-131.
[80] H.G. Hoffman, D.R. Patterson, J. Magula, G.J. Carrougher, K. Zeltzer, S. Dagadakis, and S.R. Sharar,
Water-friendly virtual reality pain control during wound care. Journal of Clinical Psychology, (2004),
60(2): p. 189-195.
[81] H.G. Hoffman, T.L. Richards, T. Van Oostrom, B.A. Coda, M.P. Jensen, D.K. Blough, and S.R. Sharar,
The analgesic effects of opioids and immersive virtual reality distraction: evidence from subjective and
functional brain imaging assessments. Anesth Analg, (2007), 105(6): p. 1776-83, table of contents.
[82] H.G. Hoffman, D.R. Patterson, E. Seibel, M. Soltani, L. Jewett-Leahy, and S.R. Sharar, Virtual reality
pain control during burn wound debridement in the hydrotank. Clin J Pain, (2008), 24(4): p. 299-304.
[83] H.G. Hoffman, J.N. Doctor, D.R. Patterson, G.J. Carrougher, and T.A. Furness, 3rd, Virtual reality as
an adjunctive pain control during burn wound care in adolescent patients. Pain, (2000), 85(1-2): p. 305-
9.
[84] H.G. Hoffman, T.L. Richards, A.R. Bills, T. Van Oostrom, J. Magula, E.J. Seibel, and S.R. Sharar,
Using FMRI to study the neural correlates of virtual reality analgesia. CNS Spectr, (2006), 11(1): p. 45-
51.
[85] H.G. Hoffman, S.R. Sharar, B. Coda, J.J. Everett, M. Ciol, T. Richards, and D.R. Patterson,
Manipulating presence influences the magnitude of virtual reality analgesia. Pain, (2004), 111(1-2): p.
162-8.
[86] P.L. Weiss and N. Katz, The potential of virtual reality for rehabilitation. J Rehabil Res Dev, (2004),
41(5): p. vii-x.
[87] D. Rand, R. Kizony, and P.L. Weiss, The Sony PlayStation II EyeToy: low-cost virtual reality for use
in rehabilitation. J Neurol Phys Ther, (2008), 32(4): p. 155-63.
[88] A. Rizzo, M.T. Schultheis, K. Kerns, and C. Mateer, Analysis of assets for virtual reality applications in
neuropsychology. Neuropsychological Rehabilitation, (2004), 14(1-2): p. 207-239.
[89] G. Riva, Virtual reality in psychotherapy: review. CyberPsychology & Behavior, (2005), 8(3): p. 220-
30; discussion 231-40.
[90] G. Riva, A. Gaggioli, D. Villani, A. Preziosa, F. Morganti, R. Corsi, G. Faletti, and L. Vezzadini,
NeuroVR: an open source virtual reality platform for clinical psychology and behavioral neurosciences.
Studies in Health Technology and Informatics, (2007), 125: p. 394-9.
[91] M.E.P. Seligman and M. Csikszentmihalyi, Positive psychology. American Psychologist, (2000), 55: p.
5-14.
[92] C. Botella, C. Perpiña, R.M. Baños, and A. Garcia-Palacios, Virtual reality: a new clinical setting lab.
Studies in Health Technology and Informatics, (1998), 58: p. 73-81.
[93] F. Vincelli, From imagination to virtual reality: the future of clinical psychology. CyberPsychology &
Behavior, (1999), 2(3): p. 241-248.
[94] C. Botella, S. Quero, R.M. Banos, C. Perpina, A. Garcia Palacios, and G. Riva, Virtual reality and
psychotherapy. Stud Health Technol Inform, (2004), 99: p. 37-54.
Robotic Assistance for Upper Extremity
Training after Stroke
David J. REINKENSMEYER
Department of Mechanical and Aerospace Engineering, University of California at Irvine, CA, USA
Abstract. There has been a rapid increase in the past decade in the number of
robotic devices that are being developed to assist in movement rehabilitation of the
upper extremity following stroke. Many of these devices have produced positive
clinical results. Yet, it is still not well understood how these devices enhance
movement recovery, and whether they have inherent therapeutic value that can be
attributed to their robotic properties per se. This chapter reviews the history of
robotic assistance for upper extremity training after stroke and the current state of
the field. Future advances in the field will likely be driven by scientific studies
focused on defining the behavioral factors that influence motor plasticity.
Keywords. upper extremity, rehabilitation, robotics, motor control, plasticity
Introduction

In the early 1990’s there were a handful of robotic devices being developed for upper
extremity training after stroke. Today there are tens of prototypes and several
companies selling commercial devices [1]. However, use of robotic devices in
rehabilitation clinics is still rare. This chapter reviews the history of the field, and
identifies factors that limit clinical acceptance and important directions for future
scientific research. Section 1 reviews why engineers started investigating robots for use
in rehabilitation therapy, and initial reactions by clinicians to these efforts. Section 2
reviews key design decisions that had to be made for the first robotic therapy devices,
which in some ways defined the flow of the field. Section 3 reviews clinical results
from the field and two important scientific questions that these results have raised.
Section 4 discusses recent developments in robotic assistance for the upper
extremity. The chapter concludes by suggesting directions for future research.
1. Robotic Assistance: Beginnings and Therapist Response
1.1. Precursors from Therapists
The development of robotic devices for rehabilitation therapy can be seen as the logical
progression of a stream of technological development activity begun by therapists
Figure 1. Precursors of robotic therapy devices. The three devices on the left (Swedish sling, arm
skateboard, and JAECO mobile arm support) are designed to provide assistance for arm movement without
using actuators. The device on the right is the Biodex Active Dynamometer, which is a single degree-of-
freedom robot that can be adjusted to assist or resist movement around different joints.
themselves. Rehabilitation professionals have long taken an active interest in
developing and using technology to assist in rehabilitation (Figure 1). Therapy catalogs
such as the Sammons-Preston catalog (http://www.sammonspreston.com/) contain
dozens of devices designed to assist in upper extremity therapy after stroke. Much of
this technology tries to meet one or more of three goals: increasing activity, providing
assistance, and assessing outcomes (Table 1).
Implicit in the development of this technology was the idea of partial automation;
that is, the technology might allow patients to practice some of the repetitive aspects of
rehabilitation therapy on their own, without the continuous presence of the
rehabilitation therapist.
1.2. Enter the Engineers
In the late 1980’s and early 1990’s engineers began to realize that robotic devices could
potentially be adapted to better fulfill these same goals [2, 3]. This work was a logical
continuation of work on what were probably the first robotic devices for rehabilitation
therapy: the active dynamometers, such as the Lido and Biodex machines,
Table 1. Typical goals of older, simpler therapy technology, and how robotic devices further these goals.
Increase Activity: provide activities that allow stroke patients to independently exercise and practice
functional tasks. Examples of simple, existing technology: therabands, pegboards, blocks. How robotic
devices further this goal: robots can simulate a variety of computerized activities and quickly and
automatically switch between them.
Provide Assistance: assist patients in positioning or moving the hand or arm with a therapeutic goal.
Examples of simple, existing technology: splints, arm supports. How robotic devices further this goal:
robots can generate arbitrary patterns of assistance or resistance force against the patient's limb, and
automatically adjust this force based on performance.
Assess Outcomes: measure the movement performance of patients. Examples of simple, existing technology:
grip force measurement devices, electrogoniometers, timers. How robotic devices further this goal: robots
can assess performance in an integrated and objective way using their sensors.
Figure 2. Some of the first robotic therapy devices for the arm to undergo clinical testing (left to right: MIT-
MANUS [2], MIME [4], the ARM Guide [5]). These devices were designed to provide active assistance to
patients during reaching movements with the arm.
developed in the late 1970’s and early 1980’s (Figure 1). Here we define a robot to be a
device that can move in response to commands (cf. American Heritage Dictionary).
Active dynamometers incorporate a computer-controlled motor, and thus fit this
general definition of a robot. They include a kit of levers and bars that can be attached
to the motor. The levers are designed to work with different limbs and joints (e.g.
elbow flexion/extension, or shoulder abduction/adduction), allowing patients to exercise
a joint while the motor resists or assists movement. The dynamometer senses the torque
and limb rotation that the patient generates, and displays this information to the patient
and therapist for visual feedback and outcomes documentation.
Robotics engineers realized that therapy could use not only one-joint robotic devices
with simple controllers, like the active dynamometers, but also more sophisticated
robotic mechanisms with multiple joints and more advanced controllers (Figure 2).
Engineers began to delineate possible benefits of robots, in a
way that aligned with many of the therapists’ technological goals defined above (Table
1). Engineers also explicitly promoted the goal of partial automation: robots had the
potential to allow the patient to practice some of the repetitive aspects of rehabilitation
therapy on their own, without the continuous presence of the rehabilitation therapist.
1.3. A Skeptical Reception by Some Clinicians, and a Collaborative Approach by Others
Some clinicians expressed skepticism toward the idea that robots could help them meet
rehabilitation goals. They had good reasons for their skepticism, including the
following points:
1) Robots cannot match therapists’ expertise and skill. Therapy involves manual
skills that are learned over the course of years by experience under the
guidance of expert mentors. Some of these skills require sophisticated manual
manipulations of complex joints (e.g. mobilizing the patient’s scapula). An
alert and perceptive therapist alters her therapy goals and assistance based on a
complex, ongoing consideration of the patient’s state and progress. In brief:
hands-on therapy requires expertise and is complex; it seems doubtful that a
robot could replicate hands-on therapy effectively.
2) Robots are unsafe. Robots can move patients' limbs but, unlike human
therapists, are not intelligent enough to sense contraindications to imposed
movement. They could move a patient in a harmful way.
3) Robots might replace therapists. Implicit in the dubious reception by
some therapists was also a concern that robots might replace them, just as
robots had replaced assembly workers in factories. Indeed, another definition
of a robot is “a machine designed to replace human beings in performing a
variety of tasks, either on command or by being programmed in advance.”
(American Heritage Science Dictionary). Most engineers interested in robotic
therapy probably never assumed that a robot could replace a therapist, because
the job of a therapist is multifaceted and interpersonal, involving much more
than just rotely moving limbs. Rather, the goal in the mind of most engineers
was consistent with that of therapists’ own previous technological
developments (Figure 1): to provide a means for patients to practice therapy
on their own so that they could get more therapy at less cost (i.e. partial
automation).
Other clinicians were of course more receptive to the idea of robot-assisted therapy,
perhaps because they saw robotic therapy devices as the logical evolution of
technology already being used in therapy. Robotic devices were an opportunity to try to
improve on the forms of technology already used in clinics to partially automate
repetitive aspects of therapy.
1.4. Incentives for Forging Ahead
Several research groups went ahead and developed robotic therapy devices for the arm,
notably, MIT-MANUS [2], MIME [4], and the ARM Guide [5] (Figure 2),
collaborating with the rehabilitation professionals who saw potential for these devices.
These engineering teams were perhaps bolstered by the insights that robotics, control
theory, and computational approaches were giving to the understanding of human
motor control in the 1980’s (e.g. [6]). If engineering concepts and technology could
help improve understanding of normal human motor control, could they also improve
understanding of motor control after neurologic injury? The prospect of developing
computational models of motor plasticity using robotic tools was intriguing.
Another motivation in most research teams' minds was the possible business
opportunity presented by robotic therapy: more people than ever before were in need of
rehabilitation after stroke because of the demographics of aging in industrialized
nations and the increased stroke survival rates, and this trend was expected to continue.
At the same time, rehabilitation units were being forced to deliver less repetitive
therapy because of cost-saving attempts in the health care industry. For example, the
average length of stay for stroke survivors in inpatient rehabilitation facilities in the
U.S. decreased from 31 days to 14 days after prospective payment system
reimbursement was instituted in 1983 [7]. And yet rehabilitation science was finding
with increasing certainty that recovery could be influenced by activity: training
enhanced use-dependent plasticity (e.g. [8, 9]). Developers of robotic therapy devices
thought that robots might help people with a stroke by allowing them access to a
greater quantity of repetitive therapy at less cost than would be possible with one-
on-one interactions with a clinician. This access might allow the creation of new
businesses, providing an additional incentive to pursue device development.
2. Initial Design Decisions
2.1. But what should the robot do?
To this point, I have spoken of “robot assistance” in general terms – the robot assists
the therapist and patient in some way that promotes rehabilitation. When it came time
to actually build robotic therapy devices, however, engineers had to determine exactly
what the robots were to do – for example, they had to write the computer program that
controlled the motors on the robot. Here, engineers encountered a problem: the
specific movement and assistance patterns that were effective for therapy were
relatively unknown. Despite a history of over one hundred years, and the
presence of somewhat dogmatic schools of therapy (e.g. Neurodevelopmental
Treatment, Brunnstrom Technique, Proprioceptive Neuromuscular Facilitation [10]), the field of
rehabilitation science had at that time few randomized controlled trials that defined the
elements of therapy that specifically aided recovery [11]. Clinical practice varied
widely, with details of therapeutic techniques sometimes in opposition to each other in
different clinics (e.g. should the therapist promote movement within synergy or avoid
it? Is movement against resistance therapeutic, or does it increase spasticity?),
depending on which school of therapy the clinic’s therapists had been educated in. The
general lack of evidence for specific motions to be practiced or assistance patterns to be
applied had the practical result that there was not a well-defined scientific basis on
which the design of robots and computer algorithms for movement training could be
based.
2.2. A Logical Target: Active Assist Exercise
Despite this uncertainty, or perhaps because of it, the therapeutic target that the robotic
therapy research teams chose for MIT-MANUS, MIME, and the ARM Guide was the
same: active assist exercise, and, indeed, this technique has continued to be the primary
target for robotic therapy devices. In this technique, the therapist manually assists the
patient in achieving desired movements. The “active” refers to the patient being active
and engaged; i.e. the patient tries to move during the exercise. The “assist” refers to the
therapist manually assisting the patient, but only as much as needed. Researchers chose
this technique as a target because most of the schools of therapy seemed to incorporate
active assist exercise as an element [10]. As a result, application of this technique could
be witnessed on almost any day on a visit to almost any rehabilitation clinic. The
technique was also amenable to robotic implementation – assisting movement was
something robots could do.
It was also straightforward to conceive of a scientific rationale for active assist
therapy, although the rationale was speculative rather than verified:
1) Suppleness Enhancement: at the lowest level of motor control, that of
biomechanics and reflexes, active assist exercise stretches soft tissue and
muscles, which might be helpful for preventing contracture and reducing
spasticity.
2) Plasticity Enhancement: at a middle level of motor control, active assist
exercise provides the patient’s motor system with somatosensory stimulation
that would normally not be available because the patient is paretic.
Somatosensory input had recently been shown to drive cortical plasticity [12].
3) Motivation Enhancement: at a high level of the motor system, active assist
exercise may motivate patients to exercise. If a patient cannot move well on
his own, he or she may be disinclined to try to move. Active assist exercise
allows the patient to be successful in achieving a desired movement,
presumably motivating practice and effort [13]. It should be noted, however, that
assisting too much with a robot may decrease effort [14].
As stated earlier, the field of rehabilitation science was not well established and
none of these rationales was scientifically proven at the time. They still remain largely
unproven today, even though most robotic therapy devices still focus on implementing
active assist exercise.
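In practice, "assisting only as much as needed" must be reduced to a control law. The following is a minimal, hypothetical sketch of such a controller for a single degree of freedom; the function name, gain, and deadband values are illustrative assumptions, not the algorithm of any specific device. The idea is simply that assistance force is applied only when the patient lags a reference position by more than a tolerated amount, so a patient who keeps up receives no help:

```python
def assist_force(x_patient, x_ref, k=50.0, deadband=0.02):
    """Assist-as-needed sketch (hypothetical, one-dimensional).

    x_patient: measured limb position (m)
    x_ref: position the limb should have reached by now (m)
    k: assistance gain (N/m); deadband: tolerated lag (m)
    """
    lag = x_ref - x_patient
    if abs(lag) <= deadband:
        return 0.0  # patient is keeping up: no assistance
    # push the limb toward the reference, proportional to the excess lag
    sign = 1.0 if lag > 0 else -1.0
    return k * (abs(lag) - deadband) * sign

# a patient lagging well behind the reference receives a restoring force,
# while one close to the reference receives none
print(assist_force(x_patient=0.10, x_ref=0.20))
print(assist_force(x_patient=0.19, x_ref=0.20))
```

Real controllers of this family also shape the reference trajectory over time and adapt the gain to performance, which is how a robot can "automatically adjust this force based on performance" (Table 1).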
2.3. But what joints?
A decision also had to be made about which joints of the upper extremity to focus on,
as development of a robotic exoskeleton that can assist in all joint movements of the
upper extremity was and remains an unsolved problem, especially for the hand and
shoulder complex. The first robotic therapy devices for the upper extremity that were
clinically tested (i.e. MIT-MANUS, MIME, and the ARM Guide, Figure 2) focused on
providing active assist exercise for elbow flexion/extension and for limited shoulder
movements (e.g. shoulder flexion below 90 degrees and limited external rotation).
Three reasons for this choice were:
1) Simplicity: these joints were viewed as simpler than the hand, wrist, and
complex shoulder movements.
2) Availability of tools: robots had already been developed to study motor control
at these joints, and thus there were technological precedents and scientific
concepts from which to build. For example, MIT-MANUS was essentially the
same robot that was concurrently being used in early, influential studies of
motor adaptation [15]. MIME used an industrial robot that had the scale of
human arm movements.
3) Pragmatism: the hand often appears to be hopelessly impaired following
stroke, and shoulder problems such as subluxation are governed by complex
biomechanical and neurological mechanisms which would be very difficult for
a robot to address. Reaching movements with the arm, by contrast, are needed for many
functional activities. Robotic therapy research teams therefore aimed to
achieve functional improvements by making robots that focused on reaching
movements with the arm.
It is worth noting that it is still unclear which joints to focus on for an optimal
therapeutic result because of a lack of clinical trials addressing this question.
Intriguingly, a device focused on simple wrist and forearm movements, the
Bi-Manu-Track, has produced the largest changes in impairment observed with robotic
therapy to date [16].
2.4. And what types of movements?
Finally, a related decision had to be made about what types of movements the patient
would perform with robot assistance. Should the movements be single-joint or
multiple-joint? Should they be as fast as possible or slow? Should they avoid abnormal synergy
patterns or work to build strength in those patterns? Bimanual, with two robots, or
unimanual? Should they have a functional goal?
The motions used by MIT-MANUS in the first clinical trials were unimanual
pointing movements in the horizontal plane [17]. The patient was instructed to move a
cursor to a target. After the patient attained it, the target moved to a new location. The
robot helped the patient to make the movement to the target, following a normative
trajectory (minimum jerk trajectory) [17]. This type of paradigm had been used often
previously in motor control research. It required multiple-joint coordination, and was
functional in a sense, since pointing (or reaching) is a component of many activities of
daily living. MIME and the ARM Guide also focused on unimanual reaching
movements; MIME additionally incorporated some bimanual reaching exercises.
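The minimum jerk trajectory mentioned above has a simple closed form: for a one-dimensional point-to-point movement from x0 to xf over duration T, x(t) = x0 + (xf - x0)(10t^3 - 15t^4 + 6t^5) with normalized time t = t/T. A minimal sketch (illustrative only, not the MIT-MANUS implementation):

```python
def minimum_jerk(t, t_total, x_start, x_end):
    """Minimum jerk position at time t for a point-to-point reach.

    The polynomial 10*tau**3 - 15*tau**4 + 6*tau**5 is the classic
    minimum jerk profile: it begins and ends with zero velocity and
    zero acceleration, resembling smooth human reaching movements.
    """
    tau = min(max(t / t_total, 0.0), 1.0)  # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x_start + (x_end - x_start) * s

# the profile starts at x_start, passes through the midpoint halfway
# through the movement, and ends at x_end
print(minimum_jerk(0.0, 1.0, 0.0, 0.3))
print(minimum_jerk(0.5, 1.0, 0.0, 0.3))
print(minimum_jerk(1.0, 1.0, 0.0, 0.3))
```

A robot implementing active assist exercise can use such a profile as the normative reference against which the patient's actual position is compared.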
3. Initial Clinical Tests and the Questions they Raised
3.1. First Clinical Results
The basic findings of the initial clinical tests with the first three robotic therapy devices
for the arm (MIT-MANUS, MIME, and the ARM Guide) were as follows (for detailed
reviews, see: [1, 9, 18]):
1. Statistically Significant Motor Gains: An additional dose of active assist
exercise, delivered with a robotic device with an intensity of several hours per
week for several weeks, significantly (in a statistical sense) improved motor
recovery in the acute or chronic stage following a stroke, as measured with
quantitative measures of range of motion or strength, or clinical impairment
scales (Figure 3). Patients typically maintained this improvement at long-term
follow-up (i.e. months later).
2. Modest Motor Gains: While statistically significant, the gains due to robotic
therapy were small – typically 2-6 points on the upper extremity Fugl-Meyer
scale [19], which ranges from 0 to 66 (Figure 3). Functional gains, as measured
with clinical ADL scales, were typically even smaller and sometimes not
significant [19].
3. Comparable Motor Gains: The gains due to robotic therapy were roughly the
same size as those due to a matched amount of conventional rehabilitation
therapy, or to unassisted rehabilitation practice, as well as comparable
between the different robots used (Figure 3). In other words, comparisons
between different types of therapy often led to statistically inconclusive results.
Clinical testing of second generation robotic therapy devices has essentially been
confirmatory of these findings, as reviewed in a recent systematic review [19].
Figure 3. Change in Fugl-Meyer Upper-Extremity Score with one to two months of training several hours
per week after chronic stroke, for three robotic devices (MIT-MANUS [17], MIME [4], and Gentle-S [20]),
and with conventional table-top exercise [21] and with the TWREX non-robotic exoskeleton [21] (see Figure
4). The Fugl-Meyer score varies from 0 (complete paralysis) to 66 (normal movement ability).

3.2. Questions Raised by Initial Clinical Testing

This initial clinical testing raised two important questions:


1. The Question of Necessity: Was the robot necessary for the observed
therapeutic benefit? I think the clearest way to express this question is as
follows [22]: Consider a control group for which the motors of the robot are
removed but the joints are allowed to move freely such that the robot allows
movement but does not assist movement. The unactuated robot provides the
same audiovisual stimulation, and the control group undergoes a matched
duration of unactuated therapy. Would this control group recover less than a
group that exercised with the actuated robot? If not, this would suggest that
the robotic properties themselves (i.e. the programmable actuators) were
superfluous. This result is scientifically plausible because we know that
practice is a key (or perhaps the key) stimulant of motor plasticity after
stroke.
2. The Question of Optimization. If one accepts that the robotic properties of
robotic therapy help enhance recovery, a logical question is: how
sensitive are the therapeutic benefits to optimization of the robotic
parameters? The first robotic therapy devices elicited therapeutic benefits
comparable to each other, even though they were fairly different in their
design and approach (e.g. number of degrees of freedom, details of the form of
assistance provided, stiffness levels). Can tuning the robot geometry and
control algorithm increase the therapeutic benefits? Or will any reasonable
robot (or non-robotic therapy) give approximately the same result?

4. State of the Field Today

4.1. Progress in answering questions about the necessity and optimization of robotic
actuation

Few randomized controlled trials have yet addressed whether robotic actuation is
necessary for therapeutic benefit and how much it can be optimized. A recent exception
was a study that found that chronic stroke patients who received a fixed dose of active
assist therapy for the hand from a robotic device (HWARD) recovered significantly
better than a group that received half as much active assist therapy [23]. The number of
patients included in this study was small (n = 13) and the baseline characteristics of the
subjects were slightly mismatched, however, so the result needs to be examined with a
larger study. The additional advantage due to more active assist therapy was moderate
(about 3 extra Fugl-Meyer points).
Notably, the process of answering the necessity and optimization questions is
theoretically endless because of the problem of “unlimited alternatives”. That is, even if
a randomized controlled trial demonstrates that the robotic properties being tested were
unnecessary to generate the observed benefits (i.e. a group trained with an unactuated
technique at similar dosage receives similar therapeutic benefits), or even if an
interesting tweak of a robot’s parameters does not substantially alter the clinical
outcomes, such a negative finding would of course only be for one particular
instantiation of robot therapy. Other robots or different control algorithms, some maybe
as yet unconceived, may produce better results. Since there are an infinite number of
possible robots and robot control algorithms, it may be impossible to provide definitive
answers to these questions. In addition, establishing negative results (i.e. no difference
between therapy groups) with a high level of precision requires large subject
populations because of the high inter-subject variability in stroke patients and the
nature of statistical power, again adding effort, cost, and time to the process.
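To make the sample-size burden concrete, a standard two-sample power calculation can be sketched as follows (the 3-point Fugl-Meyer difference and 10-point standard deviation are illustrative assumptions, not values taken from the cited trials):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate subjects per arm to detect a mean difference `delta`
    with standard deviation `sigma` in a two-arm trial (normal
    approximation to the two-sample t-test):
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta)**2
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired statistical power
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

# Detecting a 3-point Fugl-Meyer difference, assuming SD = 10 points:
n = n_per_group(delta=3.0, sigma=10.0)  # about 175 subjects per group
```

Halving the detectable difference roughly quadruples the required enrollment, which is why precise null results are so costly to establish in this population.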

4.2. Trends in the Field

If the field has not focused on answering the necessity and optimization questions with
clinical trials, what has it focused on? Three trends mark the field of robotic therapy for
the upper extremity today:
1. Rapid Proliferation of Innovative Hardware. Many cleverly designed robotic
devices have been or are being developed to assist at different joints, at more
joints, or at the same joints as before with improved weight, mass, or control
properties (Figure 4, see review: [1]). Non-robotic approaches are also being
developed, such as devices that passively relieve the weight of the arm [21,
24]. Initial testing suggests that passive devices may have similar clinical
benefits with lower cost and theoretically-better safety [21] (Figure 4). Several
companies are now selling upper extremity devices, and sales of these devices
number in the hundreds.
2. Development of New Control Strategies. Most current research on control
strategies still focuses on active assist exercise.
Figure 4. Recently developed robotic and non-robotic therapy devices. Upper left: NeReBot, a 5 DOF cable
robot that can be used next to a patient’s bed [27]. Bottom left: ARMin, a highly responsive robot that allows
naturalistic arm movement, including shoulder translation [28]. Middle top: RUPERT, a lightweight
exoskeleton actuated with pneumatic muscles, which can be worn by the subject [29]. Middle bottom:
T-WREX, a non-robotic arm support device [21]. Upper right: HWARD, a 3 DOF hand and wrist robot [23].
Lower right: a cable-driven glove that can be worn and driven by a motor or by the patient’s shoulder shrugs
[30].
To improve active assistance algorithms, researchers are exploring several strategies, including:
• Improved Compliance and Feedforward Control: These efforts include
methods to make robots more compliant but still able to assist in spatial
movement, by incorporating feedforward control [25, 26]. Compliance
may have the advantage of making the patient feel more in control of
therapy, and thus more engaged. Compliance also preserves the relationship
between the motor commands the patient generates and the actual movement
direction; because patients receive accurate information about the results
of a change in their motor command, they may be better able to optimize
those commands, whereas a stiff robot will always enforce the same
trajectory.
• Adaptive Control: Several groups are making the controller adaptive, so
that the robot changes its assistance based on ongoing sensing of patient
performance [25, 31, 32]. The key concept here is that patient ability
changes during therapy, and it is theoretically best to keep the
patient appropriately challenged in order to provoke motor learning.
• Optimization: Optimization theory allows the goals of the therapy to be
expressed as a high-level control objective. For example, for active
assistance, my research group has proposed to minimize a weighted sum
of patient movement error and robot assistance force [33]. Minimization
of this cost function thus helps the patient achieve a desired trajectory, but
with as little robot force as possible (Assistance-as-needed). Optimization
theory provides a means to derive the robot therapy controller that
mathematically optimizes the cost function. Within an optimization
framework, robotic therapy controllers can be rigorously proven to satisfy
a “high level” goal, rather than being based on ad hoc strategies devised
by the research team.
• Neuro-Computational Modeling: My own research group has also begun
to develop computational models that model what the patient’s brain is
computing during therapy to gain insight into how better to design robotic
therapy controllers [34, 35]. The concept here is that if we can
mathematically model how behavioral signals drive adaptation, then we
should be able to design control strategies that mathematically optimize
adaptation.
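The adaptive and optimization ideas above can be combined in a toy sketch (my illustration of the general concept, not the published controllers of [25] or [33]; the cost weights `w_e` and `w_f`, the gain `b`, and the adaptation rule are all invented for the example): at each step the robot force minimizes a weighted sum of residual movement error and assistance force, and the force penalty grows when the patient performs well, so assistance fades.

```python
def assist_force(error, w_e=1.0, w_f=0.5, b=1.0):
    """One-step 'assistance-as-needed' force.

    Chooses the force f that minimizes the quadratic cost
        J(f) = w_e * (error - b * f)**2 + w_f * f**2,
    trading off the residual movement error against the assistance force
    (b models how strongly force reduces error).  Setting dJ/df = 0
    yields the closed-form optimum returned below."""
    return w_e * b * error / (w_f + w_e * b ** 2)

def adapt_weight(w_f, error, threshold=0.05, rate=1.2):
    """Toy adaptive rule: when residual error is small, penalize robot
    force more, so assistance fades; when error is large, penalize it
    less, so assistance grows and the patient stays challenged."""
    return w_f * rate if abs(error) < threshold else w_f / rate
```

Note that with `w_f > 0` the optimum always leaves some residual error for the patient to correct, which is the "as-needed" property: the robot never fully takes over the movement.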

Other therapeutic paradigms besides active assistance are also being explored
including:
• Error amplification strategies [36, 37]: The concept behind this approach
is that movement errors drive motor adaptation, and thus assistance may
be the wrong approach to take if the goal is to enhance motor adaptation,
since assistance reduces movement errors. Amplifying errors may
improve the rate or extent of motor adaptation by better provoking motor
plasticity. Clinically, this technique has only been shown to be effective
in reducing curvature errors during supported-arm reaching in the short-
term [38].
• Virtual environments (see review: [39]) Another alternate therapeutic
paradigm that differs from the active assistance paradigm that dominates
the field is to use the robot to create a virtual environment that simulates
different therapeutic activities. In this paradigm, the robot may not
physically assist or resist movement, but instead just provide a training
environment that simulates reality. Potential advantages of training in a
haptic environment over training in physical reality include the ability to:
create many different interactive environments simulating a wide range of
real-life situations; switch quickly between these environments without
“set-up” time; automatically grade the difficulty of the training
environment by adding or removing virtual features; make the environments
more interesting than a typical rehabilitation environment; automatically
“reset” the environment if virtual objects are dropped or misplaced; and
provide novel forms of visual and haptic feedback regarding performance.
In this haptic simulation framework, robotics may
benefit rehabilitation therapy not by provoking motor plasticity with
special assisting or resisting control schemes, but rather by providing a
diverse, salient, and convenient environment for semi-autonomous
training.
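The contrast between error-reducing assistance and error amplification can be illustrated with a toy planar force field (my sketch; the fields actually evaluated in [36-38] are more sophisticated, e.g. time-varying or velocity-dependent). The same controller attenuates or amplifies the deviation perpendicular to a straight-line reference depending only on the sign of its gain:

```python
import math

def training_force(pos, start, goal, gain):
    """Planar training force from a straight-line reference path.

    gain < 0 attenuates the patient's path error (assistance);
    gain > 0 amplifies it (error-amplification training).
    All parameters are invented for this illustration."""
    # Unit vector along the desired straight-line movement.
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    # Hand position relative to the start, split into a component along
    # the path and the perpendicular deviation (the "movement error").
    ex, ey = pos[0] - start[0], pos[1] - start[1]
    along = ex * ux + ey * uy
    px, py = ex - along * ux, ey - along * uy
    # Force proportional to the perpendicular deviation: pushes the hand
    # further off the path (gain > 0) or back toward it (gain < 0).
    return gain * px, gain * py
```

The single sign flip makes clear why the two paradigms make opposite predictions: one minimizes the very error signal the other deliberately enlarges to drive adaptation.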

3. Rehabilitation Therapists are Accepting Robots as Scientific but not
Clinical Tools. A third trend is that while rehabilitation therapists are not
widely incorporating commercial robotic therapy devices for clinical use, they
are using robots in their research. The research therapists in the conference
that led to this book are setting the pace: they are doing groundbreaking
scientific work using robotics and related technology, as can be read in this
book’s other chapters (see chapter by Mataric, for example).

5. Conclusion

As mentioned in the Introduction, in the early 1990’s there were only a handful of
robotic devices being developed for upper extremity training after stroke. In 2008, there
are dozens of devices being developed. However, robotic therapy has not become a
standard therapeutic treatment in most clinics. What impedes clinical acceptance?
One important factor is that the therapeutic benefits of robotic therapy are modest,
and have not been shown to be decisively better than other, less expensive approaches
that can partially automate therapy (Figure 1). In other words, the necessity question
remains unanswered. There is little motivation for most clinics to buy expensive robots
until it is proven that the robots yield therapeutic or cost benefits that are substantially
better than current approaches.
The field seems to be investing the majority of its resources in developing new
devices, rather than in understanding and optimizing the content of robotic therapy.
One explanation for this phenomenon is that there is a lack of devices for certain
movements and applications, such as hand movement and naturalistic arm movement,
and the new technology addresses this lack, as well as improving features such as
portability and force control response (Figure 4). But another possible factor is that
engineers like to build devices and are good at it. Engineers’ motivation and expertise
for scientifically exploring the clinical effects of their devices is more limited, and this
may signal the need for an even greater role by clinician scientists.
The field will likely have to evolve to place more focus on scientific studies of the
mechanisms of motor plasticity to optimize technology, improve the benefits of robotic
therapy, and determine if routine clinical use makes sense. The question of “What are
the maximum benefits that we can obtain with robotic therapy?” can be illustrated by a
boy playing with a stomp rocket (Figure 5). A dose of robotic therapy is like stomping
on the air bladder. The altitude that the rocket reaches is like the resulting improvement
in motor control. The boy can increase the rocket altitude by stomping harder, just like
a robotic device can increase recovery if it uses an optimal training paradigm, but there
is a limit to how high the rocket, and likely recovery also, can go. For upper extremity
recovery, the limit is probably dictated by the number of spared corticospinal neurons
following stroke. The limit for the rocket is well short of the Eiffel tower, despite the
perspective shown in Figure 5. Does a trick of perspective make us think that the limits
for recovery enhancement that are possible with robotic therapy are higher than they
really are, if indeed the amount of cell loss defines them?
Addressing the following two key questions would help resolve this issue and
advance robotic therapy development:
1. What behavioral signals provoke plasticity during rehabilitation? Knowing
these signals would allow us to design robots that optimally influence those
signals. This would provide answers to questions like “What type of forces
(error attenuating or error amplifying)?, “What joints?”, “What movements?”,
and “What type of feedback?”.
2. What are the fundamental limits to the plasticity that can be provoked with
behavioral signals? Answering this question would define the limits we
should expect of robotic therapy optimization. It would thus allow us to
determine how much time to invest in optimizing robotic therapy itself. If the
cost function is relatively flat and we are already close to an optimum, it may
make sense to focus more attention on approaches that combine cell- or
molecule-based regeneration techniques with robotic therapy, in search of a
synergy that improves clinical results beyond that achievable with either
robots or regeneration alone.
Figure 5. What are the maximum benefits that we can obtain with robotic therapy?

Acknowledgements

The contents of this chapter were developed in part with support from NIDRR
H133E070013 and NIH N01-HD-3-3352.

References

[1] B.R. Brewer, S.K. McDowell, and L.C. Worthen-Chaudhari, Poststroke upper extremity rehabilitation:
a review of robotic systems and clinical results, Topics in Stroke Rehabilitation 14 (2007), 22-44.
[2] H.I. Krebs, N. Hogan, M.L. Aisen, and B.T. Volpe, Robot-aided neurorehabilitation, IEEE
Transactions on Rehabilitation Engineering 6 (1998), 75-87.
[3] P.S. Lum, D.J. Reinkensmeyer, and S.L. Lehman, Robotic assist devices for bimanual physical therapy:
preliminary experiments, IEEE Transactions on Rehabilitation Engineering 1 (1993), 185-191.
[4] P.S. Lum, C.G. Burgar, P.C. Shor, M. Majmundar, and M. Van der Loos, Robot-assisted movement training
compared with conventional therapy techniques for the rehabilitation of upper limb motor function
following stroke, Archives of Physical Medicine and Rehabilitation 83 (2002), 952-9.
[5] D. Reinkensmeyer, L. Kahn, M. Averbach, A. McKenna, B. Schmit, and W. Rymer, Understanding and
promoting arm movement recovery after chronic brain injury: Progress with the ARM Guide, Journal
of Rehabilitation Research & Development 37 (2000).
[6] C.G. Atkeson, Learning arm kinematics and dynamics, Annual Review of Neuroscience 12 (1989), 157-
183.
[7] S.M. Schmidt, L. Guo, and S.J. Scheer, Changes in the status of hospitalized stroke patients since
inception of the prospective payment system in 1983, Archives of Physical Medicine and Rehabilitation
83 (2002), 894-898.
[8] R.J. Nudo, B.M. Wise, F. SiFuentes, and G.W. Milliken, Neural substrates for the effects of
rehabilitative training on motor recovery after ischemic infarct, Science 272 (1996), 1791-1794.
[9] G. Kwakkel, R. van Peppen, R.C. Wagenaar, S. Wood-Dauphinee, C. Richards, A. Ashburn, K. Miller,
N. Lincoln, C. Partridge, I. Wellwood, and P. Langhorne, Effects of augmented exercise therapy time
after stroke: A meta-analysis, Stroke 35 (2004), 2529-2539.
[10] C.A. Trombly, Occupational Therapy for Dysfunction, 4th Edition, Baltimore: Williams and Wilkins,
1995.
[11] G. Gresham, P. Duncan, W. Stason, H. Adams, A. Adelman, D. Alexander, D. Bishop, L. Diller, N.
Donaldson, C. Granger, A. Holland, M. Kelly-Hayes, F. McDowell, L. Myers, M. Phipps, E. Roth, H.
Siebens, G. Tarvin, and C. Trombly, Post-Stroke Rehabilitation. Rockville, MD: U.S. Department of
Health and Human Services. Public Health Service, Agency for Health Care Policy and Research, 1995.
[12] M.M. Merzenich, and W.M. Jenkins, Reorganization of cortical representations of the hand following
alterations of skin inputs induced by nerve injury, skin island transfers, and experience, Journal of
Hand Therapy 6 (1993), 89-104.
[13] D.J. Reinkensmeyer, and S.J. Housman, If I can't do it once, why do it a hundred times?: Connecting
volition to movement success in a virtual environment motivates people to exercise the arm after stroke,
Proc. Virtual Rehabilitation Conference (2007), 44-48.
[14] J.F. Israel, D.D. Campbell, J.H. Kahn, and T.G. Hornby, Metabolic costs and muscle activity patterns
during robotic- and therapist-assisted treadmill walking in individuals with incomplete spinal cord
injury, Physical Therapy 86 (2006),1466-78.
[15] R. Shadmehr, and F.A. Mussa-Ivaldi, Adaptive representation of dynamics during learning of a motor
task, Journal of Neuroscience 14 (1994), 3208-3224.
[16] S. Hesse, C. Werner, M. Pohl, S. Rueckriem, J. Mehrholz, and M.L. Lingnau, Computerized arm
training improves the motor control of the severely affected arm after stroke: a single-blinded
randomized trial in two centers, Stroke 36 (2005), 1960-6.
[17] S. Fasoli, H. Krebs, J. Stein, W. Frontera, and N. Hogan, Effects of robotic therapy on motor
impairment and recovery in chronic stroke, Archives of Physical Medicine and Rehabilitation 84
(2003), 477-82.
[18] L.E. Kahn, M.L. Zygman, W.Z. Rymer, and D.J. Reinkensmeyer, Robot-assisted reaching exercise
promotes arm movement recovery in chronic hemiparetic stroke: A randomized controlled pilot study,
Journal of Neuroengineering and Rehabilitation 3 (2006), 12.
[19] G. Kwakkel, B.J. Kollen, and H.I. Krebs, Effects of robot-assisted therapy on upper limb recovery after
stroke: a systematic review, Neurorehabilitation and Neural Repair 22 (2008), 111-121.
[20] F. Amirabdollahian, R. Loureiro, E. Gradwell, C. Collin, W. Harwin, and G. Johnson, Multivariate
analysis of the Fugl-Meyer outcome measures assessing the effectiveness of GENTLE/S robot-
mediated stroke therapy, Journal of Neuroengineering and Rehabilitation 4 (2007), 4.
[21] S.J. Housman, V. Le, T. Rahman, R.J. Sanchez, and D.J. Reinkensmeyer, Arm-Training with T-WREX
after Chronic Stroke: Preliminary Results of a Randomized Controlled Trial, Proceedings of the 2007
IEEE International Conference on Rehabilitation Robotics (2007).
[22] L. Kahn, P. Lum, W. Rymer, and D. Reinkensmeyer, Robot-assisted movement training for the stroke-
impaired arm: Does it matter what the robot does?, Journal of Rehabilitation Research and
Development 43 (2006), 619-630.
[23] C.D. Takahashi, L. Der-Yeghiaian, V. Le, R.R. Motiwala, and S.C. Cramer, Robot-based hand motor
therapy after stroke, Brain 131 (2008), 425-437.
[24] A.H.A. Stienen, E.E.G. Hekman, F.C.T. Van der Helm, G.B. Prange, M.J.A. Jannink, A.M.M. Aalsma,
and H. Van der Kooij, Freebal: dedicated gravity compensation for the upper extremities, IEEE 10th
International Conference on Rehabilitation Robotics (2007), 804-808.
[25] E. T. Wolbrecht, D.J. Reinkensmeyer, and J.E. Bobrow, Optimizing compliant, model-based robotic
assistance to promote neurorehabilitation, IEEE Transactions on Neural Systems and Rehabilitation
Engineering 16 (2008), 286-297.
[26] M. Mihelj, T. Nef, and R. Riener, A novel paradigm for patient-cooperative control of upper-limb
rehabilitation robots, Advanced Robotics 21 (2007), 843-867.
[27] S. Masiero, A. Celia, G. Rosati, and M. Armani, Robotic-assisted rehabilitation of the upper limb after
acute stroke, Archives of Physical Medicine and Rehabilitation 88 (2007), 142-9.
[28] T. Nef, and R. Riener, ARMin – Design of a Novel Arm Rehabilitation Robot, Proceedings of the 2005
IEEE International Conference on Rehabilitation Robotics, Chicago, Illinois, 2005, pp. 57-60.
[29] T.G. Sugar, J. He, E.J. Koeneman, J.B. Koeneman, R. Herman, H. Huang, R.S. Schultz, D.E. Herring,
J. Wanberg, S. Balasubramanian, P. Swenson, and J.A. Ward, Design and control of RUPERT: a device
for robotic upper extremity repetitive therapy, IEEE Transactions on Neural Systems and Rehabilitation
Engineering 15 (2007), 336-346.
[30] H.C. Fischer, K. Stubblefield, T. Kline, X. Luo, R.V. Kenyon, and D.G. Kamper, Hand rehabilitation
following stroke: a pilot study of assisted finger extension training in a virtual environment, Topics in
Stroke Rehabilitation 14 (2007), 1-12.
[31] H. Krebs, J. Palazzolo, L. Dipietro, M. Ferraro, J. Krol, K. Rannekleiv, B. Volpe, and N. Hogan,
Rehabilitation robotics: performance-based progressive robot-assisted therapy, Autonomous Robots 15
(2003), 7-20.
[32] R. Riener, L. Lunenburger, S. Jezernik, M. Anderschitz, G. Colombo, and V. Dietz, Patient-cooperative
strategies for robot-aided treadmill training: first experimental results, IEEE Transactions on Neural
Systems and Rehabilitation Engineering 13 (2005), 380-394.
[33] J.L. Emken, R. Benitez, and D.J. Reinkensmeyer, Human-robot cooperative movement training:
learning a novel sensory motor transformation during walking with robotic assistance-as-needed,
Journal of Neuroengineering and Rehabilitation 4 (2007), 8.
[34] J.L. Emken, R. Benitez, A. Sideris, J.E. Bobrow, and D.J. Reinkensmeyer, Motor adaptation as a
greedy optimization of error and effort, Journal of Neurophysiology 97 (2007), 3997-4006.
[35] D.J. Reinkensmeyer, E. Wolbrecht, and J. Bobrow, A computational model of human-robot load
sharing during robot-assisted arm movement training after stroke, IEEE Engineering in Medicine and
Biology Society 2007 (2007), 4019-4023.
[36] J.L. Patton, M.E. Phillips-Stoykov, M. Stojakovich, and F.A. Mussa-Ivaldi, Evaluation of robotic
training forces that either enhance or reduce error in chronic hemiparetic stroke survivors, Experimental
Brain Research 168 (2005), 368-383.
[37] J.L. Emken, and D.J. Reinkensmeyer, Robot-enhanced motor learning: accelerating internal model
formation during locomotion by transient dynamic amplification, IEEE Transactions on Neural Systems
and Rehabilitation Engineering 13 (2005), 33-9.
[38] J. Patton, M. Kovic, and F. Mussa-Ivaldi, Custom-designed haptic training for restoring reaching ability
to individuals with poststroke hemiparesis. Journal of Rehabilitation Research and Development 43
(2006), 643-56.
[39] J.L. Patton, G. Dawe, C. Scharver, F.A. Mussa-Ivaldi, and R. Kenyon, Robotics and virtual reality: A
perfect marriage for motor control research and rehabilitation, Assistive Technology 18 (2006), 181-
195.
Robotic assisted rehabilitation in Virtual
Reality with the L-EXOS
Antonio FRISOLIa, Massimo BERGAMASCOa, Maria C. CARBONCINIb and
Bruno ROSSIb
a PERCRO Laboratory, Scuola Superiore Sant’Anna, Pisa, Italy
b Neurorehabilitation Unit, Department of Neurosciences, University of Pisa, Italy

Abstract. This study presents the evaluation results of a clinical trial of
robotic-assisted rehabilitation in Virtual Reality performed with the PERCRO
L-Exos (Light-Exoskeleton) system, which is a 5-DoF force-feedback
exoskeleton for the right arm. The device proved suitable for
robotic arm rehabilitation therapy when integrated with a Virtual Reality (VR)
system. Three different schemes of therapy in VR were tested in the clinical
evaluation trial, which was conducted on a group of nine chronic stroke
patients at the Santa Chiara Hospital in Pisa, Italy. The results of this clinical
trial, both in terms of patients’ performance improvements in the proposed
exercises and in terms of improvements in the standard clinical scales which
were used to monitor patients’ recovery, are reported and discussed. The
evaluation both pre- and post-therapy was carried out with both clinical and
quantitative kinesiologic measurements. Statistically significant improvements
were found in terms of Fugl-Meyer scores, Ashworth scale, increments of
active and passive ranges of motion of the impaired limb, and quantitative
indexes, such as task time and error.

Keywords. Exoskeleton, robotic-assisted rehabilitation, task-oriented
movement, reaching target, clinical protocol, Virtual Reality, Range of
Motion, Fugl-Meyer assessment

Introduction

Several studies demonstrate the importance of an early, constant and intensive
rehabilitation following cerebral accidents. This kind of therapy is an expensive
procedure in terms of human resources and time, and the increase of both the
life expectancy of the world population and the incidence of stroke is making
the administration of such therapies more and more important. The impairment of
upper limb function is one of the most common and challenging consequences of
stroke; it limits the patient’s autonomy in daily living and may lead to
permanent disability [1]. Well-established traditional stroke rehabilitation
techniques rely on thorough and constant exercise [2, 3], which patients are
required to carry out within the hospital with the help of therapists, as well as
during daily life at home. Early initiation of active movements by means of
repetitive training has proved its efficacy in guaranteeing a good level of motor
capability recovery [4]. Such techniques allow stroke patients to partially or fully
recover motor function during the acute stroke phase, consistent with the clinical
evidence of a period of rapid sensorimotor recovery in the first three months after
stroke, after which improvement occurs more gradually for a period of up to two
years and perhaps longer [5, 6]. However, even after the usual therapies,
permanent disabilities are likely to persist into the chronic phase; in
particular, satisfactory motor recovery is much more difficult to obtain for
the upper extremity than for the lower extremities [7].
Several studies have attempted to investigate the efficacy of stroke
rehabilitation approaches [8, 9]. Intensive and task oriented therapy for the upper
limb, consisting of active, highly repetitive movements, is one of the most effective
approaches to arm function restoration [10, 11]. The driving motivations to apply
robotic technology to stroke rehabilitation are that it may overcome some of the
major limitations that manual assisted movement training suffers from, i.e. lack of
repeatability, lack of objective estimation of rehabilitation progress, and high
dependence on specialized personnel availability. Robotic devices for rehabilitation
can help to reduce the costs associated with the therapy and lead to new effective
therapeutic procedures. In addition, Virtual Reality can provide a unique medium
where therapy can be provided within a functional and highly motivating context
that can be readily graded and documented. Cortical reorganization and
associated functional motor recovery after Virtual Reality treatments in
patients with chronic stroke have also been documented by fMRI [12].
Among leg rehabilitation robot devices, the Lokomat [13] has become a
commercial and widely used lower limb robotic rehabilitation device. It is a
motorized orthosis able to guide knee and ankle movements while the patient walks
on a treadmill.
Concerning arm rehabilitation devices, both Cartesian and exoskeleton-based
devices have been developed in the last 10 years. MIT Manus [14, 15] and its
commercial version InMotion2 [16] are pantograph-based planar manipulators,
which have extensively been used to train patients on reaching exercises and have
been constantly evaluated by means of clinical data analysis [17]. It has been
designed to be backdrivable as much as possible and to have a nearly isotropic
inertia. ARM-guide [18, 19] is a device which is attached to the patient’s forearm
and guides the arm along a linear path having a variable angle with respect to the
horizontal position. Constraint forces and range of motion are measured throughout
the exercises. The MIME (Mirror Image Movement Enabler) system [20] is a
bimanual robotic device which uses an industrial PUMA 560 robot that applies
forces to the paretic limb during 3-dimensional movements. The system is able to
replicate the movements of the non-paretic limb.
Exoskeletons are robotic systems designed to work linked with parts of the
human body and, unlike conventional robots, are not designed to perform specific
tasks autonomously in their workspace [21]. In such a condition, the issue of the physical
interaction between robots and humans is considered in terms of safety. The design
of exoskeleton systems stems from the opposite motivation: the robotic structure
is intended to remain in contact with the human operator’s limb at all times. Such a
condition is required for several applications, including master robotic
arms for teleoperation, active orthoses and rehabilitation [22].
Experiments on exoskeletons were performed at JPL during the 1970s
[23]. Sarcos [24] developed a master arm used for the remote control of a robotic
arm, while at PERCRO arm exoskeletons have been developed for interaction with
virtual environments since 1994 [22, 25, 26]. Exoskeletons can be suitably
employed in robotic assisted rehabilitation [27].
Two exoskeleton-based systems have been developed at Saga University,
Japan. The older one [28] is a 1-DoF interface for human elbow motion, where
the angular position and impedance of the robot are tuned relying on biological
signals used to interpret the human subject’s intention. The newer neuro-fuzzy controlled
device [29] is a 2-DoF interface used to assist human shoulder joint movement.
Another device, the ARMin, has been developed at ETH, Switzerland [30, 31]. This
device provides three active DoFs for shoulder and one active DoF for elbow
actuation. The patient is required to perform task-oriented repetitive movements
having continuous visual, auditory and haptic feedback. The Salford Exoskeleton
[32], which is based on pneumatic Muscle Actuators (pMA) and provides an
excellent power over weight ratio, has also been used in physiotherapy and training.
A recent survey [33] on the efficacy of different robot-assisted therapies
concludes that robot-aided therapy allows a higher level of improvement in motor
control compared to conventional therapy. Nevertheless, it should be noted that no
consistent influence on functional abilities has yet been found.
This chapter presents the results of an extended clinical trial employing the L-
Exos system [34], a 5-DoF force-feedback exoskeleton for the right arm; the system
was installed at the Neurorehabilitation Unit of the University of Pisa, where it was
used for the robotic assisted VR-based rehabilitation in a group of 9 chronic stroke
patients [35, 36]. This work extends previous works concerning a pilot
study with the L-Exos system by providing significant therapy and clinical data
from a much larger set of patients.
Section 1 presents a general description of the L-Exos system, underlining the
main features which make the device useful for rehabilitation purposes, and a
description of the developed VR applications may be found in Section 2. Section 3
and Section 4 discuss the main results which have been obtained with the L-Exos
both in terms of improvements in the metrics used to assess patient performance in
the therapy exercises and in terms of improvements in the standard clinical scales
which have been used to monitor patients’ recovery. Conclusions and perspectives
opened by this pilot study are briefly reported in Section 5.

1. The L-EXOS system

L-Exos (Light Exoskeleton) is a force feedback exoskeleton for the right human
arm. The exoskeleton is designed to apply a controllable force of up to 100 N at the
center of the user's hand palm, oriented along any spatial direction, and it can
provide active, tunable compensation of the arm's weight. The device's mechanical
structure has been extensively described in [37], whereas a description of the model
of its novel tendon transmission may be found in [38]. For the sake of clarity, a brief
review of the device kinematics is provided in this section.
L-Exos has 5 DoFs, 4 of which are actuated and are used to define the position
of the end-effector in space (see Figure 1). The system is therefore redundant,
allowing different joint configurations corresponding to the same end-effector
position, which is fundamental for chronic stroke patients. Such subjects are likely
to implement compensatory strategies in order to overcome force and Range of
Motion (ROM) limitations remaining after stroke rehabilitation [39]. The 5th DoF
is passive and allows free wrist pronation and supination movements. Moreover,
design optimizations allow total arm mobility to a healthy subject wearing the
device.

Figure 1. L-Exos kinematics.
The structure of the L-Exos is open, the wrist being the only closed joint, and
the device can therefore be easily worn by post-stroke patients with the help of a therapist.
In order to use the L-Exos system for rehabilitation purposes, an adjustable-height
support was built, and a chair was placed in front of the device support, so as to
enable patients to be comfortably seated while performing the tasks. The final
handle length is also tunable, according to the patient's arm length.
After wearing the robotic device, the subject’s elbow is kept attached to the
robotic structure by means of a belt. If necessary, the wrist may also be tightly
attached to the device end-effector by means of a second belt, which was used for
patients who were not able to fully control hand movements. A third belt can easily
be employed in order to block the patient’s trunk when necessary.
The L-Exos device was integrated with a projector used to display, on a wide
screen placed in front of the patient, the different virtual scenarios in which the
rehabilitation exercises are performed. The VR display is therefore a monoscopic
screen on which a 3D scene is rendered. Three Virtual Rehabilitation scenarios were developed using the
XVR Development Studio [40]. The photo shown in Figure 2 was taken during a
therapy session, while one of the admitted patients was performing the required
exercises, and is useful to visualize the final clinical setup.

Figure 2. One admitted patient performing the robotic-aided therapy exercises.


2. Methods

A clinical pilot study involving 9 subjects with the main objective of validating
robotic assisted therapy with the L-Exos system was carried out at the Santa Chiara
Hospital of Pisa, Italy, between March and August 2007. Potential subjects to be
enrolled in the clinical protocol were invited to take part in a preliminary test
session used to evaluate patients' acceptance of the device. Most of the patients
gave enthusiastically positive feedback about the opportunity.
Patients who were declared fit for the protocol and agreed to sign an informed
consent form concerning the novel therapy scheme were admitted to the clinical
trials. The protocol consisted of 3 one-hour rehabilitation sessions per week for a
total of six weeks (i.e., 18 therapy sessions). Each rehabilitation session consisted of
three different VR-mediated exercises. A brief description of the goal of each
exercise will be provided in the next paragraphs, whereas a more detailed
description of the VR scenarios developed may be found in previous works [35,
36]. Some relevant control issues concerning the proposed exercises will be
reported as well.
The patient was seated as shown in Figure 3(D), with the exoskeleton worn on
his/her right arm and a video projector displaying the virtual scenario frontally.
A preliminary clinical test was conducted to evaluate the ergonomics of
the system and the functionality as a rehabilitation device on a set of three different
applications. The test was intended to demonstrate that the L-Exos could be
successfully employed by a patient, and to measure the expected performance
during therapy.
To assess the functionality of the device, three different scenarios and
corresponding exercises were devised:
- A reaching task;
- A motion task constrained to a circular trajectory;
- An object manipulation task.
The tasks were designed in order to be executed in succession within one
therapy session of the duration of about one hour, repeated three times per week.

(A) (B)
(C) (D)
Figure 3. The arm exoskeleton during the execution of the reaching task. A: the starting position of the
reaching task; B: a subject in the middle of the path of the reaching task; C: a subject at the end-point of
the path of the reaching task; D: The overall system.

2.1. Reaching task

In the first task, the represented scenario is composed of a virtual room, where
different fixed targets are displayed to the patient as gray spheres arranged in a
horizontal row, as shown in Figure 4. The position of the patient's hand is
shown as a green sphere that moves according to the end-effector movements.
The starting position of the task was chosen as a rest position of the arm, with
the elbow flexed at 90°, as shown in Figure 3(A). In this position, the exoskeleton
provides support for the weight of the arm, so that the patient can comfortably
rest his/her arm on the exoskeleton.
When one of the fixed targets is activated, a straight trajectory connecting the
starting point and the final target is displayed in the simulation. The patient is
instructed to actively follow the position of a yellow marker, whose motion is
generated along the line connecting the start and end points according to a
minimum jerk model [41], approximated by a 5th degree polynomial with a
displacement profile as represented in Figure 5.
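The minimum-jerk profile driving the yellow marker can be sketched as follows: the standard closed form s(t) = d(10τ³ − 15τ⁴ + 6τ⁵), with τ = t/T, is a 5th-degree polynomial of the kind mentioned above, and its peak speed 1.875·d/T links the marker duration to the therapist-selected maximum speed. The 0.30 m path length in the example is an assumed illustrative value, not a parameter from the chapter.

```python
import numpy as np

def minimum_jerk(d, T, t):
    """Minimum-jerk displacement along a straight path of length d (m)
    completed in time T (s): s(t) = d * (10*tau**3 - 15*tau**4 + 6*tau**5)
    with tau = t / T; velocity and acceleration vanish at both ends."""
    tau = np.clip(np.asarray(t, dtype=float) / T, 0.0, 1.0)
    return d * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

# Example: a 0.30 m reach (assumed path length) at peak speed v3 = 15 cm/s.
# For a minimum-jerk profile the peak speed is 1.875 * d / T, so T = 1.875 * d / v.
d, v_peak = 0.30, 0.15
T = 1.875 * d / v_peak              # about 3.75 s
t = np.linspace(0.0, T, 101)
s = minimum_jerk(d, T, t)           # displacement profile, as in Figure 5
```

With these values the marker motion starts and ends at rest, which is consistent with the text's remark that the yellow marker reaches the target with zero velocity.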
The patient is asked to move the arm to reach the final target with a given
velocity, minimizing the position error between the yellow marker that moves
automatically toward the target, and his/her own marker, represented by the green
sphere. The yellow marker reaches the target with zero velocity, and comes back on
the blue line towards the initial position. The patient is alerted to the start of the
exercise by a sound generated automatically by the system. The therapist
can set the maximum speed of the task by choosing among three maximum speeds
(v1 = 5 cm/s, v2 = 10 cm/s and v3 = 15 cm/s), and can change the position of the fixed
targets that should be reached by the patient, both in terms of target height and
depth within the virtual room.
The movements towards the multiple targets disposed on the same row, and
back, are activated in sequence, so that the patient can perform movements in
both the medial and lateral planes, reaching targets at the same height. There are 7 fixed
targets placed symmetrically with respect to the sagittal plane of the subject, and the
fixed targets can be disposed at two different heights relative to the start position of
the task (h1 = 0.01 m and h2 = 0.12 m). During each series, the height of the fixed
target is not changed, and the following steps are executed in succession for each
series:
1) The first movement is executed towards the leftmost fixed target;
2) Once the fixed target is reached, the moving marker returns to its start
position, stops for 2 seconds, and then starts again towards the next
target on the right;
3) The last target of each series is the rightmost one.
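The seven-target layout and the left-to-right visiting order can be sketched as follows; the 0.10 m lateral spacing is an assumed value, since the text specifies only the number of targets, the symmetry about the sagittal plane, and the two heights h1 and h2.

```python
import numpy as np

def target_sequence(height, n_targets=7, spacing=0.10):
    """Targets on one horizontal row, symmetric about the sagittal plane
    (x = 0), visited from the leftmost to the rightmost. The 0.10 m
    spacing is an assumption; the text gives only the two heights
    h1 = 0.01 m and h2 = 0.12 m relative to the start position."""
    xs = spacing * (np.arange(n_targets) - (n_targets - 1) / 2.0)
    return [(float(x), height) for x in xs]

series = target_sequence(height=0.12)   # one series at height h2
```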
In order to let the patient actively conduct the task, with passive
guidance from the robot only when he/she is unable to complete the reaching
task, a suitable impedance control was developed. The control of the device is
based on two concurrent impedance controllers acting along the directions
tangential and orthogonal to the trajectory, respectively.
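A minimal sketch of this tangential/orthogonal decomposition follows: the position error is split into a component along the desired straight path and a component off it, each with its own virtual spring. The gain values are illustrative only, not the L-Exos controller's actual settings; a soft tangential spring lets the patient lead the motion, while a stiffer orthogonal spring keeps the hand near the line.

```python
import numpy as np

def assist_force(p, p_ref, tangent, k_t=20.0, k_o=300.0):
    """Two concurrent impedance controllers: a soft spring (k_t, N/m)
    along the trajectory tangent lets the patient lead the motion, while
    a stiffer spring (k_o) orthogonal to the path keeps the hand on the
    straight reaching line. Gains are illustrative assumptions."""
    e = np.asarray(p_ref, dtype=float) - np.asarray(p, dtype=float)
    t_hat = np.asarray(tangent, dtype=float)
    t_hat = t_hat / np.linalg.norm(t_hat)
    e_t = np.dot(e, t_hat) * t_hat      # error along the path
    e_o = e - e_t                       # error off the path
    return k_t * e_t + k_o * e_o

F = assist_force(p=[0.02, 0.05, 0.0], p_ref=[0.0, 0.0, 0.0],
                 tangent=[1.0, 0.0, 0.0])
```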

2.2. Constrained motion task

In the second exercise the patient is asked to move freely along a circular trajectory,
as shown in Figure 6, to which he/she is constrained by an impedance control. The virtual
constraint is activated through a button located on the handle. Position, orientation
and scale of the circular trajectory can be changed online, thus allowing the patient
to move within different effective workspaces. No guiding force is applied to the
patient's limb while he/she moves along the given trajectory, to which the patient
is constrained by means of virtual springs.

Figure 4. The virtual scenario visualized in the reaching task.

Figure 5. The motion profile to be followed by the patient in the reaching task.

Figure 6. Example of the free motion constrained to a circular trajectory.
Also in this task the therapist can actively compensate for the weight of the
patient's arm through the device, until the patient is able to perform the task
autonomously. This is accomplished by applying torques at the joint level, based
on a model of the human arm whose masses are distributed along the different limb
segments in proportions derived from anatomical data. The absolute value of each
segment mass is determined according to the weight of the subject.
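The weight-compensation torques can be sketched for a simplified planar two-link arm; the anthropometric mass fractions and mid-segment centers of mass used below are typical textbook values and are illustrative assumptions, not the L-Exos arm model.

```python
import numpy as np

G = 9.81  # m/s^2

def gravity_torques(q1, q2, body_mass, l1=0.30, l2=0.25):
    """Joint torques (N*m) that statically cancel arm weight for a planar
    two-link arm model (shoulder angle q1 from the horizontal, elbow
    angle q2 relative to the upper arm). Segment masses use typical
    anthropometric fractions (upper arm ~2.8%, forearm+hand ~2.2% of
    body mass) with centers of mass at mid-segment -- all illustrative
    simplifications."""
    m1, m2 = 0.028 * body_mass, 0.022 * body_mass
    r1, r2 = 0.5 * l1, 0.5 * l2
    tau_elbow = m2 * G * r2 * np.cos(q1 + q2)
    tau_shoulder = (m1 * G * r1 * np.cos(q1)
                    + m2 * G * (l1 * np.cos(q1) + r2 * np.cos(q1 + q2)))
    return tau_shoulder, tau_elbow

# Upper arm horizontal, forearm hanging vertically, 70 kg subject.
tau_s, tau_e = gravity_torques(q1=0.0, q2=-np.pi / 2, body_mass=70.0)
```

Scaling the returned torques by a factor between 0 and 1 would correspond to the partial weight reduction the therapist can select.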

2.3. Object manipulation task

In this task the patient is asked to move cubes represented in the virtual
environment, as shown for instance in Figure 7, and to arrange them in an order
decided by the therapist, e.g. putting the cubes with the same symbol or the
same color in a row, or putting together the fragments of one image.
For this task the device is controlled with a direct force control, the
interaction force being computed by a physics module based on the Ageia PhysX physics
engine [42]. By pressing a button on the handle, the patient can select
which cube he/she wants to move, and can release the cube through the same button. Collisions
with and between the objects are simulated through the physics engine, so that it is
actually possible to perceive all the contact forces during the simulation.
Also in this task the device can apply an active compensation of the weight of
the patient's arm, leaving the therapist the possibility of deciding the amount of
weight reduction.
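The chapter does not detail the PhysX contact model; as a generic illustration only, a common penalty-based scheme renders a contact force proportional to penetration depth, with damping on the normal approach velocity. The gains and the model itself are assumptions, not PhysX's actual solver.

```python
def contact_force(penetration, rel_vel_n, k=2000.0, b=20.0):
    """Penalty-based contact force along the contact normal: a stiff
    spring (k, N/m) on penetration depth plus damping (b, N*s/m) on the
    normal approach velocity. A common haptic rendering model used here
    for illustration; the actual PhysX solver differs."""
    if penetration <= 0.0:
        return 0.0                      # no contact, no force
    f = k * penetration + b * rel_vel_n
    return max(f, 0.0)                  # contacts push, never pull
```

At each simulation step a force of this kind would be passed to the device's direct force controller.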

Figure 7. An example of the manipulation of objects task.


3. Therapy results

The following paragraphs will describe the metrics used in order to quantitatively
evaluate patients’ performance in the reaching task and in the path following task
exercises. No quantitative data were computed for the last proposed task. The first
obvious quantitative measure, task completion time, was deemed not
significant for evaluating patient performance improvements. This was
due to the high variability in task difficulty among different therapy sessions
(the initial cube disposition was randomly chosen by the control PC), and to the high
variability in patients' criteria for considering the exercise completed, i.e. the
accepted amount of cube misalignment and hence the amount of time spent
performing fine movements to reduce such misalignment.

3.1. Reaching task

Figure 8 shows a typical path followed by a patient during the reaching task. The
cumulative error for each task was chosen as the most significant metric for
analyzing the reaching data. After the definition of a target position and of a nominal task
speed, the cumulative error in the reaching task is computed for iterations
corresponding to the given target position and speed. The cumulative error curves
are then fitted in a least square sense by a sigmoid-like 3-parameter curve,
represented with Eq. (1), where s is the cumulative error at time t, whereas a, b and
c are fitting parameters.
Fitting curves are then grouped and averaged on a therapy session basis, each
set containing the fitting curves computed for a single rehabilitation session.
Sample data resulting from this kind of analysis are shown in Figure 9, where a
greater dash step indicates a later day when a given target was required to be
reached with a given peak speed.
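Since Eq. (1) is not reproduced in this excerpt, the fitting step can only be illustrated with a representative 3-parameter sigmoid; the logistic form a/(1 + e^(−b(t−c))) below is a stand-in for the chapter's actual curve, fitted in the least-squares sense by a coarse grid search over synthetic data.

```python
import numpy as np

def fit_sigmoid(t, s, a_grid, b_grid, c_grid):
    """Least-squares fit of a 3-parameter sigmoid over a coarse parameter
    grid. The logistic form a / (1 + exp(-b * (t - c))) is a stand-in for
    Eq. (1), which is not reproduced in this excerpt."""
    best, best_sse = None, np.inf
    for a in a_grid:
        for b in b_grid:
            for c in c_grid:
                pred = a / (1.0 + np.exp(-b * (t - c)))
                sse = float(np.sum((s - pred) ** 2))
                if sse < best_sse:
                    best, best_sse = (a, b, c), sse
    return best

# Synthetic cumulative-error trace (illustrative only, not study data).
t = np.linspace(0.0, 10.0, 50)
s = 5.0 / (1.0 + np.exp(-1.5 * (t - 4.0)))
a, b, c = fit_sigmoid(t, s,
                      np.linspace(3.0, 7.0, 9),    # candidate plateaus a
                      np.linspace(0.5, 2.5, 9),    # candidate slopes b
                      np.linspace(2.0, 6.0, 9))    # candidate midpoints c
```

In practice a continuous optimizer would replace the grid search; the grid keeps the sketch dependency-free.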
It should be noted that statistically significant improvements in the average fitting
curves from Week 1 to Week 6 are recognizable for more than half of the targets in only 4
out of 9 patients enrolled in the protocol. A typical improvement pattern for a
sample target is shown in Panel A of Figure 9 for Patient 6. This patient is
constantly improving his performance in the exercise, leading to a significant
decrease in the final cumulative error for a given target. A reduction of the mean
slope of the central segment of the fitting curve is therefore present, indicating
a higher ability to maintain a constant average error throughout the task.

(1)

Figure 8. Typical path followed during a reaching task – Blue straight line: ideal trajectory; Red: actual
trajectory.
Panel B of Figure 9 reveals an interesting aspect of the application of the belt
used to prevent undesired trunk movements. During the first therapy sessions, no belt
was present, and each therapy session registered a comparable value of the
cumulative error. As soon as the trunk belt is introduced, the error increases
dramatically, as the formerly employed compensatory strategies are no longer possible.
However, because the patient's active movements are much more strongly
stimulated, the cumulative error fitting curve then improves significantly. It should be
noted that, by the end of the therapy, values nearly comparable to those
obtained in the no-belt condition are reached.

3.2. Path following task

The total time required to complete a full circular path was the quantitative parameter
used to assess patient improvement in the constrained motion task. 3D position
data were projected onto a best-fitting plane (in the least-squares sense), and the
best-fit circle was computed for the projected points. The time to complete a turn was
then evaluated with respect to this trajectory. Curvature along the trajectory, which
proved irregular, was not evaluated. In particular, due to the
deliberately low value of the stiffness realizing the motion constraint, patients
sometimes move in an unstable way, bouncing from the internal side to the external
side of the trajectory and back, and requiring some time to regain control of their
movements. This behavior has detrimental effects on curvature computation.
Although three of the patients report no significant decrease in completion
time from Week 1 to Week 6, three patients report a decrease of about 50% in
task completion time, whereas the three other patients report a decrease of about 70%
in the same performance indicator. Such results are statistically
significant (p < 0.001, Student's t-test, for each patient showing
improvements).
Sample data from Patient 3 are shown in Figure 10, in order to visualize a
typical trend found in the patients reporting improvements in the constrained
motion exercise. It is interesting to note that, along with the significant
reduction in the mean time required to complete a circle, a significant reduction in
the associated standard deviation is recognizable, suggesting an acquired
ability to perform the exercise with a much higher level of regularity.

A B
Figure 9. A: sample reaching results for Patient 6; B: sample reaching results for Patient 3.
Figure 10. Sample constrained motion task results - Patient 3.

4. Clinical results

All patients were evaluated by means of standard clinical evaluation scales:


• Fugl-Meyer scale: this scale [43] is used for the evaluation of motor
function, balance, and some sensation qualities and joint functions
in hemiplegic patients. The Fugl-Meyer assessment method applies a
cumulative numerical score. The whole scale consists of 50 items, for
a total of 100 points, each item being scored on a scale from 0 to 2.
Of these, 33 items concern upper limb functions (for a total of 66
points) and are used for the clinical evaluations.
• Modified Ashworth scale: this is the most widely used method for
assessing muscle spasticity in clinical practice and research. Its items
are marked with a score ranging from 0 to 5; the greater the score, the
greater the spasticity level. Only patients with Modified
Ashworth scale values ≤ 2 were admitted to this study.
• Range Of Motion (ROM): this is the most classic and direct parameter used
to assess the motor capabilities of impaired patients.
Clinical improvements in each scale have been observed by the end of the
therapy protocol for every patient, and they will now be discussed.

4.1. Fugl-Meyer assessment

The Fugl-Meyer assessment was carried out before and after the robotic therapy. Every
patient reported a significant increment, ranging from 1 to 8 points, with 4 points (out of
66) being the average increment (p < 0.005, paired Student's t-test). This result is
fully comparable with the results reported in the scientific
literature [33].

4.2. Ashworth assessment

Slight decrements of some values of the Modified Ashworth scale may be found
by examining the detailed clinical assessments. The following improvement index was
defined for each value of the Ashworth scale:
+1: decrement of one step (e.g. from 1 to 0/1);
+2: decrement of two steps (e.g. from 1+ to 0/1);
+3: decrement of three steps (e.g. from 1+ to 0);
-1: increment of one step (e.g. from 1 to 1+).

The total improvement index was computed for each patient. A mean
improvement of 6.2 points in the overall improvement index has been found, with a
standard deviation of 4.2 points. It can therefore be asserted that the robotic therapy
with the L-Exos device leads to improvements in patients’ spasticity levels.
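Treating the Modified Ashworth grades as an ordinal scale, the improvement index amounts to counting the steps between the pre- and post-therapy grades. The position of the "0/1" grade in the ordering below is inferred from the examples in the text and is an assumption.

```python
# Modified Ashworth grades as an ordinal scale. The "0/1" grade is an
# assumed intermediate level, mirroring the examples given in the text.
GRADES = ["0", "0/1", "1", "1+", "2", "3", "4"]

def improvement_index(pre, post):
    """Per-item index: +1 for each step of decrement from pre- to
    post-therapy grade (e.g. "1" -> "0/1" gives +1, "1+" -> "0" gives
    +3), negative for increments (e.g. "1" -> "1+" gives -1)."""
    return GRADES.index(pre) - GRADES.index(post)

def total_index(items):
    """Total improvement index: sum over all assessed muscle groups,
    given (pre, post) grade pairs."""
    return sum(improvement_index(pre, post) for pre, post in items)
```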

4.3. ROM evaluation

Different ROM measurements, both active and passive, were conducted. The statistical
significance of the improvements in total ranges was assessed by means of the
paired Student's t-test. Statistically significant improvements (p < 0.05) were
demonstrated for many ROMs, whereas many other ROM improvements reached only
marginal significance (0.05 < p < 0.10). Only 1 ROM increment was found not to be
statistically significant.
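The paired Student's t statistic used in these comparisons can be computed directly from the pre/post differences; the ROM values below are purely hypothetical illustrative data, not measurements from the study.

```python
import math

def paired_t(pre, post):
    """Paired Student's t statistic and degrees of freedom for pre/post
    measurements; two-sided p-values then follow from the t distribution
    with n - 1 degrees of freedom."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n), n - 1

# Hypothetical active shoulder-flexion ROMs (degrees) for 9 patients.
pre = [95, 110, 80, 120, 100, 90, 105, 85, 115]
post = [110, 125, 95, 130, 118, 100, 120, 95, 128]
t_stat, dof = paired_t(pre, post)
```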
It should be noted that the marginally significant or non-significant improvements
were found for passive ROMs, whereas every active ROM improvement is
statistically significant. This observation confirms that the therapy with the L-Exos
has beneficial effects on the maximum range of motion both for joints directly
employed in the therapy exercises and for joints not directly
exercised by the rehabilitation exercises (e.g. the wrist) and blocked in a fixed position
during the therapy. This evidence supports the theory that dedicated
shoulder or elbow therapy, and the resulting neural repair of the cerebral areas involved
in proximal segment motor control, may lead to a natural neural repair of the cerebral
areas involved in the motor control of distal segments.
Further evidence supporting this theory is provided by a single patient who
reported unexpected significant improvements in hand movements. In particular, at
the end of the therapy, he was able to control finger opening and closing motions at
a slow speed, whereas he had not been able to perform any hand movement after
the stroke event. It is to be noted that no hand movements are employed in any
exercise performed with the L-Exos system, due to the fact that hand and wrist are
blocked in a fixed position with respect to the forearm throughout the therapy.

5. Conclusions

The L-Exos system, which is a 5-DoF haptic exoskeleton for the right arm, was
successfully clinically tested on a group of nine chronic stroke patients with upper
limb motor impairments. In particular, the extended clinical trial presented in this
paper consisted of a 6-week protocol involving three one-hour robot-mediated
rehabilitation sessions per week.
Although most of the patients enthusiastically reported major subjective benefits
in Activities of Daily Living after the robotic treatment, no general
correlation has yet been found between such reported benefits and performance
improvements in the proposed exercises. In other words, patients who improve on the
reaching task may fail to show a corresponding performance
improvement in the path following task, and vice versa, and this does not seem to be
correlated with the generally extremely positive qualitative feedback. This
observation may be caused by a variety of factors and requires further study.
Nevertheless, the qualitative subject feedback is strongly supported by the clinical
analyses, which clearly show significant improvements in clinical metrics
deriving from robot-mediated rehabilitation therapy; this also suggests the possible
need for more sophisticated metrics for analyzing exercise performance.
In particular, the significant ROM increments for joints which are not actively
exercised by the robotic therapy are considered an extremely important result. As a
matter of fact, global cortical reorganization involving the upper limb can be positively
stimulated by exoskeleton devices like the L-Exos, even though some limitations in
terms of the number of DoFs are present. Further differentiated clinical studies will
be conducted in order to evaluate which kind of robot-assisted therapy is able to
provide the best possible rehabilitation outcome.

References

[1] H Nakayama, H S Jorgensen, H O Raaschou and TS Olsen, Recovery of upper extremity function
in stroke patients: the Copenhagen Stroke Study, Arch. Phys. Med. Rehabil. 75 (4) (1994), 394–
398.
[2] L Diller, Post-stroke rehabilitation practice guidelines. International handbook of neuropsycholo-
gical rehabilitation, Critical issues in neurorehabilitation, New York: Plenum, 2000, pp.167–182.
[3] J H van der Lee, R C Wagenaar, G J Lankhorst, T W Vogelaar, W L Deville and L M Bouter.
Forced Use of the Upper Extremity in Chronic Stroke Patients Results From a Single-Blind
Randomized Clinical Trial, Stroke 30 (1999), 2369-2375.
[4] C Butefisch, H Hummelsheim, P Denzler, and K H Mauritz. Repetitive training of isolated mo-
vements improves the outcome of motor rehabilitation of the centrally paretic hand, J Neurol Sci
130 (1) (1995), 59–68.
[5] S Katz, A B Ford, A B Chinn, et al., Prognosis after stroke: long term outcome of 159 patients.
Medicine 45 (1966), 236–246.
[6] C E Skilbeck, D T Wade, R L Hewer and V A Wood, Recovery after stroke, J. Neurol. Neurosurg
Psychiatry 46 (1) (1983), 5–8.
[7] T S Olsen, Arm and leg paresis as outcome predictors in stroke rehabilitation, Stroke 21 (2) (1990),
247–251.
[8] E Ernst, A review of stroke rehabilitation and physiotherapy, Stroke 21 (7) (1990), 1081–1085.
[9] S J Page, P Levine, S Sisto, Q Bond and M V Johnston, Stroke patients’ and therapists’ opinions of
constraint-induced movement therapy, Clinical Rehabilitation 16 (1) (2002), 55.
[10] S Barreca, S L Wolf, S Fasoli and R Bohannon, Treatment Interventions for the Paretic Upper
Limb of Stroke Survivors: A Critical Review, Neurorehabilitation and Neural Repair 17 (4)
(2003), 220–226.
[11] H M Feys, W J De Weerdt, B E Selz, G A Cox Steck, R Spichiger, L E Vereeck, K D Putman and
G A Van Hoydonck, Effect of a Therapeutic Intervention for the Hemiplegic Upper Limb in the
Acute Phase After Stroke A Single-Blind, Randomized, Controlled Multicenter Trial, Stroke 29
(1998), 785-792.
[12] S H Jang, S You, Y H Kwon, M Hallett, M Y Lee and SH Ahn, Reorganization Associated Lower
Extremity Motor Recovery As Evidenced by Functional MRI and Diffusion Tensor Tractography
in a Stroke Patient, Restor Neurol & Neurosci, 23 (2005), 325–329.
[13] S Jezernik, G Colombo, T Keller, H Frueh and M. Morari, Robotic Orthosis Lokomat: A
Rehabilitation and Research Tool, Neuromodulation 6 (2) (2003), 108–115.
[14] H I Krebs, N Hogan, M L Aisen and B T Volpe, Robot-aided Neurorehabilitation, IEEE
Transactions on Rehabilitation Engineering 6 (1) (1998), 75–87.
[15] B T Volpe, H I Krebs, N Hogan, L Edelstein, C Diels and M. Aisen, A novel approach to stroke
rehabilitation Robot-aided sensorimotor stimulation, Neurology 54 (10) (2000), 1938–1944.
[16] J Stein, H I Krebs, W R Frontera, S E Fasoli, R Hughes and N Hogan, Comparison of two
techniques of robot-aided upper limb exercise training after stroke, Am J Phys Med Rehabil, 83 (9)
(2004), 720–728.
[17] S E Fasoli, H I Krebs, J Stein, W R Frontera and N. Hogan, Effects of robotic therapy on motor
impairment and recovery in chronic stroke. Arch Phys Med Rehabil, 84 (4) (2003), 477–482.
[18] D J Reinkensmeyer, J P A Dewald and W Z Rymer, Guidance-Based Quantification of Arm
Impairment Following Brain Injury: A Pilot Study, IEEE Transactions on Rehabilitation
Engineering 7 (1) (1999), 1.
[19] D J Reinkensmeyer, L E Kahn, M Averbuch, A McKenna-Cole, B D Schmit and W Z Rymer,
Understanding and treating arm movement impairment after chronic brain injury: progress with the
ARM guide, J Rehabil Res Dev 37 (6) (2000), 653–662.
[20] P S Lum, C G Burgar, P C Shor, M Majmundar and M Van der Loos, Robot-assisted movement
training compared with conventional therapy techniques for the rehabilitation of upper-limb motor
function after stroke, Arch Phys Med Rehabil. 83 (7) (2002), 952–959.
[21] C A Avizzano, and M Bergamasco, Technological Aids for the treatment of tremor. Sixth
International Conference on Rehabilitation Robotics (ICORR), (1999).
[22] M Bergamasco, Force replication to the human operator: the development of arm and hand
exoskeletons as haptic interfaces, Proceedings of 7th International Symposium on Robotics
Research, (1997).
[23] B M Jau, Anthropomorhic Exoskeleton dual arm/hand telerobot controller, IEEE International
Workshop on Intelligent Robots (1988), 715–718.
[24] A Nahvi, D D Nelson, J M Hollerbach and D E Johnson, Haptic manipulation of virtual
mechanisms from mechanical CAD designs, Proceedings of IEEE International Conference on
Robotics and Automation (1998).
[25] M Bergamasco, B Allotta, L Bosio, L Ferretti, G Parrini, G M Prisco, F Salsedo and G Sartini, An
arm exoskeleton system for teleoperation and virtual environments applications, IEEE Int. Conf.
On Robotics and Automation (1994), 1449–1454.
[26] A Frisoli, F Rocchi, S Marcheschi, A Dettori, F Salsedo and M Bergamasco, A new force-feedback
arm exoskeleton for haptic interaction in virtual environments. WHC 2005. First Joint Eurohaptics
Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator
Systems., (2005), 195–201.
[27] T Nef and R Riener, ARMin-Design of a Novel Arm Rehabilitation Robot. ICORR 2005, 9th
International Conference on Rehabilitation Robotics (2005), 57–60.
[28] K Kiguchi, S Kariya, K Watanabe, K Izumi and T Fukuda, An Exoskeletal Robot for Human
Elbow Motion Support Sensor Fusion, Adaptation, and Control, IEEE Transactions on System,
man and Cybernetics - Part B: Cybernetics 31 (3) (2001), 353.
[29] K Kiguchi, K Iwami, M Yasuda, K Watanabe and T Fukuda, An exoskeletal robot for human
shoulder joint motion assist, IEEE/ASME Transactions on Mechatronics 8 (1) (2003), 125–135.
[30] R Riener, T Nef and G Colombo, Robot-aided neurorehabilitation of the upper extremities,
Medical and Biological Engineering and Computing 43 (1) (2005), 2–10.
[31] T Nef and R Riener (2005), ARMin-Design of a Novel Arm Rehabilitation Robot. 9th
International Conference on Rehabilitation Robotics,ICORR (2005), 57–60.
[32] N G Tsagarakis and D G Caldwell, Development and Control of a ’Soft-Actuated’ Exoskeleton for
Use in Physiotherapy and Training, Autonomous Robots 15 (1) (2003), 21–33.
[33] G B Prange, M J Jannink, C G Groothuis-Oudshoorn, H J Hermens and M J Ijzerman, Systematic
review of the effect of robot-aided therapy on recovery of the hemiparetic arm after stroke, J
Rehabil Res Dev 43 (2) (2006), 171–184.
[34] F Salsedo, A Dettori, A Frisoli, F Rocchi, M Bergamasco and M Franceschini, Exoskeleton
Interface Apparatus.
[35] A Frisoli, L Borelli, A Montagner, S Marcheschi, C Procopio, F Salsedo, M Bergamasco, M
Carboncini, M Tolaini and B Rossi, Arm rehabilitation with a robotic exoskeleton in Virtual
Reality, ICORR 2007 10th International Conference on Rehabilitation Robotics. (2007), 631–642.
[36] A Montagner, A Frisoli, L Borelli, C Procopio, M Bergamasco, M Carboncini and B Rossi, A pilot
clinical study on robotic assisted rehabilitation in VR with an arm exoskeleton device, Virtual
Rehabilitation (2007).
[37] A Frisoli, F Rocchi, S Marcheschi, A Dettori, F Salsedo and M Bergamasco, A new force-feedback
arm exoskeleton for haptic interaction in virtual environments, Proceedings of IEEE WorldHaptics
Conference (2005), 195–201.
[38] S Marcheschi, A Frisoli, C Avizzano and M Bergamasco, A Method for Modeling and Control
Complex Tendon Transmissions in Haptic Interfaces, Proceedings of the 2005 IEEE International
Conference on Robotics and Automation (2005), 1773–1778.
[39] M Cirstea and M Levin, Compensatory strategies for reaching in stroke, Brain 123 (5) (2000),
940–953.
[40] E Ruffaldi, A Frisoli, M Bergamasco, C Gottlieb and F Tecchia, A haptic toolkit for the
development of immersive and web-enabled games, Proceedings of the ACM symposium on
Virtual reality software and technology (2006), 320–323.
[41] D J Reinkensmeyer, L E Kahn, M Averbuch, A McKenna-Cole, B D Schmit and W Z Rymer,
Understanding and treating arm movement impairment after chronic brain injury: progress with the
ARM guide, J Rehabil Res Dev. 37 (6) (2000), 653–662.
[42] http://www.ageia.com/
[43] A Fugl-Meyer, L Jaasko, I Leyman, S Olsson and S Steglind, The post-stroke hemiplegic patient.
A method for evaluation of physical performance, Scand J Rehabil Med 7 (1) (1975), 13–31.
Assessment and Treatment of the Upper
Limb by Means of Virtual Reality in Post-
Stroke Patients
Lamberto PIRONa, Andrea TUROLLAa, Michela AGOSTINIa, Carla ZUCCONIa,
Paolo TONINa, Francesco PICCIONEa and Mauro DAMb
a I.R.C.C.S. San Camillo Hospital, Venice, Italy
b Department of Neuroscience, University of Padua, Padua, Italy

Abstract. The disability deriving from stroke impacts heavily on the economic
and social aspects of western countries because stroke survivors commonly
experience various degrees of autonomy reduction in the activities of daily living.
Recent developments in neuroscience, neurophysiology and computational science
have led to innovative theories about the brain mechanisms of the motor system.
Building on these, innovative, scientifically based therapeutic strategies have begun to
arise in the rehabilitation field. Promising results from the application of a virtual
reality based technique for arm rehabilitation are reported.

Keywords. Stroke, Rehabilitation, Motor Learning and Control, Augmented
Feedback, Virtual Reality

Introduction

Stroke is a leading cause of death and disability for men and women of all ages, classes, and ethnic origins worldwide. Several epidemiological surveys have been conducted on cerebrovascular disease, especially in the United States, where 500,000 new strokes occur each year, causing 100,000 deaths and leaving 300,000 survivors with residual disability. Moreover, approximately 3 million Americans have survived a stroke with some degree of residual disability [1, 3].
Within 2 weeks after stroke, hemiparesis is present in 70-85% of patients, and between 40% and 75% of patients are completely dependent in their activities of daily living [4]. Epidemiological data for European countries are scarce, although in the United Kingdom the Oxfordshire Community Stroke Project (1983) reported an annual incidence of 500 new cases in a community of 250,000 people, with a peak in people older than 75 years [5]. A recent study conducted in Norway reported a total annual incidence of 2.21 strokes per 1000 people. This rate is consistent with those of other European countries, suggesting that there is little regional variation within Western Europe [6].
Estimates of the total cost of stroke vary widely because of the difficulty of calculating the indirect costs resulting from disability and mortality. A 1993 estimate placed the total annual cost of stroke at $30 billion in the United States, of which $17 billion were direct costs (hospital, physician, rehabilitation, equipment) and $13 billion were indirect costs (lost productivity) [7].
The main cost associated with stroke survivors relates to their residual motor disabilities, which interfere with personal, social and/or productive activities. Surprisingly, few therapeutic approaches are available to restore lost functions. Current rehabilitation research is therefore working to develop treatments that are closely grounded in motor learning principles.
The recent development of tools for the quantitative analysis of motor deficits has made it possible to collect far more data in clinical practice and thus to study human motor behavior in greater depth, with important practical implications. First, it may become possible to infer the anatomical structures that modulate the different elements of motor control. Furthermore, such tools may help to characterize motor deficits more precisely and, consequently, to plan individually tailored therapeutic approaches. Finally, the quantitative analysis of movement may make it possible to monitor pharmacological therapies (i.e., drugs interacting with central neurotransmitter levels) that could modify human motor behavior [8, 9].

1. Rationale

1.1 Neurophysiology of motor learning

Research on the physiological underpinnings of movement dynamics has traditionally focused most extensively on the primary motor cortex (M1), showing that neurons in M1 are modulated by external dynamic perturbations. Some investigators [10] indicate that several premotor areas feed into M1, which then projects to the spinal cord. These areas are densely interconnected with each other and contribute in parallel to the control of movement [11].
Other work on primates demonstrated that several cortical cells in motor and premotor areas respond selectively to kinematic variations during motor adaptation tasks. These cells, clearly identified in the monkey SMA, are involved in the kinematics-to-dynamics transformation and hence in the learning of new motor tasks [11]. Doya et al. [11, 12] proposed further correlations between the motor learning problem and circuitry at the cortical level, suggesting that different brain areas implement three different kinds of learning mechanisms: supervised learning, reinforcement learning and unsupervised learning.
The cerebellum is thought to be involved in the real-time fine tuning of movement by means of its feed-forward structure, based on the massive synaptic convergence of granule cell axons (parallel fibers) onto Purkinje cells, which send inhibitory connections to the deep cerebellar nuclei and to the inferior olive. The cerebellar circuit is capable of implementing the supervised learning paradigm, which consists of error-driven learning. Reinforcement learning is based on the multiple inhibitory pathways of the basal ganglia, which permit the reward-predicting activity of dopamine neurons and the change of behavior in the course of goal-directed task learning. The extremely complex anatomical features of the cortex suggest that information coding is established by an unsupervised learning paradigm in which activity is determined by the Hebbian rule of synaptic updating. In this paradigm the environment provides input but gives neither desired targets nor any measure of reward or punishment [11, 12].
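The unsupervised, Hebbian paradigm described above can be illustrated with a minimal sketch (not from the chapter; a standard textbook formulation, here with Oja's normalizing term added so the weights stay bounded). A linear neuron updates its weights purely from input-output correlations, with no target and no reward:

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                      # learning rate (illustrative value)
w = rng.normal(scale=0.1, size=2)

# The environment only provides inputs; the first component carries
# far more variance than the second. No desired target, no reward.
for _ in range(5000):
    x = np.array([rng.normal(scale=1.0), rng.normal(scale=0.1)])
    y = w @ x                   # linear "neuron" output
    w += eta * y * (x - y * w)  # Hebbian term eta*y*x with Oja's decay

# The weight vector aligns with the dominant direction of the input
# statistics: structure is extracted without any supervision.
```

The decay term is the assumption added here; pure Hebbian updating (Δw = η·y·x) would let the weights grow without bound, which is why normalized variants are commonly used in such illustrations.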
Recent neurophysiologic studies demonstrated that some natural complex systems have a discrete combinatorial architecture that uses a finite number of primitive elements to create larger structures, such as the motor primitives in the spinal cord [13]. Poggio and Bizzi [14] hypothesized a hierarchical architecture in which the motor cortex is endowed with functional modular structures that change their directional tuning during adaptation, visuo-motor learning, exposure to mechanical loads and reorganization after lesions, i.e. circuits of interneurons acting as central pattern generators, unit burst generators, and spinal motor primitives contributing to motor learning. In the latter case, the force fields stored as synaptic weights in the spinal cord may be viewed as motor field primitives from which, through linear superposition, a vast number of movements can be fashioned by impulses conveyed along supraspinal and reflex pathways [14]. Computational analysis [15] verifies that this proposed mechanism is capable of learning and controlling a wide repertoire of motor behaviors. This hypothesis suggests that a cortical lesion induced by a stroke could modify this hierarchical architecture, with negative influences on the learning and control of new motor behaviors.
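The superposition idea can be made concrete with a toy sketch (an illustration under assumed linear, convergent force fields, not the authors' model). Two "spinal" primitives, each a field converging on its own equilibrium point, are combined linearly by descending weights; the combined field then converges on an intermediate point, so graded weights shape new movements out of a fixed set of primitives:

```python
import numpy as np

def primitive_field(x, center, gain=1.0):
    """A convergent force field pulling the limb toward `center`."""
    return gain * (center - x)

# Two hypothetical spinal primitives with distinct equilibrium points.
C1 = np.array([0.0, 0.0])
C2 = np.array([1.0, 0.0])

def combined_field(x, w1, w2):
    # Linear superposition: descending (supraspinal) commands set w1, w2.
    return w1 * primitive_field(x, C1) + w2 * primitive_field(x, C2)

# For equal gains, the combined field has its own equilibrium at the
# weighted average (w1*C1 + w2*C2) / (w1 + w2).
eq = (0.5 * C1 + 0.5 * C2) / (0.5 + 0.5)
```

All names and the linear, gain-1 form of the fields are assumptions made for illustration; the point is only that a small basis of fields spans a large repertoire of equilibria.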

1.2 Neurophysiopathology of stroke lesion

From the physiopathologic perspective, much evidence demonstrates that the location of a stroke lesion is related to the severity of the upper limb motor deficit. Specifically, it is argued that patients with cortical stroke have a better motor outcome than patients with subcortical stroke. Furthermore, patients with mixed cortical plus subcortical stroke tend to improve more than patients with pure subcortical stroke, despite the expectedly larger size of mixed lesions. Although subcortical strokes are normally smaller than cortical strokes, they are more likely to involve primary (from M1) and secondary motor pathways (from the SMA and the premotor area, PMA). The descending fibers from primary and secondary motor areas converge in the internal capsule, maintaining their somatotopic distribution. Consequently, even small subcortical lesions produce devastating motor effects. The probability of upper limb motor recovery after stroke is hence strictly linked to the anatomical lesion: 75% for patients with lesions restricted to the cortex (M1, PMA, SMA); 38.5% for those with subcortical or mixed cortical plus subcortical lesions not affecting the posterior limb of the internal capsule (PLIC); and 3.6% for those with involvement of the PLIC plus the adjacent corona radiata, basal ganglia or thalamus [16].

1.3 Computational approach to upper limb rehabilitation

The computational approach to the motor system is a powerful analytical tool in neuroscience, offering the opportunity to unify experimental data within a theoretical framework. From the computational perspective, motor behavior is seen as the manifestation of an engineering system whose basic task is to manage the relationship between motor commands and sensory feedback. This management is necessary for two reasons:
1. it ensures that our movements achieve their goals;
2. it enables us to learn by experience to make more accurate and effective movements.
Recently, Han et al. developed a computational model for bilateral hand use in arm
reaching movements to study the interactions between adaptive decision making and
motor learning after motor cortex lesion [17]. This model combines a biologically
plausible neural model of the motor cortex with a non-neural model of reward-based
decision making and physical therapy intervention. The model demonstrated that in the
damaged cortex, during therapy, the supervised learning rules ensured that
underrepresented directions of movement were “repopulated”, thereby decreasing
average reaching errors.
The authors suggested that after stroke, if no therapy is given, plasticity due to unsupervised learning may become maladaptive, thereby compounding the stroke's negative effect. They also indicated that there is a threshold for the amount of therapy, based on the three types of learning mechanism (unsupervised, supervised and reinforcement), required for the recovery process; below this threshold, motor retraining is "in vain": the arm is used little or not at all, exhibiting the "learned non-use" phenomenon. In the absence of supervised or reinforcement learning, subsequent motor performance worsens regardless of the number of rehabilitation trials. On the contrary, if unsupervised learning is not present, motor performance in the late period improves with any amount of rehabilitation trials.

1.4 Virtual reality as an emerging therapy

Virtual Reality (VR) is an innovative technology consisting of a computer-based environment that represents a 3-D artificial world. VR has already been applied in many fields of human activity. New computer platforms permit human-machine interaction in real time, and the possibility of using VR in medicine has therefore arisen. The present level of technical advance in computer interfaces allows the development of VR systems as therapeutic tools for some neurological and psychiatric pathologies. For example, stroke survivors may undergo rehabilitative therapeutic procedures with different VR systems [18, 19]. The use of a VR-based system coupled to a motion-tracking tool allows us to study the kinematics of arm movement during the restorative process after stroke. Furthermore, the possibility of modifying the artificial environment with which patients interact may exploit some of the mechanisms of motor learning.
We know from physiological studies that humans perform a large variety of constrained and unconstrained movements in a smooth and graceful way because the CNS enables us to rapidly solve complex computational problems. One hypothesis is that the CNS needs little information in order to adapt movements to changing external requirements, provided that it already contains preprogrammed algorithms for the function [20]. These algorithms produce regularities in biological movements that are not in any way implied by the motor task. According to this view, a given movement can be characterized by variant and invariant elements. For instance, the variant part of a reaching movement is the distance to the target (corresponding to the amplitude of the movement). The invariant part consists of a straight path with a bell-shaped speed profile in all movements [21, 22].
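These invariants are captured by the classic minimum-jerk model of reaching (Flash and Hogan's formulation, given here as an illustration of the cited regularities, not as a formula from this chapter): only the amplitude d and duration T vary between reaches, while the shape of the path and of the speed profile stay fixed:

```python
import numpy as np

def min_jerk(t, T, d):
    """Position and speed along a straight minimum-jerk reach of
    amplitude d completed in time T (Flash & Hogan, 1985)."""
    tau = t / T
    pos = d * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    vel = (d / T) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    return pos, vel

t = np.linspace(0.0, 1.0, 101)
pos, vel = min_jerk(t, T=1.0, d=0.3)   # e.g. a 30 cm reach in 1 s
# vel is bell shaped: zero at both endpoints, single peak at mid-reach.
```

Rescaling T or d stretches the curves but leaves their shape unchanged, which is exactly the variant/invariant decomposition described above.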
In our laboratory, we experimented with a VR based setting for the assessment and
treatment of arm motor deficit in patients after stroke. We compared a VR based
(reinforced feedback in virtual environment, RFVE) and traditional physical therapy
technique (conventional therapy, CT) in the treatment of arm motor impairments in
post-stroke patients. The studied population met the following inclusion criteria: a
single ischemic stroke in the region of the middle cerebral artery at least six months
before the study (proven by means of CT scan or MRI); conventional physical therapy
treatment received in the early period after stroke; mild to intermediate motor
impairments of the arm assessed as a Fugl-Meyer Upper Extremity score (F-M UE)
between 20 and 60, at baseline [23]. Clinical history or evidence of memory
impairments, neglect, and apraxia or aphasia interfering with verbal comprehension
were all considered exclusion criteria.
The experimental intervention was the RFVE treatment and the control procedure
consisted of conventional physical therapy treatment. Both therapies were oriented
towards upper extremity motor rehabilitation. In the former, the subject was requested to perform different kinds of motor tasks while the movement of the end-effector of the entire biomechanical arm system was simultaneously represented in a virtual
scenario by means of motion-tracking equipment. The equipment included a computer
workstation connected to a 3D motion-tracking system (Polhemus 3Space FasTrak,
Vermont, U.S.A) and a high-resolution LCD projector which displayed the virtual
scenarios on a large wall screen. The electromagnetic 3D motion-tracking sensor was
positioned on a manipulable object (rubber ball, polystyrene cube etc.) held by the
subject, or, alternatively, was attached to a glove worn by the patient in cases of severe
grasping deficits. The physical therapist could create numerous virtual motor tasks for
the arm through the use of flexible software, developed at the Massachusetts Institute
of Technology (Cambridge, MA, U.S.), which processes the motion data coming from
the end-effector receiver. The therapist selected the characteristics and the complexity
of the motor tasks in order to suit each patient’s arm deficit. In the virtual scenario, the
therapist determined the starting position and the characteristics of the target, such as
target orientation, for each task or the addition of other virtual objects to increase the
task's complexity. A simple reaching movement could accomplish some tasks, while others required more complicated movements, such as putting an envelope in a mailbox, hitting a nail, or pouring a glass into a carafe. The subject moved the real envelope, hammer, or glass and saw on the screen the trajectory of the corresponding virtual object toward the virtual mailbox, nail, or carafe.
During the RFVE therapy, patients were asked to perform motor tasks according to
constraints specified beforehand by the therapist. Subjects were given information
about their arm movements during the performance of motor skills (knowledge of
performance, KP) by the movement of the end-effector’s virtual representation. The
therapist’s movement and trajectory could also be displayed in the background of the
virtual scene in order to facilitate the subject’s perception and adjustment to motion
errors (learning by imitation) [24]. Moreover, knowledge of the results (KR) regarding
motor task correctness was supplied to patients in the form of standardized scores and
by displaying the arm trajectory morphology on the screen. Initially, the above-mentioned KP and KR were provided on more than 90% of trials and were gradually reduced as performance improved.
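The fading schedule can be sketched as a simple performance-contingent rule (purely illustrative: the chapter reports only the >90% starting frequency and its gradual reduction, so the function names, the floor value and the linear form below are all assumptions):

```python
import random

def feedback_probability(success_rate, start=0.95, floor=0.2):
    """Probability of giving augmented KP/KR on a trial, fading linearly
    from `start` (poor performance) toward `floor` (good performance)."""
    return start - (start - floor) * success_rate

def give_feedback(success_rate, rng=random.random):
    # Draw whether this particular trial receives augmented feedback.
    return rng() < feedback_probability(success_rate)
```

Early in therapy (success rate near zero) feedback is delivered on almost every trial; as performance improves it is withdrawn, encouraging reliance on the patient's own intrinsic sensory feedback.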
In the CT group the subjects were asked to perform specific exercises for the upper
limb with a strategy of progressive complexity. First, the patients were requested to
control isolated motions without postural control, with physical therapist support if
necessary, then postural control was included and, finally, complex motion with
postural control was practiced. For example, patients were asked to touch different
targets arranged upon a horizontal plane in front of them; to manipulate different
objects; to follow trajectories displayed on a plane; to recognize different arm positions.
The physical therapists chose the exercises in relation to functional assessments
and patient needs.
The aim of this study was to compare the RFVE and CT approaches to the treatment of arm motor impairments in post-stroke patients. We hypothesized that a rehabilitation technique based on motor learning rules, specifically on kinematic information about arm movements in a virtual environment, would improve motor outcome scores significantly more than CT. Before and after the treatment, the degree of motor impairment and of independence in the activities of daily living was evaluated in both groups with the F-M UE score and the Functional Independence Measure scale (FIM) [25]. At the same evaluation times, for all of the patients, we determined the mean duration (MD) in seconds, the mean linear velocity (MLV) in cm/s, and the number of sub-movements (SM) in 36 motor trials organized into four tasks.
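Given sampled end-effector positions from the tracker, these measures can be computed along the following lines (a sketch; the sub-movement count here uses local maxima of the speed profile, a common operationalization that the chapter does not itself specify):

```python
import numpy as np

def kinematic_measures(positions, dt):
    """MD (seconds), MLV (length unit per second) and SM for one trial.

    positions: array of shape (N, 3), end-effector samples at period dt.
    """
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    md = dt * (len(positions) - 1)        # movement duration of the trial
    mlv = speed.mean()                    # mean linear velocity
    # Sub-movements: count local maxima of the speed profile.
    peaks = (speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:])
    sm = int(peaks.sum())
    return md, mlv, sm
```

A smooth, healthy reach produces a single speed peak (SM = 1), whereas fragmented post-stroke movements produce several, which is why SM is a useful marker of recovery.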
The patients' starting position was the same in all of the trials. The different orientations of the target (horizontal, vertical and diagonal on the subject's frontal plane) determined the complexity of the movement in terms of the muscle activations involved. The patients were randomly assigned to the two groups, and both groups underwent daily 1-hour treatment sessions, 5 days a week, for 4 weeks.
Analyzing the clinical variables (F-M UE and FIM), we found statistically significant within-group improvements in both groups for the F-M UE (p < 0.001 and p < 0.016, respectively) and for the FIM (p < 0.001 and p < 0.009, respectively). Robust regression analysis revealed that the F-M UE values after treatment were systematically higher in the RFVE patients than in the CT subjects (β = -4.26, p < 0.005). We observed the same result for the FIM values after treatment (β = -4.59, p < 0.02). The kinematic parameters (MD, MLV, SM) changed significantly after treatment only in the experimental group (p = 0.01, p < 0.001 and p = 0.02, respectively), in contrast to the control subjects (p = 0.18, 0.11 and 0.15, respectively). Finally, none of the patients who underwent the RFVE therapy complained of any discomfort due to interaction with the virtual world, such as cybersickness, altered eye-motor coordination or postural disequilibrium, demonstrating that this VR application is safe for neurological patients.
Our results confirm that late therapy may improve motor performance, as suggested in other studies using different rehabilitation techniques [26, 27]. The kinematic results were coherent with the RFVE rationale, which is based on the amplification of kinematic feedback to promote motor recovery; furthermore, the improvement in motor performance occurred concurrently with the kinematic changes. In our opinion, the better results achieved with the RFVE treatment stem from the rationale of the VR-based technique, which exploits motor learning mechanisms.

2. Conclusion

In our VR setting, patients were given information about their arm movements during the performance of motor skills (KP), consisting of the representation of their end-effector and of the "virtual teacher" movement, which showed the actual kinematics of the hand path in order to practice "learning by imitation". The teacher, like other relevant feedback, creates an ideal environment in which to implement new predictors or to modify disrupted forward models. These mechanisms are developed by means of amplification of the actual state. On the other hand, new or better controllers can be developed by means of the different sensorimotor contexts presented in each scenario, such as graphic models that reproduce the objects' visual appearance and give coherent contextual information. Furthermore, the instructions imparted by the therapist during the experimental procedure and the virtual representation of the correct movement contributed to providing information about motor performance, thereby exploiting so-called "supervised learning". Moreover, the objects' trajectories displayed on-screen allowed patients to evaluate the accuracy of their movements (KR), thereby promoting the identification of successful motor strategies through the "trial and error" paradigm. A second kind of KR provided to patients was a reward delivered when the task performance score surpassed a pre-established threshold. These two phenomena contributed to generating the basis for the "reinforcement learning" mechanism.
In our experience, the synergistic activity of supervised learning, reinforcement learning and learning by imitation facilitates the faster development of the kinematic internal models essential for motor learning. The opportunity to supply patients with a measurement of their motor performance generated a self-competitive stimulus to progressively improve the correctness of arm trajectories session by session. This aspect, combined with the novelty and originality of the VR-based therapy, motivated the patients to participate enthusiastically in the rehabilitation sessions.

References

[1] J.H. Chesebro, V. Fuster, J.L. Halperin, Atrial fibrillation-Risk marker for stroke, The New England
Journal of Medicine 323 (1990), 1556-1558.
[2] R.D. Abbott, Y. Yin, D.M. Reed, et al., Risk of stroke in male cigarette smokers, The New England
Journal of Medicine 315 (1986), 717-720.
[3] M.L. Olsen, Autoimmune disease and stroke, Stroke (4) (1992), 13-16.
[4] B. Dobkin, Neurologic rehabilitation, Contemporary Neurology Series, 1995.
[5] Oxfordshire Community Stroke Project, Incidence of stroke in Oxfordshire: first year of experience of
a community stroke register, British Medical Journal 287 (1983), 713-717.
[6] H. Ellekjaer, J. Holmen, B. Indredavik, and A. Terent, Epidemiology of stroke in Innherred, Norway, 1994 to 1996. Incidence and 30-day case-fatality rate, Stroke (1997), 2180-2184.
[7] PORT Study, Duke University Medical Center, Durham, NC, 1994.
[8] R.W.V. Flynn, R.S.M. MacWalter, A.S.F. Doney, The cost of cerebral ischaemia, Neuropharmacology
55 (3) (2008), 250-266.
[9] D.M. Feeney, A.M. De Smet, S. Rai, Noradrenergic modulation of hemiplegia: facilitation and
maintenance of recovery, Restorative Neurology and Neuroscience 22 (2004), 175-190.
[10] L.B. Goldstein, Neurotransmitters and motor activity: effects on functional recovery after brain injury,
NeuroRx 3 (2006), 451-457.
[11] C. Padoa-Schioppa, Li Chiang-Shan Ray, and E. Bizzi, Neuronal activity in the supplementary motor
area of monkeys adapting to a new dynamic environment, Journal of Neurophysiology 91 (2004), 449-
473.
[12] K. Doya, What are the computations of the cerebellum, the basal ganglia and the cerebral cortex?,
Neural networks 12 (1999), 961-974.
[13] K. Doya, Complementary roles of basal ganglia and cerebellum in learning and motor control, Current
Opinion in Neurobiology 10 (6) (2000), 732-739.
[14] E. Bizzi, F.A. Mussa-Ivaldi, S. Giszter, Computations underlying the execution of movement: a
biological perspective, Science 253 (1991), 287-291.
[15] T. Poggio, E. Bizzi, Generalization in vision and motor control, Nature 431 (2004), 768-774.
[16] F.A. Mussa-Ivaldi, in Proc. 1997 IEEE Int. Symp. on Computational Intelligence in Robotics and Automation, IEEE Computer Society, Los Alamitos, California, 1997, 84-90.
[17] N. Shelton, M.J. Reding, Effect of lesion location on upper limb motor recovery after stroke, Stroke 32
(1) (2001), 107-112.
[18] C.E. Han, M.A. Arbib, N. Schweighofer, Stroke rehabilitation reaches a threshold, PLoS Computational
Biology 4(8) (2008), e1000133.
[19] M.L. Aisen, H.I. Krebs, N. Hogan, F. McDowell, and B.T. Volpe, The effect of Robot assisted therapy
and rehabilitative training on motor recovery following stroke, Archives of Neurology 54 (1997), 443-
446.
[20] E. Todorov, R. Shadmehr, and E. Bizzi, Augmented feedback presented in a virtual environment
accelerates learning of a difficult motor task, Journal Motor Behavior 29 (1997), 147-158.
[21] P. Morasso, Spatial control of arm movements, Experimental Brain Research 42 (1981), 223-227.
[22] W. Abend, E. Bizzi, and P. Morasso, Human arm trajectory formation, Brain 105 (1982), 331-348.
[23] Stan C.A.M. Gielen, Movement dynamics, Current Opinion in Neurobiology 3 (6) (1993), 912-916.
[24] A.R. Fugl-Meyer, L. Jaasko, I. Leyman, S. Olsson, S. Steglind, The post-stroke hemiplegic patient. 1. A method for evaluation of physical performance, Scandinavian Journal of Rehabilitation Medicine 7 (1) (1975), 13-31.
[25] A.E. Russon, Learning by imitation: a hierarchical approach, Behavioral and Brain Sciences 21 (5)
(1998), 667-684.
[26] R. Keith, C. Granger, B. Hamilton, G. Sherwin, The functional independence measure: a new tool for rehabilitation, in M.G. Eisenburg, R.C. Grzesiak (Eds), Advances in Clinical Rehabilitation, Springer Publishing Co., New York, 1987, pp. 6-18.
[27] M. Dam, P. Tonin, S. Casson, et al., The effects of long-term rehabilitation therapy on poststroke
hemiplegic patients, Stroke 24 (8) (1993), 1186-1191.
[28] P.T. Tangeman, D.A. Banaitis, A.K.Williams, Rehabilitation of chronic stroke patients: changes in
functional performance, Archives of Physical Medicine and Rehabilitation 71 (11) (1990), 876-880.
The Rehabilitation Gaming System:
a Review
Mónica S. CAMEIRÃO a, Sergi BERMÚDEZ i BADIA a,
Esther DUARTE OLLER b and Paul F.M.J. VERSCHURE a,c
a Laboratory for Synthetic, Perceptive, Emotive and Cognitive Systems, Institut Universitari de l’Audiovisual (IUA), Universitat Pompeu Fabra, Barcelona, Spain
b Servei de Medicina Física i Rehabilitació, Hospital de L’Esperança, Barcelona, Spain
c Institució Catalana de Recerca i Estudis Avançats, ICREA, Barcelona, Spain

Abstract. Stroke will become one of the main burdens of disease and loss of quality of life in the near future. However, we have not yet found rehabilitation approaches that can scale up to face this challenge. Virtual reality based therapy systems hold great promise for directly addressing it. Here we review different approaches that are based on this technology, their assumptions and their clinical impact. We will focus on virtual reality based rehabilitation systems that combine hypotheses on the aftermath of stroke with the neuronal mechanisms of recovery. In particular we will analyze the so-called Rehabilitation Gaming System (RGS), which proposes the use of non-invasive multi-modal stimulation to activate intact neuronal systems that provide direct stimulation to motor areas affected by brain lesions. The RGS is designed to engage patients in task-specific training scenarios that adapt to their performance, allowing for individualized training of graded difficulty and complexity. Although the RGS stands for a generic rehabilitative approach, it has been specifically tested for the rehabilitation of motor deficits of the upper extremities of stroke patients. In this chapter we review the main foundations and properties of the RGS, and report on the major findings extracted from studies with healthy and stroke subjects. We show that the RGS captures qualitative and quantitative data on motor deficits, and that these transfer between real and VR tasks. Additionally, we show how the RGS uses the detailed assessment of the kinematics and performance of stroke patients to individualize the treatment. Subsequently, we discuss how real-time physiology can be used to provide additional measures to assess task difficulty and subject engagement. Finally, we report on preliminary results of an ongoing longitudinal study on acute stroke patients.

Keywords. Virtual reality, stroke, acute phase, rehabilitation, cortical plasticity, gaming, individualized training.

Introduction

In the last decade there has been a growing interest in the use of Virtual Reality (VR) based methods for the rehabilitation of cognitive and motor deficits after lesions to the nervous system. Stroke patients have become one of the main target populations for these new rehabilitative methods (see [1, 3] for reviews). This is because stroke is one of the major causes of adult disability worldwide [4], with restoration of normal motor function in the hemiplegic upper limb observed in less than 15% of patients with initial paralysis [5]. This has a strong impact on the degree of independence of these patients and is associated with high societal costs. In addition, we should take into account the psychological impact, as many of these patients lapse into depression [6].
Rehabilitation following stroke focuses on maximizing the restoration of the lost
motor functions and on relearning skills for the performance of the activities of daily
living (ADLs). Most of the newest rehabilitation techniques rely on the fact that motor
function can be recovered through cortical plasticity [7, 9]. The ability of the brain to reorganize itself after an injury has been observed as a remapping of the areas surrounding the lesion [10] and, in other cases, as a functional shift to the contralateral hemisphere [11]. To maximize brain plasticity, several rehabilitation
strategies have been proposed that rely on a putative promotion of activity within
surviving motor networks (see [12] for review). Among those strategies we can find
intensive rehabilitation [13], repetitive motor training [14, 15], techniques directed
towards specific deficits of the patients [16], mirror therapy [17], constraint-induced
movement therapy [18], motor imagery [19], action observation [20], etc.
More recently, growing evidence has emerged of the positive impact of virtual reality techniques on recovery following stroke [1, 2]. These systems allow for the integration of several of the above-mentioned rehabilitation strategies. Different paradigms have been used, which can be grouped into categories: learning by imitation [21, 22], reinforced feedback [23, 24], haptic feedback [25, 26], augmented practice and repetition [27, 28], video capture virtual reality [29], exoskeletons [30, 31], mental practice [32], and action execution/observation [33, 35]. The major findings of these studies show that virtual reality technologies will become an increasingly essential ingredient in the treatment of stroke and other disorders of the nervous system. Indeed, with VR we can have well-controlled training protocols within specifically defined interactive scenarios that are customized to the needs of the patient. However, it is not yet clear which characteristics of these systems are effective for rehabilitation. Moreover, the quantification of the impact of these novel rehabilitation technologies on patients' recovery and well-being is in general still largely anecdotal. One problem is that most of the reported studies are performed on small numbers of chronic stroke patients [1, 2], although most cortical reorganization happens in the first few months after stroke [36, 38]. Since plasticity is a requirement for functional recovery, intervention at the early stages of stroke should be pursued more vigorously.
The Rehabilitation Gaming System (RGS) is a VR based system targeted at the induction and enhancement of functional recovery after lesions to the nervous system using non-invasive multi-modal stimulation. Currently, the RGS is being tested in the context of the rehabilitation of motor deficits of the upper extremities after stroke. The RGS assumes that neuronal plasticity is a permanent feature of the CNS and that conditions for recovery can be induced by activating areas of the brain that are affected by a lesion through the use of non-invasive multi-modal stimulation. In the specific case of the rehabilitation of motor deficits after stroke, the working hypothesis of the RGS is that action execution combined with the observation of correlated movements in a virtual environment may activate undamaged primary or secondary motor areas, recruiting alternative networks that improve the conditions for functional reorganization. Indeed, it has been shown that VR stimulation can activate these motor areas [39]. One candidate network that can provide the interface between multi-modal stimulation and motor execution is the so-called mirror neuron system [40, 42]. Mirror neurons have been shown to be active during the execution of goal-oriented hand, foot and mouth movements and also during the observation of these movements when performed by others. This implies that we can recruit this system not only during action execution but also during the observation of actions. We will analyze in detail the impact of the RGS on acute stroke patients [33, 35].
The RGS provides VR tasks performed in a first-person perspective where users
control the movement of two virtual arms with their own arm movements. The choice
of the first-person perspective relies on the fact that it has been shown that the
observation of hand movements produces an increase in cortical excitability modulated
by the orientation with respect to the observer [43]. In particular, those experiments
showed stronger responses when both the orientation of the hand and the orientation of
the observer coincided. This suggests that a first-person perspective could be more
effective than a third-person perspective in driving cortical activation during
performance of virtual tasks. Furthermore, the first-person perspective can recruit the
motor system to a greater extent and allows for the integration of kinesthetic
information [44] that can result in a higher degree of identification with the virtual
representation and thus in a more effective functional reorganization.
In addition to the above-described neuronal principles, the RGS incorporates a
number of features that make it well suited for rehabilitation. It proposes
tasks of graded complexity and difficulty. The varying complexity of the tasks allows
the user to re-construct the different elements of instrumental activities of daily living
(IADL) from overall stability to precision movements. In addition, the RGS controls
the individual task performance using a psychometric model of the training scenario.
This model is derived from game performance data obtained from a group of both
patients and healthy control subjects. As a result, the RGS can adapt the difficulty of
the task to the capabilities of the individual user, providing individualized training
while following a single rule for all users. This is relevant, since it reduces the effect of
external uncontrolled influences in choosing game parameters and training protocols,
eliminating important sources of error. For the same reason, the game instructions are
automatically provided by the system in written and auditory form.
In this chapter, we will analyze the RGS as a general strategy for functional
rehabilitation after lesions to the CNS. Our specific examples will be taken from results
of a number of pilot studies we have performed with stroke patients that address issues
such as the transfer of movements, deficits and training from real to virtual
environments. We will assess the validity of the psychometric difficulty model
implemented in the system by investigating the affective responses of the users.
Additionally, we will report on preliminary results of an ongoing clinical study with
acute stroke patients. Here the diagnostic and monitoring capabilities of the RGS will
be discussed as well as the effect of the training paradigm compared to the performance
of two control groups. Finally, we will try to extract which general principles are
behind the impact of the RGS.
Figure 1. The Rehabilitation Gaming System (RGS). The subject, resting the arms on a table surface, faces
a computer screen. The movements of the arms are visually captured by a camera positioned on top of the
display that detects color patches located on wrists and elbows. A pair of data gloves measure finger flexure.
An avatar moving according to the movements of the user performs a task in the virtual scenario. Adapted
from Cameirão et al. [1].

1. Methods

1.1. The Rehabilitation Gaming System

The RGS has been designed with standard, off-the-shelf and inexpensive components
so that stroke patients can have the system at home for further training and monitoring
after discharge from the hospital.
The RGS consists of a standard PC (Intel’s Core 2 Duo Processor, Santa Clara,
California, USA) running the Linux operating system, a 3D graphics accelerator
(nVidia GeForce Go 7300, Santa Clara, California, USA), a 19” 4:3 LCD monitor, a
video camera (VGA) and a pair of data gloves (5DT, Pretoria, South Africa) (Figure 1).
Arm movements are tracked by means of a vision based tracking system (AnTS) that
detects color patches located on the wrists and elbows (see section 1.2). The finger
flexion is captured by optic fiber data gloves that integrate seamlessly with our system
via a USB connection. The lycra textile of the gloves adapts to a wide range of hand
sizes and offers little resistance to finger bending. As many patients do not have the
ability to support their arms against gravity, the task is purposely in 2D and is
performed on a table surface.

1.2. AnTS

A vision based tracking system called AnTS has been adapted to track the movements
of the arms of patients during the training period at an update frequency of 35 Hz [45,
46]. The basic processing stream of AnTS starts with the acquisition of images from
the video camera that is placed on top of a computer screen (Figure 1). In this case, the
goal is the reconstruction of arm motion. Therefore, the head of the subject and a set of
color patches positioned on elbows and wrists are tracked. To locate those objects, a set
of noise filtering and image segmentation techniques based on color, shape and size
features are used. Color detection is performed by transforming the Red, Green and
Blue (RGB) data of the input images to the Hue, Saturation and Value (HSV) color
space, which encodes more robustly the identity of colors in dynamic environments
(changing light conditions, shadows, etc). Thereafter, Bayesian inference techniques
are used to locate the center of mass of objects using a model based on the Hue value,
velocity vector, object size and position that improves performance during occlusions
and target loss [46]. The position of the head and the color patches is subsequently fed
to a biomechanically constrained model of the upper body, and the joint angles are
computed. The biomechanical model imposes restrictions on the possible joint angles
and allows for a 3D approximation of arm movements using a single camera setup.
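The hue-based patch segmentation described above can be illustrated with a minimal sketch (this is not the AnTS implementation; the function name, thresholds and toy frame are ours):

```python
import colorsys

def find_patch(pixels, target_hue, tol=0.05):
    """Locate the centre of mass of a colour patch in an RGB frame.

    pixels: list of rows, each a list of (r, g, b) tuples in 0..255.
    target_hue: hue in [0, 1) identifying the patch colour.
    Returns the (x, y) centroid of matching pixels, or None if none match.
    """
    xs, ys = [], []
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            # Require some saturation and value so grey/dark pixels are ignored,
            # then match on hue, which is more robust to changing light conditions.
            hue_dist = min(abs(h - target_hue), 1 - abs(h - target_hue))
            if s > 0.3 and v > 0.2 and hue_dist < tol:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Tiny synthetic frame: a red patch (hue ~ 0) on a grey background.
frame = [[(128, 128, 128)] * 4 for _ in range(4)]
frame[1][2] = (255, 0, 0)
frame[2][2] = (255, 0, 0)
print(find_patch(frame, target_hue=0.0))  # -> (2.0, 1.5)
```

In the real system this per-patch centroid would feed the Bayesian tracker and the biomechanical model; the sketch only shows why the HSV transform helps, since hue stays stable when brightness changes.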

1.3. The Environment

The Torque Game Engine (www.garagegames.com), a popular, versatile and multi-platform
3D engine, has been chosen to implement the VR tasks. Torque provides both
a 3D rendering engine and a physics engine that allow the generation of high-resolution,
realistic VR scenarios.
Our environment consists of a spring-like natural highland where the user interacts
in a first-person perspective. A human avatar is rendered in the world in such a way that
only its arms are displayed on the screen. The joint angles captured by the tracking
system and the finger flexure provided by the data gloves are mapped to the
corresponding joints of the avatar skeleton. In this way, the user observes on the screen
two virtual arms that move according to his/her own movements.

1.4. Tasks

The task protocol consists of three different stages: two stages of calibration (see
below) and the training game. Both calibration phases allow measuring the properties
of movements in the real and virtual worlds, making possible the analysis of transfer
between both worlds. The training game is the core task of the RGS intervention, and it
deploys an exercise that is individualized for each subject depending on his/her
performance. All the intervention tasks provide automated written and auditory
instructions to minimize the influence of the human operator.

1.4.1. Real Calibration


The real calibration consists of performing a set of motor actions starting from a resting
position, i.e. positioning the palm of the hand on a randomized sequence of numbered
positions on the table surface (Figure 2, left panel). The patient receives auditory and
written instructions during the process, which lasts approximately 2 minutes. This task
allows recording, at every session, basic properties of arm movement such as speed,
reaching distance, precision and reaction time.
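The movement properties extracted from the calibration recordings can be computed from timestamped hand positions roughly as follows (a sketch; the function name, units and sample data are ours):

```python
import math

def movement_metrics(samples):
    """Derive basic movement properties from timestamped 2D hand positions.

    samples: list of (t, x, y) tuples (seconds, metres), starting at rest.
    Returns (mean_speed, reach), where reach is the farthest distance
    travelled from the resting (first) position.
    """
    t0, x0, y0 = samples[0]
    speeds, reach = [], 0.0
    for (t1, x1, y1), (t2, x2, y2) in zip(samples, samples[1:]):
        step = math.hypot(x2 - x1, y2 - y1)
        speeds.append(step / (t2 - t1))          # instantaneous speed (m/s)
        reach = max(reach, math.hypot(x2 - x0, y2 - y0))
    return sum(speeds) / len(speeds), reach

# Hypothetical samples: hand moving 0.4 m outward in 1 s.
samples = [(0.0, 0.0, 0.0), (0.5, 0.2, 0.0), (1.0, 0.4, 0.0)]
print(movement_metrics(samples))  # -> (0.4, 0.4)
```

Reaction time can be obtained in the same loop by noting the first timestamp at which speed exceeds a small threshold after an instruction is given.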

1.4.2. Virtual Calibration


The user is asked to perform the same randomized sequence as in the real calibration
task but this time using the virtual arms and a virtual replica of the table displayed on
the screen (Figure 2, right panel).
Figure 2. Real and virtual calibration phases. Left panel: on the table surface, numbered dots are located at
specific positions on the left and right hand sides. The user is asked to place the palm of his/her hand on the
numbered dots in a randomized order. Right panel: the same setup is replicated in the virtual environment
and the user is asked to perform the same task with the virtual arms. The figure text reads “Place your right
hand palm above the number 2 and wait…”.
To prevent the patients from using the numbered positions on the physical tabletop as
external cues, the table surface is covered during this phase. This calibration phase
allows a comparison of how the movements of the real calibration phase are
performed in a virtual world. Together with the analysis of real-to-virtual movement
transfer, the main role of the virtual calibration is to set, each day, the starting game
parameters of the training task.

1.4.3. Training
The main task of the user is to intercept spheres that are flying towards him/her by
hitting them with his/her virtual arms (‘Hitting’). We have purposefully taken a
relatively constrained task since it allows us to fully control all aspects of the training
scenario and understand its impact on recovery. The difficulty of the task is determined
by three gaming parameters: the speed of the spheres, the time interval between
consecutive spheres and the range of dispersion of the spheres. When the game starts,
the difficulty baseline is set by using the parameters measured during the virtual
calibration phase. The system automatically updates the task difficulty during the
game, depending on the performance of the subject. To be able to adjust the difficulty
level in an objective fashion, a difficulty model was developed based on experimental
data on the performance of stroke patients with random game parameters. With such a
model, the parameters are continuously adapted to keep the performance level at
around 70%, keeping patients at a challenging difficulty level but within their
capabilities to sustain motivation.
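The closed-loop difficulty adjustment described above can be sketched with a simple proportional controller (a stand-in for the actual psychometric model, which is derived from patient and control performance data; the class and parameter names are ours):

```python
from collections import deque

class DifficultyController:
    """Keep the hit rate near a target by nudging a scalar difficulty in [0, 1].

    A simplified stand-in for the RGS psychometric model: in the real system
    the difficulty value maps onto sphere speed, inter-sphere interval and
    dispersion range.
    """
    def __init__(self, target=0.7, gain=0.02, window=20, level=0.5):
        self.target = target                  # desired success rate (~70%)
        self.gain = gain                      # adaptation step size
        self.results = deque(maxlen=window)   # recent hit/miss outcomes
        self.level = level

    def update(self, hit):
        self.results.append(1 if hit else 0)
        rate = sum(self.results) / len(self.results)
        # Performing above target -> raise difficulty; below -> lower it.
        self.level = min(1.0, max(0.0, self.level + self.gain * (rate - self.target)))
        return self.level

ctrl = DifficultyController()
for _ in range(10):
    ctrl.update(True)          # a run of successful interceptions
print(round(ctrl.level, 3))    # -> 0.56 (difficulty drifts upward from 0.5)
```

Keeping the controller's set point at 0.7 reproduces the design goal stated above: the task stays challenging but within the patient's capabilities.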
Starting from the ‘Hitting’ task, the RGS sequentially introduces tasks of graded
difficulty that require movement execution with increasing complexity and scoring,
ranging from arm extension/flexion to a coordination task that combines arm
movement with grasp and release (‘Hitting’, ‘Grasping’ and ‘Placing’) (Figure 3).
Figure 3. The 3 RGS training tasks of graded complexity. Left panel: ‘Hitting’ to train range of movement,
movement speed, and arm and shoulder stability. The approaching virtual spheres have to be intercepted with
the movements of the virtual arms. Middle panel: ‘Grasping’ to exercise finger flexure on top of movement
range, speed, and arm and shoulder stability. Now, the intercepted spheres can be grasped by flexing the
fingers. Right panel: ‘Placing’ to train not only grasp but also release. The grasped spheres can now be
released in the basket of the corresponding color. Adapted from Cameirão et al. [33].
First, the initially described ‘Hitting’ task consists of intercepting approaching
spheres. Each successful interception adds 10 points to an accumulated score. Second, in the
‘Grasping’ task, the intercepted spheres have to be simultaneously grasped through
finger flexure. A correct execution adds 20 points to the game score. Finally, in the
‘Placing’ task, the spheres have to be grasped and then released in a basket that
matches their color. Here, a correct grasp and a subsequent release add 20 and 10
points, respectively, to the accumulated score. During the game, visual and sound effects
provide online feedback on the performance of the subject.
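The scoring rules can be summarized in a small sketch (the event names are ours; the point values follow those stated above):

```python
# Point values per scored game event, as described for the three tasks.
POINTS = {
    "hit": 10,            # 'Hitting': successful interception
    "grasp": 20,          # 'Grasping': interception plus finger flexure
    "place_grasp": 20,    # 'Placing': correct grasp...
    "place_release": 10,  # ...followed by release into the matching basket
}

def score(events):
    """Accumulate the game score from a list of scored events."""
    return sum(POINTS[e] for e in events)

print(score(["hit", "hit", "grasp"]))  # -> 40
```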

1.5. Physiology

The Yerkes-Dodson law specifies that there is an optimal relationship between arousal
and performance [47]. Hence, the RGS paradigm aims at modulating the task difficulty
with respect to the arousal of the subject. In order to achieve this, the RGS capitalizes
on the availability of portable real-time physiology systems. Moreover, this will allow
us to analyze if specific game events trigger changes in the affective state of the
subjects. Heart rate (HR), heart rate variability (HRV), and galvanic skin response
(GSR) are measures that are widely used to address this question. Specifically, it has
been described that HRV can reflect the valence of stimuli [48, 49] and that the GSR
directly relates to stress and arousal [50].
The g.MOBIlab (www.gtec.at) signal acquisition system has been integrated with
the RGS. The g.MOBIlab system allows signal visualization, data logging and online
biosignal processing. We acquired single channel Electrocardiograms (ECG) and GSR
at a sampling frequency of 256 Hz. To measure physiological responses to single
events, the training game was restricted to the ‘Hitting’ task configured in such a way
that every 10 seconds a single sphere was delivered at a random position of the screen
at high speed. From the point of insertion, the sphere took approximately 2 seconds to
reach the avatar. During these experiments, the beginning of the task was preceded by a
resting period of 15 seconds. 31 spheres were delivered during the experiment, which
resulted in a total duration of 320 seconds. HR and GSR have also been recorded while
performing the ‘Hitting’ task with a random combination of game parameters, resulting
in a random difficulty level.
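How HR and HRV summary measures might be derived from the detected R-peaks of the single-channel ECG can be sketched as follows (a simplified illustration, not the actual g.MOBIlab processing chain; the peak times are hypothetical):

```python
def heart_rate(r_peaks):
    """Mean heart rate (bpm) and a simple HRV measure (SDNN, ms)
    from a list of R-peak times in seconds."""
    rr = [b - a for a, b in zip(r_peaks, r_peaks[1:])]   # R-R intervals (s)
    mean_rr = sum(rr) / len(rr)
    hr = 60.0 / mean_rr                                   # beats per minute
    # SDNN: standard deviation of the R-R intervals, in milliseconds.
    sdnn = (sum((x - mean_rr) ** 2 for x in rr) / len(rr)) ** 0.5 * 1000.0
    return hr, sdnn

peaks = [0.0, 1.0, 2.1, 3.0, 4.0]   # hypothetical R-peak times (s)
hr, sdnn = heart_rate(peaks)
print(round(hr, 1))  # -> 60.0
```

R-peak detection itself (e.g. thresholding the 256 Hz ECG) is omitted here; the sketch only shows the step from peak times to the HR and HRV quantities analyzed later in the chapter.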
1.6. Clinical Study

In order to investigate the impact of the RGS in the early stages after stroke, a
randomized longitudinal study with controls is being conducted at the “Hospital de
L’Esperança” in Barcelona, Spain. The selected patients are within the first 3 weeks
post-stroke (acute/sub-acute stroke), presenting a first time stroke, with a severe to mild
deficit of the paretic upper extremity (2≤MRC≤4 [51]), showing no aphasia or other
cognitive deficits (assessed by the Mini Mental State Examination [52]) and age ≤ 80.
Patients are randomly assigned to one of three groups: RGS and two control
groups (Control A or Control B). Patients in the RGS group perform the three tasks of
the system (‘Hitting’, ‘Grasping’ and ‘Placing’) that are gradually introduced during
the intervention period. Patients assigned to Control A group perform the same type of
motor tasks (range of movement, grasping and object manipulation) as required by the
RGS but without the virtual feedback. This group controls for the effect of the first-
person perspective VR feedback. Finally, to control for the gaming effect and
motivational aspects, patients in the Control B group perform non-specific games, such
as Big Brain Academy® and Trivial Pursuit®, with the Nintendo Wii (Nintendo, Kyoto, Japan).
Intervention for all groups has a duration of 12 weeks plus a 12-week follow-up period
(Figure 4), with 3 weekly sessions of 20 minutes. The patients are evaluated at
admittance, week 5, week 12 (end of the treatment) and week 24 (12 weeks follow-up).
To have objective and quantitative arm movement data for an inter-group
comparison, patients from both control groups also perform the RGS real calibration
task once per week (see section 1.4.1).
The standard clinical evaluation scales for motor and function assessment that we
use include the Functional Independence Measure [53] and Barthel Index [54] for
outcome assessment; the Motricity Index [55] for the arm and the Fugl-Meyer
Assessment Test [56] for upper extremities for assessment of motor and joint
functioning; the Chedoke Arm and Hand Activity Inventory [57] for the functional
assessment of the recovering paretic upper limb; and the 9-hole Pegboard Test [58] for
the assessment of finger dexterity and coordination.

2. Results

2.1. Real vs Virtual

A crucial aspect of our research, and consequently of the possible benefits it can
provide to users, is to understand the responses of patients to these new VR
technologies and the correspondence between task execution in real and virtual worlds.

Figure 4. Timeline of the study. The intervention period has a duration of 12 weeks plus a 12-week follow-up
period. The clinical evaluation of the patients is performed at several stages of the process.
Therefore, an analysis of how movements are transferred to the virtual world when
performing the same task as in reality is pivotal. These issues were addressed in a pilot
study with 6 naïve right handed stroke patients with left hemiparesis, mean age of 61
years (range 32-74), Brunnstrom Stage for upper extremity ranging from II to V [59],
and Barthel Index from 36 to 72 [35]. These naïve patients performed single trials of
the real and virtual calibration tasks (see section 1.4). Out of these 6 patients, two were
excluded from the analysis since they did not complete the execution of the real and/or
virtual tasks within the given time. From the real and virtual tasks we extracted
reaching distance and the speed information from the movements. The reaching
distance is measured as the farthest position the patients were able to reach from the
resting position, and the speed is computed as the mean speed of all the movement
sequences performed by each arm individually.
The measurements performed during the real calibration phase show that the task
is a valid method to quantitatively analyze the performance differences between paretic
and non-paretic arms. Thus, the calibration task is well suited to evaluate and monitor
the evolution of patients over sessions, independent of the specific training they are
exposed to (Figure 5). In addition, this allows for a direct comparison between the
performance in both real and virtual tasks.
The results for the real and virtual tasks show that the behavior in the virtual
environment is consistent with the one in the real world. This means that the RGS is
able to assess from both tasks the degree of impairment. This is measured in both the
reaching distance (Figure 5, left panel) and the movement speed, with the only
difference that naïve patients display slower movements in the VR environment for
both the paretic and healthy arms (Figure 5, right panel). This could be due to an
adaptation effect to the virtual environment. Nevertheless, the relative differences
between paretic and non-paretic arms are conserved in real and virtual worlds, meaning
that motor deficits are transferred (Figure 5, right panel). This strongly suggests that
improvements measured within the RGS virtual tasks will translate to measurable
improvements in real world tasks.

Figure 5. Real vs virtual reaching distance and speed of the movements. Left panel: maximum reaching distance
across patients for the paretic and non-paretic arms in real (up) and virtual (bottom) worlds. Adapted from
Cameirão et al. [35]. Right panel: mean speed for the paretic and non-paretic arms of all the patients in real and
virtual worlds. Vertical bars indicate the standard deviation.
2.2. Game Data Analysis

In addition to the data extracted from the calibration phases, the gaming scenario of the
RGS provides data about the movements of the arms of the patients synchronized with
all the game events that take place during the 20 minutes of training. At the end of
every single RGS session, all the data of the goal oriented motor actions (i.e. ‘Hitting’,
‘Grasping’ and ‘Placing’) are available for each patient (Figure 6).
Since all data related to arm movement, such as joint angles and finger flexure, are
stored during the performance of the game, a number of performance indicators can
be measured for each trial. These include, among others, the precision and accuracy of the
actions, speed and reaching distance.
As opposed to the calibration tasks, where only a few repetitions of a task are
performed in order to estimate some performance values, the training task of the RGS
offers approximately 300 repetitions per session. Consequently, more robust and
accurate data analysis can be realized which displays properties of the performed motor
actions, otherwise unlikely to be detected. In particular, the analysis of the RGS
training data provides information about the distribution of the caught and missed
sphere events, and the accuracy of the actions, i.e., the error distribution for both
paretic and non-paretic arms. Taking again as an example the case of patient 2, we can
observe that most of the spheres were missed on the farthest region of the left side
(Figure 7, left top panel). In addition, if we analyze the precision in touching the
spheres, the left hand is less accurate (Figure 7, right top panel), presenting a more
widely spread error distribution (Figure 7, bottom panels). These results point out the
importance of a high-resolution monitoring system to complement standard clinical
scales, which generally lack detailed quantitative information on motor performance.

Figure 6. Example of recorded time stamped game event data for patient 2. This plot shows over time the
position of both the left (blue line) and right hand (red line), and events (touched and missed spheres) during a
trial. The patient, with left hemiplegia, shows a reduced reaching distance and a higher number of missed
spheres with the paretic arm. Adapted from Cameirão et al. [60].
Figure 7. Example of game performance analysis with patient 2 (left hemiplegia). Top left panel: histogram
of game events (caught and missed spheres and their position in the field). Top right panel: error in sphere
interception for both arms. The bar denotes the median error, and the error bar the standard deviation.
Bottom left panel: sphere interception error histogram of the left arm (paretic). Bottom right panel: sphere
interception error histogram of the right arm (non-paretic). Adapted from Cameirão et al. [35].

2.3. Physiological Measures

An important foundation of the RGS system is that the motor actions performed in the
real and virtual worlds are equivalent, and therefore the results of the training in a VR
scenario can be generalized to the real world [35]. This means that subjects training
with our system should react to game events as if they were real. One way to study this
effect is by recording the physiological responses of subjects during the training phase
of the game. Hence, assuming that both Galvanic Skin Response (GSR) and Heart Rate
(HR) signals relate to the internal state of subjects, these data can be used to assess the
impact of the different game events and parameters for each individual.
In a first setup, to analyze the physiological responses to single game events during
the ‘Hitting’ task, we measured the skin conductance level (SCL) and extracted the
phasic responses (GSR) [61]. The skin responses were investigated for 5 healthy
subjects that performed the game. The goal was to understand how the discrete game
events (touched or missed spheres) would affect the stress level. In this case, although
the results were not uniform for all subjects, a prototypical response pattern was found
for 3 of them. As opposed to the touched sphere events, which did not trigger an event
related potential, the missed spheres did (Figure 8). Missed spheres events led to an
increased GSR (arousal) during the approach of the sphere followed by a fast decay
after the sphere was missed and a return to baseline. Additionally, in 2 of the 3 subjects
that displayed this GSR response pattern, there was a significant difference between the
GSR response for touched and missed spheres after the event occurred (p-value < 0.05,
t-test).
Figure 8. Electrodermal response analysis. Top panel: skin conductance level (SCL) during a trial. Middle
panel: galvanic skin response (GSR) during the same trial. Bottom panel: average event centered GSR mean
response for missed (solid line) and touched (dashed line) spheres. The gray area indicates when a sphere is
approaching, 0 is the time of the event and δ the time between the event and the minimum of the GSR
signal. Adapted from Cameirão et al. [61].
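The event-centered averaging behind the bottom panel of Figure 8 can be illustrated with a minimal sketch (the function name and the toy signal are ours):

```python
def event_locked_average(signal, events, fs, pre, post):
    """Average a sampled signal in windows centred on event times.

    signal: list of samples; events: event times (s); fs: sampling rate (Hz);
    pre/post: window extent (s) before/after each event.
    Returns the sample-wise mean across all complete windows.
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    windows = []
    for t in events:
        i = int(t * fs)                       # sample index of the event
        if i - n_pre >= 0 and i + n_post < len(signal):
            windows.append(signal[i - n_pre:i + n_post + 1])
    # Average column-wise across windows.
    return [sum(col) / len(col) for col in zip(*windows)]

# Hypothetical 4 Hz signal with events at t = 1 s and t = 2 s.
sig = [0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
avg = event_locked_average(sig, events=[1.0, 2.0], fs=4, pre=0.25, post=0.25)
print(avg)  # -> [0.0, 1.0, 0.0]
```

Averaging the phasic GSR (or HR) in windows locked to touched-sphere and missed-sphere events separately is what allows the two mean response curves to be compared statistically, as in the t-test reported above.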
These results indicate that there is an arousal response prior to a missed-sphere event that
could potentially be used to predict when patients are likely to fail. It would therefore
be possible to use this biofeedback information in real time to modify game parameters
so as to keep performance and arousal at a desirable level.
In a second study with 5 healthy subjects, we investigated the HR game event
related changes and also the validity of our model of game difficulty. Interestingly,
when the subjects were exposed to a random combination of game parameters, we
found a correlation between the difficulty of the parameters and the HR. The difficulty
model, previously developed from experimental data of the performance of chronic
stroke patients with randomly changing game parameters, was now used to compute
the difficulty of each trial. The difficulty level, measured from 0 (easy) to 1 (hard), had
an impact on the measured HR for all subjects, relating low difficulty to lower stress
levels and higher difficulty to higher levels of stress (Figure 9, left panel).
In all subjects exposed to the task we could detect game-event-related responses in
either HR or HRV. Nevertheless, although significant HR or HRV changes were found
immediately before and after the event occurred, these were not consistent across
subjects (Figure 9, right panels). In addition, no differences were found between
event types (touched or missed spheres), both leading to comparable
physiological changes.

Figure 9. Heart rate event related responses. Left panel: example plot of the difficulty of the game trials vs
the measured Heart Rate (HR) for a healthy subject. The difficulty level (x-axis) is assessed by a difficulty
model based on data of stroke patients. The HR (y-axis) is measured in beats-per-minute (bpm). There is
a monotonically increasing relationship between difficulty of trials and HR response shown by the linear
regression of the data. Right panels: mean heart rate (HR) (top) and heart rate variability (HRV) (bottom)
responses with respect to the timing of game events (time = 0) (n=31) for a healthy subject. The event-related
responses (y-axis) are computed as the percentage change of HR or HRV measures. The blue curve
indicates the mean response and the green curves +/- standard deviation.

2.4. Clinical Study

At this moment, out of 76 stroke patients admitted to the hospital during a period of 8
months, 17 fulfilled our inclusion criteria. The patients were randomly assigned to
either the RGS group (n=7), the Control A group (n=4) or the Control B group (n=3).
This study was approved by the ethics committee of clinical research of the Instituto
Municipal de Asistencia Sanitaria (IMAS) and all the patients gave their signed
informed consent.

2.4.1. Monitoring and Movement Analysis


Thanks to the calibration phases performed at the start of every session, it is now
possible to quantify the evolution of movement speed, reaching distance and
other characteristics of the motor actions of the patients (Figure 10).
In particular, the different measures of the speed of movement can be fitted with a
linear regression, in which case the slope of the fit provides us with a measure of the
improvement over time. A positive slope indicates an increase of the speed of the
movements whereas a flat line indicates a stable measure. In the case of patient
ID.1407178, a stable movement speed is found for the non-paretic arm, around a value
of 2 m/s. Interestingly, starting around 1 m/s, the paretic arm regains speed over
sessions until matching the speed of the non-paretic arm (Figure 10, left panel).
Figure 10. Monitoring of patients using the calibration tasks. Left panel: evolution of the speed of
movements of the paretic and non-paretic arms over sessions for patient ID.1407178. The colored solid
lines correspond to the linear regressions of the data. Right panel: phase plot of the shoulder vs elbow
angles for patient ID.951736. The plot shows two distinct strategies used by the paretic and non-paretic
arms to reach the same distances.
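The per-session linear regression used to quantify improvement can be sketched as an ordinary least-squares slope (the function name and session data are hypothetical):

```python
def slope(xs, ys):
    """Least-squares slope of ys against xs (e.g. movement speed vs. session)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

sessions = [1, 2, 3, 4]
speeds = [1.0, 1.3, 1.6, 1.9]   # hypothetical paretic-arm speeds (m/s)
print(slope(sessions, speeds))  # positive slope -> speed regained over sessions
```

As in the text, a positive slope indicates an increase of movement speed over sessions, while a slope near zero indicates a stable measure, as observed for the non-paretic arm.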
A clear advantage of the RGS is that detailed information of the movements
performed by the patients is recorded. This allows detecting individual movement
strategies. For instance, we observe in patient ID.951736 an increased opening of the
elbow angle to compensate for a shoulder limitation (Figure 10, right panel).

2.4.2. Clinical Measures


Since all three groups of the clinical study perform the real calibration task of the RGS
at least once per week, it will be possible to compare the improvements of the different
groups at two levels. First, the RGS provides quantitative data about reaching distance,
movement speed, reaction times, etc. Second, clinical evaluations including 6
assessment scales (Functional Independence Measure, Barthel Index, Motricity Index,
Fugl-Meyer Assessment Test, Chedoke Arm and Hand Activity Inventory, Pegboard
test) are performed at four different stages of the study. In this section, we will
exclusively discuss the clinical scores.
Although the group sizes are small and the intra-group variability is large, we can
appreciate some tendencies in the different groups. Firstly, the RGS group shows a
smaller or similar mean absolute improvement from baseline to week 5 than the control
groups. This is true for all the measures except the CAHAI, for which the improvement
is slightly larger for the RGS group, although no statistical significances are found [33].
Secondly, for the second half of the treatment (from week 5 to week 12) a new trend
can be observed. In this case, the patients in the RGS group show a higher mean
increase in all their scores compared to both control groups (see [33] for further
information). However, these results should be interpreted with caution due to the
heterogeneity of the baselines at admission.
As a complementary source of information, the percentage of improvement of the
clinical scales with respect to baseline has been used to compensate for the differences
in the baseline at admission. Out of the 6 clinical scales, we focus on the clinical scales
directly related to the upper limb assessment, i.e. Motricity Index, Fugl-Meyer
Assessment Test (upper extremities) and the Chedoke Arm and Hand Activity
Inventory. The CAHAI and Motricity Index scores show a sustained and slightly larger
improvement for the RGS group during the training period (from baseline to week 12),
whereas this is not the case for the Fugl-Meyer [33]. Although some interesting trends
are observed in the clinical scores, at this point of the study the data are not conclusive
and there is a need to find a better measure to compare patients with different baselines.
Figure 11. Percentage of improvement in standard evaluation scales obtained at different stages - week 0
(admittance), week 5, week 12 (end of treatment) and week 24 (follow-up) - for two patients with similar
baseline measures. Top panel: Motricity Index for the upper extremity. Middle panel: Fugl-Meyer
Assessment Test for the upper extremity. Bottom panel: Chedoke Arm and Hand Activity Inventory.
As an example, here we show the data of the 2 patients with the closest scores at
admittance (1 RGS and 1 Control A) that completed the entire protocol. The patient in
the RGS group had the following scores at admittance: motor FIM = 28, Barthel Index = 39, Motricity Index = 34, Fugl-Meyer = 27 and CAHAI = 13. The patient in the Control A group had the following scores at admittance: motor FIM = 31, Barthel Index = 37, Motricity Index = 34, Fugl-Meyer = 24 and CAHAI = 13. The scores of the three previously discussed clinical scales, namely the Motricity Index, the Fugl-Meyer Assessment Test for upper extremities and the Chedoke Arm and Hand Activity Inventory (CAHAI), were used to perform an analysis of the percentage of improvement over time (Figure 11).
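The percentage-of-improvement normalization can be sketched as follows. The chapter does not give the exact formula used in [33]; a common choice, assumed here, expresses each follow-up score as a percentage of the improvement still available at baseline, using the known scale maximum (e.g. 66 points for the upper-extremity Fugl-Meyer).

```python
def pct_improvement(score, baseline, max_score):
    """Percentage of the improvement available at baseline that has
    been achieved at a given assessment. This normalization is an
    assumption; the exact formula of the study is not specified."""
    if max_score == baseline:
        return 0.0  # patient already at ceiling: no room to improve
    return 100.0 * (score - baseline) / (max_score - baseline)

# The RGS patient's Fugl-Meyer baseline was 27; the scale maximum is 66.
week0 = pct_improvement(27, 27, 66)    # 0% by definition at admittance
ceiling = pct_improvement(66, 27, 66)  # 100% would mean full recovery
```

Because both example patients start near 0% on this normalized axis, their trajectories in Figure 11 can be compared despite the raw baseline differences.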
Regarding the specific properties of the movements, evaluated by the Motricity Index and the Fugl-Meyer Assessment Test, the Control A patient presented a higher or similar improvement rate at week 5 but then stabilized over the rest of the study period; the patient in the RGS group, on the other hand, showed a smaller improvement rate at week 5 but sustained the improvement over the whole intervention period (Figure 11, top and middle panels). On the evaluation of the functionality of the paretic arm (CAHAI), the patient in the RGS group presented a trend similar to the one observed for the other measures, but with accentuated differences when compared to the Control A patient (Figure 11, bottom panel). This measure is particularly relevant because it directly evaluates the active use of the paretic arm in the performance of daily living activities.
3. Discussion and Conclusions

In this review of virtual reality based rehabilitation systems we have given an overview
of the different approaches and focused specifically on the Rehabilitation Gaming
System. Using the results of pilot studies we have shown that the RGS is a tool for the
rehabilitation of motor deficits that has a number of properties that are relevant for an
efficient rehabilitative training. Firstly, the RGS is grounded in an explicit
neuroscientific theory about the mechanisms of recovery, activation of the motor
system and cortical plasticity. Secondly, it generates task-specific training scenarios
designed for the rehabilitation of the upper limbs, and monitors and quantifies the
improvement of the patients over time. And thirdly, the RGS tasks follow a model that deploys individualized training divided into three phases of increasing complexity, ranging from arm extension/flexion to a coordination task that combines arm movement with grasp and release. The parameters of the game are continuously
adapted to the performance of the patient based on a model of the difficulty of the task
derived from data of stroke patients, which allows for an individualized training, while
ensuring that all patients are exposed to the same training rule.
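The adaptation principle described above can be illustrated with a generic staircase controller. The actual RGS difficulty model is derived from stroke-patient data [33]; the target success rate, step size and bounds below are illustrative placeholders, not the real RGS parameters.

```python
def adapt_difficulty(difficulty, success_rate, target=0.7, step=0.05,
                     lo=0.0, hi=1.0):
    """One adaptation step of a generic staircase controller: raise
    difficulty when the patient succeeds more often than the target
    rate, lower it when below, and clamp to the playable range."""
    if success_rate > target:
        difficulty += step   # patient under-challenged: make it harder
    elif success_rate < target:
        difficulty -= step   # patient over-challenged: make it easier
    return max(lo, min(hi, difficulty))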
In a first study of the RGS with chronic stroke patients, we analyzed the transfer of
movements between real and virtual worlds [35]. We observed that our system captures qualitative and quantitative information about the patient's performance during virtual tasks that are matched in the real world, allowing for a detailed assessment of the patients' deficits.
In a second study, we investigated the impact of specific game events on the stress
and arousal level of RGS subjects [61]. The monitoring of HR and GSR during game
performance allows a more detailed control of the state of the patient during therapy,
and can be used as a biofeedback system to tune the game parameters to both the training requirements and the capabilities of the patients. The measured HR data
support the difficulty model implemented in the game, since higher difficulty levels
induce higher levels of stress and/or arousal.
The RGS is currently used in a randomized longitudinal study with acute stroke
patients with two control conditions [33, 34]. We illustrated the overall approach and
reported on preliminary results. Our data suggest that the RGS induces a sustained
improvement over the training period when compared to the control groups.
Nonetheless, we cannot yet draw definite conclusions given the small sample size of the study at the moment (n=14, split into 3 groups). In the following months we will assess the impact of the RGS intervention more fully in an inter-group comparative study, using both the clinical scales and the measures delivered by our system, with a larger population of stroke patients.
Although there is little work on the use of virtual systems in the early stages of
stroke, the main outcomes and cortical changes happen in the first few months after
stroke [36-38]. Therefore, it is important to act during this period, and there is a growing need to investigate whether intervention at this stage can have an impact on the patients' prognosis. We believe that our system includes several properties that
make it a suitable tool for rehabilitation and that it captures valid working principles
that generalize to many rehabilitation paradigms. Besides the automatic monitoring,
and the adaptive training scenarios of graded difficulty, the system is versatile and can
be easily adapted to suit different clinical situations such as lower limb rehabilitation or
traumatic brain injury patients. At this moment we are exploring the additional benefits
of the RGS when coupled to haptic interfaces or passive exoskeletons.
To conclude, beyond its therapeutic benefits relative to conventional therapy, the RGS is also a valuable low-cost tool for diagnosis and engaging training that can be widely deployed in hospitals and at home.

Acknowledgments

The authors would like to thank the occupational therapy and clinical staff at the
Hospital de L’Esperanza in Barcelona, especially N. Rueda, S. Redon and A. Morales,
for their help and support in this study.
This research is supported by the European project Presenccia (IST-2006-27731).

References

[1] M.S. Cameirao, S. Bermudez i Badia, and P.F.M.J. Verschure, Virtual Reality Based Upper Extremity
Rehabilitation following Stroke: a Review, Journal of CyberTherapy & Rehabilitation 1 (2008), 63-74.
[2] M.K. Holden, Virtual environments for motor rehabilitation: review, Cyberpsychology & behavior 8
(2005), 187-211.
[3] F.D. Rose, B.M. Brooks, and A.A. Rizzo, Virtual reality in brain damage rehabilitation: review,
Cyberpsychology & behavior 8 (2005), 241-62.
[4] C.D. Mathers, and D. Loncar, Projections of global mortality and burden of disease from 2002 to 2030,
PLoS Med 3 (2006), e442.
[5] H.T. Hendricks, J. van Limbeek, A.C. Geurts, and M.J. Zwarts, Motor recovery after stroke: a
systematic review of the literature, Archives of Physical Medicine and Rehabilitation 83 (2002), 1629-
37.
[6] S.A. Thomas, and N.B. Lincoln, Factors relating to depression after stroke, British Journal of Clinical
Psychology 45 (2006), 49-61.
[7] J.N. Sanes, and J.P. Donoghue, Plasticity and primary motor cortex, Annual Review of Neuroscience 23
(2000), 393-415.
[8] C.M. Butefisch, Plasticity in the human cerebral cortex: lessons from the normal brain and from stroke,
Neuroscientist 10 (2004), 163-73.
[9] R.J. Nudo, Plasticity, NeuroRx 3 (2006), 420-7.
[10] R.J. Nudo, B.M. Wise, F. SiFuentes, and G.W. Milliken, Neural substrates for the effects of
rehabilitative training on motor recovery after ischemic infarct, Science 272 (1996), 1791-4.
[11] C.M. Fisher, Concerning the mechanism of recovery in stroke hemiplegia, Canadian Journal of
Neurological Sciences 19 (1992), 57-63.
[12] L. Kalra, and R. Ratan, Recent advances in stroke rehabilitation, Stroke 38 (2007), 235-7.
[13] G. Kwakkel, R. van Peppen, R.C. Wagenaar, S. Wood Dauphinee, C. Richards, A. Ashburn, K. Miller,
N. Lincoln, C. Partridge, I. Wellwood, and P. Langhorne, Effects of augmented exercise therapy time
after stroke: a meta-analysis, Stroke 35 (2004), 2529-39.
[14] A. Karni, G. Meyer, P. Jezzard, M.M. Adams, R. Turner, and L.G. Ungerleider, Functional MRI
evidence for adult motor cortex plasticity during motor skill learning, Nature 377 (1995), 155-8.
[15] E.J. Plautz, G.W. Milliken, and R.J. Nudo, Effects of repetitive motor training on movement
representations in adult squirrel monkeys: role of use versus learning, Neurobiology of Learning and
Memory 74 (2000), 27-55.
[16] J.W. Krakauer, Motor learning: its relevance to stroke recovery and neurorehabilitation, Current
Opinion in Neurology 19 (2006), 84-90.
[17] E.L. Altschuler, S.B. Wisdom, L. Stone, C. Foster, D. Galasko, D.M. Llewellyn, and V.S.
Ramachandran, Rehabilitation of hemiparesis after stroke with a mirror, Lancet 353 (1999), 2035-6.
[18] S. Blanton, H. Wilsey, and S.L. Wolf, Constraint-induced movement therapy in stroke rehabilitation:
Perspectives on future clinical applications, NeuroRehabilitation 23 (2008), 15-28.
[19] A. Zimmermann-Schlatter, C. Schuster, M.A. Puhan, E. Siekierka, and J. Steurer, Efficacy of motor
imagery in post-stroke rehabilitation: a systematic review, Journal of NeuroEngineering and
Rehabilitation 5 (2008), 8.
[20] D. Ertelt, S. Small, A. Solodkin, C. Dettmers, A. McNamara, F. Binkofski, and G. Buccino, Action
observation has a positive impact on rehabilitation of motor deficits after stroke, Neuroimage 36
(2007), T164-73.
[21] M.K. Holden, T.A. Dyar, and L. Dayan-Cimadoro, Telerehabilitation using a virtual environment
improves upper extremity function in patients with stroke, IEEE Transactions on Neural Systems and
Rehabilitation Engineering 15 (2007), 36-42.
[22] M.K. Holden, E. Todorov, J. Callahan, and E. Bizzi, Virtual environment training improves motor
performance in two patients with stroke: case report, Neurology Report 23 (1999), 57-67.
[23] L. Piron, P. Tombolini, A. Turolla, C. Zucconi, M. Agostini, M. Dam, G. Santarello, F. Piccione, and P.
Tonin, Reinforced Feedback in Virtual Environment Facilitates the Arm Motor Recovery in Patients
after a Recent Stroke, in Virtual Rehabilitation, Venice, Italy, 2007.
[24] L. Piron, P. Tonin, F. Piccione, V. Laia, E. Trivello, and M. Dam, Virtual Environment Training
Therapy for Arm Motor Rehabilitation, Presence 14 (2005), 732-40.
[25] J. Broeren, M. Rydmark, A. Bjorkdahl, and K.S. Sunnerhagen, Assessment and training in a 3-
dimensional virtual environment with haptics: a report on 5 cases of motor rehabilitation in the chronic
stage after stroke, Neurorehabilitation and Neural Repair 21 (2007), 180-9.
[26] J. Broeren, M. Rydmark, and K. S. Sunnerhagen, Virtual reality and haptics as a training device for
movement rehabilitation after stroke: a single-case study, Archives of Physical Medicine and
Rehabilitation 85 (2004), 1247-50.
[27] A.S. Merians, D. Jack, R. Boian, M. Tremaine, G.C. Burdea, S.V. Adamovich, M. Recce, and H.
Poizner, Virtual reality-augmented rehabilitation for patients following stroke, Physical Therapy 82
(2002), 898-915.
[28] A.S. Merians, H. Poizner, R. Boian, G. Burdea, and S. Adamovich, Sensorimotor training in a virtual
reality environment: does it improve functional recovery poststroke?, Neurorehabilitation and Neural
Repair 20 (2006), 252-67.
[29] P.L. Weiss, D. Rand, N. Katz, and R. Kizony, Video capture virtual reality as a flexible and effective
rehabilitation tool, Journal of NeuroEngineering and Rehabilitation 1 (2004), 12.
[30] A. Montagner, A. Frisoli, L. Borelli, C. Procopio, M. Bergamasco, M.C. Carboncini, and B. Rossi, A
pilot clinical study on robotic assisted rehabilitation in VR with an arm exoskeleton device, in Virtual
Rehabilitation, Venice, Italy, 2007.
[31] R.J. Sanchez, J. Liu, S. Rao, P. Shah, R. Smith, T. Rahman, S.C. Cramer, J.E. Bobrow, and D.J.
Reinkensmeyer, Automating arm movement training following severe stroke: functional exercises with
quantitative feedback in a gravity-reduced environment, IEEE Transactions on Neural Systems and
Rehabilitation Engineering 14 (2006), 378-89.
[32] A. Gaggioli, A. Meneghini, F. Morganti, M. Alcaniz, and G. Riva, A strategy for computer-assisted
mental practice in stroke rehabilitation, Neurorehabilitation and Neural Repair 20 (2006), 503-7.
[33] M.S. Cameirão, S. Bermúdez i Badia, E. Duarte Oller, and P.F.M.J. Verschure, Using a Multi-Task
Adaptive VR System for Upper Limb Rehabilitation in the Acute Phase of Stroke, in Virtual
Rehabilitation, Vancouver, Canada, 2008.
[34] M.S. Cameirão, S. Bermúdez i Badia, E. Duarte Oller, and P.F.M.J. Verschure, Stroke Rehabilitation
using the Rehabilitation Gaming System (RGS): initial results of a clinical study., in CyberTherapy, San
Diego, USA, 2008.
[35] M.S. Cameirão, S. Bermúdez i Badia, L. Zimmerli, E. Duarte Oller, and P.F.M.J. Verschure, The
Rehabilitation Gaming System: a Virtual Reality Based System for the Evaluation and Rehabilitation of
Motor Deficits, in Virtual Rehabilitation, Lido, Venice, Italy, 2007.
[36] S.H. Kreisel, H. Bazner, and M.G. Hennerici, Pathophysiology of stroke rehabilitation: temporal
aspects of neuro-functional recovery, Cerebrovascular Diseases 21 (2006), 6-17.
[37] P.W. Duncan, L.B. Goldstein, D. Matchar, G.W. Divine, and J. Feussner, Measurement of motor
recovery after stroke. Outcome assessment and sample size requirements, Stroke 23 (1992), 1084-9.
[38] G. Kwakkel, B. Kollen, and J. Twisk, Impact of time on improvement of outcome after stroke, Stroke
37 (2006), 2348-53.
[39] K. August, J.A. Lewis, G. Chandar, A. Merians, B. Biswal, and S. Adamovich, FMRI analysis of neural
mechanisms underlying rehabilitation in virtual reality: activating secondary motor areas, Conference
Proceedings - IEEE Engineering in Medicine and Biology Society 1 (2006), 3692-5.
[40] G. Rizzolatti, and L. Craighero, The mirror-neuron system, Annual Review of Neuroscience 27 (2004),
169-92.
[41] G. Buccino, F. Binkofski, G.R. Fink, L. Fadiga, L. Fogassi, V. Gallese, R.J. Seitz, K. Zilles, G.
Rizzolatti, and H.J. Freund, Action observation activates premotor and parietal areas in a somatotopic
manner: an fMRI study, European Journal of Neuroscience 13 (2001), 400-4.
[42] M. Iacoboni, and M. Dapretto, The mirror neuron system and the consequences of its dysfunction,
Nature Reviews Neuroscience 7 (2006), 942-51.
[43] F. Maeda, G. Kleiner-Fisman, and A. Pascual-Leone, Motor facilitation while observing hand actions:
specificity of the effect and role of observer's orientation, Journal of Neurophysiology 87 (2002), 1329-
35.
[44] P.L. Jackson, A.N. Meltzoff, and J. Decety, Neural circuits involved in imitation and perspective-
taking, Neuroimage 31 (2006), 429-39.
[45] Z. Mathews, S. Bermúdez i Badia, and P.F.M.J. Verschure, A Novel Brain-Based Approach for Multi-
Modal Multi-Target Tracking in a Mixed Reality Space, in INTUITION - International Conference and
Workshop on Virtual Reality, Athens, Greece, 2007.
[46] S. Bermúdez i Badia, The Principles of Insect Navigation Applied to Flying and Roving Robots: From
Vision to Olfaction, Zurich, Switzerland: Eidgenössische Technische Hochschule ETH, 2006.
[47] R.M. Yerkes, and J.D. Dodson, The relation of strength of stimulus to rapidity of habit formation,
Journal of Comparative Neurology 18 (1908), 459-482.
[48] M.M. Bradley, B.N. Cuthbert, and P.J. Lang, Picture media and emotion: effects of a sustained
affective context, Psychophysiology 33 (1996), 662-70.
[49] J.F. Brosschot, and J.F. Thayer, Heart rate response is longer after negative emotions than after positive
emotions, International Journal of Psychophysiology 50 (2003), 181-7.
[50] H.D. Critchley, Electrodermal responses: what happens in the brain, Neuroscientist 8 (2002), 132-42.
[51] Medical Research Council, Aids to the Examination of the Peripheral Nervous System, London, 1976.
[52] M.F. Folstein, S.E. Folstein, and P.R. McHugh, Mini-mental state. A practical method for grading the
cognitive state of patients for the clinician, Journal of Psychiatric Research 12 (1975), 189-98.
[53] R.A. Keith, C.V. Granger, B.B. Hamilton, and F.S. Sherwin, The functional independence measure: a
new tool for rehabilitation, Advances in Clinical Rehabilitation 1 (1987), 6-18.
[54] F.I. Mahoney, and D.W. Barthel, Functional Evaluation: The Barthel Index, Maryland State Medical
Journal 14 (1965), 61-5.
[55] C. Collin, and D. Wade, Assessing motor impairment after stroke: a pilot reliability study, Journal of
Neurology, Neurosurgery & Psychiatry 53 (1990), 576-9.
[56] A.R. Fugl-Meyer, L. Jaasko, I. Leyman, S. Olsson, and S. Steglind, The post-stroke hemiplegic patient.
1. a method for evaluation of physical performance, Scandinavian Journal of Rehabilitation Medicine 7
(1975), 13-31.
[57] S. Barreca, C. K. Gowland, P. Stratford, M. Huijbregts, J. Griffiths, W. Torresin, M. Dunkley, P.
Miller, and L. Masters, Development of the Chedoke Arm and Hand Activity Inventory: theoretical
constructs, item generation, and selection, Topics in Stroke Rehabilitation 11 (2004), 31-42.
[58] M. Kellor, J. Frost, N. Silberberg, I. Iversen, and R. Cummings, Hand strength and dexterity, American
Journal of Occupational Therapy 25 (1971), 77-83.
[59] S. Brunnstrom, Recovery stages and evaluation procedures, in Movement Therapy in Hemiplegia: A
Neurophysiological Approach, New York, 1970.
[60] M.S. Cameirão, S. Bermúdez i Badia, L. Zimmerli, E. Duarte Oller, and P.F.M.J. Verschure, A Virtual
Reality System for Motor and Cognitive Neurorehabilitation, in Association for the Advancement of
Assistive Technology in Europe - AAATE, San Sebastian, Spain, 2007.
[61] M.S. Cameirão, S. Bermúdez i Badia, K. Mayank, C. Guger, and P.F.M.J. Verschure, Physiological
Responses during Performance within a Virtual Scenario for the Rehabilitation of Motor Deficits, in
Presence, Barcelona, Spain, 2007.
Virtual Reality and Gaming Systems to
Improve Walking and Mobility for People
with Musculoskeletal and Neuromuscular
Conditions
Judith E. DEUTSCH a
a Rivers Lab, Doctoral Programs in Physical Therapy, Rehabilitation and Movement Science, University of Medicine and Dentistry of NJ, USA

Abstract. Improving walking for individuals with musculoskeletal and neuromuscular conditions is an important aspect of rehabilitation. The capabilities of clinicians who address these rehabilitation issues could be augmented with innovations such as virtual reality gaming based technologies. The chapter provides an overview of virtual reality gaming based technologies currently being developed and tested to improve motor and cognitive elements required for ambulation and mobility in different patient populations. Included as well is a detailed description of a single VR system, consisting of the rationale for development and iterative refinement of the system based on clinical science. These concepts include: neural plasticity, part-task training, whole task training, task specific training, principles of exercise and motor learning, sensorimotor integration, and visual spatial processing.

Keywords. Virtual reality, gaming, stroke, gait, multiple sclerosis, Parkinson Disease, cerebral palsy, balance training, motor control, motor learning.

Introduction

Rehabilitation of walking for individuals with musculoskeletal and neuromuscular conditions remains a challenge. The application of new technologies such as virtual reality, gaming and robotics has stimulated many approaches to enable walking for individuals with disabilities. The purpose of this chapter is two-fold: first, to provide a brief overview of technology that incorporates virtual reality (VR) to promote walking or mobility for individuals with disability; second, to describe in some detail the development and testing of one such system used for individuals with both musculoskeletal and neurological impairments that interfere with functional mobility.

1. Overview of VR and Gaming Systems to Improve Mobility and Walking

Virtual Reality based systems used to rehabilitate walking or mobility for individuals
with disabilities, generally, are composed of hardware used as an input into a virtual
environment that is generated by software delivered to the user with an interface. The
hardware appliances vary from simple cameras for motion capture to elaborate
instrumented robots. The software renders a variety of virtual environments (VEs) in which some form of navigation or movement takes place.
can be through a head mounted display (HMD), desktop computer, television screen or
a large rear projected screen. [1] Gaming systems typically have a console that is
connected to a display device and has a controller that the user manipulates to enter the
game. The user can train on these systems by physically executing movements or by
practicing navigation skills. The following sections will review VR systems and
applications based on patient populations.
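The common architecture just described — a capture device feeding a software-rendered virtual environment that returns feedback to the user — can be sketched as a single loop. All names below are hypothetical illustrations: a real system substitutes a camera, instrumented glove or robot for the mock tracker, and a graphics engine for the callbacks.

```python
class MockTracker:
    """Stand-in for a motion-capture input device (camera, glove, robot)."""
    def __init__(self, samples):
        self.samples = list(samples)

    def read(self):
        """Return the next pose sample, or None when the session ends."""
        return self.samples.pop(0) if self.samples else None


def run_session(tracker, render, give_feedback):
    """Minimal capture -> render -> feedback loop: each tracked pose
    updates the virtual environment and triggers feedback to the user."""
    frames = 0
    while True:
        pose = tracker.read()
        if pose is None:
            break
        render(pose)         # update the displayed virtual environment
        give_feedback(pose)  # visual/auditory/tactile response
        frames += 1
    return frames
```

Whether the display is an HMD, a television or a rear-projected screen only changes the `render` stage; the loop structure is shared across the systems reviewed below.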

1.1. Individuals Post-Stroke with Walking and Mobility Deficits

Six different VR-based approaches were identified in the literature that were designed
to improve walking or mobility for individuals post-stroke. All of these approaches
involved the physical practice of either gait or gait-related activities. Four of these are
summarized in more detail elsewhere. [2] The earliest work was presented by Jaffee and Brown, who used a HMD to display virtual obstacles as a stimulus for stepping. [3]
Individuals post-stroke had their foot movements tracked by a camera as they
walked, with the support of a harness, on a treadmill. They trained for 12 hours over
two weeks for a total of 120 steps. Visual and tactile inputs were provided to users when their foot contacted the virtual objects. They reported that training in the VE had some
benefits relative to improved gait speed and navigation of obstacles in the real world
compared to a group of patients that trained on an over-ground obstacle course. You
and colleagues used the commercially available IREX system by GestureTek to train
gait related activities. [4] The motion capture system displayed the user avoiding sharks
and eels in a seascape, practicing stepping, and practicing balance and weight shifting
as they skied down a course. Visual and auditory feedback and performance scores
were provided to the users. Compared to a no treatment control group the VR group
improved their walking category and displayed laterality shifts on fMRI consistent with plasticity. Deutsch and colleagues used a robot-VR system that will be described in more
detail in the second section of the paper. Fung and colleagues used the CAREN system
consisting of a treadmill mounted on a Stewart platform interfaced with a virtual
hallway displayed by rear projector on a large screen. For two individuals post-stroke
with fast-walking speeds they reported adaptation to walking in the virtual
environments. [5] Using the Sony Playstation II with the Eye Toy, Flynn and
colleagues [6] reported a case of an individual post-stroke who used this system for home-based exercise. The PlayStation II with Eye Toy uses similar motion capture
technology as the IREX. [7] After 20 sessions of training with this low cost motion
capture system the individual increased her speed in the timed up and go test (TUG).
Improvement on the TUG removed the subject from the risk for falls category.
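The TUG result is read against a fall-risk threshold. The exact cutoff applied in the case report is not stated here; the 13.5-second value below is one commonly cited threshold for community-dwelling older adults and is used purely for illustration.

```python
def tug_fall_risk(seconds, cutoff=13.5):
    """Classify fall risk from a Timed Up and Go (TUG) time in seconds.
    The 13.5 s default is an assumed, commonly cited cutoff, not
    necessarily the criterion applied in the case report [6]."""
    return "at risk" if seconds > cutoff else "not at risk"
```

Crossing below the cutoff is what the text means by the subject being removed from the risk-for-falls category.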
Yang and colleagues [8] used a treadmill interfaced with three rear projectors to display walking environments on a three-sided large screen, which afforded a 154 degree horizontal and 37 degree vertical field of view. Subjects’ leg motions were
tracked with an electromagnetic system to detect collisions. They reported increased
speed of walking and greater distance with community ambulation for the group that
trained (for only nine twenty minute sessions) with the VR-treadmill set up compared
to the group that trained with the treadmill alone.
In contrast to physical practice of a mobility task, way finding and navigation
through virtual environments has also been used as a way to train skills that are
required for mobility. [9] Individuals post-stroke navigated a two-dimensional virtual environment (VE) using a joystick. When compared to a video-based psycho-educational modeling program, superior performance was reported on a scale developed to rate community mobility skills such as road crossing and station navigation.
These approaches to improve walking all have in common the delivery of task-
based therapy with augmented, multisensory feedback. The initial findings are
encouraging but require further study to answer a variety of questions that are relevant
to rehabilitation. Several of the groups are continuing to refine their technology and test it further to identify which VR system is best for which type of individual post-stroke, what the dosing requirements are for optimum outcomes and, in the end most importantly, which system will be widely adopted in a clinical setting. While the
majority of the studies reviewed involve physical practice of walking or walking
related motor skills, the way finding study indicates that aspects of mobility such as
basic navigation skills can be trained in the absence of physical practice. It is likely that
to attain complex skills like community ambulation it will be important to train both
motor and cognitive skills in combination.
The VR systems currently available to rehabilitate walking for people in the
chronic phase post-stroke can be categorized as direct and indirect walking
interventions. Direct walking systems involve the task of walking; this would include
the work by Jaffee, Fung and Yang. Indirect interventions by Deutsch, Flynn and You,
use gait related activities, but none that involve translation. In reviewing the literature it
was notable that the indirect interventions had a higher dosing. For example, the number of repetitions in the work by You et al. ranged from 1320-1965 for gait related activities, while the work by Jaffee et al. had 120 repetitions of stepping over objects while walking. All groups are incorporating principles of motor learning into their
simulations providing multisensory and cognitive feedback to the users. Outcomes
have focused on motor performance and motor control. Cognitive self-efficacy, which
was explored in the VR way finding study, may be appropriate to incorporate into gait
rehabilitation studies. Finally the active ingredients in each system have not been fully
explored.
The important question of transferring the technology to practice remains to be
addressed. The early findings show transfer of training from walking in virtual environments and/or physically training gait related activities to improved walking in the real world. [2, 8] There are however practical considerations such as system cost
and their commercial availability. Specifically, the challenge with implementing these
technologies in the clinic is that several are not commercially available, and those that are can be cost prohibitive for most institutions. Another consideration is whether systems that were specifically designed for rehabilitation, like the IREX and the CAREN system, will be superior to off-the-shelf gaming consoles. Off-the-shelf gaming software has only been tested for feasibility and not efficacy. Further research
with gaming technology may demonstrate efficacy and therefore justify its use in
practice. Comparison between highly specialized virtual reality applications relative to
the commercially available systems is warranted.

1.2. Individuals Post-Stroke with Neglect

Virtual environments have been developed to train individuals with neglect to safely cross the street. [10, 12] Using a desktop computer and a joystick, healthy individuals were compared to a group of right-hemisphere damage patients post-stroke in a street-crossing task. The emphasis was on safe crossing. [11] The same system with refinements was further tested on a larger group of patients who all had unilateral spatial neglect and used wheelchairs as their primary source of mobility. The authors reported that the individuals with unilateral spatial neglect looked left more often and had fewer accidents than the group that trained on computer visual scanning tasks. [10] Transfer to real life crossing was not significantly different between the groups.
These studies illustrate the use of a navigation task for individuals who have mobility
deficits that are heavily influenced by their visual spatial processing abilities.

1.3. Virtual Reality Applications for Individuals with Multiple Sclerosis and Parkinson
Disease who have mobility challenges

Although not as extensive as the research on virtual reality enabled walking and
mobility for people post-stroke, the literature on VR to improve mobility for people
with multiple sclerosis (MS) and Parkinson Disease (PD) is emerging. As with the
research on stroke there are various methods used to deliver the virtual environments
and train the patients.
For people with MS two contrasting approaches were found in the literature.
Baram and colleagues used VR as a cueing strategy to improve stepping for people
with MS who presented with an ataxic gait. [13] A checkerboard-tiled floor was displayed through a visor worn on the subjects’ heads. The VR was used as an orthotic
to stabilize the stepping pattern. They demonstrated improvements in walking speed
that had short term carry over. Fulk and colleagues reported on a single case in which
they combined bodyweight supported treadmill training with virtual reality based
balance training using the IREX-GestureTek motion capture system described in the
section on mobility and stroke. The combined treatment resulted in improved walking
and balance outcomes. [14] The case report format allowed the formulation of a
treatment plan that reflected a combined therapeutic approach to meet the patient’s
mobility needs. This is in contrast with research studies that require a reductionistic
approach to testing interventions in order to guarantee the internal validity of the study.
It is likely that clinicians would use VR technologies in combination with other
therapeutic modalities to achieve patient goals.
Research on use of virtual environments for individuals with PD has focused on
motor control aspects related to action and navigation as well as performing activities
of daily living, rather than training walking. [15, 16] This more basic research, however,
has implications for practice. Individuals with mild to moderate PD were compared to
healthy controls during a virtual supermarket navigation task. The task involved
navigation and specific actions that occurred using a first person perspective as if
pushing a shopping cart. The individuals with PD achieved similar outcomes in the
virtual environment tasks but required greater distances and more time to complete the
tasks relative to healthy controls. These differences were attributed to planning deficits
that may be amenable to training. [15] Using a HMD and joystick two individuals with
PD (Hoehn and Yahr Stage 2) and 10 healthy controls navigated through environments
and performed activities of daily living. The goal was to determine, if in the absence of
deficits on paper and pencil neuropsychological tests, the VR tasks could identify
deficits in planning for the individuals with PD. Evaluated on orientation, speed,
hesitation and memory tasks, individuals with PD were found to have the most notable
deficits on speed of execution. This was pronounced when they had to navigate through
a narrow doorway. [16] Both studies suggest that virtual environments can be used for
examination of cognitive deficits that may interfere with mobility. It will be interesting
to see if they can be applied to rehabilitation.

1.4. Gaming and VR to Improve Mobility for Individuals with Cerebral Palsy

Virtual reality and gaming have been used to improve selective lower extremity motor control and to improve mobility and balance in adolescents with cerebral palsy (CP). Bryanton and colleagues demonstrated that individuals with CP were more
motivated and exercised at a greater intensity when working with a Kung Fu Game on
the IREX system compared to standard of care exercises. [17] Deutsch and colleagues incorporated gaming with the Nintendo Wii Sports software into the summer program of an adolescent with CP. The individual trained in sitting and standing over 11 sessions
using boxing, baseball, bowling and golfing games designed to improve postural
control, spatial abilities and mobility. They reported gains in standing symmetry and
control, scores on the Test of Visual Perceptual Skills III (a measure of spatial ability)
and walking distance. [18] They had hypothesized the direct changes in visual spatial
ability and balance but were uncertain if they would see the transfer to walking. Finally,
there has also been a case report in which neural plasticity was demonstrated after
virtual reality training with an individual with cerebral palsy. [19]
The evidence for use of virtual reality to improve mobility across a variety of
rehabilitation populations is modest. Of interest is the variety of approaches in terms of
the technology used for similar applications. The greatest number of studies and labs
that integrate virtual reality or gaming technology for mobility rehabilitation have
focused on individuals post-stroke. Important questions about transfer of training and
the appropriate amount of technology will need to be addressed before these approaches
are widely adopted in the clinic. Such efforts are underway in applying virtual reality to
upper extremity [20] as well as walking rehabilitation in individuals post-stroke. [2]

2. Description of Development and Testing of One VR-Based System to Improve Mobility and Walking

2.1. Introduction

A more detailed description of a specific VR-based system to improve mobility and
walking for people with neuromuscular and musculoskeletal conditions is
presented here. The main objective of this section is to describe a process that is
multidisciplinary and requires technical, clinical and patient expertise for the development
and refinement of a system. A secondary objective is to describe a progression of studies
that ranges from proof of concept and validation to efficacy trials.

2.2. The system and rationale for development

A robot-virtual environment lower extremity system was developed through a
collaboration of clinician-scientists, engineers and, eventually, users. The engineering
team included mechanical (Mourad Bouzit), electrical (Greg Burdea), computer (Rares
Boian, Jeffrey Lewis) and human interface engineers (Marilyn Tremaine). The
clinician-scientists’ background was in applied neuroscience and physical therapy. The
users were individuals with lower extremity musculoskeletal injuries, individuals post-
stroke and physical therapists. Over the course of six years the system was
conceptualized, developed, refined and tested in feasibility, pilot and user studies
culminating in a randomized single-blind clinical trial.
The system consists of a six-degree-of-freedom parallel kinematics robot (Stewart
platform) interfaced with a controller and a desktop computer, which displays the
virtual environments. The Stewart platform is instrumented with a force transducer
and linear potentiometers that read forces and displacements of the platform, which are
referenced to the foot movements. Using the platform's forward kinematics, the ankle
orientation and position relative to the floor can be read into the simulation. The
robot's pneumatic actuators also provide forces and torques to the patient's foot and
ankle. This force
feedback system allows for the delivery of haptic effects to the foot. The haptic effects
were modeled at a low and a high level. High-level effects allow for manipulation of
augmented sensory input to the user’s foot. Thus the robot serves as input into the
virtual environment as well as a recorder of all the movements. Details on the hardware
and haptic modeling can be found elsewhere. [21-24]
The software evolved from a basic representation of a foot moving on a checkerboard
pattern to an airplane navigating through a series of simple targets, and then into an
airscape and a seascape complete with visual, auditory and haptic effects to increase
realism, challenge mobility and augment sensory input. It was designed using
principles of exercise and motor learning. [25] We have described the theoretical
rationale for the construction of a robot coupled with a virtual environment elsewhere
[25]. Briefly, it was based on three lines of evidence: pre-clinical and basic science
studies showing that animals trained in enriched environments achieve superior task
performance compared with those trained in impoverished environments; the
identification of the important role of the distal effector, namely the ankle, in walking;
and the integration of principles of motor learning and exercise. Animals (primarily rats)
trained in enriched environments perform better on functional tasks and in solving
problems when compared to animals trained in impoverished environments. [26] This
difference is accentuated when the complexity of the problem increases. [27] It has
been suggested that the use of virtual environments for rehabilitation may provide the
stimulation to extend the existing benefits of rehabilitation and promote functional
recovery. [28]
The initial goal of developing the system was to create a tool for rehabilitation of
the lower extremity to remediate impairments such as weakness, lack of flexibility,
incoordination, decreased endurance and sensory loss. The patient population initially
identified as likely to benefit from such a device was individuals with musculoskeletal
conditions that primarily affected the ankle, including but not limited to ankle sprains
and fractures. These individuals were selected because
we hypothesized that the training provided by the robot-vr system would target all of
the relevant impairments that may interfere with their recovery of mobility. Training at
the impairment level was a frequently employed therapeutic approach with this
population. Training relevant kinematic features of a movement is also believed to
transfer to whole tasks. [29] For our particular application there was some speculation
about whether training ankle impairments, as well as relevant kinematic features of
walking, namely the kinetics of ankle push-off, would transfer to improved walking.
2.3. The Experiments, Iterations and Refinements

The first set of experiments was designed to validate the system and its capabilities.
These validation and proof-of-concept studies were executed with individuals with
ankle sprains or ankle fractures. [23, 30] A simple simulation of a foot was used as a
stimulus for basic ankle motions. Using the non-affected side as a control, we
demonstrated that individuals with ankle sprains had lower force production and ankle
excursions with their affected ankles compared to their non-affected ankles. These
measurements derived by the robotic system were comparable to clinical gold
standards thus establishing the diagnostic validity of the system. [23]
To determine if the system might be transferred to the clinic we placed it in an
outpatient orthopedic physical therapy practice. By this time we had created a gaming
simulation where a virtual plane navigated through targets and the user received
feedback about their navigation. To run the study we required a clinician scientist, an
engineering student (and many cold packs to cool the compressor that often
overheated). In the clinic, six patients with different diagnoses involving the lower
extremity agreed to discontinue their physical therapy and substitute virtual reality
training for it. Three of the patients had ankle sprains and/or fractures. They demonstrated
ease in using the system and clinical improvements in range of motion and strength. [30]
One of them worked at such high intensity that her sessions ended with application of
ice to cool the joint. We realized that we would need to enhance the simulation to assist
with engagement and immersion by adding to the task complexity, as one of the
participants (a 14 year old adolescent) reached the maximum settings for range of
motion and resistance for the navigation tasks.
One of the clinic participants was an individual in the chronic stage post-stroke.
The findings from his participation in the proof of concept or feasibility trial
altered the direction of our research. He benefitted from the virtual reality robotic
training in unexpected ways, showing a transfer to walking and stair climbing. [31]
This launched a series of studies to confirm that this transfer of training, achieved while
seated and moving the ankle in a virtual environment, was a finding that could be
replicated. It was. [32] In parallel, the simulation was enhanced by adding haptic effects
that were coordinated with the virtual environment. [24]
In addition we conducted usability studies. The purpose of these studies was to
involve the end-user in the design of the system, in our case both the clinician and the
patient. Typically these should occur in the design phase and be repeated with system
modifications. We had informally solicited input in our earlier studies but learned much
more once we formalized the approach. We learned that our system could be used by
clinicians and was liked by the patients. [33, 34] There were, however, aspects to
change and enhance, consistent with the iterative process of developing technology.
These included changes to solve command-structure problems by changing the
labels and order of the action buttons, as well as simplifying terminology to make it
more accessible to the clinician. [34] A series of studies was also performed in which
the system was interfaced for tele-rehabilitation. [35, 36] These will not be elaborated
on here.
The most recent work related to the robotic-virtual reality system was to determine
whether it was the robot or the combination of the robot with the virtual environment
that produced the transfer of training to the real world. We hypothesized that the
combined system in which the learner has to solve the task of navigating in the virtual
environment would produce the transfer of training, whereas training with the robot
alone might produce some impairment level changes but not a transfer to function. In a
single blind, randomized clinical trial we demonstrated that individuals who trained
with the robot-virtual reality system had gait speed and distance increases that were
measured both in the clinic and the real world, that were significantly greater than those
who trained with the robot alone. [37, 38] Another interesting finding from a fully
instrumented gait analysis was the dramatic change in ankle push-off kinetics of the
robot-vr group compared to the robot-alone group. The specificity of training the ankle
kinetics as well as transfer of relevant part-task training is one of several explanations
for the positive outcome of the vr-system coupled with the robot. A complex system
like the one that we have just described has many features that remain to be explored.
Probably the most relevant finding of the single-blind randomized trial is the transfer
of training from the lab setting to the community. Using an activity monitor, subjects'
gait was measured for a week in advance of training and for a week after training
concluded. Significant improvements in walking distance and velocity were measured
in real-world situations for the robot-vr group but not the robot-alone group. [38] These
findings are important, as transfer to the real world from virtual reality training is a
central goal of this rehabilitation approach.

3. Summary

Virtual reality and gaming-based approaches to rehabilitation of individuals with
neuromuscular and musculoskeletal conditions have been reviewed and evaluated.
Approaches in terms of technology (hardware and software) have been quite variable.
In common, the systems developed use rich augmented multi-sensory feedback as
well as information about performance and results. Whether it is the hardware that
interfaces with the virtual environment (VE), or the stimulus for goal-directed movement
that the VE offers, that promotes the changes in motor behavior remains to be
elucidated. Most of the
work has used custom built, primarily lab-based systems. These offer the advantages of
customization and rich data collection. However, off the shelf commercially available
systems are also being trialed. Their reduced cost and ease of availability relative to the
lab-based systems makes them appealing. Which of these systems will be adopted in
practice remains to be determined. It is likely that exploration of off-the-shelf gaming
systems will continue in parallel with the development of lab-based systems. Each will
serve an important role in understanding the usefulness of virtual reality and gaming
technology in the rehabilitation of walking and mobility.

References

[1] M.K. Holden, and T. Dyar, Virtual environment training: a new tool for neurorehabilitation, Neurology
Report 26 (2002), 62-71.
[2] J.E. Deutsch, and A. Mirelman, Virtual reality-based approaches to enable walking for people post-
stroke, Topics in Stroke Rehabilitation 14 (2007), 45-53.
[3] D.L. Jaffee, D.A. Brown, C. Pierson-Carey, E. Buckley, and H.L. Lew, Stepping over obstacles to
improve walking in individuals with poststroke hemiplegia, Journal of Rehabilitation Research &
Development 41 (2004), 283-292.
[4] S.H. You, S.H. Jang, Y. Kim, M. Hallett, S.H. Ahn, Y. Kwon, J.H. Kim, and M.Y. Lee, Virtual reality-
induced cortical reorganization and associated locomotor recovery in chronic stroke: an experimenter-
blind randomized study, Stroke 36 (6)(2005) 1166-71.
[5] J. Fung, C.L. Richards, F. Malouin, B.J. McFadyen, and A. Lamontagne, A treadmill and motion
coupled virtual reality system for gait training post-stroke, Cyberpsychology & Behavior 9 (2006), 157-
62.
[6] S. Flynn, P. Palma, and A. Bender, Feasibility of using the Sony PlayStation 2 gaming platform for an
individual poststroke: a case report, Journal of Neurologic Physical Therapy 31 (2007), 180-189.
[7] P. Weiss, D. Rand, N. Katz, and R. Kizony, Video capture virtual reality as a flexible and effective
rehabilitation tool, Journal of NeuroEngineering and Rehabilitation 1 (2004), 12.
[8] Y.R. Yang, M.P. Tsai, T.Y. Chuang, W.H. Sung, and R.Y. Wang, Virtual reality-based training
improves community ambulation in individuals with stroke: A randomized controlled trial, Gait &
Posture (2008).
[9] Y.S. Lam, D.W. Man, S.F. Tam, and P.L. Weiss, Virtual reality training for stroke rehabilitation,
NeuroRehabilitation 13 (2006), 245-53.
[10] N. Katz, H. Ring, Y. Naveh, R. Kizony, U. Feintuch, and P.L. Weiss, Interactive virtual environment
training for safe street crossing of right hemisphere stroke patients with unilateral spatial neglect,
Disability & Rehabilitation 27 (2005), 1235-43.
[11] P.L.T. Weiss, Y. Naveh, and N. Katz, Design and testing of a virtual environment to train stroke
patients with unilateral spatial neglect to cross a street safely, Occupational Therapy International 10
(2003), 39-55.
[12] J. Kim, K. Kim, D.Y. Kim, W.H. Chang, C.I. Park, S.H. Ohn, K. Han, J. Ku, S.W. Nam, I.Y. Kim, and
S.I. Kim, Virtual environment training system for rehabilitation of stroke patients with unilateral
neglect: crossing the virtual street, Cyberpsychology & Behavior 10 (2007), 7-15.
[13] Y. Baram, and A. Miller, Virtual reality cues for improvement of gait in patients with multiple sclerosis,
Neurology 66 (2006), 178-81.
[14] G.D. Fulk, Locomotor training and virtual reality-based balance training for an individual with multiple
sclerosis: a case report, Journal of Neurologic Physical Therapy 29 (2005), 34-42.
[15] E. Klinger, I. Chemin, S. Lebreton, and R.M. Marie, Virtual action planning in Parkinson's disease: a
control study, Cyberpsychology & Behavior 9 (2006), 342-7.
[16] G. Albani, R. Pignatti, L. Bertella, L. Priano, C. Semenza, E. Molinari, G. Riva, and A. Mauro,
Common daily activities in the virtual environment: a preliminary study in parkinsonian patients,
Neurological Sciences 23 (2002), S49-50.
[17] C. Bryanton, J. Bosse, M. Brien, J. McLean, A. McCormick, and H. Sveistrup, Feasibility, motivation,
and selective motor control: virtual reality compared to conventional home exercise in children with
cerebral palsy, Cyberpsychology & Behavior 9 (2006), 123-8.
[18] J.E. Deutsch, M. Borbely, J. Filler, K. Huhn, and P. Guarrera-Bowlby, Use of a low-cost, commercially
available gaming console (Wii) for rehabilitation of an adolescent with cerebral palsy, Physical Therapy
88 (2008), 1196-207.
[19] S.H. You, S.H. Jang, Y.H. Kim, Y.H. Kwon, I. Barrow, and M. Hallett, Cortical reorganization induced
by virtual reality therapy in a child with hemiparetic cerebral palsy, Developmental Medicine and Child
Neurology 47 (2005), 628-35.
[20] A. Henderson, N. Korner-Bitensky, and M. Levin, Virtual reality in stroke rehabilitation: a systematic
review of its effectiveness for upper limb motor recovery, Topics in Stroke Rehabilitation 14 (2007),
52-61.
[21] M. Girone, G. Burdea, M. Bouzit, and J.E. Deutsch, Orthopedic rehabilitation using the 'Rutgers Ankle'
interface, presented at Medicine Meets Virtual Reality, Newport Beach, California, 2000.
[22] R. Boian, C.S. Lee, J.E. Deutsch, G. Burdea, and J.A. Lewis, Virtual reality-based system for ankle
rehabilitation post stroke, presented at the First International Workshop on Virtual Reality
Rehabilitation (Mental Health, Neurological, Physical, Vocational), 2002.
[23] M. Girone, G. Burdea, M. Bouzit, V. Popescu, and J.E. Deutsch, A Stewart platform-based system for
ankle telerehabilitation, Autonomous Robots 10 (2001), 203-212.
[24] R. Boian, J.E. Deutsch, C.S. Lee, G. Burdea, and J.A. Lewis, Haptic Effects for Virtual Reality-based
Post-Stroke Rehabilitation, presented at Symposium on Haptic Interfaces For Virtual Environment And
Teleoperator Systems, Los Angeles, CA, 2003.
[25] J.E. Deutsch, A.S. Merians, S. Adamovich, H. Poizner, and G.C. Burdea, Development and application
of virtual reality technology to improve hand use and gait of individuals post-stroke, Restorative
Neurology & Neuroscience 22 (2004), 371-86.
[26] A. Risedal, B. Mattsson, P. Dahlqvist, C. Nordborg, T. Olsson, and B.B. Johansson, Environmental
influences on functional outcome after a cortical infarct in the rat, Brain Research Bulletin 58 (2002),
315-21.
[27] M.J. Renner, and M.R. Rosensweig, Enriched and Impoverished Environments: Effects on Brain and
Behavior, New York, Springer-Verlag, 1987.
[28] F.D. Rose, B.M. Attree, B.M. Brooks, and D.A. Johnson, Virtual environments in brain damage
rehabilitation: A rationale from basic neuroscience, in G. Riva, B.K. Wiederhold, and E. Molinari,
Virtual environments in clinical and neuroscience, Amsterdam, IOS Press, 1998.
[29] C.J. Winstein, Designing Practice for Motor Learning: Clinical Implications, presented at II Step
Conference, Norman, Oklahoma, 1990.
[30] J.E. Deutsch, Rehabilitation of musculoskeletal injuries using the Rutgers Ankle Haptic Interface:
Three Case reports, Eurohaptics (2001), 11-16.
[31] J.E. Deutsch, J. Latonio, G. Burdea, and R. Boian, Post-Stroke Rehabilitation with the Rutgers Ankle
System – A case study, Presence (2001), 416-430.
[32] J.E. Deutsch, C. Paserchia, C. Vecchione, J.A. Lewis, R. Boian, and G. Burdea, Improved gait and
elevation speed of individuals post-stroke after lower extremity training in virtual environments,
Journal of Neurologic Physical Therapy 28 (2004), 185-86.
[33] J.A. Lewis, J.E. Deutsch, and G. Burdea, Usability of the remote console for virtual reality
telerehabilitation: formative evaluation, Cyberpsychology & Behavior 9 (2006), 142-7.
[34] J. Deutsch, J.A. Lewis, E. Whitworth, R. Boian, G. Burdea, and M. Tremaine, Formative evaluation
and preliminary findings of a virtual reality telerehabilitation system for the lower extremity, Presence
14 (2005), 198-213.
[35] J.A. Lewis, R.F. Boian, G. Burdea, and J.E. Deutsch, Remote console for virtual telerehabilitation,
Studies in Health Technology & Informatics 111 (2005), 294-300.
[36] J. Lewis, R. Boian, G. Burdea, and J. Deutsch, Real-time web-based telerehabilitation monitoring,
Studies in Health Technology & Informatics 94 (2003), 190-2.
[37] A. Mirelman, P. Bonato, and J. Deutsch, Effects of Virtual Reality-Robotic Training Compared with
Robot Training Alone on the Walking of Individuals with Post Stroke Hemiplegia, Journal of
Neurologic Physical Therapy 31 (2007), 200-201.
[38] A. Mirelman, P. Bonato, and J. Deutsch, Effects of training with a robot-virtual reality system
compared with a robot alone on the gait of individuals after stroke, Stroke 40 (2009), 169-74.
Virtual Reality Environments to Enhance
Upper Limb Functional Recovery in
Patients with Hemiparesis
Mindy F. LEVINa, Luiz Alberto Manfre KNAUTb, Eliane C. MAGDALONc, and
Sandeep SUBRAMANIANa
a School of Physical and Occupational Therapy, Faculty of Medicine, McGill University,
Montreal, Quebec, Canada
b School of Rehabilitation, University of Montreal, PR, Brazil
c Department of Biomedical Engineering, University of Campinas, Campinas, SP, Brazil

Abstract. Impairments in reaching and grasping have been well-documented in
patients with post-stroke hemiparesis. Patients have deficits in spatial and temporal
coordination and may use excessive trunk displacement to assist arm transport
during performance of upper limb tasks. Studies of therapeutic effectiveness have
shown that repetitive task-specific practice may improve motor function outcomes.
Movement retraining may be optimized when done in virtual reality (VR)
environments. Environments created with VR technology can incorporate elements
essential to maximize motor learning, such as repetitive and varied task practice,
performance feedback and motivation. Haptic technology can also be incorporated
into VR environments to enhance the user’s sense of presence and to make motor
tasks more ecologically relevant to the participant. As a first step in the validation
of the use of VR environments for rehabilitation, it is necessary to demonstrate
that movements made in virtual environments are similar to those made in
equivalent physical environments. This has been verified in a series of studies
comparing pointing and reaching/grasping movements in physical and virtual
environments. Because of the attributes of VR, rehabilitation of the upper limb
using VR environments may lead to better rehabilitation outcomes than
conventional approaches.

Keywords. Stroke, Kinematics, Validation, Reaching, Grasping.

Introduction

The motor recovery of the upper limb in patients following congenital or acquired brain
injury remains a persistent problem in neurological rehabilitation. More than 80% of
the approximately 566,000 stroke survivors in the United States experience hemiparesis
resulting in impairment of one upper extremity (UE) immediately after stroke; in
55-75% of survivors, impairments persist beyond the acute stage of stroke. Important
from a rehabilitation perspective is that functional limitations of the upper limb
contribute to disability and are associated with diminished health-related quality of life
[1, 3].
Despite a growing number of studies, there is still a paucity of good quality
evidence for the effectiveness of upper limb motor rehabilitation techniques for patients
with stroke-related hemiparesis [4]. Current rehabilitation practice is based on
movement repetition of targeted tasks in the clinical setting. Not all motor
improvements gained in clinical settings, however, have been shown to carry over into
real-world situations when patients are discharged home after therapy [5]. For example,
even patients with well-recovered upper limb function as judged by clinical tests may
not make full use of their arm in everyday activities [6]. One possible reason for the
tendency to underuse the affected arm may be the lack of recovery of higher order
motor control functions resulting in an inability to perform rapid, accurate and
coordinated movement and the perception of arm movements as being clumsy and slow
[7]. This suggests that greater attention should be paid to retraining upper limb
coordination or the ability of the arm and hand to interact with the environment rapidly
and efficiently in order to improve the real world relevance of practice in the clinical
setting. Indeed, an important component of dexterous movement, if such a term can be
applied to whole arm movement, is coordination between different body segments - an
element that has been largely neglected in rehabilitation approaches to movement
recovery.

1. Deficits in the coordination of reaching and grasping movements in patients with stroke

The arm motor deficit in stroke is complex and can be described at all levels of the
International Classification of Functioning (ICF, World Health Organization,
http://www.who.int/classifications/icf/en/). At the Body Structure and Function
(impairment) level, stroke-related hemiparesis is characterized by sensorimotor deficits
such as spasticity [8] and pathological synergies in the limbs contralateral to the
hemispheric lesion [9]. The ability to activate and inactivate appropriate muscles [9,
15] is also compromised as well as the abilities to compensate elbow and shoulder
torques [12, 16] and to coordinate movements between adjacent joints [17, 18].
Impairments may be related to altered mechanical properties of motor units [19, 20],
abnormal agonist motor unit activation [21, 22] and deficits in segmental reflex
organization, including the ability to appropriately regulate stretch-reflex threshold
excitability [23, 27]. Previous studies have shown that patients have deficits in both
spatial and temporal aspects of interjoint coordination during 3D reaching to stationary
targets, placed within [18, 28, 30] and beyond the reach [31]. They also have
coordination deficits when synchronizing hand orientation with hand opening and
closing during reach-to-grasp movements to stationary targets (Figure 1) [32, 33].
Figure 1. Arm and hand coordination during a reach-to-grasp task in one healthy subject (top) and one
individual with stroke-related hemiparesis (bottom). The mean peak hand aperture (thin solid lines) generally
occurs after the mean peak hand velocity (thick solid lines) as seen in both examples but the movement is
slower and hand opening is delayed in the individual with hemiparesis. Dotted lines indicate ± one standard
deviation of the mean traces.
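The timing relationship illustrated in Figure 1 can be quantified directly from marker data. The sketch below is illustrative, not the authors' analysis pipeline: the function name, the synthetic minimum-jerk transport profile and the Gaussian aperture profile are assumptions; only the measure itself (time of peak tangential hand velocity versus time of peak grip aperture) comes from the text.

```python
import numpy as np

def coordination_timing(t, wrist_xyz, aperture):
    """Times of peak tangential hand velocity and of peak grip aperture.

    t : (N,) time stamps; wrist_xyz : (N, 3) wrist marker positions;
    aperture : (N,) thumb-index distance.
    Returns (t_peak_velocity, t_peak_aperture, lag between them).
    """
    speed = np.linalg.norm(np.gradient(wrist_xyz, t, axis=0), axis=1)
    t_vel = t[np.argmax(speed)]
    t_ap = t[np.argmax(aperture)]
    return t_vel, t_ap, t_ap - t_vel

# Illustrative synthetic reach: minimum-jerk transport (velocity peaks mid-reach)
# and a grip aperture that peaks later, as in the healthy pattern of Figure 1.
t = np.linspace(0.0, 1.0, 101)
s = 10 * t**3 - 15 * t**4 + 6 * t**5              # normalized minimum-jerk profile
wrist = np.column_stack([0.3 * s, np.zeros_like(t), np.zeros_like(t)])
aperture = 0.08 * np.exp(-((t - 0.65) / 0.1) ** 2)
t_vel, t_ap, lag = coordination_timing(t, wrist, aperture)
```

On the synthetic trial the velocity peaks at mid-movement and the aperture peak follows it; for a hemiparetic trial the same measure would show slower movement and a delayed aperture peak.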
For more complex movements, individuals with hemiparesis may have several
deficits when attempting to produce coordinated arm, trunk and hand movements. For
example, during trunk-assisted reaching (reaching to objects placed beyond arm’s
length), patients may have deficits in the timing of the initiation of arm and trunk
movement characterized by delays and increased variability [34, 35]. In addition,
Esparza et al. [35] found differences in the range of trunk displacement between
patients with left and right brain lesions and documented bilateral deficits in the control
of movements involving complex arm-trunk coordination.
We are only beginning to understand how complex movements are controlled and
the role of perception-action coupling in the healthy and damaged nervous system. The
healthy nervous system is able to integrate multiple degrees of freedom of the body and
produce invariant hand trajectories when making pointing movements with or without
trunk displacement (Figure 2). In trunk-assisted reaching, Rossi et al. [36] compared
the hand trajectories when healthy subjects reached to a target placed beyond the reach
on a horizontal surface. In some trials, the trunk was free to move and thus contributed
to the endpoint trajectory. In some other trials however, the trunk movement was
unexpectedly arrested before the movement began. They showed that the initial
contribution of the trunk movement to the hand displacement was neutralized by
appropriate compensatory rotations at the shoulder and elbow. Trunk movement began
to contribute to hand displacement only after the peak velocity of the hand movement
was reached. Results such as these highlight the elegant temporal and spatial
coordination used by the healthy nervous system to produce smooth and effective
movement.
Figure 2. Top. For beyond-the-reach experiments, subjects sat in a cut-out section of a plexiglass table.
Goggles obstructed vision of the hand and target after the go signal. Hand starting position was located 30 cm
in front of the sternum. A metal plate attached to the back of the trunk, and an electromagnet attached to the
wall were used to arrest the trunk movement in 30% of randomly selected trials. Middle and lower panels:
Mean hand and trunk trajectories for one healthy (left) and one stroke subject (right) in trunk-blocked (solid
lines) and trunk-free (open lines) movements. The stroke subject had a moderate motor impairment as
indicated by the Fugl-Meyer (FM) Arm Score of 50 out of 66. Despite differences in the trunk motion
between conditions, hand trajectories for blocked-trunk trials initially coincided with those for free-trunk
movements. Hand trajectories for trunk-blocked trials diverged earlier in participants with stroke indicating
that they could not fully compensate for the trunk movement by adjusting their arm movement.
After stroke, control of movement in specific joint ranges is limited and trunk
movement makes a larger and earlier contribution to hand transport for reaches to
objects placed both within and beyond the arm’s length [26, 29]. The neurologically
damaged system also has deficits in the ability to make appropriate compensatory
adjustments of the arm joints to maintain the desired hand trajectory during trunk-
assisted reaching. This was tested using the same paradigm described above for the
study by Rossi et al. [36]. We compared hand trajectories and elbow-shoulder interjoint
coordination during “beyond-the-reach” pointing movements in healthy and
hemiparetic subjects when the trunk was free to move or when it was unexpectedly
arrested [31]. In approximately half the participants with hemiparesis, hand trajectory
divergence occurred earlier (Figure 2, right panels) while the divergence of interjoint
coordination patterns occurred later than in the control group, suggesting that
compensatory adjustments of the shoulder and elbow joints were not sufficient to
neutralize the influence of the trunk on the hand trajectory. Arm movements only
partially compensated the trunk displacement and this compensation was delayed. This
suggests a deficit in intersegmental temporal coordination that may be partly
responsible for the loss of arm coordination even in well-recovered patients.
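The divergence point between trunk-free and trunk-blocked hand paths can be computed in several ways; the chapter does not specify a criterion, so the sketch below uses an assumed one (distance between the two mean trajectories exceeding the baseline mean plus k standard deviations over an initial window). Function name and parameters are illustrative.

```python
import numpy as np

def divergence_time(t, free_path, blocked_path, baseline_frac=0.1, k=2.0):
    """First time the trunk-blocked hand path departs from the trunk-free path.

    Hypothetical criterion: the 3D distance between the two mean trajectories
    exceeds (baseline mean + k * baseline SD), where the baseline is the initial
    fraction of the movement. Earlier divergence implies weaker compensatory
    shoulder/elbow adjustments. Returns None if the paths never diverge.
    """
    d = np.linalg.norm(np.asarray(free_path) - np.asarray(blocked_path), axis=1)
    n0 = max(2, int(baseline_frac * len(t)))
    thresh = d[:n0].mean() + k * d[:n0].std()
    beyond = np.nonzero(d > thresh)[0]
    return t[beyond[0]] if beyond.size else None

# Illustrative trial: trajectories coincide early, then the blocked path deviates.
t = np.linspace(0.0, 1.0, 101)
free = np.column_stack([t, np.zeros_like(t), np.zeros_like(t)])
dev = 5.0 * np.maximum(0.0, t - 0.405) ** 2       # lateral deviation after ~0.4 s
blocked = np.column_stack([t, dev, np.zeros_like(t)])
t_div = divergence_time(t, free, blocked)
```

Comparing `t_div` between a patient and a control group would reproduce the "earlier divergence" comparison described above, under this assumed criterion.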
Individuals with hemiparesis also have spatial and temporal coordination deficits
between movements of adjacent arm joints such as the elbow and shoulder [12, 16, 17,
18, 37], between the transport phase of reaching and aperture formation in grasping [38,
40] and in precision grip force control [39, 41]. For example, using a mathematical
analysis of kinematic variability during whole arm reaching movements, Reisman and
Scholz [42] found that individuals with mild-to-moderate hemiparesis had deficits in
specific patterns of joint coupling, and that they had only partial ability to rapidly
compensate movement errors. This suggestion had previously been proposed for single
joint arm movements by Dancause et al. [43] who further related the error
compensation deficits to impairments in executive functioning in patients with chronic
stroke.
The reduced capacity to produce and coordinate the movements of the arm, hand
and trunk into coherent action [see 44, 45] may lead to clumsy and slow movement
making it less likely that individuals would use their upper limb in daily life activities.
Rehabilitation efforts are aimed at reducing the effects of impairments through repeated
practice of targeted movements, tasks or activities in controlled clinical environments
[46].

2. Environments for upper limb rehabilitation interventions

The environment in which movement is practiced may be crucial for maximizing motor
recovery. Recently, Kleim and Jones [47] summarized some of the outcomes of the
III STEP meeting held in Salt Lake City in 2005, and outlined 10 principles of
experience-dependent plasticity related to recovery from stroke. Of these, several
principles directly or indirectly relate to the environment in which movement is
practiced. These include the importance of specificity, repetition, intensity and salience
of practice. All of these factors can be creatively manipulated using virtual reality
technology to make the most of the practice environment and to add the novelty of
gaming to make activities more challenging. Virtual reality (VR) is a multisensorial
experience in which a person is immersed and can interact with a computer-generated
environment [48]. VR offers the user a practice environment that can be ecologically
valid and has the potential to enhance patient enjoyment and compliance [49],
important factors in successful rehabilitation [50, 52].

2.1. Advantages of virtual environments

In virtual reality environments (VEs), real-world situations can be mimicked while
precisely and systematically manipulating environmental constraints (tasks, obstacles).
Indeed, task difficulty can be manipulated without danger to the user. Consequently,
VEs have been used in a number of movement analysis studies [53, 61]. One advantage
of using VEs is that sensory parameters can be adapted and scaled to the abilities of the
user. In so doing, responses to a larger number of situations can be measured in a
shorter amount of time than is possible in real-world laboratory set-ups. For
example, in a VE, several object locations and orientations can be reliably and rapidly
reproduced and object properties can be manipulated (i.e., obstacles can be introduced
by quickly changing properties and orientation of the object or the environment). VEs
are especially suited to the study of how individuals interact with objects or situations
that unexpectedly change. Thus, questions about dexterity and coordination that are not
easily accessible in a real-world environment can be more easily addressed. This is of
particular importance in the study of arm functional recovery in post-stroke patients.
Many stroke survivors lack the ability to reliably use the arm and hand during
interactions with objects within changing environments: e.g. catching a ball or picking
up an object while walking. These types of experimental set-ups are difficult to recreate
in the laboratory. Finally, another advantage of using VR is the possibility of studying
movement production in situations that, in the real world, may compromise the safety
of the individual. For example, in obstacle avoidance tasks, the ability to anticipate and
reach around a static obstacle such as the table ledge can be evaluated as well as the
ability to move in a constrained environment without danger of incurring injury due to
impact of the hand with an object.

2.2. The question of haptics

When the arm and hand interact with objects in the physical world, in addition to
proprioceptive feedback related to limb movement, the individual perceives sensory
information about collision of the hand with the objects being manipulated. This
sensory information, combined with task success, provides feedback to the individual
about the adequacy and effectiveness of his or her movement in the virtual
environment. However, haptic information is not easily incorporated into VR
environments created for motor control studies or rehabilitation studies of upper limb
reaching and object manipulation. The use of relevant haptic interfaces is important
because it enhances the user’s sense of presence within VEs [62]. Many existing VEs
do not include haptics or include haptic information limited to sensations felt through a
joystick or mouse [63, 64]. These do not provide the nervous system with the most
salient movement-related sensory information. Given this reality, the essential
question is whether movements made in VR environments that lack the haptic sensory
cues usually available in physical environments can be considered valid. In other
words, are they kinematically similar, in both space and time, to equivalent
movements made in physical environments? In order to address this question, several
studies have been
done to compare the kinematics of movements made in different types of VEs to those
made in physical environments [65, 69]. The following section of this chapter will
summarize the results of these validation studies.

3. Are movements made in virtual and physical environments kinematically similar?

Viau et al. [69] compared movement kinematics made by 8 healthy adults and 7 stroke
survivors with mild left hemiparesis who performed near-identical tasks in both a
physical and a virtual environment. In both tasks, seated subjects grasped a real or
virtual 7 cm diameter ball, reached forward by leaning the trunk and then placed the
ball within a 2 cm x 2 cm yellow square on a real or virtual target. The initial
conditions for the task and the tasks themselves were carefully matched so that
movement extent and direction were as similar as possible. Thus, in both environments,
the initial position of the arm was about 0° flexion, 30° abduction and 0° external
rotation (shoulder), 80° flexion and 0° supination (elbow) with the wrist and hand in
the neutral position. The fingers were slightly flexed. The initial position of the ball
was 13 cm in front of the right shoulder, 7 cm above and 3 cm to the left of the
subject’s hand. The target was placed 31 cm in front of the shoulder, 12.5 cm above
and 14 cm to the right of the initial position of the ball. The VR environment was
displayed in 2 dimensions (2D) on a computer screen placed 75 cm in front of subject’s
midline. The ball and hand were displayed on the screen inside a cube. The task was to
place the ball in the upper right far corner of the cube. The virtual representation of the
subject’s hand was obtained using a 22 sensor fibre optic glove (Cyberglove,
Immersion Corp.) and an electromagnetic sensor (Fastrak, Polhemus Corp.) that was
used to orient the glove in the 2D environment. Data from these devices were
synchronized in real time. To enable the subject to "feel" the virtual ball, a prehension
force feedback device (Cybergrasp, Immersion Corp.) was fitted to the dorsal surface
of the hand. The Cybergrasp delivered prehension force feedback in the form of
extension forces to the distal phalanges of the thumb and each finger. Forces applied to
the fingers were calibrated for each subject while he/she was wearing the Cyberglove
and all subjects perceived that they were holding a spherical object in their hand. To
better compare the performance of participants in each of the two environments, the
glove and grasp devices were worn on the hand in both conditions (Figure 3).

Figure 3. Top: Experimental set up for reaching, grasping and placing experiment in 2D virtual (VE) and
physical (PE) environments. Elbow-shoulder interjoint coordination in the reaching (middle) and transport
(bottom) phase of the task was similar between environments in healthy and stroke subjects.
Kinematics of functional arm movements involving reaching, grasping and
releasing made in physical and virtual environments were analyzed in two phases: 1)
reaching and grasping the ball and 2) ball transport and release. Temporal and spatial
parameters of reaching and grasping were determined for each phase. Using this 2D
VR environment, individuals in both groups were able to reach, grasp, transport, place
and release the virtual and physical ball using similar movement strategies. In healthy
subjects, reaching and grasping movements in both environments were similar in terms
of spatial and temporal characteristics of the endpoint and joint movements. Healthy
subjects, however, used less wrist extension and more elbow extension to place the ball
on the virtual vertical surface.
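The reach and transport phases in such analyses are typically segmented from the endpoint's tangential velocity. The sketch below uses one common convention, onset and offset at a fixed fraction of peak velocity; the 10% criterion and the synthetic trajectory are illustrative assumptions, not necessarily the criteria used by Viau et al. [69]:

```python
import numpy as np

def movement_bounds(endpoint_xyz, dt, threshold=0.10):
    """Movement onset/offset from endpoint tangential velocity.

    Onset is the first sample before the velocity peak that exceeds
    `threshold` x peak velocity; offset is the first sample after the
    peak that drops below it. Returns (onset_index, offset_index).
    """
    pos = np.asarray(endpoint_xyz, dtype=float)
    vel = np.linalg.norm(np.gradient(pos, dt, axis=0), axis=1)
    peak = int(vel.argmax())
    cutoff = threshold * vel[peak]
    onset = int(np.argmax(vel[:peak] >= cutoff))    # first crossing before peak
    offset = peak + int(np.argmax(vel[peak:] < cutoff))  # first drop after peak
    return onset, offset

# Synthetic 30 cm forward reach: still for 0.5 s, moving for 1 s, still again.
dt = 0.01
t = np.arange(0.0, 2.0, dt)
x = np.where(t < 0.5, 0.0, np.where(t > 1.5, 0.3, 0.3 * (t - 0.5)))
traj = np.column_stack([x, np.zeros_like(t), np.zeros_like(t)])

onset, offset = movement_bounds(traj, dt)
```

Splitting the record at the grasp event in the same way yields the separate reach-to-grasp and transport phases analyzed above.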
As has been well-documented [17, 37], reaching movements made by individuals
with hemiparesis are different from those made by healthy control subjects. Compared
to healthy subjects, participants with hemiparesis made slower movements in both
environments and during transport and placing of the ball, trajectories were more
curved and interjoint coordination was altered. Despite these differences, however,
participants with hemiparesis also tended to use less wrist extension during the whole
movement and they used more elbow extension at the end of the placing phase for the
movement made in VR.
The finding that both groups of subjects used less wrist extension and more elbow
extension in the virtual compared to the physical environment suggested that the
movements made in VR might have been influenced by differences in perception of the
target location and the absence of haptic feedback when the target was touched by the
ball. We addressed these questions in a second study in which we compared the spatial
and temporal characteristics of reaching to targets located in different parts of the
workspace in a 3D environment [65, 66]. If the problem of target localization was
related to the quality of depth perception, then movements made in a 3D environment
should be more like those made in a physical environment than those made in the 2D
environment of the computer screen.
We created a 3D VE consisting of two rows of three targets arranged so that they
were in different parts of the arm workspace (Figure 4). The virtual environment,
created with CAREN software (Motek, Inc.), was viewed through a head-mounted display
(HMD, Kaiser XL50, resolution 1024 × 768, refresh rate 60 Hz) and arm and hand
movements were recorded with an Optotrak Motion Capture System (Northern Digital).
In lieu of haptic feedback, when a target was ‘touched’ by the virtual hand, auditory or
visual feedback was provided.
Figure 4. A. Experimental set-up for comparison of pointing in the physical environment and equivalent 3D
virtual environment. The virtual environment (VE) was designed as two rows of three elevator buttons. The
distances between the buttons and from the body were the same in both environments. B. Examples of
endpoint (hand) and trunk trajectories for pointing movements to three lower targets in one healthy and one
stroke subject. C. Examples of elbow/shoulder interjoint coordination for movements made to middle lower
target in healthy and stroke subjects in the physical (PE) and virtual (VE) environments.
The VE was designed to exactly reproduce a physical environment that also
consisted of 2 rows of targets. Thus, the VE was not designed to take advantage of the
attributes of virtual environments for movement retraining. Rather, it was designed to
be an exact replica of the physical environment in order to be able to compare the
movement kinematics made to similarly placed targets. The location of the targets
required the subject to use different combinations of arm joint movements for
successful pointing. The center-to-center distance between adjacent targets was 26 cm
in both environments and targets were displayed at a standardized distance equal to the
participant’s arm length.
Fifteen adults (4 women, 11 men; aged 59 ± 15.4 years) with chronic poststroke
hemiparesis participated in this study. They had moderate upper limb impairment
according to Chedoke-McMaster Arm Scores, which ranged from 3 to 6 out of 7. A
comparison group of 12 healthy subjects (6 women, 6 men, aged 53.3 ± 17.1 years)
also participated in the study.
The task was to point as quickly and as accurately as possible to each of the 6 targets
(12 trials per target) in a random sequence in each of the two environments.
Movements were analyzed in terms of performance outcome measures (endpoint
precision, trajectory and peak velocity) and arm and trunk movement patterns (elbow
and shoulder ranges of motion, elbow/shoulder coordination, trunk displacement and
rotation). There were very few differences in movement kinematics between
environments for healthy subjects. Overall, there were no differences in elbow and
shoulder ranges of motion or interjoint coordination for movements made in both
environments by either group (Figure 5). Healthy subjects, however, made movements
faster, pointed to contralateral targets more accurately and made straighter endpoint
paths in the PE compared to the VE. The participants with stroke made less accurate
and more curved movements in VE and also used less trunk displacement. Thus, the
results of this study suggested that pointing movements in virtual environments were
sufficiently similar to those made in physical environments so that 3D VEs could be
considered as valid training environments for upper limb movements.
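Two of the endpoint outcome measures used in these comparisons, peak velocity and trajectory straightness, can be computed directly from recorded endpoint positions. The following is a minimal sketch under our own naming assumptions; a straightness index of path length over straight-line distance is one common choice, not necessarily the exact metric of [65, 66]:

```python
import numpy as np

def endpoint_measures(xyz, dt):
    """Peak tangential velocity and straightness index for one movement.

    The straightness index is the traveled path length divided by the
    straight-line start-to-end distance: 1.0 for a perfectly straight
    path, larger values for more curved paths.
    """
    p = np.asarray(xyz, dtype=float)
    steps = np.linalg.norm(np.diff(p, axis=0), axis=1)  # per-sample distances
    peak_velocity = float(steps.max() / dt)
    straightness = float(steps.sum() / np.linalg.norm(p[-1] - p[0]))
    return peak_velocity, straightness

# Illustrative comparison: a straight 30 cm reach versus one with a
# 5 cm lateral bow, the kind of curvature difference reported here.
dt = 0.01
s = np.linspace(0.0, 1.0, 101)
zeros = np.zeros_like(s)
straight = np.column_stack([0.3 * s, zeros, zeros])
curved = np.column_stack([0.3 * s, 0.05 * np.sin(np.pi * s), zeros])

v_str, si_str = endpoint_measures(straight, dt)
v_cur, si_cur = endpoint_measures(curved, dt)
```

The curved path yields a straightness index above 1, which is how "more curved" trajectories in the VE can be expressed as a single number per trial.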

Figure 5. Results of comparison of pointing movements made in two environments described in Figure 4.
Healthy (A) but not stroke (B) subjects made movements more slowly in the virtual environment (VE)
compared to the physical environment (PE). There were no differences in joint ranges used in either healthy
or stroke subjects in the two environments (C,D).
The appearance of more curved trajectories and the use of less trunk movement
were also features of grasping movements made in a virtual environment while subjects
wore a haptic device on the hand (Cybergrasp, Immersion Corp.). In a study of 12
adults with chronic stroke-related hemiparesis (age 67±10 yrs), reaching and grasping
kinematics to three different objects in a VE and a PE were compared [68]. The 3D
virtual environment was displayed via a HMD as in the previous study and the task was
to reach forward, pick-up and transport a virtual/physical object from one surface to
another (Figure 6). Three objects were used that required different grasp types – a can
(diameter 65.6 mm) that required a spherical grasp, a screwdriver (diameter 31.6 mm)
requiring a power grasp and a pen (diameter 7.5 mm), requiring a precision finger-
thumb grasp. In the VE, the virtual representation of the subject's hand was obtained
using a glove (Cyberglove, Immersion Corp.) and haptic feedback (prehension force
feedback) was provided via an exoskeleton device placed over the glove (Cybergrasp,
Immersion Corp.).
As for the comparison of reaching movements, comparable movement strategies
were used to reach, grasp and transport the virtual and physical objects in the two
environments. Similar to what was found for pointing movements, reaching in VR took
approximately 35% longer compared to PE. This was true especially for the cylindrical
and precision grasps. Thus, reaching and grasping movements that were accomplished
in around 1.5 seconds in the PE took up to 2.2 seconds in the VE. The increase in
movement time was reflected in all the temporal variables compared between the two
environments such as the peak velocity, the time to peak velocity, the time to maximal
grip aperture and the deceleration time as the hand approached the object. In addition to
the temporal differences, movement endpoint trajectories were also more curved in VE.
Overall, participants used more elbow extension and shoulder horizontal adduction in
VE compared to PE and there were slight differences in the amount of supination and
pronation used for reaching the different objects. Despite these differences, subjects
were able to similarly scale hand aperture to object size and the hand was similarly
oriented in the VE compared to the PE.
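Aperture scaling of this kind is conventionally quantified as the peak thumb-index separation during the reach. A minimal sketch with synthetic opening profiles (the constant 20 mm opening margin is an illustrative assumption, not a result from [68]):

```python
import numpy as np

def max_grip_aperture(thumb_xy, index_xy):
    """Peak thumb-index separation and the sample at which it occurs."""
    d = np.linalg.norm(np.asarray(thumb_xy, dtype=float)
                       - np.asarray(index_xy, dtype=float), axis=1)
    return float(d.max()), int(d.argmax())

# Synthetic reach-to-grasp profiles: the hand opens to the object's
# diameter plus a fixed safety margin, then closes for the grasp.
t = np.linspace(0.0, 1.0, 101)
peaks = []
for diameter in (7.5, 31.6, 65.6):          # pen, screwdriver, can (mm)
    aperture = (diameter + 20.0) * np.sin(np.pi * t)
    thumb = np.column_stack([300.0 * t, aperture / 2.0])
    index_finger = np.column_stack([300.0 * t, -aperture / 2.0])
    peak_mm, _ = max_grip_aperture(thumb, index_finger)
    peaks.append(peak_mm)
```

Plotting peak aperture against object diameter for each environment then shows whether participants scale their grasp to object size, as reported above.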

Figure 6. Representation of virtual environment for comparison of reaching and grasping kinematics in
physical and virtual environments. Inset (upper right) shows the scene as viewed by the subject wearing the
head-mounted display. Bottom: Sequence of movements (1-5) for picking up and moving the can,
screwdriver and pen.
4. Conclusion

Results of these validation studies are encouraging for the incorporation of VEs into
rehabilitation programs aimed at improving upper limb function. They suggest that
movements made in virtual environments can be kinematically similar to those made in
physical environments. This is the first step in the validation of VEs for rehabilitation
applications. A question remains as to how similar movements made in VEs have to be
to movements made in the physical world in order for real functional gains to occur.
Research on the effectiveness of task-specific training versus conventional or non-
specific training suggests that rehabilitation outcomes are better when practice is task-
oriented and repetitive [4, 46, 70]. Better outcomes are also expected when the learner
is motivated to improve and when the movements practiced are judged to be salient to
the learner [47]. These variables can be optimized in novel environments offered by
virtual reality technology to maximize rehabilitation outcomes.
VR is one of the most innovative and potentially effective technologies that, during
the past decade, has begun to be used as an assessment and treatment tool in the
rehabilitation of adults and children [49, 50, 52, 71, 72]. Some progress has been made
in the demonstration of the transfer of abilities and skills acquired within VE to real
world performance [50, 69, 73, 75]. Training in virtual reality environments has the
potential to lead to better rehabilitation outcomes than conventional approaches
because of the attributes of VR. Future research is still needed to firmly establish that
motor gains made in VEs are transferable to and will improve functioning and arm use
in the physical world.

Acknowledgements

These studies were supported by the Canadian Foundation for Innovation (CFI), the
Natural Science and Engineering Council of Canada (NSERC) and the Heart and
Stroke Foundation of Canada (HSFC). ECM was supported by CAPES, Brazil. MFL
holds a Tier 1 Canada Research Chair in Motor Recovery and Rehabilitation. Thanks
are extended to the patients and volunteers who participated in these studies and to
Ruth Dannenbaum-Katz, Christian Beaudoin and Valeri Goussev for their clinical and technical
expertise.

References

[1] N.E. Mayo, W. Wood-Dauphinee, S. Ahmed, C. Gordon, J. Higgins, S. McEwen, N. Salbach,
Disablement following stroke, Disability & Rehabilitation 21 (1999), 258-268.
[2] J. Carod-Artal, J.A. Egido, J.L. Gonzalez, E. Varela de Seijas, Quality of life among stroke survivors
evaluated 1 year after stroke: experience of a stroke unit, Stroke 31 (2000), 2995-3000.
[3] P. Clarke, S.E. Black, Quality of life following stroke: Negotiating disability, identity, and resources,
Journal of Applied Genetics 24 (2005), 319-336.
[4] Canadian Stroke Network – Evidence Based Review of Stroke Rehabilitation,
http://www.canadianstrokenetwork.ca/eng/research/themefour.php#, accessed on 2007.
[5] G. Kwakkel, B.J. Kollen, and R.C. Wagenaar, Therapy impact on functional recovery in stroke
rehabilitation. A critical review of the literature, Physiotherapy 85 (1999), 377-391.
[6] J.G. Broeks, G.J. Lankhorst, K. Rumping, A.J.H. Prevo, The long-term outcome of arm function after
stroke: results of a follow-up study, Disability Rehabilitation 21 (1999), 357-364.
[7] T. Platz, P. Denzler, Do psychological variables modify motor recovery among patients with mild arm
paresis after stroke or traumatic brain injury who receive the Arm Ability Training? Restorative
Neurology and Neuroscience 20 (2002), 37-49.
[8] J.W. Lance, The control of muscle tone, reflexes, and movement: Robert Wartenberg Lecture, Neurology 30
(1980), 1303-1313.
[9] B. Bobath, Adult Hemiplegia. Evaluation and Treatment 2nd ed., Heinemann Medical, London, 1978.
[10] D. Bourbonnais, S. Vanden Noven, Weakness in patients with hemiparesis, American Journal of
Occupational Therapy 43 (1989), 313-317.
[11] B. Conrad, R. Benecke, H.M. Meinck, Gait disturbances in paraspastic patients. In: Restorative
Neurology, Clinical Neurophysiology in Spasticity, P.J. Delwaide, and R.R. Young, Elsevier,
Amsterdam, 1 (1985), 155-174.
[12] J.P.A. Dewald, P.S. Pope, J.D. Given, T.S. Buchanan, and W.Z. Rymer, Abnormal muscle coactivation
patterns during isometric torque generation at the elbow and shoulder in hemiparetic subjects, Brain 118
(1995), 495-510.
[13] J. Filiatrault, D. Bourbonnais, J. Gauthier, D. Gravel, A.B. Arsenault, Spatial patterns of muscle
activation at the lower limb in subjects with hemiparesis and in healthy subjects, Journal of
Electromyography and Kinesiology 2 (1991), 91-102.
[14] M.C. Hammond, G.H. Kraft, S.S. Fitts, Recruitment and termination of electromyographic activity in the
hemiparetic forearm, Archives of Physical Medicine and Rehabilitation 69 (1988), 106-110.
[15] M.F. Levin, M. Dimov, Spatial zones for muscle coactivation and the control of postural stability, Brain
Research 757 (1997), 43-59.
[16] R.F. Beer, J.P. Dewald, W.Z. Rymer, Deficits in the coordination of multijoint arm movements in
patients with hemiparesis: evidence for disturbed control of limb dynamics, Experimental Brain
Research 131 (2000), 305-319.
[17] M.F. Levin, Interjoint coordination during pointing movements is disrupted in spastic hemiparesis, Brain
119 (1996), 281-294.
[18] M.C. Cirstea, A.B. Mitnitski, A.G. Feldman, M.F. Levin, Interjoint coordination dynamics during
reaching in stroke patients, Experimental Brain Research 151 (2003), 289-300.
[19] A. Hufschmidt, K.H. Mauritz, Chronic transformation of muscle in spasticity: a peripheral contribution
to increased tone, Journal of Neurology, Neurosurgery, and Psychiatry 48 (1985), 676-685.
[20] F. Jakobsson, L. Grimby, L. Edstrom, Motoneuron activity and muscle fibre type composition in
hemiparesis, Scandinavian Journal of Rehabilitation Medicine 24 (1992), 115-119.
[21] J.G. Colebatch, S.C. Gandevia, P.J. Spira, Voluntary muscle strength in hemiparesis: distribution of
weakness at the elbow, Journal of Neurology, Neurosurgery, and Psychiatry 49 (1986), 1019-1024.
[22] A. Tang, W.Z. Rymer, Abnormal force-EMG relations in paretic limbs of hemiparetic human subjects,
Journal of Neurology, Neurosurgery, and Psychiatry 44 (1981), 690-698.
[23] C. Gowland, H. deBruin, J.V. Basmajian, N. Plews, I. Burcea, Agonist and antagonist activity during
voluntary upper-limb movement in patients with stroke, Physical Therapy 72 (1992), 624-633.
[24] M.C. Hammond, S.S. Fitts, G.H. Kraft, P.B. Nutter, M.J. Trotter, L.M. Robinson, Co-contraction in the
hemiparetic forearm: Quantitative EMG evaluation, Archives of Physical Medicine and Rehabilitation 69
(1988), 348-351.
[25] M.F. Levin, A.G. Feldman, The role of stretch reflex threshold regulation in normal and impaired motor
control, Brain Research 637 (1994), 23-30.
[26] M.F. Levin, R.W. Selles, M.H.G. Verheul, O.G. Meijer, Deficits in the coordination of agonist and
antagonist muscles in stroke patients: Implications for normal motor control, Brain Research 853 (2000),
352-369.
[27] N. Yanagisawa, R. Tanaka, Reciprocal Ia inhibition in spastic paralysis in man, in: W.A. Cobb, H. van
Duijn, Contemporary Clin Neurophysiol EEG Suppl 34, Elsevier, Amsterdam, 1978, pp. 521-526.
[28] M.C. Cirstea, M.F. Levin, Compensatory strategies for reaching in stroke, Brain 123 (2000), 940-953.
[29] M.F. Levin, S. Michaelsen, C. Cirstea, A. Roby-Brami, Use of the trunk for reaching targets placed
within and beyond the reach in adult hemiparesis, Experimental Brain Research 143 (2002), 171-180.
[30] S.M. Michaelsen, R. Dannenbaum, M.F. Levin, Task-specific training with trunk restraint on arm
recovery in stroke. Randomized control trial, Stroke 37 (2006), 186-192.
[31] D. Moro, M.F. Levin, Arm-trunk compensations for beyond-the-reach movements in adults with chronic
stroke, International Society of Electrophysiological Kinesiology, Abstract, Boston, 2004.
[32] A. Roby-Brami, A. Feydy, M. Combeaud, E.V. Biryukova, B. Bussel, M. Levin, Motor compensation
and recovery of reaching in stroke patients, Acta Neurologica Scandinavia 107 (2003), 369-381.
[33] S.M. Michaelsen, E.C. Magdalon, M.F. Levin, Coordination between reaching and grasping in adults
with hemiparesis, Motor Control, in press.
[34] P. Archambault, P. Pigeon, A.G. Feldman, M.F. Levin, Recruitment and sequencing of different degrees
of freedom during pointing movements involving the trunk in healthy and hemiparetic subjects,
Experimental Brain Research 126 (1999), 55-67.
[35] D. Esparza, P.S. Archambault, C.J. Winstein, M.F. Levin, Hemispheric specialization in the co-
ordination of arm and trunk movements during pointing in patients with unilateral brain damage,
Experimental Brain Research 148 (2003), 288-497.
[36] E. Rossi, A. Mitnitski, A.G. Feldman, Sequential control signals determine arm and trunk contributions
to hand transport during reaching, The Journal of Physiology 538 (2002), 659-671.
[37] M.C. Cirstea, A. Ptito, M.F. Levin, Arm reaching improvements with short-term practice depend on the
severity of the motor deficit in stroke, Experimental Brain Research 152 (2003), 476-488.
[38] S.M. Michaelsen, S. Jacobs, A. Roby-Brami, M.F. Levin, Compensation for distal impairments of
grasping in adults with hemiparesis, Experimental Brain Research 157 (2004), 162-173.
[39] R. Wenzelburger, F. Kopper, A. Frenzel, H. Stolze, S. Klebe, A. Brossmann, J. Kuhtz-Buschbeck, M.
Golge, M. Illert, G. Deuschl, Hand coordination following capsular stroke, Brain 128 (2005), 64-74.
[40] R.M. Dannenbaum, M.F. Levin, R. Forget, P. Oliver, S.J. De Serres, Fading of sustained touch-pressure
appreciation in the hand of patients with hemiparesis, Archives of Physical Medicine and Rehabilitation,
in press.
[41] J. Hermsdorfer, K. Laimgruber, G. Kerkhoff, N. Mai, G. Goldenberg, Effects of unilateral brain damage
on coordination, and kinematics of ipsilesional prehension, Experimental Brain Research 128 (1999),
41-51.
[42] D.S. Reisman, J.P. Scholz, Aspects of joint coordination are preserved during pointing in persons with
post-stroke hemiparesis, Brain 126 (11) (2003), 2510-2527.
[43] N. Dancause, A. Ptito, M.F. Levin, Error correction strategies for motor behavior after unilateral brain
damage: Short-term motor learning processes, Neuropsychologia 40 (2002), 1313-1323.
[44] N. St-Onge, A.G. Feldman, Referent configuration of the body: A global factor in the control of multiple
skeletal muscles, Experimental Brain Research 155 (2004), 291-300.
[45] A.G. Feldman, V. Goussev, A. Sangole, M.F. Levin, Threshold position control and the principle of
minimal interaction in motor actions, Brain Research 165 (2007), 267-281.
[46] A. Gentile, Skill acquisition: action, movement, and neuromotor processes, in J. Carr and R. Shepherd
(Eds) Movement Science: Foundations for Physical Therapy in Rehabilitation, Aspen, Rockville, MD,
1987, pp. 93-117.
[47] J.A. Kleim, T.A. Jones, Principles of experience-dependent neural plasticity: Implication for
rehabilitation after brain damage, Journal of Speech and Hearing Research 51 (2008), 225-239.
[48] M.T. Schultheis, J. Himelstein, A.A. Rizzo, Virtual reality and neuropsychology: Upgrading the current
tools, The Journal of Head Trauma Rehabilitation 17 (2002), 378-394.
[49] A. Rizzo, G.J. Kim, A SWOT analysis of the field of virtual reality rehabilitation and therapy, Presence-
Teleoperators & Virtual Environments 14 (2005), 119-146.
[50] H. Sveistrup, Motor rehabilitation using virtual reality, Journal of NeuroEngineering and Rehabilitation
1 (2004), 1-8.
[51] M. Thornton, S. Marshal, J. McComas, H. Finestone, H. McCormick, H. Sveistrup, Benefits of activity
and virtual reality based balance exercise program for adults with traumatic brain injury: Perceptions of
participants and their caregivers, Brain Injury 19 (2005), 989-1000.
[52] P.L. Weiss, N. Katz, The potential of virtual reality for rehabilitation, Journal of Rehabilitation Research
and Development 41 (2004), vii-x.
[53] J. Broeren, M. Dixon, K. Stibrant Sunnerhagen, M. Rydmark, Rehabilitation after stroke using virtual
reality, haptics (force feedback) and telemedicine, Studies in Health Technology and Informatics 124
(2006), 51-56.
[54] J.E. Deutsch, J. Latonio, G.C. Burdea, R. Boian, Rehabilitation of musculoskeletal injuries using the
Rutgers Ankle haptic interface: Three case reports, Eurohaptics 1 (2001), 11-16.
[55] J.E. Deutsch, A.S. Merians, G.C. Burdea, R. Boian, S.V. Adamovich, H. Poizner, Haptics and virtual
reality used to increase strength and improve function in chronic individuals post-stroke: Two case
reports, Neurology Report 26 (2002), 79-86.
[56] M. Holden, E. Todorov, J. Callahan, E. Bizzi, Case report: Virtual environment training improves motor
performance in two stroke patients, Neurology Report 23 (1999), 57-67.
[57] M.K. Holden, T. Dyar, J. Callahan, L. Schwamm, E. Bizzi, Motor learning and generalization following
virtual environment training in a patient with stroke, Neurology Report 24 (2000), 170-171.
[58] M.K. Holden, T. Dyar, J. Callahan, L. Schwamm, E. Bizzi, Quantitative assessment of motor
generalization in the real world following training in a virtual environment in patents with stroke,
Neurology Report 25 (2002), 129-130.
[59] A.S. Merians, D. Jack, R. Boian, M. Tremaine, G.C. Burdea, S.V. Adamovich, M. Recce, H. Poizner,
Virtual reality-augmented rehabilitation for patients following stroke, Physical Therapy 82 (2002), 898-
915.
[60] A. Rovetta, F. Lorini, M.R. Canina, Virtual reality in the assessment of neuromotor diseases:
measurement of time response in real and virtual environments, Studies in Health Technology and
Informatics 44 (1997), 165-184.
[61] S.H. You, S.H. Jang, Y.H. Kim, M. Hallett, S.H. Ahn, Y.H. Kwon, J.H. Kim, M.Y. Lee, Virtual reality-
induced cortical reorganization and associated locomotor recovery in chronic stroke: an experimenter-
blind randomized study, Stroke 36 (2005), 1166-1171.
[62] P.J. Durlach, J. Fowlkes, C.J. Metevier, Effect of variations in sensory feedback on performance in a
virtual reaching task presence, Teleoperators & Virtual Environments 14 (2005), 450-462.
[63] D.J. Reinkensmeyer, A.M. Cole, L.E. Kahn, D.G. Kamper, Directional control of reaching is preserved
following mild/moderate stroke and stochastically constrained following severe stroke, Experimental
Brain Research 143 (2002), 525-530.
[64] U. Feintuch, L. Raz, J. Hwang, N. Josman, N. Katz, R. Kizony, D. Rand, A.S. Rizzo, M. Shahar, J.
Yongseok, P.L. Weiss, Integrating haptic-tactile feedback into a video-capture-based virtual environment
for rehabilitation, CyberPsychology & Behavior 9 (2006), 129-132.
[65] S. Subramanian, L.A. Knaut, C. Beaudoin, B.J. McFadyen, A.G. Feldman, M.F. Levin, Virtual reality
environments for post-stroke arm rehabilitation, Journal of NeuroEngineering and Rehabilitation 4
(2007), 20.
[66] L.A. Knaut, S. Subramanian, A.K. Henderson, C. Beaudoin, D. Bourbonnais, S.J. De Serres, M.F. Levin,
Comparison of kinematics of pointing movements made in a virtual and a physical environment in
patients with chronic stroke, Abstract Viewer/Itinerary Planner, Atlanta, GA: Society for Neuroscience,
2006, pp. 451-18.
[67] L. Knaut, S. Subramanian, B.J. McFadyen, D. Bourbonnais, M.F. Levin, Kinematics of pointing
movements made in a virtual versus a physical 3D environment, Archives of Physical Medicine and
Rehabilitation, in press.
[68] E.C. Magdalon, M.F. Levin, A.A.F. Quevedo, S.M. Michaelsen, Kinematics of reaching and grasping in
a 3D immersive virtual reality environment in patients with hemiparesis, Neurorehabilitation and Neural
Repair 22 (2008), ID299.
[69] A. Viau, A.G. Feldman, B.J. McFadyen, M.F. Levin, Reaching in reality and virtual reality: A
comparison of movement kinematics in healthy subjects and in adults with hemiparesis, Journal of
NeuroEngineering and Rehabilitation 1 (2004), 11.
[70] J. Carr, R. Shepherd, Motor relearning program for stroke, Aspen Systems, Rockville, MD, 1985.
[71] A.A. Rizzo, Virtual reality and disability: emergence and challenge, Disability & Rehabilitation 24
(2002), 567-569.
[72] A.A. Rizzo, D. Strickland, S. Bouchard, The challenge of using virtual reality in telerehabilitation,
Telemedicine Journal & E-Health 10 (2004), 184-195.
[73] J. McComas, J. Pivik, M. Laflamme, Current uses of virtual reality for children with disabilities. In: G.
Riva, B.K. Wiederhold, E. Molinari, Virtual Environments in Clinical Psychology and Neuroscience:
Methods and Techniques in Advanced Patient-Therapist Interaction, 1998, pp. 161-169.
[74] M.K. Holden, Virtual environments for motor rehabilitation: Review, CyberPsychology & Behavior 8
(2005), 187-211.
[75] Y.S. Lam, D.W. Man, S.F. Tam, P.L. Weiss, Virtual reality training for stroke rehabilitation,
Neurorehabilitation 21 (2006), 245-253.
Virtual Reality Environments to Enhance
Upper Limb Functional Recovery in
Patients with Hemiparesis
Mindy F. LEVIN a, Luiz Alberto Manfre KNAUT b, Eliane C. MAGDALON c, and
Sandeep SUBRAMANIAN a
a School of Physical and Occupational Therapy, Faculty of Medicine, McGill University,
Montreal, Quebec, Canada
b School of Rehabilitation, University of Montreal, PR, Brazil
c Department of Biomedical Engineering, University of Campinas, Campinas, SP, Brazil
Abstract. Impairments in reaching and grasping have been well-documented in
patients with post-stroke hemiparesis. Patients have deficits in spatial and temporal
coordination and may use excessive trunk displacement to assist arm transport
during performance of upper limb tasks. Studies of therapeutic effectiveness have
shown that repetitive task-specific practice may improve motor function outcomes.
Movement retraining may be optimized when done in virtual reality (VR)
environments. Environments created with VR technology can incorporate elements
essential to maximize motor learning, such as repetitive and varied task practice,
performance feedback and motivation. Haptic technology can also be incorporated
into VR environments to enhance the user’s sense of presence and to make motor
tasks more ecologically relevant to the participant. As a first step in the validation
of the use of VR environments for rehabilitation, it is necessary to demonstrate
that movements made in virtual environments are similar to those made in
equivalent physical environments. This has been verified in a series of studies
comparing pointing and reaching/grasping movements in physical and virtual
environments. Because of the attributes of VR, rehabilitation of the upper limb
using VR environments may lead to better rehabilitation outcomes than
conventional approaches.

Keywords. Stroke, Kinematics, Validation, Reaching, Grasping.

Introduction

The motor recovery of the upper limb in patients following congenital or acquired brain
injury remains a persistent problem in neurological rehabilitation. More than 80% of
the approximately 566,000 stroke survivors in the United States experience hemiparesis
resulting in impairment of one upper extremity (UE) immediately after stroke, and in
55-75% of survivors these impairments persist beyond the acute stage of stroke. Important
from a rehabilitation perspective is that functional limitations of the upper limb
contribute to disability and are associated with diminished health-related quality of life
[1-3].
Despite a growing number of studies, there is still a paucity of good quality
evidence for the effectiveness of upper limb motor rehabilitation techniques for patients
with stroke-related hemiparesis [4]. Current rehabilitation practice is based on
movement repetition of targeted tasks in the clinical setting. Not all motor
improvements gained in clinical settings, however, have been shown to carry over into
real-world situations when patients are discharged home after therapy [5]. For example,
even patients with well-recovered upper limb function as judged by clinical tests may
not make full use of their arm in everyday activities [6]. One possible reason for the
tendency to underuse the affected arm may be the lack of recovery of higher-order
motor control functions resulting in an inability to perform rapid, accurate and
coordinated movement and the perception of arm movements as being clumsy and slow
[7]. This suggests that greater attention should be paid to retraining upper limb
coordination or the ability of the arm and hand to interact with the environment rapidly
and efficiently in order to improve the real world relevance of practice in the clinical
setting. Indeed, an important component of dexterous movement, if such a term can be
applied to whole arm movement, is coordination between different body segments - an
element that has been largely neglected in rehabilitation approaches to movement
recovery.

1. Deficits in the coordination of reaching and grasping movements in patients with stroke

The arm motor deficit in stroke is complex and can be described at all levels of the
International Classification of Functioning (ICF, World Health Organization,
http://www.who.int/classifications/icf/en/). At the Body Structure and Function
(impairment) level, stroke-related hemiparesis is characterized by sensorimotor deficits
such as spasticity [8] and pathological synergies in the limbs contralateral to the
hemispheric lesion [9]. The ability to activate and inactivate appropriate muscles [9-15]
is also compromised, as are the abilities to compensate for elbow and shoulder
torques [12, 16] and to coordinate movements between adjacent joints [17, 18].
Impairments may be related to altered mechanical properties of motor units [19, 20],
abnormal agonist motor unit activation [21, 22] and deficits in segmental reflex
organization, including the ability to appropriately regulate stretch-reflex threshold
excitability [23-27]. Previous studies have shown that patients have deficits in both
spatial and temporal aspects of interjoint coordination during 3D reaching to stationary
targets placed within [18, 28-30] and beyond arm's reach [31]. They also have
coordination deficits when synchronizing hand orientation with hand opening and
closing during reach-to-grasp movements to stationary targets (Figure 1) [32, 33].
Figure 1. Arm and hand coordination during a reach-to-grasp task in one healthy subject (top) and one
individual with stroke-related hemiparesis (bottom). The mean peak hand aperture (thin solid lines) generally
occurs after the mean peak hand velocity (thick solid lines) as seen in both examples but the movement is
slower and hand opening is delayed in the individual with hemiparesis. Dotted lines indicate ± one standard
deviation of the mean traces.
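The timing relation illustrated in Figure 1 can be quantified by locating the peaks of the sampled speed and aperture traces. The sketch below is purely illustrative and not the authors' analysis code; the function name, sampling rate and the synthetic bell-shaped profiles are all hypothetical, chosen only to reproduce the typical pattern in which peak aperture follows peak hand velocity.

```python
# Illustrative sketch: find the times of peak hand speed and peak grip
# aperture in one reach-to-grasp trial and compare their order (Figure 1).
# All names and the synthetic profiles below are hypothetical.
import math

def peak_time(samples, dt):
    """Return the time (s) at which a sampled trace reaches its maximum."""
    peak_index = max(range(len(samples)), key=lambda i: samples[i])
    return peak_index * dt

dt = 0.01                          # hypothetical 100 Hz sampling
t = [i * dt for i in range(101)]   # one 1-s reach-to-grasp trial

# Bell-shaped tangential hand speed peaking mid-movement, and a grip
# aperture peaking later in the movement, as typically reported.
speed = [0.3 * (1 - math.cos(2 * math.pi * ti)) for ti in t]
aperture = [0.08 * math.exp(-((ti - 0.65) ** 2) / 0.02) for ti in t]

t_peak_speed = peak_time(speed, dt)        # near 0.5 s in this trial
t_peak_aperture = peak_time(aperture, dt)  # near 0.65 s in this trial
```

In real analyses the peaks would be taken from filtered motion-capture data rather than analytic curves, but the ordering test (peak aperture after peak velocity) is the same.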
For more complex movements, individuals with hemiparesis may have several
deficits when attempting to produce coordinated arm, trunk and hand movements. For
example, during trunk-assisted reaching (reaching to objects placed beyond arm’s
length), patients may have deficits in the timing of the initiation of arm and trunk
movement characterized by delays and increased variability [34, 35]. In addition,
Esparza et al. [35] found differences in the range of trunk displacement between
patients with left and right brain lesions and documented bilateral deficits in the control
of movements involving complex arm-trunk coordination.
We are only beginning to understand how complex movements are controlled and
the role of perception-action coupling in the healthy and damaged nervous system. The
healthy nervous system is able to integrate multiple degrees of freedom of the body and
produce invariant hand trajectories when making pointing movements with or without
trunk displacement (Figure 2). In trunk-assisted reaching, Rossi et al. [36] compared
the hand trajectories when healthy subjects reached to a target placed beyond the reach
on a horizontal surface. In some trials, the trunk was free to move and thus contributed
to the endpoint trajectory; in other trials, however, the trunk movement was
unexpectedly arrested before the movement began. They showed that the initial
contribution of the trunk movement to the hand displacement was neutralized by
appropriate compensatory rotations at the shoulder and elbow. Trunk movement began
to contribute to hand displacement only after the peak velocity of the hand movement
was reached. Results such as these highlight the elegant temporal and spatial
coordination used by the healthy nervous system to produce smooth and effective
movement.
Figure 2. Top. For beyond-the-reach experiments, subjects sat in a cut-out section of a plexiglass table.
Goggles obstructed vision of the hand and target after the go signal. Hand starting position was located 30 cm
in front of the sternum. A metal plate attached to the back of the trunk, and an electromagnet attached to the
wall were used to arrest the trunk movement in 30% of randomly selected trials. Middle and lower panels:
Mean hand and trunk trajectories for one healthy (left) and one stroke subject (right) in trunk-blocked (solid
lines) and trunk-free (open lines) movements. The stroke subject had a moderate motor impairment as
indicated by the Fugl-Meyer (FM) Arm Score of 50 out of 66. Despite differences in the trunk motion
between conditions, hand trajectories for blocked-trunk trials initially coincided with those for free-trunk
movements. Hand trajectories for trunk-blocked trials diverged earlier in participants with stroke indicating
that they could not fully compensate for the trunk movement by adjusting their arm movement.
After stroke, control of movement in specific joint ranges is limited and trunk
movement makes a larger and earlier contribution to hand transport for reaches to
objects placed both within and beyond the arm’s length [26, 29]. The neurologically
damaged system also has deficits in the ability to make appropriate compensatory
adjustments of the arm joints to maintain the desired hand trajectory during trunk-
assisted reaching. This was tested using the same paradigm described above for the
study by Rossi et al. [36]. We compared hand trajectories and elbow-shoulder interjoint
coordination during “beyond-the-reach” pointing movements in healthy and
hemiparetic subjects when the trunk was free to move or when it was unexpectedly
arrested [31]. In approximately half of the participants with hemiparesis, hand trajectory
divergence occurred earlier (Figure 2, right panels) while the divergence of interjoint
coordination patterns occurred later than in the control group, suggesting that
compensatory adjustments of the shoulder and elbow joints were not sufficient to
neutralize the influence of the trunk on the hand trajectory. Arm movements only
partially compensated for the trunk displacement, and this compensation was delayed. This
suggests a deficit in intersegmental temporal coordination that may be partly
responsible for the loss of arm coordination even in well-recovered patients.
Individuals with hemiparesis also have spatial and temporal coordination deficits
between movements of adjacent arm joints such as the elbow and shoulder [12, 16, 17,
18, 37], between the transport phase of reaching and aperture formation in grasping [38,
40] and in precision grip force control [39, 41]. For example, using a mathematical
analysis of kinematic variability during whole arm reaching movements, Reisman and
Scholz [42] found that individuals with mild-to-moderate hemiparesis had deficits in
specific patterns of joint coupling, and that they had only partial ability to rapidly
compensate for movement errors. A similar suggestion had previously been made for single-
joint arm movements by Dancause et al. [43], who further related the error
compensation deficits to impairments in executive functioning in patients with chronic
stroke.
The reduced capacity to produce and coordinate the movements of the arm, hand
and trunk into coherent action [see 44, 45] may lead to clumsy and slow movement
making it less likely that individuals would use their upper limb in daily life activities.
Rehabilitation efforts are aimed at reducing the effects of impairments through repeated
practice of targeted movements, tasks or activities in controlled clinical environments
[46].

2. Environments for upper limb rehabilitation interventions

The environment in which movement is practiced may be crucial to maximize motor
recovery. Recently, Kleim and Jones [47] summarized some of the outcomes of the
III STEP meeting held in Salt Lake City in 2005, and outlined 10 principles of
experience-dependent plasticity related to recovery from stroke. Of these, several
principles directly or indirectly relate to the environment in which movement is
practiced. These include the importance of specificity, repetition, intensity and salience
of practice. All of these factors can be creatively manipulated using virtual reality
technology to make the most of the practice environment and to add the novelty of
gaming to make activities more challenging. Virtual reality (VR) is a multisensorial
experience in which a person is immersed and can interact with a computer-generated
environment [48]. VR offers the user a practice environment that can be ecologically
valid and has the potential to enhance patient enjoyment and compliance [49],
important factors in successful rehabilitation [50-52].

2.1. Advantages of virtual environments

In virtual environments (VEs), real-world situations can be mimicked while
precisely and systematically manipulating environmental constraints (tasks, obstacles).
Indeed, task difficulty can be manipulated without danger to the user. Consequently,
VEs have been used in a number of movement analysis studies [53-61]. One advantage
of using VEs is that sensory parameters can be adapted and scaled to the abilities of the
user. In so doing, responses to a larger number of situations can be measured in a shorter
time than is possible in real-world laboratory experimental set-ups. For
example, in a VE, several object locations and orientations can be reliably and rapidly
reproduced, and object properties can be manipulated (e.g., obstacles can be introduced
by quickly changing the properties and orientation of the object or the environment). VEs
are especially suited to the study of how individuals interact with objects or situations
that unexpectedly change. Thus, questions about dexterity and coordination that are not
easily accessible in a real-world environment can be more easily addressed. This is of
particular importance in the study of arm functional recovery in post-stroke patients.
Many stroke survivors lack the ability to reliably use the arm and hand during
interactions with objects within changing environments: e.g. catching a ball or picking
up an object while walking. These types of experimental set-ups are difficult to recreate
in the laboratory. Finally, another advantage of using VR is the possibility of studying
movement production in situations that, in the real world, may compromise the safety
of the individual. For example, in obstacle avoidance tasks, the ability to anticipate and
reach around a static obstacle such as the table ledge can be evaluated as well as the
ability to move in a constrained environment without danger of incurring injury due to
impact of the hand with an object.

2.2. The question of haptics

When the arm and hand interact with objects in the physical world, in addition to
proprioceptive feedback related to limb movement, the individual perceives sensory
information about collision of the hand with the objects being manipulated. This
sensory information, combined with task success, provides feedback to the individual
about the adequacy and effectiveness of his or her movement in the virtual
environment. However, haptic information is not easily incorporated into VR
environments created for motor control studies or rehabilitation studies of upper limb
reaching and object manipulation. The use of relevant haptic interfaces is important
because it enhances the user’s sense of presence within VEs [62]. Many existing VEs
do not include haptics or include haptic information limited to sensations felt through a
joystick or mouse [63, 64]. These do not provide the nervous system with the most
salient movement-related sensory information. Given this reality, the essential question
is whether movements made in VR environments, which lack the haptic sensory cues usually
available in physical environments, can be considered valid. In other words, are they
spatially and temporally kinematically similar to equivalent movements made in
physical environments? In order to address this question, several studies have been
done to compare the kinematics of movements made in different types of VEs to those
made in physical environments [65, 69]. The following section of this chapter will
summarize the results of these validation studies.

3. Are movements made in virtual and physical environments kinematically similar?

Viau et al. [69] compared the kinematics of movements made by 8 healthy adults and 7 stroke
survivors with mild left hemiparesis who performed nearly identical tasks in both a
physical and a virtual environment. In both tasks, seated subjects grasped a real or
virtual 7 cm diameter ball, reached forward by leaning the trunk and then placed the
ball within a 2 cm x 2 cm yellow square on a real or virtual target. The initial
conditions for the task and the tasks themselves were carefully matched so that
movement extent and direction were as similar as possible. Thus, in both environments,
the initial position of the arm was about 0° flexion, 30° abduction and 0° external
rotation (shoulder), 80° flexion and 0° supination (elbow) with the wrist and hand in
the neutral position. The fingers were slightly flexed. The initial position of the ball
was 13 cm in front of the right shoulder, 7 cm above and 3 cm to the left of the
subject’s hand. The target was placed 31 cm in front of the shoulder, 12.5 cm above
and 14 cm to the right of the initial position of the ball. The VR environment was
displayed in 2 dimensions (2D) on a computer screen placed 75 cm in front of the subject's
midline. The ball and hand were displayed on the screen inside a cube. The task was to
place the ball in the upper right far corner of the cube. The virtual representation of the
subject’s hand was obtained using a 22 sensor fibre optic glove (Cyberglove,
Immersion Corp.) and an electromagnetic sensor (Fastrak, Polhemus Corp.) that was
used to orient the glove in the 2D environment. Data from these devices were
synchronized in real time. To enable the subject to "feel" the virtual ball, a prehension
force feedback device (Cybergrasp, Immersion Corp.) was fitted to the dorsal surface
of the hand. The Cybergrasp delivered prehension force feedback in the form of
extension forces to the distal phalanxes of the thumb and each finger. Forces applied to
the fingers were calibrated for each subject while he/she was wearing the Cyberglove
and all subjects perceived that they were holding a spherical object in their hand. To
better compare the performance of participants in each of the two environments, the
glove and grasp devices were worn on the hand in both conditions (Figure 3).

Figure 3. Top: Experimental set up for reaching, grasping and placing experiment in 2D virtual (VE) and
physical (PE) environments. Elbow-shoulder interjoint coordination in the reaching (middle) and transport
(bottom) phase of the task was similar between environments in healthy and stroke subjects.
Kinematics of functional arm movements involving reaching, grasping and
releasing made in physical and virtual environments were analyzed in two phases: 1)
reaching and grasping the ball and 2) ball transport and release. Temporal and spatial
parameters of reaching and grasping were determined for each phase. Using this 2D
VR environment, individuals in both groups were able to reach, grasp, transport, place
and release the virtual and physical ball using similar movement strategies. In healthy
subjects, reaching and grasping movements in both environments were similar in terms
of spatial and temporal characteristics of the endpoint and joint movements. Healthy
subjects, however, used less wrist extension and more elbow extension to place the ball
on the virtual vertical surface.
As has been well-documented [17, 37], reaching movements made by individuals
with hemiparesis are different from those made by healthy control subjects. Compared
to healthy subjects, participants with hemiparesis made slower movements in both
environments and during transport and placing of the ball, trajectories were more
curved and interjoint coordination was altered. Despite these differences, however,
participants with hemiparesis also tended to use less wrist extension during the whole
movement and they used more elbow extension at the end of the placing phase for the
movement made in VR.
The finding that both groups of subjects used less wrist extension and more elbow
extension in the virtual compared to the physical environment suggested that the
movements made in VR might have been influenced by differences in perception of the
target location and the absence of haptic feedback when the target was touched by the
ball. We addressed these questions in a second study in which we compared the spatial
and temporal characteristics of reaching to targets located in different parts of the
workspace in a 3D environment [65, 66]. If the problem of target localization was
related to the quality of depth perception, then movements made in a 3D environment
should be more like those made in a physical environment than those made in the 2D
environment of the computer screen.
We created a 3D VE consisting of two rows of three targets arranged so that they
were in different parts of the arm workspace (Figure 4). The virtual environment,
created with CAREN software (Motek, Inc.), was viewed through a head-mounted display
(HMD, Kaiser XL50, resolution 1024 x 768, refresh rate 60 Hz), and arm and hand
movements were recorded with an Optotrak Motion Capture System (Northern Digital).
In lieu of haptic feedback, when a target was ‘touched’ by the virtual hand, auditory or
visual feedback was provided.
Figure 4. A. Experimental set-up for comparison of pointing in the physical environment and equivalent 3D
virtual environment. The virtual environment (VE) was designed as two rows of three elevator buttons. The
distances between the buttons and from the body were the same in both environments. B. Examples of
endpoint (hand) and trunk trajectories for pointing movements to three lower targets in one healthy and one
stroke subject. C. Examples of elbow/shoulder interjoint coordination for movements made to middle lower
target in healthy and stroke subjects in the physical (PE) and virtual (VE) environments.
The VE was designed to exactly reproduce a physical environment that also
consisted of 2 rows of targets. Thus, the VE was not designed to take advantage of the
attributes of virtual environments for movement retraining. Rather, it was designed to
be an exact replica of the physical environment in order to be able to compare the
movement kinematics made to similarly placed targets. The location of the targets
required the subject to use different combinations of arm joint movements for
successful pointing. The center-to-center distance between adjacent targets was 26 cm
in both environments and targets were displayed at a standardized distance equal to the
participant’s arm length.
Fifteen adults (4 women, 11 men; aged 59 ± 15.4 years) with chronic poststroke
hemiparesis participated in this study. They had moderate upper limb impairment
according to Chedoke-McMaster Arm Scores which ranged from 3 to 6 out of 7. A
comparison group of 12 healthy subjects (6 women, 6 men, aged 53.3 ± 17.1 years)
also participated in the study.
The task was to point as quickly and as accurately as possible to each of the 6 targets
(12 trials per target) in a random sequence in each of the two environments.
Movements were analyzed in terms of performance outcome measures (endpoint
precision, trajectory and peak velocity) and arm and trunk movement patterns (elbow
and shoulder ranges of motion, elbow/shoulder coordination, trunk displacement and
rotation). There were very few differences in movement kinematics between
environments for healthy subjects. Overall, there were no differences in elbow and
shoulder ranges of motion or interjoint coordination for movements made in both
environments by either group (Figure 5). Healthy subjects, however, made movements
faster, pointed to contralateral targets more accurately and made straighter endpoint
paths in the PE than in the VE. The participants with stroke made less accurate
and more curved movements in the VE and also used less trunk displacement. Thus, the
results of this study suggested that pointing movements in virtual environments were
sufficiently similar to those made in physical environments so that 3D VEs could be
considered as valid training environments for upper limb movements.
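Endpoint-path straightness, one of the outcome measures contrasted above, is commonly summarized as an index of curvature: the traveled path length divided by the straight-line distance from start to end point (1.0 means a perfectly straight reach). The sketch below is a hypothetical illustration of that ratio, not the study's analysis code; the function name and sample paths are invented.

```python
# Illustrative index of curvature for an endpoint trajectory:
# path length / straight-line distance (1.0 = perfectly straight).
import math

def index_of_curvature(path):
    """path: list of (x, y, z) endpoint positions sampled over one movement."""
    path_length = sum(math.dist(path[i], path[i + 1])
                      for i in range(len(path) - 1))
    return path_length / math.dist(path[0], path[-1])

# A straight reach gives an index of 1; a bowed path gives an index > 1,
# as reported for movements made in the VE.
straight = [(0.1 * i, 0.0, 0.0) for i in range(4)]
bowed = [(0.3 * i / 49, 0.05 * math.sin(math.pi * i / 49), 0.0)
         for i in range(50)]
```

A larger index for VE trials than PE trials would correspond to the "more curved" trajectories described in the text.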

Figure 5. Results of comparison of pointing movements made in two environments described in Figure 4.
Healthy (A) but not stroke (B) subjects made movements more slowly in the virtual environment (VE)
compared to the physical environment (PE). There were no differences in joint ranges used in either healthy
or stroke subjects in the two environments (C,D).
The appearance of more curved trajectories and the use of less trunk movement
were also features of grasping movements made in a virtual environment while subjects
wore a haptic device on the hand (Cybergrasp, Immersion Corp.). In a study of 12
adults with chronic stroke-related hemiparesis (age 67±10 yrs), reaching and grasping
kinematics to three different objects in a VE and a PE were compared [68]. The 3D
virtual environment was displayed via an HMD as in the previous study, and the task was
to reach forward, pick up and transport a virtual/physical object from one surface to
another (Figure 6). Three objects were used that required different grasp types: a can
(diameter 65.6 mm) that required a cylindrical grasp, a screwdriver (diameter 31.6 mm)
requiring a power grasp and a pen (diameter 7.5 mm) requiring a precision finger-
thumb grasp. In the VE, the virtual representation of the subject's hand was obtained
using a glove (Cyberglove, Immersion Corp.) and haptic feedback (prehension force
feedback) was provided via an exoskeleton device placed over the glove (Cybergrasp,
Immersion Corp.).
As for the comparison of reaching movements, comparable movement strategies
were used to reach, grasp and transport the virtual and physical objects in the two
environments. Similar to what was found for pointing movements, reaching in VR took
approximately 35% longer than in the PE. This was true especially for the cylindrical
and precision grasps. Thus, reaching and grasping movements that were accomplished
in around 1.5 seconds in PE, took up to 2.2 seconds in the VE. The increase in
movement time was reflected in all the temporal variables compared between the two
environments such as the peak velocity, the time to peak velocity, the time to maximal
grip aperture and the deceleration time as the hand approached the object. In addition to
the temporal differences, movement endpoint trajectories were also more curved in VE.
Overall, participants used more elbow extension and shoulder horizontal adduction in
VE compared to PE and there were slight differences in the amount of supination and
pronation used for reaching the different objects. Despite these differences, subjects
were able to similarly scale hand aperture to object size and the hand was similarly
oriented in the VE compared to the PE.
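Aperture scaling of the kind described here is typically checked by relating peak grip aperture to object size across trials. The sketch below uses the three object diameters reported above but entirely hypothetical aperture values (not the study's data), fitting an ordinary least-squares line whose positive slope indicates scaling.

```python
# Hypothetical check of grip-aperture scaling to object size. Diameters are
# the three reported for the study; the aperture values are invented means
# (apertures normally exceed object size by a safety margin).
diameters = [7.5, 31.6, 65.6]          # mm: pen, screwdriver, can
peak_apertures = [45.0, 70.0, 105.0]   # mm: hypothetical peak apertures

# Ordinary least-squares fit: aperture = slope * diameter + intercept
n = len(diameters)
mean_d = sum(diameters) / n
mean_a = sum(peak_apertures) / n
slope = (sum((d - mean_d) * (a - mean_a)
             for d, a in zip(diameters, peak_apertures))
         / sum((d - mean_d) ** 2 for d in diameters))
intercept = mean_a - slope * mean_d
# A clearly positive slope indicates aperture scaling with object size,
# which was preserved in both the virtual and physical environments.
```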

Figure 6. Representation of virtual environment for comparison of reaching and grasping kinematics in
physical and virtual environments. Inset (upper right) shows the scene as viewed by the subject wearing the
head-mounted display. Bottom: Sequence of movements (1-5) for picking up and moving the can,
screwdriver and pen.
4. Conclusion

Results of these validation studies are encouraging for the incorporation of VEs into
rehabilitation programs aimed at improving upper limb function. They suggest that
movements made in virtual environments can be kinematically similar to those made in
physical environments. This is the first step in the validation of VEs for rehabilitation
applications. A question remains as to how similar movements made in VEs have to be
to movements made in the physical world in order for real functional gains to occur.
Research on the effectiveness of task-specific training versus conventional or non-
specific training suggests that rehabilitation outcomes are better when practice is task-
oriented and repetitive [4, 46, 70]. Better outcomes are also expected when the learner
is motivated to improve and when the movements practiced are judged to be salient to
the learner [47]. These variables can be optimized in novel environments offered by
virtual reality technology to maximize rehabilitation outcomes.
VR is one of the most innovative, potentially effective technologies that, during the
past decade, has begun to be used as an assessment and treatment tool in the
rehabilitation of adults and children [49-52, 71, 72]. Some progress has been made
in demonstrating the transfer of abilities and skills acquired within VEs to real-
world performance [50, 69, 73-75]. Training in virtual reality environments has the
potential to lead to better rehabilitation outcomes than conventional approaches
because of the attributes of VR. Future research is still needed to firmly establish that
motor gains made in VEs are transferable to and will improve functioning and arm use
in the physical world.

Acknowledgements

These studies were supported by the Canadian Foundation for Innovation (CFI), the
Natural Sciences and Engineering Research Council of Canada (NSERC) and the Heart and
Stroke Foundation of Canada (HSFC). ECM was supported by CAPES, Brazil. MFL
holds a Tier 1 Canada Research Chair in Motor Recovery and Rehabilitation. Thanks
are extended to the patients and volunteers who participated in these studies and to
Ruth Dannenbaum-Katz, Christian Beaudoin and Valeri Goussev for clinical and technical
expertise.

References

[1] N.E. Mayo, W. Wood-Dauphinee, S. Ahmed, C. Gordon, J. Higgins, S. McEwen, N. Salbach,
Disablement following stroke, Disability & Rehabilitation 21 (1999), 258-268.
[2] J. Carod-Artal, J.A. Egido, J.L. Gonzalez, E. Varela de Seijas, Quality of life among stroke survivors
evaluated 1 year after stroke: experience of a stroke unit, Stroke 31 (2000), 2995-3000.
[3] P. Clarke, S.E. Black, Quality of life following stroke: Negotiating disability, identity, and resources,
Journal of Applied Gerontology 24 (2005), 319-336.
[4] Canadian Stroke Network – Evidence Based Review of Stroke Rehabilitation,
http://www.canadianstrokenetwork.ca/eng/research/themefour.php#, accessed 2007.
[5] G. Kwakkel, B.J. Kollen, and R.C. Wagenaar, Therapy impact on functional recovery in stroke
rehabilitation. A critical review of the literature, Physiotherapy 85 (1999), 377-391.
[6] J.G. Broeks, G.J. Lankhorst, K. Rumping, A.J.H. Prevo, The long-term outcome of arm function after
stroke: results of a follow-up study, Disability & Rehabilitation 21 (1999), 357-364.
[7] T. Platz, P. Denzler, Do psychological variables modify motor recovery among patients with mild arm
paresis after stroke or traumatic brain injury who receive the Arm Ability Training? Restorative
Neurology and Neuroscience 20 (2002), 37-49.
[8] J.W. Lance, The control of muscle tone, reflexes, and movement: Robert Wartenberg Lecture, Neurology 30
(1980), 1303-1313.
[9] B. Bobath, Adult Hemiplegia. Evaluation and Treatment 2nd ed., Heinemann Medical, London, 1978.
[10] D. Bourbonnais, S. Vanden Noven, Weakness in patients with hemiparesis, American Journal of
Occupational Therapy 43 (1989), 313-317.
[11] B. Conrad, R. Benecke, H.M. Meinck, Gait disturbances in paraspastic patients. In: Restorative
Neurology, Clinical Neurophysiology in Spasticity, P.J. Delwaide, and R.R. Young, Elsevier,
Amsterdam, 1 (1985), 155-174.
[12] J.P.A. Dewald, P.S. Pope, J.D. Given, T.S. Buchanan, and W.Z. Rymer, Abnormal muscle coactivation
patterns during isometric torque generation at the elbow and shoulder in hemiparetic subjects, Brain 118
(1995), 495-510.
[13] J. Filiatrault, D. Bourbonnais, J. Gauthier, D. Gravel, A.B. Arsenault, Spatial patterns of muscle
activation at the lower limb in subjects with hemiparesis and in healthy subjects, Journal of
Electromyography and Kinesiology 2 (1991), 91-102.
[14] M.C. Hammond, G.H. Kraft, S.S. Fitts, Recruitment and termination of electromyographic activity in the
hemiparetic forearm, Archives of Physical Medicine and Rehabilitation 69 (1988), 106-110.
[15] M.F. Levin, M. Dimov, Spatial zones for muscle coactivation and the control of postural stability, Brain
Research 757 (1997), 43-59.
[16] R.F. Beer, J.P. Dewald, W.Z. Rymer, Deficits in the coordination of multijoint arm movements in
patients with hemiparesis: evidence for disturbed control of limb dynamics, Experimental Brain
Research 131 (2000), 305-319.
[17] M.F. Levin, Interjoint coordination during pointing movements is disrupted in spastic hemiparesis, Brain
119 (1996), 281-294.
[18] M.C. Cirstea, A.B. Mitnitski, A.G. Feldman, M.F. Levin, Interjoint coordination dynamics during
reaching in stroke patients, Experimental Brain Research 151 (2003), 289-300.
[19] A. Hufschmidt, K.H. Mauritz, Chronic transformation of muscle in spasticity: a peripheral contribution
to increased tone, Journal of Neurology, Neurosurgery, and Psychiatry 48 (1985), 676-685.
[20] F. Jakobsson, L. Grimby, L. Edstrom, Motoneuron activity and muscle fibre type composition in
hemiparesis, Scandinavian Journal of Rehabilitation Medicine 24 (1992), 115-119.
[21] J.G. Colebatch, S.C. Gandevia, P.J. Spira, Voluntary muscle strength in hemiparesis: distribution of
weakness at the elbow, Journal of Neurology, Neurosurgery, and Psychiatry 49 (1986), 1019-1024.
[22] A. Tang, W.Z. Rymer, Abnormal force-EMG relations in paretic limbs of hemiparetic human subjects,
Journal of Neurology, Neurosurgery, and Psychiatry 44 (1981), 690-698.
[23] C. Gowland, H. deBruin, J.V. Basmajian, N. Plews, I. Burcea, Agonist and antagonist activity during
voluntary upper-limb movement in patients with stroke, Physical Therapy 72 (1992), 624-633.
[24] M.C. Hammond, S.S. Fitts, G.H. Kraft, P.B. Nutter, M.J. Trotter, L.M. Robinson, Co-contraction in the
hemiparetic forearm: Quantitative EMG evaluation, Archives of Physical Medicine and Rehabilitation 69
(1988), 348-351.
[25] M.F. Levin, A.G. Feldman, The role of stretch reflex threshold regulation in normal and impaired motor
control, Brain Research 637 (1994), 23-30.
[26] M.F. Levin, R.W. Selles, M.H.G. Verheul, O.G. Meijer, Deficits in the coordination of agonist and
antagonist muscles in stroke patients: Implications for normal motor control, Brain Research 853 (2000),
352-369.
[27] N. Yanagisawa, R. Tanaka, Reciprocal Ia inhibition in spastic paralysis in man, in: W.A. Cobb, H. van
Duijn (Eds.), Contemporary Clinical Neurophysiology (EEG Suppl. 34), Elsevier, Amsterdam, 1978, pp. 521-526.
[28] M.C. Cirstea, M.F. Levin, Compensatory strategies for reaching in stroke, Brain 123 (2000), 940-953.
[29] M.F. Levin, S. Michaelsen, C. Cirstea, A. Roby-Brami, Use of the trunk for reaching targets placed
within and beyond the reach in adult hemiparesis, Experimental Brain Research 143 (2002), 171-180.
[30] S.M. Michaelsen, R. Dannenbaum, M.F. Levin, Task-specific training with trunk restraint on arm
recovery in stroke. Randomized control trial, Stroke 37 (2006), 186-192.
[31] D. Moro, M.F. Levin, Arm-trunk compensations for beyond-the-reach movements in adults with chronic
stroke, International Society of Electrophysiological Kinesiology, Abstract, Boston, 2004.
[32] A. Roby-Brami, A. Feydy, M. Combeaud, E.V. Biryukova, B. Bussel, M. Levin, Motor compensation
and recovery of reaching in stroke patients, Acta Neurologica Scandinavia 107 (2003), 369-381.
[33] S.M. Michaelsen, E.C. Magdalon, M.F. Levin, Coordination between reaching and grasping in adults
with hemiparesis, Motor Control, in press.
[34] P. Archambault, P. Pigeon, A.G. Feldman, M.F. Levin, Recruitment and sequencing of different degrees
of freedom during pointing movements involving the trunk in healthy and hemiparetic subjects,
Experimental Brain Research 126 (1999), 55-67.
[35] D. Esparza, P.S. Archambault, C.J. Winstein, M.F. Levin, Hemispheric specialization in the
coordination of arm and trunk movements during pointing in patients with unilateral brain damage,
Experimental Brain Research 148 (2003), 488-497.
[36] E. Rossi, A. Mitnitski, A.G. Feldman, Sequential control signals determine arm and trunk contributions
to hand transport during reaching, The Journal of Physiology 538 (2002), 659-671.
[37] M.C. Cirstea, A. Ptito, M.F. Levin, Arm reaching improvements with short-term practice depend on the
severity of the motor deficit in stroke, Experimental Brain Research 152 (2003), 476-488.
[38] S.M. Michaelsen, S. Jacobs, A. Roby-Brami, M.F. Levin, Compensation for distal impairments of
grasping in adults with hemiparesis, Experimental Brain Research 157 (2004), 162-173.
[39] R. Wenzelburger, F. Kopper, A. Frenzel, H. Stolze, S. Klebe, A. Brossmann, J. Kuhtz-Buschbeck, M.
Golge, M. Illert, G. Deuschl, Hand coordination following capsular stroke, Brain 128 (2005), 64-74.
[40] R.M. Dannenbaum, M.F. Levin, R. Forget, P. Oliver, S.J. De Serres, Fading of sustained touch-pressure
appreciation in the hand of patients with hemiparesis, Archives of Physical Medicine and Rehabilitation,
in press.
[41] J. Hermsdorfer, K. Laimgruber, G. Kerkhoff, N. Mai, G. Goldenberg, Effects of unilateral brain damage
on coordination, and kinematics of ipsilesional prehension, Experimental Brain Research 128 (1999),
41-51.
[42] D.S. Reisman, J.P. Scholz. Aspects of joint coordination are preserved during pointing in persons with
post-stroke hemiparesis, Brain 126 (11) (2003), 2510-2527.
[43] N. Dancause, A. Ptito, M.F. Levin, Error correction strategies for motor behavior after unilateral brain
damage: Short-term motor learning processes, Neuropsychologia 40 (2002), 1313-1323.
[44] N. St-Onge, A.G. Feldman, Referent configuration of the body: A global factor in the control of multiple
skeletal muscles, Experimental Brain Research 155 (2004), 291-300.
[45] A.G. Feldman, V. Goussev, A. Sangole, M.F. Levin, Threshold position control and the principle of
minimal interaction in motor actions, Progress in Brain Research 165 (2007), 267-281.
[46] A. Gentile, Skill acquisition: action, movement, and neuromotor processes, in J. Carr and R. Shepherd
(Eds) Movement Science: Foundations for Physical Therapy in Rehabilitation, Aspen, Rockville, MD,
1987, pp. 93-117.
[47] J.A. Kleim, T.A. Jones, Principles of experience-dependent neural plasticity: Implications for
rehabilitation after brain damage, Journal of Speech, Language, and Hearing Research 51 (2008), S225-S239.
[48] M.T. Schultheis, J. Himelstein, A.A. Rizzo, Virtual reality and neuropsychology: Upgrading the current
tools, The Journal of Head Trauma Rehabilitation 17 (2002), 378-394.
[49] A. Rizzo, G.J. Kim, A SWOT analysis of the field of virtual reality rehabilitation and therapy, Presence-
Teleoperators & Virtual Environments 14 (2005), 119-146.
[50] H. Sveistrup, Motor rehabilitation using virtual reality, Journal of NeuroEngineering and Rehabilitation
1 (2004), 1-8.
[51] M. Thornton, S. Marshall, J. McComas, H. Finestone, H. McCormick, H. Sveistrup, Benefits of activity
and virtual reality based balance exercise program for adults with traumatic brain injury: Perceptions of
participants and their caregivers, Brain Injury 19 (2005), 989-1000.
[52] P.L. Weiss, N. Katz, The potential of virtual reality for rehabilitation, Journal of Rehabilitation Research
and Development 41 (2004), vii-x.
[53] J. Broeren, M. Dixon, K. Stibrant Sunnerhagen, M. Rydmark, Rehabilitation after stroke using virtual
reality, haptics (force feedback) and telemedicine, Studies in Health Technology and Informatics 124
(2006), 51-56.
[54] J.E. Deutsch, J. Latonio, G.C. Burdea, R. Boian, Rehabilitation of musculoskeletal injuries using the
Rutgers Ankle haptic interface: Three case reports, Eurohaptics 1 (2001), 11-16.
[55] J.E. Deutsch, A.S. Merians, G.C. Burdea, R. Boian, S.V. Adamovich, H. Poizner, Haptics and virtual
reality used to increase strength and improve function in chronic individuals post-stroke: Two case
reports, Neurology Report 26 (2002), 79-86.
[56] M. Holden, E. Todorov, J. Callahan, E. Bizzi, Case report: Virtual environment training improves motor
performance in two stroke patients, Neurology Report 23 (1999), 57-67.
[57] M.K. Holden, T. Dyar, J. Callahan, L. Schwamm, E. Bizzi, Motor learning and generalization following
virtual environment training in a patient with stroke, Neurology Report 24 (2000), 170-171.
[58] M.K. Holden, T. Dyar, J. Callahan, L. Schwamm, E. Bizzi, Quantitative assessment of motor
generalization in the real world following training in a virtual environment in patients with stroke,
Neurology Report 25 (2002), 129-130.
[59] A.S. Merians, D. Jack, R. Boian, M. Tremaine, G.C. Burdea, S.V. Adamovich, M. Recce, H. Poizner,
Virtual reality-augmented rehabilitation for patients following stroke, Physical Therapy 82 (2002), 898-
915.
[60] A. Rovetta, F. Lorini, M.R. Canina, Virtual reality in the assessment of neuromotor diseases:
measurement of time response in real and virtual environments, Studies in Health Technology and
Informatics 44 (1997), 165-184.
[61] S.H. You, S.H. Jang, Y.H. Kim, M. Hallett, S.H. Ahn, Y.H. Kwon, J.H. Kim, M.Y. Lee, Virtual reality-
induced cortical reorganization and associated locomotor recovery in chronic stroke: an experimenter-
blind randomized study, Stroke 36 (2005), 1166-1171.
[62] P.J. Durlach, J. Fowlkes, C.J. Metevier, Effect of variations in sensory feedback on performance in a
virtual reaching task, Presence-Teleoperators & Virtual Environments 14 (2005), 450-462.
[63] D.J. Reinkensmeyer, A.M. Cole, L.E. Kahn, D.G. Kamper, Directional control of reaching is preserved
following mild/moderate stroke and stochastically constrained following severe stroke, Experimental
Brain Research 143 (2002), 525-530.
[64] U. Feintuch, L. Raz, J. Hwang, N. Josman, N. Katz, R. Kizony, D. Rand, A.S. Rizzo, M. Shahar, J.
Yongseok, P.L. Weiss, Integrating haptic-tactile feedback into a video-capture-based virtual environment
for rehabilitation, CyberPsychology & Behavior 9 (2006), 129-132.
[65] S. Subramanian, L.A. Knaut, C. Beaudoin, B.J. McFadyen, A.G. Feldman, M.F. Levin, Virtual reality
environments for post-stroke arm rehabilitation, Journal of NeuroEngineering and Rehabilitation 4
(2007), 20.
[66] L.A. Knaut, S. Subramanian, A.K. Henderson, C. Beaudoin, D. Bourbonnais, S.J. De Serres, M.F. Levin,
Comparison of kinematics of pointing movements made in a virtual and a physical environment in
patients with chronic stroke, Abstract Viewer/Itinerary Planner, Atlanta, GA: Society for Neuroscience,
2006, pp. 451-18.
[67] L. Knaut, S. Subramanian, B.J. McFadyen, D. Bourbonnais, M.F. Levin, Kinematics of pointing
movements made in a virtual versus a physical 3D environment, Archives of Physical Medicine and
Rehabilitation, in press.
[68] E.C. Magdalon, M.F. Levin, A.A.F. Quevedo, S.M. Michaelsen, Kinematics of reaching and grasping in
a 3D immersive virtual reality environment in patients with hemiparesis, Neurorehabilitation and Neural
Repair 22 (2008), ID299.
[69] A. Viau, A.G. Feldman, B.J. McFadyen, M.F. Levin, Reaching in reality and virtual reality: A
comparison of movement kinematics in healthy subjects and in adults with hemiparesis, Journal of
NeuroEngineering and Rehabilitation 1 (2004), 11.
[70] J. Carr, R. Shepherd, Motor relearning program for stroke, Aspen Systems, Rockville, MD, 1985.
[71] A.A. Rizzo, Virtual reality and disability: emergence and challenge, Disability & Rehabilitation 24
(2002), 567-569.
[72] A.A. Rizzo, D. Strickland, S. Bouchard, The challenge of using virtual reality in telerehabilitation,
Telemedicine Journal & E-Health 10 (2004), 184-195.
[73] J. McComas, J. Pivik, M. Laflamme, Current uses of virtual reality for children with disabilities. In: G.
Riva, B.K. Wiederhold, E. Molinari, Virtual Environments in Clinical Psychology and Neuroscience:
Methods and Techniques in Advanced Patient-Therapist Interaction, 1998, pp. 161-169.
[74] M.K. Holden, Virtual environments for motor rehabilitation: Review, CyberPsychology & Behavior 8
(2005), 187-211.
[75] Y.S. Lam, D.W. Man, S.F. Tam, P.L. Weiss, Virtual reality training for stroke rehabilitation,
Neurorehabilitation 21 (2006), 245-253.
Virtual Reality to Maximize Function for
Hand and Arm Rehabilitation: Exploration
of Neural Mechanisms
Alma S. MERIANS a, Eugene TUNIK a and Sergei V. ADAMOVICH a,b
a Doctoral Programs in Physical Therapy, Department of Rehabilitation and Movement
Science, University of Medicine and Dentistry of New Jersey, Newark, NJ, USA
b Department of Biomedical Engineering, New Jersey Institute of Technology, Newark,
NJ, USA

Abstract. Stroke patients report hand function as the most disabling motor deficit.
Current evidence shows that learning new motor skills is essential for inducing
functional neuroplasticity and functional recovery. Adaptive training paradigms
that continually and interactively move a motor outcome closer to the targeted skill
are important to motor recovery. Computerized virtual reality simulations, when
interfaced with robots, movement tracking and sensing glove systems, are
particularly adaptable, allowing for online and offline modifications of task-based
activities using the participant's current performance and success rate. We have
developed a second generation system that can exercise the hand and the arm
together or in isolation and provide for both unilateral and bilateral hand and arm
activities in three-dimensional space. We demonstrate that by providing haptic
assistance for the hand and arm and adaptive anti-gravity support, the system can
accommodate patients with lower level impairments. We hypothesize that
combining training in virtual environments (VE) with observation of motor actions
can bring additional benefits. We present a proof of concept of a novel system that
integrates interactive VE with functional neuroimaging to address this issue. Three
components of this system are synchronized: the presentation of the visual display
of the virtual hands, the collection of fMRI images, and the collection of hand joint
angles from the instrumented gloves. We show that interactive VEs can facilitate
activation of brain areas during training by providing appropriately modified
visual feedback. We predict that visual augmentation can become a tool to
facilitate functional neuroplasticity.

Keywords. Virtual Environment, Haptics, fMRI, Stroke, Cerebral Palsy

Introduction

During the past decade the intersection of knowledge gained within the fields of
engineering, neuroscience and rehabilitation has provided the conceptual framework
for a host of innovative rehabilitation treatment paradigms. These newer treatment
interventions take advantage of technological advances such as improved robotic
design, the development of haptic interfaces, and the advent of human-machine
interaction in virtual reality, and are consistent with the current neuroscience
literature in animals and motor control literature in humans. We therefore find
ourselves on a new path in rehabilitation.
Studies have shown that robotically-facilitated repetitive movement training might
be an effective stimulus for normalizing upper extremity motor control in persons with
moderate to severe impairments who have difficulty performing unassisted movements
[1, 2]. An important feature of the robots is their ability to measure the kinematic and
dynamic properties of a subject’s movements and provide the assistive force necessary
for the subject to perform the activity, with the robot adjusting the assistance and
transitioning to resistance as the subject's abilities expand [2]. Most of these
first-generation robotic devices train unilateral gross motor movements [3, 4], and a few
upper extremity devices can train bilateral motion [2, 5]. None of these systems allows
for three-dimensional arm movements with haptic assistance.
Robotics for wrist and hand rehabilitation is much less developed [6] and systems
for training the hand and arm together are non-existent.
Virtual reality simulations, when interfaced with robots, movement tracking and
sensing glove systems, can provide an engaging, motivating environment in which the
motion of the limb displayed in the virtual world replicates the motion
produced in the real world by the subject. Virtual environments (VEs) can be used to
present complex multimodal sensory information to the user and have been used in
military training, entertainment simulations, surgical training, training in spatial
awareness and more recently as a therapeutic intervention for phobias [7, 8]. Our
hypothesis for the use of a virtual reality/robotic system for rehabilitation is that this
environment can monitor the specificity and frequency of visual and auditory feedback,
and can provide adaptive learning algorithms and graded assistive or resistive forces
that can be objectively and systematically manipulated to create individualized motor
learning paradigms. Thus, it provides a rehabilitation tool that can be used to exploit
the nervous system’s capacity for sensorimotor adaptation and provide plasticity-
mediated therapies.
This chapter describes the design and feasibility testing of a second-generation
system, a revised and advanced version of the virtual reality-based exercise system that
we have used in our past work [9, 10]. The current system has been tested on patients
post-stroke [11, 13] and on children with Cerebral Palsy [14]. By providing haptic
assistance and adaptive anti-gravity support and guidance, the system can now
accommodate patients with greater physical impairments. The revised version of this
system can exercise the hand alone, the arm alone and the arm and hand together as
well as provide for unilateral and bilateral upper extremity activities. Through adaptive
algorithms it can provide assistance or resistance during the movement, directly linking
the assistance to the patient’s own force generation.

1. Description of the System

1.1 Hardware

The game architecture was designed so that various inputs can be seamlessly used to
track the hands as well as retrieve the finger angles.

Figure 1. a. Hand & Arm Training System using a CyberGlove and Haptic Master interface that provides
the user with a realistic haptic sensation that closely simulates the weight and force found in upper
extremity tasks. b. Hand & Arm Training System using a CyberGlove, a CyberGrasp and Flock of Birds
electromagnetic trackers. c. Close view of the haptic interface in a bimanual task.

The system supports the use of a pair of 5DT [15] or CyberGlove [16] instrumented gloves for hand
tracking and a CyberGrasp [16] for haptic effects. The CyberGrasp device is a lightweight, force-
reflecting exoskeleton that fits over a CyberGlove data glove and adds resistive force
feedback to each finger. The CyberGrasp is used in our simulations to facilitate
individual finger movement by resisting flexion of the adjacent fingers in patients with
more pronounced deficits thus allowing for individual movement of the active finger.
The arm simulations utilize the Haptic MASTER [17], a 3-degree-of-freedom,
admittance-controlled (force-controlled) robot. Three more degrees of freedom (yaw,
pitch and roll) can be added to the arm by using a gimbal with force feedback available
for pronation/supination (roll). A three-dimensional force sensor measures the external
force exerted by the user on the robot. In addition, the velocity and position of the
robot’s endpoint are measured. These variables are used in real time to generate
reactive motion based on the properties of the virtual haptic environment in the vicinity
of the current location of the robot’s endpoint, allowing the robotic arm to act as an
interface between the participants and the virtual environments, enabling multiplanar
movements against gravity in a 3D workspace. The haptic interface provides the user
with a realistic haptic sensation that closely simulates the weight and force found in
functional upper extremity tasks [18] (Figure 1).
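The admittance-control loop described above can be sketched as follows. This is a minimal illustration of the general principle, not the Haptic MASTER's actual controller; the virtual mass, damping, and stiffness values are illustrative assumptions.

```python
def admittance_step(f_ext, x, v, m=2.0, b=10.0, k=0.0, dt=0.001):
    """One step of an admittance (force-controlled) loop: the measured
    external force f_ext (N) drives a virtual mass-damper-spring model
    m*a + b*v + k*x = f_ext, and the resulting velocity and position
    are sent to the robot endpoint as motion commands."""
    a = (f_ext - b * v - k * x) / m  # acceleration of the virtual mass
    v = v + a * dt                   # integrate to commanded velocity
    x = x + v * dt                   # integrate to commanded position
    return x, v

# A steady push makes the endpoint drift smoothly in the direction of
# the applied force; releasing it (f_ext = 0) lets the damper stop it.
x, v = 0.0, 0.0
for _ in range(1000):                # simulate 1 s at 1 kHz
    x, v = admittance_step(5.0, x, v)
```

With k = 0 the endpoint behaves like a free, damped mass; raising b implements the kind of global damping used for dynamic stability, and a nonzero k anchors the hand to a point in the virtual workspace.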
Hand position and orientation, as well as finger flexion and abduction, are recorded in
real time and translated into three-dimensional movements of the virtual hands shown
on the screen in a first-person perspective. The Haptic MASTER robot or the
Ascension Flock of Birds motion trackers [19] are used for arm tracking.

1.2 Simulations

We have developed a comprehensive library of gaming simulations: two exercise the
hand alone, five exercise the arm alone, and five exercise the hand and arm together.
Eight of these gaming simulations facilitate bilateral, symmetrical movement of the two
upper extremities. To illustrate the richness of these virtual worlds and the
sophistication of the haptic modifications in each game, we describe some of
them in detail.
Figure 2. a. The piano trainer consists of a complete virtual piano that plays the appropriate notes as they
are pressed by the virtual fingers. b. Placing Cups displays a three-dimensional room with a haptically
rendered table and shelves. c. Reach/Touch is accomplished in the context of aiming/reaching type
movements in a normal, functional workspace. d. The Hammer Task trains a combination of three
dimensional reaching and repetitive finger flexion and extension. Targets are presented in a scalable 3D
workspace. e. Catching Falling Objects enhances movement of the paretic arm by coupling its motion
with the less impaired arm. f. Humming Bird Hunt depicts a hummingbird as it moves through an
environment filled with trees, flowers and a river. g. The full screen displays a three-dimensional room
containing three shelves and a table.

Most of the games have been programmed using C++/OpenGL or the Virtools
software package [20] with the VRPack plug-in, which communicates with the
open-source VRPN (Virtual Reality Peripheral Network) [21]. In addition, two activities
were adapted from existing Pong games by transferring the game control
from the computer mouse to one of our input devices (e.g., CyberGlove or Haptic
Master). The Haptic Master measures position, velocity and force in three dimensions
at a rate of up to 1000 Hz and records for off-line analysis. We used Haptic Master’s
Application Programming Interface (API) to program the robot to produce haptic
effects such as spring, damper and global force. Virtual haptic objects, including
blocks, cylinders, toruses (donuts), spheres, walls and complex surfaces can be created.
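The elementary effects can be composed into a single endpoint force. The sketch below is schematic, written in Python rather than against the actual Haptic Master API, and the parameter values are illustrative assumptions.

```python
def effect_force(pos, vel, anchor=(0.0, 0.0, 0.0), k=200.0, b=5.0,
                 global_force=(0.0, 0.0, 0.0)):
    """Sum of three elementary haptic effects at the robot endpoint:
    a spring pulling toward 'anchor' (stiffness k, N/m), a damper
    opposing velocity (b, N*s/m), and a constant global force (N),
    e.g. anti-gravity support of the arm."""
    return tuple(k * (a - p) - b * v + g
                 for p, v, a, g in zip(pos, vel, anchor, global_force))
```

For example, `effect_force((0.1, 0.0, 0.0), (0.0, 0.0, 0.0), k=100.0, b=0.0)` yields a roughly 10 N pull back toward the origin along x, while a nonzero `global_force` z-component implements gravity compensation.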

1.2.1 Piano Trainer


The piano trainer is a refinement and elaboration of one of our previous simulations [9,
10] and is designed to help improve the ability of subjects to individually move each
finger in isolation (fractionation). It consists of a complete virtual piano that plays the
appropriate notes as they are pressed by the virtual fingers (Figure 2a). The position
and orientation of both hands as well as the flexion and abduction of the fingers are
recorded in real time and translated into 3D movement of the virtual hands, shown on
the screen in a first person perspective. The simulation can be utilized for training the
hand alone to improve individuated finger movement (fractionation), or the hand and
the arm together to improve the arm trajectory along with finger motion. This is
achieved by manipulating the octaves on which the songs are played. These tasks can
be done unilaterally or bilaterally. The subjects play short recognizable songs, scales,
and random notes. Color-coding between the virtual fingers and piano keys serve as
cues as to which notes are to be played. The activity can be made more challenging by
changing the fractionation angles required for successful key pressing (see 1.3.
Movement Assessment). When playing the songs bilaterally, the notes are key-
matched. When playing the scales and the random notes bilaterally, the fingers of both
hands are either key matched or finger matched. Knowledge of results and knowledge
of performance are provided through visual and auditory feedback.

1.2.2 Hummingbird Hunt


This simulation depicts a hummingbird as it moves through an environment filled with
trees, flowers and a river. Water and bird sounds provide a pleasant encouraging
environment in which to practice repeated arm and hand movements (Figure 2f). The
game provides practice in the integration of reach, hand-shaping and grasp using a
pincer grip to catch and release the bird while it is perched on different objects located
on different levels and sections of the workspace. The flight path of the bird is
programmed into three different levels, low, medium and high allowing for progression
in the range of motion required to successfully transport the arm to catch the bird.
Adjusting the target position as well as its size scales the difficulty of the task and the
precision required for a successful grasp and release.

1.2.3 Placing Cups


The goal of the “Placing Cups” task is to improve upper extremity range and
smoothness of motion in the context of a functional reaching movement. The screen
displays a three-dimensional room with a haptically rendered table and shelves (Figure
2b). The participants use their virtual hand (hemiparetic side) to lift the virtual cups and
place them onto one of nine spots on one of three shelves. Target spots on the shelves
(represented by red squares) are presented randomly for each trial. To accommodate
patients with varying degrees of impairment, there are several haptic effects that can
be applied to this simulation: gravity and antigravity forces can be applied to the cups,
global damping can be provided for dynamic stability and to facilitate smoother
movement patterns, and the three dimensions of the workspace can be calibrated to
increase the range of motion required for successful completion of the task. The
intensity of these effects can be modified to challenge the patients as they improve.

1.2.4 Reach/Touch
The goal of the Reach/Touch game is to improve speed, smoothness and range of
motion of shoulder and elbow movement patterns. This is accomplished in the context
of aiming/reaching type movements (Figure 2c). Subjects view a 3-dimensional
workspace aided by stereoscopic glasses [23] to enhance depth perception, to increase
the sense of immersion and to facilitate the full excursion of upper extremity reach. The
participant moves a virtual cursor (small sphere) through this space in order to touch
ten targets presented randomly. Movement initiation is cued by a haptically rendered
activation target (donut at the bottom of the screen). In this simulation, there are three
algorithms that are used to control the robot to accommodate varying levels of
impairments. The first algorithm is an adjustable spring-like assistance that draws the
participants’ arm/hand toward the target if they are unable to reach it within a
predefined time interval. The spring stiffness gradually increases when hand velocity
and force applied by the subject do not exceed predefined thresholds within this time
interval. Current values of active force and hand velocity are compared online with
threshold values and the assistive force increases if both velocity and force are under
threshold. If either velocity or force is above threshold, spring stiffness starts to
decrease in 5 N/m increments. The range of the spring stiffness is from 0 to 10000
N/m. The velocity threshold is predefined for each of the ten target spheres based on
the mean velocity of movement recorded from a group of neurologically healthy
subjects. The second algorithm, a haptic ramp (invisible tilted floor that goes through
the starting point and the target) decreases the force necessary to move the upper
extremity toward the target. This can be added or removed as needed. Finally, a range
restriction limits participant’s ability to deviate from an ideal trajectory toward each
target. This restriction can be decreased to provide less guidance as participants’
accuracy improves. We have recently adapted this VE to train children with hemiplegia
due to cerebral palsy. To keep the children’s attention focused, we modified this game
to make it more dynamic by enhancing the visual and auditory presentation. The
spheres, rather than just disappearing, now explode accompanied by the appropriate
bursting sound. This modification, easily implemented in the framework of VR, has
dramatically increased the children’s compliance and engagement [24].
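The stiffness-adaptation rule for the assistive spring can be sketched as below. The 5 N/m decrement and the 0-10000 N/m range are taken from the description above; using the same 5 N/m step for the increment, and the threshold values in the example, are assumptions.

```python
def update_assist_stiffness(k, velocity, force, v_thresh, f_thresh,
                            step=5.0, k_min=0.0, k_max=10000.0):
    """Online update of the assistive spring stiffness k (N/m).
    Assistance grows only while the subject's hand velocity AND active
    force are both below their thresholds; if either exceeds its
    threshold, assistance is withdrawn in 5 N/m steps."""
    if velocity < v_thresh and force < f_thresh:
        k += step   # subject is struggling: pull harder toward the target
    else:
        k -= step   # subject is active: back off the assistance
    return min(max(k, k_min), k_max)
```

Called once per control cycle, this keeps the spring just stiff enough for success while forcing the subject to supply as much of the movement as possible.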

1.2.5 Hammer Task


The Hammer Task trains a combination of three dimensional reaching and repetitive
finger flexion and extension. Targets are presented in a scalable 3D workspace (Figure
2d). There are two versions of this simulation. One game exercises movement of the
hand and arm together by having the subjects reach towards a wooden cylinder and
then use their hand (finger extension or flexion) to hammer the cylinders into the floor.
The other uses supination and pronation to hammer the wooden cylinders into a wall.
The haptic effects allow the subject to feel the collision between the hammer and target
cylinders as they are pushed through the floor or wall. Hammering sounds accompany
collisions as well. The subjects receive feedback regarding their time to complete the
series of hammering tasks. Adjusting the size of the cylinders, the amount of anti-
gravity assistance provided by the robot to the arm and the time required to
successfully complete the series of cylinders adaptively modifies the task requirements
and game difficulty.

1.2.6 Catching Falling Objects


The goal of this bilateral task simulation, Catching Falling Objects, is to enhance
movement of the paretic arm by coupling its motion with the less impaired arm (Figure
2e). Virtual hands are presented in a mono-view workspace. Each movement is
initiated by placing both virtual hands on two small circles. The participant’s arms then
move in a synchronized symmetrical action to catch virtual objects with both hands as
they drop from the top of the screen. Real-time 3-D position of the less affected arm is
measured from either a Flock of Birds sensor attached to the less impaired hand or a
second Haptic Master robot. The position of the less affected arm guides the movement
of the impaired arm. For the bilateral games, an initial symmetrical (relative to the
patient’s midline) relationship between the two arm positions is established prior to the
start of the game and maintained throughout the game utilizing a virtual spring
mechanism. At the highest levels of the virtual spring's stiffness, the Haptic Master
guides the subject's arm in a perfect 1:1 mirrored movement. As the trajectory of the
subject's hemiparetic arm deviates from a mirrored image of the trajectory of the less
involved arm, the assistive virtual spring is stretched, exerting a force on the subject's
impaired arm. This force draws the arm back to the mirrored image of the trajectory of
the uninvolved arm. The Catching Falling Objects simulation requires a quick,
symmetrical movement of both arms towards an object falling along the midline of the
screen. If the subject successfully hits the falling object three times in a row the spring
stiffness diminishes. The subject then has to exert a greater force with their hemiplegic
arm in order to maintain the symmetrical arm trajectory required for continuous
success. If the subject cannot touch the falling object appropriately by exerting the
necessary force, the virtual spring stiffens again to assist the subject. In this way, the
adaptive algorithm maximizes the active force generated by the impaired arm. The
magnitude of the active force measured by the robot defines the progress and success in
the game, therefore this adaptive algorithm insures that the patient continually utilizes
their arm and does not rely on the Haptic Master to move it for them.
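The mirror-spring assistance and its adaptation rule can be sketched as follows. This is a minimal illustration, not the published controller: the three-catches-in-a-row criterion comes from the text, but the stiffness bounds, step size, and all function and variable names are our assumptions.

```python
def mirror_spring_force(stiffness, pos_impaired, pos_mirror):
    """Hooke's-law force pulling the impaired hand toward the mirror
    image (about the midline) of the less affected hand's position."""
    return [stiffness * (m - p) for p, m in zip(pos_impaired, pos_mirror)]


def update_spring_stiffness(stiffness, caught, streak,
                            k_min=0.0, k_max=1.0, step=0.1):
    """Adaptation rule from the text: three catches in a row soften the
    spring (demanding more active force from the hemiparetic arm); a
    miss stiffens it again to assist the subject.
    k_min, k_max, and step are illustrative values, not published ones."""
    if caught:
        streak += 1
        if streak >= 3:
            stiffness = max(k_min, stiffness - step)  # demand more effort
            streak = 0
    else:
        stiffness = min(k_max, stiffness + step)      # restore assistance
        streak = 0
    return stiffness, streak
```

At maximum stiffness the spring enforces a nearly perfect 1:1 mirrored movement; as stiffness drops, the impaired arm must generate more of the force itself.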

1.3 Movement assessment

Several kinematic measures are derived from the training simulations. Each task in a
simulation consists of a series of movements, e.g., pressing a series of piano keys to
complete a song or placing 9 cups on the virtual shelves. Time to complete a task,
range of motion and peak velocity for each individual movement can be measured in
each simulation. Accuracy, which denotes the proportion of correct key presses, and
fractionation are measures specific to the hand. Peak fractionation score quantifies the
ability to isolate each finger’s motion and is calculated online by subtracting the mean
of the metacarpophalangeal and proximal interphalangeal joint angles of the most
flexed non-active finger from the mean angle of the active finger. When the actual
fractionation score becomes greater than the target score during the trial, a successful
key press will take place (assuming the subject’s active finger was over the correct
piano key). The target fractionation score starts at 0 at the beginning of the training.
After each trial, and for each finger, our algorithm averages the fractionation achieved
when the piano key is pressed. If the average fractionation score is greater than 90% of
the target, the target fractionation will increase by 0.005 radians. If the average
fractionation is less than 75% of the target, the target will decrease by the same
amount. Otherwise, the target will remain the same. There is a separate target for each
finger and for each hand (total 10 targets). Once a key is displayed for the subject to
press, the initial threshold will be the set target. This will decrease during the trial
according to the Bezier Progression (interpolation according to a Bezier curve).
Thresholds will start at the target value and decrease to zero or to a predefined negative
number over the course of one minute. Negative limits for the target score will be used
to allow more involved subjects to play the game. To calculate movement smoothness,
we compute the normalized integrated third derivative of hand displacement [25, 26].
Finally, active force denotes the mean force applied by the subject to move the robot to
the target during the movement.
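The per-finger target-adaptation rule described above can be sketched as a simple update function. The 90%/75% bands and the 0.005 rad step come from the text; the function and variable names are ours.

```python
def update_fractionation_target(target, trial_scores, step=0.005):
    """Update one finger's target fractionation (radians) after a trial.

    trial_scores: the fractionation values recorded at each successful
    key press during the trial. An average above 90% of the target
    raises the target by `step`; below 75% lowers it by the same
    amount; otherwise the target is unchanged. One such target is kept
    per finger and per hand (10 targets in total).
    """
    if not trial_scores:
        return target
    avg = sum(trial_scores) / len(trial_scores)
    if avg > 0.90 * target:
        return target + step
    if avg < 0.75 * target:
        return target - step
    return target
```

Because the target starts at 0, any positive average on the first trial raises it, so difficulty ramps up gradually from the easiest setting.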
2. Training paradigms

2.1 Training the hand

We trained patients using three different paradigms: the hand alone, the hand and arm
separately, and the hand and arm together. We trained the hemiplegic hand of 8
subjects in the chronic phase post-stroke [9, 10]. Examination of the group effects
using analysis of variance of the data from the first two days of training, last two days
of training, and the one-week retention test showed significant changes in performance
in each of the parameters of hand movement that were trained in the virtual
environment. Post-hoc analyses revealed that subjects as a group improved in finger
fractionation (a measurement of finger flexion independence), thumb range of motion,
finger range of motion, thumb speed and finger speed. The Jebsen Test of Hand
Function (JTHF) [27], a timed test of hand function and dexterity, was used to
determine whether the kinematic improvements gained through practice in the VE
transferred to real-world functional activities. After training, the average task
completion time for all seven subtests of the JTHF for the affected hand decreased
(group mean (SD)) from 196 (62) sec to 172 (45) sec (paired t-test, t = 2.4, p < .05). In
contrast, no changes were observed for the unaffected hand (t = .59, p = .54). Analysis of
variance of the Jebsen scores from the pre-therapy, post-therapy and one-week
retention test demonstrated significant improvement in the scores. The subjects’
affected hand improved in this test (pre-therapy versus post-therapy) on average by
12%. In contrast, no significant changes were observed for the unaffected hand.
Finally, scores obtained during the retention testing were not significantly different
from post-therapy scores.

2.2 Training the hand and arm

Four subjects (mean age = 51; years post-stroke = 3.5) practiced approximately
three hrs/day for 8 days on simulations that trained the arm and hand separately
(Reach/Touch, Placing Cups, Piano/Hand alone). Four other subjects (mean age = 59;
years post-stroke = 4.75) practiced for the same amount of time on simulations that
trained the arm and hand together (Hammer, Plasma Pong, Piano/Hand/Arm,
Hummingbird Hunt). All subjects were tested pre- and post-training on two of our
primary outcome measures: the JTHF and the Wolf Motor Function Test (WMFT), a
time-based series of tasks that evaluates upper extremity performance [28]. The group
that practiced arm and hand tasks separately (HAS) showed a 14% change in the
WMFT and a 9% change in the JTHF, whereas the group that practiced using the
simulations that trained the arm and hand together (HAT) showed a 23% (WMFT) and
29% (JTHF) change in these tests of real-world hand and arm movements.
There were also notable changes in the secondary outcome measures: the
kinematic and force data derived from the virtual reality simulations during training.
These kinematic measures included time to task completion (duration), accuracy,
velocity, smoothness of hand trajectory and force generated by the hemiparetic arm.
Subjects in both groups showed similar changes in the time to complete each game, a
36%-42% decrease, depending on the specific simulation. Additionally, three of the
four subjects in the HAS group improved the smoothness of their hand trajectories (in
the range of 50%-66%), indicating better control [29].
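The smoothness measure referenced here is the normalized integrated third derivative (jerk) of hand displacement [25, 26]. One common dimensionless formulation is sketched below; the exact normalization used by the authors is not given in the text, so the scaling here is an assumption.

```python
import numpy as np

def normalized_jerk(positions, dt):
    """Dimensionless smoothness of a 1-D hand trajectory.

    positions: sampled hand displacement (m); dt: sample period (s).
    Differentiates three times numerically, integrates the squared
    jerk, and normalizes by movement duration and amplitude so that
    slower or larger movements are not penalized. Lower = smoother.
    """
    pos = np.asarray(positions, dtype=float)
    jerk = np.gradient(np.gradient(np.gradient(pos, dt), dt), dt)
    duration = (len(pos) - 1) * dt
    amplitude = pos.max() - pos.min()
    # one common normalization: sqrt(T^5 / (2 A^2) * integral(jerk^2))
    return np.sqrt(0.5 * duration**5 / amplitude**2 * np.sum(jerk**2) * dt)
```

A jerky trajectory of the same duration and amplitude yields a larger score than a smooth one, which is why the 50%-66% reductions above indicate better control.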
Figure 3. Trajectories of a representative subject performing single repetitions of the cup reaching
simulation. a. The dashed line represents the subject’s performance without any haptic effects on Day 1 of
training. The solid line represents the subject's performance with the trajectory stabilized by the damping
effect and with the work against gravity decreased by the robot. Also note the collision with the haptically
rendered shelf during this trial. b. The same subject’s trajectory while performing the cup placing task
without haptic assistance following training. Note the coordinated, up and over trajectory, consistent with
normal performance of a real world placing task (adapted from [13]).
However, the subjects in the HAT group showed a more pronounced decrease in
the path length. This suggests a reduction in extraneous and inaccurate arm movement
with more efficient limb segment interactions. Figure 3 shows the hand trajectories
generated by a representative subject in the Placing Cup activity pre and post training.
Figure 3a depicts a side view of a trajectory generated without haptic assistance, and
another trajectory generated with additional damping and increased antigravity support.
At the beginning of the training the subject needed the addition of the haptic effects to
stabilize the movement and to provide enough arm support for reaching the virtual
shelf. However, Figure 3b shows that after two weeks of training this subject
demonstrated a more normalized trajectory even without haptic assistance.

2.3 Bilateral training

In the upper arm bilateral games, movement of the unimpaired hand guides the
movement of the impaired hand. Importantly, an adaptive algorithm continually
modifies the amount of force assistance provided by the robot. This is based upon the
force generation and success in the game achieved by the subject. This adaptive
algorithm thereby ensures that the patient continually utilizes their hemiplegic arm and
does not rely on the Haptic Master to move it for them. Figure 4 shows the change in
the relationship between the assistive force provided by the robot and the active force
generated by a representative subject on Day 1 and Day 7 of training. With the aid of
this algorithm, the subjects were able to minimize their reliance on the assistance
provided by the robot during training, and greatly increase the force they could
generate to successfully complete the catching task during the final days of training.
Active force was calculated as the amount of force generated to move the robot towards
the target and did not take into account the force required to support the arm against
gravity. The mean active force produced by the impaired upper extremity during this
bilateral elbow-shoulder activity increased by 82% and 95% for two subjects who were
more impaired (pre-training WMFT scores of 180 sec and 146 sec). Two other subjects
who were less impaired, (pre-training WMFT scores of 67 sec and 54 sec) improved
their active force by 17% and 22% respectively.
Figure 4. Interaction between the subject and robot which is coordinated by on-line assistance algorithms.
Figure 4a depicts the performance of a repetition of Reach/Touch. The dashed line plots the hand velocity
over time. As the subject moves toward the target, the assistive force, depicted by the solid line, stays at a
zero level unless the subject fails to reach the target within a predefined time window. As the subject's
progress toward the target slows, the assistive force increases until progress resumes, and then starts to
decrease after velocity exceeds a predefined threshold value. Figures 4b and 4c describe two repetitions of the
bilateral Catching Falling Objects simulation. Performance on Day 1 (b) requires assistive force from the
robot (solid line) when the subject is unable to overcome gravity and move the arm towards the target (active
force, dashed line, dips below zero). Figure 4c shows much less assistance from the robot to perform the
same task because the subject is able to exert active force throughout the task.
Questionnaires have been used to assess subjects' perception of and satisfaction with
the training sessions, the physical and mental effort involved in the training, and their
evaluation of the different exercises. The subjects were eager to participate in the
project. They found the computer sessions required a lot of mental concentration, were
engaging and helped improve their hand motion. They found the exercises to be tiring
but wished this form of training had been part of their original therapy. When
comparing the hand simulations they stated that playing the piano one finger at a time
(fractionation exercise) required the most physical and mental effort.

3. Virtual Reality as a tool for engaging targeted brain networks

Studies show that training in virtual environments (VE) has had positive effects on
motor recovery [9, 10, 22, 30, 31] and neural [32, 33] adaptations. However, what
remains untested is whether these benefits emerge simply because VR is an
entertaining practice environment or whether interacting in a specially-designed VE
can be used to selectively engage a frontoparietal action observation and action
production network.
It is important to understand the neural mechanism underlying these innovative
rehabilitation strategies. Little is understood about the susceptibility of brain function to
various sensory (visual, tactile, auditory) manipulations within the VE. It is critical to
determine the underlying neurological mechanisms of moving and interacting within a
VE and to consider how they may be exploited to facilitate activation in neural
networks associated with sensorimotor learning.
Empirical data suggests that sensory input can be used to facilitate reorganization
in the sensorimotor system. Additionally, recent studies have shown that a
distributed neural network, which includes regions containing mirror neurons, can be
activated through observation of actions when intending to imitate those actions.
Regions within the frontoparietal network: the opercular region of the inferior frontal
gyrus (IFG) and the adjacent precentral gyrus (which we will collectively refer to as the
IFG) and the rostral extent of the inferior parietal lobule (IPL) have been extensively
researched for their role in higher-order representation of action [34-37]. Mirror and
canonical neurons may play a central role. Detailed accounts and physiological
characteristics of these neurons are extensively documented (for review, see [38]),
however, a key property of a mirror cell is that it is equally activated by either
observing or actuating a given behavior. Though initially identified in non-human
primates, there is now compelling evidence for the existence of a human mirror neuron
system [38, 39]. Although the nature of tasks and functions that may most reliably
capture this network remains under investigation (for example, see [37]),
neurophysiological evidence suggests that mirror neurons may be the link that allows
the sensorimotor system to resonate when observing actions, such as for motor
learning. Notably, the pattern of muscle activation evoked by transcranial magnetic
stimulation to the primary motor cortex while observing a grasping action was found to
be similar to the pattern of muscle activation seen during actual execution of that
movement [40, 41] suggesting that the neural architecture for action recognition
overlaps with and can prime the neural architecture for action production [42]. This
phenomenon may have profound clinical implications [43].
Literature on the effects of observing actions performed in the natural world,
indicating recruitment of specific neural networks [38], allows us to hypothesize that
observation of actions performed in a VE may also recruit neural circuits of interest. If
we can show proof of concept for using virtual reality feedback to selectively drive
brain circuits in healthy individuals, then this technology can have profound
implications for use in diagnoses, rehabilitation, and studying basic brain mechanisms
(i.e. neuroplasticity). We have done several pilot experiments using MRI-compatible
data gloves to combine VE experiences with fMRI to test the feasibility of using VE-
based sensory manipulations to recruit select sensorimotor networks. In this chapter, in
addition to the data supporting the feasibility of our enhanced training system, we also
present preliminary data indicating that through manipulations in the VE, one can
activate specific neural networks, particularly those neural networks associated with
sensorimotor learning.

3.1 fMRI Compatible Virtual Reality System

Three components of this system are synchronized: the presentation of the visual
display of the virtual hands, the collection of fMRI images, and the collection of hand
joint angles from the MRI-compatible (5DT) data gloves. We have extracted the
essential elements common to all of our environments, the virtual hands, in order to test
the ability of visual feedback provided through our virtual reality system to affect brain
activity (Figure 5, left panel). Subjects performed simple sequential finger flexion
movements with their dominant right hand (index through pinky fingers) as if they
were pressing imaginary piano keys at a rate of 1 Hz. Subjects’ finger motion was
recorded and the joint angles were transmitted in real time to a computer controlling the
motion of the virtual hands. Thus we measured event-related brain responses in real-
time as subjects interacted in the virtual environment.
The virtual hand on the display was sized in proportion to each subject's actual hand,
and its movement was calibrated for each subject before the experiment. After
calibration, glove data collection was synchronized with the first functional volume of
each functional imaging run by a trigger signal transmitted from the scanner to the
computer controlling the glove. From that point, glove data was collected in a
continuous stream until termination of the visual presentation program at the end of
each functional run. As glove data was acquired, it was time-stamped and saved for
offline analysis. fMRI data was realigned, co-registered, normalized, and smoothed (10
mm Gaussian filter) and analyzed using SPM5 (http://www.fil.ion.ucl.ac.uk/spm/).
Activation was significant if it exceeded a threshold level of p<0.001 and a voxel
extent of 10 voxels. Finger motion data were analyzed offline using custom-written
Matlab software to confirm that subjects produced the instructed finger sequences and
rested in the appropriate trials. Finger motion amplitude and frequency were analyzed
using standard multivariate statistical approaches to ensure that differences in finger
movement did not account for any differences in brain activation.
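The significance criterion just described (voxelwise p < 0.001 with a 10-voxel extent) amounts to a cluster-extent filter on the statistical map. The sketch below illustrates the idea with a simple connected-component search; it is not the SPM5 implementation, and the function name is ours.

```python
import numpy as np
from collections import deque

def threshold_activation(p_map, p_thresh=0.001, min_extent=10):
    """Voxelwise p-threshold followed by a cluster-extent filter,
    mirroring the p < 0.001, 10-voxel criterion in the text.
    Clusters are 6-connected (face neighbors) in the 3-D volume."""
    supra = p_map < p_thresh
    keep = np.zeros_like(supra)
    visited = np.zeros_like(supra)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(supra)):
        if visited[start]:
            continue
        # breadth-first search to collect one connected cluster
        cluster, queue = [], deque([start])
        visited[start] = True
        while queue:
            v = queue.popleft()
            cluster.append(v)
            for d in offsets:
                n = tuple(v[i] + d[i] for i in range(3))
                if all(0 <= n[i] < supra.shape[i] for i in range(3)) \
                        and supra[n] and not visited[n]:
                    visited[n] = True
                    queue.append(n)
        if len(cluster) >= min_extent:     # extent threshold
            for v in cluster:
                keep[v] = True
    return keep
```

Isolated suprathreshold voxels are discarded; only spatially contiguous clusters of at least the minimum extent survive, reducing false positives from noise.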
First, we investigated whether observing virtual hand actions with the intention to
imitate those actions afterwards activates known frontoparietal observation-execution
networks. After signing institutionally approved consent forms, eight right-handed subjects
who were naïve to the virtual reality environment and free of neurological disease were
tested in two conditions: 1) Watch Virtual Hands: observe finger sequences performed
by a virtual hand model with the understanding that they would imitate the sequence
after it was demonstrated – observe with intent to imitate (OTI), 2) Move and Watch
Hands: execute the observed sequence while receiving real time feedback of the virtual
hands (actuated by the subject's motion). The trials were arranged as 9-second
miniblocks separated by a random interval lasting 5-10 seconds. Each
subject completed four miniblocks of each condition.
In the Move+Watch condition, significant activation was noted in a distributed
network traditionally associated with motor control: contralateral sensorimotor,
premotor, and posterior parietal cortices, the basal ganglia, and the ipsilateral anterior
intermediate cerebellum.
In the OTI condition, significant activation was noted in the contralateral dorsal
premotor cortex, the (pre)supplementary motor area, and the parietal cortex. Parietal
activation included regions in the superior and inferior parietal lobules and overlapped
with activation noted in the Move+Watch condition in the rostral extent of the
intraparietal sulcus (see Figure 5). The common activation noted in this region for
intentional observation and execution of action is in line with other reports using video
playback of real hands moving [34, 37] and suggests that a well-constructed VE may tap
into similar neural networks.
Figure 5. Left panel. Subject’s view during fMRI experiment (top). The real hand in a 5DT glove is
shown below that. Movement of the virtual hand can be generated as an exact representation of the real
hand, or can be distorted to study action-observation interaction inside a virtual environment. Right panel.
Observing finger sequences with the intention to imitate afterwards. Significant BOLD activity (p<.001)
is rendered on an inflated cortical surface template. Arrows show activation in the dorsal premotor cortex,
BA 5, rostral portion of the IPS, supramarginal gyrus, and (pre)supplementary motor area, likely
associated with planning sequential finger movements.
Having demonstrated the ability to use movement observation and execution in an
interactive VE to activate brain areas often recruited for real-world observation and
movement, we then tested the ability of the VE to facilitate select regions in the brain.
If this proves successful, VE can offer a powerful tool to clinicians treating
patients with various pathologies. As a vehicle for testing our proof of concept, we
chose to test a common challenge facing the stroke patient population: hemiparesis.
Particularly early after stroke, facilitating activation in the lesioned motor cortex is
extremely challenging since paresis during this phase is typically most pronounced. We
hypothesized that viewing a virtual hand corresponding to the patient’s affected side
and animated by movement of the patient’s unaffected hand could selectively facilitate
the motor areas in the affected hemisphere. This design and hypothesis were inspired by
studies by Altschuler [44], who demonstrated that viewing hand motion through a
mirror placed in the sagittal plane during bilateral arm movements might facilitate hand
recovery in patients post stroke.
Three healthy subjects and one patient who had a right subcortical stroke
performed sequences of finger flexions and extensions with their right hand in four
sessions. During each session, subjects performed 50 trials while receiving one of four
types of visual feedback: 1. left virtual hand motion, 2. right virtual hand motion, 3. left
virtual blob motion, 4. right virtual blob motion. Data were analyzed on a subject-by-
subject basis (Factors: left VR hand, right VR hand, left blob, right blob). Since we
were interested in the effects on the right motor cortex, we limited our analysis to this
region by creating a region-of-interest mask. We created the mask by finding the
location of the peak activation in motor cortex in the Movement > Rest contrast
(healthy subjects: [42, 12, 64], stroke patient: [34, 18, 64]) and used a custom-written
program [46] to create a mask with a radius of 20 mm centered at the coordinates
above.
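Building a spherical region-of-interest mask like the 20 mm one described above can be sketched as follows. This works directly in voxel indices; mapping the reported MNI mm coordinates to voxel space via the image affine is omitted, and the function name is ours.

```python
import numpy as np

def spherical_roi(shape, center_vox, radius_vox):
    """Boolean sphere mask for restricting analysis to one region,
    e.g. the motor cortex peak from the Movement > Rest contrast.

    shape: volume dimensions in voxels; center_vox: sphere centre in
    voxel indices; radius_vox: radius in voxels (a 20 mm radius would
    first be converted using the scan's voxel size).
    """
    z, y, x = np.ogrid[:shape[0], :shape[1], :shape[2]]
    cz, cy, cx = center_vox
    dist2 = (z - cz)**2 + (y - cy)**2 + (x - cx)**2
    return dist2 <= radius_vox**2
```

The resulting boolean volume is multiplied into (or used to index) the statistical map so that only voxels inside the sphere are tested.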
Figure 6. A representative healthy subject (left panel) and a chronic stroke patient (right panel)
performed a finger sequence with the RIGHT hand. The inset in the right panel shows the lesion location
in the stroke patient (see also [9]). For each subject, the panels show the activations that were
significantly greater when viewing the corresponding finger motion of the LEFT more than the RIGHT
virtual hand (i.e. activation related to ‘mirror’ viewing). Note that viewing the LEFT virtual hand led to
significantly greater activation of the primary motor cortex IPSILATERAL to the moving hand (i.e.
contralateral to the observed virtual hand) (see arrow). Significant BOLD activity (p<.01) is rendered on
an inflated cortical surface template using Caret software (adapted from [22]).
Figure 6 shows activation in the ROI that was greater when the LEFT (relative to
RIGHT) virtual hand was actuated by the subject’s physical movement of their right
hand. In other words, this contrast represents greater activation when seeing the virtual
mirrored hand than the corresponding hand. This simple sensory manipulation was
sufficient to selectively facilitate lateralized activity in the cortex representing the
observed (mirrored) virtual hand. As our preliminary data suggest, in the case of stroke
patients this visual manipulation in a VE may be effective in facilitating the
sensorimotor cortex in the lesioned hemisphere and may help explain the
positive therapeutic effects noted by Altschuler and colleagues [44] when training
stroke patients using mirror therapy.

4. Discussion

Rehabilitation of the upper extremity is difficult. It has been reported that 75%-95% of
patients post stroke learn to walk again, but 55% have continuing problems with upper
extremity function [47, 48]. The complexity of sensorimotor control required for hand
function as well as the wide range of recovery of manipulative abilities makes
rehabilitation of the hand even more challenging. Moreover, while walking requires
integration of both limbs, ensuring that the affected limb is ‘exercised’ during
ambulation, some upper extremity tasks can be completed using only the unaffected
limb, creating a situation in which the patient gets used to neglecting the affected side
(learned disuse).
Although we demonstrated positive outcomes with the original system, it was only
appropriate for patients with mild impairments. Our second-generation system,
combining movement tracking, virtual reality therapeutic gaming simulations, and
robotics, appears to be a viable option for patients with more significant
impairments of the upper extremity. The haptic mechanisms such as the spring
assistance, the damping to stabilize trajectories and the adaptable anti-gravity
assistance allowed patients with greater impairments to successfully participate in
activities in which they could not usually partake. From a clinical perspective,
therapists can tailor the interventions to address the particular needs of the patients, and
from the patients' perspective, it was clear throughout the testing of the system that the
patients enjoyed the activities and were challenged by the intervention.
In addition to their use in providing more intense therapy of longer
duration, Brewer [6] suggests that robotic systems have the potential to address the challenge
of conducting clinically relevant research. An example of this is the comparison we
described above, training the hand and arm separately to training them together. It is
controversial whether training the upper extremity as an integrated unit leads to better
outcomes than training the proximal and distal components separately. The current
prevailing paradigm for upper extremity rehabilitation describes the need to develop
proximal control and mobility prior to initiating training of the hand. During recovery
from a lesion, the hand and arm are thought to compete with each other for neural
territory [49]. Therefore, training proximal control first or along with distal control may
actually have deleterious effects on the neuroplasticity and functional recovery of the
hand. However, neural control mechanisms of arm transport and hand-object
interaction are interdependent. Therefore, complex multisegmental motor training is
thought to be more beneficial for skill retention. Our preliminary results demonstrate
that in addition to providing an initial proof of concept, the system allows for the
systematic testing of such controversial treatment interventions.
Our second goal was to design a sensory stimulation paradigm for acute and severe
patients with limited ability to participate in therapy. A practice condition used during a
therapy session is that of visual demonstration or modeling. Current neurological
evidence suggests that the observation of motor actions is more than an opportunity to
understand the requirements of the movement to be executed. Many animal and human
studies have shown activation of the motor cortex during observation of actions done
by others [38]. Observation of motor actions may actually activate similar neural
pathways to those involved in the performance of the observed action. These findings
provide an additional potential avenue of therapeutic intervention to induce neural
activation.
However, some studies indicate that neural processing is not the same when
observing real actions as when observing virtual actions, suggesting that observing
virtual models of human arms could have significantly less of a facilitation effect
compared to video clips of real arm motion [50]. We found that when our subjects
viewed the movement of the virtual hands, with the intention of imitating that action,
the pre-motor and posterior parietal areas were activated. Furthermore, we showed in
both healthy subjects and in one subject post-stroke, that when the left virtual hand was
actuated by the subject’s physical movement of their right hand, activity in the cortex
ipsilateral to the real moving hand (contralateral to the moving virtual hand) was
selectively facilitated.
We hypothesized that viewing a virtual hand corresponding to the patient’s
affected side and animated by movement of the patient’s unaffected hand could
selectively facilitate the motor areas in the affected hemisphere. This sensory
manipulation takes advantage of the capabilities of virtual reality to induce activation
through observation and to perturb the reality in order to target particular networks. We
are optimistic about our preliminary findings and suggest that this visual manipulation
in a VE should be further explored to determine its effectiveness in facilitating
sensorimotor areas in a lesioned hemisphere.
We believe that VR is a promising tool for rehabilitation. We found that adding
haptic control mechanisms to the system enabled subjects with greater impairments to
successfully participate in these intensive computerized training paradigms. Finally, we
investigated the underlying mechanisms of interacting within a VE. We found that the
value of training in a VE is not limited to its ability to provide an intensive practice
environment, but that specially designed VEs can be used to selectively activate a
frontoparietal action observation and action production network. This finding opens a
doorway to a potential tool for clinicians treating patients with a variety of
neuropathologies.

Acknowledgments

This work was supported in part by Rehabilitation Engineering Research Center grant #
H133E050011 from the National Institute on Disability and Rehabilitation Research.

References

[1] J.L. Patton, and F.A. Mussa-Ivaldi, Robot-assisted adaptive training: custom force fields for teaching
movement patterns, IEEE Transactions in Biomedical Engineering 51 (2004), 636–646.
[2] P.S. Lum, C.G. Burgar, P.C. Shor, M. Majmundar, and M. Van der Loos, MIME robotic device for
upper limb neurorehabilitation in subacute stroke subjects: A follow up study, Journal of Rehabilitation
Research and Development 42 (2006), 631-642.
[3] H.I. Krebs, N. Hogan, M.L. Aisen, and B.T. Volpe, Robot-aided neurorehabilitation, IEEE
Transactions in Rehabilitation Engineering 6 (1998), 75-87.
[4] L.E. Kahn, P.S. Lum, W.Z. Rymer, and D.J. Reinkensmeyer, Robot assisted movement training for the
stroke impaired arm: Does it matter what the robot does? Journal of Rehabilitation Research and
Development 43 (2006), 619-630.
[5] S. McCombe-Waller, and J. Whittall, Fine motor coordination in adults with and without chronic
hemiparesis: baseline comparison to non-disabled adults and effects of bilateral arm training, Archives
of Physical Medicine and Rehabilitation 85 (2004), 1076-1083.
[6] B.R. Brewer, S.K. McDowell, and L.C. Worthen-Chaudhari, Poststroke upper extremity rehabilitation:
A review of robotic systems and clinical results, Topics in Stroke Rehabilitation 14 (2007), 22-44.
[7] K.M. Stanney, Handbook of Virtual Environments: Design, Implementation and Applications, London,
Lawrence Erlbaum, 2002.
[8] G.C. Burdea, and P. Coiffet, Virtual Reality Technology, New Jersey, Wiley, 2003.
[9] A.S. Merians, H. Poizner, R. Boian, G. Burdea, and S. Adamovich, Sensorimotor training in a virtual
reality environment: does it improve functional recovery poststroke?, Neurorehabilitation and Neural
Repair 20 (2006), 252-67.
[10] S.V. Adamovich, A.S. Merians, R Boian, M. Tremaine, G.C. Burdea, M. Recce, and H. Poizner, A
virtual reality (VR)-based exercise system for hand rehabilitation after stroke, Presence 14 (2005), 161-
174.
[11] A.S. Merians, J. Lewis, Q. Qiu, B. Talati, G.G. Fluet, and S.A. Adamovich, Strategies for Incorporating
Bilateral Training into a Virtual Environment, In: IEEE/ICME International Conference on Complex
Medical Engineering, Beijing, China, 2007, pp. 1272-1277.
[12] S.A. Adamovich, Q. Qiu, B. Talati, G.G. Fluet, and A.S. Merians, Design of a Virtual Reality Based
System for Hand and Arm Rehabilitation. In: IEEE 10th International Conference on Rehabilitation
Robotics, Noordwijk, The Netherlands, 2007, pp. 958-963.
[13] S.A. Adamovich, G.G. Fluet, A.S. Merians, A. Mathai, and Q. Qiu, Incorporating haptic effects into
three-dimensional virtual environments to train the hemiparetic upper extremity, IEEE Transactions on
Neural Systems and Rehabilitation Engineering (2009), in press.
[14] Q. Qiu, D.A. Ramirez, K. Swift, H.D. Parikh, D. Kelly, and S.A. Adamovich, Virtual environment for
upper extremity rehabilitation in children with hemiparesis, In: NEBC / IEEE 34th Annual Northeast
Bioengineering Conference Providence, RI, 2008.
[15] 5DT, 5DT Data Glove 16 MRI, http://www.5dt.com.
[16] Immersion, CyberGlove, http://www.immersion.com, 2006.
[17] Moog FCS Corporation, Haptic Master, http://www.fcs-cs.com, 2006.
[18] R.Q. Van der Linde, P. Lammertse, E. Frederiksen, and B. Ruiter, The HapticMaster, a new high-
performance haptic interface, In: Proceedings Eurohaptics, 2002, pp. 1-5.
[19] Ascension, Flock of Birds, http://www.ascension-tech.com, 2006.
[20] Dassault Systemes, Virtools Dev 3.5, www.virtools.com, 2005.
[21] R.M. Taylor, The Virtual Reality Peripheral Network (VRPN), http://www.cs.unc.edu/Research/vrpn,
2006.
[22] A.S. Merians, E. Tunik, G.G. Fluet, Q. Qiu, S.V. Adamovich, Innovative Approaches to Rehabilitation
of Upper Extremity Hemiparesis Using Virtual Environments, European Journal of Physical and
Rehabilitation Medicine 44 (2008), in press.
[23] RealD/StereoGraphics, CrystalEyes shutter eyewear, htt://www.reald.com, (2006).
[24] Q. Qiu, D.A. Ramirez, S. Saleh, G.G. Fluet, H.D. Parikh, D.Kelly, and S.V. Adamovich, The New
Jersey Institute of Technology Robot-Assisted Virtual Rehabilitation system for children with cerebral
palsy: A feasibility study, Journal of Neuroengineering and Rehabilitation, submitted.
[25] H. Poizner, A.G. Feldman, M.F. Levin, M.B. Berkinblit, W.A. Hening, A. Patel, S.V. Adamovich, The
timing of arm-trunk coordination is deficient and vision-dependent in Parkinson's patients during
reaching movements, Experimental Brain Research 133 (2000), 279-92.
[26] S.V. Adamovich, M.B. Berkinblit, W. Hening, J. Sage, H. Poizner, The interaction of visual and
proprioceptive inputs in pointing to actual and remembered targets in Parkinson's disease, Neuroscience
104 (2001), 1027-41.
[27] R.H. Jebsen, N. Taylor, R.B. Trieschmann, M.J. Trotter, and L.A. Howard, An objective and
standardized test of hand function, Archives of Physical Medicine and Rehabilitation 50 (1969), 311-
319.
[28] S. Wolf, P. Thompson, D. Morris, D. Rose, C. Winstein, E. Taub, C. Giuliani, and S. Pearson, The
EXCITE Trial: Attributes of the Wolf Motor Function Test in Patients with Sub acute Stroke,
Neurorehabilitation & Neural Repair 19 (2005), 194-205.
[29] B. Rohrer, S. Fasoli, H.I. Krebs, R. Hughes, B. Volpe, W.R. Frontera, J. Stein, and N. Hogan,
Movement smoothness changes during stroke recovery, Journal of Neuroscience 22 (2002), 8297-
82304.
[30] J.E. Deutsch, A.S. Merians, S.V. Adamovich, H.P. Poizner, and GC. Burdea, Development and
application of virtual reality technology to improve hand use and gait of individuals post-stroke,
Restorative Neurology and Neuroscience 22 (2004), 341-386.
[31] M.K. Holden, NeuroRehabilitation using 'Learning by Imitation' in Virtual Environments, Proceedings
of HCI International, London, Lawrence Erlbaum Assoc., 2001.
[32] S.H. You , S.H. Jang, Y. H. Kim, M. Hallett, S.H. Ahn, Y.H. Kwon, J.H. Kim, and M.Y. Lee, Virtual
reality-induced cortical reorganization and associated locomotor recovery in chronic stroke: an
experimenter-blind randomized study, Stroke 36 (2005), 1166-71.
[33] S.H. You, S.H. Jang, Y.H. Kim, Y.H. Kwon, I. Barrow, and M. Hallett, Cortical reorganization induced
by virtual reality therapy in a child with hemiparetic cerebral palsy, Developmental Medicine and Child
Neurology 47 (2005), 628-35.
[34] A.F. Hamilton, and S.T. Grafton, Goal representation in human anterior intraparietal sulcus, Journal of
Neuroscience 26 (2006), 1133-1137.
[35] E.S. Cross, A.F. Hamilton, and S.T. Grafton, Building a motor simulation de novo: Observation of
dance by dancers, Neuroimage 31 (2006), 1257-1267.
[36] E. Tunik, P. Schmitt, and S.T. Grafton, BOLD coherence reveals segregated functional neural
interactions when adapting to distinct torque perturbations, Journal of Neurophysiology 97 (2007),
2107-2120.
[37] I. Dinstein, U. Hasson, N. Rubin, and D.J. Heeger, Brain areas selective for both observed and executed
movements, Journal of Neurophysiology 98 (2007), 1415-1427.
[38] G. Rizzolatti, and L. Craighero, The mirror neuron system, In: S. Hyman, ed. Annual Review of
Neuroscience 27 (2004), 169-192.
[39] M. Iacoboni, I. Molnar-Szakacs, V. Gallese, G. Buccino, J.C. Mazziotta, and G. Rizzolatti, Grasping
the intentions of others with one's own mirror neuron system, PLoS Biol 3 (2005), e79.
[40] L. Fadiga, L. Fogassi, G. Pavesi, and G. Rizzolatti, Motor facilitation during action observation: a
magnetic stimulation study, Journal of Neurophysiology 73 (1995), 2608-2611.
[41] M. Gangitano, F.M. Mottaghy, and A. Pascual-Leone, Phase-specific modulation of cortical motor
output during movement observation, Neurology Report 12 (2001), 1489-1499.
[42] G. Buccino, F. Binkofski, and L. Riggio, The mirror neuron system and action recognition, Brain &
Language 89 (2004), 370-376.
[43] M. Iacoboni, and J.C. Mazziotta, Mirror neuron system: basic findings and clinical applications, Annals
of Neurology 62 (2007), 213-218.
[44] E.L. Altschuler, S.B. Wisdom, L. Stone, C. Foster, D. Galasko, D.M. Llewellyn, and V.S.
Ramachandran, Rehabilitation of hemiparesis after stroke with a mirror, Lancet 353 (1999), 2035-2036.
[45] R.C. Welsh, simpleROIbuilder (http://wwwpersonal.umich.edu/~rcwelsh/Simple ROIBuilder) (2008).
[46] ‘Volumes’ toolbox extension http://sourceforge.net/projects/spmtools).
[47] T. Olsen, Arm and leg paresis as outcome predictors in stroke rehabilitation, Stroke 21 (1990), 247-251.
[48] K. Hiaraoka, Rehabilitation effort to improve upper extremity function in post stroke patients: A meta-
analysis, Journal of Physical Therapy Science 13 (2001), 5-9.
[49] W. Muellbacher, C. Richards, U. Ziemann, G. Wittenberg, D. Weltz, B. Boroojerdi, L. Cohen, and M.
Hallett, Improving hand function in chronic stroke, Archives of Neurology 59 (2002), 1278-1282.
[50] D. Perani, F. Fazio, and N.A. Borghese, Different brain correlates for watching real and virtual hand
actions, NeuroImage 14 (2001), 749-758.
Robot therapy for stroke survivors: proprioceptive training and regulation of assistance

Vittorio SANGUINETI a,1, Maura CASADIO a,b, Elena VERGARO a, Valentina SQUERI a,b, Psiche GIANNONI c and Pietro G. MORASSO a,b

a Department of Informatics, Systems and Telematics, University of Genoa, Genoa, Italy
b Italian Institute of Technology, Genoa, Italy
c ART Education and Rehabilitation Center, Genoa, Italy
Abstract. Robot therapy seems promising for stroke survivors, but it is unclear
which exercises are most effective, and whether other pathologies may benefit
from this technique. In general, exercises should exploit the adaptive nature of the
nervous system, even in chronic patients. Ideally, exercise should involve multiple
sensory modalities and, to promote active subject participation, the level of
assistance should be kept to a minimum. Moreover, exercises should be tailored to
the different degrees of impairment, and should adapt to changing performance.
To this end, we designed three tasks: (i) a hitting task, aimed at improving the
ability to perform extension movements; (ii) a tracking task, aimed at improving
visuo-motor control; and (iii) a bimanual task, aimed at fostering inter-limb
coordination. All exercises are conducted on a planar manipulandum with two
degrees of freedom, and involve alternating blocks of exercises performed with
and without vision. The degree of assistance is kept to a minimum, and adjusted to the subject’s changing performance. All three exercises were tested on chronic
stroke survivors with different levels of impairment. During the course of each
exercise, movements became faster, smoother, more precise, and required
decreasing levels of assistive force. These results point to the potential benefit of assist-as-needed training with a proprioceptive component in a variety of clinical conditions.
Keywords. Robot therapy, stroke, rehabilitation
Introduction
During the last few years, considerable effort has been devoted to using robots for
delivering therapy to persons with motor disabilities [1, 2]. Robotic devices have been
frequently used to enforce passive movements (see Figure 1, left). In fact, it has been
shown that repeated passive exercise may help improve recovery [3, 7]. However, a number of studies [8, 10] point to techniques that take the adaptive nature of the
nervous system into consideration. Such techniques include active-assisted exercises,
in which the robot guides the arm along a desired path (see Figure 1, right). A variant is
1 Corresponding author: Department of Informatics, Systems and Telematics, University of Genoa, Via Opera Pia 13, 16145 Genoa (ITALY). E-mail: vittorio.sanguineti@unige.it
Figure 1. Passive (left) and Active (right) training modalities.
represented by active-constrained techniques, in which the robot only allows
movement when the limb forces are appropriately directed toward the target. In
contrast, in active-resisted exercises the robot provides resistance to the desired
movement. Furthermore, in adaptive exercises the robot provides an unfamiliar
dynamic environment, which requires the subject to adapt. Active-resisted and adaptive
techniques imply the presence of a sufficient residual voluntary function, but are not
viable options for severely impaired subjects, who may lack autonomous control of
their movements. On the other hand, these subjects may benefit from therapeutic
protocols in which a sufficient level of assistance allows them to exploit their residual
abilities.
In this chapter, we review a number of studies on using robots with chronic stroke
survivors. In particular, we suggest that rehabilitation protocols should involve vision
and no-vision (proprioceptive) training, and that assistance should be kept to a
minimum.
Motor learning and the role of assistance
Most robot therapy protocols tested in clinical trials use a combination of active and
passive training [1, 2]; therefore, it is hard to draw solid conclusions on their relative
merits. Some indications on what exercises are more effective may come from a better
understanding of the neural basis of motor learning.
The mechanisms of action of physical assistance in promoting motor learning or
re-learning are poorly understood. Assistive forces help subjects complete the motor
task, which in turn may increase subject motivation, even in the early phase of the
learning/recovery process. Furthermore, assistive forces may elicit the right afferent
signals (proprioceptive, tactile), thus promoting the emergence of the appropriate
associations in sensory and motor cortical areas. In addition, assistive forces may affect learning by inducing a sensation of greater stability of the external environment, or some aspects of it, which is a necessary condition for long-term, more stable adaptation to occur
[11].
In the simplest form of assisted exercise, the robot has complete control of the task.
This may be beneficial, but as learning (or re-learning) progresses, the differences
between physically guided and active movements become more important. Passive
(completely assisted) movements provide feedback that is different from that of active
movements. In fact, in motor learning studies, the benefits of physical guidance for
motor learning have mainly been ascribed to its early phase, when the motor pattern is
brought into the ‘right ballpark’, e.g. [12].
The learning process may be facilitated if augmented feedback is provided on
selected aspects of performance and/or the outcome of the movement [13]. In fact,
assistance may be seen as a form of augmented feedback that emphasizes performance
on specific aspects of the task.
Overall, these considerations suggest that in order to be effective, assistance should
be highly task-specific. Recent studies indicate that only the task-relevant features of a
movement are explicitly controlled by the nervous system [14]. The remaining degrees
of freedom would be specified through an on-line optimisation process. This would
suggest that assistance should be limited to specific (task-relevant) features of the
movement.
According to this view, assistance would take the general form of a feedback
controller, with proportional (position-dependent) and derivative (speed-dependent)
components. An alternative view [15] is that assistance continuously generates the
forces that the impaired arm cannot provide by itself, so that the movement is as
normal as possible. In this view, assistance would take the form of a controller which
involves an explicit model of the (impaired) arm and its neural control.
The ultimate goal of robot therapy is to emulate as closely as possible a (good)
human physical therapist. This suggests, at least in perspective, analyzing the patterns of patient-therapist interaction from the point of view of the modalities of assistance outlined above. This would also make it possible to identify which combination of
feedback and feed-forward control is actually embodied by human therapists.
Robot-assisted exercise may be seen as a form of cooperative control, in which the
robot and the human subject aim at achieving a common goal. The aim is to gradually
transfer control from the robot to the human.
Regulation of assistance
Recently, a few studies have addressed the way assistive forces affect motor
performance and/or motor learning [16]. When moving under the effect of assistive
forces, provided by a robot agent, humans tend to quickly incorporate these forces in
their motor plan. More specifically, the motor system appears to behave as a ‘greedy’
optimiser, which exploits the assistive forces in order to reduce the amount of
voluntary control (and therefore muscle activation), while keeping the position error
small. This strategy would minimize effort while maintaining the required
performance. As a consequence, during active-assisted exercises (and, even more,
during passive training), a constant-magnitude assistive force would gradually depress
voluntary control; it has been suggested that this would have adverse effects on
recovery. To prevent this, assistance should be reduced to a minimum (assist-as-needed), and continuously regulated as a function of the observed outcome [17].
Ideally, assistance should be adjusted with respect to the current amount of
voluntary control. As the latter is not readily available, most schemes of regulation are
driven by the observed performance; see Figure 2.
Figure 2. Mechanism of assistance regulation. A controller compares the target performance with the actual performance of the robot + patient system and adjusts the degree of assistance accordingly.
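The performance-driven scheme of Figure 2 can be sketched in a few lines (a minimal Python illustration; the target performance value, step size and force bounds are assumptions for the example, not the values used in the studies described below):

```python
def regulate_assistance(force, target_perf, actual_perf,
                        step=0.5, f_min=0.0, f_max=25.0):
    """Assist-as-needed update: raise or lower the assistive force magnitude
    (in N) for the next block, depending on the observed performance."""
    if actual_perf >= target_perf:
        force -= step          # subject is doing well: withdraw assistance
    else:
        force += step          # subject is struggling: provide more support
    return min(max(force, f_min), f_max)

# Example: performance (fraction of targets reached) improves over blocks,
# so the regulator first raises and then gradually withdraws the force.
force = 12.0
for perf in (0.4, 0.6, 0.8, 0.9, 0.95):
    force = regulate_assistance(force, target_perf=0.7, actual_perf=perf)
print(force)  # 11.5
```

The key property is that the force is a function of the observed outcome only, since the amount of voluntary control is not directly measurable.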
Are there ‘optimal’ ways to provide assistance, and to continuously regulate it? If
this is the case, do they depend on the specific task, or do they obey general principles, valid for a wide range of motor learning problems? While optimal solutions have been proposed for simple, specific tasks, like lifting a weight [18, 19], it would be desirable to derive general principles and methods that can (in principle) be applied to
any motor learning/re-learning task.
Recently, Wolbrecht and colleagues [15] proposed an adaptive control scheme, in which a controller negotiates an error-reducing and an effort-reducing component. This makes it possible to keep assistance to a minimum and to automatically adapt it to task performance,
while providing enough assistance to support task completion. This technique does not
explicitly aim at augmenting the degree of voluntary control. Such an increase is
assumed to result from the ability to successfully complete the task.
Proprioceptive training
In stroke survivors, motor impairment is frequently associated with degraded
proprioceptive and/or somatosensory functions [20]. Stroke subjects may have
difficulties with estimating the position of their arm in absence of vision. Moreover,
they may be unable to integrate visual and proprioceptive information. Furthermore,
when performing assistive training they may not be capable of detecting the presence,
magnitude and direction of assistive forces. Therefore, impaired proprioception may
affect the recovery of motor functions [21]. Like motor deficits, proprioceptive deficits
may decrease through repeated exercise [22]. The nervous system uses flexible
strategies in integrating visual and proprioceptive information [23]: when both visual
and kinesthetic information of a limb are available, vision is usually the dominant
source of information [24, 27]. As a consequence of visual dominance [28, 30],
proprioceptive impairment may be masked by vision if the latter is available. This
would suggest that in subjects with both proprioceptive and motor impairment,
assistive exercise might be more effective if at least part of the training were performed
without vision. In fact, recent studies demonstrate that visual feedback is not necessary
for learning novel dynamics [31].
In this context, the contribution of robotic devices to neuromotor rehabilitation
may turn out to be crucial. Moreover, different training conditions - either presence or
absence of vision - may have different degrees of efficacy in robot therapy protocols in
individual stroke patients.
1. Statistical models to assess recovery

A problem with protocols based on variable degrees of assistance for severely impaired
subjects is that the amount of voluntary control (i.e., the performance in absence of
assistance), as well as its change due to exercise, cannot be inferred from trials in
which assistance is removed - in this case, patients would be unable to perform the task
- and is not immediately observable when looking at performance in assisted trials [32].
Moreover, if the training is tailored to individual subjects such that the level of
assistance decreases with improved performance, treatment protocols will tend to differ
widely across subjects, which makes comparisons difficult.
A similar problem occurs if therapy protocols include training with both eyes open
and eyes closed. The effect may be highly variable from subject to subject, depending
on the nature of their impairment. Subjects with impaired proprioception may perform
better in the presence of vision. Subjects with problems in integration of proprioceptive
and visual information may perform better in absence of vision. In these different
situations, visual feedback is likely to have different effects. This highlights the need
for analytic tools that can explore this form of between-subjects variability.
One possible way to address these problems is to use a statistical model which
separately accounts for the effects of exercise, vision and degree of assistance on the
overall performance, while taking into account individual variations. Such a model
would allow for well-defined statistical hypothesis testing (e.g., is the treatment effective?) and analysis of inter-subject variability.
This may be done with a mixed-effects model, with three fixed effect factors plus
an interaction term (session, force, vision, session × vision interaction) and one random
factor (subject), to properly account for the correlations among repeated measures
from the same subject. The deterministic part of the model is defined as:
performance ≈ b0 + b1·session + b2·force + b3·vision + b4·(session·vision)    (1)
where session is the session number (from 0 to max), force is the intensity of the
assistive force (in N), and vision denotes absence (0) or presence (1) of vision.
Model coefficients may be interpreted as follows: (i) b0 is the ‘baseline
performance level’, i.e. the performance at the initial session, with zero assistive force;
this corresponds to the initial degree of voluntary control; (ii) b1 is the between-session
rate of improvement; (iii) b2 is a ‘compliance’ coefficient, measuring the sensitivity of
performance on the assistance level; (iv) b3 is the ‘vision’ component, which indicates
the contribution to the performance provided by presence of vision; (v) b4 is the
‘session × vision’ component, which accounts for the differences in the session effect
that are due to vision. In other words, b4 accounts for the different behaviors, in terms
of between-session improvement, of vision and no-vision trials.
The presence of random factors implies that the above model parameters can be
seen as having a constant component (the same for all subjects), and a random
component (different for each subject), which can be estimated separately.
Testing the significance of the ‘fixed’ components makes it possible to test hypotheses such as whether the therapy produces a significant improvement (this would correspond to
testing for the significance of the ‘session’ effect). If we consider the whole set of
parameters (i.e., fixed plus random), we can look at inter-subject variability. For
instance, we may look at the relationship between the baseline performance (b0) and the
subsequent improvement (b1) in no-vision sessions. Or, we may look at the difference
in baseline performance between vision and no-vision trials (b3) and the corresponding
difference in improvement between the same trials (b4).
For each particular task we need to define a suitable indicator of performance.
Then, the model can be fitted to the data by using a maximum-likelihood procedure [33]; for instance, in the R statistical package, this is done by the ‘lme’ function of the ‘nlme’ library [34]. The fitting procedure provides estimates for the fixed and the random
components of each model coefficient, as well as the corresponding significance scores.
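As a minimal illustration of Eq. (1), the fixed-effects part of the model can be fitted by ordinary least squares on synthetic, noiseless data (a pure-Python sketch; it deliberately ignores the random, per-subject components, for which a proper mixed-effects fit, such as R's lme, is needed, and the coefficient values are invented for the example):

```python
# Least-squares fit of the fixed-effects part of Eq. (1):
# performance ~ b0 + b1*session + b2*force + b3*vision + b4*(session*vision)

def design_row(session, force, vision):
    """Regressors for one observation."""
    return [1.0, float(session), float(force), float(vision),
            float(session * vision)]

def fit_ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    n = len(X[0])
    # Augmented normal-equation matrix [X'X | X'y].
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(n)]
    for col in range(n):                       # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
    b = [0.0] * n                              # back substitution
    for i in reversed(range(n)):
        b[i] = (A[i][n] - sum(A[i][j] * b[j] for j in range(i + 1, n))) / A[i][i]
    return b

# Noiseless synthetic data from known coefficients (illustrative values only):
# 'performance' decreases with sessions, assistance and vision.
true_b = [5.0, -0.4, -0.1, -1.0, 0.2]
X, y = [], []
for session in range(10):
    for force in (0.0, 4.0, 8.0):
        for vision in (0, 1):
            row = design_row(session, force, vision)
            X.append(row)
            y.append(sum(c * x for c, x in zip(true_b, row)))

b_hat = fit_ols(X, y)
print([round(b, 3) for b in b_hat])  # ≈ [5.0, -0.4, -0.1, -1.0, 0.2]
```

On noiseless data the fit recovers the generating coefficients exactly, which makes the interpretation of b0…b4 given above easy to check.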
2. Experiments
We carried out three pilot studies to investigate the potential benefit of active assisted
training in the recovery of arm movements after stroke. The training included an
explicit proprioceptive component. In all cases, subjects performed their movements
under the influence of robot-generated assistive forces.
We focused on chronic stroke survivors, who were initially unable to complete the
required movements with their affected arm without assistance. The inclusion criteria
were chronic conditions (at least 1 year after stroke) and stable clinical conditions for at
least one month before entering the study. The exclusion criteria were the inability to
understand instructions about the exercise protocol, and the presence of other neuro-
cognitive problems.
In all cases, we used an ‘assist-as-needed’ protocol, in which the therapist initially
sets the magnitude of the assistive force provided by the robot. Assistance allows
patients to initiate the movements, but in no way imposes the trajectory, the reaching
time, or the speed profile. Whenever patient performance improves, the force magnitude is reduced in the subsequent blocks of trials, either manually or automatically. Part of
the trials are performed without vision of the arm, so that subjects are forced to rely on
proprioception to estimate the position of their arm and the direction/position of the
target by detecting presence and direction of the assistive force.
All studies use the same robot system, specifically designed for robot therapy and
for the evaluation of motor control and motor adaptation. The robot, Braccio di Ferro (BdF), is a planar manipulandum with 2 degrees of freedom [35]. It has a large planar workspace (an 80×40 cm ellipse) and a rigid parallelogram structure with direct drive of two brushless motors, which provides a low intrinsic mechanical impedance at the end-effector and full backdrivability. Hand trajectory is measured with high resolution
(0.1 mm) through optical encoders, and an impedance controller modulates (from
fractions of 1 N up to 50 N) the force transmitted to the hand. Therefore, motion of the
hand is not imposed but results from the interaction between the forces generated by
the robot and the forces generated by patients’ muscles. In all experiments, subjects sat
in a chair, with their chest and wrist restrained, and grasped the robot handle. A light,
soft support was connected to the forearm to allow low-friction sliding on the
horizontal surface of a table. In this way, only the shoulder and the elbow were allowed
to move, and motion was restricted to the horizontal plane, with no influence of gravity.
The height of the seat was adjusted, so that the arm was kept approximately
horizontal, and its position was also adjusted, in such a way that the farthest targets
could be reached with an almost extended arm. A 19” LCD screen was positioned in
front of the patients at a distance of about 1 m in order to display the positions of the hand and of the targets.
Due to the small size of the subject population, these studies are merely intended
as feasibility studies, aimed at demonstrating the proposed approach and the related
analytical tools.
2.1. Hitting Task
This task [36] focuses specifically on facilitating the active execution of arm extension
movements. This is motivated by the observation that many stroke subjects are unable
to actively perform these movements, particularly in specific directions. In contrast,
wide inward movements are dominated by the flexion pattern that characterizes this
pathology. The task consists of hitting a set of targets, arranged in the horizontal plane
(Figure 3, top) according to three layers: inner (A, 3 targets), middle (B, 3 targets), and
outer (C, 7 targets). Reaching the outer targets requires nearly full extension of the arm.
Target sequences were generated according to the following scheme: A→C→B→A. In
this way, outward movements had to be performed in one step (A→C), whereas inward
movements were performed in two steps (C→B and B→A).
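The block structure can be sketched as follows (Python; the target naming and the reading of 3×3×7 = 63 as the number of target combinations are assumptions for illustration, since the software is not described at this level of detail):

```python
import itertools
import random

A = ["A1", "A2", "A3"]                        # inner layer, 3 targets
B = ["B1", "B2", "B3"]                        # middle layer, 3 targets
C = ["C%d" % i for i in range(1, 8)]          # outer layer, 7 targets

def make_block(seed=0):
    """One block: every (inner, outer, middle) target combination once,
    in random order; each combination is presented as A -> C -> B -> A."""
    combos = list(itertools.product(A, C, B))  # 3 * 7 * 3 = 63 combinations
    random.Random(seed).shuffle(combos)
    return [(a, c, b, a) for (a, c, b) in combos]

block = make_block()
print(len(block))  # 63
```

Each tuple in the block encodes one outward movement (A→C) followed by the two inward steps (C→B, B→A), starting and ending at the same inner target.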
When a target was presented to the subject, the robot generated an assistive force F,
directed toward the target, xT. The assistive force was delivered gradually, with a ramp-and-hold profile R(t) that had a rise time of one second. The force was switched off as
soon as the subject hit the target. The next target was presented after a pause of 1 s.
Assistance also had a speed-dependent component, aimed at improving the interaction
between the subject and the robot. A virtual wall also provided additional haptic
feedback. The force generated by the robot is summarized by Eq. 2:
F(t) = FA · (xT − xH)/‖xT − xH‖ · R(t) − b·vH − kW·(xH − xW)    (2)
where xT is the vector that identifies the target position in the plane, xH and vH are,
respectively, the hand position and speed vectors; b (12 Ns/m) is the viscous coefficient,
and kW (1000 N/m) is the stiffness coefficient of the wall. xW indicates the projection of
hand position on the wall. The difference (xH - xW ) indicates the degree of ‘penetration’
of the hand inside the wall, and is zero outside the wall. The protocol started with a test
phase, during which individual subjects became familiar with the apparatus and in
which a physical therapist selected the minimum force level FA that evoked a
functional response, i.e. a (possibly incomplete) movement in the intended direction.
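A direct transcription of the force law of Eq. (2) might look as follows (a Python sketch in 2-D; the parameter values b = 12 Ns/m and kW = 1000 N/m are those quoted in the text, while modelling the virtual wall as a vertical plane at an assumed position is an illustrative simplification of the actual wall geometry):

```python
import math

def ramp(t, rise_time=1.0):
    """Ramp-and-hold profile R(t): rises linearly from 0 to 1, then holds."""
    return min(max(t / rise_time, 0.0), 1.0)

def assistive_force(t, x_h, v_h, x_t, F_A, b=12.0, k_w=1000.0, x_wall=0.4):
    """Eq. (2): ramped attraction toward the target, viscous damping, and a
    stiff virtual wall that pushes back when the hand penetrates x > x_wall."""
    dx, dy = x_t[0] - x_h[0], x_t[1] - x_h[1]
    dist = math.hypot(dx, dy)
    ux, uy = (dx / dist, dy / dist) if dist > 0.0 else (0.0, 0.0)
    scale = F_A * ramp(t)                      # ramped pull toward the target
    pen = max(x_h[0] - x_wall, 0.0)            # penetration into the wall
    return (scale * ux - b * v_h[0] - k_w * pen,
            scale * uy - b * v_h[1])

# Hand at rest at the origin, target 0.2 m away along x, after the 1 s ramp:
F = assistive_force(t=1.0, x_h=(0.0, 0.0), v_h=(0.0, 0.0),
                    x_t=(0.2, 0.0), F_A=6.0)
print(F)  # (6.0, 0.0): the full assistive force, directed toward the target
```

Note that the force field only sets a direction and magnitude; the resulting hand motion still emerges from the interaction with the subject's own muscle forces.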
One block of trials included repetitions of the A→C→B→A sequence with
different targets in random order, for a total of 3×3×7=63 movements. Each block of
trials was performed either with or without vision. In the latter case, the subjects were
blindfolded, but could still feel the target through proprioception. The first training
session started with two blocks of trials (vision, no-vision), using the same level of force determined in the test session (F1). After a short rest, the therapist evaluated the
level of performance and asked the subject about fatigue. The decision could be 1) to
terminate the session, 2) to continue with the same force level, or 3) to continue with a
reduced force F2 (10-20% less than F1). The procedure was iterated until the decision to
stop was agreed by the patient and the therapist. In following sessions the training
always started at F1 and then, if possible, the level of assistance was decreased. If subjects reached a level of assistance with a force below 4 N, the no-vision blocks were eliminated. The whole training protocol consisted of 10 sessions (1-2 sessions/week, about 1 hour each), plus the initial test session.

Figure 3. The targets are arranged on three layers: A, B, C. The C layer is just in front of a virtual wall. The distance between adjacent layers was 10 cm. A target was considered as reached when its distance from the hand was less than 2 cm.
Nine stroke survivors (2 males, 7 females, age 52±14) participated in this study.
Disease duration was 34±19 months (range 12-76); the majority of strokes were ischemic in nature (7/9). Patient impairment was evaluated by means of the Fugl-Meyer score, limited to the arm section (FMA) [37, 38]. The average FMA score was
15±13 (range 5-41). The average Ashworth score of muscle spasticity [39] was 1.9±0.9
(range 1-3).
2.1.1. Results
An example of a trial in an early and a late phase of training (Subject 5) is depicted in
Figure 3, which shows (middle) the A→C→B→A trajectories and (bottom) the
corresponding time courses of assistive force and hand speed profile. In early sessions, the outward movement (A→C) is
segmented into a sequence of sub-movements. The first sub-movement covers only
part of the total distance, thus leaving a residual error which has to be corrected by
additional movements. The motor performance in late training sessions (Figure 3,
bottom right) suggests a visible improvement. At the same time, the level of robot
assistance could be reduced from 12 N to 6 N; movement duration was shorter, and the
number of sub-movements was reduced. The residual error after the first sub-
movement decreases as well. In the overall population of subjects, the initial level of
assistance ranged between 25 N and 5 N, and was generally higher for patients who
initially had lower Fugl-Meyer scores (arm part).
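Counting sub-movements from the hand-speed profile can be sketched as follows (Python; detecting suprathreshold local maxima is a common approach, but the threshold and the sample profile are assumptions for illustration — the chapter does not specify the exact peak-detection criterion):

```python
def count_submovements(speed, threshold=0.05):
    """Count local maxima of the hand-speed profile (m/s) that exceed a
    threshold. Each suprathreshold peak is taken as one sub-movement."""
    peaks = 0
    for i in range(1, len(speed) - 1):
        if speed[i] > threshold and speed[i - 1] < speed[i] >= speed[i + 1]:
            peaks += 1
    return peaks

# A segmented movement: one large peak followed by two corrective peaks.
speed = [0.0, 0.2, 0.5, 0.3, 0.1, 0.25, 0.1, 0.02, 0.12, 0.04, 0.0]
print(count_submovements(speed))  # 3
```

A smooth, well-recovered movement would yield a single peak, so a decreasing peak count over sessions indicates increasing smoothness.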
To account for the joint effects of session and assistance, we applied the mixed-
effects model (see Eq. 1) to the number of sub-movements observed during the outward phase of each trial. The level of assistance had a significant effect on the number of sub-movements (p=0.0026). This is not surprising; the result merely confirms that assistance has a beneficial effect on performance. The effect of session was also highly
significant (p<0.0001). In fact, we found a negative b1 (session) coefficient (systematic
part): -0.369±0.098 sub-movements per session. This indicates that the observed effect
of session corresponds to a reduction of sub-movements. The model may also be used
to assess the session effect on each individual subject (Figure 4, left). The number of peaks displays a strong negative correlation (correlation coefficient: -0.75) between baseline performance, b0, and the change over sessions, b1: subjects with better initial performance are closer to maximum performance and therefore improve less; however, irrespective of the initial conditions, all subjects have a potential for improvement. With regard to the effect of vision, we found no significant vision or session × vision fixed effects, meaning that the presence of vision did not have a systematic effect across subjects. However, the model allows us to investigate the effect of vision on
individual subjects.

Figure 4. Effect of robot training on the number of sub-movements. Left: Baseline performance vs change over sessions. Improvement is greater in subjects with a greater initial impairment. Right: Different subjects exhibit different impairments with and without vision, but in all cases the effect of training is to equalize their vision-no vision performance. Dots indicate initial performance, lines the change over sessions.

A crucial question is how the different subjects compare in terms
of their initial performance with eyes open or eyes closed. Another question, similar to
the one we asked before for the ‘session’ effect, is whether there is a systematic
relationship between the differential behavior in vision and no-vision baseline behavior
and the differential change in vision and no-vision trials. The former question can be
addressed by comparing, for each subject, the baseline performance with vision (b0+b3)
and without (b0). Figure 4 (right) clearly indicates that some subjects (namely, S1 and
S3) have a better initial performance with eyes closed (data points above the diagonal
line). In contrast, other subjects (S8, S9) have better performance with eyes open (data
points below the diagonal). The remaining subjects have similar performance with both
sensory modalities.
The difference in the baseline performance with and without vision (i.e., parameter
b3) and the relative difference in the performance change over sessions (i.e., parameter
b4) have a strong negative correlation (correlation coefficient: -0.96). This means that
subjects with severe impairments in the eyes closed condition (negative b3) result in a
greater improvement in eyes closed trials (negative b4), and vice versa.
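The gist of this analysis can be illustrated with a simplified sketch: estimating a baseline (b0) and a per-session change (b1) for each subject and correlating the two across subjects. The sketch below uses ordinary least squares per subject on synthetic data, not the mixed-effects model [33, 34] actually employed in the study; all numbers are invented for illustration.

```python
import numpy as np

# Simplified sketch of the analysis behind Figure 4 (left): per-subject baseline
# (b0) and per-session change (b1), then their correlation across subjects.
# Synthetic data only; the study used a mixed-effects model [33, 34].
rng = np.random.default_rng(0)
n_sessions = 10
baselines, changes = [], []
for _ in range(9):                                # nine subjects, as in the study
    b0_true = rng.uniform(5, 15)                  # worse baseline -> more sub-movements
    b1_true = -0.05 * b0_true - 0.1               # more impaired subjects improve more
    sessions = np.arange(n_sessions)
    peaks = b0_true + b1_true * sessions + rng.normal(0, 0.3, n_sessions)
    b1, b0 = np.polyfit(sessions, peaks, 1)       # least-squares slope and intercept
    baselines.append(b0)
    changes.append(b1)

r = np.corrcoef(baselines, changes)[0, 1]         # strongly negative, as in the study
print(f"correlation(b0, b1) = {r:.2f}")
```

With data constructed this way, the recovered correlation is close to the -0.75 reported above: subjects starting with more sub-movements show larger per-session reductions.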
As regards FMA scores, we found a statistically significant change (p = 0.00035,
paired t-test) from 15±13 to 20±13, corresponding to an average 4.8±2.4
improvement. This is in line with previous studies [1], which report an average
improvement of 3.7±0.5. Evaluation of the FMA at follow-up resulted in a substantial
preservation of the improvement (FMA=20±13, no significant difference from that
assessed at the end of treatment). Four subjects even displayed an improvement in their
FMA score. No change was observed in the subjects’ Ashworth score.

2.2. Tracking Task

In this task [40], subjects had to continuously track a visual target moving along a
figure-of-eight trajectory (length = 90 cm, time period = 15 s). The target was
represented visually as a small red circle, and haptically, as an attractive force field
defined by F = K·d, where d is the distance of the hand from the target (Figure 5).
The current position of the hand was continuously displayed (as the picture of a small
car). For each subject, the scale factor, K, was initially selected as the minimum level
capable of inducing the initiation of movement; the range of the assistive force was 3-30
N (from the least to the greatest impairment). The moving target stopped if its distance
from the cursor was greater than 2 cm. The experimental protocol was organized into
blocks of 10 trials each, i.e. 10 repetitions of the figure-of-eight. Within
each training session, two blocks of trials were alternated, with eyes open and eyes
closed. Within each block, half of the trials were clockwise and half were
counterclockwise. One session lasted approximately 45 minutes. At the end of each
block, the robot estimated a performance score, based on the number of stops and the
overall movement duration. If the score exceeded a threshold, the level of assistance
was reduced. Unlike the previous exercise, assistance here is automatically adapted to
the observed performance (see Figure 2).
The therapy cycle included up to 10 sessions (2-3 sessions/week). Improvements
were evaluated with clinical scales (FMA, Ashworth) and movement indicators
(average speed, duration, tracking error, stop time). We used the statistical model
described in Section 1.
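The block-level regulation of assistance described above might be sketched as follows. The scoring weights, performance threshold, and reduction step are illustrative assumptions, not the values implemented in the actual controller; K is treated as a dimensionless gain.

```python
import numpy as np

def assist_force(K, target, hand):
    """Attractive field F = K*d, pulling the hand toward the moving target."""
    return K * (np.asarray(target, dtype=float) - np.asarray(hand, dtype=float))

def block_score(n_stops, duration_s, nominal_duration_s=150.0):
    """Higher is better: penalize stops and excess movement duration.
    The weights and the nominal duration are assumed, for illustration only."""
    return 100.0 - 5.0 * n_stops - max(0.0, duration_s - nominal_duration_s)

def adapt_assistance(K, score, threshold=70.0, step=0.9, K_min=3.0):
    """After each block, reduce the assistive gain K if performance was good enough,
    never dropping below a minimum level."""
    return max(K_min, step * K) if score > threshold else K
```

For example, a block with 2 stops completed in 160 s scores 100 - 10 - 10 = 80, which exceeds the assumed threshold, so a gain of K = 10 would be reduced to 9 for the next block.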
Figure 5. Top: Visual tracking task. Bottom: trajectories at early and late training, for vision (left) and
no-vision trials (right).

Ten chronic hemiparetic subjects participated in this study (3 M, 7 F, age=53±15y,
disease duration=4±2y, Fugl-Meyer score - arm part (FMA): 23±14).

2.2.1. Results
Fig. 5 (bottom) displays changes in the tracking trajectories in a typical patient between
the first and the last session. Statistical analysis revealed a highly significant effect of
session on the mean speed (p<0.0001). The session effect resulted in an improved performance (increased
speed). As regards assistance, we found no significant effects. This is no surprise, as
the expected outcome of assistance regulation is that performance is relatively
insensitive to assistance.
We found significant vision and session × vision effects. The presence of vision
did not have a systematic effect, likely because subjects vary widely in their
level of sensory impairment. However, the statistical model allows us to investigate the
effect of vision on individual subjects. As in the previous experiment, two crucial
questions are: (i) how the different subjects compare in terms of their initial
performance with eyes open or closed, and (ii) whether there is a systematic
relationship between the differential behavior in vision and no-vision baseline behavior
and the differential change in vision and no-vision trials. Question (i) may be addressed
by comparing the baseline performance with and without vision for each subject. We
found that some subjects (S3, S6 and S8) have a better initial performance with eyes
closed, whereas other subjects (S1, S4, S5, S9) perform better with eyes open. The
remaining subjects have similar performance with both sensory modalities.
We found a negative correlation (r= -0.27) between the initial vision/no vision
performance difference and the difference in the vision/no vision improvement. This
means that subjects with an initially more severe impairment with eyes closed showed
a greater improvement in eyes-closed trials, and vice versa. Improved performance is
also reflected in the increased FMA score (from 23±14 to 27±15, corresponding to an
average 3.4±1.9 increase). The level of assistance was reduced on average by 28%.
As in the previous experiment, subjects consistently improve their performance.
Moreover, proprioceptive problems - revealed by a discrepancy between initial
performance with eyes open and closed - tend to reduce over training.

2.3. Bi-manual training

Upper limb robot therapies for stroke hemiparesis primarily focus on the paretic limb,
with unilateral exercises to improve motor control of the shoulder and elbow [1].
However, many daily tasks require the coordination of both hands. This points to a
possible benefit of protocols for upper limb robotic rehabilitation that involve the
cooperation of both hands. Few studies have examined the efficacy of bilateral training
in the recovery of paretic limb movements post-stroke [3, 4, 41]. These studies showed
a positive effect on joint power of the affected shoulder and elbow muscles, although
motor control improved to a lesser extent. In these cases, however, the two arms were
not required to cooperate but, rather, to interact in a master-slave fashion.
Here we propose a robot-mediated cooperative exercise, in which subjects make
forward and backward movements with both hands, while grasping the handles of a
horizontal bar. Subjects are required to keep the horizontal orientation of the bar. In this
way, the plegic and non-plegic limbs are required to coordinate and balance their action
in order to achieve the movement goal. Bi-manual cooperation may be seen as a form
of self-regulated assistance. The non-plegic limb contributes to the forward and
backward translation of the bar, but the contributions of both arms must be balanced in
order to keep the bar horizontal.

Figure 6. The bi-manual task. Left: experimental apparatus. Right: assistive force fields.

For the purpose of this study, a horizontal bar (Figure 6, left) was connected to
the end effector of the robot. Subjects grasped the two handles of the bar,
symmetrically positioned with respect to the central hinge. The distance between the
handles was adjusted to match the distance between the shoulders of the subject. Bar
rotation was not actuated, but bar orientation was measured by a potentiometer.
Subjects sat in front of a computer screen, which displayed a target (a circle with a
2 cm diameter) and a green bar, indicating position and orientation of the bar; see
Figure 6 (right). The task consisted of forward and backward movements (nominal path
length: 20 cm), to be performed by maintaining the bar perpendicular to movement
direction. If bar orientation exceeded a threshold angle (4°), the bar became red.
The robot generated four types of forces (Figure 6, right): 1) an assistive field,
pulling the end effector toward the target. Its magnitude was set as the minimum value
sufficient to promote active movement. This value was gradually decreased while
subjects’ performance improved; 2) a strong resistive elastic field, only active when the
orientation error was greater than 4°; 3) two vertical ‘walls’ that prevented horizontal
movements; 4) a viscous field, which introduced a friction component for the
stabilization of patients’ arms. The task was carried out in two conditions: with eyes
open or eyes closed. In the latter case, subjects had no visual feedback, but the robot
provided the necessary proprioceptive information: target direction and bar unbalance
were denoted by the attractive and resistive fields, respectively. Each block of trials
consisted of 10 repetitions of forward and backward movements, under one of the two
conditions (open and closed eyes). Each session lasted about 30 minutes. The therapy
cycle included up to 5 sessions (2-3 sessions/week). Movement trajectories were then
analyzed.
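As a rough sketch, the four force components described above might be combined as follows. The gains, the wall stiffness, and the choice of anchoring the resistive field to the point where the unbalance was detected are all assumptions for illustration, not the controller's actual parameters.

```python
import numpy as np

def bimanual_force(pos, vel, target, bar_angle_deg, hold_point,
                   K_assist=5.0, K_resist=300.0, K_wall=500.0, B=10.0,
                   angle_thr_deg=4.0):
    """Sum of the four fields; positions are [x, y] in metres, with y the
    forward/backward movement axis. All gains are assumed values."""
    pos, vel, target, hold_point = (np.asarray(a, dtype=float)
                                    for a in (pos, vel, target, hold_point))
    f = K_assist * (target - pos)              # 1) assistive pull toward the target
    if abs(bar_angle_deg) > angle_thr_deg:     # 2) strong elastic pushback while the
        f = f + K_resist * (hold_point - pos)  #    bar is unbalanced (anchor assumed)
    f[0] -= K_wall * pos[0]                    # 3) virtual 'walls' against lateral motion
    f -= B * vel                               # 4) viscous damping for arm stabilization
    return f
```

With the bar balanced, only the assistive pull, walls, and damping act; once the orientation error exceeds the threshold, the strong resistive field dominates and halts forward progress until balance is restored.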
Six patients with chronic stroke (3 M, 3 F) participated in this study. Subjects
ranged in age from 32 to 74 years (58±16y), with an average post-stroke time of
3.5±1.4y. Their Fugl-Meyer score - arm part (FMA) was 14.3±8.6, and their Ashworth
score was 2.2±1.4.

2.3.1. Results
Over sessions, the number of blocks of trials performed by the subjects increased,
while the minimum level of assistive force decreased. Even though the difficulty of the
exercise increased, subjects’ performance improved. Movement duration (Figure 7) and
balance error (defined as the number of times bar orientation exceeded the threshold;
Figure 7) decreased throughout the sessions. At the end of the training sessions, the
task was carried out faster and with better coordination between the two limbs. Figure 7
suggests that backward movements are faster than forward movements, possibly
because the flexion pattern that characterizes this pathology has a more negative
influence and is more difficult to control in backward movements. This is consistent
with the findings in the Hitting task (see above). An improvement was found in both
the vision and no-vision conditions. In the eyes-closed condition, performance tended to
be worse in all subjects except S5. In this situation subjects must rely solely on
proprioception to estimate 1) the position of each arm in the workspace, 2) the position
of one arm with respect to the other, and 3) the effect of one arm movement on the
other arm.
These preliminary results suggest that bi-manual cooperative training may be
beneficial to stroke survivors. Moreover, the results help to justify a full clinical trial
with a control group and greater number of subjects, as well as more rehabilitation
sessions.

3. Discussion

We have presented three examples of active-assisted training protocols, aimed at the


rehabilitation of chronic stroke survivors. These exercises have a number of common
features: (i) problem-solving aspects and a sensory-rich experience; (ii) a mechanism
that regulates the degree of assistance such that it is kept to a minimum; (iii) different
blocks of trials are performed with and without vision, in alternation.
In all three experiments, analysis of performance suggests that all patients
exhibited an increase in the amount of voluntary control, even though some of them
could not achieve complete recovery of autonomous movements.

Figure 7. Performance of a typical stroke subject in the bi-manual task, at the beginning (left) and end
(right) of the training protocol. Top: time course of vertical movements. Middle: bar orientation. Bottom:
Attractive (assistive) and resistive forces.

In particular, we
found that proprioceptive training (i.e., training with closed eyes) is beneficial to
patients with abnormal proprioception. Moreover, training different sensory modalities
separately may improve overall recovery.
These results highlight a number of key points, which will need to be accounted
for when trying to improve the efficacy of robots as therapeutic devices. First, robot
therapy should rely on a better understanding of the mechanisms underlying motor
learning and re-learning. In particular, it is crucial to identify ‘optimal’ ways to provide
assistance and to regulate it. Second, robots may be beneficial to neuromotor
rehabilitation not only for their potential for improving motor control, but also because
they may help to train multi-sensory and sensorimotor integration. Robots are capable
of delivering interactive and repeatable sensorimotor exercises and continuously
monitoring the actual motor performance. They can also be used to simulate new and
‘controlled’ haptic environments. Third, therapy robots should ideally possess an
ability to continuously estimate subjects’ amount of voluntary control and to regulate
assistance accordingly. Ultimately, during recovery subjects would learn from robots,
and robots would learn from patients.

References

[1] G.B. Prange, M.J. Jannink, C.G. Groothuis-Oudshoorn, H.J. Hermens, and M.J. Ijzerman, Systematic
review of the effect of robot-aided therapy on recovery of the hemiparetic arm after stroke, Journal of
Rehabilitation Research and Development 43 (2006), 171-84.
[2] G. Kwakkel, B.J. Kollen, and H.I. Krebs, Effects of Robot-Assisted Therapy on Upper Limb Recovery
After Stroke: A Systematic Review, Neurorehabilitation and Neural Repair (2007).
[3] C.G. Burgar, P.S. Lum, P.C. Shor, and H.F. Machiel Van der Loos, Development of robots for
rehabilitation therapy: the Palo Alto VA/Stanford experience, Journal of Rehabilitation Research and
Development 37 (2000), 663-73.
[4] P.S. Lum, C.G. Burgar, P.C. Shor, M. Majmundar, and M. Van der Loos, Robot-assisted movement
training compared with conventional therapy techniques for the rehabilitation of upper-limb motor
function after stroke, Archives of Physical Medicine and Rehabilitation 83 (2002), 952-9.
[5] H.I. Krebs, N. Hogan, M.L. Aisen, and B.T. Volpe, Robot-aided neurorehabilitation, IEEE
Transactions on Rehabilitation Engineering 6 (1998), 75-87.
[6] B.T. Volpe, H.I. Krebs, N. Hogan, L. Edelsteinn, C.M. Diels, and M.L. Aisen, Robot training enhanced
motor outcome in patients with stroke maintained over 3 years, Neurology 53 (1999), 1874-6.
[7] D.G. Kamper, A.N. McKenna-Cole, L.E. Kahn, and D.J. Reinkensmeyer, Alterations in reaching after
stroke and their relation to movement direction and impairment severity, Archives of Physical Medicine
and Rehabilitation 83 (2002), 702-7.
[8] C.D. Takahashi and D.J. Reinkensmeyer, Hemiparetic stroke impairs anticipatory control of arm
movement, Experimental Brain Research 149 (2003), 131-40.
[9] J.L. Patton and F.A. Mussa-Ivaldi, Robot-assisted adaptive training: custom force fields for teaching
movement patterns, IEEE Transactions on Rehabilitation Engineering 51 (2004), 636-46.
[10] J.L. Patton, M.E. Stoykov, M. Kovic, and F.A. Mussa-Ivaldi, Evaluation of robotic training forces that
either enhance or reduce error in chronic hemiparetic stroke survivors, Experimental Brain Research
168 (2006), 368-383.
[11] K.P. Kording, J.B. Tenenbaum, and R. Shadmehr, The dynamics of memory as a consequence of
optimal adaptation to a changing body, Nature Neuroscience 10 (2007), 779-86.
[12] R.A. Schmidt and T.D. Lee, Motor Control And Learning: A Behavioral Emphasis, Fourth ed.
Champaign, Illinois: Human Kinetics, 2005.
[13] E. Todorov, R. Shadmehr, and E. Bizzi, Augmented Feedback Presented in a Virtual Environment
Accelerates Learning of a Difficult Motor Task, Journal of Motor Behavior 29 (1997), 147-158.
[14] E. Todorov and M.I. Jordan, Optimal feedback control as a theory of motor coordination, Nature
Neuroscience 5 (2002), 1226-35.
[15] E.T. Wolbrecht, V. Chan, D.J. Reinkensmeyer, and J.E. Bobrow, Optimizing Compliant, Model-Based
Robotic Assistance to Promote Neurorehabilitation, IEEE Transactions on Rehabilitation Engineering
(2008).
[16] J.L. Emken, R. Benitez, A. Sideris, J.E. Bobrow, and D.J. Reinkensmeyer, Motor adaptation as a
greedy optimization of error and effort, Journal of Neurophysiology 97 (2007), 3997-4006.
[17] J.L. Emken, R. Benitez, and D.J. Reinkensmeyer, Human-robot cooperative movement training:
learning a novel sensory motor transformation during walking with robotic assistance-as-needed,
Journal of NeuroEngineering and Rehabilitation 4 (2007), 8.
[18] D. Aoyagi, W.E. Ichinose, S.J. Harkema, D.J. Reinkensmeyer, and J.E. Bobrow, A robot and control
algorithm that can synchronously assist in naturalistic motion during body-weight-supported gait
training following neurologic injury, IEEE Transactions on Rehabilitation Engineering 15 (2007), 387-
400.
[19] D.J. Reinkensmeyer, E. Wolbrecht, and J. Bobrow, A Computational Model of Human-Robot Load
Sharing during Robot-Assisted Arm Movement Training after Stroke, Conference Proceedings - IEEE
Engineering in Medicine and Biology Society 1 (2007), 4019-23.
[20] S. Tyson, M. Hanley, J. Chillala, A.B. Selley, and R.C. Tallis, Sensory Loss in Hospital-Admitted
People With Stroke: Characteristics, Associated Factors and Relationship With Function,
Neurorehabilitation and Neural Repair, 2007.
[21] L.M. Carey, T.A. Matyas, and L.E. Oke, Sensory loss in stroke patients: effective training of tactile and
proprioceptive discrimination, Archives of Physical Medicine and Rehabilitation 74 (1993), 602-11.
[22] S. Dechaumont-Palacin, P. Marque, X. De Boissezon, E. Castel-Lacanal, C. Carel, I. Berry, J. Pastor,
J.F. Albucher, F. Chollet, and I. Loubinoux, Neural Correlates of Proprioceptive Integration in the
Contralesional Hemisphere of Very Impaired Patients Shortly After a Subcortical Stroke: An fMRI
Study, Neurorehabil Neural Repair (2007).
[23] S.J. Sober, and P.N. Sabes, Flexible strategies for sensory integration during motor planning, Nature
Neuroscience 8 (2005), 490-7.
[24] J.R. Flanagan, and A.K. Rao, Trajectory adaptation to a nonlinear visuomotor transformation: evidence
of motion planning in visually perceived space, Journal of Neurophysiology 74 (1995), 2174-8.
[25] D.M. Wolpert, Z. Ghahramani, and M.I. Jordan, Are arm trajectories planned in kinematic or dynamic
coordinates? An adaptation study, Experimental Brain Research 103 (1995), 460-70.
[26] D.M. Wolpert, Z. Ghahramani, and M.I. Jordan, Perceptual distortion contributes to the curvature of
human reaching movements, Experimental Brain Research 98 (1994), 153-6.
[27] J.B. Smeets, J.J. van den Dobbelsteen, D.D. de Grave, R.J. van Beers, and E. Brenner, Sensory
integration does not lead to sensory calibration, Proceedings of the National Academy of Sciences U S A
103 (2006), 18781-6.
[28] M. Botvinick, and J. Cohen, Rubber hands 'feel' touch that eyes see, Nature 391 (1998), 756.
[29] R.J. van Beers, A.C. Sittig, and J.J. Denier van der Gon, The precision of proprioceptive position sense,
Experimental Brain Research 122 (1998), 367-77.
[30] R.J. van Beers, A.C. Sittig, and J.J. Denier van der Gon, How humans combine simultaneous
proprioceptive and visual position information, Experimental Brain Research 111 (1996), 253-61.
[31] D.W. Franklin, U. So, E. Burdet, and M. Kawato, Visual feedback is not necessary for the learning of
novel dynamics, PLoS ONE 2 (2007), e1336.
[32] R. Colombo, F. Pisano, S. Micera, A. Mazzone, C. Delconte, M.C. Carrozza, P. Dario, and G. Minuco,
Robotic techniques for upper limb evaluation and rehabilitation of stroke patients, IEEE Transactions
on Rehabilitation Engineering 13 (2005), 311-24.
[33] N.M. Laird, and J.H. Ware, Random-Effects Models for Longitudinal Data, Biometrics 38 (1982), 963-
974.
[34] D.M. Bates, and J.C. Pinheiro, lme and nlme - Mixed-Effects Methods and Classes for S and S-PLUS,
Version 3.0., Madison: Bell Labs, Lucent Technologies and University of Wisconsin, 1998.
[35] M. Casadio, P.G. Morasso, V. Sanguineti, and V. Arrichiello, Braccio di Ferro: a new haptic
workstation for neuromotor rehabilitation, Technology Health Care 13 (2006), 1-20.
[36] M. Casadio, P. Morasso, V. Sanguineti, and P. Giannoni, Impedance controlled, minimally assistive
robotic training of severely impaired hemiparetic patients, 1st IEEE / RAS-EMBS International
Conference on Biomedical Robotics and Biomechatronics, Pisa, Italy, 2006.
[37] D.J. Gladstone, C.J. Danells, and S.E. Black, The fugl-meyer assessment of motor recovery after stroke:
a critical review of its measurement properties, Neurorehabilitation and Neural Repair 16 (2002), 232-
40.
[38] T. Platz, C. Pinkowski, F. van Wijck, I.H. Kim, P. di Bella, and G. Johnson, Reliability and validity of
arm function assessment with standardized guidelines for the Fugl-Meyer Test, Action Research Arm
Test and Box and Block Test: a multicentre study, Clinical Rehabilitation 19 (2005), 404-11.
[39] R.W. Bohannon, and M.B. Smith, Interrater reliability of a modified Ashworth scale of muscle
spasticity, Physical Therapy 67 (1987), 206-7.
[40] E. Vergaro, M. Casadio, V. Squeri, P. Giannoni, P. Morasso, and V. Sanguineti, Robot-therapy of
hemiparetic patients with a minimally assistive strategy for tracking movements, 3rd International
Symposium on Measurement, Analysis, and Modeling of Human Functions, Lisbon, Portugal, 2007.
[41] S. Hesse, H. Schmidt, C. Werner, and A. Bardeleben, Upper and lower extremity robotic devices for
rehabilitation and for studying motor control, Current Opinion in Neurology 16 (2003), 705-10.
Advances in Wearable Technology for
Rehabilitation
Paolo BONATO a,b
a Department of Physical Medicine and Rehabilitation, Harvard Medical School,
Spaulding Rehabilitation Hospital, Boston MA, USA
b Harvard-MIT Division of Health Science and Technology, Cambridge MA, USA

Abstract. Assessing the impact of rehabilitation interventions on the real life of
individuals is a key element of the decision-making process required to choose a
rehabilitation strategy. In the past, therapists and physicians inferred the
effectiveness of a given rehabilitation approach from observations performed in a
clinical setting and self-reports by patients. Recent developments in wearable
technology have provided tools to complement the information gathered by
rehabilitation personnel via patient’s direct observation and via interviews and
questionnaires. A new generation of wearable sensors and systems has emerged
that allows clinicians to gather measures in the home and community settings that
capture patients’ activity level and exercise compliance, the effectiveness of
pharmacological interventions, and the ability of patients to perform specific motor
tasks efficiently. Available unobtrusive sensors allow clinical personnel to
monitor patients’ movement and physiological data such as heart rate, respiratory
rate, and oxygen saturation. Cell phone technology and the widespread access to
the Internet provide means to implement systems designed to remotely monitor
patients’ status and optimize interventions based on individual responses to
different rehabilitation approaches. This chapter summarizes recent advances in
the field of wearable technology and presents examples of application of this
technology in rehabilitation.

Keywords. wearable technology, wireless communication, e-textile, telemedicine

1. The State of the Art in Wearable Technology

Significant progress in computer technologies, solid-state micro sensors, and
telecommunication has advanced the possibilities for individual health monitoring
systems. A variety of compact wearable sensors are available today and we expect that
more will be available in the near future. This technology has allowed researchers and
clinicians to pursue applications in which individuals are monitored in the home and
community settings [1].
Figure 1 shows a schematic representation of a wearable system as it can be
envisioned based on currently available wearable technology. A combination of
wireless sensors and sensors embedded in the person’s garments (e.g. a sensor suit) are
utilized to monitor data such as heart rate, respiratory rate, and movements of the limbs.
A data logger (e.g. a personal digital assistant, a cell phone) is utilized to store data and
transmit data and alerts to a remote clinical center or a caregiver according to the
application of interest. Clinical personnel have remote access to the monitoring system,
Figure 1. Schematic representation of a wearable system to monitor individuals in the home and community
settings.
for instance, via a web-based application. The system is recharged at night via a
docking station that allows for fast communication with a remote clinical center by
means of an access point, thus facilitating transfer of raw data and the performance of
maintenance tasks. This technology is bound to enable new telemedicine applications
[2] and to facilitate the implementation of the medical home concept [3].
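The data-logger role in this architecture can be sketched as a simple buffering component that stores incoming samples, raises alerts on out-of-range values, and periodically hands batches to an uploader (e.g. a cellular link to the remote clinical center). The alert thresholds, batch size, and interfaces below are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DataLogger:
    """Minimal sketch of the data logger in Figure 1; all parameters are assumed."""
    upload: Callable[[list], None]        # e.g. send a batch to the clinical center
    alert: Callable[[str], None]          # e.g. notify a caregiver
    batch_size: int = 100                 # samples per upload batch (assumed)
    hr_limits: tuple = (40, 180)          # heart-rate alert range in bpm (assumed)
    buffer: list = field(default_factory=list)

    def add_sample(self, sensor: str, value: float) -> None:
        """Store one sample; alert on out-of-range heart rate; upload full batches."""
        self.buffer.append((sensor, value))
        if sensor == "heart_rate" and not (self.hr_limits[0] <= value <= self.hr_limits[1]):
            self.alert(f"heart rate out of range: {value:.0f} bpm")
        if len(self.buffer) >= self.batch_size:
            self.upload(self.buffer)
            self.buffer = []
```

In a real system the uploader and alert channels would be backed by the cell-phone or docking-station links described above; here they are plain callbacks so the buffering logic can be exercised in isolation.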
Wearable devices can be divided into two categories: 1) garments with embedded
sensors and 2) body sensor networks. The idea of embedding sensors into garments
was first pursued by a research team at Georgia Institute of Technology led by
Dr. Sundaresan Jayaraman [4, 6]. Research work by this team eventually led to a
product referred to as Smart Shirt (Figure 2). The Smart Shirt is a wearable health
monitoring system by Sensatex, Inc., USA (http://www.sensatex.com/) that monitors
heart rate, body temperature, and motion of the trunk. The monitoring system is
designed as an undershirt with various sensors embedded within it. Data are transmitted
to a pager-size device attached to the waist portion of the shirt where it is sent via a
wireless gateway to the Internet and routed to a data server where the actual monitoring
occurs. The Smart Shirt incorporates a patented technology named “Wearable
Motherboard” which incorporates optical fibers, a data bus, a microphone, other
sensors, and a multifunction processor, all embedded in a basic textile grid that can be
laundered.
Figure 2. The Smart Shirt by Sensatex, Inc., USA, a garment with embedded sensors for physiological
function monitoring. (Reproduced with permission)

Figure 3. The MIThril is a platform of sensors and wearable computing technology to gather data in the field.
A description of the platform can be found at http://www.media.mit.edu/wearables/mithril/. (Reproduced
with permission)
Following the pioneering work by the research team at Georgia Institute of
Technology, several companies and research groups pursued the development of
garments with embedded sensors. An example of the technology developed by
companies that manufacture wearable systems is the LifeShirt, by VivoMetrics. The
LifeShirt is a comfortable, washable “shirt” that contains numerous embedded sensors
that continuously monitor 30+ physiological signs of sickness and health. The list of
physiologic functions it monitors includes: ECG, respiration, BP, PO2, and posture.
Data from the sensors are recorded to a small belt-worn recorder, where they are
encrypted and sent to the VivoMetrics Data Center by cellular telecommunication. There they
are decrypted, scanned for artifacts, and posted in a database from which summary reports can
be generated for the client.
In the research arena, several groups distinguished themselves for the originality of
their contributions. Among others, Dr. Sandy Pentland at Massachusetts Institute of
Technology developed the MIThril, [7, 9] an architecture that combines hardware and
software platforms as shown in Figure 3. The hardware combines computational,
sensing, and networking components in a clothing-integrated design. The software is a combination
of user interface elements and machine-learning tools built on the Linux operating
system. This architecture has been used on several mobile platforms for personal digital
assistants and cell phones to demonstrate that active pattern analysis of face-to-face
conversations and interactions within the workplace has the potential to improve the
functioning of the organization [10]. Researchers in Dr. Pentland’s laboratory are
currently exploring potential clinical applications of this technology.
Other examples of garments designed to monitor physiological functions are those
developed by Dr. Danilo De Rossi’s group at University of Pisa [11, 13] and Dr. Harry
Asada at Massachusetts Institute of Technology [14, 15]. These garments specifically
address the need for monitoring patients’ movements. However, as with all garment-
based solutions, they require patients to wear a special clothing item. While we believe
that this approach is necessary for very long-term monitoring (i.e. months to years), we
see the use of miniature wireless sensors as more practical when monitoring needs to
be achieved over shorter periods of time (i.e. a few days to a week).
Seminal work was performed toward the development of wearable wireless
sensors at the NASA’s Jet Propulsion Laboratory, Pasadena, CA where researchers
attempted to implement prototypes of sensor patches to record physiological data over
extended periods of time (http://findarticles.com/p/articles/mi_qa3957/is_/ai_n8884077).
These non-invasive sensors were miniature biotelemetric units resembling adhesive
bandages (Figure 4). They were designed to communicate with a hand-held unit (i.e.
readout unit). The patches contained a noninvasive microelectromechanical sensor
integrated with electronic circuitry that transmitted a radio signal modulated by the
processed sensor output. The patch did not contain a battery. Instead, it contained a
circuit for extracting power from an incident radio beam that was present during
readout. For readout, a hand-held radio transceiver was positioned near the patch; the
transceiver transmitted the radio beam to the patch circuitry and received the modulated
radio signal transmitted from the patch. These sensors were proposed for use in
measuring temperature, heart rate, blood pressure, and other physiological parameters.
Following this exciting work, NASA initiated the Sensors 2000! program to develop
advanced sensors, biotelemetry systems, and data systems technology, including the
Sensor Pill, which can be swallowed to monitor the health status of the alimentary canal
and other organ systems.
Figure 4. Wearable sensor patch designed by Gisela Lin and William Tang as part of a project carried out at
NASA's Jet Propulsion Laboratory. (Reproduced with permission)
In the business sector, several companies took inspiration from the seminal work
achieved by researchers at NASA’s Jet Propulsion Laboratory and developed systems
based on body sensor networks for commercialization. Among others, FitLinxx
(http://www.fitlinxx.com/brand.htm) has recently put on the market an ultra low-power
wireless personal area network that provides two-way radio communication that can
control and respond to sensors and actuators, as well as provide wireless connectivity
to the Internet via devices such as a cell phone, personal digital assistant or PC. Based
on this platform, FitLinxx has developed products for health monitoring that integrate
heart rate, blood pressure, a pedometer, and body weight data gathered using a special
weight scale. Similar products are offered by BodyMedia Inc.
(http://www.bodymedia.com/). BodyMedia’s products are centered on the SenseWear
Armband, a sleek, wireless, wearable body monitor that enables continuous
physiological and lifestyle data collection outside the lab environment. Worn on the
back of the upper arm, it utilizes a unique combination of sensors and technologies that
allows one to gather raw physiological data such as movement, heat flow, skin
temperature, near body ambient temperature, heart rate, and galvanic skin response.
The SenseWear Armband contains a 2-axis accelerometer; temperature sensors for
monitoring heat flux, skin temperature, and near-body ambient temperature; a galvanic
skin response sensor; and a receiver for heart rate data transmitted by a Polar monitor
system. The SenseWear Armband can be worn continuously for up to 3 days without
recharging the battery (at default sampling rate settings), and stores up to 5 days of
continuous physiological and
lifestyle data. Research software is available to offer audio and tactile feedback for
reminders, targets, and alerts. Its ability to provide 2-way communication makes the
SenseWear Armband a hub for collecting data from other third-party products such as a
weight scale or a blood pressure cuff. The manufacturer promotes the product as
“eliminating the need for researchers and clinicians to administer and apply
cumbersome sensors to their research subjects.”
The research and development work summarized above is bound to facilitate the
development of new clinical applications in telemedicine [2]. However, a fundamental
limitation that hinders the application in rehabilitation of commercially available
systems, and of the majority of the body sensor networks developed in the research
field, is that most of them are only suitable for managing data gathered at a
sampling rate of a few Hz per channel. The ideal sampling rate for applications in
rehabilitation ranges from 100 Hz for biomechanical data to 1 kHz for surface EMG
data. To our knowledge, only two research groups have focused on the development of
body sensor networks with the performance required by clinical applications in
rehabilitation. Dr. Emil Jovanov at the University of Alabama [16] and Dr. Matt Welsh at
Harvard University [17, 18] developed body sensor networks that provide adequate
performance for application in rehabilitation. These groups developed complex data
management architectures, with buffering and with data transmission occurring both in
real time and offline, to meet the specifications of applications in rehabilitation. The
approaches developed by these researchers achieve high bandwidth while remaining
compatible with the low-power consumption specification that must be met to
implement a wearable system for monitoring patients over a period of days.
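The real-time/offline buffering strategy mentioned above can be sketched in a few lines. The class, parameter names, and block-averaging summary below are our own illustrative inventions, not the architecture used by either research group:

```python
from collections import deque

class SensorNodeBuffer:
    """Toy buffer for one sensor channel sampled at a high rate (e.g. 100 Hz).

    Raw samples are kept in a bounded ring buffer for later offline upload,
    while a low-rate summary (the mean of each block) is emitted in real time,
    keeping the radio duty cycle, and hence power consumption, low.
    """

    def __init__(self, block_size=100, capacity=100_000):
        self.block_size = block_size        # samples per real-time summary
        self.raw = deque(maxlen=capacity)   # bounded raw-data store
        self.summaries = []                 # stand-in for the real-time radio link
        self._block = []

    def add_sample(self, value):
        self.raw.append(value)
        self._block.append(value)
        if len(self._block) == self.block_size:
            # transmit one summary value per block in real time
            self.summaries.append(sum(self._block) / self.block_size)
            self._block = []

    def flush_raw(self):
        """Simulate the offline bulk transfer of the buffered raw data."""
        data, self.raw = list(self.raw), deque(maxlen=self.raw.maxlen)
        return data

buf = SensorNodeBuffer(block_size=100)
for i in range(1000):                        # 10 s of data at 100 Hz
    buf.add_sample(float(i % 10))
print(len(buf.summaries), len(buf.raw))      # prints "10 1000"
```

Transmitting only one summary value per block keeps the radio active for a small fraction of the time, while the full-rate raw data remain available for bulk transfer when power and bandwidth permit.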
Researchers and clinicians with a focus on rehabilitation are demonstrating a
growing interest in the adoption of wearable technology when monitoring individuals
in the home and community settings is relevant to optimize outcomes of rehabilitation
interventions. Three major categories of application of wearable technology are
emerging: 1) monitoring of motor activities, via pedometers or via sensor networks
that go beyond simple pedometers; 2) medication titration and, more generally,
applications in which the severity of symptoms (e.g. in motor disorders) is assessed
and a clinical intervention is adjusted accordingly; and 3) assessment of the outcomes
of therapeutic interventions (i.e. physical and occupational therapy), with potential for
gathering information suitable to adjust the intensity and modality of the prescribed
therapeutic exercises. The following three sections summarize recent work by our team
and others in the above-described three areas of development of new applications of
wearable technology in rehabilitation.

2. Monitoring Motor Activities in Patients with COPD

Chronic obstructive pulmonary disease (COPD) is a major public health problem.
COPD is currently the fourth leading cause of death in the world [19], and is projected
to rank fifth in 2020 as a worldwide burden of disease [20]. Disability, hospitalizations,
and medication costs associated with this disease account for 15 billion dollars in lost
revenues and health care expenditures annually, an estimated 16 % of the national
health care budget [21]. Despite the increasing numbers of patients with COPD, there
has been little advancement in the ability of healthcare providers and clinical
researchers to monitor patients with COPD. The forced expiratory volume in 1 s
(FEV1), long thought to be the gold standard, has been shown to correlate poorly with
other measures of disease status and does not predict mortality and resource utilization.
In the research setting, the FEV1 takes too long to change to be an efficient and
meaningful outcome measure. Recent work by our team [22] and others [23, 24] has
focused on the hypothesis that measurement of cumulative free-living physical activity
with wearable technology in the patient’s home environment combined with
physiological data collection (heart rate, respiratory rate, and oxygen saturation) can
complement current clinical assessments of disease status and provide improved
monitoring of COPD patients.
In recent work by our team [22] we studied 6 males and 6 females, mean age 68 ±
11 years, with severe COPD. Mean ± s.d. FEV1 was 0.96 ± 0.51 L (34 ± 17%
predicted) and FVC 2.57 ± 0.91 L (70 ± 18% predicted). At the time of monitoring, the
average six-minute walking distance was 1170 ± 304 feet. In Part I of this pilot study,
our aim was to automatically identify three exercises comprising the aerobic portion of
the pulmonary rehabilitation exercise program from a continuous data record: walking
on a treadmill, cycling on a stationary bike, and cycling of the upper extremities on an
arm ergometer. Identification was based on the output of a neural network trained with
examples of accelerometer data corresponding to each of the exercise conditions. We
demonstrated that accurate and reliable identification of the exercise activities could be
achieved, thus enabling monitoring of patients’ compliance with a prescribed exercise
regimen outside of the rehabilitation environment. For a misclassification rate of 5%,
the sensitivity of the classifier was remarkably high, ranging from 93 to 98% across
subjects. Details concerning the study are provided in Sherrill et al [25] and Moy et al
[26].
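As an illustration of this classification approach, the sketch below trains a small neural network on synthetic two-dimensional "features" standing in for accelerometer-derived features of the three exercise conditions. The data, network size, and learning rate are invented for illustration; the study used its own feature set and network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for accelerometer-derived feature vectors: one cluster
# per exercise (treadmill, stationary bike, arm ergometer). The two features
# and the class separation are invented for illustration only.
means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in means])
y = np.repeat(np.arange(3), 50)
Y = np.eye(3)[y]                       # one-hot targets

# One-hidden-layer neural network trained with plain gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 3)); b2 = np.zeros(3)
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)           # hidden layer
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)  # softmax class probabilities
    G = (P - Y) / len(X)               # softmax cross-entropy gradient
    dW2, db2 = H.T @ G, G.sum(axis=0)
    dH = (G @ W2.T) * (1.0 - H ** 2)   # back-propagate through tanh
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    W1 -= 0.5 * dW1; b1 -= 0.5 * db1
    W2 -= 0.5 * dW2; b2 -= 0.5 * db2

accuracy = float((P.argmax(axis=1) == y).mean())
print(f"training accuracy: {accuracy:.2f}")
```

On well-separated clusters such as these, a network this small is sufficient; the practical difficulty in the study lay in obtaining representative training examples from each patient, not in the classifier itself.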
In Part II of this preliminary investigation, we extended the protocol to include
typical activities such as climbing stairs, walking indoors, doing household chores, etc.
Identifying these types of activities is relevant for assessing patients’ overall mobility.
We collected data by providing patients with a script as described in Sherrill et al
[22]. Due to the physical limitations of the individuals with COPD, it was not feasible to
gather more than a minute of data for tasks such as ascending stairs. Since the number
of features derived from the accelerometer data far exceeded the number of data
segments available, a neural network approach was not considered to be appropriate.
We envisioned therefore compiling data from a large group of patients and performing
all identifications based on an existing database of examples rather than custom-
training the classifier for each individual. In order for this to be a workable solution, the
variability across tasks must exceed the variability within tasks due to different
individuals. To show that this was a feasible approach, we sought ways to visualize the
relationships among clusters of data points corresponding to the conditions of interest.
We combined principal components analysis (PCA) and Sammon’s mapping. First, a
PCA transformation was applied, and the first 15 PCs (accounting for 90% of the total
variance) were retained. Then, the Sammon’s map was computed on the transformed
data. Results were viewed as a scatter plot, color-coded by task as shown in Figure 5
utilizing a gray scale. A clear division is evident among tasks. Techniques to assess the
“quality” of the clusters were then utilized as reported in Sherrill et al [22], thus
allowing us to conclude that the motor activities of interest can be classified based on
accelerometer data recorded from upper and lower extremities.
[Figure 5 scatter plot; cluster labels: fold laundry, arm ergometer, sweep floor, climb stairs, treadmill, stationary bike, walk in hallway]
Figure 5. Clustering of features derived from accelerometer data recorded while patients with COPD
performed various motor tasks. Results are plotted for 7 tasks that were performed by the 5 patients who
participated in the study. Each task is represented by 20 randomly selected data points per patient that were
projected in two dimensions using the Sammon’s map algorithm. Each task occupies a distinct subregion of
the data space, forming distinct clusters in each region. Cluster positions are suggested by circles overlaid on
the plots. Axes are the result of abstract transformations on normalized data and therefore units are not shown.

3. Medication Titration in Individuals with Parkinson’s Disease

Parkinson’s disease is the most common cause of movement disorder, affecting about
3% of the population over the age of 65 years and more than 500,000 US residents. The
characteristic motor features of Parkinson’s disease include the development of rest
tremor, bradykinesia (i.e. slowness of movement), rigidity (i.e. resistance to externally
imposed movements), and impairment of postural balance. The primary biochemical
abnormality in Parkinson’s disease is deficiency of dopamine due to degeneration of
neurons in the substantia nigra pars compacta. Current therapy of Parkinson’s disease is
based primarily on augmentation or replacement of dopamine, using the biosynthetic
precursor levodopa or other drugs that activate dopamine receptors [27, 28]. These
therapies are often successful for some time in alleviating the abnormal movements,
but most patients eventually develop motor complications as a result of these
treatments [29, 30]. These complications include wearing off, the abrupt loss of
efficacy at the end of each dosing interval, and dyskinesias, involuntary and sometimes
violent writhing movements. Wearing off and dyskinesias produce substantial
disability, and frequently prevent effective therapy of the disease [31-33].
Currently available tools for monitoring and managing motor fluctuations are quite
limited [34, 35]. In clinical practice, information about motor fluctuations is usually
obtained by asking patients to recall the number of hours of ON (i.e. when medications
effectively attenuate tremor) and OFF time (i.e. when medications are not effective)
they have experienced in the recent past. Figure 6 shows a schematic representation of
a motor fluctuation cycle (i.e. interval between two medication intakes) and the
occurrence of dyskinesia. Dyskinetic movements are observed at certain points of the
cycle. Patients are asked to report the duration of these symptoms in terms of percent of
awake time spent in each state. This retrospective approach is formalized in Subscale
Four of the Unified Parkinson’s Disease Rating Scale (UPDRS) [36] referred to as
“Complications of Treatment”. This kind of self-report is subject to both perceptual
bias (e.g. patients often have difficulty distinguishing dyskinesia from other symptoms)
and recall bias. Another approach is the use of patient diaries, which does improve
reliability by recording symptoms as they occur, but does not capture many of the
features that are useful in clinical decision-making [37]. In clinical trials of new
therapies, both the diary-based approach [37] as well as extended direct observations of
the patients in a clinical care setting [38] have been used, but both capture only a small
portion of the patient’s daily experience and are burdensome for the subjects.
Based on these considerations, our team [39] and others [40] have developed
methods that rely on wearable technology to monitor longitudinal changes in the
severity of symptoms and motor complications in patients with Parkinson’s disease. In
our own study, we recruited twelve individuals, ranging in age from 46 to 75 years,
with a diagnosis of idiopathic Parkinson’s disease (Hoehn & Yahr stage 2.5 to 3, i.e.
mild to moderate bilateral disease) [36]. Subjects delayed their first medication intake
in the morning so that they could be tested in a “practically-defined OFF” state
(baseline trial). This approach is used clinically to observe patients during their most
severe motor symptoms. Subjects were instructed to perform a series of standardized
motor tasks utilized in clinically evaluating patients with Parkinson’s disease.
Accelerometer sensors positioned on the upper and lower extremities were used to
gather movement data during performance of the standardized series of motor tasks
mentioned above. The study focused on predicting tremor, bradykinesia, and
dyskinesia based on features derived from accelerometer data. Raw accelerometer data
were high-pass filtered with a cutoff frequency of 1 Hz to remove gross changes in the
orientation of body segments [41]. An additional filter with appropriate characteristics
was applied to isolate the frequency components of interest for estimating each
symptom or motor complication. Specifically, the time series were band-pass filtered
with bandwidth 3-8 Hz for the analysis of tremor, and they were low-pass filtered with
a cut-off frequency of 3 Hz for the analysis of bradykinesia and dyskinesia. All the
filters were implemented as IIR filters based on an elliptic design. The accelerometer
time series were segmented using a rectangular window randomly positioned within
the recordings of each motor task [42]. Features were extracted from 30 such data
segments (i.e. epochs) per motor task from the recordings of each subject during each
trial. Five different types
of features were estimated from accelerometer data recorded from different body
segments. The features were chosen to represent characteristics such as intensity,
modulation, rate, periodicity, and coordination of movement. We implemented Support
Vector Machines to predict clinical scores of the severity of Parkinsonian symptoms
and motor complications. Our results demonstrated that an average prediction error not
exceeding a few percentage points can be achieved in the prediction of Parkinsonian
symptoms and motor complications from wearable sensor data. Specifically, average
prediction error values were 3.5% for tremor, 5.1% for bradykinesia, and 1.9% for
dyskinesia [43].
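The filtering chain described above can be sketched with standard signal-processing tools. The chapter specifies the elliptic IIR design and the cutoff frequencies; the filter order, ripple parameters, sampling rate, and test signal below are our own illustrative choices:

```python
import numpy as np
from scipy import signal

fs = 100.0                                  # assumed sampling rate (Hz)

# Elliptic IIR filters: high-pass at 1 Hz to remove slow orientation changes,
# band-pass at 3-8 Hz for tremor, low-pass at 3 Hz for bradykinesia/dyskinesia.
# Filter order (4) and ripple values (0.5 dB passband, 40 dB stopband) are
# illustrative; the chapter specifies only the cutoffs and the elliptic design.
hp = signal.ellip(4, 0.5, 40.0, 1.0, btype="highpass", fs=fs, output="sos")
bp = signal.ellip(4, 0.5, 40.0, [3.0, 8.0], btype="bandpass", fs=fs, output="sos")
lp = signal.ellip(4, 0.5, 40.0, 3.0, btype="lowpass", fs=fs, output="sos")

# Synthetic accelerometer trace: slow postural drift plus a 5 Hz tremor.
t = np.arange(0.0, 10.0, 1.0 / fs)
drift = 0.5 * np.sin(2.0 * np.pi * 0.2 * t)
tremor = np.sin(2.0 * np.pi * 5.0 * t)
acc = drift + tremor

detrended = signal.sosfiltfilt(hp, acc)        # orientation changes removed
tremor_band = signal.sosfiltfilt(bp, detrended)
slow_band = signal.sosfiltfilt(lp, detrended)  # input for bradykinesia features

rms = lambda x: float(np.sqrt(np.mean(x[100:-100] ** 2)))  # skip filter edges
print(f"tremor-band RMS: {rms(tremor_band):.2f}")  # close to 1/sqrt(2)
```

The zero-phase (forward-backward) filtering used here avoids the phase distortion of a single IIR pass, which matters when features such as inter-limb coordination are computed from the filtered time series.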
[Figure 6 diagram; labels: Levodopa Intake, OFF, ON, Wearing-Off, and Onset / Peak-Dose / End-of-Dose Dyskinesia]
Figure 6. Example of the cyclical pattern of motor abnormalities observed in patients with Parkinson’s
disease for dyskinesia during a motor fluctuation cycle.

4. Assessment of Rehabilitation Interventions in Patients Post Stroke

More than 700,000 people are affected by stroke each year in the United States [44].
Strokes affect a person’s cognitive, language, perceptual, sensory, and motor
abilities [45]. More than 1,100,000 Americans have reported difficulties with
functional limitations following stroke [46]. Recovery from stroke is a long process
that continues beyond the hospital stay and into the home setting. The rehabilitation
process is guided by clinical assessments of motor abilities, which are expected to
improve over time in response to rehabilitation interventions. Telerehabilitation has the
potential to facilitate extending therapy and assessment capabilities beyond what can be
achieved in a clinical setting.
Accurate assessment of motor abilities is important in selecting the best therapies
for stroke survivors. These assessments are based on observations of subjects’ motor
behavior using standardized clinical rating scales. Wearable sensors could be used to
provide accurate measures of motor abilities in the home and community settings and
could be leveraged upon to facilitate the implementation of telerehabilitation protocols.
Our team has performed pilot studies exploring the use of wearable sensors
(accelerometers) and an e-textile glove-based system (herein referred to as “data
glove”) designed to monitor movement and facilitate the implementation of physical
therapy based on the use of video games.
We demonstrated that accelerometers can be utilized to predict the Wolf
Functional Ability Score (FAS). The Wolf FAS rates the quality of the subject’s
movement during performance of the Wolf Motor Performance Test [47]. The scores
capture factors such
as smoothness, speed, ease of movement, and amplitude of the compensatory
movements. Twenty-three subjects who had a stroke within the previous 2 to 24
months were recruited for the study. Accelerometers were positioned on the sternum
and the affected (i.e. hemiparetic) arm. Subjects performed multiple repetitions of tasks
requiring reaching and prehension, selected from the Wolf Motor Performance Test.
The tasks included reaching to close and distant objects, placing the hand or forearm
from lap to a table, pushing and pulling a weight across a table, drinking from a
beverage can, lifting a pencil, flipping a card, and turning a key. Accelerometer data
were processed to derive features that captured different aspects of the movement
patterns and were fed to a classifier built using a Random Forest. The Random Forest
approach is based on an ensemble of decision trees and is suitable for datasets with low
feature-to-instance ratio. We assessed the reliability of the estimates achieved using this
method by deriving the prediction error for each of the investigated motor tasks. The
estimated prediction error for such motor tasks ranged between about 1 % and 13 %.
This is a very encouraging result as it suggests that FAS scores could be estimated via
monitoring motor tasks performed by patients in the home and community settings
using accelerometers.
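To make the feature-extraction step concrete, the sketch below computes a few simple descriptors of a single-axis accelerometer segment of the kind that could be fed to a Random Forest classifier. The specific formulas and the synthetic signals are our own illustrative choices, not necessarily those used in the study:

```python
import numpy as np

def movement_features(a, fs=100.0):
    """Simple descriptors of a single-axis accelerometer segment (illustrative
    choices): intensity, amplitude, a jerk-based (lack of) smoothness index,
    and the dominant frequency of the movement."""
    a = np.asarray(a, dtype=float)
    a = a - a.mean()
    jerk = np.diff(a) * fs                       # first derivative of the signal
    spectrum = np.abs(np.fft.rfft(a))
    freqs = np.fft.rfftfreq(len(a), d=1.0 / fs)
    return {
        "rms": float(np.sqrt(np.mean(a ** 2))),         # movement intensity
        "range": float(a.max() - a.min()),              # movement amplitude
        "jerk_rms": float(np.sqrt(np.mean(jerk ** 2))), # lack of smoothness
        "dom_freq": float(freqs[int(np.argmax(spectrum[1:])) + 1]),  # periodicity
    }

# A smooth 1 Hz reaching-like oscillation vs. the same motion with added jitter
# (synthetic signals standing in for recorded accelerometer data).
rng = np.random.default_rng(0)
t = np.arange(0.0, 4.0, 0.01)
smooth = np.sin(2.0 * np.pi * 1.0 * t)
jittery = smooth + rng.normal(0.0, 0.2, smooth.shape)
f_smooth, f_jittery = movement_features(smooth), movement_features(jittery)
print(f_smooth["dom_freq"], f_smooth["jerk_rms"] < f_jittery["jerk_rms"])
# prints "1.0 True"
```

Descriptors of this kind, computed for each task repetition and each sensor location, yield a feature vector per movement; an ensemble of decision trees then maps such vectors to the clinical score, which suits the low feature-to-instance ratio noted above.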
Our team also studied the feasibility of utilizing a sensorized glove to implement
physical therapy protocols for motor retraining based on the use of video games. The
glove was utilized to implement grasp and release of objects in the video games. This
function was achieved by defining a measure of “hand aperture” and estimating it
based on processing data gathered from the data glove. Calibration of the data glove
was achieved by asking individuals to hold a wooden cone-shaped object with diameter
ranging from 1 cm to 11.8 cm at different points of the cone corresponding to a known
diameter. The output of the sensors on the glove was used to estimate the diameter of
the section of the cone-shaped object corresponding to the position of the middle finger.
A linear regression model was utilized to estimate the above-defined measure of “hand
aperture” (dependent variable) using the glove sensor outputs as independent variables.
The results of the study were encouraging: the estimation error for the measure of
“hand aperture” defined above was smaller than 1.5 cm. We consider this result
satisfactory in the context of the application of interest, i.e. the
implementation of video games to train grasp and release functions in individuals post
stroke.
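The calibration procedure described above amounts to fitting a linear regression model. The sketch below reproduces the idea on synthetic data; the number of glove sensors, their gains, and the noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Known diameters (cm) of the cone at the grasped calibration positions.
diameters = np.linspace(1.0, 11.8, 12)

# Synthetic glove readings: each of 5 hypothetical sensors responds roughly
# linearly to hand aperture, with invented gains, offsets, and noise.
gains = rng.uniform(0.5, 2.0, size=5)
offsets = rng.uniform(-1.0, 1.0, size=5)
S = diameters[:, None] * gains + offsets + rng.normal(0.0, 0.1, (12, 5))

# Least-squares fit of: diameter = S @ w + b (sensor outputs as regressors).
A = np.hstack([S, np.ones((len(S), 1))])     # append an intercept column
coef, *_ = np.linalg.lstsq(A, diameters, rcond=None)

predicted = A @ coef
max_error = float(np.abs(predicted - diameters).max())
print(f"max calibration error: {max_error:.2f} cm")
```

At run time, evaluating the fitted model on raw sensor outputs turns glove readings into a hand-aperture estimate that can drive grasp and release events in a video game.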
Overall, the results herein summarized indicate that the investigated wearable
technologies are suitable to implement telerehabilitation protocols.

Figure 7. A subject testing the data glove herein described in combination with a robotic system for
rehabilitation.
5. Conclusions

The assessment of the impact of rehabilitation interventions on the daily life of
individuals is essential for developing protocols that maximize the impact of
rehabilitation on the quality of life of individuals. The use of questionnaires is
somewhat limited because questionnaires are subject to perceptual bias and recall bias.
Furthermore, relying on questionnaires is bound to introduce a delay in the response to
changes in a patient’s status, since the information needed to make a clinical decision
concerning changes in rehabilitation interventions is not readily available on a
continuous basis; questionnaires are administered only sporadically. Wearable
technology has the potential to overcome limitations of existing methodologies to
assess the impact of rehabilitation interventions on the real life of individuals.
Miniature unobtrusive sensors can provide clinicians with quantitative measures of
subjects’ status in the home and community settings, thus facilitating clinical
decisions concerning the adequacy of ongoing interventions and possibly allowing
prompt modification of the rehabilitation strategy if needed. In this chapter, we
presented three applications that point at potential areas of use of wearable technology
in rehabilitation.
In the first example, we showed that wearable sensors can provide clinicians with a
tool to monitor exercise compliance in patients with COPD. We also showed that
activities of daily living that are associated with different systemic responses can be
identified with high reliability. It is conceivable that based on trends identified via
analysis of changes in activity level and systemic responses associated with certain
motor tasks, we could achieve early detection of exacerbation episodes. The impact on
our ability to care for patients with COPD would be profound.
In the second example, we demonstrated that we can monitor the severity of
symptoms and motor complications in patients with Parkinson’s disease. This is
important in the late stages of the disease when motor fluctuations develop. Since
motor fluctuations span an interval of several hours, observations performed in a
clinical setting (typically limited to the duration of the outpatient visit, i.e. about
20-25 minutes) are not sufficient to capture the severity of motor fluctuations. Monitoring
patients in the home and community settings could therefore substantially improve the
clinical management of patients with late stage Parkinson’s disease. Our results suggest
that the technique summarized in this chapter could be extended to monitoring patients
with other neurodegenerative conditions that are accompanied by motor symptoms.
Finally, we demonstrated that wearable technology could provide clinicians with a
means to assess functional ability in individuals post stroke. This is important because
we currently have very limited tools to assess the impact of rehabilitation interventions
on the real life of patients. Although it is expected that therapeutic interventions that
are associated with improvements in impairment level and functional level lead to an
improved quality of life, it would be very useful to quantify such impact and compare
different interventions measuring their impact on the performance of activities of daily
living via processing data gathered in the home and community settings. Tools to
monitor patients in the home and community settings could lead to new criteria for
adjusting interventions that maximize the impact on real life conditions of the adopted
therapeutic intervention. We anticipate that such criteria would allow clinicians to help
patients achieve a higher level of independence and a better quality of life.
All in all, the examples provided in this chapter indicate that wearable technology
has tremendous potential to allow clinicians to improve the quality of care and, in
turn, the quality of life of individuals undergoing rehabilitation. The next challenge
in wearable technology is indeed to demonstrate that such methodologies can have a
significant impact on the quality of care provided to patients and on their quality
of life.

Acknowledgments

The author wishes to thank Dr. Sundaresan Jayaraman, Dr. Alex (Sandy) Pentland, and
Dr. William Tang for allowing him to utilize figures that they utilized in previous
communications and for their input on a draft of this chapter. The work on wireless
technology summarized in this chapter was largely carried out by Dr. Matt Welsh and
his team at the Harvard School of Engineering and Applied Sciences. Applications of
e-textile solutions were pursued jointly with Dr. Danilo De Rossi, University of Pisa,
and his associates Dr. Alessandro Tognetti and Mr. Fabrizio Cutolo. Dr. Rita Paradiso
(Smartex) provided expertise and support in the development of the data glove
discussed in this chapter. The pilot study concerning the application of wearable
technology to monitor patients with COPD was performed with Dr. Marilyn Moy at
Harvard Medical School and Ms. Delsey Sherrill, currently at the MIT Lincoln
Laboratory, who was at Spaulding Rehabilitation Hospital at the time the study
described in this chapter was performed. The development of methodologies to assess
the severity of symptoms and motor complications in patients with Parkinson’s disease
was performed with Dr. John Growdon and Ms. Nancy Huggins at Massachusetts
General Hospital. Algorithms for the analysis of accelerometer data were developed by
Mr. Shyamal Patel, Northeastern University. Mr. Richard Hughes, currently with
Partners HomeCare, who was at Spaulding Rehabilitation Hospital at the time the study
described in this chapter was performed, provided clinical scores for all the patients’
recordings. Mr. Richard Hughes also contributed to the pilot study we performed to
assess the use of wearable technology in patients post stroke. Medical expertise for this
project was provided by Dr. Joel Stein, currently at Columbia University, who was at
Spaulding Rehabilitation Hospital at the time the study was performed. Algorithms for
the analysis of data recorded from patients post stroke were developed by Mr. Todd
Hester, currently at the University of Texas at Austin, who was at Spaulding Rehabilitation
Hospital at the time the study was performed. Mr. Shyamal Patel, Northeastern
University, also contributed to the development of these algorithms.

References

[1] P. Bonato, Advances in wearable technology and applications in physical medicine and rehabilitation,
Journal of NeuroEngineering and Rehabilitation 2 (1) (2005), 2.
[2] E.A. Krupinski, Telemedicine for home health and the new patient: when do we really need to go to the
hospital?, Studies in Health Technology and Informatics 131 (2008), 179-189.
[3] Joint principles of the Patient-Centered Medical Home, Del Med J 80 (1) (2008), 21-22.
[4] S. Park, C. Gopalsamy, R. Rajamanickam, S. Jayaraman, The Wearable Motherboard: a flexible
information infrastructure or sensate liner for medical applications, Studies in Health Technology and
Informatics 62 (1999), 252-258.
[5] S. Park, S. Jayaraman, Enhancing the quality of life through wearable technology, IEEE Engineering in
Medicine and Biology Magazine 22 (3) (2003), 41-48.
[6] S. Park, S. Jayaraman, e-Health and quality of life: the role of the Wearable Motherboard, Studies in
Health Technology and Informatics 108 (2004), 239-252.
[7] A. Pentland, Healthwear: medical technology becomes wearable, Studies in Health Technology and
Informatics 118 (2005), 55-65.
[8] A. Pentland, T. Choudhury, N. Eagle, P. Singh, Human dynamics: computation for organizations,
Pattern Recognition Letters 26 (2005), 503-511.
[9] M. Sung, C. Marci, A. Pentland, Wearable feedback systems for rehabilitation, Journal of
NeuroEngineering and Rehabilitation 2 (2005), 17.
[10] A. Pentland, Social Dynamics: Signals and Behavior, MIT Media Lab, 2005.
[11] D. De Rossi, F. Lorussi, E.P. Scilingo, F. Carpi, A. Tognetti, M. Tesconi, Artificial kinesthetic systems
for telerehabilitation, Studies in Health Technology and Informatics 108 (2004), 209-213.
[12] D. De Rossi, A. Lymberis, New generation of smart wearable health systems and applications, IEEE
Transactions on Information Technology in Biomedicine 9 (3) (2005), 293-294.
[13] A. Tognetti, F. Lorussi, R. Bartalesi, S. Quaglini, M. Tesconi, G. Zupone, D. De Rossi, Wearable
kinesthetic system for capturing and classifying upper limb gesture in post-stroke rehabilitation,
Journal of Neuroengineering Rehabilitation 2 (1) (2005), 8.
[14] E. Wade, H. Asada, Cable-free body area network using conductive fabric sheets for advanced human-
robot interaction, Conference of the IEEE Engineering in Medicine and Biology Society 4 (2005), 3530-
3533.
[15] P.T. Gibbs, H.H. Asada, Wearable conductive fiber sensors for multi-axis human joint angle
measurements, Journal of Neuroengineering Rehabilitation 2 (1) (2005), 7.
[16] E. Jovanov, A. Milenkovic, C. Otto, P.C. de Groen, A wireless body area network of intelligent motion
sensors for computer assisted physical rehabilitation, Journal of Neuroengineering Rehabilitation 2 (1)
(2005), 6.
[17] T.R. Fulford-Jones, G.Y. Wei, M. Welsh, A portable, low-power, wireless two-lead EKG system,
Conference of the IEEE Engineering in Medicine and Biology Society 3 (2004), 2141-2144.
[18] T. Gao, L. Selavo, M. Welsh, Creating a hospital-wide patient safety net: Design and deployment of
ZigBee vital sign sensors, AMIA Annual Symposium Proceedings (2007), 960.
[19] WHO, World Health Report, Geneva, Switzerland, 2000.
[20] C.J. Murray, A.D. Lopez, Mortality by cause for eight regions of the world: Global Burden of Disease
Study, Lancet 349 (9061) (1997), 1269-1276.
[21] L. Wilson, E.B. Devine, K. So, Direct medical costs of chronic obstructive pulmonary disease: chronic
bronchitis and emphysema, Respiratory Medicine 94 (3) (2000), 204-213.
[22] D.M. Sherrill, M.L. Moy, J.J. Reilly, P. Bonato, Using hierarchical clustering methods to classify motor
activities of COPD patients from wearable sensor data, Journal of Neuroengineering Rehabilitation 2
(2005), 16.
[23] M.L. Moy, S.J. Mentzer, J.J. Reilly, Ambulatory monitoring of cumulative free-living activity, IEEE
Engineering in Medicine and Biology Magazine 22 (3) (2003), 89-95.
[24] M.L. Moy, E. Garshick, K.R. Matthess, R. Lew, J.J. Reilly, Accuracy of uniaxial accelerometer in
chronic obstructive pulmonary disease, Journal of Rehabilitation Research and Development 45 (4)
(2008), 611-617.
[25] D.M. Sherrill, M.L. Moy, J.J. Reilly, P. Bonato, Objective Field Assessment of Exercise Capacity in
Chronic Obstructive Pulmonary Disease, 15th Annual Congress of the International Society of
Electrophysiology Kinesiology, Boston (Massachusetts), 2004.
[26] M.L. Moy, D.M. Sherrill, P. Bonato, J.J. Reilly, Monitoring Cumulative Free-Living Exercise in COPD,
ATS, Orlando (Florida), 2004.
[27] D.G. Standaert, A.B. Young, Treatment of CNS Neurodegenerative Diseases, in: J.G. Hardman, L.E.
Limbird (Eds.), Goodman and Gilman's Pharmacological Basis of Therapeutics, McGraw-Hill, 2001,
pp. 549-620.
[28] S. Fahn, Levodopa in the treatment of Parkinson's disease, Journal of Neural Transmission 71 (2006),
1-15.
[29] T.N. Chase, Levodopa therapy: consequences of the nonphysiologic replacement of dopamine,
Neurology 50 (5) (1998), S17-25.
[30] J.A. Obeso, C.W. Olanow, J.G. Nutt, Levodopa motor complications in Parkinson's disease, Trends in
Neurosciences 23 (10) (2000), S2-7.
[31] A.E. Lang, A.M. Lozano, Parkinson's disease. First of two parts, The New England Journal of Medicine
339 (15) (1998), 1044-1053.
[32] A.E. Lang, A.M. Lozano, Parkinson's disease. Second of two parts, The New England Journal of
Medicine 339 (16) (1998), 1130-1143.
[33] A. Thomas, L. Bonanni, A. Di Iorio, S. Varanese, F. Anzellotti, A. D'Andreagiovanni, F. Stocchi, M.
Onofrj, End-of-dose deterioration in non ergolinic dopamine agonist monotherapy of Parkinson's
disease, Journal of Neurology 253 (12) (2006), 1633-1639.
[34] W.J. Weiner, Motor fluctuations in Parkinson's disease, Reviews in neurological diseases 3 (3) (2006),
101-108.
[35] T. Muller, H. Russ, Levodopa, motor fluctuations and dyskinesia in Parkinson's disease, Expert
Opinion on Pharmacotherapy 7 (13) (2006), 1715-1730.
[36] S. Fahn, R.L. Elton, Unified Parkinson’s Disease Rating Scale, in: S. Fahn (Ed.), Recent Developments
in Parkinson’s Disease, MacMillan Healthcare Information, 1987, pp. 153-163.
[37] Evaluation of dyskinesias in a pilot, randomized, placebo-controlled trial of remacemide in advanced
Parkinson disease, Archives of Neurology 58 (10) (2001), 1660-1668.
[38] C.H. Adler, C. Singer, C. O'Brien, R.A. Hauser, M.F. Lew, K.L. Marek, E. Dorflinger, S. Pedder, D.
Deptula, K. Yoo, Randomized, placebo-controlled study of tolcapone in patients with fluctuating
Parkinson disease treated with levodopa-carbidopa. Tolcapone Fluctuator Study Group III, Archives of
Neurology 55 (8) (1998), 1089-1095.
[39] S. Patel, D. Sherrill, R. Hughes, T. Hester, N. Huggins, T. Lie-Nemeth, D. Standaert, P. Bonato,
Analysis of the severity of dyskinesia in patients with Parkinson's disease via wearable sensors,
BSN2006, International Workshop on Wearable and Implantable Body Sensor Networks, Cambridge,
MA, 2006, pp. 123-126.
[40] N.L. Keijsers, M.W. Horstink, S.C. Gielen, Ambulatory motor assessment in Parkinson's disease,
Movement Disorders 21 (1) (2006), 34-44.
[41] J.I. Hoff, A.A. van den Plas, E.A. Wagemans, J.J. van Hilten, Accelerometric assessment of levodopa-
induced dyskinesias in Parkinson's disease, Movement Disorders 16 (1) (2001), 58-61.
[42] P. Bonato, D.M. Sherrill, D.G. Standaert, S.S. Salles, M. Akay, Data mining techniques to detect motor
fluctuations in Parkinson's disease, Conference of the IEEE Engineering in Medicine and Biology
Society 7 (2004), 4766-4769.
[43] S. Patel, K. Lorincz, R. Hughes, N. Huggins, J.H. Growdon, M. Welsh, P. Bonato, Analysis of feature
space for monitoring persons with Parkinson's disease with application to a wireless wearable sensor
system, Conference of the IEEE Engineering in Medicine and Biology Society (2007), 6291-6294.
[44] Heart Disease and Stroke Statistics, American Heart Association, 2005.
[45] Stroke: Hope Through Research, National Institute of Neurological Disorders and Stroke, 2004.
[46] Morbidity and Mortality Weekly Report, Center for Disease Control, 2001.
[47] S.L. Wolf, P.A. Catlin, M. Ellis, A.L. Archer, B. Morgan, A. Piacentino, Assessing Wolf motor
function test as outcome measure for research in patients after stroke, Stroke 32 (7) (2001), 1635-1639.
Brain-Computer Interfaces and
Neurorehabilitation
Roberta CARABALONA, Paolo CASTIGLIONI and Furio GRAMATICA

Biomedical Technology Department, Santa Maria Nascente Research Hospital,
Don Gnocchi Foundation, Milan, Italy
Abstract. A brain-computer interface (BCI) directly uses brain-activity signals to
allow users to operate the environment without any muscular activation. Thanks to
this feature, BCI systems can be employed not only as assistive devices, but also
as neurorehabilitation tools in clinical settings. However, several critical issues
need to be addressed before using BCI in neurorehabilitation, issues ranging from
signal acquisition and selection of the proper BCI paradigm to the evaluation of
the affective state, cognitive load and system acceptability of the users. Here we
discuss these issues, illustrating how a rehabilitation program can benefit from
BCI sessions, and summarize the results obtained so far in this field. Also
provided are experimental data concerning two important topics related to BCI
usability in rehabilitation: the possibility of using dry electrodes for EEG
acquisition, and the monitoring of psychophysiological effects during BCI tasks.
Keywords. BCI, rehabilitation, technology acceptance, dry electrodes, affective
computing
Introduction

A brain-computer interface (BCI) is a system that interprets brain signals generated by
the user, allowing specific commands from the brain to be executed on an external
device. Therefore such an interface would enable severely disabled people to interact
with their environment without the need for any muscle activation. Indeed, BCI
systems appear to be interesting new assistive devices for people with severe motor
disabilities. However they differ from other human-machine interfaces in that the user
must learn completely new skills in order to operate them. Years of experimentation
have shown that even the adult brain retains cortical plasticity and can adapt to BCIs;
thus the combination of rehabilitation and BCIs, both of which exploit cortical plasticity,
could help people become “able” once again. For this reason, BCI systems appear to be promising
rehabilitation tools.
First, we provide an overview of BCI systems, from the historical and technical
points of view, and then we move on to discuss the application of BCI in rehabilitation,
where we focus on BCI usability in relation to user acceptance. In the final section we
present experimental data concerning two important issues related to the applicability
of BCI in rehabilitation procedures: i) the use of dry electrodes, a technology that has
the potential to improve BCI system usability and comfort; ii) the monitoring of
psychophysiological effects during BCI tasks, thus allowing the quantification of the
“cognitive load” and “mental fatigue” of BCI rehabilitation sessions.
1. Overview of BCI systems

1.1. History and types of BCI
Since the first human electroencephalogram (EEG), recorded by Hans
Berger [1], there has been much speculation about the possibility of reading thoughts
and of using the brain to control devices [2]. The first “real time detection of brain
events in EEG” is probably the one reported by Vidal [3], but it was not until the late
1980s and early 1990s that significant research activity started in this field. Since then,
there has been a continuous increase in the research aimed at signal acquisition and
processing, and at medical applications: a search of the PubMed database for “brain
computer interface” provided 2 publications before 1993, 5 for the 1993-1998 period,
38 for 1998-2003 and 357 for 2003-2008.
The first distinction in brain-computer interfaces is between invasive and non-
invasive BCI systems: invasive BCI systems use multi-electrode grids implanted in the
brain of the user, whereas non-invasive systems operate by acquiring the signal from
the scalp. Methods of brain signal detection include electrocorticography via subdural
electrode grids (invasive BCI), and electro- and magnetoencephalography or near-infrared
spectroscopy (non-invasive BCI). These methods take two different approaches
[4]: the former tries to reconstruct skilled movements from single spikes, whereas the
latter is bound to biofeedback.
Another relevant issue concerns the timing of the interaction between user and
machine. If the timing of operation is determined by the BCI, so that the
interaction is computer-driven, we speak of synchronous (or cue-based) BCI; if it is
determined by the user, we speak of asynchronous (or self-paced) BCI [5].
1.2. Structure of a BCI system

As a communication channel, a BCI consists of input signals, output signals, an
input-into-output translator (i.e., a signal processing system), a protocol for timing and one
for the switching on or off of the BCI communication-channel itself [6]. At this level, a
BCI can be depicted by the simple model in fig.1. Though useful and correct, the
model hides the critical issues characteristic of any BCI.
[Figure 1 schematic: Brain Signal → Signal Processing → Output]
Figure 1. Schematic model of BCI, which considers three basic elements: input signal (brain signal from
user), input-into-output translator block (signal processing) and output signal (to an external device like an
electric wheelchair).
Mason and Birch [7] proposed a functional model of BCI; it includes the following
components: user, electrodes, EEG amplifier, feature extractor, feature translator,
control interface, device controller, device and physical (operating) environment; the
feature extractor and feature translator can be further divided into subcomponents [8].
The processing done by the feature extractor can be split into signal enhancement (a
preprocessing done to increase the signal to noise ratio), actual feature extraction and,
finally, feature selection (dimensionality reduction).
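As a concrete illustration, this three-stage decomposition of the feature extractor can be sketched in a few lines of Python (a minimal example on synthetic data; the specific choices of FFT band-pass enhancement, log-variance features and variance-based selection are illustrative assumptions, not components prescribed by Mason and Birch's model):

```python
import numpy as np

def enhance(eeg, fs=256.0, band=(8.0, 30.0)):
    """Signal enhancement: crude FFT band-pass to raise the signal-to-noise ratio."""
    spectrum = np.fft.rfft(eeg, axis=-1)
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    spectrum[..., (freqs < band[0]) | (freqs > band[1])] = 0.0
    return np.fft.irfft(spectrum, n=eeg.shape[-1], axis=-1)

def extract_features(eeg):
    """Feature extraction: log-variance of each channel, a common EEG feature."""
    return np.log(np.var(eeg, axis=-1))

def select_features(features, k=2):
    """Feature selection: keep the k features with highest variance across trials
    (dimensionality reduction)."""
    idx = np.argsort(np.var(features, axis=0))[-k:]
    return features[:, idx]

# Synthetic example: 10 trials, 4 channels, 1 s of data at 256 Hz
rng = np.random.default_rng(0)
trials = rng.standard_normal((10, 4, 256))
feats = select_features(extract_features(enhance(trials)), k=2)
print(feats.shape)  # (10, 2)
```

The resulting low-dimensional feature vectors would then be passed to the feature translator (classifier) stage of the model.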
Mason and Birch’s approach highlights the real complexity of any BCI system.
This is because a BCI is not a human-computer interface in the classical sense as the
interaction is directly driven by the user’s brain signals. In addition, there is an
interaction between two systems (human and machine) that have to learn and to adapt
to each other. From the user viewpoint, this requires the acquisition of new and
complex skills [6], whereas for the computer system there has to be the provision of
reliable and efficient algorithms for feature extraction, selection and classification [7, 8,
9, 10]. Regarding this last point, Schalk et al. [11] suggest the alternative approach of
moving from classification of features to detection of signal change, thus bypassing the
critical concerns related to training both user and algorithm.
1.3. Paradigms
The input signal for a BCI system cannot be simply the EEG signal at rest. This is
because at least two different states are needed to operate an external device. Thus a
cognitive task assigned to the user produces a signal containing features that are
extracted and classified. Different cognitive tasks can be used to produce such features.
The operational framework used to specify them is called the paradigm. The most used
paradigms are listed below, with a brief description of their physiological rationale.
1.3.1. Motor Imagery

According to Decety [12], motor imagery can be defined as a dynamic state during
which a given action is mentally simulated by a subject. The subject can implement
two different techniques: “first person perspective”, or motor-kinesthetic, and “third
person”, or visuo-spatial perspective. Considering the physiological bases, movement
execution and motor imagery share common mechanisms [13]: in both cases, event-
related desynchronization of mu (or Rolandic) and beta rhythms over the contralateral
side (with respect to the movement) are present [14]. Moreover, it has been shown that
it is possible to discriminate between imagination of right and left hand movements
[15].
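The event-related desynchronization underlying this paradigm is commonly quantified as the relative power decrease in the mu band during imagery with respect to a reference interval. A minimal sketch of such an ERD computation follows (synthetic signals in place of real EEG; the 8-12 Hz band and the simple periodogram estimate are illustrative choices):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean spectral power of `signal` inside `band` (Hz), via the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def erd_percent(ref, task, fs=256.0, band=(8.0, 12.0)):
    """ERD% = (P_ref - P_task) / P_ref * 100; positive values mean desynchronization."""
    p_ref = band_power(ref, fs, band)
    p_task = band_power(task, fs, band)
    return (p_ref - p_task) / p_ref * 100.0

# Synthetic check: a 10 Hz "mu rhythm" whose amplitude halves during imagery,
# so band power drops to one quarter and ERD is 75%
fs = 256.0
t = np.arange(512) / fs
rest = np.sin(2 * np.pi * 10 * t)
imagery = 0.5 * np.sin(2 * np.pi * 10 * t)
print(round(erd_percent(rest, imagery, fs)))  # 75
```

In a real BCI, such band-power features would be computed per channel (e.g. over C3 and C4) and compared across hemispheres to discriminate right- from left-hand imagery.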
1.3.2. P300 component of event-related potentials

EEGs show event-related potentials in response to some stimuli. Traditionally, such
potentials are extracted from the EEG by presenting the stimulus repeatedly, and then
averaging epochs which are time-locked to the stimulus or to its response. The
resulting waveform presents peaks of different amplitude at different latencies: the
P300 component is a positive peak with a latency of about 300 ms. It was discovered
by Sutton [16] and can be elicited when the subject is performing an “oddball task”.
During the task the subject is presented with a series of stimuli comprising two classes of
different relevance and probability of occurrence. The subject has to pay attention to
the target stimulus that belongs to the less frequent class: the P300 component
highlights the recognition of the target-events by the subject [17]. The stimulation can
be either visual or auditory [18].
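The time-locked averaging described above can be sketched in a few lines (synthetic single-channel data; the epoch length, stimulus timing and peak amplitude are illustrative assumptions):

```python
import numpy as np

def average_erp(eeg, stim_samples, epoch_len):
    """Average time-locked epochs: cut `epoch_len` samples after each stimulus and average."""
    epochs = np.stack([eeg[s:s + epoch_len] for s in stim_samples])
    return epochs.mean(axis=0)

# Synthetic single-channel EEG at 256 Hz: background noise plus a positive
# deflection about 300 ms after each "target" stimulus
fs, epoch_len = 256, 205            # ~800 ms epochs
rng = np.random.default_rng(1)
eeg = rng.standard_normal(60 * fs)  # 60 s of background "EEG"
stims = np.arange(fs, 59 * fs, fs)  # one stimulus per second
peak = int(0.3 * fs)                # ~300 ms latency, in samples
for s in stims:
    eeg[s + peak] += 5.0            # add the event-related component
erp = average_erp(eeg, stims, epoch_len)
print(int(np.argmax(erp)))          # 76, i.e. ~300 ms: the averaged P300-like peak
```

Averaging over many epochs suppresses the background EEG (which is not time-locked to the stimulus) while the event-related component survives, which is exactly why a P300 BCI repeats each stimulus several times before classifying.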
1.3.3. Slow cortical potentials

Slow cortical potentials (SCP) are changes in cortical potential that can last from a
minimum of 300 ms up to several seconds. Typically, any reduction in cortical activity
produces positive changes with such a slow dynamic, while functions associated with
cortical activation, like preparation to voluntary movements, induce negative SCP [6,
19].
1.3.4. Steady State Visual Evoked Potential

Experimental evidence shows that flickering visual stimulation synchronizes human
visual cortex neurons to the frequency of the stimulus. The EEG response to such
visual stimulus is called the Steady State Visual Evoked Potential (SSVEP). This
SSVEP is a periodic oscillation with the same fundamental frequency as the flickering
stimulus, but it can also include higher frequencies [20].
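In an SSVEP-based BCI, the system typically identifies which flickering target the user is attending by comparing spectral power at the candidate stimulation frequencies and their harmonics. A minimal sketch on synthetic data (the candidate frequency set, harmonic count and nearest-bin scoring are illustrative assumptions):

```python
import numpy as np

def detect_ssvep(eeg, fs, candidates, harmonics=2):
    """Return the candidate flicker frequency with the most spectral power,
    summing the fundamental and its first harmonics."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    scores = []
    for f in candidates:
        score = 0.0
        for h in range(1, harmonics + 1):
            score += psd[np.argmin(np.abs(freqs - h * f))]
        scores.append(score)
    return candidates[int(np.argmax(scores))]

# Synthetic occipital signal: the user attends the 15 Hz target, so the trace
# contains a 15 Hz fundamental, a weaker 30 Hz harmonic, and noise
fs = 256.0
t = np.arange(2 * 256) / fs
rng = np.random.default_rng(2)
eeg = (np.sin(2 * np.pi * 15 * t)
       + 0.3 * np.sin(2 * np.pi * 30 * t)
       + 0.5 * rng.standard_normal(t.size))
print(detect_ssvep(eeg, fs, [8.0, 12.0, 15.0]))  # 15.0
```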
2. BCI as a neurorehabilitation tool

According to the World Health Organization, the aim of rehabilitation is to enable
people with disabilities “to reach and maintain their optimal physical, sensory,
intellectual, psychological and social functional levels. Rehabilitation provides
disabled people with the tools they need to attain independence and self-determination”
[21]. This definition refers to the term rehabilitation in a broad sense, and suggests that
individuals with disabilities can be “made able again” in two ways: by being assisted
by technological tools, and by recovering some abilities through rehabilitation
protocols in clinical settings. BCI systems can be considered rehabilitation devices
in both these senses [22].
Given the structure of BCI systems, which we illustrated in the previous section, it
is easy to understand how BCIs can work as assistive devices. In fact, BCIs can be used
for communication [23], as wheelchair controllers in real [24] or virtual environments
[25], and to operate prosthetic or Functional Electrical Stimulation devices in invasive
[26] or non-invasive [27] settings. Therefore, it is quite rational to consider them as
rehabilitation tools that enable people with severe disabilities to operate in an
environment without requiring muscular activation.
However, BCI can also help recover some lost abilities. In fact, experimental
evidence has shown that the brain is not rigidly hardwired: it exhibits cortical
plasticity, which can be stimulated by learning processes. Moreover,
studies on synaptic plasticity are shedding new light on the relationship between
cortical reorganization and rehabilitation [28, 29], showing that rehabilitation can
actually modify cortical circuitries. BCI systems can exploit the plasticity of the central
nervous system to restore some of its functionalities, as we will discuss in this section.
Two ways of using BCI in clinical rehabilitation were proposed recently by Daly
and Wolpaw [30]. One way is related to the use of biofeedback for controlling and
modulating brain signals. The other uses BCI to control an external device, like a
robotic arm, to provide sensory input that can help normalize motor control. Both
approaches are based on the feature that distinguishes BCIs from other human-
computer interfaces: BCIs require skilled users who are able to modulate their brain
activity, and must give such users real-time feedback. Whatever the feedback modality
(biofeedback or sensory input), it can induce activity-dependent plasticity, as shown by
Jarosiewicz et al. with an invasive BCI in primates [31].
Thus, learning processes activated by cognitive and sensory experiences related to
feedback from the environment are key elements in promoting neural plasticity and
modifications of brain circuitry. Adaptation to brain damage with compensatory
strategies can also be considered a learning process: thus the brain, although damaged,
triggers a reorganization of its structure. Addressing issues concerning brain structure
modification and changes in learning capacity after brain insults is very important for an
effective translation of neuroscience results into rehabilitation [32].
Interesting results toward the application of BCI in neurorehabilitation come from
Buch and colleagues [33]. They acquired data from eight subjects with chronic hand
plegia resulting from stroke (hand plegia duration was on average 16.7 ± 6.4 months).
They used a BCI system based on magnetoencephalography. During the BCI sessions
the subjects wore a mechanical orthosis on the plegic hand. This study demonstrated
that patients with complete hand paralysis due to stroke can not only use motor
imagery to operate an orthosis, but can also achieve control of the mechanical orthosis
by using signals recorded on the ipsilesional hemisphere. The evidence that increasing
the excitability of the ipsilesional motor cortex can improve the clinical outcome for
stroke patients [34] makes these findings very promising.
Some studies have shown that deafferentation of a body part induces a reduction of
its topographical representation on the somatosensory cortex. This reorganization can
be the result of both structural lesion and disuse. Counteracting such reorganization is
an important rehabilitation goal, particularly in stroke rehabilitation. In the event of
severe motor disorders and impairments, when physical exercise is no longer possible,
motor imagery may be the only possible way to access and train the motor system [35,
36], and since motor imagery is also a BCI paradigm, BCI is itself a candidate as the
rehabilitation tool for this situation. Furthermore, experimental evidence has revealed
functional and neural similarity between motor execution and imagery, which can be
performed from a kinesthetic or a visual perspective. In sports applications, mental
practice with motor imagery enhances performance and facilitates motor learning [13].
Another interesting finding with regard to rehabilitation is found in the study of Stinear
et al. [37], who indicate that only kinesthetic imagery modulates corticomotor
excitability. It is interesting to note that kinesthetic imagery seems to provide the best
performance in BCI tasks as well [38]. This observation highlights the importance,
for effective neurorehabilitation treatment, of recognizing which kind of motor
imagery attitude, if any, the BCI user has [35].
Jackson et al [39] suggested a model for the use of mental practice with motor
imagery in rehabilitation. According to their model, three elements contribute to the
rehabilitative outcome: physical execution (musculo-skeletal activity), declarative
knowledge (information about the skill the patient has to learn) and nonconscious
processes (elements of the skill, which cannot be explicitly verbalized). Obviously, due
to the interaction of the three components, the outcome improves with physical
execution, but this is not always possible or may be difficult in patients with brain
damage. Thus, motor imagery could be helpful for such cases [35, 39]. Moreover, the
lack of motor execution stresses the role of declarative knowledge and could also be
important in disclosing nonconscious aspects of motor learning [39].
Motor imagery has been used in stroke rehabilitation (though without clear results
[40]) and in Parkinson's disease [41]. Motor imagery in BCI goes further
because it provides users with feedback related to their cognitive activity, and this can
be exploited to achieve effective treatment. Moreover, BCI provides a quantitative
evaluation of both the subject’s engagement and his/her ability to accomplish the
cognitive task.
Since one of the BCI paradigms is based on the P300 potential related to cognitive
events, BCI could also be used for cognitive rehabilitation. People affected by brain
injury or disease often experience cognitive problems [42], which can seriously affect
their quality of life. Cognitive rehabilitation is aimed at mitigating cognitive deficits
arising from neurological insults and diseases. While there is substantial evidence of
the efficacy of cognitive therapies concerning stroke and traumatic brain injury [43],
more research is needed where other diseases are concerned [44].
In cognitive rehabilitation, the use of event-related potentials is traditionally
limited to an assessment of injuries incurred or disorder severity. However, a tentative
biofeedback therapy based on P300 was designed to treat attention-deficit patients with
brain injury [45]: five patients with chronic mental disturbances received a P300 based
biofeedback therapy for a four week period, and all showed remarkable improvement.
However this pioneering work has not yet been followed by a larger clinical study.
A critical issue in the design of every BCI-based rehabilitation protocol concerns
the selection of the paradigm, which is related to the cognitive task proposed to the
user. Wolpaw [46] analyzed BCI system performance under different paradigms,
within controlled settings. He found an intrinsic inter-subject variability, and concluded
that such variability is a fundamental feature of BCI systems, probably related to the
nature of the BCI output pathway. In fact, there is an essential difference between
“classic” assistive devices and BCI systems: the former rely on the brain’s natural
output pathways, while the latter require that the central nervous system control the
cortical neurons instead of the spinal motoneurons. In order to achieve a more natural,
and therefore reliable, BCI system, it could be beneficial to shift the control strategy
from process-control to goal-selection. The BCI performance results were from using
BCI as assistive technology, and Wolpaw points out that paradigms like P300 should
be preferred to others like motor imagery in order to reduce the user's cognitive load.
The issues discussed by Wolpaw are also important when endeavoring to design
effective BCI-based neurorehabilitation protocols, since protocols based on motor
imagery can impose a considerable cognitive load.
3. Usability of BCI as a neurorehabilitation tool

As discussed in the previous section, BCI systems can be considered rehabilitation
devices in a wide sense, i.e., both as assistive technologies and as neurorehabilitative
tools in clinical settings. However, it must be remembered that any BCI system is a
high-technology device. Although the use of sophisticated technology is not new in
rehabilitation (technology-based approaches include cognitive prosthetics [42],
computer-assisted therapies [47], virtual reality [48] and robotic based rehabilitation
[49]), the introduction of BCI technology in rehabilitation raises issues related to its
acceptance and usability, as well as to its impact on the patient’s emotional and
motivational states.
Whatever the use of a BCI system (assistive or neurorehabilitation tool), it appears
more straightforward to classify BCI users as patients suffering either from progressive
diseases, leading to a locked-in condition, or from non-progressive ones (like stroke or
spinal cord injuries). In the case of progressive diseases, Neumann and Birbaumer [50]
believe that patients should be trained to use BCI early, so that they learn the necessary
cognitive skills before the disease is too advanced. These researchers also found that
the quality of the initial performance is a predictor of the efficiency of BCI use in the
future. However, according to Kuebler et al [51], the level of functional loss seems to
be more relevant than the specific disease when directing patients to BCI.
Although less evident, we should also consider, apart from the end user, other types
of users, namely operators: care-givers (for BCI as assistive technology) and therapists
(for BCI as a neurorehabilitation tool).
The patient's viewpoint, motivational and environmental factors, and the
expectations of care-givers have all been identified as relevant issues for the
successful application of BCI as assistive technology [51, 52]. Such experience
emphasizes the importance of considering a user-centered design approach when
designing a BCI system, in order to increase its acceptance rate. A user-centered design
of a BCI system, as well as of any other human-computer interface, requires the
identification of both end-user and operator expectations about what the system should
do, and how, as was pointed out by Lee et al [53] on considering the therapist’s
viewpoint in using a robotic system for rehabilitation, and by Doherty et al [54] for the
end users of the brain-body interface “Cyberlink”.
User-centered design is linked to usability analysis. According to the ISO 9241-11
standard [55], usability is defined as “the extent to which a product can be used by
specified users to achieve specified goals with effectiveness, efficiency and satisfaction
in a specified context of use”.
Many aspects underlie the ISO definition of usability, as shown by Doherty et al
[54]. These authors analyzed the performance of cerebral palsy or brain injury sufferers
while operating the Cyberlink system using different software interfaces. They report
how the users reacted differently to the proposed interfaces, providing a first set of
experience-based guidelines to design this kind of interface.
Interface usability contributes to improving the so-called working alliance between
patient and operator. Working alliance is recognized as an important factor for
successful medical treatment [56]. If the interaction between the operator and patient is
based on technology, its acceptance by both users is a key issue to achieving a working
alliance. Many models have been proposed to explain user technology acceptance, the
most frequently cited being the Technology Acceptance Model (TAM) proposed by Davis
(cited in [57]). Two constituent factors in this model are “perceived usefulness” and
“perceived ease of use”, which should be addressed when designing and proposing
technological tools. A recent attempt to adopt this model in rehabilitation is discussed
by Bertrand and Bouchard [58]. The authors considered a sample of adults familiar
with virtual reality, and assessed their perception toward the use of virtual reality for
mental health problems. They found that technology acceptance was more influenced
by perceived usefulness than by perceived ease of use for this sample of subjects,
pointing out that the relative relevance of these factors may be related to the specific
kind of users. In their integrative view of the model, Sun and Zhang [57] highlighted
the importance of both individual and contextual aspects, which actually play the role
of “relation's strength modulators” with respect to the constituent factors of the TAM
model.
Another important element in obtaining a working alliance is the therapist’s
perception of the emotional state of the patient. When interaction is mediated by a BCI
this perception tends to be lacking, but the integration of emotion detection tools into
the set-up can overcome this: in fact, a so-called “affective interface” system can
recognize the user status, and consequently can adapt the BCI system to the user’s
needs [59]. For instance, there is the possibility of estimating the cognitive workload
and the amount of mental fatigue from physiological signals collected during BCI tasks
(see next section). Such an approach can make a rehabilitative BCI tool more
acceptable to patients as affective human-computer interface systems can recognize
user states and adapt to them [59], thus providing better and safer interaction.
The last, but certainly not least, issue concerning the design of adaptive systems
is related to feedback and reaction to errors. As already mentioned, patients must
acquire new skills to use a BCI system: therefore, as pointed out by Jarosiewicz et al.
[31], the feedback subjects receive during their learning phase becomes particularly
important. Nevertheless, feedback can act as either reward or punishment: it can
improve performance and motivate the user, or degrade performance and thus
frustrate the user. Moreover, the feedback, which can be discrete or continuous [60]
as well as visual or auditory, should take into account constraints due to the clinical
status of the user [61]. Regardless of its nature, feedback is always the response of the
computer side of a BCI system, and arises as a result of signal processing procedures.
Due to the mutual adaptation between brain and machine, the user will react to the
presented feedback, comparing the system response to his/her own expectations of the
interaction results. Thus, wrong-performance feedback can arise from an incorrect
execution of the task or from incorrect data processing. In the latter case, the user
will produce an error potential [62] labelled as interaction error potential [63]. Since
rehabilitation is intimately linked to learning, the feedback needs to be used correctly,
and careful consideration must be given to the generation of error potentials.
4. Improving BCI usability: two experimental studies
As described in the previous section, the use of BCI in rehabilitation gives rise to
several critical issues. Two aspects appear particularly important for the design of an
effective neurorehabilitative BCI tool. The first concerns the physical design of the
acquisition section, a major limitation for BCI usability. The second concerns the
evaluation of the users’ affective states during BCI sessions. This second point is
important to optimize BCI system design, improving BCI acceptability and efficacy as
neurorehabilitation tools. In the following sections we illustrate how these problems are
addressed in relation to the acquisition of scalp EEG and to the assessment of the
psychophysiological effects of BCI usage.
4.1. Dry Electrodes

In clinical diagnostics, like electrocardiography and electroencephalography,
conventional electrodes are used extensively, requiring skin preparation and the
application of an electrolytic gel for high-quality recording of low-amplitude signals [64].
The gel establishes electrical contact between the electrode and the living
epidermis (made of cells and intercellular fluids containing water and electrolytes)
by penetrating the outermost layer of the skin, the stratum corneum, which essentially
consists of dead, dehydrated, insulating keratinocytes. However, the classical EEG detection
technology by means of conventional electrodes is not suitable for taking BCI from the
laboratory into the patient’s home because of the complexity in maintaining constant
signal transmission (gel dries during prolonged measurements) and because of the
accurate and long preparation needed (up to several minutes per electrode).
To overcome this long patient preparation, spiked dry electrodes able to pierce the
stratum corneum and directly reach the lower conductive epidermis layers in a painless
fashion are of great interest [65]. Such devices reduce artifacts caused by electrode
motion or skin stretching and, at the same time, optimize recordings because of their
reduced size, though they still present a large electrical contact area. Recently, micro
needle-based electrodes made of polymer materials and coated with gold have been
produced by means of a combination of Deep X-Ray Lithography, electroforming, and
soft lithography [66]. Important parameters for electrode characterization are optimal
needle density, needle length, tip sharpness and needle resistance to insertion. This
resistance is linked to both the tip angle and the height-to-base ratio of the micro
needles (aspect ratio) [67]. The extension of the electrode area is also a key issue for
both handling and insertion: large areas are required for proper handling, while
relatively small areas are of great importance to avoid obstacles that can prevent
insertion (dirt, hair).
In order to test the electrical characteristics of the dry electrodes described in [66],
we performed a sequence of acquisitions on one volunteer. The experiments consisted
of the acquisition of electrocardiogram (ECG, mV range) and EEG (µV range) signals
with micro-needle electrodes and with conventional electrodes, and of the comparison
of signals obtained by the new and the traditional sensors.
For ECG acquisition, a wet electrode, used as reference, was coupled with the
micropatterned one. Both electrodes were attached to the subject’s chest in precordial
position and connected to an amplifier; during acquisition the subject was asked to
move in order to test electrode measurement stability.
For the EEG acquisition, the electrodes were placed using a cap, according to the
extended 10-20 international system [68], and connected to a commercial EEG
amplifier. The acquisition setup (unipolar recording) included two smaller micro needle
dry electrodes (8x5 mm2, 300 solid needles) on C3 and C4. Traditional (i.e. wet)
electrodes were placed on the scalp after conventional skin preparation. The reference
and ground electrodes (P8 and Fpz, respectively) were also traditional electrodes.
During the acquisition, the subject was asked to rest with open eyes for a few minutes,
and then with closed eyes.
The ECG collected with the micro needle-based electrodes showed higher
intensities and better long-term stability than the ECG obtained with conventional
electrodes. In the EEG recordings both rhythms, beta (eyes open) and alpha (eyes
closed), were clearly recognizable in the signals collected with the dry electrodes. The
quality of the EEG recorded by dry electrodes was similar to that of the EEG obtained
using traditional electrodes [66].

4.2. Quantification of psychophysiological effects

As pointed out in the previous paragraph, the monitoring of psychophysiological
effects during BCI rehabilitation sessions is useful to evaluate the degree of user
engagement, motivation or fatigue. Moreover, the evaluation of long-term changes can
help to better assess how a user responds to a specific rehabilitation program. The
following paragraphs illustrate an experimental protocol designed to quantify the
physiological effects associated with emotional and cognitive loads, and with the
mental stress and fatigue induced by the BCI task. The focus of our study is on the BCI
paradigms corresponding to motor imagery and the P300 component of event related
potentials.

4.2.1. Monitored Physiological Signals


The experimental set-up is based on the collection of different physiological signals:
electroencephalogram (EEG), electrooculogram (EOG) and electrocardiogram (ECG),
by means of the same biosignal amplifier (gUSBamp, g.tec) at 256 Hz.
The scalp EEG was recorded using different numbers of electrodes, depending on
the subject’s task, positioned according to the extended 10-20 international system
[68], using Fpz and right mastoid process as sites for ground and reference electrodes
respectively. Whatever the task, EEG data from Fz, Cz, Pz and Oz [69, 70] were also
collected and further analysed to monitor the psychophysiological state of the subject:
in fact, specific EEG spectral components can be the source of important information
concerning the level of alertness and task engagement [71], and the level of task
difficulty [72].
The electrooculogram is recorded by placing ocular electrodes above and below
the left eye (E1 and E3 electrodes) and at the left and right outer canthi of the eyes (E5
and E6). Vertical (VEOG) and horizontal (HEOG) movements are then derived as the
E1-E3 and E5-E6 differences respectively [73]. The composition of the HEOG and
VEOG movements yields information on visual exploring strategies during each task.
Moreover, blinks are easily identified from the VEOG signal, and blink rate has been
demonstrated to depend on task demand and fatigue level [69, 74].
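The EOG derivations described above (VEOG = E1-E3, HEOG = E5-E6) and a simple blink count can be sketched as follows. This is a minimal illustration on a synthetic VEOG trace, not the authors' actual pipeline; the threshold value and the Gaussian blink shapes are assumptions for demonstration only.

```python
import numpy as np

def derive_eog(e1, e3, e5, e6):
    """Derive vertical and horizontal EOG as the E1-E3 and E5-E6 differences."""
    veog = np.asarray(e1) - np.asarray(e3)
    heog = np.asarray(e5) - np.asarray(e6)
    return veog, heog

def count_blinks(veog, fs=256, threshold=100.0, min_gap_s=0.2):
    """Count blinks as upward threshold crossings of the VEOG, merging
    crossings closer than min_gap_s (one blink = one large deflection)."""
    above = veog > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    if onsets.size == 0:
        return 0
    # keep only onsets separated by more than the refractory gap
    keep = np.concatenate(([True], np.diff(onsets) > min_gap_s * fs))
    return int(keep.sum())

# Synthetic example: flat VEOG with three blink-like 300 uV deflections
fs = 256
t = np.arange(10 * fs) / fs
veog = np.zeros_like(t)
for t0 in (2.0, 5.0, 8.0):
    veog += 300.0 * np.exp(-((t - t0) ** 2) / (2 * 0.05 ** 2))
print(count_blinks(veog, fs))  # 3
```

In a real recording the threshold would depend on amplifier scaling and the subject's blink amplitude; dividing the count by the recording duration then gives the blink rate used as a fatigue index.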
The patient instrumentation was completed by applying electrodes for recording a
one-lead electrocardiogram (ECG), the difference between electrodes placed on the left
and right clavicles. This single-channel ECG corresponds to the first lead of the classic
Einthoven lead system [75]. The ECG is recorded to derive the instantaneous heart rate
on a beat-by-beat basis as the time interval between consecutive R-peaks (R-R
interval). Activation of the autonomic tone can then be identified from an analysis of
the R-R interval variability [76]. Also respiratory rate can be estimated from the same
ECG recording: this is done by calculating the frequency of the modulations of the R-
peak amplitude [77]. In fact, thoracic movements (which are almost completely due to
respiration during our BCI sessions) modify the projection of the cardiac axis on the
ECG leads, changing the amplitude of the R wave.
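The two derivations just described (R-R intervals from R-peak times, and respiratory rate from the R-peak amplitude modulation) can be sketched as below. The peak detector is a naive threshold-plus-refractory scheme and the ECG is synthetic; both are illustrative assumptions, not the method used in the study.

```python
import numpy as np

def detect_r_peaks(ecg, fs, threshold=0.5, refractory_s=0.3):
    """Naive R-peak detector: local maxima above a threshold, with a
    refractory period to avoid double detections within one beat."""
    peaks, last = [], -np.inf
    for i in range(1, len(ecg) - 1):
        if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if i - last > refractory_s * fs:
                peaks.append(i)
                last = i
    return np.array(peaks)

def rr_intervals(peaks, fs):
    """Beat-by-beat R-R intervals in seconds."""
    return np.diff(peaks) / fs

# Synthetic ECG over 32 s: one sharp R-like pulse per second, amplitude
# modulated at 0.25 Hz (15 breaths/min) to mimic respiratory modulation
fs = 256
t = np.arange(32 * fs) / fs
ecg = np.zeros_like(t)
for beat in np.arange(0.5, 32, 1.0):
    amp = 1.0 + 0.2 * np.sin(2 * np.pi * 0.25 * beat)
    ecg += amp * np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))

peaks = detect_r_peaks(ecg, fs)
rr = rr_intervals(peaks, fs)
print(round(float(np.mean(rr)), 2))   # 1.0 s, i.e. 60 beats/min

# Respiratory rate from R-peak amplitudes: dominant frequency of the
# amplitude series, sampled once per beat
amps = ecg[peaks] - ecg[peaks].mean()
spectrum = np.abs(np.fft.rfft(amps))
freqs = np.fft.rfftfreq(len(amps), d=float(np.mean(rr)))
print(round(float(freqs[np.argmax(spectrum)]), 2))  # 0.25 Hz
```

Clinical detectors (e.g. Pan-Tompkins style filtering) are considerably more robust to noise; the point here is only the chain ECG → R-peaks → R-R series → respiratory frequency.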

4.2.2. Experimental Protocol


Psychophysiological signals were collected during two BCI sessions, one consisting of
a P300-speller task and one based on a motor imagery task. Both sessions comprised
training and performance phases, where the training phase is used by the BCI system to
acquire the data needed to initialize the classification algorithm (which is Linear
Discriminant Analysis for both tasks). The subject is fully aware that the training phase
produces meaningless feedback (P300-speller) or no feedback at all (motor imagery),
and that the algorithm has to learn how to extract and classify the features elicited by
the task.
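The training/performance split can be made concrete with a minimal two-class Linear Discriminant Analysis, the classifier named above. The implementation below (pooled within-class covariance, midpoint threshold) and the synthetic feature data are illustrative assumptions; the actual BCI feature extraction is not specified here.

```python
import numpy as np

class LDA:
    """Minimal two-class Linear Discriminant Analysis: project features on
    w = S^-1 (m1 - m0) and threshold at the midpoint between class means."""
    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        # pooled within-class covariance, lightly regularized for stability
        S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
        S += 1e-6 * np.eye(X.shape[1])
        self.w = np.linalg.solve(S, m1 - m0)
        self.b = -0.5 * self.w @ (m0 + m1)
        return self
    def predict(self, X):
        return (X @ self.w + self.b > 0).astype(int)

# "Training phase": labelled trials (synthetic stand-in features)
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y_train = np.array([0] * 40 + [1] * 40)
clf = LDA().fit(X_train, y_train)

# "Performance phase": classify new trials to drive the feedback
X_test = np.array([[-1.0, -1.0], [1.0, 1.0]])
print(clf.predict(X_test))  # [0 1]
```

In practice a library implementation (e.g. scikit-learn's `LinearDiscriminantAnalysis`) would be used; the sketch only shows why labelled training trials must precede the feedback phase.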
During the P300-speller sessions (both copy spelling), a 6x6 cell matrix containing
alphanumeric characters (one in each cell) is presented to the subject. For each trial,
rows and columns of the matrix flash 15 times, every flash lasting 60 ms with a dark
time between two flashes of 10 ms. The flashing order of rows and columns is random.
In each trial a letter is selected for communication, and the subject is asked to count
how many times the cell containing the selected letter flashes. During the training
phase, the subject communicates a predefined (meaningful) word of 10 letters, but, as
already mentioned, does not receive any meaningful feedback from the BCI system.
Therefore, the subject is told in advance to ignore the symbol printed on the screen at
the end of every repetition. During the performance phase, the subject communicates a
different predefined (meaningful) word of 10 letters. However, in this case the subject
receives meaningful feedback, i.e., at the end of each series of random flashes, the
character identified by the signal processing block is printed on the screen.
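The stimulation schedule described above can be sketched as follows. Whether the randomization is blockwise (each of the 12 stimuli flashed once per repetition block, as assumed here) is not stated in the text, so treat that detail as an assumption.

```python
import random

def flash_schedule(repetitions=15, seed=0):
    """Flashing order for one P300-speller trial: the 6 rows and 6 columns
    each flash `repetitions` times, in random order (blockwise here)."""
    stimuli = [('row', i) for i in range(6)] + [('col', j) for j in range(6)]
    rng = random.Random(seed)
    order = []
    for _ in range(repetitions):
        block = stimuli[:]          # each stimulus once per repetition block
        rng.shuffle(block)
        order.extend(block)
    return order

order = flash_schedule()
print(len(order))                                # 180 flashes per trial
print(sum(1 for s in order if s == ('row', 2)))  # 15: each stimulus flashes 15 times

# The target cell (say row 2, col 4) flashes whenever its row OR column does
target_flashes = sum(1 for s in order if s in (('row', 2), ('col', 4)))
print(target_flashes)                            # 30

# Trial duration with 60 ms flashes and 10 ms dark intervals
print(len(order) * (60 + 10) / 1000, 's')        # 12.6 s
```

This makes the counting task explicit: of the 180 flashes per trial, 30 are target flashes eliciting the P300, and a complete trial lasts about 12.6 s.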
In the motor imagery task, the subject looks at a fixation cross displayed in the
centre of the monitor. After 3 seconds, an arrow (cue stimulus) pointing to the right, or
the left, appears for 1.25s on the fixation cross. In the training phase, the subject is
asked to imagine a right or left hand movement, according to the direction indicated by
the arrow; this trial is repeated 40 times, with the arrow pointing randomly 20 times in
each of the two directions. In the performance phase, feedback appears while the
subject imagines the hand movement: the feedback is a horizontal bar indicating the
direction of the imagined movement (left or right) as identified online by the
computer.
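The motor-imagery trial structure can be sketched in the same spirit; the balanced random cue order follows the text (40 trials, 20 per direction), while the representation of the within-trial timeline is an illustrative choice.

```python
import random

def cue_sequence(n_trials=40, seed=0):
    """Balanced, randomly ordered cue sequence: 20 left and 20 right arrows."""
    cues = ['left'] * (n_trials // 2) + ['right'] * (n_trials // 2)
    random.Random(seed).shuffle(cues)
    return cues

cues = cue_sequence()
print(len(cues), cues.count('left'), cues.count('right'))  # 40 20 20

# Within-trial timeline (seconds from trial start), per the protocol:
# fixation cross alone for 3 s, then the cue arrow for 1.25 s
timeline = [('fixation_on', 0.0), ('cue_on', 3.0), ('cue_off', 3.0 + 1.25)]
print(timeline)
```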
The experimental session also consists of two reference conditions. One is the
basal resting condition, in which the subject is asked to sit quietly in front of the
computer monitor for 6 minutes: for the first 3 minutes with eyes open, looking at a
fixation point on the screen, and for the remaining 3 minutes with eyes closed. The
second reference condition is a mental arithmetic task. The subject is asked to perform
a Paced Auditory Serial Addition Test (PASAT) [78]. First, two series of single digits
are presented verbally every 3 seconds (10 digits for the probe and 60 digits for the test,
referred to in this text as PASAT-3) and the patient must add each new digit to the one
immediately prior to it. The test is then repeated with two different series of single
digits presented every 2 seconds (again, 10 digits for the probe and 60 digits for the
test, referred to in this text as PASAT-2). Sitting at rest and PASAT represent two
reproducible and standardized reference conditions characterized by different levels of
autonomic tone: a low sympathetic tone during rest, and an important activation of the
sympathetic tone, induced by the stress associated with the mental arithmetic, during
PASAT.
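The PASAT scoring rule, adding each new digit to the digit presented immediately before it (not to the previous answer), is a frequent point of confusion and is easy to state in code. The digit series below is an arbitrary example, not a test form.

```python
def pasat_correct_answers(digits):
    """Correct PASAT responses: each new digit is added to the digit
    presented immediately before it (not to the running sum)."""
    return [a + b for a, b in zip(digits, digits[1:])]

# Short illustrative series
digits = [3, 7, 2, 9, 4]
print(pasat_correct_answers(digits))   # [10, 9, 11, 13]

# A 60-digit test yields 59 responses; at one digit every 3 s (PASAT-3)
# the test lasts 180 s, at one every 2 s (PASAT-2) it lasts 120 s
print(len(pasat_correct_answers(list(range(60)))))  # 59
```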

4.2.3. Data Analysis


Here we show some examples which suggest how information on the psychological
and physiological effects produced by BCI tasks can be assessed by analyzing the
signals monitored during the rehabilitation sessions. Examples are derived from signals
recorded in one healthy volunteer (male, 28 years) who followed our previously
described experimental protocol.

4.2.3.1. EEG Spectrum


In the literature, specific components of the EEG spectrum have been used to monitor
the state of alertness or sleepiness of the subject, to reflect the difficulty of a task and to
identify lapses in attention [72]. For this reason we quantified spectral changes during
each BCI task. This was done by computing the power spectrum at Fz, Cz, Pz and Oz
during the two reference conditions (rest and PASAT), and during the training and
performance phases of the P300 speller and motor imagery tasks. Spectra were
normalized with respect to their total power, and expressed as a percentage of the value
measured in the “eyes open - sitting at rest” condition (our reference condition in this
analysis). As an example, figure 2 shows the results obtained for Fz. Clear differences
appear between the training and performance phases in the weight of the low-frequency
components, and between the BCI tasks and PASAT. These spectral alterations may
help to better understand the mental load of the subject and to monitor his/her
psychophysiological state.

Figure 2. Changes of EEG spectral components during different BCI tasks and during a mental arithmetic
test (PASAT-2) calculated for Fz. Each histogram shows the ratio between the EEG spectrum obtained
during the task and the spectrum obtained in the reference condition (eyes open - sitting at rest). Each
spectrum was normalized by its total power before computing the ratio.
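The normalization and ratio computation can be sketched as below. The chapter does not specify the spectral estimator, so a plain FFT periodogram on synthetic two-tone signals is assumed here purely for illustration.

```python
import numpy as np

def normalized_spectrum(x):
    """Power spectrum via a simple periodogram, normalized by total power."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd / psd.sum()

def spectral_ratio(task, reference, eps=1e-12):
    """Ratio (in %) between the normalized task spectrum and the
    normalized reference ('eyes open - sitting at rest') spectrum."""
    return 100.0 * normalized_spectrum(task) / (normalized_spectrum(reference) + eps)

# Synthetic check: same 10 Hz rhythm in both conditions, but the 20 Hz
# component is stronger during the "task"
fs = 256
t = np.arange(4 * fs) / fs
reference = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
task      = np.sin(2 * np.pi * 10 * t) + 1.5 * np.sin(2 * np.pi * 20 * t)

freqs = np.fft.rfftfreq(len(t), 1 / fs)
ratio = spectral_ratio(task, reference)
# after total-power normalization, 20 Hz gains relative weight, 10 Hz loses it
print(ratio[freqs == 20][0] > 100, ratio[freqs == 10][0] < 100)  # True True
```

Because each spectrum is normalized by its own total power, the ratio reflects shifts in the *relative* weight of frequency bands rather than absolute power changes, which is why the 10 Hz component drops below 100% even though its absolute amplitude is unchanged.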

4.2.3.2. Neurovegetative Responses

Activation of the sympathetic branch of the autonomic nervous system can also be
expected during BCI tasks. The reasons for such activation may be related to the stress
induced by the mental tasks required during BCI sessions [79], to the phenomena
related to the expectation of positive feedback from the computer, or to the frustration
when negative feedback appears, and, finally, to the mechanisms of motor anticipation
and programming activated by motor imagery tasks [12, 80]. A way to continuously
monitor the neurovegetative tone is through an analysis of heart rate variability [81]. A
“heart rate variability” signal can be derived from the ECG. This is done first by
identifying the time of occurrence of the R-peak of each heart beat, and then by
calculating the time intervals between consecutive R peaks on a beat-by-beat basis. The
series of R-R intervals strongly reflects any change in the cardiac outflow of the
parasympathetic and sympathetic systems. Indeed, it has been shown that the spectral
power of the time series of R-R intervals, calculated over a high frequency (HF, from
0.15 to 0.40 Hz) band, mainly reflects the vagal tone, while power in a low frequency
(LF, from 0.04 to 0.15 Hz) band is influenced by both cardiac sympathetic and vagal
outflows. This evidence suggested considering the ratio between power in the LF and
HF bands, the so-called LF/HF power ratio, as an indirect index of the sympatho-vagal
balance [76].
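The LF/HF computation can be sketched as follows. Since the R-R series is unevenly spaced in time, it is first resampled on a uniform grid before spectral estimation; the 4 Hz resampling rate, the periodogram estimator, and the synthetic R-R series are illustrative assumptions, while the band limits (0.04-0.15 Hz and 0.15-0.40 Hz) follow the text.

```python
import numpy as np

def lf_hf_ratio(rr, resample_hz=4.0):
    """LF/HF power ratio from a series of R-R intervals (seconds).
    The unevenly spaced R-R series is linearly resampled on a uniform
    grid, then band powers are integrated from a simple periodogram."""
    rr = np.asarray(rr, dtype=float)
    beat_times = np.cumsum(rr)                      # time of each beat
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / resample_hz)
    rr_even = np.interp(grid, beat_times, rr)
    rr_even = rr_even - rr_even.mean()
    psd = np.abs(np.fft.rfft(rr_even)) ** 2
    freqs = np.fft.rfftfreq(len(rr_even), 1.0 / resample_hz)
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = psd[(freqs >= 0.15) & (freqs <= 0.40)].sum()
    return lf / hf

# Synthetic R-R series (~300 beats at ~1 s): equal-amplitude oscillations
# at 0.1 Hz (LF) and 0.3 Hz (HF, respiratory sinus arrhythmia)
t = np.arange(300)
rr = 1.0 + 0.03 * np.sin(2 * np.pi * 0.1 * t) + 0.03 * np.sin(2 * np.pi * 0.3 * t)
balanced = lf_hf_ratio(rr)

# Doubling the LF oscillation (as under sympathetic activation) raises the ratio
rr_stress = 1.0 + 0.06 * np.sin(2 * np.pi * 0.1 * t) + 0.03 * np.sin(2 * np.pi * 0.3 * t)
print(lf_hf_ratio(rr_stress) > balanced)  # True
```

Standard HRV practice recommends autoregressive or Welch estimators on several minutes of data [76]; the sketch only shows the structure of the computation behind figure 3.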
Figure 3. Values of the LF/HF power ratio normalized with respect to the sitting at rest condition (100%
basal condition) during a mental arithmetic test (PASAT) and during different BCI tasks.
Figure 3 shows our volunteer’s level of LF/HF power ratios evaluated during
different tasks of the experimental protocol. Values are expressed as percentage of the
basal at rest condition, i.e., one of the two reference conditions (dashed line).
The second reference condition is the mental arithmetic test (PASAT). Clearly
mental arithmetic induced sympathetic activation, which is reflected in a substantially
higher LF/HF power ratio. The training phase of the P300 speller induces a similarly
high sympathetic activation, probably produced by the same mechanisms of mental
stress activated during the mental arithmetic test. The actual performance of the P300
speller is associated with an even larger LF/HF power ratio. Unlike the training
phase, during the performance of this BCI task the subject receives meaningful
feedback from the computer (in this case, the identification of the letter selected for
communication). It is likely that the presence of feedback, particularly the expectation
of correct letter recognition, is responsible for the further sympathetic activation
observed during the P300 performance. Similarly to the P300-based BCI, the BCI
sessions based on the motor imagery paradigm are also characterized by a larger LF/HF
ratio when feedback is present. However, it is worth noting that during both the
training and performance phases, motor imagery tasks are associated with a larger
autonomic activation than the corresponding P300-speller tests. It can be hypothesized
that the difference is accounted for by the additional sympathetic activation associated
with motor planning and preparation [11].

Conclusions

The inherent complexity of using any BCI system derives from its peculiar feature: the
interaction between man and machine does not require any muscular activation. This
means that, unlike classical human-computer interfaces, the user's commands follow
non-natural output pathways. We have shown that this peculiar feature makes BCI systems
not only valuable assistive devices for people with severe motor disabilities, but also
real rehabilitative tools. In fact, by stimulating patients to acquire new skills, and
activating specific cortical areas, BCIs might also be used for innovative and effective
neurorehabilitation therapies. Indeed, we have reviewed studies investigating this
possible use of BCI, and reported the first promising clinical applications.
However, clinical BCI applications are still very limited, and many critical issues
need to be addressed before we can see effective systems for “neurorehabilitative BCI”
operating in clinical settings or in patients’ homes. We have described the most
significant points that need to be considered when designing, selecting and using a BCI
system for neurorehabilitation. Furthermore, we have emphasized the importance of
technology acceptance and usability. The problems that today limit the practical use of
BCI in neurorehabilitation will probably be overcome when new technologies provide
non-conventional sensors for less obtrusive brain signal recording, and affective
interfaces able to adapt the BCI according to emotional status changes in the patient.
Our experimental results regarding the possible use of dry electrodes, and the online
monitoring of the psychophysiological effects of BCI tasks, suggest a way of
addressing these problems.

References

[1] E. Niedermeyer, Historical aspects, in E. Niedermeyer, F. Lopes da Silva, Electroencephalography.
Basic Principles, Clinical Applications, and Related Fields, 5th ed., Lippincott Williams & Wilkins,
Baltimore, 2004, pp. 1-16.
[2] J.J. Vidal, Towards direct-brain-computer communication, Annual Review of Biophysics and
Bioengineering 2 (1973), 157-180.
[3] J.J. Vidal, Real-Time Detection of Brain Events in EEG, Proceedings IEEE 65 (1977), 633-641.
[4] N. Birbaumer, Breaking the silence: Brain-computer Interface (BCI) for communication and motor
control, Psychophysiology 43 (2006), 517-522.
[5] G. Townsend, B. Graimann, and G. Pfurtscheller, Continuous EEG classification during motor imagery
-Simulation of an Asynchronous BCI, IEEE Transactions on Neural Systems and Rehabilitation
Engineering 12 (2004), 258-265.
[6] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, T.M. Vaughan, Brain-Computer
interfaces for communication and control, Clinical Neurophysiology 113 (2002), 767-791.
[7] S.G. Mason, and G.E. Birch, A general framework for brain-computer interface design, IEEE
Transactions on Neural Systems and Rehabilitation Engineering 11 (2003), 70-85.
[8] A. Bashashati, M. Fatourechi, R.K. Ward, and G.E. Birch, A survey of signal processing algorithms in
brain-computer interfaces based on electrical brain signals, Journal of Neural Engineering 4 (2007),
R32-R57.
[9] F. Lotte, M. Congedo, A. Lecuyer, F. Lamarche, and B. Arnaldi, A review of classification algorithms
for EEG-based brain-computer interfaces, Journal of Neural Engineering 4 (2007), R1-R13.
[10] D.J. McFarland, C.W. Anderson, K.R. Mueller, A. Schloegl, and D.J. Krusienski, BCI Meeting 2005-
Workshop on BCI Signal Processing: Features Extraction and Translation, IEEE Transactions on
Neural Systems and Rehabilitation Engineering 14 (2006), 135-138.
[11] G. Schalk, P. Brunner, L.A. Gerhardt, H. Bischof, J.R. Wolpaw, Brain-computer interfaces (BCIs):
detection instead of classification, Journal of neuroscience methods 167 (2008), 51-62.
[12] J. Decety, The neurophysiological basis of motor imagery, Behavioural Brain Research 77 (1996), 45-
52.
[13] M. Lotze, U. Halsband, Motor imagery, Journal of Physiology 99 (2006), 386-395.
[14] G. Pfurtscheller, C. Neuper, Motor imagery activates primary sensorimotor areas in humans,
Neuroscience Letters 239 (1997), 65-68.
[15] G. Pfurtscheller, C. Neuper, D. Flotzinger, M. Pregenzer, EEG-based discrimination between
imagination of right and left hand movement, Electroencephalography and Clinical Neurophysiology
103 (1997), 642-651.
[16] J. Polich, Updating P300: An integrative theory of P3a and P3b, Clinical Neurophysiology 118 (2007),
2128-2148.
[17] L.A. Farwell, and E. Donchin, Talking off the top of your head: toward a mental prosthesis utilizing
event-related brain potentials, Electroencephalography and Clinical Neurophysiology 70 (1988), 510-
523.
[18] E.W. Sellers, E. Donchin, A P300-based brain–computer interface: Initial tests by ALS patients,
Clinical Neurophysiology 117 (2006), 538-548.
[19] N. Birbaumer, Slow cortical potentials: plasticity, operant control and behavioral effects, The
Neuroscientist 5 (1999), 74-78.
[20] G.R. Mueller-Putz, R. Scherer, C. Brauneis, and G. Pfurtscheller, Steady-state visual evoked potential
(SSVEP)-based communication: impact of harmonic frequency components, Journal of Neural
Engineering 2 (2005), 123-130.
[21] http://www.who.int/topics/rehabilitation/en/
[22] B.H. Dobkin, Brain-computer interface technology as a tool to augment plasticity and outcomes for
neurological rehabilitation, The Journal of Physiology 579 (2007), 637-642.
[23] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey et al, A spelling device for the
paralysed, Nature 398 (1999), 297-298.
[24] J. Philips, J. del R. Millán, G. Vanacker, E. Lew, F. Galán, P.W. Ferrez, H. Van Brussel, and M. Nuttin,
Adaptive Shared Control of a Brain-Actuated Simulated Wheelchair, Proceedings of the 2007 IEEE
10th International Conference on Rehabilitation Robotics, Noordwijk, The Netherlands, 408-414.
[25] R. Leeb, D. Friedman, G.R. Mueller-Putz, R. Scherer, Mel Slater and G. Pfurtscheller, Self-Paced
(Asynchronous) BCI Control of a Wheelchair in Virtual Environments: A Case Study with a
Tetraplegic, Computational Intelligence and Neuroscience (2007), 1-8
[26] L.R. Hochberg, M.D. Serruya, G.M. Friehs, J.A. Mukand, M. Saleh et al., Neuronal ensemble control
of prosthetic devices by a human with tetraplegia, Nature 442 (2006), 164-171.
[27] G. Pfurtscheller, G.R. Mueller, J. Pfurtscheller, H.J. Gerner, R. Rupp, ‘Thought’ – control of functional
electrical stimulation to restore hand grasp in a patient with tetraplegia, Neuroscience Letters 351
(2003), 33-36.
[28] D.V. Buonomano, M.M. Merzenich, Cortical plasticity: from synapses to maps, Annual Review of
Neuroscience 21 (1998), 149-186.
[29] C. Kelly, J.J. Foxe, H. Garavan, Patterns of normal human brain plasticity after practice and their
implications for neurorehabilitation, Archives of Physical Medicine and Rehabilitation 87 (2006), S20-
S29.
[30] J.J. Daly, J.R. Wolpaw, Brain-computer interfaces in neurological rehabilitation, The Lancet Neurology
7 (2008), 1032-43.
[31] B. Jarosiewicz, S.M. Chase, G.W. Fraser, M. Velliste, R.E. Kass, A.B. Schwartz, Functional network
reorganization during learning in a brain-computer interface paradigm, Proceedings of the National
Academy of Sciences USA 105 (2008), 19486-91.
[32] J.A. Kleim, T.A. Jones, Principles of experience-dependent neural plasticity: implications for
rehabilitation after brain damage, Journal of Speech and Hearing Research 51 (2008), S225-39.
[33] E. Buch, C. Weber, L.G. Cohen, C. Braun, M.A. Dimyan et al., Think to move: a neuromagnetic Brain-
Computer Interface (BCI) system for chronic stroke, Stroke 39 (2008), 910-917.
[34] C. Stinear, P.A. Barber, J.P. Coxon, M.K. Fleming, B.D. Winston, Priming the motor system enhances
the effects of upper limb therapy in chronic stroke, Brain 131 (2008), 1381-1390.
[35] N. Sharma, V.M. Pomeroy, J.C. Baron, Motor imagery: a backdoor to the motor system after stroke?,
Stroke 37 (2006), 1941-52.
[36] T. Mulder, Motor imagery and action observation: cognitive tools for rehabilitation, Journal of Neural
Transmission 114 (2007), 1265-1278.
[37] C.M. Stinear, W.D. Byblow, M. Steyvers, O. Levin, S.P. Swinnen, Kinesthetic, but not visual, motor
imagery modulates corticomotor excitability , Experimental Brain Research 168 (2006), 157-164.
[38] C. Neuper, R. Scherer R, M. Reiner, and G. Pfurtscheller, Imagery of motor actions: differential effects
of kinesthetic and visual-motor mode of imagery in single-trial EEG, Brain research. Cognitive brain
research 25 (2005), 668-77.
[39] P.L. Jackson, M.F. Lafleur, F. Malouin, C. Richards, J. Doyon, Potential role of mental practice using
motor imagery in neurologic rehabilitation, Archives of Physical Medicine and Rehabilitation 82
(2001), 1133-1141.
[40] S.M. Braun, A.J. Beurskens, P.J. Borm, T. Schack, D.T. Wade, The effects of mental practice in stroke
rehabilitation: a systematic review, Archives of Physical Medicine and Rehabilitation 87 (2006), 842-
852.
[41] R. Tamir, R. Dickstein, M. Huberman, Integration of motion imagery and physical practice in group
treatment applied to subjects with Parkinson's disease, Neurorehabilitation and Neural Repair 21
(2007), 68-75.
[42] B.A. Wilson, Neuropsychological Rehabilitation, Annual Review of Clinical Psychology 4 (2008), 141-
162.
[43] K.D. Cicerone, C. Dahlberg C, J.F. Malec, D.M. Langenbahn, T. Felicetti et al, Evidence-based
cognitive rehabilitation: updated review of the literature from 1998 through 2002, Archives of Physical
Medicine and Rehabilitation 86 (2005), 1681-1692.
[44] A.R. O'Brien, N. Chiaravalloti, Y. Goverover, J. DeLuca, Evidence-based cognitive rehabilitation for
persons with multiple sclerosis: a review of the literature, Archives of Physical Medicine and
Rehabilitation 89 (2008), 761-769.
[45] R. Neshige, T. Endou, T. Miyamoto et al, Proposal of P300 biofeedback therapy in patients with mental
disturbances as cognitive rehabilitation, The Japanese Journal of Rehabilitation Medicine 32 (1995),
323-329.
[46] J.R. Wolpaw, Brain-computer interfaces as new brain output pathways, The Journal of Physiology 579
(2007), 613-619.
[47] J.H. Wright, A.S. Wright, AM. Albano, M.R. Basco, L.J. Goldsmith et al, Computer-assisted cognitive
therapy for depression: maintaining efficacy while reducing therapist time, The American Journal of
Psychiatry 162 (2005), 1158-1164.
[48] R.M.E.M. da Costa, L.A.V. de Carvalho, The acceptance of virtual reality devices for cognitive
rehabilitation: a report of positive results with schizophrenia, Computer Methods and Programs in
Biomedicine 73 (2004), 173-182.
[49] W.S. Harwin, J.L. Patton, V.R. Edgerton, Challenges and opportunities for robot-mediated
neurorehabilitation, Proceedings of the IEEE 94 (2006), 1717-1726.
[50] N. Neumann, N. Birbaumer, Predictors of successful self control during brain-computer
communication, Journal of Neurology, Neurosurgery & Psychiatry 74 (2003), 1117-1121.
[51] A. Kuebler, V.K. Mushhwar, L.R. Hochberg, and J.P. Donoghue, BCI Meeting 2005-Workshop on
clinical issues and applications, IEEE Transactions on Neural Systems and Rehabilitation Engineering
14 (2006), 131-134.
[52] N. Neumann, A. Kuebler, Training locked-in patients: a challenge for the use of brain-computer
interfaces, IEEE Transactions on Neural Systems and Rehabilitation Engineering 11 (2003), 169-172.
[53] M. Lee, M. Rittenhouse, and H.A. Abdullah, Design Issues for Therapeutic Robot Systems: Results
from a Survey of Physiotherapists, Journal of Intelligent and Robotic Systems 42 (2005), 239–252.
[54] E. Doherty, G. Cockton, C. Bloor, D. Benigno, Improving the Performance of the Cyberlink Mental
Interface with the “Yes/No Program”, Proceedings of the SIGCHI conference on Human factors in
computing systems 3 (2001), 69-76.
[55] N. Bevan, Ux, Usability and ISO standards, ACM Proceedings of CHI, Florence, Italy, 2008.
[56] J.N. Fuertes, A. Mislowack, J. Bennett et al, The physician–patient working alliance, Patient Educ
Couns 66 (2007), 29-36.
[57] H. Sun, P. Zhang, Role of moderating factors in user technology acceptance, Int. J. Human-Computer
Studies 64 (2006), 53-78.
[58] M. Bertrand, S. Bouchard, Applying the Technology Acceptance Model to vr with people who are
favorable to its use, Journal of CyberTherapy & Rehabilitation 1 (2008), 200-210.
[59] E. Hudlicka, To feel or not to feel: the role of affect in human-computer interaction, Int. J. Human-
Computer Studies 59 (2003), 1-32.
[60] C. Neuper, A. Schlögl, G. Pfurtscheller, Enhancement of Left-Right Sensorimotor EEG Differences
During Feedback-Regulated Motor Imagery, Journal of Clinical Neurophysiology 16 (1999), 373-382.
[61] M. Pham, T. Hinterberger, N. Neumann, A. Kuebler, N. Hofmayer et al, An Auditory Brain-Computer
Interface Based on the Self-Regulation of Slow Cortical Potentials, Neurorehabilitation and Neural
Repair 19 (2005), 206-218.
[62] G. Schalk, J.R. Wolpaw, D.J. McFarland, G. Pfurtscheller, EEG-based communication: presence of an
error potential, Clinical neurophysiology 111 (2000), 2138-2144.
[63] P.W. Ferrez, J del R. Millan, Error-Related EEG Potentials Generated During Simulated Brain–
Computer Interaction, IEEE Transaction on Biomedical Engineering 55 (2008), 923-929.
[64] A.D. Legatt, Intraoperative Neurophysiologic Monitoring: Some Technical Considerations, The
American journal of EEG technology 35 (1995), 167–200.
[65] B.A. Taheri, R.T. Knight, R.L. Smith, A dry electrode for EEG recording, Electroencephalography
and Clinical Neurophysiology 90 (1994), 376-383.
[66] M. Matteucci, R. Carabalona, M. Casella, E. Di Fabrizio, F. Gramatica, M. Di Rienzo, E. Snidero, L.
Gavioli, and M. Sancrotti, Micropatterned dry electrodes for brain-computer interface, Microelectronic
Engineering 84 (2007), 1737-1740.
[67] P.M. Wang, M.G. Cornwell, and M.R. Prausnitz, Effects of microneedle tip geometry on injection and
extraction in the skin, Proceedings of the second joint EMBS/BMES Conference, Houston, TX, USA,
2002, 23-26.
[68] M.R. Nuwer, G. Comi, R. Emerson et al, IFCN standards for digital recording of clinical EEG.
International Federation of Clinical Neurophysiology, Electroencephalography and Clinical
Neurophysiology 106 (1998), 259-61.
[69] F. Yamada, Frontal midline theta rhythm and eyeblinking activity during a VDT task and a video game:
useful tools for psychophysiology in ergonomics, Ergonomics 41 (1998), 678-688.
[70] W. Klimesch, EEG alpha and theta oscillations reflect cognitive and memory performance: a review
and analysis, Brain Research Review 29 (1999), 169-195.
[71] A.T. Pope, E.H. Bogart, and D.S. Bartolome, Biocybernetic system evaluates indices of operator
engagement in automated task, Biological Psychology 40 (1995), 187-195.
[72] J. Allanson, and S.H. Fairclough, A research agenda for physiological computing, Interacting with
Computers 16 (2004), 857-878.
[73] R.J. Croft, and R.J. Barry, Removal of ocular artifact from the EEG: a review, Clinical
Neurophysiology 30 (2000), 5-19.
[74] J.A. Stern, D. Boyer, and D. Schroeder, Blink rate: a possible measure of fatigue, Hum. Factors 36
(1994), 285-297.
[75] J. Malmivuo, and R. Plonsey, 12-Lead ECG System, in Bioelectromagnetism - Principles and
Applications of Bioelectric and Biomagnetic Fields New York, Oxford University Press, 1995, pp.277-
288.
[76] Task Force of the European Society of Cardiology and the North American Society of Pacing and
Electrophysiology, Heart rate variability. Standards of measurement, physiological interpretation, and
clinical use, European Heart Journal 17 (1996), 354-381.
[77] G. Moody, R. Mark, M. Bump, J. Weinstein, A. Berman, J. Mietus, and A. Goldberger, Clinical
Validation of the ECG-Derived Respiration (EDR) Technique, Computers in Cardiology, 13 ed
Washington, D.C. IEEE Computer Society Press, 1986, pp. 507-510.
[78] S.M. Rao, G.J. Leo, V.M. Haughton, P. St Aubin-Faubert, and L. Bernardin, Correlation of magnetic
resonance imaging with neuropsychological testing in multiple sclerosis, Neurology 39 (1989), 161-
166.
[79] P. Hjemdahl, U. Freyschuss, A. Juhlin-Dannfelt, and B. Linde, Differentiated sympathetic activation
during mental stress evoked by the Stroop test, Acta Physiologica Scandinavica 527 (1984), 25-29.
[80] K. Oishi, and T. Maeshima, Autonomic nervous system activities during motor imagery in elite
athletes, Journal of Clinical Neurophysiology 21 (2004), 170-179.
[81] S. Akselrod, D. Gordon, F.A. Ubel, D.C. Shannon, A.C. Berger, and R.J. Cohen, Power spectrum
analysis of heart rate fluctuation: a quantitative probe of beat-to-beat cardiovascular control, Science
213 (1981), 220-222.
Why Is Music Effective in Rehabilitation?

Alessandro ANTONIETTI a,1
a Department of Psychology, Catholic University of the Sacred Heart, Milano, Italy

Abstract. In this chapter a conceptual foundation for employing music in
rehabilitation is highlighted. The basic assumption is that, when a person is
involved in performing or listening to music, she has a comprehensive experience
in which several mental registers are activated simultaneously. The specific effect
of music is to trigger a coordinated action of motor, visuospatial and verbal
mechanisms. Thanks to the synergic activation of these mechanisms, music can
stimulate, support and drive the mental functions to be rehabilitated.

Keywords. Music, Music Therapy, Rehabilitation

Introduction

In what sense can music be considered a technology? In a superficial sense, music is a
technology because, for it to be produced and enjoyed, it needs tools. Except for
singing, every musical performance is mediated by some artefact, which can be very
simple and primitive – such as cut reeds, tanned and stretched animal skins, roughly
moulded metal sheets – or very sophisticated, as in the case of electronic equipment
that generate new kinds of sounds and new ways of interacting with sounds [1, 2].
Material devices are also required for music reproduction on all occasions except at
a live concert. In this regard, the range of traditional technologies – radio, long playing
records, audiotapes – has recently been expanded (or replaced) by new technological
opportunities [3, 4, 5]. But music can also be understood as a technology in a less
superficial sense. Music, as a symbolic system, is a cognitive technology, an extension
or prosthesis of intelligence, a form of embodiment of thought whereby mental life
expresses and builds itself. In this perspective music is a tool of the mind and, as such,
it allows for interesting opportunities for rehabilitation.
Attempts at using music for therapeutic goals date back a long time [6]. A
statement about psychology made by Ebbinghaus may also be true for this tradition,
which is often labelled as music therapy [7, 8]: it has a long past but a recent history.
Indeed, a scientific approach to understanding the benefits people can get from music
has developed only in the last few decades. Actually the variety of music-based
methods employed with therapeutic purposes is quite wide, as is the range of different
situations in which such methods are offered. In the neurorehabilitation field, the
spectrum of potential patients benefitting from music therapy interventions is broad,
ranging from motor deficits to speech disorders, from cognitive deterioration to

1
Corresponding Author: Department of Psychology, Catholic University of the Sacred Heart, Largo
Gemelli 1, 20123 Milano, Italy; E-mail: alessandro.antonietti@unicatt.it.
dysfunction in emotional control, from coma conditions to hyperactivity [9, 10]. To
simplify the picture, it is possible to identify three large categories in the therapeutic-
rehabilitative utilisation of music:
− music is used to induce a psycho-physiological state (usually relaxation, and
more generally a state of well-being), a mood (calmness, joy) or an attitude
(for example, emotional disinhibition) that can either be the goal itself of the
intervention or serve to introduce another intervention (for example, a training
session or a psychotherapeutic session) aiming to facilitate the processes
meant to be triggered in the patient. In this respect Sorrell and Sorrell [11], for
example, have noted that music improves the quality of life of elderly people
and provides motivational input for performing rehabilitation exercises; similar
beneficial effects of music are found in patients suffering from Parkinson's
disease [12]. It has also been found that elderly people perform better in
working memory tasks if they are listening to music [13];
− music is used to activate behaviours and mental operations that need to be
rehabilitated. In this case music can either perform an accompanying function
(for example, in combination with motor exercises or while pronouncing
sentences) or be the main or exclusive task in which the patient is absorbed
(for example, in the context of (re)training memory functions, the patient can
be involved in an activity based on melody recognition). Along this line, just
to give an example, Bannan and Montgomery-Smith [14] have shown that
involving patients with Alzheimer’s dementia in choral singing enables them
to improve their attention skills; a similar result has been found with brain
damaged patients with the aid of technological tools [15]. Music-based
activities can produce not only the recovery of a specific function such as
attention, but also a more general intellectual recovery. Särkämö et al. [16]
have shown that having patients listen to music on a daily basis results in a
better post-stroke cognitive recovery than listening to audiobooks;
− music is used to stimulate the person to relate to other people [17]. Here, use
is made of either a general communicative value of music (by listening to
music one can share it with other people; by singing and/or playing with
others one can perform something together) or more specific interpersonal
dynamics triggered by music (for example, music enables us to express
responses and experiences that we would otherwise be incapable of
verbalizing). An example of such a use of music is activities aimed at
improving social interactions in patients after strokes or traumatic brain injuries
[18, 19], or the attempt to communicate through sounds with patients in a
minimal consciousness state or in prolonged coma [20].
In the first and third cases mentioned above, music seems to play a non-specific
role, as it aims at very general goals (to motivate the patient, to induce an appropriate
state of mind, to make interpersonal contacts, to socialize, etc.) or is preparatory to or
complementary with other kinds of intervention. But in the second case music seems to
be appreciated for what it can specifically produce. As DeNora argued, music is not
merely a communication medium, nor is it used by people just to produce sought-after
emotional states and escape unwelcome conditions. Music is also a tool for action
and reflection. We use music to recall important people or events of our life. Music can
affect individuals by changing the way their body is arranged, their behaviour, their
way of experiencing, their self-perception and their way of perceiving other people and
situations. In short, music has a transformative power; it does things, changes things
and allows things to happen [21]. It is then a matter of understanding why
the specific use of music can result in the achievement of goals in the field of
neurorehabilitation.

1. Music as a multi-register tool

The key idea we intend to develop here is that when someone engages with music,
whether in a receptive mode (listening) or a productive mode (performing), the person
has a comprehensive experience in which several mental registers are activated
simultaneously and synergistically. The specific effect of music, or at least the effect we
would like to emphasize here, is to trigger a coordinated action of multiple mechanisms.
This peculiarity can serve as the foundation for the efficacy of sound-based
rehabilitation treatments. These mechanisms can be identified along three lines,
corresponding with three relevant cognitive registers available to humans: motor,
iconic and verbal.
These three registers follow a recurrent distinction in psychology that has been
acknowledged by different theories and has been effectively systematised in Bruner’s
work. He identified three developmental stages; in each one there is a specific system
of mental representation: enactive, iconic or symbolic [22]. First, the child’s motor
behaviour shows definite strategies for action that make us assume that movements are
guided by mental representations. These are the enactive representations, formed by
operational patterns, i.e. patterns that coordinate the sequence of different acts or
segments forming a movement. Iconic representations are independent of action,
though still bound to perception, since they are formed by images or spatial schemata.
They allow the representation of states, relations,
or transformations of events. To perform tasks that require abstraction one needs
symbolic or verbal representations, which operate through concepts, categories and
hierarchies. The tripartition suggested by Bruner between enactive, iconic and verbal-
symbolic can be useful in making our point, because it helps us to identify three
registers, that is, lines along which the mental processes activated by music unfold and
to find in these lines some likely reasons why music-based rehabilitation interventions
can be successful.
Firstly, music activates the motor mode, because it is naturally connected with the
body. Music is always initiated by a body gesture (blowing, beating, etc.). Moreover, a
lot of music is composed keeping in mind the actions it is supposed to accompany
(dance or military march, for instance). Some African cultures have no specific word
for music alone; a single term designates the joint presence of music and dance. In
many contexts music accompanies work: in Ghana gardeners
work more swiftly when accompanied by music; in the Hebrides the activity of textile
workers is accompanied by songs that change according to the movements to be
performed; some songs of sailors change according to the required manoeuvres [21].
Blacking [23] emphasised and supported the notion of music as strongly
embedded in body movements, a point corroborated by his long experience of
studying African music. He argued that physical-motor experience gives sounds a
meaning different from the one we perceive with our ears alone.
From an ontogenetic point of view, the connection between music and movement
develops very early. Moog [24] observed that 4-month-olds start to respond to music
with large body movements. Phillips-Silver and Trainor [25] reported that at 7 months
of age infants show a preference for a rhythm associated with synchronised
rocking of the cradle. At 18 months of age children spontaneously perform rhythmical
movements synchronised with sounds, while they are listening to music [26]. At a later
age, the connection between music and movement does not require involvement of
one’s own body. For example, Boone and Cunningham [27] asked 4- and 5-year-olds to
make a teddy bear dance according to the emotional features of short musical segments
while they were listening to them. Afterwards, adults were shown the children’s
videotaped performances without the accompanying musical track and were
asked to identify the emotion that the body movement was intended to express. Results
showed that children succeeded in moving the teddy bear so as to express the
emotional meaning of the music. The detailed analysis of how children manipulated the
teddy bear showed that upward movements, rotations, shifts, as well as the tempo and
the force of the movements, differed significantly according to the expressive meaning
of the corresponding music.
Secondly, music carries an iconic, i.e. a visuospatial, component. Music, at least in
some circumstances, seems to translate spontaneously into images, so much so that in
German the term Tonmalerei (painting with sounds) has been coined in order to
indicate the possibility of depicting visual pictures through musical notes. To
corroborate the fact that, besides a motor element, also an iconic dimension belongs to
music, we can recall that in some non-Western musical cultures the performer’s activity
is guided by spatial representations rather than by sound representations. But in the
Western world too music is connected with visual thought. For example, it has been
shown that musicians, as compared to non-musicians, have greater visuospatial
memory capacities, and their hippocampus – the cerebral structure connected with this
kind of memory – is more developed [28]. Practicing music develops visual
memory abilities, probably because of the inherently figural nature of sound patterns.
Even people without any musical training think about music in spatial terms. In an
experiment, Halpern (mentioned in [29], p. 202) presented one word selected
from the lyrics of a song and subsequently another word from the same song. The
subjects’ task was to compare the pitch of the notes corresponding to the two words.
The reaction time recorded during this task increased as a function of the distance (in
terms of bars) between the two words in the song. This suggests that the listeners
mentally scanned an image of the melody. Hence, music seems to involve an activity
similar to the scanning of visual images.
Thirdly, music carries a verbal component. Between music and verbal language
there are overt analogies:
− the structural aspects of both music and verbal language vary little across
cultures;
− the skills required in music and verbal language appear early in ontogenetic
development;
− music and language follow similar principles of perceptual organisation;
− music and language can be described in terms of organised time units;
− both consist of complex productions generated by few elements;
− these elements are combined according to rules;
− the rules determine hierarchical structures;
− the rules allow the generation of a potentially infinite number of combinations
of elements.
These similarities concern mostly the syntactical aspects of music and have
enabled authors such as Lerdahl and Jackendoff [30] to identify some general cognitive
principles that are the foundations for musical listening. As happens for the syntactical
structure of verbal language (understood in a Chomskian sense), music implies an
unconscious construction of abstract structures that meet the dictates of a generative
grammar with a set of recursive analytical rules. However, the verbal dimension of
music appears not only at the level of syntactical structures, but also in terms of
narrative structures. In the approach of Heinrich Schenker [31] – an author who
anticipated the ideas advocated by Lerdahl and Jackendoff – the diatonic triad is the
Ursatz, i.e. the basic structure, in which (i) the tonic represents the initial balance, (ii)
the dominant introduces tension and (iii) the return to the tonic re-establishes the
balance. One can find a correspondence between this harmonic pattern and the
grammar of stories, some of which imply a transition from (i) the initial situation to (ii)
the appearance of troubles/hindrances/problems to end (iii) with the resolution of the
conflict/struggle/quest/tension.
Finally, the verbal dimension of music appears at the phonetic-prosodic level,
when one attempts to render the inflections of the spoken language through musical
sounds, and at the pragmatic level, when the dynamic of roles, entrances and
alternations of the interlocutors in the development of the discourse is at play.
Music activates mental processes in the listener and the performer across all three
registers (motor, iconic and verbal), and in this synchronised activation of several
registers we can find the reasons for its therapeutic-rehabilitative efficacy. We now
develop this point further.

2. Levels and correspondences

Within each register – motor, iconic and verbal – different levels can be identified. The
motor register operates at the level of neurovegetative responses triggered by sounds
(for example, variations in heart and breathing rates), at the level of gestural
responses (as shown by the tendency to accompany music by tapping the feet or
drumming the fingers, etc.) and at the level of more complex patterns of action (for
example, those implied in the art of dancing).
In the iconic register visual synaesthesia develops; in fact, visual experiences may
be elicited by sounds (sounds are heard as dark, shining, etc.). Furthermore, the visual
features of music appear in the topological relations that sounds evoke (for instance,
music can be assimilated to continuous or broken lines or it can inspire a sense of
closure or opening, etc.). Finally, music takes shape in visuospatial isodynamisms (it
suggests upward or downward jumps, approaching or departing trajectories, etc.).
The verbal register is involved at a basic level through the usage of onomatopoeic
devices (the musical sounds imitate natural or artificial sounds) and at a more
sophisticated level through the prosodic intonations (accelerations and decelerations, as
with rhythm and intensity, hinting at the "tone" – solemn, whining, peremptory,
friendly, etc. – with which the musical discourse is pronounced) to construct a
discursive structure (distribution of parts, entrances and relative turns, repartition of
"topics" introduced in the discourse, etc.).
What relationship exists between the different registers? The registers are
interdependent and synchronised. They are activated by the same musical input and
mirror the same characteristics of this input, yet with a different emphasis (for example,
an aspect of the piece will be better reflected or expressed in the motor register, another
in the verbal one). What is processed within a given register is correlated to and
presents some analogies with what happens in another register. Let us attempt to use a
concrete example to describe the isomorphism between different registers. If we
imagine a stretch of gravel path, on its surface some stones will protrude more than
others and some depressions will form. Let us imagine pressing a piece of cardboard
into the ground. Some features of the gravel path – its protrusions and depressions, etc.
– will be found on the piece of cardboard. Where on the ground there was a sharp stone,
on the card there will be a narrow and high protuberance. In some way, the
characteristics of the ground have been “retranscribed” in the shape the card has taken.
There are some correspondences between the two surfaces, even though each one is
“made” of different things (in this example, stones for the former and cellulose mixture
for the latter). If we imagine we pour some coloured paint on the modelled card after it
has been pressed into the ground, the paint will run down along the protrusions of the
card and thicken in its depressions, colouring protrusions and hollows with different
intensity. If we now flatten the card, we can still detect the original roughness of the
ground that was impressed on it as protrusions and hollows, because the
varying intensity of the paint has “transcribed” the three-dimensional undulations of
the card. With a different medium (the paint pigment) the characteristics of the ground
have been maintained, since we still find the same set of relationships made of hollows
and protrusions that are on the ground. We have three different planes and three
different materials – stones, paper and coloured pigment. Although in a different
manner, all of them represent the same system of relations, since the same “print” has
been impressed in these different materials. A «transcription» is therefore a projection,
on a certain cognitive register, of characteristics emphasized in a different register. The
«transcriptions», i.e. the correspondences that are formed between the various registers
(motor, iconic and verbal), contribute to transform the mental processing of music into
a consistent complex of acts which generates an overall strong impression.
The ability to grasp the correspondences between different registers appears quite
early. According to Stern [32], infants show an ability to connect the content of
heterogeneous senses (sight, hearing, touch). For example, infants capture the relation
between the rhythm of a repeated noise and a similar rhythm of a caress and they
associate these rhythms with the switching on and off of a light occurring at the same
pace. At 3 weeks after birth infants grasp the relationship between a time pattern
reaching their hearing and a similar visible time pattern. When the mother tries to
quieten her baby by singing or pronouncing some words with rhythmical and prosodic
inflections and she accompanies her voice with a movement of her hands caressing the
child’s body in a manner synchronised with the pace of her voice, the baby perceives
the correspondence between the two experiences (auditory and tactile). Musical
cognition is a multimodal form of knowledge which, through the simultaneous
triggering of several registers, produces a global experience. It is now time to consider
each one of these registers and their consequent potentials in terms of rehabilitation.

3. The motor register

At a first level of the motor register, music triggers neurovegetative
reactions in a non-random way and affects the biological rhythms of the individual. Within a
general tendency to synchronise the internal bio-physiological oscillations with the
external rhythms which are heard, we can notice that the musical rhythm induces
variations in the cardio-vascular and respiratory rates that, in turn, affect other
physiological changes. It has been confirmed that lullabies decrease the heart rate and
the respiratory rate, which synchronise with music [33]. It is not only rhythm that has
these effects; the emotional quality of music also changes the cardio-respiratory rate
[34].
On a different level of the motor register, it is proven that people grasp the
expressive tension-release dynamisms in music [35]. When subjects were asked to push
a device depending on the tension perceived in the musical piece they were listening to,
the authors noticed that moments of tension and relaxation alternated. Furthermore,
high tension was detected in correspondence with sections of fortissimo, when the
melody was ascending, the density of notes increased, places of dissonance occurred,
rhythmical and harmonic complexity increased, musical segments were repeated, as
well as during the pauses and in the parts in which some musical ideas were developed.
Similar responses can be found at the level of muscular reactions determining the
expression of the face. Usually people respond with subliminal changes of their facial
expression while they are listening to music [34, 36]. These responses can be more
specifically related to the type of music [33] – music with negative emotional meaning
tends to produce a greater corrugating muscular activity, while music with positive
emotional meaning brings about zygomatic activity. These associations between music
and motor responses appear early: 3- to 4-year-olds can match musical pieces and
facial expressions congruently with the emotional character of the music [34].
On a more sophisticated level, it has been shown that music generates in the
listener motor responses that allow the person to mirror the gestures performed by the
performer [35]. These claims are supported by experiments showing that people are
able to associate music with the corresponding gestures and actions. For instance, merely by
watching the videotape of a musical performance without any sound track, people can
rate successfully the expressive intent inherent in the piece [36, 37]. Such a skill
emerges also by observing people making sound-producing gestures in the air without
manipulating any concrete instrument [38]. Similar findings were reported by
considering ballet performances: hearing only the music or seeing only the body
movements produced similar judgments about the beginnings and the ends of the
internal sections of the performance, as well as about the tension and the emotions
conveyed by the stimuli [39]. Visual experience of a musical performance provides
listeners not only with information about the context where it takes place and the
alleged personal features of the musician, but also a variety of cues which can
emphasize the expressive intention of the executor [40, 41]. The gestures of the
performer help decode also some structural aspects of music. In an experiment [42] a
singer performed intervals of various sizes and was videotaped. Two groups of judges
were subsequently presented with either the sound track alone or the soundless filmed
sequence alone. In both conditions the judges adequately identified the size of the different
intervals. In the video condition visual cues, such as the facial expression and gestures
of the singer, were enough to assess the size of the performed melodic interval.
The possibilities of using the link between music and body reactions for
rehabilitation purposes are broad. For example, as far as physiological responses to
sounds are concerned, Antic et al. [43] investigated the effect of music in a sample of
patients with acute ischemic stroke. In almost 80% of participants an increase in the
mean blood flow velocity in the middle cerebral artery was recorded as a consequence
of listening to music for 30 minutes. Elderly people affected by dementia benefitted
from music treatments by showing lower systolic blood pressure and better
maintenance of their physical and mental state than controls did [44]. With regards to
muscle responses, Carrick, Oggero and Pagnacco [45] observed, through computer
dynamic posturography, how people reacted to music by measuring body stability; they
found positive changes, due to music, in stability scores in individuals with balance
abnormalities, suggesting that music can be a way to prevent falls and/or vertigo and
to rehabilitate persons showing postural disorders. Music has been applied in gait
training addressed to people with brain injuries: results showed improvements in gait
efficiency, supported also by electromyographic measurements [10]. In an even more
evident way, if music is utilised to provide the patient with an auditory pattern as a
basis for organising movement, the synchronisation between sounds and gestures
resulting from it can be applied to teach brain-damaged patients to perform the
appropriate movements required to dress autonomously [46].
Technology enables us to expand the natural link between music and movement or
to recover it when physical disabilities have impaired it. For example, Tam et al. [47]
devised a computer system, called Movement-To-Music, which allows children with
impaired movements to play and create music, resulting in broader horizons and
increased quality of life. Patients with spinal cord injuries were trained to create and
play music by means of an electronic music program: this tool led them to exercise
upper extremities which were connected to a synthesiser through a computer [48].
Motor skills impaired by stroke can be rehabilitated using equipment consisting of
electronic drum pads (to train gross motor skills) and a MIDI piano (to train fine
motor skills) designed to activate auditory-motor representations in the patient’s mind
[49]. Stroke patients were induced to use such tools to reproduce a musical pattern with
the impaired arm. Better outcomes were recorded in these patients as compared to the
effects produced by traditional rehabilitation. Equipment able to produce MIDI
sounds can be activated and controlled by muscular contractions, as well as by
biosignals such as electrocardiogram or electroencephalogram: in this way even people
with severe motor impairments can produce music and receive feedback [10]. Other
similar devices are Sound Beam and Wave Rider [10]. In all cases in which music
contributes to restoring motor functions [50], music can be conceived as an anticipatory
and continuous temporal template which facilitates the execution of the movement
which has to be rehabilitated, thanks to auditory-motor synchronisation.
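The idea of biosignal-driven sound production described above can be sketched in a few lines. The following is a minimal, purely hypothetical illustration – the function name, the threshold value, the note number and the event format are assumptions of this sketch, not a specification of the cited devices: a normalised muscle-activity signal crossing a threshold generates MIDI-style note_on/note_off events that could be sent to a synthesiser.

```python
# Hypothetical sketch: turning a normalised biosignal (e.g. EMG amplitude)
# into MIDI-style events, as in the systems described above. All names and
# parameter values here are illustrative assumptions.

def emg_to_midi_events(samples, threshold=0.5, note=60, velocity=100):
    """Emit a note_on event when the amplitude rises above the threshold
    and a note_off event when it falls back below it."""
    events, active = [], False
    for t, amplitude in enumerate(samples):
        if amplitude >= threshold and not active:
            events.append({"t": t, "type": "note_on",
                           "note": note, "velocity": velocity})
            active = True
        elif amplitude < threshold and active:
            events.append({"t": t, "type": "note_off",
                           "note": note, "velocity": 0})
            active = False
    return events

# A contraction starting at sample 2 and relaxing at sample 5 yields one
# note_on/note_off pair:
events = emg_to_midi_events([0.1, 0.2, 0.8, 0.9, 0.7, 0.2])
```

In a real system the event dictionaries would be replaced by actual MIDI messages sent to hardware, but the mapping logic – contraction onset to note onset, relaxation to note release – is the part that gives the patient immediate auditory feedback on the movement.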

4. The iconic register

On a first level of the iconic register, visual representations triggered by music appear
as diffused chromatic sensations. Through synaesthetic mechanisms, sounds elicit
experiences afferent to non-acoustic sensory modes. In some cases synaesthesiae
impose themselves on the person without her being able to stop perceiving images
when she listens to sounds. In other cases there is merely a special sharpness in
detecting synaesthetic associations; in yet other cases (the majority) one implicitly
perceives synaesthetic assonances that become the focus of thought only if one tries to
analyse them.
On a different level the images suggested by music convey topological relations.
First of all, on a perceptive level, it becomes clear how the flow of musical notes is
inscribed in a sound environment with basic spatial coordinates, being vectorially
oriented from left to right. The Spatial-Musical Association of Response Codes
(SMARC) effect, recently documented [51, 52], provides evidence for this. The
SMARC effect is a form of stimulus-response compatibility: the person faces
a screen on which signals appear unpredictably on either the left or
the right side. The task is to push a button as soon as the signal appears.
If the position of the signal and the button to be pushed are
compatible (for example, the signal appears on the left side of the screen and the button
is at the left side of the person, so that she uses her left hand to push it), the response
will be quicker than it would be in a situation of incompatibility (the signal appears on
the left and the button for the answer is on the right). If the stimuli are musical notes,
and the subjects are asked to determine whether, compared to a standard note, the
following one is higher or lower, the SMARC effect occurs. If the button
corresponding to the answer «lower» is on the left and the button corresponding to the
answer «higher» is on the right, the response is quicker as compared to what happens if
the buttons are switched. This happens because in the first condition there is
compatibility between the stimulus characteristic (pitch) and the position of the button.
The musical notes are therefore mentally represented in a space vectorially oriented
from left to right, so that low pitches tend to be psychologically “located” on the left
and high pitches on the right.
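The compatibility rule underlying the SMARC effect can be made concrete in a short sketch. Only the pairing rule (low pitch with left, high pitch with right) comes from the experiments described above; the function name and trial encoding below are hypothetical illustrations.

```python
# Sketch of the SMARC compatibility rule: "lower" judgements map to the
# left, "higher" judgements to the right, mirroring the left-to-right
# mental representation of pitch described in the text.

def smarc_compatible(pitch_judgement: str, response_side: str) -> bool:
    """Return True when the pitch judgement and the response button are
    spatially compatible under the SMARC mapping."""
    mapping = {"lower": "left", "higher": "right"}
    return mapping[pitch_judgement] == response_side

# Compatible pairings are the ones for which faster reaction times are
# expected; incompatible pairings predict slower responses.
trials = [("lower", "left"), ("lower", "right"),
          ("higher", "right"), ("higher", "left")]
for pitch, side in trials:
    label = "compatible" if smarc_compatible(pitch, side) else "incompatible"
    print(f"{pitch} note, {side} button -> {label}")
```

An analysis of such an experiment would then compare mean reaction times between the compatible and incompatible trial sets, with the SMARC effect appearing as faster responses in the compatible condition.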
The iconic power of music is grasped very early. According to research by Spelke
[53, 54], 3- and 4-month-olds are capable of detecting when sound rhythm and visual
rhythm are coordinated and when they are uncoordinated. In these experiments infants
were shown a visual scene in which a puppet representing an animal was making jumps.
A sound was produced either when the jumping puppet was landing or a little later. The
children preferred to watch the visual scene in which jumps and sounds were
coordinated rather than the uncoordinated scene (their preference was assessed
according to the frequency and duration of ocular fixations). A preference was shown
also when the time interval between the puppet landing and the sound, although out of
phase (delayed sound), was constant. Other studies showed that 6- to 8-month-olds are able
to grasp numerical correspondences between sounds and images. For example, given
the choice of looking at a scene in which two objects appeared or a scene with three
objects, infants who heard two sounds preferred to watch the two-object scene,
while they turned their gaze to the three-object scene if there were three sounds. The
skills highlighted by Wagner et al. [55] in 6- to 14-month-olds are even more surprising.
The children seem to be able to associate characteristics of sounds (such as pitch) and
characteristics of sound sequences (ascending or descending sequences, sequences of
continuous or intermittent sounds) with analogous characteristics of lines. The children
prefer to watch a low line, a small circle and a dark circle in concomitance with a low
pitch, and a high line, a big circle and a light circle in correspondence with a high pitch;
just as they prefer to turn their gaze to an ascending arrow if they are listening to an
ascending melodic line and a descending arrow if the melody is descending, or a
continuous line if the sound sequence is continuous and a broken line if the sound
sequence is intermittent.
Older children – as documented by Walker [56] – can make even more
complex associations, such as matching weak and strong, low and high, and long and
short notes with long and short lines, light and dark lines, low and high lines, and
empty and full symbols. Fairly early on children understand that certain characteristics
of sound stimuli can be represented graphically with a variety of devices [57].
What role could the visuospatial components play in listening to music? In some
psychological models [58, 59] these components seem to fulfil a function only in the
preliminary and/or conclusive stages of the process of listening to music. In the former
case, for example, it is emphasized how some of the organisation principles of the
visual field (law of proximity, similarity, continuity, etc.) are true also for the
organisation of sound events: picture-like principles would intervene in the segregation
of the musical flow and in the formation of basic sound clusters. In the latter case,
general patterns of emotional response triggered by listening to music would maintain
characteristics of the iconic type (sense of raising/sinking, opening/closing, etc.).
However, it seems that the figural aspects of musical language can be assigned a role
not only in these “peripheral” moments – respectively “incoming” (perceptual
organisation) and “outgoing” (emotional response) – of the process of listening, but
also in the “central” moment of formation of meaning of the musical piece.
The visual resonances and spatial analogies activated by music are often used
within rehabilitative interventions to induce the patient into a state that favours the
recovery of his remaining cognitive and emotional resources. To this end, a method
called Guided Imagery and Music (GIM) has been increasingly applied. It intentionally
elicits visual imagery in the mind of the person starting from sound stimuli. Having
proven that music therapy can be successfully applied in cardiac rehabilitation [60],
attempts were made to strengthen this technique by associating visual imagery with
musical stimulation. Thus Music-Assisted Relaxation and Imagery was devised,
a variant of GIM which has proven more effective in cardiac rehabilitation
than traditional music therapy [61].
Finally, the iconic register activated by music can be used in rehabilitation with
other goals. For example, music can be used to facilitate recalling visual scenes from
the past. In fact, it has been shown that music can enhance the production of
autobiographical memories in Alzheimer patients [62].

5. The verbal register

A first level where one can detect correspondences between music and verbal language
is the structural-syntactic level. In both cases it is a matter of putting discrete elements
(notes in the former, words in the latter) in sequence by respecting some formal rules. It
is not surprising, then, that aphasic people who have difficulty understanding the
syntactic aspects of language also show difficulty in grasping the syntactic aspects of
music related to harmonic relations [36]. From a rehabilitative point of view, these
parallels suggest that training focused on the processing of sequential aspects of
music can be beneficial for recovering syntactic abilities in the linguistic domain. The
sequential nature of music also makes it suitable for use in other areas, such as
Parkinson's disease. Satoh and Kuzuhara [63] asked mild and moderate Parkinson's
disease patients to walk while mentally singing. This allowed them to overcome gait
disturbances, as shown by the fine-grained analysis of their videotaped behaviour.
On a different level, the verbal dimension of music appears to be related to how
speech is organised. According to Schaffer [64] music can convey a narrative. The
structure of a musical piece describes an implicit event; the way in which the piece is
performed gives shape to this event, enriching it with emotional connotations. The
gestures of musical expressiveness would then correspond with the emotional gestures
of the implicit main character of the story who participates in or witnesses that event.
In other words, the interpretation made by the performer has the function of helping
define the character of the protagonist in the narrative script, which is implicitly
contained in the musical structure. The musical elements define the implicit event, i.e.
the structure that has a decisive and primary role in determining the range of gestures
suitable for that musical piece. The performer, like a storyteller, has to be loyal to the
structure of the story and, at the same time, has the freedom to modulate the emotions
of the characters. In other words, the performer has the task to create the character so as
to add deep meaning to the literal surface of the musical piece. According to Schaffer,
the details of a musical expression are more fully understood if regarded as
corresponding to the gestures of an implicit main character. In this respect, we can
recall the observation made by Sloboda [26] that people recognize better a melody if,
as they are listening to it, they label it with concrete titles that hint at its dramatisation.
This is a potential way of using music that Noy [65] designated «narrative path», which
leads the listener to identify with the experience of the composer, feeling his emotions
as if reliving his narrative.
Following Schaffer's suggestion, how can we identify the narrative dimension in
the structure of music itself? As in a story, the plot unfolds through promises, the
creation of preconditions, anticipations, escalations, dramatic turns of events, sudden
resolutions, etc., and similar variations in arousal levels are produced by the
unfolding of the musical discourse. The emotional "course" of music would thus
parallel that of a story which could be superimposed on it.
As it is easy to imagine, the narrative characteristic of music can be exploited
particularly in the therapeutic context to activate dynamisms in terms of affect and
emotion processing. It seems that the understanding of the emotional meaning of music
has its own distinct counterpart in the brain. In this regard, Peretz [66] reported an
interesting dissociation in a patient with damage to the auditory cortex. The patient was
still able to enjoy music emotionally, but unable to make simple auditory discriminations.
Notably, the patient could distinguish sad from cheerful melodies and was sensitive
both to speed manipulations of the music and to the distinction between major and
minor modes in differentiating sad from cheerful music, but he was unable to
distinguish familiar from unfamiliar melodies (for example, he could not recognize
that a piece he listened to was the Adagio in G minor by Albinoni; he said that this
music «made him feel sad like the Adagio by Albinoni»), did not notice the pitch
errors purposely introduced into the musical pieces he was asked to listen to, and
could not discriminate between consonant and dissonant music. In this patient the
analysis of music was intact as far as the emotional aspects were concerned, but not
with regard to the syntactic ones.
Do such data lead us to believe that an emotional evaluation of music does not
require cortical mediation? According to Peretz [66], this is not the conclusion to be
drawn, because in the above described case a specific cortical structure could be
damaged. It is well-known that at a cortical level it is possible to identify
neurobiological structures which can be related to the discrimination of the emotional
meaning of music. The frontal left cortical activity is higher when the subject listens to
cheerful music (and when there are variations of mode and time in the direction of joy),
while the right one is higher with sad or frightening music (and with its respective
variations). It is also proven that the left ear (which projects to the right hemisphere) is
superior when one judges music as unpleasant (i.e. atonal), while the right ear is
superior in case of pleasant (tonal) music, therefore suggesting a specialisation of the
left hemisphere in perceiving positive emotions and the right one for negative emotions.
This inter-hemispheric asymmetry does not appear when subjects are asked to judge the
same music not from an emotional standpoint (i.e. as pleasant or unpleasant) but in
terms of right or wrong. To sum up, an emotional appreciation of music would be
supported by a specific neural path that requires cortical mediation. A two-stage model
could be proposed: musical stimuli are first processed by the superior temporal gyrus
(where their perceptive organisation would occur) and then by the emotional systems in
the paralimbic structures and in the frontal areas (according to the meaning of the
emotion).
Finally, music and verbal language share some prosodic inflections. It seems that
our nervous system has developed specialised structures and processes to deal with the
prosodic aspects of language [66]. The superiority of the right ear (and consequently
the left hemisphere) for processing the content of words and the superiority of the left
ear (and therefore the right hemisphere) for the perception of the emotional tone of
voice have been demonstrated. Accordingly, brain damage can selectively compromise
the identification of the emotional connotations of the voice, as well as the grasping of
prosodic variations in exclamations, questions and assertive sentences.
It is not accidental that children prefer songs addressed to them rather than songs
addressed to adults: children pick up on the prosodic inflections of the former and
perceive them as suited to an interaction with them. It is a fact that in all cultures
children are the receivers of songs addressed to them by the adults and that in many
cultures these songs are specific for children. Experiments conducted by Trehub and
Trainor [67] showed that, when adults sing for a child, they sing at a higher pitch and
a slower tempo, in a more loving tone, introducing longer pauses between phrases
than when they sing for other listeners. Furthermore, adults seem to use two distinct
singing styles with children: a lullaby-like mode when they want to quieten and let the
child fall asleep and a playful mode aiming to activate the child and draw his attention
on interesting aspects of the environment.
The continuum between spoken and sung language accounts for the prosodic
correspondences between text and sound in vocal music. The prosodic aspects of
instrumental music are less obvious to explain. They are grounded in the fact that
traits common to music and to the human expression of emotion can be found in the
characteristics of the voice. A voice expressing sadness and music conveying sadness
share features such as low pitch, a small range of pitch variation, low intensity, a
trailing sound flow, slowness, pauses, a progressively flattening contour of pitch and
rhythm, etc. Instrumental music mirrors these features by means of non-vocal sounds.
The analogies between prosody of verbal language and prosody of music account
for the use of singing in rehabilitating the fluency of spoken language. For example,
music is beneficial in the treatment of acquired dysarthria following traumatic brain
injury or stroke. The intelligibility and naturalness of speech of dysarthric patients
improved after a set of sessions in which, besides motor-respiratory exercises, they
performed rhythmic and melodic articulatory tasks based on intonation and singing
[68]. Singing can also be used to rehabilitate aphasia. It has been shown that patients
suffering from severe forms of non-fluent aphasia benefit from Melodic Intonation
Therapy [69, 70], a rehabilitation technique based on the imitation of singing [36].
Musical techniques can also be applied to improve vocal quality, the coordination,
rhythm and timing of speech, and the pragmatic use of language in children with
acquired brain lesions [71].
Dyslexic people can also be trained through music. Besson et al. [72] found that
musical activities were successful in improving pitch processing in speech, an ability
that is fundamental in second-language learning and that is impaired in dyslexic
children, suggesting that music can be employed as a remediation for dyslexia to
improve impaired reading skills. This agrees with the observation that dyslexic
children show difficulty with timing in music and that, if they attend music classes,
their reading skills improve [36].

6. Conclusions

As we have tried to argue, music is a tool that triggers representations and processes
in different mental registers (motor, iconic and verbal): sounds carry affordances,
forces and vectors which drive the performance of specific actions, images and ways
of speaking, and what occurs in the various registers is reciprocally synchronised.
This accounts both for the power of music as a spontaneous elicitor of emotions and
a natural tool of communication, and for the deliberate use of music for rehabilitative
purposes.
Music is constitutively motor, iconic and linguistic, since gestures, images and
words are not extrinsic elements to it. Motor, visuospatial and verbal elements are
already present in the innermost nature of music. The registers that music activates
(movements, figures, words) do not "attach" to music from the outside; they are
imbricated and deeply embedded in it. It is because of this very imbrication
that we can argue that music acts as a vicarious function in the rehabilitation context.
When the processes of motor planning are impaired, music can provide the sequential
and rhythmical patterns required to perform actions that need to be relearned, and this
is possible because these patterns are embedded in music itself. When memory
processes fail in recalling the past, music helps the memory emerge because it suggests
colours, shapes, spatial movements that can be found in visual scenes. If it is the
organisation of verbal language that is impaired, music can assist it, because it contains
discursive patterns. In other words, music, thanks to its multimodal nature, offers
“scaffolding” on which one can learn to perform movements, carry out cognitive
operations or articulate verbal expressions that need to be rehabilitated.
Recently, the theoretical concepts justifying interventions in the field of music
therapy have been clarified, and reliable evidence of achievable results has started to
be collected. New technologies can expand the range of possible music-based
interventions, but we still have a long way to go to better understand the potential of
music in rehabilitation.

Acknowledgements

Isabella Negri is gratefully acknowledged for her linguistic revision of this chapter.

References

[1] P. Cook, Music, cognition, and computerized sounds, MIT Press, Cambridge (MA), 2001.
[2] C. Roads, Microsound, MIT Press, Cambridge (MA), 2004.
[3] P. Burkart, and T. Mccourt, Digital music wars: Ownership and control of the celestial jukebox,
Rowman & Littlefield Publishers, Lenham (MD), 2006.
[4] K. Collins, From Pac-Man to pop music: Interactive audio in games and new media, Ashgate,
Aldershot (UK) – Burlington (VT), 2008.
[5] A. Williams, Portable music and its functions, Peter Lang, New York, 2007.
[6] P. Horden, Music as medicine: The history of music therapy since antiquity, Ashgate, Aldershot (UK),
2000.
[7] J. Schmidt Peters, Music therapy: An introduction, Charles C. Thomas Publishers, Springfield (IL), 2000.
[8] T. Wigram, B. Saperstone, and R. West, The art and science of music therapy: A handbook, Routledge,
New York, 1995.
[9] D. Aldridge, Music therapy and neurological rehabilitation, Jessica Kingsley Publishers, Philadelphia
(PA), 2005.
[10] S. Paul, and D. Ramsey, Music therapy in physical medicine and rehabilitation, Australian
Occupational Therapy Journal 47 (2000), 111-118.
[11] J. A. Sorrell, and J. M. Sorrell, Music as a healing art for older adults, Journal of Psychosocial Nursing
and Mental Health Services 46 (2008), 21-24.
[12] C. Pacchetti, F. Mancini, R. Aglieri, C. Fundarò, E. Martignoni, and G. Nappi, Active music therapy in
Parkinson’s disease: An integrative method for motor and emotional rehabilitation, Psychosomatic
Medicine 62 (2000), 386-393.
[13] N. Mammarella, B. Fairfield, and C. Cornoldi, Does music enhance cognitive performance in healthy
older adults? The Vivaldi effect, Aging Clinical and Experimental Research 19 (2007), 394-399.
[14] N. Bannan, and C. Montgomery-Smith, “Singing for the brain”: Reflections on the human capacity for
music arising from a pilot study of group singing with Alzheimer’s patients, Journal of the Royal
Society for the Promotion of Health 128 (2008), 73-78.
[15] R. Knox, and J. Jutai, Music-based rehabilitation of attention following brain injury, Canadian Journal
of Rehabilitation 9 (1996), 169-181.
[16] T. Särkämö, M. Tervaniemi, S. Laitinen, A. Forsblom, S. Soinila, M. Mikkonen, T. Autti, H. M.
Silvennoinen, J. Erkkilä, M. Laine, I. Peretz, and M. Hietanen, Music listening enhances cognitive
recovery and mood after mild cerebral artery stroke, Brain 131 (2008), 866-876.
[17] W.L. Magee, and C. Bowen, Using music in leisure to enhance social relationships with patients with
complex disabilities, Neurorehabilitation 23 (2008), 305-311.
[18] S. Nayak, B.L. Wheeler, S.C. Shiflett, and S. Agostinelli, Effect of music therapy on mood and social
interaction among individuals with acute traumatic brain injury and stroke, Rehabilitation Psychology
45 (2000), 274-283.
[19] B.L. Wheeler, S. Shiflett, and S. Nayak, Effects of number of sessions and group or individual music
therapy on the mood and behaviour of people who have had strokes or traumatic brain injuries, Nordic
Journal of Music Therapy 12 (2003), 139-151.
[20] W.L. Magee, Music as a diagnostic tool in low awareness states: Considering limbic responses, Brain
Injury 21 (2007), 593-599.
[21] T. DeNora, Music in everyday life, Cambridge University Press, Cambridge (UK), 2000.
[22] J.S. Bruner et al., Studies in cognitive growth, Wiley, New York, 1966.
[23] J. Blacking, How musical is man?, University of Washington Press, Seattle (WA)-London, 1973.
[24] H. Moog, The development of musical experience in children of preschool age, Psychology of Music 4
(1976), 38-45.
[25] J. Phillips-Silver, and L.J. Trainor, Feeling the beat in music: Movement influences rhythm perception
in infants, Science 308 (2005), 1430.
[26] J.A. Sloboda, The musical mind, Clarendon Press, Oxford, 1985.
[27] R.T. Boone, and J.G. Cunningham, Children's expression of emotional meaning in music through
expressive body movements, Journal of Nonverbal Behavior 25 (2001), 21-41.
[28] V. Sluming, D. Page, J. Downes, C. Denby, A. Mayes, and N. Roberts, Structural brain correlates of
visuospatial memory in musicians. Conference The neurosciences and music II. From perception to
performance (2005), 8-10.
[29] C.L. Krumhansl, Internal representations for music perception and performance, In M.R. Jones & S.
Holleran, Cognitive bases of musical communication, American Psychological Association,
Washington (DC), 1992, pp. 197-211.
[30] F. Lerdahl, and R. Jackendoff, A generative theory of tonal music, MIT Press, Cambridge (MA), 1983.
[31] H. Schenker, Harmony, University of Chicago Press, Chicago (IL), 1954.
[32] D.N. Stern, The interpersonal world of the infant, Basic Books, New York, 2000.
[33] K.R. Scherer, and M.R. Zentner, Emotional effects of music: production rules, in P.N. Juslin, and J.A.
Sloboda, Music and emotion, Oxford University Press, New York, 2001, pp. 361-392.
[34] J.A. Sloboda, and P.N. Juslin, Psychological perspectives on music and emotion, in P.N. Juslin, and J.A.
Sloboda, Music and emotion, Oxford University Press, New York, 2001, pp. 71-104.
[35] A. Gabrielsson, and S. Lindström, The influence of musical structure on emotional expression, in P.N.
Juslin, and J.A. Sloboda, Music and emotion, Oxford University Press, New York, 2001, pp. 223-248.
[36] M. Leman, Embodied music cognition and mediation technology, MIT Press, Cambridge (MA), 2008.
[37] J.W. Davidson, Visual perception and performance manner in the movements of solo musicians,
Psychology of Music 21 (1993), 103-113.
[38] I. Molnar-Szakacs, and K. Overy, Music and mirror neurons: from motion to 'e'motion, Scan 1 (2006),
235-241.
[39] J.W. Davidson, What does the visual information contained in music performances offer the observer?
Some preliminary thoughts, in R. Steinberg, Music and the mind machine: Psychophysiology and
psychopathology of the sense of music, Springer, Heidelberg, 1995, pp. 105-114.
[40] R.I. Godøy, E. Haga, and A.R. Jensenius, Playing "air instruments": mimicry of sound-producing
gestures by novices and experts, in S. Gibet, N. Courty, and J.-F. Kamps (Eds.), Gesture in human-
computer interaction and simulation, Springer, Berlin, 2006, pp. 256-267.
[41] C.L. Krumhansl, and D.L. Schenck, Can dance reflect the structural and expressive qualities of music?
A perceptual experiment on Balanchine's choreography of Mozart's Divertimento no. 15, Musicae
Scientiae 1 (1997), 63-85.
[42] K. Ohgushi, and M. Hattori, Emotional communication in performance of vocal music. Interaction
between auditory and visual information, in B. Pennycook, and E. Costa-Giomi, Proceedings of the
Fourth International Conference on Music Perception and Cognition, Montreal, 1996, pp. 269-274.
[43] W.F. Thompson, and F.A. Russo, Visual influences on the perception of emotion in music, in S.
Lipscomb, R. Ashley, R. Gjerdingen, and P. Webster, Proceedings of the Eighth International
Conference for Music Perception and Cognition, Northwestern University, 2004, pp. 198-199.
[44] W.F. Thompson, P. Graham, and F.A. Russo, Seeing music performance: Visual influences on
perception and experience, Semiotica 156 (2005), 203-227.
[45] S. Antic, I. Galinovich, A. Lovrendic-Huzjan, V. Vukovic, M.J. Jurasic, and V. Demarin, Music as an
auditory stimulus in stroke patients, Collegium Anthropologicum 32 (2008), 19-23.
[46] T. Takahashi, and H. Matsushita, Long-term effects of music therapy on elderly with moderate/severe
dementia, Journal of Music Therapy 43 (2006), 317-333.
[47] F.R. Carrick, E. Oggero, and G. Pagnacco, Posturographic changes associated with music listening,
Journal of Alternative and Complementary Medicine 13 (2007), 519-526.
[48] A.P. Gervin, Music therapy compensatory technique utilizing song lyrics during dressing to promote
independence in the patient with a brain injury, Music Therapy Perspectives 9 (1991), 87-90.
[49] C. Tam, H. Schwellnus, C. Eaton, Y. Hamdani, A. Lamont, and T. Chau, Movement-to-music computer
technology: A developmental play experience for children with severe physical disabilities,
Occupational Therapy International 14 (2007), 99-112.
[50] B. Lee, and T. Nantais, Use of electronic music as an occupational therapy modality in spinal cord
injury rehabilitation: An occupational performance model, American Journal of Occupational Therapy
50 (1996), 362-369.
[51] S. Schneider, P.W. Schönle, E. Altenmüller, and T.F. Münte, Using musical instruments to improve
motor skill recovery following a stroke, Journal of Neurology 254 (2007), 1339-1346.
[52] M.H. Thaut, Rhythmic intervention techniques in music therapy with gross motor dysfunctions, The
Arts in Psychotherapy 15 (1988), 127-137.
[53] P. Lidji, R. Kolinsky, A. Lochy, and J. Morais, Spatial associations for musical stimuli: A piano in the
head, Journal of Experimental Psychology: Human Perception and Performance 33 (2007), 1189-1207.
[54] E. Rusconi, B. Kwan, B. Giordano, C. Umiltà, and B. Butterworth, The mental space of pitch height, in
G. Avanzini, L. Lopez, S. Koelsch, and M. Majno, The neurosciences and music II. From perception to
performance, Annals of the New York Academy of Sciences 1060 (2005), 195-197.
[55] E.S. Spelke, Infants’ intermodal perception of events, Cognitive Psychology 8 (1976), 553-560.
[56] E.S. Spelke, Perceiving bimodally specified events in infancy, Developmental Psychology 15 (1979),
626-636.
[57] S. Wagner, E. Winner, D. Cicchetti, and H. Gardner, Metaphorical mapping in human infants, Child
Development 52 (1981), 728-731.
[58] R. Walker, The effects of culture, environment, age, and musical training on choices of visual
metaphors for sound, Perception and Psychophysics 42 (1987), 491-502.
[59] J. Bamberger, The mind behind the musical ear. How children develop musical intelligence,
Cambridge University Press, London, 1991.
[60] M. Imberty, Les écritures du temps, Bordas, Paris, 1981.
[61] M.L. Serafine, Music as cognition. The development of thought in sound, Columbia University Press,
New York, 1988.
[62] L.K. Metzger, Assessment of use of music by patients participating in cardiac rehabilitation, Journal of
Music Therapy 41 (2004), 55-69.
[63] S.E. Mandel, S.B. Hanser, M. Secic, and B.A. Davis, Effects of music therapy on health-related
outcomes in cardiac rehabilitation: A randomized controlled trial, Journal of Music Therapy 44 (2007),
176-197.
[64] M. Irish, C.J. Cunningham, J.B. Walsh, D. Coakley, B.A. Lawlor, I.H. Robertson, and R.F. Coen,
Investigating the enhancing effect of music on autobiographical memory in mild Alzheimer’s disease,
Dementia and Geriatric Cognitive Disorders 22 (2006), 108-120.
[65] M. Satoh, and S. Kuzuhara, Training in mental singing while walking improves gait disturbance in
Parkinson’s disease patients, European Neurology 60 (2008), 237-243.
[66] L.H. Schaffer, How to interpret music, in M.R. Jones, and S. Holleran, Cognitive bases of musical
communication, American Psychological Association, Washington (DC), 1992, pp. 263-278.
[67] P. Noy, How music conveys emotion, in S. Feder, R.L. Karmel, and G.H. Pollock (Eds.),
Psychoanalytic explorations in music, International Universities Press, Madison (CT), 1993, pp. 125-
149.
[68] I. Peretz, Listen to the brain: A biological perspective on musical emotions, in P.N. Juslin, and J.A.
Sloboda, Music and emotion, Oxford University Press, New York, 2001, pp. 105-134.
[69] S.E. Trehub, and L.J. Trainor, Singing to infants: Lullabies and play songs, Advances in Infancy
Research 12 (1998), 43-77.
[70] J. Tamplin, A pilot study into the effect of vocal exercises and singing on dysarthric speech,
Neurorehabilitation 23 (2008), 207-216.
[71] R. Sparks, N. Helm, and M. Albert, Aphasia rehabilitation resulting from melodic intonation therapy,
Cortex 10 (1974), 303-316.
[72] G.J. Murrey, Alternate therapies in the treatment of brain injury and neurobehavioral disorders,
Haworth Press, Binghamton (NY), 2006.
[73] J. Kennelly, L. Hamilton, and J. Cross, The interface of music therapy and speech pathology in the
rehabilitation of children with acquired brain injury, Australian Journal of Music Therapy 12 (2001),
13-20.
[74] M. Besson, D. Schön, S. Moreno, A. Santos, and C. Magne, Influence of musical expertise and musical
training on pitch processing in music and language, Restorative Neurology and Neuroscience 25 (2007),
399-410.
Computer-Guided Mental Practice in
Neurorehabilitation
Andrea GAGGIOLI a,b, Francesca MORGANTI b, Andrea MENEGHINI c, Ilaria
POZZATO d, Giovanni GREGGIO d, Maurizia PIGATTO c,d, Giuseppe RIVA a,b
a Psychology Faculty, Catholic University of Milan, Italy
b Applied Technology for Neuro-Psychology Lab, Istituto Auxologico Italiano, Italy
c Advanced Technology in Rehabilitation Unit, Padua Teaching Hospital, Italy
d Padua University, Italy

Abstract. Motor imagery is the mental simulation of a movement without motor
output. In recent years, there has been growing interest in the application of
motor imagery-based training, or “mental practice”, in stroke rehabilitation. We
have developed a virtual reality prototype (the VR Mirror) to support patients in
performing mental practice. The VR Mirror displays a three-dimensional
simulation of the movement to be imagined, using data acquired from the healthy
arm. We tested the system with nine post-stroke patients with chronic motor
impairment of the upper limb. After eight weeks of training with the VR Mirror,
remarkable improvement was noted in three cases, slight improvement in two
cases, and no improvement in four cases. All patients showed a good acceptance of
the procedure, suggesting that virtual reality technology can be successfully
integrated in mental practice interventions.

Keywords. Motor imagery, mental practice, stroke, rehabilitation, virtual reality

1. Introduction

Motor imagery refers to the mental simulation of a motor act in the absence of any
gross muscular activation [1]. The mental process of motor imagery has been
investigated within different areas of research, such as cognitive psychology,
neuroscience and sport psychology, sometimes with different terminology. In the
context of athletic performance studies, a frequently used concept is mental practice.
This term refers to a training technique by which a motor act is cognitively rehearsed
with the goal of improving performance. It is important to distinguish this specific
definition from the broader term mental preparation, which includes a variety of
disparate sport psychology techniques that share a goal of enhancing performance, such
as positive mental imagery, performance cues/concentration, relaxation/activation, self-
efficacy statements, and other forms of mental training. A distinction also needs to be
made between the “external” and “internal” perspectives in motor imagery. The
external perspective, considered to be mainly visual in nature, involves a third-person
view of the movement, as if watching oneself on a screen. The internal (or kinaesthetic)
perspective, on the other hand, requires a subject to take a first-person view and to
imagine the somesthetic feedback associated with action [2].
Recent studies in neuroscience have provided robust evidence that mental practice
with motor imagery may induce plastic changes in the motor system similar to actual
physical training [3, 4]. This supports the idea that mental training could be effective in
promoting motor recovery after damage to the central nervous system. In this chapter,
we first provide the rationale for using mental training in neurorehabilitation. Next, we
describe results of a pilot clinical trial, in which we examined the technical and clinical
feasibility of using virtual reality technology to support mental practice in stroke
recovery.

2. Motor imagery

Scientific investigation of motor imagery dates back to 1885, when the Viennese
psychologist, Stricker, collected the first empirical evidence that overt and covert motor
behaviours involve the same processing resources [5]. Over the past thirty-five years, a
number of studies have investigated this hypothesis further, by means of behavioural,
psycho-physiological and neuroimaging methodologies. Overall, these studies have
provided robust evidence about the existence of a striking functional similarity between
real and mentally imagined actions.

2.1. Chronometric studies

Chronometric studies are based on the Mental Chronometry paradigm, which involves
comparing real and imagined movement durations. In general, results of these studies
indicate a close temporal coupling between mentally imagined and executed movement.
Decety and Michel [6] compared actual and imagined movement times in a
graphic task. They found that the time taken by right-handed subjects to write a
sentence was the same whether the task was executed mentally or physically. Also,
subjects took approximately the same time, both physically and mentally, whether they
wrote the text in large letters or in small letters. This observation suggests that the
“isochronic principle”, which holds for physically performed drawing and writing tasks,
applies also to mentally-simulated motor tasks.
In another experiment, Decety and Jeannerod [7] investigated whether Fitts' law
(which implies an inverse relationship between the accuracy of a movement and the
speed with which it can be performed) applies also to imagined movements. These
authors investigated mentally simulated motor behaviours within a virtual environment.
Participants were instructed to imagine themselves walking in a computer-
generated three-dimensional space toward gates of different apparent widths placed at
three different apparent distances. Results showed that response time increased for
decreasing gate widths when the gate was placed at different distances, as predicted by
Fitts' law. According to the authors, these findings support the hypothesis that mentally
simulated actions are governed by central motor rules.
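The relationship described by Fitts' law can be made concrete with a short sketch. The intercept and slope constants below are illustrative placeholders, not values fitted in the cited experiment, and the gate widths and distance are likewise hypothetical:

```python
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.2, b: float = 0.15) -> float:
    """Predict movement time in seconds from Fitts' law:
    MT = a + b * log2(2 * D / W).
    `a` and `b` are empirically fitted constants; the defaults here are
    illustrative placeholders only.
    """
    index_of_difficulty = math.log2(2 * distance / width)
    return a + b * index_of_difficulty

# Narrower gates at the same apparent distance raise the index of difficulty,
# so the predicted time increases -- the pattern the study also observed for
# mentally imagined walking.
for width_m in (1.2, 0.9, 0.6):
    print(f"gate width {width_m} m -> {fitts_movement_time(6.0, width_m):.2f} s")
```

Halving the gate width adds exactly `b` seconds to the predicted time, since the index of difficulty grows by one bit.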
The temporal correspondence between real and imagined motion is affected by
moderating variables such as the type of motor task and the time of day. Rodriguez
and colleagues [8] asked a group of healthy subjects to perform or imagine a fast
sequence of finger movements of progressive complexity. Findings showed real-mental
congruency in relatively complex motor sequences (4 to 5 fingers), whereas in the
simplest sequences (performed with 1 or 2 fingers) real-mental congruency decreased
markedly. The influence of the time of day on real-mental congruency was
investigated by Gueugneau and colleagues [9]. They found that the real-virtual
isochrony was only observable between 2 pm and 8 pm, whereas in the morning and
later in the evening, the durations of mental movements were significantly longer than
the durations of real movements.
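The congruency measure underlying these chronometric comparisons can be sketched in a few lines. The trial durations below are hypothetical illustrations, not data from the cited studies:

```python
def isochrony_ratio(executed_s: float, imagined_s: float) -> float:
    """Ratio of imagined to executed movement duration: values close to 1.0
    indicate the temporal coupling reported in mental chronometry studies."""
    return imagined_s / executed_s

# Hypothetical (executed, imagined) durations in seconds for three trials.
trials = [(3.1, 3.3), (5.0, 4.8), (2.4, 2.5)]
ratios = [isochrony_ratio(executed, imagined) for executed, imagined in trials]
mean_ratio = sum(ratios) / len(ratios)
print(f"mean imagined/executed ratio: {mean_ratio:.2f}")
```

A mean ratio near 1.0 corresponds to the isochrony reported for afternoon sessions, whereas the longer mental durations found in the morning and evening would push the ratio above 1.0.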

2.2. Psycho-physiological studies

Further evidence of the functional similarity between physical and imagined
movements is provided by studies that have measured patterns of autonomic response
during mental simulation of effortful motor actions. Decety and colleagues [10]
measured cardiac and ventilatory activity during actual and mental locomotion at
different speeds. Data analysis showed a strict correlation between heart and respiratory
rates and the degree of imagined effort. For example, the authors found that the amount
of vegetative arousal of a participant mentally running at 12 km/h was similar to that of
a subject physically walking at a speed of 5 km/h. In another study, Decety and
colleagues [11] analysed heart rate, respiration rate and muscular metabolism during
both actual and mental leg exercise. During motor imagery, vegetative activation was
found to be greater than expected from metabolic demands. The authors explained the
additional autonomic activation as the involvement of central mechanisms dedicated to
motor control, which anticipate the need for energetic mobilization required by the
planned movement.
Bonnet et al. [12] investigated changes in the excitability of spinal reflex pathways
during mental simulation and actual motor performance. In their experiment, subjects
were instructed either to exert or to mentally simulate a strong or a weak pressure on a
pedal with the left or the right foot. Modifications in the H- and T-reflexes were
measured on both legs by electromyography (EMG). Findings showed that spinal
reflex excitability during motor imagery was only slightly weaker than the reflex
facilitation associated with actual performance. A further interesting result of this
study was that the lateralization and intensity of the imagined movement significantly
modulated the EMG activity during motor imagery.

2.3. Brain imaging studies

A large body of recent research has investigated neural substrates underlying motor
imagery by comparing the brain activation that occurs during mental and physical
execution of movements. Taken together, results derived from these studies suggest
that imagining a motor act is a cognitive task that engages a complex distributed neural
circuit, which includes the activation of primary motor cortex (M1), supplementary
motor area, dorsal and ventral lateral pre-motor cortices, superior and inferior parietal
lobule, pre-frontal areas, inferior frontal gyrus, superior temporal gyrus, primary
sensory cortex, secondary sensory area, insular cortex, anterior cingulate cortex, basal
ganglia and cerebellum [13, 15].
The pattern of cerebral activation associated with motor imagery can be influenced
by the level of motor expertise. Ross and colleagues [16] used fMRI to evaluate motor
imagery of the golf swing of golf players with different handicap. Results showed
activation of cerebellum, vermis, supplementary motor area, as well as motor and
parietal cortices. Moreover, the authors found a correlation between increased handicap
of participants and an increased number of activated brain areas. According to the
authors, this increased brain activity may reflect a failure of the skill to become
highly automatic, or a loss of automaticity that demands compensatory processing.
A controversial point is whether different types of movement imagery (e.g., visual
and kinesthetic) involve distinct neural networks. By using EEG, Davidson and
Schwartz [17] observed different patterns of occipital and sensory motor alpha activity
during kinesthetic versus visual imaging. In particular, visual imaging was associated
with greater relative occipital activation. In an fMRI experiment, Guillot and colleagues
[18] found that visual imagery was correlated with activation of the occipital regions
and the superior parietal lobules, whereas kinesthetic imagery yielded more activity in
motor-associated structures and the inferior parietal lobule. These results suggest that,
as with physical movement, the two imagery modalities are mediated by distinct brain
systems.

2.4. Clinical neuro-psychology studies

Further evidence in support of the functional equivalence hypothesis comes from
clinical neuropsychological studies, showing that motor imagery is not dependent on
the ability to execute a movement but rather on central processing mechanisms.
Impaired motor imagery was observed in patients with lesions in the parietal cortex
[19] and in patients suffering from Parkinson’s disease, which affects supplementary
motor area, prefrontal cortex and basal ganglia [20, 22]. In those patients, movement
velocity during both motor execution and motor imagery is slower compared to healthy
controls; in contrast, patients with spinal lesions show longer motor execution
times but unchanged motor imagery durations [23]. Reduced functional motor
imagery was also identified in stroke patients with contralateral and premotor lesions,
with particular reference to upper limb pointing and rotation tasks [24, 25].
Furthermore, it appears that both imagery accuracy and temporal coupling can be
disrupted after a stroke, a phenomenon that has been defined by Sharma and colleagues
[26] as "chaotic motor imagery".

3. Mental practice

In the previous section, we reviewed evidence suggesting that mental and physical
actions obey the same biomechanical constraints and share similar
neuromuscular mechanisms. Another stream of research has investigated the effects of
mental rehearsal on motor skill learning. Laboratory experiments involving healthy
individuals have shown that motor learning can occur through mental practice alone,
and that the combination of physical and mental rehearsal can lead to superior
performance compared to physical practice only [27]. Positive effects of mental
practice have been reported in a variety of motor tasks and for different outcome
variables, including performance accuracy, movement speed and muscular force [28,
31].
Neuro-physiological studies have consistently shown that prolonged mental
practice induces plastic changes in the brain similar to those resulting from
physical training. Pascual-Leone and colleagues [3] used transcranial magnetic
stimulation to examine patterns of functional reorganization of the brain after mental or
physical training of a motor skill. Participants practiced a one-handed piano exercise
over a period of five days. Results showed that the size of the contra-lateral cortical
output map for the long finger flexor and extensor muscles increased progressively
each day, and that the increase was equivalent in both physical and mental training.
Furthermore, both conditions produced performance improvements, although subjects
in the physical practice group displayed greater learning. However, the addition of one
physical training session allowed participants who practiced the task mentally to reach
the same level of performance as those who practiced physically.
Jackson and colleagues [4] used positron emission tomography to examine
functional changes associated with the learning of a sequence of foot movements
through intensive mental practice. The improvement of performance determined by
mental training was found to be associated with an increase in activity in the medial
aspect of the orbitofrontal cortex (OFC), and a decrease of activity in the cerebellum.
Data analysis also highlighted a positive correlation between the blood flow increase in
the OFC and the percentage of improvement on the foot sequence task.
Sacco and colleagues [32] used fMRI to measure the activity of brain areas
involved in locomotor imagery tasks (basic tango steps) at baseline and after one week
of training consisting of combined physical and mental practice. Findings showed an
expansion of active bilateral motor imagery areas during locomotor imagery after
training. Moreover, these authors found a decrease in visuospatial activation in the
posterior right brain, suggesting a decreased role of visual imagery processes in the
post-training period in favor of motor-kinesthetic ones.

3.1. Factors affecting mental practice

Other mental practice studies have examined the conditions under which this approach
is more effective. Driskell and colleagues [33] conducted a meta-analysis to determine
the effect of mental rehearsal and different moderators on performance. The key factors
highlighted by the review are summarized below:
- Type of task: mental practice seems to be more effective when the task to be
learnt requires cognitive or symbolic components/operations (e.g., making decisions,
solving problems, generating hypotheses; p. 485);
- Retention interval: the effects of mental practice on performance become
weaker over time. To gain the maximum benefits of mental practice, one should
refresh training on at least a one- or two-week schedule (p. 489);
- Experience level: while experienced subjects benefit equally well from mental
practice, regardless of task type (cognitive or physical), novice subjects benefit
more from mental practice on cognitive tasks than on physical tasks (p. 488).
Mental practice may be more effective if novice subjects are given schematic
knowledge before mental practice of a physical task (p. 489);
- Duration of mental practice: the benefit of mental practice decreases as training
duration increases. To maximize learning outcomes, an overall training period of
approximately twenty minutes is recommended (p. 488).
The type of imagery modality (internal or external) used by the participant is
another important variable to consider when defining mental practice protocols. Fery
[34] found that in learning a new task, visual imagery is better for tasks that emphasize
form, while kinesthetic imagery is more suited for those tasks that emphasize timing or
fine coordination of the two hands. In another study, Hall and colleagues [35]
highlighted that kinesthetic imagery is better for learning closed motor skills, whereas
visual-based imagery is more effective for learning open motor skills.

4. Mental practice in neurological rehabilitation

In recent years, there has been growing interest towards the application of mental
practice in rehabilitation. Studies have been reported on patients with different
neurological conditions, including Parkinson's disease [36], spinal cord injury [37],
and intractable pain [38]. However, the largest body of mental practice research has
been conducted on stroke patients. Within these studies, the effects of combining
mental and physical practice are usually compared with conventional treatment based
on physical practice alone. In a recent meta-analysis, Zimmermann-Schlatter and
colleagues [39] reported a positive (albeit modest) effect of combined mental and
physical practice on stroke recovery, though the low number of randomized trials (four)
prevented the authors from drawing firm conclusions about the effectiveness of this
integrated approach.
Jackson et al. [40] proposed a model that describes the potential benefit of mental
practice for rehabilitation. These authors suggest that on one hand, mental practice with
motor imagery can be an effective means to access the otherwise non-conscious
learning processes involved in a task. On the other hand, the absence of direct feedback
from physical execution makes mental practice on its own a less effective training
method than physical practice. Sharma and colleagues [41] conceptualized motor
imagery as a "backdoor" to accessing the motor system after a stroke because "it is not
dependent on residual functions yet still incorporates voluntary drive." (p. 1942).
However, the use of this approach with brain-injured patients poses several practical
issues. Patients can find the mental simulation task too overwhelming and difficult to
understand. Also, neuropsychological evidence suggests that after stroke, motor
imagery is not symmetrical, and motor imagery vividness is better when imagining
movements on the unaffected than on the affected side [42]. Finally, it is not a simple
task to instruct the patient to imagine movements using a first-person perspective
(kinesthetic imagery), an approach that is believed to be effective for training fine
motor skills [34].
Different strategies have been proposed to support brain-injured patients in
executing mental practice on the affected limb [40, 43]. One such strategy involves the
use of a mirror box apparatus [44]. In this approach, the patient is required to perform
the movement to be re-trained with the non-affected arm in front of a mirror placed on
the frontal-coronal plane and observe the resulting mirrored image. Then the patient is
instructed to mentally simulate the movement he has just observed. One shortcoming of
this approach, however, is that the use of a mirror requires the patient to split his
attention between the movement executed with the healthy arm and its mirrored image,
which enhances the cognitive burden of the task. Moreover, this approach does not
permit recording of patient’s movements.
In an attempt to overcome these issues, we developed a non-immersive virtual
reality prototype equipped with arm-tracking sensors, the “VR-Mirror” [45]. The VR-
Mirror was designed to superimpose a virtual reconstruction of the movement
registered from the non-paretic arm over the (unseen) paretic arm. This allows the
patient to observe a model of the movement that has to be imagined. The objective of
this strategy is to guide the patient in mentally simulating the movement to be re-
trained by taking a first-person perspective (kinesthetic imagery).

5. Pilot clinical study

5.1. Subjects

Before starting the intervention, all patients signed an informed consent statement, in
accordance with the guidelines of the institutional ethical review board, outlining their
rights as experimental subjects. The following inclusion criteria were adopted:
- time since stroke onset between 1 and 6 years;
- no cognitive deficits (Mini Mental Status Examination > 24);
- age 18-80;
- no excessive spasticity or pain in the affected limb;
- completely discharged from all forms of physical rehabilitation;
- ability to perform mental imagery.
Nine patients who had received an inpatient rehabilitation program met the eligibility
criteria and were enrolled in the study. All patients had suffered an ischemic stroke
resulting in a chronic hemiplegia. Clinical observations suggested that the patients’
affected limb function had not improved since the time of discharge from the hospital.
The mean time from stroke onset to enrollment in the study was 31 months (S.D. 25.3,
range 13-96). A summary of demographics and clinical characteristics of the sample is
presented in Table 1.

5.2. Neuropsychological assessment

Patients underwent neuropsychological assessment, which included the evaluation of
communication and cognitive skills, memory, attention, visuo-spatial, and executive
functions. Mental imagery ability was measured through the Vividness of Visual
Imagery Questionnaire (VVIQ) [46], and the Mental Rotation Test [47]. In the VVIQ,
the patient is asked to imagine different scenarios and rate the vividness of the images
Table 1. Demographics and clinical characteristics of the sample

Patient ID   Gender   Age    Stroke onset (Mo)   Side of impairment   Hand dominance
VP           M        46     13                  Right                Right
GT           M        68     24                  Right                Right
MLR          F        61     27                  Right                Right
LS           M        39     24                  Left                 Right
SS           M        57     20                  Right                Right
RG           M        68     25                  Left                 Right
LL           F        63     96                  Left                 Right
TP           F        40     36                  Right                Bilateral
PZ           M        27     14                  Left                 Left
Mean                  52.1   31
SD                    14.6   25.3
Range                 27-68  13-96
that are generated. The responses to all the questions can be summed to provide an
overall score. The Mental Rotation Test is used to assess the ability to mentally rotate
three-dimensional objects. The Vividness of Movement Imagery Questionnaire
(VMIQ) [48] was used to test patients' ability to perform motor imagery. The VMIQ is
constructed specifically to assess kinesthetic imagery ability and contains a 24-item
scale of movements that the subject is requested to imagine. The questionnaire
includes a variety of relatively simple upper-extremity, lower-extremity, and
whole-body movements. The best score is 120, and the worst score is 24.
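The VMIQ scoring rule just described (a simple sum of item ratings) can be sketched as follows. This stdlib-Python fragment assumes, per the text, 24 items each rated 1-5 with higher totals indicating better imagery; it is an illustration, not a validated scoring tool:

```python
def vmiq_total(item_ratings):
    """Total VMIQ score: the sum of the 24 item ratings.

    Assumes (following the text) each item is rated 1-5 with higher meaning
    more vivid imagery, so totals range from 24 (worst) to 120 (best).
    """
    if len(item_ratings) != 24:
        raise ValueError("the VMIQ contains 24 items")
    if any(not 1 <= r <= 5 for r in item_ratings):
        raise ValueError("each item rating must lie between 1 and 5")
    return sum(item_ratings)

assert vmiq_total([5] * 24) == 120  # best possible score
assert vmiq_total([1] * 24) == 24   # worst possible score
```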

5.3. System

The VR Mirror consists of the following components (Figure 1):

- a retro-projected horizontal screen incorporated in a wooden table;
- an LCD projector with parallax correction;
- a mirror that reflects the projector beam onto the horizontal screen;
- two movement-tracking sensors (Polhemus Isotrack II, Polhemus, Colchester, VT)
positioned on the patient's hand and forearm;
- two sets of five buttons placed at the side of the screen;
- a personal computer equipped with a graphics accelerator.
The laboratory intervention using the VR-Mirror was integrated with a home-
rehabilitation program making use of a DVD. The DVD stored prerecorded movies that
showed the patient how to perform the motor exercises.

Figure 1. The Virtual Reality Mirror. Top left: the patient is performing the movement with the healthy arm
during the registration phase. Top-right and bottom: positioning of sensors and structure of the prototype.
5.4. Intervention

The day-hospital rehabilitation protocol included a minimum of two weekly sessions
for eight consecutive weeks. Each therapeutic session at the hospital included 1/2 h of
standard physiotherapy plus 1/2 h of VR Mirror training. The treatment focused on the
following motor exercises:
1) flexion-extension of the wrist;
2) supination/pronation;
3) flexion-extension of the elbow with assisted stabilization of the shoulder.
The training procedure with the VR Mirror consisted of the following steps. First,
the therapist shows the patient how to perform the movement with the unaffected arm.
When the patient performs the task, the system registers the movement and generates
its mirrored three-dimensional simulation. Then, the virtual arm is superimposed over
the (unseen) paretic limb, so that the patient can observe a model of the movement to
be imagined. Next, the patient is asked to mentally rehearse the movement he has just
observed, taking a first-person perspective. When the patient starts to imagine the
movement, he presses a button (using his healthy hand), pressing it again when he has
finished. This allows the therapist to measure the time the patient takes to imagine each
movement exercise. Last, the patient has to perform the movement with the affected
arm. During the execution of the physical exercise with the paretic arm, the system
tracks the movement and measures its deviation from the movement performed with
the non-paretic arm. Using this measurement, which is done in real time, the system
provides the patient with audiovisual feedback describing his performance on the task.
The feedback consists of a red bar chart, which changes its shape according to the
precision of the movement. This procedure was repeated at least 5 times within each
practice session for each target exercise.
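The real-time comparison between the paretic-arm trajectory and the registered model lends itself to a simple formulation. The sketch below is purely illustrative: the RMS deviation metric and the 0-100 bar mapping are assumptions for exposition, not the VR-Mirror's actual algorithm.

```python
import math

def trajectory_deviation(reference, performed):
    """Root-mean-square deviation between two equal-length 3-D trajectories.

    `reference` is the mirrored healthy-arm movement, `performed` the tracked
    paretic-arm movement (lists of (x, y, z) samples). Hypothetical helper,
    not taken from the system described in the text.
    """
    if len(reference) != len(performed):
        raise ValueError("trajectories must have the same number of samples")
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(reference, performed)]
    return math.sqrt(sum(sq) / len(sq))

def feedback_bar(deviation, max_dev=0.20):
    """Map deviation (m) to a 0-100 precision score driving a feedback bar."""
    return max(0.0, 100.0 * (1.0 - deviation / max_dev))

ref = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.0, 0.0)]
assert feedback_bar(trajectory_deviation(ref, ref)) == 100.0  # perfect match
```

Any monotone mapping from deviation to bar length would serve; the essential point is that the feedback shrinks as the paretic trajectory departs from the mirrored model.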
In parallel with the hospital-based treatment, patients were asked to practice
home-based exercises using the DVD three times a week, for one hour per session. The
DVD stored pre-recorded movies showing the correct execution of each exercise. After
viewing the movies, the patient was asked to take a first-person perspective and
imagine executing the movement with the impaired arm.

5.5. Evaluation

Patients were evaluated four times:
1) at the beginning of the hospital practice (baseline assessment);
2) four weeks after starting hospital practice (midterm evaluation);
3) eight weeks after starting hospital practice;
4) 12 weeks after the end of hospital practice (follow-up).
Primary pretreatment and post-treatment measures included the Action Research
Arm Test (ARAT) [49] and Fugl-Meyer Upper Extremity Assessment Scale (FMA-
UE) [50]. ARAT includes four domains (grasp, grip, pinch and gross motor) and
contains 19 items. Each item is graded on a four-point scale with total score ranging
from 0 to 60. Higher scores indicate better upper extremity function. The Fugl-Meyer
Upper Extremity Assessment Scale is composed of 33 items, with total scores ranging
between 0 and 66. Higher FMA-UE scores mean better motor function.
Functional outcome measures were integrated with health-related quality of life
assessments to evaluate the impact of the treatment on patients’ perceived well-being.
The EuroQol (EQ-5D) [51] generic health measure was used for this purpose. This
index consists of a five-part questionnaire and a visual analogue scale (VAS) that asks
for an overall rating of the respondent's self-perceived health. The questionnaire
describes states of health in five dimensions: mobility, self care, usual activities, pain or
discomfort, and anxiety or depression. Each dimension comprises three levels (1=no
problems, 2=some problems, 3=severe problems), and the respondent is asked to
indicate his/her health state by choosing the most appropriate statement. The EuroQol
VAS is similar to a thermometer and has endpoints of 100 (‘best imaginable health
state’) at the top and 0 (‘worst imaginable health state’) at the bottom. Finally, patients
were asked to keep a rehabilitation diary, in which they recorded their compliance with
the home-based exercise program.
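The EQ-5D responses described above (five 3-level dimensions plus a 0-100 VAS) can be represented compactly. This is a hypothetical stdlib-Python data structure for exposition only; converting a profile to a utility index requires country-specific value sets not covered in the text.

```python
from dataclasses import dataclass

@dataclass
class EQ5DResponse:
    """One EQ-5D assessment: five dimensions (each 1-3) plus the 0-100 VAS.

    Illustrative structure, not an official EuroQol implementation.
    Levels: 1 = no problems, 2 = some problems, 3 = severe problems.
    """
    mobility: int
    self_care: int
    usual_activities: int
    pain_discomfort: int
    anxiety_depression: int
    vas: int  # 0 = worst imaginable health state, 100 = best

    def profile(self):
        """Return the five-digit health-state profile, e.g. '11221'."""
        dims = (self.mobility, self.self_care, self.usual_activities,
                self.pain_discomfort, self.anxiety_depression)
        assert all(1 <= d <= 3 for d in dims) and 0 <= self.vas <= 100
        return "".join(str(d) for d in dims)

r = EQ5DResponse(1, 1, 2, 2, 1, vas=75)
assert r.profile() == "11221"
```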

5.6. Results

Results of the pilot clinical trial are summarized in Table 2. A paired t-test was used to
compare functional evaluation scores at baseline with follow-up assessments. Results
showed no significant improvement of function as measured with the ARAT, and a
quasi-significant difference in pre-post Fugl-Meyer scores (t = 2.18; p < .06). However, a
case-by-case analysis revealed a notable improvement in patients VP, TP and GT. In
those patients, the score on both functional outcome measures increased throughout the
eight weeks of treatment with no loss of improvement at follow-up evaluation.
Measurements of wrist function revealed increases in range of motion during the first
phase of intervention with no losses in movement range occurring after the laboratory
intervention was completed. Moreover, these patients showed appreciable increases in
grip strength for the affected right limb. Patients MLR and PZ, who were less severely
affected, showed complete recovery of the functions assessed by the Fugl-Meyer scale
but no further improvement in the ARAT scores. Furthermore, the analysis of the
rehabilitation diary revealed that patient MLR regained the ability to paint, and that
patient PZ reported feelings of enhanced dexterity and control on the affected limb.
Table 2. Functional assessment after treatment

ARAT          Baseline   4 weeks   8 weeks   Follow-up
VP            12         26        29        29
GT            25         26        29        30
MLR           57         57        57        57
LS            0          0         0         0
SS            5          5         5         5
RG            0          0         0         0
LL            0          0         0         0
TP            20         24        28        28
PZ            60         60        60        60
Mean          19.9       22.0      23.1      23.2
SD            23.7       23.6      23.8      23.8

FUGL-MEYER    Baseline   4 weeks   8 weeks   Follow-up
VP            20         34        36        36
GT            25         25        32        32
MLR           59         60        66        66
LS            10         10        10        11
SS            14         15        15        15
RG            6          7         7         7
LL            14         15        15        15
TP            0          2         5         5
PZ            54         54        58        62
Mean          19.1       24.7      27.1      27.7
SD            15.9       20.7      22.5      23.1

Patients LL, LS, SS and RG presented a more negative pattern of results. None of
them showed improvement on the functional scales, and the effect of treatment on
functional recovery was negligible. This result might be partially explained by the low
compliance with the home-based exercise program that was reported by these patients
in their diaries.
When practicing with the VR Mirror, patients were asked to press a button (with
the healthy arm) in order to record imagined movement times (Figure 2). The analysis
of these data did not reveal any significant correlation between real and imagined
movement durations.
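The real-imagined comparison reported here amounts to correlating two paired lists of button-press timings. A stdlib-Python sketch of such an analysis, with invented durations (not patient data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired lists, e.g. real vs. imagined
    movement durations collected via the button presses described above."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative durations in seconds (hypothetical, not from the study).
real = [1.2, 1.5, 1.9, 2.4]
imagined = [1.3, 1.4, 2.0, 2.6]
assert -1.0 <= pearson_r(real, imagined) <= 1.0
```

A significant positive r would indicate the real-mental isochrony discussed in Section 2; its absence in this sample is consistent with the "chaotic motor imagery" reported after stroke.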
Although post-treatment measures showed only moderate gains in motor function, all
patients reported increased well-being and reduced stress. A Wilcoxon signed-rank test
revealed a significant pre-post difference for EuroQol VAS scores (W=28, p < .02).
Furthermore, patients reported improvements in key health-status dimensions, with
particular reference to daily activities (t = 1.18, p < .05). Notably, patients MLR
and PZ reported remarkable gains in leisure, household and
community tasks.

Figure 3. Mean EQ-5D scores for the five health dimensions (1= no problems; 3 = severe problems).
Figure 4. EQ-5D assessment of patients’ own health status before and after treatment (0 = worst imaginable
health state; 100 = best imaginable health state).

6. Conclusions

The main objective of this pilot study was to evaluate the technical and clinical
feasibility of using virtual reality technology to support mental practice in
neurorehabilitation. This strategy was tested in nine post-stroke patients with chronic
motor impairment of the upper limb. After eight weeks of treatment, remarkable
improvement was noted in three cases, slight improvement in two cases, and no
improvement in four cases. The limited number of patients and the absence of a control
condition did not allow us to draw any conclusion about the efficacy of this
intervention. However, results showed a good acceptance of VR Mirror therapy by
both patients and therapists, suggesting that virtual reality technology can be
successfully integrated into mental practice interventions. A future goal is to define
appropriate technology-based strategies for motivating patients to execute mental
practice at home without therapist supervision.

References

[1] M. Jeannerod, The representing brain: Neural correlates of motor intention and imagery, Behavioural
and Brain Sciences 17 (2) (1994), 187-245.
[2] L.P. McAvinue, and I.H. Robertson, Relationship between visual and motor imagery, Percept Mot
Skills 104 (2007), 823-43.
[3] A. Pascual-Leone, D. Nguyet, L.G. Cohen, J.P. Brasil-Neto, A. Cammarota, and M. Hallett, Modulation
of muscle responses evoked by transcranial magnetic stimulation during the acquisition of new fine
motor skills, Journal of Neurophysiology 74 (3) (1995), 1037-1045.
[4] P.L. Jackson, M.F. Lafleur, F. Malouin, C.L. Richards, and J. Doyon, Functional cerebral
reorganization following motor sequence learning through mental practice with motor imagery,
NeuroImage 20 (2) (2003), 1171-1180.
[5] A. Sirigu, and J.R. Duhamel, Motor and visual imagery as two complementary but neurally dissociable
mental processes, Journal of Cognitive Neuroscience 13 (7) (2001), 910-919.
[6] J. Decety, and F. Michel, Comparative analysis of actual and mental movement times in two graphic
tasks, Brain and Cognition 11 (1989), 87-97.
[7] J. Decety, and M. Jeannerod, Mentally simulated movements in virtual reality: does Fitts's law hold in
motor imagery?, Behavioural Brain Research 72 (1-2) (1995), 127-134.
[8] M. Rodriguez, C. Llanos, S. Gonzalez, and M. Sabate, How similar are motor imagery and movement?,
Behavioural Neuroscience 122 (2008), 910-6.
[9] N. Gueugneau, B. Mauvieux, and C. Papaxanthis, Circadian Modulation of Mentally Simulated Motor
Actions: Implications for the Potential Use of Motor Imagery in Rehabilitation, Neurorehabilitation
and Neural Repair (2008).
[10] J. Decety, M. Jeannerod, M. Germain, and J. Pastene, Vegetative response during imagined movement
is proportional to mental effort, Behavioural Brain Research 42 (1) (1991), 1-5.
[11] J. Decety, M. Jeannerod, D. Durozard, and G. Baverel, Central activation of autonomic effectors during
mental simulation of motor actions in man, The Journal of Physiology 461 (1993), 549-563.
[12] M. Bonnet, J. Decety, M. Jeannerod, and J. Requin, Mental simulation of an action modulates the
excitability of spinal reflex pathways in man, Brain Research Cognitive Brain Research 5 (3) (1997),
221-228.
[13] J. Decety, Do imagined and executed actions share the same neural substrate?, Brain Research.
Cognitive Brain Research 3 (2) (1996), 87-93.
[14] P.L. Jackson, M.F. Lafleur, F. Malouin, C. Richards, and J. Doyon, Potential role of mental practice
using motor imagery in neurologic rehabilitation, Archives of Physical Medicine and Rehabilitation 82
(8) (2001), 1133-1141.
[15] M.G. Lacourse, E.L.R. Orr, S.C. Cramer, and M.J. Cohen, Brain activation during execution and motor
imagery of novel and skilled sequential hand movements, NeuroImage 27 (3) (2005), 505-519.
[16] J.S. Ross, J. Tkach, P.M. Ruggieri, M. Lieber, and E. Lapresto, The Mind's Eye: Functional MR
Imaging Evaluation of Golf Motor Imagery, American Journal of Neuroradiology 24 (6) (2003), 1036-
1044.
[17] R.J. Davidson, and G.E. Schwarz, Brain mechanisms subserving self-generated imagery:
Electrophysiological specificity and patterning, Psychophysiology 14 (6) (1977), 598-601.
[18] A. Guillot, C. Collet, V.A. Nguyen, F. Malouin, C. Richards, and J. Doyon, Brain activity during visual
versus kinesthetic imagery: An fMRI study, Human Brain Mapping (2008).
[19] A. Sirigu, L. Cohen, J.R. Duhamel, B. Pillon, B. Dubois, Y. Agid, and C. Pierrot-Deseilligny,
Congruent unilateral impairments for real and imagined hand movements, Neuroreport 6 (7), 997-1001.
[20] P. Dominey, J. Decety, E. Broussolle, G. Chazot, and M. Jeannerod, Motor imagery of a lateralized
sequential task is asymmetrically slowed in hemi-Parkinson's patients, Neuropsychologia 33 (6) (1995),
727-41.
[21] R. Cunnington, G.F. Egan, J.D. O'Sullivan, A.J. Hughes, J.L. Bradshaw, and J.G. Colebatch, Motor
imagery in Parkinson's disease: a PET study, Movement Disorders 16 (5) (2001), 849-57.
[22] S. Thobois, P.F. Dominey, J. Decety, P.P. Pollak, M.C. Gregoire, P.D. Le Bars, and E. Broussolle,
Motor imagery in normal subjects and in asymmetrical Parkinson's disease: a PET study, Neurology 55
(7) (2000), 996-1002.
[23] A. Sirigu, J.R. Duhamel, and L. Cohen, The mental representation of hand movements after parietal
cortex damage, Science 273 (1996), 1564-1568.
[24] B. Tomasino, and R.I. Rumiati, Effects of strategies on mental rotation and hemispheric lateralization:
neuropsychological evidence, The Journal of Cognitive Neuroscience 16 (2004), 878-888.
[25] N. Sharma, V.M. Pomeroy, and J.C. Baron, Motor imagery: a backdoor to the motor system after
stroke?, Stroke 37 (7) (2006), 1941-1952.
[26] N. Allami, Y. Paulignan, A. Brovelli, and D. Boussaoud, Visuo-motor learning with combination of
different rates of motor imagery and physical practice, Experimental Brain Research 184 (1) (2008),
105-13.
[27] L. Yágüez, D. Nagel, H. Hoffman, A.G.M. Canavan, E. Wist, and V. Hömberg, A mental route to
motor learning: Improving trajectorial kinematics through imagery training, Behavioural Brain
Research 90 (1) (1998), 95-106.
[28] G. Yue, and K.J. Cole, Strength increases from the motor program: comparison of training with
maximal voluntary and imagined muscle contractions, Journal of Neurophysiology 67 (1992), 1114-
1123.
[29] V.K. Ranganathan, V. Siemionow, J.Z. Liu, V. Sahgal, and G.H. Yue, From mental power to muscle
power-gaining strength by using the mind, Neuropsychologia 42 (7) (2004), 944-956.
[30] R. Gentili, C. Papaxanthis, and T. Pozzo, Improvement and generalization of arm motor performance
through motor imagery practice, Neuroscience 137 (2006), 761-772.
[31] K. Sacco, F. Cauda, L. Cerliani, D. Mate, S. Duca, and G.C. Geminiani, Motor imagery of walking
following training in locomotor attention. The effect of "the tango lesson", NeuroImage 32 (3) (2006),
1441-9.
[32] J.E. Driskell, C. Copper, and A. Moran, Does Mental Practice Enhance Performance?, Journal of
Applied Psychology 79 (4) (1994), 481-492.
[33] Y.A. Fery, Differentiating visual and kinesthetic imagery in mental practice, Journal of Experimental
Psychology 57 (1) (2003), 1-10.
[34] C. Hall, E. Buckolz, and G.J. Fishburne, Imagery and the acquisition of motor skills, Journal of Sport
Science 17 (1992), 19-27.
[35] R. Tamir, R. Dickstein, and M. Huberman, Integration of motor imagery and physical practice in group
treatment applied to subjects with Parkinson's disease, Neurorehabilitation and Neural Repair 21
(2007), 68-75.
[36] S.C. Cramer, E.L. Orr, M.J. Cohen, and M.G. Lacourse, Effects of motor imagery training after chronic,
complete spinal cord injury, Experimental brain research 177 (2006), 233-242.
[37] G.L. Moseley, Graded motor imagery is effective for long-standing complex regional pain syndrome: a
randomised controlled trial, Pain 108 (1-2) (2004), 192-198.
[38] A. Zimmermann-Schlatter, C. Schuster, M.A. Puhan, E. Siekierka, and J. Steurer, Efficacy of motor
imagery in post-stroke rehabilitation: a systematic review, Journal of NeuroEngineering and
Rehabilitation 14 (2008).
[39] P.L. Jackson, M.F. Lafleur, F. Malouin, C. Richards, and J. Doyon, Potential role of mental practice
using motor imagery in neurologic rehabilitation, Archives of Physical Medicine and Rehabilitation 82
(8) (2001), 1133-1141.
[40] N. Sharma, V.M. Pomeroy, and J.C. Baron, Motor imagery: a backdoor to the motor system after
stroke?, Stroke 37 (2006), 1941–1952.
[41] F. Malouin, C.L. Richards, A. Durand, and J. Doyon, Clinical assessment of motor imagery after stroke,
Neurorehabilitation and Neural Repair 22 (4) (2008), 330-40.
[42] R. Dickstein, and J.E. Deutsch, Motor Imagery in Physical Therapist Practice, Physical Therapy 87 (7)
(2007), 942 - 953.
[43] J.A. Stevens, and M.E. Stoykov, Using motor imagery in the rehabilitation of hemiparesis, Archives of
Physical Medicine and Rehabilitation 84 (7) (2003), 1090-2.
[44] A. Gaggioli, F. Morganti, R. Walker, A. Meneghini, M. Alcaniz, J.A. Lozano, J. Montesa, J.A. Gil, and
G. Riva, Training with computer-supported motor imagery in post-stroke rehabilitation,
CyberPsychology and Behavior 7 (3) (2004), 327-332.
[45] D. Marks, Visual imagery differences in the recall of pictures, British Journal of Psychology 64
(1973), 17-24.
[46] R. Shepard, and J. Metzler, Mental rotation of three-dimensional objects, Science 171 (1971), 701-703.
[47] A. Isaac, D. Marks, and D. Russell, An instrument for assessing imagery of movement: The vividness
of movement imagery questionnaire (VMIQ), Journal of Mental Imagery 10 (1986), 23-30.
[48] R.C. Lyle, A performance test for assessment of upper limb function in physical rehabilitation treatment
and research, International Journal of Rehabilitation Research 4 (1981), 483-492.
[49] A. Fugl-Meyer, L. Jaasko, and I. Leyman, The post-stroke hemiplegic patient, I: a method for
evaluation of physical performance, Scandinavian Journal of Rehabilitation Medicine 7 (1975), 13-31.
[50] R. Rabin, and F. de Charro, EQ-5D: a measure of health status from the EuroQol Group, Annals of
Medicine 33 (2001), 337-43.
Postural and Spatial Orientation Driven by
Virtual Reality
Emily A. KESHNER a and Robert V. KENYON b
a Department of Physical Therapy, College of Health Professions, and Department of Electrical and Computer Engineering, College of Engineering, Temple University, Philadelphia, USA
b Department of Computer Science, University of Illinois at Chicago, Chicago, IL, USA
Abstract. Orientation in space is a perceptual variable intimately related to
postural orientation that relies on visual and vestibular signals to correctly identify
our position relative to vertical. We have combined a virtual environment with
motion of a posture platform to produce visual-vestibular conditions that allow us
to explore how motion of the visual environment may affect perception of vertical
and, consequently, affect postural stabilizing responses. In order to involve a
higher level perceptual process, we needed to create a visual environment that was
immersive. We did this by developing visual scenes that possess contextual
information using color, texture, and 3-dimensional structures. Update latency of
the visual scene was close to physiological latencies of the vestibulo-ocular reflex.
Using this system we found that even when healthy young adults stand and walk
on a stable support surface, they are unable to ignore wide field of view visual
motion and they adapt their postural orientation to the parameters of the visual
motion. Balance training within our environment elicited measurable rehabilitation
outcomes. Thus we believe that virtual environments can serve as a clinical tool
for evaluation and training of movement in situations that closely reflect
conditions found in the physical world.
Keywords. Perception, Posture Platform, Multi-modal Immersion, Visual-vestibular Conflict
1. Introduction

We have created a laboratory that combines two technologies: dynamic posturography
and virtual reality (VR). The purpose of this laboratory is to test and train postural
behaviors in a virtual environment (VE) which simulates real world conditions. Our
goal with this environment is to explore how multisensory inputs influence the
perception of orientation in space, and to determine the consequence of shifts in spatial
perception on postural responses. Drawing on previous findings from aviators in
simulators, which indicated that responses to visual disturbances became much stronger
when combined with a physical force disturbance [1], we have assumed that the use of
a VE would elicit veridical reactions.
Traditionally, postural reactions have been studied in controlled laboratory
settings that either remove inputs from specific pathways to determine their
contribution to the postural response (e.g., closing the eyes to remove vision), or test
patients with sensory deficits to determine how the central nervous system (CNS)
compensates for the loss of a particular input [2]. In order to simplify the system, most
studies recorded a single dependent variable such as center of pressure (COP) or center
of mass (COM) when either the physical or visual world moved [3, 4]. There have been
studies comparing the relative influence of two input signals, but conclusions have
mostly been drawn from a single output variable [5, 6]. With our environment, we have
the ability to monitor multiple inputs as well as multiple outputs. Thus our
experimental protocols can be designed to incorporate the complex cortical processing
that occurs when an individual is navigating through the natural world, thereby eliciting
motor behaviors that are presumed to be more analogous to those that take place in the
natural physical environment.
In this chapter we present information about both the technology used in our
laboratory as well as data that demonstrate how we have been able to modify and
measure postural and perceptual responses to multisensory disturbances in both healthy
and clinical populations. First, we will present a background of our rationale for using
VE technology. Second, we will describe the choices we made in developing our
laboratory. Results from experiments in the VE will be offered to support our claim
that our laboratory creates a sense of presence in the environment. Lastly, we will
present evidence of a strong linkage between posture and perception which supports
our belief that the VE is a valuable tool for exploring the organization of postural
behaviors as they would occur in natural conditions. Our laboratory presents the
challenges necessary to evaluate postural disorders and to design interventions that will
fully engage the adaptive properties of the system. We believe the VE has vast
potential as both a diagnostic and treatment tool for patients with poorly diagnosed
balance disorders.
1.1. Unique Features that Motivated Our Use of VR in Posture Control Research

Prior to the advent of virtual environment display technology, experiments using
complex, realistic computer-controlled imagery to study visual information
processing/motor control linkages were difficult to produce. Until the arrival of
affordable high performance computer graphics hardware, the manipulation of visual
stimuli was relegated to real world conditions that were optically altered using prism
lenses [7, 8], or to artificial visual stimuli depicted in pictures and simple computer
generated images such as random dots moving across a monitor [9, 11]. Each of these
systems had advantages and limitations that constrained the type of experiments that
could be performed. Consequently, any investigation of motor control and its
interaction with vision and the other senses was limited by the available technology.
These limitations severely impeded the study of posture control which incorporates
both feedback about our self-motion and the perception of orientation in space.
Although the role of each sensory feedback system could be studied by either removing
or augmenting its contribution to an action, perception of vertical orientation is more
difficult to discern and its measurement was mostly dependent on subjective feedback
from the subject. If we were to examine how a higher level process like perception
impacted posture control, then we needed to produce a visual environment convincing
enough that our subjects believed that they needed to deal with disturbances presented
in that environment. To create such conditions we modeled our laboratory after one of
the most successful applications of a VE to activate perception, that of pilots in flight
simulators [12]. We needed an environment where subjects accepted that they were
actually present in the environment, thus their responses to the virtual world would be
similar to those elicited in the physical world [13]. This “suspension of disbelief”
which accompanies a subject’s immersion in the environment, mentally and/or
physically, is also known as “presence”. We felt that a strong sense of “presence” was
needed in order to engage and manipulate the higher level cognitive processes that
influence posture control [14].
1.2. Our Work and Its Parallel to Flight Simulation

As mentioned previously, probably the best known and most successful use of a VE
with human subjects is in the area of flight training [15, 16]. Pilots are exposed to
situations that hone their skills in dealing with dangerous situations or train them to use
new equipment. Many of these scenarios involve situations that could not be employed
in the actual aircraft due to the danger they would present to the crew. In the safety of the
simulator, however, such practice is routine and a vital part of their training. What
makes this environment so compelling is the combination of somatosensory inputs
[from the stick or rudder], vestibular motion inputs [from the motion-base systems],
and auditory stimuli that are combined with convincing graphics that relay the pilot’s
actions as if he were controlling the actual aircraft. Thus, a sense of presence in the VE,
through the combination of sensory and physical motion feedback, helped to obtain a
high fidelity response from the pilots that was successfully translated to tangible flying
skills [15]. In a similar manner we use a combination of somatosensory and complex
visual inputs in our laboratory to immerse the subjects so that their responses to the VE
are closely matched to what they perceive to be real disturbances in the environment.
Yet, as in simulators, our subjects are protected from the dangers of collision or falling
regardless of the scenario under investigation. As with simulators, the VE allows us to
expose subjects to situations that they may never actually experience, but their
responses will teach us about the adaptive and control properties of the CNS by fully
engaging their sensory and motor systems in planning and producing responses to
environmental disturbances. We believe that we have accomplished this goal in the
way that we have developed our laboratory as described below.
1.3. How We Created Our Lab

To adhere to our goal of engaging the perceptual system so as to elicit as close to a
natural reaction from the CNS as possible, we had to decide what kind of VE system
would be the best to use in our research. At that time, there were two main contenders
to choose from: Head Mounted Displays (HMD) and the CAVE1 (CAVE Automatic
Virtual Environment) [17]. The cost of each was comparable and each had its advantages
and disadvantages. HMD systems allow one to totally immerse subjects in the
computer generated world and give the scientist complete control over what the
subjects will see during the course of the testing/training in a very compact package. In
addition, update rates and the resolution of the screens could be higher than in a CAVE
system. However, one of the more important considerations is that HMDs suffer from
image swimming during head motion due to the latencies inherent in head tracking and
image generation. In addition, at that time such systems were heavier and therefore
added weight to the subject’s head. The projection-based CAVE used a light weight
pair of glasses to allow the subject to perceive stereo which made this option very
attractive since it imposed a mild encumbrance to the subject. The subjects were
1 The CAVE is a registered trademark of the Board of Trustees of the University of Illinois.
immersed, but not to the extent provided by an HMD system since they could see
physical objects, such as their own body, in addition to the virtual world. The ability to
see yourself within your environment is a trait experienced in the physical world
[Augmented Reality HMD systems that allow the subject to see both physical and
virtual objects are currently available but are at least twice the cost of a CAVE].
Swimming of the scene during head movements was minimal because the entire field-
of-view (FOV) was projected on the screen in front of the subject. Swimming was
further reduced because the tracking system we used (Motion Analysis, Santa Rosa,
CA) produced very short latencies (approximately 10-20 msec) resulting in an image
update that is very close to the physiological latency of the vestibulo-ocular reflex
during natural head motion [18]. Negative characteristics of the projection-based
system are that it requires a much larger physical space than an HMD and that it forces
the subjects to be confined to an area near the screens in order to see the image. Also,
images are not as bright as in HMDs. However, our decision criteria led us to use a
CAVE system rather than an HMD.
We originally started with a one wall CAVE, i.e., a single projection screen in
front of the subject [19]. Although the 100° FOV was adequate for our experiments, we
expected that a wider FOV would elicit a stronger sense of motion in subjects [20]. In
fact, we have found that narrowing the FOV so that the peripheral field is not
stimulated actually produces greater delays in response to postural disturbances [21].
We currently have a 3 screen passive stereo system, with walls in front and to the sides
of the subject, which permits peripheral as well as central visual field motion (Figure
1). Two projectors are located behind each screen. Each pair of projectors has
circularly polarized filters of opposite directions placed in front of them, and each
projects a full-color workstation field [1280h x 1024v] at 60 Hz onto the screen.
Matching circularly polarized stereo glasses are worn by the subject to deliver the
appropriate left and right eye image to each eye allowing a 150° stereo FOV. The
correct perspective and stereo projections for the scene are computed using values for
the subject’s inter-pupillary distance (IPD) and the current orientation of the head
supplied by position markers attached to the subject’s head and scanned by the Motion
Analysis infrared camera system. Consequently, virtual objects retain their true
perspective and position in space regardless of the subjects’ movement. The visual
experience is that of being immersed in a realistic scene with textural content and optic
flow. To produce the physical motion disturbances necessary to elicit postural
reactions, we incorporated a moving base of support with two integrated force plates
(NeuroCom International Inc, Clackamas OR) into the environment (Figure 1). In
many posture laboratories and with the popular clinical tools for diagnosis and training
of postural reactions (e.g., the Equitest and Balance Master), the visual axis of rotation
is placed at the ankle and the multi-segmental body is assumed or even constrained to
function as an inverted pendulum [6, 20, 22, 25]. In our laboratory, the visual axis is
referenced to the head as occurs during natural movement, and it is assumed that the
control of posture is a multi-segmental process.
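As an illustration, the head-referenced stereo computation described above can be sketched in a few lines. The function name, the convention that the first column of the head rotation matrix is the inter-aural axis, and the 65 mm default IPD are assumptions of this sketch, not details of the laboratory's actual software:

```python
import numpy as np

def eye_positions(head_pos, head_rot, ipd=0.065):
    """Left/right eye positions in world coordinates for stereo rendering.

    head_pos : (3,) head position from the motion tracker, meters.
    head_rot : (3, 3) head orientation; column 0 is taken here as the
               inter-aural (lateral) axis.
    ipd      : inter-pupillary distance in meters; ipd=0 collapses both
               eyes onto the head point, giving the dioptic condition.
    """
    offset = 0.5 * ipd * head_rot[:, 0]
    return head_pos - offset, head_pos + offset

# Each frame the scene is rendered twice, once per eye position, so that
# virtual objects keep their true perspective as the head moves.
left, right = eye_positions(np.array([0.0, 1.7, 0.0]), np.eye(3))
```

Setting `ipd=0` reproduces the dioptic viewing condition used as a control in the stereo experiments described below.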
1.4. Is Stereo Vision Necessary?

We have explored whether having stereovision in the VE produced a more compelling
visual experience than just viewing a flat wall or picture. Stereopsis is an effective cue
up to about 30 m, which encompasses many objects in our scene [26]. We predicted that
stereovision was necessary to produce a sense of immersion in the VE, and that this
Figure 1. The Virtual Environment and Postural Orientation Laboratory currently at Temple University is a
three-wall virtual environment. Each wall measures 2.4 m x 1.7 m. The visual experience is that of being
immersed in a realistic scene with textural content and optic flow. Built into the floor is a 3 degree of
freedom posture platform (NeuroCom Inc., Clackamas, OR) with two integrated force plates (AMTI,
Watertown, MA) on which sit reflective markers from the Motion Analysis (Santa Rosa, CA) infrared
camera system.
perceptual engagement would be reflected in the postural response metrics. For these
experiments we produced postural instability by having young adults stand on the force
plate with a full (100%) base of support (BOS), or on a rod offering 45% of their BOS
(calculated as a percentage of their foot length), or a rod offering 35% BOS [21, 27].
Subjects viewed the wide FOV visual scene moving fore-aft at 0.1 Hz with either
stereo (IPD ≠ 0) or dioptic (IPD = 0) images. Response power at the frequency of the
scene increased significantly (p < 0.05) with the 35% BOS (Figure 2), suggesting some
critical mechanical limit at which subjects could no longer rely on the inputs provided
by the BOS and, thus, switched to a reliance on vision. There was also an interaction
between BOS and stereovision revealing that when subjects were more reliant on the
visual inputs, stereovision had a greater effect on their motion. Thus, in an unstable
environment, visual feedback and, in particular, stereovision became more influential
on the metrics of the postural response. As a result we chose to retain the stereo
component in our 3-wall environment.
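The dependent measure in this experiment, response power at the frequency of the visual scene, can be sketched as follows. The sampling rate, the analysis bandwidth, and the synthetic sway trace are illustrative assumptions rather than the study's parameters:

```python
import numpy as np

def power_at(signal, fs, f0, bw=0.02):
    """Spectral power of a sway trace within bw Hz of frequency f0."""
    sig = np.asarray(signal) - np.mean(signal)
    spec = np.abs(np.fft.rfft(sig)) ** 2 / len(sig)   # periodogram
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    band = (freqs >= f0 - bw) & (freqs <= f0 + bw)
    return spec[band].sum()

# Synthetic sway entrained to a 0.1 Hz scene: power at the scene
# frequency should dominate power at an unrelated frequency (0.4 Hz).
fs = 20.0
t = np.arange(0, 200, 1 / fs)
rng = np.random.default_rng(1)
sway = 0.5 * np.sin(2 * np.pi * 0.1 * t) + 0.05 * rng.standard_normal(t.size)
```

Normalizing each subject's band power to his or her largest response, as in Figure 2, then permits comparison across subjects.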
Figure 2. Power of head, trunk, and shank center of mass for four subjects is normalized to the largest
response of each subject during 0.1 Hz motion of a visual scene with dioptic (2D) and stereo (3D) images
while on a full (100%) and reduced (35%) base of support.
2. The Effects of Complex Visual Scenes on Posture Control

When the world is moving, we have to determine whether it is the environment or
ourselves that is moving in order to recognize our orientation in space. To do this, we
must use the sensory information linked to the context of the movement and determine
whether there is a mismatch between the visual world motion and our vestibular and
somatosensory afference. If we believe that the environment around us is stationary, it
is relatively easy to identify our physical motion. However, when the world is also
moving, we need to shape our reactions to accurately match the demands of the
environment. The ability to orient ourselves in space is a multisensory process [28, 31],
and the impairment of any one of the relevant pathways (i.e., proprioceptive, vestibular,
and visual) will impact postural stability.
Whole body sway responses of subjects exposed to visual rotation stimuli in our
environment were qualitatively similar to those observed and published in the literature
that was available prior to our initial experiments [20, 22, 23, 32]. The novelty in our
approach was to explore how each body segment acted to maintain posture during
visual disturbances rather than looking at postural sway as a single output variable. We
chose to examine the body segments individually because of previous studies
suggesting differential control mechanisms in the upper and lower body [24, 25]. In our
first study with a VE [33], subjects stood in quiet stance while observing either random
dots or a realistic visual scene that moved sinusoidally or at constant velocity about the
pitch or roll axes (Figure 3). Segmental displacements, Fast Fourier Transforms (FFT),
Figure 3. (Left) A subject standing within a field of random dots projected in the VE. The subject is tethered
to three flock-of-birds sensors that are recording 6 axes of motion of the head, trunk, and lower limb. (Right)
Graphs of two subjects (A and B) showing the relationship of the head, trunk, and left ankle during
locomotion. The two gait patterns produced by the subjects walking from the rear of the CAVE (bottom of
the y-axis) to the front wall (top of the y-axis) are shown. (A) The subject takes one step forward and then
walks in the direction of the counterclockwise scene by crossing one limb over the other. (B) The subject
crouches down and stamps his feet to progress forward in the CAVE.
and root mean square (RMS) values were calculated for the head, trunk, and lower
limb. We found that with scene motion in either the pitch or roll planes, subjects
exhibited greater magnitudes of motion in the head and trunk than at the lower limb.
Additionally, the frequency or velocity content of the head and trunk motion was
equivalent to that of the visual input, but this was not the case in the lower limb.
Smaller amplitudes and frequent phase reversals observed at the ankle suggested that
control at the ankle was directed toward keeping the body over the base of support (the
foot) rather than responding to changes in the visual environment. These results
suggested to us that the lower limb postural controller was setting a limit of motion for
postural stabilization while posture of the head and trunk may have been governed by a
perception of the visual vertical driven by the visual scene.
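A minimal sketch of this segment-by-segment analysis (RMS amplitude plus the dominant frequency of each segment's motion) is given below; the 20 Hz sampling rate and the synthetic head and ankle traces are assumptions for illustration only:

```python
import numpy as np

def segment_metrics(angle, fs):
    """Return (RMS amplitude, dominant frequency) of one segment's motion."""
    a = np.asarray(angle) - np.mean(angle)
    rms = np.sqrt(np.mean(a ** 2))
    spec = np.abs(np.fft.rfft(a))
    freqs = np.fft.rfftfreq(len(a), d=1.0 / fs)
    return rms, freqs[spec.argmax()]

# Illustrative traces: a head entrained to a 0.1 Hz scene and an ankle
# producing small, higher-frequency corrections over the base of support.
fs = 20.0
t = np.arange(0, 100, 1 / fs)
head = 2.0 * np.sin(2 * np.pi * 0.1 * t)
ankle = 0.3 * np.sin(2 * np.pi * 0.8 * t)
```

In this toy case the head shows a large response at the scene frequency while the ankle shows a small response at a different frequency, the pattern reported above.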
When our subjects were asked to walk while the visual environment rolled
counterclockwise, all of the subjects compensated for the visual field motion by
exhibiting one of two locomotion strategies. Some subjects exhibited a normal step
length, taking only two or three steps to cover the seven-foot distance, as would be
expected for normal gait. However, a lateral shift took place so that they walked
sideways in the direction of the rolling scene (Figure 3A). In each case, the subject’s
first step was straight ahead and the second step was to the left regardless of which foot
was placed first. For example, one subject who made the first step with the left foot
then made the second step by crossing the right leg over the left leg when responding to
the visual stimulus [in order to move to the left]. When queried about the amount of
translation produced during the walking trials, subjects responded that they recognized
they were moving off center. In fact, these subjects were three feet to the left of center
at the end of their trial but were unable to counteract the destabilizing force.
The other subjects walked with short, vertically projected stamping steps, taking
approximately seven or eight steps in the seven feet traveled (Figure 3B). These
subjects exhibited an increased frequency of medial-lateral sway of the head and trunk
as though they were rocking over each foot as they stepped forward. These subjects
reported that they were only focused on “not falling over”. Shortened step lengths and
increased flexion at the knee and ankle implied that these subjects were exerting
cognitive decisions about their locomotion that focused on increasing their awareness
of both the somatosensory feedback and their motor output. This locomotion pattern
was reminiscent of the gait observed in elderly fallers [34] or subjects who have been
walking with reversing prisms [35].
From these results we concluded that subjects could only counteract the effects of
the destabilizing visual stimulus by altering their normal locomotion pattern in
response to a correspondingly altered perception of vertical. Interestingly, the content of the
visual scene did not determine response strategy selection (subjects receiving the
random dot pattern also exhibited the different strategies), thus this paradigm can be
used in laboratories with less advanced technologies than those reported here.
3. Effects of Perception on Posture: Altering the Perception of Vertical

As would be expected when perception is involved, we have found that the effect of
visual motion on posture is not equal across all motion axes and that segmental
responses can vary across subjects. At the velocities that we have used, postural
responses were greatest to roll motion of the visual field, followed closely by pitch,
then by anterior-posterior (A-P) motions of the visual scene (Figure 4). We might
explain these differences by our experiences with visual feedback in the physical
environment. Modulation of segmental responses in the pitch and A-P planes occurs
principally in the sagittal plane as we navigate through the environment. Roll motion of
the visual environment is less commonly experienced, however, and therefore might
elicit an intensified reaction to the perception of visual motion [33].
When an individual is standing quietly and only the visual scene is moving in any
of the planes, the performer experiences a conflict between the visual perception of
motion and the vestibular and somatosensory systems signaling an absence of physical
motion. A decision then needs to be made about whether the visual motion signal was
due to self motion or motion of the environment. Mutability of this decision process
may be responsible for the variability observed across subjects. In our experiments
[27], the roll scene was rotated in a counterclockwise (CCW) direction about the line of
sight; the pitch scene rotated from lower to upper visual fields about an axis passing
through the subject’s ears. Quietly standing subjects tended to drift in the direction of
the visual motion while producing small corrective oscillations of each segment (Figure
4A). In general, subjects would follow the constant velocity stimulus for some interval
of time and then suddenly move in the opposite direction (exhibited as a sudden
downward drop in the data), followed by a steep return in the direction of the visual
scene motion as though correcting for the visual drift. The peak of the segmental
response was delayed in most subjects and initiation of the response to the visual scene
took about 20 sec to occur. These delays and response reversals reflected fluctuations
in the strength of the immersion in the VE. In both the roll and pitch planes, subjects
tended to respond more with the head and trunk than with the ankle.
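The drift-and-correction pattern described above can be quantified by marking direction reversals in a sway trace. The moving-average smoothing window and the synthetic drift trace below are illustrative assumptions, not the analysis used in the study:

```python
import numpy as np

def reversal_times(position, fs, smooth_s=1.0):
    """Times (s) at which a drifting sway trace reverses direction.

    The trace is smoothed with a moving average of smooth_s seconds and
    reversals are read off as sign changes of the smoothed velocity.
    """
    pos = np.asarray(position, dtype=float)
    n = max(1, int(smooth_s * fs))
    filtered = np.convolve(pos, np.ones(n) / n, mode="same")
    vel = np.diff(filtered)
    signs = np.sign(vel)
    nz = np.flatnonzero(signs)               # ignore exactly-zero velocities
    flips = nz[1:][np.diff(signs[nz]) != 0]  # samples where the sign changes
    return flips / fs

# A trace that follows the scene for about 5 s and then suddenly corrects:
fs = 20.0
drift = np.concatenate([np.linspace(0, 1, 100), np.linspace(1, 0, 100)])
```

Applied to each segment, the timing of such reversals gives one objective marker of the waxing and waning of immersion noted above.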
Figure 4. Amplitudes of head, trunk, and ankle to pitch, roll, and A-P motion of the VE. For pitch and roll,
both constant velocity at 5°/s (A) and sinusoidal motion of the VE at 0.1Hz (B) and 0.5Hz (C) were used. (A)
Vertical dashed lines indicate the start and termination of constant velocity visual scene motion. (B and C)
Sinusoidal motion of the visual scene is illustrated by the light grey lines in each plot. (D) Sinusoidal motion
of the visual scene at 0.1 Hz is shown in the bottom trace. Time scale shows responses from early and late
portions of the experiment. In all A-P plots, upward peaks represent anterior motion relative to the room;
downward peaks represent posterior motion relative to the room.
With 0.1 Hz sinusoidal roll of the visual scene (Figure 4B), although the
magnitude of motion was greater in the head and trunk than in the ankle, all segments
had similar phases and oscillatory frequencies suggesting that subjects were responding
as a simple pendulum limited only by the constraints of the base of support. With 0.1
Hz sinusoidal pitch of the visual scene, the subject shown in Figure 4B attempted to
maintain a sinusoidal relation with the stimulus with similar magnitudes at all
segments. Segmental responses were more synchronized to a visual scene with a
frequency of 0.1 Hz than 0.5 Hz (Figure 4C). Interestingly, that same frequency (0.1
Hz) with a visual scene moving in A-P (Figure 4D) produced a much more subtle
response of the body segments with lower amplitudes [35]. Differences seen between
the responses of the two subjects presented in Figure 4D are indicative of the variable
Figure 5. Orientation of the hand held wand, the head, and the center of pressure (COP) while viewing
counterclockwise (CCW) roll motion of the visual scene (bold line) and a stationary visual scene (broken
line) in three subjects demonstrates a fluctuating response (top row), a bi-directional response (middle row),
and a constant response (bottom row) that is consistent across all three variables. Dashed vertical lines mark
the start and end of the scene motion.
response to the visual motion, which is not unexpected if the response is a reflection of
each individual’s perception of their own movement and that of the environment.
The waxing and waning of our subjects’ responses were reminiscent of reports in
the literature regarding subjects’ perceptions of orientation during scene rotation [20,
36, 38]. Consequently, we wanted to determine whether changes in orientation of the
head and trunk when exposed to a rotating scene correlated with spatial and temporal
characteristics of the perception of self-motion. We recorded head position in space,
center of pressure responses, and perception of self-motion through the orientation of a
hand-held wand during constant velocity rotations of the visual scene about the roll
axis [39]. Although no consistent response pattern emerged across the healthy subjects,
there was a clear relationship between the perception of vertical, the position of the
head in space, and postural sway within each subject (Figure 5). This observed
relationship between spatial perception and postural orientation suggests that spatial
representation during motion in the environment is modified by both ascending and
descending controls. We inferred from these data that postural behaviors generated by
the perception of self-motion are the result of cortical interactions between visual and
vestibular signals as well as input from other somatosensory signals. This probable
real-time monitoring of spatial orientation has implications for rehabilitation
interventions. For example, the recovery of balance following a slip or trip may rely
greatly on the ability to match continuously changing sensory feedback to an initial
model of vertical that could be highly dependent on the visual environment and the
mechanical arrangement during that particular task. Also, we cannot assume that a
patient, particularly one with a sensory deficit who appears to be vertically oriented at
the initiation of motion, will be able to sustain knowledge of that orientation as the task
progresses [3].
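One simple way to quantify such a within-subject relationship is a lagged correlation between two of the recorded traces, for example wand orientation (perceived vertical) and head position. The function names and the synthetic delayed signal below are assumptions for illustration:

```python
import numpy as np

def corr_at_lag(a, b, lag):
    """Pearson correlation of a and b with b advanced by lag samples."""
    if lag > 0:
        a_seg, b_seg = a[lag:], b[:-lag]
    elif lag < 0:
        a_seg, b_seg = a[:lag], b[-lag:]
    else:
        a_seg, b_seg = a, b
    return np.corrcoef(a_seg, b_seg)[0, 1]

def best_lag(a, b, fs, max_lag_s=5.0):
    """Lag (s) maximizing the correlation between two traces, and that correlation."""
    n = int(max_lag_s * fs)
    lags = np.arange(-n, n + 1)
    corrs = np.array([corr_at_lag(a, b, int(l)) for l in lags])
    k = int(np.argmax(corrs))
    return lags[k] / fs, corrs[k]

# A wand trace that tracks the head oscillation with a 1 s delay:
fs = 20.0
t = np.arange(0, 100, 1 / fs)
head_trace = np.sin(2 * np.pi * 0.1 * t)
wand_trace = np.roll(head_trace, 20)
```

A high peak correlation at a consistent lag within a subject, despite variable patterns across subjects, is the kind of relationship shown in Figure 5.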
4. Things Don’t Add Up: Visual–Vestibular Conflict

If the perception of physical motion and orientation in space is derived from the
convergence of vestibular, proprioceptive, and visual signals, then a mismatch between
these signals would produce a conflict that needs to be resolved by the CNS (Figure 6).
Examples of such a conflict occur in nature when watching a moving train and sensing
that it is yourself who is moving, or standing in a tilting room and being unable to
distinguish between visual field motion and self-motion [20, 40]. This phenomenon of
illusory self-motion (vection) suggests that the CNS is not always capable of
suppressing inappropriate inputs.
To explore how the postural system weights coincident yet discordant disturbances
of the visual and proprioceptive/vestibular systems, we chose to depart from roll
motion of the visual scene which is less relevant to the environmental experience. In
this study, the visual scene moved in the sagittal plane as did the individual’s physical
motion [41, 43]. We examined the postural responses of healthy young adults (25–38
yrs), elderly (60-78 yrs), and labyrinthine deficient subjects (59-86 yrs) during fore-aft
translations (0.1 Hz, ± 3.7 m/sec) of an immersive, wide FOV visual environment, or
anterior-posterior translations (0.25 Hz, ± 15 cm/sec) of the support surface, or both
concurrently. Kinematics of the head, trunk, and shank were collected with an infrared
camera motion analysis system, and angular motion of each segment was plotted across
time. When only the support surface was translated, segmental responses were small
(1°–2°) and mostly opposed the direction of platform translation. When only the visual
scene was moving, segmental responses were initially small and increased as the trial
progressed. When the inputs were presented at the same time, however, response
amplitudes were large even at the onset of the trial. Mean RMS values across subjects
were significantly greater with combined stimuli than for either stimulus presented
alone, and areas under the power curve across subjects were significantly increased at
the frequency of the visual input when both inputs were presented (Figure 7, top).
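The two measures used here, RMS amplitude of a segmental angle trace and spectral power at each stimulus frequency, can be sketched in a few lines. The fragment below is illustrative only: the sampling rate, trial length, response amplitudes, and noise level are invented for the example, not taken from the study.

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a detrended angle trace."""
    x = x - np.mean(x)
    return np.sqrt(np.mean(x ** 2))

def power_at(x, fs, f0, band=0.02):
    """Spectral power of trace x (sampled at fs Hz), summed in a
    narrow band around the stimulus frequency f0 (Hz)."""
    x = x - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[np.abs(freqs - f0) <= band].sum()

# Synthetic trunk-angle trace with responses at both the platform
# (0.25 Hz) and visual-scene (0.1 Hz) frequencies, plus sensor noise.
np.random.seed(1)
fs, dur = 60.0, 200.0                      # sample rate (Hz), trial length (s)
t = np.arange(0, dur, 1.0 / fs)
trunk = (1.5 * np.sin(2 * np.pi * 0.25 * t)
         + 0.8 * np.sin(2 * np.pi * 0.10 * t)
         + 0.1 * np.random.randn(t.size))

print(rms(trunk))
print(power_at(trunk, fs, 0.25), power_at(trunk, fs, 0.10))
```

A combined-stimulus response like the one described above would show substantial power at both 0.25 Hz and 0.1 Hz, whereas a linear summation account predicts each frequency only when its stimulus is present.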

Figure 6. Schematic illustration of the vection phenomenon. Gravitational and visual signals stimulate the
otoliths and the visual system, respectively, which, when combined, produce the perception of tilt. Thus, as
seen on the right, when the visual scene is rotating counterclockwise there is a mismatch with the vertically
directed otolith vector. The CNS determines that it doesn’t make sense for the world to be moving, thereby
resolving this conflict with a perception of tilt. The response is to correct for the perceived tilt (in the direction
opposite that of the visual world) by tilting the body in the same direction as the motion of the visual world.
When discordant signals were simultaneously presented, even patients with
labyrinthine deficit who claimed that they were ignoring the visual inputs exhibited
increased complexity in the frequency spectra of their responses. These increases were
not a simple linear summation of the responses to each input (Figure 7, bottom). Thus,
inter-modality dependencies were observed, and we must conclude that the CNS
does not simply sum the effects of each sensory pathway but rather attempts to
accommodate the multiple demands presented by conflicting sensory signals.
Our results have significant bearing on studies of motor control and, ultimately, on
the design of rehabilitation interventions. In the past, postural responses have
principally been examined by isolating individual control pathways in order to
determine their specific contribution. However, if these pathways are responsive to
functionally relevant contexts, then their response may well be different when the CNS
is receiving simultaneous inputs from multiple pathways, especially when the
confluence of signals produces non-linear behaviors.

Figure 7. (Top) Power of the relative angles between head, trunk, shank and the moving platform (sled) over
the period of the trial at the relevant frequencies of platform motion (0.25 Hz) and visual scene motion (0.1
Hz) are shown for each protocol for one young adult, one elderly adult, and one labyrinthine deficient adult.
The power at each segment is portrayed as the percentage of the maximum response power (observed in the
trunk) across segments for that subject. (Bottom) Mean area under the power curve ± standard error of the
mean across all young adult subjects at the relevant frequency for platform (sled) motion only (0.25 Hz),
visual scene motion only (0.1 Hz), and both frequencies of combined platform and visual scene motion
(both). Segmental responses significantly increased (*) at 0.1 Hz when platform and scene motion were
combined.
Furthermore, we believe it unlikely that the role of any single pathway contributing
to postural control can be accurately characterized in a static environment if the
function of that pathway is context dependent. We conclude from these data that a
healthy postural system does not selectively switch between inputs but continuously
monitors all environmental signals to update the frequency and magnitude
characteristics of a motor behavior.

5. Sensory Reweighting or Sensory Selection?

Our finding that combining conflicting inputs actually produces responses that
incorporate specific parameters from each input is surprising in light of the generally
accepted hypothesis of sensory weighting, which suggests that the signal most relevant
to the current task is more heavily weighted in the response.

Figure 8. Average head, whole body, and shank COM power for each of the three BOS conditions when the
augmented visual motion was imposed on a stereo virtual scene. Subjects viewed the motion with a narrow
(black line) and wide (dashed) FOV.
For example, when we are moving in the environment rather than standing quietly, we
might expect that feedback generated by our physical motion becomes more heavily
weighted and it should therefore be easier for the postural control system to
differentiate between our own motion and motion of the world. PET and MRI studies
have supported this hypothesis by demonstrating that when both retinal and vestibular
inputs are processed, there are changes in the medial parieto-occipital visual area and
parieto-insular vestibular cortex [44, 46] as well as cerebellar nodulus [47, 48] that
suggest a deactivation of the structures processing object-motion when there is a
perception of physical motion. But we have preliminary data [49] to suggest that
inappropriate visual field motion is not suppressed when it is not matched to actual
physical motion. Instead, during quiet stance, magnitude and power of segmental
motion increased as the velocities of sinusoidal anterior-posterior visual field motion
were increased, even to values much greater than those normally observed in postural
sway. In fact, head velocity in space was modulated by the scene velocity regardless of
the velocity of physical body motion.
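The sensory-weighting hypothesis can be made concrete with a toy fusion model (a deliberate simplification for illustration, not the model tested here; all signal parameters are invented): the CNS estimate of self-motion is a weighted sum of a visual channel, which confounds self-motion with scene motion, and a veridical proprioceptive/vestibular channel. Down-weighting vision should reduce the error induced by imposed scene motion; our data suggest this down-weighting does not always occur.

```python
import numpy as np

def fused_estimate(visual, proprio, w_visual):
    """Weighted-sum fusion of two sensory channels (weights sum to 1)."""
    return w_visual * visual + (1.0 - w_visual) * proprio

t = np.linspace(0, 10, 601)
self_motion = 0.5 * np.sin(2 * np.pi * 0.25 * t)   # true body sway
scene = np.sin(2 * np.pi * 0.1 * t)                # imposed scene motion
visual = self_motion + scene     # vision sees sway plus scene motion
proprio = self_motion            # veridical channel

for w in (0.8, 0.2):
    err = np.sqrt(np.mean((fused_estimate(visual, proprio, w) - self_motion) ** 2))
    print(f"w_visual={w}: RMS estimation error {err:.3f}")
```

In this toy model the estimation error scales directly with the visual weight, which is why a postural response that remains locked to scene-motion frequencies implies that vision was not down-weighted.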

Figure 9. Average head, whole body, and shank COM power for the 100% (dashed line) and 45% (black
line) BOS conditions in subjects who were able to maintain balance on the reduced BOS (typical subjects)
and those who needed to take a step (steppers) while viewing the scene in stereo.
We have further explored relative sensory weighting of visual field motion on
postural responses in the paradigm reported previously when the BOS and the FOV
were gradually narrowed [21]. The immersive virtual environment was either moved
realistically with head motion (natural motion) or translated sinusoidally at 0.1 Hz in
the fore-aft direction (augmented motion). Subjects viewed the visual motion under
wide (90° and 55° in the horizontal and vertical directions) and narrow (25° in both
directions) FOV conditions while standing flatfooted (100% BOS) and on two blocks
(45% and 35% BOS). Furthermore, the augmented motion was presented in stereo and
in non-stereo. Head and whole body COM and ankle angle RMS were calculated, and
FFTs were performed on the head, whole body, and shank COM.
When combined with a 35% BOS, natural motion of the visual scene with a wide
FOV produced significantly reduced COM RMS values compared to a narrow FOV.
Viewing the augmented stereo visual condition produced a significant reduction in
whole body COM for the 45% BOS compared to the 35% BOS. Whole body COM
RMS was also significantly greater when standing on the 45% BOS compared to the
100% BOS when viewing an augmented stimulus in a non-stereo scene. The primary
effect of augmented motion emerged in both the head and whole body COM which
exhibited significantly increased power at the frequency of the visual field motion with
a wide FOV and a narrowed BOS (Figure 8). Shank COM power was greater for the
wide FOV compared to the narrow FOV regardless of the size of the BOS. We
concluded that by narrowing the BOS, the CNS was forced to increase its reliance on
peripheral visual information in order to stabilize the head in space even though the
augmented visual motion was promoting postural instability. Thus the presence of a
destabilizing visual stimulus in a wide FOV was not down-weighted and still exerted a
strong impact on postural control when postural stability was compromised.
One of the most interesting results to emerge from these data was the finding that a
subset of the subjects could not maintain continuous stance on the smallest BOS when
the virtual environment was in motion and they needed to take a step to stabilize
themselves [21, 27]. We found that when viewing augmented motion with a wide FOV,
there was a marked effect on their head and whole body COM and ankle angle RMS
values. FFT analyses revealed greater power at the frequency of the visual stimulus in
the steppers compared to the non-steppers (Figure 9). With a narrow FOV, whole body
COM time lags relative to the augmented visual scene also appeared, and the time-
delay between the scene and the COM was significantly increased in the steppers. This
increased responsiveness to visual field motion indicates a greater visual field-
dependency of the steppers and implies that the thresholds for shifting from a reliance
on visual information to somatosensory information can differ within the healthy
population. Our results strongly point to a role of visual perception in the successful
organization of a postural response so that the weighting of the sensory inputs that
contribute to the postural response may well depend on the perceptual choice made by
each individual CNS [50].
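The time lag of the COM relative to the sinusoidal scene drive can be estimated from the phase of the cross-spectrum at the drive frequency. The sketch below uses synthetic signals: the 0.1 Hz drive matches the augmented-motion protocol above, but the sampling rate, trial length, and 0.5 s delay are invented for the example.

```python
import numpy as np

def time_lag(scene, com, fs, f0):
    """Delay (s) of the COM trace relative to the scene drive, from
    the cross-spectrum phase at the drive frequency f0 (Hz)."""
    freqs = np.fft.rfftfreq(len(scene), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))            # bin nearest the drive
    cross = np.fft.rfft(com)[k] * np.conj(np.fft.rfft(scene)[k])
    return -np.angle(cross) / (2.0 * np.pi * f0)  # positive = COM lags

fs, f0 = 60.0, 0.1                  # sample rate (Hz), drive frequency (Hz)
t = np.arange(0, 120, 1.0 / fs)     # 120 s = 12 full cycles of the drive
scene = np.sin(2 * np.pi * f0 * t)             # augmented visual motion
com = np.sin(2 * np.pi * f0 * (t - 0.5))       # COM trailing by 0.5 s

print(time_lag(scene, com, fs, f0))            # ≈ 0.5
```

Note that a phase-based estimate is only unambiguous for delays shorter than half the drive period (here ±5 s), which is ample for postural lags.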

6. Clinical Application of the VR Laboratory

A natural progression from our findings of the role of perceptual choice in the VE was
to explore whether we could find clinical signs in the response patterns of individuals
that complained of perceptual dysfunction (i.e., dizziness). We chose to study a group
of patients that were classified as visually sensitive. This diagnosis encompasses
individuals who complain of dizziness provoked by visual environments containing full field-of-view
repetitive or moving visual patterns [51]. Visual vertigo is present in some
patients with a history of a peripheral vestibular disorder, but there is also a subset of
patients who have no history of vestibular disorder and who test negative for vestibular
deficit on traditional clinical tests. We investigated whether the visual sensitivity
described by these individuals could be quantified by the magnitude of the postural
response to an upward pitch of the VE combined with dorsiflexion tilt of
the support surface [52].
We found that the healthy subjects exhibited incremental effects of visual field
velocity on the peak angular velocities of the head, but responses of the visually
sensitive subjects were not linearly modulated by visual field velocity (Figure 10).
Patients with no history of vestibular disorder demonstrated exceedingly large head
velocities whereas patients with a history of vestibular disorder exhibited head
velocities that fell within the bandwidth of healthy subjects. Thus, our results clearly
indicated that the relation between postural kinematics and visual inputs could quantify
the presence of a perceptual disorder. From this we concluded that virtual reality
technology could be useful for differential diagnosis and specifically designed
interventions for individuals whose chief complaint was sensitivity to visual motion.
We have also started to explore whether the VE could be used to measure
improvements following balance retraining. We have tested one patient with bilateral
vestibular deficit and another with benign paroxysmal positional vertigo (BPPV) following a training paradigm that focused on
somatosensory feedback. To test whether balance was improved following treatment,
we placed them in a VE that moved in counterclockwise roll while they were standing
on a platform that was sway-referenced to the motion of their center of mass. At the
same time, they were instructed to point to a target that moved laterally in their visual
field. Although these are preliminary results, we have been able to demonstrate that
visual field motion is less destabilizing following the balance training program than
prior to the training period (Figure 11) which suggests that VR technology holds
particular promise as a clinical evaluation tool.

Figure 10. RMS of head velocity across a 1 sec period following a 30 deg/sec dorsiflexion tilt of the base of
support while the scene was dark, matched to the head motion (0 deg/sec), matched to the velocity of the
base of support (30 deg/sec), or moving at velocities greater than the base of support (45 and 60 deg/sec) in a
healthy young adult (white bar), a subject with a history of vestibular dysfunction (grey bar), and a subject
with visual sensitivity but no history of vestibular dysfunction (black bar).
Figure 11. Center of pressure responses of a BPPV subject before (top traces) and following (bottom traces)
balance training. The subject stood on an unstable support surface while in the dark, viewing a scene matched
to her head motion (still), viewing a scene moving counterclockwise (roll), and while pointing to a target in
the rolling scene (pointing). N.B. the subject was unable to accomplish the pointing task prior to the balance
training.

7. Musings on the Future of the Virtual Environment/Postural Orientation Laboratory

We have based the development of our laboratory on many years of research in
psychology and perceptual psychophysics. The understanding of how perception and
action are linked, and the role of sensation in the production of complex behaviors, has
guided the direction of the laboratory and the design of our experiments. One of the
most compelling pieces of evidence for why we need to place a great informational
load on the CNS to achieve our goal of exploring decision making processes in postural
control comes from studies with pilots. There is a large literature base that has explored
changes that take place when pilots are exposed to increasingly difficult tasks (e.g.,
landing in calm air vs. during a turbulent storm) [53]. Evaluations of pilot performance
during these variable conditions often produced no differences in measures of
performance. Clearly something must have been changing as a result of the different
conditions, but this change was not reflected in the pilots’ motor output. What
researchers have found is that the pilot workload will change dramatically in each case
in order to maintain a consistent level of performance (e.g., a smooth landing). It is only
when conditions stress the pilot to the point where an increase in workload cannot be
tolerated that we see a decrease in performance and the influence of other
environmental factors. Similarly, in order to expose properties of the systems involved
in maintaining a stable posture, we needed to stress the spatial orientation system to the
point where the performance of the subject changes.
In much of our data there is evidence of individual preferences for selecting and
weighting sensory information [54, 57]. Subtle differences in postural control may,
therefore, go unnoticed as there can be multiple combinations of sensory information
and joint coordination patterns that can yield similar postural outcomes. Only by taxing
the biomechanical limits of the system were we able to observe differences in how
these combinations impacted the subject’s ability to maintain balance. Thus, the
flexibility of the CNS to accommodate to a wide variety of task constraints presents a
particular challenge when attempting to evaluate postural disorders and to design an
intervention that will fully engage the adaptive properties of the system. We believe
that our laboratory presents such an environment and that we have the potential to use
the VE both as a diagnostic and treatment tool for patients with poorly diagnosed
balance disorders. The potential of our laboratory as a rehabilitation tool is promising
given our finding that within our VE we could distinguish the postural responses of
patients with visual sensitivity, who present with oscillopsia but have no hard clinical
signs, from a healthy population [52]. We have also had some initial success in using
the VE to test the carryover of a postural training paradigm in patients with vestibular
deficit.
Future directions for our laboratory, and for virtual technology to be considered
seriously as a rehabilitative tool, must include studies to determine how immersive the
VE, and how strong its stimuli, must be to produce changes in the perception of
vertical and spatial orientation. Does the VE need to project a stereo image and how
wide must the field of view be? Can we identify how to make more economical
systems for treatment and diagnosis of postural disorders? Finally, we must ask how to
make these systems user friendly (and safe) either for the clinic or for home based use.

Acknowledgements

The research reported here was supported by National Institutes of Health (NIH) grants
DC01125 and DC05235 from the National Institute on Deafness and Communication
Disorders and grants AG16359 and AG26470 from the National Institute on Aging.
The virtual reality research, collaborations, and outreach programs at the Electronic
Visualization Laboratory (EVL) at the University of Illinois at Chicago are made
possible by major funding from the National Science Foundation (NSF), awards
EIA-9802090, EIA-9871058, ANI-9980480, and ANI-9730202, as well as the NSF
Partnerships for Advanced Computational Infrastructure (PACI) cooperative agreement
ACI-9619019 to the National Computational Science Alliance. The authors thank
VRCO Inc. for the use of their CAVElib and Trackd software and our colleagues J.
Streepy, K. Dokka, and J. Langston for their collaboration on some of this work.

References

[1] L.R. Young, Vestibular reactions to spaceflight: human factors issues, Aviation, Space, and
Environmental Medicine 71 (2000), A100-4.
[2] F.O. Black, C. Wall, and L.M. Nashner, Effects of visual and support surface orientation references
upon postural control in vestibular deficient subjects, Acta Otolaryngology 95 (1983), 199-201.
[3] E.A. Keshner, J.H. Allum, and C.R. Pfaltz, Postural coactivation and adaptation in the sway stabilizing
responses of normals and patients with bilateral vestibular deficit, Experimental Brain Research 69
(1987), 77-92.
[4] F.B. Horak, and L.M. Nashner, Central programming of postural movements: adaptation to altered
support-surface configurations, Journal of Neurophysiology 55 (1986), 1369-81.
[5] K.S. Oie, T. Kiemel, and J.J. Jeka, Multisensory fusion: simultaneous re-weighting of vision and touch
for the control of human posture, Cognitive Brain Research 14 (2002), 164-76.
[6] R.J. Peterka, Sensorimotor integration in human postural control, Journal of Neurophysiology 88
(2002), 1097-118.
[7] G.M. Stratton, Vision without inversion of the retinal image, Psychological Review 4 (1897), 463-
81.
[8] C.M. Oman, O.L. Bock, and J.K. Huang, Visually induced self-motion sensation adapts rapidly to left-
right visual reversal, Science 209 (1980), 706-8.
[9] J. Dichgans, and T. Brandt, Visual-vestibular interaction and motion perception, Bibliotheca
Ophthalmologica 82 (1972), 327-38.
[10] L.R. Young, C.M. Oman, D.G. Watt, K.E. Money, B.K. Lichtenberg, R.V. Kenyon, and A.P. Arrott,
M.I.T./Canadian vestibular experiments on the Spacelab-1 mission: 1 Sensory adaptation to
weightlessness and readaptation to one-g: an overview, Experimental Brain Research 64 (1986), 291-
98.
[11] S. Weghorst, J. Prothero, T. Furness, D. Anson, and T. Riess, Virtual images in the treatment of
Parkinson's disease akinesia, in K. Morgan, R.M. Satvara, H.B. Sieburg, R. Matheus, and J.P.
Christensens (Eds), Medicine Meets Virtual Reality II 30, 1995, pp. 242-43.
[12] C.C. Ormsby, and L. Young, Perception of static orientation in a constant gravitoinertial environment,
Aviation, Space, and Environmental Medicine 47 (1976), 159-64.
[13] W. Sadowski, and K. Stanney, Presence in Virtual Environments, in Handbook of Virtual
Environments: Design, Implementation, and Applications, K.M. Stanney, Ed. London: Lawrence
Erlbaum Associates, Inc, 2002, pp. 791-806.
[14] M. Slater, Presence - the view from Marina del Rey, http://www.presence-thoughts.blogspot.com/,
2008.
[15] C.E. Lathan, M.R. Tracey, M.M. Sebrechts, D.M. Clawson, and G.A. Higgins, Using Virtual
Environments as Training Simulators: Measuring Transfer, in K.M. Stanney, Handbook of Virtual
Environments: Design, Implementation, and Applications, London, Lawrence Erlbaum Associates, Inc,
2002, pp. 403-14.
[16] A.E. Thurrell, and A.M. Bronstein, Vection increases the magnitude and accuracy of visually evoked
postural responses, Experimental Brain Research 147 (2002), 558-60.
[17] C. Cruz-Neira, D. Sandin, T. DeFanti, R. Kenyon, and J. Hart, The CAVE: audio visual experience
automatic virtual environment, Communications of the ACM 35 (1992), 64-72.
[18] S.T. Aw, M.J. Todd, and G.M. Halmagyi, Latency and initiation of the human vestibuloocular
reflex to pulsed galvanic stimulation, Journal of Neurophysiology 96 (2006), 925-30.
[19] E.A. Keshner, and R.V. Kenyon, Using immersive technology for postural research and rehabilitation,
Assistive Technology 16 (2004), 54-62.
[20] J. Dichgans, R. Held, L.R. Young, and T. Brandt, Moving visual scenes influence the apparent direction
of gravity, Science 178 (1972), 1217-19.
[21] J. Streepey, R.V. Kenyon, and E.A. Keshner, Visual motion combined with base of support width
reveals variable field dependency in healthy young adults, Experimental Brain Research 176 (2006),
182-87.
[22] T.M. Dijkstra, G. Schoner, and C.C. Gielen, Temporal stability of the action-perception cycle for
postural control in a moving visual environment, Experimental Brain Research 97 (1994), 477-86.
[23] F.H. Previc, The effects of dynamic visual stimulation on perception and motor control, Journal of
Vestibular Research 2 (1992), 285-95.
[24] E.A. Keshner, M.H. Woollacott, and B. Debu, Neck, trunk and limb muscle responses during postural
perturbations in humans, Experimental Brain Research 71 (1988), 455-66.
[25] J.J. Buchanan, and F.B. Horak, Emergence of postural patterns as a function of vision and translation
frequency, Journal of Neurophysiology 81 (1999), 2325-39.
[26] J. Cutting, and P.M. Vishton, Perceiving Layout and Knowing Distances: The Integration, Relative
Potency, and Contextual Use of Different Information About Depth, in Handbook of Perception and
Cognition: Perception of Space and Motion, 2nd ed: Academic Press, 1995, pp. 69-117.
[27] J. Streepey, R.V. Kenyon, and E.A. Keshner, Field of view and base of support width influence postural
responses to visual stimuli during quiet stance, Gait and Posture 25 (2006), 49-55.
[28] A.D. Kuo, R.A. Speers, R.J. Peterka, and F.B. Horak, Effect of altered sensory conditions on
multivariate descriptors of human postural sway, Experimental Brain Research 122 (1998), 185-95.
[29] T. Mergner, and S. Glasauer, A simple model of vestibular canal-otolith signal fusion, Annals of the
New York Academy of Sciences 871 (1999), 430-34.
[30] T. Mergner, C. Maurer, and R.J. Peterka, A multisensory posture control model of human upright
stance, Progress in Brain Research 142 (2003), 189-201.
[31] T. Mergner, and T. Rosemeier, Interaction of vestibular, somatosensory and visual signals for postural
control and motion perception under terrestrial and microgravity conditions-a conceptual model, Brain
Research Reviews 28 (1998), 118-35.
[32] F.H. Previc, R.V. Kenyon, E.R. Boer, and B.H. Johnson, The effects of background visual roll
stimulation on postural and manual control and self-motion perception, Perceptual Psychophysics 54
(1993), 93-107.
[33] E.A. Keshner, and R.V. Kenyon, The influence of an immersive virtual environment on the segmental
organization of postural stabilizing responses, Journal of Vestibular Research 10 (2000), 207-19.
[34] D.A. Winter, A.E. Patla, J.S. Frank, and S.E. Walt, Biomechanical walking pattern changes in the fit
and healthy elderly, Physical Therapy 70 (1990), 340-47.
[35] A. Gonshor, and G.M. Jones, Postural adaptation to prolonged optical reversal of vision in man, Brain
Research 192 (1980), 239-48.
[36] A. Thurrell, P. Bertholon, and A.M. Bronstein, Reorientation of a visually evoked postural response
during passive whole body rotation, Experimental Brain Research 133 (2000), 229-32.
[37] J. Dichgans, and T. Brandt, Visual-vestibular interaction: effects on self-motion perception and postural
control., in R. Held, H.W. Leibowitz, and H.L. Teuber (Eds), Handbook of sensory physiology, New
York, Springer, 1978, pp. 755-804.
[38] H. Fushiki, S. Takata, and Y. Watanabe, Influence of fixation on circular vection, Journal of Vestibular
Research 10 (2000), 151-55.
[39] E.A. Keshner, K. Dokka, and R.V. Kenyon, Influences of the perception of self-motion on Postural
parameters in a dynamic visual environment, Cyberpsychology and Behavior 9 (2006), 163-66.
[40] J.R. Lackner, and P. DiZio, Visual stimulation affects the perception of voluntary leg movements
during walking, Perception 17 (1988), 71-80.
[41] E.A. Keshner, R.V. Kenyon, and J. Langston, Postural responses exhibit multisensory dependencies
with discordant visual and support surface motion, Journal of Vestibular Research 14 (2004), 307-19.
[42] E.A. Keshner, R.V. Kenyon, and Y. Dhaher, Postural research and rehabilitation in an immersive
virtual environment, Conference Proceedings IEEE Engineering in Medicine & Biology Society 7
(2004), 4862-65.
[43] E.A. Keshner, R.V. Kenyon, Y.Y. Dhaher, and J.W. Streepey, Employing a virtual environment in
postural research and rehabilitation to reveal the impact of visual information, International Journal on
Disability and Human Development 4 (2005), 177-82.
[44] T. Brandt, P. Bartenstein, A. Janek, and M. Dieterich, Reciprocal inhibitory visual-vestibular
interaction. Visual motion stimulation deactivates the parieto-insular vestibular cortex, Brain 121 (9)
(1998), 1749-58.
[45] T. Brandt, S. Glasauer, T. Stephan, S. Bense, T.A. Yousry, A. Deutschlander, and M. Dieterich, Visual-
vestibular and visuovisual cortical interaction: new insights from fMRI and pet, Annals of the New York
Academy of Sciences 956 (2002), 230-41.
[46] M. Dieterich, and T. Brandt, Brain activation studies on visual-vestibular and ocular motor interaction,
Current Opinion in Neurology 13 (2000), 13-18.
[47] A. Kleinschmidt, K.V. Thilo, C. Buchel, M.A. Gresty, A.M. Bronstein, and R.S. Frackowiak, Neural
correlates of visual-motion perception as object- or self-motion, Neuroimage 16 (2002), 873-82.
[48] C. Xerri, L. Borel, J. Barthelemy, and M. Lacour, Synergistic interactions and functional working range
of the visual and vestibular systems in postural control: neuronal correlates, Progress in Brain Research
76 (1988), 193-203.
[49] K. Dokka, R. Kenyon, and E.A. Keshner, Influence of visual velocity on head stabilization, Society for
Neuroscience (2006).
[50] S. Lambrey, and A. Berthoz, Combination of conflicting visual and non-visual information for
estimating actively performed body turns in virtual reality, International Journal of Psychophysiology
50 (2003), 101-15.
[51] A.M. Bronstein, The visual vertigo syndrome, Acta Otolaryngol Suppl 520 (1) (1995), 45-8.
[52] E.A. Keshner, J. Streepey, Y. Dhaher, and T. Hain, Pairing virtual reality with dynamic posturography
serves to differentiate between patients experiencing visual vertigo, Journal of NeuroEngineering and
Rehabilitation 4 (2007), 24.
[53] D. Gopher, and E. Donchin, Workload - An Examination of the Concept, in K.R. Boff, L. Kaufman,
and J.P. Thomas (Eds), Handbook of perception and human performance, New York, Wiley, 1986, pp.
41-1 - 41-49.
[54] V.S. Gurfinkel, Yu.P. Ivanenko, Yu.S. Levik, and I.A. Babakova, Kinesthetic reference for human
orthograde posture, Neuroscience 68 (1995), 229-43.
[55] B. Isableu, T. Ohlmann, J. Cremieux, and B. Amblard, Selection of spatial frame of reference and
postural control variability, Experimental Brain Research 114 (1997), 584-89.
[56] B. Isableu, T. Ohlmann, J. Cremieux, and B. Amblard, Differential approach to strategies of segmental
stabilisation in postural control, Experimental Brain Research 150 (2003), 208-21.
[57] J. Kluzik, F. B. Horak, and R.J. Peterka, Differences in preferred reference frames for postural
orientation shown by after-effects of stance on an inclined surface, Experimental Brain Research 162
(2005), 474-89.
Telerehabilitation: Enabling the Remote
Delivery of Healthcare, Rehabilitation, and
Self Management
David M. BRENNAN a,1, Sue MAWSON b and Simon BROWNSELL c
a National Rehabilitation Hospital, Washington DC, USA
b Centre for Health and Social Care Research, Sheffield Hallam University, UK
c School of Health and Related Research, University of Sheffield, UK

Abstract. Telerehabilitation refers to the use of Information and Communication
Technologies (ICT) to provide rehabilitation services to people remotely in their
homes or other environments. By using ICT, client access to care can be improved
and the reach of clinicians can extend beyond the physical walls of a traditional
healthcare facility, thus expanding continuity of care to persons with disabling
conditions. The concept of telecare, when telerehabilitation is used to deliver
services to clients in their homes or other living environments, empowers and
enables individuals to take control of the management of their medical needs and
interventions by promoting personalized care, choice, and personal control. A wide
variety of assessment and treatment interventions can be delivered to clients using
remote monitoring systems, robotic and virtual reality technologies, and
synchronized collaboration with online material. This chapter will present a brief
history of telerehabilitation and telecare and offer an overview of the technology
used to provide remote rehabilitation services. Emphasis will be given to the
importance of human factors and user-centered design in the planning,
development, and implementation of telerehabilitation systems and programs. The
issue of self-care in rehabilitation and self-management will be discussed along
with the rationale for how telerehabilitation can be used to promote client self-care
and self-management. Two case studies of real-world telerehabilitation systems
will be given, with a focus on how they were planned and implemented so as to
maximize their potential benefits. The chapter will close with a discussion of
obstacles and challenges facing telerehabilitation and suggestions for ways to
promote its growth in use and acceptance.

Keywords. telerehabilitation, telemedicine, telecare, self-care, self-management

Introduction

Technology has revolutionized all aspects of medical rehabilitation. The use of robotics,
virtual reality, nanotechnologies, embedded sensors, neuro-imaging, and a host of other
technologies has enhanced health outcomes and allowed researchers to break new
ground and expand their knowledge of the processes of neurological and
musculoskeletal recovery. As this research knowledge and technology development is
translated to practice, those with disabilities have access to an ever-expanding range of
cutting-edge treatments.

1 Corresponding Author: National Rehabilitation Hospital, Center for Applied Biomechanics and
Rehabilitation Research, 102 Irving Street, NW, Washington, DC, 20010, USA; E-mail:
david.m.brennan@medstar.net.
However, it is not only the interventions themselves that benefit from advanced
technologies, but also the way in which the interventions are delivered. Physical
distance between a client and a clinic is no longer an insurmountable obstacle to care.
Information and Communication Technologies (ICT) can be used to extend the reach
of clinicians far beyond the physical walls of a healthcare facility to local clinics,
community health care facilities, or in some cases, directly to clients in their homes.
Beyond basic videoconferencing, a wide variety of assessment and treatment
interventions can be delivered to clients using remote monitoring systems, inertial
sensors, robotic and haptic devices, and synchronized collaboration with online
material.
Using ICT to deliver rehabilitation services improves access to care for people
who live in rural locales (or in areas without specialty-trained clinicians) and for clients
with mobility impairments who have difficulty travelling. It can also substantially
reduce practitioner travel time and therefore increase the number of client consultations
possible in a day. Finally, the use of ICT in delivering rehabilitation services can
greatly expand continuity of care to persons with disabling conditions and enable them
to take control of the management of their medical needs and therapy interventions by
promoting personalized care, choice, and personal autonomy.
Having stated the opportunities that ICT affords in rehabilitation, it is worth
providing a brief introduction to the related terminology. For the purposes of this
chapter, telemedicine is used, in a broad sense, to refer to the transfer or exchange of
medical and healthcare information using ICT. Telerehabilitation refers specifically to
the delivery of rehabilitation services via telemedicine methods and techniques.
Telecare refers to the specific instances where health or care services are provided to
people in their homes or other supervised living settings [1, 2]. It is hoped the reader
will appreciate that it is not so much the specific definition of each term that is essential,
rather the concepts behind the terms. The reader is also encouraged to keep in mind that
no matter what term is used, telerehabilitation and its related applications are not new
clinical disciplines or specialties, rather they represent a new way to deliver treatment
to the client and improve access to care, while perhaps offering opportunities to
enhance the rehabilitation process.
This chapter will present a brief history of telerehabilitation and telecare and
discuss how they fit within the broad scope of telemedicine. It will offer an overview of
the technology used to provide remote rehabilitation services as well as discuss the
importance of human factors and user-centered design in the planning, development,
and implementation of telerehabilitation systems and programs. The importance of self-
care in rehabilitation will be discussed along with the rationale for how telecare can be
used to promote client self-care. Two case studies of real-world telerehabilitation will
be given, with a focus on how they were planned and implemented so as to maximize
their potential benefits. The chapter will close with a discussion of obstacles and
challenges facing telerehabilitation and suggestions of ways to promote its growth.

1. Origins of Telemedicine

Some have suggested that telemedicine can trace its origins to the use of bonfires and
smoke signals, carrier pigeons, and horseback-riding letter carriers to transfer
information related to disease outbreaks, military casualty lists, and requests for
medical assistance [1]. These instances are obviously dwarfed by the scope and speed
of modern telemedicine applications, yet they demonstrate the rationale and potential
for health data to be transmitted from one location to another.
The formal history of telemedicine is directly tied to advancements in technology.
The original "no-tech" methods, described above, were gradually replaced, first by the
telephone and telegraph, and later by radio-transmission, closed-circuit television
signals, and satellite communication. The mid- to late-1990s brought about what some
consider to be the 'modern' era of telemedicine with the advent of digital
communications, growth of electronic medical records, and rapid proliferation of the
Internet – all of which continue to be significant driving forces for telemedicine [1, 2].
Figure 1 offers an approximate timeline illustrating when various ICTs were first used
for transmitting medical information.
Today, telemedicine has grown to include a broad research knowledge base, a
mounting body of evidence on efficacy and effectiveness, and a rising level of
acceptance among clinicians and clients [1]. Radiology, pathology, and other primarily
image-driven diagnostic specialties have strongly embraced telemedicine as a way to
deliver services faster, more efficiently, more accurately (for example, when advanced
image processing techniques or algorithms are applied), and to a greater number of
people [3]. Videoconferencing consults from larger specialty clinics to rural healthcare
providers are becoming increasingly commonplace around the globe, extending the
reach of clinicians and improving client care [2]. Advancements in ICT coupled with
the rapid development of software, sensors, robotics, digital medical records, and other
equipment have helped telemedicine develop into a key component in the evolution of
modern healthcare.

2. Origins of Telerehabilitation

Much as the earliest examples of telemedicine were opportunistic and often driven by
innovative clinicians making use of equipment that existed for another purpose [2],
telerehabilitation sprang from the application of existing telemedicine tools and

Figure 1. Timeline of ICT Telemedicine Utilization


techniques to individual rehabilitation disciplines. From the first demonstration projects,
the motivation for telerehabilitation has been a desire to improve the delivery of
rehabilitation services, enhance the continuum of care, and promote client involvement
and participation in treatment. These are essential components of the long-term
dynamic process of rehabilitation and are directly related to the functional outcomes
and level of recovery a client can achieve.
Early telerehabilitation efforts were structured mostly as pilot projects that were
small in sample size and proof-of-concept in nature, yet they demonstrated that some
rehabilitation assessment and treatment techniques could be delivered to clients located
in physically separate locations, thus overcoming obstacles of distance and lack of
access to trained providers. In some of the first telerehabilitation projects, clinicians
used the telephone to provide client follow-up and caregiver support, and to administer
client self-assessment measures [4, 5]. By the late 1980s, this approach expanded to
include the use of closed-circuit television and pre-recorded video material to provide
visual interaction with clients [6, 7].
As ICT advanced, telerehabilitation applications expanded in scope. Projects began
to employ live interactive videoconferencing, with an emphasis on rehabilitation
interventions that relied mostly on audio-visual interaction (e.g. neuropsychology,
speech-language pathology, counseling, etc.) and for which a lack of physical contact
presented less of a barrier to treatment. In cases where a higher-speed connection was
available, clinicians were able to use high-quality video transmission to provide
consultations, diagnostic assessments, delivery of treatment interventions, and distance
learning and supervision via telerehabilitation [8, 12]. Other projects made use of
slower analog public switched telephone network (PSTN) connections that were more
limited in the speed of the videoconferencing they could provide. Yet despite lower-
quality video transmission, telerehabilitation was shown to be a feasible method for
delivering a range of rehabilitation treatment and assessment interventions [13, 17].
These projects pointed to the potential benefits of telerehabilitation with results
demonstrating efficacy and yielding high levels of client and clinician satisfaction.
Feasibility of telerehabilitation was shown not only in controlled laboratory or clinic
settings, but also across long distances, bringing therapy and assessment to remote and
rural populations.
The recent development of advanced sensor and remote monitoring technologies
has enabled an increasing number of telerehabilitation applications to be deployed into
the home. While early telecare projects looked to provide basic follow-up services and
caregiver support [18, 19], more recent work has developed and deployed systems to
provide home-based exercise monitoring, diet and medication compliance tracking,
robotic-based treatment, and other more dynamic interventions [20, 23].

3. Technology

All telemedicine applications, while unique in their purpose, involve the exchange of
medical or healthcare information. The people involved in the session (e.g. clinician-to-
clinician in a tele-consultation, or clinician-to-client in a treatment encounter), the type
of information collected, and the ways in which it is transmitted and displayed vary
significantly according to the intervention being delivered. Figure 2 illustrates the
typical information exchange in telemedicine. In a live session, the information
transmission occurs in real-time, while in store-and-forward telemedicine, information
is collected and transmitted for review at a later time. It should be noted that this
exchange is most typically bi-directional, with information flowing both to and from
each site. In many telecare applications, there may be an added step where the
information is analyzed or processed either when it is collected or when it is received
(e.g. to check whether data from a sensor fall within pre-defined criteria and to notify
users as appropriate).

Figure 2. Information Exchange during a Telemedicine Session (Uni-Directional)
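
The added analysis step described above can be pictured in code. The following is a minimal illustrative sketch, not drawn from any real telecare product: all field names and threshold values are hypothetical, and the point is only the pattern of screening incoming readings against pre-defined criteria and collecting notifications.

```python
# Minimal sketch of the telecare analysis step: screen incoming sensor
# readings against pre-defined criteria and collect alerts for values
# that fall outside them. All names and thresholds are hypothetical.

CRITERIA = {
    # measurement name: (lower bound, upper bound); None means no bound
    "heart_rate_bpm": (50, 110),
    "systolic_bp_mmhg": (90, 160),
    "elbow_range_deg": (30, None),
}

def screen_readings(readings):
    """Return human-readable alerts for any out-of-criteria readings."""
    alerts = []
    for name, value in readings.items():
        low, high = CRITERIA.get(name, (None, None))
        if low is not None and value < low:
            alerts.append(f"{name}={value} below expected minimum {low}")
        elif high is not None and value > high:
            alerts.append(f"{name}={value} above expected maximum {high}")
    return alerts

# Example: one batch of readings received from a home unit.
batch = {"heart_rate_bpm": 118, "systolic_bp_mmhg": 120, "elbow_range_deg": 25}
for alert in screen_readings(batch):
    print(alert)  # flags the heart-rate and elbow-range values
```

In a store-and-forward deployment such a check would run when the batch is received at the server; in a real-time deployment it could run on the home device itself as each reading is collected.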
Real-time verbal and visual interaction between participants in a telemedicine
encounter occurs through the use of videoconferencing technology. There are a number
of different types of videoconferencing systems, each defined by the type of network
over which they connect and the telecommunications standard which they support (e.g.
H.320, H.323, H.324) [24]. Across all technologies, the quality of the link (typically
measured by the speed of the video and the clarity of the audio) is directly related to the
speed or bandwidth of the connection being used. In cases where the videoconference
is used for basic conversation or to provide macro-level visual interaction with a client,
a lower-fidelity connection such as standard PSTN lines may be sufficient (Figure 3).
However, in telemedicine applications where higher-quality video is needed (e.g.
assessment of fine motor skill, balance, or tremor; detection of facial affect or emotion),
a higher bandwidth connection may be required. Section 6.1 provides information on
how sensors can be used to augment videoconferencing to provide high-fidelity data on
motion and patient performance during a telemedicine session.
While videoconferencing is a powerful tool for bringing people together across
long distances, in many instances, it is not enough to provide the dynamic interaction
between a clinician and a client that lies at the heart of telerehabilitation. By
incorporating additional types of information exchange between users, a wider range of
telerehabilitation interventions can be delivered. Many traditional rehabilitation
assessment and therapy techniques make use of paper-based materials. In a
telerehabilitation application, these materials can be exchanged in a store-and-forward
fashion through the simple use of a fax machine or e-mail, or alternately in real-time

Figure 3. Spinal Cord Injured Client Using PSTN Videophone


via computer-based data sharing methods where on-screen material can be used
interactively by participants (as illustrated by the interactive telerehabilitation system
described in Section 6.2).
Today, an increasing number of projects are moving beyond basic
videoconferencing to include the types of remote ‘hands-on’ interaction that were once
viewed as being impossible for telerehabilitation. Multi-axial position and force sensors
(the latest of which are small in size with wireless communication and low-power
requirements) provide a tangible measure of physical performance and function of a
remote client. Haptic and robotic technologies let therapists ‘feel’ a client and impart
forces and motion. Environmental sensors, and other ‘Smart Home’ equipment,
monitor a living space and collect information on a client’s interaction with the
environment. The data from these devices can be used as part of a remote monitoring
application and transmitted in real-time (with or without a simultaneous
videoconference) or be collected, processed, and analyzed using store-and-forward
methods.
Despite the ever-evolving potential that advanced technologies afford clinicians
and researchers, it is imperative that the presence of technology has no negative impact
on the interaction between clinician and client. Telerehabilitation technologies must be
developed and implemented such that they facilitate the treatment interventions being
delivered and are usable and accepted by both the clients (and their caregivers) who
will receive the services and the clinicians who will provide them.
Given the broad scope of the field of rehabilitation, the dynamic recovery process,
the need to maintain and prevent deterioration in neurological and musculoskeletal
systems, and the inherent variability of clients receiving treatment, it is difficult to
make a one-size-fits-all recommendation of telerehabilitation technologies. Rather, the
clinician, researcher, and/or administrator, in collaboration with the target population,
should first carefully identify the clinical need and relevant constraints (e.g. available
bandwidth, cost, etc.) and use them as a basis from which to select the most suitable
and appropriate technology. Figure 4 illustrates a top-down needs-focused approach for
identifying the appropriate technology for a telerehabilitation application.
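
The top-down, needs-first process illustrated in Figure 4 can be sketched as a simple filter. The candidate catalogue, bandwidth figures, and attribute names below are invented purely for illustration; what matters is the order of operations: the clinical requirement and the constraints come first, and candidate technologies are then filtered to fit them.

```python
# Illustrative sketch of a needs-first technology selection: start from
# the clinical requirement and site constraints, then keep only the
# candidates that satisfy both. The catalogue is a hypothetical example,
# not a real product list.

CANDIDATES = [
    {"name": "PSTN videophone",      "kbps": 128,  "video": "basic"},
    {"name": "IP videoconferencing", "kbps": 768,  "video": "high"},
    {"name": "Room telehealth unit", "kbps": 2000, "video": "high"},
]

def select(required_video, available_kbps):
    """Filter candidates by clinical need (video quality) and bandwidth constraint."""
    return [c["name"] for c in CANDIDATES
            if c["video"] == required_video and c["kbps"] <= available_kbps]

# A fine-motor assessment needs high-quality video, but the remote site
# only has roughly 1 Mbps available.
print(select("high", 1000))  # → ['IP videoconferencing']
```

A real selection process would of course weigh many more dimensions (cost, usability, accessibility, support), but the direction of reasoning is the same: need and constraints drive the technology choice, not the reverse.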
The importance of including the client users in this process is described by
Ballegaard et al., who advocate that the clinical need for health technology must be
supplemented with the ‘citizen’s perspective’, focusing on the everyday life activities,
values, expertise and wishes of the person who will utilize the system [25].

Figure 4. Needs-Based Approach for Selecting Telemedicine Technology


4. Human Factors and User-Centered Design in Telerehabilitation

Technology plays a vital role in telerehabilitation as it is responsible for all aspects of
the transfer of medical information. Yet as important as technology is, in order for
telerehabilitation to ultimately be successful and move forward, it must remain focused
on the users. This emphasis on user-centered design (UCD) is important at all phases of
telerehabilitation planning, development, and implementation. Through careful
involvement and consideration of users, technology designers can produce systems that
are more user-friendly and less error-prone; researchers can collect more targeted and
appropriate outcomes on satisfaction, effectiveness, and cost efficiency; and
administrators can improve staff comfort levels and adoption of telerehabilitation
procedures. The UK Department of Health has suggested that, “systematically and
rigorously finding out what people want and need from their services is a fundamental
duty of both the commissioners and the providers of services. It is particularly
important to reach out to those whose needs are greatest but whose voices are often
least heard” [26, p. 157].
In telemedicine applications, the list of users is quite broad and includes clients,
clinicians, support staff, caregivers, and administrators. In considering the users of a
telerehabilitation system, one must look beyond the factors of age, education, and
technology experiences and account for possible impairments in cognitive, gross and
fine motor function, and visual and language skills [27]. These additional
considerations should be reflected in the telerehabilitation service delivery (e.g.
accessibility of treatment space, training materials, consent documentation; disability-
specific training for staff) as well as in the technology that is utilized (e.g. high
usability, accessible controls).
Mainstream designers have become increasingly aware of the need for UCD, as
can be evidenced by the recent trends in commercial products towards design
emphasizing usability, ergonomics and aesthetics. While there are a number of existing
methods and tools for UCD that can be used, it has been argued that new paradigms
such as a ‘user-sensitive inclusive design’ should be used by developers of health care
devices [28]. The purpose is to gain an understanding of the users’ experience of not
only the technology to be designed but also the social and cultural context in which
users live with their disability and in which they will engage with the technology [29,
31].
In the advancing world of telecare and telerehabilitation, the need to utilize such
design techniques is paramount. Given the unique nature of the telerehabilitation
population and the requirement for clients and caregivers to often directly interact with
the technology, all aspects of telerehabilitation devices and systems must emphasize
ease of use, consistency, and reliability. This relates to physical input and control
functions such as buttons, switches, and connection ports/jacks; the use and placement
of sensors and other peripherals such as sensor arm bands, blood pressure cuffs, and
cameras; and on-screen graphical user interfaces of videoconferencing systems, in-
home telehealth messaging devices, and sensors and other monitoring equipment.
By involving users throughout each stage of the development process, there is a
greater likelihood that the final product will meet its design goals, be completed on-
time, have lower development costs, and be more usable for its target population [32,
33]. To work towards this goal, developers of telerehabilitation devices may wish to
employ UCD techniques such as ‘story-boarding’ which involves conducting
observational fieldwork, semi-structured interviews, and cultural probes, to develop
‘personas’ for each of the targeted users of a system. For example, in a
telerehabilitation system for clients with stroke, the persona would identify physical
and cognitive abilities, social environments, personal life goals and networking needs
and then map that storyboard to a new or emerging technology. In this way the design
is derived from both the clinical need and the future user’s needs. Other valuable UCD
methods that can be used include iterative paper prototyping (a method of having users
test early iterations of a GUI through low-fidelity mock-ups), video acting, and
workshop dissemination processes.
Well-designed and developed devices are clearly essential for successful
telerehabilitation. However, there are other human factors considerations that must also
be taken into account, especially with regard to how the telerehabilitation system
integrates into the existing organization and clinical or research environment. These
considerations include such dimensions as stakeholder support and buy-in, user training,
equipment maintenance, scheduling, and technical support.

5. The Self-Management and Self-Care Agenda

As telerehabilitation will likely be a key component in the evolution of healthcare, it is
important to understand the scope and potential of telerehabilitation systems to meet as
full a range as possible of user needs. One such avenue for future exploration, from
both a service-delivery and research perspective, is client self-care, defined as the
practices undertaken by individuals towards maintaining health and managing illness
[34]. Home-based telecare programs have the potential to promote self-care in
numerous ways. Sensor-based systems could monitor performance and provide clients
with feedback on their progress or display to them pre-established therapy and
educational content (delivered by computer screen), all without the direct real-time
involvement of a therapist. Clients would perhaps feel empowered to take an active role
in their own rehabilitation, conducting self-care whenever they feel appropriate. Self-
care, therefore, provides both the opportunity to receive treatment at the time and place
of the client’s choosing, and to achieve improved health outcomes through self-
managed additional rehabilitation sessions. In recognition of the fact that self-care
could promote long-term wellness and a reduction in future healthcare demand, the UK
Department of Health has identified self-care as one of the key building blocks for
future client-centered health service [35].
The concept of self-management is closely linked to that of self-care. However, in
relation to rehabilitation there are some distinctions: the key elements of the self-
management concept dictate a more dynamic process for the users. These elements are
goal identification, information acquisition, problem solving, decision-making, and
self-reaction, which, in rehabilitation terms, should result in changes in motor control
and subsequent functional ability. The SMART case study (Section 6.1) demonstrates
that it is possible to use ICT and sensor technology to enable the key elements of the
self-management process to take place, the ultimate goal being to promote self-
managed recovery of motor function following a stroke.
Arguably, most of the attention to date on self-care has centered on the
management of long-term conditions (LTCs) such as Chronic Obstructive
Pulmonary Disease and Congestive Heart Failure. While self-care is
encouraged in many countries at the policy level (including the use of telecare
technologies), a recent Cochrane review involving 17 studies with a total of 7,442
participants reported that self-care programs for LTCs led by lay leaders "may lead
to modest, short-term improvements in patients' confidence to manage their
condition and perceptions of their own health," but that the "programmes did not
improve quality of life, alter the number of times patients visited their doctor or
reduce the amount of time spent in hospital" [36, p. 2].
While the clinical outcomes of self-care approaches are inconclusive at this point,
such results should perhaps be interpreted in light of the fact that the process, and the
imputed relationship between self-care education, skills, and health service contact (the
human factors), are currently poorly understood [37]. Indeed, a noticeable cultural shift from both
clients and clinicians is likely to be necessary before the full extent of the effectiveness
of self-care can be quantified. Historically, the ‘balance of power’ has been with the
medical practitioners who would use their expertise and knowledge to respond
reactively to a client. Self-care promises increased knowledge for clients which can be
used to empower them to pro-actively care for themselves; moving the balance of
power, and in some instances sharing the decision making, between client and clinician.
Thus, a paradigm shift is required where service providers, rather than strictly
providing medical interventions, create an environment where people feel supported in
their self-care decision making.
Telecare can be a key tool in enabling such a cultural shift and realizing the
proposed clinical benefits of self-care. It can allow the ‘balance of power’ to be
facilitated in a more agreeable environment as clinicians have access to frequently
entered data (gathered manually and automatically) from the home and can maintain
‘contact’ with clients, particularly those with significant needs. At the same time, the
client is empowered to self-care and receive additional therapy sessions in the
knowledge that the clinician is keeping a ‘watching brief’ on their progress. Therefore,
the telerehabilitation system, by its very nature, encourages a sharing of responsibility
and communication. The SMART case study (Section 6.1) demonstrates the feasibility
of a system that fosters a supportive client environment such that self-care educational
material and the self-management of the therapy sessions can be targeted directly to the
client (either as a consequence of the system automatically observing changes in client-
entered data, or from direct clinician involvement); thus improving the client’s
knowledge base, ability to self-care, and therapy outcomes.
While such innovations are currently small in scale, there is an appetite for such a
paradigm shift in service. The European Commission Information Society and Media
(2006) suggests that "the way healthcare is presently delivered has to be deeply
reformed… The situation is becoming unsustainable and will only worsen in the future
as chronic diseases and the demographic change place additional strains on
healthcare systems around Europe." It calls for a "new healthcare delivery model
based on preventative and person-centred health systems. This new model can only be
achieved through proper use of ICT, in combination with appropriate organizational
changes and skills" [38, p. 11]. The ability to generate reliable evidence for service
commissioners to make informed decisions will be important to deliver this paradigm
shift.

6. Telerehabilitation Case Studies

Stroke is the largest cause of severe disability in the western world. It is estimated that
over 900,000 people in the UK and close to 2.5 million people in the United States are
living with moderate to severe impairments as a result of stroke, affecting their
independence and quality of life [39, 40]. In response to this need, and faced with a
growing prevalence of stroke and other chronic diseases, rehabilitation specialists are
investigating innovative models of health care delivery using technologies to improve
access to care and enable the self management of long-term conditions.

6.1. Self-management of rehabilitation: the SMART study

Evidence suggests that 30% to 60% of stroke survivors regain no functional use of the
arm at 6 months post-stroke [41]. Targeted rehabilitation is known to promote re-
organization of the central nervous system, with the intensity and frequency of the
training [42], as well as the level of the client’s active participation and engagement
[43], shown to be crucial factors in the recovery process.
The SMART rehabilitation project, a collaborative initiative with partners from
Sheffield Hallam University, University of Sheffield, University of Ulster, University
of Essex, and University of Bath, aimed to explore the use of innovative technologies
to promote the self-management of progressive, repetitive, functional task-specific
activities to motivate stroke survivors to engage in the rehabilitation process. The
project examined the scope, effectiveness and appropriateness of systems to support
home-based rehabilitation for stroke survivors and their caregivers focusing on upper
limb home-based rehabilitation.
The SMART rehabilitation system (Figure 5) consists of three components: a
movement tracking system (consisting of two small inertial sensors attached to the
arm), a personal computer and a web-server unit [44, 20, 43]. The client attaches the
sensors, selects a specific functional goal such as drinking or reaching, and repeats the
task a number of times. The sensors record changes in the client's arm position
resulting from movement of the elbow and shoulder joints. The information collected
by the sensors is analyzed and provided as feedback to the client (and caregiver) in the
form of on-screen 3D avatars showing the client's movement compared to the targeted
"normal" movement. Summative kinematic data on frequency of use, angle ranges, and
cycle time can also be viewed.
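
The summative kinematic measures mentioned above can be illustrated with a short sketch. This is not the actual SMART implementation, whose algorithms are not described here; it is a hedged example, under the assumption of timestamped joint-angle samples from the inertial sensors, with an invented flexion threshold used to count repetitions.

```python
# Hedged sketch (not the actual SMART code) of deriving summative kinematic
# measures -- angle range, repetition count, and mean cycle time -- from
# timestamped elbow-angle samples. The threshold value is hypothetical.

def summarize(samples, flex_threshold=60.0):
    """samples: list of (time_s, elbow_angle_deg) pairs, in time order."""
    angles = [a for _, a in samples]
    # Count a "repetition" each time the angle crosses the threshold upward.
    rep_starts = [t for (t, a), (_, prev) in zip(samples[1:], samples[:-1])
                  if prev < flex_threshold <= a]
    cycle_times = [b - a for a, b in zip(rep_starts, rep_starts[1:])]
    return {
        "angle_range_deg": max(angles) - min(angles),
        "repetitions": len(rep_starts),
        "mean_cycle_time_s": (sum(cycle_times) / len(cycle_times)
                              if cycle_times else None),
    }

# Two simulated reach-and-return cycles, sampled once per second.
samples = [(0, 20), (1, 45), (2, 70), (3, 40), (4, 25),
           (5, 50), (6, 75), (7, 45), (8, 20)]
print(summarize(samples))
# → {'angle_range_deg': 55, 'repetitions': 2, 'mean_cycle_time_s': 4.0}
```

Summary measures of this kind are what make remote review practical: a clinician can inspect a handful of numbers per session rather than the raw sensor stream.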
In this way the SMART system enables users to self-manage their rehabilitation by
choosing which goal (exercise) to undertake, deciding when and how often to perform
the task, observing and problem-solving their movements in comparison to a normal
movement, and self-reacting by changing the way they move in response to feedback
given [46, 47]. Clinicians can assess and monitor movements remotely via the Internet
by accessing the central server, or provide comments/instructions over the web-based
system when they have viewed the summative kinematic data.

Figure 5. Infrastructure of the SMART remote rehabilitation system.

The project used an iterative UCD approach to develop a series of prototypes of
the system’s user interface that were evaluated via task analysis, cognitive
walkthroughs, and formal usability testing sessions. The final system was deployed and
tested in the homes of four stroke survivors for two weeks as a proof of concept
research study (Figure 6). Results indicated that the system could be effective at
promoting functional improvement and encouraging compliance. Qualitative data
provided anecdotal evidence of both the motivational aspects of the SMART system
and the way it facilitated self-management of rehabilitation by enabling users to
problem solve their movement strategies and self-react to the feedback provided.
Figure 7 offers comments from two participants on their experience with the system.

Figure 6. Study Participant Using the SMART System at Home

"What I liked about it was that he was so eager to do it. He'd ask me 'shall I do it again?
Shall I do it again?...It's really amazing that he really wanted to do these exercises
much more..."

"Having viewed it visually I'm aware that this elbow swings out...people can see the
difference between what they can do and what they should be doing"

Figure 7. Qualitative Feedback from Home Users of the SMART System


The results provided valuable guidance for further development of the system and
proof of the concept that a robust rehabilitation system can be managed at home and
used to provide useful and motivating feedback within the daily routine of a stroke
survivor. Future projects will expand the SMART system and methodology to develop
a personalized remote therapy system for chronic pain, stroke and chronic heart failure.

6.2. Interactive videoconferencing for remote cognitive-communicative treatment

Following stroke or brain injury, clients often exhibit some degree of cognitive-
communicative impairment. Treatment of these impairments by a speech-language
pathologist (SLP) typically involves skill-based exercises that are largely based on
drill-and-practice, repetition, and the use of treatment materials such as worksheets and
flash cards. Given the highly verbal and visual nature of this treatment, it is well-suited
for delivery using telerehabilitation technology [48]. While early work in the field
pointed to the significant potential for telerehabilitation to deliver numerous cognitive-
communicative interventions to remote clients [6, 9, 11, 12, 49], there was a clear need
for technology that could enhance and expand the ways in which clinicians could
interact with their remote clients.
Work at the National Rehabilitation Hospital, in Washington, DC, investigated use
of a customized telerehabilitation system that combined videoconferencing with
interactive data sharing features (Figure 8). The goal was to develop a system that
could augment and extend interaction during a telerehabilitation session so as to enable
a wide range of therapeutic interventions to be delivered to remote clients. In addition
to the basic verbal and visual communication afforded by the videoconferencing
connection, data collaboration functions were designed to serve as a ‘virtual desktop’
on which the client and clinician were able to work together in real-time using on-
screen material (e.g. word processing documents, scanned worksheets, computer
applications, or digital drawing whiteboards) just as they would use physical treatment
materials in a traditional face-to-face session (Figure 9).
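
The 'virtual desktop' idea can be pictured as both sites applying the same stream of events to a shared document. The sketch below is a conceptual illustration only, not the National Rehabilitation Hospital system's design: the `Whiteboard` class and its event model are invented to show how broadcasting each drawing action keeps the client's and clinician's views identical.

```python
# Conceptual sketch of shared-whiteboard data collaboration: each site
# broadcasts its drawing events to its peers, and every site applies the
# same event stream to a local copy, so all views stay in sync.
# (Hypothetical model; not the actual clinical system.)

class Whiteboard:
    def __init__(self):
        self.strokes = []   # local copy of the shared drawing
        self.peers = []     # remote whiteboards to notify

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def draw(self, stroke):
        """Apply a local stroke, then broadcast it to every connected peer."""
        self.strokes.append(stroke)
        for peer in self.peers:
            peer.receive(stroke)

    def receive(self, stroke):
        self.strokes.append(stroke)   # apply a stroke drawn at a remote site

clinician, client = Whiteboard(), Whiteboard()
clinician.connect(client)
client.draw({"from": (0, 0), "to": (5, 5)})     # client draws on the touchscreen
clinician.draw({"from": (5, 5), "to": (9, 2)})  # clinician responds in real time
assert clinician.strokes == client.strokes       # both sites see the same canvas
```

A production system would add transport, ordering, and conflict handling, but the underlying design choice is the same: interaction is shared by exchanging events rather than by shipping whole documents back and forth.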
Development of the overall system was based on a UCD framework, which
emphasized effective and usable interface design in addition to traditional
software/system development goals of quality, budget, expandability, and timeliness.
The design of the system GUIs was achieved through iterative prototyping that
progressed from low-fidelity designs (e.g. drawings, sketches, and storyboards) to

Figure 8. Telerehabilitation Interaction Incorporating Videoconferencing and Data Sharing


Figure 9. Videoconferencing Interaction with Barrier Drawing Task (NOTE: in this example, the client and
the clinician share an on-screen drawing whiteboard, such that the clinician can view the client’s drawing via
the touchscreen in real-time)
increasingly higher-fidelity functional models. The members of a cross-disciplinary
project team worked in close collaboration with users (both stroke survivors and SLP
clinicians) at all stages of the project through focus groups, brainstorming sessions, ad
hoc task-centered cognitive walkthroughs, and formal usability testing sessions to
ensure that the final design met the usability requirements of the target population.
Through a series of case studies, this system was evaluated as a method for
delivering cognitive-communicative treatment to clients following stroke. Upon
completion of a six-week treatment protocol, all clients achieved improvement in their
functional communication at a level consistent with expectations for an equivalent
period of traditional face-to-face treatment. In addition to a high level of overall client
satisfaction, the data sharing features were found to be a highly valuable component of
the system and were viewed favorably by both clients and clinicians as making
treatment significantly more engaging and motivating [50, 51].

7. Obstacles, barriers and recommendations

Although telerehabilitation has been emerging for some time, it is still in its relative
infancy. In order for telerehabilitation to achieve the long-term goal of improving
health outcomes, empowering clients, and serving as a cost-effective mainstream tool
for all, obstacles related to reliable evidence, technology, policy and reimbursement,
organizational change, and workforce development and operational delivery must be
addressed. Table 1 highlights each of these obstacles and offers recommendations as to
how the telerehabilitation community - through the mutual efforts of researchers,
clinicians, administrators, and policymakers - can overcome them for the benefit of
client and clinician alike. For example, in order to address a lack of reliable evidence,
well-designed studies with appropriate research methodologies must be conducted to
evaluate both clinical and cost effectiveness of telerehabilitation programs as well as
the impacts on client care delivery systems, training programs, and healthcare
organizations.
Table 1. Obstacles to mainstream deployment of telerehabilitation and recommendations to overcome them

Reliable evidence
Obstacle: While service innovators embrace change believing improvements will result, the majority of decision makers and service commissioners are typically more cautious. Consequently, they require reliable evidence comparing the costs and clinical outcomes of one service innovation against another before they will embrace the opportunity.
Recommendations:
1. The academic community should produce reliable evidence, at a variety of different levels, to support service commissioners' decision making for sustainable services. Particular attention should be given to clinical and cost effectiveness compared against existing forms of service delivery.
2. Research funding bodies should review the potential benefits of telerehabilitation and make appropriate levels of funding available to support sustainable research programs that deliver solutions over the short, medium and long term.
3. While the risks of undertaking a new service innovation are often highlighted, the associated risks of rejecting opportunities and staying with the 'status quo' are rarely recognized. Consideration of these should therefore be highlighted.

Technology
Obstacle: Currently, many hardware configurations work in isolation from one another, requiring existing systems to be replaced when additional functionality is needed. This can also limit the functionality of the 'technology', as an individual manufacturer may not have all of the 'technology' required to meet a specific user's need. Much of the technology is not as aesthetically pleasing as it might be; this can result in some users refusing or abandoning the technology because they consider it a badge of dependency, an issue that is equally important in the growing consumer or self-purchase market. Finally, the features of much of the technology are rather simplistic compared to some other fields; greater efforts are required for monitoring, analysis and data presentation.
Recommendations:
1. Interoperable technology standards for communication, operation and interface should be developed to increase the flexibility of end-user solutions and facilitate easy user operation when utilizing products from different manufacturers*.
2. Greater efforts should be made to understand why users reject or abandon 'technology', and equipment should be redesigned to maximize benefit for all, including the aesthetics of the technology.
3. In research and development, greater collaboration between different disciplines should be encouraged due to the cross-organizational nature of the solutions required. Collaboration and joint initiatives with industry should also ensure the minimum delay between research demonstration and translation to an available market product.

Policy and reimbursement
Obstacle: The supportive strength of national policy varies across countries, as do the healthcare environment and the way in which services are funded. Even where policy is strong, the funding is often unclear, sometimes resulting in greater financial benefits for those not contributing to the purchasing costs of such systems than for those actually paying for them. As such, where direct benefit is not returned to the purchaser, there can be little motivation to invest at a micro level, even though the macro-level benefits may be understood.
Recommendations:
1. International comparison and collaboration at a policy level should be sought to share learning and evidence and to encourage service innovation and technological development by international companies*.
2. Holistic views of the financial costs and associated benefits are required. Alternatively, an understanding of the benefits throughout operational systems is required so that costs and benefits can be shared accordingly.

Organizational change
Obstacle: Challenging cultural attitudes and the established ways in which organizations function is a considerable task that can take significant time. However, if the whole organization is not committed to the service redesign, then the benefits are likely to be less than possible or anticipated.
Recommendations:
1. Evidence should be presented, including consultation with end users, such that the requirement for change and the anticipated benefits can be described, in effect justifying the business case.
2. The process of change should be project-managed accordingly, with sufficient resources provided to enable the best possible chance of success.
3. Throughout the process, consultation and evaluation should be conducted to provide a feedback loop that reinforces the benefits and lessons learnt.

Workforce development and operational delivery
Obstacle: A variety of new skills may well be required for service delivery, including the 'balance of power', technological aspects, equipment installation and maintenance, etc. Operational codes of practice, competencies and standards will also be required to ensure practitioners have the appropriate skills and abilities to offer services to a demonstrable level of performance.
Recommendations:
1. Additional training courses are required to provide the necessary knowledge and competencies so the workforce can effectively deliver recognizable components of the overall system.
2. National occupational standards should be encouraged so clear career paths can be established.
3. Government organizations, trade associations and the voluntary sector should be encouraged to establish local, national, and possibly international, operational standards.

* Note: The efforts of the Continua Health Alliance [52], aimed at promoting interoperable personal telehealth solutions, are welcomed.

8. Conclusion

The need for evolving the delivery of rehabilitation services and incorporating aspects
of self-care and remote monitoring is somewhat magnified in light of the shift in global
demographics to an older population and the increasing prevalence of chronic health
conditions. Telerehabilitation holds significant potential to meet this need and to
provide services that are more accessible to more people, while offering a more
affordable, enhanced level of care.
Despite all of its potential, the evolution of telerehabilitation is not inevitable and it
will not occur on its own. Greater adoption of telerehabilitation will likely occur as a
result of the shift towards user-focused and technology-enabled healthcare, and as an
increased emphasis is placed on preventative and continuous care, rather than
traditional episodic and reactive care. Additionally, there is a crucial need for a greater
body of evidence on clinical effectiveness and cost efficiency of telerehabilitation
programs. Research should look to analyze the behavior change that can and does occur
as a result of telerehabilitation interventions and the impact it has on long-term health
outcomes.
The last decade has seen tremendous growth of telerehabilitation, and this trend is
likely to continue. While most of the past and current focus in telerehabilitation has
been on modifying face-to-face treatment methods for remote delivery, future work
will explore the potential for telerehabilitation to enhance and perhaps even improve
care. While this growth occurs and new approaches for remote service delivery are
explored, great care must be taken to ensure that the planning, design, and
implementation of telerehabilitation technologies, systems, and services are strongly
grounded in user-centered human factors principles. Telerehabilitation must never be
the result of 'technology push' alone; rather, it must be driven by clinical need and a
desire to improve healthcare.
Acknowledgments

The SMART Consortium (www.thesmartconsortium.org) is funded by the Engineering
and Physical Sciences Research Council (www.fp.rdg.ac.uk/equal). Consortium
members include Sheffield Hallam University, University of Sheffield, University of
Bath, University of Essex and the University of Ulster.

References

[1] J. Craig, and V. Patterson, Introduction to the practice of telemedicine, In R. Wootton, J. Craig, and V.
Patterson, Introduction to Telemedicine, Second Edition, London: The Royal Society of Medicine Press,
Ltd, 2006, pp.3-14.
[2] A.C. Norris, Essentials of Telemedicine and Telecare, West Sussex: John Wiley and Sons, Ltd, 2002.
[3] V. Della Mea, Prerecorded telemedicine, In R. Wootton, J. Craig, and V. Patterson, Introduction to
Telemedicine, Second Edition, London: The Royal Society of Medicine Press, Ltd, 2006, pp. 3-14.
[4] G.R. Vaughn, Tel-communicology: health-care delivery system for persons with communicative
disorders, ASHA 18 (1) (1976), 13-17.
[5] N. Korner-Bitensky, S. Wood-Dauphinee, Barthel Index information elicited over the telephone. Is it
reliable?, American Journal of Physical Medicine and Rehabilitation 74 (1) (1995), 9-18.
[6] R. Wertz, N. Dronkers, E. Bernstein-Ellis, Y. Shubitowski, R. Elman, G. Shenaut, R. Knight, and J.
Deal, Appraisal and diagnosis of neurogenic communication disorders in remote settings, In R.H.
Brookshire, Clinical Aphasiology, Minneapolis: BRK Publishers, 1987, pp. 117-123.
[7] R. Wertz, N. Dronkers, E. Bernstein-Ellis, L. Sterling, Y. Shubitowski, R. Elman, G. Shenaut, R.
Knight, and J. Deal, Potential of telephonic and television technology for appraising and diagnosing
neurogenic communication disorders in remote settings, Aphasiology 6 (2) (1992), 195-202.
[8] J.R. Duffy, G.W. Werven, and A.E. Aronson, Telemedicine and the diagnosis of speech and language
disorders, Mayo Clinic Proceedings 72 (12) (1997), 1116-1122.
[9] A. McCullough, Viability and effectiveness of teletherapy for pre-school children with special needs,
International Journal of Language & Communication Disorders 36 (1) (2001), 321-326.
[10] L. Savard, A. Borstad, J. Tkachuck, D. Lauderdale, and B. Conroy, Telerehabilitation consultations for
clients with neurologic diagnoses: cases from rural Minnesota and American Samoa,
NeuroRehabilitation 18 (2) (2003), 93-102.
[11] D. Theodoros, T.G. Russell, A. Hill, L. Cahill, and K. Clark, Assessment of motor speech disorders
online: a pilot study, Journal of Telemedicine & Telecare 9 (2) (2003), S66-68.
[12] D. Brennan, A. Georgeadis, C. Baron, L. Barker, The effect of videoconference-based telerehab on
story retelling performance by brain injured subjects and its implications for remote speech-language
therapy, Telemedicine Journal and e-Health 10 (2) (2004), 147-154.
[13] R.P. Hauber, M.L. Jones, A.J. Temkin, S. Vesmarovich, V.L. Phillips, Extending the Continuum of
Care After Spinal Cord Injury Through Telerehabilitation, Topics in Spinal Cord Injury Rehabilitation
5 (3) (1999), 11-20.
[14] N.C. Dreyer, K.A. Dreyer, D.K. Shaw, and P.P. Wittman, Efficacy of telemedicine in occupational
therapy: a pilot study, Journal of Allied Health 30 (1) (2001), 39-42.
[15] E.D. Lemaire, Y. Boudrias, and G. Greene, Low-bandwidth, Internet-based videoconferencing for
physical rehabilitation consultations, Journal of Telemedicine and Telecare 7 (2) (2001), 82-89.
[16] P.G. Clark, S.J. Dawson, C. Scheideman-Miller, and M.I. Post, Telerehab: Stroke teletherapy and
management using two-way interactive video, Neurology Report 26 (2002), 87-93.
[17] T.G. Russell, P. Buttrum, R. Wootton, and G.A. Jull, Low-bandwidth telerehabilitation for patients who
have undergone total knee replacement: preliminary results, Journal of Telemedicine & Telecare 9 (2)
(2003), S44-47.
[18] V.L. Phillips, S. Vesmarovich, R. Hauber, E. Wiggers, A. Egner, Telehealth: reaching out to newly
injured spinal cord patients, Public Health Reports 116 (1) (2001), 94-102.
[19] B.Q. Tran, K.M. Buckley, and C.M. Prandoni, Selection & use of telehealth technology in support of
homebound caregivers of stroke patients, CARING Magazine 21 (3) (2002), 16-21.
[20] H. Zheng, R.J. Davies, N.D. Black, Web-based monitoring system for home based rehabilitation with
stroke patients, Proceedings of the 18th IEEE International Symposium on Computer-Based Medical
Systems, 2005.
[21] R.M. Bendixen, K. Horn, C. Levy, Using telerehabilitation to support elders with chronic illness in their
homes, Topics in Geriatric Rehabilitation 2 (1) (2007), 47-51.
[22] M.K. Holden, T.A. Dyar, and L. Dayan-Cimadoro, Telerehabilitation using a virtual environment
improves upper extremity function in patients with stroke, IEEE Transactions on Neural Systems and
Rehabilitation Engineering 15 (1) (2007), 36-42.
[23] G. Placidi, A smart virtual glove for hand telerehabilitation, Computers in Biology and Medicine 37 (8)
(2007), 1100-1107.
[24] Tandberg, Video Conferencing Standards, Application Notes D10740, Rev 2.3. Available from:
http://www.tandberg.com/collateral/white_papers/whitepaper_Videoconferencing_standards.pdf
[25] S.A. Ballegaard, et al, Healthcare in everyday life - designing healthcare services for daily life, CHI
proceedings, Florence, Italy, 2008, pp. 5-10.
[26] H.M. Government, Department of Health. Our health, our care, our say: a new direction for community
services, 2006.
[27] D.M. Brennan, L.M. Barker, Human factors in the development and implementation of
telerehabilitation systems, Journal of telemedicine and telecare 14 (2008), 55-58.
[28] A.F. Newell, P. Gregor, Design for older and disabled people – where do we go from here?, Univ Access
Inf Soc 2 (2002), 3-7.
[29] W. Gaver, T. Dunne, E. Pacenti, Cultural probes, Interactions 6 (1) (1999), 21-29.
[30] H. Hutchinson, et al., Technology probes inspiring design for and with families, Proceedings of the
conference on human factors in computing systems, 2002.
[31] T. Mattelmäki, Applying probes – from inspirational notes to collaborative insights, CoDesign 1 (2)
(2005), 83-102.
[32] P. Yellowlees, Successful development of telemedicine systems – seven core principles, Journal of
Telemedicine and Telecare 3 (1997), 215-23.
[33] A.V. Salvemini, Challenges for user-interface designers of telemedicine systems, Journal of
Telemedicine and Telecare 5 (1999), 163-8.
[34] The Health Foundation, Patient-focused interventions: A review of the evidence, A.Coulter, and J.
Ellins, London, 2006.
[35] Department of Health, Self care, 2008, from http://www.dh.gov.uk/en/Healthcare/Selfcare/index.htm.
[36] G. Foster, S.J.C. Taylor, S.E. Eldridge, J. Ramsay, and C.J. Griffiths, Self-management education
programmes by lay leaders for people with chronic conditions, Cochrane Reviews 2 (2007).
[37] C. Gately, A. Rogers, and C. Sanders, Re-thinking the relationship between long-term condition self-
management education and the utilisation of health services, Social Science and Medicine 65 (2007),
934-45.
[38] European Commission Information Society and Media, ICT for Health and i2010: Transforming the
European healthcare landscape towards a strategy for ICT for Health, Luxembourg, 10 (2006), ISBN
92-894-7060-7.
[39] W. Rosamond, et al., Heart disease and stroke statistics: a report from the American Heart Association
Statistics Committee and Stroke Statistics Subcommittee, Circulation 117 (4) (2008), e25-146.
[40] National Audit Office, Reducing Brain Damage: Faster access to better stroke care, London: The
Stationery Office, 2005.
[41] G. Kwakkel, B.J. Kollen, J. van der Grond, A.J. Prevo, Probability of regaining dexterity in the flaccid
upper limb: impact of severity of paresis and time since onset in acute stroke, Stroke 34 (9) (2003),
2181-2186.
[42] G. Kwakkel, B.J. Kollen, R.C. Wagenaar, Long term effects of intensity of upper and lower limb
training after stroke: a randomised trial, Journal of Neurology Neurosurgery and Psychiatry 72 (2002),
473-479.
[43] M. Lotze, C. Braun, N. Birbaumer, S. Anders, and L. Cohen, Motor learning elicited by voluntary drive,
Brain 126 (4) (2003), 866-872.
[44] G.A. Mountain, P.M. Ware, J. Hammerton, S.J. Mawson, H. Zheng, R. Davies, N.D. Black, H. Zhou, H.
Hu, N. Harris, and C. Eccleston, The SMART Project: A user led approach to developing and testing
technological applications for domiciliary stroke rehabilitation, In P. Clarkson, J. Langdon, and P.
Robinson, Designing Accessible Technology, Springer-Verlag, London, 2006.
[45] H. Zheng, R. Davies, N.D. Black, P.M. Ware, J. Hammerton, S.J. Mawson, G.A. Mountain, and N.
Harris, The SMART Project: An ICT decision platform for home based stroke rehabilitation system,
Proceedings of the International Conference on Smart Homes and Telematics (2006).
[46] T. Creer, K. Holroyd, Self management, In A. Braun, S. Newman, J. Weinman, C. McManus,
Cambridge handbook of Psychology, Health and Medicine, Cambridge: Cambridge University Press,
1997.
[47] F. Jones, Strategies to enhance chronic disease self-management: How can we apply this to stroke,
Disability and Rehabilitation 28 (13-14) (2006), 841-847.
[48] D. Brennan, A. Georgeadis, C. Baron, Telerehabilitation tools for the provision of remote speech-
language treatment, Topics in Stroke Rehabilitation 8 (4) (2002), 71-78.
[49] A. Georgeadis, D. Brennan, L. Barker, C. Baron, Telerehabilitation and its effect on story retelling by
adults with neurogenic impairments, Aphasiology 18 (5/6/7) (2004), 639-652.
[50] D. Brennan, L. Barker, A model of client-clinician interaction for telemedicine in speech-language
pathology, Telemedicine Journal and e-Health 11 (2) (2005), 218-219.
[51] D. Brennan, A. Georgeadis, The use of data sharing in speech-language telemedicine following stroke,
Telemedicine Journal and e-Health 12 (2) (2006), 226.
[52] http://www.continuaalliance.org/home
Socially Assistive Robotics for Stroke and
Mild TBI Rehabilitation
Maja MATARIĆ a,1, Adriana TAPUS a,2, Carolee WINSTEIN b,3, Jon ERIKSSON a,4

a Department of Computer Science, University of Southern California, USA
b Division of Biokinesiology and Physical Therapy, University of Southern California, USA

Abstract. This paper describes an interdisciplinary research project aimed at
developing and evaluating effective, user-friendly, non-contact robot-assisted
therapy designed for in-home use. The approach stems from the emerging field of
social cognitive neuroscience that seeks to understand phenomena in terms of
interactions between the social, cognitive, and neural levels of analysis. This
technology-assisted therapy is designed to be safe and affordable, and relies on
novel human-robot interaction methods for accelerated recovery of upper-
extremity function after lesion-induced hemiparesis. The work is based on the
combined expertise in the science and technology of non-contact socially assistive
robotics and the clinical science of neurorehabilitation and motor learning, brought
together to study how to best enhance recovery after stroke and mild traumatic
brain injury. Our approach is original and promising in that it combines several
ingredients that individually have been shown to be important for learning and
long-term efficacy in motor neurorehabilitation: (1) intensity of task specific
training and (2) engagement and self-management of goal-directed actions. These
principles motivate and guide the strategies used to develop novel user activity
sensing and provide the rationale for development of socially assistive robotics
therapy for monitoring and coaching users toward personalized and optimal
rehabilitation programs.

Keywords. Socially-Assistive Robotics (SAR), Human-Robot Interaction (HRI),
Robot-assisted rehabilitation, Technology-assisted therapy.

Socially Assistive Robotics - A New Type of Rehabilitation Tool

As a result of the confluence of enabling technologies and growing societal needs,
research into assistive technologies is growing rapidly. Examples of assistive domains
amenable to significant technological advances include physical rehabilitation for post-operative
cardiac care, post-stroke rehabilitation, traumatic brain injury, and obesity
mitigation. Intense task-oriented training is known to be an effective therapy for upper

1 Corresponding Author: Robotics Research Lab/Interaction Lab, Department of Computer Science,
University of Southern California, Los Angeles, USA; E-mail: mataric@usc.edu.
2 Corresponding author: Robotics Research Lab/Interaction Lab, Department of Computer Science,
University of Southern California, Los Angeles, USA; E-mail: adriana.tapus@ieee.org.
3 Corresponding author: Division of Biokinesiology & Physical Therapy, School of Dentistry, and
Department of Neurology, Keck School of Medicine, University of Southern California, Los Angeles, USA;
E-mail: winstein@usc.edu.
4 Corresponding author: Robotics Research Lab/Interaction Lab, Department of Computer Science,
University of Southern California, Los Angeles, USA; E-mail: je@kth.se.
limb neurorehabilitation from stroke, as well as unilateral brain damage from traumatic
brain injury, tumors affecting arm function, and Parkinson’s disease [1, 2, 3].
As the population ages and the number of sufferers of the above disabilities grows
[4, 5], the need for effective means of supervising and motivating rehabilitation
activities is rapidly increasing. Importantly, the current standard of care cannot meet
the growing need for supervision of rehabilitation activities, both in and especially
outside of the clinic. The research work we propose is aimed at addressing this
important problem by developing and validating non-contact robotics technology as a
means to improve task-specific practice and functional outcomes. Specifically, we
propose a general and affordable technology that can provide supplemental therapy,
supervision, and encouragement of functional practice for individuals with impaired
movement capability in an effort to significantly augment in- and out-of-clinic care.
Socially Assistive Robotics (SAR) focuses on assisting through social, not physical,
interaction [6]; therapeutic human-robot interaction therefore offers a feasible and
cost-effective way to reach our goal by maximizing patients' motivation both during
and after structured rehabilitation, such that they continue practicing beyond the
physical therapy session itself. Our long-term goal is to show
that such enhancement of sustained motivation can be achieved by incorporating
contact-free robotic therapy during rehabilitation. This creates a critical niche for SAR,
wherein Human-Robot Interaction (HRI) is used not to replace physical or
occupational therapists, but to provide frequently and readily available individualized
rehabilitation aids. By providing the opportunity for time-extended monitoring and
encouragement of rehabilitation activities in any setting (at the clinic or at home), these
systems complement human care [7, 8, 9, 10, 11, 12].
In this developmental/exploratory research work, we illustrate some of the key
factors that impact user acceptance and practice efficacy in improving self-efficacy of
paretic arm use through human-robot social interaction while optimizing functional
performance and recovery. We describe a pilot study involving an autonomous
assistive mobile robot that aids stroke patient rehabilitation by providing monitoring,
encouragement, and reminders. We also present preliminary results on the benefits of
mirroring user personality in the robot's behavior and of user modeling for adaptive
and natural assistive behaviors. All of these are aimed at improving human-robot
social interaction while enhancing the user's task performance in
daily activities and rehabilitation activities. Furthermore, we outline and discuss future
work and factors toward the development of effective socially assistive rehabilitation
robots.
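The personality-mirroring result mentioned above can be sketched as a simple adaptation rule. The code below is an illustrative assumption, not the study's implementation: it maps a user extraversion score in [0, 1] to robot coaching parameters, and the thresholds, phrases, and interaction distances are all invented for the example.

```python
def coaching_style(extraversion: float) -> dict:
    """Map a user extraversion score in [0, 1] to robot behavior parameters."""
    if not 0.0 <= extraversion <= 1.0:
        raise ValueError("extraversion must be in [0, 1]")
    if extraversion >= 0.5:
        # Mirror an extraverted user: energetic, challenging coach at closer range.
        return {"tone": "challenging", "speech_rate": "fast",
                "interaction_distance_m": 0.8,
                "prompt": "Come on, you can do a few more repetitions!"}
    # Mirror an introverted user: gentle, nurturing coach at a greater distance.
    return {"tone": "nurturing", "speech_rate": "slow",
            "interaction_distance_m": 1.5,
            "prompt": "You're doing well; take it at your own pace."}

def encourage(extraversion: float, reps_done: int, reps_target: int) -> str:
    """Choose the robot's next utterance for a monitored exercise."""
    if reps_done >= reps_target:
        return "Great job, exercise complete."
    return coaching_style(extraversion)["prompt"]

# An extraverted user mid-exercise receives the challenging prompt.
print(encourage(0.8, reps_done=4, reps_target=10))
```

In a real system the extraversion score would come from a user model built up during interaction, and the selected parameters would drive the robot's speech and proxemic behavior.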

1. Defining the Need and New Insights for Hands-Off Robotic Rehabilitation

The technology described in this research features a novel, non-contact approach to
robotics-based upper extremity rehabilitation. Our approach is original and promising
in that it combines several ingredients that individually have been shown to be
important for learning and long-term efficacy in motor neurorehabilitation: (1) intensity
of task specific training and (2) engagement and self-management of goal-directed
actions. These two guiding principles are incorporated into the development and testing
of an engaging, user-friendly home-based robotic therapy for accelerated recovery of
upper-extremity function after stroke hemiparesis that relies on our pilot results in
novel human-robot interaction [7, 8, 13, 14, 15].
We propose to develop and evaluate robot-assisted rehabilitation technology with
general relevance to motor rehabilitation due to stroke, traumatic brain injury, tumors
affecting limb use, and Parkinson’s Disease. Our work is motivated by the large and
growing need for providing motivation and supervision of intensive rehabilitation
activities required as part of therapy in such disabilities outside of the clinic. Stroke
alone is the leading cause of serious, long-term disability among American adults and
the third leading cause of death in the US [4]. Each year, over 700,000 people suffer a
stroke, and nearly 400,000 survive with some form of neurologic disability, placing a
tremendous burden on the nation’s private and public health resources [16]. The
cumulative total of stroke-affected Americans is over 4 million, and the estimated
annual burden from stroke related disability is $53.6 billion, of which $20.6 billion is in
indirect costs due to lost productivity and income. Population-based statistics indicate
that age is the strongest non-modifiable risk factor, with the incidence of stroke
increasing exponentially after the age of 25, and the majority of strokes occurring in
persons older than 65. It is estimated that the number of stroke survivors with disability
will almost double by the year 2025 as the ‘baby boom’ population progressively ages,
making the burden even more apparent [17]. For these reasons, we propose to perform
our evaluation experiments with persons post-stroke with the understanding that the
developed technology is intended well beyond this single cause for disability.
Loss of function of the upper limb is one of the most important impairments
warranting effective rehabilitation after stroke. Statistics indicate that over 80% of
first-time strokes (infarctions only) involve acute hemiparesis of the upper limb that
significantly impacts the functional independence and health of the stroke survivor [18].
Stroke-related arm disabilities range from deficits in sensation and motor
coordination to complete hemiparesis and loss of limb function. In addition, stroke
often leaves individuals unable to perform movements with the affected limb even
though the limb is not completely paralyzed. This loss of function, termed ‘learned
disuse,’ is most obvious during the early post-injury period but can improve with
rehabilitation therapy [19, 20]. Yet, only limited attention has been given to upper limb
rehabilitation, and functional recovery of the arm and hand has generally been more
resistant to traditional approaches than that of the lower extremities [18, 21].
Rehabilitation of the upper extremity requires more fine motor control than the
lower extremity, and rehabilitation of fine motor skills requires longer and more
specific types of task-related training than is included in the standard rehabilitation
program. In addition, health insurance companies often reject requests for rehabilitation
past the three to six month period following a stroke due to the belief that additional
therapy would not be helpful [22]. However, clinical studies using motor training have
found improvement in functional upper limb performance in patients more than 1 year
post stroke and cortical reorganization and recruitment of adjacent brain areas
associated with intensive use of the affected upper limb have been documented several
years after the initial stroke injury [23, 24, 25].
The most effective known arm-focused interventions with the strongest evidence
and potentially the most immediate and cost-effective appeal for the current health care
environment share a common emphasis on focused task-specific training applied with
an intensity higher than usual care. This, along with the findings of our recent
Extremity Constraint-Induced Therapy Evaluation (EXCITE) randomized control trial
[27, 28], suggests that the potential for functional recovery persists much longer than
previously believed, and that the degree of recovery that can be anticipated depends not
only on the level of initial impairment but also on the amount, type, and intensity of practice
available to the patient during the recovery process. To make significant advances in
the field of motor rehabilitation, we need a better understanding of the critical factors
that underlie the recovery process at the behavioral, psychological, and pathological
levels, and the specific ways that therapeutic interventions modulate that recovery
process across these levels. For these reasons, we propose a concerted multidisciplinary
collaboration between engineering, computer and clinical sciences that will develop
and evaluate cost-effective, evidence-based upper extremity rehabilitation programs
aimed specifically at the promotion of engaging, motivating human-robot interaction
for accelerated recovery of function.

2. Pilot Study and Preliminary Results

Today’s rehabilitation robotics methods involve hands-on application of forces either
by the patient to a monitoring robot manipulandum, or by the robot mechanism to the
patient, or a combination of the two [29, 30, 31, 32]. Because human-robot contact
involves complex issues of safety, such hands-on robotics methods remain areas of
active ongoing research, with many outstanding challenges. Moreover, the existing
contact robots used for upper limb rehabilitation are not portable and generally require
patients to travel to a laboratory for robotic therapy, are very expensive, and expertise
is necessary to program and execute trials. Risk of injury to the patient is a particular
concern when movement of a limb with sensorimotor loss is imposed by a robot, and
injuries to the upper limb from the use of contact robots have been documented [29,
30]. Another concern stems from differences in outcomes of a recent rehabilitation trial
in stroke hemiparesis that compared a functional-task training group with a purely
strength-training group [26]. We found that while the strength-training protocol
produced short-term gains in strength, these changes did not persist 9 months later.
Instead, the functional-task training group showed long-term benefits in the
performance of functional tasks and, surprisingly, a concomitant strength gain 9 months
later that exceeded that of the strength-training group. One explanation for this counter-intuitive
result is suggested by the post-therapy, self-maintenance literature. We speculate that
the functional task practice protocol provided a more favorable and meaningful context
for continued arm use and associated strength gain, perhaps mediated through
meaningful activity (outside of the therapy session), than did the resistance-strength
exercise training protocol that was less meaningful and engaging for the participants.
This suggests that contact-robot training for force production (strength gain) may
be beneficial in the short term, but unless training on the robot continues, it will not
lead to better function or to strength gains that persist in the long term.
Therefore, the lack of a convenient, practical, non-technical, and safe human-
robotic interaction for rehabilitation further supports the rationale of our proposed
approach, which explores the contact-free robotic rehabilitation paradigm. The non-
contact approach affords the client the opportunity to engage in functional therapeutic
interaction conveniently and safely within the clinic or home in a user-friendly manner.
The two approaches can be complementary, in that hands-on methods may be
more useful in the early stages of rehabilitation, while hands-off methods can be used
after a certain level of movement proficiency is attained, and can be employed in a
variety of settings including homes.
This section describes our first pilot study with a socially assistive mobile robot
and the first results. The robot interacts with post-stroke patients in the process of
performing rehabilitation activities such as arm movements and shelving magazines, by
providing encouragement, guidance, and reminders.

Robot Test-bed

The robot used for our experiments, shown in Figure 1, consisted of an ActiveMedia
Pioneer 2-DX mobile robot base, equipped with a SICK LMS200 laser rangefinder
used to track and identify people in the environment by detecting reflective fiducials
worn by users. A Sony pan-tilt-zoom (PTZ) camera allowed the robot to “look” at and
away from the participant, shake its “head” (camera), and make other communicative
actions. A speaker produced pre-recorded or synthesized speech and sound effects. The
IMU-based motion capture unit provided movement data to the robot wirelessly in real
time. The entire robot control software was implemented using the Player robot control
system [33].

Design

This study [7, 8] focused on how different robot behaviors may affect the patient's
willingness to comply with the rehabilitation program. Our main goal was to test
different voices, movements, and levels of patience on the part of the robot, and to
correlate these with participant compliance, i.e., adherence to the activities.
The robot was able to safely move about the environment without colliding with
objects or people. This was achieved through the use of a laser sensor which provides
high-fidelity information in real-time. Moreover, the robot was able to find and follow
the patient, maneuver itself to an appropriate position for monitoring the patient, and
leave when it was not wanted. This was achieved through the use of highly reflective
markers worn on the leg of the patient (Figure 2), in order for the robot to reliably
detect and recognize the patient.
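The fiducial-based tracking step can be sketched roughly as follows. This is a minimal illustration, not the authors' Player-based implementation; the function name, intensity threshold, and scan geometry are assumed for the example.

```python
import math

def find_fiducial(ranges, intensities, angle_min, angle_step,
                  intensity_threshold=0.8):
    """Return the (x, y) position of a retro-reflective marker in a laser
    scan, or None if no return exceeds the reflectivity threshold."""
    hits = [i for i, inten in enumerate(intensities)
            if inten >= intensity_threshold]
    if not hits:
        return None
    # Average the Cartesian positions of all high-reflectivity returns;
    # with a single reflective band this collapses to the band's center.
    xs, ys = [], []
    for i in hits:
        angle = angle_min + i * angle_step
        xs.append(ranges[i] * math.cos(angle))
        ys.append(ranges[i] * math.sin(angle))
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Once the marker position is known, following the patient or backing away reduces to driving toward or away from that point while keeping a target distance.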

Figure 1. The Pioneer mobile robot base used in the experiments. Shown are the laser (the blue box), camera
(mounted on top of the laser), and speakers (mounted on each side of the laser).
(Figure 2 annotations: x,y,z motion sensor; laser rangefinder; laser-reflective band; robot.)
Figure 2. Two hands-off robot-assisted rehabilitation tasks: (a) magazine stacking and (b) free movement of
the stroke-affected limb.
The robot was able to monitor the movement of the stroke-affected limb. We used
a light-weight and low-cost inertial measurement unit (IMU). The patient wore the
sensor on the wrist, and it provided its 3D real-time position information to the robot
through wireless communication. The robot used the information provided by the
motion sensor about the movement of the patient’s limb so as to encourage the patient
to continue using the limb, or use the limb more or in a different way, as appropriate
based on the sensor data and goal movement.
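A coarse activity check of this kind can be sketched from the wrist-position stream. The speed threshold and feedback strings below are invented for illustration; the study does not report these values.

```python
import math

def average_speed(positions, dt):
    """Mean wrist speed (m/s) from consecutive 3D position samples
    taken at a fixed interval dt."""
    if len(positions) < 2:
        return 0.0
    total = 0.0
    for p0, p1 in zip(positions, positions[1:]):
        total += math.dist(p0, p1)
    return total / (dt * (len(positions) - 1))

def feedback(positions, dt, active_threshold=0.05):
    """Very coarse decision: encourage more arm use if the limb is
    nearly still, otherwise give positive feedback."""
    if average_speed(positions, dt) >= active_threshold:
        return "keep it up"
    return "try using your arm more"
```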
The robot was capable of using three distinct interaction modes, as follows:

I. The robot said nothing, and gave feedback only with different
beeping sounds. The robot’s presence also served to remind the
patient of the activity. The robot kept at a distance from the patient
and was not very persistent in encouraging the patient.
II. The robot used a “robotic”-sounding synthesized voice for its
communication with the patient. It gave simple verbal feedback,
including: “It looks like you are not using your arm”, “Have you
already shelved the books?”, “Great, keep up the good work”. It
maintained a shorter distance to the patient than in the first mode and,
when the patient was not reacting to the encouragement by
continuing the activity, was more persistent before giving up and
going away.
III. The robot used a pre-recorded friendly human voice, with humor and
engagement. It stayed with the patient and followed him/her around,
persistently encouraging the patient to perform the activity. It also
used body movement, wiggling back and forth, side to side and
turning around.
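The three modes differ along a few parameters: voice type, preferred distance, and persistence. They can be captured as data; the numeric values below (distances, re-prompt counts) are illustrative guesses, not figures reported in the text.

```python
from dataclasses import dataclass

@dataclass
class InteractionMode:
    name: str
    voice: str            # "beeps", "synthesized", or "recorded"
    distance_m: float     # distance the robot keeps from the patient
    persistence: int      # re-prompts before giving up and leaving
    uses_body_motion: bool

# Illustrative parameter values ordered from least to most engaging.
MODES = [
    InteractionMode("I",   "beeps",       2.0, 1, False),
    InteractionMode("II",  "synthesized", 1.5, 3, False),
    InteractionMode("III", "recorded",    1.0, 6, True),
]
```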
The robot was programmed to behave as follows: when activated, it started by
finding the patient, approaching him/her, and maintaining a specified distance. It then
gave instructions to the patient regarding the activity to be performed. During the
activity, it monitored the movement of the relevant limb with the motion sensor and
provided continual feedback based on the patient’s behavior and its interaction mode.
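The scripted behavior just described (find, approach, instruct, monitor, respond) can be sketched as a simple control loop. The `robot` and `mode` interfaces here are hypothetical stand-ins, not the Player-based control code used in the study.

```python
def run_session(robot, mode, duration_s, dt=1.0):
    """One session: find and approach the patient, give instructions,
    then monitor limb activity and respond according to the interaction
    mode's persistence. Returns the time the patient stayed on task."""
    robot.find_and_approach(mode.distance_m)
    robot.say_instructions()
    missed_prompts = 0
    t = 0.0
    while t < duration_s:
        if robot.limb_is_active():        # from the wireless motion sensor
            missed_prompts = 0
            robot.give_feedback(mode)
        else:
            missed_prompts += 1
            if missed_prompts > mode.persistence:
                robot.leave()             # patience exhausted; go away
                break
            robot.encourage(mode)
        t += dt
    return t
```

The returned time-on-task corresponds to the overall performance measure described below.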
Our hypothesis was as follows:
H1: More animated/engaging and persistent robot behavior will result in better
patient compliance with the robot's instructions and higher patient approval of
the robot.
Experiments

This system was evaluated in three short experiments at the USC Center for Health
Professions on the Health Sciences Campus and in the USC Robotics Lab on the
University Park Campus. Two of these were conducted with patients, and one with
non-patients. Of the six stroke patients, two were women; the participants ranged in
age from 65 to 75. The stroke impairment occurred on
different limbs among the patients but all were sufficiently mobile to perform the
activities in the experiments. All experiments were video recorded and comprised
several experimental runs involving three randomly selected types of interaction for
each participant. The participants were asked by the robot to perform one of the
experimental tasks: shelving books/magazines or any voluntary movement of the
stroke-affected limb. The robot measured arm movement as an averaged derivative of
the arm angle. In the shelving task, the robot “counted” how many books the patient
put on the shelf by monitoring the movement of the arm. Hence, it was possible to fool
the robot by merely lifting the arm without any books; this was discovered by one of
the patients. In our newly designed experiments this possibility is eliminated. The
overall measure of performance the robot used was the length of time the patient
persisted in the chosen activity.
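The two movement measures just described can be sketched as follows: an averaged derivative of the arm angle, and a lift counter based on threshold crossings (which also shows why raising the empty arm could fool the robot). Thresholds and units are assumed for illustration.

```python
def mean_abs_angular_velocity(angles, dt):
    """'Averaged derivative of the arm angle': mean |delta-theta| / dt
    over consecutive samples taken at interval dt."""
    if len(angles) < 2:
        return 0.0
    total = sum(abs(b - a) for a, b in zip(angles, angles[1:]))
    return total / (dt * (len(angles) - 1))

def count_lifts(angles, raised=1.0):
    """Count shelving motions as upward crossings of a 'raised' arm
    angle; any sufficiently high lift counts, book in hand or not."""
    lifts = 0
    up = False
    for theta in angles:
        if not up and theta >= raised:
            lifts += 1
            up = True
        elif up and theta < raised:
            up = False
    return lifts
```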
At the start of the experiment, the patient was presented with a written one-page
introduction to the experiment, followed by a simple questionnaire. Next, the robot was
introduced. The order of presentation of the three different modes of interaction was
randomized. After the patient performed both activities in all three modes (totaling six
experiments per patient), a second questionnaire was presented. Finally, an exit
interview solicited patient impressions and opinions and the experiment was concluded.

Results

We investigated the participants’ response to the robot and to the different interaction
modes. The pilot results are positive; generally, the robot was received well by the
participants, and the participants expressed consistent preferences in terms of robot
voices and interface technologies. Some participants continued to perform the activity
beyond the end of the experiment, therefore providing further evidence of improved
compliance in the robot condition well beyond any novelty effect. The design of the
study emphasized the user's response to the robot's behavior. Furthermore, as expected,
there were significant personality differences among the patients; some were highly
compliant but appeared un-engaged by the robot, while others were highly engaged and
even entertained, but got involved in playing with the robot rather than performing the
prescribed exercises. All this leads toward interesting questions of how to define
adaptive robot-assisted rehabilitation protocols that will serve the variety of patients as
well as the time-extended and evolving needs of a single participant. We addressed
some of these questions in the next study, described below. Video transcripts of the
experiments can be found online [34]. The details of this study have been reported in
[7, 8].
3. Personality-Matching Study

Our previously described experiment with the SAR system we developed, which
monitored and encouraged stroke patients to perform rehabilitation activities,
demonstrated that personality differences had a strong impact on the way the users
interacted with the robot. While all patients reported having enjoyed the robot, task
performance ranged
from strict adherence to the robot’s instructions but no obvious engagement, to playful
engagement and even repeated attempts to trick the robot. It is known that pre-stroke
personality has a great influence on post-stroke recovery [4]; subjects classified as
extroverted before the stroke mobilize their strengths for recovery more readily than
introverted subjects do [35]. Further, work in human-computer interaction (HCI) has
demonstrated the similarity-attraction principle, which posits that individuals are more
attracted to others manifesting the same personality as theirs [36, 37, 38]. Little
research to date has addressed personality in human-robot social interactions and no
work has yet addressed the issue of personality in the assistive human-robot interaction
context.
The research question addressed in this study was as follows:
Is there any relationship between the extroversion-introversion personality
spectrum and the challenge-based vs. nurturing style of patient encouragement?

Experimental Design

We [7, 8] performed a series of experiments in which the simple mobile robot depicted
in Figure 1, equipped with a camera and a microphone, interacted with a healthy,
30-year-old user in an experimental scenario designed for post-stroke rehabilitation
activities (see Figure 3).

Figure 3. The participant performing the newspaper page-turning task with the robot at a social distance.
The laser fiducial is on the participant’s right leg, the motion capture sensor is on the right arm, and a
microphone is worn on standard headphones.

The participants were asked to perform four tasks (designed as
functional activities) similar to those used during standard stroke rehabilitation:
drawing up and down, or left and right on an easel; lifting and moving books from a
desktop to a raised shelf; moving pencils from one bin to another; and turning pages of
a newspaper. The subject pool for this experiment consisted of 19 participants (13 male,
6 female; 7 introverted and 12 extroverted). The participants completed a set of
questionnaires before the experiment, which were used to assess their personality traits
using the Eysenck biologically-based model [39]. The resulting personality assessment
based specifically on the extroversion-introversion dimension was used to determine
the robot’s personality. Our behavior control architecture is based on Bandura’s
model of reciprocal influences on behavior [40]. The robot expressed its personality
through several means: (1) proxemics (social use of space; the extroverted personalities
used smaller personal distances) [41]; (2) speed and amount of movement (the
extroverted personalities moved more and faster); and (3) vocal content (the
extroverted personalities talked more aggressively (“You have done only x movements,
I’m sure you can do more!”), using a challenge-based style compared to a nurture-
based style (“I know it’s hard, but remember it’s for your own good.”) on the
introversion end of the personality spectrum). The robot used the arm motion capture
data to monitor user activity and to determine whether the activity was being performed.
The experiment compared personality-matched vs. personality-mismatched (random)
conditions.
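The mapping from a user's extroversion score to a matched robot profile can be sketched as below. The score range, parameter values, and cut-off are illustrative assumptions, not the calibration used in the study; the sample utterances are quoted from the text.

```python
def robot_personality(extroversion, e_min=0, e_max=24):
    """Map a user's Eysenck extroversion score onto a matched robot
    behavior profile (proxemics, movement, and vocal style)."""
    e = (extroversion - e_min) / (e_max - e_min)   # normalize to [0, 1]
    challenge = e >= 0.5
    return {
        "distance_m": 2.0 - 1.2 * e,     # extroverts: smaller personal space
        "speed_scale": 0.5 + 0.5 * e,    # extroverts: more, faster movement
        "style": "challenge" if challenge else "nurture",
        "utterance": ("You have done only x movements, "
                      "I'm sure you can do more!" if challenge else
                      "I know it's hard, but remember it's for your own good."),
    }
```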
Our hypotheses were as follows:
H1: A robot that challenges the user during rehabilitation therapy rather than
praising her/him will be preferred by users with extroverted personalities and
will be less appealing to users with introverted personalities.
H2: A robot that focuses on nurturing praise rather than on challenge-based
motivation during the training program will be preferred by users with
introverted personalities and will be less appealing to users with extroverted
personalities.

Results

The system evaluation was performed based on user introspection (questionnaires).
After each experiment, the participant completed two post-experiment questionnaires,
one designed to evaluate the impression of the robot’s personality (e.g., “Did you find
the robot’s character unsociable?”) and one about the interaction with the robot (e.g.,
“The robot’s personality is a lot like mine.”). All questions were presented on a 7-point
Likert scale ranging from “strongly agree” to “strongly disagree”. The questionnaire
data showed that the robot’s personality was fundamental to the interaction, and two
statistically significant results were found (validated by ANOVA): (1) participants
consistently performed better on the task (more
pages turned, more sticks moved, etc.) when interacting with the personality-matched
robot; (2) both extroverted and introverted participants reported preferring the
personality-matched robot. More details about this study can be found in [10].
4. Robot Adaptation Study

Learning to adapt our daily behavior as a function of different internal and external
factors is an inherently human trait. Creating robots capable of exhibiting similar
sophisticated capabilities has proven to be a very difficult task. Therefore, providing an
engaging, motivating, customized protocol that adapts to user personality and
preferences is a challenge in robotics, especially when working with vulnerable user
populations, where careful consideration of the users’ needs and disabilities is required.
A variety of robotic learning approaches is available in the literature, but none include
the user’s profile, preferences, and/or personality. Socially assistive robotics presents a
variety of rich opportunities for exploring learning as a tool for human-robot
interaction. In this study, we [11, 12] focused both on the short-term changes that
represent individual differences and on the long-term changes that allow the interaction
to continue to be engaging over a period of months and even years.
The research question addressed here is:
How should the behavior and encouragement of the therapist robot adapt as a
function of the user’s personality and task performance?

Methodology

The problem was formulated as policy gradient reinforcement learning (PGRL), which
consisted of the following steps: (a) parameterization of the robot’s overall behavior
(including all the parametric components listed below); (b) approximation of the gradient
of the reward function in the parameter space; and (c) movement toward a local
optimum. This methodology allowed us to dynamically optimize the interaction
parameters: interaction distance/proxemics, speed, and vocal content (what the robot
says and how it says it) [11]. Proxemics involved three zones (all beyond the minimal
safety area), activity was expressed through the amount of robot movement, and vocal
content varied from nurturing (“You are doing great, please keep up the good work.”)
to challenging (“Come on, you can do better than that.”) and extroverted (higher-
pitched tone and louder volume) to introverted (lower-pitched tone and lower volume),
in accordance with well-established personality theories referred to earlier. These
define the behavior, and thus personality, of the therapist robot, which is adaptable to
the user’s personality in order to improve the user’s task performance. Task
performance is measured as the number of movements performed and/or time-on-task,
depending on the nature of the trial.
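The three PGRL steps can be illustrated with a small finite-difference sketch in the spirit of the approach, not the authors' implementation; the parameter names, perturbation size, number of test policies, and step size are all invented for the example.

```python
import random

def pgrl_step(params, reward_fn, epsilon=0.1, n_policies=8, step=0.05):
    """One PGRL iteration: (a) perturb the behavior parameters to form
    test policies, (b) estimate the reward gradient from their observed
    rewards, (c) move each parameter one step toward a local optimum."""
    names = list(params)
    trials = []
    for _ in range(n_policies):
        delta = {k: random.choice((-epsilon, 0.0, epsilon)) for k in names}
        perturbed = {k: params[k] + delta[k] for k in names}
        trials.append((delta, reward_fn(perturbed)))
    new_params = dict(params)
    for k in names:
        # Compare average reward when this parameter was nudged up vs down.
        up = [r for d, r in trials if d[k] > 0]
        down = [r for d, r in trials if d[k] < 0]
        if up and down:
            grad = sum(up) / len(up) - sum(down) / len(down)
            new_params[k] += step * (1 if grad > 0 else -1 if grad < 0 else 0)
    return new_params
```

In the study, the reward would come from measured task performance (movements performed or time-on-task) rather than an analytic function.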
The robot incrementally adapted its behavior and thus its expressed personality as
a function of the (healthy) user’s extroversion-introversion level and the amount of
performed activities, attempting to maximize that amount. The result was a novel
stroke/TBI rehabilitation tool with the potential to provide an individualized and
appropriately challenging or nurturing therapy style that may measurably improve user
task performance.

Experimental Design

We designed two different experiments to test the adaptability of the robot’s behavior
to the participant’s personality and preferences. The experimental task was a common
object transfer task used in post-stroke/TBI rehabilitation and consisted of moving
pencils from one bin on the left side of the participant to another bin on his/her right
side. One of the bins was on an electronic scale in order to measure the user’s task
performance. The task was open-ended. The subject pool consisted of 12 healthy
participants (7 male and 5 female); there was no control group in this study. The
participants completed a pre-experiment questionnaire, used to determine the user’s
personality (based on the Eysenck Personality Inventory (EPI) [39]), and a post-
experiment questionnaire, used to elicit their preferences regarding the therapy styles,
the robot’s vocal cues, the interaction distances, and the robot speeds used in the experiments.
The learning algorithm was initialized with parameter values that were in the vicinity
of what was thought to be acceptable for both extroverted and introverted individuals,
based on the user-robot personality matching study described earlier.
The first experiment was designed to test the robot behavior adaptation to user
personality-based therapy style. The therapy styles ranged from coach-like therapy to
encouragement-based therapy for extroverted personality types and from supportive
therapy to nurturing therapy for introverted personality types. The vocal content for
each of these scenarios was selected in concordance with encouragement language used
by professional rehabilitation therapists.
People are more influenced by certain voices and accents than others. The main
goal of our second experiment was to test and validate the adaptation capability of the
robot to the user preferences related to English accent and voice gender.

Results

The experimental results provided initial evidence for the effectiveness of robot
behavior adaptation to user personality and performance: the (non-disabled) users
tended to perform more, or longer, trials under the personality-matched and therapy-
style-matched conditions. The latter refers to nurturing styles being correlated with the
introversion side of the personality spectrum, and challenging styles with the
extroversion side. A more detailed description is given in [11].

Figure 4. Participant performing the object transfer task: moving pencils from one bin to another.
5. Conclusions

We have presented a research program aimed at developing non-contact socially
assistive robot therapists intended for monitoring, assisting, encouraging, and socially
interacting with users during the motor rehabilitation process. Our first results
demonstrated user acceptance of the robot. Our next round of results validated that
mirroring user personality in the robot’s behavior during the hands-off therapy process
acts to improve task performance on rehabilitation activities. Finally, our last round of
results demonstrated the robot’s ability to adapt its behavior to the user’s personality
and preferences.
Our ongoing work is aimed at evaluating the described approach in a time-
extended user study with a large group of participants post-stroke. The longitudinal
study will allow us to eliminate the effects of novelty, and will also provide the robot
with the opportunity for richer learning and adaptation algorithms. Our robots are
designed to subordinate to the participants’ desires and preferences, thereby promoting
patient-centered practice and avoiding the complex issues of taking control away and
dehumanizing health care [42]. Our ultimate goal is to develop technology-assisted
therapy methods that can augment the current standard of care in order to meet the
growing need for personalized care indicated by the population demographics.

Acknowledgements

This work was supported by the USC Women in Science and Engineering (WiSE)
Program, the Okawa Foundation, and National Science Foundation grants #IIS-0713697
and #CNS-0709296.

References

[1] C.J. Winstein, and S.L. Wolf, Task-oriented training to promote upper extremity recovery, in J. Stein,
R.L. Harvey, R.F. Macko, C.J. Winstein, and R.D. Zorowitz, Eds, Stroke Recovery and Rehabilitation,
Demos Medical, New York, New York, 2008, pp. 267-90.
[2] B.E. Fisher, A.D. Wu, G.J. Salem, J. Song, C.H. Lin, J. Yip, S. Cen, J. Gordon, M. Jakowec, and G.
Petzinger, The effect of exercise training in improving motor performance and corticomotor excitability
in people with early Parkinson’s Disease, Archives of Physical Medicine and Rehabilitation 89 (2008),
1221-9.
[3] M.J. Watson, and R. Hitchcock, Recovery of walking late after a severe traumatic brain injury,
Physiotherapy 80 (2008), 103-7.
[4] American Heart Association. Heart disease and stroke statistics, American Heart Association and
American Stroke Association, 2003.
[5] D.J. Thurman, C. Alverson, K.A. Dunn, J. Guerrero, and J.E. Sniezek, Traumatic brain injury in the
United States: A public health perspective, Journal of Head and Trauma Rehabilitation 14 (6)
(1999),602-615.
[6] D. Feil-Seifer, and M.J. Matarić, Defining socially assistive robotics, In Proc. IEEE International
Conference on Rehabilitation Robotics (ICORR’05), Chicago, Il, USA, 2005, pp. 465-468.
[7] J. Eriksson, M.J. Matarić, and C. Winstein, Hands-off assistive robotics for post stroke arm
rehabilitation, In Proc. IEEE International Conference on Rehabilitation Robotics (ICORR’05),
Chicago, Il, USA, 2005, pp. 21-24.
[8] M. Matarić, J. Eriksson, D. Feil-Seifer, and C. Winstein, Socially assistive robotics for post-stroke
rehabilitation, Journal of NeuroEngineering and Rehabilitation 4 (5) (2007).
[9] A. Tapus, and M.J. Matarić, Towards socially assistive robotics, International Journal of the Robotics
Society of Japan 24 (5) (2006), 14-16.
[10] A. Tapus, and M.J. Matarić, User personality matching with hands-off robot for post-stroke
rehabilitation therapy, In Proc. International Symposium on Experimental Robotics (ISER’06), Rio de
Janeiro, Brazil, 2006.
[11] A. Tapus, C. Tapus, and M.J. Matarić, User-robot personality matching and assistive robot behavior
adaptation for post-stroke rehabilitation therapy. Intelligent Service Robotics, Special Issue on
Multidisciplinary Collaboration for Socially Assistive Robotics, 1 (2) (2008),169-183.
[12] A. Tapus, and M.J. Matarić, Towards Active Learning for Socially Assistive Robots. Proceedings of
Neural Information Processing Systems (NIPS-07), Workshop on Robotics Challenges for Machine
Learning, Vancouver, Canada, 2007.
[13] L. Ada, C.G. Canning, J. Carr, S.L. Kilbreath, and R. Shepherd, Task-specific training of reaching and
manipulation, in: K. Bennett and U. Castiello, eds, Insights into the Reach to Grasp Movement, Amsterdam,
Elsevier, (105) (1994), 239–265.
[14] J. Carr, and R.B. Shepherd, A motor learning model for stroke rehabilitation, Physiotherapy 75 (1989),
372-380.
[15] K. Hellstrom, B. Lindmark, B. Wahlberg, and A.R. Fugl-Meyer, Self-efficacy in relation to
impairments and activities of daily living disability in elderly patients with stroke: A prospective
investigation, Journal of Rehabilitation Medicine 35 (5) (2003), 202-207.
[16] M. Kelly-Hayes, and J.T. Robertson, The American Heart Association stroke outcome classification.
Stroke 29 (6) (1998), 1274-1280.
[17] J.P. Broderick, William M. Feinberg Lecture: Stroke therapy in the year 2025: Burden, breakthroughs,
and barriers to progress, Stroke 35 (1) (2004), 205-211.
[18] T. S. Olsen, Arm and leg paresis as outcome predictors in stroke rehabilitation, Stroke (2) (1990), 247-
251.
[19] R.J. Nudo, and E.J. Plautz et al., Role of adaptive plasticity in recovery of function after damage to
motor cortex, Muscle Nerve 24 (8) (2001), 1000- 1019.
[20] E. Taub, and G. Uswatte et al., Improved motor recovery after stroke and massive cortical
reorganization following constraint-induced movement therapy, Physical Medicine and Rehabilitation
Clinics of North America 14 (1) (2003), 77-91.
[21] J. Desrosiers, F. Malouin, D. Bourbonnais, C.L. Richards, A. Rochette, and G. Bravo, Arm and leg
impairments and disabilities after stroke rehabilitation: relation to handicap, Clinical Rehabilitation (6)
(2003), 666-673.
[22] H. Smits, and E.C. Smits-Boone, Hand recovery after stroke: exercise and results measurements,
Boston, MA, Butterworth-Heinemann, 2000.
[23] J. Van der Lee, R.C. Wagenaar, G. Lankhorst, T.W. Vogelaar, W.L. Deville, and L.M. Bouter, Forced
use of the upper extremity in chronic stroke patients: results from a single-blind randomized clinical
trial, Stroke 30 (1999), 2369-2375.
[24] D. Reinkensmeyer, M. Averbuch, A McKenna-Cole, B.D. Schmit, W.Z. Rymer, Understanding and
treating arm movement impairment after chronic brain injury: Progress with the arm guide, Journal of
Rehabilitation Research and Development 37(6), 2000.
[25] J. Schaechter, E. Kraft, T.S. Hilliard, R.M. Dijkhuizen, T. Benner, S.P. Finklestein, B.R. Rosen, and S.C.
Cramer, Motor recovery and cortical reorganization after constraint-induced movement therapy in
stroke patients: a preliminary study, Neurorehabilitation and Neural Repair 16 (4) (2002), 326-338.
[26] C. Winstein, D.K. Rose, S.M. Tan, R. Lewthwaite, H.C. Chui, and S.P. Azen, A randomized
controlled comparison of upper extremity rehabilitation strategies in acute stroke: a pilot study of
immediate and long-term outcomes, Archives of Physical Medicine and Rehabilitation 85 (2004), 620-
628.
[27] S.L. Wolf, C. Winstein, P.J. Miller, P.A. Thompson, E. Taub, G. Uswatte, D. Morris, S. Blanton, D.
Nichols-Larsen, P.C. Clark., Retention of upper limb function in stroke survivors who have received
constraint-induced movement therapy: the EXCITE randomized trial, The Lancet Neurology 7 (1)
(2008), 33-40.
[28] S.L. Wolf, C. Winstein, J.P. Miller, E. Taub, G. Uswatte, D. Morris, C. Giuliani, K.E. Light, D.
Nichols-Larsen, EXCITE Investigators. Effect of constraint-induced movement therapy on upper
extremity function 3 to 9 months after stroke: the EXCITE randomized clinical trial, Journal of the
American Medical Association 296 (17) (2006), 2095-104.
[29] S. Hesse, G. Schulte-Tigges, et al., Robot-assisted arm trainer for the passive and active practice of
bilateral forearm and wrist movements in hemiparetic subjects, Archives of Physical Medicine and
Rehabilitation 84 (6) (2003), 915-20.
[30] S. Hesse, and C. Werner , Poststroke motor dysfunction and spasticity: novel pharmacological and
physical treatment strategies, CNS Drugs 17 (15) (2003), 1093-107.
[31] B.R. Brewer, R. Klatzky, and Y. Matsuoka, Feedback distortion to overcome learned nonuse: A system
overview, IEEE Engineering in Medicine and Biology (2003), 1613-1616.
[32] C. Burgar, P. Shor, and H.F. Van der Loos, Development of robots for rehabilitation therapy: the Palo
Alto VA/Stanford experience, Journal of Rehabilitation Research and Development 37 (6) (2000), 663-673.
[33] B. Gerkey, R. Vaughan, and A. Howard, The Player/Stage Project: Tools for Multi-Robot Distributed
Sensor Systems, in Proceedings of the International Conference on Advanced Robotics, Coimbra,
Portugal; 2003, pp. 317-323.
[34] Interaction Lab: Human Robot Interaction for Post-Stroke Recovery Robot Project Page [http://www-
robotics.usc.edu/interaction/?l=Research:Projects:post_stroke:index]
[35] M. Ghahramanlou, J. Arnoff, M.A. Wozniak, S.J. Kittner, and T.R. Price, Personality influences
psychological adjustment and recovery from stroke, in Proc. of the American Stroke Association’s 26th
International Stroke Conference, Fort Lauderdale, USA, 2001.
[36] H. Nakajima, Y. Morishima, R. Yamada, S. Brave, H. Maldonado, C. Nass, and S. Kawaji, Social
intelligence in a human-machine collaboration system: Social responses to agents with mind model and
personality, Journal of the Japanese Society for Artificial Intelligence 19 (3) (2004), 184-196.
[37] H. Nakajima, C. Nass, R. Yamada, Y. Morishima, S. Kawaji, The functionality of human-machine
collaboration systems mind model and social behavior, In Proc. of the IEEE Conference on Systems,
Man and Cybernetics, Washington, USA, 2003, pp. 2381-2387.
[38] C. Nass, and M.K. Lee, Does computer-synthesized speech manifest personality? Experimental tests of
recognition, similarity-attraction, and consistency-attraction, Journal of Experimental Psychology:
Applied 7 (3) (2001), 171-181.
[39] H.J. Eysenck, Dimensions of personality: 16, 5 or 3? criteria for a taxonomic paradigm, Personality and
Individual Differences 12 (1991), 773-790.
[40] A. Bandura, Principles of behavior modification, Holt, Rinehart & Wilson, New York, USA, 1969.
[41] E.T. Hall, The Hidden Dimension, Doubleday, Garden City, NY, 1966.
[42] Institute of Medicine, Crossing the quality chasm: A new health care system for the 21st century,
Washington, D.C.: National Academy Press, 2001.
Moving Beyond Single User, Local Virtual
Environments for Rehabilitation
Patrice L. (Tamar) WEISS a and Evelyne KLINGER b
a Laboratory for Innovations in Rehabilitation Technology, University of Haifa, Israel
b HIT Team, P&I Lab Laval, Arts et Métiers ParisTech Angers, France

Abstract. The rapid development of Virtual Reality-based technologies over the
past decade is both an asset and a challenge for neuro-rehabilitation. The
availability of novel technologies that provide interactive, functional simulations
with multimodal feedback enable clinicians to achieve traditional therapeutic goals
that would be difficult, if not impossible, to attain via conventional therapy. They
also lead to the creation of completely new clinical paradigms which would have
been hard to achieve in the past. In applications of rehabilitation for both motor
and cognitive deficits the main focus of much of the early exploratory research has
been to investigate the use of virtual reality as an assessment tool. To date such
environments are primarily: (a) single user (i.e., designed for and used by one
clinical client at a time) and (b) used locally within a clinical or educational setting.
More recently, researchers have begun the development of new and more complex
VR-based approaches according to two dimensions: the number of users and the
distance between the users. Driven by a push-pull phenomenon, the original
approach has now expanded to three additional avenues: multiple users in co-
located settings; single users in remote locations; and multiple users in remote
locations. After presenting examples that illustrate these various approaches, we
conclude by addressing the research questions and ethical considerations raised by
this evolution in the use of virtual environments in rehabilitation.
Keywords. Virtual Reality, Rehabilitation, Tele-Rehabilitation
1. Introduction
The rapid development of Virtual Reality (VR)-based technologies over the past
decade is both an asset and a challenge for neuro-rehabilitation. The availability of
novel technologies that provide interactive, functional simulations with multimodal
feedback (visual, auditory and, less frequently, haptic, vestibular, and olfactory
channels) enables clinicians to achieve traditional therapeutic goals that would be
difficult, if not impossible, to attain via conventional therapy. For example, the practice
of functional skills, such as street crossing or supermarket shopping, is inconvenient
and sometimes dangerous for clients with brain damage when it takes place in real
settings. These technologies also lead to the creation of novel clinical paradigms. For example, the
use of instrumented tangible cubes that control virtual building blocks, enables a
clinician to assess the constructional ability of children with Developmental
Coordination Disorder under dynamic conditions [1].
In applications of rehabilitation for both motor and cognitive deficits, the main
focus of much of the early exploratory research has been to investigate the use of VR as
an assessment tool [2, 3]. More recently, researchers have been striving to develop and
evaluate VR-based intervention strategies. Examples include the use of realistic
functional simulations, tele-rehabilitation, and home-based therapy. For example, the
IREX video capture VR system has been used to improve ankle movements in children
with Cerebral Palsy [4] and a customized speech training program has been used to
augment therapy for clients with stroke who have been discharged home [5].

Figure 1. Single user and locally used virtual environments
In this chapter, we begin by providing a short overview of how virtual
environments (VE) first began to be implemented for the purposes of cognitive or
motor rehabilitation. To date such environments are primarily: (a) single user (i.e.,
designed for and used by one clinical client at a time) and (b) used locally within a
clinical or educational setting (see Figure 1). The clinical attributes of such systems
will be illustrated via two examples: the VAP-S [6] and the IREX VMall [7].
Researchers developed these technologies to enhance conventional assessment [8]
and therapy [9, 10] with the aid of VEs; a single user in a particular location
experiences a VR-based clinical session under the local supervision of a therapist. The
potential of VR assets for rehabilitation are now well known [2, 10]. They include real-
time interaction, objective outcome measures that are documented, and repeated
delivery of virtual stimuli within simulated functional environments that are graded in
difficulty and context. A variety of studies have begun to demonstrate the validity of
VR use in neuropsychology and rehabilitation [11, 13].
In recent years, we have observed a push-pull phenomenon which is leading to an
increase in the application of VR technologies for rehabilitation. The “push” emanates
from the continuous development of novel technologies, their more ready availability
in clinical settings, and lowered costs. The “pull” stems from clients, clinicians and
third party payers who recognize the need for treatment that goes beyond conventional
therapy. As indicated above, VR-based therapy given to single users in local settings
has been driven by the push-pull phenomenon. However, very recently, efforts are
being made to expand to approaches designed to support multiple users and remote
locations. Figure 2 presents a revised version of Figure 1, showing three additional
avenues: multiple users in co-located settings (Arrow 1), single users in remote
locations (Arrow 2) and multiple users in remote locations (Arrow 3). The latter two
approaches are often referred to as tele-rehabilitation. This evolution in the use of VEs
in rehabilitation raises research questions and ethical considerations that we will
address below.
Figure 2. Moving beyond single user and local virtual environments for rehabilitation
2. Single User Approaches
Replicating the therapist-patient relationship in traditional therapy, the first VR-based
applications were used with a single user (patient) who engaged in a particular VE in
the presence of the therapist; this is what we refer to as the single user and locally used
VE (Figure 1 and lower, left quadrant in Figure 2). The clinical attributes of such
systems are illustrated by two examples: the VAP-S [6] and the IREX VMall [7].
2.1. Virtual Action Planning Supermarket (VAP-S)
The Virtual Action Planning Supermarket (VAP-S) was designed to assess and train
the ability to plan and execute the task of purchasing items on a shopping list [6, 14,
16]. It was created with two main tools: 3D Studio Max from Autodesk
(www.autodesk.com) and the Virtools™ Life Platform from Dassault Systèmes
(www.virtools.com). Operation of the VAP-S involves a series of actions, organized as
a task, and allows an analysis of the strategic choices made by clients, and thus of their
capacity to plan, in the manner of classic errand tests such as the "test of the shopping list" [8]. The VAP-S simulates a fully
textured, medium size supermarket with multiple aisles displaying most of the items
that can be found in a real supermarket. There are also four cashier check-out counters,
a reception point, and a shopping cart, as illustrated in Figure 3. Some obstacles, like
packs of bottles or cartons, may hinder the advance of the shopper along the aisles. In
addition, virtual humans are included in the supermarket such as a fishmonger, a
butcher, check-out cashiers and some customers.
The test task is to purchase seven items from a clearly defined list of products, to
then proceed to the cashier’s desk and to pay for them. Twelve correct actions (e.g.,
selecting the correct product) are required to completely succeed in the task. Actions
are considered as incorrect if the client: 1) chooses items that are not in the list or
chooses the same item twice; 2) chooses a check-out counter without any cashier; 3)
leaves the supermarket without purchasing anything or without paying; or 4) stays in
the supermarket after completing the purchases. A training task which is similar, but not identical,
to the test is also available to enable the user to get acquainted with the VE and the
tools. The task-related instructions are, at first, written on the screen and the target
items to purchase are displayed on the right side of the screen. As the client progresses
with the purchases, the items appear in the cart and disappear from the screen. The
cashier-related instructions are verbal and are given before the beginning of the session.
While sitting in front of a PC monitor, the client enters the supermarket
behind the cart as if he is pushing it, and moves around freely by pressing the keyboard
arrows. He is able to experience the environment from a first person perspective
without any intermediating avatar. The client is able to select items by pressing the left
mouse button. If the item selected is one of the items on the list it will transfer to the
cart. At the cashier check-out counter, the client may place the items on the conveyor
belt by pressing the left mouse button with the cursor pointing to the belt. He may also
return an item placed on the conveyor belt to the cart. By clicking on the purse icon, the
client may pay and proceed to the supermarket exit.
Figure 3. The Virtual Action Planning Supermarket (VAP-S)
The VAP-S records various outcome measures (position, times, actions) while the
participant experiences the VE and executes the task. At least eight variables can be
calculated from the recorded data: the total distance in meters traversed by the patient
(referred to as the trajectory), the total task time in seconds, the number of items
purchased, the number of correct actions, the number of incorrect actions, the number
of pauses, the combined duration of pauses in seconds, and the time to pay (i.e., the
time between when the cost is displayed on the screen and when the participant clicks
on the purse icon). A review of the performance is available from a “Bird’s eye view”,
i.e. from above the scene (see white traces in Figure 4 and Figure 5).
Figure 4. Trajectory while shopping for items by a user with no impairment.
Figure 5. A typical trajectory of the same shopping task by a client with Parkinson's Disease.
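To make the computation of these outcome measures concrete, the following sketch derives several of them from a session log. The log schema, the field names, and the two-second pause threshold are illustrative assumptions of ours; the actual VAP-S recording format is not documented here.

```python
import math

# Hypothetical log: (time_s, x_m, y_m, action) samples. The real VAP-S
# format is not published here, so this schema is illustrative only.
PAUSE_THRESHOLD_S = 2.0  # assumed: a stop at least this long counts as a pause

def vaps_outcomes(log, shopping_list):
    """Compute trajectory length, task time, pauses, and action counts."""
    # Trajectory: sum of straight-line distances between consecutive samples.
    trajectory = sum(
        math.hypot(b[1] - a[1], b[2] - a[2]) for a, b in zip(log, log[1:])
    )
    total_time = log[-1][0] - log[0][0]
    # Items taken; anything not on the list (or a duplicate) would be incorrect.
    taken = [s[3].split(":", 1)[1] for s in log
             if s[3] and s[3].startswith("take:")]
    correct = sum(1 for item in taken if item in shopping_list)
    incorrect = len(taken) - correct
    # A pause is a between-sample gap with no displacement.
    pauses = [
        b[0] - a[0]
        for a, b in zip(log, log[1:])
        if (b[1], b[2]) == (a[1], a[2]) and b[0] - a[0] >= PAUSE_THRESHOLD_S
    ]
    return {
        "trajectory_m": trajectory,
        "total_time_s": total_time,
        "n_correct": correct,
        "n_incorrect": incorrect,
        "n_pauses": len(pauses),
        "pause_time_s": sum(pauses),
    }
```

The same per-sample log also supports the "bird's eye view" replay described above, since it preserves the full sequence of positions.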
The initial design of the VAP-S was carried out in the context of research on
Parkinson’s disease (PD) and the elderly. Its purpose was first to test the feasibility of
the VAP-S for elderly people, and second to investigate the capacity of the VAP-S to
discriminate between patients with PD and age-matched control subjects. Five patients
with PD (two females, three males; age, 74.0 ± 5.4 years) and five age-matched healthy
controls (four females, one male; age, 66.6 ± 7.7 years) were recruited, according to the
inclusion criteria [6, 15]. A debriefing period allowed Klinger et al. to collect the
participants' feedback: they understood the task and the use of the VAP-S well and,
thanks to the training session, easily became familiar with the VR interface. One limitation
(related to the correct distance to apply from the shelves) was noted and revised in
subsequent versions of the VAP-S. The performance results underlined a behavioral
difference between patients with PD and controls: patients needed more time to execute
the task and covered a longer distance. This difference was not related to motor
difficulties, since both groups navigated with the keyboard keys at the same speed. It is
rather related to the patients' hesitations, numerous stops, and searches for products in
locations that did not correspond to their actual position in the VAP-S (see Figure 5).
These data reveal the altered temporal and spatial organization of patients with PD [17]. Moreover, the
review of the trajectory was appreciated by both the participants and the therapist.
The original VAP-S was then adapted by E. Klinger in 2005 for use by an Israeli
population; the names of the aisles and grocery items, as well as all the elements of the
task, were translated into Hebrew [18, 19]. The purpose of the study was first to test the
feasibility of the VAP-S for post-stroke patients, and second to examine the
relationships between performance within the VAP-S and standard outcome measures
of executive functions. Twenty-six post-stroke patients participated in the study [18]. In
order to predict problems in everyday activities, they also were assessed with the six
performance subtests of the Behavioral Assessment of Dysexecutive Syndrome
(BADS) [20], which cover various aspects of the dysexecutive syndrome such as
difficulty in planning and abstract thinking. Performance results showed the feasibility
of the VAP-S for use by post-stroke patients. Analysis of the participants’ performance
showed a large variance of the scores within the VAP-S. Correlations between
performance within the VAP-S and the key search subtest of the BADS, which requires
planning, showed that the supermarket task draws on planning ability, one of the key
executive functions.
The potential of the VAP-S as a predictive tool of executive function profiles is
currently being explored in studies of various populations of patients with deficits
of the central nervous system [19, 21].
2.2. IREX Video capture Virtual Mall (VMall)
Video capture VR uses a video camera and software to track movement in a single
plane without the need to place markers on specific bodily locations. The user's image
is thereby embedded within a simulated environment such that it is possible to interact
with animated graphics in a completely natural manner. Users stand or sit in a
demarcated area with a chroma key backdrop (used to subtract users from the real
environment prior to inserting them into the virtual environment) and view a large
video screen that displays simulated environments. A single camera films the user and
displays his image within the VE. The user's movements are processed on the same
plane as the screen animation, text, graphics, and sound, which respond in real time.
Therefore the users see themselves in the VE and interact using their own natural
movements [22]. Several virtual games which run on the same VR platform and have
been adapted for rehabilitation (Birds & Balls, Soccer, Snowboard and Volleyball)
were also used during the sessions (IREX™ Interactive Rehabilitation and Exercise
System, www.gesturetekhealth.co) [23]. This system has been used in rehabilitation
and has been shown to be suitable for use with patients suffering from motor and/or
cognitive deficits [24, 27].
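The chroma key step described above, which subtracts the user from the real environment before insertion into the VE, can be sketched as follows. The key color, the tolerance, and the plain color-distance test are illustrative assumptions of ours; the IREX implementation is proprietary and certainly more sophisticated.

```python
import numpy as np

def chroma_key_composite(frame, background, key=(0, 255, 0), tol=60):
    """Replace backdrop-colored pixels with the virtual background.

    frame, background: HxWx3 uint8 RGB arrays of equal shape.
    Pixels within `tol` (Euclidean RGB distance) of the key color are
    treated as backdrop; all other pixels are kept as the user's image.
    Thresholding by raw color distance is a simplification of real keyers,
    which also smooth mask edges and suppress color spill.
    """
    diff = frame.astype(np.int32) - np.array(key, dtype=np.int32)
    is_backdrop = np.sqrt((diff ** 2).sum(axis=-1)) < tol
    out = frame.copy()
    out[is_backdrop] = background[is_backdrop]
    return out
```

Running this per video frame yields the composited image in which users "see themselves in the VE", with the virtual scene substituted wherever the backdrop was visible.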
The VMall is a virtual supermarket that runs on the IREX platform. It encourages
planning, multitasking and problem solving while practicing an everyday task of
shopping [28]. The products are virtually selected and placed in a shopping cart using
upper extremity movements (see Figure 6). It has been shown to be a valid assessment
tool which can differentiate between two groups of healthy people and between healthy
people and those with stroke [7, 25], and which correlates with performance in a
complex shopping task in a real mall [25].
The VMall was further validated by comparing a virtual version of the well-known
test of executive functions, the Multiple Errands Test (MET), called the Virtual
Multiple Errands Test (VMET), to the original MET [29]. The study population
included three groups: post-stroke participants (N=9), healthy young participants
(N=20) and healthy older participants (N=20). The VMET was able to differentiate
between two age groups of healthy participants and between healthy and post-stroke
participants thus demonstrating that it is sensitive to brain injury and aging and
supports construct validity between known groups. In addition, significant correlations
were found between the MET and the VMET for both the post-stroke participants and
older healthy participants. These results provide initial support for the ecological
validity of the VMET as an assessment tool of executive functions.
The potential of the VMall as an intervention tool for people with stroke who
present difficulties in executive functions and multitasking [30] or in motor deficits
[31] was also explored. In two companion studies, four and six participants with
stroke, respectively, received ten 60-minute sessions over a 3-week period using the
VMall. Intervention for the executive function deficits group focused on improving
multitasking during a shopping task. A substantial percentage improvement on most of
the measures of the MET, both in an actual shopping mall and in the Virtual MET, was
found for all four participants. In addition, some improvement was found for a general IADL
measure. Intervention for the motor deficits group focused on reaching movements by
the affected upper extremity while the participant was engaged in a virtual shopping
task. An improvement was found for all outcome measures during the intervention
phase as compared to none or very little change during both baseline phases. The
participants reported that the intervention helped them improve their weak upper
extremity and stated that they used it more in daily life than prior to the intervention.
These data support the potential of the VMall as a motivating and effective intervention
tool for the rehabilitation of people with stroke who present multitasking or upper
extremity motor deficits.

Figure 6. The VMall: the left panel shows selection of a supermarket aisle, the middle panel shows a
shopping cart with purchased food items, and the right panel shows selection of grocery products.
3. Co-located Multiple User Approaches
Researchers have foreseen the need to go beyond the design and implementation of
single user and locally used VEs, aiming to develop more complex VR-based
approaches that advance along the two dimensions shown in Figure 2: the number of
users and the distance between the users. With regard to the dimension of number of
users (Figure 2, arrow 1), issues related to the role that additional users will play in a
VE arise. They may be involved in an assistive role (e.g., a therapist may help the
patient perform the activity or task), or in a positive or negative collaborative role (e.g.,
patients may be involved in a common activity to achieve a task). In order to illustrate
these two aspects, we next describe the use of the IREX system in motor rehabilitation
and the use of collaborative tables among children with autism.
3.1. Client to Therapist Interaction
As indicated above, users of IREX video capture VR see themselves in the VE and
interact using their own natural movements [22]. Video-capture VR provides users
with a mirror image view of themselves actively participating within the environment.
This contrasts with other VR display technologies such as a Head Mounted Display
(HMD) which provides users with a "first person" point of view, or many desktop
platforms in which the user is represented by an avatar. The use of the user’s own
image has been suggested to add to the realism of the environment and to the sense of
presence [32]. It also provides feedback about the user's body posture and quality of
movement, comparable to the use of video feedback in conventional rehabilitation
during the treatment of certain conditions such as unilateral spatial neglect [33].
Interaction and control is another attribute of video capture VR that has
implications for therapy. This characteristic relates to how the user controls objects
within the VE. Rather than relying on a pointing device or tracker, interaction within
video-capture based environments is accomplished in a completely intuitive manner via
natural motion of the head, trunk and limbs [22]. Not only is the control of movement
more natural, but, in the case of the chroma key IREX, a "red glove" option (or any
object with a distinct color) may be used to restrict system responses to one or more
body parts as deemed suitable for the attainment of specified therapeutic goals. For
example, when it is appropriate to have the intervention directed in a more precise
manner, a client may be required to repel projected balls via a specific body part (e.g.,
by the hand when wearing a red glove or by the head when wearing a red hat).
Alternatively, when the intervention is more global, the client does not use the red glove
option and is thus able to respond with any part of the body. The ability to direct a client's motor
response to be either specific or global makes it possible to train diverse motor abilities
such as the range of motion of different limbs and whole body balance training.
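This specific-versus-global distinction can be illustrated with a small sketch that decides whether a virtual object has been "touched", either by any user pixel (global mode) or only by strongly red pixels such as the glove (specific mode). The color thresholds and the frame representation are our own assumptions, not those of the IREX system.

```python
import numpy as np

def red_glove_hit(frame, target_box, restrict_to_red=True):
    """Decide whether the user touched a virtual object on the display plane.

    frame: HxWx3 uint8 RGB image with the backdrop already removed
    (non-user pixels zeroed). target_box: (row0, row1, col0, col1)
    bounds of the virtual object. When restrict_to_red is True, only
    strongly red pixels (the glove) can trigger a response; otherwise
    any user pixel inside the box counts. The thresholds below are an
    illustrative choice, not the IREX ones.
    """
    r0, r1, c0, c1 = target_box
    region = frame[r0:r1, c0:c1].astype(np.int32)
    if restrict_to_red:
        r, g, b = region[..., 0], region[..., 1], region[..., 2]
        mask = (r > 150) & (r - g > 60) & (r - b > 60)  # "red glove" pixels
    else:
        mask = region.sum(axis=-1) > 0  # any user pixel
    return bool(mask.any())
```

Switching the single `restrict_to_red` flag thus moves the same game from whole-body balance training to targeted range-of-motion work for one limb.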
Both attributes described above (seeing themselves in the VE and interacting using
their own natural movements) offer an intuitive interaction between a patient and the VE. A
therapist can naturally enter the VE, replicating the conventional therapist-patient
interaction while retaining VR's added assets.
3.2. Client to Client Interaction
Active Surfaces are an emerging class of devices and applications [34, 35]. They are
shared co-located systems that represent a radical shift from the paradigm of one-user-
one-computer. As such, they are subject to different design constraints than standard
Graphical User Interface (GUI) applications. They are based on large interactive
surfaces placed horizontally (‘tabletop’ devices) or vertically (‘wall displays’) on which
a specifically designed interface is displayed or projected. These systems open up new
possibilities for fostering collaboration and they can be used to increase the users’
sense of teamwork and facilitate access control on large, shared displays. Zancanaro et
al. [34] developed a tabletop device (the StoryTable) and examined the interaction
patterns of pairs of typically developing children narrating a joint story with an
interface that "enforced" joint actions (Figure 7). The objective was to foster the recognition of
the contribution of the peer in dyadic interaction. In collaboration with that group, we
used the StoryTable paradigm with children with high functioning Autistic Spectrum
Disorder (HF-ASD) to evaluate the effectiveness of a three-week intervention in which
a computerized, co-located touch table interface was combined with Cognitive
Behavioral Therapy (CBT) intervention guidelines (e.g., self instruction, problem solving) [36, 38].
Intervention focused on exposing three pairs of children, aged 8-10 years, with
HF-ASD to an enforced collaboration paradigm while they narrated a story. Pre- and
post-intervention tasks included a "low technology" version of the StoryTable and a
non story-telling play situation using a free construction game, to examine
generalization of the learned social behaviors. Results demonstrated progress in three
major areas of social behaviors. First, the participants were more likely to initiate
positive social interaction with peers after the intervention. Second, the level of shared
play of the children increased from the pre-test to the post-test and they all increased
their level of collaboration following the intervention. Third, the children with ASD
demonstrated lower frequencies of autistic behaviors while using the StoryTable in
comparison to the free construction game activity. This preliminary study revealed the
effectiveness of integrating technology that enforces collaboration within a CBT
framework.
Figure 7. The StoryTable. Left panel shows screen shot of one story telling background. Right panel shows
two typically developed children engaged in a multi-touch activation.
4. Remote Location, Single User Approach
With regard to the dimension of location (local versus remote), several VEs have
evolved that add distance between the patient and the therapist, leading to what is called
tele-rehabilitation (Figure 2, arrow 2). Their purpose was to improve access to care and
to extend the reach of medical rehabilitation service delivery. In order to illustrate the
clinical attributes of such systems, we will describe their use in the rehabilitation of
upper extremity function in patients with stroke. Further information is provided by the
Brennan et al. chapter within this book.
Due to changes in the health care delivery system in recent years, many patients
suffer reduced access to care even though they return home with disabling
deficits. In response, several groups of researchers have developed VR-based tele-
rehabilitation systems, focused either on clinic-to-clinic connections [39] or on clinic-
to-home connections [40, 41]. Holden et al. proposed a tele-rehabilitation system that was
an enhancement and expansion of their VR-based motor training system for the upper
extremity, originally developed as a “single user and locally used VE” [40, 42]. The
system provides real-time interactive training sessions at the patient’s home with a
therapist who is located remotely at a clinic. Each partner (patient and therapist)
has two computers, one for the display of the VR program (VE display) and
one for communication via video-conferencing. The patient's and the therapist's
computers are connected via a high-speed Internet connection. Motion-capture
equipment transmits information about the patient's arm movements to both VE
displays (patient and therapist), and video cameras allow video-conferencing. The movements
are performed within the context of a virtual scenario which requires the patient to
imitate a pre-recorded movement. The therapist can direct and control the activity in
real time from a remote location. This system can be used to train a wide variety of arm
movements anywhere in the upper extremity (UE) workspace.
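The exchange of motion-capture data in such a system can be illustrated with a minimal sketch. The packet layout (a timestamp plus a 3-D wrist position) is a hypothetical stand-in for the unpublished wire format, and a local socket pair stands in for the Internet link between the patient's and the therapist's computers.

```python
import socket
import struct

# Hypothetical packet: timestamp plus a 3-D wrist position, in network
# byte order. The real motion-capture stream of such systems is not
# specified here, so this layout is illustrative only.
POSE_FMT = "!dfff"  # time_s (double), x, y, z (floats)
POSE_SIZE = struct.calcsize(POSE_FMT)

def send_pose(sock, t, x, y, z):
    """Serialize one pose sample and push it onto the link."""
    sock.sendall(struct.pack(POSE_FMT, t, x, y, z))

def recv_pose(sock):
    """Read exactly one packet, tolerating TCP fragmentation."""
    buf = b""
    while len(buf) < POSE_SIZE:
        chunk = sock.recv(POSE_SIZE - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-packet")
        buf += chunk
    return struct.unpack(POSE_FMT, buf)

# Loopback demo: a socketpair stands in for the patient-therapist link.
patient_end, therapist_end = socket.socketpair()
send_pose(patient_end, 0.033, 0.10, 0.25, 0.40)
t, x, y, z = recv_pose(therapist_end)
```

In a deployed system each received sample would drive the therapist's copy of the VE display, so that both partners see the same arm movement in near real time.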
A study carried out with 11 subjects with stroke demonstrated the feasibility of
deploying the system in a home-based environment and the efficacy of this kind of
training in the context of stroke [42]. Results showed significant improvement in upper
extremity function following 30 VE treatment sessions (one-hour each, delivered 5
times per week) as measured with standard clinical tests. The changes were maintained,
for the most part, at a four-month follow-up test.
Future applications of tele-rehabilitation systems, designed to further improve care
after disabling events, should consider providing smart and easy-to-use systems for
the secure training of patients.
5. Remote Location, Multiple User Approach
The obvious next step in the evolution of VEs for clinical use is to expand jointly along
both dimensions (number of users and location) of Figure 2 (arrow 3). Indeed,
researchers have recently begun to develop and evaluate the use of “multiple users and
remote VE” for tele-learning and tele-therapy. This phenomenon is supported by the
continuous development of robust technologies such as faster, more secure Internet
connections and the growing popularity of social networks [43]. In order to
illustrate this trend, we next describe some medical and health applications based on
Second Life.
Second Life is an Internet-based virtual world launched in 2003 by Linden Lab
(http://lindenlab.com). Thanks to the downloadable Second Life viewer, users, called
“residents”, are able to inhabit and interact via their own graphical self representations,
known as avatars. Second Life provides an advanced level of social interaction while
allowing users to participate in individual and group activities, and to create and trade
items and services. Second Life provides its own virtual currency, called the Linden
dollar, which is exchangeable for real-world currencies in a resident-to-resident
marketplace facilitated by Linden Lab. By the end of March 2008, 13 million accounts were
registered, although some residents have multiple accounts. About 38,000 residents are
logged on to Second Life at any particular moment.
Second Life currently features a number of medical and health educational projects
including the Nutrition Game, Heart Murmur Sim, the Second Life Virtual Hallucination
Lab, Gene Pool, and Occupational Therapy at the Virtual Neurological Education Centre
(see [44, 45] for a review of these applications). All these initiatives focus primarily on the
dissemination of medical information and the education of therapists and patients.
However, a few Second Life VEs, called private islands, have been created for
therapeutic purposes. For example, Brigadoon [46] was specifically designed for
patients with Asperger’s syndrome. Brigadoon is a controlled environment where users
are encouraged to feel comfortable and learn socialization skills at their own pace. This
simulation is less fearful for people because it does not involve local interactions, yet,
due to the representation of actual users via avatars, it retains the flavour of real social
situations. Its ability to teach social skills that are effective in real world situations still
needs to be evaluated.
6. Issues For Future Consideration
There are several key issues that need to be addressed as more and more VEs expand
from being primarily supportive of single user, local location applications towards
accommodating multiple users in local and remote locations.
• User perspective – Further research is necessary to determine the
effectiveness of providing patients with first, third or bird's eye perspectives.
In the past such decisions were based primarily on the technology selected to
render the VE. In the future, such decisions should also be driven by the
therapeutic needs of patients with varying neurological conditions and the
optimal presentation of therapeutic goals as designed by therapists.
• Role of virtual presence – It has generally been assumed that increasing the
level of virtual presence helps to facilitate the achievement of therapeutic
goals due to its impact on motivation and performance. This assumption
should be more directly tested in single user, local location VEs. It is
particularly important to establish the role of virtual presence in multi-user,
remote location VEs, due to the added difficulty of achieving it in such settings.
• Technology considerations – Access to remote locations, especially in real-
time, adds additional cost and technical complexity to the design and
implementation of VEs. Considerations of increased bandwidth and the use of
sensors capable of transmitting high fidelity data must be taken into account.
• Compliance – Therapists are well aware that a key issue in the rehabilitation
process is the motivation of a patient to be a willing partner in the process.
Indeed, one of VR's major assets has been the use of game-like environments
to increase motivation, participation and performance [10]. Whether and how
much compliance may be lost due to changes in locality and number of users
remains to be determined.
• Ethical considerations – The use of VEs in the traditional single user, local
setting retained all elements of privacy that were guarded during conventional
rehabilitation. The addition of other users and the transmission of data, images,
and communication over the Internet clearly introduce ethical issues not
previously considered.
• Availability of software supporting functional VEs – To date, most
functional VEs have been customized by specific research groups and are often
unavailable to other clinical researchers or, when available, are not readily
customizable for other applications (e.g., use in other languages or for
different clinical populations). The recently publicized NeuroVR initiative
(www.neurovr.org, see chapter in this book) provides a cost-free VE editor,
which allows non-expert users to easily set up and tune VEs (including a
supermarket, apartment, park, office, high school, university and restaurant)
[47]. These VEs can then be run on the NeuroVR Player. Both are
downloadable at no cost.
7. Conclusion
The rapid development of VR-based technologies over the past decade has been both
an asset and a challenge for neuro-rehabilitation. The availability of novel technologies
that provide interactive, functional simulations with multimodal feedback enables
clinicians to achieve traditional therapeutic goals that would be difficult, if not
impossible, to attain via conventional therapy. They also lead to the creation of
completely new clinical paradigms which would have been hard to achieve in the past.
In applications of rehabilitation for both motor and cognitive deficits, the main focus of
much of the early exploratory research has been to investigate the use of VR as an
assessment tool. To date such environments are primarily: (a) single user (i.e., designed
for and used by one clinical client at a time) and (b) used locally within a clinical or
educational setting. More recently, researchers have begun the development of new
and more complex VR-based approaches according to two dimensions: the number of
users and the distance between the users. Driven by a push-pull phenomenon, the
original approach has now expanded to three additional avenues: multiple users in co-
located settings; single users in remote locations; and multiple users in remote locations.
It is clear that the VR rehabilitation research community needs to address the new
concerns that are associated with such novel VEs.

References

[1] S. Jacoby, N. Josman, D. Jacoby, M. Koike, Y. Itoh, N. Kawai, Y. Kitamura, E. Sharlin, and P.L. Weiss,
Tangible user interfaces: tools to examine, assess and treat dynamic constructional processes in
children with developmental coordination disorders, International Journal of Disability and Human
Development 5 (2006), 257-263.
[2] A.A. Rizzo, M.T. Schultheis, K.A. Kerns, and C. Mateer, Analysis of assets for virtual reality
applications in neuropsychology, Neuropsychological Rehabilitation 14 (2004), 207-239.
[3] A.A. Rizzo, J.G. Buckwalter, and C. van der Zaag, Virtual Environment Applications for
Neuropsychological Assessment and Rehabilitation, in Handbook of Virtual Environments, K. Stanney,
Ed. New York: Lawrence Erlbaum, 2002, pp. 1027-1064.
[4] C. Bryanton, J. Bosse, M. Brien, J. McLean, A. McCormick, and H. Sveistrup, Feasibility, motivation,
and selective motor control: virtual reality compared to conventional home exercise in children with
cerebral palsy, Cyberpsychology & Behaviour 9 (2006), 123-8.
[5] D. M. Brennan, A.C. Georgeadis, C.R. Baron, and L.M. Barker, The effect of videoconference-based
telerehabilitation on story retelling performance by brain-injured subjects and its implications for
remote speech-language therapy, Telemedicine Journal and e-Health 10 (2004), 147-54.
[6] E. Klinger, I. Chemin, S. Lebreton, and R.M. Marié, Virtual Action Planning in Parkinson’s Disease: a
control study, Cyberpsychology & Behaviour 9 (2006), 342-347.
[7] D. Rand, N. Katz, and P.L. Weiss, Evaluation of virtual shopping in the VMall: Comparison of post-
stroke participants to healthy control groups, Disability and Rehabilitation (2007), 1-10.
[8] R. Martin, Test des commissions (2nde édition), Bruxelles, Editest, 1972.
[9] G. Riva, Virtual reality for health care: the status of research, Cyberpsychology & Behaviour 5 (2002),
219-25.
[10] A.A. Rizzo, and G.J. Kim, A SWOT analysis of the field of virtual reality rehabilitation and therapy,
Presence: Teleoperators and Virtual Environments 14 (2005), 119-146.
[11] M.K. Holden, Virtual environments for motor rehabilitation: review, Cyberpsychology & Behaviour 8
(2005), 187-211.
[12] F.D. Rose, B.M. Brooks, and A.A. Rizzo, Virtual reality in brain damage rehabilitation: review,
Cyberpsychology & Behaviour 8 (2005), 241-262.
[13] A. Henderson, N. Korner-Bitensky, and M. Levin, Virtual reality in stroke rehabilitation: a systematic
review of its effectiveness for upper limb motor recovery, Topics in Stroke Rehabilitation 14 (2007),
52-61.
[14] E. Klinger, I. Chemin, S. Lebreton, and R.M. Marié, A Virtual Supermarket to Assess Cognitive
Planning, Cyberpsychology & Behaviour 7 (2004), 292-293.
[15] R.M. Marié, I. Chemin, S. Lebreton, and E. Klinger, Cognitive Planning Assessment and Virtual
Environment in Parkinson's Disease, presented at VRIC - Laval Virtual, Laval, 2005.
[16] R.M. Marié, E. Klinger, I. Chemin, and M. Josset, Cognitive Planning assessed by Virtual Reality,
presented at VRIC 2003, Laval Virtual Conference, Laval, France, 2003.
[17] E. Klinger, Apports de la réalité virtuelle à la prise en charge des troubles cognitifs et comportementaux,
PhD Thesis, ENST, 2006.
[18] N. Josman, E. Hof, E. Klinger, R.M. Marie, K. Goldenberg, P.L. Weiss, and R. Kizony, Performance
within a virtual supermarket and its relationship to executive functions in post-stroke patients, presented
at Proceedings of International Workshop on Virtual Rehabilitation, 2006.
[19] N. Josman, E. Klinger, and R. Kizony, Performance within the Virtual Action Planning Supermarket
(VAP-S): An executive function profile of three different populations suffering from deficits in the
central nervous system, presented at Proceedings of the 7th Intl Conf. Disability, Virtual Reality &
Assoc. Tech., Maia & Porto, Portugal, 2008.
[20] B.A. Wilson, N. Alderman, P.W. Burgess, H. Emslie, and J.J. Evans, Behavioral Assessment of the
Dysexecutive Syndrome Manual. UK: Thames Valley Test Company, 1996.
[21] E. Klinger, R.M. Marié, S. Lebreton, P.L. Weiss, E. Hof, and N. Josman, The VAP-S: A virtual
supermarket for the assessment of metacognitive functioning, presented at Proceedings of VRIC’08,
Laval, France, 2008.
[22] P.L. Weiss, D. Rand, N. Katz, and R. Kizony, Video capture virtual reality as a flexible and effective
rehabilitation tool, Journal of NeuroEngineering and Rehabilitation 1 (2004), 1-12.
[23] R. Kizony, N. Katz, and P.L. Weiss, Adapting an immersive virtual reality system for rehabilitation, The
Journal of Visualization and Computer Animation 14 (2003), 261-268.
[24] R. Kizony, L. Raz, N. Katz, H. Weingarden, and P.L. Weiss, Video-capture virtual reality system for
patients with paraplegic spinal cord injury, Journal of Rehabilitation Research and Development 42
(2005), 595-608.
[25] D. Rand, Performance in a functional virtual environment and its effectiveness for the rehabilitation of
individuals following stroke, Unpublished PhD Thesis, University of Haifa, Israel, 2007.
[26] H. Sveistrup, Motor rehabilitation using virtual reality, Journal of NeuroEngineering and
Rehabilitation 1 (2004), 1-10.
[27] D.T. Reid, Benefits of a virtual play rehabilitation environment for children with cerebral palsy on
perceptions of self-efficacy: a pilot study, Pediatric rehabilitation 5 (2002), 141-8.
[28] D. Rand, N. Katz, M. Shahar, R. Kizony, and P.L. Weiss, The virtual mall: A functional virtual
environment for stroke rehabilitation, Annual Review of Cybertherapy and Telemedicine: A decade of
VR 3 (2005), 193-198.
[29] D. Rand, S. Basha-Abu Rukan, P.L. Weiss, and N. Katz, Validation of the VMall as an assessment tool
for executive functions, Neuropsychological Rehabilitation, in press.
[30] D. Rand, P.L. Weiss, and N. Katz, Training multitasking in a virtual supermarket: a novel intervention
following stroke, American Journal of Occupational Therapy, in press.
[31] D. Rand, N. Katz, and P.L. Weiss, Intervention using the VMall for improving motor and functional
ability of the upper extremity in post stroke participants, European Journal of Physical and
Rehabilitation Medicine, in press.
[32] E.B. Nash, G.W. Edwards, J.A. Thompson, and W. Barfield, A Review of Presence and Performance in
Virtual Environments, International Journal of Human-Computer Interaction 12 (2000), 1-41.
[33] I. Soderback, I. Bengtsson, E. Ginsburg, and J. Ekholm, Video feedback in occupational therapy: its
effects in patients with neglect syndrome, Archives of Physical Medicine and Rehabilitation 73 (1992),
1140-6.
[34] M. Zancanaro, F. Pianesi, O. Stock, P. Venuti, A. Cappelletti, G. Iandolo, M. Prete, and F. Rossi,
Children in the Museum: an Environment for Collaborative Storytelling, in PEACH - Intelligent
Interfaces for Museum Visits, 2007, pp. 165-184.
[35] P.H. Dietz, and D.L. Leigh, DiamondTouch: A Multi-User Touch Technology, presented at
Proceedings of the 14th annual ACM symposium on User Interface Software and Technology (UIST),
Orlando, Florida, 2001.
[36] E. Gal, D. Goren-Bar, E. Gazit, N. Bauminger, A. Cappelletti, F. Pianesi, O. Stock, M. Zancanaro, and
P. L. Weiss, Enhancing social communication through story-telling among high-functioning children
with autism, Lecture Notes in Computer Science 3814 (2005), 320-323.
[37] N. Bauminger, E. Gal, D. Goren-Bar, J. Kupersmitt, F. Pianesi, O. Stock, P.L. Weiss, R. Yifat, and M.
Zancanaro, Enhancing Social Communication in High-Functioning Children with Autism through a Co-
located Interface, presented at Proceedings of the 6th International Workshop on Social Intelligence
Design, Trento, Italy, 2007.
[38] E. Gal, N. Bauminger, D. Goren-Bar, F. Pianesi, O. Stock, M. Zancanaro, and P.L. Weiss, Enhancing
social communication of children with high functioning autism through a co-located interface, Artificial
Intelligence & Society, in press.
[39] G.C. Burdea, V. Popescu, V. Hentz, and K. Colbert, Virtual reality-based orthopedic telerehabilitation,
IEEE Transactions on Neural Systems and Rehabilitation Engineering 8 (2000), 430-2.
[40] M.K. Holden, T.A. Dyar, L. Schwamm, and E. Bizzi, Virtual-Environment-Based Telerehabilitation in
Patients with Stroke, Presence: Teleoperators & Virtual Environments 14 (2005), 214-233.
[41] D.J. Reinkensmeyer, C.T. Pang, J.A. Nessler, and C.C. Painter, Web-based telerehabilitation for the
upper extremity after stroke, IEEE Transactions on Neural Systems and Rehabilitation Engineering 10
(2002), 102-8.
[42] M.K. Holden, T.A. Dyar, and L. Dayan-Cimadoro, Telerehabilitation using a virtual environment
improves upper extremity function in patients with stroke, IEEE Transactions on Neural Systems and
Rehabilitation Engineering 15 (2007), 36-42.
[43] M.N. Boulos and S. Wheeler, The emerging Web 2.0 social software: an enabling suite of sociable
technologies in health and healthcare education, Health Information and Libraries Journal 24 (2007),
2-23.
[44] M.N. Boulos, L. Hetherington, and S. Wheeler, Second Life: an overview of the potential of 3-D virtual
worlds in medical and health education, Health Information and Libraries Journal 24 (2007), 233-45.
[45] A. Gorini, A. Gaggioli, C. Vigna, and G. Riva, A second life for eHealth: prospects for the use of 3-D
virtual worlds in clinical psychology, Journal of Medical Internet Research 10 (2008), e21.
[46] J. Lester, About Brigadoon. Brigadoon: An innovative online community for people dealing with
Asperger's syndrome and autism, http://braintalk.blogs.com/brigadoon/2005/01/about_brigadoon.html
[accessed 2008 Sept 23]
[47] G. Riva, and A. Gaggioli, CyberEurope Column, Cyberpsychology & Behaviour 10 (2007), 493-494.
