Lecture Notes in Networks and Systems 319
Tareq Ahram
Redha Taiar Editors
Human Interaction,
Emerging Technologies
and Future Systems V
Proceedings of the 5th International
Virtual Conference on Human
Interaction and Emerging Technologies,
IHIET 2021, August 27–29, 2021
and the 6th IHIET: Future Systems
(IHIET-FS 2021), October 28–30, 2021, France
Lecture Notes in Networks and Systems
Volume 319
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland
Advisory Editors
Fernando Gomide, Department of Computer Engineering and Automation—DCA,
School of Electrical and Computer Engineering—FEEC, University of Campinas—
UNICAMP, São Paulo, Brazil
Okyay Kaynak, Department of Electrical and Electronic Engineering,
Bogazici University, Istanbul, Turkey
Derong Liu, Department of Electrical and Computer Engineering, University
of Illinois at Chicago, Chicago, USA; Institute of Automation, Chinese Academy
of Sciences, Beijing, China
Witold Pedrycz, Department of Electrical and Computer Engineering,
University of Alberta, Alberta, Canada; Systems Research Institute,
Polish Academy of Sciences, Warsaw, Poland
Marios M. Polycarpou, Department of Electrical and Computer Engineering,
KIOS Research Center for Intelligent Systems and Networks, University of Cyprus,
Nicosia, Cyprus
Imre J. Rudas, Óbuda University, Budapest, Hungary
Jun Wang, Department of Computer Science, City University of Hong Kong,
Kowloon, Hong Kong
The series “Lecture Notes in Networks and Systems” publishes the latest
developments in Networks and Systems—quickly, informally and with high quality.
Original research reported in proceedings and post-proceedings represents the core
of LNNS.
Volumes published in LNNS embrace all aspects and subfields of, as well as new
challenges in, Networks and Systems.
The series contains proceedings and edited volumes in systems and networks,
spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor
Networks, Control Systems, Energy Systems, Automotive Systems, Biological
Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems,
Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems,
Robotics, Social Systems, Economic Systems, and others. Of particular value to both
the contributors and the readership are the short publication timeframe and the
world-wide distribution and exposure which enable both a wide and rapid
dissemination of research output.
The series covers the theory, applications, and perspectives on the state of the art
and future developments relevant to systems and networks, decision making, control,
complex processes and related areas, as embedded in the fields of interdisciplinary
and applied sciences, engineering, computer science, physics, economics, social, and
life sciences, as well as the paradigms and methodologies behind them.
Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago.
All books published in the series are submitted for consideration in Web of Science.
Editors
Tareq Ahram
Institute for Advanced Systems Engineering
University of Central Florida
Orlando, FL, USA

Redha Taiar
GRESPI, Campus du Moulin de la Housse
Université de Reims Champagne Ardenne
Reims Cedex, France
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This book, entitled Human Interaction, Emerging Technologies and Future Systems
V, aims to provide a global forum for presenting and discussing novel human
interaction, emerging technologies and engineering approaches, tools, methodolo-
gies, techniques, and solutions for integrating people, concepts, trends, and appli-
cations in all areas of human interaction endeavor. Such applications include, but
are not limited to, health care and medicine, sports medicine, transportation, opti-
mization and urban planning for infrastructure development, manufacturing, social
development, a new generation of service systems, as well as safety, risk assess-
ment, and cybersecurity in both civilian and military contexts.
Rapid progress in developments in cognitive computing, modeling, and simu-
lation, as well as smart sensor technology, will have a profound effect on the
principles of human interaction and emerging technologies at both the individual
and societal levels in the near future.
The book gathers selected papers presented at the 5th International Conference
on Human Interaction and Emerging Technologies (IHIET 2021) and the 6th
International Conference on Human Interaction & Emerging Technologies: Future
Systems (IHIET-FS 2021), both conferences focusing on human-centered design
and human interaction approaches which utilize and expand on the current
knowledge of design and emerging technologies supported by engineering, artificial
intelligence and computing, data analytics, wearable technologies, and
next-generation systems.
This book also presents many innovative studies with a particular emphasis on
emerging technologies and their applications, in addition to the consideration of
user experience in the design of human interfaces for virtual, augmented, and mixed
reality applications. Reflecting on the above-outlined perspective, the papers con-
tained in this volume are organized into eight sections:
Section 1: Human–Computer Interaction
Section 2: Human-Centered Design
Section 3: Emerging Technologies and Applications
Section 4: Augmented, Virtual, and Mixed Reality Simulation
Human–Computer Interaction
Human and Machine Trust Considerations, Concerns and Constraints
for Lethal Autonomous Weapon Systems (LAWS) . . . . . . . . . . . . . . . . . 3
Guermantes Lailari
A Multimodal Approach for Early Detection of Cognitive Impairment
from Tweets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Nirmalya Thakur and Chia Y. Han
A Formal Model of Availability to Reduce Cross-Domain
Interruptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Tom Gross and Anna-Lena Mueller
Progressive Intensity of Human-Technology Teaming . . . . . . . . . . . . . . 28
Toni Waefler
Cultural Difference of Simplified Facial Expressions for Humanoids . . . 37
Meina Tawaki, Keiko Yamamoto, and Ichi Kanaya
“I Think It’s Quite Subtle, So It Doesn’t Disturb Me”: Employee
Perceptions of Levels, Points and Badges in Corporate Training . . . . . . 44
Adam Palmquist and Izabella Jedel
Escape Rooms: A Formula for Injecting Interaction
in Chemistry Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Luis Aimacaña-Espinosa, Marcos Chacón-Castro,
and Janio Jadán-Guerrero
Information Dissemination of COVID-19 by Ministry of Health
in Indonesia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Dika Pratama, Achmad Nurmandi, Isnaini Muallidin, Danang Kurniawan,
and Salahudin
Human-Centered Design
The Face of Trust: Using Facial Action Units (AUs) as Indicators
of Trust in Automation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Jonathan Soon Kiat Chua, Hong Xu, and Sun Woh Lye
The Effect or Non-effect of Virtual Versus Non-virtual Backgrounds
in Digital Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
Ole Goethe, Hanne Sørum, and Jannicke Johansen
Approach to Estimate the Skills of an Operator During Human-Robot
Cooperation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Adrian Couvent, Christophe Debain, and Nicolas Tricot
Adopting User-Centered Design to Identify Assessment Metrics
for Adaptive Video Games for Education . . . . . . . . . . . . . . . . . . . . . . . . 289
Yavor Dankov, Albena Antonova, and Boyan Bontchev
The Contribution of Online Platforms to Alternative Socialization
Opportunities of Architecture Students . . . . . . . . . . . . . . . . . . . . . . . . . 298
Pınar Şahin, Serengül Seçmen, Salih Ceylan, and Melek Elif Somer
May I Show You the Route? Developing a Service Robot Application
in a Library Using Design Science Research . . . . . . . . . . . . . . . . . . . . . 306
Giordano Sabbioni, Vivienne Jia Zhong, Janine Jäger,
and Theresa Schmiedel
Adaptive Fashion: Knitwear Project for People with Special Needs . . . . 314
Miriana Leccia and Giovanni Maria Conti
Communication Needs Among Business Building Stakeholders . . . . . . . 322
Marja Liinasuo and Susanna Aromaa
Reduction of Electrotactile Perception Threshold Using Background
Thermal Stimulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Rahul Kumar Ray and M. Manivannan
Physiological Based Adaptive Automation Triggers in Varying
Traffic Density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Shi Yin Tan, Chun Hsien Chen, and Sun Woh Lye
Guermantes Lailari
Human trust in technology is based on our understanding of how it works and our
assessment of its safety and reliability. To trust a decision made by an algorithm, we
need to know that it is reliable and fair, that it can be accounted for, and that it will
cause no harm. We need assurance that it cannot be tampered with and that the system
itself is secure. We need to understand the rationale behind the algorithmic assessment,
recommendation or outcome, and be able to interact with it, probe it – even ask questions.
And we need assurance that the values and norms of our societies are also reflected in
those outcomes…Moving forward, “build for performance” will not suffice as an AI
design paradigm. We must learn how to build, evaluate and monitor for trust. [1] [italics
by author].
The above quote describes the challenges that industry players, such as IBM, and others
face with artificial intelligence and autonomous systems, and it helps identify the
technical, legal, and ethical aspects of the problem. Much of the discussion regarding
Lethal Autonomous Weapon Systems (LAWS) in the media, academia, human rights
organizations, and among governments has been about the legal concerns of employing
LAWS. This paper
highlights considerations, concerns, and constraints regarding trust between the human-
machine “interface” and LAWS.
What is trust? The National Institute of Standards and Technology (NIST) is tasked
to provide guidance on standards for technology in the US. Trust can have a simple defi-
nition, such as “The confidence one element has in another that the second element will
behave as expected.” [2] Trust can also be defined in a very technical manner such as the
17 technical concerns that can negatively affect products and services: “(1) scalability,
(2) heterogeneity, (3) control and ownership, (4) composability, interoperability, integra-
tion, and compatibility, (5) ‘ilities’ (non-functional requirements), (6) synchronization,
(7) measurement, (8) predictability, (9) specific testing and assurance approaches, (10)
certification criteria, (11) security, (12) reliability, (13) data integrity, (14) excessive
data, (15) speed and performance, (16) usability, and (17) visibility and discovery.” [3]
In addition to these 17 technical trust concerns, two additional non-technical concerns
are included: insurability and risk management. [3] In the case of military autonomous
systems, insurability is not relevant, but risk management certainly is.
At the end of the day, trust is earned and cannot be assumed. A NIST Internet of
Things (IoT) paper ends with a cautionary note:
For instance, humans are prone both to over-trusting and to under-trusting machines
depending on context. Challenges also exist for measuring the performance of human-
AI teams, conveying enough information while avoiding cognitive overload, enabling
humans and machines to understand the circumstances in which they should pass con-
trol between each other, and maintaining appropriate human engagement to preserve
situational awareness and meaningfully take action when needed. [4].
Trust is further complicated when the machine does not work properly and military
or technical personnel attempt ad hoc repairs and the results could be disastrous:
If the technology employed is unstable, personnel may avoid using the equipment or
may develop ad hoc and informal fixes to the perceived weaknesses… The resulting gap
between senior leaders’ performance expectations and actual performance introduces
another source of uncertainty into senior leaders’ understanding of the combat situation.
[5].
These challenges also describe the challenges military personnel will face as
autonomous systems are phased into operational units. As militaries around the world
develop and use various autonomous systems and as these systems become weaponized,
military personnel will have to trust these systems. Some NIST definitions will provide
a common lexicon to use in this paper:
Risk. A measure of the extent to which an entity is threatened by a potential circumstance
or event, and typically is a function of: (i) the adverse impact, or magnitude of harm,
that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence
[6].
Weapon System. A combination of one or more weapons with all related equipment,
materials, services, personnel, and means of delivery and deployment (if applicable)
required for self-sufficiency. [7].
In general, trust involves risk because it involves depending on another: When a
person uses an autonomous system, the individual assumes the system will perform
as designed and not have an unintended engagement. In other words, the system has
expected results: positive outcomes and negative avoidance. In reviewing the literature
on artificial intelligence, trustworthiness, reliability, risk, safety, challenges, control and
related synonyms are used to describe the trust relationship. The antonyms of these
terms are also used to describe outcomes such as “untrustworthy suppliers, insertion of
counterfeits, tampering, unauthorized production, theft, insertion of malicious code, and
poor manufacturing and development practices.” [6].
Ethical and legal considerations of trust are primary to public policy discussions.
For example, former President Trump’s Executive Order on promoting trustworthy AI in
the Federal Government highlighted the following:
Agencies must therefore design, develop, acquire, and use AI in a manner that fosters
public trust and confidence while protecting privacy, civil rights, civil liberties, and
American values, consistent with applicable law… [8].
The National Artificial Intelligence Research and Development Strategic Plan: 2019
Update offers an example that covers both the technological and political dimensions:
“Beyond being safe, secure, reliable, resilient, explainable, and transparent, trustworthy
AI must preserve privacy while detecting and avoiding inappropriate bias.” [9] NIST
concentrates on technical aspects of trustworthiness in implementing trust in AI to meet
the intent of the political leaders.
Europe is also involved in trust and artificial intelligence. For example, the European
Commission defines “trustworthy AI” in terms of “three components: (1) it should be
lawful, ensuring compliance with all applicable laws and regulations; (2) it should be
ethical, demonstrating respect for, and ensuring adherence to, ethical principles and val-
ues; and (3) it should be robust, both from a technical and social perspective, because,
even with good intentions, AI systems can cause unintentional harm.” [10] The Euro-
pean Commission (EC) guidelines further elaborate that in order to ensure that artificial
intelligence is trustworthy, it must “ensure that the development, deployment and use
of AI systems meets the seven key requirements: (1) human agency and oversight, (2)
technical robustness and safety, (3) privacy and data governance, (4) transparency, (5)
diversity, non-discrimination and fairness, (6) environmental and societal well-being
and (7) accountability.” [10] The EC guidelines deal with technological and political
dimensions.
Interest in artificial intelligence is exploding in global scientific, policy, and public
interest arenas. For example, People’s Republic of China (PRC) documents emphasize
AI’s technical aspects: “While vigorously developing AI, we must attach great impor-
tance to the potential safety risks and challenges, strengthen the forward-looking pre-
vention and guidance on restraint, minimize risk, and ensure the safe, reliable, and con-
trollable development of AI.” [11] China’s Ministry of Technology AI governing princi-
ples reflect the same concepts: “AI systems should continuously improve transparency,
in… ‘We have programs right now, capabilities right now that allow for fully automatic
processing of sensor-to-shooter targeting, but we don’t trust the data. And we still ensure
that there’s human intervention at every [step in the process]. And, of course, with each
intervention by humans we’re adding more time, more opportunities for mistakes to
happen, time we’re not going to have when an adversary’s targeting our network… We
have the ability for a quicker targeting cycle, but we don’t trust the process.’” [15].
The March 2021 Department of the Navy’s Unmanned Campaign Plan describes
its vision for the Navy as to “make unmanned systems a trusted and sustainable part of
the Naval force structure, integrated at speed to provide lethal, survivable, and scalable
effects in support of the future maritime mission.” [16] Additionally, the Chief of Naval
Operations (CNO) released his AI plan [17] that states the following:
[u]nmanned platforms play a vital role in our future fleet. Successfully integrating
unmanned platforms—under, on, and above the sea—gives our commanders bet-
ter options to fight and win in contested spaces. They will expand our intelligence,
surveillance, and reconnaissance advantage, add depth to our missile magazines,
and provide additional means to keep our distributed force provisioned. Further-
more, moving toward smaller platforms improves our offensive punch while also
providing affordable solutions to grow the Navy [17].
The plan calls for the Navy to develop, test and deploy unmanned systems to perform
the “dull, dirty and dangerous” missions. Some of the systems in development are the
MQ-25A Stingray (unmanned carrier-based refueling tanker); Overlord Unmanned Sur-
face Vehicles (USVs); Sea Hunters; and Ghost Fleet Overlord. [18, 19] The USMC is
developing and testing systems such as the Supply Glider effort that DARPA has assisted
with: “heavy-duty cardboard gliders which can deliver supplies and then disappear in a
span of days” and although they are re-useable, they “are designed to be expendable and
biodegradable.” These small gliders can be released by aircraft to deliver critical
supplies to a precise location defined by ground troops. The Supply Glider’s features avoid
the problem of inaccurate deliveries, “a need to recover the system after deployment”
as well as “the cost of resupply systems (parachute or UAVs) that must be retrieved as
well as other challenges of returning logistical systems such as batteries that displaces
payload capacity, as well as launch/land infrastructure.” [20] Cardboard UAVs would
be difficult to detect and to shoot down, and could be designed to deliver warheads. In
effect, they would be inexpensive stealth systems. Instead of a transport aircraft being a
logistical system to support ground troops, it could also deliver a massive strike via an
airdrop.
The US Army is developing the Advanced Targeting and Lethality Automated System
(ATLAS) that “will use artificial intelligence and machine learning to give ground-combat
vehicles autonomous targeting capabilities” allowing weapon systems to “acquire,
identify, and engage targets at least 3X faster than the current manual process” [21].
Moving to a higher level of operations, Project Convergence is a US Army program that
connects any sensor to the best shooter. Project Convergence has two sub-projects. The
first is automatic target recognition, in which “machine learning algorithms processed the
massive amount of data picked up by the Army’s sensors to detect and identify threats
on the battlefield, producing targeting data for weapon systems to utilize.” The AI fire
References
1. Trusting AI, About Us, IBM (no date)
2. NIST SP 800-161 Software Assurance in Acquisition: Mitigating Risks to the Enterprise,
April 2015
3. Voas, J., Kuhn, R.: NIST Cybersecurity White Paper “Internet of Things (IoT) Trust
Concerns”, 17 October 2018
4. Final Report National Security Commission on Artificial Intelligence, National Security
Commission on Artificial Intelligence (NSCAI), March 2021
5. Mandeles, M.: The Future of War: Organizations as Weapons. Potomac Books, McLean
(2005)
6. Risk Management Framework for Information Systems and Organizations: A System Life
Cycle Approach for Security and Privacy, NIST Special Publication 800-37 Revision 2,
December 2018
7. Protection of Mission Critical Functions to Achieve Trusted Systems and Networks (TSN),
Department of Defense Instruction Number 5200.44 v3, 15 October 2018
8. Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal
Government, 3 December 2020
9. The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update,
Select Committee on Artificial Intelligence of the National Science & Technology Council,
June 2019
10. Ethics Guidelines for Trustworthy AI, European Commission: High-Level Expert Group on
Artificial Intelligence at 5, 8 April 2019
11. A Next Generation Artificial Intelligence Development Plan, 2017 (Chinese). DigiChina,
New America, and the Stanford Cyber Policy Center, 02 March 2021
12. Governance Principles for a New Generation of Artificial Intelligence: Develop Responsible
Artificial Intelligence (2017). (translated into English from Chinese)
13. Autonomy in Weapon Systems, DOD Directive 3000.09, 21 November 2012, Incorporating
Change 1 on 8 May 2017
14. Stanton, B., Jensen, T.: Trust and Artificial Intelligence, NIST Interagency/Internal Report
(NISTIR) - 8332, National Institute of Standards and Technology, Gaithersburg, MD (2021)
15. Eckstein, M.: Berger: Marines Need to Trust Unmanned, AI Tools for Future Warfare, USNI
News, 2 February 2021
16. Unmanned Campaign Plan, Department of the Navy, 16 March 2021
17. CNO NAVPLAN, January 2021
18. Eckstein, M.: Navy to Expand Land-Based Testing for Unmanned Vessels, Conduct Offensive
Firepower Analysis for USVs, 25 January 2021
19. LaGrone, S.: Navy Wants 10-Ship Unmanned ‘Ghost Fleet’ to Supplement Manned Force,
USNI, 13 March 2019
20. Industrial Paper Airplanes for Autonomous Aerial Delivery, Press Release, Other Lab, 12
January 2017
21. Rohrlich, J.: The US Army wants to turn tanks into AI-powered killing machines, QZ News,
26 February 2019
22. Strout, N.: How the Army plans to revolutionize tanks with artificial intelligence, C4ISR Net,
29 October 2020
23. Air Force Science and Technology Strategy 2030, April 2019
24. Boeing, General Atomics, and Kratos to develop unmanned aircraft to demonstrate teaming
with piloted planes, Intelligent Aerospace, 15 December 2020
25. Harper, J.: The Rise of Skyborg: Air Force Betting on New Robotic Wingman, National
Defense Magazine, 25 September 2020
26. Trevithick, J.: Glitzy Air Force Video Lays Out “Skyborg” Artificial Intelligence Combat
Drone Program, The Drive, 24 June 2020
27. Hitchens, T.: Air Force Research Lab’s Golden Horde Swarming Weapons, Defense
Information, 22 January 2021
28. Host, P.: AFA Winter 2021: US Air Force envisions additional Vanguard S&T programmes
in the future, Janes, 25 February 2021
A Multimodal Approach for Early Detection
of Cognitive Impairment from Tweets
Abstract. The proposed approach can filter, study, analyze, and interpret written
communications from social media platforms for early detection of Cognitive
Impairment (CI) to connect individuals with CI with assistive services in their
location. It has three novel functionalities. First, it presents a Big Data-centric
Data Mining methodology that uses a host of Natural Language Processing and
Information Retrieval approaches to filter and analyze tweets to detect if the tweets
were made by a user with some form of CI – for instance, Dementia. Second,
it consists of a string-matching functionality that uses the Levenshtein distance
algorithm and Fuzzy matching to score tweets indicating the degree of CI. Finally,
the framework consists of a text mining approach for detecting the geolocation
of the Twitter user so that, if the user is cognitively impaired, caregivers in that
area could be alerted and connected to them to facilitate early-stage care, services,
therapies, or treatment.
1 Introduction
Dementia, a form of Cognitive Impairment (CI), develops slowly, but it is common in
the rapidly increasing elderly population, which currently stands at 962 million globally
[1]. At present, there are around 50 million people worldwide who have Dementia, and
this number is projected to double by 2030 [2]. The costs of looking after people with
Dementia and other forms of CI are a global concern. In the United States alone, these
costs are estimated to be around USD 355 billion in 2021 and are projected to rise to
about USD 1.1 trillion by 2050 [3]. Thus, early detection of Dementia and other forms of
CI is essential for providing assistive care and services to reduce the associated long-term
healthcare and caregiving costs. Over the last few years, social media platforms have
evolved and transformed into ‘virtual’ spaces via which people from all demographics
communicate with each other to form their social support systems [4] and develop
‘online’ interpersonal relationships [5]. Recent research shows that people use different
social media platforms to seek support, share information, and communicate with others
when they face various forms of CI and are in the early stages of the same [6, 7].
Early diagnosis of CI, such as Dementia, has several benefits [8] such as improved
quality of life, probability of pharmacological and non-drug treatments to have maxi-
mum effect, delaying transition into care homes, better treatment of dementia-related
depression, reduced behavioral disorders and increased independence during activities
of daily living. Communication difficulties, both in oral and written communications, are
considered one of the earliest symptoms of CI [9]. This paper presents a brief review of
related works and gives the rationale for the new approach in Sect. 2. Section 3 presents
the multilayered framework as the solution. Section 4 shows how this framework was
implemented using a data science tool and discusses the obtained results of detecting CI
from tweets, followed by the concluding remarks in Sect. 5.
2 Literature Review
Cavedoni et al. [10] presented a virtual reality-based approach for detecting CI in the
elderly. The paper also outlined the approach of studying the evolution of CI over time.
In [11], the work mostly involved discussing various approaches for diagnosing mild
cognitive impairments in the elderly. An activity recognition and analysis-based frame-
work for detecting mild cognitive impairment in the elderly was presented in [12]. In
[13], the authors evaluated the efficacy of multiple assessments on the same day to accu-
rately detect CI in patients with Alzheimer’s. A Plasma microRNA biomarkers-based
approach was proposed by Sheinerman et al. [14] to detect various forms of mild cogni-
tive impairment in the elderly. The Mini-Mental State Examination (MMSE) is another
approach for detecting CI in the elderly that is widely used to analyze the cognitive
abilities of individuals [15]. To address the limitations of the MMSE, which specifically concern the
overestimation of the results, the Korean Dementia Screening Questionnaire (KDSQ)
was developed. It is a specific set of questions, and this evaluation can be conducted
by anyone, even without specialized skills related to the detection of any form of CI
[16]. Despite these recent advances in this field, early detection of CI, such as Dementia,
remains a challenge. This is primarily because the methodologies for detecting CI based
on behavior recognition and analysis require the elderly to either have wearables on them
or familiarize themselves with new technology-based gadgets and devices, which most
elderly are naturally resistant to. Thus, finding solutions for early detection of CI by
capturing the early symptoms and relevant signs in a more natural, relaxed, and elderly-
friendly way is much needed. We address this challenge by developing a multilayered
framework, at the intersection of Natural Language Processing with several other dis-
ciplines, that contains methods for processing such early symptoms and relevant signs
associated with CI, such as Dementia. The framework consists of the functionality to
filter, study, analyze, and interpret written communications from social media platforms
to detect if these communications were made by users with some form of CI, such as
Dementia.
3 Proposed Approach
This framework has multiple layers. Each layer is equipped with a distinct functionality
that serves as the foundation for the development and performance of the subsequent
layer. We used data from Twitter, a popular social media platform, for development,
implementation, and testing all these layers. The first layer performs data gathering,
preprocessing, content parsing, text refinement, and text analysis to detect tweets that
could have been made by a user with some form of CI. This layer also consists of the
methodology to perform intelligent decision-making for filtering tweets to eliminate
advertisements and promotions.
The second layer implements a methodology to score the tweets obtained from the
first layer to determine the extent of CI. It uses the Levenshtein distance algorithm [17]
to implement a scoring system based on the degree of string matching by calculating the
linguistic distance between the tweet and a user-defined bag of words. This layer also
contains a user-defined threshold for this scoring system, allowing filtering of tweets as
per their scores compared to this threshold. All the tweets which receive a score greater
than or equal to this threshold are retained in the results, and the other tweets with scores
less than this threshold are eliminated. The third layer consists of the methodology to
detect the geolocation of those users whose tweets indicate that they have some form of CI.
This detection of geolocation is done based on the publicly available location data on
respective Twitter user profiles.
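The scoring idea of the second layer can be sketched outside RapidMiner. The following Python sketch implements the classic Levenshtein edit distance and an illustrative normalised similarity score against a bag of phrases; the normalisation formula, the phrases, and the threshold value are our own examples, not the exact parameters of the 'Fuzzy matching' 'operator'.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insertions, deletions, substitutions)."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def score_tweet(tweet: str, bag_of_phrases: list[str]) -> float:
    """Similarity (0-100) of a tweet to its closest phrase in the bag.
    The normalisation is illustrative, not the paper's exact formula."""
    t = tweet.lower()
    return max(
        100 * (1 - levenshtein(t, p.lower()) / max(len(t), len(p)))
        for p in bag_of_phrases
    )

bag = ["I have dementia", "I was diagnosed with dementia"]
tweets = ["i have dementia and it scares me", "buy our memory pills now!"]
threshold = 40  # user-defined, as in the paper; the value here is illustrative
retained = [t for t in tweets if score_tweet(t, bag) >= threshold]
print(retained)
```

Tweets scoring at or above the threshold are retained; the rest are discarded, mirroring the filter described above.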
We used RapidMiner [18], a powerful and versatile data science tool, to develop
this framework. RapidMiner consists of a range of in-built functions called ‘operators’,
which can be customized to develop such applications. A ‘process’ in RapidMiner is an
application that uses one or more of its ‘operators’, which define different functionalities.
RapidMiner also allows the development of new ‘operators’ with new or improved func-
tionalities that can be made available to other users of this platform via the RapidMiner
marketplace. The steps involved for the development of each layer are as follows:
First Layer:
1. Develop a bag-of-words model, which acts as a collection of keywords and phrases
to identify tweets in which people communicate that they have CI.
2. Use the ‘Search Twitter’ ‘operator’ to search tweets that match the keywords and
phrases from Step 1.
3. Perform filtering of tweets obtained from Step 2 to eliminate tweets that might be
advertisements, promotions, or similar, which are not required for this study.
4. Identify and remove stop words from the tweets obtained from Step 3.
5. Filter out unwanted attributes from the RapidMiner results to retain only the tweets
and other essential user information needed for this study.
6. Track the Twitter ID and username of each Twitter user by using the ‘Get Twitter
User Details’ ‘operator’ from the list of tweets obtained after the filtering process.
Here, we are using Twitter ID as the unique identifier for each Twitter user.
7. Integrate all the above 'operators' into a RapidMiner 'process' and set up a Twitter
connection.
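As a rough illustration of steps 1-5 (outside RapidMiner), the sketch below combines keyword matching, a crude advertisement filter, and stop-word removal. The marker list, stop-word list, and sample tweets are illustrative stand-ins; the paper's actual filtering logic lives in RapidMiner 'operators' and is not reproduced here.

```python
import re

# Illustrative stand-ins: a tiny stop-word list and a crude marker-based
# advertisement heuristic.
STOP_WORDS = {"i", "a", "an", "the", "and", "or", "to", "of", "in", "is", "it", "my"}
AD_MARKERS = ("buy", "sale", "discount", "promo", "http://", "https://", "#ad")

def is_advertisement(tweet: str) -> bool:
    """Step 3: eliminate likely advertisements and promotions."""
    t = tweet.lower()
    return any(marker in t for marker in AD_MARKERS)

def remove_stop_words(tweet: str) -> str:
    """Step 4: drop stop words from a tweet."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    return " ".join(tok for tok in tokens if tok not in STOP_WORDS)

def first_layer(tweets: list[str], keywords: list[str]) -> list[str]:
    """Steps 1-4: keep tweets matching the bag of words, drop ads, strip stop words."""
    return [
        remove_stop_words(tw)
        for tw in tweets
        if any(kw.lower() in tw.lower() for kw in keywords) and not is_advertisement(tw)
    ]

tweets = [
    "I have dementia and some days are hard",
    "Buy our dementia cure today! https://example.com",
]
print(first_layer(tweets, ["I have dementia"]))
```

The second tweet is dropped both because it fails the ad heuristic and because it does not contain the keyword phrase.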
14 N. Thakur and C. Y. Han
Fig. 1. RapidMiner ‘process’ showing the implementation of the first layer by using the ‘Search
Twitter’ ‘operator’
Second Layer:
1. Use the ‘Get Twitter User Status’ ‘operator’ to look up status updates by all the users
identified in Step 6 of the first layer.
2. Use the ‘Select Attribute’ ‘operator’ to select only the text of the tweet (as a string)
from the output of Step 1.
3. Using the 'Read Document' 'operator', set up a path to a file on the local computer
that contains a set of keywords or phrases indicating some form of CI, to be used
for checking similarity. It is recommended that this file be a .txt file.
4. Implement the Levenshtein distance algorithm using the 'Fuzzy matching' 'operator'.
Provide the output of Step 2 as the source and the file from Step 3 as the reference
for comparison, so that this 'operator' generates similarity scores based on a
string comparison of each tweet.
5. Enable the advanced parameters of the ‘Fuzzy matching’ ‘operator’ to define the
threshold value. This can be any user-defined value, and only those tweets that have
a similarity greater than the threshold would be retained in the results.
6. Integrate all the above 'operators' into a RapidMiner 'process' and set up a Twitter
connection.
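The second-layer steps can be approximated in plain Python as follows. Here difflib's SequenceMatcher merely stands in for the 'Fuzzy matching' 'operator'; the keyword-file contents, the sample statuses, and the threshold value are all illustrative.

```python
import difflib
import os
import tempfile

def load_phrases(path: str) -> list[str]:
    """Step 3: read one keyword/phrase per line from a local .txt file."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def fuzzy_scores(statuses: list[str], phrases: list[str]) -> list[float]:
    """Step 4: similarity (0..1) of each status to its closest phrase.
    difflib.SequenceMatcher stands in for the 'Fuzzy matching' operator."""
    return [
        max(difflib.SequenceMatcher(None, s.lower(), p.lower()).ratio()
            for p in phrases)
        for s in statuses
    ]

# Demo with a temporary keyword file (contents are illustrative).
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("I have dementia\nmemory loss\n")
    path = f.name

phrases = load_phrases(path)
statuses = ["I have dementia", "lovely weather today"]
scores = fuzzy_scores(statuses, phrases)
threshold = 0.5  # step 5: user-defined; only statuses above it are retained
retained = [s for s, sc in zip(statuses, scores) if sc > threshold]
os.unlink(path)
print(retained)
```

Only statuses whose best similarity exceeds the user-defined threshold survive, matching step 5 above.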
Third Layer:
1. Use the ‘Get Twitter User Details’ ‘operator’ to look up publicly available location
information associated with a Twitter account.
2. Configure this ‘operator’ to look up details using the Twitter ID associated with
each account and supply the Twitter IDs identified in Step 6 of the first layer to this
‘operator’.
3. Use the ‘Select Attribute’ ‘operator’ to filter out attributes from Step 1 that do not
contain any location information.
4. Integrate all the above 'operators' into a RapidMiner 'process' and set up a Twitter
connection.
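A minimal sketch of the third layer's logic follows, assuming hypothetical profile records in place of the 'Get Twitter User Details' 'operator' output; every ID, name, and location below is invented for illustration.

```python
# Hypothetical profile records standing in for the output of the
# 'Get Twitter User Details' operator.
profiles = [
    {"twitter_id": "1001", "username": "user_a", "location": "Cincinnati, OH"},
    {"twitter_id": "1002", "username": "user_b", "location": ""},  # location hidden
    {"twitter_id": "1003", "username": "user_c"},                  # protected details
]

def third_layer(profiles: list[dict]) -> dict[str, str]:
    """Steps 1-3: keep only accounts whose location is publicly available,
    keyed by Twitter ID (the unique identifier used throughout)."""
    return {
        p["twitter_id"]: p["location"]
        for p in profiles
        if p.get("location")  # drops missing or empty location attributes
    }

print(third_layer(profiles))
```

Accounts with hidden or protected location data simply fall out of the result, which is how such users were excluded from the study.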
Fig. 2. RapidMiner ‘process’ showing the implementation of the second and third layer of the
proposed framework
Fig. 3. Some results from RapidMiner showing the data obtained from the first layer of the
framework for a specific user, where the user-defined keyword or phrase provided as input to the
‘process’ was “I have Dementia”
The 'Search Twitter' 'operator' was customized to look for up to 10,000 recent
or popular tweets because of the processing limitations of the free version of RapidMiner
that we used. After implementing the approaches for data preprocessing and filtering
for developing the first layer, the results obtained from this layer were analyzed. This is
shown in Fig. 3. Next, the first layer’s results were provided to the RapidMiner ‘process’
for the second layer, as shown in Fig. 2. The results obtained from this layer are shown
in Fig. 4. After that, we followed the architecture of our framework to detect the location
of these users by using the RapidMiner ‘process’ shown in Fig. 2. The result for one
typical user is shown in Fig. 5.
Fig. 4. Some results from RapidMiner showing the data obtained from the second layer of the
framework
In the context of these results, the ‘Location’ attribute is most important to us. Twitter
allows every user the option to hide or protect different information associated with their
account [19]. The above user had chosen to make all their account information public,
so we could obtain their location. Similarly, the details of other users were tracked,
and their locations were noted and compiled. Those users identified from the results
Fig. 5. Screenshot from RapidMiner’s results terminal showing the results obtained from the third
layer of the framework for a specific user
table shown in Fig. 4, who had chosen to hide or protect their information, specifically
their location, were not included in this study. The location data obtained for all such
user profiles would support future work on this project, in which caregivers or
medical practitioners in the same area could be connected to these users. We defined
a very low threshold, threshold = 2, for the ‘Fuzzy matching’ ‘operator’ during the
development of the second layer and its associated functionalities. This was to ensure
that most of the tweets made by that specific user passed this threshold. This provided
us with historical data on a person's tweeting history; in this context, the historical data
are the scores indicating the extent of the user's CI. This analysis is shown in Fig. 6 for
one of the users. Such an analysis can have multiple uses, which include (1) study of
tweeting patterns in terms of scores indicating the extent of CI associated with tweets;
(2) detection of any sudden increase in these scores, which could indicate worsening
CI symptoms or a need for immediate help; and (3) studying the degree
of help provided by caregivers or medical practitioners looking after these older adults
by observing the decrease of scores over time, just to name a few.
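Use (2) above, detecting a sudden increase in scores, can be sketched as a single pass over a user's chronological score history; the jump size and the score values below are illustrative, not from the study's data.

```python
def flag_sudden_increases(scores: list[float], jump: float = 20.0) -> list[int]:
    """Return positions in a chronological score history where the score rises
    by more than `jump` relative to the previous tweet (use (2) above)."""
    return [i for i in range(1, len(scores)) if scores[i] - scores[i - 1] > jump]

history = [12.0, 15.0, 14.0, 48.0, 50.0]  # illustrative score history
print(flag_sudden_increases(history))
```

A flagged position would prompt a closer look at the user's recent tweets, or an alert to a caregiver, as suggested in use (3).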
Fig. 6. Analysis of tweet scores (that indicate the degree of CI) for the tweeting history for one
of the users identified by the first layer of the framework
them to assistive care and services in their geographic region to facilitate early-stage care,
services, therapies, or treatment. The work explored the intersection of Natural Language
Processing with Big Data, Data Mining, Data Analysis, Human-Computer Interaction,
and Assistive Technologies. The framework uses data gathering, preprocessing, content
parsing, text refinement, text analysis, and string comparison-based scoring of tweets by
implementing the Levenshtein distance algorithm. It also consists of the methodology
to interpret the degree of CI of a user based on a user-defined threshold value. To the best
of the authors' knowledge, no similar work has been done in this field yet. The results
presented and discussed uphold the relevance and importance of this framework for early
detection of CI, such as Dementia, in the elderly.
It also addresses the challenge of developing a cost-effective and easily imple-
mentable solution for early detection of CI that does not require the elderly to learn
any new technologies or familiarize themselves with any new gadgets. Future work on
this project would involve developing an approach to identify assistive care-based ser-
vices in any geographic location, to connect those services to the elderly with CI in that
region.
References
1. Our world is growing older: UN DESA releases new report on ageing [Internet]
(2019). www.un.org, https://www.un.org/development/desa/en/news/population/our-world-
is-growing-older.html. Accessed 20 Mar 2021
2. US Census Bureau. An aging world: 2015 (2018). https://www.census.gov/library/publicati
ons/2016/demo/P95-16-1.html. Accessed 20 Mar 2021
3. Alzheimer’s Disease Facts and Figures [Internet]. Alz.org. https://www.alz.org/alzheimers-
dementia/facts-figures. Accessed 20 Mar 2021
4. Onyeator, I., Okpara, N.: Human communication in a digital age: Perspectives on interpersonal
communication in the family. New Media and Mass Communication [Internet] (2019). https://
core.ac.uk/download/pdf/234653577.pdf
5. Shepherd, A., Sanders, C., Doyle, M., Shaw, J.: Using social media for support and feedback
by mental health service users: thematic analysis of a Twitter conversation. BMC Psychiatry
15(1), 29 (2015)
6. Craig, D., Strivens, E.: Facing the times: a young onset dementia support group: FacebookTM
style: facing the times: young onset dementia. Australas. J. Ageing 35(1), 48–53 (2016)
7. Rodriquez, J.: Narrating dementia: self and community in an online forum. Qual. Health Res.
23(9), 1215–1227 (2013)
8. Milne, A.: Dementia screening and early diagnosis: the case for and against. Health Risk Soc.
12(1), 65–76 (2010)
9. Stanyon, M.R., Griffiths, A., Thomas, S.A., Gordon, A.L.: The facilitators of communication
with people with Dementia in a care setting: an interview study with healthcare workers. Age
Ageing 45(1), 164–170 (2016)
10. Cavedoni, S., Chirico, A., Pedroli, E., Cipresso, P., Riva, G.: Digital biomarkers for the early
detection of mild cognitive impairment: artificial intelligence meets virtual reality. Front.
Hum. Neurosci. 14, 245 (2020)
11. Chertkow, H., Nasreddine, Z., Joanette, Y., Drolet, V., Kirk, J., Massoud, F., et al.: Mild
cognitive impairment and cognitive impairment, no dementia: Part A, concept and diagnosis.
Alzheimer’s Dement. 3(4), 266–282 (2007)
12. Thakur, N., Han, C.Y.: An intelligent ubiquitous activity aware framework for smart home.
In: Ahram, T., Taiar, R., Langlois, K., Choplin, A. (eds.) Human Interaction, Emerging Tech-
nologies and Future Applications III. Advances in Intelligent Systems and Computing, vol.
1253, pp. 296–302. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-55307-4_45
13. Darby, D., Maruff, P., Collie, A., McStephen, M.: Mild cognitive impairment can be detected
by multiple assessments in a single day. Neurology 59(7), 1042–1046 (2002)
14. Sheinerman, K.S., Tsivinsky, V.G., Crawford, F., Mullan, M.J., Abdullah, L., Umansky, S.R.:
Plasma microRNA biomarkers for detection of mild cognitive impairment. Aging (Albany
NY). 4(9), 590–605 (2012)
15. Folstein, M.F., Folstein, S.E., McHugh, P.R.: “Mini-mental state”. A practical method for
grading the cognitive state of patients for the clinician. J. Psychiatr. Res. 12(3), 189–198
(1975)
16. Choi, S.H., Park, M.H.: Three screening methods for cognitive dysfunction using the Mini-
Mental State Examination and Korean Dementia Screening Questionnaire: three screening
using MMSE and KDSQ. Geriatr. Gerontol. Int. 16(2), 252–258 (2016)
17. Wikipedia contributors. Levenshtein distance [Internet]. Wikipedia, The Free Encyclope-
dia (2021). https://en.wikipedia.org/w/index.php?title=Levenshtein_distance&oldid=101109
8657. Accessed 20 Mar 2021
18. Mierswa, I., Wurst, M., Klinkenberg, R., Scholz, M., Euler, T.: YALE: rapid prototyping for
complex data mining tasks. In: Proceedings of the 12th ACM SIGKDD International Conference
on Knowledge Discovery and Data Mining - KDD 2006. ACM Press, New York (2006)
19. Twitter Help Center. About profile visibility settings [Internet]. Twitter.com. Twitter
Help Center (2021). https://help.twitter.com/en/safety-and-security/birthday-visibility-set
tings. Accessed 20 Mar 2021
A Formal Model of Availability to Reduce
Cross-Domain Interruptions
Abstract. Mutual awareness and availability across team members are essential
for effective and efficient cooperation. Yet, interruptions in general, and interruptions
unrelated to the current domain and task in particular, can lead to disturbing
disruption. The literature on boundary management has great insight to offer with
respect to organising and maintaining a balance between different life domains. In
this paper we introduce a formal model of the semantic structure of life domains,
grounded in the concept of integration and segmentation found in boundary the-
ory. This formal model is based on simple set theory and relations. It is
system-independent but serves as a starting point for creating further implementation-specific
models in UML or other notations during the development process.
1 Introduction
Effective and efficient teamwork requires that team members have adequate information
about each other’s work activities and results and are available for each other for commu-
nication, cooperation, and coordination [7]. However, this can also lead to interruptions.
Interruptions in general and particularly interruptions that are unrelated to the user’s
current task domain can lead to disturbing disruption. Interruptions and their negative
consequences have been an important topic in HCI research [13]. Recent research has
shown that blocking distractions has a significant positive effect on focus and productivity
[12].
In teamwork, however, complete blocking is often not an option: team members often cannot
completely disappear from the team. We therefore suggest leveraging findings from
boundary management research, which has produced substantial insight into how
individuals structure and organise their roles and tasks into life domains, as well as their
availability and interruptibility within and across these domains [4, 6, 10].
The boundary management literature is abundant [e.g., 4, 6, 10, 14]. Therefore, it
is necessary to boil it down to the main concepts that are relevant when analysing or
designing interactive and cooperative systems that aim to help users manage their
availability and awareness in teams while keeping interruptions to a minimum. In this
paper we introduce a formal model of the semantic structure of life domains, grounded
in the concept of integration and segmentation found in boundary theory. We specify the
essential concepts such as persons, domains, availability, interruptions, and notifications.
Figure 1 depicts a scenario where the focal user is working in domain A and is
contacted by another user from domain B. In such a situation it is essential to have
a clear model clarifying if and how the system should notify the focal user.
Fig. 1. A contact from domain B reaching the focal user working in domain A ('Available?')
This formal model is based on simple set theory and relations. It is system-independent
but serves as a starting point for creating further implementation-specific models in UML
or other notations during the development process. Formal models accurately specify
what we are talking about and set up rules on how one is allowed to reason in the scope
of the concept [8]. The advantage of using a formal method, especially in such an early
phase of designing an application, is to add precision, to aid understanding, and to allow
reasoning about properties of the design [17].
2 Theoretical Framework
Modern information and communication technology, and approaches such as
bring-your-own-device (BYOD), bring a lot of flexibility but also lead to a large number of
interruptions from different life domains on the same device. Yet availability for different
domains varies. Boundary management helps to control which interruptions may cross
borders according to the person’s receptivity for interruptions of each domain. The
availability for cross-domain interruptions refers to the degree to which an individual
integrates or segments two domains [10]. Segmentation means an inflexible and
impermeable mental as well as physical boundary [6], and no conceptual overlap between
domains exists [10]. Creating such strong boundaries enables users to concentrate on
the current domain and less on others [1].
To define the availability for an interruption, it is necessary to specify to which
domain the interrupting person refers. The assignment of an individual’s contacts to
exactly one domain is clear for extreme segmenters. The contact information of persons
allocated to more than one domain, if they exist at all, is separated according to the
domain (e.g., by using different address books [14] or having several numbers or contact
entries stored on mobile devices). The less a person segments his or her domains, the
22 T. Gross and A.-L. Mueller
Fig. 2. Exemplary assignment of persons to domains for an integrating individual at a fixed time
t. A to F are initials of names.
3 Approach
As this theoretical description of relationships and dependencies in natural language
is hard to translate into a system, we create a formal model summing up all findings
necessary for a theory of interruptibility concerning different domains.
In Human-Computer Interaction formal models focus on different aspects of reality:
the problem domain, the interaction, or the system. Those models differ in the level
of abstraction concerning the resulting system. An abstract formal model explains the