Computers & Education 142 (2019) 103649


Affective computing in education: A systematic review and future research

Elaheh Yadegaridehkordi a,∗, Nurul Fazmidar Binti Mohd Noor b,∗∗, Mohamad Nizam Bin Ayub b, Hannyzzura Binti Affal b, Nornazlita Binti Hussin b

a Department of Information Systems, Faculty of Computer Science & Information Technology, University of Malaya, 50603 Kuala Lumpur, Malaysia
b Department of Computer System & Technology, Faculty of Computer Science & Information Technology, University of Malaya, 50603 Kuala Lumpur, Malaysia

ARTICLE INFO

Keywords:
Teaching/learning strategies
Adult learning
Architectures for educational technology system

ABSTRACT

It is becoming a trend to apply an emotional lens and to position emotions as central to educational interactions. Recently, affective computing has been one of the most actively researched topics in education, attracting much attention from both academics and practitioners. However, despite the increasing number of papers published, there are still deficiencies and gaps in the comprehensive literature review of the specific area of affective computing in education. Therefore, this study presents a review of the literature on affective computing in education by selecting articles published from 2010 to 2017. A review protocol consisting of both automatic and manual searches is used to ensure the retrieval of all relevant studies. The final 94 selected papers are reviewed and relevant information extracted based on a set of research questions. This study classifies the selected articles according to their research purposes, learning domains, channels and methods of affective recognition and expression, and emotion theories/models, as well as the emotional states. The findings show the increased number and importance of affective computing studies in the education domain in recent years. The research purposes of most affective computing studies are found to be designing emotion recognition and expression systems/methods/instruments, as well as examining the relationships among emotion, motivation, learning style, and cognition. Affective measurement channels are classified into textual, visual, vocal, physiological, and multimodal channels, with the textual channel recognized as the most widely-used affective measurement channel. Meanwhile, integration of textual and visual channels is the most widely-used multimodal channel in affective computing studies. Dimensional theories/models are the most preferred models for the description of emotional states. Boredom, anger, anxiety, enjoyment, surprise, sadness, frustration, pride, hopefulness, hopelessness, shame, confusion, happiness, natural emotion, fear, joy, disgust, interest, relief, and excitement are reported as the top 20 emotional states in the education domain. Finally, this study provides recommendations for future research directions to help researchers, policymakers and practitioners in the education sector to apply affective computing technology more effectively and to expand educational practices.


∗ Corresponding author.
∗∗ Corresponding author.
E-mail addresses: yellahe@gmail.com (E. Yadegaridehkordi), fazmidar@um.edu.my (N.F.B.M. Noor).

https://doi.org/10.1016/j.compedu.2019.103649
Received 28 March 2019; Received in revised form 2 August 2019; Accepted 5 August 2019
Available online 09 August 2019
0360-1315/ © 2019 Elsevier Ltd. All rights reserved.

1. Introduction

Emotions are considered as individual experiences and responses which depend on the conditions in which they appear in human
societies (Dupre et al., 2015). Emotions are an essential part of every human being that can influence behavior, thinking ability,
decision making, resilience, well-being and the way humans communicate with each other (Morrish, Rickard, Chin, & Vella-Brodrick,
2018; Neophytou, 2013). Affective computing, a field of study within human-computer interaction, uses computational techniques to diagnose and measure emotional expression (Lin, Wu, & Hsueh, 2014). R. Picard (1997) coined the term affective computing in the mid-1990s, and since then many systems have been developed using affective computing techniques. There has
been an increasing number of studies over the past decade to promote the usefulness of human computer/robot interactions (Baker,
D'Mello, Rodrigo, & Graesser, 2010; Fu, Leong, Ngai, Huang, & Chan, 2017; Wang, Huang, & Makedon, 2014). The design of affect-
aware interactions between humans and technology has become a significant research area among researchers into human-computer
interaction (HCI) (Gil, Virgili-Gomá, García, & Mason, 2015). The main goal of affective computing is to recognize and measure
people's explicit affective appearances and link them to their implicit emotions. Recognition of affective states is useful for analyzing
users' reactions in order to elicit behavioral intentions and to create reasonable responses. This can help to develop an affective-aware system and improve its user interface in potential applications (Handayani et al., 2014). Human behaviors in individual and social communities are strongly influenced by emotions, which therefore need to be intensively examined in any human activity, such as e-Learning (Faria et al., 2017). The number of emotions affecting students' learning styles and their academic achievements is increasing (Peterson, Brown, & Jun, 2015). Meanwhile, it is believed that there are significant relationships between emotional events and students' developed entrepreneurial competencies (Lackéus, 2014). Thus, it is becoming a trend to apply an emotional lens in emerging academic
research and position emotions as central to learning (Baldassarri, Hupont, Abadía, & Cerezo, 2015; M. Feidakis, 2016; Jiménez,
Juárez-Ramírez, Castillo, & Ramírez-Noriega, 2017; Xu, 2018). This has further given rise to an important trend in the development
of affective tutoring systems (ATSs), a type of intelligent tutoring systems (ITSs), which are systems with the ability to control
learners' adverse emotions (Lin et al., 2014; Malekzadeh, Salim, & Mustafa, 2014). The incorporation of an emotion detection
capability can significantly expand the use of educational technologies and offer additional opportunities to improve the overall
distance learning outcomes as well as providing new chances for personalized instruction and low-cost delivery of teaching and
learning programs (Arroyo, du Boulay, Eligio, Luckin, & Porayska-Pomsta, 2011; Cabada, Estrada, Hernández, Bustillos, & Reyes-
García, 2018; Caballé, 2015; Shen, Xie, & Shen, 2014). Emotions could be the key factor that affects learning, as well as becoming the
driving factor that promotes learning and engagement (Leony, Muñoz-Merino, Pardo, & Kloos, 2013; Lin et al., 2014; Muñoz, Mc
Kevitt, Lunney, Noguez, & Neri, 2011). They also play a critical role in decision-making, timing, and managing learning activities, thus increasing students' motivation in learning (Sandanayake & Madurapperuma, 2013). Therefore, to identify the relationships
between the emotional, cognitive and motivational aspects of learning, it is crucial to have reliable methods of emotion recognition in
an academic context (Burić, Sorić, & Penezić, 2016).

1.1. Research significance and research questions

Affective computing continues to be one of the most actively researched topics in education (Poria, Cambria, Bajpai, & Hussain,
2017). Over the last decade, researchers have studied different aspects of affective computing in education/learning, such as the role
of the teacher in managing students' emotions (Lavy & Eshet, 2018; Siu & Wong, 2016; Urhahne, 2015), the effect of gender, and the
influence of shape and color on emotion and learning (Merz & Wolf, 2017; Plass, Heidig, Hayward, Homer, & Um, 2014), students'
emotions when doing homework (Goetz et al., 2012), the impact of emotions related to achievement on students' decision-making
and learning performance (Chen & Wu, 2015), the design of emotionally-aware models and platforms for educational environments
(Baldassarri et al., 2015; Muñoz et al., 2011), and the relation between students' emotional states and the development of en-
trepreneurial competencies (Lackéus, 2014). However, more in-depth research is necessary to make these understandings applicable
to real-world situations because research studies on affective computing, especially in the educational domain, are so far broader and
less mature than in other domains (C. Wu, Huang, & Hwang, 2015). To date, few reviews have been conducted to synthesize the
results of previous studies on affective computing in the educational domain. Malekzadeh, Mustafa, and Lahsasna (2015) examined
studies related to the regulation of negative emotional states in the students' learning process. This review was limited only to
empirical research published between 2008 and 2014 and examined different methods for dealing with the users' negative emotional
states, such as “boredom”, “anxiety”, and “sadness”, during learning using computerized learning systems. Vogel-Walcutt, Fiorella,
Carper, and Schatz (2012) conducted a cross-disciplinary study on the extant literature on the state of boredom, published between
the 1980s and 2011, and critically reviewed the description, evaluation, and mitigation of the state of boredom within the educa-
tional settings. C. Wu et al. (2015) reviewed the research trends regarding affective computing in education between 1997 and 2013.
They identified 90 relevant papers from selected databases and proposed five challenges and problems for affective computing
implementation in education. Recently, Barcelos and Ruohotie-Lyhty (2018) focused on articles related to beliefs and emotions in
second language teaching by covering theoretical frameworks.
However, the literature reviews conducted by Malekzadeh et al. (2015) and Vogel-Walcutt et al. (2012) merely focused on specific
aspects of affective computing in education. Malekzadeh et al. only considered the negative emotional state of users, while Vogel-
Walcutt et al. focused on the emotional state of boredom. In their review of the selected studies, C. Wu et al. (2015) did not consider the research purposes, emotion theories/models, and instruments used. Furthermore, they only covered the literature up to 2013 and stated that their review was limited to only seven journals. Meanwhile, the review by Barcelos and Ruohotie-Lyhty (2018) is limited to second language teaching only.


Hence, our present study is a new attempt to fill these gaps and respond to the call by Wu et al. (2015) for a more comprehensive review of affective computing in education/learning. Our study also conducts a more comprehensive examination and analysis of recent research findings in the field, in order to classify those studies based on their research trends, purposes, learning domains, affective recognition and expression channels and methods, emotion theories/models, instruments used, and the emotional states studied. This study attempts to classify the research purposes and to identify the major affective measurement channels and the methods used in each channel, as well as to clarify the integration of different channels into the multimodal-based affective computing channel. Furthermore, this study discusses the different emotion theories/models and the self-reported questionnaire instruments used in affective computing studies. All these aspects are new and have not been addressed in previous research.
To achieve the aim of this study, we systematically reviewed the recent relevant literature between 2010 and 2017 published in
journals, conferences, and workshops, based on the guidelines developed by Kitchenham and Charters (2007). This was done to
ensure a more comprehensive analysis of previous studies and to obtain a better understanding of current developments and trends in
affective computing in education, which will provide recommendations for practitioners and academics for further research direc-
tions. Specifically, we have formulated the following four research questions to help us in managing the literature review and in
achieving the research goals:

(RQ1) What are the trends in affective computing in education/learning as can be gauged from the selected papers?
(RQ2) What are the main research purposes and learning domains addressed in the selected papers?
(RQ3) What are the main affective measurement channels and methods used in the selected papers?
(RQ4) What are the major theories/models of emotion adopted and the emotional states considered in the selected papers?

It is hoped that this review can help researchers in identifying areas which have already been investigated or require further
investigation, as well as updating practitioners on the new developments of affective computing in the educational context.
The remainder of this study is structured as follows. Section 2 explains the research method used to conduct a systematic lit-
erature review in this study. Section 3 reports the results of the review of the selected papers, based on the four research questions.
Section 4 discusses the outcomes and gives suggestions for future research based on the four research questions (RQs). Finally,
Section 5 summarizes all the findings and draws conclusions.

2. Review method

The Kitchenham and Charters (2007) guideline is one of the most widely used and reliable step-by-step approaches for performing a systematic literature review (SLR) in all fields (Asadi & Dahlan, 2017; Elaish, Shuib, Ghani, & Yadegaridehkordi, 2017; Elaish, Shuib, Ghani, Yadegaridehkordi, & Alaa, 2017). Thus, in this study we followed the Kitchenham and Charters (2007) guidelines in order to produce an accurate, clear, and transparent literature review.
The Kitchenham and Charters (2007) guidelines involve the following distinct activities: formulate the research questions; develop a
review protocol; identify inclusion and exclusion criteria; develop the search strategy and study selection process; perform data
extraction and synthesis; conclude and report the results. Details about this procedure and its associated steps are discussed in the
following subsections.

2.1. Review protocol

The systematic review process normally begins by preparing a review protocol based on the underlying research questions and
pre-defined methods (Kitchenham & Charters, 2007). This is the main step of the process as it reduces the possibility of researcher's
bias (Elaish, Shuib, Ghani, & Yadegaridehkordi, 2017; Elaish, Shuib, Ghani, Yadegaridehkordi et al., 2017; Kitchenham, 2004). The
main components of a review protocol include reviewing the background of the subject, formulating the research questions, de-
veloping a research strategy, selection criteria and a data extraction strategy and, finally, synthesis of the data obtained (Balaid,
Rozan, Hikmi, & Memon, 2016). In this study, the research background and questions are stated in the sections above. The review
protocol of this study is illustrated in Fig. 1.

2.2. Inclusion and exclusion criteria

In this study, we considered articles from journals, conferences and workshops published in the English language from January 2010 to December 2017. There were several reasons for selecting this period of time. First, this review complements previous review studies (Malekzadeh et al., 2015; Vogel-Walcutt et al., 2012; C. Wu et al., 2015) to provide a more in-depth understanding of affective computing in education in recent years. Second, the term affective computing has been increasingly used in education-related studies since 2010 (C. Wu et al., 2015). However, no effort has specifically focused on reviewing studies published from 2010 onwards. Therefore, this study systematically collected related studies from 2010 to 2017 to comprehensively classify and summarize affective recognition and expression channels, methods, models, and instruments in the education domain, which have not been considered by previous researchers. The databases searched include ISI Web of Knowledge, ScienceDirect, IEEE Xplore, and Springer Link. The search excluded editorials, prefaces, poster sessions, panels and tutorial summaries, and articles that were not available in a complete version, not peer-reviewed, or not written in English. It is noted that, to be included in this study, an article should meet all inclusion criteria and should not satisfy any of the exclusion criteria. Table 1 summarizes the inclusion and exclusion criteria.


Fig. 1. The review protocol.

2.3. Research strategy

The research strategy, as shown in Fig. 1, consisted of both manual and automatic steps. The first step is aimed at identifying the
basic studies of affective computing in the education/learning domain. In line with the recommendation of Webster and Watson
(2002) we reviewed several digital databases in order to cover a wide number of published materials instead of limiting our search to
a specific set of journals. Hence, the digital databases used include ISI Web of Knowledge, ScienceDirect, IEEE Xplore, and Springer
Link. These databases were considered as they are important indexing services for retrieving high impact scholarly publications
relevant to social science, education, and psychology (Malekzadeh et al., 2015). The main search phrase/keywords used in this study


Table 1
Inclusion and exclusion criteria.
Inclusion criteria: relevant to the research and published between 2010 and 2017; published in the selected databases; full-text papers; papers written in English.
Exclusion criteria: outside the period of coverage; non-English; uncompleted studies; unavailable full text; duplicated studies.

include: "affective computing in education/learning", "emotion in education/learning", "students' emotion", and "emotion recognition". As recommended by Kitchenham (2004), Endnote, a useful bibliographic package, was used to manage and sort all the
retrieved studies as well as to delete duplicate ones. As suggested by Kitchenham and Charters (2007), a manual search was done on
the references cited in the primary studies to try to ensure that all relevant articles were retrieved for review (Webster & Watson,
2002). If new articles were found by performing this forward/backward search method, the search process was started again. This
search continued until no new article was found.

2.4. Study selection process

The automatic search was conducted on the selected digital databases using the defined keywords and phrases. The search
retrieved 314 studies. After removing non-relevant and duplicate studies using Endnote, 123 studies remained. The title, abstract,
keywords, and conclusions of these studies were checked against the inclusion and exclusion criteria and following this, 37 studies
were excluded, leaving 86 studies. As suggested by Kitchenham and Charters (2007), in this step, the studies that were not clearly
relevant to our goals and research questions were removed. However, those studies that could not be clearly determined regarding
their relevancy were kept and a full-text scanning was performed again. The manual search was conducted, in which the references of
all 86 studies retained from the first step were checked against the criteria, in order to ensure that all relevant studies were retrieved
or identified. This procedure was repeated until no new studies were identified from the search results. After completing the manual
search, ten additional studies were identified. Thus, a total of 96 studies were identified as the primary studies. These studies were
checked against the defined criteria, and as a result, two studies were removed, and the remaining 94 studies were used as the final
selected papers for this study.
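The screening logic described above can be illustrated with a minimal sketch. The record fields and helper functions below are assumptions made for illustration (the actual screening was performed manually with Endnote), but the filtering mirrors the inclusion and exclusion criteria of Table 1.

```python
# Minimal sketch of the screening step (illustrative only; not the tooling
# used in the original review, which relied on Endnote and manual checks).
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    language: str
    full_text_available: bool
    peer_reviewed: bool

def passes_criteria(r: Record) -> bool:
    """Apply the inclusion/exclusion criteria of Table 1 to one record."""
    return (
        2010 <= r.year <= 2017          # within the period of coverage
        and r.language == "English"     # exclude non-English studies
        and r.full_text_available       # exclude unavailable full texts
        and r.peer_reviewed             # exclude non-peer-reviewed items
    )

def screen(records: list[Record]) -> list[Record]:
    """Deduplicate by title, then keep only records meeting all criteria."""
    seen, unique = set(), []
    for r in records:
        key = r.title.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return [r for r in unique if passes_criteria(r)]
```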

2.5. Data extraction and coding

In this research, the data extraction and coding procedures were carried out by reviewing each article and recording the required data
using Endnote and MS Excel spreadsheets. In addition to the columns for the study ID and reference, five columns were allocated for
data extraction and coding as follows, based on the research questions: (1) research purpose; (2) affective measurement channels; (3)
learning domains; (4) emotion theory/model; and (5) emotional states. Data extraction and description of the data for each column
are shown in Table 2. Following the completion of this step and the analysis, all of the research questions were answered. The results are
reported and discussed in the remaining sections.
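For illustration, one extraction record structured according to the columns of Table 2 might look like the sketch below. The field names and example values are hypothetical; the actual review recorded these data in Endnote and MS Excel rather than in code.

```python
# Illustrative structure of a single data-extraction row (cf. Table 2).
import csv

FIELDS = ["study_id", "reference", "research_purpose",
          "affective_channel", "learning_domain",
          "emotion_theory_model", "emotional_states"]

example_row = {
    "study_id": 1,
    "reference": "Author et al. (2015)",                # hypothetical entry
    "research_purpose": "designing emotion recognition system",
    "affective_channel": "textual",
    "learning_domain": "engineering & technology",
    "emotion_theory_model": "control-value theory",
    "emotional_states": "boredom; anxiety; enjoyment",
}

with open("extraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow(example_row)
```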

3. Review results

The sections below report the results of the review of 94 selected studies published on affective computing in education according
to the four research questions.

Table 2
Data extraction and description.
Study ID: Unique number for each paper.
Reference: Names of all the authors and publication date of the paper, recorded using Endnote.
Research purpose: The purpose as stated in the paper.
Affective measurement channels: Each channel includes the different methods used for recognizing and measuring affective states. Examples: textual, visual, vocal, physiological, and multimodal.
Learning domains: Examples: "arts & humanities", "engineering & technology", "life sciences & medicine", "natural sciences", "mixed learning domains", not specified.
Emotion theory/model: Theory/model adopted for recognizing emotions. Examples: control-value theory of achievement emotions, Facial Action Coding System, etc.
Emotion states: Examples: boredom, anger, anxiety, enjoyment, surprise, etc.


Fig. 2. Distribution of studies based on publication source.

3.1. (RQ1) what are the trends in affective computing in education/learning as can be gauged from the selected papers?

3.1.1. Overview of the publication sources


As shown in Fig. 2, the majority of the selected studies were published in highly-ranked journals as well as in reliable conferences, which increases the quality and importance of this review study. Fig. 2 shows that the majority of the selected studies, 59 (63%), were from journals, followed by 31 (33%) conference studies and only four (4%) published in workshops. A word cloud is one of the popular methods for visualizing the key terms used in the titles, abstracts, and keywords of the papers selected in review studies (Asadi & Dahlan, 2017; Tricco et al., 2016). The larger a word appears in the graphical representation, the more common the word is in the titles, abstracts, and keywords of the selected papers. Fig. 3 shows the word cloud of the selected papers in this study, using the key terms extracted from their titles, abstracts, and keywords. This figure can help in visualizing the aim and pattern of the selected publications. As shown in this figure, the most frequently repeated term in the selected studies is "learning" (27 times), followed by "emotional" (11 times) and "affective" (10 times).

Fig. 3. The word cloud of key terms appearing in selected affective computing-related studies.
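As a rough illustration of how such a word cloud can be produced, the sketch below uses the third-party Python wordcloud and matplotlib packages; these tools and the placeholder corpus are assumptions for illustration, not the tooling reported in this study.

```python
# Sketch of word-cloud generation from the titles, abstracts, and keywords
# of selected papers (placeholder corpus; assumes the "wordcloud" package).
from wordcloud import WordCloud
import matplotlib.pyplot as plt

corpus = "learning emotional affective learning emotion recognition learning"

wc = WordCloud(width=800, height=400, background_color="white").generate(corpus)
plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```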

3.1.2. Trend analysis of publications


The distribution of the number of research papers by year of publication from 2010 to 2017 is shown in Fig. 4. The figure shows that the number of affective computing studies in the education domain gradually increased from eight papers in 2010 to 21 studies in 2015. The number of published studies then decreased to ten papers in 2016 before rising to 12 papers in 2017. Therefore, the highest number of publications was recorded in 2015, with 21 studies.

Fig. 4. Number of papers on affective computing in education.

3.2. (RQ2) what are the main research purposes and learning domains addressed in the selected papers?

3.2.1. Research purposes


In this research, each paper was classified into one of four categories based on its stated research purpose: (1) investigating the effects of emotion on users' intention, achievement, and performance; (2) designing emotion recognition and expression systems/methods/instruments; (3) examining the relationships among emotion, motivation, learning style, and cognition; and (4) evaluating the effects of lecturer behavior on students' emotion and motivation. Fig. 5 shows that the most common research purpose was designing emotion recognition and expression systems/methods/instruments (67%), followed by examining the relationships among emotion, motivation, learning style, and cognition (23%), with only 7% of studies investigating the effects of emotion on users' intention, achievement, and performance, and just 3% evaluating the effects of lecturer behavior on users' emotion and motivation.

3.2.2. Learning domains


This study adopted the QS World University Rankings list, 2016, which identifies five popular learning domains, covering 42
subjects, as shown in Table 3.
The main learning domains are "social sciences and management" (21%) and "engineering and technology" (19%), followed by "natural sciences" (15%), "arts and humanities" (6%) and "life sciences and medicine" (4%). A further 13% of studies were conducted in mixed learning domains. However, a large group of studies (22%) do not specify any learning domain, as they mainly focus on the design of
affective recognition systems and investigation of emotional states.

3.3. (RQ3) what are the main affective measurement channels and methods used in the selected papers?

In this study, the affective measurement channels were classified as textual, visual, vocal, physiological, and multimodal, which is
any integration of those channels. Fig. 6 shows a logic chart of affective measurement channels and the methods used in each
channel. Most studies were reported in the textual channel, in which the majority used a self-reporting method, either through questionnaires (32 studies) or text (14 studies), while 3 studies used expert observation. In the visual channel, facial expression was the most widely used method (8 studies). In the physiological channel, the common affective measurement methods were electroencephalogram (EEG) (4 studies), eye tracking (3 studies), and electrocardiography (ECG), heart rate variability (HRV), and skin conductance level (SCL) (2 studies each) (refer to Appendix A for definitions of physiological affective measurement methods). There were 16 studies in the multimodal channel. There were no studies that applied only the vocal channel (speech and/or prosody & intonation methods). However, the vocal channel is considered a distinctive channel, as it is used as part of the multimodal channels

Fig. 5. Distribution of studies by research purpose.


Table 3
Learning domains and subjects (adopted from QS Top Universities (2016)).
Arts & humanities: "Archaeology, architecture, art & design, English language & literature, history, linguistics, modern languages, performing arts, philosophy."
Engineering & technology: "Chemical engineering, civil & structural engineering, computer science, electrical engineering, mechanical, aeronautical & manufacturing engineering, mineral & mining engineering."
Life sciences & medicine: "Agriculture & forestry, biological sciences, dentistry, medicine, nursing, pharmacy & pharmacology, psychology, veterinary science."
Natural sciences: "Physics & astronomy, mathematics, environmental sciences, earth & marine sciences, chemistry, materials sciences, geography."
Social sciences & management: "Accounting & finance, anthropology, business & management, sociology, communication, politics, education, development studies, statistics, economics, law, social policy."

3.3.1. Textual channel


Table 4 shows the frequency analysis of the different questionnaires that have been used in past affective computing studies.
The table reveals that the “Achievement Emotions Questionnaire” (AEQ) was the most popular and first ranked questionnaire, as it
was used in 20 studies on affective computing in education. “Self-Assessment Manikin” (SAM) and “Positive and Negative Affect
Schedule” (PANAS) questionnaires were each used in three studies, while the “Mini-International Personality Item Pool” (mini-IPIP),
“Agent Response Inventory” (ARI), “Scales for Assessing Positive/Negative Activation and Valence in Experience Sampling Studies”
(PANAVA-KS), "Achievement Emotions Questionnaire-Elementary School" (AEQ-ES), and "Scale for Early Mathematics Anxiety"
(SEMA) questionnaires were each used once in the recorded studies.

3.3.2. Multimodal channel


Integration of different channels into the multimodal-based affective computing channel is shown in Table 5. According to this
table, integration of textual and visual channels, used in seven (7) studies, is the most widely-used multimodal channel in affective
computing studies. Integration of textual, visual, and physiological (three studies), and visual and vocal (two studies) were also
carried out. Other integration strategies used included: textual + visual + vocal, textual + vocal + physiological, visual + vocal + physiological, and textual + visual + vocal + physiological (each with one study).

3.4. (RQ4) what are the major theories/models of emotion adopted and the emotional states considered in the selected papers?

3.4.1. Emotion theories/models


Two main steps are defined for an affective recognition system to measure a user's emotions. First, the system collects data on the user's emotions; it then predicts the user's emotional states based on a predefined model, previously recognized emotions, and the stored data (Jaques, Vicari, Pesty, & Martin, 2011, pp. 599–608). Generally, a categorical model aims at explaining the cognitive process that elicits an emotion from predefined categories of emotions, whereas a dimensional model refers to a two-dimensional model of affect, where emotions are considered to be a combination of arousal/learning and valence/affect. Table 6 shows the frequency analysis of different emotion theories/models and their description in terms of dimensional or categorical emotional states. The table shows that the majority of studies (20 papers) used a dimensional theory/model for the description of emotional states, among which the control-value theory of achievement emotions proposed by Pekrun (2006) is the most widely-used theory. The learning spiral model of Kort, Reilly, and Picard (2001) and Russell's (1980) circumplex model of emotions are the second and third most commonly used frameworks, respectively. Other dimensional theories/models, such as the Limited Capacity Model of Motivated Mediated Message Processing (Lang, 2006), Whissell's evaluation-activation space (Whissel, 1989), Posner's circumplex model of affect (Posner, Russell, & Peterson, 2005), and Feidakis's emotion model (Michalis Feidakis, Daradoumis, Caballé, Conesa, & Gañán, 2013), were each used once. Nine studies used categorical-based theories; these include five which used the Facial Action Coding System (FACS) (Ekman, 1992; Ekman & Friesen, 1971) and four which used Ortony, Clore and Collins' Structure of Emotions (OCC) (Ortony, Clore, & Collins, 1988).
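To make the dimensional description concrete, the sketch below places a few emotional states in a valence-arousal space in the spirit of Russell's circumplex model and maps a measured point to its nearest labeled emotion. The coordinates are illustrative assumptions, not values drawn from the reviewed studies.

```python
# Minimal sketch of a dimensional (valence-arousal) representation.
# Coordinates are rough illustrative placements in [-1, 1].
import math

CIRCUMPLEX = {
    "enjoyment": ( 0.8,  0.5),   # pleasant, moderately activated
    "boredom":   (-0.4, -0.6),   # unpleasant, deactivated
    "anxiety":   (-0.6,  0.7),   # unpleasant, highly activated
    "sadness":   (-0.7, -0.4),   # unpleasant, deactivated
}

def nearest_emotion(valence: float, arousal: float) -> str:
    """Map a measured (valence, arousal) point to the closest labeled emotion."""
    return min(
        CIRCUMPLEX,
        key=lambda e: math.dist((valence, arousal), CIRCUMPLEX[e]),
    )

print(nearest_emotion(0.7, 0.4))   # -> "enjoyment"
```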

3.4.2. Emotional states


Table 7 shows the frequency analysis of the top 20 emotional states used in the studies in the educational domain. Owing to the
diverse emotional states involved in education, the majority of the less frequent emotional states are not included in the table. The
table shows that the top 20 emotional states used in affective computing studies are boredom, anger, anxiety, enjoyment, surprise,
sadness, frustration, pride, hopefulness, hopelessness, shame, confusion, happiness, natural, fear, joy, disgust, interest, relief, and
excitement. The top emotions reported in the table indicate that research in the education domain is more concerned with negative emotions. This is also supported by Malekzadeh et al. (2015), who reported that many empirical studies have focused on applying different techniques for managing the negative emotional states of learners to improve learning productivity during learning episodes. Meanwhile, Vogel-Walcutt et al. (2012) emphasized controlling the negative state of boredom within educational settings to enhance educational practices. Generally, many researchers strongly agree that ITSs would significantly improve learners' performance if they could manage learners' negative emotional states (D'Mello & Calvo, 2013; Tian et al., 2014).


Fig. 6. Affective measurement channels and methods.


Table 4
Different questionnaires used in affective computing studies.
Rank Questionnaire Frequency

1 Achievement Emotions Questionnaire (AEQ) 20


2 Self-Assessment Manikin (SAM) 3
Positive and Negative Affect Schedule (PANAS) 3
3 Mini-International Personality Item Pool (mini-IPIP) 1
Agent Response Inventory (ARI) 1
Scales for Assessing Positive/Negative Activation and Valence in Experience Sampling Studies (PANAVA-KS) 1
Achievement Emotions Questionnaire-Elementary School (AEQ-ES) 1
Scale for Early Mathematics Anxiety (SEMA) 1

Table 5
Distribution of usage of multimodal channel.
References Multimodal channel

Textual Visual Vocal Physiological

Tj, Leister, Schulz, and Larssen (2015) * *


Caballé (2015) * * * *
Kaklauskas et al. (2015) * * *
Harley, Bouchet, Hussain, Azevedo, and Calvo (2015) * * *
Lin et al. (2014) * *
Salmeron-Majadas, Santos, and Boticario (2014) * *
Shen et al. (2014) * * *
Santos, Salmeron-Majadas, and Boticario (2013) * *
Harley, Bouchet, and Azevedo (2013) * *
Lin, Wang, Chao, and Chien (2012) * *
Alepis and Virvou (2011) * *
Hussain, AlZoubi, Calvo, and D’Mello (2011) * * *
Banda and Robinson (2011) * *
D’Mello, Lehman, and Graesser (2011) * *
Cooper, Muldner, Arroyo, Woolf, and Burleson (2010) * * *
Mao and Li (2010) * * *

Table 6
Emotion theory/model used in affective computing studies.
Control-value theory of achievement emotions (Pekrun, 2006): dimensional; used in 11 studies.
Facial Action Coding System (FACS) (Ekman, 1992; Ekman & Friesen, 1971): categorical; used in 5 studies.
Ortony, Clore and Collins Structure of Emotions (OCC) (Ortony et al., 1988): categorical; used in 4 studies.
Learning spiral model (Kort et al., 2001): dimensional; used in 3 studies.
Russell's circumplex model of emotions (Russell, 1980): dimensional; used in 2 studies.
Limited Capacity Model of Motivated Mediated Message Processing (Lang, 2006): dimensional; used in 1 study.
Whissell's evaluation-activation space (Whissel, 1989): dimensional; used in 1 study.
Posner's circumplex model of affect (Posner et al., 2005): dimensional; used in 1 study.
Feidakis's emotion model (Michalis Feidakis et al., 2013): dimensional; used in 1 study.
Total number of dimensional emotion theories/models = 20
Total number of categorical emotion theories/models = 9

3.4.3. Distribution of emotional states by affective measurement channels


The distribution of the research papers by the top 20 emotional states and affective measurement channels is shown in Fig. 7.
Among the measurement channels, the textual channel is mainly used to measure boredom, anger, anxiety, and enjoyment; the visual
channel is mainly used for measuring surprise, anger, sadness, and disgust; the physiological channel is mainly used for measuring
boredom, frustration and disgust; and the multimodal channel is mainly used for measuring anger, surprise, frustration, confusion, and
natural emotions.

4. Discussion

The sections below present discussions, recommendations and insights for future research directions according to the four re-
search questions.


Table 7
Frequency analysis of emotion states.
Rank Emotion Frequency

1 Boredom 41
2 Anger 37
3 Anxiety 30
4 Enjoyment 27
5 Surprise 25
6 Sadness 24
7 Frustration 24
8 Pride 21
9 Hopefulness 20
10 Hopelessness 19
11 Shame 19
12 Confusion 17
13 Happiness 17
14 Natural 17
15 Fear 17
16 Joy 17
17 Disgust 15
18 Interest 15
19 Relief 13
20 Excitement 13

Fig. 7. Distribution of emotional states by affective measurement channels.

4.1. (RQ1) what are the trends in affective computing in education/learning as can be gauged from the selected papers?

From Fig. 4, it can be observed that there is a trend towards more research and interest in affective computing, indicating the increasing importance of this subject in the educational domain. This finding is supported by C. Wu et al. (2015), who reported a growing trend of affective computing research in the education domain. It is expected that almost all learning applications and platforms will have an embedded capability to detect and monitor learners' emotions in the near future. More specifically, considering the rapid development and importance of mobile learning in current educational systems (Elaish, Shuib, Ghani, & Yadegaridehkordi, 2017; Elaish, Shuib, Ghani, Yadegaridehkordi et al., 2017; Yadegaridehkordi & Iahad, 2012), developing emotional models that can be integrated into mobile devices such as tablets and mobile phones is increasingly important (Subramainan, Yusoff, & Mahmoud, 2015). Therefore, the educational environments of the future might be different from, and more sophisticated than, the current situation. Meanwhile, given the significant role of educational institutions in the development of societies, knowledge and technologies, developing strategies to fulfill their historic mission of teaching and research becomes crucial for various stakeholders (Olcay & Bulu, 2016). In this sense, policymakers and practitioners in education are urged to make appropriate plans to allocate the necessary resources for supporting future research and development in affective recognition and expression.


4.2. (RQ2) what are the main research purposes and learning domains addressed in the selected papers?

Crompton, Burke, Gregory, and Gräbe (2016) and Wingkvist and Ericsson (2013) emphasize the importance of determining research
methods and purposes in order to interpret and share results and facilitate knowledge transfer. However, previous reviews on affective
computing have not attempted to review and classify the research purposes. This study presents a new finding to fill the gap. According
to Fig. 5, of the 94 studies reviewed in this research, 67% considered “designing emotion recognition and expression systems/methods/
instruments” as the primary research purpose. As C. Wu et al. (2015) and Malekzadeh et al. (2015) pointed out, the increased appli-
cation of affective computing in education has only started in recent years. Thus, it is not surprising that there is still a lack of emotion recognition and expression systems/methods/instruments. Therefore, most researchers have focused their efforts on
designing and proposing emotion recognition and expression systems/methods/instruments for educational environments (Caballé,
2015; Khalfallah & Slama, 2015; Salmeron-Majadas et al., 2014; Sandanayake & Madurapperuma, 2013; Yang, Alsadoon, Prasad, Singh,
& Elchouemi, 2018). In view of this development, researchers and practitioners in education are urged to incorporate new devices and
equipment, such as intelligent sensors, cameras, speech prosody and intonation recognition, in the design and development of affective
recognition systems. Further, exploiting the latest information technology such as cloud computing, green information technology and
internet of things, in the design process could affect a dramatic change in affective computing. Meanwhile, by considering the significant
impacts of color features like chroma, hue or lightness on emotions, concentrating on emotional differences and color preferences in
different learning situations can be another research direction in education domain (Sokolova & Fernández-Caballero, 2015). Other
research purposes include examining the relationships among emotion, motivation, learning style and cognition (Baldassarri et al.,
2015; Harley, Carter, et al., 2015; Heidig, Müller, & Reichelt, 2015; Tj et al., 2015), and investigating the effects of emotion on users'
intention, achievement, and performance (Chen & Wu, 2015; Chung, Cheon, & Lee, 2015; Lackéus, 2014).
Although the design of an affective recognition system or method is the first step in integrating such systems into the educational
environment, it is important to examine the performance, effectiveness, and usability of these systems, which will help in designing
more efficient and applicable systems and methods. Therefore, researchers and practitioners should examine and appraise these
factors as soon as a new affective recognition system or method is introduced into the educational sector.
Studies on affective computing in education have been widely conducted in different learning domains. The results show that
affective computing is generally applied in courses related to the social sciences and management field (Finch, Peacock, Lazdowski, & Hwang, 2015; Heidig et al., 2015; Peterson et al., 2015; Urhahne, 2015), and to a considerably lesser extent in the life sciences & medicine field. Therefore, it is hoped that researchers in the different disciplines can collaborate in designing and developing suitable applications for under-represented courses. So far, the potential use of affective learning in the latest open and online learning platforms, such as Massive Open Online Courses (MOOCs), M-learning (mobile learning), and CSCL (computer-supported collaborative learning), has not been critically explored. Hence, future studies should be conducted on emotion-sensitive computerized MOOCs, M-learning, and CSCL to provide more supportive and personalized learning environments (M. Feidakis, 2016).

4.3. (RQ3) what are the main affective measurement channels and methods used in the selected papers?

Afzal and Robinson (2011) classified channels of nonverbal behavior into visual, vocal, and physiological. C. Wu et al. (2015)
reported the use of conventional questionnaires, skin conductance response, facial expression, heartbeat, EEG and EMG, as some of the
instruments used in affective computing in education/learning studies. However, based on the papers reviewed, no study has com-
prehensively classified the affective measurement channels and methods used in educational environments. This study identified five
major categories of affective measurement channels: textual, visual, vocal, physiological, and multimodal (see Fig. 6). Each channel
employs different methods: the textual channel uses self-report by questionnaire or by text, and expert observation; the visual channel
uses facial expression, head pose, body gesture, eye gaze, and keystroke dynamics; the vocal channel uses speech and prosody and
intonation; the physiological channel uses EMG, ECG, HR, HRV, EOG, EEG, EDA, BVP, SCL, breathing rate, temperature, and eye
tracking, while the multimodal channel employs any integration of the above-mentioned channels. This study presents new findings not
reported previously. The results show that speech and prosody and intonation have not been used alone in any of the studies reviewed.
Therefore, it can be concluded that a single vocal channel is the least useful channel, as it is only applicable as part of a multimodal channel for detecting emotional states in education. It is widely accepted that, in emotional speech processing, emotional speech changes according to different acoustic features (Scherer, 1986). Individuals show their feelings not only through acoustic features but also through the content they intend to deliver. Different phrases, words, and syntactic structures can produce different kinds of expression styles and results. Consequently, the lack of capture and analysis of more detailed/reliable physiological features limits emotional speech synthesis and recognition systems (Tao & Tan, 2005). Therefore, affective interaction multiagent systems need to be designed to precisely capture a human's emotions via voice, facial expression and physiological signals (C. Wu et al., 2015).
Meanwhile, in speech emotion recognition, learning representations for natural speech segments which can be utilized effectively under
unconstrained and noisy settings is a substantial challenge. Thus, it is more effective to collect and transfer information and knowledge
through other channels in addition to the individual voice channel (Albanie, Nagrani, Vedaldi, & Zisserman, 2018).
Table 8 presents a brief comparative overview of the different channels. Although textual methods are easy and cheap to use, they struggle with challenges such as cultural and language differences, not operating in real time, and limited accuracy (Broekens & Brinkman, 2013; Lin et al., 2014). Visual methods provide extra information and are practically more deployable. However, they are associated with noisy sensors that are largely unscalable, image processing problems, and privacy issues. Vocal sensing provides accurate information through integration into interactive user interfaces. However, it is largely limited to dialogue-based learning systems. Physiological analysis requires tightly controlled environmental conditions as well as specialized and fragile equipment, and thus may not always be suitable for learning environments (Chen & Sun, 2012; Chen & Wu, 2015; Gil et al., 2015).

Table 8
Pros and cons of different methods in affective computing.

Textual channel (self-report by questionnaire; self-report by text; expert observation)
Pros: easy to implement and use; meaningful feedback; cheap and not dependent on the use of any special equipment.
Cons: cultural and language differences; not real time; not accurate enough.
References: Bellocchi, Mills, & Ritchie, 2015; Castellar, Van Looy, Szmalec, & De Marez, 2014; Craig, Graesser, Sullins, & Gholson, 2004; Leony et al., 2013; Muis, Psaradellis, Lajoie, Di Leo, & Chevrier, 2015; Tj et al., 2015; Urhahne, 2015.

Visual channel (facial expression; head pose; body gesture)
Pros: natural and observable; cheap equipment except for gesture; extra information; practically deployable.
Cons: time- and resource-consuming; noise; image processing problems; privacy issues.
References: Afzal & Robinson, 2011; Ammar, Neji, Alimi, & Gouardères, 2010; Baldassarri et al., 2015; Behoora & Tucker, 2015; Duo & Song, 2012; Khalfallah & Slama, 2015; Qi-rong, 2010; Robinson, 2014; Shute et al., 2015.

Vocal channel (speech; prosody & intonation)
Pros: natural, discernible; accurate; integrated into interactive user interfaces; practically deployable.
Cons: limited to dialogue-based systems; time- and resource-consuming; cultural and language differences.
References: Xuejing, Zhiliang, Siyi, & Wei, 2009; Tao & Tan, 2005.

Physiological channel (EEG; ECG; HRV; BVP; SCL; HR; eye tracking)
Pros: can be extended for real-time processing; easy access to the bio-signals.
Cons: unobservable; issues with comfort and privacy; requires tightly controlled environmental conditions; specialized and fragile equipment; low recognition accuracy on dominance emotion; difficult to interpret collected data.
References: Chen & Sun, 2012; Chen & Wu, 2015; Gil et al., 2015; Ray & Chakrabarti, 2015; Saadatzi, Welch, Pennington, & Graham, 2013; Shen, Wang, & Shen, 2009; Charoenpit & Ohkura, 2013, 2014; Shen, Callaghan, & Shen, 2008; C.-H. Wu, Tzeng, & Huang, 2014.

Multimodal channel (integrations of the above channels, e.g., textual + visual; textual + visual + physiological; textual + vocal + physiological; visual + vocal; textual + visual + vocal; visual + vocal + physiological; textual + visual + vocal + physiological)
Pros: overcome the constraints of individual channels; improve the accuracy over the individual channels.
Cons: technical challenges associated with integrating channels; difficulties associated with collecting adequate and realistic data; difficult to manage and interpret the huge amount of data generated from various channels.
References: Cooper et al., 2010; Harley, Bouchet, et al., 2015; Salmeron-Majadas et al., 2014; C.-H. Wu et al., 2014; D'Mello et al., 2011; Harley et al., 2013; Lin et al., 2012; Santos et al., 2013; Alepis & Virvou, 2011; Banda & Robinson, 2011, pp. 200–207; Caballé, 2015; Kaklauskas et al., 2015; Mao & Li, 2010; Shen et al., 2014.

The literature on affective computing recommends that integrating different input sources can enhance the outcome of affect recognition. However, technical challenges associated with integrating channels and difficulties in managing and interpreting the huge amount of data generated from various channels are the basic issues that need to be considered when using multimodal channels (D'Mello & Kory, 2012).
The textual channel was the most commonly used affective measurement channel, as the vast majority of studies utilized a self-
reported questionnaire method (see Fig. 6). This could be due to the advantages of the questionnaire, such as its reliability, validity, ease
of use, meaningful feedback, cheapness, and the fact that it is not dependent on the use of any special equipment (Broekens & Brinkman,
2013; Lin et al., 2014). In addition, as shown in Table 4, AEQ is the most popular questionnaire in affective computing studies. AEQ was
proposed by Pekrun, Goetz, Titz, and Perry (2002) and Pekrun, Goetz, Frenzel, Barchfeld, and Perry (2011) to recognize emotional
states experienced by students. The instrument measures enjoyment, hope, pride, relief, anger, anxiety, shame, hopelessness, and
boredom in different situations using 24 scales. AEQ has been used successfully in affective computing studies (Moga, Sandu, Danciu,
Boboc, & Constantinescu, 2013; Muis et al., 2015; Pekrun, Cusack, Murayama, Elliot, & Thomas, 2014; Urhahne, 2015). However, since
the educational environment provides a good source of different emotional experiences, researchers are encouraged to develop a
multidimensional questionnaire that is comprehensive enough to measure different emotions and to assess the complex relationships between cognitive and motivational aspects of learning in different teaching and learning activities and situations (Burić et al., 2016). It is often difficult for individuals to recognize and report their own emotions under different situations. Meanwhile, controlling and monitoring students' dynamic characteristics and emotions during learning activities is problematic through textual methods (Chrysafiadi & Virvou, 2013). According to Poria et al. (2017), the way humans naturally express their emotions and feelings is typically a multimodal combination of the textual, audio, and visual modalities. Thus, according to Vogel-Walcutt et al. (2012), practitioners and researchers should explore optimal combinations of self-reported instruments and other physiological affective recognition methods for more accurate assessment. In this situation, the large amount of collected information could help in recognizing hidden emotions, as well as in achieving a more comprehensive understanding of learners' behavior in the real environment (D'Mello & Kory, 2012).
The integration of textual and visual channels is the most widely used multimodal channel in affective computing studies. Facial
expression is the best direct method for accurately detecting emotional states, especially in virtual learning environments (Baldassarri et al., 2015; D'Mello & Kory, 2012; Yang et al., 2018). Meanwhile, according to Tao and Tan (2005), the quality of facial simulations has improved significantly due to recent advances in hardware and matching software. Therefore, multimodal-based studies have mainly tried to solve the challenges of the textual channel (e.g., the Hawthorne effect, and the lack of capability to recognize changes in affective states) by taking advantage of the visual channel, especially the facial expression method (Lin et al., 2014; Salmeron-Majadas et al., 2014; Santos et al., 2013; Tj et al., 2015). Visual + vocal was reported as another common multimodal channel (Alepis & Virvou, 2011; Hussain et al., 2011). According to Tao and Tan (2005), audio-visual mapping is a powerful method for capturing and managing users' emotions and feelings in a desired system. This supports the findings of D'Mello and Kory (2012), who conducted a meta-analysis and found that the vast majority of multimodal-related studies are bimodal and generally focused on audio-visual affect recognition.
Some researchers believe that multimodal approaches help to overcome the constraints of individual channels and improve the
accuracy over the best unimodal channels (Banda & Robinson, 2011, pp. 200–207; D'Mello & Kory, 2012; Michalis Feidakis, Caballé, Daradoumis, Jiménez, & Conesa, 2014; Harley, Bouchet, et al., 2015). However, some theoretical, methodological, and measurement challenges still need to be addressed before these types of channels can be used for measuring learners' emotions empirically. Furthermore, arising from the findings, some questions still remain on how to integrate different channels to achieve better results, how to deal with the different data types or formats that different methods provide, how to manage disagreements on emotional states among various methods at a particular time, and how to obtain recognition results; these need to be answered in future in-depth studies (Poria et al., 2017). Thus, researchers are urged to investigate the integration of different methods in order to propose multimodal-based affective recognition systems that can comprehensively model human emotional states under different teaching/learning conditions. Moreover, more effort should be devoted to parameter integration (Tao & Tan, 2005).
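As one concrete (and deliberately simplified) illustration of such integration, the sketch below performs decision-level fusion by taking a weighted average of per-channel class probabilities. The emotion set, channel weights, and probability values are assumptions for illustration, not a method reported in the reviewed studies.

```python
# Minimal sketch of decision-level (late) fusion across affective channels.
EMOTIONS = ["boredom", "confusion", "enjoyment", "frustration"]

def fuse(channel_probs: dict[str, dict[str, float]],
         weights: dict[str, float]) -> str:
    """Weighted average of per-channel class probabilities, then argmax."""
    scores = {e: 0.0 for e in EMOTIONS}
    for channel, probs in channel_probs.items():
        w = weights.get(channel, 1.0)
        for e in EMOTIONS:
            scores[e] += w * probs.get(e, 0.0)
    return max(scores, key=scores.get)

# Example: textual (self-report) and visual (facial expression) predictions.
prediction = fuse(
    {
        "textual": {"boredom": 0.5, "confusion": 0.3, "enjoyment": 0.1, "frustration": 0.1},
        "visual":  {"boredom": 0.2, "confusion": 0.6, "enjoyment": 0.1, "frustration": 0.1},
    },
    weights={"textual": 0.4, "visual": 0.6},
)
print(prediction)  # -> "confusion"
```

More sophisticated schemes (feature-level fusion, learned fusion weights, or handling disagreement between channels over time) are exactly the open questions raised above.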
Generally, textual and visual channels can be appropriate affective recognition channels in financially restricted situations and/or in situations where the available equipment is limited. Visual methods provide extra information and are practically more deployable. Non-invasive physiological methods are appropriate for collecting and understanding individuals' emotional states without any physical contact. Multimodal approaches help to overcome the constraints of individual channels and improve the accuracy over the best unimodal channels. However, challenges associated with managing the actual aspects of affective states and behavior, technical feasibility, and practical matters make the selection of an appropriate channel for emotion detection and interpretation more difficult. Meanwhile, privacy, ethical, and comfort considerations limit the deployment, design, and implementation of the most suitable sensing technologies. Thus, researchers, policymakers and practitioners in the education sector need to adopt the most appropriate affective recognition channels and methods based on specific situations and on the availability of sufficient equipment and financial resources, by evaluating and comparing the benefits and challenges associated with each channel and its methods.

4.4. (RQ4) what are the major theories/models of emotion adopted and the emotional states considered in the selected papers?

In previous studies, emotional states were discussed under categorical and dimensional emotion theories/models. Categorical
models use discrete emotions, such as fear and anger, to model a person's emotional states, whereas dimensional models represent
a person's affective states in a multi-dimensional space, such as valence-arousal or learning-affect spaces (D'Mello & Kory, 2012).
Table 6 shows that the selected studies on affective computing in education mostly focused on applying a dimensional theory/model
for description of emotional states (Finch et al., 2015; Harley, Bouchet, et al., 2015; Moga et al., 2013; Muis et al., 2015), of which the
most popular is the control-value theory of achievement emotions proposed by Pekrun (2006). This finding is supported by Peterson


et al. (2015) who stated that achievement emotions-related research has been dominated by Pekrun's theory. According to the
control-value theory of achievement emotions, students' beliefs regarding their cognitive quality are significantly linked to their
control and value appraisals of the academic environment, which, in turn, influence their emotional learning results (Pekrun, 2006).
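As a simple illustration of how such appraisal-based reasoning could be operationalized, the sketch below maps a learner's perceived control over and subjective value of a learning activity to one of three activity emotions; the thresholds and the restriction to three emotions are illustrative assumptions and are not prescribed by Pekrun's (2006) theory itself.

```python
# Highly simplified sketch of the control-value idea: an activity emotion is
# read off from a learner's perceived control over, and subjective value of,
# a learning activity. Thresholds and the small emotion set are illustrative
# assumptions, not part of Pekrun's (2006) theory.

def activity_emotion(control: float, value: float) -> str:
    """control, value in [0, 1]: appraisals of the learning activity."""
    if value < 0.3:
        return "boredom"        # activity perceived as having little value
    if control >= 0.6:
        return "enjoyment"      # valued activity the learner feels in control of
    return "frustration"        # valued activity with low perceived control

if __name__ == "__main__":
    print(activity_emotion(control=0.8, value=0.9))  # enjoyment
    print(activity_emotion(control=0.2, value=0.9))  # frustration
    print(activity_emotion(control=0.7, value=0.1))  # boredom
```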
The FACS proposed by Ekman and Friesen (1971) and the OCC model proposed by Ortony et al. (1988) are the two main categorical
models in affective-related studies. Ekman and Friesen introduced anger, fear, disgust, surprise, joy, and sadness as the six "basic
emotions". Gil et al. (2015) found Ekman's model to be the most commonly used categorical model in affective computing studies.
However, according to Banda and Robinson (2011, pp. 200–207), the FACS developed by Ekman and Friesen is the most widely used
system for the coding and measurement of facial movements. On the other hand, Ortony et al.'s OCC model is one of the well-known
cognitive-based psychological models for recognizing and interpreting emotional states (Jaques et al., 2011, pp. 599–608), because it
is based on the cognitive theory of emotions and is easy to implement computationally. This model explains the appraisal and the
cognitive process that elicits an emotion, drawing on twenty-two fixed emotional categories. However, neither FACS nor OCC
includes boredom and interest, which are important and relevant affective states in the learning environment (R. W. Picard et al., 2004).
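The event-based branch of the OCC appraisal logic can be sketched in a few lines; the reduction of the twenty-two categories to four (joy, distress, hope, fear) and the numeric desirability scale are illustrative assumptions.

```python
# Highly simplified sketch of the event-based branch of the OCC model:
# an emotion is derived from the appraisal of an event's desirability and
# whether the event is actual or merely prospective. The reduction to four
# of OCC's twenty-two categories is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class EventAppraisal:
    desirability: float   # -1.0 (very undesirable) .. +1.0 (very desirable)
    prospective: bool     # True if the event has not happened yet

def occ_event_emotion(appraisal: EventAppraisal) -> str:
    if appraisal.prospective:
        return "hope" if appraisal.desirability >= 0 else "fear"
    return "joy" if appraisal.desirability >= 0 else "distress"

if __name__ == "__main__":
    # A student anticipating a difficult exam (prospective, undesirable).
    print(occ_event_emotion(EventAppraisal(desirability=-0.7, prospective=True)))   # fear
    # The same student after receiving a good grade (actual, desirable).
    print(occ_event_emotion(EventAppraisal(desirability=0.8, prospective=False)))   # joy
```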
Since emotional diversity is closely related to the selected theoretical foundation (Gross, 1998), limiting the range of emotions for
theoretical reasons might lead to missing important parts of students' affective states. In comparison to the categorical approach,
dimensional theories/models are preferable because they consider and describe a wide range of emotional states (Feidakis et al.,
2013). This can be one of the reasons why affective computing-related studies in education generally employed dimensional
theories/models for the description of emotional states. However, according to D'Mello and Kory (2012), a mixed classification can
also be employed to broaden the emotional states studied. Since mixed classification is still relatively rare (D'Mello & Kory, 2012),
future research can focus more on applying this classification in the education domain. Generally, only a few studies have explicitly
described and explained the emotion theory/model they used for affective recognition in education. Thus, a gap exists between
theory and practice. Theories of affect in learning need to be tested in practice through different affective measurement channels and
methods, and to evolve in real learning processes, in order to provide more effective, personalized, and adaptive emotional feedback
in learning environments.
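One way to operationalize such a mixed classification is to anchor a set of discrete labels at assumed coordinates in a valence–arousal plane, in the spirit of the circumplex model (Russell, 1980), and to assign an observed dimensional estimate to its nearest anchor; the coordinates below are rough illustrative guesses rather than empirically calibrated values.

```python
# Sketch of a mixed categorical/dimensional classification: discrete emotion
# labels are anchored at assumed (valence, arousal) coordinates, and an
# observed point in the dimensional space is mapped to the nearest anchor.
# The coordinates are rough illustrative guesses, not calibrated values.

import math

ANCHORS = {
    "enjoyment":   ( 0.8,  0.5),
    "boredom":     (-0.4, -0.7),
    "anxiety":     (-0.6,  0.7),
    "frustration": (-0.7,  0.4),
    "relief":      ( 0.5, -0.3),
}

def nearest_emotion(valence: float, arousal: float) -> str:
    """Return the discrete label whose anchor is closest in Euclidean distance."""
    return min(
        ANCHORS,
        key=lambda label: math.dist((valence, arousal), ANCHORS[label]),
    )

if __name__ == "__main__":
    # A slightly negative, low-arousal estimate is mapped to boredom here.
    print(nearest_emotion(valence=-0.3, arousal=-0.5))
```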
Table 7 shows the top 20 emotional states that have been used in studies in the educational domain. The table indicates that most
of the studies focused on the management of negative emotions such as boredom, anger, and anxiety, which are perceived as a
hindrance in educational environments. This result is supported by Malekzadeh et al. (2015) and Vogel-Walcutt et al. (2012), who
reported that many empirical studies have focused on applying different techniques for managing learners' negative emotional states
to improve learning productivity during learning episodes. According to Vogel-Walcutt et al. (2012), more in-depth research is still
required to recognize technology-based assessment and qualification approaches that focus on individual negative academic
emotions such as boredom. Although negative emotions are very important, researchers and other concerned stakeholders need to
explore additional strategies aimed at managing positive emotions, in order to enhance efforts in achieving the desired academic
goals. Further, observing how different emotional states influence the learning processes and outcomes in different learning
approaches (such as online learning, face-to-face learning, mobile learning, and game-based learning) could be another direction for
future studies. Previous studies have paid little attention to specific academic-related emotions with respect to specific contexts,
age groups, genders, and subject domains. Therefore, researchers and practitioners could work towards providing a unique cate-
gorization of academic-related emotions, based on different age groups, genders, and subject domains, in order to accelerate and
improve the process of affective computing with minimum resource requirements in educational environments. Another interesting
direction for future study is to find ways of promoting and triggering positive academic-related emotions or preventing negative ones
in educational environments. For example, as suggested by Vogel-Walcutt et al. (2012), teachers could permit learners to be flexible
in choosing their topic for a particular class project rather than assigning a certain topic to them. This method can trigger positive
emotions and alleviate students' negative feelings while performing the assigned activity.
Fig. 7 shows the distribution of the top 20 emotional states by affective measurement channels. This figure can provide some
insights into selecting an appropriate affective recognition channel and method based on the emotions under consideration, or vice
versa. There are only a few reports that some physiological signals are associated with the "basic emotions", such as sadness, anger, and
disgust (Calvo & D'Mello, 2010). According to Li, Cheng, and Qian (2008), facial expressions are used to classify expressions into
basic emotions such as surprise, fear, anger, and disgust. However, this study goes a step further and explores the relationship
between the top emotional states and affective measurement channels in the education domain. This is a new finding that has not
been reported in previous studies. This finding provides a guideline for identifying the channel that is most appropriate for
recognizing specific emotional states. For example, textual methods are better suited for exploring boredom, anxiety, anger, and
enjoyment, while multimodal methods are more appropriate for detecting anger, surprise, frustration, disgust, and confusion.
The channels can also be ranked, and in the event that some of them are unavailable, for instance, due to limitations of resources and
equipment, a comparable or the next best channel set can be selected. For example, the textual channel is the first option for
measuring boredom. However, multimodal-based channels can be considered as a substitute in special situations, such as for
students with certain disabilities (such as blindness) and for non-literate older learners who cannot engage with textual methods.
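The fallback logic described above can be expressed as a small preference-ordered lookup; except for boredom, whose ordering follows the discussion above (textual first, multimodal as a substitute), the rankings are illustrative assumptions.

```python
# Sketch of preference-ordered channel selection with fallback. For boredom the
# ordering (textual first, multimodal as substitute) follows the discussion in
# the text; the remaining rankings are illustrative assumptions.

from typing import Iterable, Optional

PREFERRED_CHANNELS = {
    "boredom":  ["textual", "multimodal", "visual"],
    "anger":    ["multimodal", "textual", "visual"],
    "surprise": ["multimodal", "visual"],
}

def select_channel(emotion: str, available: Iterable[str]) -> Optional[str]:
    """Return the highest-ranked channel for the emotion that is actually available."""
    available = set(available)
    for channel in PREFERRED_CHANNELS.get(emotion, []):
        if channel in available:
            return channel
    return None

if __name__ == "__main__":
    # Textual instruments are unusable (e.g., non-literate learners), so the
    # next-ranked channel is chosen instead.
    print(select_channel("boredom", available={"visual", "multimodal"}))  # multimodal
```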

5. Conclusion

This study has presented an overview of affective computing in education. A systematic literature review method was adopted
from Kitchenham and Charters (2007) to answer four research questions on different aspects of affective computing in education. The
review covers the studies published between 2010 and 2017 and indexed in the ISI Web of Knowledge, ScienceDirect, IEEE Xplore, and
Springer Link databases. A stepwise systematic process was performed and inclusion/exclusion criteria were applied to filter and
select the relevant studies. The final selected papers were reviewed and relevant information extracted based on a set of research
questions. Eight new findings are presented that have contributed to answering the four research questions: (1) the continuous growth
in the number of published materials shows that the importance of affective computing in education has significantly increased in recent years; (2)
the research purpose of most affective computing studies is based on designing emotion recognition and expression systems/
methods/instruments, as well as on examining the relationships among emotion, motivation, learning style, and cognition; (3)
although affective computing has been actively investigated in the social sciences and management domain, a large number of studies
do not specify any learning domain as they mainly focus on the design of affective recognition systems and investigation of emotional
states; (4) affective measurement can be classified into textual, visual, vocal, physiological, and multimodal channels, with the textual
channel being the most widely-used affective measurement channel in recent years; (5) the AEQ is the most popular questionnaire used
in affective computing research in education studies; (6) integration of textual and visual channels is the most widely-used multi-
modal channel in affective computing studies; (7) dimensional theories/models are preferred for description of emotional states,
among which the control-value theory of achievement emotions proposed by Pekrun (2006) is the most widely-adopted theory; and
(8) boredom, anger, anxiety, enjoyment, surprise, sadness, frustration, pride, hopefulness, hopelessness, shame, confusion, happiness,
natural emotion, fear, joy, disgust, interest, relief, and excitement, are the top 20 emotional states. Finally, this study provides
recommendations and insights for future research directions. By reviewing the recent studies on affective computing in education and
providing new findings, it is hoped that this study can provide policymakers and practitioners in the education sector with new
insights into the more effective application of affective computing technology. The findings can also serve as a basis for educational
researchers looking for new research opportunities and directions in this field.

Acknowledgement

The authors would like to thank the University of Malaya for supporting and funding this research under Grant RP040A-15AET.

Appendix A: Physiological affective measurement methods (adapted from C. Wu et al., 2015)

Measurement method | Abbreviation | Description
Electromyogram | EMG | A technique for evaluating and recording the electrical activity produced by skeletal muscles. EMG is performed using an instrument called an electromyograph to produce a record called an electromyogram.
Electrocardiogram | EKG or ECG | The process of recording the electrical activity of the heart over a period of time using electrodes placed on the skin.
Heart Rate | HR | The speed of the heartbeat, measured by the number of contractions of the heart per minute (bpm).
Heart Rate Variability | HRV | The physiological phenomenon of variation in the time interval between heartbeats. It is measured by the variation in the beat-to-beat interval.
Electrooculogram | EOG | A technique for measuring the corneo-retinal standing potential that exists between the front and the back of the human eye.
Electroencephalography | EEG | An electrophysiological monitoring method to record the electrical activity of the brain. It is typically noninvasive, with the electrodes placed along the scalp, although invasive electrodes are sometimes used in specific applications.
Electrodermal Activity | EDA | The property of the human body that causes continuous variation in the electrical characteristics of the skin.
Blood Volume Pressure | BVP | A subject's BVP can be measured by a process called photoplethysmography, which produces a graph indicating blood flow through the extremities.
Skin Conductance Level | SCL | How well the skin conducts electricity when an external direct current of constant voltage is applied.
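
As a concrete illustration of two of the measures listed above, the sketch below derives heart rate and one widely used HRV statistic, RMSSD (the root mean square of successive differences between beat-to-beat intervals), from a list of R–R intervals in milliseconds; the interval values are invented purely for illustration.

```python
# Sketch: deriving heart rate (HR) and one common heart rate variability (HRV)
# statistic, RMSSD, from beat-to-beat (R-R) intervals given in milliseconds.
# The interval values below are invented purely for illustration.

import math
from typing import List

def heart_rate_bpm(rr_intervals_ms: List[float]) -> float:
    """Average heart rate in beats per minute from R-R intervals."""
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    return 60_000.0 / mean_rr

def hrv_rmssd(rr_intervals_ms: List[float]) -> float:
    """Root mean square of successive differences between R-R intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

if __name__ == "__main__":
    rr = [812.0, 798.0, 825.0, 790.0, 805.0]  # five illustrative R-R intervals
    print(f"HR = {heart_rate_bpm(rr):.1f} bpm")
    print(f"HRV (RMSSD) = {hrv_rmssd(rr):.1f} ms")
```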

References

Afzal, S., & Robinson, P. (2011). Designing for automatic affect inference in learning environments. Educational Technology & Society, 14(4), 21–34.
Albanie, S., Nagrani, A., Vedaldi, A., & Zisserman, A. (2018). Emotion recognition in speech using cross-modal transfer in the wild. arXiv:1808.05561.
Alepis, E., & Virvou, M. (2011). Automatic generation of emotions in tutoring agents for affective e-learning in medical education. Expert Systems with Applications,
38(8), 9840–9847.
Ammar, M. B., Neji, M., Alimi, A. M., & Gouardères, G. (2010). The affective tutoring system. Expert Systems with Applications, 37(4), 3013–3023.
Arroyo, I., du Boulay, B., Eligio, U., Luckin, R., & Porayska-Pomsta, K. (2011). The mood for learning: Methodology (DO informatics, trans.). Cognitive Science Research
Papers.
Asadi, S., & Dahlan, H. M. (2017). Organizational research in the field of green IT: A systematic literature review from 2007 to 2016. Telematics and Informatics, 34(7),
1191–1249.
Baker, R. S. J.d., D'Mello, S. K., Rodrigo, M. M. T., & Graesser, A. C. (2010). Better to be frustrated than bored: The incidence, persistence, and impact of learners'
cognitive–affective states during interactions with three different computer-based learning environments. International Journal of Human-Computer Studies, 68(4),
223–241. https://doi.org/10.1016/j.ijhcs.2009.12.003.
Balaid, A., Rozan, M. Z. A., Hikmi, S. N., & Memon, J. (2016). Knowledge maps: A systematic literature review and directions for future research. International Journal
of Information Management, 36(3), 451–475.
Baldassarri, S., Hupont, I., Abadía, D., & Cerezo, E. (2015). Affective-aware tutoring platform for interactive digital television. Multimedia Tools and Applications, 74(9),
3183–3206.
Banda, N., & Robinson, P. (2011). Multimodal affect recognition in intelligent tutoring systems. Affective Computing and Intelligent Interaction. Springer.
Barcelos, A. M. F., & Ruohotie-Lyhty, M. (2018). Teachers' emotions and beliefs in second language teaching: Implications for teacher education. Emotions in Second Language
Teaching. Springer.
Behoora, I., & Tucker, C. S. (2015). Machine learning classification of design team members' body language patterns for real time emotional state detection. Design
Studies, 39, 100–127.
Bellocchi, A., Mills, K. A., & Ritchie, S. M. (2015). Emotional experiences of preservice science teachers in online learning: The formation, disruption and maintenance
of social bonds. Cultural Studies of Science Education, 1–24.
Broekens, J., & Brinkman, W.-P. (2013). AffectButton: A method for reliable and valid affective self-report. International Journal of Human-Computer Studies, 71(6),
641–667. https://doi.org/10.1016/j.ijhcs.2013.02.003.
Burić, I., Sorić, I., & Penezić, Z. (2016). Emotion regulation in academic domain: Development and validation of the academic emotion regulation questionnaire
(AERQ). Personality and Individual Differences, 96, 138–147.
Cabada, R. Z., Estrada, M. L. B., Hernández, F. G., Bustillos, R. O., & Reyes-García, C. A. (2018). An affective and Web 3.0-based learning environment for a
programming language. Telematics and Informatics, 35(3), 611–628. https://doi.org/10.1016/j.tele.2017.03.005.
Caballé, S. (2015). Towards a Multi-modal Emotion-awareness e-Learning System. Paper presented at the international conference on intelligent networking and colla-
borative systems (INCOS).
Calvo, R. A., & D'Mello, S. (2010). Affect detection: An interdisciplinary review of models, methods, and their applications. Affective Computing, IEEE Transactions on,
1(1), 18–37.
Castellar, E. N., Van Looy, J., Szmalec, A., & De Marez, L. (2014). Improving arithmetic skills through gameplay: Assessment of the effectiveness of an educational
game in terms of cognitive and affective learning outcomes. Information Sciences, 264, 19–31.
Charoenpit, S., & Ohkura, M. (2013). A new E-learning system focusing on emotional aspect using biological signals. Human-Computer Interaction: Applications and Services.
Springer, 343–350.
Charoenpit, S., & Ohkura, M. (2014). Exploring emotion in an e-learning system using eye tracking. Paper presented at the IEEE Symposium on Computational Intelligence
in Healthcare and e-health (CICARE).
Chen, C.-M., & Sun, Y.-C. (2012). Assessing the effects of different multimedia materials on emotions and learning performance for visual and verbal style learners.
Computers & Education, 59(4), 1273–1285.
Chen, C.-M., & Wu, C.-H. (2015). Effects of different video lecture types on sustained attention, emotion, cognitive load, and learning performance. Computers &
Education, 80, 108–121.
Chrysafiadi, K., & Virvou, M. (2013). Student modeling approaches: A literature review for the last decade. Expert Systems with Applications, 40(11), 4715–4729.
Chung, S., Cheon, J., & Lee, K.-W. (2015). Emotion and multimedia learning: An investigation of the effects of valence and arousal on different modalities in an
instructional animation. Instructional Science, 43(5), 545–559.
Cooper, D. G., Muldner, K., Arroyo, I., Woolf, B. P., & Burleson, W. (2010). Ranking feature sets for emotion models used in classroom based intelligent tutoring
systems. Paper presented at the international conference on user modeling, adaptation, and personalization.
Craig, S., Graesser, A., Sullins, J., & Gholson, B. (2004). Affect and learning: An exploratory look into the role of affect in learning with AutoTutor. Journal of
Educational Media, 29(3), 241–250.
Crompton, H., Burke, D., Gregory, K. H., & Gräbe, C. (2016). The use of mobile learning in science: A systematic review. Journal of Science Education and Technology,
25(2), 149–160.
D'Mello, S., & Calvo, R. A. (2013). Beyond the basic emotions: What should affective computing compute? Paper presented at the CHI'13 extended abstracts on human
factors in computing systems.
D'Mello, S., & Kory, J. (2012). Consistent but modest: A meta-analysis on unimodal and multimodal affect detection accuracies from 30 studies. Paper presented at the
14th ACM international conference on multimodal interaction.
Duo, S., & Song, L. X. (2012). An e-learning system based on affective computing. Physics Procedia, 24, 1893–1898.
Dupre, D., Akpan, D., Elias, E., Adam, J.-M., Meillon, B., Bonnefond, N., ... Tcherkassof, A. (2015). Oudjat: A configurable and useable annotation tool for the study of
facial expressions of emotion. International Journal of Human-Computer Studies, 83, 51–61. https://doi.org/10.1016/j.ijhcs.2015.05.010.
D'Mello, S. K., Lehman, B., & Graesser, A. (2011). A motivationally supportive affect-sensitive AutoTutor. New perspectives on affect and learning technologies.
Springer, 113–126.
Ekman, P. (1992). An argument for basic emotions. Cognition & Emotion, 6(3–4), 169–200.
Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2), 124.
Elaish, M. M., Shuib, L., Ghani, N. A., & Yadegaridehkordi, E. (2017a). Mobile English language learning (MELL): A literature review. Educational Review, 1–20.
https://doi.org/10.1080/00131911.2017.1382445.
Elaish, M. M., Shuib, L., Ghani, N. A., Yadegaridehkordi, E., & Alaa, M. (2017b). Mobile learning for English language acquisition: Taxonomy, challenges, and
recommendations. IEEE Access, 5, 19033–19047.
Faria, A. R., Almeida, A., Martins, C., Gonçalves, R., Martins, J., & Branco, F. (2017). A global perspective on an emotional learning model proposal. Telematics and
Informatics, 34(6), 824–837. https://doi.org/10.1016/j.tele.2016.08.007.
Feidakis, M. (2016). A review of emotion-aware systems for e-learning in virtual environments. In S. Caballé, & R. Clarisó (Eds.), Formative assessment, learning
data analytics and gamification (pp. 217–242). Boston: Academic Press.
Feidakis, M., Caballé, S., Daradoumis, T., Jiménez, D. G., & Conesa, J. (2014). Providing emotion awareness and affective feedback to virtualised collaborative learning
scenarios. International Journal of Continuing Engineering Education and Life Long Learning, 24(2), 141–167.
Feidakis, M., Daradoumis, T., Caballé, S., Conesa, J., & Gañán, D. (2013). A dual-modal system that evaluates user's emotions in virtual learning environments and
responds affectively. J. UCS, 19(11), 1638–1660.
Finch, D., Peacock, M., Lazdowski, D., & Hwang, M. (2015). Managing emotions: A case study exploring the relationship between experiential learning, emotions, and
student performance. International Journal of Management in Education, 13(1), 23–36.
Fu, Y., Leong, H. V., Ngai, G., Huang, M. X., & Chan, S. C. F. (2017). Physiological mouse: Toward an emotion-aware mouse. Universal Access in the Information Society,
16(2), 365–379. https://doi.org/10.1007/s10209-016-0469-9.
Gil, R., Virgili-Gomá, J., García, R., & Mason, C. (2015). Emotions ontology for collaborative modelling and learning of emotional responses. Computers in Human
Behavior, 51, 610–617.
Goetz, T., Nett, U. E., Martiny, S. E., Hall, N. C., Pekrun, R., Dettmers, S., et al. (2012). Students' emotions during homework: Structures, self-concept antecedents, and
achievement outcomes. Learning and Individual Differences, 22(2), 225–234.
Gross, J. J. (1998). The emerging field of emotion regulation: An integrative review. Review of General Psychology, 2(3), 271.
Handayani, D., Yaacob, H., Rahman, A., Wahab, A., Sediono, W., & Shah, A. (2014). Systematic review of computational modeling of mood and emotion. Paper
presented at the the 5th international conference on information and communication technology for the Muslim world (ICT4M).
Harley, J. M., Bouchet, F., & Azevedo, R. (2013). Aligning and comparing data on emotions experienced during learning with MetaTutor. Paper presented at the artificial
intelligence in education.
Harley, J. M., Bouchet, F., Hussain, M. S., Azevedo, R., & Calvo, R. (2015a). A multi-componential analysis of emotions during complex learning with an intelligent
multi-agent system. Computers in Human Behavior, 48, 615–625.
Harley, J. M., Carter, C. C., Papaionnou, N., Bouchet, F., Landis, R. S., Azevedo, R., et al. (2015b). Examining the predictive relationship between personality and
emotion traits and learners' agent-direct emotions. Paper presented at the artificial intelligence in education.
Heidig, S., Müller, J., & Reichelt, M. (2015). Emotional design in multimedia learning: Differentiation on relevant design features and their effects on emotions and
learning. Computers in Human Behavior, 44, 81–95.
Hussain, M. S., AlZoubi, O., Calvo, R. A., & D'Mello, S. K. (2011). Affect detection from multichannel physiology during learning sessions with AutoTutor. Paper
presented at the international conference on artificial intelligence in education.
Jaques, P. A., Vicari, R., Pesty, S., & Martin, J.-C. (2011). Evaluating a cognitive-based affective student model. Affective Computing and Intelligent Interaction. Springer.
Jiménez, S., Juárez-Ramírez, R., Castillo, V. H., & Ramírez-Noriega, A. (2017). Integrating affective learning into intelligent tutoring systems. Universal Access in the
Information Society. https://doi.org/10.1007/s10209-017-0524-1.
Kaklauskas, A., Kuzminske, A., Zavadskas, E. K., Daniunas, A., Kaklauskas, G., Seniut, M., ... Juozapaitis, A. (2015). Affective tutoring system for built environment
management. Computers & Education, 82, 202–216.


Khalfallah, J., & Slama, J. B. H. (2015). Facial expression recognition for intelligent tutoring systems in remote laboratories platform. Procedia Computer Science, 73,
274–281.
Kitchenham, B. (2004). Procedures for performing systematic reviews (Vol. 33, pp. 1–26). Keele, UK: Keele University.
Kitchenham, B., & Charters, S. (2007). Guidelines for performing systematic literature reviews in software engineering. EBSE Technical Report, Vol. 2. Keele University, 1–65.
Kort, B., Reilly, R., & Picard, R. W. (2001). An affective model of interplay between emotions and learning: Reengineering educational pedagogy-building a learning
companion. Paper presented at the IEEE international conference on advanced learning technologies (ICALT).
Lackéus, M. (2014). An emotion based approach to assessing entrepreneurial education. International Journal of Management in Education, 12(3), 374–396. https://doi.
org/10.1016/j.ijme.2014.06.005.
Lang, A. (2006). Using the limited capacity model of motivated mediated message processing to design effective cancer communication messages. Journal of
Communication, 56(s1), S57–S80.
Lavy, S., & Eshet, R. (2018). Spiral effects of teachers' emotions and emotion regulation strategies: Evidence from a daily diary study. Teaching and Teacher Education,
73, 151–161. https://doi.org/10.1016/j.tate.2018.04.001.
Leony, D., Muñoz-Merino, P. J., Pardo, A., & Kloos, C. D. (2013). Provision of awareness of learners' emotions through visualizations in a computer interaction-based
environment. Expert Systems with Applications, 40(13), 5093–5100.
Li, L., Cheng, L., & Qian, K.-x. (2008). An e-learning system model based on affective computing. Paper presented at the international conference on cyberworlds.
Lin, H.-C. K., Wang, C.-H., Chao, C.-J., & Chien, M.-K. (2012). Employing textual and facial emotion recognition to design an affective tutoring system. Turkish Online
Journal of Educational Technology-TOJET, 11(4), 418–426.
Lin, H.-C. K., Wu, C.-H., & Hsueh, Y.-P. (2014). The influence of using affective tutoring system in accounting remedial instruction on learning performance and
usability. Computers in Human Behavior, 41, 514–522.
Malekzadeh, M., Mustafa, M. B., & Lahsasna, A. (2015). A review of emotion regulation in intelligent tutoring systems. Educational Technology & Society, 18(4),
435–445.
Malekzadeh, M., Salim, S. S., & Mustafa, M. B. (2014). Towards integrating emotion management strategies in intelligent tutoring system used by children. Paper
presented at the international symposium on pervasive computing paradigms for mental health.
Mao, X., & Li, Z. (2010). Agent based affective tutoring systems: A pilot study. Computers & Education, 55(1), 202–208.
Merz, C. J., & Wolf, O. T. (2017). Sex differences in stress effects on emotional learning. Journal of Neuroscience Research, 95(1–2), 93–105.
Moga, H., Sandu, F., Danciu, G., Boboc, R., & Constantinescu, I. (2013). Extended control-value emotional agent based on fuzzy logic approach. Paper presented at the
Roedunet International Conference (RoEduNet), 2013 11th.
Morrish, L., Rickard, N., Chin, T. C., & Vella-Brodrick, D. A. (2018). Emotion regulation in adolescent well-being and positive education. Journal of Happiness Studies,
19(5), 1543–1564.
Muis, K. R., Psaradellis, C., Lajoie, S. P., Di Leo, I., & Chevrier, M. (2015). The role of epistemic emotions in mathematics problem solving. Contemporary Educational
Psychology, 42, 172–185.
Muñoz, K., Mc Kevitt, P., Lunney, T., Noguez, J., & Neri, L. (2011). An emotional student model for game-play adaptation. Entertainment Computing, 2(2), 133–141.
Neophytou, L. (2013). Emotional intelligence and educational reform. Educational Review, 65(2), 140–154.
Olcay, G. A., & Bulu, M. (2016). Is measuring the knowledge creation of universities possible? A review of university rankings. Technological Forecasting and Social Change.
Ortony, A., Clore, G., & Collins, A. (1988). The cognitive structure of emotions. Cambridge: Cambridge University Press.
Pekrun, R. (2006). The control-value theory of achievement emotions: Assumptions, corollaries, and implications for educational research and practice. Educational
Psychology Review, 18(4), 315–341.
Pekrun, R., Cusack, A., Murayama, K., Elliot, A. J., & Thomas, K. (2014). The power of anticipated feedback: Effects on students' achievement goals and achievement
emotions. Learning and Instruction, 29, 115–124.
Pekrun, R., Goetz, T., Frenzel, A. C., Barchfeld, P., & Perry, R. P. (2011). Measuring emotions in students' learning and performance: The Achievement Emotions
Questionnaire (AEQ). Contemporary Educational Psychology, 36(1), 36–48.
Pekrun, R., Goetz, T., Titz, W., & Perry, R. P. (2002). Academic emotions in students' self-regulated learning and achievement: A program of qualitative and quan-
titative research. Educational Psychologist, 37(2), 91–105.
Peterson, E. R., Brown, G. T., & Jun, M. C. (2015). Achievement emotions in higher education: A diary study exploring emotions across an assessment event.
Contemporary Educational Psychology, 42, 82–96.
Picard, R. (1997). Affective computing. Cambridge: The MIT Press.
Picard, R. W., Papert, S., Bender, W., Blumberg, B., Breazeal, C., Cavallo, D., ... Strohecker, C. (2004). Affective learning—a manifesto. BT Technology Journal, 22(4),
253–269.
Plass, J. L., Heidig, S., Hayward, E. O., Homer, B. D., & Um, E. (2014). Emotional design in multimedia learning: Effects of shape and color on affect and learning.
Learning and Instruction, 29, 128–140.
Poria, S., Cambria, E., Bajpai, R., & Hussain, A. (2017). A review of affective computing: From unimodal analysis to multimodal fusion. Information Fusion, 37, 98–125.
Posner, J., Russell, J. A., & Peterson, B. S. (2005). The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and
psychopathology. Development and Psychopathology, 17(03), 715–734.
Qi-rong, C. (2010). Research on intelligent tutoring system based on affective model. Paper presented at the second international conference on multimedia and information
technology (MMIT).
QStopuniversities (2016). QS world university Rankings by subject. Retrieved from https://www.topuniversities.com/subject-rankings/2016/.
Ray, A., & Chakrabarti, A. (2015). Biophysical signal based emotion detection for technology enabled affective learning. Paper presented at the IEEE international
conference on electrical, computer and communication technologies (ICECCT).
Robinson, P. (2014). Modelling emotions in an on-line educational game. Paper presented at the international conference on control, decision and information technologies
(CoDIT).
Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178.
Saadatzi, M. N., Welch, K. C., Pennington, R., & Graham, J. (2013). Towards an affective computing feedback system to benefit underserved individuals: An example
teaching social media skills. Paper presented at the international conference on universal access in human-computer interaction.
Salmeron-Majadas, S., Santos, O. C., & Boticario, J. G. (2014). An evaluation of mouse and keyboard interaction indicators towards non-intrusive and low cost affective
modeling in an educational context. Procedia Computer Science, 35, 691–700.
Sandanayake, T., & Madurapperuma, A. (2013). Affective e-learning model for recognising learner emotions in online learning environment. Paper presented at the
international conference on advances in ICT for emerging regions (ICTer).
Santos, O. C., Salmeron-Majadas, S., & Boticario, J. G. (2013). Emotions detection from math exercises by combining several data sources. Paper presented at the
international conference on artificial intelligence in education.
Scherer, K. R. (1986). Vocal affect expression: A review and a model for future research. Psychological Bulletin, 99(2), 143.
Shen, L., Callaghan, V., & Shen, R. (2008). Affective e-Learning in residential and pervasive computing environments. Information Systems Frontiers, 10(4), 461–472.
Shen, L., Wang, M., & Shen, R. (2009). Affective e-Learning: Using "Emotional" Data to Improve Learning in Pervasive Learning Environment. Educational Technology &
Society, 12(2), 176–189.
Shen, L., Xie, B., & Shen, R. (2014). Enhancing user experience in mobile learning by affective interaction. Paper presented at the international conference on intelligent
environments (IE).
Shute, V. J., D'Mello, S., Baker, R., Cho, K., Bosch, N., Ocumpaugh, J., ... Almeda, V. (2015). Modeling how incoming knowledge, persistence, affective states, and in-
game progress influence student learning from an educational game. Computers & Education, 86, 224–235.
Siu, K. W. M., & Wong, Y. L. (2016). Fostering creativity from an emotional perspective: Do teachers recognise and handle students' emotions? International Journal of
Technology and Design Education, 26(1), 105–121.


Sokolova, M. V., & Fernández-Caballero, A. (2015). A review on the role of color and light in affective computing. Applied Sciences, 5(3), 275–293.
Subramainan, L., Yusoff, M. Z. M., & Mahmoud, M. A. (2015). A classification of emotions study in software agent and robotics applications research. Paper presented at
the international symposium on agents, multi-agent systems and robotics (ISAMSR).
Tao, J., & Tan, T. (2005). Affective computing: A review. Paper presented at the International Conference on Affective computing and intelligent interaction.
Tian, F., Gao, P., Li, L., Zhang, W., Liang, H., Qian, Y., et al. (2014). Recognizing and regulating e-learners’ emotions based on interactive Chinese texts in e-learning
systems. Knowledge-Based Systems, 55, 148–164.
Tj, I., Leister, W., Schulz, T., & Larssen, A. (2015). The role of emotion and enjoyment for QoE—a case study of a science centre installation. Paper presented at the
seventh international workshop on quality of multimedia experience (QoMEX).
Tricco, A. C., Zarin, W., Antony, J., Hutton, B., Moher, D., Sherifali, D., et al. (2016). An international survey and modified Delphi approach revealed numerous rapid
review methods. Journal of Clinical Epidemiology, 70, 61–67.
Urhahne, D. (2015). Teacher behavior as a mediator of the relationship between teacher judgment and students' motivation and emotion. Teaching and Teacher
Education, 45, 73–82.
Vogel-Walcutt, J. J., Fiorella, L., Carper, T., & Schatz, S. (2012). The definition, assessment, and mitigation of state boredom within educational settings: A com-
prehensive review. Educational Psychology Review, 24(1), 89–111.
Wang, H., Huang, H., & Makedon, F. (2014). Emotion detection via discriminant laplacian embedding. Universal Access in the Information Society, 13(1), 23–31. https://
doi.org/10.1007/s10209-013-0312-5.
Webster, J., & Watson, R. T. (2002). Analyzing the past to prepare for the future: Writing a literature review. MIS Quarterly, 26(2), xiii–xxiii.
Whissel, C. (1989). In R. Plutchik, & H. Kellerman (Eds.). The dictionary of affect in language, emotion: Theory, research and experience: Vol. 4, the measurement of emotions.
New York: Academic.
Wingkvist, A., & Ericsson, M. (2013). A survey of research methods and purposes in mobile learning. International Journal of Mobile and Blended Learning, 3(1), 1–17.
Wu, C., Huang, Y., & Hwang, J. P. (2015). Review of affective computing in education/learning: Trends and challenges. British Journal of Educational Technology.
Wu, C.-H., Tzeng, Y.-L., & Huang, Y. M. (2014). Understanding the relationship between physiological signals and digital game-based learning outcome. Journal of
Computers in Education, 1(1), 81–97.
Xu, J. (2018). Emotion regulation in mathematics homework: An empirical study. The Journal of Educational Research, 111(1), 1–11.
Xuejing, G., Zhiliang, W., Siyi, Z., & Wei, W. (2009). Design and implementation of the emotional pedagogical agent. Paper presented at the international conference on
artificial intelligence and computational intelligence, 2009. AICI'09.
Yadegaridehkordi, E., & Iahad, N. A. (2012). Influences of demographic information as moderating factors in adoption of m-learning. International Journal of
Technology Diffusion (IJTD), 3(1), 8–21.
Yang, D., Alsadoon, A., Prasad, P. W. C., Singh, A. K., & Elchouemi, A. (2018). An emotion recognition model based on facial recognition in virtual learning
environment. Procedia Computer Science, 125, 2–10. https://doi.org/10.1016/j.procs.2017.12.003.
