
Affective Engagement to Virtual and Live Lectures

Judita Kasperiuniene1,2(✉), Meet Jariwala1, Egidijus Vaskevicius1, and Saulius Satkauskas1

1 Vytautas Magnus University, Kaunas, Lithuania
judita.kasperiuniene@vdu.lt
2 Aleksandras Stulginskis University, Kaunas, Lithuania

Abstract. Affective engagement with university lectures requires external stimulation. This paper presents an empirical study of changes in student engagement with live and virtual lectures under complex human, picture, video, and body-movement stimuli. Each experiment lasted 30 min and was divided into 5 periods (10-5-5-5-5 min). At the end of each period a different stimulus was presented: a human interrupting the lecture; the instructor presenting slides; video materials; and intensive body movements. Stimuli and study materials for the live and virtual avatar-based lectures were developed following the same experiment-lecture model. The avatar was created and animated with CrazyTalk software. The experimental group consisted of 10 students aged 20–24 (4 female, 6 male). Changes in attention to the lecture materials were measured with three indicators: affective regulation was monitored throughout the lecture with the Muse portable headband; cognitive self-regulation was measured before the lecture with a questionnaire; and behavioral regulation was observed via video recording of the entire lecture. The highest concentration, attentiveness, and active engagement were observed during the avatar-based lecture with the complex human stimulus, whereas in the live lecture all stimuli activated approximately the same response. The image stimulus evoked different reactions: in the live lecture it slightly raised student attentiveness, while in the avatar-based lecture attentiveness dropped. Reactions to the video stimulus in the two experimental groups were the opposite of those to the image stimulus. These results can guide instructors in constructing training materials and implementing additional stimuli that capture students' attention. We recommend mixing video and live lectures and using stimuli that evoke and strengthen active engagement.

Keywords: Affective engagement · Virtual lecture · Biodata collection · Muse brain-sensing headband · Student attention span

1 Introduction

Affective engagement with lectures challenges university students [1–3]. A correlation between lecture attendance, early revision of study materials, and academic achievement has been identified [5]; the impact of virtual lectures on exam performance has been researched [4, 10]; and gamification techniques [6] have been used to raise attention to reference materials and to stimulate online participation and proactivity [7, 8]. Research shows that students' attention to a live instructor and to audio and video materials declines after the first 10–15 min of a lecture [11]. The use of various stimuli to recover students' attention to the lecture materials has gained much importance in the modern university [12]. Animated instructor avatars in virtual learning environments, designed as online manifestations of the teacher's self in virtual worlds, were intended to enhance virtual student-instructor interaction [9, 21, 24] and helped to sustain students' temporal attention and affective engagement with learning texts. Lecture engagement is a multi-dimensional construct composed of behavioral, emotional, and cognitive components [17]. Indicators of lecture engagement fall into four groups: affective (belonging to school and school connectedness), cognitive (self-regulation, relevance of school to future aspirations, goal setting), behavioral (attendance, participation, behavioral incidents), and academic [13].
Affective engagement can be characterized through electroencephalographic monitoring of human brain activity. Alpha-band oscillations (9–13 Hz) reflect the temporal structure of cognitive learning processes, which may be described as 'knowledge-based consciousness'. Alpha activity enables 'semantic orientation' via controlled access to information stored in the learner's complex knowledge system [18]. Researchers have argued [14, 15] that alpha-band power increases over cortical areas responsible for processing potentially distracting information, and that these attention-related sustained increases in alpha power occur prior to the arrival of an anticipated stimulus. Beta waves (12–30 Hz) represent the attentive state and occur during heightened states of awareness [16].
The model of the physiology of emotions [25] holds that emotions are spread along two main axes: arousal and valence. An emotion can be defined by how "positive" or "negative" it feels (valence) and by how much activation it corresponds to (arousal). Although scholars argue that a 2D model of emotions does not describe all emotional states [26], such a representation offers a great deal of simplicity. Emotional mind states such as valence, arousal, and dominance can be detected from alpha- and beta-wave activity [20–22]. The model of learning in 3D virtual learning environments describes engagement as one of the learning benefits [23, 24, 27].
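
To make the mapping from band power to affect concrete, a minimal sketch is given below. It follows common EEG-affect heuristics (beta/alpha ratio for arousal, frontal alpha asymmetry for valence) rather than any algorithm from the cited studies, and the channel names assume the Muse frontal electrodes described in Sect. 2.5:

```python
import numpy as np

def arousal_valence(alpha, beta):
    """Heuristic affect indices from per-channel EEG band powers.

    alpha, beta: dicts of band power per channel; keys 'AF7' (left)
    and 'AF8' (right) assume the Muse frontal electrode layout.
    Arousal is approximated by the beta/alpha ratio, valence by
    frontal alpha asymmetry -- common heuristics, not the method
    used in the studies cited above.
    """
    arousal = (beta['AF7'] + beta['AF8']) / (alpha['AF7'] + alpha['AF8'])
    valence = np.log(alpha['AF8']) - np.log(alpha['AF7'])  # > 0 ~ positive
    return arousal, valence
```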
In this study, students' affective engagement with a live and an avatar-based video lecture was measured. Two hypotheses were raised: (1) students' attention span during a virtual lecture is higher than during a live lecture with the same educational content, and (2) human-based stimuli evoke more active engagement with a virtual lecture than fully artificially created stimuli such as images or videos.

2 Methods

The experiment was organized around a traditional live lecture (auditorium style) as a model of passive learning with the instructor. Common university practice is that a traditional live lecture lasts 45 min; we experimented with a 30 min lecture. Immediately before the lecture, the state of self-control capacity was measured using a questionnaire. The lecture was delivered in English (not the students' native language) to add a longitudinal stimulus lasting through the whole lecture period. The entire lecture was video-recorded and observed later for interpretation. Throughout the lecture, the Muse brain-sensing headband [28] monitored participants' brain activity, which varied from calm states to neutral and active conditions. Since students' attention starts to decrease after the first 10–15 min of a lecture, various stimuli were shown to the participants. We used a 10-5-5-5-5 min model of stimulus appearance. During the first lecture period (A to B in Fig. 1) no stimuli appeared; this period lasted 10 min, twice as long as the following periods. The 10-5-5-5-5 strategy of stimulus appearance was chosen to let participants adapt to the brain-monitoring device and to the lecture in a foreign language. The first stimulus, a human (a late student) entering the class, appeared 10 min after the lecture start and interrupted the instructor's talk (point B in Fig. 1); the interruption lasted approximately 1 min. The second stimulus, the instructor starting to show lecture slides (picture stimulus), appeared 15 min after the lecture start (point C in Fig. 1). Five static slides were shown to the learner one at a time in succession while the instructor explained them; this lecture period lasted approximately 1 min. The third stimulus, the instructor showing and explaining video materials, appeared 20 min after the lecture start and lasted 1 min (point D in Fig. 1). The fourth stimulus, human body movements during the lecture (the instructor making various body and hand movements and facial mimics), appeared 25 min after the lecture start (point E in Fig. 1) and lasted approximately 1 min.

Fig. 1. The model of the experiment lecture
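
For reference, the planned schedule can be written down directly; the following is a descriptive sketch of the experiment-lecture model in Fig. 1, not software used in the study:

```python
# Planned stimulus onsets (minutes from lecture start), per Fig. 1.
STIMULUS_SCHEDULE = [
    (10, "human",    "late student interrupts the instructor"),     # point B
    (15, "picture",  "instructor shows 5 static slides"),           # point C
    (20, "video",    "instructor shows and explains a video"),      # point D
    (25, "movement", "instructor makes body/hand/face movements"),  # point E
]
STIMULUS_DURATION_MIN = 1  # each stimulus lasts approximately 1 min

def stimulus_at(minute):
    """Return the stimulus active at a given lecture minute, if any."""
    for onset, name, _description in STIMULUS_SCHEDULE:
        if onset <= minute < onset + STIMULUS_DURATION_MIN:
            return name
    return None
```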

Two types of lectures were planned using the same experiment-lecture model: first, a live lecture in an auditorium, with the instructor standing in front of the class and delivering the lecture; second, a virtual avatar-based lecture (Fig. 2).

Fig. 2. Types of experiment lectures: (a) live lecture; (b) avatar-based video lecture

For the virtual avatar-based lecture, the instructor's avatar was computer-modeled. The virtual lecture followed the same 10-5-5-5-5 model of lecture stimuli.

2.1 Participants
Research participants were university students, adults aged 20 to 24 (4 female, 6 male). Each participant was randomly assigned to one of the two experimental groups: the live-lecture group or the virtual avatar-based lecture group. Participants' self-regulation was evaluated before the lecture start using a self-regulation questionnaire. During the lecture, concentration and stress levels were monitored using electroencephalography data extracted from the Muse headband.

2.2 Procedure

University students' attention during the lecture was measured using three indicators: affective regulation was monitored during the lecture with the Muse headband; cognitive regulation (self-regulation) was measured with a self-regulation questionnaire; and behavioral regulation was observed via video recording of the experiment participant. The self-regulation questionnaire was adapted from the state self-control capacity scale [18].

2.3 Ethical Considerations

Participants were informed about the experiment procedure and instructed in the use of the Muse headband. Students were introduced to the wearable EEG biodata-collection technique, given an explanation of its influence on human health, and asked to sign a consent form agreeing to participate in the experiment and to provide their biodata for research purposes.

2.4 Design of an Avatar-Based Instructor


CrazyTalk™ animation software from Reallusion, Inc. was used to create and animate the instructor's avatar. We chose the CrazyTalk facial animation tool for its ability to animate facial images from recorded voice and text [19]. CrazyTalk has an auto-motion engine that lets creators drive animations in real time with the intensity of their voice and transform an image into an animated character. The virtual avatar's face that appears in the avatar-based lecture was created using the real instructor's face as a model. A three-step technique was used to create the avatar: first, a digital photo of the human lecturer was taken; second, a model of the avatar was created using 3D modeling software; third, the model was lip-synced in CrazyTalk using the audio file from the recorded lecture.

2.5 Students' Brain Power Tracking Techniques

Many EEG tests require attaching sensors to the head, connected to a computer via wires. Dry EEG electrodes and simple wireless headsets offer partial but comfortable devices that can be worn for extended periods, enabling the collection of data over long time spans [29]. The Muse device is a thin, light headband that was placed across the student's forehead and tucked behind the ears for the whole experiment. It uses seven electroencephalographic (EEG) sensors along the human scalp, and data are collected via four channels (two on the forehead and two behind the ears; the remaining sensors are references). The seven-sensor montage enables estimation of hemispheric asymmetries (with an emphasis on the prefrontal lobe) and thus facilitates discrimination of brain states that depend on asymmetry, such as emotional valence (positive vs. negative emotions). The Muse headband was chosen for this research because of its dry EEG electrode technology (TP9, AF7, AF8, TP10 dry electrodes; a 3-electrode electrical reference at FPz (CMS/DRL)) and its ability to connect to the monitoring system through Bluetooth. The headband uses a minimal number of sensors to remain portable and unobtrusive (Fig. 3).

Fig. 3. (a) Muse electrode locations according to the international 10-20 standard [28]; (b) a student research participant wearing the Muse headband in a classroom
Moreover, Muse provides functions to automatically filter the raw brain waves, including the power-line frequency, which helps increase the signal-to-noise ratio [29]. The device performs robust on-board digital signal processing to filter out noise and compute Fast Fourier Transforms (FFT) in real time [30]. Engagement was measured using a proprietary Muse algorithm. Throughout the experiment, live EEG data were plotted so that the researcher could monitor the biodata-collection process. The frequencies of the recorded signals varied from 1 to 50 Hz. Raw EEG and accelerometer data were collected with the Muse headband; processed brain features, including the typical EEG power bands (delta, theta, alpha, beta, and gamma), the raw power spectrum, and muscle activity, were also measured. The fully processed Muse data yielded a beta/alpha ratio (focus vs. relaxation) that allowed measuring concentration or relaxation during the lecture period. Behavioral engagement was observed using lecture observation and video recording.
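
The same quantities can be approximated offline from raw EEG with standard tools. Below is a minimal sketch using SciPy's Welch estimator, independent of Muse's proprietary on-board processing; the sampling rate is an assumption, and the band edges follow the definitions given in the Introduction:

```python
import numpy as np
from scipy.signal import welch

FS = 220  # assumed Muse raw EEG sampling rate (Hz); check the device specs
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (9, 13),
         "beta": (12, 30), "gamma": (30, 50)}  # edges as defined in Sect. 1

def band_powers(eeg, fs=FS):
    """Absolute and relative (band/total) power for one EEG channel,
    estimated from a Welch power spectral density."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    df = freqs[1] - freqs[0]  # frequency resolution of the estimate
    total = psd.sum() * df
    absolute = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        absolute[name] = psd[mask].sum() * df
    relative = {name: p / total for name, p in absolute.items()}
    return absolute, relative

def focus_ratio(absolute):
    """Beta/alpha ratio: higher values are read as concentration,
    lower values as relaxation."""
    return absolute["beta"] / absolute["alpha"]
```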

3 Data Analysis and Results

For data collection and analysis, the Muse software development kit (SDK) and additional tools coded by the researchers were used. This allowed quantification of beta and alpha waves during each task. An analysis of variance was used to compare concentration, attentiveness, active engagement, distraction, and relaxation levels between the research-participant groups.
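
As an illustration of such a between-group comparison, a one-way analysis of variance can be run with SciPy; the group arrays below are hypothetical placeholders, not the study's data:

```python
from scipy.stats import f_oneway

# Hypothetical per-student engagement scores for one lecture interval;
# illustrative values only, not the measurements reported below.
live_group = [0.41, 0.44, 0.39, 0.46, 0.42]
avatar_group = [0.63, 0.58, 0.70, 0.55, 0.61]

f_stat, p_value = f_oneway(live_group, avatar_group)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```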

The experiment (live lecture) lasted 28 min 00 s. The lecture was divided by the 10-5-5-5-5 min scheme into 5 approximately similar time intervals, and these specific lecture parts (intervals) were analyzed separately. In the live-lecture experiment, interval A lasted 12 min 00 s (0:00 to 12:00); interval B lasted 3 min 35 s (12:00 to 15:35); interval C lasted 4 min 35 s (15:35 to 20:10); interval D lasted 4 min 55 s (20:10 to 25:05); and the final interval E lasted 2 min 55 s (25:05 to 28:00). The avatar-based lecture was made from the live lecture recording with audio, using the instructor's avatar and including the same four stimuli (complex human, images, video, and human face and hand movements) at the same time intervals.
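
Segmenting the recorded signal by these boundaries is straightforward; a sketch assuming the EEG is a one-dimensional NumPy array sampled at a constant rate:

```python
import numpy as np

FS = 220  # assumed raw sampling rate (Hz), as in the earlier sketch

# Interval boundaries (mm:ss) for the live lecture, per the text above.
BOUNDS = {"A": ("0:00", "12:00"), "B": ("12:00", "15:35"),
          "C": ("15:35", "20:10"), "D": ("20:10", "25:05"),
          "E": ("25:05", "28:00")}

def to_seconds(timestamp):
    minutes, seconds = timestamp.split(":")
    return int(minutes) * 60 + int(seconds)

def split_intervals(eeg, fs=FS):
    """Slice a 1-D EEG recording into the five lecture intervals."""
    return {name: eeg[to_seconds(start) * fs:to_seconds(end) * fs]
            for name, (start, end) in BOUNDS.items()}
```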
The average intensity of alpha and beta waves across the intervals of different stimulus appearance showed more prominent neural activity in students who listened to the virtual avatar-based lecture than in those who participated in the live lecture.
Students felt more distracted and relaxed during the lecture periods without stimuli; the stimuli activated the participants. The most evocative was the first stimulus (point B in Fig. 4), for two reasons: it was the first stimulus the participants encountered, and it was the only complex one, combining a human face, body movement (visual stimulus), and voice (audio stimulus). Students' reactions to the image stimulus (point C in Fig. 4) and the video stimulus (point D in Fig. 4) were approximately the same, and weaker than the reactions to the complex human stimulus (point B in Fig. 4) or the instructor's body movement (point E in Fig. 4).

Fig. 4. Relative alpha wave power (Bel) in avatar-based and live lectures during the experimental time intervals.

Students were more concentrated on and more attentive to the avatar-based lecture than to the live instructor's talk in the auditorium. Students watching the avatar-based lecture responded more prominently to the stimuli than those in the live lecture (Fig. 5).

Fig. 5. Relative beta wave power (Bel) in avatar-based and live lectures during the experimental time intervals.

The highest concentration, attentiveness, and active engagement were observed during the avatar-based lecture with the complex human stimulus, while in the live lecture all stimuli activated approximately the same student responses (point B in Fig. 5). The image stimulus activated different reactions (point C in Fig. 5): in the live lecture it slightly raised student attentiveness (beta change from 0.411 to 0.418), while in the avatar-based lecture the observed response was a decrease (beta change from 0.702 to 0.544). Reactions to the video stimulus were the opposite of those to the image stimulus (point D in Fig. 5): in the avatar-based lecture the video stimulus called forth more active engagement (beta change from 0.554 to 0.630), while in the live lecture it suppressed engagement (beta change from 0.435 to 0.410). The conditionally highest lecture engagement was observed in the live lecture with the human body-movement stimulus (point E in Fig. 5); a similar stimulus appearing in the avatar-based lecture raised a conditionally low response.

Measurements over the whole lecture showed that the virtual avatar-based lecture activated relatively greater levels of concentration, attentiveness, and active engagement than the live lecture.
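
A Fig. 4/Fig. 5-style summary can be reproduced from per-interval relative powers; a hypothetical matplotlib sketch with placeholder values, not the study's measurements:

```python
import matplotlib.pyplot as plt

intervals = ["A", "B", "C", "D", "E"]
# Placeholder relative beta power values (Bel); illustrative only.
live = [0.40, 0.44, 0.42, 0.41, 0.47]
avatar = [0.55, 0.70, 0.54, 0.63, 0.57]

plt.plot(intervals, live, marker="o", label="live lecture")
plt.plot(intervals, avatar, marker="s", label="avatar-based lecture")
plt.xlabel("lecture interval")
plt.ylabel("relative beta wave power (Bel)")
plt.legend()
plt.tight_layout()
plt.show()
```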

4 Implications

Active student engagement with university lectures challenges instructors and professors. This research showed that virtual lecture forms such as avatar-based lectures activate learner engagement with the lecture materials. The hypothesis that students' attention span during an avatar-based video lecture is conditionally higher than during a live lecture with the same audio materials was supported by electroencephalographic monitoring of alpha and beta brain activity. Experimenting with the influence of different stimulus types on learner concentration, attentiveness, and engagement with the lecture showed that human-based, artificially created stimuli evoke more active engagement with a virtual lecture than image or video stimuli. In a live lecture, human stimuli evoke less student attentiveness than image or video stimuli.
Similar tendencies in relative alpha wave power were observed in the live and avatar-based lectures for the human interruption of the instructor's talk and for the picture and video stimuli. Live and virtual body movement evoked opposite reactions. University instructors frequently move while lecturing live, yet the experiment results showed that the instructor's movements in the live-lecture environment drew weaker student responses than similar stimuli in the virtual lecture. Relative beta wave power indicated similar student reactions in the live and virtual lectures only for the first stimulus, the interruption of the instructor's monologue. Pictures, video, and body movements evoked inverse student reactions in the live and virtual lectures: the picture stimulus marginally raised relative beta wave power in the live lecture and deflated it in the avatar-based lecture; the video stimulus deflated it in the live lecture and activated learners in the video lecture; and avatar movements reduced students' activation.
These results can inform how university teachers construct training materials. We recommend mixing video and live lectures and using various stimuli to evoke and strengthen learners' concentration and active engagement with the learning materials.

5 Study Limitations and Future Research

Our research has some limitations. The live lecture was delivered to only one student at a time because of the biodata-collection technique and the technical equipment. This allowed us to avoid additional unexpected stimuli and to construct a controlled environment, but it may limit the generality of our results.

To deepen the understanding of students' affective engagement during virtual and live lectures, future research may focus on a larger experimental group and extended EEG measurements. The experiment can be repeated with another portable EEG device (e.g., Emotiv EPOC) to seek additional data validity.

References

1. Traphagan, T., Kucsera, J.V., Kishi, K.: Impact of class lecture webcasting on attendance and
learning. Educ. Technol. Res. Dev. 58(1), 19–37 (2010)
2. Bati, A.H., Mandiracioglu, A., Orgun, F., Govsa, F.: Why do students miss lectures? A study
of lecture attendance amongst students of health science. Nurse Educ. Today 33(6), 596–601
(2013)
3. Yeung, A., Raju, S., Sharma, M.D.: Investigating student preferences of online lecture
recordings and lecture attendance in a large first year psychology course. In: Proceedings of
The Australian Conference on Science and Mathematics Education (formerly UniServe
Science Conference) (2016)

4. Bos, N., Groeneveld, C., Bruggen, J., Brand-Gruwel, S.: The use of recorded lectures in
education and the impact on lecture attendance and exam performance. Br. J. Educ. Technol.
41(2), 271–286 (2015)
5. Abdulghani, H.M., Al-Drees, A.A., Khalil, M.S., Ahmad, F., Ponnamperuma, G.G., Amin,
Z.: What factors determine academic achievement in high achieving undergraduate medical
students? A qualitative study. Med. Teach. 36(S1), S43–S48 (2014)
6. Barata, G., Gama, S., Jorge, J., Gonçalves, D.: Improving participation and learning with
gamification. In: Proceedings of the First International Conference on Gameful Design,
Research, and Applications, pp. 10–17. ACM (2013)
7. Clarke, J.: Augmented reality, multimodal literacy and mobile technology: An experiment in
teacher engagement. In: QScience Proceedings (2013)
8. Wu, H.K., Lee, S.W.Y., Chang, H.Y., Liang, J.C.: Current status, opportunities and challenges
of augmented reality in education. Comput. Educ. 62, 41–49 (2013)
9. Mat-jizat, J.E., Osman, J., Yahaya, R., Samsudin, N.: The use of augmented reality (AR)
among tertiary level students: perception and experience. Aust. J. Sustain. Bus. Soc. 2(1), 42–
49 (2016)
10. O’Callaghan, F.V., Neumann, D.L., Jones, L., Creed, P.A.: The use of lecture recordings in
higher education: A review of institutional, student, and lecturer issues. Educ. Inf. Technol.
1–17 (2015)
11. Wilson, K., Korn, J.H.: Attention during lectures: Beyond ten minutes. Teach. Psychol. 34(2),
85–89 (2007)
12. Farley, J., Risko, E.F., Kingstone, A.: Everyday attention and lecture retention: the effects of
time, fidgeting, and mind wandering. Front. Psychol. 4(619), 1–9 (2013)
13. Park, S., Holloway, S.D., Arendtsz, A., Bempechat, J., Li, J.: What makes students engaged
in learning? A time-use study of within-and between-individual predictors of emotional
engagement in low-performing high schools. J. Youth Adolesc. 41(3), 390–401 (2012)
14. Klimesch, W.: Alpha-band oscillations, attention, and controlled access to stored information.
Trends Cogn. Sci. 16(12), 606–617 (2012)
15. Foxe, J.J., Snyder, A.C.: The role of alpha-band brain oscillations as a sensory suppression
mechanism during selective attention. Front. Psychol. 2(154), 1–13 (2011)
16. Desai, R., Tailor, A., Bhatt, T.: Effects of yoga on brain waves and structural activation: A
review. Complement. Ther. Clin. Pract. 21(2), 112–118 (2015)
17. Wang, M.T., Eccles, J.S.: Adolescent behavioral, emotional, and cognitive engagement
trajectories in school and their differential relations to educational success. J. Res. Adolesc.
22(1), 31–39 (2012)
18. Christian, M.S., Ellis, A.P.: Examining the effects of sleep deprivation on workplace
deviance: a self-regulatory perspective. Acad. Manage. J. 54(5), 913–934 (2011)
19. Gurvitch, R., Lund, J.: Animated video clips: Learning in the current generation. J. Physic.
Educ. Recreation Dance 85(5), 8–17 (2014)
20. Ramirez, R., Vamvakousis, Z.: Detecting emotion from EEG signals using the emotive epoc
device. In: Zanzotto, F.M., Tsumoto, S., Taatgen, N., Yao, Y. (eds.) BI 2012. LNCS, vol.
7670, pp. 175–184. Springer, Heidelberg (2012)
21. Liu, Y., Sourina, O., Nguyen, M.K.: Real-time EEG-based human emotion recognition and
visualization. In: 2010 International Conference on Cyberworlds (CW), pp. 262–269. IEEE,
October 2010
22. Liao, L.D., Chen, C.Y., Wang, I.J., Chen, S.F., Li, S.Y., Chen, B.W., Lin, C.T.: Gaming
control using a wearable and wireless EEG-based brain-computer interface device with novel
dry foam-based sensors. J. Neuroengineering Rehabil. 9(1), 1–11 (2012)

23. Merchant, Z., Goetz, E.T., Cifuentes, L., Keeney-Kennicutt, W., Davis, T.J.: Effectiveness
of virtual reality-based instruction on students’ learning outcomes in K-12 and higher
education: A meta-analysis. Comput. Educ. 70, 29–40 (2014)
24. Dalgarno, B., Lee, M.J.: What are the learning affordances of 3-D virtual environments? Br.
J. Educ. Technol. 41(1), 10–32 (2010)
25. Lane, R.D., Chua, P.M., Dolan, R.J.: Common effects of emotional valence, arousal and
attention on neural activation during visual processing of pictures. Neuropsychologia 37(9),
989–997 (1999)
26. Fontaine, J.R., Scherer, K.R., Roesch, E.B., Ellsworth, P.C.: The world of emotions is not
two-dimensional. Psychol. Sci. 18(12), 1050–1057 (2007)
27. Mensia Technologies. http://www.mensiatech.com/emotions-mensia/. Accessed 7 June 2016
28. Muse™ headband development and technical specifications. http://developer.choosemuse.com/. Accessed 20 June 2016
29. Li, Z., Xu, J., Zhu, T.: Prediction of Brain States of Concentration and Relaxation in Real
Time with Portable Electroencephalographs (2015). arXiv preprint arXiv:1509.07642
30. Karydis, T., Aguiar, F., Foster, S.L., Mershin, A.: Self-calibrating protocols enhance wearable
EEG diagnostics and consumer applications. In: Proceedings of the 8th ACM International
Conference on Pervasive Technologies Related to Assistive Environments, p. 96. ACM, July
2015
