The prevalence of the Internet in business and in everyday life has significant
implications for social science research. One implication is the opportunity to
collect data using online surveys. While online surveys grow in importance
for data collection, there is little research on how best to utilize the online
medium to support it. One opportunity in the online setting is to collect not
only respondent answers, but also a log of the respondents' process of
providing those answers. Logging is utilized in several other situations (e.g.
to investigate use qualities of e-commerce web sites), but has not yet been
explored as a means to evaluate and improve survey instruments (online
questionnaires). In this thesis, we conduct design science research in the
context of eHealth research. We articulate the innovative concept of
respondent behavior logging (RBL), consisting of (0) the emergence of RBL,
(i) static and dynamic models to capture and conceptualize respondent
behavior in the context of online surveys, (ii) visualization techniques for
collected respondent behavior data, (iii) constructs for measurement of
respondent behavior, and (iv) a process model for survey instrument
evaluation and improvement based on the respondent behavior logging
concepts. We refer to 0–iv as the RBL framework, which is evaluated through
(i) a focus group evaluation, (ii) an experimental evaluation, and (iii) a
comparative study based on data collected in the eHealth research context,
contrasted with known issues in commonly used survey instruments. Based
on the articulation and evaluation of the RBL framework, we put forward an
informed argument about the usefulness of the framework to support online
data collection.
Acknowledgements
Contents
Acknowledgements ....................................................................................... iii
Definitions ...................................................................................................... x
Introduction ................................................................................................... 13
Research Setting ........................................................................................... 20
Uppsala University Psychosocial Care Programme (U-CARE) .............. 20
Information systems research in U-CARE ............................................... 30
Research Topic Emergence in the U-CARE setting................................. 35
Skip Logic............................................................................................ 36
Research Process Automation in Randomized Controlled Trials ........ 36
Environment Log ................................................................................. 37
Respondent Behavior Logging ............................................................ 39
Theoretical perspective ............................................................................ 43
A pragmatist interest in human action ................................................. 43
Characterization of the online medium................................................ 44
Summary: Implications for research .................................................... 46
Research Approach ....................................................................................... 48
A Staged Design Science Research Process............................................. 48
Stage 0: Emergence of RBL ..................................................................... 51
Stage I: Conceptual Design ...................................................................... 51
Stage II: RBL Visualizations .................................................................... 52
Stage III: RBL Statistical Measures ......................................................... 52
Stage IV: Formalization of Learning........................................................ 53
Rationale for a DSR approach .................................................................. 54
Application of DSR guidelines................................................................. 55
Guideline 1: Design as an Artifact....................................................... 56
Guideline 2: Problem Relevance ......................................................... 56
Guideline 3: Design Evaluation ........................................................... 56
Guideline 4: Research Contribution .................................................... 57
Guideline 5: Research Rigor................................................................ 57
Guideline 6: Design as a Search Process ............................................. 57
Guideline 7: Communication of Research ........................................... 58
Literature Review Strategy....................................................................... 58
Identify the purpose ............................................................................. 58
Draft protocol ...................................................................................... 58
Apply practical screen ......................................................................... 59
Search for literature ............................................................................. 59
Extract data .......................................................................................... 59
Appraise quality................................................................................... 59
Synthesize studies ................................................................................ 59
Write the review .................................................................................. 60
DSR Knowledge types ............................................................................. 60
DSR Knowledge Contribution framework ............................................... 60
DSR Research Contributions Types ......................................................... 63
Knowledge Base ........................................................................................... 64
Existing tools for online surveys .............................................................. 65
Previous IS Research on User Logging .................................................... 65
Search strategy ..................................................................................... 66
Discussion of Findings ........................................................................ 66
Relevance and Implications ................................................................. 72
Previous HCI research on User Logging .................................................. 73
Relevance and Implications ................................................................. 76
Previous E-Health research on User Logging .......................................... 77
Relevance and Implications ................................................................. 79
Implications for the design of RBL .......................................................... 80
A Framework for Respondent Behavior Logging......................................... 82
Dynamic model of respondent behavior .................................................. 82
Static model of respondent behavior ........................................................ 83
Visualization Techniques for Respondent Behavior ................................ 84
RBL measurement constructs ................................................................... 86
A process model for questionnaire evaluation ......................................... 86
Evaluation ..................................................................................................... 88
Focus group evaluation ............................................................................ 90
Rationale .............................................................................................. 90
Evaluation Overview ........................................................................... 90
Data Collection .................................................................................... 92
Data Analysis....................................................................................... 93
Lessons Learned .................................................................................. 97
Experimental evaluation ........................................................................... 97
Rationale .............................................................................................. 97
Evaluation Overview ........................................................................... 97
Data Collection .................................................................................. 100
Data Analysis..................................................................................... 101
Lessons Learned ................................................................................ 102
Discussion ................................................................................................... 103
RQ1: How can the characteristics of the online medium be used to
improve the efficacy of online surveys? ................................................. 103
RQ2: How can we conceptualize respondent behavior and logging of
such behavior in a meaningful manner in the context of online
surveys? .................................................................................................. 105
RQ3: What qualities should guide the design of respondent behavior
logging software to support researchers in designing and executing
online surveys? ....................................................................................... 107
Functionality ...................................................................................... 107
Performance & Applicability ............................................................. 108
System Integrity & Reliability ........................................................... 109
Functionality (Opportunity for better designing)............................... 109
Uniqueness......................................................................................... 110
The contribution to Knowledge Base ................................................ 110
Conclusions and Research Outlook ............................................................ 112
Research Outlook ................................................................................... 113
References ................................................................................................... 114
Appendix A – Experimental questionnaire design ..................................... 121
Questionnaire 1 ...................................................................................... 124
Questionnaire 2 ...................................................................................... 130
Questionnaire 3 ...................................................................................... 135
Appendix B – Experimental Questionnaire Analysis ................................... 141
Abbreviations
Definitions
Tables
Figures
Introduction
The use of the Internet in the general world population has increased
significantly over the years: from 0.4 percent in 1995 (Internet Growth
Statistics IDC, 1995) to 50.1 percent in 2016 (Internet Growth Statistics,
2016), showing continuous, steady growth. This has prompted changes in the
way research is conducted in different fields (Katz, 2002). Surveys have long
been one of the basic methods used to collect data at larger scale. Now, with
the use of the Internet, data collection methods such as surveys have shifted
towards the online medium. Online surveys are easier to store, retrieve and
analyze (Murthy, 2008). At the same time, they are significantly cheaper than
mailing questionnaires to thousands of respondents. As with any new
technology, it usually takes time for an innovation like online surveys to
become part of the practice of the scientific community. In the past, when
tools like tape recorders came to market, those who used them for data
collection were criticized by their peers, since the rigor of the method was
questioned (Murthy, 2008). Online questionnaires are effective but also have
their fair share of challenges, including not eliciting the desired information
from respondents (Bulmer, 2004). For example, Bulmer states that online
survey questions are often left unanswered by respondents because of their
poor structure (Bulmer, 2004).
With the rise of online research, studies have been conducted to examine
the differences between online and paper methods of data collection (Ward,
Clark, & Zabriskie, 2012). The benefits of the online medium have greatly
influenced the way research is carried out. The Internet today reaches a wider
population than ever, making online surveys quick to administer even across
distant geographical locations; this remains a challenge for paper data
collection methods (Ward, Clark, & Zabriskie, 2012). Paper data collection
can also involve errors in database entry, data processing, and the evaluation
of results. To address these errors, surveys are now programmed to capture
data directly into statistical packages, which greatly reduces the number of
errors that occur with paper data collection (Ward, Clark, & Zabriskie, 2012).
Cost is another critical element of data collection. Online surveys can collect
data from a wide group of people without the large expense incurred by paper
data collection: the online method avoids traveling costs, printing costs, and
data processing costs (Ward, Clark, & Zabriskie, 2012).
In this regard, with the increase in the use of online surveys as the preferred
data gathering method, there is also a need to explore the characteristics of the
online medium to their fullest extent. For instance, locating participants may
be useful to ensure that the information derived from survey responses does
not reflect the perceptions of only one area. It is also possible to design online
surveys using varying question formats, showing relevant questions based on
participant responses, presenting the questionnaire in different formats
depending on the device being used, and so on. All of this can be achieved in
the online medium to attract more responses from participants and to help
researchers ensure that the survey questionnaire is presented in a format that
stays appealing to respondents.
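To make the idea of showing relevant questions based on earlier responses concrete, the sketch below expresses such branching as a rule table mapping (question, answer) pairs to the next question to show. The question identifiers and rule structure are purely hypothetical and not drawn from any particular survey tool.

```python
# Minimal, hypothetical skip-logic sketch: the next question shown
# depends on the answer just given; otherwise the default order applies.
def next_question(current_id, answer, rules, default_order):
    """Return the id of the next question to show, or None at the end."""
    target = rules.get((current_id, answer))
    if target is not None:
        return target  # a skip rule fired
    idx = default_order.index(current_id)
    return default_order[idx + 1] if idx + 1 < len(default_order) else None

# Answering "no" to q1 makes q2 irrelevant, so the survey jumps to q3.
rules = {("q1", "no"): "q3"}
order = ["q1", "q2", "q3"]
```

Here a "no" on q1 routes straight to q3, while any other answer falls through to q2 in the default order.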
The online medium can also be useful for surveys through its time tracking
characteristics, as it gives an overview of the time taken by a respondent to
consider a question before responding to it. Through time tracking, one can
gauge the difficulty of a question on the basis of the time the respondent takes
to select an answer. It also makes it possible to spot responses that are given
immediately, which indicates respondents who answer the survey without
reading and analyzing the questions. Hence, through time tracking,
researchers are in a position to evaluate the effectiveness of the questionnaire.
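As a sketch of how such timing data could be used, the snippet below derives per-question response times from hypothetical (question, shown_at, answered_at) log entries and flags answers given faster than a chosen threshold. The event format and the two-second cutoff are illustrative assumptions, not features of any existing survey platform.

```python
# Hypothetical timing analysis: compute how long each question was on
# screen before it was answered, and flag suspiciously fast answers.
def response_times(events):
    """events: iterable of (question_id, shown_at, answered_at) in seconds."""
    return {q: answered - shown for q, shown, answered in events}

def flag_speeders(times, threshold=2.0):
    """Return question ids answered faster than `threshold` seconds."""
    return [q for q, t in times.items() if t < threshold]

events = [("q1", 0.0, 14.5), ("q2", 15.0, 15.8), ("q3", 16.0, 24.0)]
times = response_times(events)
fast = flag_speeders(times)  # q2 was answered in under a second
```

An immediate answer like q2's suggests the question was not read, whereas a long response time may hint at a difficult or confusing question.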
Given the opportunities that can be explored in designing online surveys,
we articulate the first research question as follows:
RQ1: How can the characteristics of the online medium be used to improve
the efficacy of online surveys?
As mentioned above, the behavior of respondents needs to be considered in
order to gather information that is useful for drawing conclusions from a
survey. Researchers therefore need guidance to design questionnaires that
successfully fulfill the purpose of the survey. Researchers work to ensure that
the data and information collected through the survey method are valuable,
lie within the domain of interest, are collected from the intended users, and
comply with all security constraints.
However, supporting the design of online surveys poses challenges.
Different approaches are followed to design and develop surveys for different
purposes, such as commercial surveys, research surveys, and so on. These
different survey designs are based on different considerations, such as the
audience from which the data is to be collected, the modes and time slots used
to gather the information, and the purpose of collecting it ("Research
Methods: Qualitative Research and Quantitative Research", 2017). Software
tools developed for academic research purposes differ from commercial ones:
research surveys are generally governed by different goals and ethical
considerations. In academic research settings, survey evaluation techniques
are highly context-dependent, which often restricts researchers from relying
on commercially available solutions or techniques.
Surveys conducted over the online medium are dependent on technology.
The number of respondents may fall short of the ideal sample population
because of lack of access to the technology, or because poor usability prevents
respondents from entering or altering answers. It may also mean that
researchers have little control over how answers are filled in (Andrews,
Nonnecke, & Preece, 2003). For instance, a mobile or tablet user may not be
able to access an online survey properly due to improper resolutions,
excessive memory consumption, poor layout, and so on.
The aim of this research is to move beyond mere descriptive and
explanatory knowledge into prescriptive knowledge. This will contribute to
the effectiveness of online surveys based on the behavioral science notion of
"what is true" and the design science notion of "what is effective" or "what is
useful", i.e. a pragmatist view on concepts. Online surveys have greater scope
to be effective and efficient when the concepts of design science and
behavioral science are wisely applied in their development. Therefore, this
research seeks to answer the following research question:
RQ3: What qualities should guide the design of respondent behavior log-
ging software to support researchers in designing and executing online sur-
veys?
This research seeks to implement a self-evaluating online survey mechanism
based on respondent behavior logging in the context of E-Health. One such
E-Health program is U-CARE (see chapter 2), a multidisciplinary research
program with the overall objective of promoting psychosocial care to patients
via the Internet. In this context, this research takes a Design Science Research
(DSR) approach (Gregor & Hevner, 2013; Hevner, March, Park, & Ram,
2004) (see chapter 3) to generate a viable solution for the online research
process in U-CARE, as well as to offer a new artifact that can be used for data
gathering. Given this context, this research aims to use the characteristics of
the online medium to offer a self-evaluating survey mechanism based on
respondents' activities while answering surveys online. Implementing such a
survey evaluation process promises potential benefits to healthcare
stakeholders in U-CARE: healthcare providers and researchers can thereby
get a better understanding of the population of interest and of how that
population understands the designed questionnaires. For example, if a high
percentage of people from the population of interest frequently change their
replies to a given question, then healthcare providers may conclude that many
of these people did not fully understand the question. Such logging activities
will provide a better understanding of the participants taking the
questionnaires and of how they interpret the questions asked. As such, survey
logging analysis stands to benefit the healthcare discipline with an effective
evaluation process that may shed light on a target population's understanding
of a given questionnaire. The idea is that, by designing an evaluative platform
for online surveys, researchers in U-CARE can feel more confident making
decisions based on information about respondents' logged behavior. Medical
practitioners and therapists can therefore get a better understanding of the
participants and devise effective intervention approaches (commonly known
as "Internet based Cognitive Behavior Therapy" or ICBT).
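One way the answer-changing behavior described above could be quantified is sketched below: given a chronological log of answer events, count how often each question's answer was revised after first being set. The log format and names are hypothetical, intended only to illustrate the kind of analysis the text describes, not an actual U-CARE data structure.

```python
# Hypothetical revision analysis: count how often respondents change
# their answer to each question; a high revision rate may hint that a
# question is hard to understand.
from collections import defaultdict

def revision_counts(log):
    """log: chronological (respondent, question, value) answer events."""
    last = {}                      # (respondent, question) -> last value seen
    changes = defaultdict(int)
    for respondent, question, value in log:
        key = (respondent, question)
        if key in last and last[key] != value:
            changes[question] += 1  # answer was revised
        last[key] = value
    return dict(changes)

log = [("r1", "q1", "yes"), ("r1", "q1", "no"),
       ("r2", "q1", "no"), ("r2", "q2", "3"), ("r2", "q2", "4")]
counts = revision_counts(log)  # → {'q1': 1, 'q2': 1}
```

Dividing such counts by the number of respondents who answered each question would give the percentage of revisers mentioned in the text.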
Healthcare intervention relies greatly on the accuracy of the information
provided prior to the implementation of a given intervention approach. To
achieve this accuracy, all participants involved in the data collection process
need to provide precise information related to the problem that needs
intervention. U-CARE provides various stakeholders, including healthcare
researchers, with a platform for collecting and analyzing information from a
population of interest. In turn, this information is used to make the necessary
decisions about the intervention being designed. However, there is always a
chance that the information provided by the population of interest is faulty.
Therefore, a proper analysis and evaluation process for the information
provided to healthcare stakeholders is needed; this way, decisions made from
this information can be expected to be reliable and accurate.
Most importantly, evaluating these effects may support the evaluation of
healthcare quality, which is the primary objective of the health facilities
organized within the U-CARE research program. Knowing how such
information impacts healthcare may therefore lead to new techniques for
improving the quality of the healthcare being offered.
While numerous survey vendors have provided a wide variety of clients
with survey platforms for designing surveys and collecting survey data, there
has been little to no emphasis on the importance of evaluating surveys, survey
platforms, and survey responses based on user logging activity.
Implementation of the logging analysis approach therefore provides an
opportunity for Information Systems (IS) researchers to build and critically
evaluate a survey mechanism in the E-Health context. For researchers to
accomplish this, however, it is imperative that the implementation of
respondent logging evaluation be given close attention. By so doing, the
collection, analysis and evaluation of online surveys can be improved for
future research and studies.
This research is structured following the Design Science Research (DSR)
guidelines to provide step-by-step insight into the research and the evaluation
process. For this thesis, the research structure is as presented below.
Chapter 2 takes a closer look at the U-CARE online research program and
how the research topic emerged. The chapter also sheds light on the
emergence of the design artifact (Respondent Behavior Logging, RBL), as
well as the theoretical perspective of a pragmatist interest in human action
underlying the use of RBL within the immediate research environment.
Chapter 3 breaks the research artifact RBL down into stages 0–iv. It closely
analyzes the literature revolving around the implementation of RBL in
Information Systems (IS). A justification for the use of the Design Science
Research (DSR) structure is also provided. Additionally, this chapter provides
insight into the intended impact of the entire research on the disciplines of
interest, focusing mostly on RBL's implementation in the IS discipline. By so
doing, supportive information is presented to justify the course of this research
and its expected contribution to IS.
Chapter 4 compiles literature research on how user behavior in the use of
IT artifacts is conceptualized in general, and identifies evaluation techniques
for IT artifacts that draw on user behavior data. This chapter investigates
fields that are critical for this research: Information Systems (IS), Human-
Computer Interaction (HCI) and E-Health. It analyzes the impact of this
literature on RBL. The chapter also points out specific areas that past research
has failed to focus on. By pointing out where the knowledge gap is, this
chapter helps to create a basis for the design artifact proposed in this research.
Chapter 5 closely analyzes RBL and what it consists of. The chapter breaks
RBL down into five specific DSR artifacts: a dynamic model of respondent
behavior, a static model of respondent behavior, visualization techniques,
measurement constructs, and a model for questionnaire evaluation and
improvement. These define RBL in a deeper sense and help to clarify what
implementing RBL in information systems may entail.
Chapter 6 closely analyzes the evaluation process intended for the RBL
implementation in U-CARE. The chapter defines FEDS (Framework for
Evaluation in Design Science Research) and explains the parameters within
which it can be used to evaluate the implementation process as well as future
research results. The chapter closely defines the evaluation process, the
outcomes of the evaluation process, and an overall analysis of a focus group
and an experimental evaluation (both of which are presented in the chapter).
Chapter 7 addresses the research questions and answers those questions in
an orderly manner.
Chapter 8 wraps up the research, pointing closely to the knowledge gaps
that remain in this topic and discussing what the research presented will mean
for future studies in this discipline.
Research Setting
Grönqvist, H., Olsson, E. M. G., Johansson, B., Held, C., Sjöström, J., Norberg, A. L. & von Essen, L. (2017). Fifteen Challenges in Estab-
lishing a Multidisciplinary Research Program on eHealth Research in a University Setting: A Case Study. Journal of Medical Internet Re-
search, 19(5), e173
Figure 1. U-CARE organizational chart.
Table. Research studies using the U-CARE software system

…intervention for women with negative or traumatic experiences related to
childbirth or abortion. This also involved the women's partners.

AIDA-I & AIDA-II (associated study, 2015-2016, 185 and 100 participants):
A project aimed at experimentally investigating how people are reinforced in
psychotherapy and how this affects adherence and dropout. A further aim was
to examine whether live and internet-based interventions differed in these
aspects.

ENGAGE (1000g) (associated study, 2018-2019, 170 participants): An
observational study aimed at getting an overall view of the physical and
mental health in young adults born with an extremely low birth weight.
Table 3 illustrates the functionalities and activities of the U-CARE software
system, stated per user role. First, the researcher user role designs and creates
research studies, tailors study-specific attributes, manages research studies,
schedules events, designs treatment material for psycho-education, designs
intervention therapies, manages the consent of research participants, adds and
sends login details for incoming participants, uses decision support, creates
FAQs and chooses staff. Second, the therapist user role designs
questionnaires, designs therapy content, designs intervention therapy, follows
treatments according to the study protocol, approves ICBT surveys, responds
to homework tasks, communicates with study participants, moderates chat
and forum, defines flag words, and monitors and answers FAQs. Third, the
participant user role provides consent online, fills in questionnaires, chooses
a nickname, uploads a picture, completes ICBT modules, accesses homework
and self-help, communicates with therapists, and fills in a personal diary.
Fourth, the health care professional user role approves participants'
involvement in a study, adds participants, designs therapy content, moderates
chat and forum, and monitors and answers FAQs. Fifth, the registrator user
role adds participants to a study and fills in particular questionnaires. Sixth,
the U-CARE support user role adds support concerns, views activity
snapshots, views a study's participants, and resets participants' flow. Seventh,
the coordinator user role coordinates research studies and groups, monitors
study events and audits privacy breaches. Eighth, any user can translate texts
in a study to specific languages.
Use decision support (i.e., a dashboard which provides an overview of the
current state of activities).
Create FAQs.
Choose staff to be shown on the About us page.

Therapist [i.e., psychologist]:
Design questionnaires (using the generic questionnaire design tool).
Design treatment content for psycho-education (i.e., audio, video, PDF, text,
images, et cetera) using the library.
Design intervention treatment (e.g., ICBT modules, which include treatment
contents, questionnaires and homework).
Follow treatments in accordance with the study protocol.
Approve ICBT modules.
Respond to homework tasks.
Communicate with research participants (e.g., using IM).
Use decision support (using the patient indicators framework).
Moderate forum and chat.
Define flag words (e.g., suicide) that, when used in chat or forum, alert
moderators that they need to pay attention to a particular conversation.
Answer and monitor FAQ.

[Research] Participant:
*Provide consent online
*Fill in questionnaires
*Choose nickname
*Upload picture [if they want to upload]
+Go through treatment by completing a list of ICBT modules
+Access self-help and homework
+Communicate with therapists (e.g., using IM)
+Communicate with peers through chat and forum
+Choose to be visible or not in chat and forum
+Write personal diary
+Ask questions to health care professionals

[* any participant in reference, control or treatment group]
[+ functions for treatment group only]
Health care professional:
Add research participants (at various health care sites across Sweden).
Design treatment content for psycho-education (i.e., audio, video, PDF, text,
images, et cetera) using the library.
Moderate forum and chat.
Answer and monitor FAQ.

Registrator:
Add research participants.
Fill in participant-specific questionnaires, often regarding clinical data.
[a Registrator may be assigned to a specific health care site and can only
register data for participants from their site]
Figure 2. The U-CARE software system.
The primary idea of this research program is to study how Internet-based
interventions can provide psychosocial support to patients with various
somatic problems. An integral part of the planned research was data collection
via online questionnaires, and delivery of online content. Content includes,
for example, a library with videos and other resources, a discussion forum,
and software features supporting ICBT (Internet based Cognitive Behavior
Therapy).
The overall procedure is as follows: U-CARE delivers online studies based on sequential observation points, where participants go through screening questionnaires and are placed into different groups2 depending on the severity of their scores. These observation points consist of different questionnaires that are filled in by participants and health staff at different stages of a study. Through this process, therapists, psychologists, and researchers collect survey data at the different observation points and provide online materials according to the needs of participants as part of the treatment process. Previous research in – or in association with – U-CARE elaborates details on specific randomized trials conducted using the software (e.g., Alfonsson, Olsson, Linderman, Winnerhed, & Hursti, 2016; Ander et al., 2017; Mattsson et al., 2013; Norlund, Olsson, Burell, Wallin, & Held, 2015; Ternström et al., 2017).
2 Typically, participants are placed in either the treatment group, the control group, or a reference group.
Figure 3: Overview of the U-CARE online study process
Figure 3 shows that participants who take part in a study go through different observation points. All participants go through the screening phase by answering some questionnaires; thereafter, they are randomized into different groups (Treatment, Control, Reference). These groups are offered different questionnaires as configured in the design of the particular study. These phases are called observation points. The study therefore collects data periodically, at different times, from different groups of participants.
providing a method space and other principles for the grounded U-CARE design that can help sustain and formalize learning across various ADR cases. Various practices draw on the multidisciplinary approach of U-CARE to resonate with evaluative ideas in IS research design (Mustafa, 2019, p. 59). As a result, IS researchers are able to use the iteratively developed U-CARE software system to develop the novel design knowledge needed to understand the problem domain. IS researchers also have an overarching ambition of developing new knowledge addressing challenges facing current research designs.
IS researchers form a substantial part of the U-CARE development team, whose main role is to develop software in an interactive and agile manner. The U-CARE software system is moderated by researchers who are stakeholders from different disciplines. These researchers conduct meetings and research processes that provide feedback on the continued research and development efforts. Such efforts are stakeholder-centric and thus contribute significantly to the U-CARE programme's overarching objectives (Mustafa, 2019, p. 60). The U-CARE stakeholders form an integral part of the academic research context that oversees the eHealth research software design. To achieve this, stakeholders in all categories are involved in an external process of design evolution that manages all research feedback (Mustafa, 2019, p. 61). Results from such processes provide direct feedback to all stakeholders, allowing users to give feedback easily through the system design. Table 4 illustrates the stakeholders' relevance both in the U-CARE context and in the dissertation. Clinical researchers conduct clinical trials to investigate mental and emotional health challenges that can arise as a result of physical disorders; they include cognitive scientists, nurses, and psychologists. In the dissertation, clinical researchers are users of the U-CARE system. Psychologists develop and deliver psychosocial solutions: they outline the content of CBT, manage the content of the U-CARE system, devise CBT treatments, and keep contact with study participants. Health economists study and evaluate the costs of the software's interventions and provide feedback at various stages of the system. Associated researchers manage studies associated with the study context, provide varying requirements for the system, and influence and evaluate design choices. Health care professionals, including hospital staff, nurses, and physicians, enroll research participants into the system, moderate and answer FAQs and discussion forums, and give responses on task-related matters. Research participants are people who have volunteered to take part in a study through the U-CARE system; they provide secondary feedback on the system. Information systems researchers collaborate with software developers to shorten development time and minimize cost. Finally, software developers maintain and develop the U-CARE system.
IS researchers have been active in the U-CARE program since 2010. Following a design science research (DSR) approach, IS researchers have acted as designers, developers, and researchers.
Figure 4: My roles in U-CARE
Over the period of engagement in this research, from 2010 to date, I have been involved in various roles. I have worked on the U-CARE project as a system developer and as a researcher at different times, as shown in the timeline (Figure 4).
Between November 2010 and October 2012, my main role in the project was system developer, which included the design of features for online questionnaires. I was one of the leading developers designing and architecting the U-CARE software. My responsibilities involved full-stack development, including programming, front-end design, database management, systems analysis, and security assurance. I designed and developed proofs of concept for ICBT (Internet-based Cognitive Behavior Therapy), online surveys, RCTs (Randomized Controlled Trials), the library, chat, and many other features. Some features were complex, such as the automation of software behavior based on the answers people provided in questionnaires. Such automation came in two parts. First, I implemented skip logic for the surveys, ensuring that the subsequent questions shown to respondents depend on their previous responses, as further explained in section 2.3.1 of this thesis. Second, I implemented a mechanism to support the logic of RCTs based on expression evaluations drawing on the answers to questionnaires (further explained in section 2.3.2). In brief, based on the results of the questionnaires, respondents are randomly placed into three groups (Treatment, Control, and Reference), determined by logic that is provided during survey design, so that researchers, psychologists, and therapists are able to provide the relevant materials to their target groups. Using this design, U-CARE studies have automated logic for inclusion/exclusion of participants and for randomization of participants into different study groups, based on the results of the baseline questionnaires. The work on questionnaire design and automation of logic at this early stage provided me with knowledge, experience, and interest in solving design problems related to online surveys.
Following my role as a developer, I became a PhD student in autumn 2012. I initiated a systematic literature review to find out how user behavior in the use of IT artifacts is conceptualized in the literature in general, and to identify evaluation techniques for IT artifacts drawing on user behavior data. This is further explained in the literature review chapter X of this thesis.
Due to a lack of developer resources, I paused my PhD work and worked again as a system developer on two occasions, in February 2013 and May 2014. The work included upgrading the jQuery version on the U-CARE portal, fixing various bugs, and converting views in the system to Razor and HTML5. I was also involved in the development of the mobile version of the application. This was carried out to meet the demands of various users to support old browsers and mobile devices (as highlighted in section 4.5). Further, I implemented visualization techniques to present questionnaire data (as explained in section X) – a practical assignment that resonated well with my emerging research questions.
I have then been involved as a PhD student again, from June 2014 until now. A detailed account of my roles in U-CARE is given in section X. The interplay between roles gave me the opportunity to obtain empirical data (questionnaires filled in by participants in different U-CARE studies) and to identify improvement opportunities for RBL techniques.
The system development and design process in U-CARE was set up in accordance with agile values (Conboy, 2009), characterized by sprint reviews approximately every two weeks. In each sprint, a new set of tasks (features, improvements, bug fixes, etc.) was prioritized in discussions including the product owner (a role played by a representative from U-CARE), developers, and other stakeholders, leading to a sprint plan guiding development during the following two-week period. The review meetings had several recurring members representing different professions and academic disciplines, including psychology, caring sciences, economics, and information systems. Daily meetings among the developers were conducted to discuss impediments and possible actions to address them, so that smooth delivery of tasks could be achieved. There were also other measures among the developers, such as workshops and pair programming, to discuss technical issues and testing strategies and to learn from each other's work. In addition, external specialists and patient groups were invited to explore the software, followed by workshops in which they provided feedback to the developers.
Skip Logic
A key point in this regard is the branching design: participants are "tracked" according to their responses in such a way that the only subsequent questions seen are the ones that are relevant on the basis of preceding responses. This is an example of conditional branching, whereby the survey tool "would be able to jump to a different instruction depending on the value of some data" (Freiberger & Swaine, n.d., para. 4). This is a key component of what is perceived as the decision-making "intelligence" of computers, and it plays an important role in the structure of online surveys. Another term that is often used for this design element is "skip logic". University of Washington Information Technology describes the role of skip logic: it "allows you to create custom paths through your survey or quiz, showing the participants questions based on their response to a previous question" (skip logic, para. 1). This makes the survey relevant in a very clear way: instead of generating a mass of irrelevant data, the survey's structure is tailored to maximize relevance for each and every person who takes it.
The example above shows that if a user chooses the answer ‘Ja’ (which carries a weight of 1) for question 15, then question 16 will be displayed to the user. Question 16 has the conditional expression ‘@q15@ == 1’ and remains invisible until the condition is fulfilled. In this way, the survey designer has the capability to configure conditions for displaying questions.
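The mechanism can be illustrated with a short sketch. The U-CARE implementation is in C#; the Python below is an illustration only, and the function name `should_display` is hypothetical. Placeholders of the form `@qN@` are substituted with the weight of the respondent's answer before the condition is evaluated:

```python
import re

def should_display(condition: str, answers: dict) -> bool:
    """Evaluate a skip-logic condition such as '@q15@ == 1'.

    Placeholders @qN@ are replaced by the weight of the answer to
    question N; the resulting expression is then evaluated. A question
    without a condition is always shown; a question whose condition
    refers to an unanswered question stays hidden.
    """
    if not condition:
        return True

    def substitute(match: re.Match) -> str:
        qid = match.group(1)
        # Unanswered questions substitute to None, so comparisons fail
        # and the dependent question remains hidden.
        return str(answers[qid]) if qid in answers else "None"

    expr = re.sub(r"@(q\d+)@", substitute, condition)
    try:
        return bool(eval(expr, {"__builtins__": {}}))
    except Exception:
        return False

# Question 16 carries the condition '@q15@ == 1' ('Ja' has weight 1):
print(should_display("@q15@ == 1", {"q15": 1}))  # True
print(should_display("@q15@ == 1", {}))          # False
```

A real survey engine would parse the expression rather than use `eval`, but the substitution-then-evaluation flow is the same.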
Figure 6 shows that there are multiple expressions the survey designer can define for the HADS questionnaire3 (Zigmond & Snaith, 1983) to set up inclusion criteria for the RCT. The first expression, Ångestindex (anxiety score), is the sum of the answers to all the odd-numbered questions, and Depressionsindex (depression score) is the sum of the answers to all the even-numbered questions. The final expression determines whether the user shall be included in the RCT, depending on whether the user scores more than 7 on either Ångestindex or Depressionsindex. C# syntax allows complex conditions to be configured. In addition, expressions can be defined based on previously defined expressions, as shown in Figure 4.
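The scoring and inclusion logic described above can be sketched as follows. This is a Python illustration of the expressions (the actual U-CARE expressions are written in C#), and the function name `hads_scores` is hypothetical:

```python
def hads_scores(answers: dict) -> tuple:
    """Compute HADS subscale scores and an RCT inclusion flag.

    `answers` maps question number (1..14) to the answer weight.
    Odd-numbered items form Ångestindex (anxiety score) and
    even-numbered items Depressionsindex (depression score); a score
    above 7 on either subscale triggers inclusion in the RCT.
    """
    anxiety = sum(w for q, w in answers.items() if q % 2 == 1)
    depression = sum(w for q, w in answers.items() if q % 2 == 0)
    include_in_rct = anxiety > 7 or depression > 7
    return anxiety, depression, include_in_rct

# A respondent scoring 1 on every odd item and 2 on every even item:
example = {q: (1 if q % 2 == 1 else 2) for q in range(1, 15)}
print(hads_scores(example))  # (7, 14, True)
```

Note how the inclusion expression is built from the two previously defined subscale expressions, mirroring how expressions in the survey designer can reference earlier expressions.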
Environment Log
Supporting the web user experience on multiple devices has become a common requirement for maximizing the usability of web applications. The advent of smartphones and improvements in browsers have amplified the use of web applications. While the limitations of early browsers are gone, it has become prevalent for application developers to detect device information so that content can be better served. With device detection, the HTTP headers that browsers send as part of every request are examined; these are usually sufficient to identify the browser properties.
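A minimal sketch of header-based client classification is shown below. Production systems use dedicated device-detection libraries with large pattern databases; this Python illustration (the function name `classify_client` is hypothetical) only shows the principle of reading the User-Agent header:

```python
def classify_client(headers: dict) -> dict:
    """Roughly classify a web client from its HTTP request headers.

    Only the User-Agent header is inspected here; it often reveals
    the browser family and whether the device is mobile.
    """
    ua = headers.get("User-Agent", "")
    # Order matters: Chrome's UA also contains "Safari", and Edge's
    # contains "Chrome", so more specific tokens are checked first.
    if "Firefox" in ua:
        browser = "Firefox"
    elif "Edg" in ua:
        browser = "Edge"
    elif "Chrome" in ua:
        browser = "Chrome"
    elif "Safari" in ua:
        browser = "Safari"
    else:
        browser = "Unknown"
    device = "mobile" if ("Mobile" in ua or "Android" in ua) else "desktop"
    return {"browser": browser, "device": device}

print(classify_client({
    "User-Agent": "Mozilla/5.0 (Linux; Android 13) Chrome/120.0 Mobile Safari/537.36"
}))  # {'browser': 'Chrome', 'device': 'mobile'}
```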
3 The Hospital Anxiety and Depression Scale (HADS) has been widely used to screen patients for anxiety and depression.
Figure 7. Environment log
The environment log collects information about web clients, including browser type, operating system, screen resolution, JavaScript version, etc.
• The survey uses a responsive design, and the environment log allows us to serve clients a better user experience. For example, if a client uses a mobile device, the survey is presented in an appropriate, mobile-friendly format, which allows a better user experience for the client.
• The JavaScript version and browser type allow the survey software to function properly, both for logging activities (e.g., click events) and for presenting survey content well.
In developing this system – survey design in general – the work has been divided between two main actors. I have been involved in creating the survey design, while Jonas Sjöström has conceptualized the idea as Respondent Behavior Logging (RBL). The idea was to create a survey design mechanism according to the needs of the therapists and researchers in U-CARE. The design and development work for surveys has grown more sophisticated over the development cycles and has become unique, since there has for a long time been no survey design mechanism that is self-evaluating. Storing every action of the respondent, using AJAX technology, enables us to track the respondent's entire behavior. In this way, I have been able to identify specific traits of the participants. After developing the system, I have taken part in debugging and improving the software.
Extensive logging of user actions was covered by the ethical approval for U-CARE, based on the need to study user behavior in relation to treatment results. Such knowledge, i.e., ‘white-boxing’ user behavior, provides an important opportunity to better understand the design of eHealth systems. A subsequent ethical approval relates to the evaluation in stage X, where we examine actual patient behaviors when filling in questionnaires in the U-CARE context.
Figure 8. Question And Answers log
The Question and Answer log shows details about each action made by respondents while filling in questionnaires. Among these data are: session, which uniquely identifies a web client instantiation for each user; action_id, which determines the type of action (view, add, edit, etc.); milliseconds, which gives the duration of each action; and parameters, which give all the details about questions, etc. Through this logging of data on the usage of online surveys, we obtain a richer representation of the data.
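The shape of one log row can be sketched as follows. The field names mirror those described above, but this Python illustration is hypothetical: the actual U-CARE log is a database table populated from AJAX calls in the C# backend.

```python
from dataclasses import dataclass, field
import time
import uuid

@dataclass
class RespondentAction:
    """One row of the question-and-answer log (illustrative field names)."""
    session: str        # uniquely identifies a web client instantiation
    action_id: str      # type of action: "view", "add", "edit", ...
    milliseconds: int   # duration of the action
    parameters: dict = field(default_factory=dict)  # question id, values, etc.

def log_action(log: list, session: str, action_id: str,
               started_at: float, **parameters) -> None:
    """Append one respondent action with its measured duration."""
    log.append(RespondentAction(
        session=session,
        action_id=action_id,
        milliseconds=int((time.time() - started_at) * 1000),
        parameters=parameters,
    ))

log = []
session = str(uuid.uuid4())
t0 = time.time()
# The respondent edits an earlier answer, from 'Nej' to 'Ja':
log_action(log, session, "edit", t0, question="q15", old="Nej", new="Ja")
print(log[0].action_id, log[0].parameters["question"])  # edit q15
```

An "edit" row like this is exactly the kind of behavioral trace RBL analyzes: it records not only the final answer but also that the respondent changed their mind, and how long the action took.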
Theoretical perspective
In this section, we account for some fundamental theoretical starting points for this research: first, a pragmatist interest in understanding the use of information technology in relation to human action; second, a theoretical reflection on the characteristics of the online medium and its implications for conducting surveys; third, the role of information technology in supporting automation of tasks and complex logic in support of human activity. These underlying theories form a perspective that has influenced the entire research process, as elaborated in section X.X.
Instantaneity: Not applicable.
Self-expression: Opportunities to remain either identified or anonymous.
At the inception of this research process, some of the dimensions above were at the core of the discussion, being clearly related to online surveys. Where conducting surveys is concerned, there are a number of features that are critical and can affect the data-gathering mechanism. One of the dimensions highlighted in Clark's model is recordlessness. Unlike traditional face-to-face conversation, communication that takes place over the Internet can be saved and kept as a record. This information can later be revisited and used for decision making (Clark, 1996). In the same way, simultaneity works differently in a normal conversation than online. In surveys conducted over the Internet, participants can change their answers as they interpret the questions they are asked. If the element of simultaneity can be implemented in the online process, it can add more validity to their responses. This can be one area of opportunity for obtaining more valuable feedback in ‘real time’ (Clark, 1996).
The actions of participants in online surveys are often constrained by the technology. The element of self-determination is restricted by technology when respondents take part in an online survey; such restrictions are smaller in a normal conversation. This raises an opportunity to develop mechanisms that strengthen self-determination (Sjöström & Alfonsson, 2012). In this research, the concept of Respondent Behavior Logging therefore provides opportunities to improve the design of questionnaires based on the behavioral trail participants leave while answering surveys. If the constraints of technology can be mitigated, and complemented with systematic means to understand respondents' actions online, this can pave the way for new research opportunities. For example, giving people who fill in an online survey the opportunity to change an answer later can strengthen respondents' self-determination, since changing an answer signifies that they were not confident about the first answer they provided and were later able to change their mind.
Online tools can produce high-quality outcomes without human intervention. Targeting response quality, data and processes can be improved with the support of real end users. Besides quality measurement and maintenance, analytics is another major advantage provided by the online medium. Records are stored safely and are accessible at any time, enabling effective research. Instead of hypothetical conclusions, organizations can rely on the qualitative and quantitative data associated with analytical research. The online medium thus offers a wide range of opportunities for survey data collection. It can be used to redefine the elements of visibility, recordlessness, identity, extemporaneity, and simultaneity when surveys are conducted on the Internet. Technology can thus help design engaging questionnaires. Intuitively designed online questionnaires are capable of attracting more participants and inspiring them to complete the survey. Surveys conducted online can thereby improve the data collection process and yield better results.
• The design of surveys within U-CARE demands some unique features:
o It should support a variety of devices to ensure maximum participation.
o It should facilitate RCTs.
o Multiple views of questions, to ensure the survey's usability.
o Techniques to show only relevant questions to respondents.
o Questionnaires should be easy to design, with maximum flexibility for the designers.
• Comprehensive collection of data based on online surveys.
• Given the underlying perspective of pragmatism, this research emphasizes how to understand people's actions in the context of online surveys.
• The RBL concept does not only account for the theoretical approach, but also looks at how an action-oriented logging mechanism using web technologies is applied to create a process of inquiry that has practical impact on the surveys used in U-CARE studies.
Research Approach
As discussed above, the research process in this thesis is part of a larger research setting in which a fairly complex piece of software was designed and used to conduct randomized controlled trials in the context of eHealth research. In this chapter, we present our design science research approach (section 0) as enacted within the U-CARE research setting. Note that design science research was employed in the larger setting as well as in the work reported in this thesis. We provide the rationale for the selected approach given the research questions at hand (section 0). In section 0, we appropriate DSR guidelines for this research. Section 0 provides insight into how a literature review was conducted to identify the knowledge base for this work, followed by a discussion of the ways in which this research contributes to the knowledge base.
This design science research has been divided into five stages, as shown in the figure below. In this section, we account for what has been done at each stage of the design science research process.
Figure 9: Staged Design Process
Stage 0: Emergence of RBL
This stage was performed between December 2010 and 2012 and involved the design of the questionnaire, the implementation of the skip logic, and the implementation of the RCT. The system was thereafter checked for fitness using workshops and sprint retrospectives. The implementation of the skip logic at this stage of the project enabled designers to create a branching system in the questionnaires: the survey tool is able to jump to a given point depending on the answer given by the respondent (see section 2.3.1 for additional details). In addition, the idea of a Randomized Controlled Trial (RCT) was implemented, which is useful for grouping respondents depending on conditions set by the designer at the design stage of the questionnaires. The designer can have multiple questions that are used as inclusion criteria in a survey. This was designed using a system of expressions whose scores determine whether an individual is included in or excluded from a study group (see section 2.3.2 for more details). Additionally, the environment logs are capable of collecting user data such as browser type, operating system, JavaScript version, and screen resolution for further analysis. This combination of features enhances the artifact's capability of tracking and studying the behavior of respondents, and it was thus the foundation for later stages to continue the design work and conceptualize RBL.
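The inclusion-then-randomization flow from this stage can be sketched as follows. This is a hedged Python illustration (the real allocation logic is configured per study in the C# system, and the names `randomize` and `GROUPS` are hypothetical):

```python
import random

GROUPS = ("Treatment", "Control", "Reference")

def randomize(include: bool, rng: random.Random):
    """Allocate an included participant to a study group.

    Inclusion is decided beforehand (e.g., from baseline questionnaire
    scores such as HADS); included participants are here allocated
    uniformly at random, while excluded participants receive no group.
    """
    if not include:
        return None  # excluded by the baseline screening
    return rng.choice(GROUPS)

rng = random.Random(42)  # seeded for reproducibility in this sketch
print(randomize(True, rng) in GROUPS)   # True
print(randomize(False, rng))            # None
```

Real trials typically use stratified or blocked randomization rather than a uniform draw; the sketch only shows how inclusion logic and group allocation chain together.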
This forms stage II of this research, carried out between 2013 and 2015. During this stage, a systematic literature review was performed, mainly in the areas of Information Systems (IS), Human-Computer Interaction (HCI), and eHealth (see chapter 0). Further, I summarize the various literature reviewed in this research, together with the findings (see section 4.2).
As a means to make RBL data interpretable and meaningful for stakeholders in the design practice, a set of visualization techniques was crafted and implemented in the software. The knowledge base on questionnaire evaluation was factored in, primarily the idea of being able to assess time aspects, the phrasing and quality of single questions, and structural issues with the questionnaire.
On the basis of collected RBL data, a series of visualizations was rendered using the software. A focus group consisting of experts from the design practice was organized (see chapter 0). The visualizations were presented to the focus group, and the participants were encouraged to discuss the utility of these representations to (i) identify design flaws and suggestions for improvements of the questionnaire and (ii) discuss whether they would interpret the collected data differently given the way they made sense of the RBL visualizations. The evaluation was thus based on the qualitative data collected in the focus group session.
Ideally – depending on the ethical approval process – we can create visualizations from production data collected in the studies and conduct focus groups with U-CARE stakeholders. If no ethical approval is given, the focus groups may be conducted anyway, based on ‘fictive’ data or data from new experiments with non-health respondents. Thereby, we will have an informed argument about the extent to which RBL analysis works to detect design flaws in a survey instrument.
This forms the last stage of the design process, conducted between 2017 and 2019. The aim of this stage is to further review the design of the system based on the feedback obtained in the previous stages 0 to III, together with a review of the RBL framework and design principles. Based on the analysis in section 5.5, it was possible to develop a system that can track the behavior of respondents during an online survey. The evaluations (see chapter X) show how RBL-based questionnaire evaluation techniques using behavioral data can be used in practice. The tracking of users' log data can be significant in determining flaws in the design of a questionnaire. It may also reveal errors in the questionnaire and act as a basis for further improvements to the study. Moreover, the comprehension of log data may enable researchers to enhance their skills in future questionnaire design.
Then, we aim at conducting a final evaluation of RBL using the data collected over 5+ years in U-CARE research trials. Since such evaluation is based on real clinical data, an ethical approval process was needed. Ethical approval was granted in spring 2016, allowing for an analysis of RBL data collected in U-CARE. Based on the RBL measurements, we will make statements (hypotheses) about a selected well-known instrument used in eHealth research. Those hypotheses will then be examined by surveying the literature about the instrument and comparing earlier tests and critiques of the instrument with the hypotheses.
Rationale for a DSR approach
For several reasons, Design Science Research (DSR) is a suitable approach for answering the research questions posed in this thesis. We address the reasons for adopting DSR in turn.
This research emerged as an opportunity from the design of the U-CARE online platform. In chapter 0, we provided the research context, from which we can draw that this work is part of a larger DSR project and therefore inherits the aim of designing to solve real problems, making health care activities possible via the U-CARE online portal. However, little or no work has been done on respondent behavior studies in the online survey context. Following Gregor & Hevner (2013), this means that the phenomenon at hand is suitable to inquire into using a DSR approach. In this context, the technique by which a survey can be evaluated based on its usage, given the U-CARE research setting, makes the problem distinctive, and the solution to such a problem is therefore hard to associate with any existing solutions. Often, academic software differs from commercially developed software (Groen et al., 2015). U-CARE collects sensitive data such as personal identity numbers, emails, and mobile numbers, which cannot be shared with any third-party online survey software. Even though there is software that functions with anonymous users, the survey questions themselves are highly sensitive, and even the slightest chance of tracing back a user's mental or physical description is considered unethical and a violation of the code of conduct approved for the U-CARE research. Moreover, the needs for customized designs of online surveys within U-CARE are very different from what traditional online survey software can offer. The fact that participants are randomized into different groups based on the scores they obtain from answering surveys is a unique feature of the U-CARE online portal. Therefore, many existing tools, which may have mechanisms to understand users' online behavior, become obsolete within the U-CARE research context.
According to DSR, the aim is to create design-oriented knowledge and to help us understand a new problem domain. The idea of RBL is an example of design knowledge that has emerged because the change in medium – the transition from paper-based surveys to the online medium – has opened a new wave of challenges for the U-CARE researchers, which can diminish the goodwill of having a cost- and time-effective online survey mechanism.
The framework relates to our description of each stage (Table 2). The stage descriptions include the practice link, i.e., how needs in the environment influenced design and development. Further, the evaluations conducted in each stage may relate to the environment, when they include naturalistic evaluation (Venable et al., 2015). Evaluation may also be done outside the practice environment, i.e., through artificial evaluation (Venable et al., 2015). Each stage also builds on the knowledge base, by incorporating literature studies that affect the design and/or evaluation in that stage. Finally, publications from the stages constitute contributions to the knowledge base.
Hevner et al. (2004) provide a set of guidelines for conducting DSR. Be-
low, we elaborate on how these guidelines have been applied to this research.
Draft protocol
The literature review was done in the areas of Information Systems (IS), Human-Computer Interaction (HCI), and eHealth. IS being the main area of interest, we took a systematic review approach towards it, which resulted in a three-step process of conducting the literature review based on the Webster & Watson (2002) strategy (see chapter 0). The areas of HCI and eHealth are equally important, but more limited search operations were conducted for them, given that the primary objective of this research is to contribute mainly to the IS community. This research is situated in the U-CARE eHealth project, and it is therefore imperative to understand what can be learned from existing phenomena discussed within the eHealth community regarding evaluation techniques based on users' online traits. HCI is also salient in revealing techniques related to logging, and in understanding how human interaction with technologies is traced and used to improve human practices.
Extract data
The first round of collected articles provided us with more relevant references and keywords. The referenced articles were collected, and a new set of refined keywords was formed, which led to the second round of search operations.
Appraise quality
After consecutive search operations over the literature, we were able to narrow the collection down to a smaller set of more relevant articles.
Synthesize studies
All relevant articles were categorically summarized and reflected upon with respect to each discipline (IS, HCI, eHealth) in order to develop a comprehensive understanding of how logging of user behavior is conceptualized and used within the respective areas (see chapters 0, 0, 0). Special attention was given to the area of Information Systems.
Write the review
We account for the articles, delineating their relevance to our research, in chapters 0, 0, and 0.
the line (subjectivity) and the fact that nothing is entirely (really) new; everything comes to exist based on something else (Hevner, 16). Based on these considerations, there are four quadrants in the DSR knowledge contribution framework, as shown below.
The invention quadrant refers to DSR work that leads to a radical breakthrough (true invention). It is a pure departure from accepted ways of thinking and doing. DSR projects that fall in this quadrant have little understanding of the existing problem context, and the identified problem has no existing artifacts as solutions (Hevner, 18). According to Gregor & Hevner (2013), the best description of the invention process is an exploratory search over a difficult problem space, requiring cognitive skills such as imagination, curiosity, insight, creativity, and knowledge of various realms of inquiry to identify a feasible solution.
The improvement quadrant has the goal of providing better solutions: more
effective and efficient processes, products, technologies, services, or ideas
(Gregor & Hevner, 346). This quadrant applies to DSR projects in a mature
problem context that nonetheless need more effective solutions.
In the exaptation quadrant, effective artifacts exist in related problem areas,
and these artifacts can be adapted to a new problem context (Gregor & Hevner,
347). This quadrant allows knowledge, in its current or refined form, to be
applied in a new area of application.
In routine design, known solutions are applied to known problems; existing
knowledge is routinely applied to familiar problem areas.
In this study, the DSR project involves the creation of an artifact where the
problem context is a questionnaire/survey tool. The artifact aims to provide a
way to carry out online surveys as well as to evaluate the productivity and
accuracy of the questionnaire itself. First, the artifact enables researchers
to develop online questionnaires. Second, it enables researchers to analyze the
effectiveness of a questionnaire through the data collected from it. Third, the
literature presented in this research shows that there is no existing artifact
as a solution for this problem context. We therefore believe that the research
falls between the invention and exaptation quadrants.
The research can be placed in the invention quadrant since it introduces RBL as
a new concept with respect to both solution maturity and application domain
maturity. Regarding the low solution maturity, there are no other known
solutions for the problem context addressed in this research; if successful,
the result is a novel artifact or invention. RBL is an attempt both to better
conceptualize the problem domain and to provide solution guidance. Regarding
application domain maturity, RBL is a novel way to evaluate questionnaires
compared to existing evaluation techniques. RBL collects real-time data from
the online questionnaire. Interpreting the collected data logs can help improve
the questionnaire structure and identify misconceptions in the questionnaire
that hinder respondents' understanding of the questions. RBL may also lead to
the evolution of new ideas about the questionnaire and its design features
after the evaluation process.
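As a sketch of how such log interpretation might work in practice, the snippet below counts 'revise answer' events per question and flags questions that are revised unusually often, a possible indicator of confusing wording. The log format, field names, and threshold are illustrative assumptions, not part of the artifact described in this thesis.

```python
from collections import Counter

def revision_hotspots(log, threshold=2):
    """Count 'revise answer' events per question in a respondent
    behavior log and flag questions revised unusually often --
    a possible sign that the question wording is confusing."""
    revisions = Counter(
        entry["question_id"]
        for entry in log
        if entry["action"] == "revise_answer"
    )
    return [q for q, n in revisions.items() if n >= threshold]

# Toy log: each entry records one respondent action on a question.
log = [
    {"action": "answer", "question_id": "Q1"},
    {"action": "revise_answer", "question_id": "Q2"},
    {"action": "revise_answer", "question_id": "Q2"},
    {"action": "answer", "question_id": "Q3"},
]
print(revision_hotspots(log))  # -> ['Q2']
```

A researcher could then inspect the flagged questions manually rather than treating the count as a verdict on its own.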
It can also be argued that the idea of logging user behavior is not entirely
new; therefore, the contribution also resides in the exaptation quadrant. There
are studies that track respondents' behaviors, in the sense that the research
questions are meant to identify the effects that a certain behavior or
perception on the part of the respondent has on outcomes. But for this problem
context (i.e. online surveys and strategies for online survey design), there is
no existing artifact that could be improved, so the research does not fall in
the improvement quadrant. Rather, RBL may provide solutions for the current
research context based on ideas of logging user behavior, which amounts to
applying an effective artifact within a new problem domain.
Knowledge Base
how the behavior of respondents can be used to improve surveys and the quality
of research in IS, making the research more unique (see further explanation in
section 4.23 of the report). In addition, after noting the knowledge gap in the
field of IS, we proceeded to carry out a review of the literature in the fields
of HCI and E-Health, which has been used to acquire knowledge about the
evaluation of artifacts using logging mechanisms of user behavior (this is
explained further in section 4.3 of the report).
Search strategy
The literature review strategy was based on Webster & Watson (2002). The
following eight journals were selected as the sources for the present literature
review: European Journal of Information Systems, Information Systems Jour-
nal, Information Systems Research, Journal of the AIS, Journal of Information
Technology, Journal of Management of Information Systems, Journal of Stra-
tegic Information Systems, and Management Information Systems Quarterly.
The rationale for selection was that they are known in the information systems
community as the "basket of eight" (i.e. sources known for consistently pub-
lishing high-quality evidence). The search was conducted using the following
keywords.
online survey, online questionnaire, online therapy, survey instrument, sur-
vey log, respondent log, behavior log, questionnaire log, survey validation,
questionnaire validation, questionnaire tracking, online tracking, and survey
tracking.
The first iteration of this search produced a total of 119 articles. The titles
and abstracts of each article were manually evaluated for salience with regard
to the present research focus. This narrowed down the number of articles to
12, and these are the articles that are included in the present literature review.
Then, 8 additional references were retrieved through a careful reading of those
12 articles. That is, the new references were retrieved not through an inde-
pendent search of the literature but rather through a review of the references
that appeared in the original 12 articles.
The second iteration of the search used a new, refined set of keywords to
produce more conclusive results. The following keywords were used in the second
iteration.
cognitive absorption, perceived utilitarian performance, expectation
disconfirmation, IT satisfaction, continual usage intention, positive emotion
theory, incentives for participation, expectation-confirmation theory, user
beliefs, perceived usefulness, customer satisfaction, IS use, continuance,
acceptance, user satisfaction, confirmation, technology acceptance model,
innovation diffusion, user attitudes, user behavior.
It produced 392 new articles. The same screening process as in the first
iteration was applied, narrowing the results down to 11 articles. In total, the
literature search thus resulted in 31 articles (appendix X lists the articles).
Discussion of Findings
The concept of respondent behavior logging (RBL) must be interpreted in
broad terms as any effort to elucidate the logic and antecedents of a given
person's behaviors regarding information technology (IT). There are studies
that track respondents' behaviors, in the sense that the research questions are
meant to identify the effects that a certain behavior or perception on the part
of the respondent has on other outcomes. Research can then focus on optimally
changing those behaviors and perceptions. Some of the retrieved research
articles have very specific focuses in this regard. For example, Dewan
and Ramaprashad's (2012) study sought to identify relationships between so-
cial media use and music consumption; Ruth (2012) has investigated the ef-
fects that "conversation" in a question-and-answer site can have on user satis-
faction; and Koch et al. (2012) have explored the effects that social media use
in the workplace can have on employee outcomes. Perhaps the most salient of
the retrieved articles is the one written by Sen et al. (2006), in which the re-
searchers attempted to delineate the search strategies of online buyers. All of
these studies had the purpose of developing conceptual maps of which qualities
in the participants corresponded to which specific selected outcomes. On the
basis of the findings of the various studies, business and IT professionals
could implement processes meant to change the independent variables
(respondents' qualities) and thereby change the dependent variables (selected
outcomes such as IT use or morale in the workplace). Again, this could be
called respondent behavior tracking in a broader sense.
The advent of Web 2.0 has resulted in new opportunities for how research is
conducted. One method entails using the web to have respondents answer survey
questions online and in real time. However, respondents often exhibit reflex
reactions when answering such questions, and a means for logging and evaluating
respondent behavior is therefore necessary. It is important to evaluate and
predict, to an acceptable degree of accuracy, the expected behavior of
respondents to online forums and applications such as web-based questionnaires
and surveys. The behavior of respondents and other users of online IT/IS
resources has been predicted using a variety of methods, such as EDT, trust
theory, and needs-based online community commitment. Web-based surveys provide
a convenient means for collecting data and have been used in many previous and
present research projects (Picoto, Belanger, et al. 2014). However, in their
research on m-business value determination and usage factors, Picoto, Belanger,
et al. (2014) make no mention of logging user activities to evaluate respondent
behavior. Their research used traditional questionnaire pre-test methods and
pilot testing to evaluate possible user responses in order to revise the
questionnaire before final administration.
The behavior of users is also evaluated through research approaches such as
that of Walsh et al., who researched the use of IT by evaluating IT users'
culture. The behavior of people using technologies and other information
systems applications has been evaluated using the trust concept: trust in an IS
influences how users interact with and respond to the system. The long-term
effect of trust on users' continued consumption and use of IT/IS systems and
applications has been evaluated using the CEDT (complete expectation
disconfirmation theory) model (Lankton, McKnight and Thatcher 133).
Disconfirmation, satisfaction, and performance affect trusting intention when
users interact with technology applications such as web-based surveys. Trust
theory and EDT have been used to determine and predict technology user behavior
with regard to the user satisfaction construct. IT/IS system usage can be
evaluated using the system usage construct approach to evaluate system user
behaviors (Burton-Jones and Straub 228). The construct of system usage has
played a central role in IS (Information Systems) research. The system usage
construct evaluates IT/IS user behavior based on a re-conceptualization of the
system's structure and function, evaluating behavior through system, task, and
user evaluation (Burton-Jones and Straub 230). Only a handful of studies have
developed literature on the use of constructs in IS research applications, the
most relevant work being done by Trice and Treacy in 1986 and by Seddon in 1997
(Burton-Jones and Straub Jr. 229-230). EDT and user trust theory have been used
to evaluate characteristics that improve the decision making of IT/IS system
users.
To study the behavior of online communities in engaging in activities such as
responding to threads and making comments, organizational commitment typologies
have been drawn upon to explain online community user behavior. According to
the organizational commitment typology, individuals exhibit behavior by
developing psychological bonds based on their own affects, needs, and
obligations. Every type of community commitment exerts a distinctive effect on
behavior; need-based commitment can predict whether a certain user will read a
given thread or not, while obligation-based commitment can predict the
moderating behavior of users of online applications and IS (Bateman, Gray and
Butler 844). Studies of online forums and communities have not arrived at a
concrete explanation of their nature; some studies conclude that individuals in
these (online) forums are motivated by their own self-interest, while other
studies have suggested that online community behavior is due to altruism
(Bateman, Gray and Butler 844). Using the organizational psychology typology
and psychological affiliation, it can be predicted how users will behave in
online discussion forums. A user's likelihood to respond to a thread, for
instance, can be predicted by evaluating the user's needs.
Self-service IS have been set up by organizations as a way of obtaining
information from users and customers for use in improving product and service
delivery. Post-adoption IS use behavior can be used to predict future usage by
assessing feature-level IS usage, the exploration of new IS uses, and the
integration of IS into the work system (Saeed and Abdinoor 223). Using this
approach to IS user behavior, psychometric properties are seen to have a strong
influence on post-adoption IS usage. The method shows that ease of use,
user-initiated learning, usefulness, voluntariness, and satisfaction can
predict online user behavior. Engagement theory can be used to predict and
enhance the participation of users in online forums and communities. Online
communities and forums incorporate people of diverse interests, backgrounds,
ages, genders, and tendencies, so their willingness to participate in given
forums can be predicted, and enhanced, through engagement (Kim and Morris 544).
In addition, the feeling of self-efficacy can also be used to predict the
contributions of users to online forums.
Social capital theory has been used to examine antecedents of user satisfaction
with IT/IS systems; specifically, relational capital and cognitive capital can
predict the satisfaction of users of IT/IS platforms (Fang, Lim, and Straub
1199). To evaluate the experiences of IT users, cognitive absorption theory can
be used to conceptualize the optimal holistic feelings that users experience
while using IT/IS platforms. Post-adoption use of IT/IS systems and continued
intention of use can be predicted and evaluated through the application of
cognitive absorption theory (Deng, Turner, Gehlig and Prince 63-64).
Bhattacherjee (2001b) evaluated how the expectations of users affect their
engagement with information systems, and Bhattacherjee (2001a) evaluated the
factors that contribute to users of electronic commerce services achieving the
selected outcome of continuance. A conceptually similar study was conducted by
Oliver (1980) regarding the factors that lead users of a technology, product,
and/or service to experience satisfaction, as well as the effects that follow
from a consumer feeling satisfied. Similarly, Kim and Steinfield (2004)
investigated the relationship between customer satisfaction and continuance
intentions with respect to mobile Internet services. In another study, Rabandr
and Rabine (2009) used statistical tests to evaluate patterns in web-based
interactions.
The other studies retrieved for the present literature review are very similar
to the ones discussed thus far. For example, Karahanna, Straub, and Chervany
(1999) explored the beliefs associated with the adoption of technology over
time; and Agarwal and Karahanna (2000) have evaluated the relationship be-
tween the variable of cognitive absorption on the one hand and the variable of
beliefs about technology on the other. Finally, Bhattacherjee and Premkumar
(2004) attempted to construct a theoretical model to explain changes in
information technology usage.
In the following table, the articles are summarized with respect to inclusion
criteria and their research methods.
Table 3. Overview of literature found in IS

User experience, satisfaction, and continual usage intention of IT
Inclusion rationale: The research model uses the concept of cognitive
absorption (CA) to conceptualize the optimal holistic experience that users
feel when using IT.
Context of research: The purpose of the paper is to develop and test a research
model that investigates the effects of user experience with information
technology (IT) on user satisfaction with and continual usage intention of the
technology.
Research method: An online survey was conducted to test the model and its
associated hypotheses.

IT service climate, antecedents and IT service quality outcomes: Some initial
evidence
Inclusion rationale: Exploring the relationship between IT service climate and
quality outcomes.
Research method: This study empirically establishes IT service climate as a
predictor of IT service quality using survey data from both IT units and their
clients.

Bridging the work/social divide: the emotional response to organizational
social networking sites
Inclusion rationale: Explored the effects that social media use in the
workplace can have on employee outcomes.
Research method: On the basis of a case study informed by boundary theory and
the theory of positive emotions, the research describes the SNS (Social
Networking Sites), its uses, and how it impacted both the employees and the
organization.

Key information technology and management issues 2011-2012: an international
study
Inclusion rationale: Evaluating the perceptions of IT from different
geographical areas.
Context of research: By comparing and contrasting IT trends from different
geographies, this paper presents important local and international factors
(e.g., management concerns, influential technologies, budgets/spending,
organizational considerations) necessary to prepare IT leaders for the
challenges that await them.
Research method: This paper presents the major findings based on survey
responses from 620 respondents (275 US, 100 European, 59 Asian, and 186 Latin)
in mid-2011.

Conversation as a source of satisfaction and continuance in a
question-and-answer site
Inclusion rationale: Investigated the effects that "conversation" in a
question-and-answer site can have on user satisfaction.
Context of research: This study examines the role of free comments given in a
commercial information service through the lens of the expectation-confirmation
theory and continuance.
Research method: Data from a question-and-answer web site are analyzed by
structural equation modeling to test the theoretical model whereby customer
satisfaction is key to continuance and is predicted largely by social
interaction that takes place on the site.

Using Accountability to Reduce Access Policy Violations in Information Systems
Inclusion rationale: Delineates the relationship between user accountability
and access policy violations.
Context of research: This paper introduces accountability theory to the IS
field, and by doing so, the authors ascertain four system mechanisms that can
heighten an individual's perception of accountability: identifiability,
awareness of logging, awareness of audit, and electronic presence.
Research method: The authors introduce the factorial survey to the IS field, a
scenario-based method with compelling advantages for the study of information
security policy (ISP) violations and computer abuse generally.

A multidimensional commitment model of volitional systems adoption and usage
behavior
Inclusion rationale: User commitment plays an enormous role in positive
engagement with voluntary information systems.
Context of research: Affective commitment, that is, internalization and
identification based upon personal norms, exhibits a sustained positive
influence on usage behavior.
Research method: To validate a proposed research model, cross-sectional,
between-subjects, and within-subjects field data were collected from 714 users
at the time of initial adoption and after six months of extended use.

Buyers' choice of online search strategy and its managerial implications
Inclusion rationale: Attempted to delineate the search strategies of online
buyers.
Context of research: An understanding of buyers' choice of online search
strategies can help an online seller to estimate its expected probability of
making an online sale, optimize its online pricing, and improve its online
promotional and advertising activities.

User Satisfaction with Information Technology Service Delivery: A Social
Capital Perspective
Inclusion rationale: Advances the theoretical understanding of user
satisfaction by re-conceptualizing IT service delivery as a bilateral,
relational process between the IT staff and users.
Context of research: Based on this reconceptualization, the authors draw on
social capital theory to examine the antecedents of user satisfaction with IT
service delivery.
Research method: A field study of 159 users in four financial companies
provides general empirical support for the hypotheses.

Progress in the IT/business relationship: a longitudinal assessment
Inclusion rationale: On the basis of data collected over a 3-year period using
a survey instrument, the paper highlights areas of perceived difficulty in the
IT/business relationship and seeks evidence of trends in the business/IT
relationship.
Context of research: This paper investigates perceptions of the IT/business
relationship held by 653 IT managers and their staff and 503 of their business
counterparts.

A theoretical integration of user satisfaction and technology acceptance
Inclusion rationale: Discusses the relationship between user satisfaction and
acceptance of technology.
Context of research: This paper develops an integrated research model that
distinguishes beliefs and attitudes about the system (i.e., object-based
beliefs and attitudes) from beliefs and attitudes about using the system (i.e.,
behavioral beliefs and attitudes) to build the theoretical logic that links the
user satisfaction and technology acceptance literature.
Research method: The model is tested using a sample of 465 users from seven
different organizations who completed a survey regarding their use of data
warehousing software.

Music Blogging, Online Sampling, and the Long Tail
Inclusion rationale: Online social media such as blogs are transforming how
consumers make consumption decisions, and the music industry is at the
forefront of this revolution.
Context of research: Identify relationships between social media use and music
consumption.
Research method: Based on data from a leading music blog aggregator, the
authors analyze the relationship between music blogging and full-track
sampling, drawing on theories of online social interaction.
systems community in a meaningful way. The extant research uses surveys in
order to track respondents' behaviors, whereas the idea presented here flips
the coin and inquires into how logging of respondents' behaviors may be
conceptualized and analyzed to improve research as such.
Previous E-Health research on User Logging
Given the empirical context, there was an interest in inquiring into previous
research on logging in e-health research. Logging is a data collection method
that provides information not only about consumer preferences, but also about
how consumers interact with online services of a certain nature. E-health
consumers represent a growing group of people interested in obtaining
information about their health and that of their loved ones; this implies that
for these services to be improved, service providers need to know what
consumers want and find ways to keep them interested in their sites.
A study conducted by Van Gemert-Pijnen (2014) shows the use of log data to
understand the uptake of content in a web-based intervention based on
acceptance and commitment therapy (ACT). The study demonstrates how log data
can be of value for improving the incorporation of content in web-based
interventions. It examined log data produced by 206 participants who went
through the first nine lessons of the web-based intervention. The log data
included logins, views of feedback messages, starts of multimedia, views of
text messages, etc. Differences in usage between lessons and between groups
were both explored with repeated measures ANOVAs (analyses of variance). The
study clearly shows that log data can be a tool to recognize the most salient
parts of the available content, or to reveal which patterns of use were more
common than others. Researchers can thus test their hypotheses about the usage
of the delivered content. Moreover, log data give the researcher the ability to
follow participants' progress in terms of how much content they went through
and which content was more prevalent than other content. On a collective level,
patterns may be recognized, and the researcher may therefore question or
investigate adherence to the content that was set up for the intervention.
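The preprocessing behind such an analysis can be sketched as follows: raw log events are aggregated into a participant-by-lesson usage table, the kind of data on which a repeated measures ANOVA of usage differences between lessons would then be run. The event format and names below are illustrative assumptions, not taken from the cited study.

```python
from collections import defaultdict

def usage_table(events):
    """Aggregate raw log events into a (participant, lesson) -> count
    table: the per-subject, per-condition usage data that a repeated
    measures ANOVA on lesson usage would analyze."""
    counts = defaultdict(int)
    for e in events:
        counts[(e["participant"], e["lesson"])] += 1
    return dict(counts)

# Toy log events (login, view text, start multimedia, ...) per
# participant and lesson.
events = [
    {"participant": "p1", "lesson": 1, "action": "view_text"},
    {"participant": "p1", "lesson": 1, "action": "start_multimedia"},
    {"participant": "p1", "lesson": 2, "action": "view_text"},
    {"participant": "p2", "lesson": 1, "action": "login"},
]
table = usage_table(events)
print(table[("p1", 1)])  # -> 2
```

The resulting table is deliberately simple; a real analysis would also handle sessions, timestamps, and missing lessons before any statistical test is applied.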
According to Sieverink, Kelders, et al. (2013), adherence to eHealth
interventions is lower than expected despite the large number of eHealth
projects. To discover how technology can influence adherence, they use log data
on the usage of an eMental health intervention as an example. Primarily,
logging involves the collection of numerous specified data, such as when
consumers log on to programs, what answers they give to questions, and when and
why they change answers. In essence,
the service provider is equipped with a lot of information that can be used
directly or otherwise to create a potent product that will increase consumer
continuity. Log-data represents a starting point to improve eHealth and make
it more persuasive to consumers (Kelders, Julia, & van Gemert-Pijnen, 2013);
hence, logging ought to be made as accurate as it can be. Indeed, technology
can be persuasive, making it possible for it to change people’s behavior; for
instance, eHealth technology could influence a consumer to be more health
conscious and support healthy living. To make this a reality, logging can be
implemented since log files present information about customers in real time;
subsequently, based on the data collected from the log files, it becomes easier
77
to personalize content to consumer preferences (Kelders, Julia, & van Gemert-
Pijnen, 2013).
Another benefit of log files is the improvement in the management of
chronic diseases that have been on the increase, especially among the older
generation. In essence, the older generation requires close monitoring since
their health deteriorates more easily than that of the younger generation; for
this reason, logging has been introduced to help people understand how these
diseases behave among aging people. Health professionals need to monitor the
conditions of the aging population, and to this end they use eHealth, since it
can provide online support for self-management, improve interaction between
professionals about illnesses, and help in the overall monitoring process.
Sieverink, Kelders, and van Gemert-Pijnen (2013) conducted a study to
investigate the navigation process of eVita, an electronic personal health
record (PHR) technology designed to be used by patients with diabetes mellitus.
In addition, the authors sought to improve the technology's efficiency and
consumers' adherence; to achieve this, they analyzed the log data for six weeks
after a renewed version of the technology was used. Indeed, the PHR has proved
to be an important component in improving the quality of chronic illness
management, since people can share and access their information with utmost
security and confidentiality. With the continued advancement of technology,
PHRs are constantly improving; today they also enable patients to manage their
own conditions, provide more information about illnesses, offer peer support,
and enable provider-to-patient communication. Evidently, as Sieverink et al.
(2014) found, PHRs employ logging systems that enable them to provide this
information; however, exposure to PHRs does not guarantee improved health
outcomes for patients with diabetes. Interestingly, it is the information
collected by such logging systems that proves valuable, especially to web
designers and health service providers.
Tian et al. (2009) point out the growing tendency of consumers to use the
Internet to search for medical information; in the same way, clinicians were
also found to use the Internet often to communicate with patients. For this
reason, they assert, it is imperative that web designers find ways to improve
the services provided on websites, in order to make them easy to use for both
consumers and clinicians. To illustrate their conclusions, Tian et al. examined
the usage of the Chronic Fatigue Syndrome (CFS) website at the Centers for
Disease Control and Prevention (CDC). They also examined the results of a
public awareness campaign for CDC CFS and the behavior of users towards these
campaigns. At the end of the research, they found that the website had high
usage, which shows that it was an integral online resource for those who wanted
to know more about professional health care and information. Again, these
findings highlight the continuing shift toward information technology in
improving health services and people's access to them.
Relevance and Implications
This section outlines how RBL has benefited from the E-Health literature.
During the literature review stage (see chapter 4), an analysis of past
literature was conducted in the areas of Human-Computer Interaction (HCI),
E-Health, and Information Systems (IS). While this study focuses mostly on IS,
research was also conducted in the fields of E-Health and HCI, since these two
fields are equally relevant for this research.
Moreover, this research is based on the U-CARE project; hence, it is vital to
comprehend the significance of evaluation techniques for respondent behavior
within the field of E-Health. Generally, it can be concluded from the reviewed
literature that there is high value in understanding respondent behavior in the
field of E-Health. Based on the reviewed data, it has been shown that the use
of E-Health can help practitioners to determine the uptake of online contents
by the respondents. Such data may include sickness conditions, the behavior
of respondents while using the internet such as the time the consumers log into
the system and when they answer questions, reaction to online content, and
disease trends, etc. This can be further used to personalize data depending on
the preferences of the consumers. Besides, the literature has shown that it is
possible to manage chronic diseases using e-platforms. On this note, there is
a need for the consumer to use the internet to look for health information.
Based on these studies, it is therefore possible for researchers to use
independent variables (such as consumer data obtained through e-health) to
influence dependent variables (outcomes such as the management of chronic
diseases). Technically, the concept of using RBL to improve surveys does not
vary much from the concepts presented in the literature, such as monitoring
chronic diseases in the elderly or the uptake of online content.
Based on these existing works, the researchers can work toward developing
a system for improving surveys. For instance, this can be closely linked to the
concepts presented by Sieverink et al. (2013) about adherence to e-Health (see
section 4.4). However, it has been shown that, amongst the reviewed literature,
none addresses the concept of using respondent behavior to improve surveys in
the field of e-health. Most past researchers, like the few selected for this
study, have focused only on how e-health can be used to achieve the results
presented in section 4.4; none has addressed its use in surveys. This makes
this research not only unique but also more relevant, as the topic has not been
explored by past researchers in the field of e-health.
The current research uses the same concepts as the existing research but is more inclined toward information systems and surveys. It uses surveys to monitor the behavior of respondents in the field of eHealth (like past researchers) but instead uses the data obtained to determine how to improve surveys. As such, the research relies on eHealth insofar as using the internet and tracking health consumer data is concerned, but adds value by using this data to improve surveys. On this note, the value and relevance of the current research are two-fold: it adds to the existing eHealth literature on respondent logging, and it adds value to the field of IS and surveys through the use of respondent behavior logging to improve questionnaires.
A Framework for Respondent Behavior
Logging
Figure 14: Dynamic Model of Respondent Behavior
Seven action types are outlined in Figure 14, all of which can be tracked in a web context. The action types ‘open questionnaire’, ‘close to continue later’ and ‘submit questionnaire’ need no further explanation. The ‘focus on question’ action type means that there is some indication that a question is currently in the focus of the user, e.g. by hovering over a question area with the mouse pointer or setting focus on a text field by clicking it. The ‘answer unanswered question’ action occurs when a user answers a question that has not previously been answered, differentiating it from the ‘revise answer to question’ action type. It is also possible to track when users switch to another page in the questionnaire. The dynamic model, despite its simplicity, is an important concept for better understanding what types of user actions we may trace in the context of online questionnaires. A main idea of tracking these actions is that a rich account of the actions taken serves to indicate how the respondent interprets various situations when filling in a questionnaire. The dynamic RBL model was first introduced by Sjöström et al. (2012).
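As an illustration, the seven action types of the dynamic model could be represented as follows. This is a minimal Python sketch; the identifier names are ours and do not reflect the actual U-CARE implementation.

```python
from enum import Enum

class ActionType(Enum):
    """The seven respondent action types in the dynamic RBL model."""
    OPEN_QUESTIONNAIRE = "open questionnaire"
    FOCUS_ON_QUESTION = "focus on question"
    ANSWER_UNANSWERED_QUESTION = "answer unanswered question"
    REVISE_ANSWER_TO_QUESTION = "revise answer to question"
    SWITCH_PAGE = "switch page"
    CLOSE_TO_CONTINUE_LATER = "close to continue later"
    SUBMIT_QUESTIONNAIRE = "submit questionnaire"

def is_terminal(action: ActionType) -> bool:
    """A respondent session ends by submitting, or by closing to continue later."""
    return action in (ActionType.SUBMIT_QUESTIONNAIRE,
                      ActionType.CLOSE_TO_CONTINUE_LATER)
```

The enumeration also makes explicit that only two of the action types end a session, which matters when deciding whether a logged session is complete.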
filled in a questionnaire. In addition, there is a need to keep track of the re-
spondent’s (user’s) identity, to allow us to make queries related to a particular
individual.
‘User Action’ defines the actual log record. It shows how a user, at a certain point in time, performs an action of some type in relation to a specific questionnaire. The action may include an answer to a question, which may be free text, but it may also point to a selected answer option for a question. Keeping track of context is imperative, since one questionnaire may be used at different times. This is the case in many randomized controlled trials, where the same questionnaire is typically used repeatedly to measure the same construct (e.g. depression or anxiety) at different times in the study. By including the action type, we facilitate queries of user behavior grouped by the type of action performed. The static RBL model was originally introduced by Sjöström et al. (2012).
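A log record along these lines could be sketched as a simple data structure. The field names below are illustrative assumptions, not the actual U-CARE database schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional
from collections import Counter

@dataclass
class UserAction:
    """One log record: a user performs an action of some type on a
    questionnaire, at a point in time, in a given study context."""
    user_id: str            # respondent identity, enabling per-individual queries
    questionnaire_id: str   # which questionnaire the action relates to
    context: str            # e.g. measurement occasion in a repeated RCT
    action_type: str        # e.g. "answer unanswered question"
    timestamp: datetime
    question_id: Optional[str] = None    # set for question-level actions
    answer_text: Optional[str] = None    # free-text answer, if any
    answer_option: Optional[str] = None  # selected answer option, if any

def actions_by_type(log: list) -> Counter:
    """Group a user's log records by action type, as the static model enables."""
    return Counter(a.action_type for a in log)
```

Including both identity and context in every record is what makes the queries described above possible, e.g. counting how often a particular respondent revised answers at a particular measurement occasion.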
RBL measurement constructs
As a contrast to visualization techniques – i.e. supporting people to interpret complex data – we also propose that RBL data may be the basis for statistical analyses to reveal flaws in questionnaire design. We define measures to analyze respondent behavior at the question level (Table 5), with corresponding measures at the questionnaire level (Table 6).
Table 5. RBL question level measures

Change Frequency (CF): The number of times the answer to a question is changed before the questionnaire is submitted.
Question Response Time (QRT): The time taken to complete a single question.
PingPong Frequency In (PFI): The number of jumps to a question in focus from other questions.
PingPong Frequency Out (PFO): The number of jumps from a question in focus to other questions.
PingPong Movement Factor (MF): MF = PFI + PFO.
There are fewer RBL measures at the questionnaire level, because the PFI, PFO and MF measures are logically conflated into a single total measure, the Overall Movement Factor (OMF), at the questionnaire level.
Table 6. RBL questionnaire level measures

Overall Change Frequency (OCF): The number of times an answer has been changed in the questionnaire as a whole before it is submitted.
Overall Response Time (ORT): The response time for the whole questionnaire.
Overall PingPong Movement Factor (OMF): The number of jumps between questions in the questionnaire, excluding the number of necessary jumps to move through the questionnaire sequentially.
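To make the question-level measures concrete, here is a sketch of how CF, PFI, PFO and MF could be computed from an ordered event log. The event representation (pairs of action-type string and question id) is an illustrative assumption, not the actual RBL log format:

```python
from collections import defaultdict

def question_level_measures(events):
    """Compute CF, PFI, PFO and MF per question from an ordered event log.
    `events` is a sequence of (action_type, question_id) pairs."""
    cf = defaultdict(int)    # Change Frequency: answer revisions per question
    pfi = defaultdict(int)   # PingPong Frequency In: jumps into a question
    pfo = defaultdict(int)   # PingPong Frequency Out: jumps out of a question
    previous_focus = None
    for action, question in events:
        if action == "revise answer to question":
            cf[question] += 1
        elif action == "focus on question":
            if previous_focus is not None and previous_focus != question:
                pfo[previous_focus] += 1  # jump away from the old question
                pfi[question] += 1        # jump into the new question
            previous_focus = question
    questions = set(cf) | set(pfi) | set(pfo)
    return {q: {"CF": cf[q], "PFI": pfi[q], "PFO": pfo[q],
                "MF": pfi[q] + pfo[q]} for q in questions}
```

Note that this sketch counts every change of focus as a jump; the questionnaire-level OMF additionally subtracts the necessary jumps for moving through the questionnaire sequentially.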
interpreting the situation; it proceeds to the different ways in which the par-
ticipant can answer a question or interact with the question and then returns to
an interpretation of the situation; and it ends with the participant either sub-
mitting the questionnaire or closing it in order to resume later. This tracking
mechanism thus allows researchers to improve questionnaires while expend-
ing a minimum of additional effort to gather the necessary information for
doing so.
Evaluation
questionnaire evaluation.
Rationale
The goal of the focus group evaluation was to assess the reactions of active researchers in U-CARE, and to what extent they gained new insights from interpreting RBL visualizations. In doing so, we received stakeholder feedback about the value of RBL visualizations for researchers. Part of the evaluation was also to implement software that produced the visualizations, i.e., a proof-of-concept [22] or expository instantiation [10] demonstrating the feasibility of applying the RBL visualizations in practice.
Evaluation Overview
We presented the visualizations to the focus group, and the participants were encouraged to reflect on the utility of these representations to (i) identify design flaws and suggestions for improvements of questionnaires and (ii) discuss whether they would interpret the collected data differently given the way they made sense of the RBL visualization. We thus based the evaluation on qualitative data collected in the focus group session, focusing on whether and how RBL visualizations were considered meaningful and valuable to the focus group participants.
The focus group included six participants who were vital stakeholders with a firsthand interest in the outcome of this research. They were directly involved in designing online studies and had been using online surveys throughout the U-CARE research process since 2011. They were also representatives of the three original clinical trials in U-CARE.
The focus group participants were asked to reflect on three basic scenarios. In the first, respondents’ answers were shown collectively in relation to each questionnaire. The second focused on an individual’s responses in a questionnaire in relation to other respondents. The third showed comparisons of the same questionnaire filled in by a respondent at different times.
Collectively showing answers to one questionnaire is an approach that encourages researchers to draw on data from ongoing research. It enables the collection and analysis of data via questionnaires in a way reminiscent of the critical incident technique (CIT), in that it helps researchers identify various human experiences (Sjöström et al. 2013, p. 512). The use of questionnaires thus constructs a research scenario that triggers past experiences and situations. As a result, it helps researchers using the U-CARE software system to overcome concerns about problems that prevent participants from exploring the key scenarios, and it offers a practice that is in line with the feasibility of U-CARE research, grounded in practical scenarios experienced by the respondents. Individual responses, in turn, are a complement to the collective view in that they provide meaningful visualizations of individual respondents. Finally, a questionnaire answered by one or more respondents at different times provides further specific details about the focus of the study. Such details help assess the evaluation process because they reveal various aspects of respondent behavior at a formative stage.
What to evaluate
The major aim of the process is to assess the efficacy of the online survey process in yielding reliable outcomes for research. In effect, the focus groups may give more specific details regarding the use of online surveys in U-CARE and help identify the reliability and functionality of RBL. Three aspects of RBL were evaluated during this process: (i) respondent activity for each question and questionnaire, (ii) the time taken by respondents for each question and questionnaire, and (iii) a comparison of the number of changes participants made between a previously completed questionnaire and the current one.
Why to evaluate
We consider the goal of the focus group evaluation of RBL to be gathering interpretations and reflections from the relevant audience. The evaluation process is at a formative stage. Thus, the “why” attribute of the research process allows us to interpret outcomes that provide a viable basis for the next design cycle of RBL. We believe that showing RBL data to the stakeholders may reveal weaknesses of RBL and thereby improve its reliability, usefulness and relevance. This way, the evaluation will also help the researcher arrive at a summative approach confirming that all the decisions of the research process are taken into account.
When to evaluate
At this stage, the RBL framework (Stage II) falls within the scope of both ex-ante and ex-post evaluation, because the RBL framework is already in use in U-CARE studies while remaining open to research needs and improvements. Ex-ante evaluation may help us evaluate the techniques (i.e., the static and dynamic models of RBL) and methods (visualization of RBL data) of RBL, while ex-post evaluation, based on RBL data already collected from U-CARE studies, may help us decide whether the research process has attained its objectives so far.
How to evaluate
The “how” of evaluation takes place between the ex-ante and ex-post processes. The evaluation approach for the focus group aims to gather interpretations and reflections from the relevant audience. We therefore included six participants in the focus group, as previously mentioned in section X, who were vital stakeholders directly involved in designing online studies. Through the focus group evaluation, the effectiveness of RBL data – especially the visualization of such data – is in focus, based on whether the visualization helps produce meaningful insights that would otherwise have been unavailable to the researchers and scholars of the focus group.
Data Collection
We presented a series of slides with visualizations of RBL data from ongoing research in the eHealth practice. The visualizations were based on how respondents answered the HADS questionnaire, an instrument used to measure depression and anxiety. In the eHealth practice, HADS is used both for screening and for subsequent follow-ups of the participants’ depression and anxiety. We made an audio recording of the entire session (approximately one hour) and took notes.
Data Analysis
We interpreted the audio recording to identify how focus group participants
made sense of the RBL visualizations, and how they ascribed meaning to
them. Below we account for the results of the analysis.
The RBL visualizations rendered some results related to the repeated use of a questionnaire (HADS) in Randomized Controlled Trials (RCTs). The charts and cases were selected based on the three overarching scenarios previously mentioned in section X. Extreme cases were excluded due to invalid or incomplete user actions, for example when a user closed the browser in the middle of answering a survey, or when a browser session expired due to a long response time.
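Such filtering of invalid sessions can be sketched as follows, assuming a session is represented as an ordered list of action-type strings (an illustrative structure, not the actual U-CARE log format):

```python
def complete_sessions(sessions):
    """Keep only sessions that end in a valid terminal action.
    `sessions` maps a session id to an ordered list of action-type strings."""
    valid_endings = {"submit questionnaire", "close to continue later"}
    return {sid: actions for sid, actions in sessions.items()
            if actions and actions[-1] in valid_endings}
```

A session whose last logged action is, say, ‘focus on question’ indicates a browser that was closed mid-survey or an expired session, and is dropped from the analysis.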
Figure 18 shows the number of activities the user Bengt (an anonymized user name) took to answer a questionnaire (HADS – the Hospital Anxiety and Depression Scale). There are 15 questions in HADS. The black line shows the mean number of activities to answer this questionnaire (for all users in the context). This is one example of how the ability to reinterpret user activities reveals knowledge about how a questionnaire has generally been used. In contrast to this case, there was another user who took quite a different path in answering the same questionnaire. So the question is: how do we make sense of it? Does the different path entail that the user was very much enlightened or disturbed by the questions? Or did the user have difficulties in understanding some of the questions in the questionnaire?
One interpretation from a focus group member was that the HADS questionnaire is generally used for screening participants into different patient groups, so it is to be expected that we will often discover users who take a different approach to this questionnaire. In essence, it suggests that the users who take an unusual path are often the ones who need treatment.
A combination of activity charts was then shown in which the same questionnaire was plotted but used on two different occasions. The results differed considerably between users attending HADS for the first time and for the second time. The designer of that questionnaire said: “I am not sure this brings any ambiguity because users, when attending HADS for the second time, have already learned about the questions ... but what should be interesting is the users who consistently had the same results from both HADS”. That is, a user who scored the same is important to study in this case, since it raises concerns that the user may not be improving or reacting properly to treatment.
Figure 20 reveals some unanticipated jumps between questions by the respondents on both sides of the gray line. We compare Figure 20 with the following Figure 21, where the same questionnaire was answered a second time by the respondents later in their study.

Figure 21 shows quite a different scenario compared to Figure 20. There are significantly fewer unusual jumps, and almost every respondent followed the same tendency in filling out the questionnaire. The dominant view in the focus group was that when the respondents took the HADS for the first time, they clearly produced a lot of noise outside the desired path, while the second time, respondents followed the usual path because they had already learned about the questionnaire.
Lessons Learned
In summary, the focus group evaluation shows that the visualizations support researchers in reflecting in new ways about the design of their questionnaires. We have shown examples of how the focus group members (i) reflected on new ways of understanding the use of questionnaires at different observation points and (ii) how the visualization of RBL data helped researchers identify a problem with the HADS questionnaire. Hence, the evaluation has shown proof-of-value (Nunamaker and Briggs, 2011) of the RBL visualization techniques in an actual research setting.
Experimental evaluation
Please find the full analysis in appendix A.
Rationale
The goal of the experimental evaluation is to evaluate if RBL data in combi-
nation with statistical analyses can be used to detect ‘flaws’ in questionnaire
design and to assess if the RBL measurement constructs support questionnaire
evaluation.
Evaluation Overview
In order to evaluate the RBL concept, we conducted an experiment using the following setup. A questionnaire is designed in three versions. Q2 and Q3 can differ quite a lot from each other and from Q1, as long as the built-in design flaws/issues are not ’obvious’. A double-blind survey will be conducted: neither the respondents nor the researchers will be aware of the questionnaire in use during data collection.
We based the experimental design on an original questionnaire designed using guidelines from the Harvard University Program on Survey Research (Q1). In addition to Q1, we designed two derivative questionnaires with planted flaws: Q2 has embedded flaws on the question level, while Q3 has embedded structural flaws. To have a useful baseline questionnaire, we proceeded with the well-established USE questionnaire [18]. The questionnaire concerns online learning platforms, making it relevant for the intended respondents, who were students at the various campuses of Uppsala University.
The dependent variable is whether or not there is an error in a question. The independent variables are the various quantified measures that RBL enables (see Table 5 and Table 6).
What to evaluate
The following hypotheses will be tested in this study.
• Hypothesis 1: This hypothesis tests whether response time and change frequency are significant discriminating variables across the three questionnaires.
  o H0: Mean response time and change frequency are the same across the three questionnaires.
  o H1: Mean response time and change frequency differ across the three questionnaires.
• Hypothesis 2: Questionnaire one has open-ended questions placed at the end, whereas questionnaire three has open-ended questions placed in the middle of the questionnaire. This hypothesis tests whether the placement of open-ended questions in a questionnaire has an impact on the response time and change frequency of questions.
  o H0: Mean response time and change frequency for open-ended questions are the same between questionnaires one and three.
  o H1: Mean response time and change frequency for open-ended questions differ between questionnaires one and three.
• Hypothesis 3: Questionnaire one has proper context available for all questions, whereas in questionnaire three context is missing for some questions. This hypothesis tests whether the availability of context for questions in a questionnaire has an impact on the response time and change frequency of questions.
  o H0: Mean response time and change frequency for the questions with/without context are the same between questionnaires one and three.
  o H1: Mean response time and change frequency for the questions with/without context differ between questionnaires one and three.
• Hypothesis 4: Questionnaire one has seven options available for all multi-choice questions, whereas questionnaire two has six. This hypothesis tests whether the difference in the number of options for questions in a questionnaire has an impact on the response time and change frequency of questions.
  o H0: Mean response time and change frequency for all multi-choice questions are the same between questionnaires one and two.
  o H1: Mean response time and change frequency for all multi-choice questions differ between questionnaires one and two.
Why to evaluate
The experimental evaluation serves to estimate the probability that an error exists, based on measures derived from the RBL log data. The experiment takes a survey approach by conducting a survey experiment. The experimental sample is intended to generalize to the bigger picture of the goal of the research, which is to offer an evaluation technique for online surveys.
When to evaluate
At this stage of the research, the idea is to conduct this experiment for the sole purpose of collecting as many insights as possible. It is hard to pre-determine the number of respondents needed, since there is no previous data on variability in the sample. At a later stage, we will determine variability based on the control group data. At this point, the aim is to collect as much data as possible (”more is better”), with a preliminary aim of 40 respondents per questionnaire, i.e. N=120. Variability is typically determined in a pre-study, and this experiment may be considered such a pre-study – the lessons learned here (including the variability issue) are all part of developing and evaluating the RBL concept. Data analysis conducted in upcoming research will benefit from the lessons learned about variability in this experiment.
How to evaluate
In order to analyze the data and draw inferences from it, descriptive statistics will be calculated separately for the metrics to be tested in each hypothesis. A one-way MANOVA will be performed using SPSS to test the hypotheses, and the results will be analyzed to draw conclusions. All the assumptions of one-way MANOVA will also be tested using SPSS.
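The thesis performs the MANOVA in SPSS; as a minimal illustration of the kind of univariate comparison underlying each hypothesis, a Welch two-sample t statistic for, say, QRT between two questionnaire versions can be computed with the Python standard library. The numbers below are illustrative, not study data:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom.
    A univariate sketch; the study itself tests response time and change
    frequency jointly with a one-way MANOVA in SPSS."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# QRT (seconds) for an open-ended question placed at the end (Q1)
# versus in the middle (Q3) -- made-up illustrative numbers
q1_times = [31.0, 28.5, 35.2, 40.1, 29.9]
q3_times = [48.3, 52.0, 45.7, 60.2, 49.9]
t, df = welch_t(q1_times, q3_times)
```

A strongly negative t here would indicate shorter response times for the end-placed open-ended question, in line with Hypothesis 2.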
Data Collection
Students (N=120) were approached and asked to fill in the questionnaire using
tablets provided by the researcher. The process was double-blind (neither the
researchers nor the respondents knew which version of the questionnaire they
were filling in).
Table 5 and Table 6 show the quantified measures that RBL enables at this
time. Additional measures may be discovered and tested at a later stage, only
limited by the data and metadata in the RBL database (as shown in sections 0
– 0). The overall design of the experiment is based on the following:
Data Analysis
The detailed statistical analysis can be found in the appendix. Below we highlight the findings from the analysis.
Lessons Learned
To this point, we have demonstrated that the conceptualization of RBL meas-
urement constructs is implementable and feasible to use in statistical analysis.
From a DSR evaluation perspective, we have thus provided a proof-of-concept
[23], i.e., that the measures are possible to calculate and that they can provide
a ground for statistical analysis. We have also demonstrated that the collection
of RBL data, and subsequent analysis using RBL measurement constructs, fa-
cilitates ex-post analyses of questionnaire design that would not be possible
without respondent behavior logging. Note that we have not investigated the
in-depth meaning of the experimental results; e.g., attempted to correlate the
characteristics of the free-text answers with the CF or time taken measures.
Instead, we have shown that the RBL data opens up new avenues of analyses,
and points us in new directions to understand our questionnaires as well as the
data collected in online surveys.
Discussion
In this chapter, we draw on the literature, experiences from the design process, and the evaluations to address our research questions.
opposed to the paper platforms synonymous with paper-based surveys. The web platform also enables respondents to make changes to their answers. However, both tracking techniques come with challenges. Client-side tracking has the benefit of capturing a large amount of detail by logging every move of the user, at the potential risk of data loss and a slower browser due to the script. Server-side tracking, on the other hand, has a smaller effect on browser speed and less risk of data loss, but cannot capture as much detail. As noted, the benefits of both techniques override their shortfalls, making the RBL software applicable as far as time tracking is concerned.
Secondly, the innovative features of RBL allow for self-evaluation of the survey, as opposed to existing paper-based surveys. For instance, the complexity of question structures can be assessed by how long a user takes to respond to the questions, as indicated by either the time chart or the answer matrix. Using RBL, surveyors are therefore able to ascertain whether there is a need to improve the survey and how this can be done. As stated earlier, this is a new feature in the U-CARE survey that can be vital for continued quality improvement in the survey field. Through this system, surveyors are able to acquire a large amount of data of great significance to designers and researchers.
Additionally, the available literature, as noted earlier, shows that users across the globe use very different browsing software. Specifically, some users of the U-CARE research platform on which the RBL software is based still use outdated software like Internet Explorer 6. Other common browsers used by the likely users of the RBL system are Safari, Google Chrome, and Firefox. To meet these varying needs, RBL has been designed to be compatible with all these browsers, catering to all users of the software. In addition, RBL has been designed to be light enough to run on mobile platforms. Thus, the mobile-based platforms support server-side tracking together with certain client-side tracking. For instance, as has been noted, events on the client side are tracked only when users answer questions; others, such as locations, questionnaires, and the tracking of questions, are handled on the server side. Further, in the design of RBL, jQuery version X has been used to aid tracking on the client side. jQuery, a small, fast, and feature-rich JavaScript library, is easy to use and works across a number of browsers, increasing the software’s versatility.
RQ2: How can we conceptualize respondent behavior
and logging of such behavior in a meaningful manner in
the context of online surveys?
From the available evidence, there is widespread use of online tools in conducting online surveys. This has been motivated by the advent of Web 2.0, which has created new opportunities in how research is carried out. These include services from vendors like Google, SurveyGizmo, and SurveyMonkey that have been used in other fields to design questionnaires and to collect and analyze data. Further, software such as SUML and queXML has been used in the management of questionnaires, automating the field of online surveys. Despite the vast applications of this software in fields other than information systems, little emphasis has been placed on the design and evaluation of questionnaires.
In the field of HCI, for instance, implicit interactions and usability have been used as parameters for testing the interactions between humans and computers. The efficiency and proficiency of a user have been tested by letting the user fill in a form and documenting the time taken to complete the indicated fields. Using this method (in HCI), however, there is a need to ensure that the time tracking does not affect the experience of the user; it should thus be limited to the browser and server-side settings. To realize this, JavaScript code has recently been used to modify pages through an HTML proxy, where the script only consults the servers during loading or data saving. This technique has, however, been limited to website usability tests, with possible future applications in developing web applications, profiling users, and implicit website interactions. Additionally, mouse tracking has been used in the field of HCI to record user behavior and determine the usability of a website by keeping track of the type of data collected by the users of the website.
Apart from HCI, user logging has also been used in the field of medical services. For mental health interventionists, for example, data such as when consumers logged into the program and why and when they changed answers can be used to improve health interventions and make them more persuasive, as found by Kelders and van Gemert-Pijnen (2013). In addition, user logging has been used in the management of chronic diseases, especially for elderly people. Since these diseases need keen monitoring, log files have been used in this field to keep track of the progression of these diseases in the elderly. For health professionals, eHealth provides online support, self-management, and improved interaction between patients and medical practitioners.
However, from the available evidence, little has been done in the field of IS concerning the application of respondent behavior tracking to improve the design of surveys (see details in the literature review, p. 48). The focus of all the available literature has been the achievement of a concrete outcome that is valuable in the real world, while the concept of how tracking of respondent behavior can be used has been largely ignored in IS.
Due to this innovation gap in the field of IS, and given the recent developments in environment logging, we came up with the idea of RBL. As noted earlier, the environment log is capable of collecting details such as screen resolutions, browsers, operating systems, and JavaScript versions that can be used to determine the survey version for maximum usability. Hence, RBL uses these recent techniques to design a survey mechanism capable of tracking the behavior of respondents, which is ultimately used to improve the questionnaires. Using this technique, we are able to create an online survey system that is self-evaluating: from the collected data, the RBL system itself can generate data on how the survey can be improved. Moreover, since this process needs to be interactive in order to create two-way communication, we apply visual aids in the development of RBL. Techniques such as the activity chart, time chart, and answer matrix have been used in this project to give the system a social existence and to make it easier to understand.
To further conceptualize the idea of self-evaluation, RBL uses both question-level and questionnaire-level measures to evaluate whether there are any problems with the manner in which the questions or the questionnaires are designed. At the question level, for instance, change frequency, question response time, PingPong frequency in and out, and the PingPong Movement Factor have been used. At the questionnaire level, overall change frequency, overall response time, and the overall PingPong Movement Factor have been used in RBL and its evaluation techniques (see chapter 0).
Based on the Framework for Evaluation in Design Science Research (FEDS), an evaluation approach suggested by Venable et al. (2016), we have carried out a focus group evaluation and an experimental evaluation of the RBL system, in addition to a comparative analysis. During the focus group evaluation, it was possible to ascertain how long a user takes to complete a HADS questionnaire, as demonstrated by the case of Bengt (an anonymous user). This gave more insight into how the questionnaire in any particular case has been designed. Apart from the approach taken by Bengt, the data analysis revealed that some users took a different approach to the questionnaire. According to the interpretation by the focus group members, this data from RBL can be used to determine those who need treatment. Similarly, the time taken by a user to fill in the questionnaire, as demonstrated by the case of Monika in the RBL system, can be used to compare the situation of a patient when the same test is taken by the same user at different times. Further, it has been demonstrated that this platform can illustrate how users jumped from one question to another, whether in a usual or unusual manner, and the paths taken by users to respond to the questions. It was also possible to ascertain the improvements when users used the RBL system on subsequent occasions.
An experimental evaluation of RBL was also used to test the impact of
open-ended questions on questionnaires. Change frequencies and response times,
for both single and multiple questions, have been used in RBL to assess the
design of questionnaires and individual questions alike, as illustrated in
chapter 0. For example, open-ended questions placed at the end of a
questionnaire required less response time than similar questions placed in the
middle of a questionnaire. In summary, and as demonstrated earlier, the RBL
system is implementable and the proposed measures are computable. Hence, with
this system it becomes possible to automate the evaluation of online surveys
in IS.
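As a simple illustration of the kind of analysis this enables, the sketch below compares mean response times for an open-ended question in two placements. The numbers are invented for illustration only; in the actual evaluation a proper statistical test would of course be applied before drawing conclusions.

```python
from statistics import mean

# Invented example data: response times (seconds) for the same open-ended
# question, placed mid-questionnaire vs. at the end of the questionnaire.
middle_times = [41.2, 55.0, 48.7, 60.3, 52.1]
end_times = [22.5, 30.1, 25.8, 28.4, 24.0]

# Difference in mean response time between the two placements.
diff = mean(middle_times) - mean(end_times)
print(f"middle mean: {mean(middle_times):.2f}s, "
      f"end mean: {mean(end_times):.2f}s, difference: {diff:.2f}s")
```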
Functionality
Automating the evaluation of questionnaires during the research process has
been one of the great challenges for survey research in Information Systems.
In particular, it has been difficult to select the participants of an online
survey in a way that yields the most helpful and conclusive results. Notably,
the results of any study rest on its sample, since it is from the sample that
data are collected, analyzed, and documented. However, most existing online
survey systems cannot discriminate in selecting the sample population based on
the responses given at a particular stage of the questionnaire. The existing
systems have therefore not been reliable as far as sample selection at
particular stages is concerned. This is where RBL comes in. Using the system,
the responses of users at any stage of a questionnaire can be obtained and
reliable conclusions drawn. This makes it easy to determine whether the
structure of both the questions and the questionnaires matches the desired
objectives. Such mid-stage analysis is possible neither with paper-based
methods nor with the existing software used in online surveys, making RBL
both novel and applicable to online surveying.
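A minimal sketch of such mid-stage sample selection, assuming a simple rule-based screening step (the rules, question identifiers, and answer values below are hypothetical and not taken from the RBL implementation):

```python
def continue_survey(partial_responses, rules):
    """Decide mid-questionnaire whether a respondent proceeds to the next
    stage, based on the answers collected so far.

    partial_responses: dict question_id -> answer collected so far.
    rules: dict question_id -> predicate the answer must satisfy.
    """
    return all(
        q in partial_responses and accept(partial_responses[q])
        for q, accept in rules.items()
    )

# Hypothetical screening rules for an eligibility stage.
rules = {
    "uses_platform": lambda a: a != "not at all",  # screen out non-users
    "age": lambda a: a >= 18,                      # adults only
}

print(continue_survey({"uses_platform": "a lot", "age": 25}, rules))       # True
print(continue_survey({"uses_platform": "not at all", "age": 25}, rules))  # False
```

The point of the sketch is only that such decisions can be made while the respondent is still in the questionnaire, which paper-based instruments cannot do.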
Performance & Applicability
Secondly, when designing software, close attention must be paid to the
applicability of the developed system: any system should ensure that the final
result of the process can be put to practical use. In addition, most systems
are created with a given niche in mind, and designers therefore tend to tailor
their products to the needs of that market. While designing RBL, we took both
aspects into account. First, as demonstrated above (in this discussion chapter)
and in the evaluation of the system (chapter 6), RBL allows the researcher to
monitor the process through which respondents answer questions, similar to the
mouse tracking techniques previously used in Human-Computer Interaction (HCI).
Based on the users' behavior, the researcher can ascertain whether an
appropriate format for the questions and the questionnaires was applied in the
research design. Through demonstration, we have also shown that it is possible
to collect user data such as the time taken to respond to a question, how
frequently a user jumps from one question to another or from one questionnaire
to another, and how long a user takes to respond to a questionnaire, as
demonstrated in the evaluation of the software.
Moreover, the use of RBL has shown that it is possible to collect data on the
effect of question type on response time. As illustrated, open-ended questions
appearing in the middle of a questionnaire demand more response time than the
same questions at the end of the questionnaire. These findings are unique to
RBL, since they provide an opportunity to evaluate the online survey itself.
Using RBL, one can thus ascertain whether a design criterion used in an online
survey best serves the desired objectives or needs improvement.
Hevner et al. (2004) state that applications of technology should lead to
"technology-based solutions to important and relevant business problems"
(p. 83). As mentioned, there has been a knowledge gap in the automation of
online surveys based on the available data. Further, Hevner et al. (2004)
assert that the quality, utility, and efficacy of a design must be demonstrated
through well-executed evaluation methods; these evaluation criteria are
reflected in the Framework for Evaluation in Design Science Research. It has
also been argued in the literature that design science research must make a
clear and verifiable contribution to its area of knowledge, and that the search
for an effective artifact requires that the design use available means to solve
a particular problem. Finally, Hevner et al. (2004) argue that research should
be presented both to technology-oriented and to management-oriented audiences
(p. 83).
In response to the literature, and as suggested mainly by Hevner et al. (2004),
we designed RBL such that its efficiency and effectiveness can be evaluated
through focus group and experimental evaluations and comparative analysis.
Notably, the idea of RBL is new: to our knowledge, no other software collects
real-time data and supports self-evaluation of an online survey based on the
data acquired from respondents. To ensure a rigorous process, as suggested in
the existing literature, we organized workshops and collected the views of
various stakeholders. RBL also forms a feedback loop through which flaws in an
instrument can be detected and addressed, making the artifact more effective.
Finally, the use of visualization techniques, the implementation in the RCT
context, the conceptual models, and the user guidelines make RBL relevant for
both management-oriented and technology-savvy users. In summary, the design and
implementation of RBL have followed most, if not all, of the techniques
suggested in the literature.
Uniqueness
For a long time, there was no agreement among researchers on what constitutes a
contribution in DSR. Gregor & Hevner (2013), however, argue that there are
three levels at which a DSR project can contribute an artifact (pp. 341-342),
as mentioned earlier. As established, the contribution of RBL to the field of
online surveying has not been formalized and documented before. Similar
attempts have been limited to the field of human-computer interaction (HCI)
and to medical services. No self-evaluating DSR product of this kind has been
established before, which makes this project distinctive. Moreover, artifacts
that cut across more than a single discipline, as RBL does, have been rare.
Hence, RBL is novel not only in its application to surveys but also in the
field in which it is developed and applied (IS).
Conclusions and Research Outlook
In this thesis, we have argued that respondent behavior logging allows for
questionnaire evaluation and data interpretation in a novel way. Although
various forms of user behavior conceptualization have taken place, and user
logging has been used for other purposes, our literature review shows that it has
not yet been studied in the context of online surveys. We have proposed a
tentative RBL concept on the basis of (i) existing artifacts, (ii) literature and
(iii) a design process in the context of eHealth and eHealth research. We pro-
vided a proof-of-concept by implementing the RBL concept into software.
The software, used in live eHealth trials, collects a large amount of data on
respondent behavior while respondents fill in questionnaires. So far,
we have conducted two separate evaluations to demonstrate the qualities of
RBL. First, we have conducted a focus group in which RBL visualizations
were presented to stakeholders in the eHealth research practice. In doing this,
we showed proof-of-concept for the conceptualization of RBL and for the vis-
ualization techniques as outlined in sections 0 – 0. In addition, the focus group
evaluation demonstrated proof-of-value for RBL visualization. Second, we
have conducted an experimental evaluation including 120 participants in
which the RBL measures were used to test hypotheses about questionnaire
flaws. The evaluation serves as a proof-of-concept that these measures can
indeed be calculated and integrated into analysis. The results signal a potential
value of RBL as a means to identify questionnaire design issues, but also as a
means to make sense of data collected in the trials. Future evaluation will (i)
expand the experimental evaluation using binomial logistical regression and
(ii) compare large amounts of RBL data from the U-CARE context and con-
trast it with known issues in the questionnaires in use.
We do not propose RBL as a substitute for other questionnaire design and
evaluation techniques. It does, however, provide an alternative approach to
questionnaire evaluation that may be used stand-alone or in concert with other
evaluation techniques. While the implications of RBL at this stage are spec-
ulative, we find that it has good potential to become an important strategy for
questionnaire evaluation and re-design. It is a new use of technology that may
be integrated into the process of large-scale data collection online, supporting
both the interpretation of collected data and the refinement of questionnaires.
RBL allows for evaluation to occur automatically while the user is taking
the questionnaire. If RBL is ‘activated’, virtually no further work is needed on
the part of the researchers to collect and monitor data for questionnaire eval-
uation. RBL thus has the potential advantage of being cost-effective in com-
parison with other questionnaire evaluation techniques, either to make pilot
evaluations of questionnaires, or to continually collect data in order to be able
to fine-tune questionnaires that are used over a longer period of time. Other
methods typically require additional effort in order to carry out evaluation.
RBL potentially minimizes bias in the questionnaire evaluation process. In
general, unwanted bias diminishes the quality of any research process. In other
questionnaire evaluation methods, such as cognitive interviews, there may be
a gap between the users' self-reported survey-taking behavior and their actual
survey-taking behavior. Even if the researchers were to conduct independent
quantitative analyses, there would still be a risk of selection bias regarding
which data they choose to focus on. RBL is likely to reduce such risk by
providing a one-to-one correspondence between the data produced and the actual
behaviors of the user. The value of this is considerable.
Research Outlook
Continued research will be based on ‘stage IV’ of the research process as out-
lined in section 0. The continued evaluation work needs to be planned in
detail. In addition, the literature review needs to be extended based on
feedback obtained in the mid-term seminar and on peer-review feedback from
submissions to conferences and/or journals.
Finally, the final version of this work needs to incorporate the ethical
implications of RBL. A potential problem with RBL is that it adds an additional layer
of logging to online research. Logging of respondent behavior, especially in
the eHealth context, may be considered an intrusion of privacy. We are aware
of two implications, namely (i) the use of RBL would require consent when
applied in eHealth trials, and (ii) the fact that people are aware that their be-
havior while filling in the questionnaire is being logged may lead to a changed
behavior, i.e. a type of ‘observer effect’ occurring due to RBL. There is a need
in the final manuscript to further address ethical aspects and the potential ob-
server effects related to RBL.
References
Agarwal, R., & Karahanna, E. (2000). Time flies when you're having fun: Cognitive
Absorption and Beliefs about Information Technology usage. Management Infor-
mation Systems Quarterly, 24(4), 665-694.
Alfonsson, S., Olsson, E., Linderman, S., Winnerhed, S., & Hursti, T. (2016). Is
online treatment adherence affected by presentation and therapist support? A ran-
domized controlled trial. Computers in Human Behavior, 60, 550-558.
Ander, M., Wikman, A., Ljótsson, B., Grönqvist, H., Ljungman, G., Woodford, J., ...
& von Essen, L. (2017). Guided internet-administered self-help to reduce symptoms
of anxiety and depression among adolescents and young adults diagnosed with can-
cer during adolescence (U-CARE: YoungCan): a study protocol for a feasibility
trial. BMJ open, 7(1), e013906.
Arroyo, E., Selker, T., & Wei, W. (2006, April). Usability tool for analysis of web
designs using mouse tracks. In CHI'06 extended abstracts on Human factors in com-
puting systems (pp. 484-489). ACM.
Atterer, R., Wnuk, M., & Schmidt, A. (2006, May). Knowing the user's every move:
user activity tracking for website usability evaluation and implicit interaction. In
Proceedings of the 15th international conference on World Wide Web (pp. 203-212).
ACM.
Barki, H., Titah, R., & Boffo, C. (2007). Information System use - related Activity:
An Expanded Behavioral conceptualization of Individual-Level Information System
use. Information Systems Research, 18(2), 173-192. doi:10.1287/isre.1070.0122
Bass, B. M., & Riggio, R. E. (2006). Transformational leadership (2nd ed.). Psy-
chology Press.
Bateman, P. J., Gray, P. H., & Butler, B. S. (2010). Research Note--The Impact of
Community Commitment on Participation in Online Communities. Information Sys-
tems Research, 22(4), 841-854. doi:10.1287/isre.1090.0265
Bhattacherjee, A., & Premkumar, G. (2004). Understanding changes in belief and at-
titude toward Information Technology Usage: A Theoretical Model and Longitudi-
nal Test. Management Information Systems Quarterly, 28(2), 229-254.
Bramming, P., Gorm Hansen, B., Bojesen, A., & Gylling Olesen, K. (2012).
(Im)perfect pictures: snaplogs in performativity research. Qualitative Research
in Organizations and Management: An International Journal, 7(1), 54-71.
Callegaro, M., Manfreda, K. L., & Vehovar, V. (2015). Web survey methodology.
Sage.
Deng, L., Turner, D. E., Gehling, R., & Prince, B. (2010). User experience, satisfac-
tion, and continual usage intention of IT. European Journal of Information Systems,
19(1), 60-75. doi:10.1057/ejis.2009.50
Dewey, J. (1938). Logic: The theory of inquiry. Henry Holt and Company, New
York.
Dewan, S., & Ramaprasad, J. (2012). Research note-music blogging, online sam-
pling, and the long tail. Information Systems Research, 23(3-part-2), 1056-1067.
Fink, A. (2008). Practicing research: Discovering evidence that matters. SAGE.
Gibbons, M., Limoges, C., Nowotny, H., Schwartzman, S., Scott, P., & Trow, M.
(1994). The new production of knowledge: the dynamics of science and research in
contemporary societies.
Gosling, S. D., & Johnson, J. A. (2010). Advanced methods for conducting online
behavioral research. American Psychological Association.
Gregor, S., & Hevner, A. R. (2013). Positioning and presenting design science re-
search for maximum impact. MIS Quarterly, 37(2), 337-356.
Grönqvist, H., Olsson, E. M. G., Johansson, B., Held, C., Sjöström, J., Norberg, A.
L., ... & von Essen, L. (2017). Fifteen Challenges in Establishing a Multidisciplinary
Research Program on eHealth Research in a University Setting: A Case Study. Jour-
nal of Medical Internet Research, 19(5), e173.
Hevner, A. R. (2014). Expanding the Impacts of Design Science Research.
Keynote at MKWI 2014, pp. 1-42.
Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design Science in Infor-
mation Systems Research. Management Information Systems Quarterly, 28(1), 75-
105.
Holm, P. (1996). On the design and usage of information technology and the
structuring of communication and work. Doctoral dissertation, Stockholm
University, Sweden.
Jia, R., & Reich, B. H. (2013). IT service climate, antecedents and IT service quality
outcomes: Some initial evidence. The Journal of Strategic Information Systems,
22(1), 51-69.
Karahanna, E., Straub, D. W., & Chervany, N. L. (1999). Information Technology
Adoption Across Time: A Cross-Sectional Comparison of Pre-Adoption and Post-
Adoption Beliefs. Management Information Systems Quarterly, 23(2), 183-213.
Katz, J. E., & Rice, R. E. (2002). Social consequences of Internet use: Access, in-
volvement, and interaction. MIT press.
Kim, D., & Steinfield, C. (2004). Consumers’ mobile internet service satisfaction
and their continuance intentions. AMCIS 2004 Proceedings, 332.
Koch, H., Gonzalez, E., & Leidner, D. (2012). Bridging the work/social divide: the
emotional response to organizational social networking sites. European Journal of
Information Systems, 21(6), 699-717. doi:10.1057/ejis.2012.18
Krosnick, J.A., Presser, S.: Question and Questionnaire Design. In: Marsden, P.V.,
Wright, J.D. (eds.) Handbook of Survey Research, 2nd edn., pp. 263–313. Emerald
Group Publishing Limited (2010)
Luftman, J., Zadeh, H. S., Derksen, B., Santana, M., Rigoni, E. H., & Huang, Z. D.
(2012). Key information technology and management issues 2011–2012: an interna-
tional study. Journal of Information technology, 27(3), 198-212.
Markus, M. L., Majchrzak, A., & Gasser, L. (2002). A Design Theory for Systems
That Support Emergent Knowledge Processes. Management Information Systems
Quarterly, 26(3), 179-212.
Mattsson, S., Alfonsson, S., Carlsson, M., Nygren, P., Olsson, E., & Johansson, B.
(2013). U-CARE: Internet-based stepped care with interactive support and cognitive
behavioral therapy for reduction of anxiety and depressive symptoms in cancer-a
clinical trial protocol. BMC cancer, 13(1), 414.
McGrew, D., & Viega, J. (2006). The use of galois message authentication code
(GMAC) in ipsec ESP and AH (No. RFC 4543).
Mesa-Lago, C. (2012). Reassembling Social Security. doi:10.1093/ac-
prof:osobl/978019964612.001.0001
Newsted, P. R., Huff, S. L., & Munro, M. (1998). Survey Instruments in Information
Systems. Management Information Systems Quarterly, 22(4), 553-554.
Norlund, F., Olsson, E. M., Burell, G., Wallin, E., & Held, C. (2015). Treatment of
depression and anxiety with internet-based cognitive behavior therapy in patients
with a recent myocardial infarction (U-CARE Heart): study protocol for a random-
ized controlled trial. Trials, 16(1), 154.
Nulty, D. D. (2008). The adequacy of response rates to online and paper surveys:
what can be done? Assessment & Evaluation in Higher Education, 33(3), 301-314.
doi:10.1080/02602930701293231
Presser, S. (2004). Methods for Testing and Evaluating Survey Questions. Public
Opinion Quarterly, 68(1), 109-130. doi:10.1093/poq/nfh008
Raban, D. R., & Rabin, E. (2009). Statistical inference from power law distributed
web-based social interactions. Internet Research, 19(3), 266-278.
doi:10.1108/10662240910965342
Searle, J. R. (1995). The Construction of Social Reality. The Free Press, New York.
Sen, R., King, R. C., & Shaw, M. J. (2006). Buyers’ Choice of Online Search Strat-
egy and Its Managerial Implications. Journal of Management Information Systems,
23(1), 211-238. doi:10.2753/MIS0742-1222230107
Sein, M., Henfridsson, O., Purao, S., Rossi, M., & Lindgren, R. (2011). Action
design research. MIS Quarterly, 35(1), 37-56.
Sivo, S. A., Saunders, C., Chang, Q., & Jiang, J. J. (2006). How Low Should You
Go? Low Response Rates and the Validity of Inference in IS Questionnaire Re-
search. Journal of the Association for Information Systems, 7(6), 351-414.
Sjöström, J., von Essen, L., & Grönqvist, H. (2014). The origin and impact of ideals
in eHealth research: experiences from the U-CARE research environment. JMIR re-
search protocols, 3(2), e28.
Sjöström, J., Rahman, M. H., Rafiq, A., Lochan, R., & Ågerfalk, P. J. (2013, June).
Respondent behavior logging: an opportunity for online survey design. In Interna-
tional Conference on Design Science Research in Information Systems (pp. 511-
518). Springer Berlin Heidelberg.
Sun, Y., Fang, Y., Lim, K. H., & Straub, D. (2012). User satisfaction with infor-
mation technology service delivery: A social capital perspective. Information Sys-
tems Research, 23(4), 1195-1211.
Ternström, E., Hildingsson, I., Haines, H., Karlström, A., Sundin, Ö., Thomtén, J.,
... & Rubertsson, C. (2017). A randomized controlled study comparing internet-
based cognitive behavioral therapy and counselling by standard care for fear of
birth–a study protocol. Sexual & Reproductive Healthcare.
Topi, H., & Tucker, A. (Eds.). (2014). Computing Handbook: Information systems
and information technology (Vol. 2). CRC Press.
Vance, A., Lowry, P. B., & Eggett, D. (2013). Using accountability to reduce access
policy violations in information systems. Journal of Management Information Sys-
tems, 29(4), 263-290.
Venable, J., Pries-Heje, J., & Baskerville, R. (2012, May). A comprehensive frame-
work for evaluation in design science research. In International Conference on De-
sign Science Research in Information Systems (pp. 423-438). Springer, Berlin, Hei-
delberg.
Venable, J., Pries-Heje, J., & Baskerville, R. (2016). FEDS: a framework for evalua-
tion in design science research. European Journal of Information Systems, 25(1), 77-
89.
Walsh, I. (2014). A strategic path to study IT use through users’ IT culture and IT
needs: A mixed-method grounded theory. The Journal of Strategic Information Sys-
tems, 23(2), 146-173.
Wand, Y., & Wang, R. Y. (1996). Anchoring data quality dimensions in ontological
foundations. Communications of the ACM, 39(11), 86–95.
Webster, J., & Watson, R. T. (2002). Analyzing the Past to Prepare for the
Future: Writing a Literature Review. MIS Quarterly, 26(2), xiii-xxiii.
Ågerfalk, P. J., & Sjöström, J. (2007). Sowing the seeds of self: a socio-pragmatic
penetration of the web artefact. doi:10.1145/1324237.1324238
Appendix A – Experimental questionnaire
design
This table shows the rationale for the design of the questionnaires used in the
experimental RBL evaluation.
[Row continued from previous page]
Guideline: For example, don’t ask “How many jobs are available in your town:
Many, a lot, some, or a few.” It’s not clear to everyone that “a lot” is less
than “many.” A better scale might be: “A lot, some, only a few, or none at all.”
Implementation: … in the questionnaires (before the actual USE questionnaire
starts): “How much do you use the learning platform?”. In Q1 and Q3, the
options are “A lot, some, a little, not at all”. In Q2, the options are “Much,
a lot, a little, not at all”.

The Double-barreled Question Principle
Guideline: Questions should measure one thing. Double-barreled questions try
to measure two (or more!) things. For example: “Do you think the president
should lower taxes and spending.” Respondents who think the president should
do only one of these things might be confused.
Implementation: Introduce a double-barreled question in Q2: Q2.15 should be
“It is flexible and I can use it without written instructions”.

The Response Anticipation Principle
Guideline: If a respondent could have more than one response to a question,
it’s best to allow for multiple choices. If the categories you provide don’t
anticipate all possible choices, it’s often a good idea to include an
“Other-Specify” category. If you are measuring something that falls on a
continuum, word your categories as a range.
Implementation: In Q2.1, write the question “Which campus at UU do you study
at” and make it a multiple-choice question. Remove “other” and “I do not use
any learning platform in my studies” from Q2.4.

The Non-Leading Principle
Guideline: Avoid questions using leading, emotional, or evocative language.
For example: “Do you believe the US should immediately withdraw troops from
the failed war in Iraq?” “Do you support or oppose the death tax?” Sometimes
the associations can be more subtle. For example: “Do you support or oppose
President Bush’s plan to require standardized testing of all public school
students?” Some people might support or oppose this because it is sponsored
by President Bush, not because of their opinions toward the merits of the
policy.
Implementation: Move Q3.32-35 to the end of the first page.
Questionnaire 1
Baseline questionnaire – designed to be a ’good questionnaire’, i.e. without
deliberate design flaws. Q1 respondents serve as a control group, so that we
know the results are not merely an effect of randomness.
Questionnaire 2
Erroneous questionnaire A – Designed to contain some built-in design issues
with respect to isolated questions, e.g. ambiguous questions or questions with
non-intuitive answering options.
Questionnaire 3
Erroneous questionnaire B – Designed to contain built-in structural issues, i.e.
questions in a non-logical sequence or a lack of informative texts to provide
context for the respondent.
Appendix B – Experimental Questionnaire
Analysis