
Respondent Behavior Logging

An Opportunity for Online Survey Design


Abstract

The prevalence of the Internet in business and in everyday life has significant
implications for social science research. One implication is the opportunity to
collect data using online surveys. While online surveys grow in importance
for data collection, there is little research on how to best utilize the online
medium to support data collection. One opportunity in the online setting is to
collect not only respondent answers, but also to log the respondents’ process
of providing those answers. Logging is utilized in several other situations –
e.g. to investigate use qualities of e-commerce web sites, but has not yet been
explored to evaluate and improve survey instruments (online questionnaires).
In this thesis, we conduct design science research in the context of eHealth
research. We articulate the innovative concept of respondent behavior logging
(RBL), consisting of (0) the emergence of RBL, (i) static and dynamic models to
capture and conceptualize respondent behavior in the context of online sur-
veys, (ii) visualization techniques for collected respondent behavior data, (iii)
constructs for measurement of respondent behavior, and (iv) a process model
for survey instrument evaluation and improvement based on the respondent
behavior logging concepts. We refer to 0 – iv as the RBL framework, which
is evaluated through (i) focus group evaluation, (ii) experimental evaluation,
and (iii) a comparative study based on data collected in the eHealth research
context contrasted with known issues with commonly used survey instru-
ments. Based on the articulation and evaluation of the RBL framework, we
put forward an informed argument about the usefulness of the framework to
support online data collection.

Keywords: Design science research, respondent behavior logging, research software, academic research context, online data collection, survey design, survey evaluation.

Acknowledgements

Contents
Acknowledgements ....................................................................................... iii
Definitions ...................................................................................................... x
Introduction ................................................................................................... 13
Research Setting ........................................................................................... 20
Uppsala University Psychosocial Care Programme (U-CARE) .............. 20
Information systems research in U-CARE ............................................... 30
Research Topic Emergence in the U-CARE setting................................. 35
Skip Logic............................................................................................ 36
Research Process Automation in Randomized Controlled Trials ........ 36
Environment Log ................................................................................. 37
Respondent Behavior Logging ............................................................ 39
Theoretical perspective ............................................................................ 43
A pragmatist interest in human action ................................................. 43
Characterization of the online medium................................................ 44
Summary: Implications for research .................................................... 46
Research Approach ....................................................................................... 48
A Staged Design Science Research Process............................................. 48
Stage 0: Emergence of RBL ..................................................................... 51
Stage I: Conceptual Design ...................................................................... 51
Stage II: RBL Visualizations .................................................................... 52
Stage III: RBL Statistical Measures ......................................................... 52
Stage IV: Formalization of Learning........................................................ 53
Rationale for a DSR approach .................................................................. 54
Application of DSR guidelines................................................................. 55
Guideline 1: Design as an Artifact....................................................... 56
Guideline 2: Problem Relevance ......................................................... 56
Guideline 3: Design Evaluation ........................................................... 56
Guideline 4: Research Contribution .................................................... 57
Guideline 5: Research Rigor................................................................ 57
Guideline 6: Design as a Search Process ............................................. 57
Guideline 7: Communication of Research ........................................... 58
Literature Review Strategy....................................................................... 58
Identify the purpose ............................................................................. 58
Draft protocol ...................................................................................... 58
Apply practical screen ......................................................................... 59
Search for literature ............................................................................. 59
Extract data .......................................................................................... 59
Appraise quality................................................................................... 59
Synthesize studies ................................................................................ 59
Write the review .................................................................................. 60
DSR Knowledge types ............................................................................. 60
DSR Knowledge Contribution framework ............................................... 60
DSR Research Contributions Types ......................................................... 63
Knowledge Base ........................................................................................... 64
Existing tools for online surveys .............................................................. 65
Previous IS Research on User Logging .................................................... 65
Search strategy ..................................................................................... 66
Discussion of Findings ........................................................................ 66
Relevance and Implications ................................................................. 72
Previous HCI research on User Logging .................................................. 73
Relevance and Implications ................................................................. 76
Previous E-Health research on User Logging .......................................... 77
Relevance and Implications ................................................................. 79
Implications for the design of RBL .......................................................... 80
A Framework for Respondent Behavior Logging......................................... 82
Dynamic model of respondent behavior .................................................. 82
Static model of respondent behavior ........................................................ 83
Visualization Techniques for Respondent Behavior ................................ 84
RBL measurement constructs ................................................................... 86
A process model for questionnaire evaluation ......................................... 86
Evaluation ..................................................................................................... 88
Focus group evaluation ............................................................................ 90
Rationale .............................................................................................. 90
Evaluation Overview ........................................................................... 90
Data Collection .................................................................................... 92
Data Analysis....................................................................................... 93
Lessons Learned .................................................................................. 97
Experimental evaluation ........................................................................... 97
Rationale .............................................................................................. 97
Evaluation Overview ........................................................................... 97
Data Collection .................................................................................. 100
Data Analysis..................................................................................... 101
Lessons Learned ................................................................................ 102
Discussion ................................................................................................... 103
RQ1: How can the characteristics of the online medium be used to
improve the efficacy of online surveys? ................................................. 103
RQ2: How can we conceptualize respondent behavior and logging of
such behavior in a meaningful manner in the context of online
surveys? .................................................................................................. 105

RQ3: What qualities should guide the design of respondent behavior
logging software to support researchers in designing and executing
online surveys? ....................................................................................... 107
Functionality ...................................................................................... 107
Performance & Applicability ............................................................. 108
System Integrity & Reliability ........................................................... 109
Functionality (Opportunity for better designing)............................... 109
Uniqueness......................................................................................... 110
The contribution to Knowledge Base ................................................ 110
Conclusions and Research Outlook ............................................................ 112
Research Outlook ................................................................................... 113
References ................................................................................................... 114
Appendix A – Experimental questionnaire design ..................................... 121
Questionnaire 1 ...................................................................................... 124
Questionnaire 2 ...................................................................................... 130
Questionnaire 3 ...................................................................................... 135
Appendix B – Experimental Questionnaire Analysis .................................. 141

Abbreviations

Definitions

Tables

Table 1. Ten dimensions of communication (after Clark, 1996) ........ 44


Table 2. Overview of DSR activities .................................................. 48
Table 3. Overview of literatures found in IS ...................................... 69
Table 4. RBL Visualization Techniques ............................................. 85
Table 5. RBL question level measures ............................................... 86
Table 6. RBL questionnaire level measures ....................................... 86
Table 7: RBL evaluation type ............................................................. 89

Figures

Figure 1. U-CARE organizational chart............................................. 21


Figure 2. The U-CARE software system. .......................................... 28
Figure 3: Overview of the U-CARE online study process ................. 30
Figure 4: My roles in U-CARE .......................................................... 33
Figure 5. Setup for the branching condition. ...................................... 36
Figure 6. Example setup of inclusion criteria. .................................... 37
Figure 7. Environment log .................................................................. 38
Figure 8. Question And Answers log ................................................. 41
Figure 9: Staged Design Process ........................................................ 50
Figure 10: The DSR Framework (Hevner et.al, 2004) ....................... 55
Figure 11: DSR Knowledge Contribution Framework ....................... 61
Figure 12. Design Science Research contribution types .................... 63
Figure 13: Focus of the literature review. ........................................... 64
Figure 14: Dynamic Model of Respondent Behavior ......................... 83
Figure 15: Static Model of Respondent Behavior .............................. 84
Figure 16: A process-model for questionnaire improvement through
respondent behavior logging ...................................................... 87
Figure 17: FEDS evaluation process. ................................................. 88
Figure 18: Activity Chart .................................................................... 93
Figure 19: Time Chart ........................................................................ 94
Figure 20: Answer Matrix (Observation point 1) ............................... 95
Figure 21: Answer Matrix (Observation point 3) ............................... 96

Introduction

The use of the Internet has increased significantly among the general world population over the years. It has risen from 0.4 percent in 1995 (Internet Growth Statistics IDC, 1995) to 50.1 percent (Internet Growth Statistics, 2016), showing continuous, steady growth. This has prompted changes in the way research is conducted in different fields (Katz, 2002). Surveys have long been one of the basic methods used to collect data at larger scale. Now, with the use of the Internet, data collection methods such as surveys have shifted towards the online medium. Online surveys are easier to store, retrieve and analyze (Murthy 2008). At the same time, they are significantly cheaper than mailing questionnaires to thousands of respondents. Like any new technology, it usually takes time for an innovation like online surveys to become part of the practice of the scientific community. In the past, when tools like tape recorders came to market, those who used them for data collection were criticized by their peers, since the rigor of the method was questioned (Murthy 2008). Online questionnaires are effective but also have their fair share of challenges, including not getting the desired information from respondents (Bulmer, 2004). For example, Bulmer states that online survey questions are often left unanswered by respondents because of their poor structure (Bulmer, 2004).
With the rise of online research, studies have been conducted to examine the differences between online and paper methods of data collection (Ward, Clark, & Zabriskie, 2012). The benefits of the online medium have greatly influenced the way research is carried out. The Internet today reaches a wider population than ever, making online surveys quick to administer even from distant geographical locations. This remains a challenge for paper-based data collection methods (Ward, Clark, & Zabriskie, 2012). Paper data collection can also involve errors in database entry, data processing, and the evaluation of results. To cater to these errors or gaps, online surveys can be programmed to capture data directly into statistical packages, which greatly reduces the number of errors that occur with paper-based data collection (Ward, Clark, & Zabriskie, 2012). Cost is another element that is critical when it comes to data collection methods. Online surveys can collect data from a wide group of people without the large expense incurred by paper-based data collection: the online method avoids travel costs, printing costs, and data processing costs (Ward, Clark, & Zabriskie, 2012).

In this regard, with the increase in the use of online surveys as the preferred data gathering method, there is also a need to explore the characteristics of the online medium to their fullest extent. For instance, locating participants may be useful to ensure that the information derived from survey responses does not only reflect the perceptions of one area. It is also possible to design online surveys using varying question formats, showing relevant questions based on participant responses, presenting the questionnaire in different formats based on the device being used, and so on. All of this can be achieved in the online medium to attract more responses from participants and to help researchers ensure that the survey questionnaire uses a format that stays appealing to the respondents.
The online medium can also be useful for online surveys because of its time-tracking characteristic. Time tracking gives an overview of the time taken by a respondent to consider a question before responding to it. Through time tracking, one can estimate the difficulty of a question on the basis of the time the respondent takes to select a choice. It also makes it possible to spot responses that are given immediately, which indicates respondents who answer the survey without reading and analyzing the questions. Hence, through time tracking, researchers are in a position to evaluate the effectiveness of the questionnaire.
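To make the idea concrete, the sketch below shows one way such timing data could be derived from a simple event log. It is a minimal illustration, not the U-CARE implementation; the event shape, the questionShown/answerSelected event names, and the one-second threshold for "immediate" answers are assumptions made for the example.

```typescript
// Minimal sketch: deriving per-question response times from a hypothetical event log.
interface LogEvent {
  questionId: string;
  kind: "questionShown" | "answerSelected"; // assumed event types
  timestamp: number; // milliseconds since epoch
}

interface QuestionTiming {
  questionId: string;
  responseTimeMs: number;
  answeredImmediately: boolean;
}

const IMMEDIATE_THRESHOLD_MS = 1000; // assumed cut-off for "immediate" answers

function computeResponseTimes(events: LogEvent[]): QuestionTiming[] {
  const shownAt = new Map<string, number>();
  const timings: QuestionTiming[] = [];

  for (const event of events.sort((a, b) => a.timestamp - b.timestamp)) {
    if (event.kind === "questionShown") {
      // Remember when the question first became visible.
      if (!shownAt.has(event.questionId)) {
        shownAt.set(event.questionId, event.timestamp);
      }
    } else if (shownAt.has(event.questionId)) {
      const responseTimeMs = event.timestamp - shownAt.get(event.questionId)!;
      timings.push({
        questionId: event.questionId,
        responseTimeMs,
        answeredImmediately: responseTimeMs < IMMEDIATE_THRESHOLD_MS,
      });
    }
  }
  return timings;
}
```

Long response times could then be read as a sign of question difficulty, while clusters of immediate answers might flag respondents who did not read the question.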
Given the opportunities that can be explored in designing online surveys,
we articulate the first research question as following:

RQ1: How can the characteristics of the online medium be used to improve
the efficacy of online surveys?

Research in information systems, as well as in other disciplines, often includes data collection through surveys (Newsted et al. 1998; Sivo et al. 2006). As
stated by Krosnick & Presser (2010), “the heart of a survey is its question-
naire”. The importance of well-designed questionnaires cannot be overstated. Flaws in questionnaire design may lead to response errors (Krosnick 2010) that may negatively impact the entire study. There are several methods
to design questionnaires and evaluate their efficacy, ranging from cognitive
interviews with participants to abstract statistical modeling to designing ex-
periments with the explicit objective of testing the survey itself (Presser et al.
2004). However, little attention has been paid to how the online medium
brings about new opportunities for evaluating the qualities of a questionnaire
– thus supporting questionnaire design improvements.
There is a general trend in web development to use user behavior data to empower new types of applications. This trend is clearly related to a pragmatist interest in human action. While users ‘give’ answers, they also ‘give off’ signs of their behavior that may be used in various ways by designers (Ågerfalk & Sjöström 2007). The online medium allows for analysis of both ‘gives’ and ‘give-offs’ as a means to understand user behavior. Consequently, online surveys can provide us with an account of the actions that users have taken while filling in the survey. Performing evaluation by reviewing log data from surveys in use can potentially be effective in terms of both efficacy and cost. Flaws in questionnaires can be detected by logging and analyzing the behavior of respondents during response time, and then analyzing why respondents exhibit those behaviors (Laudon & Laudon, 2004).
Research shows that there is value in conceptualizing and analyzing people's behavior with respect to IT use (Dewan & Ramaprashad, 2012; Koch et al., 2012; Sen et al., 2006; Richard, Monika & Albrecht, 2006; Selker & Wei, 2006; Van Gemert-Pijnen, 2014; Sieverink, Kelders et al., 2013). Such conceptualizations are the foundations for logging behavior. Such logs are often used by researchers in Human Computer Interaction (HCI) and E-Health, where they are observed and analyzed in depth, e.g. for performance measurement. There is, however, a lack of research addressing how such analysis may be used to support questionnaire evaluation and design and thus improve online research (we expand on related research in chapter 0).
Understanding the behavior of respondents is important when the data to be collected is used to make critical decisions. Observing the behavioral patterns of online questionnaire respondents is crucial because, unlike in face-to-face conversations, the attitude, body language and confidence levels of the respondents are difficult to identify. The response time for a certain question may also differ between online and face-to-face question sessions. Respondents may also later change their answers to certain questions in the case of online surveys (Andrews, Nonnecke, & Preece, 2003). All of the above-mentioned traits are difficult to identify when surveys are conducted on the Internet, yet they are important for drawing conclusions at the end of these surveys (Andrews, Nonnecke, & Preece, 2003).
While these behavioral patterns of survey takers are critical for deriving conclusions, researchers are still trying to understand them. A strong understanding of these patterns ultimately helps in designing and documenting questionnaires in ways that generate productive results. A deep understanding of respondents' behavior during an online survey can also help predict the results of the study. If researchers know which behaviors, input patterns or external elements can produce varied survey results, they can design the questionnaires so as to generate better results.
While the problem domain is still not well understood, part of the research contribution includes a better description and understanding of the problem domain (Gregor & Hevner 2013). The goal of a better understanding of the problem domain leads us to the following research question:

RQ2: How can we conceptualize respondent behavior and logging of such behavior in a meaningful manner in the context of online surveys?

As mentioned above, the behavior of the respondents needs to be considered in order to gather information that is useful for deriving conclusions from the survey. Therefore, researchers need guidance to design questionnaires that successfully fulfill the purpose of the survey. The researchers work to assure that the data and information collected by the survey method are valuable, lie within the domain of interest, are collected from the intended users, and follow all security constraints.
However, there are challenges in supporting the design of online surveys. Different approaches are followed to design and develop surveys for different purposes, such as commercial surveys, research surveys, and so on. These different types of survey design are based upon different considerations, such as the audience from which the data is to be collected, the modes and time slots used to gather this information, and the purpose of collecting it ("Research Methods: Qualitative Research and Quantitative Research", 2017). Software tools developed for research purposes in academia differ from commercial ones, since research surveys are generally governed by different goals and ethical considerations. In academic research settings, survey evaluation techniques are highly context-dependent, which often restricts researchers from relying on commercially available solutions or techniques.
Surveys conducted over the online medium depend on the technology. The number of respondents may fall short of the ideal sample population because of lack of access to the technology, or lack of usability when entering or altering answers. It may also mean that researchers have no control over which answers are filled in, due to this lack of usability (Andrews, Nonnecke, & Preece, 2003). For instance, a mobile or tablet user may not be able to access an online survey properly due to improper resolution, excessive memory consumption, poor structure, and so on.
The aim of this research is to move beyond merely descriptive and explanatory knowledge into prescriptive knowledge. This will contribute to the effectiveness of online surveys by drawing on the behavioral science notion of "what is true" and the design science notion of "what is effective" or "what is useful", i.e. a pragmatist view on concepts. Online surveys have greater scope to be effective and efficient when the concepts of design science and behavioral science are wisely applied in developing them. Therefore, this research seeks to answer the following research question:

RQ3: What qualities should guide the design of respondent behavior log-
ging software to support researchers in designing and executing online sur-
veys?

This research seeks to implement a self-evaluating online survey mechanism based on respondent behavior logging in the context of E-Health. One such E-Health program is U-CARE (see chapter 2). U-CARE is a multidisciplinary research program with the overall objective of promoting psychosocial care to patients via the Internet. In this context, this research takes a Design Science Research (DSR) (Gregor & Hevner, 2013; Hevner, March, Park, & Ram, 2004) (see chapter 3) approach to generate a viable solution for the online research process in U-CARE, as well as to offer a new artifact that can be used for data gathering. Given this context, this research aims to use the characteristics of the online medium to offer a self-evaluating survey mechanism based on respondents' activities while answering surveys online. Implementation of such a survey evaluation process is expected to provide potential benefits to healthcare stakeholders in U-CARE. Healthcare providers and researchers can thereby get a better understanding of the population of interest and of their understanding of the designed questionnaires. For example, if a high percentage of people from the population of interest repeatedly change their replies to a given question, then healthcare providers may conclude that many of these people did not necessarily understand the question. Such logging activities will provide a better understanding of the participants taking the questionnaires and how they interpret the questions asked. As such, survey logging analysis stands to benefit the healthcare discipline with an effective evaluation process that might shed light on a target population's understanding of a given questionnaire. The idea is that, by designing an evaluative platform for online surveys, researchers in U-CARE can feel more confident making decisions based on information about respondents' logged behavior. Medical practitioners and therapists can therefore get a better understanding of the participants and come up with effective intervention approaches (commonly known as “Internet-based Cognitive Behavior Therapy” or ICBT).
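As a sketch of how such a signal could be computed from logged respondent activity, the example below counts how often respondents changed their answer to each question and flags questions with a high change rate. It is an illustrative sketch only; the log format and the 30 percent threshold are assumptions, not the U-CARE implementation.

```typescript
// Illustrative sketch: flagging questions whose answers are changed unusually often.
interface AnswerEvent {
  respondentId: string;
  questionId: string;
  value: string; // the selected answer
}

// Assumed threshold: if more than 30% of respondents revised a question, flag it.
const CHANGE_RATE_THRESHOLD = 0.3;

function questionsWithHighChangeRate(events: AnswerEvent[]): string[] {
  // Count answers per (question, respondent); more than one answer means a revision.
  const answerCounts = new Map<string, Map<string, number>>();
  for (const e of events) {
    const perQuestion = answerCounts.get(e.questionId) ?? new Map<string, number>();
    perQuestion.set(e.respondentId, (perQuestion.get(e.respondentId) ?? 0) + 1);
    answerCounts.set(e.questionId, perQuestion);
  }

  const flagged: string[] = [];
  for (const [questionId, perQuestion] of answerCounts) {
    const respondents = perQuestion.size;
    const changed = [...perQuestion.values()].filter((n) => n > 1).length;
    if (respondents > 0 && changed / respondents > CHANGE_RATE_THRESHOLD) {
      flagged.push(questionId);
    }
  }
  return flagged;
}
```

A question flagged in this way would not automatically be judged faulty, but it would be a candidate for closer review by the survey designer.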
Healthcare intervention relies greatly on the accuracy of the information provided prior to the implementation of a given intervention approach. In order to achieve this accuracy, all participants involved in the process of data collection need to be careful about information related to the problem that needs intervention. U-CARE provides various stakeholders, including healthcare researchers, with a platform for collecting and analyzing information from a population of interest. In turn, this information is used to make the necessary decisions about the intervention being designed. However, there is always the chance that information provided by the population of interest is faulty. Therefore, a proper analysis and evaluation process for the information provided to the healthcare stakeholders is needed. This way, any decisions made from this information are expected to be reliable and accurate.
Most importantly, evaluating its effects may support the evaluation of healthcare quality, which is a primary objective of the health facilities organized within the U-CARE research program. Therefore, knowing how such information impacts healthcare may lead to new techniques for improving the quality of the healthcare being offered.
While numerous survey vendors have provided a wide variety of clients with survey platforms for designing surveys and collecting survey data, there has been little to no emphasis on the importance of evaluating surveys, survey platforms and survey responses based on user logging activity. Therefore, implementation of the logging analysis approach provides an opportunity for Information Systems (IS) researchers to build and critically evaluate a survey mechanism in the E-Health context. However, for researchers to accomplish this, it is imperative that the implementation of respondent logging evaluation be given close attention. By so doing, the collection, analysis and evaluation of online surveys can be improved for future research and studies.
This research is structured following the Design Science Research (DSR) guidelines to provide step-by-step insight into the research and the evaluation process. For this thesis, the research structure is as presented below.
Chapter 2 takes a closer look at the U-CARE online research program and how the research topic has emerged. This chapter also sheds light on the emergence of the design artifact (Respondent Behavior Logging, RBL), as well as the theoretical perspective of a pragmatist interest in human action underlying the use of RBL within the immediate research environment.
Chapter 3 breaks down the research on the RBL artifact into stages 0 – iv. It closely analyzes the literature revolving around the implementation of RBL in Information Systems (IS). A justification for the use of the Design Science Research (DSR) approach is also provided. Additionally, this chapter provides insight into the intended impact of the entire research on the disciplines of interest, focusing mostly on RBL's implementation in the IS discipline. By so doing, supportive information is presented to justify the course of this research and its expected contribution to IS.
Chapter 4 compiles the literature review conducted on how user behavior in the use of IT artifacts is conceptualized in general, and identifies evaluation techniques for IT artifacts drawing on user behavior data. This chapter investigates the fields that are critical for this research: Information Systems (IS), Human-Computer Interaction (HCI) and E-Health. It analyzes the implications of this literature for RBL. The chapter also points out specific areas which past research has failed to address. By pointing out where the knowledge gap is, this chapter helps to create a basis for the design artifact proposed in this research.
Chapter 5 closely analyzes RBL and what it consists of. The chapter breaks down RBL into five specific DSR artifacts: a dynamic model of respondent behavior, a static model of respondent behavior, visualization techniques, measurement constructs, and a process model for questionnaire evaluation and improvement. These define RBL in a deeper sense and help in understanding what implementation of RBL in information systems may entail.
Chapter 6 closely analyzes the evaluation process intended for the RBL implementation in U-CARE. The chapter defines FEDS (the Framework for Evaluation in Design Science Research) and explains the parameters within which it can be used to evaluate the implementation process as well as future research results. The chapter describes the evaluation process, the outcomes of the evaluation process, and an overall analysis of a focus group and an experimental evaluation (both of which are presented in the chapter).
Chapter 7 addresses the research questions and answers those questions in
an orderly manner.
Chapter 8 wraps up the research, pointing to the knowledge gaps that exist in this topic and discussing what the research presented will mean for future studies in this discipline.

Research Setting

In this chapter, we present our research context (section 0), based on which we appropriate Information Systems research (section 0), and how the research topic has emerged within this setting (section 0). This chapter also describes organizational details and further elaborates on the theoretical perspective leading to the research topic (section 0).

Uppsala University Psychosocial Care Programme (U-CARE)
U-CARE1 is a multidisciplinary research program with an overall objective to promote psychosocial care for patients struck by somatic diseases, and for their loved ones. Researchers from multiple disciplines, including Public Health, Economics and Information Systems, are part of the setting of these care-giving activities. Its key task was to develop a specific system, the U-CARE software, which offers both psychosocial and psychological support to research participants with physical challenges (Mustafa, 2019, p.49). The research offered eHealth interventions in the form of internet-based self-help programmes, intended to support research of international standard quality. It considered various psychological factors involved in collecting data during the research process, and was thus meant to improve the research framework.

To oversee its effectiveness, U-CARE has undergone several structural adjustments, as illustrated in Figure 1.

1 U-CARE is an abbreviation of Uppsala University Psychosocial Care Program. Visit http://u-care.se and http://www.u-care.uu.se

Sjöström, J., von Essen, L., & Grönqvist, H. (2014). The origin and impact of ideals in eHealth research: experiences from the U-CARE research environment. JMIR Research Protocols, 3(2), e28.

Grönqvist, H., Olsson, E. M. G., Johansson, B., Held, C., Sjöström, J., Norberg, A. L. & von Essen, L. (2017). Fifteen Challenges in Establishing a Multidisciplinary Research Program on eHealth Research in a University Setting: A Case Study. Journal of Medical Internet Research, 19(5), e173.

Figure 1. U-CARE organizational chart.

This chart gives a general overview of U-CARE's current organization and the categories of its stakeholders. The steering committee determines the guidelines and establishes the regulations to be used in operations. The executive committee oversees the execution of the organization's project plan. The advisory board, which is made up of experienced researchers and people who either have had relevant past experience themselves or are close relatives of patients, plays a crucial role in proposing changes and reviewing studies. In addition, the programme director steers U-CARE's operations, while the management team fully supports the programme director in accomplishing research projects. Furthermore, U-CARE is also split into three areas. The research area executes innovative research regarding the significance of psychological elements for somatic and psychological diseases; additionally, it develops, tests and evaluates psychological interventions. Second, the education area offers education to undergraduates and postgraduates pursuing eHealth and clinical psychology. Third, the collaboration area contributes to the implementation of interventions developed within the organization.
U-CARE software systems are identified as solutions for research designs that offer interventions or aid effective data collection. As illustrated in table 2, several research studies have used the U-CARE software system in their designs of cognitive behavior therapy (CBT) interventions, making use of randomized controlled trials (RCTs). These studies span several empirical contexts, indicating that the U-CARE software has been engineered to support different kinds of research studies. This illustrates that the software evolved, and that the relevant stakeholders' specifications were designed and delivered to incorporate different scenarios in the U-CARE software.

Table. Research studies using the U-CARE software system

U-CARE AdultCan (U-CARE study; inclusion 2013–2017; 1,117 participants). A randomized controlled trial (RCT) to study the effect of a stepped care self-help programme via the internet on anxiety and depression among adults with cancer.

U-CARE Heart (U-CARE study; inclusion 2013–2016; 1,052 participants). A randomized controlled study of the effect of an internet-based psychological self-help programme on anxiety and depression in post-myocardial infarction (MI) adult patients. The study aims to examine the clinical efficacy and cost-effectiveness of internet-based cognitive behavioral therapy for post-MI patients.

U-CARE YoungCan (U-CARE study; inclusion 2015–2016; 6 participants). A feasibility study of an intervention including psychosocial support and psychological treatment via the internet to adolescents and young adults struck with cancer during adolescence.

U-CARE ParentsCan (U-CARE study; inclusion 2016–2017; 121 participants). A study focused on offering psychological help via the internet to parents of children with cancer. [ParentsCan has multiple phases. In the first phase, the study was designed. In the second phase, a sub-study (PUSSEL) was executed.]

U-CARE MAYA (U-CARE study; inclusion 2015–2015; 10 participants). A study to identify cancer-related emotional suffering in young people diagnosed with cancer during their teenage years and to develop psychological treatments to alleviate such suffering. [The study used the U-CARE software system only to collect research participants' consent.]

UPPS (associated study; inclusion 2014–2019; 356 participants). An observational study, the Uppsala Pelvic Pain Study, concerning pelvic pain arising during pregnancy.

ISAK (associated study; inclusion 2014–2016; 105 participants). A randomized controlled study of the effect of internet-based relapse prevention as an adjunct to anti-depressive medication.

U-CARE Pregnant (associated study; inclusion 2015–2016; 270 participants). A randomized controlled study of the effect of an internet-based intervention aimed at pregnant women with childbearing fears.

JUNO (associated study; inclusion 2015–2018; 401 participants). A randomized controlled study of the effect of an internet-based intervention for women with negative or traumatic experiences related to childbirth or abortion. This also involved the women's partners.

AIDA-I & AIDA-II (associated studies; inclusion 2015–2016; 185 and 100 participants). A project aimed at experimentally investigating how people are reinforced in psychotherapy and how this affects adherence and drop-out. A further aim was to examine whether live and internet-based interventions differed in these aspects.

ENGAGE (1000g) (associated study; inclusion 2018–2019; 170 participants). An observational study aimed at getting an overall view of the physical and mental health of young adults born with an extremely low birth weight.
Table 3 illustrates the functionalities and activities of the U-CARE software system, organized by user role. The researcher user role designs and creates research studies, tailors study-specific attributes, manages research studies, schedules events, designs treatment material for psycho-education, designs intervention therapies, manages research participants' consent, adds and sends login details to incoming participants, uses decision support, creates FAQs and chooses staff. Second, the therapist user role designs questionnaires, designs therapy content, designs intervention therapy, follows treatments according to the study protocol, approves ICBT modules, responds to homework tasks, communicates with the study participants, moderates chat and forum, defines flag words, and monitors and answers FAQs. Third, the participant user role provides consent online, fills in questionnaires, chooses a nickname, uploads a picture, completes ICBT modules, accesses homework and self-help, communicates with therapists and fills in a personal diary. Fourth, the health care professional user role approves participants' involvement in a study, adds participants, designs therapy content, moderates chat and forum, and monitors and answers FAQs. Fifth, the registrator user role adds participants to a study and fills in particular questionnaires. Sixth, the U-CARE support user role adds support concerns, views activity snapshots, views a study's participants, and resets participants' flow. Seventh, the coordinator user role coordinates research studies and groups, monitors study events and audits privacy breaches. Eighth, any user translates texts into the specific languages used in a study.

User roles and their activities and functionalities:

[Clinical] Researcher
- Create and design research studies (according to protocol).
- Customise study-specific features (e.g., allow the user to access chat, forum, internal/instant messages (IM) and the library).
- Manage research studies.
- Schedule events (e.g., reminders).
- Design questionnaires (using the generic questionnaire design tool).
- Design treatment content for psycho-education (i.e., audio, video, PDF, text, images, et cetera) using the library.
- Design intervention treatment (e.g., ICBT modules, which include treatment contents, questionnaires and homework).
- Manage research participants' consent.
- Add a research participant to the research study and send login information.
- Use decision support (i.e., a dashboard which provides an overview of the current state of activities).
- Create FAQs.
- Choose staff to be shown on the About us page.

Therapist [i.e., psychologist]
- Design questionnaires (using the generic questionnaire design tool).
- Design treatment content for psycho-education (i.e., audio, video, PDF, text, images, et cetera) using the library.
- Design intervention treatment (e.g., ICBT modules, which include treatment contents, questionnaires and homework).
- Follow treatments in accordance with the study protocol.
- Approve ICBT modules.
- Respond to homework tasks.
- Communicate with research participants (e.g., using IM).
- Use decision support (using the patient indicators framework).
- Moderate forum and chat.
- Define flag words (e.g., suicide) that, when used in chat or forum, alert moderators that they need to pay attention to a particular conversation.
- Answer and monitor FAQ.

[Research] Participant
- *Provide consent online.
- *Fill in questionnaires.
- *Choose nickname.
- *Upload picture [if they want to upload].
- +Go through treatment by completing a list of ICBT modules.
- +Access self-help and homework.
- +Communicate with therapists (e.g., using IM).
- +Communicate with peers through chat and forum.
- +Choose to be visible or not in chat and forum.
- +Write personal diary.
- +Ask questions to health care professionals.
[* any participant in the reference, control or treatment group; + functions for the treatment group only]

Health care professional
- Get patient approval to participate in research or be contacted.
- Add research participants (at various health care sites across Sweden).
- Design treatment content for psycho-education (i.e., audio, video, PDF, text, images, et cetera) using the library.
- Moderate forum and chat.
- Answer and monitor FAQ.

Registrator
- Add research participants (at various healthcare sites across Sweden).
- Fill in participant-specific questionnaires, often regarding clinical data.
[A Registrator may be assigned to a specific health care site and can only register data for participants from their site.]

U-CARE support
- Add support issues (received through phone calls or support emails).
- View research participant and activity snapshots.
- Reset research participant flow.

Any user (except research participants)
- Translate text to the research study-specific language (using the in-place translation feature).

As Figure 2 illustrates, the U-CARE system comprises different components. These include the U-CARE portal, a personal data web service, an event windows service, and external services. The content delivery portal and the U-CARE portal are hosted on the production server. The personal data web service, its cache, and the event windows service are hosted on the personal server host. Third, the beta and alpha portals are hosted on the test server. Finally, for automatic system building and testing, the team foundation server and source code versioning are hosted on the development server host.

Figure 2. The U-CARE software system.

The primary idea of this research program is to investigate how Internet-based psychosocial support can be provided to patients with various somatic problems. An integral part of the planned research was to use data collection via online questionnaires, and delivery of online content. Content includes, for example, a library with videos and other resources, a discussion forum, and software features supporting ICBT (Internet-based Cognitive Behavior Therapy).

The overall procedure is that U-CARE delivers online studies, which are based on sequential observation points where participants go through screening questionnaires and are placed into different groups2 depending on the severity of their scores. These observation points consist of different questionnaires that are filled in by participants and health staff at different stages of a study. Through this process, therapists, psychologists, and researchers collect survey data at different observation points and provide online materials according to the needs of participants as part of the treatment process. Previous research in – or in association with – U-CARE elaborates on details of specific randomized trials conducted using the software (e.g., Alfonsson, Olsson, Linderman, Winnerhed, & Hursti, 2016; Ander et al., 2017; Mattsson et al., 2013; Norlund, Olsson, Burell, Wallin, & Held, 2015; Ternström et al., 2017).

2 Typically, participants are placed in either the treatment group, the control group or a reference group.
Figure 3: Overview of the U-CARE online study process

Figure 3 shows that participants who take part in a study go through different observation points. All participants go through the screening phase by attending to some questionnaires, and thereafter they are randomized into different groups (Treatment, Control, Reference). These groups are offered different questionnaires, as configured in the design of the particular study. These phases are called observation points. The entire study therefore collects data periodically, at different times, from different groups of participants.

Information systems research in U-CARE


Research software is used in the academic field today to increase research accessibility and quality. Researchers who are part of the U-CARE research environment work in multidisciplinary teams with knowledge from various fields of Information Science research (Mustafa, 2019, p.58). For this reason, the scientific community has shown growing interest in the need to develop sustainable research software to sustain eHealth research. This also involves providing a method space and other principles for the grounded design of U-CARE that can help in sustaining and formalizing learning through various ADR cases. Eventually, various practices use the multidisciplinary approach of U-CARE to resonate with evaluative ideas in IS research design (Mustafa, 2019, p.59). As a result, IS researchers are able to apply the iterative U-CARE software system in developing the novel design knowledge needed for understanding the problem domain. Consequently, IS researchers also have an overarching ambition of developing new knowledge whose interests are mainly associated with the challenges facing current research designs.
IS researchers form a large part of the U-CARE development team, whose main role is to develop an interactive and agile framework for software development. The U-CARE software systems are moderated by researchers who are stakeholders from different regimes. These researchers conduct meetings and research processes that provide feedback on the continued research development efforts. Such efforts are considered stakeholder-centric, and hence they contribute significantly to the U-CARE programme's overarching objectives (Mustafa, 2019, p.60). Various U-CARE stakeholders form an integral part of the academic research context that oversees the eHealth research software design. To achieve this successfully, U-CARE stakeholders in all categories are involved in an external process of design evolution that manages all research feedback (Mustafa, 2019, p.61). Results from such processes provide direct feedback to all stakeholders, allowing users to give feedback easily through the system design. Table 4 illustrates the stakeholders' relevance both in the U-CARE context and in the dissertation. Clinical researchers conduct clinical trials to investigate various mental and emotional health challenges that can arise as a result of physical disorders; they include cognitive scientists, nurses, and psychologists. In the dissertation, clinical researchers are users of the U-CARE system. Psychologists deliver or develop psychosocial solutions, outline the content of CBT, manage the content of the U-CARE system, design CBT treatments and keep in contact with the study's participants. The health economists study and evaluate the costs of the software's interventions and provide feedback at various stages of the system. The associated researchers manage associated studies related to the study's context, provide varying requirements for the system, and influence and evaluate design choices. The health care professionals, who include hospital staff, nurses and physicians, enroll research participants into the system, moderate discussion forums, answer FAQs, and give responses on task-related matters. Research participants are people who have volunteered to take part in a study through the U-CARE system; they provide secondary feedback on the system. The information systems researchers collaborate with software developers to shorten the software's development time and minimize its cost. Finally, the software developers maintain and develop the U-CARE system.

IS researchers have been active in the U-CARE program since 2010. Following a design science research (DSR) approach, IS researchers have acted as designers, developers, and researchers.

Figure 4: My roles in U-CARE

Over the period of engagement in this research, from 2010 to date, I have been involved in various roles. I have worked on the U-CARE project as a system developer and as a researcher at different times, as shown in the timeline (Figure 4).

Between November 2010 and October 2012, my main role in the project was as a system developer, which included the design of features for online questionnaires. I was one of the leading developers who designed and architected the U-CARE software. My work responsibilities involved full-stack development, which included programming, front-end design, database management, system analysis and security assurance. I designed and developed proof-of-concepts for ICBT (Internet-based Cognitive Behavior Therapy), online surveys, RCTs (Randomized Controlled Trials), the library, chat, and many other features. Some features were complex, such as the implementation of automation of software behavior based on the answers people provided in questionnaires. Such automation came in two parts. First, I implemented skip logic for the surveys, which ensured that subsequent questions would be shown to respondents based on their previous responses, as further explained in section 2.3.1 of this thesis. Second, I implemented a mechanism to support the logic of RCTs based on the evaluation of expressions drawing on the answers to questionnaires (further explained in section 2.3.2). In brief, based on the results of the questionnaires, the respondents or participants are randomly put into three groups (Treatment, Control, and Reference), determined by logic that is provided during survey design, so that the researchers, psychologists, and therapists are able to provide the relevant materials to their target groups. Using the design, U-CARE studies have automated logic for inclusion/exclusion of participants, and randomization of participants into different study groups, based on the results of the baseline questionnaires. The work on questionnaire design and automation of logic at this early stage provided me with knowledge, experience, and interest in solving design problems related to online surveys.
Following my role as a developer, I became a PhD student in autumn 2012. I initiated a systematic literature review to find out how user behavior in the use of IT artifacts is conceptualized in the literature in general, and to identify evaluation techniques for IT artifacts drawing on user behavior data. This is further explained in the literature review chapter X of this thesis.
Due to a lack of developer resources, I paused my PhD work and worked again as a system developer on two occasions, in February 2013 and May 2014. The work included upgrading the jQuery version on the U-CARE portal, fixing various bugs, and converting views in the system to Razor and HTML5. I was also involved in the development of the mobile version of the application. This was carried out in order to meet the demands of various users to support old browsers and mobile devices (as highlighted in section 4.5). Further, I implemented visualization techniques to present questionnaire data (as explained in section X) – a practical assignment that resonated well with my emerging research questions.
I have then been involved as a PhD student again from June 2014 until now. A detailed account of my roles in U-CARE is given further in section X. However, the interplay between roles gave me the opportunity to obtain empirical data (questionnaires filled in by participants in different studies in U-CARE) and improvement opportunities for RBL techniques.
The system development and design process in U-CARE was set up in accordance with agile values (Conboy, 2009), characterized by sprint reviews approximately every two weeks. In each sprint, a new set of tasks (features, improvements, bug fixes, etc.) was prioritized based on discussions including the product owner (a role played by a representative from U-CARE), developers and other stakeholders, leading to a sprint plan guiding development during the following two-week period. The review meetings had several recurring members representing different professions and academic disciplines, including psychology, caring sciences, economics, and information systems. Daily meetings among the developers were conducted to discuss impediments and possible actions to address them, so that smooth delivery of tasks could be achieved. There were also other measures among the developers, such as workshops and pair programming, to discuss technical issues and testing strategy and to learn from each other's work. In addition, external specialists and patient groups were invited to explore the software, followed by workshops in which they provided feedback to the developers.

Research Topic Emergence in the U-CARE setting


The U-CARE software was built to facilitate both psychosocial care online and research into the care provided online. The design and development of the online survey tool was intended to allow researchers to design online surveys with ease and deliver them to the relevant participants. At the inception of the U-CARE project, it was discussed whether to use an external survey service – such as SurveyMonkey – or to develop and integrate tailored survey tool features into the software. The latter was decided, for two main reasons. First, to keep control of all the survey data (which might contain very sensitive information). Second, to be able to design the tool to meet the specific needs of the U-CARE studies. The in-house development also allowed information systems researchers in U-CARE to test new design ideas and evaluate the practical implications of those ideas. Below, we elaborate on a few interesting design issues experienced in the design and development of the survey software in U-CARE: skip logic, research process automation in Randomized Controlled Trials (RCTs), the environment log, and respondent behavior logging.

Skip Logic
A key point in this regard is the branching design: participants are "tracked"
according to their responses in such a way that the only subsequent questions
seen will be the ones which are relevant on the basis of preceding responses.
This is an example of conditional branching, whereby the survey tool "would
be able to jump to a different instruction depending on the value of some data"
(Freiberger & Swaine, n.d., para. 4). This is a key component of what's per-
ceived as the decision-making "intelligence" of computers, and it plays an im-
portant role in the structure of online surveys. Another term that is often used
for this design element is "skip logic". University of Washington Information
Technology describes the role of skip logic: It "allows you to create custom
paths through your survey or quiz, showing the participants questions based
on their response to a previous question" (skip logic, para. 1). This makes the
survey relevant in a very clear way, because instead of generating a mass of
irrelevant data, the survey's structure is tailored to maximize relevance for
each and every person who takes it.

Figure 5. Setup for the branching condition.

The example above shows that if a user chooses the answer ‘Ja’ (which carries
a weight of 1) on question number 15, then question number 16 will be displayed
to the user. Question number 16 has a conditional equation
‘@q15@ == 1’, based on which it remains invisible until the condition is fulfilled.
In this way, the survey designer has the capability to configure conditions that
control when questions are displayed.
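To make the mechanism concrete, the following is a minimal sketch (not the actual U-CARE implementation; class and method names are assumptions) of how a simple display condition such as ‘@q15@ == 1’ could be evaluated: each @qN@ placeholder is substituted with the respondent's recorded answer weight, and the resulting equality decides whether the dependent question becomes visible.

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Minimal sketch: evaluate a display condition such as "@q15@ == 1".
// Only a single equality comparison is handled here; the real survey tool
// is described as supporting richer, C#-based conditional expressions.
public static class SkipLogic
{
    public static bool ShouldDisplay(string condition, IDictionary<int, int> answerWeights)
    {
        // Replace each @qN@ placeholder with the respondent's answer weight (or -1 if unanswered).
        string resolved = Regex.Replace(condition, @"@q(\d+)@", m =>
        {
            int questionId = int.Parse(m.Groups[1].Value);
            return answerWeights.TryGetValue(questionId, out int weight) ? weight.ToString() : "-1";
        });

        // Evaluate a single "left == right" comparison.
        string[] parts = resolved.Split(new[] { "==" }, StringSplitOptions.None);
        if (parts.Length != 2) return false;
        return int.TryParse(parts[0].Trim(), out int left)
            && int.TryParse(parts[1].Trim(), out int right)
            && left == right;
    }

    public static void Main()
    {
        var answers = new Dictionary<int, int> { [15] = 1 }; // respondent answered 'Ja' (weight 1) on question 15
        Console.WriteLine(ShouldDisplay("@q15@ == 1", answers)); // True -> question 16 is shown
    }
}
```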

Research Process Automation in Randomized Controlled Trials


A sophisticated feature in the U-CARE survey software is the mechanism to
automate parts of the research process based on questionnaire results. For instance,
in an online randomized controlled trial (RCT), this technique can be
used in a baseline survey, in combination with well-defined inclusion and exclusion
criteria, to place study participants in the treatment or the control
group of the study. The researchers define such conditions at the
time of designing the surveys. Effective randomization has enormous positive
effects on the quality of research, and automation of randomization may reduce
bias caused by human actors in the research process (Jadad & Enkin,
2008). In this context, it is clearly an effective mechanism to facilitate RCTs in
online studies.

Figure 6. Example setup of inclusion criteria.

As Figure 6 shows, the survey designer can define multiple equations in the
HADS questionnaire³ (Zigmond & Snaith, 1983) to set up inclusion criteria for
the RCT. The first equation, Ångestindex (anxiety score), is the sum of the
answers to all odd-numbered questions, and Depressionsindex (depression score)
is the sum of the answers to all even-numbered questions. The final equation
determines whether the user shall be entered into the RCT, depending on
whether the user scores more than 7 on either Ångestindex or Depressionsindex.
C# syntax allows complex conditions to be configured. In addition,
expressions can be defined based on previously defined expressions, as shown
in Figure 4.
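To illustrate the scoring logic described above, the sketch below (assumed names, not the actual U-CARE code) sums the odd-numbered HADS items into an anxiety index and the even-numbered items into a depression index, and includes the respondent in the RCT if either index exceeds 7.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of the HADS-based inclusion logic described above (names are assumptions).
public static class HadsInclusion
{
    // answers maps HADS item number (1-14) to the scored answer (0-3).
    public static bool IncludeInRct(IReadOnlyDictionary<int, int> answers)
    {
        int anxietyIndex = answers.Where(a => a.Key % 2 == 1).Sum(a => a.Value);     // Ångestindex: odd items
        int depressionIndex = answers.Where(a => a.Key % 2 == 0).Sum(a => a.Value);  // Depressionsindex: even items
        return anxietyIndex > 7 || depressionIndex > 7;                              // inclusion criterion
    }

    public static void Main()
    {
        // Example respondent: 2 points on every odd item, 0 on every even item.
        var answers = Enumerable.Range(1, 14).ToDictionary(i => i, i => i % 2 == 1 ? 2 : 0);
        Console.WriteLine(IncludeInRct(answers)); // anxiety index = 14 > 7 -> True
    }
}
```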

Environment Log
Supporting the web user experience on multiple devices is becoming a common
requirement for maximizing the usability of web applications. The advent of
smartphones and improvements in browsers have amplified the use of web
applications. While the limitations of early browsers are largely gone, it has become
common practice for application developers to detect device information so that
content can be served appropriately. With device detection, the HTTP headers that
browsers send as part of every request are examined; these are usually
sufficient to identify the browser properties.

³ The Hospital Anxiety and Depression Scale (HADS) has been widely used to screen patients
for anxiety and depression.
Figure 7. Environment log

The environment log collects information about web clients. These include types of browsers, operating systems, screen
resolutions, JavaScript versions, etc.

The environment log serves two main purposes (a minimal device detection sketch is given below):
• The survey uses a responsive design, and the environment log allows us to
serve clients with a better user experience. For example, if a client uses a
mobile device, the survey is presented in an appropriate, mobile-friendly
format, which gives a better user experience for that client.
• The JavaScript version and browser type allow the survey software to function
properly with logging activities (e.g., click events) and to present survey
content appropriately.
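As a hedged illustration of such device detection (assumed names, not the U-CARE implementation), the sketch below classifies a client from its User-Agent HTTP header so that an appropriate survey layout can be served and the environment log populated.

```csharp
using System;

// Illustrative sketch: classify the client from the User-Agent HTTP header
// so the survey can be rendered in a device-appropriate layout and the
// environment log can record browser/device information (assumed names).
public static class EnvironmentDetection
{
    public enum DeviceClass { Desktop, Mobile, Tablet }

    public static DeviceClass Classify(string userAgent)
    {
        if (string.IsNullOrEmpty(userAgent)) return DeviceClass.Desktop;
        string ua = userAgent.ToLowerInvariant();
        if (ua.Contains("ipad") || ua.Contains("tablet")) return DeviceClass.Tablet;
        if (ua.Contains("mobi") || ua.Contains("android") || ua.Contains("iphone")) return DeviceClass.Mobile;
        return DeviceClass.Desktop;
    }

    public static void Main()
    {
        string ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 12_0 like Mac OS X) AppleWebKit/605.1.15 Mobile/15E148";
        Console.WriteLine(Classify(ua)); // Mobile -> serve the mobile-friendly survey layout
    }
}
```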

Respondent Behavior Logging


Given the research context of this design process, logging was considered
important throughout the design of the U-CARE portal, including in the context of
survey features. At an early stage, the research team recognized
that logging respondents' behavior could serve as a source for investigating
and improving the quality of questionnaires. The idea of “Respondent Behavior
Logging” (RBL) is to track respondent activities while they answer questionnaires,
an opportunity that exists in the web context but not in paper-based surveys.
On the one hand, online questionnaires facilitate completely new ways for people
to interact with questions, and we want to exploit such opportunities. Sometimes,
however, we want the online questionnaire to be as similar as possible to its
corresponding paper version, since high similarity facilitates comparison between
the online study and previous paper-based studies. Either way, the online setting
provides novel opportunities to revisit questionnaires for quality improvements.

For example, if participants take an unusually long time on a
given question of a survey, this can be logged and potentially interpreted
as indicative of an issue with the structure of that question. This ability can be
understood as a means for conducting a sort of formative assessment (Crooks,
2001; Huhta, 2010). That is: the survey, via the process of being taken, is able
to generate data about how the survey itself may be improved. During the design
of the survey features, I developed an interest in this feedback loop and
its implications for facilitating perpetual quality improvement. The idea of respondent
behavior logging was especially appealing to explore further, since
it may provide large amounts of data that potentially hold significant value
for researchers and survey designers. For instance, the data may be analyzed to
understand the use of surveys on different platforms and devices. Thus, as part
of the U-CARE setting, the idea emerged to explore and better understand
logging of respondent behavior as a means to support questionnaire evaluation,
as well as interpretation of survey data.
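As a simple illustration of this feedback loop, the following sketch (assumed names and thresholds, not the U-CARE implementation) aggregates logged per-question durations and flags questions whose median response time stands out as candidates for review.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch: flag questions whose median response time is unusually long,
// as a starting point for formative evaluation of the questionnaire.
public static class SlowQuestionFlagging
{
    public static IEnumerable<int> FlagSlowQuestions(
        IDictionary<int, List<double>> millisecondsPerQuestion, double factor = 2.0)
    {
        var medians = millisecondsPerQuestion.ToDictionary(kv => kv.Key, kv => Median(kv.Value));
        double overallMedian = Median(medians.Values.ToList());
        // Flag questions whose median duration exceeds the overall median by the given factor.
        return medians.Where(kv => kv.Value > factor * overallMedian).Select(kv => kv.Key);
    }

    private static double Median(List<double> values)
    {
        var sorted = values.OrderBy(v => v).ToList();
        int n = sorted.Count;
        return n % 2 == 1 ? sorted[n / 2] : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    public static void Main()
    {
        var durations = new Dictionary<int, List<double>>
        {
            [1] = new List<double> { 4000, 5000, 4500 },
            [2] = new List<double> { 4200, 3900, 5100 },
            [3] = new List<double> { 21000, 18000, 25000 }, // candidate design flaw
        };
        Console.WriteLine(string.Join(", ", FlagSlowQuestions(durations))); // 3
    }
}
```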

In developing this system, that is, the survey design in general, the work has been
divided between two main actors. I have been involved in creating the survey
design, while my colleague Jonas Sjöström, amongst his other invaluable
contributions, has conceptualized the idea as Respondent Behavior Logging (RBL).
The aim was to create a survey design mechanism
according to the needs of the therapists and the researchers in U-CARE. The
design and development work for surveys has grown more sophisticated over
the development cycles; to our knowledge, there has not previously been a
survey design mechanism that is self-evaluating in this way. The ability
to store every action of the respondent, using AJAX technology, enables us to
track the respondent's behavior in detail. In this way, I have been able to identify
specific traits of the participants. After developing the system, I have taken
part in debugging and improving the software.

Extensive logging of user actions was approved in the ethical approval for
U-CARE, based on the need to be able to study user behavior in relation to
treatment results. Such knowledge, i.e., ‘white-boxing’ user behavior, provides
an important opportunity to better understand the design of eHealth systems.
A subsequent ethical approval relates to the evaluation in stage X,
where we examine actual patient behaviors when filling in questionnaires
in the U-CARE context.

Figure 8. Question and Answer log

The Question and Answer log shows details about each action made by respondents while filling in questionnaires. Among
these data are: session, which uniquely identifies a web client instantiation for each user; action_id, which determines the type
of action (view, add, edit, etc.); milliseconds, which gives the duration of each action; and parameters, which provides details
about the question, etc. Through logging of data on the usage of online surveys, we are thus in a position to obtain
a richer representation of the data.
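To make the log structure concrete, the following is a minimal sketch (assumed names and types, not the actual U-CARE schema) of a log entry holding the fields described above, together with a trivial aggregation of time spent per question.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Sketch of a respondent behavior log entry with the fields described above
// (session, action type, duration in milliseconds, and free-form parameters).
// Names and types are assumptions for illustration.
public class RblLogEntry
{
    public Guid Session { get; set; }          // uniquely identifies a web client instantiation
    public string ActionType { get; set; }     // e.g. "view", "add", "edit"
    public int QuestionId { get; set; }
    public long Milliseconds { get; set; }     // duration of the action
    public string Parameters { get; set; }     // details about the question/answer
}

public static class RblAnalysis
{
    // Total time spent per question, summed over all logged actions.
    public static Dictionary<int, long> TimePerQuestion(IEnumerable<RblLogEntry> log) =>
        log.GroupBy(e => e.QuestionId)
           .ToDictionary(g => g.Key, g => g.Sum(e => e.Milliseconds));

    public static void Main()
    {
        var log = new List<RblLogEntry>
        {
            new RblLogEntry { Session = Guid.NewGuid(), ActionType = "view", QuestionId = 1, Milliseconds = 3000 },
            new RblLogEntry { Session = Guid.NewGuid(), ActionType = "edit", QuestionId = 1, Milliseconds = 2000 },
        };
        Console.WriteLine(TimePerQuestion(log)[1]); // 5000
    }
}
```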

Theoretical perspective
In this section, we account for some fundamental theoretical starting points
for this research. First, a pragmatist interest in understanding the use of infor-
mation technology in relation to human action. Second, a theoretical reflection
on the characteristics of the online medium and its implications for conducting
surveys. Third, the role of information technology to support automation of
tasks and complex logic to support human activity. These underlying theories
form a perspective that has influenced the entire research process, as elabo-
rated in section X.X.

A pragmatist interest in human action


This research builds on the philosophy of pragmatism, which leads us to seek
an understanding of the world as an ongoing process of action (Blumer, 1969).
The RBL concept is an attempt to understand the respondent’s situation when
filling in a questionnaire. Our view is based on Dewey’s (1938) conception of
a situation as a contextual whole, consisting of objects and events: “What is
designated by the word ‘situation’ is not a single object or event or set of
events. For we never experience nor form judgments about objects and events
in isolation, but only in connection with a contextual whole. This latter is what
is called a situation.” Our aim is to provide a conceptual understanding of re-
spondent behavior that can bring us more understanding of the respondent’s
interpretation of the questionnaire by facilitating inquiry into the events
through which their answers to the questionnaire emerged.
The pragmatic starting point has implications for conceptual modeling
(Ågerfalk, 2010), a topic central to the IS field (Wyssusek, 2006). A common
perspective in the literature is that information systems are symbolic representations
of reality (Wand & Wang, 1996). An alternative view, rooted in pragmatism,
constitutes a critique of the representational view (Hirschheim, Klein,
& Lyytinen, 1995; Holm, 1996; P. Ågerfalk, 2010). Drawing on the principles
of pragmatism, language philosophers (Habermas, 1984; Searle, 1969) have
argued that language (and by extension also information systems) does not
just describe the (state of the) world but also brings about change in the world.
From such a pragmatic viewpoint, IT artifacts are instruments used in the
performance of action (Goldkuhl & Lyytinen, 1982; Sjöström, 2010) and
mediators of actors' intentions. In the case of RBL, we want to create a formalized
representation of respondent behavior.
Pragmatism has been linked with a number of research approaches that aim
to improve the quality of research and of the results attained from particular
research efforts (Goldkuhl, 2005). These approaches, intertwined
with pragmatism, have paved the way for RBL
to offer improvement opportunities for surveys based on an understanding of
the logging data attained from the use of those surveys. For example, a participant may
change answers to questions depending on the environment they are in or the time
they have been allocated. A pragmatist perspective on human action and
the context of action therefore served as a basis for investigating online surveys in U-
CARE.
Given the underlying perspective of pragmatism, this research emphasizes
how to understand people's actions in the context of online surveys. Part of
understanding action includes a conceptualization that accounts for the actions
the respondents have taken in the process of providing answers to a questionnaire.
These actions reveal to us when the respondent hesitated, changed their
mind, got tired of answering questions, and so on.

Characterization of the online medium


In order to characterize the online medium, we distinguish between techno-
logical opportunities and generic features of human-to-human communica-
tion. Regarding technological opportunities, online surveys clearly offer new
ways to communicate, and new ways to monitor communication. The IP stack
allows us to trace part of the user context, e.g. the whereabouts of the user, the
time a request is made, et cetera. Regarding generic features of human-to-hu-
man communication, we draw from Clark (1996), who suggests a ten-dimen-
sional framework to understand characteristics of communication in different
settings (Table 1).
Table 1. Ten dimensions of communication (after Clark, 1996)
Immediacy refers to the presence of both participants during the conversation.
• Co-presence: respondents do not share the same physical environment.
• Visibility: respondents are not visible.
• Audibility: respondents are not heard during the survey.
• Instantaneity: not applicable.
Medium refers to the way in which the conversation takes place; simultaneity occurs when both participants can send and receive messages at the same time.
• Evanescence: the medium is realized through the Internet, using Internet-enabled devices.
• Recordlessness: online actions can be logged and recorded.
• Simultaneity: it is possible to generate responses based on respondents' actions while they take surveys.
Control relates to the constraints that both participants need to follow in the conversation.
• Extemporaneity: respondents' actions can be observed over extended amounts of time.
• Self-determination: respondents' actions are facilitated by means of technology.
• Self-expression: opportunities to remain either identified or anonymous.

At the inception of this research process, some of the dimensions above were
at the core of the discussion, since they are clearly related to online surveys.
When it comes to conducting surveys, a number of features are
critical and can affect the data gathering mechanism. One of the dimensions
highlighted in Clark's model is recordlessness. Unlike traditional face-to-face
conversation, the communication that takes place over the Internet can be saved
and kept as a record. This information can later be revisited and used for decision
making (Clark, 1996). Similarly, simultaneity works differently
in a face-to-face conversation than online. When surveys are
conducted over the Internet, participants can revise their answers as they
interpret the questions they are asked. If the element of simultaneity can be
implemented in the online process, it can add more validity to their responses.
This can be one of the areas of opportunity to obtain more valuable feedback in
‘real time’ (Clark, 1996).
The actions of participants in online surveys are often constrained by the
technology. The element of self-determination therefore takes a different form
than in a face-to-face conversation, where such constraints are fewer. This
raises an opportunity to develop mechanisms that strengthen self-determination
(Sjöström & Alfonsson, 2012). In this research, the concept of Respondent
Behavior Logging provides such an opportunity: it can help improve the design
of questionnaires based on the behavioral trail that participants leave while
answering surveys. If the constraints of technology can be complemented with
systematic means to understand respondents' actions online, this paves the way
for new research opportunities. For example, giving respondents the opportunity
to change a previously given answer can strengthen their self-determination, and a
logged change of answer signals that they were not confident about the first answer
and later changed their mind.
Online tools can produce quality outcomes without the intervention of a person.
Targeting the quality of responses, business data and processes can be improved
with the support of the real end users. Besides quality measurement and
maintenance, analytics is another major advantage provided by the online medium.
Records are stored safely and are accessible at any time, encouraging
organizations to conduct effective research. Instead of hypothetical conclusions,
organizations can rely on the qualitative and quantitative data associated with
analytical research. Thus, online data collection for the purpose of conducting
surveys offers a broad range of opportunities. The medium can be used to
redefine elements of visibility, recordlessness, identity, extemporaneity, and
simultaneity when surveys are conducted on the Internet. Technology
can thus help design questionnaires in engaging ways. Well-designed online
questionnaires are capable of attracting more participants and inspiring them to
complete the survey. Surveys conducted online can thereby improve the data
collection process and yield better results.

Summary: Implications for research


Drawing from the practical design setting outlined in this chapter, the idea of
RBL emerged: to further explore logging of participants' answers while attending
a survey online. The data produced by the logging entails a promising
post-hoc strategy for evaluating questionnaires' effectiveness and usefulness.
The design of RBL is therefore part of a larger design research program
in which a fairly complex piece of software is built in the context of
eHealth research. A summary of the ‘bullet points’ that constitute a rationale for
RBL, and that have informed the design of the RBL concepts, is given below.

• Online surveys are growing in importance for researchers.
• The online setting provides opportunities for the design, execution,
and evaluation of online surveys that do not exist in paper-based
surveys, due to the change in medium, which allows us:
o to reach participants at very low cost;
o to allow respondents to change answers with ease;
o to allow researchers to revisit respondents' actions at
any given time;
o to track time much more precisely;
o to use the IP stack to trace part of the user context, e.g.
the whereabouts of the user, the time a request is made,
et cetera.
• The design of surveys within U-CARE demands some unique features:
o support for a variety of devices, to ensure maximum
participation;
o support for RCTs;
o multiple views of questions, to ensure the survey's usability;
o techniques to show only relevant questions to respondents;
o questionnaires that are easy to design, with maximum
flexibility for the designers.
• Comprehensive collection of data based on online surveys.
• Given the underlying perspective of pragmatism, this research emphasizes
how to understand people's actions in the context of online
surveys.
• The RBL concept does not only account for the theoretical approach,
but also looks at the manner in which an action-oriented logging
mechanism using web technologies is applied to establish a process
of inquiry that has practical impact on the surveys used in U-CARE
studies.

Research Approach

As discussed above, the research process in this thesis is part of a larger research
setting in which a fairly complex piece of software was designed and
used to conduct randomized controlled trials in the context of eHealth research.
In this chapter, we present our design science research approach (section 0),
which was enacted within the U-CARE research setting. Note that design
science research was employed in the larger setting as well as in the work
reported in this thesis. We provide the rationale for the selected approach
given the research questions at hand (section 0). In section 0, we appropriate DSR
guidelines for this research. Section 0 provides insight into how a literature
review was conducted to identify the knowledge base for this work, followed
by a discussion about the ways in which this research contributes to the knowledge
base.

A Staged Design Science Research Process


In this section we give an overview of the research process. Subsequent sec-
tions will provide a deeper argumentation about the underlying rationale.
Research was conducted in five stages (Stage 0 to Stage IV), as shown in Table 2.
Essentially, the process is a design science research approach (Gregor & Hevner, 2013;
Hevner, March, Park, & Ram, 2004) consisting of (i) drawing relevance from
practice (i.e., the U-CARE eHealth practice), (ii) performing design and evaluation,
and (iii) applying methods and drawing theoretical influences from the
knowledge base. In doing so, we propose constructs and models that shape the
static and dynamic models of RBL, and corresponding methods and techniques
to evaluate online questionnaires. In addition, we instantiate these concepts
in software. Through the evaluation activities, we assess the RBL concepts
and techniques as outlined below. The whole process is then the basis for discussing
the concept at hand, the ways in which it contributes to the IS knowledge
base, and its implications for research and practice.
Table 2. Overview of DSR activities
Stage 0. Emergence of RBL (November 2010 – December 2012)
• Design results: Online survey design
• Practice link (relevance): U-CARE RCT using online surveys
• Evaluation: Workshops, sprint retrospectives, etc.
• Publication(s): DESRIST paper (Sjöström et al., 2012)

Stage I. Conceptual design (January 2012 – July 2012)
• Design results: RBL conceptual models
• Practice link (relevance): U-CARE design process
• Knowledge base (rigor): AOCM, pragmatic philosophy, DSR methodology
• Evaluation: Expository instantiation of software (proof-of-concept)

Stage II. RBL Visualization (September 2012 – June 2015)
• Design results: RBL visualization techniques
• Practice link (relevance): U-CARE design process, preliminary experiences from a research trial
• Knowledge base (rigor): IS and HCI literature review, questionnaire design literature, visualization techniques, web visualization API
• Evaluation: Focus group evaluation (proof-of-value), ethical approval process

Stage III. RBL Statistical Measures (January 2016 – January 2017)
• Design results: RBL measurement constructs
• Practice link (relevance): Field work (see evaluation)
• Knowledge base (rigor): Extended literature review (eHealth), questionnaire design principles, descriptive statistics, binomial logistic regression
• Evaluation: Experimental evaluation (proof-of-value)

Stage IV. Formalization of learning (January 2017 – December 2017)
• Design results: RBL framework and design principles
• Practice link (relevance): 3+ years of data collection from research trials
• Knowledge base (rigor): Additional literature influences based on peer-review feedback, plus ethical and legal considerations

This design science research has been divided into five different
stages as shown in the figure below. In this section, we account for
what has been done at each stage of the design science research pro-
cess.

Figure 9: Staged Design Process

Stage 0: Emergence of RBL

This stage was performed between December 2010 and 2012 and involved
the design of the questionnaire features, the implementation of the skip logic,
and the implementation of the RCT support. The system was thereafter
checked for fitness using workshops and sprint retrospectives. The implementation
of the skip logic at this stage of the project enabled designers to create a
branching system in the questionnaires: the survey tool is able to jump to a
given point depending on the answer given by the respondent (see section 2.3.1
for additional details). In addition, the idea of a Randomized Controlled Trial
(RCT) was implemented, which is useful in grouping the respondents depending
on the conditions set by the designer at the design stage of the questionnaires.
In view of this, the designer can define multiple questions that are used as
inclusion criteria in a survey. This was designed using a system of equations
whose score determines whether an individual is included in or excluded from
a study group (see section 2.3.2 for more details). Additionally, the environment
log is capable of collecting user data such as browser type, operating system,
JavaScript version, and screen resolution for further analysis. This combination
of features enhances the artifact's capability of tracking and studying the behavior
of the respondents. It thus provided the foundation for later stages to continue
the design work and conceptualize RBL.

Stage I: Conceptual Design

We outlined the theoretical perspective, including the pragmatist underpinnings,
in section 2.4. In stage I, drawing mainly from a pragmatist philosophy
and a general interest in extensive action logging, a first version of
RBL was designed based on the dynamic and static models introduced
by Sjöström et al. (2013) (see sections 0 – 0). In addition, the design and research
context was important in setting the constraints and opportunities for this
first stage, aiming at logging respondent behavior in a manner that allowed a
fairly detailed reconstruction of how respondents filled in questionnaires. The
RBL concepts were implemented in the software, i.e. a proof-of-concept implementation.
Data has been collected in the trials over several years using the
original RBL implementation.
Stage II: RBL Visualizations

Stage II of this research was carried out between 2013 and 2015.
During this stage, a systematic literature review was performed, mainly
in the areas of Information Systems (IS), Human-Computer Interaction (HCI)
and E-Health (see chapter 0). Further, I summarize the literature reviewed
in this research, together with its findings, in section 4.2.
As a means to make RBL data interpretable and meaningful for stakehold-
ers in the design practice, a set of visualization techniques were crafted and
implemented into the software. The knowledge base on questionnaire evalua-
tion was factored in, primarily the idea of being able to assess time aspects,
phrasing/quality of single questions, and structural issues with the question-
naire.
On the basis of collected RBL data, a series of visualizations was rendered
using the software. A focus group consisting of experts from the design practice
was organized (see chapter 0). The visualizations were presented to the
focus group, and the participants were encouraged to discuss the utility of
these representations to (i) identify design flaws and suggestions for improve-
ments of the questionnaire and (ii) discuss if they would interpret the collected
data differently given the way they made sense of the RBL visualization.
Evaluation was thus based on the qualitative data collected in the focus group
session.
Ideally – depending on the ethical approval process – we can create visualizations
from production data collected in the studies and conduct focus
groups with U-CARE stakeholders. If no ethical approval is given, the focus
groups may be conducted anyway, based on ‘fictive’ data or data from new
experiments with respondents outside the health context. Thereby, we will have
an informed argument about the extent to which RBL analysis works to detect
design flaws in a survey instrument.

Stage III: RBL Statistical Measures

Having accomplished the above designs, it was necessary to design a method
for measuring the level of success of the RBL constructs. At this stage, a survey
experiment with 120 students was conducted in order to find out the potential
usefulness of RBL (see chapter 0 and appendices X and Y). In brief, three
questionnaires were designed and distributed among three different groups,
each group containing 40 students. One questionnaire was the ideal questionnaire,
with no flaws in it, while the other two were crafted with different problems,
with the intent of revealing different kinds of flaws (i.e., structural problems
and logical errors). The idea was to see how RBL reveals these issues from the
usage of these surveys, and whether differences between the surveys can be
statistically measured and understood as a proof of value (see further details
in section 6.2 of the report).

In addition, a machine learning approach based on binomial logistic regression was applied to test the RBL measures (see chapter 0).

Stage IV: Formalization of Learning

This is the last stage of the design process and was conducted
between 2017 and 2019. The aim of this stage is to further review the
design of the system based on the feedback obtained in the previous
stages 0 to III, together with a review of the RBL framework and design
principles. From the analysis in section 5.5, it was possible to develop a
system that can track the behavior of respondents during an online survey.
The evaluations (see chapter X) show how RBL-based questionnaire evaluation
techniques, drawing on behavioral data, can be used in practice. The tracking
of users' log data can be very significant in determining flaws in the design
of a questionnaire. It may also point out errors in the questionnaire and act
as a basis upon which further improvements can be made to the study.
Moreover, the comprehension of log data may enable researchers to enhance
their skills in future questionnaire design.
Finally, we aim to conduct a final evaluation of RBL using the data collected
over 5+ years in U-CARE research trials. Since such an evaluation is
based on real clinical data, an ethical approval process was needed. Ethical
approval was granted in spring 2016, allowing for an analysis of RBL data
collected in U-CARE. Based on the RBL measurements, we will make statements
(hypotheses) about a selected well-known instrument used in eHealth
research. Those hypotheses will then be examined by surveying the literature
about the instrument, and comparing earlier tests and critiques of the instrument
with the hypotheses.

Rationale for a DSR approach
For several reasons, Design Science Research (DSR) is a suitable approach to
find answers to the research questions posed in this thesis. We will address in
turn the reasons for adopting DSR.

First, in DSR an artifact has to be developed – in this case RBL – that is unique,
relevant and innovative, that solves real-world problems, and that is rigorously
evaluated (Hevner et al., 2004). In that regard, the DSR approach is well suited
to the nature of the work being done here.

This research has emerged as an opportunity from the design of the U-CARE
online platform. In chapter 0, we provided the research context, from which
we can conclude that this work is part of a larger DSR project and therefore
inherits an orientation towards solving real problems, in this case making health
care activities possible via the U-CARE online portal. However, there is little
or no prior work on respondent behavior in the online survey context. Following
Gregor & Hevner (2013), this means that the phenomenon at hand is suitable
for inquiry using a DSR approach. In this context, the way a survey can be
evaluated based on its usage, given the U-CARE research setting, makes the
problem distinctive, and the solution to such a problem is hard to associate with
any existing solutions. Academic software often differs from commercially
developed software (Groen et al. 2015). U-CARE collects sensitive data such as
personal identity numbers, emails, and mobile numbers, which cannot be shared
with any third-party online survey software. Even though there is software that
functions with anonymous users, the survey questions themselves are highly
sensitive, and even the slightest chance of tracing back a user's mental or physical
condition is considered unethical and a violation of the code of conduct approved
for the U-CARE research. Moreover, the need for customized designs of online
surveys within U-CARE differs considerably from what traditional online survey
software can offer. The fact that participants are randomized into different groups
based on the scores they obtain from answering surveys is a unique feature within
the U-CARE online portal. Therefore, many existing tools, even those with
mechanisms to understand users' online behavior, are not applicable within the
U-CARE research context.
According to DSR, the aim is to create design-oriented knowledge and help
us understand a new problem domain. The idea of RBL is an example of design
knowledge that has emerged due to the change in medium: the transition from
paper-based surveys to the online medium has opened a new wave of challenges
for the U-CARE researchers that can diminish the benefits of having a cost- and
time-effective online survey mechanism.

Application of DSR guidelines


The staged design process described above emerged during the first two years
of research. The process was originally inspired by the practical problem at hand,
i.e. the design and execution of web-based surveys in the U-CARE context.
During stage I, the research questions were articulated, and the DSR approach
(see Figure 10) was used to outline research stages II – IV.

Figure 10: The DSR Framework (Hevner et al., 2004)

The framework relates to our description of each stage (Table 2). The stage
descriptions include the practice link, i.e. how needs in the environment influenced
design and development. Further, the evaluations conducted in each
stage may relate to the environment when they include naturalistic evaluation
(Venable et al., 2015). Evaluation may also be done outside the practice environment,
i.e. through artificial evaluation (Venable et al., 2015). Each stage
also builds on the knowledge base by incorporating literature studies that inform
the design and/or evaluation in that stage. Finally, publications from the stages
constitute contributions to the knowledge base.
Hevner et al. (2004) provide a set of guidelines for conducting DSR. Be-
low, we elaborate on how these guidelines have been applied to this research.

Guideline 1: Design as an Artifact


Hevner et al. (2004, p. 83) state that DSR must “produce a viable artifact in
the form of a construct, a model, a method, or an instantiation.” The outcome
of this research includes the design and development of a framework (Chapter
0) consisting of constructs and models, as well as a software instantiation of the
conceptual RBL framework. Additionally, we have contributed the entire
software artifact that facilitates online surveys.

Guideline 2: Problem Relevance


DSR should lead to “technology-based solutions to important and relevant
business problems” (Hevner et al 2004, p. 83). As argued in the introduction,
there is an opportunity to develop new knowledge and technology to support
the design and evaluation of online surveys. The increased use of such surveys
among researchers and in business signals the relevance of this research. Additionally,
the practical design work in the U-CARE research environment
shows the relevance of more knowledge in the immediate practice situation.
The integration of IS (Information Systems) research into the multi-discipli-
nary context promotes the relevance of the research. The collaboration with
senior researchers as well as practitioners in other fields than IS has been con-
ceived as imperative in understanding the problem domain (psychosocial care)
and the clinical needs.

Guideline 3: Design Evaluation


Hevner et al. (2004, p. 83) state that “The utility, quality, and efficacy of a
design artifact must be rigorously demonstrated via well-executed evaluation
methods.” This research follows an evaluation strategy based on the Framework for
Evaluation in Design Science Research (FEDS) (Venable et al., 2016) (see
chapter 0). As shown in the staged DSR process (section X.X), RBL is evaluated
using multiple strategies, including qualitative evaluation using a focus
group, experimental evaluation with two data analysis approaches (hypothesis
testing using MANOVA and a machine learning approach based on
binomial logistic regression), and a large-scale investigation of respondent
behavior when filling in the HADS questionnaire in the U-CARE context.
Guideline 4: Research Contribution
“Effective design-science research must provide clear and verifiable contributions
in the areas of the design artifact, design foundations, and/or design
methodologies” (Hevner et al., 2004, p. 83). The idea of RBL in this research
is new. When considering the application domain maturity, RBL provides a
unique evaluation technique compared to other evaluation techniques.
RBL collects real-time data from the online questionnaire. The interpretation
of the collected data logs can help improve the questionnaire
structure as well as identify misconceptions in the questionnaire that reduce
respondents' understanding of it.

Guideline 5: Research Rigor


“Design-science research relies upon the application of rigorous methods in
both the construction and evaluation of the design artifact” (Hevner et al., 2004,
p. 83). Rigor concerns both domain knowledge (in this case regarding previous
research relating to RBL) and the application of appropriate methodologies
in research. Section 3.4 outlines the process of investigating the domain
knowledge base. Regarding the application of appropriate methodologies, the
research environment tends to emphasize issues of relevance and rigor, since
there is a continual need within the group to provide an agreed-upon rationale
for design decisions. The rigor cycle can be recognized in that the design
process included workshops in which stakeholders from different
areas were involved to verify the strengths and limitations of the questionnaire
logging mechanism. In addition, as discussed above, we have applied various
established evaluation methods in order to determine the qualities of the RBL
concept as a means to evaluate and improve online questionnaires.

Guideline 6: Design as a Search Process


“The search for an effective artifact requires utilizing available means to reach
desired ends while satisfying laws in the problem environment” (Hevner et al.,
2004, p. 83). In this regard, the development of RBL is one that shifts perspective
between the design process and the designed artifact as the problem solving
continues. The artifact has evolved through a recurring evaluation process,
which drives the need to improve the product as well as to change or improve
the design processes. This is a loop that continues until a quality product is
produced. Given this aspect of the design-science paradigm, the sprint-based
development method fits well. The agile sprint development in this research
loops every two weeks until a quality questionnaire/survey design tool is
developed. In each sprint, through a review meeting, the sprint is evaluated
and new tasks are prioritized for the next sprint, allowing each sprint to improve
on the previous one.
Guideline 7: Communication of Research
“Design-science research must be presented effectively both to technology-
oriented as well as management-oriented audiences” (Hevner et al., 2004, p.
83). In this regard, the RBL framework is designed to support designers/researchers
(e.g. through conceptual models, user guidelines, and workshops) and other
stakeholders (e.g. through the visualization techniques and the implementation of
RCTs). The design process of the RBL framework included researchers from multiple
disciplines and successfully resulted in a software design. The utility of the
RBL software has been assessed, challenged, and improved over many design
cycles. Workshops were held to showcase the work, and through reflection
and feedback the design of RBL was adapted to address larger research impediments.
Therefore, this thesis takes a DSR approach to clarify the implications
of RBL design, both in practice and in research, to a broader audience.

Literature Review Strategy


An essential aspect of research is to build on the existing knowledge base. In
this section, the process of reviewing the literature is explained, followed by
a reflection about two major types of knowledge that inform DSR. A systematic
literature review was conducted at an early stage of the research. Below we outline
the process following Okoli's (2015) eight steps.

Identify the purpose


The focus of the literature review was to find out how user behavior is conceptualized
in the literature in general, and to identify evaluation techniques
drawing from user behavior data. Due to the lack of information on evaluation
strategies for online surveys in the information systems literature, a systematic
literature review was needed in this area. The lack of efficient
methods for evaluating the effectiveness of online surveys and questionnaires
necessitated more research concerning the evaluation techniques that can be
used for online surveys.

Draft protocol
The literature review was done in the areas of Information Systems (IS), Human-
Computer Interaction (HCI) and E-Health. IS being the main area of interest,
we took a systematic review approach towards it, which resulted in a
process of conducting the literature review in three steps based on the Webster &
Watson (2002) strategy (see chapter 0). The areas of HCI and E-Health are also
important, but more limited search operations were conducted there, given that
the primary objective of this research is to contribute mainly to the IS community.
This research is situated in the U-CARE E-Health project, and it is therefore
imperative to understand what can be learned from the E-Health community
regarding evaluation techniques based on users' online traits. HCI is also salient
in revealing techniques related to logging, and in understanding how human
interactions with technology are traced and used to improve human practices.

Apply practical screen


A set of keywords was developed and a mechanism for conducting search operations
within the above-mentioned areas was planned. First, the search was applied
in the area of Information Systems (IS). Eight journals were selected as the
sources (often known as the ‘basket of eight’) for the present literature review
(see chapter 0). The search was followed by two consecutive iterations. After
the first iteration, new keywords were merged and references were collected
from the articles that were found relevant. The second iteration was then conducted
with relatively improved keywords within the same search space (the basket of
eight). Separate search operations were conducted in the fields of HCI and
E-Health.

Search for literature


The search process resulted in many articles. Each article was reviewed for
relevance by carefully reading its title, abstract, and research method.
Although a large number of articles were found, only a few remained relevant
for this research purpose (see chapter 0).

Extract data
The first round of collected articles provided us with more relevant references
and keywords. The reference articles were collected and a new set of relatively
improved keywords was formed, which led to the second round of search operations.

Appraise quality
After consecutive search iterations, we were able to narrow the results down
to a smaller number of more relevant articles.

Synthesize studies
All the relevant articles were categorically summarized and reflected upon with
respect to each discipline (IS, HCI, E-Health) in order to develop a comprehensive
understanding of how logging of user behavior is conceptualized and used
within the respective areas (see chapters 0, 0, 0). Special attention
was given to the area of Information Systems.
Write the review
We account for the articles, and delineate their relevance to our research, in
chapters 0, 0, and 0.

DSR Knowledge types


Design-science research knowledge can be descriptive knowledge (Ω or
omega) or prescriptive knowledge (Λ or lambda) (Gregor & Hevner, 2013).
Descriptive knowledge is the ‘what’ knowledge about the natural phenomena
as well as the laws and regulations concerning phenomena. Prescriptive
knowledge is the ‘how’ knowledge in relation to human-built artifacts. Pre-
scriptive knowledge relates to the methods, instantiations, model and con-
structs in addition to design theory of the artifact.
The Ω knowledge is that there are ways to conceptualize user behavior in
general. Barki et al. (174) recognize that there are approaches to conceptualize
and measure the construct of IS use, although the theoretical advances in this
regard are insufficient. These are conceptualizations that focus solely on
technology interaction behaviors, through the construct of use-related information
system activity. There is a wide variety of conceptualizations of user
behavior in information systems as a whole. Nevertheless, very few, if any, exist
in the context of using respondents' activities in surveys in order to
improve those same surveys. We thus see a need to further conceptualize user
behavior.
The Λ knowledge is that there are third-party tools to support the design
of online surveys, and we therefore need to investigate those external tools.
However, the research context of the RBL artifact makes it exclusively sensitive.
We believe this sensitivity to context makes other tools unsuitable for data
collection in the U-CARE online studies.

DSR Knowledge Contribution framework


This section presents insights on how best to understand and position
the contributions of the DSR project (Gregor & Hevner, 343). The knowledge
contribution of a DSR project is difficult to identify, since one has to recognize
the nature of the artifact being designed, the audience to communicate to, the
publication outlet, and the state of the knowledge field. In positioning the
knowledge contribution of a DSR project, the guiding elements are two
dimensions and two difficulties. The two dimensions are application domain
maturity, i.e. the maturity of the problems or opportunities, and solution maturity,
i.e. the availability of existing artifacts as solutions (Hevner, 16). The two
difficulties are where to draw the line (subjectivity) and the fact that nothing is
entirely (really) new; everything comes to exist based on something else
(Hevner, 16). Based on these considerations, there are four quadrants in the
DSR knowledge contribution framework, as shown below.

Figure 11: DSR Knowledge Contribution Framework

The invention quadrant refers to DSR work that leads to a radical breakthrough
(a true invention). It is a pure departure from accepted ways of thinking and
doing. DSR projects that fall in this quadrant have little understanding of the
existing problem context, and the identified problem does not have existing
artifacts as solutions (Hevner, 18). According to Gregor & Hevner (2013), the
invention process is best described as an exploratory search over a difficult
problem space that requires cognitive skills such as imagination, curiosity,
insight, creativity and knowledge from various realms in order to identify a
feasible solution.

The improvement quadrant has the goal of providing better solutions, i.e. more
effective and efficient processes, products, technologies, services or
ideas (Gregor & Hevner, 346). This quadrant applies to DSR projects that address
a mature problem context but where there is a need to produce solutions that are
more effective.
The exaptation quadrant covers situations where effective artifacts exist in
related problem areas and these artifacts can be adapted to a new problem
context (Gregor & Hevner, 347). The quadrant allows for the utilization of
knowledge, in its current or refined form, in a new area of application.
In the routine design quadrant, known solutions are used for known problems.
In this quadrant, existing knowledge is applied to familiar problem areas as a
matter of routine.

In this study, the DSR project involves the creation of an artifact where the
problem context is a questionnaire/survey tool. The artifact aims to provide
a way to carry out online surveys and, at the same time, to evaluate the quality
of the questionnaire based on its use. The artifact is structured, first, to enable
researchers to develop online questionnaires. Second, the artifact enables the
researchers to analyze the effectiveness of the questionnaire through the
collection of data from its use. Third, the literature presented in this research
shows that there is no existing artifact that solves this problem context. In this
case, we believe the research falls between the invention and exaptation quadrants.
The research can be identified with the invention quadrant since it provides
RBL as a new concept when considering solution maturity and application
domain maturity. Considering the low solution maturity, there are no other
known solutions for the problem context as framed in this research. The existence
of a solution depends on this research: if successful, the result is a novel artifact,
an invention. RBL is an attempt both to better conceptualize the problem domain
and to provide solution guidance. When considering the application domain
maturity, RBL is a novel way to evaluate questionnaires compared to existing
evaluation techniques. RBL collects real-time data from the online questionnaire.
The interpretation of the collected data logs can help improve the questionnaire
structure and identify misconceptions in the questionnaire that reduce respondents'
understanding of it. RBL may also lead to the emergence of new ideas about the
questionnaire and its design features after the evaluation process.
It can also be argued that the idea of logging user behavior is not entirely
new. Therefore, the contribution also resides in the exaptation quadrant. There
are studies that track respondents' behaviors, in the sense that the research
questions are meant to identify the effects that a certain behavior or perception
on the part of the respondent has on outcomes. But for this problem context
(i.e. online surveys and strategies for online survey design), there is no existing
artifact that could be improved such that the research would fall in the improvement
quadrant. Rather, RBL may provide solutions for the current research context
based on ideas of logging user behavior, which amounts to applying an effective
artifact within a new problem domain.

DSR Research Contributions Types

There is a vivid discussion in information systems about what types of contributions
to expect from DSR. Gregor & Hevner (2013) propose three maturity
levels of DSR outcomes, where a DSR project can produce artifacts at one or
more of these levels.

Figure 12. Design Science Research contribution types

At level one, an artifact is developed in the form of processes and products; at
level two, the project produces an artifact in the form of constructs, models,
design principles, technological rules or methods (nascent design theory); and
at level three, the DSR project produces the artifact as a well-developed design
theory about the area under study (Gregor & Hevner, 2013, p. 341). These
levels represent levels of maturity in abstraction: level one is the least abstract
while level three is the most abstract, which corresponds directly to the level of
knowledge maturity.
According to Topi and Tucker (22-14), the demonstration of an invention
or novel artifact can itself be a research contribution, representing design ideas
and design theories that are yet to be expressed, formalized and fully understood.
This research provides a novel artifact, and it therefore presents an idea and a
design that have not yet been fully articulated, formalized and understood.
The artifact designed is a questionnaire/survey tool, to be used for online studies
and questionnaire evaluation. The questionnaire/survey artifact is produced in the
form of a product and a process. The product is the tool ready for use, and the
main processes are the creation of online surveys and the collection of real-time
data used in the evaluation of the questionnaires. Therefore, it can be argued that
this research achieves abstraction in knowledge maturity, as RBL conceptualizes
user logging as a technique and offers an evaluation mechanism for online surveys.

Knowledge Base

In this section, we follow Gregor and Hevner's (2013) view on characterizing
the knowledge base, accounting both for existing artifacts for online surveys
(0) and for previous research that has influenced our work so far. Although
guidelines for online questionnaire design exist (Lumsden & Morgan, 2005), there
is a lack of knowledge on how to design IT artifacts that support evaluation
and re-design of questionnaires. Our ambition to develop such knowledge
draws on the conducted literature review, as discussed in sections 0 – 0. The
literature review includes the areas of Information Systems (IS), Human
Computer Interaction (HCI) and E-Health. The focus of
the literature review is to find out how user behavior is conceptualized in the
literature in general, and to identify evaluation techniques drawing from user
behavior data.

Figure 13: Focus of the literature review.

We investigated fields that are critical for this research: Information
Systems (IS), Human-Computer Interaction (HCI) and E-Health. The review
of literature in the field of Information Systems was significant as it helped us
identify the need to conceptualize the behavior of people with respect to the use
of IT. Although the past literature explores the use of IT and information systems
in surveys, none has explained how the behavior of respondents can be used to
improve surveys and the quality of research in IS, which makes this research
more unique (see further explanation in section 4.23 of the report). In addition,
after noting the knowledge gap in the field of IS, we proceeded to carry out
a review of the literature in the fields of HCI and E-Health, which has
been used to acquire knowledge about the evaluation of artifacts using
logging mechanisms of user behavior (this is explained further in section
4.3 of the report).

Existing tools for online surveys


There is no shortage of online tools and services to conduct online surveys.
Several vendors (e.g. Google, SurveyGizmo and SurveyMonkey) provide fea-
tures for designing questionnaires, collecting data and analyzing responses.
Open source alternatives emerge, both in the form of software to power online
surveys (e.g. LimeSurvey) and as open source schemas for describing and
managing questionnaires (e.g. queXML and SUML). These emerging tech-
nologies and standards highlight various aspects of the complexities of con-
ducting online surveys. In addition to questionnaire design, they support data
collection and data analysis. As an example, SurveyGizmo provides the option
to embed decision logic within questionnaires, and an application program-
ming interface (API) for developers and social plug-ins for Facebook and
Twitter. LimeSurvey supports multilingualism as well as ‘skip logic’ (e.g.
showing question Y if and only if the user chose a specific optional answer
for question X). As an additional example, there are typically features for data
export/import to various formats, including CSV, PDF, SPSS, queXML and
MS Excel. Third-party modules are available to, for example, connect Lime-
Survey to Wordpress (SurveyPress) and Drupal (LimeSurvey Sync). The ex-
isting tools and schemas include a variety of features to set up questionnaires,
and to collect and analyze data. There is, however, little emphasis on the pro-
cess of designing and evaluating questionnaires.
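
To make the notion of ‘skip logic’ concrete, a minimal client-side sketch is shown below. The element ids (question-x, question-y) and the triggering option value are hypothetical; they only illustrate the rule described above (show question Y only if a specific option was chosen for question X). Vendor tools such as LimeSurvey or SurveyGizmo express such rules through their own configuration interfaces rather than hand-written code.

    // Hypothetical skip logic: reveal question Y only when a specific
    // option for question X is selected. Element ids are illustrative.
    var questionX = document.getElementById('question-x');
    var questionY = document.getElementById('question-y');

    questionX.addEventListener('change', function (event) {
      if (event.target.value === 'option-2') {
        questionY.style.display = '';      // show question Y
      } else {
        questionY.style.display = 'none';  // keep question Y hidden
      }
    });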

Previous IS Research on User Logging


The review process has three main parts. The first part will delineate the search
strategy used for the literature review and the results that were obtained. The
second part will discuss the actual findings of the retrieved articles. The third
part will then reflect on the relevance and implications of the findings for the
present research project.

Search strategy
The literature review strategy was based on Webster & Watson (2002). The
following eight journals were selected as the sources for the present literature
review: European Journal of Information Systems, Information Systems Jour-
nal, Information Systems Research, Journal of the AIS, Journal of Information
Technology, Journal of Management of Information Systems, Journal of Stra-
tegic Information Systems, and Management Information Systems Quarterly.
The rationale for selection was that they are known in the information systems
community as the "basket of eight" (i.e. sources known for consistently pub-
lishing high-quality evidence). The search was conducted using the following
keywords.
online survey, online questionnaire, online therapy, survey instrument, sur-
vey log, respondent log, behavior log, questionnaire log, survey validation,
questionnaire validation, questionnaire tracking, online tracking, and survey
tracking.
The first iteration of this search produced a total of 119 articles. The titles
and abstracts of each article were manually evaluated for salience with regard
to the present research focus. This narrowed down the number of articles to
12, and these are the articles that are included in the present literature review.
Then, 8 additional references were retrieved through a careful reading of those
12 articles. That is, the new references were retrieved not through an inde-
pendent search of the literature but rather through a review of the references
that appeared in the original 12 articles.
The second iteration of the search used a new set of refined keywords to
produce more conclusive results. The following keywords were used in the
second iteration:
cognitive absorption, perceived utilitarian performance, expectation discon-
firmation, IT satisfaction, continual usage intention, positive emotion theory,
incentives for participation, expectation-confirmation theory, user beliefs,
perceived usefulness, customer satisfaction, IS use, continuance, acceptance,
user satisfaction, confirmation, technology acceptance model, innovation
diffusion, user attitudes, user behavior.
This iteration produced 392 new articles. The same screening process as in
the first iteration was applied, narrowing down the results to 11 articles. In
total, the literature search thus resulted in 31 articles (appendix X lists the
articles).

Discussion of Findings
The concept of respondent behavior logging (RBL) must be interpreted in
broad terms as any effort to elucidate the logic and antecedents of a given
person's behaviors regarding information technology (IT). There are studies
that track respondents' behaviors in the sense that the research questions are
meant to identify the effects that a certain behavior or perception on the part
of the respondent has on other outcomes. Research can then focus on opti-
mally changing those behaviors and perceptions. Some of the retrieved re-
search articles have very specific focuses in this regard. For example, Dewan
and Ramaprashad's (2012) study sought to identify relationships between so-
cial media use and music consumption; Ruth (2012) has investigated the ef-
fects that "conversation" in a question-and-answer site can have on user satis-
faction; and Koch et al. (2012) have explored the effects that social media use
in the workplace can have on employee outcomes. Perhaps the most salient of
the retrieved articles is the one by Sen et al. (2006), in which the researchers
attempted to delineate the search strategies of online buyers. All of these
studies had the purpose of developing conceptual maps of what qualities in
the participants corresponded to what specific selected outcomes. On the
basis of the findings of the various studies, business and IT professionals could
implement processes that are meant to change the independent variables (re-
spondents' qualities) and thereby change the dependent variables (selected
outcomes such as IT use or morale in the workplace). Again, this could be
called respondent behavior tracking in a broader sense.
The advent of Web 2.0 has resulted in new opportunities for how research
is conducted. One method entails using the web to have respondents answer
survey questions online and in real time. However, respondents normally
exhibit reflex reactions when answering such questions, and a means for
logging and evaluating respondent behavior is therefore necessary. It is
important to evaluate and predict, to an acceptable degree of accuracy, the
expected behavior of respondents to online forums and applications, such as
web-based questionnaires and surveys. The behavior of respondents and other
users of online IT/IS resources has been predicted using a variety of methods,
such as EDT, trust theory, and needs-based online community commitment.
Web-based surveys provide a convenient means for collecting data and have
been used in much previous and ongoing research (Picoto, Belanger, et al.
2014). However, in their research on m-business value determination and
usage factors, Picoto, Belanger, et al. (2014) make no mention of logging user
activities to evaluate respondent behavior. Their research used traditional
questionnaire pre-tests and pilot testing; these were done to evaluate possible
user responses in order to make changes to the questionnaire before the final
administration.
User behavior has also been evaluated through other research approaches,
as Walsh et al. have done when researching the use of IT by evaluating IT
users' culture. The behavior of people using technologies and other infor-
mation systems applications has been evaluated using the trust concept: trust
in an IS influences how users interact with and respond to the system. The
long-term effect of trust on users' continued consumption and use of IT/IS
systems and applications has been evaluated through the CEDT model
(complete expectation disconfirmation theory) (Lankton, McKnight and
Thatcher 133). Disconfirmation, satisfaction, and performance affect trusting
intention when users interact with technology applications such as web-based
surveys. Trust theory and EDT have been used to determine and predict
technology user behavior with regard to the user satisfaction construct. IT/IS
system usage can be evaluated using the system usage construct to evaluate
system user behaviors (Burton-Jones and Straub 228). The construct of system
usage has played a central role in Information Systems (IS) research. The
system usage construct evaluates IT/IS user behavior based on a re-conceptu-
alization of the system's structure and function that evaluates behavior through
system, task, and user evaluation (Burton-Jones and Straub 230). Only a
handful of studies have developed literature on the use of such constructs in
IS research, with the most relevant work being done by Trice and Treacy in
1986 and by Seddon in 1997 (Burton-Jones and Straub Jr. 229-230). EDT and
user trust theory have been used to evaluate characteristics that improve the
decision making of IT/IS system users.
To study the behavior of online communities engaging in activities such as
responding to threads and making comments, organizational commitment
typologies have been drawn upon to explain online community user behavior.
According to the organizational behavior typology, individuals exhibit
behavior by developing psychological bonds based on their own affects, needs
and obligations. Every type of community commitment exerts a distinctive
effect on every behavior; need-based commitments can predict whether a
certain user will read a given thread or not. Obligation-based commitment can
predict the moderating behavior of users of online applications and IS (Bate-
man, Gray and Butler 844). Studies of online forums and communities have
not arrived at a concrete explanation of their nature; some studies conclude
that individuals in these (online) forums are motivated by their own self-
interest, while other studies suggest that online community behavior is due to
altruism (Bateman, Gray and Butler 844). Using the organizational psychology
typology and psychological affiliation, it can be predicted how users will
behave in online discussion forums. A user's likelihood to respond to a thread,
for instance, can be predicted by evaluating the user's needs.
Self-service IS have been set up by organizations as a way of getting infor-
mation from users and customers for use in improving product and service
delivery. To predict future usage, post-adoption IS use behavior can be
assessed through feature-level IS usage, exploration of new IS uses, and
integration of IS into the work system (Saeed and Abdinoor 223). Using this
approach to IS user behavior, psychometric properties are seen to have a
strong influence on post-adoption IS usage. The approach shows that ease of
use, user-initiated learning, usefulness, voluntariness and satisfaction can
predict online user behavior. Engagement theory can be used to predict, and
enhance, the participation of users in online forums and communities. Online
communities and forums incorporate people with diverse interests and of
various backgrounds, ages, genders, and tendencies, so their willingness to
participate in given forums can be predicted, and enhanced, through engage-
ment (Kim and Morris 544). In addition, feelings of self-efficacy can also be
used to predict the contributions of users to online forums.
Social capital theory has been used to examine antecedents of user satis-
faction when using IT/IS systems; specifically, relational capital and cognitive
capital can predict the satisfaction of users of IT/IS platforms (Fang, Lim, and
Straub 1199). To evaluate the experiences of IT users, cognitive absorption
theory can be used to conceptualize the optimal holistic feelings that users
experience while using IT/IS platforms. Post-adoption use of IT/IS systems
and continued intention of use can be predicted and evaluated through the
application of cognitive absorption theory (Deng, Turner, Gehlig and Prince
63-64).
Bhattacherjee (2001b) evaluated how the expectations of users affect their
engagement with information systems, and Bhattacherjee (2001a) has also
evaluated the factors that contribute to users of electronic commerce services
achieving the selected outcome of continuance. A conceptually similar study
was conducted by Oliver (1980) regarding the factors that result in users of a
technology, product, and/or service experiencing satisfaction, as well as the
effects that follow from a consumer feeling satisfied. Similarly, Kim and
Steinfield (2004) investigated the relationship between customer satisfaction
and continuance intentions with respect to mobile Internet services. In an-
other study, Rabandr and Rabine (2009) used statistical tests to evaluate pat-
terns in web-based interactions.
The other studies retrieved for the present literature review are very similar
to the ones discussed thus far. For example, Karahanna, Straub, and Chervany
(1999) explored the beliefs associated with the adoption of technology over
time, and Agarwal and Karahanna (2000) have evaluated the relationship be-
tween the variable of cognitive absorption on the one hand and the variable of
beliefs about technology on the other. Finally, Bhattacherjee and Premkumar
(2004) attempted to construct a theoretical model to explain changes regarding
information technology usage.
In the following table, the articles are summarized with respect to inclusion
criteria and their research methods.
Table 3. Overview of literature found in IS

Article: User experience, satisfaction, and continual usage intention of IT
Inclusion rationale: The research model uses the concept of cognitive absorption (CA) to conceptualize the optimal holistic experience that users feel when using IT.
Context of research: The purpose of the paper is to develop and test a research model that investigates the effects of user experience with information technology (IT) on user satisfaction with and continual usage intention of the technology.
Research method: An online survey was conducted to test the model and its associated hypotheses.

Article: IT service climate, antecedents and IT service quality outcomes: Some initial evidence
Inclusion rationale: Exploring the relationship between IT service climate and quality outcomes.
Research method: This study empirically establishes IT service climate as a predictor of IT service quality using survey data from both IT units and their clients.

Article: Bridging the work/social divide: the emotional response to organizational social networking sites
Inclusion rationale: Explored the effects that social media use in the workplace can have on employee outcomes.
Research method: On the basis of a case study informed by boundary theory and the theory of positive emotions, the research describes the SNS (Social Networking Site), its uses and how it impacted both the employees and the organization.

Article: Key information technology and management issues 2011-2012: an international study
Inclusion rationale: Evaluating the perceptions of IT from different geographical areas.
Context of research: By comparing and contrasting IT trends from different geographies, this paper presents important local and international factors (e.g., management concerns, influential technologies, budgets/spending, organizational considerations) necessary to prepare IT leaders for the challenges that await them.
Research method: This paper presents the major findings based on survey responses from 620 respondents (275 US, 100 European, 59 Asian, and 186 Latin) in mid-2011.

Article: Conversation as a source of satisfaction and continuance in a question-and-answer site
Inclusion rationale: Investigated the effects that "conversation" in a question-and-answer site can have on user satisfaction.
Context of research: This study examines the role of free comments given in a commercial information service through the lens of expectation-confirmation theory and continuance.
Research method: Data from a question-and-answer web site are analyzed by structural equation modeling to test the theoretical model whereby customer satisfaction is key to continuance and is predicted largely by the social interaction that takes place on the site.

Article: Using Accountability to Reduce Access Policy Violations in Information Systems
Inclusion rationale: Delineates the relationship between user accountability and access policy violations.
Context of research: This paper introduces accountability theory to the IS field, and by doing so, the authors ascertain four system mechanisms that can heighten an individual’s perception of accountability: identifiability, awareness of logging, awareness of audit, and electronic presence.
Research method: The authors introduce the factorial survey to the IS field, a scenario-based method with compelling advantages for the study of information security policy (ISP) violations and computer abuse generally.

Article: A multidimensional commitment model of volitional systems adoption and usage behavior
Inclusion rationale: User commitment plays an enormous role in positive engagement with voluntary information systems.
Context of research: Affective commitment—that is, internalization and identification based upon personal norms—exhibits a sustained positive influence on usage behavior.
Research method: To validate a proposed research model, cross-sectional, between-subjects, and within-subjects field data were collected from 714 users at the time of initial adoption and after six months of extended use.

Article: Buyers' choice of online search strategy and its managerial implications
Inclusion rationale: Attempted to delineate the search strategies of online buyers.
Context of research: An understanding of buyers' choice of online search strategies can help an online seller to estimate its expected probability of making an online sale, optimize its online pricing, and improve its online promotional and advertising activities.

Article: User Satisfaction with Information Technology Service Delivery: A Social Capital Perspective
Inclusion rationale: Advances the theoretical understanding of user satisfaction by re-conceptualizing IT service delivery as a bilateral, relational process between the IT staff and users.
Context of research: Based on this reconceptualization, the authors draw on social capital theory to examine the antecedents of user satisfaction with IT service delivery.
Research method: A field study of 159 users in four financial companies provides general empirical support for the hypotheses.

Article: Progress in the IT/business relationship: a longitudinal assessment
Inclusion rationale: On the basis of data collected over a 3-year period using a survey instrument, the paper highlights areas of perceived difficulty in the IT/business relationship and seeks evidence of trends in the business/IT relationship.
Context of research: This paper investigates perceptions of the IT/business relationship held by 653 IT managers and their staff and 503 of their business counterparts.

Article: A theoretical integration of user satisfaction and technology acceptance
Inclusion rationale: Discusses the relationship between user satisfaction and acceptance of technology.
Context of research: This paper develops an integrated research model that distinguishes beliefs and attitudes about the system (i.e., object-based beliefs and attitudes) from beliefs and attitudes about using the system (i.e., behavioral beliefs and attitudes) to build the theoretical logic that links the user satisfaction and technology acceptance literature.
Research method: The model is tested using a sample of 465 users from seven different organizations who completed a survey regarding their use of data warehousing software.

Article: Music Blogging, Online Sampling, and the Long Tail
Inclusion rationale: Online social media such as blogs are transforming how consumers make consumption decisions, and the music industry is at the forefront of this revolution.
Context of research: Identifies relationships between social media use and music consumption.
Research method: Based on data from a leading music blog aggregator, the authors analyze the relationship between music blogging and full-track sampling, drawing on theories of online social interaction.

Relevance and Implications


The general insight that can be gleaned from this literature review is that there
is value in carefully conceptualizing people's behaviors with respect to IT use.
On the basis of the findings of the various studies, business and IT professionals
could implement processes that are meant to change the independent variables
(respondents' qualities) and thereby change the dependent variables (selected
outcomes such as IT use or morale in the workplace). Structurally, the idea of
using respondent behavior tracking to improve the design of a survey is not
so different from identifying antecedents of commitment to IT use so that
professionals can then work toward fulfilling those antecedents and thus bring
about desired outcomes. In a narrower sense, though, it seems clear that there
is virtually no literature that has focused on the use of respondent behavior
tracking to improve the design of surveys per se. All of the reviewed literature
focuses on achieving a concrete outcome that would be valuable out in the
"real" world; none of it has focused on how the concept of respondent behavior
tracking could be used to improve the quality of information systems research
itself. The main subject of the present research is thus a unique one that has
not been previously addressed by the information systems community in a
meaningful way. The extant research uses surveys in order to track respond-
ents' behaviors; the presented idea flips the coin over to inquire into how
respondent behavior logging may be conceptualized and analyzed to improve
research as such.

Previous HCI research on User Logging


Given the lack of directly related IS research – i.e. logging to support (artifact)
evaluation – the next step was to conduct a search in the HCI literature.
Human-computer interaction has seen massive expansion since its incep-
tion. Over time, it has incorporated diverse concepts and approaches to critical
problems, and it has become a collection of semi-autonomous fields of research
in human-centered informatics. The synthesis of disparate approaches and
conceptions has been a major boost in advancing HCI, which now provides
examples of how various epistemologies can be reconciled, integrated, and
connected into a vibrant intellectual project. The main purpose of HCI is to
understand the interactive technologies used by people (Rormano 2014), with
a particular focus on the evolution of such interactions as technologies, skills,
concepts, and people's expectations change. It seeks to understand human
practices as well as human aspirations. Any web-based survey must therefore
take these interactions into account in order to make significant progress. A
main challenge, however, is the inability to draw detailed accounts of use
from web-based surveys through conventional means. In this regard, RBL
becomes very useful, as it provides detailed records about the use of online
surveys and thereby enables a one-to-one mapping of participants' activities
to questionnaires.
Sharp (2013) states that, in the past, it was difficult to evaluate human-
computer interaction because evaluations were undertaken through face-to-
face interactions in which the evaluator met with the web users physically. In
that setting, the usability of the websites in question was confirmed once the
evaluator observed the clients use the websites for their desired needs. Today,
however, computer systems and applications are used all over the world, and
it is not sufficient to rely on physical interaction schemes to evaluate the
usability of any given application (Bauchaman, 2012). In light of this, a
substantial number of remote evaluation schemes have been formulated; these
approaches are preferred because they are not limited by hurdles such as time
and space.
In the article “Knowing the user’s every move”, Atterer, Wnuk and
Schmidt (2006) present a detailed investigation of how standard web tech-
nologies can be used to monitor detailed user interactions. Ease of usability
evaluation and implicit interaction are the major motivators and parameters in
this investigation. Classical log files provide very little information on a user's
interaction with a web application, so fine-grained and detailed information is
essential. The efficiency and proficiency of a user in computer usage can be
tested by allowing the user to fill in form fields and recording the time spent
filling in the details in the indicated fields. Caution is important to ensure that
the tracking method does not affect or interfere with the expected user expe-
rience, while also confining it to present browser setups and servers. JavaScript
code is mainly used to modify HTML pages via an HTTP proxy. The plain
HTTP protocol was mainly used in the earlier days, but in recent times there
is frequent use of JavaScript-based applications whose only server contact
occurs when loading or saving data. Activity tracking has so far mainly been
used in website usability tests, but with further developments it will be possi-
ble to use the approach in the development of web applications, the profiling
of users, and implicit interactions with websites (Atterer, 2016).
Arroyo, Selker and Wei (2006) have also worked on tracking, developing
a usability tool that can help analyze web designs. In their article “Usability
Tool for Analysis of Web Designs Using Mouse Tracks”, they present Mouse-
Track, a web logging system with the ability to track the movement of the
mouse on a website. A visualization tool is included that helps display the
mouse cursor path a website visitor has followed. The major function is to
enable site administrators to analyze the collected data and run usability tests.
Optimization of websites is essential in attracting new customers, and many
website administrators have to look for new ways to optimize their websites.
Usability testing and website tracking can be used to support this optimization
goal. The success and failure of a website can be indicated by analysis of the
server logs. However, server logs provide very little information on the
activities of users at a granular level. Eye trackers are mainly used to provide
usability information that guides the redesign process (Arroyo, 2006).
Human-computer interaction represents a discipline that links computer
operation and human capabilities; in essence, the interaction of people and
computers to yield desired results. Like all other technologies, HCI is contin-
ually affected by advances in information technology and the pervasive nature
of computers in society; this presents an area of research that speaks not only
to the innovation required, but also to the importance of merging human inter-
action with information technology. Another contribution to the need for HCI
as a field of study is the increasing number of people who use technology such
as smartphones, which have been fitted with many interactive technologies
such as GPS and Internet capability. HCI influences the marketability of these
new devices with networking abilities; for this reason, it has been key to
developing improved versions, especially since it merges multiple disciplines
such as psychology, sociology, engineering, design, and ergonomics. Collec-
tively, HCI emerged to enable the design of interactions and interfaces that
ensure high usability; in this sense, high usability means that the interfaces
ought to be efficient, easy to use, safe, and supportive of easy completion of
tasks (Kim 2015). Evidently, for a technology to be considered successful,
people have to find it easy to use in order to remain interested in it; otherwise,
developers risk people rejecting a technology they do not understand. Conse-
quently, this introduces aesthetics as an important requirement: despite the
importance of a technology, its commercial success might depend on whether
or not it is attractive.
In a study of the literature conducted by Loic Caroux (2014), it was found
that while video games present a complex information system, players are
interested in interacting with them since they are goal oriented; players are
influenced by multiple parameters, such as emotional and physiological ones.
In this sense, it is undeniable that HCI affects users in a unique way; therefore,
as aforementioned, the peak of HCI will be realized when interaction between
humans and computers feels as natural as human-to-human interaction (Rau-
taray & Agrawal, 2012). Rautaray & Agrawal (2012) go on to assert that
since gestures are an important component of human interaction, they might
help a great deal in the development of more natural ways of communicating
with computers. In this research, three major phases of hand gesture recogni-
tion were discussed, namely tracking, detection, and recognition. It was found
that while the use of hand gestures for interacting with computers and other
similar devices continues to thrive, research in the field is still wanting.
Together, all these components play an important role in the creation of
potent HCI models that enable people to interact well with their computers or
computing devices. Today, health records are stored digitally, meaning that
people have limited access to them for many different reasons; for example,
this information is stored in databases that require authorization from the
service providers. Subsequently, people turn to the Internet to find answers
regarding their health (Röcker, Ziefle, & Holzinger, 2014; Mamykina et al.,
2009); it follows that these online healthcare providers represent the closest
sources, and they ought to make it easier for people to access this information.
Application of HCI principles will ensure that more people are able to access
information, which can then be used to monitor their health conditions. Here,
the key is to find a link between HCI and eHealth and how these two platforms
can be used in tandem, since they seem to complement each other (Mamykina
et al., 2009). A good HCI system will ensure that people from different demo-
graphics can access information about their health, such as records and trends,
without complications. In the same way, it will be easier for service providers
to interact with patients and provide the right advice; in addition, since both
parties can interact freely with the systems, clinicians are also in a position to
collect logging data that has not been compromised by other factors (Röcker,
Ziefle, & Holzinger, 2014). For instance, if a consumer does not understand
some aspects of the system, it becomes hard for him or her to provide answers
based on his or her understanding.
Today, people own many devices designed to help monitor and record
health conditions; these devices require the user to understand how they work
(Mamykina et al., 2009). It follows that developers have a duty to ensure users
do not find it hard to use these HCI devices, so that the intended information
can be extracted without challenges. The link between HCI and eHealth can
also be seen in the types of tasks such monitors support, such as recording
food and water intake, drug consumption, workout schedules, and insulin
injections. Notably, the sensitivity of this information revolves around how
humans can interact with computers.

Relevance and Implications


Generally, it can be noted from the literature that HCI has explored how
humans interact with computers. As such, interfaces have been developed to
aid such interactions. This has helped stakeholders to improve services based
on understandings of the interactions between individuals and computers,
unlike in the past when there was great dependence on physical interactions.
On this note, the present research builds on existing work to establish how
HCI can be helpful in the fields of surveys and IS to improve how surveys are
conducted. This study, therefore, takes advantage of existing HCI technology
to trace the online activities of web users during surveys and thus collect
valuable data for survey improvement.
The architecture of this artifact (RBL) uses JavaScript-based techniques to
interact with the server during the saving and loading of survey data. This has
made it possible to track user activity on online platforms, which is the basis
of the functionality of this artifact. RBL thus harnesses JavaScript technology
to facilitate client-side tracking, which has also allowed a more robust use of
RBL on different devices.
Additionally, RBL uses visualization techniques similar to work in HCI,
where visualizations have commonly been used by web administrators to track
the activities of web users and thus form the basis for improvement of web-
sites. It has also been common in HCI practice to let usability tests form the
basis of website optimization. In the case of RBL, however, visualization is
used to map the activities of web users as they fill in questionnaires, as well as
to recreate the obtained data. This is achieved through activity charts, time
charts, and the answer matrix (see chapter X). The activity chart indicates the
number of user activities per question per questionnaire. The time chart indi-
cates the time taken per question, while the answer matrix indicates how the
respondent moves between questions. RBL, therefore, exploits visualization
to contribute to the field of surveys by using feedback collected through HCI
to improve questionnaires.

Previous E-Health research on User Logging
Given the empirical context, there was an interest in inquiring into previous
research on logging in e-health research. Logging is a data collection method
that provides information not only about consumer preferences, but also about
how consumers interact with online services. E-health consumers represent a
growing group of people interested in obtaining information about their health
and that of their loved ones; this implies that for these services to be improved,
service providers need to know what consumers want and find ways to keep
them interested in their sites.
A study conducted by Van Gemert-Pijnen (2014) shows the use of log data
to understand the uptake of content in a web-based intervention based on
acceptance and commitment therapy (ACT). The study demonstrates how log
data can be of value for improving the incorporation of content in web-based
interventions. The study looked into log data produced by 206 participants
who went through the first nine lessons of the web-based intervention. The
log data included logins, viewing feedback messages, starting multimedia,
viewing text messages, etc. Differences in usage between lessons and differ-
ences between groups were both explored with repeated measures ANOVAs
(analyses of variance). The study clearly shows that log data can be a tool to
recognize the most salient parts of the available content, or to reveal patterns
of use. Researchers can thereby test their hypotheses about the usage of the
content delivered. Moreover, log data gives the researcher the ability to follow
participants' progress in terms of how much content they went through and
which content was more prevalent than other content. On a collective level,
patterns may be recognized, and the researcher may therefore question or
investigate adherence to the content that was set up for the intervention.
According to Sieverink, Kelders, et al. (2013), adherence to eHealth inter-
ventions is lower than expected despite the large number of eHealth projects.
To discover how technology can influence adherence to eHealth interventions,
they use log data on the usage of an eMental health intervention as an example.
Primarily, logging involves the collection of numerous specified data, such as
when consumers log on to programs, what answers they give to questions, and
when and why they change answers. In essence, the service provider is
equipped with a lot of information that can be used, directly or otherwise, to
create a potent product that increases consumer continuity. Log data represents
a starting point to improve eHealth and make it more persuasive to consumers
(Kelders, Julia, & van Gemert-Pijnen, 2013); hence, logging ought to be made
as accurate as it can be. Indeed, technology can be persuasive, making it
possible to change people's behavior; for instance, eHealth technology could
influence a consumer to be more health conscious and support healthy living.
To make this a reality, logging can be implemented, since log files present
information about customers in real time; subsequently, based on the data
collected from the log files, it becomes easier to personalize content to
consumer preferences (Kelders, Julia, & van Gemert-Pijnen, 2013).
Another benefit of log files is improved management of the chronic diseases
that have been on the increase, especially among the older generation. In
essence, the older generation requires close monitoring since their health
deteriorates more easily than that of younger people; for this reason, logging
has been introduced to help people understand how these diseases behave
among aging people. Health professionals need to monitor the conditions of
the aging population, and for this to be a reality, they use eHealth, since it can
provide online support for self-management, improve interaction between
professionals about illnesses, and help in the overall monitoring process.
Sieverink, Kelders, Julia, & van Gemert-Pijnen (2013) conducted a study to
investigate the navigation process of eVita, an electronic personal health
record (PHR) technology designed to be used by patients with diabetes
mellitus. In addition, the authors sought to improve the technology’s efficiency
and consumers’ adherence; to achieve this, they analyzed the log data for six
weeks after a renewed version of the technology was introduced. Indeed, the
PHR has proved to be an important component in improving the quality of
managing chronic illnesses, since people can share and access their infor-
mation with utmost security and confidentiality. With the continued advance-
ment of technology, PHRs are constantly improving, and today they also
enable patients to manage their own conditions, provide more information
about illnesses, offer peer support, and support provider-to-patient communi-
cation. Evidently, as Sieverink et al. (2014) found, PHRs employ logging
systems that enable them to provide this information; however, exposure to
PHRs does not guarantee improved health outcomes for patients with diabetes.
Interestingly, it is the information collected by such logging systems that
proves valuable, especially to web designers and health service providers.
Tian et al. (2009) point out the growing need for consumers to use the
Internet to search for medical information; in the same way, clinicians were
also found to use the Internet often to communicate with patients. For this
reason, they assert, it is imperative that web designers find a way to improve
the services they provide on websites, in order to make them easy to use for
both consumers and clinicians. To outline the importance of their conclusions,
Tian et al. examined the usage of the Chronic Fatigue Syndrome (CFS) web-
site at the Centers for Disease Control and Prevention (CDC). They also exam-
ined the results of a public awareness campaign for CDC CFS and the behavior
of users towards these campaigns. At the end of the research, they found that
the website had high usage, which shows that it was an integral online resource
for those who wanted to know more about professional health care and infor-
mation. Again, these findings highlight the importance of, and shift in, infor-
mation technology that continues to be used in improving health services and
how people can access them.

Relevance and Implications
This section outlines how RBL has benefited from the E-Health literature.
During the literature review stage (see chapter 4), past literature was analyzed
in the areas of Human-Computer Interaction (HCI), E-Health, and Information
Systems (IS). While this study focuses mostly on IS, the review also covered
E-Health and HCI, since these two fields are equally relevant for this research.
Moreover, this research is based on the U-CARE project, and it is therefore
vital to comprehend the significance of evaluation techniques based on re-
spondent behavior within the field of E-Health. Generally, it can be concluded
from the reviewed literature that there is high value in understanding respond-
ent behavior in the field of E-Health. The reviewed studies show that the use
of E-Health can help practitioners determine the uptake of online content by
respondents. Such data may include health conditions, the behavior of re-
spondents while using the Internet (such as when consumers log into the
system and when they answer questions), reactions to online content, disease
trends, and so on. This can further be used to personalize content depending
on the preferences of the consumers. Besides, the literature has shown that it
is possible to manage chronic diseases using e-platforms, which gives consum-
ers a reason to use the Internet to look for health information. Based on these
studies, it is possible for researchers to use independent variables (such as
consumer data obtained through e-health) to influence dependent variables
(outcomes such as the management of chronic diseases). Technically, the
concept of using RBL to improve surveys does not differ much from the
concepts presented in the literature, such as monitoring chronic diseases in
the elderly or studying the uptake of online content.
Based on this existing work, researchers can work toward developing a
system for improving surveys. For instance, this can be closely linked to the
concepts presented by Sieverink et al. (2013) about adherence to e-Health (see
section 4.4). However, none of the reviewed literature addresses the concept
of using respondent behavior to improve surveys in the field of e-health. Most
past researchers, just like the few selected for this study, have focused only on
how e-health can be used to achieve the results presented in section 4.4; none
has addressed its use in surveys. This makes this research not only unique but
also more relevant, as the topic has not been exploited by past researchers in
the field of e-health.
The current research uses the same concepts as existing research but is
more inclined toward information systems and surveys. It uses surveys to
monitor the behavior of respondents in the field of e-health (like past research-
ers have done) but instead uses the data obtained to determine how to improve
surveys. As such, the research relies on e-health insofar as using the Internet
and tracking health consumer data is concerned, but adds value by using this
data to improve surveys. On this note, the value and relevance of the current
research are two-fold: it adds to the existing e-health literature on respondent
logging, and it adds value to the field of IS and surveys through the use of
respondent behavior logging to improve questionnaires.

Implications for the design of RBL


In this section, we address the findings of the literature review and their
implications for the design of RBL in the U-CARE context.
Web technologies allow logging of user activities through client-side track-
ing and/or server-side tracking. Both techniques have potential advantages
and limitations, and a number of design decisions were made to handle the
trade-offs between them in RBL. On one hand, RBL must remain compatible
regardless of which browsers and devices are used; on the other hand, it must
collect sufficient information to make it possible to interpret the use of ques-
tionnaires. The following matrix shows some of the benefits and shortcomings
of both techniques in terms of RBL’s performance, data quality and robust-
ness.

Performance
  Client-side tracking: Potentially large effect on user experience (the script slows down the browser).
  Server-side tracking: Smaller effect on user experience; only a small overhead of processing with each HTTP request.

Data quality
  Client-side tracking: Allows a higher level of detail, i.e., “capturing the user’s every move”, such as mouse movements and clicks.
  Server-side tracking: Limited level of detail in captured data.

Robustness
  Client-side tracking: Potential risk of data loss due to browser incompatibility.
  Server-side tracking: Less risk of data loss.

The design of RBL is based upon the U-CARE research application software.
One of the main objectives for researchers in U-CARE is to collect data from
a range of populations and geographic areas. We observed that there are users
who use Internet Explorer 6, which is considered a very old browser; many
web applications today have stopped supporting it. Other browsers, such as
Google Chrome, Firefox and Safari, are commonly used by the majority of
participants, and these browsers vary with the use of different devices: Android
mobile users are likely to use Google Chrome, iOS users tend to use Safari,
and desktop users have even more options to use browsers of their own choice.
Therefore, in order to support the widest possible use, the design of RBL must
consider the variety of devices, operating systems, and browsers that may be
used. This has led us to make the following design decisions.

• Support all the latest browsers, including Internet Explorer versions
higher than 6.
• In order to support mobile devices, RBL has to be lightweight. Therefore,
tracking is mainly done server-side through HTTP requests, combined
with some carefully chosen client-side tracking techniques. For example,
click events on the client side are only tracked when users answer
questions; these events are sent to the server side through HTTP requests
(see the sketch below). Other tracking features, e.g. time tracking for
both questions and questionnaires, location, etc., are maintained on the
server side.
• jQuery version 1.9 has been used to facilitate client-side tracking.
jQuery is a fast, small, and feature-rich JavaScript library. It offers
HTML document traversal and manipulation, event handling, animation,
and Ajax with an easy-to-use API that works across a multitude of
browsers.
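
To illustrate the second and third design decisions, the sketch below shows how a click on an answer option could be captured with jQuery and forwarded to the server as an HTTP request. The endpoint name (/rbl/log), the CSS classes and the data attributes are hypothetical; they are not taken from the U-CARE code base and only indicate the kind of information a client-side tracker would need to send.

    // Minimal client-side tracking sketch (assumes jQuery 1.9 is loaded).
    // The endpoint, CSS classes and data-* attributes are hypothetical.
    $(document).on('click', '.question input', function () {
      var $question = $(this).closest('.question');
      $.post('/rbl/log', {
        questionnaireId: $question.data('questionnaire-id'),
        questionId: $question.data('question-id'),
        actionType: 'answer',
        value: $(this).val(),
        clientTime: new Date().getTime() // milliseconds; works in older browsers too
      });
    });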

A Framework for Respondent Behavior Logging

In this section, we detail our findings on how to conceptualize respondent be-
havior in the context of online surveys. We refer to the full set of findings as
RBL, an abbreviation for the framework for respondent behavior logging pre-
sented in this chapter. RBL consists of:

• A dynamic model of respondent behavior
• A static model of respondent behavior
• Visualization techniques
• Measurement constructs
• A process model for questionnaire evaluation and improvement

First, we present a dynamic model to frame user actions related to filling in
questionnaires. Second, we introduce a static model intended to keep records
of these user actions. Third, we introduce a selected set of techniques to visu-
alize respondent behavior, i.e. to make it accessible for various stakeholders
to interpret.

Dynamic model of respondent behavior


A logical first step in development is to identify what user actions to log. Dur-
ing the design process, we identified a set of user actions that could be traced
at the server side through the HTTP requests client browsers make to the web
server.

Figure 14: Dynamic Model of Respondent Behavior

Seven action types are outlined in Figure 14, all of which are possible to track
in a web context. The action types ‘open questionnaire’, ‘close to continue
later’ and ‘submit questionnaire’ need no further explanation. The ‘focus on
question’ action type means that there is some indication that a question is
currently in the focus of the user, e.g. by hovering over a question area with
the mouse pointer or setting focus on a text field by clicking it. The ‘answer
unanswered question’ action type occurs when a user answers a question that
has not previously been answered, differentiating it from the ‘revise answer to
question’ action type. It is also possible to track when users switch to another
page in the questionnaire. The dynamic model, despite its simplicity, is an
important concept for better understanding what types of user actions we may
trace in the context of online questionnaires. A main idea of tracking these
actions is that a rich account of the actions taken serves to indicate how the
respondent interprets various situations when filling in a questionnaire. The
dynamic RBL model was first introduced by Sjöström et al. (2012).
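
For readability, the seven action types of the dynamic model can also be written out directly in code. The sketch below lists them as string constants; the constant names are our own illustrative shorthand for the action types in Figure 14 and do not mirror identifiers in the actual implementation.

    // Action types from the dynamic model (Figure 14).
    // The constant names are hypothetical shorthand.
    var ACTION_TYPES = {
      OPEN_QUESTIONNAIRE: 'open questionnaire',
      FOCUS_ON_QUESTION: 'focus on question',
      ANSWER_UNANSWERED_QUESTION: 'answer unanswered question',
      REVISE_ANSWER_TO_QUESTION: 'revise answer to question',
      SWITCH_PAGE: 'switch to another page',
      CLOSE_TO_CONTINUE_LATER: 'close to continue later',
      SUBMIT_QUESTIONNAIRE: 'submit questionnaire'
    };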

Static model of respondent behavior


The entity/relationship diagram below (Figure 15) shows a static model to
keep track of user actions. The purpose of the design is to allow traceability,
i.e. to be able to reconstruct the manner in which a respondent filled in a
questionnaire. In addition, there is a need to keep track of the respondent’s
(user’s) identity, to allow queries related to a particular individual.

Figure 15: Static Model of Respondent Behavior

‘User Action’ defines the actual log record. It shows how a user, at a certain
point in time, performs an action of some type in relation to a specific ques-
tionnaire. This action may include an answer to a question, which may be free
text, but it may also point out an optional answer to a question. Keeping track
of context is imperative, since one questionnaire may be used at different
times. This is the case in many randomized controlled trials where the same
questionnaire is typically used repeatedly to measure the same construct (e.g.
depression or anxiety) at different times in the study. By including action type,
we facilitate queries of user behavior based on grouping by the type of action
performed. The static RBL model was originally introduced by Sjöström et al.
(2012).
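
Read as a data structure, a single ‘User Action’ record could look like the sketch below. The field names are hypothetical and simply mirror the entities in Figure 15: the user’s identity, a point in time, an action type, the questionnaire and its context, the question, and an optional or free-text answer.

    // Illustrative shape of one 'User Action' log record (cf. Figure 15).
    // Field names are hypothetical, not the actual database schema.
    var exampleUserAction = {
      userId: 42,                               // identity of the respondent
      timestamp: '2014-05-12T09:31:04Z',        // when the action occurred
      actionType: 'revise answer to question',  // one of the action types in Figure 14
      contextId: 'measurement-occasion-1',      // occasion, e.g. repeated use in an RCT
      questionnaireId: 'questionnaire-A',       // which questionnaire was used
      questionId: 'question-7',                 // which question, where applicable
      optionalAnswerId: 2,                      // chosen option for a closed question
      freeTextAnswer: null                      // free text for an open question
    };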

Visualization Techniques for Respondent Behavior


In organizational research, visual aids are powerful methods of articulation.
Visual aids perform; they are not mere representations of realities or practices
(Davison et al., 2012). Visual media compels performance, i.e. it enforces a
two-way process of understanding and acting at the same time. Using visual
media in qualitative research is a good way of giving the field alternative ways
to understand and talk about social existence. Visual aids produce a more dy-
namic configuration of the world around us that leaves a lasting effect on the
observer. They help users avoid preconceived ideas while at the same time
creating a possibility to observe the object of study in a different light, unlike
in a verbal or written presentation (Nulty, 2008).
Based on the way information is stored in the static RBL model, there are
various ways to query the RBL database to render reports and visualizations
of RBL data. Here we present three such visualization techniques in order to
clarify the idea of visualizing respondent behavior. In this section, we only
briefly describe the techniques and their intended purpose (Table 4).
Table 4. RBL Visualization Techniques
Activity Chart (quality dimension: Question Issue)
  Description: The activity chart shows the number of activities (answer, delete, change answer) per question in a questionnaire.
  Purpose: More than one activity indicates that the respondent shows some uncertainty about how to answer or what to answer.

Time Chart (quality dimension: Time Issue)
  Description: The time chart shows how much time is taken to answer each question in a questionnaire.
  Purpose: If a question takes a long time to answer, it could indicate some design flaw in the questionnaire (e.g. the phrasing of the question).

Answer Matrix (quality dimension: Structural Issue)
  Description: The answer matrix shows how a respondent moves between questions in a questionnaire.
  Purpose: A lot of movement back and forth between questions may signal structural problems with the questionnaire.

Each visualization technique aims at disclosing a quality dimension of a ques-
tionnaire, targeting either the structure dimension or the question dimension
of the questionnaire. Drawing from the questionnaire design literature, we
differentiate between structural questionnaire qualities and question qualities.
We return to these techniques in the evaluation chapter (see the Focus group
evaluation section), where we show the visualization techniques in use and
how focus group participants interpreted them.
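
As a hint of how such visualizations can be derived from the stored log records, the sketch below aggregates a list of user actions into the per-question activity counts behind the activity chart. The record fields follow the illustrative structure given earlier, and the function name is hypothetical.

    // Count answer-related activities per question, i.e. the data behind the
    // activity chart. The helper and record fields are hypothetical; 'delete
    // answer' is included because the activity chart also counts deletions.
    function activityCountsPerQuestion(userActions) {
      var counts = {};
      userActions.forEach(function (action) {
        var isActivity =
          action.actionType === 'answer unanswered question' ||
          action.actionType === 'revise answer to question' ||
          action.actionType === 'delete answer';
        if (isActivity && action.questionId) {
          counts[action.questionId] = (counts[action.questionId] || 0) + 1;
        }
      });
      return counts; // e.g. { 'question-1': 1, 'question-2': 3 }
    }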

RBL measurement constructs
As a contrast to visualization techniques – i.e. supporting people in interpreting
complex data – we also propose that RBL data may be the basis for statistical
analyses to reveal flaws in questionnaire design. Measures were defined to
analyze respondent behavior at the question level (Table 5), with correspond-
ing measures at the questionnaire level (Table 6).
Table 5. RBL question level measures
Change Frequency (CF): The number of times the answer to a question is changed before the questionnaire is submitted.
Question Response Time (QRT): The time taken to complete a single question.
PingPong Frequency In (PFI): The number of jumps to a question in focus from other questions.
PingPong Frequency Out (PFO): The number of jumps from a question in focus to other questions.
PingPong Movement Factor (MF): MF = PFI + PFO

The number of RBL measures at the questionnaire level is reduced, due to the
logical conflation of the PFI, PFO and MF measures into a total measure, the
Overall Movement Factor (OMF), at the questionnaire level.
Table 6. RBL questionnaire level measures
Overall Change Frequency (OCF): The number of times an answer has been changed in the questionnaire as a whole before it is submitted.
Overall Response Time (ORT): The response time for the whole questionnaire.
Overall PingPong Movement Factor (OMF): The number of jumps between questions in the questionnaire, excluding the number of necessary jumps to move through the questionnaire sequentially.
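
The question-level measures in Table 5 can be computed directly from an ordered sequence of logged actions. The sketch below computes CF, PFI, PFO and MF for one question; the record fields, the function name and the assumption that ‘focus on question’ actions mark movement between questions are ours, and QRT (which requires timestamps) is left out for brevity.

    // Illustrative computation of question-level RBL measures (Table 5).
    // 'actions' is assumed to be ordered by time and to belong to one
    // questionnaire session; field names are hypothetical.
    function questionMeasures(actions, questionId) {
      var cf = 0, pfi = 0, pfo = 0, previousQuestion = null;
      actions.forEach(function (action) {
        // CF: how many times the answer to this question was changed
        if (action.questionId === questionId &&
            action.actionType === 'revise answer to question') {
          cf += 1;
        }
        // PFI/PFO: jumps into and out of this question from/to other questions
        if (action.actionType === 'focus on question') {
          if (previousQuestion !== null && previousQuestion !== action.questionId) {
            if (action.questionId === questionId) { pfi += 1; }
            if (previousQuestion === questionId) { pfo += 1; }
          }
          previousQuestion = action.questionId;
        }
      });
      return { CF: cf, PFI: pfi, PFO: pfo, MF: pfi + pfo };
    }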

A process model for questionnaire evaluation


Above we have presented a conceptualization of RBL (the static and dynamic
models), visualization techniques, and definitions of quantified measures of
respondent behavior. In addition, the RBL process model conveys the use of
RBL in a process to improve questionnaire design.
In essence, RBL is a tool for studying how the respondent to a questionnaire
actually interacts with that questionnaire while answering it. In other words,
RBL tracks participant activities during the process of answering a question-
naire. It begins with the participant opening a questionnaire and interpreting
the situation; it proceeds to the different ways in which the participant can
answer a question or interact with the question and then return to an inter-
pretation of the situation; and it ends with the participant either submitting the
questionnaire or closing it in order to resume later. This tracking mechanism
thus allows researchers to improve questionnaires while expending a minimum
of additional effort to gather the necessary information for doing so.

Figure 16: A process-model for questionnaire improvement through respondent be-


havior logging

Figure 16 shows the answering of a questionnaire with the answering process
"white-boxed": the user activities (e.g., answer, withdraw, open, close, next page,
save) are logged and can later be revisited for interpretation. This process makes
two major contributions. First, interpretation of the log data provides an
opportunity to improve the structure of questionnaires and may reveal errors or
unintended flaws introduced during questionnaire design. Second, interpretation of
the log data can identify areas of common misconception and build an adequate
understanding of questionnaire effectiveness. Most importantly, a holistic
understanding of the log data may lead to new ideas that can inform future
questionnaire design.
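As an illustration of what such "white boxing" could look like in practice, the sketch below shows a minimal log record for the activities named above. The field names and the helper function are assumptions made for illustration; they do not describe the actual U-CARE implementation.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

# Activity vocabulary taken from the process model above.
ACTIVITIES = {"open", "answer", "withdraw", "next_page", "save", "close", "submit"}

@dataclass
class RblEvent:
    """One logged respondent activity (illustrative schema)."""
    respondent_id: str
    questionnaire_id: str
    activity: str                  # one of ACTIVITIES
    question_id: Optional[int]     # None for questionnaire-level activities
    timestamp: str                 # ISO 8601, set when the event is logged
    payload: Optional[str] = None  # e.g. the answer value for 'answer' events

def log_event(respondent_id: str, questionnaire_id: str, activity: str,
              question_id: Optional[int] = None, payload: Optional[str] = None) -> dict:
    """Build one log record; a real system would persist it rather than return it."""
    assert activity in ACTIVITIES
    event = RblEvent(
        respondent_id=respondent_id,
        questionnaire_id=questionnaire_id,
        activity=activity,
        question_id=question_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        payload=payload,
    )
    return asdict(event)

# Example: a respondent opens HADS and answers question 1.
print(log_event("r-001", "HADS", "open"))
print(log_event("r-001", "HADS", "answer", question_id=1, payload="2"))
```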

Evaluation

The Framework for Evaluation in Design Science Research (FEDS) by Venable et al.
(2016) provides a structured approach to evaluating artifacts and design theories,
and a systematic way to develop and explicate research evidence. It characterizes
the artifact's utility and design principles by explicating the goals of the
project, selecting an appropriate evaluation strategy, identifying the attributes to
evaluate, and designing individual evaluation episodes (Gill & Hevner, 2013). In
this chapter, we detail all evaluation episodes and results for the emerging RBL
framework based on FEDS evaluation strategies.
While the implementation of RBL in U-CARE is expected to provide researchers with a
clear understanding of respondents' logged behavior, FEDS helps to structure the
evaluation of RBL in a process-oriented manner. With the goals and artifacts to
evaluate presented, it is crucial to analyze the implications of using FEDS to
evaluate RBL.

Figure 17: FEDS evaluation process.


FEDS provides researchers with the option to evaluate a system before and after a
DSR artifact is implemented, such as the implementation of RBL in this case. Ex ante
and ex post evaluations are commonly used to give researchers a time-based view of a
system's evaluation. An ex ante evaluation analyzes the system's techniques and
predicts future outcomes, which helps set expectations for the implementation of RBL
in the U-CARE system. Ex ante evaluation also minimizes the risk of user
interference with the overall system evaluation (Venable, Pries-Heje & Baskerville,
2012). By so doing, researchers and stakeholders of the system are well informed
about all aspects and artifacts of the system before users are brought into the
picture. The static and dynamic models of RBL (Chapter X) and the visualization of
RBL data are examples of ex ante evaluation; they explain the RBL techniques in
detail and serve as a proof of the RBL conceptualization. On the other hand,
evaluation can also be carried out after specific technologies have been
implemented. In this case, an ex post evaluation is required. Ex post evaluations
are designed to provide researchers with a clear-cut result of the implementation of
a given artifact and help to reduce the possibility of a false-positive result
(Venable, Pries-Heje & Baskerville, 2012). An after-use evaluation plays a major
role in establishing the effectiveness of a system and its ability to accomplish
explicit objectives. Therefore, adopting the FEDS approach is crucial to the success
of the U-CARE system and to the implementation of RBL in that system. As such,
evaluation and analysis of information from responses and user behavior should
provide researchers with reliable and accurate information from which to draw
conclusions and make decisions.

We have conducted multiple evaluations of the RBL framework. Table 7 shows an
overview, followed by detailed accounts of the evaluations in the following
sections.

Table 7: RBL evaluation type


Evaluation episode: Focus group evaluation
FEDS evaluation type: Ex post, naturalistic
Aim: To assess the reactions from active researchers, and to what extent they gained new insights from interpreting RBL visualizations.

Evaluation episode: Experimental evaluation
FEDS evaluation type: Ex post, artificial
Aim: To evaluate if RBL data in combination with statistical analyses can be used to detect ‘flaws’ in questionnaire design, and to assess if the RBL measurement constructs support questionnaire evaluation.

FEDS bases each evaluation episode on an in-depth understanding of the evaluation
process: getting a clear picture of what to evaluate, when to evaluate, and why
(Venable, Pries-Heje & Baskerville, 2016). We present each evaluation episode using
the following categories, inspired by Venable et al. (2016):
• Rationale – why we do this particular evaluation
• Evaluation overview – an overview of what was done
• Data Collection – details about data collection (but some parts may
be in an appendix)
• Data Analysis – details about data analysis (but some parts may be
in an appendix)
• Lessons Learned – what can we infer about the utility/qualities of
the RBL framework based on this evaluation?

Focus group evaluation

Rationale
The goal of the focus group evaluation was to assess the reactions from active
researchers in U-CARE, and to what extent they gained new insights from
interpreting RBL visualizations. In doing so, we received stakeholder feed-
back about the value of RBL visualizations for researchers. Part of the evalu-
ation was also to implement software that produced the visualizations, i.e., a
proof-of-concept [22] or expository instantiation [10] demonstrating the fea-
sibility to apply the RBL visualizations in practice.

Evaluation Overview
We presented the visualizations to the focus group, and the participants were
encouraged to reflect about the utility of these representations to (i) identify
design flaws and suggestions for improvements of questionnaires and (ii) dis-
cuss if they would interpret the collected data differently given the way they
made sense of the RBL visualization. We thus based the evaluation on quali-
tative data collected in the focus group session, focusing on if and how RBL
visualizations were considered meaningful and valuable to the focus group
participants.
We included six participants in the focus group; they were vital stakeholders with a
firsthand interest in the research outcome. They were directly involved in designing
online studies, had been using online surveys throughout the U-CARE research process
since 2011, and represented the three original clinical trials in U-CARE.
The focus group participants were asked to reflect on three basic scenarios. The
first scenario shows respondents' answers collectively in relation to each
questionnaire. The second focuses on an individual's responses in a questionnaire in
relation to other respondents. The third compares the same questionnaire filled in
by a respondent at different points in time.
Showing answers to one questionnaire collectively supports researchers in drawing on
data from ongoing research. In that sense the questionnaires function in a way
reminiscent of the critical incident technique (CIT), helping researchers to
identify various human experiences (Sjöström et al. 2013, p. 512): answering a
questionnaire triggers recollection of past experiences and scenarios. This helps
researchers using the U-CARE software to overcome concerns that prevent participants
from exploring the key scenarios, and it offers a practice in line with the
feasibility of U-CARE research, since the scenarios reflect situations actually
experienced by the respondents. Individual responses complement the collective view
by providing meaningful visualizations of single respondents. Finally, a
questionnaire answered by one or more respondents at different times provides
further detail for the focus group discussions. Such details help assess the
evaluation process, because they reveal various aspects of respondent behavior at a
formative stage.

According to FEDS (the Framework for Evaluation in Design Science Research; Venable
et al., 2016), an evaluation episode should clearly state what, why, when and how to
evaluate.

What to evaluate
The major aim is to assess the efficacy of the online survey process in yielding
reliable research outcomes. In effect, the focus group may give more specific
details regarding the use of online surveys in U-CARE and help establish the
reliability and functionality of RBL. Three aspects of RBL were evaluated: (i)
respondent activity for each question and questionnaire, (ii) the time taken by
respondents for each question and questionnaire, and (iii) a comparison of the
number of changes participants made between a previously completed questionnaire and
the current one.

Why to evaluate
The goal of the focus group evaluation of RBL is to gather interpretations and
reflections from the relevant audience. The evaluation is still at a formative
stage; thus, the "why" attribute of the research process allows us to interpret
outcomes that provide a viable basis for the next design cycle of RBL. We believe
that showing RBL data to stakeholders may reveal weaknesses of RBL and thereby
improve its reliability, usefulness and relevance. In this way, the evaluation also
helps the researcher move toward a summative stance in which the decisions taken in
the research process are confirmed.

When to evaluate
At this stage, the RBL framework (Stage II) falls within the time frame of both ex
ante and ex post evaluation, because the framework is already in use in U-CARE
studies while still being open to further research needs and improvements. Ex ante
evaluation therefore helps us evaluate the RBL techniques (i.e., the static and
dynamic models of RBL) and methods (visualization of RBL data), while ex post
evaluation, based on RBL data already collected from U-CARE studies, helps us decide
whether the research process has attained its objectives so far.

How to evaluate
The "how" of the evaluation spans both the ex ante and ex post processes. The focus
group approach aims to gather interpretations and reflections from the relevant
audience, and we therefore included the six participants described above (section
X), who were vital stakeholders directly involved in designing online studies.
Through the focus group evaluation, the effectiveness of RBL data – especially the
visualization of such data – is in focus: does the visualization help to produce
meaningful insights that would otherwise have been unavailable to the researchers in
the focus group?

Data Collection
We presented a series of slides with visualizations of RBL data from ongoing
research in the eHealth practice. The visualizations were based on how respondents
answered the HADS questionnaire, an instrument used to measure depression and
anxiety. In the eHealth practice, HADS is used both for screening and for subsequent
follow-ups of the participants' depression and anxiety. We made an audio recording
of the entire session (approximately one hour) and took notes.

Data Analysis
We interpreted the audio recording to identify how focus group participants made
sense of the RBL visualizations and how they ascribed meaning to them. Below we
account for the results of the analysis.
The RBL visualizations rendered results related to the repeated use of a
questionnaire (HADS) in randomized controlled trials (RCTs). The charts and cases
were selected based on the three overarching scenarios mentioned in section X.
Extreme cases were excluded because they reflected invalid or incomplete user
actions, for example when a user closed the browser in the middle of answering a
survey, or when a browser session expired due to a long response time.

Figure 18: Activity Chart

Figure 18 shows the number of activities the user Bengt (an anonymized user name)
performed to answer a questionnaire (HADS – the Hospital Anxiety and Depression
Scale). There are 15 questions in HADS. The black line shows the mean number of
activities to answer this questionnaire (for all users in the context). This example
showcases that the ability to reinterpret user activities reveals knowledge about
how a questionnaire has generally been used. In contrast to this case, another user
took quite a different path in answering the same questionnaire. So the question is,
how do we make sense of it? Does the different path entail that the user was very
much enlightened or disturbed by the questions? Or did the user have difficulties in
understanding some of the questions in the questionnaire?

One interpretation from one of the focus group members was that the HADS
questionnaire is generally used for screening participants into different patient
groups, so it is expected that some users will take a different approach to this
questionnaire. Basically, it suggests that the users who take an unusual path are
often the ones who need treatment.
Then a combination of activity charts was shown in which the same questionnaire was
plotted as used on two different occasions. The results differed considerably
between users completing HADS for the first time and the second time. The designer
of that questionnaire said: “I am not sure this brings any ambiguity because users
when attending HADS for the second time, they already have learned about the
questions, .. but what should be interesting is that, the users who consistently had
same results from both HADS”. That is, a user who scored the same both times is
important to study in this case, since it raises concerns that the user may not be
improving or responding to treatment.

Figure 19: Time Chart

Figure 19 is an example of how much time a participant spent filling in HADS. The
black line represents the mean time across participants, and the blue line
represents the anonymized user Monika's time. Monika took somewhat more time than
normal around questions 3 to 7, while the remaining questions were completed in
roughly the same time as for most participants. There is nothing more to infer about
Monika except that she may have been particularly thoughtful around the first seven
questions. The HADS questionnaire consists of questions 1–7 related to measuring
depression and questions 8–14 related to measuring anxiety. One specific comment
from one of the members of the focus group was: “this chart can be very useful if we
combine the same chart time to time of a specific patient in order to compare a
patient’s situation when the same patient takes HADS later in their treatment
process”.
The following Figure 20 is a matrix of answers that shows how many times users have
‘jumped’ from one question to another. The axes represent the question numbers as
they appear to the users. The gray boxes show the most common path or sequence in
which the users have answered the questions.

Figure 20: Answer Matrix (Observation point 1)

Figure 20 reveals some unanticipated jumps between questions by the respondents on
both sides of the gray line. We compare Figure 20 with the following Figure 21,
where the same questionnaire was answered a second time by the respondents later in
their study.

Figure 21: Answer Matrix (Observation point 3)

Figure 21 shows quite a different scenario compared to Figure 20. There are
significantly fewer unusual jumps, and almost every respondent followed the same
tendency in filling out the questionnaire. The dominant view in the focus group was
that when the respondents took HADS for the first time, they clearly produced a lot
of 'noise' outside the desired path, while the second time the respondents followed
the usual path because they had already learned the questionnaire.
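Matrices of this kind can be computed directly from the ordered answer events. The sketch below (Python; the input format and example sequences are invented for illustration and do not reproduce U-CARE data) counts transitions between consecutively answered questions and renders them as a matrix.

```python
import numpy as np
import matplotlib.pyplot as plt

def answer_matrix(event_sequences, n_questions):
    """Count jumps from one answered question to the next over all respondents.

    event_sequences: list of per-respondent question-number sequences, in the
    order the questions were answered (illustrative input format).
    """
    matrix = np.zeros((n_questions, n_questions), dtype=int)
    for sequence in event_sequences:
        for src, dst in zip(sequence, sequence[1:]):
            matrix[src - 1, dst - 1] += 1
    return matrix

# Invented example: two respondents answering a five-question instrument.
sequences = [[1, 2, 3, 4, 5], [1, 2, 4, 2, 3, 4, 5]]
m = answer_matrix(sequences, n_questions=5)

plt.imshow(m, cmap="Greys")
plt.xlabel("To question")
plt.ylabel("From question")
plt.title("Answer matrix (illustrative)")
plt.colorbar(label="Number of jumps")
plt.show()
```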

Lessons Learned
In summary, the focus group evaluation shows that the visualizations support
researchers in reflecting in new ways about the design of their questionnaires. We
have shown examples of how the focus group members (i) reflected on new ways of
understanding the use of questionnaires at different observation points and (ii)
used the visualization of RBL data to identify a problem with the HADS
questionnaire. Hence, the evaluation has shown proof-of-value (Nunamaker and Briggs,
2011) of the RBL visualization techniques in an actual research setting.

Experimental evaluation
Please find the full analysis in appendix A.

Rationale
The goal of the experimental evaluation is to evaluate if RBL data in combi-
nation with statistical analyses can be used to detect ‘flaws’ in questionnaire
design and to assess if the RBL measurement constructs support questionnaire
evaluation.

Evaluation Overview
In order to evaluate the RBL concept we have conducted an experiment using
the following setup. A questionnaire is designed in three versions –

• Q1: Baseline questionnaire – designed to be a ’good questionnaire’, i.e. without deliberate design flaws. Q1 respondents serve as a control group so that we know that the results are not just an effect of randomness.
• Q2: Erroneous questionnaire A – designed to contain some built-in design issues with respect to isolated questions, e.g. ambiguous questions or questions with non-intuitive answering options.
• Q3: Erroneous questionnaire B – designed to contain built-in structural issues, i.e. questions in a non-logical sequence or a lack of informative texts to provide context for the respondent.

Q2 and Q3 can differ quite a lot from each other and from Q1, as long as the
built-in design flaws/issues are not ’obvious’. A double-blind survey will be
conducted: neither the respondents nor the researchers will be aware of which
questionnaire is in use during data collection.
We based the experimental design on an original questionnaire design using
guidelines from the Harvard University Program on Survey Research (Q1). In addition
to Q1, we designed two derivative questionnaires with planted flaws: Q2 has embedded
flaws on the question level, while Q3 has embedded structural flaws. To have a
useful baseline questionnaire, we proceeded with the well-established USE
questionnaire [18]. The questionnaire concerns online learning platforms, making it
relevant for the intended respondents, who were students at the various campuses of
Uppsala University.
The dependent variable is whether or not there is an error in a question.
Independent variables are the various quantified measures that RBL enables (see
Table 5 and Table 6), including but not limited to:

• Change Frequency (CF): The number of times the answer to a question is changed before the questionnaire is submitted.
• Question Response Time (QRT): The time taken to complete a single question.
• PingPong Frequency In (PFI): The number of jumps to a question in focus from other questions.
• PingPong Frequency Out (PFO): The number of jumps from a question in focus to other questions.
• PingPong Frequency Total (PFT): PFT = PFI + PFO.

The measures may also be defined at questionnaire level:

• Overall Change Frequency (OCF): The number of times an answer has been changed in the questionnaire as a whole before it is submitted.
• Overall Response Time (ORT).
• Overall PingPong Frequency (OPF).

What to evaluate
The following hypotheses will be tested in this study.
• Hypothesis 1: This hypothesis tests whether response time and change frequency are significant discriminating variables across the three questionnaires.
o H0: Mean response time and change frequency are the same across the three questionnaires.
o H1: Mean response time and change frequency differ across the three questionnaires.
• Hypothesis 2: Questionnaire one has open-ended questions placed at the end, whereas questionnaire three has open-ended questions placed in the middle of the questionnaire. This hypothesis tests whether the placement of open-ended questions in a questionnaire affects the response time and change frequency of questions.
o H0: Mean response time and change frequency for open-ended questions are the same between questionnaires one and three.
o H1: Mean response time and change frequency for open-ended questions differ between questionnaires one and three.
• Hypothesis 3: Questionnaire one provides proper context for all questions, whereas in questionnaire three context is missing for some questions. This hypothesis tests whether the availability of context for questions in a questionnaire affects the response time and change frequency of questions.
o H0: Mean response time and change frequency for the questions with/without context are the same between questionnaires one and three.
o H1: Mean response time and change frequency for the questions with/without context differ between questionnaires one and three.
• Hypothesis 4: Questionnaire one has seven options for all multiple-choice questions, whereas questionnaire two has six. This hypothesis tests whether the number of options for questions in a questionnaire affects the response time and change frequency of questions.
o H0: Mean response time and change frequency for all multiple-choice questions are the same between questionnaires one and two.
o H1: Mean response time and change frequency for all multiple-choice questions differ between questionnaires one and two.

Why to evaluate
The experimental evaluation aims at estimating the probability that an error exists,
based on the measures derived from the RBL log data. The experiment takes the form
of a survey experiment. The chosen experimental sample is intended to support
inference toward the broader goal of the research, which is to offer an evaluation
technique for online surveys.
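The research outlook in the concluding chapter names binomial logistic regression as a planned technique for such error-probability estimation. The sketch below illustrates what that estimation could look like in Python; it is an assumption about the eventual analysis, the data and column names are invented, and it does not reproduce any result reported in this thesis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented per-question data: RBL measures plus a flag marking whether a flaw
# was deliberately planted in the question (1) or not (0).
df = pd.DataFrame({
    "error": [0, 0, 1, 0, 1, 1, 0, 1, 0, 0],
    "qrt":   [12.1, 9.8, 25.4, 11.0, 30.2, 12.1, 22.0, 18.4, 10.5, 14.2],  # seconds
    "cf":    [1, 0, 3, 0, 2, 1, 2, 1, 0, 1],
    "mf":    [0, 1, 2, 0, 3, 0, 1, 2, 1, 0],
})

# Binomial logistic regression: probability that a question contains an error,
# given its RBL measures.
model = smf.logit("error ~ qrt + cf + mf", data=df).fit()
print(model.summary())
print(model.predict(df))  # estimated error probabilities per question
```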

When to evaluate
At this stage of the research, the idea is to conduct this experiment with the
purpose of collecting as many insights as possible. It is hard to pre-determine the
number of respondents needed, since there is no previous data on variability in the
sample. At a later stage, we will determine variability based on the control group
data. At this point, the aim is to collect as much data as possible ("more is
better"), with a preliminary aim of 40 respondents per questionnaire, i.e. N = 120.
Variability is typically determined in a pre-study, and this experiment may be
considered as such – the lessons learned here (including the variability issue) are
all part of developing and evaluating the RBL concept. Data analysis conducted in
upcoming research will benefit from lessons learned about variability in this
experiment.

How to evaluate
In order to analyze the data and draw inferences, descriptive statistics are
calculated separately for the metrics tested in each hypothesis. A one-way MANOVA is
performed using SPSS to test each hypothesis, and the results are analyzed to draw
conclusions. All assumptions of the one-way MANOVA are also tested using SPSS.
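The analysis itself was carried out in SPSS. For readers who prefer code, the sketch below shows an equivalent one-way MANOVA in Python using statsmodels; the data and column names are invented for illustration and do not reproduce the experimental results.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Invented example data: overall response time (ORT, seconds) and overall change
# frequency (OCF) per respondent, grouped by questionnaire version Q1-Q3.
df = pd.DataFrame({
    "questionnaire": ["Q1"] * 4 + ["Q2"] * 4 + ["Q3"] * 4,
    "ort": [310, 285, 330, 295, 340, 360, 325, 350, 400, 380, 415, 395],
    "ocf": [2, 1, 3, 2, 4, 3, 5, 4, 6, 5, 7, 6],
})

# One-way MANOVA: do ORT and OCF jointly differ across the questionnaire versions?
manova = MANOVA.from_formula("ort + ocf ~ questionnaire", data=df)
print(manova.mv_test())
```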

Data Collection
Students (N=120) were approached and asked to fill in the questionnaire using
tablets provided by the researcher. The process was double-blind (neither the
researchers nor the respondents knew which version of the questionnaire they
were filling in).
Table 5 and Table 6 show the quantified measures that RBL enables at this
time. Additional measures may be discovered and tested at a later stage, only
limited by the data and metadata in the RBL database (as shown in sections 0
– 0). The overall design of the experiment is based on the following:

• Create a ’flawless’, or at least good, baseline questionnaire, i.e. a questionnaire that is aligned with established questionnaire guidelines. Guidelines from the Harvard University Program on Survey Research were used. An established instrument was used as a starting point, with some demographic and contextual questions added. We proceeded with the USE questionnaire (Lund, 2001), a fairly well-tested questionnaire for studying user opinions about the usability of an IT system. We asked students to fill in the questionnaire, answering questions about learning platforms at the university.
• Create good ’flaws’ to inject into Q2 and Q3. The starting point here should be some typology of typical errors in questionnaire design. Appendix A shows how errors were injected into Q2 and Q3 by deliberately introducing systematic deviations from the questionnaire design guidelines.

Data Analysis
The detailed statistical analysis can be found in the appendix. Below we highlight
the findings from the analysis.

Hypothesis 1: As the null hypothesis was not rejected, it can be concluded that
response time and change frequency are not significant discriminating variables for
the three questionnaires. It can also be said that factors such as question
sequence, the number of options for each question, and the availability of context
did not have a significant impact on the overall response time and change frequency
of the three questionnaires.
Hypothesis 2: As the null hypothesis was rejected, it can be concluded that response
time and change frequency of open-ended questions are significant discriminating
variables for questionnaires 1 and 3. Open-ended questions were placed at the end in
questionnaire one, whereas they were placed in the middle in questionnaire three.
Based on this result we can conclude that the placement of open-ended questions in a
questionnaire has a significant impact on their response time and change frequency.
Open-ended questions placed at the end of a questionnaire showed lower response time
and change frequency compared to when they were placed in the middle of the
questionnaire.
Hypothesis 3: As the null hypothesis was not rejected, it can be concluded that
response time and change frequency of questions with/without context are not
significant discriminating variables for questionnaires 1 and 3. Questionnaire 1 had
proper context for all questions, whereas no context was provided for some questions
in questionnaire 3. Based on this result it can be concluded that the presence of
context does not have a significant impact on the response time and change frequency
of questions.
Hypothesis 4: As the null hypothesis was not rejected, it can be concluded that
response time and change frequency of multiple-choice questions are not significant
discriminating variables between questionnaires 1 and 2. Questionnaire 1 had seven
options for each multiple-choice question, whereas questionnaire 2 had six. Based on
this result it can be concluded that the number of options available for
multiple-choice questions does not have a significant impact on the response time
and change frequency of questions.
The result for Hypothesis 2 was in line with our expectation that open-ended
questions placed at the end of a questionnaire attract lower response time and
change frequency than the same questions placed in the middle of the questionnaire.
A limitation is that the variables in each category were not normally distributed,
which could have affected the results of the analysis to some extent. Another
limitation is that there are multiple instances, especially in questionnaire three,
where the respondents' response time for numerous questions is 0 seconds. That is,
respondents chose not to respond to questions that were placed out of order or were
not properly contextualized, and this could have affected the results of the
evaluation.

Lessons Learned
To this point, we have demonstrated that the conceptualization of RBL meas-
urement constructs is implementable and feasible to use in statistical analysis.
From a DSR evaluation perspective, we have thus provided a proof-of-concept
[23], i.e., that the measures are possible to calculate and that they can provide
a ground for statistical analysis. We have also demonstrated that the collection
of RBL data, and subsequent analysis using RBL measurement constructs, fa-
cilitates ex-post analyses of questionnaire design that would not be possible
without respondent behavior logging. Note that we have not investigated the
in-depth meaning of the experimental results; e.g., attempted to correlate the
characteristics of the free-text answers with the CF or time taken measures.
Instead, we have shown that the RBL data opens up new avenues of analyses,
and points us in new directions to understand our questionnaires as well as the
data collected in online surveys.

Discussion

In this chapter, we draw on the literature, experiences from the design process, and
the evaluations to address our research questions.

RQ1: How can the characteristics of the online medium be used to improve the efficacy of online surveys?
The revolution brought about by the Information Age has led to massive changes in
all sectors of work. The field of research is equally affected, as Katz (2002)
asserts. Surveys have long been critical for data collection, but the trend has
shifted towards technology for the storage, retrieval, and analysis of data (Murthy,
2008). To meet these changing needs, some critical traits are demanded of online
survey platforms. For instance, the online platform should capture the necessary
data directly into the applicable statistical package, as this limits the errors
associated with paperwork. Secondly, an efficient system needs to be cost-friendly,
as Ward, Clark & Zabriskie (2012) confirm; for an online survey method to be more
efficient, its designers must ensure that it has a cost advantage over other
existing media. In addition, versatility is necessary to enhance usability and
access. With the widespread use of smartphones and various operating systems, there
is a need to develop systems that can easily launch on a variety of platforms, so
both software and hardware compatibility are taken into account during the design
process. Moreover, the ability to keep traceable records, which has been a challenge
with former analog systems, is a key feature of online survey systems: there is
often a need to keep track of the collected data during a survey, a purpose that
online systems need to serve well. Other necessary features of online systems are
extemporaneity, simultaneity, self-determination, analytics, visibility, identity,
and quality measurement and maintenance (Clark, 1996). The design of RBL has
therefore taken these features into account and responded to the challenges inherent
in analog systems.
First, RBL enables timely tracking of respondents' activities while they answer
questions. This can be done through both server-side and client-side tracking, an
opportunity offered by the web platform as opposed to paper-based surveys. The web
platform also allows respondents to change their answers. However, both techniques
come with challenges. Client-side tracking has the benefit of capturing a large
amount of detail by recording every move of the user, at the cost of a potential
risk of data loss and slower browser performance due to the script. Server-side
tracking, on the other hand, has a smaller effect on browser speed and less risk of
data loss, but cannot capture the same level of detail. Taken together, the benefits
of both techniques outweigh their shortfalls, making the RBL software applicable as
far as activity tracking is concerned.
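As a purely illustrative sketch of the server-side option, a minimal logging endpoint could look as follows. Python/Flask is used only for illustration; the route, payload fields, and in-memory storage are assumptions and do not describe the actual U-CARE implementation.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
EVENT_LOG = []  # in-memory store for illustration only; a real system would persist events

@app.route("/rbl/events", methods=["POST"])
def receive_event():
    """Receive one respondent-behavior event posted by the client-side script."""
    event = request.get_json(force=True)
    required = {"respondent_id", "questionnaire_id", "activity", "timestamp"}
    if not required.issubset(event):
        return jsonify({"error": "missing fields"}), 400
    EVENT_LOG.append(event)  # server-side capture keeps the client lightweight
    return jsonify({"stored": True}), 201

if __name__ == "__main__":
    app.run()
```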
Secondly, the innovative features of RBL allow for self-evaluation of the survey, in
contrast to existing paper-based surveys. For instance, the complexity of a
question's structure can be gauged by how long a user takes to respond to it, as
indicated by the time chart or the answer matrix. Using RBL, surveyors are thus able
to ascertain whether there is a need to improve the survey and how this can be done.
As stated earlier, this is a new feature in the U-CARE survey environment that can
be vital for continued quality improvement in the survey field. Through this system,
surveyors are able to acquire a large amount of data that is of great significance
to designers and researchers.
Additionally, the literature noted earlier shows that users across the globe use
very different browsing software. Specifically, some users of the U-CARE software
upon which RBL is based still use outdated software such as Internet Explorer 6,
while other common browsers among likely users of RBL systems are Safari, Google
Chrome, and Firefox. To meet these varying needs, RBL has been designed to be
compatible with all of these browsers, which caters to all computer users of the
software. In addition, RBL has been designed to be light enough to run on mobile
platforms. Thus, the mobile platforms support server-side tracking together with
certain client-side tracking: events on the client side are only tracked when the
users answer questions, while other aspects such as locations, questionnaires, and
question tracking are handled on the server side. Further, the jQuery library
(version X) has been used in the design of RBL to support tracking on the client
side. jQuery is a small, fast, and feature-rich JavaScript library that is easy to
use and works across a number of browsers, thus increasing the software's
versatility.

RQ2: How can we conceptualize respondent behavior
and logging of such behavior in a meaningful manner in
the context of online surveys?
From the available evidence, online tools are widely used to conduct online surveys.
This has been motivated by the advent of Web 2.0, which has created new
opportunities for how research is carried out. Examples include services from
vendors such as Google, SurveyGizmo, and SurveyMonkey, which have been used in
various fields to design questionnaires and to collect and analyze data. Further,
software such as SUML and queXML has been used to manage questionnaires, automating
parts of online surveying. Despite the broad application of such software in fields
other than information systems, little emphasis has been placed on the design and
evaluation of the questionnaires themselves.
In the field of HCI, for instance, implicit interactions and usability have been
used as parameters for studying interactions between humans and computers. The
efficiency and proficiency of users have been tested by letting a user fill in a
form and documenting the time taken to complete the indicated fields. When using
this method, there is a need to ensure that the time tracking does not affect the
user experience; it should thus be limited to the browser and server-side settings.
To achieve this, JavaScript code has recently been used to modify pages via an HTML
proxy, where the script only contacts the server during loading or data saving. This
technique has, however, been limited to website usability tests, with possible
future applications in developing web applications, profiling users, and capturing
implicit interactions on websites. Additionally, mouse tracking has been used in HCI
to record user behavior and to determine website usability by keeping track of how
users interact with the site.
Apart from HCI, user logging has also been used in the field of medical services.
For mental health interventions, for example, data such as when consumers logged
into the program and why and when they changed answers can be used to improve the
intervention and make it more persuasive, as found by Kelders, Julia & Van
Gemert-Pijnen (2013). In addition, user logging has been used in the management of
chronic diseases, especially for elderly people: since these diseases require close
monitoring, log files have been used to keep track of disease progression in the
elderly. For health professionals, eHealth provides online support,
self-management, and improved interaction between patients and medical
practitioners.
However, from the available evidence, little has been done in the field of IS
concerning the application of respondent behavior tracking to improve the design of
surveys (see details in the literature review, p. 48). The focus of the available
literature has been the achievement of concrete outcomes that are valuable in the
real world, while the question of how tracking of respondent behavior can be used
has been largely ignored in IS.
Due to this innovation gap in the field of IS, and given the recent developments in
environment logging, we came up with the idea of RBL. As noted earlier, the
environment log can collect details such as screen resolutions, browsers, operating
systems, and JavaScript versions, which can be used to determine the survey version
for maximum usability. RBL builds on these techniques to design a survey mechanism
capable of tracking the behavior of respondents, which can ultimately be used to
improve the questionnaires. Using this technique, we are able to create an online
survey system that is self-evaluating; that is, from the collected data, the RBL
system itself can generate information on how the surveys can be improved. Moreover,
since this process needs to be interactive in order to create two-way communication,
we apply visual aids in the development of RBL. Techniques such as the activity
chart, time chart, and answer matrix have been used in this project to give the
system a social presence, in addition to making it easier to understand.
To further conceptualize the idea of self-evaluation, RBL uses both question-level
and questionnaire-level measures to evaluate whether there are challenges with how
questions or questionnaires are designed. At the question level, change frequency,
question response time, PingPong frequency in and out, and the PingPong Movement
Factor have been used. At the questionnaire level, overall change frequency, overall
response time, and the overall PingPong Movement Factor have been used in RBL and
its evaluation techniques (see chapter 0).
Based on the Framework for Evaluation in Design Science Research (FEDS), an
evaluation approach suggested by Venable et al. (2016), we have carried out a focus
group evaluation and an experimental evaluation of the RBL system, in addition to a
comparative analysis. During the focus group evaluation, it was possible to see how
many activities a user performed to complete the HADS questionnaire, as demonstrated
by the case of Bengt (an anonymized user). This gave insights into how the
questionnaire had been used in a particular case. Apart from the approach taken by
Bengt, the data analysis showed that some users took a different approach to the
questions; according to the interpretation of the focus group members, such RBL data
can be used to identify those who need treatment. Similarly, the time taken by a
user to fill in the questionnaire, as demonstrated by the case of Monika, can be
used to compare a patient's situation if the same instrument is taken by the same
user at different times. Further, it has been demonstrated on this platform that it
is possible to illustrate how users jumped from one question to another, whether in
the usual order or not, and thus the paths taken by users to respond to the
questions. It was also possible to observe improvements when the users answered the
questionnaire on subsequent occasions.
On the other hand, the experimental evaluation of RBL was used to test, among other
things, the impact of the placement of open-ended questions in questionnaires.
Change frequency and response time, at both the question and the questionnaire
level, have also been used in RBL to assess the effectiveness of the structure of
questionnaires and questions alike, as illustrated in chapter 0. For example,
open-ended questions placed at the end of a questionnaire attracted less response
time compared to similar questions placed in the middle of a questionnaire. In
summary (and as demonstrated earlier), the RBL system is implementable and the
set-out parameters are measurable. Hence, with the use of this system, it becomes
possible to automate and evaluate online surveys in IS.

RQ3: What qualities should guide the design of respondent behavior logging software to support researchers in designing and executing online surveys?
Drawing on behavioral science, a designed IT artifact should be evaluated on
appropriate metrics by gathering and analyzing appropriate data (Von et al, 2004, p.
85). The qualities include functionality, consistency, completeness, performance,
accuracy, fit with the organization, and reliability (Venable, Pries-Heje &
Baskerville, 2012, p. 426; Von et al, 2004, p. 85).

Functionality
Automation of questionnaires during the research process has been one of the
greatest challenges for surveys in information systems research. In particular,
there has been a problem with the criteria by which participants of an online survey
are selected in order to arrive at helpful and conclusive results. The results of
any research rest on the study sample, since it is on this basis that the collected
data are analyzed and results documented. However, most existing online survey
systems are unable to be selective about the sample population based on the results
at a given stage of the questionnaire; the existing systems have therefore not been
reliable as far as sample selection at particular stages is concerned. Against this
background, the idea of RBL is useful. Using this system, the responses of users at
any stage of a questionnaire can be obtained and reliable conclusions drawn. This
makes it easier to determine whether the structure of both the questions and the
questionnaires matches the desired objectives. This mid-stage analysis is possible
neither with paper-based methods nor with the existing software used in online
surveys, making RBL both unique and applicable to online surveying.

Performance & Applicability
Secondly, when carrying out software design, close attention must be paid to the
applicability of the developed system: any system should ensure that the final
result can be put to practical use. In addition, most systems are created with a
given niche in mind, and designers therefore usually tailor their products to the
needs of the market. While designing RBL, we took both aspects into account. First,
as demonstrated above (in this discussion chapter) and in the evaluation of the
system (chapter 6), RBL enables the researcher to monitor the process through which
respondents answer questions, similar to the mouse tracking technique previously
used in human-computer interaction (HCI). Based on the reactions of the user, the
researcher is able to ascertain whether an appropriate format for both questions and
questionnaires was applied during the research design. We have also been able to
demonstrate that it is possible to collect user data such as the time taken to
respond to a question, how frequently a user jumps from one question or
questionnaire to another, and how long a user takes to respond to a questionnaire,
as shown in the evaluation of the software above.
Moreover, the use of RBL has shown that it is possible to collect data on the
effects of question types on respondents' response time. From the illustrations,
open-ended questions appearing in the middle of a questionnaire demand more response
time than those at the end. This finding, among others listed above, is enabled by
RBL, since it gives an opportunity to evaluate the online survey. Using RBL, one can
thus ascertain whether a design choice used in an online survey best serves the
desired objectives or needs improvement.
Hevner (2004) states that applications of technology should lead to
"technology-based solutions to important and relevant business problems" (p. 83). As
mentioned, there has been a knowledge gap regarding the automation of online surveys
based on the available data. Further, Hevner (2004) asserts that the utility,
quality, and efficacy of any design must be demonstrated through well-executed
evaluation methods; in this thesis, such evaluation is structured using the
Framework for Evaluation in Design Science Research. In addition, it has been argued
in the literature that design science needs to make a clear and verifiable
contribution to its area of knowledge, and that the search for an effective artifact
requires that the design uses the available data to solve a particular problem.
Finally, Hevner (2004) argues that research should be communicated to both
technology-oriented and management-oriented audiences (p. 83).
In response to the available literature, and as suggested mainly by Hevner (2004),
we designed RBL so that it is possible to evaluate how efficient and effective the
system is through focus group and experimental evaluations and comparative analysis.
Notably, the idea of RBL is new, since no other known software collects such data in
real time and supports self-evaluation of an online survey based on the data
acquired from the respondents. To ensure that the process was rigorous, as suggested
in the existing literature, we organized workshops and collected the views of the
various stakeholders. RBL also incorporates a feedback loop that supports addressing
its own flaws, making the artifact more effective. Finally, the use of visualization
techniques, the implementation in RCTs, conceptual models, and user guidelines make
RBL relevant for both management-oriented and technology-savvy users. In summary,
the design and implementation of RBL have followed most, if not all, of the
techniques suggested in the literature, as we have validated.

System Integrity & Reliability


Usually, a product design is an iterative process which is incremental in na-
ture. During the evaluation process, the feedback is collected and is used in
the construction phase of the same artifact to improve the quality of the pro-
cess. This goes on until the system satisfies the requirements for which it was
intended. At this point, it is said to be either complete or has met the required
level of integrity. Additionally, system integrity is measured by the effects of
either the internal or the external environments on the operational parameters
of the system. During the design process of RBL, several inputs were tested
and the results used in the continued development of the final product. The
evaluation criteria used in this product have been so vigorous that the final
product will ensure that the system does not change due to the changes in any
input parameters. Furthermore, both focus group and experimental techniques
have been used to ensure the completeness of this artifact. Using RBL, the
data that will be collected using these evaluation techniques will be valuable,
collected from the right sample population, and most significantly follows all
the required security constraints, hence system integrity.

Functionality (Opportunity for better design)


With the development of technology, design science has been very active in creating
artifacts that influence organizations and people, with a constant focus on solving
existing problems within the context in which these artifacts must function (Von et
al, 2004, p. 98). Compared to paper-based surveys, RBL (an online survey technique)
gives users an opportunity to evaluate the effectiveness of the instrument, as
achieved and demonstrated through the various evaluation techniques. Most
significantly, with RBL it is possible to track respondent logs, thereby providing
an opportunity for evaluation; an example is determining how long a respondent takes
to answer a single question, or how long one takes to respond to an open-ended
question in the middle of the questionnaire as compared to one at the end.
Conversely, in a paper-based survey it is impossible to keep track of these
parameters, and hence the instrument cannot easily be evaluated. The evaluation
mechanisms in RBL thus provide an opportunity to improve the survey design, unlike
paper-based surveys, which are tedious to evaluate.

Uniqueness
For a long time, there has been no agreement among researchers on what constitutes a
contribution in DSR. However, Gregor & Hevner (2013) argue, as mentioned above, that
there are three levels at which a DSR project can contribute an artifact (pp.
341-342). As established, the contribution of RBL in the field of online surveying
has not been formalized and documented before; similar attempts have been limited to
human-computer interaction (HCI) and to the field of medical services. No
self-evaluating survey artifact of this kind has been established in DSR, which
makes this project unique. Moreover, artifacts that cut across more than a single
discipline, as the RBL software does, have been scarce. Hence, RBL is not only
unique in its application to the survey field, but also in the field in which it is
developed and applied (IS).

The contribution to Knowledge Base


According to Gregor & Hevner (2013), the contribution of an artifact to the existing
knowledge base can be categorized as improvement, invention, routine design, or
exaptation (p. 345). As discussed, the RBL project is both an invention and an
exaptation and thus contributes substantially to the existing knowledge base.
Invention (as mentioned earlier) focuses on creating a solution to a complex problem
and always involves cognition and curiosity (Gregor & Hevner, 2013, p. 345).
Exaptation, on the other hand, involves using the knowledge base of one field to
solve a problem in another field (Gregor & Hevner, 2013, p. 347). To our knowledge,
and as noted in the existing literature, RBL is the only artifact in this category
that provides a solution to the challenge of evaluating an online survey through the
acquisition of data and the tracking of respondents' activities (as discussed
earlier). Moreover, this IS research aims at solving an existing problem in the
survey field. On this basis, we assert that the research can make a substantial
contribution to another field and change the way online surveys are carried out
(creation of online questionnaires, evaluation of the questionnaires, and process
improvement).

Conclusions and Research Outlook

In this thesis, we have argued that respondent behavior logging allows for
questionnaire evaluation and data interpretation in a novel way. Although various
forms of user behavior conceptualization exist, and user logging has been used for
other purposes, our literature review shows that it has not yet been studied in the
context of online surveys. We have proposed a tentative RBL concept on the basis of
(i) existing artifacts, (ii) literature and (iii) a design process in the context of
eHealth and eHealth research. We provided a proof-of-concept by implementing the RBL
concept into software. The software, being used in live eHealth trials, is used to
collect a large amount of data on respondent behavior while filling in
questionnaires. So far, we have conducted two separate evaluations to demonstrate
the qualities of RBL. First, we have conducted a focus group in which RBL
visualizations were presented to stakeholders in the eHealth research practice. In
doing this, we showed proof-of-concept for the conceptualization of RBL and for the
visualization techniques as outlined in sections 0 – 0. In addition, the focus group
evaluation demonstrated proof-of-value for RBL visualization. Second, we have
conducted an experimental evaluation including 120 participants in which the RBL
measures were used to test hypotheses about questionnaire flaws. The evaluation
serves as a proof-of-concept that these measures can indeed be calculated and
integrated into analysis. The results signal a potential value of RBL as a means to
identify questionnaire design issues, but also as a means to make sense of data
collected in the trials. Future evaluation will (i) expand the experimental
evaluation using binomial logistic regression and (ii) compare large amounts of RBL
data from the U-CARE context and contrast it with known issues in the questionnaires
in use.
We do not propose RBL as a substitute for other questionnaire design and
evaluation techniques. It does, however, provide an alternative approach to
questionnaire evaluation that may be used stand-alone or in concert with other
evaluation techniques. While the implications of RBL at this stage are spec-
ulative, we find that it has good potential to become an important strategy for
questionnaire evaluation and re-design. It is a new use of technology that may
be integrated into the process of large-scale data collection online, supporting
both the interpretation of collected data and the refinement of questionnaires.
RBL allows for evaluation to occur automatically while the user is taking
the questionnaire. If RBL is ‘activated’, virtually no further work is needed on
the part of the researchers to collect and monitor data for questionnaire
evaluation. RBL thus has the potential advantage of being cost-effective in com-
parison with other questionnaire evaluation techniques, either to make pilot
evaluations of questionnaires, or to continually collect data in order to be able
to fine-tune questionnaires that are used over a longer period of time. Other
methods typically require additional effort in order to carry out evaluation.
RBL potentially minimizes bias in the questionnaire evaluation process. In general,
unwanted bias diminishes the quality of any research process. In other questionnaire
evaluation methods, such as cognitive interviews, there may be a gap between the
users' self-reported survey-taking behavior and their actual survey-taking behavior.
Even if researchers were to conduct independent quantitative analyses, there would
still be a risk of selection bias regarding which data they choose to focus on. RBL
reduces such risks by providing a one-to-one correspondence between the data
produced and the actual behavior of the user. The value of this cannot be
overstated.

Research Outlook
Continued research will be based on ‘stage IV’ of the research process as outlined
in section 0. There is a need to plan the continued evaluation work in detail. In
addition, there is a need to extend the literature review based on feedback obtained
at the mid-term seminar and at other conferences, as well as peer-review feedback
from submissions to conferences and/or journals.
Finally, the end result of this work needs to incorporate the ethical impli-
cations of RBL. A potential problem with RBL is that it adds an additional
layer of logging to online research. Logging of respondent behavior, espe-
cially in the eHealth context, may be considered an intrusion of privacy. We
are aware of two implications: (i) the use of RBL would require consent when
applied in eHealth trials, and (ii) respondents who are aware that their behav-
ior while filling in the questionnaire is being logged may change their behav-
ior, i.e. a type of ‘observer effect’ occurring due to RBL. The final manuscript
needs to further address the ethical aspects and the potential observer effects
related to RBL.

Appendix A – Experimental questionnaire
design

This table shows the rationale for the design of the questionnaires used in the
experimental RBL evaluation.

Category: Structure Level Design Principles

The Length Principle
Details: Keep your questionnaire short. Respondents are less likely to answer a long questionnaire than a short one, and often pay less attention to questionnaires which seem long, monotonous, or boring.
Experiment design implications: Will not be implemented, since we want all three questionnaires to be equally long to facilitate comparison between Q1–Q3. All three questionnaires are nevertheless kept short.

The Introduction Principle
Details: Start a questionnaire with an introduction. If a respondent reads the survey, provide a title for each section. If an interviewer reads a survey, write smooth verbal transitions.
Experiment design implications: Q3: Minimize introduction and explanations.

The Warm-up Principle
Details: It is usually best to start a survey with general questions that will be easy for a respondent to answer. It is usually best to ask any sensitive questions, including demographics (especially income), near the end of the survey.
Experiment design implications: Q3: Move some general questions to the end of the questionnaire. Keep the question about gender as the first question in the questionnaire (it is arguably the most sensitive question in the questionnaire).

The Interference Principle
Details: Things mentioned early in a survey can impact answers later. If the survey mentions something, respondents might be primed to think of this in other questions.
Experiment design implications: Q3: The Q3 introduction text now includes the phrase “Learning platforms often suffer from bad design and technical problems, which in turn cause students to be stressed and frustrated”.

The Question Shuffling Principle
Details: If you are asking a series of similar questions, randomizing the order respondents encounter them can improve your data.
Experiment design implications: Not supported by the survey software.

The Relevance Principle
Details: Respondents should only be asked questions that apply to them. In cases where some questions might be relevant to only some respondents, it is best to specifically determine applicability.
Experiment design implications: Use some branching in Q1 that is removed in Q3, i.e. showing irrelevant questions to the respondents of Q3. Q1 and Q2 now have the rule @q3@ < 5 to display question 4; Q3 does not have that branching rule (see the sketch after this table).

Category: Question Level Design Principles

The N/A Principle
Details: One of the biggest issues is whether you offer an explicit “Don’t know” or “Not applicable” box.
Experiment design implications: Remove the “don’t know” option in Q3.1.

The Attitude Rating Scale Principle
Details: Usually between five and seven points is best. Generally, providing a middle category provides better data. Points on the scale should be labeled with clear, unambiguous words.
Experiment design implications: Use rating scales with 6 options in Q2 (Q2.4–32).

The Agree Bias Principle
Details: Questions which use agree/disagree scales can be biased toward the “agree” side, so it is usually best to avoid this wording. Try to write questions so that both positive and negative items are scored “high” and “low” on a scale.
Experiment design implications: The USE instrument is based on ‘agree’ questions, thus we will not make any changes based on this principle in the experiment.

The Primacy Effect Principle
Details: Respondents tend to pick the first choice.
Experiment design implications: Change the order of the options for question Q2.3 (moodle, ping pong, studentportalen).

The Response Option Shuffling Principle
Details: Randomizing or rotating response options is usually a good idea.
Experiment design implications: Not supported by the survey software.

The Response Option Presentation Principle
Details: In Internet surveys, radio buttons work better than drop-down menus.
Experiment design implications: Q1.1 should be a radio button. Change Q3.1 to be a drop-down menu instead of a radio button.

The Question Phrasing Principle
Details: Avoid technical terms and jargon. Avoid vague or imprecise terms. Avoid complex sentences. Define things specifically. Provide reference frames.
Experiment design implications: Q2.17 should read “The software architecture is robust.”

The Ordinal Scale Principle
Details: If you are using a rating scale, each point should be clearly higher or lower than the others for all people. For example, do not ask “How many jobs are available in your town: many, a lot, some, or a few?” It is not clear to everyone that “a lot” is less than “many”. A better scale might be: “A lot, some, only a few, or none at all.”
Experiment design implications: Introduce a new radio button question in the questionnaires (before the actual USE questionnaire starts): “How much do you use the learning platform?”. In Q1 and Q3, the options are “A lot, some, a little, not at all”. In Q2, the options are “Much, a lot, a little, not at all”.

The Double-barreled Question Principle
Details: Questions should measure one thing. Double-barreled questions try to measure two (or more!) things. For example: “Do you think the president should lower taxes and spending?” Respondents who think the president should do only one of these things might be confused.
Experiment design implications: Introduce a double-barreled question in Q2: Q2.15 should be “It is flexible and I can use it without written instructions”.

The Response Anticipation Principle
Details: If a respondent could have more than one response to a question, it is best to allow for multiple choices. If the categories you provide do not anticipate all possible choices, it is often a good idea to include an “Other – specify” category. If you are measuring something that falls on a continuum, word your categories as a range.
Experiment design implications: In Q2.1, phrase the question “Which campus at UU do you study at?” and make it a multiple-choice question. Remove “other” and “I do not use any learning platform in my studies” from Q2.4.

The Non-Leading Principle
Details: Avoid questions using leading, emotional, or evocative language. For example: “Do you believe the US should immediately withdraw troops from the failed war in Iraq?” or “Do you support or oppose the death tax?” Sometimes the associations can be more subtle, for example: “Do you support or oppose President Bush’s plan to require standardized testing of all public school students?” Some people might support or oppose this because it is sponsored by President Bush, not because of their opinions toward the merits of the policy.
Experiment design implications: Move Q3.32–35 to the end of the first page.
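
The branching rule mentioned under the Relevance Principle (@q3@ < 5 to display question 4) can be made concrete with a small sketch. The rule string is taken from the table above; the parsing and evaluation logic below are hypothetical assumptions, not the survey software's actual implementation:

```python
# Rough sketch of how a display rule such as "@q3@ < 5" (show question 4 only
# if the answer to question 3 is below 5) might be evaluated. The rule syntax
# comes from the table above; the parsing and evaluation are assumptions, not
# the survey software's actual implementation.
import re
from typing import Dict, Optional

def should_display(rule: Optional[str], answers: Dict[str, int]) -> bool:
    """Evaluate a simple '@<question>@ <operator> <number>' display rule."""
    if not rule:
        return True  # no rule: always display (as in Q3, where branching is removed)
    match = re.fullmatch(r"@(\w+)@\s*(<=|>=|==|<|>)\s*(\d+)", rule.strip())
    if not match:
        return True  # unknown rule format: fail open and show the question
    question, operator, threshold = match.group(1), match.group(2), int(match.group(3))
    value = answers.get(question)
    if value is None:
        return False  # referenced question not answered yet
    return {"<": value < threshold, "<=": value <= threshold,
            ">": value > threshold, ">=": value >= threshold,
            "==": value == threshold}[operator]

# Q1/Q2 behavior: question 4 is shown only when the answer to q3 is below 5.
print(should_display("@q3@ < 5", {"q3": 3}))   # True  -> question 4 displayed
print(should_display("@q3@ < 5", {"q3": 6}))   # False -> question 4 skipped
# Q3 behavior: no branching rule, so question 4 is always displayed.
print(should_display(None, {"q3": 6}))          # True
```
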

Questionnaire 1
Baseline questionnaire – designed to be a ‘good’ questionnaire, i.e. without
deliberate design flaws. Q1 respondents serve as a control group, so that we
know the results are not merely an effect of randomness.

Questionnaire 2
Erroneous questionnaire A – Designed to contain some built-in design issues
with respect to isolated questions, e.g. ambiguous questions or questions with
non-intuitive answer options.

Questionnaire 3
Erroneous questionnaire B – Designed to contain built-in structural issues, i.e.
questions in an illogical sequence or a lack of informative text to provide
context for the respondent.

Appendix B – Experimental Questionnaire
Analysis

Please see the attachment for the detailed analysis.

