Ulf-Dietrich Reips
University of Konstanz
Laura Gary Txipi Frederik Tim Unai Ulf
http://iscience.uni.kn/
THE FIRST YEARS
Computer and Internet technology give the impulse for better study methodology, methods, and applications: quicker, more solid results, new areas...
• 1992 World Wide Web is invented
• 1995 Web Experimental Psychology Lab opens with two Web experiments (Tübingen). Google e.g. “AAAbacus Reips”
• Reips, U.-D. (1996, October). Experimenting in the World Wide Web. Paper presented at the 1996 Society for Computers in Psychology conference, Chicago.
• Krantz et al. (1996): First within-subjects Web experiment
• Sept. 1997 Google search
Reips, U.-D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49, 243-256.
Experiments: Defining characteristics
Experimenting: orderly, systematic, replicable, and active manipulation of one or more independent variables.
• Active, willful causation of the event to be studied (e.g., change of picture location)
• Orderliness (planned), controlled conditions
• Replicability: others find the same results when using the same method
• Variability: IVs and their levels can be changed, DVs as well
Systematic variation
• create 2, better 3-5 levels of an independent variable (variants of an assumed cause)
• --> better theory building
[Diagram: WWW participants and local participants with Web browsers access the study materials on a Web server; CGI(s) record the data in logfile(s).]
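Crossing the levels of several independent variables yields the full set of experimental conditions. A minimal sketch (the variable names and levels are made up for illustration):

```python
from itertools import product

# Crossing two hypothetical independent variables gives every condition
# of the design: each combination of one level per IV.
picture_location = ["left", "center", "right"]  # 3 levels of IV 1
stimulus_delay_ms = [0, 500]                    # 2 levels of IV 2

conditions = list(product(picture_location, stimulus_delay_ms))
# 3 x 2 = 6 conditions, e.g. ("left", 0), ("left", 500), ...
```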
Comparing Offline Data with Online Data
Principle II: Birthday technique
• Random distribution to conditions
• less programming
• lower cost
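The birthday technique derives the condition from the participant's date of birth instead of a random number generator. A minimal sketch of one common variant (the modulo rule is an illustration, not the only option):

```python
def assign_condition(birth_day: int, n_conditions: int = 2) -> int:
    """Quasi-randomly assign a condition from the participant's day of birth.

    The day of the month (1-31) is unrelated to the hypotheses, so taking
    it modulo the number of conditions distributes participants roughly
    evenly across conditions without any server-side randomization code.
    """
    return birth_day % n_conditions

cond = assign_condition(17)      # born on the 17th, two conditions -> 1
cond3 = assign_condition(31, 3)  # three conditions -> 1
```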
Participant numbers in all nine studies from 20 years of research on the rare condition sexsomnia (Mangan & Reips, 2007). The two Web surveys reached more than five times as many participants from the target population as all previous studies combined.
INTERNET-BASED EXPERIMENTS: ADVANTAGES
• Ease of access to a large number of demographically and culturally diverse participants...,
• ...as well as to very rare, specific participant populations;
• Experimenting around the clock;
• High ecological validity: generalizability to more settings and situations;
• Avoidance of organizational problems (scheduling of rooms etc.);
• High statistical power --> optimal sample size;
• Reduced cost (lab space, person hours, equipment, administration);
• Reduction of demand characteristics;
INTERNET-BASED EXPERIMENTS: ADVANTAGES (continued)
• Access to the number of “non-participants”;
• Comparability of results with results from locally tested samples;
• Greater external validity through greater technical variance;
• Ease of access for the participants: bringing the experiment to the participant instead of the opposite;
• Detectability of motivational confounding;
• Greater openness of the research process.

Musch and Reips (2000) Web survey: How important were the following factors for your decision to conduct the Web experiment?

Factor                                                     Mean  SD   N
low cost                                                    3.2  2.2  21
high speed                                                  3.6  2.4  20
large number of participants                                5.5  1.9  20
reach participants from other countries                     3.6  2.2  20
high statistical power                                      4.5  2.2  20
chance to better reach a special subpopulation on the web
  (e.g., handicapped, rape victims, chess players)          2.6  2.5  20
high external/ecological validity                           3.4  2.1  20
replicate a lab experiment with more power                  2.9  2.5  20
Speed & Costs
[Figure: remaining percentages by condition: 4-->2: 31% / 69%; 0-->2: 88% / 12%; 4-->2: 64% (58) / 36% (32); 4-->2: 58% (91) / 42% (66).]
Reips, U.-D. (2007). The methodology of Internet-based experiments. In A. Joinson, K. McKenna, T. Postmes, & U.-D. Reips (Eds.), The Oxford Handbook of Internet Psychology (pp. 373-390). Oxford: Oxford University Press.
(Reips, 1997, 2000)
Reips, U.-D. (2009). Internet experiments: methods, guidelines, metadata. Human Vision and Electronic Imaging XIV, Proc. SPIE, 7240(1), 724008.
CHECK: DROPOUT
Empirical test of seriousness: dropout is predicted by whether participants initially say they want to “just look” or to “participate”.
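In practice the seriousness check is a single item on the first study page, and analyses are then run only on (or separately for) those who chose serious participation. A minimal analysis sketch with hypothetical field names:

```python
def filter_serious(responses):
    """Keep only respondents who declared serious participation.

    Each response dict is assumed to carry a 'seriousness' field collected
    on the study's first page ('participate' vs. 'just_look'); the field
    name and data are made up for illustration.
    """
    return [r for r in responses if r.get("seriousness") == "participate"]

responses = [
    {"id": 1, "seriousness": "participate", "dropped_out": False},
    {"id": 2, "seriousness": "just_look", "dropped_out": True},
    {"id": 3, "seriousness": "participate", "dropped_out": True},
]
serious = filter_serious(responses)
dropout_rate = sum(r["dropped_out"] for r in serious) / len(serious)  # 0.5
```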
SERIOUSNESS CHECK
Dropout depends on mode of recruitment.
[Figure: Percent remaining participants (0-100) by mode of recruitment (Lab, Flyer, Web) across trials 1-48.]
[Figure: Remaining participants across the Web pages Start, Welcome, 4th page, Self-threat, Questions, End, for a JavaScript vs. a server-side version (“medium difficult” condition).]
DROPOUT DEPENDING ON JAVASCRIPT VS. SERVER-SIDE. OVERALL REMAINING 63.2% VS. 49.8%.
Figure 2 from Schwarz and Reips (2001).
(Reips, Schwaninger, & Neuhaus, in prep.)
WARM-UP
The warm-up technique as used in the Web experiment by Reips, Morger, and Meier (2001): the experimental phase is preceded by a warm-up phase, so that most dropout occurs before the experimental manipulation and cannot be confounded with it.
[Figure: Remaining participants in percent (0-100) across blocks of Web pages: Start, Instr 1-4, Item 1, Item 12, Last Item; warm-up phase followed by experimental phase.]
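The remaining-participants curves in such figures report, for each page, the percentage of starters who are still present. A minimal computation sketch (the log-derived values here are made up):

```python
def percent_remaining(last_page_reached, n_pages):
    """Percent of starters still present at each page 1..n_pages.

    last_page_reached holds, per participant, the index of the last
    page they completed, as would be derived from a server logfile.
    """
    n = len(last_page_reached)
    return [100.0 * sum(lp >= p for lp in last_page_reached) / n
            for p in range(1, n_pages + 1)]

curve = percent_remaining([6, 6, 3, 1], 6)
# -> [100.0, 75.0, 75.0, 50.0, 50.0, 50.0]
```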
TYPOLOGY OF NON-RESPONSE
Bosnjak (2001)
FACTORS INFLUENCING NON-RESPONSE
• Mode
• Anonymity
• Incentives
[Figure: Percent completed and participants per week by anonymity condition; Musch and Reips (2000).]
EXPERIMENTAL DESIGN: FACTORS INCENTIVE INFORMATION (BEFORE OR AFTER QUESTIONNAIRE), ANONYMITY (DEMOGRAPHIC QUESTIONS AT BEGINNING OR END), AND ORDER OF QUESTIONS FOR TIME SPENT (TV CONSUMPTION AND CHARITABLE ORGANIZATION). PLUS LANGUAGE.
Reported times in minutes. Left: TV consumption, right: charitable organization.
ONLINE-OFFLINE: DISTRIBUTED COLLABORATIVE RESEARCH

Advantages of controlling search engines and proxies:
• instructs search engine routines (“bots”, “crawlers”)
• prevents linking of deep study content on search engines, so participants only enter on the first page of the study
• avoids caching of page content on proxy servers, thus prevents delivery of outdated materials

One-item-per-page design:
• more detailed information by item (RTs, item of dropout is known)
• more data are collected (e.g., filled items on an all-items-on-one-page questionnaire are otherwise lost)
• motivating in short questionnaires, de-motivating in longer ones
• all studies so far show no format-dependent differences in content results
Reips, U.-D. (2010). Design and formatting in Internet-based research. In S. Gosling & J. Johnson (eds.), Advanced methods for conducting online
behavioral research (pp. 29-43). Washington, DC: American Psychological Association.
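The search engine and proxy behavior described above is commonly controlled with HTML meta tags (or a robots.txt file) on every page after the study's entry page. A minimal, hypothetical fragment:

```html
<!-- keep deep study pages out of search indexes and link lists -->
<meta name="robots" content="noindex, nofollow">
<!-- discourage proxies and browsers from serving cached, outdated materials -->
<meta http-equiv="Cache-Control" content="no-cache">
```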
Configuration error II: Public display of confidential participant data through the URL
Imagine being a participant... You log on to http://www.someserver.edu/exp/intro.html
[Diagram: study page structure with files C1.html, D7.html and A1.html, A2.html, A3.html, A4.html.]

Configuration error V: Ignorance towards the screens
Krantz (2000, 2001) showed that CRT monitors show considerable within-monitor variation in signal strength and color. There is also systematic variation; for example, these monitors need to warm up for half an hour to emit a constant signal throughout the screen.
FROM SMALL SCREENS TO LARGE SCREENS
Mice
Plant, Hammond, & Whitehouse (2003) reported that mice – even those of the same brand and type – can vary considerably in the speed of transmission, with averages ranging from 7 to 62 ms.
Keyboards
The response for the same key pressed actually doesn't vary much, but the key's position may vary considerably.
• Macintosh users were significantly more “Open to Experience” than were PC users!
• People using JavaScript-enabled browsers had significantly lower education levels.
• Partial replication with 4443 respondents to the German Big 5 on iscience.eu: Macintosh users had significantly higher values on the scales for “Emotional Stability” and “Openness to Experience” than PC users.
CONSEQUENCE: LOW TECH PRINCIPLE
• avoids sampling biases coming from technology preferences
(Buchanan & Reips, 2001; Reips & Buchanan, in prep.)

FORCED RESPONSE
Stieger, S., Reips, U.-D., & Voracek, M. (2007). Forced-response in online surveys: Bias from reactance and an increase in sex-specific dropout.
Journal of the American Society for Information Science and Technology, 58, 1653-1660. doi:10.1002/asi.20651
Visual Analogue Scales
Reips, U.-D., & Funke, F. (2008). Interval level measurement with visual analogue scales in Internet-based research: VAS Generator.
Behavior Research Methods, 40, 699–704.
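A visual analogue scale records the position of a mark on a continuous line; for interval-level measurement the pixel position is rescaled to a score (e.g., 0-100). A minimal scoring sketch (parameter names are hypothetical; VAS Generator itself produces the HTML/JavaScript for the scale):

```python
def vas_score(click_x, line_start_x, line_length_px, scale_max=100.0):
    """Rescale a click's pixel position on a VAS line to 0..scale_max."""
    rel = (click_x - line_start_x) / line_length_px
    rel = min(max(rel, 0.0), 1.0)  # clamp clicks just outside the line
    return rel * scale_max

score = vas_score(250, 100, 300)  # midpoint click -> 50.0
```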
Recruitment options
• Mailing lists
• Forums/Newsgroups
• Online panels
• Social media (Facebook, Twitter, Tuenti...)
• Frequented/special target Websites, e.g. news, genealogists
• Google ads
• Banner (Tuten, T. L., Bosnjak, M. & Bandilla, W. (2000). Banner-
advertised Web surveys. Marketing Research, 11(4), 17-21.)
Recruitment via Amazon Mechanical Turk
Reips, Ulf-Dietrich, Buffardi, Laura, & Kuhlmann, Tim: Why NOT to use Amazon Mechanical Turk for the recruitment of participants. 33rd Society for Judgment and Decision Making (SJDM) conference, Minneapolis (USA), November 16-19, 2012.
Reips, Ulf-Dietrich, Buffardi, Laura, & Kuhlmann, Tim: Using Amazon’s Mechanical Turk for the recruitment of participants in Internet-based
research. 13th General Online Research meeting, University of Düsseldorf, March 15, 2011.
[Figure: Response times per Web page (rt_1 to rt_23), Mturk vs. non-Mturk participants.]
Results: Dropout
[Figure: Percent remaining participants across Web pages 1-23, Mturk vs. non-Mturk.]
Clicking through? In fact, out of the 64 items with different means, MTurkers scored more in the middle of the scale in 50 items.
Reips, U.-D., & Buffardi, L. (2012). Studying migrants with the help of the Internet:
Methods from psychology. Journal of Ethnic and Migration Studies, 38(9),
1405-1424. doi:10.1080/1369183X.2012.698208
Garaizar, P. & Reips, U.-D. (2013). Build your own social network laboratory with
Social Lab: A tool for research in social media. Behavior Research Methods. doi:
10.3758/s13428-013-0385-3
http://en.sociallab.es
1. Sign up: http://en.sociallab.es/signup
2. Sign in: http://en.sociallab.es/sigin
3. Solve social challenges: http://en.sociallab.es/profile/messages, http://en.sociallab.es/profile/request/id/2
Each time a friendship request is made, Social Lab checks whether it involves an automated profile and, if so, schedules a task.
http://www.gnu.org/licenses/agpl-3.0.html
• For some it will become a new trend to run offline studies.
Publications: http://tinyurl.com/reipspub
IJIS.net
Features
• targeted at researchers
• allows comparative search
• by location
• by Boolean search operators
• Visualization on maps, animated sequences
• Global search via Twitter-independent database
• download results in several formats
Global Search
Name study
An example study: affective and personality characteristics inferred from first names (Mehrabian & Piercy, 1993)
• via Internet rather than paper-and-pencil
• quick
• to find and later adjust for the base rate, we first do a simple search for each name
• Supporting the original findings for male names in the U.S., we did not find a single combination of the low-connotation names with any of the terms “successful,” “ambitious,” “intelligent,” and “creative.”
• All the high-connotation names did indeed appear in the same tweets with some of the aforementioned terms; for example, Alexander appeared 6 times with either “creative” or “successful”, Kenneth was tweeted 15 times in combination with “successful”, and Charles 38 times with “creative,” “intelligent,” or “successful”.
• These findings replicate for tweets from the U.K. and Ireland:
• no tweets for combinations of the four personality characteristics with the low-connotation names,
• but again some combinations for two of the three high-connotation names.
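The base-rate adjustment mentioned above can be sketched as: count tweets containing a name alone, count tweets containing the name together with a trait term, and compare the ratios. A minimal sketch with made-up counts (the searches themselves would run against a tweet database or search interface):

```python
def cooccurrence_rate(name_count, name_and_trait_count):
    """Trait co-mentions per tweet that mentions the name.

    Dividing by the name's own tweet count adjusts raw co-occurrence
    counts for the base rate of the name; all counts here are made up.
    """
    if name_count == 0:
        return 0.0
    return name_and_trait_count / name_count

rate = cooccurrence_rate(12000, 6)  # hypothetical counts for one name/trait pair
```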