
Internet-based experiments: Methods, Innovations

Ulf-Dietrich Reips
University of Konstanz
http://iscience.uni.kn/

Research in the iScience group:
• Psychology of the Internet (Communication, Anonymity, Privacy, Social exchange...)
• Use of the Internet to conduct studies (Experiments, Surveys, Non-reactive data collection, Test Apps, …)

The team: Laura, Gary, Txipi, Frederik, Tim, Unai, Ulf, Stefan, Eunike, Juliane, Michael, Angelika, Andrés

…and currently looking for a PhD student (contact me!)
[Schema: study methodology and computer/Internet technology give impulses to each other: better methods and new applications yield quicker, more solid results and open new areas...]

THE FIRST YEARS

• 1992 World Wide Web is invented
• 1995 Web Experimental Psychology Lab opens with two Web experiments (Tübingen). Google e.g. “AAAbacus Reips”
• Reips, U.-D. (1996, October). Experimenting in the World Wide Web. Paper presented at the 1996 Society for Computers in Psychology conference, Chicago.
• Krantz et al. (1996): First within-subjects Web experiment
• Sept. 1997 Google search

THE FIRST YEARS

• 1998 Alison Piper publishes “Conducting social science laboratory experiments on the world wide web” in Library and Information Science Research

• 1998-1999 Musch & Reips survey of first Web experimenters (published 2000)

• 1999 Michael Birnbaum experiments with decision makers, showing higher quality of data online than offline

• 2000 WEXTOR.org goes online

• 2001 first Advanced Training Institute …


Advanced Training Institute (ATI) in Psychological Research on the Internet, Fullerton, 2001-2011.
2002: Birnbaum, McGraw, Krantz, McClelland, Schmidt, Reips

Reips, U.-D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49, 243-256.

GROWTH OF WEB EXPERIMENTING

1995: 3 Web experiments
1998: ∼ 50 Web experiments
(exponnet list, Krantz)

WHAT DEFINES AN EXPERIMENT?

Experimenting: the orderly, systematic, replicable and active manipulation of one or more independent variables.

Defining characteristics:
• Active, willful causation of the event to be studied (e.g., change of picture location)
• Orderliness (planned), controlled conditions
• Replicability: others find the same results when using the same method
• Variability: IVs and their levels can be changed, DVs as well
• Power of determining causal relationships increases even more if theory-guided

• Web experiments are experiments conducted via the WWW


WEB EXPERIMENTS: SCHEMA

[Schema: WWW participants with Web browsers and local participants with Web browsers access the study materials on a Web server (with CGIs), following the experimental design; the server writes logfile(s), which undergo (automatic) filtering, formatting, and statistical analysis, allowing comparison of offline data with online data.]

PRINCIPLE I

Systematic variation
• create 2, better 3-5 levels of an independent variable (variants of an assumed cause)
• --> better theory building (see the sketch below)
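A minimal sketch of what systematic variation can look like in practice. The factor, its levels, and the URLs are hypothetical, not from the slides; the point is simply that one assumed cause is varied over several graded levels:

```typescript
// Hypothetical one-factor design with four levels of the assumed cause
// (here: vertical picture location), following Principle I.
type Condition = { id: string; pictureOffsetPx: number };

const conditions: Condition[] = [
  { id: "top",       pictureOffsetPx: 0 },
  { id: "upper-mid", pictureOffsetPx: 150 },
  { id: "lower-mid", pictureOffsetPx: 300 },
  { id: "bottom",    pictureOffsetPx: 450 },
];

// Each condition gets its own entry page; participants are later
// distributed to these URLs (see Principle II below).
const conditionUrls = conditions.map(
  (c) => `https://example.org/exp/${c.id}/index.html` // hypothetical host
);
console.log(conditionUrls);
```

With 3-5 graded levels rather than just two, a monotonic (or non-monotonic) trend in the dependent variable becomes visible, which supports better theory building.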

PRINCIPLE II

Random distribution to conditions
• will effectively eliminate confounding
• makes your life easy
• will convince reviewers

Which randomization technique is most compatible?
• JavaScript
• Java
• Plug-in
• Server-side
• Birthday technique

Birthday technique: “In which month were you born? Click on it.”
January / February / March / April / May / June / July / August / September / October / November / December
(see the sketch below)

http://iscience.eu
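A sketch of the logic behind the birthday technique, assuming three conditions; the mapping of months to conditions and the page names are illustrative. Because birth month is practically unrelated to most psychological variables, clicking one's own month yields quasi-random assignment without any client-side scripting:

```typescript
// Birthday technique: participants click the month they were born in;
// months are mapped round-robin onto the experimental conditions.
const months = [
  "January", "February", "March", "April", "May", "June",
  "July", "August", "September", "October", "November", "December",
];

// Three hypothetical condition entry pages:
const conditionPages = ["a/index.html", "b/index.html", "c/index.html"];

// Month i links to condition i mod 3, so each condition receives
// four months, giving roughly equal expected cell sizes.
for (const [i, month] of months.entries()) {
  const target = conditionPages[i % conditionPages.length];
  console.log(`<a href="${target}">${month}</a>`);
}
```

The output is plain HTML links, which is exactly what makes the technique "low tech" in the sense of Principle III below.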

--> PRINCIPLE III

Low tech!
• counters effects of technical incompatibilities & malfunctioning
• often makes your life easy
• less programming
• lower cost
• avoids sampling biases coming from technology preferences: personality and education are associated with tech preferences (Buchanan & Reips, 2001)

http://iscience.eu

DESIGN AND PROCEDURE

Reips, U.-D., & Neuhaus, C. (2002). WEXTOR: A Web-based tool for generating and visualizing experimental designs and procedures. Behavior Research Methods, Instruments, & Computers, 34, 234-240.

WHY INTERNET-BASED EXPERIMENTS?

Some characteristics:
(1) reduction of experimental control in favor of increased generalizability,
(2) the evaporating necessity to use statistics: as a consequence of the large power achievable in Internet-based research, one may report only effect sizes,
(3) voluntariness as a means to increase data quality
ACCESS TO SPECIFIC GROUPS

Accessing people with specific conditions who are sometimes difficult to contact in other ways:
• Ecstasy consumers (Rodgers et al., 2001, 2006)
• Drug dealers (Coomber, 1997)
• Decision researchers (Birnbaum, 1999, 2001)
• Sleep sexers (Mangan & Reips, 2007): ease of access and the impression of anonymity psychologically support self-disclosure on sensitive topics

Participant numbers in all nine studies from 20 years of research on the rare condition sexsomnia (Mangan & Reips, 2007): the two Web surveys reached more than five times as many participants from the target population as all previous studies combined.
INTERNET-BASED EXPERIMENTS: ADVANTAGES

• Ease of access to a large number of demographically and culturally diverse participants ...
• ... as well as to very rare, specific participant populations
• Experimenting around the clock
• Better generalizability of findings
• High ecological validity: generalizability to more settings and situations
• Public control of ethical standards
• Avoidance of organizational problems (scheduling of rooms etc.)
• High statistical power --> optimal sample size
• Reduced cost (lab space, person hours, equipment, administration)
• Reduction of experimenter effects
• Reduction of demand characteristics
• Truly voluntary participation possible

Reips, U.-D. (1997). Das psychologische Experimentieren im Internet [Psychological experimenting on the Internet]. In B. Batinic (Ed.), Internet für Psychologen (pp. 245-265). Göttingen: Hogrefe.

Reips, U.-D. (2000). The Web Experiment Method: Advantages, disadvantages, and solutions. In M. H. Birnbaum (Ed.), Psychological experiments on the Internet (pp. 89-118). San Diego, CA: Academic Press.

INTERNET-BASED EXPERIMENTS: ADVANTAGES

• Access to the number of “non-participants”
• Comparability of results with results from locally tested samples
• Greater external validity through greater technical variance
• Ease of access for the participants: bringing the experiment to the participant instead of the opposite
• Detectability of motivational confounding
• Greater openness of the research process

Musch and Reips (2000) Web survey: How important were the following factors for your decision to conduct the Web experiment?

Factor                                                       Mean   SD    N
low cost                                                      3.2   2.2   21
high speed                                                    3.6   2.4   20
large number of participants                                  5.5   1.9   20
reach participants from other countries                       3.6   2.2   20
high statistical power                                        4.5   2.2   20
chance to better reach a special subpopulation on the web
  (e.g., handicapped, rape victims, chess players)            2.6   2.5   20
high external/ecological validity                             3.4   2.1   20
replicate a lab experiment with more power                    2.9   2.5   20
SPEED & COSTS

Speed in Web experimenting: Open University PRESTO panel (Joinson & Reips, 2007). Invitation per e-mail to 1405 panel members; survey via Web.
• Within 24 hours: 660 responses (48%)
• 10 days: 79% response rate
• Pick from 157’000 potential participants
• Costs for 100’000 participants same as for 100 traditionally surveyed (including data analysis)

EXPERIMENTING OVERNIGHT

The iScience tools were used at a Decision Making conference (SPUDM) to replicate an experiment during the meeting. On the first conference day, McKenzie and Nelson (2003) presented their „information leakage“ experiment in a session on framing effects. Results from the Internet-based replication were presented the next day: within 8 hours, complete data sets from 162 participants had been collected (compared to 64 in the original laboratory experiment). The information leakage effect was replicated, and an additional effect was observed.

[Figure: McKenzie and Nelson’s task, condition 1]
[Figure: McKenzie and Nelson’s task, condition 2]

Results by McKenzie and Nelson (N = 64):

                     CHOICE
SCENARIO       1/2 full    1/2 empty
4-->2            31 %        69 %
0-->2            88 %        12 %

My results after 8 hours (N = 162):

                     CHOICE
SCENARIO       1/2 full    1/2 empty
4-->2          64% (58)    36% (32)
0-->2          94% (68)     6% (4)

My results after 8 days (N = 315):

                     CHOICE
SCENARIO       1/2 full    1/2 empty
4-->2          58% (91)    42% (66)
0-->2          92% (145)    8% (13)

About 2 1/4 years later (N = 1727):

                     CHOICE
SCENARIO       1/2 full    1/2 empty
4-->2          61% (516)   39% (334)
0-->2          89% (783)   11% (94)

CONCLUSION

• Internet-based research can be used with existing tools to achieve research results fast.
• Collaborative empirical investigation of competing theories becomes easier (e.g. during a conference).

AREAS OF RESEARCH THAT USE INTERNET-BASED EXPERIMENTS

[Tag cloud from titles and topics of studies on the web experiment list and the web survey list (http://wexlist.net)]

CONTROLLING POSSIBLE EFFECTS OF SAMPLING, SELF-SELECTION ETC.

Methods and Techniques (Reips, 1997, 2000)

Reips, U.-D. (2007). The methodology of Internet-based experiments. In A. Joinson, K. McKenna, T. Postmes, & U.-D. Reips (Eds.), The Oxford Handbook of Internet Psychology (pp. 373-390). Oxford: Oxford University Press.

Reips, U.-D. (2009). Internet experiments: methods, guidelines, metadata. Human Vision and Electronic Imaging XIV, Proc. SPIE, 7240(1), 724008.
CHECK: DROPOUT

Dropout predicted: „just look“ vs. „participate“ (an empirical test of seriousness).

SERIOUSNESS CHECK

Dropout depends on mode of recruitment (Lab, Flyer, Web).

[Figure: Percent remaining participants (0-100%) by mode of recruitment, across trials 1-48]

(Reips, Schwaninger, & Neuhaus, in prep.)


DROPOUT DEPENDS ON TASK DIFFICULTY

[Figure: Remaining participants in percent (0-100) across trials 1-48, for easy vs. medium-difficult task versions]

(Reips, Schwaninger, & Neuhaus, in prep.)

CONTINUED PARTICIPATION, DEPENDING ON THE USE OF JAVASCRIPT

[Figure 2 from Schwarz and Reips (2001): Remaining participants in percent across the Web pages Start, Welcome, 4th page, Self-threat, Questions, End, for a JavaScript vs. a CGI version of the study]

Dropout depending on JavaScript vs. server-side: overall remaining 63.2% vs. 49.8%.

ON THE INTERNET: WARM-UP TECHNIQUE

• completely voluntary continued participation
• detectability of motivational confounding

Phases: Start phase --> Warm-up phase --> Experiment phase

WARM-UP

[Figure: The warm-up technique as used in the Web experiment by Reips, Morger, and Meier (2001): remaining participants in percent (0-100) across blocks of Web pages (Start, Instr 1, Instr 2, Instr 3, Instr 4, Item 1, Item 12, Last Item); the warm-up phase precedes the experimental phase]
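A sketch of the logic behind the warm-up technique, with hypothetical page labels and logs: the experimental manipulation begins only after the warm-up phase, so dropout during warm-up cannot be caused by the manipulation, and dropout during the experiment phase can be compared between conditions to detect motivational confounding:

```typescript
// Pages are tagged by phase; the manipulation starts only in "experiment".
type Phase = "start" | "warmup" | "experiment";
const pagePhases: Phase[] = [
  "start", "warmup", "warmup", "warmup",
  "experiment", "experiment", "experiment",
];

// lastPage = index of the last page a participant reached (hypothetical logs).
function droppedInPhase(lastPage: number, phase: Phase): boolean {
  return lastPage < pagePhases.length - 1 && pagePhases[lastPage] === phase;
}

const lastPages = [2, 6, 4, 6, 1]; // five hypothetical participants
const warmupDrop = lastPages.filter((p) => droppedInPhase(p, "warmup")).length;
const expDrop = lastPages.filter((p) => droppedInPhase(p, "experiment")).length;
console.log({ warmupDrop, expDrop }); // compare expDrop between conditions
```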
TYPOLOGY OF NON-RESPONSE

Bosnjak (2001)

FACTORS INFLUENCING NON-RESPONSE

• Mode
• Anonymity
• Incentives (Frick, Bächtiger, & Reips, 2001; Göritz, 2006)
• Personalization & Power (Joinson & Reips, 2007)
• Privacy & Trust (Joinson, Reips, Buchanan, & Schofield, 2010)
• Technology used (Schwarz & Reips, 2001)


INCENTIVES AND ANONYMITY

[Bar chart, Musch and Reips (2000): percent completed and participants per week, for studies with vs. without a financial incentive; reported bar values 88 vs. 49 and 45 vs. 37]

Result on incentives from two Web surveys among the earliest Web experimenters (on 35 studies).

Experimental design: factors incentive information (before or after questionnaire), anonymity (demographic questions at beginning or end), and order of questions for time spent (TV consumption and charitable organization); plus language. Reported times in minutes. Left: TV consumption, right: charitable organization.

[Figure from Frick, Bächtiger and Reips (2001): main effects of positions of incentive information and demographic questions. Left: TV consumption, right: charitable organization]

PROGRESS INDICATORS

Research in Internet-based data collection has shown that progress indicators are motivating if there are fewer than ca. 30 screens. For more findings see Callegaro et al. (2011). (SLIDE 4 of 279)

ONLINE-OFFLINE: DISTRIBUTED COLLABORATIVE RESEARCH

(Reips, 1997, 2000)

META TAGGING

• instructs search engine routines (“bots”, “crawlers”)
• prevents linking of deep study content on search engines, so participants only enter on the first page of the study
• avoids caching of page content on proxy servers, thus prevents delivery of outdated materials

ONE ITEM ONE SCREEN (OIOS) DESIGN

• Advantages:
• more detailed information by item (RTs, item of dropout is known)
• more data are collected (e.g., on an all-items-on-one-page questionnaire, already filled-in items are lost upon dropout)
• motivating in short questionnaires, de-motivating in longer ones
• all studies so far show no format-dependent differences in content results

A sketch combining both techniques follows below.

Reips, U.-D. (2010). Design and formatting in Internet-based research. In S. Gosling & J. Johnson (Eds.), Advanced methods for conducting online behavioral research (pp. 29-43). Washington, DC: American Psychological Association.
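A sketch that combines both ideas: a generator for one-item-one-screen pages whose head carries meta tags of the kind described above. The exact tag set, helper name, and form fields are illustrative, not prescribed by the slides (and HTTP response headers remain the authoritative way to control caching):

```typescript
// Renders one item per screen (OIOS) and adds meta tags intended to keep
// search engines and proxies away from deep study pages.
function renderItemPage(itemNo: number, question: string): string {
  return `<!DOCTYPE html>
<html><head>
  <!-- keep bots from indexing/linking deep study content -->
  <meta name="robots" content="noindex, nofollow">
  <!-- discourage caching of outdated materials (headers are more reliable) -->
  <meta http-equiv="Cache-Control" content="no-cache">
  <title>Item ${itemNo}</title>
</head><body>
  <form action="/next" method="post">
    <p>${question}</p>
    <input type="text" name="item_${itemNo}">
    <!-- a server timestamp per item page yields per-item RTs and
         reveals on which item a participant dropped out -->
    <input type="hidden" name="served_at" value="${Date.now()}">
    <input type="submit" value="Next">
  </form>
</body></html>`;
}

console.log(renderItemPage(1, "How often do you ...?"));
```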

INSTRUCTIONAL MANIPULATION CHECK

• Oppenheimer, Meyvis & Davidenko (2009) tried to identify respondents who did not even read the instructions in experiments:

• ”a question embedded within the experimental material that is similar to the other questions in length and response format (e.g. Likert scale, check boxes, etc.). However, unlike the other questions, the [Instructional Manipulation Check] asks participants to ignore the standard response format and instead provide a confirmation that they have read the instructions.” (p. 867)
INSTRUCTIONAL MANIPULATION CHECK

• Oppenheimer et al. (2009) found evidence that the exclusion of participants failing the Instructional Manipulation Check improved data quality: the size of the effects that could be observed in two classic judgment and decision making paradigms was larger, and a Need for Cognition scale was answered more consistently by participants passing the check.

• Prompting those failing the check to carefully reread the instructions worked: the answer behavior of participants initially failing the check became indistinguishable from that of participants passing the check. They therefore recommend using IMCs early in a study to convert satisficing participants into diligent participants; this approach has the advantage that data from participants who fail the IMC are not excluded, preventing a reduction in sample size.

• Other ways of ensuring that participants read instructions may be even more effective than an IMC, for example orally presenting the materials, or close supervision to increase motivation levels. However, such approaches are not always possible, especially in increasingly popular web-survey studies, and even with such methods the addition of an IMC can be helpful: in the preliminary studies, participants were half as likely to fail an IMC in the presence of a supervisor, but 14% still failed. Researchers including IMCs should be aware of possible side effects, especially in cross-cultural studies, and structure their studies accordingly.

[Fig. 2: Example of an alternate IMC, the Blue Dot task]
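A sketch of how IMC responses might be handled at analysis time, following the "convert rather than exclude" recommendation above; all field names and data are hypothetical:

```typescript
interface Response {
  id: string;
  imcPassed: boolean;             // followed the special instruction?
  imcPassedAfterPrompt?: boolean; // after being prompted to reread
}

// Participants failing the IMC are prompted to reread the instructions
// instead of being excluded from the sample.
function diligentAfterPrompting(r: Response): boolean {
  return r.imcPassed || r.imcPassedAfterPrompt === true;
}

const data: Response[] = [
  { id: "p1", imcPassed: true },
  { id: "p2", imcPassed: false, imcPassedAfterPrompt: true },
  { id: "p3", imcPassed: false, imcPassedAfterPrompt: false },
];
console.log(data.filter(diligentAfterPrompting).map((r) => r.id)); // p1, p2
```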
COMMON ERRORS / BEST PRACTICES

There is an increasing number of Web studies, but very few institutions teach the specifics of the related new methods. As a consequence, five configuration errors can frequently be observed (Reips, 2002), some of which may have severe consequences. These typical errors are presented below.

Reips, U.-D. (2002). Internet-based psychological experimenting: Five dos and five don'ts. Social Science Computer Review, 20(3), 241-249. doi:10.1177/089443930202000302
COMMON ERRORS UNCOVERED (REIPS, 2002) / BEST PRACTICE

• Configuration error I: Allowing external access to unprotected directories (e.g., 65’000 e-mail addresses downloaded; Securityfocus.com, 2002)

• Configuration error II: Public display of confidential participant data through URLs. Imagine being a participant: you log on to http://www.someserver.edu/exp/intro.html, and in the browser’s location window you see the page addresses.

• Configuration error III: Obvious file naming (“exp2b/control/page3.html”)

• Configuration error IV: Biasing configuration of form elements (e.g., preset values)

• Configuration error V: Underestimating technical variance

[Diagram: study structure with an intro page, condition pages A1-A4, B1-B4, C1, D7 (.html), a last page with a form, and a logfile on a third-party server]
Configuration error V: Ignorance of the technical variation present on the Internet. In reality, this is a whole group of configuration errors, reflecting lack of attention to differences in Web browsers, net connections, hardware components etc. For example, dropout can result from interactions between certain Web browser versions and incompatible elements on Web pages (Eichstaedt, 2001; Schwarz & Reips, 2001).

--> Separate intention from carelessness
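A sketch of two countermeasures against configuration errors II and III, with hypothetical names: confidential data travel in the POST body rather than the URL, and condition pages get non-obvious names derived from a server-side secret, so the location bar reveals neither participant data nor the study structure:

```typescript
import { createHmac } from "crypto"; // Node.js standard library

const SECRET = "change-me"; // hypothetical server-side secret

// Countermeasure for error III: an opaque page name instead of
// "exp2b/control/page3.html", which would leak the design.
function opaquePageName(condition: string, page: number): string {
  return (
    createHmac("sha256", SECRET)
      .update(`${condition}/${page}`)
      .digest("hex")
      .slice(0, 12) + ".html"
  );
}

console.log(opaquePageName("control", 3)); // e.g. "3f9a1c...html" (depends on SECRET)

// Countermeasure for error II: submit answers via POST, never as URL
// query parameters, so nothing confidential shows up in the location
// window, server logs, or referrer headers:
// <form action="/next" method="post"> ... </form>
```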

SCREENS

Krantz (2000, 2001) showed that CRT monitors show considerable within-monitor variation in signal strength and color. There is also systematic variation; for example, these monitors need to warm up for half an hour to emit a constant signal throughout the screen.

FROM SMALL SCREENS TO LARGE SCREENS

MICE

Plant, Hammond, & Whitehouse (2003) reported that mice, even those of the same brand & type, can vary considerably in the speed of transmission, with averages ranging from 7 to 62 ms.

KEYBOARDS

The response for the same key pressed actually doesn‘t vary much, but the key‘s position may vary considerably.

INTERACTION OF PERSONALITY AND TECHNOLOGY (BUCHANAN & REIPS, 2001)

Responses of 2148 participants to an online Five Factor personality inventory (Buchanan, Goldberg & Johnson, 1999; see iscience.eu) and demographic items were compared for users of different computing platforms. The responses of participants whose Web browsers were JavaScript-enabled were also compared with those whose Web browsers were not.

• Macintosh users were significantly more “Open to Experience” than were PC users!
• People using JavaScript-enabled browsers had significantly lower education levels.

NEW RESULTS

• Partial replication with 4443 respondents to the German Big 5 on iscience.eu
• Macintosh users had significantly higher values on scales for “Emotional Stability” and ”Openness to Experience“ than PC users.

Buchanan, T., & Reips, U.-D. (2001). Platform-dependent biases in Online Research: Do Mac users really think different? In K. J. Jonas, P. Breuer, B. Schauenburg & M. Boos (Eds.), Perspectives on Internet Research: Concepts and Methods. URL: http://www.psych.uni-goettingen.de/congress/gor-2001/contrib/buchanan-tom
CONSEQUENCE: LOW TECH PRINCIPLE

• avoids sampling biases coming from technology preferences (Buchanan & Reips, 2001; Reips & Buchanan, in prep.)

FORCED RESPONSE

Stieger, S., Reips, U.-D., & Voracek, M. (2007). Forced-response in online surveys: Bias from reactance and an increase in sex-specific dropout. Journal of the American Society for Information Science and Technology, 58, 1653-1660. doi:10.1002/asi.20651

VISUAL ANALOGUE SCALES

Reips, U.-D., & Funke, F. (2008). Interval level measurement with visual analogue scales in Internet-based research: VAS Generator. Behavior Research Methods, 40, 699–704.

VAS GENERATOR

[Figure 1: Visual analogue scale in an online survey]

An experiment (Reips & Funke, 2008):
• student sample, between-subjects design, n = 355
• 13 different target values (presented twice, in reverse orders) had to be located on the VAS: 50%, 75%, 10%, 33%, 80%, 95%, 25%, 67%, 40%, 5%, 60%, 90%, 20%
• 3 VAS lengths: 50 pixels, 200 pixels & 800 pixels


RECRUITMENT OF PARTICIPANTS

Funke, F., & Reips, U.-D. (2012). Why semantic differentials in Web-based research should be made from visual analogue scales and not from 5-point scales. Field Methods.

Also: Funke, F., Reips, U.-D., & Thomas, R. K. (2011). Sliders for the smart: Type of rating scale on the Web interacts with educational level. Social Science Computer Review, 29, 221-231.

CONTROLLING POSSIBLE EFFECTS OF SAMPLING, SELF-SELECTION ETC.

Recruitment principles (Reips, 1997, 2000):

• always be careful: do not send unsolicited invitations
• address properly and individually, e.g. „Dear colleague“ instead of „Dear colleagues“
• ask forum managers for permission
• consider designated recruitment sites first (web experiment list, online panels, ...)
• „friends“ and panelists are samples with particular characteristics


RECRUITMENT OPTIONS

• Mailing lists
• Forums/newsgroups
• Online panels
• Social media (Facebook, Twitter, Tuenti...)
• Frequented/special target Websites, e.g. news sites, genealogists
• Google ads
• Banners (Tuten, T. L., Bosnjak, M., & Bandilla, W. (2000). Banner-advertised Web surveys. Marketing Research, 11(4), 17-21.)

RECRUITMENT VIA AMAZON MECHANICAL TURK

Reips, U.-D., Buffardi, L., & Kuhlmann, T.: Why NOT to use Amazon Mechanical Turk for the recruitment of participants. 33rd Society for Judgment and Decision Making (SJDM) conference, Minneapolis (USA), November 16-19, 2012.

Reips, U.-D., Buffardi, L., & Kuhlmann, T.: Using Amazon’s Mechanical Turk for the recruitment of participants in Internet-based research. 13th General Online Research meeting, University of Düsseldorf, March 15, 2011.

RESULTS: MEAN RTS

[Figure: Mean response times (0-50’000 ms) per Web page (rt_1 through rt_23) for MTurk vs. non-MTurk participants]

RESULTS: DROPOUT

[Figure: Percent remaining participants across Web pages 1-23 for MTurk vs. non-MTurk participants]

CLICKING THROUGH?

MTurkers answered more toward the middle of response scales than participants recruited via other sources. In fact, out of the 64 items with different means, MTurkers scored closer to the middle of the scale on 50 items.
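A sketch of how midpoint responding could be quantified per sample; the scale range, field names, and data are hypothetical:

```typescript
// Share of answers at the midpoint of a 1-7 rating scale.
function midpointShare(answers: number[], midpoint = 4): number {
  if (answers.length === 0) return 0;
  const hits = answers.filter((a) => a === midpoint).length;
  return hits / answers.length;
}

// Hypothetical item answers from two recruitment sources:
const mturk = [4, 4, 3, 4, 5, 4];
const nonMturk = [1, 7, 4, 2, 6, 5];
console.log("MTurk:", midpointShare(mturk));        // ≈ 0.67
console.log("non-MTurk:", midpointShare(nonMturk)); // ≈ 0.17
```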

SOCIAL MEDIA AND GEO TAGGING

• Social media are turning into interesting sources for social-behavioral research.
• David Crandall and colleagues (Cornell University) created the following map and similar ones from ca. 35 million geo-tagged photos that had been uploaded to flickr.

NARCISSISM AND FACEBOOK PROFILES (with Laura Buffardi)

Narcissism predicts:
• more friends
• more wall posts
• self-promoting info
• self-promoting quotes
• attractive, sexy, fun, self-promoting photos

Strangers detect narcissism from:
• self-promoting main photos
• attractive main photos
• quantity of social interaction

Reips, U.-D., & Buffardi, L. (2012). Studying migrants with the help of the Internet: Methods from psychology. Journal of Ethnic and Migration Studies, 38(9), 1405-1424. doi:10.1080/1369183X.2012.698208

CHALLENGES WITH DATA COLLECTION IN SOCIAL MEDIA

Sampling biases via choice of social media.

An open-source Facebook:

Garaizar, P., & Reips, U.-D. (2013). Build your own social network laboratory with Social Lab: A tool for research in social media. Behavior Research Methods. doi:10.3758/s13428-013-0385-3

EXAMPLE: “SOCIAL HACKING” GAME

A learning game about privacy: http://en.sociallab.es. A social engineering wargame: a privacy challenge in which players must gain access to user profiles in a "social sandbox" (a fake social network).

1. Sign up: http://en.sociallab.es/signup
2. Sign in: http://en.sociallab.es/sigin
3. Solve social challenges: http://en.sociallab.es/profile/messages, http://en.sociallab.es/profile/request/id/2

Each time a friendship request is made, Social Lab checks whether it involves an automated profile and, if that is the case, schedules a task.

SOCIAL LAB: AUTOMATED SOCIAL AGENTS

• “stateful” = remember past states, react on their basis
• systematic experimental variation in a social network
• relative ease of programming
• mostly appear like human users

SOCIAL LAB PRIVACY GAME: EXAMPLE NETWORKS

• Spanish: 1292 users in es.sociallab.es
• English: 200 users in en.sociallab.es
• Basque: 44 users in eu.sociallab.es
• German: 30 users in de.sociallab.de

(as of October 2014)
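A sketch of what "stateful" means for such an agent; the states and the trigger are invented for illustration and are not Social Lab's actual implementation:

```typescript
// A minimal stateful social agent: it remembers past interactions
// and reacts differently depending on what happened before.
type AgentState = "idle" | "friends";

class SocialAgent {
  private state: AgentState = "idle";

  onFriendshipRequest(fromUser: string): string {
    if (this.state === "idle") {
      this.state = "friends"; // remember the new relationship
      return `accept request from ${fromUser}`;
    }
    return "already friends: ignore"; // reaction based on the remembered state
  }
}

const agent = new SocialAgent();
console.log(agent.onFriendshipRequest("alice")); // first contact: accept
console.log(agent.onFriendshipRequest("alice")); // remembered state: ignore
```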
AND...

• Tracking user behavior
• Visualize user access

http://www.gnu.org/licenses/agpl-3.0.html

[Visualization: Marc Smith, Netscan (Microsoft Research)]
OUTLOOK

• Experimentation within Social Media
• Personality is related to technology preference: use low tech
• Methods and techniques such as OIOS, warm-up, and the seriousness check will be used more often; reviewers and editors will ask for them
• For some it will become a new trend to run offline studies

Publications: http://tinyurl.com/reipspub
IJIS.net

…and currently looking for a PhD student (contact me!)
Publications on Internet & Psychology


1999
Pablo

2002

2001

2002

2003 2007 IJIS.net 2006-


Simple Twitter
search can be
found in many
browsers and The iScienceMaps for Twitter app:
apps
http://tweetminer.eu

149

FEATURES

• targeted at researchers
• allows comparative search, by location and by Boolean search operators
• visualization on maps, animated sequences
• global search via a Twitter-independent database
• download of results in several formats

GLOBAL SEARCH

AN EXAMPLE STUDY: NAME STUDY

• attempt to replicate a study on affective and personality characteristics inferred from first names (Mehrabian & Piercy, 1993)
• via Internet rather than paper-and-pencil
• quick
• independent of local sampling effects

NAME STUDY: METHOD

• From Table 2 in their article, we take the first six male names; for three of these (Alexander, Charles, Kenneth) the connotation of the dimension “successful” was strong, and for three (Otis, Tyrone, Wilbur) it was weak.
• “Successful” meant “ambitious,” “intelligent,” and “creative.”

NAME STUDY: HYPOTHESIS

• If these names’ having the connotation of a personality characteristic really holds, this should be apparent when Twitter is mined, because attributions to persons, such as “Charles is an intelligent guy,” frequently appear in text-based message services like Twitter.

NAME STUDY: STEPS

• define locations (western US and UK/Ireland)
• define date range (e.g. last 3 days)
• to find and later adjust for the base rate, we first do a simple search for each name
• search for each name in combination with an attribute, e.g. „Charles“ AND „intelligent“ (see the sketch below)
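A sketch of the base-rate adjustment in the last two steps. The counts are illustrative, taken from the totals reported below (Charles appeared 38 times with any of the three terms combined, not with "intelligent" alone):

```typescript
// Adjust name-attribute co-occurrence for how often the name is tweeted
// at all, so frequent names do not look "successful" merely because
// they are frequent.
function adjustedRate(nameCount: number, comboCount: number): number {
  return nameCount > 0 ? comboCount / nameCount : 0;
}

// Illustrative counts based on the reported totals:
const charles = { base: 16760, withAttributes: 38 };
const wilbur = { base: 355, withAttributes: 0 };
console.log(adjustedRate(charles.base, charles.withAttributes)); // ≈ 0.0023
console.log(adjustedRate(wilbur.base, wilbur.withAttributes));   // 0
```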
NAME STUDY: RESULTS

• Supporting the original findings for male names, in the U.S. we did not find a single combination of the low-connotation names with any of the terms “successful,” “ambitious,” “intelligent,” and “creative.”

• All the high-connotation names did indeed appear in the same tweets with some of the aforementioned terms; for example, Alexander appeared 6 times with either “creative” or “successful,” Kenneth was tweeted 15 times in combination with “successful,” and Charles 38 times with “creative,” “intelligent,” or “successful.”

• These findings replicate for tweets from the U.K. and Ireland: no tweets for combinations of the four personality characteristics with the low-connotation names, but again some combinations for two of the three high-connotation names.

NAME STUDY: DISCUSSION

• Critically, the base rate of high-connotation names (Alexander, 5‘478; Kenneth, 2‘005; Charles, >16‘760) versus low-connotation names (Otis, 1‘296; Tyrone, 1‘324; Wilbur, 355) appears to be a confounding factor and may also explain findings in the original study, because less frequent names may be less strongly associated cognitively with any personality characteristics.
