
Introduction to behavioral research

By G. Hapunda

"Research is at the heart of knowledge advancement and improved wellbeing of the universe"
1
Introduction to behavioral research
• Imagine, as vividly as you can, a scientist at work. Let your imagination fill in as many details of the scene as possible:
– What does the imagined scientist look like?
– Where is the person working?
– What is the scientist doing?

2
Is a researcher a scientist?
• Before we answer this question, let's define research:
• The systematic investigation into and study of phenomena and sources in order to establish facts and reach new conclusions.
Key words:
• Investigation, experimentation,
testing, exploration, analysis,
fact-finding, examination,
scrutiny, probing
3
Research as a scientific approach
Behavioral research is scientific because:
1. Systematic empiricism – relying on observations to draw conclusions about the world
- Science is based on objective observations, not assumptions or beliefs
- Observations are structured in a systematic way so that valid conclusions can be drawn
2. Public verification – findings of one researcher can be
observed, replicated and verified by others.

4
Two reasons for public verification
a. To ensure that the phenomena studied are real and observable, not just fabrications
b. Public verification makes science self-correcting: errors in methodology and interpretation can be discovered and corrected by others.
- Thus, researchers report their methods and findings to the scientific community, e.g. in journals and at conferences.

5
Goals of behavioral research
1. Describing behavior – focuses on describing patterns of behavior, thoughts and emotions
Survey research, for example, aims to determine what people think, feel and do
2. Predicting behavior – interest lies in predicting people's behavior, for example:
– A personnel psychologist predicts job performance
– An educational psychologist predicts academic performance
– A forensic psychologist predicts which offenders are dangerous
– A developmental psychologist predicts adjustment
6
Cont…
3. Explaining behavior – researchers feel they understand a phenomenon if they can explain it.
- We can describe and predict prisoners' violent behaviour, but until we can explain why prisoners are violent, the picture is not complete
- Science involves developing and testing theories that explain the phenomena of interest

7
What factors influence performance?
Several factors can cause poor performance, for example:
Cognition
Affect
Demographics
Metacognition
Others

8
The researcher's two jobs
1. Detecting phenomena – discovering and
documenting phenomena, patterns and relationships
-Not all scientific investigations test theoretical
explanations of phenomena
2. Construct and evaluate explanations – develop
theories to explain patterns observed, then conduct
research to test the theory.

9
Theory
• A set of propositions that attempts to specify the interrelationships among a set of concepts (Leary, 2004)

• Fiedler's (1967) contingency theory of leadership specifies the conditions in which certain kinds of leaders will be more effective in group settings.

• Theory construction is a creative exercise, and ideas for theories can come from anywhere, e.g. the literature, the data collection process, observations, etc.

• Theories specify both how and why concepts are interrelated

• Models describe only how they are interrelated

10
Ethics in research

"They discovered your research is fraudulent, so your grant will be funded in counterfeit bills."

11
Approaches to ethical decisions
Ethics – norms for conduct that distinguish between acceptable and unacceptable behavior (Resnik, 2011)
Researchers face two sets of obligations that sometimes conflict:
 Obligation to provide information that enhances our understanding of behavioral processes and leads to the improvement of human or animal welfare
 Obligation to protect the rights and welfare of human and non-human participants
When these two obligations coincide, few ethical issues arise. When they conflict, the dilemma must be resolved through one of three schools of thought:
12
Cont..
Deontology – ethics must be judged in the light of a universal moral code
Certain actions are inherently unethical and should never be performed, regardless of circumstances
Ethical skepticism – there is no absolute truth to moral codes as claimed by deontologists
Rather, ethical rules are arbitrary and relative to culture and time
Utilitarianism – judgements regarding the ethics of a particular action depend on the consequences of that action
A cost-benefit analysis should take prominence

13
Basic ethical guidelines
 Thus, in determining whether to conduct a study, researchers must consider its likely costs and benefits
 Potential benefits include the following:
 Basic knowledge
 Improvement of research or assessment techniques
 Practical outcomes
 Benefits to the researcher
 Benefits to research participants

 Potential costs:
 Participants' investment of time and effort
 Mental and physical risks to participants
 Financial investment in the research – is it justifiable?

 Institutional Review Boards (IRBs), also known as ethics committees, must review all research studies before they are conducted.
14
The Principle of Informed consent
Informed consent – informing research participants of the nature of the study and obtaining their agreement to participate
Informed consent ensures that researchers do not violate people's privacy, and allows participants to make an informed decision about whether to participate
Obtaining informed consent:
 Use language that is understood by participants
 Include a clause stating that participants are free to participate, decline or withdraw without any consequences

15
Problems with obtaining informed consent
 Compromise the validity of study
 Participants who are unable to give informed consent
 Absurd cases of informed consent
 Research can be conducted without obtaining informed consent if:
1. Research involves no more than minimal risks to participants
2. The waiver of informed consent will not adversely affect the rights and
welfare of participants
3. The research could not feasibly be carried out if informed consent were
required.
 Invasion of privacy
 Coercion to participate
 Physical and mental stress – minimal risk “ risk that is no greater in
probability and severity than that ordinarily encountered in daily life or during
the performance of routine physical or psychological examination or tests”
(Offical IRB guideline , 1986)
16
Deception in research
Researchers use deception to prevent participants from learning the true purpose of a study, so that their behavior will not be artificially affected, e.g. in priming studies
Deception may involve:
Using an experimental confederate who poses as another participant or as an uninvolved bystander
Providing false feedback to participants
Presenting misleading feedback to participants
Giving incorrect information regarding stimulus materials, e.g. placebos

17
Objection to deception
Lying and deceit are immoral and reprehensible acts, even when used for good ends
Deception may lead to undesirable consequences, e.g.:
Because of widespread deception, research participants may enter research studies already suspicious of what the researchers tell them
Participants who learn that they were deceived may come to distrust researchers.
The APA permits no use of deception unless it is justified by the study's scientific, educational or applied value and the research could not be conducted without deception, e.g. priming studies
18
Debriefing
A good debriefing has four goals:
1. Clarify the nature of the study to participants
2. Remove any stress or negative consequences that the study may have induced
3. Obtain participants' reactions to the study itself
4. Induce positive feelings… convey appreciation for participants' time and cooperation

19
Other issues
Confidentiality
Common courtesy
 Show up on time
 Don't be cold or rude
 Show appreciation

Scientific misconduct
Fabrication
Falsification
Plagiarism

20
How to read a research paper

Peer review

21
Goal of reading research paper
Why should we read research papers?
To understand the scientific contribution that the authors are making
For public verification – microscopic examination, e.g. of methodological issues
For application in everyday life – evidence-based research
To review them for a conference or class
To keep current in one's field
Reading articles should be done in three passes (Keshav, 2013)
22
The first pass
This involves a quick scan (a bird's-eye view of the paper) with the following steps:
Carefully read the title, abstract and introduction
Read the section and sub-section headings, but ignore everything else
Glance at the statistical or analysis content to determine the underlying theoretical foundations
Read the conclusion
Glance over the references, mentally ticking off the ones you have already read

23
Cont…
At the end of the first pass you should be able to answer the five Cs:
Category: what type of paper is this? Measurement, review, descriptive, etc.
Context: which other papers is it related to? Which theoretical base was used to analyse the problem?
Correctness: do the assumptions appear to be valid?
Contributions: what are the paper's main contributions?
Clarity: is the paper well written?

24
Second pass
If you decide to read the paper further, you need to grasp its content with greater care. During this pass:
 Note down terms you don't understand or questions you may want to ask the author
 If you are acting as a reviewer, these notes will help you give feedback
 Look carefully at the figures, diagrams and other illustrations. Are the axes properly labelled? Are results shown with the proper signs and other mathematical symbols?
 Remember to mark relevant unread references for further reading

This pass takes about an hour, or even several hours


25
Third pass
The hallmark of the third pass is to fully understand the paper
The key is to attempt to virtually re-implement the paper: that is, making the same assumptions as the authors, re-create the work
By comparing this re-creation with the actual paper, you can easily identify not only the paper's innovations, but also its hidden failings and assumptions.
Identify and challenge every assumption in every statement
Pass three can take several hours

26
Reading research articles: Qualitative vs. quantitative
What is the purpose of the study? Does the purpose of the study relate to an important problem or discovery?

Qualitative paper:
- Is the purpose translated into a clearly worded research question appropriate for a qualitative study?
- Does the research question offer insight into and ask questions about social, emotional and experiential phenomena, to determine the meaning of X or the attitudes, beliefs and behavior of…?
- Is the purpose induced from experience?
- How was theory considered?

Quantitative paper:
- Is the purpose translated into a clearly worded hypothesis or research question suitable for a quantitative study?
- Does it describe or imply a relationship among phenomena using systematic, objective and empirical methods?
- Is the purpose deduced from an adequate review of the literature?
- Is a theoretical perspective identified?
- What problem or gap in our knowledge is being addressed by this research study? Has the problem been clearly stated in the article?
- Is the problem important? Why? To whom?

27
How was the purpose investigated? Was the question studied in a credible and rigorous manner?

Qualitative paper:
- Was the sampling strategy appropriate to address the research question(s)?
  - Purposeful selection?
  - Flexibility in the sampling process?
  - Ethics procedure? Informed consent obtained?
  - Saturation of data?
- Was the paradigm/research strategy (ethnography, phenomenology, grounded theory, PAR, other) appropriate for the research question?
- Were the data collected in a way that addresses the research question(s)?
  - Is there a clear and complete description of the site and participants?
  - Is there a clear description of the methods used (observation, interviews, focus groups, other)?

Quantitative paper:
- Was the sampling strategy appropriate to address the hypothesis or research question(s)?
  - Inclusion and exclusion criteria identified?
  - Bias minimized?
  - Sample size and power?
- Was the research design appropriate for addressing the hypothesis or research questions?
  - Type of design? Subject assignment to groups? Randomization used? Control group used?
  - Independent variable(s) identified? Dependent variable(s) identified? Variables operationally defined?

28
Cont…

Qualitative paper (cont.):
- Are the role of the researcher and the relationship with participants clearly described?
- Are the assumptions and biases of the researcher identified?
- Are the data representative of the whole picture?
- Was the data analysis sufficiently rigorous?
  - Is it clear how the categories/themes were derived from the data?
  - Was a decision trail developed and were rules reported?
  - Have steps been taken to test the credibility of the findings (triangulation, member checking)?
  - Are you confident that all data were taken into account?
  - Can you follow what the authors did?

Quantitative paper (cont.):
- Were data collection procedures designed to limit bias and address the stated hypothesis or research questions?
  - Instrumentation used identified?
  - Instrumentation described in detail for replication?
  - Instrument validity and reliability documented?
  - Type of data collected identified (nominal, ordinal, interval or ratio)?
  - Are dependent variables appropriate outcome measures?
  - Internal and external validity?
- Was the data analysis appropriate for the study design, question and type of data?
  - Statistical tests identified? Alpha level identified? Sufficient rigor?
- Do the results and findings relate the data to the purpose of the study and its hypothesis or research question?
  - Do results match methods? Are findings statistically significant? Are tables and figures appropriate and clear? Hypothesis accepted or rejected? Are alternative explanations provided? Are results clinically important? Are limitations discussed?

29
What are the findings and conclusions? Do the findings and conclusions relate the data to the purpose?

Qualitative paper:
- Is there a clear statement of the findings?
- Is there a sense of personally experiencing the event or phenomenon being studied?
- What were the main limitations of the study?
- Were the conclusions appropriate given the study?
- Do the findings contribute to theory development and future practice/research?

Quantitative paper:
- Were the conclusions appropriate given the study findings?
- Were conclusions based on results?
- Do the findings contribute to theory development and future practice/research?
- Are there recommendations for additional research?

30
Are the findings of the study applicable to practice, policy or theory?

Qualitative paper:
- Does this study help me to understand the context of my practice, or will it influence policy or theory formulation?
- Does this study help me understand my relationships with my patients and their families, community, or health care system?
- Are the findings transferable to my practice or my theory?

Quantitative paper:
- Are the findings applicable to my patient/client/sample?
- Are the findings generalizable to the population?
- Are the findings applicable for policy formulation or for supporting the main theory of the study?

31
Developing research topics, questions,
hypothesis and objectives

"A research topic without a question or hypothesis is an aimless one"
32
Research questions/hypotheses
• With a theory or model in mind, consider the unresolved questions around you:
– What is such-and-such a situation like?
– Why does such-and-such a phenomenon occur?
– What does it all mean?
• Research begins with such questions
• A research question is a clear, focused, concise,
complex and arguable question around which you
center your research

33
Cont…
• Research questions help writers focus their research by providing a path through the research and writing process.
• The research question guides your:
– Formulation of a research plan or proposal
– Aims and objectives
– Literature search
– Research design
– Decisions about what data are needed and from whom
– Decisions about what form of data analysis you will use
34
Steps to developing a research question
• Choose an interesting general topic
• Do some preliminary research on your general topic
• Consider your audience. For most college papers, your
audience will be academic, but always keep your
audience in mind when narrowing your topic and
developing your question.
• Start asking questions – ask yourself open-ended "how" and "why" questions about your general topic. For example, "How does depression affect self-care?"

35
Cont…
• Evaluate your question.
– Is your research question clear?
– With so much research available on any given topic, research
questions must be as clear as possible in order to be effective
in helping the writer direct his or her research.
• Is your research question focused?
• Is your research question complex? Research questions
should not be answerable with a simple “yes” or “no” or
by easily-found facts.

36
Sample research questions
• Unclear: Why are social networking sites harmful?
• Clear: How are online users experiencing or addressing privacy issues on social networking sites such as MySpace and Facebook?
• Unfocused: What is the effect of global warming on the environment?
• Focused: How is glacial melting affecting penguins in Antarctica?
• Too simple: How are doctors addressing diabetes in Zambia?
• Appropriately complex: What are the common traits of those suffering from diabetes in Zambia, and how can these commonalities be used to aid the medical community in prevention of the disease?

37
Constructing a research question
The main question sits at the top, with sub-questions beneath it. The sub-questions are building blocks (steps along the way) towards the main question, and they must be linked to create an argument.

38
From research question to hypothesis
After you've come up with a question, think about the path you expect the answer to take:
Where do you think your research will take you?
What kind of argument are you hoping to make or support?
What will it mean if your research disputes your planned argument?

39
Cont…
• A hypothesis is a specific proposition that logically follows from the theory (Leary, 2004)
• Researchers typically start with a priori hypotheses, formulated before collecting data (theory-driven)
• A hypothesis must be stated in a falsifiable way
• Most commonly, hypotheses take two formats:
– a conditional statement: "Social networking may affect interpersonal relationships."
– an if-then statement: "If interpersonal relationships are related to social networking, then increasing the number of social networking sites will reduce physical contact in one's social network."
40
Five Ws and an H in research question
development
What?
What problem, person, relationship, event, circumstance or mystery do you plan to investigate?
What specific aspects are you examining?
What has been done before in this field, and are you satisfied with what has been done?
What types of things do you think you might have to do to find out more about this issue?

41
Where?
This refers to the location the research question will address:
Geographical locality
Place
Neighbourhood
City
Country
Institution
Section

42
When?
Is there a need to limit the study to a specific period of time? For example:
The effect of the 2003 mental health policies on current mental health interventions
Correlates of HbA1c levels in individuals who were diagnosed with diabetes six months ago or more
Rate of memory recall after taking highly concentrated caffeine drinks

43
Who?
Be specific about the universe of study: is it a particular group of people?
 Patients with diabetes mellitus
 Adults who identify themselves as gay
 People with diabetes and depression

Why?
Avoid the "so what?" factor
Questions should say why the study is important or significant (though not always)
 Social impact, theoretical and conceptual contribution, replication, etc.
44
How?
Think about what can change your planned question:
Consider
1. How will you find out what you need to know?
2. How will you gain access?
3. How will data be analysed?
4. How will you finance the study?
5. How will you find the answer within the time frame
you have?

45
Constraints on research questions
Discuss these constraints in relation to your research question:
 Time
 Funds
 Personal preferences
 Capabilities
 Expertise
 Resources
 Supervision expertise
 Approval by relevant authorities including ethics committees
 Access to research participants and/or research site

46
Research question vs. topic
• Questions force you to close off some lines of inquiry
• Topics invite discussion, while questions force you to think in terms of finding answers.
• A topic should be clear and focused
Topic: school-going children with epilepsy and discrimination
Question: what factors contribute to discrimination against children with epilepsy?

47
Cementing the three
Topic
“School and problem pupils: A qualitative study”
Main question
“What meaning does school have for problem pupils?”
Sub questions
How is education perceived by problem pupils?
What has been the experience of school by problem pupils?
How do problem pupils perceive the expectations placed on them by a) parents, b) teachers, c) administrators?
How do problem pupils think parents, teachers and
administrators perceive them?

48
Study Objectives
Aim: a general statement of what the study is trying to do
 To determine… in order to…

Objectives: specific statements of desired change that the study intends to accomplish by a given time.
 To influence policy on integrating problem children in schools

Writing objectives
Don't confuse objectives with activities. Objectives ask "what are we trying to change?"
To create a video explaining how HIV and AIDS are transmitted = Activity
To increase knowledge of how AIDS is transmitted = Objective
49
Cont…
What are we trying to change?
Taxonomies of change in behavioural research:
Knowledge: increasing knowledge of a particular subject
Attitudes: creating an attitude that favors the desired behaviour
Skills: developing individual capacities to adopt a given behavior
Behavior: adopting and maintaining a particular behaviour
50
Performing literature review:
Statement of the problem

“ Research without a problem is baseless”


51
The problem: The heart of the research
process
Problems in need of research are everywhere
Your problem should address an important question, and the answer should "make a difference"
Your problem should advance the frontiers of knowledge: enabling new ways of thinking, suggesting applications, paving the way for future research
Avoid research problems that:
 Are a ploy for achieving self-enlightenment
 Merely compare two data sets, e.g. employment of women in 1990 vs. 2000
 Result in a yes or no answer

52
Finding research problems
Look around you
Read the literature
Attend professional conferences
Seek the advice of experts
Choose a topic that intrigues and motivates you
Choose a topic that others will find interesting and worth paying attention to
Examine current community, provincial or national problems
Review funders' current areas of funding and interest

53
Cont..
Solvable problems – science deals with solvable problems
- It investigates questions that are answerable given current knowledge and research techniques.
- Other questions are outside the realm of scientific investigation,
e.g. "Are there angels?" (this question falls under pseudoscience: believing the unbelievable)

54
Stating the research problem
State the problem clearly and completely
State why this is a problem, and substantiate the statement with statistics and/or relevant statements
 Welfare of children's attitudes
 Busing of schoolchildren

Think through the feasibility of the project that the problem implies
 e.g. "The study proposes to study what it means to be gay in Zambia" – is this feasible?
State the problem in a way that reflects an open mind about its solution
Prove facts; don't assume
Delimit the research problem
55
Fine tuning your research problem
1. Conduct a thorough literature review
2. Try to see the problem from all sides
3. Think through the process – link to all parts of
proposal
4. Discuss your research problem with others
5. Let it be examined and critiqued
6. Consider time

56
Practical application: Homework
On a clean sheet of paper, clearly marked with your name:
State the research problem and sub-problems
Write your hypotheses or questions
Write the delimitations
Write the definitions of key terms
Write the importance of the study
 Review the proposal using the checklist

57
Checklist
1. Will the project advance the frontiers of knowledge in an important way?
2. Have you asked an expert in your field to advise you on the value of your research effort?
3. What is good about your potential proposal?
4. What are the pitfalls of attempting this research?

58
Reviewing literature

"As a researcher, you should ultimately know the literature about your topic very well"
59
Understand the role of literature review
In addition to pinning down your research problem, a literature review has the following benefits:
 It can offer new ideas, perspectives and approaches that may not have occurred to you
 It can inform you about other researchers working in this area – contacts for feedback and advice
 It shows how others have handled methodological and design issues in studies similar to yours
 It reveals sources of data that you may not have known existed
 It can introduce you to measurement tools that others have used
 It reveals methods of dealing with problem situations that you may face
 It can help you interpret and make sense of your findings
 It boosts confidence that your topic is worth studying

60
Conducting literature search
Write down the problem in its entirety at the top of the page or computer screen
Write down each sub-problem in its entirety as well
Identify the important words and phrases in each problem
Translate these words and phrases into specific topics you must learn more about; these topics become your agenda as you read the literature
Go to the library to seek out resources related to your agenda, or search electronically
Read and note down the necessary information, backed with full references

61
Literature search example

62
Knowing when to quit
When you find repetitive patterns in the materials you are finding and reading
When you get a feeling of déjà vu
When you are no longer encountering new viewpoints

63
Writing literature review
Don't always write:
"In 1995, Jones found…; further, Smith (2013) discovered that…; Black (2010) proposed…"
Boring…!

A good literature review doesn't just report related literature; it evaluates, organises and synthesises it

64
Cont…
How then can I evaluate, organise and synthesise literature so that it's not boring?
1. Compare and contrast varying theoretical perspectives on the topic
2. Show how approaches to the topic have changed
3. Describe general trends in research findings
4. Identify discrepant or contradictory findings, and suggest possible explanations for such discrepancies
5. Identify general themes that run throughout the literature

65
Clear and cohesive literature review
Get the proper psychological orientation – read 5–10 good recent articles
Have a plan (see the funnel approach)
Emphasize relatedness and differences
Summarise your review
Write at least 5 drafts to perfect your work
66
Data collection methods
Document review
Biophysical measurements
Direct observations
Questionnaires
Open and semi-structured interviews
Diaries
Focus groups

67
Research design

"A systematic approach for how the research will be conducted"

68
Linking data and research designs
Data and methodology are intertwined
The methodology used for a particular research question must always take into account the nature of the data
To some extent, the data dictate the research method
 e.g. historical data are better obtained from written records than from lab experimentation
Research approaches: quantitative vs. qualitative
Research designs for:
Quantitative approach
Qualitative approach

69
Quantitative vs. qualitative approach
Question: What is the purpose of the research?
- Quantitative: to explain and predict; to confirm and validate; to test a theory
- Qualitative: to describe and explain; to explore and interpret; to build theory

Question: What is the nature of the research process?
- Quantitative: focused; known variables; established guidelines; predetermined methods; somewhat context-free; detached view
- Qualitative: holistic; unknown variables; flexible guidelines; emergent methods; context-bound; personal view

Question: What are the data like, and how are they collected?
- Quantitative: numeric data; representative, large sample; standardized instruments
- Qualitative: textual or image-based; informative, small sample; loosely structured or non-standardized observations and interviews
70
Cont….
Question: How are data analysed to determine their meaning?
- Quantitative: statistical analysis; stress on objectivity; deductive reasoning
- Qualitative: search for themes and categories; data may be subjective and potentially biased; inductive reasoning

Question: How are findings communicated?
- Quantitative: numbers; statistics, aggregated data; formal voice, scientific style
- Qualitative: words; narratives, individual quotes; personal voice, literary style

71
Mixed method approach
Sequential explanatory approaches to mixed methods may include:
 Quantitative → Qualitative
 Qualitative → Quantitative
A concurrent triangulation approach to mixed methods involves applying both approaches simultaneously:
 Quantitative + Qualitative

72
Choosing a research approach
Use the approach if:
- You believe that: Quantitative – there is an objective reality that can be measured; Qualitative – there are multiple possible realities constructed by different individuals
- Your audience is: Quantitative – familiar with or supportive of quantitative studies; Qualitative – familiar with or supportive of qualitative studies
- Your research question is: Quantitative – confirmatory, predictive; Qualitative – exploratory, interpretative
- The available literature is: Quantitative – relatively large; Qualitative – limited
- Your available time is: Quantitative – relatively short; Qualitative – relatively long
73
Cont..
Use the approach if:
- Your ability or desire to work with people is: Quantitative – medium to low; Qualitative – high
- Your desire for structure is: Quantitative – high; Qualitative – low
- You have skills in the area(s) of: Quantitative – deductive reasoning and statistics; Qualitative – inductive reasoning and attention to detail
- Your writing skills are strong in the area of: Quantitative – technical, scientific writing; Qualitative – literary, narrative writing
74
Variables

Researchers use objects, events and hypothetical constructs to measure phenomena

75
What is a variable?
A variable is something (e.g. an object or event) that can vary, take on numbers and be quantified
 Independent vs. dependent
 The independent variable (X) exerts an effect or influence; the dependent variable is what is being measured (how it responds to the influence of X)
 Discrete vs. continuous
 Takes whole values (e.g. one child) vs. takes any value (e.g. 2.5 kg)
 Qualitative vs. quantitative
 Descriptions vs. magnitudes
76
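The independent/dependent distinction can be made concrete with a small sketch. The scenario and all numbers below are invented for illustration (echoing the caffeine-and-recall example used earlier in these slides): the dose a researcher manipulates is the independent variable, and the recall score that responds to it is the dependent variable.

```python
from statistics import mean

# Hypothetical study (invented data): does caffeine dose affect word recall?
# Independent variable: caffeine dose in mg (manipulated by the researcher).
# Dependent variable: number of words recalled (measured response).
observations = [
    (0, 11), (0, 9), (0, 10),         # control group, 0 mg
    (200, 14), (200, 13), (200, 15),  # treatment group, 200 mg
]

def group_means(data):
    """Mean of the dependent variable at each level of the independent variable."""
    levels = {}
    for dose, recalled in data:
        levels.setdefault(dose, []).append(recalled)
    return {dose: mean(scores) for dose, scores in levels.items()}

# Comparing the dependent variable across levels of the independent variable:
print(group_means(observations))  # control mean 10, 200 mg mean 14
```

Note that dose is also a discrete variable here (fixed levels), while a measure such as reaction time would be continuous.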
Independent vs. dependent variables

77
Example: types of variables

78
Behavioral measurement

79
Preamble
All behavioural research involves the measurement of some behavioural, cognitive, emotional or physiological response
Poor measurement can undermine a study – thus there are characteristics that distinguish good measures from bad ones
80
Type of Measurements
Measures in behavioural studies fall roughly into three categories: observational, physiological and self-report measures
 Observational measures involve the direct observation of behaviour, either in person or from audio or video recordings of the participant's behaviour
 Physiological measures examine the relationship between bodily processes and behaviour, mostly processes that are not directly observable, such as heart rate, brain activity and hormonal changes

81
Cont…
Self-report measures – involve the replies that people
give to questionnaires or interviews. Responses from
this type of measure provide information on their thoughts,
feelings or behaviour (Leary, 2004).
 Cognitive self reports – measure what people think about
something
 Affective self reports measure how respondents feel about an
event or object
 Behavioural self-reports – involve participants' reports of how they
act.

82
Converging Operations in Measurements
Because any particular measurement procedure may
provide only a rough and imperfect measure of a given
construct, researchers sometimes measure a given
construct in several ways
By using several types of measures, researchers can
more accurately assess the variable of interest
When different measures provide the same results,
there is more confidence in their validity
This approach to measurement is called converging
operations or triangulation (Leary, 2004)

83
Scales of Measurements
Regardless of what kind of measure is used –
observational, physiological or self report, the goal of
measurement is to assign numbers to participants
responses
Numbers are assigned so that responses can be
summarized and analyzed (Leary, 2004)
However, not all of these numbers can be treated in the
same way, because not all are real numbers that can be
added, subtracted, multiplied and divided.

84
Cont…
However, Some numbers have special characteristics
and require special treatment
Researchers distinguish among four different levels or
scales of measurements
These scales of measurement differ in the degree to
which the number being used to represent participants’
behaviour correspond to the real number system
(Leary, 2004)

85
Types of Scales of Measurements
1. Nominal – is the simplest and here numbers that are
assigned to participants’ behaviour or characteristics
are just labels e.g. 1 boys & 2 girls (Leary, 2004)
 Numbers on a nominal scale indicate attributes of our
participants, but they are labels or names rather than real
numbers (Leary, 2004)
 Thus, you cannot perform any mathematical operations on these
numbers

86
Cont…
2. Ordinal scale – involves the rank ordering of a set of
behaviours or characteristics.
 It tells us the relative order of our participants on a particular dimension
but does not indicate the distance between participants on a dimension
being measured (Leary, 2004).
 E.g. ranking contestants by the loudness of the applause each receives at a contest

3. Interval scale – involves equal differences between the
numbers on the characteristic being measured (Leary, 2004)
 E.g. on an IQ test the difference between scores of 90 and 100 (10
points) is the same as the difference between scores of 130 and 140 (10
points)
 An interval scale does not have a true zero point that indicates the absence
of the quality being measured, e.g. an IQ of 0 or a temperature of 0°C

87
Cont…
4. Ratio scale – is the highest level of measurement
 It has a true zero point and therefore involves real numbers that can be
added, subtracted, multiplied and divided (Leary, 2004)
 Many measures of physical characteristics, such as weight and height,
are on a ratio scale
 Because weight has a true zero point (indicating no weight), it
makes sense to talk about 100 pounds being twice as heavy as 50
pounds

88
Importance of Scales of Measurement
1. Measurement scale determines the amount of
information provided by a particular measure.
 Nominal scales provide less information than ordinal,
interval or ratio scales
2. The kind of scale of measurement determines the
kinds of statistical analyses that can be performed on the
data (Leary, 2004)
 Certain mathematical operations can be performed only on
numbers that conform to properties of a particular
measurement scale.

89
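The second point above can be sketched as a small lookup: a hypothetical Python mapping (the function name and groupings are illustrative, not a standard API) from each scale of measurement to the descriptive statistics it supports.

```python
# Hypothetical sketch: which common statistics are permissible at each scale
# of measurement. The groupings follow the slides; the names are illustrative.
PERMISSIBLE_STATS = {
    "nominal":  ["mode", "frequency counts"],
    "ordinal":  ["mode", "frequency counts", "median", "percentiles"],
    "interval": ["mode", "frequency counts", "median", "percentiles",
                 "mean", "standard deviation"],
    "ratio":    ["mode", "frequency counts", "median", "percentiles",
                 "mean", "standard deviation", "ratios (twice as heavy)"],
}

def permissible_statistics(scale):
    """Return the descriptive statistics appropriate for a given scale."""
    return PERMISSIBLE_STATS[scale.lower()]

print(permissible_statistics("Nominal"))  # mode and frequency counts only
```

Note how each higher scale inherits every statistic of the scales below it, which mirrors the slide's point that nominal scales carry the least information.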
Estimating the Reliability of a Measure
How do we know whether a particular
measurement technique does in fact produce
meaningful and useful scores that accurately
reflect what we want to measure?
The first characteristic that any good measure
should possess is reliability
Reliability refers to the consistency or
dependability of a measuring technique (Leary,
2004)
Imagine a bathroom scale giving you different
weights – 59, 65, 69, 70 – within a day: it is not
reliable

90
Measurement Error
A participant's score on a particular measure consists of
two components: the true score and measurement error.
It can be portrayed by the equation:
observed score = true score + measurement error
(Leary, 2004)
The true score is the score that the participant would
obtain if our measure were perfect (Leary, 2004)
However, virtually all measures contain measurement
error – the component of the participant's score that
results from factors that distort the score so that it is not
precisely what it should be
91
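The equation above can be illustrated with a short simulation; the bathroom-scale weight and the size of the error spread are invented for illustration, not taken from the text.

```python
import random

random.seed(42)  # make the illustration repeatable

def observed_score(true_score, error_sd):
    # observed score = true score + measurement error (Leary, 2004)
    return true_score + random.gauss(0, error_sd)

# One person weighed five times on an unreliable bathroom scale:
true_weight = 65.0
readings = [observed_score(true_weight, error_sd=4.0) for _ in range(5)]
print([round(r, 1) for r in readings])  # readings scatter around 65
```

With a perfect measure (error_sd=0) every reading would equal the true score; the scatter here is pure measurement error.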
Factors contributing to measurement error
These factors fall into five major categories:
1. Transient states – participants' mood, health, level
of fatigue and anxiety contribute to
measurement error, so that the observed score does
not reflect the participant's true characteristics or
reactions (Leary, 2004)
2. Stable attributes –paranoid or suspicious
participants may distort their answers and less
intelligent participants may misunderstand some
questions. Motivational levels also may account
for measurement error (Leary, 2004)
92
Cont…
3. Situational factors – a researcher's friendliness,
sternness or aloofness may make the participant
work harder, feel intimidated, or feel angered or
unmotivated, respectively (Leary, 2004)
 Room temperature, lighting and crowding also can artificially
affect scores
4. Characteristics of the measure itself –
ambiguous questions, measures that induce
fatigue (such as long tests) or fear (such as
intrusive or painful physiological measures) can
affect test scores
93
Cont…
5. Actual mistakes – these could be in recording
participants' responses and can make the observed
score different from the true score (Leary, 2004)
E.g. if the researcher sneezes while counting the number
of times a rat presses a bar, he/she may lose count
Careless administrators may write 3s that look like 5s, and the
person entering data into the computer may enter it
incorrectly
 Whatever the source, measurement error
undermines the reliability of the measure

94
Reliability as Systematic Variance
Researchers never know for certain how much
measurement error is contained in a particular
participant's score or what the true score really is.
However, Researchers have ways of estimating the
reliability of the measure they use
If a measure is not acceptably reliable, measures can
be taken to increase its reliability
Assessing a measure’s reliability involves an
analysis of the variability in a set of scores (Leary,
2004)

95
Cont…
If we combine the scores of many participants and
calculate the total variance of the set of scores, it is
composed of the same two components:
 Total variance in a set of scores = variance due to true scores + variance due to measurement error
The portion of the total variance in a set of scores that is
associated with participants' true scores is systematic variance,
because the true-score component is related to the actual
attribute being measured (Leary, 2004)
The variance due to measurement error is error
variance, because it is not related to the attribute being
measured
96
Cont…
Therefore, to assess the reliability of a measure,
researchers estimate the proportion of the total
variance in the data that is true-score (systematic)
variance versus measurement error (Leary, 2004)
Reliability = true-score variance / total variance
Thus, reliability is the proportion of the total variance
in a set of scores that is systematic variance associated
with participants' true scores
The reliability of a measure can range
from .00 (indicating no reliability) to 1.00 (indicating
perfect reliability) (Leary, 2004)
97
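The reliability formula above can be checked with simulated data (the score distributions are invented for illustration): with a true-score SD of 15 and an error SD of 5, reliability should come out near 15² / (15² + 5²) = .90.

```python
import random
from statistics import pvariance

random.seed(1)
true_scores = [random.gauss(100, 15) for _ in range(500)]  # systematic part
errors      = [random.gauss(0, 5)    for _ in range(500)]  # error part
observed    = [t + e for t, e in zip(true_scores, errors)]

# Reliability = true-score variance / total variance
reliability = pvariance(true_scores) / pvariance(observed)
print(round(reliability, 2))  # close to .90 for these settings
```

In real research the true scores are never observed, so this ratio has to be estimated indirectly, which is exactly what the methods on the following slides do.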
Cont…
The reliability is zero when none of the total
variance is true-score variance
When reliability is zero, the scores reflect
nothing but measurement error and the
measure is totally worthless (Leary, 2004).
As a rule of thumb, a measure is considered
reliable if at least 70% of the total variance
in scores is systematic or true-score variance
(Leary, 2004).

98
Assessing Reliability
Researchers use three methods to estimate the
reliability of their measure: test-retest, interitem and
interrater reliability.
All three are based on the same logic: to the extent
that two measurements of the same behaviour,
object or event yield similar scores, we can assume
that both measures are tapping into the same true
score.
But, if two measurements yield very different scores,
the measure must contain a high degree of
measurement error (Leary, 2004).
99
Cont…
Most estimates of reliability are obtained by
examining the correlation between what are
supposed to be two measures of the same
behaviour, attribute or event
This correlation is expressed as a correlation
coefficient (Leary, 2004)
A correlation coefficient is a statistic that expresses
the strength of the relationship between two measures on a
scale from .00 (no relationship) to 1.00 (a perfect
relationship)
Correlation coefficients can be positive or negative
100
Methods of estimating reliability
1. Test-Retest – refers to consistency of participants’
responses on a measure over time (Leary, 2004)
 Assuming that the characteristic being measured is relatively
stable and does not change over time, participants should
obtain approximately the same score each time they are
measured.
 Because there is some measurement error in even well-
designed tests, the scores won't be exactly the same, but they
should be close (Leary, 2004)

101
Cont…
Test-retest reliability is determined by measuring participants on
two occasions, usually separated by a few weeks (Leary,
2004)
The two sets of scores are then correlated to see how closely
related they are to each other.
If the two sets of scores correlate highly (at least .70), the
measure has good test-retest reliability.
If they do not correlate highly, then the test contains too
much measurement error (Leary, 2004)
Assessing test-retest reliability makes sense only if the attribute
being measured would not be expected to change, e.g.
intelligence, attitudes, personality, but not hunger or
fatigue
102
Cont…
2. Interitem reliability – assesses the degree of
consistency among the items on a scale (Leary, 2004)
• A personality inventory, for example, consists of several
questions that are summed to provide a single score
that reflects the respondent’s extraversion, shyness etc.
• Measures of depression ask participants to rate themselves
on several mood-related items (sad, unhappy, helpless
etc.) that are added together to provide a single
depression score (Leary, 2004)
• When researchers sum up such scores, they must be
sure that all of the items are tapping into the same
construct (such as a particular trait, emotion or
attitude)
103
Cont…
 On an inventory measuring extraversion, for example, researchers
want all items to measure some aspects of extraversion
 Including the items that don’t measure the construct of interest on
a test increases measurement error
 To know whether items in a measure tap into the same construct,
researchers look at the item-total correlation for each item on a
scale (Leary, 2004)
 The item-total correlation is the correlation between a particular item
and the sum of all other items on the scale

104
Cont…
For example, if you have 10 items on a hostility measure,
you could look at the item-total correlation between each
item and the sum of people’s scores on the other nine items
If a particular item measures the same construct as the rest
of the items, it should correlate at least moderately with
the sum of those items (Leary, 2004)
Psychometricians also use split-half reliability as an index
of interitem reliability
With split-half reliability, psychometricians divide the
items on the scale into two sets

105
Cont…
This is done by splitting the items in half, or by forming the
odd-numbered items into one set and the even-
numbered items into the other
Total scores are then obtained by adding the
items within each set, and the correlation between
the two sets is calculated
If the items on the scale measure the same
construct, scores obtained on the two halves should
correlate highly, .70 or more (Leary, 2004)

106
Cont…
3. Interrater Reliability – also called interjudge or
interobserver reliability; involves consistency
among two or more test administrators who
observe and record participants’ behaviour (Leary,
2004)
 If one observer records 15 bar presses and another observer
records 18 bar presses, the difference between their
observations represents measurement error
 Researchers use two methods for assessing interrater
reliability (Leary, 2004):

107
Cont…
1. If the raters are simply recording whether a
behaviour occurred, we calculate the percentage
of times they agreed
2. If the raters are rating the participants’ behaviour
on a scale (e.g. an anxiety rating from 1 to 5), we
can correlate their ratings across participants
 If the observers are making similar ratings we should
obtain a relatively high correlation (at least .70)
between them.

108
Increasing the reliability of a measure
It is not always possible to assess the reliability of
measures; however, efforts can be made to maximize
the reliability of a measure:
Standardize the administration of a measure
Clarify instructions and questions
Train observers
Minimize error in coding data

109
Estimating the Validity of a Measure
Validity refers to the extent to which a
measurement procedure measures what it is
intended to measure.
Validity is the degree to which variability in
participants’ scores on a particular measure reflects
variability in the characteristic we want to
measure
Validity asks questions like:
Do scores on the measure relate to the behaviour or attribute of
interest?
Are we measuring what we think we are measuring?
110
Assessing validity
Researchers often refer to three different types of
validity: face validity, construct validity, and
criterion related validity
1. Face validity – refers to the extent to which a measure
appears to measure what it is supposed to measure
 Face validity involves the judgment of the test
administrator
 A measure has face validity if people think it does

111
Cont…
Three qualifications must be kept in mind when
estimating validity using face validity:
1. Just because a measure has face validity does not
mean that it is actually valid
2. Many measures that lack face validity are in fact
valid
3. Sometimes psychometricians design measures that
lack face validity if they think that respondents will
hesitate answering sensitive questions honestly

112
Cont…
2. Construct Validity – much of behavioural research
involves the measurement of hypothetical constructs –
entities that cannot be directly observed but are inferred on
the basis of empirical evidence.
Hypothetical constructs such as intelligence, status, self
concept, motivation, learning etc are common in psychology
Cronbach & Meehl (1955) suggested that the validity of
measures of hypothetical constructs can be assessed by
studying the relationship between the measure of the construct
and scores of other measures
We should specify what the scores on a particular measure
should relate to if that measure is valid

113
Cont…
For example, scores on a measure of self esteem should
be positively related to scores on measures of
confidence and optimism but negatively related to
measures of insecurity and anxiety
We examine construct validity by calculating
correlations between the measure we wish to validate
and other measures
To have construct validity, a measure should both
correlate with other measures that it should correlate
with (convergent validity) and not correlate with
measures that it should not correlate with (discriminant
validity)
114
Cont…
3. Criterion-Related Validity – refers to the extent to
which a measure allows us to distinguish among
participants on the basis of a particular
behavioural criterion
 For example, do scores on the SAT permit us to
distinguish students who will do well in college from
those who will not? Does a self-report measure of marital
conflict actually correlate with the number of fights that
married couples have? Do scores on a depression scale
discriminate between people who do and do not show
depressed patterns of behaviour?

115
Cont…
Note that in each of these examples we do not assess
the relationship with other constructs, as in
construct validity, but rather the relationship
between each measure and a relevant behavioural
criterion
When assessing criterion related validity,
psychometricians identify behavioural outcomes
that the measure should be related to if the measure
is valid.
If the measure does not predict behavioural
criteria, then it lacks criterion validity
116
Cont…
• Researchers distinguish between two primary kinds of
criterion validity: concurrent and predictive validity
• In concurrent validity, the two measures are administered
at roughly the same time
• The question is whether the measure distinguishes
successfully between people who score low vs. high on
the behavioural criterion at the present time
• When the scores on the measure are related to behaviours
that they should be related to right now, the measure
possesses concurrent validity

117
Cont….
Predictive validity refers to a measure’s ability
to distinguish between people on a relevant
behavioural criterion in the future, e.g. a scholastic
aptitude test
Criterion-related validity is of interest in
applied psychological practice, e.g.:
 In educational research, we are interested in the degree to which
tests predict academic performance
 In personnel selection, tests must demonstrate that they will
successfully predict future on-the-job performance

118
Designing questionnaires and
interview guides

119
What is a questionnaire?
Formalised set of questions for obtaining information
from respondents
The objective in questionnaire construction is to
translate the researcher's information needs into a set
of specific questions that respondents are willing and able
to answer
An information need is the specific type of information
required from respondents to answer your research
objectives

120
Study objectives and information needs
To examine the association of being diagnosed with
diabetes and psychological distress
 information needs
To find out diabetes related stress
Information needs
To determine factors influencing diabetes related stress
Information needs

121
Why a questionnaire
Main means of
collecting quantitative
primary data
Little difference between a
questionnaire-based
interview and a self-
completed questionnaire
Standardised, leading to
internal consistency and
coherence of analysis

122
The nature of questionnaires
For any questionnaire to be a good measure it needs
enough appropriate items and a scale which measures only
the attribute and nothing else.
This principle is called unidimensionality (Nunnally,
1978).
Well designed items are more likely to measure the
intended attribute and to distinguish effectively between
people (Coaley, 2010).
It is generally thought that at least 30 items are needed for
good accuracy, although it is difficult to predict how many,
and it depends on the construct to be measured (Coaley,
2010).
123
Item writing
Writing items can be difficult. DeVellis (1991) provides six
guidelines for item writing:
1. Define clearly what you want to measure. To do this, use
substantive theory as a guide and try to make items as
specific as possible
2. Generate an item pool. Theoretically, all items are
randomly chosen from a universe of item content. In
practice, however, care in selecting and developing items
is valuable. Avoid redundant items. In the initial phase,
you may want to write three or four items for each one
that will eventually be used on the test or scale
3. Avoid exceptionally long items, which are rarely good
124
Cont…
4. Keep the level of reading difficulty appropriate for those who will
complete the scale
5. Avoid “double-barrelled” items, which convey two or more ideas
at the same time. For example, consider an item that asks the
respondents to agree or disagree with the statement, “I vote PF
because I support social programs”. There are two different
statements with which the person could agree: “I vote PF” and “I
support social programs”
6. Consider mixing positively and negatively worded items.
Sometimes respondents develop the “acquiescence response set”.
This means that the respondent will tend to agree with most items.
To avoid this bias, you can include some items worded in the
opposite direction. (like you did with the self-actualization measure,
reverse the values for your scale, 1=4, 2=3, 3=2, 4=1).
125
Item Formats
The dichotomous format – this format offers two
alternatives for each item.
Usually a point is given for the selection of one of the
alternatives. A common example of this format is the two-
choice format: true/false, yes/no
These are easy to construct, administer and score
 These require absolute judgement
However, they encourage test takers to memorize material
Also, the mere chance of getting any item correct is 50% –
thus, to be reliable, they must include many items (Kaplan
& Saccuzzo, 2001)

126
Cont…
Dichotomous formats appear in educational tests, and
many personality tests.
Personality test constructors often prefer the two choice
formats (true/false, yes/no) because they require absolute
judgment.
e.g. “I often worry about my sexual performance”
I have often cried whilst watching sad films
I believe I am being followed
sometimes I see and hear things that other people
do not hear or see

127
Example

128
When to use this format
When measuring absolute constructs /factors e.g.
personality tests
When measuring achievement, ability, demographic
characteristics
E.g. Do you suffer from any disease? YES_ NO__
Descriptive statistics e.g. Frequencies, graphs, cross
tables

129
Likert Scale
Likert scale requires the respondent to indicate the
degree of agreement with a particular non-absolute
statement (Kaplan & Saccuzzo, 2001)
 The technique is part of Likert’s (1932) method of
attitude scale construction, e.g. attitude scales, mood etc.
Three, four, five or six alternatives are offered
E.g. strongly disagree, disagree, neutral, agree,
strongly agree
Scoring requires that negatively worded items be
reverse scored and the responses then be summed
130
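Reverse scoring as described above can be sketched in a few lines; the 5-point coding and the positions of the negatively worded items are assumptions for illustration.

```python
# Reverse-scoring negatively worded Likert items before summing.
MAX_POINT = 5  # assumed 5-point scale coded 1-5

def reverse(score):
    # On a 1-5 scale: 1 -> 5, 2 -> 4, 3 -> 3, 4 -> 2, 5 -> 1
    return (MAX_POINT + 1) - score

raw = [4, 2, 5, 1, 3]    # one respondent's answers to five items (invented)
negative_items = {1, 3}  # zero-based positions of negatively worded items

total = sum(reverse(s) if i in negative_items else s
            for i, s in enumerate(raw))
print(total)  # 4 + 4 + 5 + 5 + 3 = 21
```

After reversing, a high score means the same thing on every item, so the sum is interpretable.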
The category format
 Similar to the Likert scale but uses a greater number of choices, e.g. a
10-point scale
 It need not have exactly ten points; it can have either more or
fewer categories
 The endpoints should be well defined, and respondents should continue
to be reminded of the definition of the endpoints
 Controversy: evidence suggests that increasing the number of
response categories may not increase reliability and validity
 In fact, increasing the number of responses beyond nine can reduce
reliability, because responses may be more likely to include elements of
randomness when there are so many alternatives that cannot be clearly
discriminated between (Clark & Watson, 1998)

131
Example

132
Visual Analogue Scale
Related to the category scale, but here the respondent is
given a 100-millimetre line and asked to place a
mark between two well-defined endpoints
Responses are scored according to the measured
distance from the first endpoint to the mark
Visual analogue scales are popular for measuring self-
rated health, e.g. adherence, pain
Good for use with illiterate populations, but not good
for multi-item scales because they are time consuming

133
Examples of VAS
A B

134
Checklist
This method requires a subject to pick from a long
list of adjectives and indicate which ones are
characteristic of him/herself
Checklists can be used to describe oneself or someone
else
Common in personality tests which characterise
groups or individuals by traits
E.g. adventurous, alert, quiet, imaginative, fair-
minded
Good for the demographic section of a questionnaire
135
Example

136
Item Analysis
Item analysis refers to
a set of methods used to
evaluate test items.
Item analysis is an
important aspect of test
construction
Good test making
requires careful attention
to the principles of test
construction

137
Purpose for item analysis
Evaluates the quality of each item
 Rationale: the quality of the items determines the
quality of the test (i.e., its reliability & validity)
 May suggest ways of improving the measurement
of a test
Can help with understanding why certain tests
predict some criteria but not others
138
Questions about performance of items
When analyzing the test items, we have several
questions about the performance of each item.
Some of these questions include:
· Are the items congruent with the test objectives?
· Are the items valid? Do they measure what they're
supposed to measure?
· Are the items reliable? Do they measure
consistently?

139
Cont…
· How long does it take a respondent to
complete each item?
· What items are most difficult to answer
correctly?
· What items are easy?
· Are there any poor performing items that
need to be discarded?

140
Descriptive statistic analysis
This provides the most common means of evaluating
items
It involves consideration of the item mean and its
variability
In general, the closer the mean is to the centre of the
distribution of item scores and the higher its variability,
the more effective an item will be

141
Quantitative Item Analysis
Inter-item correlation matrix displays the correlation of
each item with every other item
provides important information for increasing the test’s
internal consistency
each item should be highly correlated with every other
item measuring the same construct and not correlated
with items measuring a different construct

142
Quantitative Item Analysis
Items that are not highly correlated with other items
measuring the same construct can and should be
dropped to increase internal consistency

143
EFA (exploratory factor analysis) – identifies the underlying factors for a construct/measure

144
Developing interview guide
Pick a topic that is interesting to you.
Research should guide your questions.
You should know what the research literature says about the
people you are studying.
 Using research to guide your questions means that you have
done a thorough review of the literature and that you know
what other scholars say about the people you are studying.
Questions should be open ended.
The goal of qualitative research is to uncover as much about
the participants and their situations as possible and yes or no
questions stop the interviewee before getting to the “good
stuff”.

145
Cont…
 Start with the basics.
Ask your interviewee basic background data about her/himself
(things like name, where they grew up, etc.) as a way of warming
up your participant.
Background information is also important for demographic data
about your participants
Begin with easy to answer questions and move towards
ones that are more difficult or controversial.
The idea, again, is to slowly build confidence and trust with the
interviewee.
In other words, you would not want to start with a big, probing,
“high stakes” question like, “Have you ever been date raped?”
Chances are if you do, your interviewee will withdraw.

146
Cont..
 The phrase “tell me about…” is a great way to start a question.
 The phrase “tell me about” is not only an invitation for the interviewee to
tell you a story; it also assumes that the interviewee will talk and
subtly commands the interviewee to begin talking.
Use prompts.
 As a qualitative researcher conducting interviews, you should both
trust your instincts and be ready for surprises.
 Creating probes or prompts for each question helps keep you on
track.
Be willing to make “on the spot” revisions to your
interview protocol.
 Many times when you are conducting interviews a follow up question may
pop into your mind.
 If a question occurs to you in the interview ask it.

147
Cont…
 Don’t make the interview too long.
 Practice with a friend.
Do your questions make sense? Do other people understand what
you are trying to ask?
 It is always a good idea to pilot test your questions with someone
you know to make sure that your questions are clear.
 Make sure that you have set up a second shorter interview to
help you clarify or ask any questions you missed after you
have transcribed the interview.
Once you read over the transcribed interview, you may not
understand what was said or what your interviewee meant and a
second shorter interview lets you clear up anything that you do
not understand.
148