
Research - systematic investigation that is empirical in nature and designed to contribute to public knowledge

*Empirical means that the possibility of knowing a given thing rests on experience, observation, and experiments.

Social research involves topics such as our societies, the things we do, how we interact, how we live, how we feel, and how we see ourselves.
It is done in fields such as sociology, education, public health, criminology, housing, public welfare, and applied and social psychology.

Research enterprise - the macro-level effort to accumulate knowledge across multiple empirical systematic public research projects

We do research studies in order to contribute to our knowledge.


We accumulate knowledge with the idea that it may contribute some day to something we can
use.
When we move research from discovery to practice we can say we are translating research into
practice.

Translational research - is the systematic effort to move research from initial discovery to practice and ultimately to impacts on our lives.

The Research-practice continuum - process of moving from an initial research idea or discovery to practice, and the potential for the idea to influence our lives or world

Basic research - research that is designed to generate discoveries and to understand how the
discoveries work

Applied research - research where a discovery is tested under increasingly controlled conditions
in real-world contexts

Implementation and dissemination research - research that assesses how well an innovation or
discovery can be distributed in and carried out in a broad range of contexts that extend beyond
the original controlled studies

Impact research - research that assesses the broader effects of a discovery or innovation on society

Policy research - research that is designed to investigate existing policies or develop and test new
ones
Research synthesis - is a systematic study of multiple prior research projects that address the same research question or topic; it summarizes the results in a manner that can be used by practitioners

Meta-analysis - uses statistical methods to combine the results of similar studies quantitatively in
order to allow general conclusions to be made

Systematic review - is a research synthesis approach that focuses on a specific question or issue and uses specific preplanned methods to identify, select, assess, and summarize the findings of multiple research studies. Often a systematic review involves a panel of experts who discuss the research literature and reach conclusions about how well a discovery works to address a problem

While a meta-analysis is always a quantitative synthesis, a systematic review may be a judgmental expert-driven synthesis, a meta-analysis, or both.

Guideline - is the result of a systematic process that leads to a specific set of research-based recommendations for practice that usually includes some estimates of how strong the evidence is for each recommendation

The interpositioning of a synthesis and guideline process in the middle of the research-practice
continuum has transformed virtually every area of applied social research practice in our society.

Evidence-based practice (EBP) - is a movement designed to encourage or require practitioners to employ practices that are based on research evidence as reflected in research syntheses or practice guidelines

The EBP movement represents a major attempt of the research enterprise to achieve a better
integration of research and practice.

Evolutionary epistemology - the branch of philosophy that holds that ideas evolve through the
process of natural selection

In this view, an individual discovery or idea is like an organism in biology. It competes with other established and emerging discoveries.
Ultimately, the question for any discovery is its survival value: it either survives or it doesn't.

One of the most difficult aspects of research is how to develop the idea for a research project.
Many of the practical problems that arise in practice can lead to extensive research efforts.

Request for proposals (RFP) - a document issued by a government agency or other organization that, typically, describes the problem that needs addressing, the contexts in which it operates, the approach the agency would like you to take to investigate the problem, and the amount the agency would be willing to pay for such research
One of the most important early steps in a research project is conducting the literature review.

Literature review - is a systematic compilation and written summary of all of the literature published in scientific journals that is related to a research topic of interest. A literature review is typically included in the introduction section of a research write-up

A literature review is designed to identify related research and to set the current research project within a conceptual and theoretical context.

Peer review - a system for reviewing potential research publications where authors submit potential articles to a journal editor who solicits several reviewers who agree to give a critical review of the paper. The paper is sent to these reviewers with no identification of the author so that there will be no personal bias (either for or against the author). Based on the reviewers' recommendations, the editor can accept the article, reject it, or recommend that the author revise and resubmit it.

Articles in journals with peer review processes are likely to have a fairly high level of credibility.

Theoretical - pertaining to theory. Social research is theoretical, meaning that much of it is concerned with developing, exploring, or testing the theories or ideas that social researchers have about how the world operates

Empirical - based on direct observations and measurements of reality

We present the first two terms—theoretical and empirical—together because they are often contrasted with each other.
You can even think of most research as a blending of these two terms—a comparison of theories about how the world operates with observations of its operation.

Experience has shown that even the most accurate of our instruments and measurement
procedures inevitably have some inaccuracy in them.

Probabilistic - based on probabilities

Causal - pertaining to a cause-effect relationship or hypothesis. Something is causal if it leads to an outcome or makes an outcome happen

Causal relationship - a cause-effect relationship. For example, when you evaluate whether your treatment or program causes an outcome to occur, you are examining a causal relationship

Many studies explore relationships, for example, a study may attempt to determine whether there
is a relationship between gender and salary.

Research projects usually can be classified into one of three basic forms:

1. Descriptive studies are designed primarily to document what is going on or what exists.
Public opinion polls that seek to describe the proportion of people who hold various
opinions are primarily descriptive in nature. For instance, if you want to know what
percentage of the population would vote for a Democrat or a Republican in the next
presidential election, you are simply interested in describing something.

2. Relational studies look at the relationships between two or more variables. A public
opinion poll that compares the proportion of males and females who say they would vote
for a Democratic or a Republican candidate in the next presidential election is essentially
studying the relationship between gender and voting preference.
3. Causal studies are designed to determine whether one or more variables (for example, a
program or treatment variable) causes or affects one or more outcome variables. If you
performed a public opinion poll to try to determine whether a recent political advertising
campaign changed voter preferences, you would essentially be studying whether the
campaign (cause) changed the proportion of voters who would vote Democratic or
Republican (effect).

Cross-sectional studies - studies that take place at a single point in time

Longitudinal study - a study that takes place over multiple points in time; you measure multiple waves of measurement at different time points.
There are two types of longitudinal designs:

Repeated measures - two or more waves of measurement over time, fewer than twenty

Time series - many waves of measurement over time, usually at least twenty waves

A relationship refers to the correspondence between two variables.
We can talk about types of relationships in two ways: by the nature of the relationship (correlational or causal) or by its pattern.

Correlational relationship - simply says that two things perform in a synchronized manner; an association between two variables such that, in general, the level on one variable is related to the level on the other. Technically, the term "correlational relationship" is redundant: a correlation by definition always refers to a relationship. However, the term is used to distinguish it from the specific type of association called a causal relationship

For instance, economists often talk of a correlation between inflation and unemployment.
When inflation is high, unemployment also tends to be high.
The two variables are correlated; but knowing that two variables are correlated does not tell whether one causes the other.
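
As an illustration, a correlation like this can be computed directly. A minimal sketch in Python with made-up rate values (the numbers are hypothetical, not real economic data):

```python
import numpy as np

# Hypothetical yearly rates, in percent (illustration data only)
inflation    = np.array([2.1, 3.4, 5.0, 6.2, 4.8, 3.0])
unemployment = np.array([4.0, 4.9, 6.1, 7.0, 5.8, 4.5])

# Pearson correlation: +1 = perfect positive association, 0 = none
r = np.corrcoef(inflation, unemployment)[0, 1]
print(f"r = {r:.2f}")  # a high r describes association only, not causation
```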

Causal relationship - is a synchronized relationship between two variables, just as a correlational relationship is, but in a causal relationship we say that one variable causes the other to occur

Third variable (missing variable problem) - an unobserved variable that accounts for a correlation between two variables

It may be that a third variable is causing both the building of roads and the birthrate and leading
to the correlation that is observed. For instance, perhaps the general world economy is
responsible for both. When the economy is good, more roads are built in Europe and more
children are born in the United States.

Relationships
No relationship - for instance, we suspect that there is no relationship between the length of the
lifeline on your hand and your grade-point average. If we know your GPA, we don’t have any idea
how long your lifeline is.

Positive relationship - high values on one variable are associated with high values on the other,
and low values on one are associated with low values on the other

Negative relationship - high values on one variable are associated with low values on the other

Curvilinear relationship - shows a relationship that changes over the range of both variables; for instance, as the dosage rises, the severity of illness goes down, but at some point the patient begins to experience negative side effects associated with too high a dosage, and the severity of illness begins to increase again

Hypotheses

Hypothesis - a specific statement of prediction; it describes in concrete (rather than theoretical) terms what you expect to happen in your study

Actually, whenever we talk about a hypothesis, we are really thinking simultaneously about two hypotheses. Let's say that you predict that there will be a relationship between two variables in your study. The way to set up the hypothesis test is to formulate two hypothesis statements: one that describes your prediction and one that describes all the other possible outcomes with respect to the hypothesized relationship.

Alternative hypothesis - the hypothesis that you support (your prediction), what you expect will
happen in your study

Null hypothesis - the hypothesis that describes the possible outcomes other than the alternative hypothesis. Usually, the null hypothesis predicts there will be no effect of a program or treatment you are studying

In some studies, your prediction might well be that there will be no difference or change. In this case, you are essentially trying to find support for the null hypothesis and you are opposed to the alternative.

One-tailed hypothesis - a hypothesis that specifies a direction; for example, when your hypothesis predicts that your program will increase the outcome. If your prediction specifies a direction, the null hypothesis automatically includes both the no-difference prediction and the prediction that would be opposite in direction to yours. For instance, let's imagine that you are investigating the effects of a new treatment for depression and that you believe one of the outcomes will be that there will be less depression.
Your two hypotheses might be stated something like this:
The null hypothesis for this study is - H0: As a result of the new program, there will either be no significant difference in depression or there will be a significant increase.
Which is tested against the alternative hypothesis - HA: As a result of the new program, there will be a significant decrease in depression.

Two-tailed hypothesis - a hypothesis that does not specify a direction; for example, if your hypothesis is that your program or intervention will have an effect on an outcome, but you are unwilling to specify whether that effect will be positive or negative. For instance, let's assume that you are studying a new drug treatment for depression. The drug has gone through some initial animal trials, but has not yet been tested on humans. You believe (based on theory and the previous research) that the drug will have an effect, but you are not confident enough to hypothesize a direction and say the drug will reduce depression. After all, you've seen more than enough promising drug treatments come along that eventually were shown to have severe side effects that actually worsened symptoms
The important thing to remember about stating hypotheses is that you formulate your prediction (directional or not), and then you formulate a second hypothesis that is mutually exclusive of the first and incorporates all possible alternative outcomes for that case.
When your study analysis is completed, the idea is that you will have to choose between the two
hypotheses.
If your prediction was correct, you would (usually) reject the null hypothesis and accept the
alternative. If your original prediction was not supported in the data, you will accept the null
hypothesis and reject the alternative.

Hypothetico-deductive model - a model in which two mutually exclusive hypotheses that together
exhaust all possible outcomes are tested, such that if one hypothesis is accepted, the second
must therefore be rejected
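
To make the one-tailed depression example concrete, here is a minimal sketch of such a test in Python with simulated scores (the group means, SDs, sample sizes, and alpha level are assumptions for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated depression scores (made-up data): lower = less depressed
treatment = rng.normal(loc=18, scale=5, size=40)  # new-program group
control   = rng.normal(loc=22, scale=5, size=40)  # no-program group

# One-tailed test of HA: treatment mean < control mean
# (the 'alternative' argument requires SciPy >= 1.6)
t, p = stats.ttest_ind(treatment, control, alternative="less")
if p < 0.05:
    print(f"p = {p:.4f}: reject H0 and accept HA (significant decrease)")
else:
    print(f"p = {p:.4f}: fail to reject H0")
```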

Variables

Variable - any entity that can take on different values; anything that can vary can be considered a variable. For instance, age can be considered a variable because age can take on different values for different people at different times

Quantitative - the numerical representation of some object. A quantitative variable is any variable
that is measured using numbers

Attribute - a specific value of a variable. For instance, the variable sex or gender has two attributes: male and female. A five-point agreement variable has five attributes:
1 = strongly disagree
2 = disagree
3 = neutral
4 = agree
5 = strongly agree

Independent variable - treatment or program or cause, the variable that you manipulate

Dependent variable - effects or outcomes, the variable affected by the independent variable

For instance, if you are looking at the effects of a new surgical treatment for cancer on rates of mortality for that cancer, the independent variable would be the surgical treatment and the dependent variable would be the mortality rates. The independent variable is what you (or nature) do, and the dependent variable is what results from that.

The attributes of a variable should be both exhaustive and mutually exclusive.

Exhaustive - property of a variable that occurs when you include all possible answerable
responses

Mutually exclusive - the property of a variable that ensures that the respondent is not able to
assign two attributes simultaneously. For example, gender is a variable with mutually exclusive
options if it is impossible for the respondents to simultaneously claim to be both male and female.

Types of data

Qualitative data - data in which the variables are not in a numerical form, but are in the form of
text, photographs, sound bites, and so on

Quantitative data - data that appear in numerical form

The quantitative types argue that their data are hard, rigorous, credible, and scientific. The qualitative proponents counter that their data are sensitive, nuanced, detailed, and contextual.
All quantitative data are based upon qualitative judgments; and all qualitative data can be summarized and manipulated numerically.
Unit of analysis - is whatever entity you are analyzing in your study

For instance, any of the following could be a unit of analysis:


• Individuals
• Groups
• Artifacts (books, photos, newspapers)
• Geographical units (town, census tract, state)
• Social interactions (dyadic relations, divorces, arrests)
If you collect data about crime rates in major cities in the country, your unit of analysis would be
the city.
It is the analysis you do in your study that determines what the unit is.
If you decide to base an analysis on student scores, the individual student is the unit.
However, you might decide to compare average achievement test performance for the students
with a classroom climate score. In this case, the data that go into the analysis include a variable
(achievement) where the student is the unit of analysis and a variable (classroom climate) where
the classroom is.

Hierarchical modeling - a statistical model that allows for the inclusion of data at different levels, where the unit of analysis at some levels is nested within the unit of analysis at others (e.g., student within class within school within school district)
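
A small sketch of what such nested data look like, using pandas with hypothetical student and classroom values; attaching a classroom-level variable to each student is the data-preparation step that hierarchical modeling builds on:

```python
import pandas as pd

# Student is the unit of analysis for achievement (made-up data)
students = pd.DataFrame({
    "student": [1, 2, 3, 4],
    "classroom": ["A", "A", "B", "B"],
    "achievement": [78, 85, 90, 70],
})
# Classroom is the unit of analysis for climate
climate = pd.DataFrame({"classroom": ["A", "B"], "climate": [3.8, 4.2]})

# Nest students within classrooms: each student row gets its class's climate
merged = students.merge(climate, on="classroom")
print(merged)
```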

Deductive and inductive reasoning

Deductive (reasoning) - top-down reasoning that works from the more general to the more specific

Inductive (reasoning) - bottom-up reasoning that begins with specific observations and measures and ends up as a general conclusion or theory.

Structure of research
Research question -the central issue being addressed in the study, typically phrased in the
language of theory

The research question is the central issue being addressed in the study and is often phrased in the language of theory. The research question might at this point be:
Are school-based interventions more effective (as compared to no such interventions) in reducing childhood obesity?

The problem with such a question is that it is still too general to be studied directly. Consequently, in much research, an even more specific statement, called a hypothesis, is developed that describes in operational terms exactly what you think will happen in the study (see the section, Hypotheses, earlier in this chapter). For instance, the hypothesis for your study might be something like the following:
Schools that integrate nutritional education in their curriculum will see a significant decrease in the proportion of overweight children as compared with schools that do not adopt such a program.

In causal studies, there are at least two major variables of interest: the cause and the effect.
Usually the cause is some type of event, program, or treatment.
A distinction is made between causes that the researcher can control (such as a program) versus causes that occur naturally or outside the researcher's influence (such as a change in interest rates).
The effect (or dependent variable) is the outcome that you wish to study.

Operationalization - the act of translating a construct into its manifestation—for example, translating the idea of your treatment or program into the actual program, or translating the idea of what you want to measure into the real measure. The result is also referred to as an operationalization; that is, you might describe your actual program as an operationalized program.

Thus, a key component in a causal study concerns how you decide which units (people) receive
the program and which are placed in an alternative condition.

These, then, are the major components in a causal study:


• The research problem
• The research question
• The program (cause)
• The units
• The outcomes (effect)
• The design

Validity - is a term that we use to discuss the quality of various conclusions you might reach
based on a research project, the best available approximation of the truth of a given proposition,
inference, or conclusion

To understand the types of validity, you have to know something about how researchers
investigate a research question. We will use a causal study as an example because only causal
questions involve all four validity types.

Cause construct - the abstract idea or theory of what the cause is in a cause-effect relationship you are investigating

Effect construct - the abstract idea or theory of what the outcome is in a cause-effect relationship you are investigating

Threats to validity - reasons your conclusion or inference might be wrong


Validity types:
- Conclusion validity: the degree to which conclusions you reach about relationships in your
data are reasonable
- Internal validity: the approximate truth about inferences regarding cause-effect or causal
relationships
- Construct validity: the degree to which inferences can legitimately be made from the
operationalizations in your study to the theoretical constructs on which those
operationalizations are based
- External validity: the degree to which the conclusions in your study would hold for other
persons in other places and at other times

Chapter 2

Conflict of interest - when a researcher's primary interest in the integrity of a study is compromised by a secondary interest such as personal gain (e.g., financial profit).

Nuremberg Code - This code was developed following the trial of Nazi doctors after World War II. It includes 10 principles to guide research involving human subjects. The Code has been extremely important as a reference point for all regulations related to the protection of human subjects. Among other things, it established the principles of informed consent, voluntary participation without coercion, clear scientific justification for research, and most important, limits on the risk of harm.

Thalidomide tragedy - this event involved the occurrence of very serious birth defects in children of pregnant women who had been given thalidomide as a sedative. The drug's side effects should have been known and available to doctors and patients, but were not until much harm had been done.

Kefauver-Harris Amendments - after the thalidomide tragedy, these amendments to the Food, Drug and Cosmetic Act were passed to ensure greater drug safety. For the first time, drug manufacturers were legally required to present evidence on the safety and effectiveness of their products to the FDA before marketing them. The amendments also established the requirement that participants be fully informed about potential risks or harm, and that based on this information, they voluntarily agree to participate in clinical trials.

Declaration of Helsinki - the World Medical Association adopted the Declaration of Helsinki in 1964 in order to provide a set of principles to guide the practice of medical research. The principles include such statements as "research protocols should be reviewed by an independent committee prior to initiation."

Tuskegee Syphilis Study - this was a 40-year observational study of the impact of untreated syphilis on men. The participants in the study were low-income African American men who were led to believe they were receiving treatment when in fact they were not. This was one of the major stimuli of the Belmont conference.
National Research Act - an Act passed by the U.S. Congress in 1974. It represents the first serious attempt to build a comprehensive system of research ethics in the U.S. It created a national commission to develop guidelines for human subjects research and to oversee and regulate the use of human experimentation in medicine.

Belmont Report - the report includes basic standards that should underlie the conduct of any biomedical and behavioral research involving human participants. It emphasizes universal principles that are unbounded by time or technology. The three core principles described in the Belmont Report are
1) respect for persons
2) beneficence
3) justice

respect for persons - this principle means that people are to be treated as independent and
autonomous individuals. It also means that the well-being of those who are not in a position to be
fully autonomous should also be protected. See also “vulnerable populations.”

beneficence - the expected impact on a person's well-being that may result from participation in research. Researchers should attempt to maximize the benefits of participation and take steps to identify and limit the potential for harm. This is typically done when planning a study by a careful risk/benefit assessment.

justice - justice means that participation in research should be based on fairness and not on circumstances that give researchers access to or control of a population based on status. The principle of justice also means that recruiting should be done in a way that respects the rights of individuals and does not take advantage of their status as patients, students, prisoners, children, or others not fully able to make independent decisions.

vulnerable populations - this is the term used to designate those who may not be fully in control of their decision making. There are several specific groups considered vulnerable, including children, prisoners, and people with impaired cognitive capacity (e.g., with mental retardation or dementia). Children under 18 are not of legal age to provide consent. Instead, researchers must seek permission from parents or legal guardians.

assent - it means that a child has affirmatively agreed to participate in a study. It is not the mere absence of an objection. The appropriateness of an assent procedure depends on the child's developmental level and is determined by the researcher in consultation with the IRB.

informed consent - a policy of informing study participants about the procedures and risks
involved in research that ensures that all participants must give their consent to participate.

voluntary participation - the right of voluntary participation is one of the most basic and important of all research participant rights. It means that fully informed individuals have the right to decide whether or not to participate in a study without coercion. It also means that they can terminate participation at any point without negative consequences.

privacy - protection of personal information about participants. It is typically accomplished through the use of confidentiality procedures that specify who will have access to personally identifying data and the limits of that access. It can also include the use of anonymous data in which no personally identifying information is ever collected.

anonymity - the assurance that no one, including the researchers, will be able to link data to a specific individual. This is the strongest form of privacy protection in research because no identifying information is collected on participants.

confidentiality - an assurance made to study participants that identifying information about them acquired through the study will not be released to anyone outside of the study.
de-identification - the process of removing identifying information from data sets in order to assure the anonymity of individuals.

deception - the intentional use of false or misleading information in study procedures. Researchers must justify the use of deception on scientific grounds and be certain to provide complete debriefing about the actual nature of the study once it has been completed.

debriefing - the process of providing participants with full information about the purpose of a study once a person has completed participation. Participants are typically offered a standard written description of the study and why it was conducted, given the opportunity to ask questions, and given the opportunity to have their data withdrawn if they are not satisfied with what has happened in the study.

right to service - the right of study participants to the best available and most appropriate
services relevant to a condition that is part of a study.

placebo - the use of a treatment that may look like the real treatment, but is inert. The use of placebos in research is allowed if careful assessment of the scientific need and care of participants can be provided.

Institutional Review Boards (IRBs) - a panel of people who review research proposals with
respect to ethical implications and decide whether additional actions need to be taken to assure
the safety and rights of participants.

protocol - a detailed document summarizing the purpose and procedures of a study. It provides a reader such as an IRB reviewer with enough information to know how participants are recruited, consented, and treated in all phases of a study. It typically includes details on all measures, the handling and storage of data, consent and assent forms, debriefing materials, and any other item that might have an impact on a participant.

1. Phase I study - a research study designed to test a new drug or treatment in a small group of
people for the first time to evaluate its safety, determine a safe dosage range, and identify
potential side effects.
2. Phase II study - a research study designed to test a drug or treatment that is given in a larger
group of people than in a Phase I study to see if it is effective and to further evaluate its safety.
3. Phase III study - a research study designed to test a drug or treatment that is given to large
groups of people using a highly controlled research design, in order to confirm the
intervention's effectiveness, monitor side effects, compare it to commonly used treatments,
and collect information that will allow the drug or treatment to be used safely.

research misconduct - fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results, are all prime examples of research misconduct. Fabrication is making up data or results and recording or reporting them. Falsification is manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.

plagiarism - the use of another person’s work without proper credit.

piecemeal and duplicate publication - an ethical issue in the dissemination of research referring
to the possibility that, in order to maximize personal reward (status, promotion, pay, and such)
from publication, a researcher would essentially repackage the results of a single study in multiple
articles.

fairness in publication credit - authorship credit should be established on the basis of the
quality and quantity of one’s contributions to a study, rather than on status, power, or any other
factor. Many research journals have now adopted guidelines to help authorship groups determine
credit.
Chapter 3

Qualitative research is typically the approach of choice in situations where you want to:

● Generate new constructs, theories, or hypotheses
● Develop detailed stories to describe a phenomenon
● Achieve a deeper understanding of issues
● Improve the quality of quantitative measures

qualitative - the descriptive nonnumerical characteristic of some object. A qualitative variable is a descriptive nonnumerical observation.

qualitative measures - data not recorded in numerical form.

ethnography - study of a culture using qualitative field research.

participant observation - a method of qualitative observation where the researcher becomes a participant in the culture or context being observed.

phenomenology - a philosophical perspective as well as an approach to qualitative methodology that focuses on people's subjective experiences and interpretations of the world.

meaning units - in qualitative data analysis, a small segment of a transcript or other text that
captures a concept that the analyst considers to be important.

field research - a research method in which the researcher goes into the field to observe the phenomenon in its natural state.

grounded theory - a theory rooted in observation about phenomena of interest. Also, a method
for achieving such a theory.

direct observation - the process of observing a phenomenon to gather information about it. This process is distinguished from participant observation, in that a direct observer doesn't typically try to become a participant in the context and does strive to be as unobtrusive as possible so as not to bias the observations.

unstructured interviewing - an interviewing method that uses no predetermined interview protocol or survey and where the interview questions emerge and evolve as the interview proceeds.

case study - an intensive study of a specific individual or specific context.

unobtrusive measures - methods used to collect data without interfering in the lives of the
respondents.

indirect measure - an unobtrusive measure that occurs naturally in a research context.

content analysis - the analysis of text documents. The analysis can be quantitative, qualitative,
or both. Typically, the major purpose of content analysis is to identify patterns in text.

exception dictionary - a dictionary that includes all nonessential words like “is,” “and,” and “of,”
in a content analysis study.

unitizing - in content analysis, the process of dividing a continuous text into smaller units that can
then be analyzed.

coding - the process of categorizing qualitative data.


All qualitative data can be coded quantitatively.
All quantitative data are based on qualitative judgment.

mixed methods research - research that uses a combination of qualitative and quantitative
methods.

data audit - the process of systematically assessing the quality of data in a qualitative study.

secondary analysis - quantitative analysis of existing data that is done either to verify or extend previously accomplished analyses or to explore new research questions. Using existing data can be an efficient alternative to collecting original primary data and can extend the validity, quality, and scope of what can be accomplished in a single study.

A two-dimensional model of research methods (figure not reproduced here).

Chapter 4

sampling - the process of selecting units (e.g., participants) from the population of interest.

population - the group you want to generalize to and the group you sample from in a study.

theoretical population - a group which, ideally, you would like to sample from and generalize to.
This is usually contrasted with the accessible population.

accessible population - a group that reflects the theoretical population of interest and that you can get access to when sampling. This is usually contrasted with the theoretical population.

sampling frame - the list from which you draw your sample. In some cases, there is no list; you draw your sample based upon an explicit rule.

sample - the actual units you select to participate in your study.

bias - a systematic error in an estimate. A bias can be the result of any factor that leads to an incorrect estimate. When bias exists, the values that are measured do not accurately reflect the true value.

generalizing, generalizability - the process of making an inference that the results observed in a sample would hold in the population of interest. If such an inference or conclusion is valid, we can say that it has generalizability.

sampling model - a model for generalizing in which you identify your population, draw a fair sample, conduct your research, and finally generalize your results from the sample to the population.
proximal similarity model - a model for generalizing from your study to other contexts based
upon the degree to which the other context is similar to your study context.

gradient of similarity - the dimensions along which your study context can be related to other potential contexts to which you might wish to generalize. Contexts that are closer to yours along the gradient of similarity of place, time, people, and so on can be generalized to with more confidence than ones that are further away.

probability sampling - method of sampling that utilizes some form of random selection.

Nonprobability sampling:

nonprobability sampling - sampling that does not involve random selection.

random selection - process or procedure that assures that the different units in your population are selected by chance.

confidence intervals (CIs) - a confidence interval is used to indicate the precision of an estimate of a statistic. The CI provides the lower and upper limits of the statistical estimate at a specific probability level. For instance, a 95% confidence interval for an estimate of a mean or average (the statistic) is the range of values within which there is a 95% chance that the true mean falls.
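
A minimal sketch of computing a 95% CI for a sample mean in Python, using the t distribution (the scores are made-up illustration data):

```python
import numpy as np
from scipy import stats

scores = np.array([12, 15, 11, 14, 13, 16, 12, 15, 14, 13])  # made-up sample

mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean
lo, hi = stats.t.interval(0.95, df=len(scores) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```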

modal instance sample - sampling for the most typical case.

expert sampling - a sample of people with known or demonstrable experience and expertise in
some area.

quota sampling - any sampling method where you sample until you achieve a specific number of sampled units for each subgroup of a population.

proportional quota sampling - a sampling method where you sample until you achieve a specific number of sampled units for each subgroup of a population, where the proportions in each group are the same.

nonproportional quota sampling - a sampling method where you sample until you achieve a specific number of sampled units for each subgroup of a population, where the proportions in each group are not the same.

heterogeneity sampling - sampling for diversity or variety.

snowball sampling - a sampling method in which you sample participants based upon referral
from prior participants.

respondent-driven sampling - a nonprobability sampling method that combines chain-referral or snowball sampling with a statistical weighting system that helps compensate for the fact that the sample was not drawn randomly.

response - a specific measurement value that a sampling unit supplies.

statistic - a value that is estimated from data.

population parameter - the mean or average you would obtain if you were
able to sample the entire population.
sampling distribution - the theoretical distribution of an infinite number of samples of the population of interest in your study.

standard deviation - an indicator of the variability of a set of scores in a sample around the mean
of that sample.

standard error - the spread of the averages around the average of averages in a sampling
distribution.
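
A simulation sketch (the population mean and SD are made up) showing that the standard deviation of many sample means approximates the analytic standard error, sigma divided by the square root of n:

```python
import numpy as np

rng = np.random.default_rng(42)
population = rng.normal(loc=100, scale=15, size=100_000)  # hypothetical population

n = 50
means = [rng.choice(population, size=n, replace=False).mean()
         for _ in range(2_000)]  # 2,000 samples of size n

print(f"SD of the sample means (empirical SE): {np.std(means):.2f}")
print(f"Analytic SE, sigma / sqrt(n):          {15 / np.sqrt(n):.2f}")
```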

sampling error - error in measurement associated with sampling.

normal curve - a common type of distribution where the values of a variable have a bell-shaped histogram or frequency distribution. In a normal distribution, approximately 68 percent of cases occur within one standard deviation of the mean or center, 95 percent of the cases fall within two standard deviations, and 99 percent are within three standard deviations.
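
These percentages can be checked against the normal cumulative distribution function; a quick sketch (assuming SciPy is available):

```python
from scipy.stats import norm

# Probability mass within k standard deviations of the mean
for k in (1, 2, 3):
    pct = norm.cdf(k) - norm.cdf(-k)
    print(f"within {k} SD: {pct:.1%}")  # ~68.3%, ~95.4%, ~99.7%
```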

bell curve - also known as a normal curve. A type of distribution where the values of a variable have a smoothed histogram or frequency distribution that is shaped like a bell. In a normal distribution, approximately 68 percent of cases occur within one standard deviation of the mean or center, 95 percent of the cases fall within two standard deviations, and 99 percent are within three standard deviations.

● N is the number of cases in the sampling frame.
● n is the number of cases in the sample.
● NCn is the number of combinations (subsets) of n from N.
● f = n/N is the sampling fraction.
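
These quantities are easy to compute; a sketch with hypothetical frame and sample sizes:

```python
from math import comb

N = 100  # cases in the sampling frame (hypothetical)
n = 10   # cases in the sample

print(f"NCn = {comb(N, n):,} possible samples")  # subsets of size n from N
print(f"f = n/N = {n / N:.2f}")                  # sampling fraction
```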

simple random sampling - a method of sampling that involves drawing a sample from a population so that every possible sample has an equal probability of being selected.
• Objective: To select n units out of N such that each of the NCn possible samples has an equal chance of being selected.
• Procedure: Use a table of random numbers, a computer random-number generator, or a mechanical device to select the sample.
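
A minimal sketch of the procedure using Python's standard library (the frame is a hypothetical list of labeled units):

```python
import random

frame = [f"unit_{i}" for i in range(1, 101)]  # hypothetical frame, N = 100

random.seed(1)
sample = random.sample(frame, k=10)  # every 10-unit subset is equally likely
print(sample)
```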

stratified random sampling - a method of sampling that involves dividing your population into homogeneous subgroups and then taking a simple random sample in each subgroup.

systematic random sampling - a sampling method where you determine randomly where you want to start selecting in the sampling frame and then follow a rule to select every kth element in the sampling frame list (where the ordering of the list is assumed to be random). The steps (sketched in code below):
1. Number the units in the population from 1 to N.
2. Decide on the n (sample size) that you want or need.
3. Calculate k = N/n = the interval size.
4. Randomly select an integer between 1 and k.
5. Take every kth unit.
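A sketch of these five steps as a small function (0-indexed random start; the frame is hypothetical):

```python
import random

def systematic_sample(frame, n):
    """Select every kth unit after a random start (list order assumed random)."""
    k = len(frame) // n                # step 3: interval size k = N/n
    start = random.randint(0, k - 1)   # step 4: random start within first interval
    return frame[start::k][:n]         # step 5: take every kth unit

frame = list(range(1, 101))            # step 1: units numbered 1..N
print(systematic_sample(frame, n=10))  # step 2: n = 10
```
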
cluster random sampling or area random sampling - a sampling method that involves dividing the population into groups called clusters, randomly selecting clusters, and then sampling each element in the selected clusters. This method is useful when sampling a population that is spread across a wide area geographically.
In this sampling you follow these steps:
1. Divide population into clusters (usually along geographic boundaries).
2. Randomly sample clusters.
3. Measure all units within sampled clusters.
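A sketch of those steps with a hypothetical population grouped into geographic clusters:

```python
import random

# Step 1: hypothetical population divided into geographic clusters
clusters = {
    "north": ["n1", "n2", "n3"],
    "south": ["s1", "s2"],
    "east":  ["e1", "e2", "e3", "e4"],
    "west":  ["w1", "w2"],
}

random.seed(3)
chosen = random.sample(list(clusters), k=2)        # step 2: sample clusters
sample = [u for c in chosen for u in clusters[c]]  # step 3: all units in them
print(chosen, sample)
```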

multistage sampling - the combining of several sampling techniques to create a more efficient or effective sample than the use of any one sampling type can achieve on its own.

threats to external validity - any factors that can lead you to make an incorrect generalization
from the results of your study to other persons, places, times, or settings.

Chapter 5

level of measurement - the relationship between numerical values on a measure. There are different types of levels of measurement (nominal, ordinal, interval, ratio) that determine how you can treat the measure when analyzing it. For instance, it makes sense to compute an average of an interval or ratio variable, but not of a nominal or ordinal one.

nominal level of measurement - measuring a variable by assigning a number arbitrarily in order to name it numerically so that it might be distinguished from other objects. The jersey numbers in most sports are measured at a nominal level.

ordinal level of measurement - measuring a variable using rankings. Class rank is a variable measured at an ordinal level.

interval level of measurement - measuring a variable on a scale where the distance between numbers is interpretable. For instance, temperature in Fahrenheit or Celsius is measured on an interval level.

ratio level of measurement - measuring a variable on a scale where the distance between
numbers is interpretable and there is an absolute zero value. For example, weight is a ratio
measurement.
Reliability is an important aspect of measurement quality: it refers to the consistency or stability of an observation.

true score theory - a theory that maintains that an observed score is the sum of two
components: true ability (or the true level) of the respondent; and random error

random error - a component or part of the value of a measure that varies entirely by chance.
Random error adds noise to a measure and obscures the true value.

systematic error - a component of an observed score that consistently affects the responses in the distribution.

triangulate - combining multiple independent measures to get at a more accurate estimate of a variable.

Reducing measurement error:


1. One thing you can do is to pilot test your instruments to get feedback from your
respondents regarding how easy or hard the measure was, and information about how the
testing environment affected their performance. In the pilot run, you should also check if
there was sufficient time to complete the measure.
2. Second, if you are gathering measures using people to collect the data (as interviewers or
observers), you should make sure you train them thoroughly so that they aren't
inadvertently introducing error. To improve response rates, the interviewers should be
trained to assure respondents about the confidentiality of their answers.
3. Third, when you collect the data for your study you should double-check the data
thoroughly. All data entry for computer analysis should be double-entered and verified.
This means that you enter the data twice, the second time having the computer check that
you are typing the exact same data you typed the first time. Inadequate checking at this
stage may lead to loss of data/omissions (when data is inadvertently left out) or duplication.
4. Fourth, you can use statistical procedures to adjust for measurement error. These range
from rather simple formulas you can apply directly to your data to complex procedures for
modeling the error and its effects.
5. Finally, one of the best things you can do to deal with measurement errors, especially
systematic errors, is to use multiple measures of the same construct. Especially if the
different measures don't share the same systematic errors, you will be able to triangulate
across the multiple measures in the data analysis and get a more accurate sense of what's
happening.

There are four general classes of reliability estimators, each of which estimates reliability in a different way:
● Inter-rater or inter-observer reliability is used to assess the degree to which different
raters/observers give consistent estimates of the same phenomenon.
● Test-retest reliability is used to assess the consistency of an observation from one time
to another.
● Parallel-forms reliability is used to assess the consistency of the results of two tests
constructed in the same way from the same content domain.
● Internal consistency reliability is used to assess the consistency of results across items
within a test.
inter-rater or inter-observer reliability - the degree of agreement or correlation between the
ratings or codings of two independent raters or observers of the same phenomenon.

test-retest reliability - the correlation between scores on the same test or measure at two
successive time points.

parallel-forms reliability - the correlation between two versions of the same test or measure that were constructed in the same way, usually by randomly selecting items from a common test question pool.

internal consistency reliability - a correlation that assesses the degree to which items on the same multi-item instrument are interrelated. The most common forms of internal consistency reliability are the average inter-item correlation, the average item-total correlation, the split-half correlation, and Cronbach's Alpha.

Cohen’s Kappa - a statistical estimate of inter-rater agreement or reliability that is more robust
than percent agreement because it adjusts for the probability that some agreement is due to
random chance.
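
A minimal implementation sketch of the statistic from its definition (observed agreement corrected for chance agreement); the codings are made up:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """(observed agreement - chance agreement) / (1 - chance agreement)"""
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement: both raters pick the same category at random
    p_chance = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (p_observed - p_chance) / (1 - p_chance)

r1 = ["yes", "yes", "no", "no", "yes", "no"]   # made-up codings by rater 1
r2 = ["yes", "no",  "no", "no", "yes", "yes"]  # made-up codings by rater 2
print(f"kappa = {cohens_kappa(r1, r2):.2f}")   # 0.33 for these data
```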

average inter-item correlation - an estimate of internal consistency reliability that uses the
average of the correlations of all pairs of items.

average item-total correlation - an estimate of internal consistency reliability where you first create a total score across all items and then compute the correlation of each item with the total. The average item-total correlation is the average of those individual item-total correlations.

split-half reliability - an estimate of internal consistency reliability that uses the correlation
between the total score of two randomly selected halves of the same multi-item test or measure.

Cronbach’s Alpha - one speci c method of estimating the internal consistency reliability of a
measure. Although not calculated in this manner, cronbach’s Alpha can be thought of as
analogous to the average of all possible split-half correlations.
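
A sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), applied to made-up responses:

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Made-up responses to a 4-item, 1-to-5 agreement scale (5 respondents)
data = [[4, 5, 4, 4], [2, 3, 2, 3], [5, 5, 4, 5], [3, 3, 3, 2], [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(data):.2f}")
```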

Validity - describes the accuracy with which a theoretical construct is translated into an actual
measure.

Construct validity:
● Translation validity
  ● Face validity
  ● Content validity
● Criterion-related validity
  ● Predictive validity
  ● Concurrent validity
  ● Convergent validity
  ● Discriminant validity

translation validity - a type of construct validity related to how well you translated the idea of
your measure into its operationalization.

criterion-related validity - the validation of a measure based on its relationship to another independent measure as predicted by your theory of how the measures should behave.

face validity - a validity that checks that “on its face” the operationalization seems like a good
translation of the construct.
content validity - a check of the operationalization against the relevant content domain for the
construct.

predictive validity - a type of construct validity based on the idea that your measure is able to predict what it theoretically should be able to predict.

concurrent validity - an operationalization's ability to distinguish between groups that it should theoretically be able to distinguish between.

convergent validity - the degree to which the operationalization is similar to (converges on) other
operationalizations to which it should be theoretically similar.

discriminant validity - the degree to which the operationalization is not similar to (or diverges
from) other operationalizations that it theoretically should not be similar to.

threats to construct validity - any factor that causes you to make an incorrect conclusion about whether your operationalized variables (e.g., your program or outcome) reflect well the constructs they are intended to represent.

mono-operation bias - a threat to construct validity that occurs when you rely on only a single
implementation of your independent variable, cause, program, or treatment in your study.

mono-method bias - a threat to construct validity that occurs because you use only a single
method of measurement.

hypothesis guessing - a threat to construct validity and a source of bias in which participants in
a study guess the purpose of the study and adjust their responses based on that.
Chapter 6

scaling - the branch of measurement that involves the construction of an instrument that
associates qualitative constructs (i.e., objects) with quantitative metric units.

response scale - a sequential-numerical response format, such as a 1-to-5 rating format.

dichotomous response - a measurement response that has two possible options (e.g., true/false
or yes/no).

interval response scale - a measurement response format with multiple response options that
are set at equal intervals (e.g., 1-to-5 or 1-to-7).

However, if all you are doing is attaching a response scale to an object or statement, you can’t call
that scaling. As you will see, scaling involves procedures that you perform independently of the
respondent so that you can come up with an overall numerical value for the object.

semantic differential - a scaling method in which an object is assessed by the respondent on a set of bipolar adjective pairs.

Thurstone scaling - a class of scaling methods (the method of equal-appearing intervals, the method of successive intervals, and the method of paired comparisons) that were designed to yield unidimensional, interval-level, multi-item scales.
1. Developing the Focus
2. Generating Potential Scale Items
3. Rating the Scale Items
4. Computing Scale Score Values for Each Item
5. Selecting the Final Scale Items
interquartile range - the difference between the 75th (upper quartile) and 25th (lower quartile) percentile scores on the distribution of a variable. The interquartile range is an estimate of the spread or variability of the measure.
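
A one-line computation sketch with made-up scores:

```python
import numpy as np

scores = np.array([2, 4, 4, 5, 6, 7, 7, 8, 9, 11])  # made-up ratings

q75, q25 = np.percentile(scores, [75, 25])
print(f"IQR = {q75 - q25:.1f}")  # spread of the middle 50% of the scores
```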

Likert or summative scaling - a method of scaling in which the items are assigned interval-level scale values and the responses are gathered using an interval-level response format.
1. Defining the Focus
2. Generating the Items
3. Rating the Items
4. Selecting the Items
5. Administering the Scale

forced-choice response scale - a response scale that does not allow for a neutral or undecided value. By definition, a forced-choice response scale has an even number of response options. For example, the following would be a forced-choice response scale: 1 = strongly disagree; 2 = disagree; 3 = agree; 4 = strongly agree.

reversal items - items on a multi-item scale whose wording is in the opposite direction of the construct of interest. Reversal items must have their scores reversed prior to computing total scale scores. For example, if you had a multi-item self-esteem scale, an item such as "I generally feel bad about myself as a person" would be a reversal item. In the example scale, items 2, 5, 6, 8, and 9 are reversal items. The total self-esteem score is obtained by summing the individual item scores after reversing the values for the negative items (i.e., making 1 = 4, 2 = 3, 3 = 2, and 4 = 1).
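
A sketch of reverse-scoring on a 1-to-4 response format (the item names and responses are hypothetical):

```python
# Reverse-score a 1-to-4 item: 1 -> 4, 2 -> 3, 3 -> 2, 4 -> 1
def reverse(score, scale_min=1, scale_max=4):
    return scale_max + scale_min - score

responses = {"item1": 3, "item2": 1, "item3": 4}  # made-up answers
reversal_items = {"item2"}  # worded opposite to the construct

total = sum(reverse(v) if k in reversal_items else v
            for k, v in responses.items())
print(f"total scale score = {total}")  # 3 + 4 + 4 = 11
```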

Guttman or cumulative scaling - a method of scaling in which the items are assigned scale
values that allow them to be placed in a cumulative ordering with respect to the construct being
scaled.
1. Define the Focus
2. Develop the Items
3. Rate the Items
4. Develop the Cumulative Scale
5. Administering the Scale

tests - measurement instruments designed to assess a respondent's knowledge, skill, or performance.

factor analysis - a multivariate statistical analysis that uses observed correlations (variability) as input and identifies a fewer number of unobserved variables, known as factors, that describe the original data more efficiently.

consequential validity - the approximate truth or falsity of assertions regarding the intended or
unintended consequences of test interpretation and use.

item analysis - the systematic statistical analysis of items on a scale or test undertaken to determine the properties of the items, especially for purposes of selecting final items or deciding which items to combine. An item-to-total-score correlation is an example of an item analysis.

index - a quantitative score that measures a construct of interest by applying a formula or a set of
rules that combines relevant data.

weighted index - a quantitative score that measures a construct of interest by applying a formula or a set of rules that combines relevant data where the data components are weighted differently.
After

Types of validity

Types of research

Grounded theory

Participatory action research

Sampling definitions in the equation

Standard error

Chapter 4 - numerical questions 37, 38

Mono-operation bias

Reliability fractions

Standard deviation?

Split-half?

Cronbach alpha - where?

Equal-appearing intervals (1-11)

Scaling methods

Standardization, item analysis, factor analysis

Interval response?

Semantic differential theory


