
NURS 328 Units 8-11 notes

UNIT 8 NOTES

Characteristics of Qualitative Research Design


#1
- Is flexible, capable of adjusting to what is learned during data collection
- Often involves triangulating various data collection strategies
- Tends to be holistic, striving for an understanding of the whole
#2
- requires researchers to become intensely involved and reflexive and can require
a lot of time
- Benefits from ongoing data analysis to guide subsequent strategies
- Emergent: evolves as researchers make ongoing decisions about their data
needs based on what they have already learned

Ethnography
#1
- Describes and interprets a culture and cultural behavior
- Culture is the way a group of people live- the patterns of activity and the symbolic
structures (e.g., the values and norms) that give such activity significance.
- Relies on extensive, labor-intensive fieldwork *
- Culture is inferred from the words, actions, and products of the group’s members.
- Assumption: Cultures guide the way people structure their experiences.
- Macroethnography vs. focused ethnography.
#2
- Seeks an emic perspective *(insider’s view) of the culture and to reveal tacit knowledge-
information about the culture that is deeply embedded in the culture
- Relies on wide range of data sources and three broad types of information: cultural
behavior, cultural artifacts, and cultural speech.
- Participant observation is a particularly important source
- Product: an in-depth holistic portrait of the culture under study.
- Comes from the discipline of anthropology.
- In this tradition the investigator can look at the emic or etic perspective.
- Studies culture both broadly and narrowly defined.
- The main data source in this tradition is in-depth conversation.

Phenomenology
- Focuses on the description and interpretation of people’s lived experience
- Asks: what is the essence of a phenomenon as it is experienced by people, and what
does it mean?
- Acknowledges people’s physical ties to their world: “being in the world”
- Is rooted in the discipline of psychology or philosophy.
- Is focused on finding out more about lived experience.
- This tradition assumes there is an essence that can be understood.

Descriptive Phenomenology
- Based on philosophy of Husserl and his question: “what do we know as persons?”
- Describes human experience
- Insists on the careful portrayal of ordinary conscious experience of everyday life- a
depiction of “things” as people experience them
- Hearing, seeing, believing, feeling, remembering, deciding, and evaluating
- May involve maintaining a reflexive journal

Phases of Descriptive Phenomenological Study


- Bracketing: the process of identifying and holding in abeyance preconceived beliefs and
opinions about the phenomenon under study
- Intuiting: occurs when researchers remain open to the meanings attributed to the
phenomenon by those who have experienced it
- Analyzing: extracting significant statements, categorizing, and making sense of essential
meanings
- Describing: defining the phenomenon

Interpretive Phenomenology
- Based on the philosophy of Heidegger; Heideggerian hermeneutics views interpretation as a
basic characteristic of human existence.
- Gadamer: the hermeneutic circle
- Emphasis on interpreting and understanding experience, not just describing it;
bracketing does not occur.
- Relies on in-depth interviews and supplementary data sources: texts, artistic
expressions.

Grounded Theory
- Focuses on the discovery of a basic social psychological problem that a defined group of
people experience
- Elucidates social psychological processes and social structures
- Has a number of theoretical roots- e.g., symbolic interaction
- Originally developed by sociologists Glaser and Strauss
- Is derived from the discipline of sociology.
- This tradition studies social processes and social structure.
Grounded Theory Methods
- Developed by Glaser and Strauss (1967), whose theoretical roots were in symbolic
interaction: how people make sense of social interaction
- Has contributed to the development of many middle-range theories of phenomena
relevant to nurses
- Primary data sources: in-depth interviews with 20 to 30 people; may be supplemented
with observations, written documents
- Data collection, data analysis, and sampling occur simultaneously
Grounded Theory Analysis
- Constant comparison: used to develop and refine theoretically relevant categories
- Categories elicited from the data are constantly compared with data obtained
earlier so that commonalities and variations can be detected.
- Focus is on understanding a central concern or core variable
- A basic social process (BSP) explains how people come to resolve the problem or
concern
Alternate Views of Grounded Theory
- In 1990, Strauss and Corbin published a controversial book, Basics of Qualitative
Research: Techniques and Procedures for Developing Grounded Theory.
- In 1992, Glaser argued that Strauss and Corbin had developed not grounded theory but
rather a method he called “full conceptual description”
- Nurse researchers also use an approach called constructivist grounded theory- Charmaz
regards Glaser and Strauss’ grounded theory as having positivist roots.
Historical Research
- Systematic collection and critical evaluation of data relating to past occurrences
- Relies primarily on qualitative (narrative) data but can sometimes involve statistical
analysis of quantitative data
- Nurses have used historical research methods to examine a wide range of phenomena
in both the recent and more distant past.
- Forms include written records and nonwritten materials.
- It is usually interpretive.
Other Types of Qualitative Research
- Not all qualitative studies are conducted within a disciplinary tradition. Examples include:
- Case studies
- The focus is on a thorough description and explanation of a single case or a
small number of cases
- Cases can be individuals, families, groups, organizations, or communities.
- Data often are collected over an extended period.

Narrative analysis
- Texts that provide detailed stories are sometimes analyzed through narrative analysis
- There are numerous approaches to analyzing texts.
- One example is Burke’s pentadic dramatism: analyzes five elements of a story (act,
scene, agent, agency, purpose); meant to be analyzed in ratios, such as act: agent.
Descriptive Qualitative Studies
- Descriptive qualitative studies tend to be eclectic in their designs and methods and are
based on the general premises of constructivist inquiry.
- Such descriptive studies seek to holistically describe phenomena as they are perceived
by the people who experience them
- The researchers may say that they did a content analysis of the narrative data with the
intent of understanding important themes and patterns.

Research With Ideological Perspectives


#1
- Critical theory research
- Concerned with a critique of existing social structures and with envisioning new
possibilities
- Critical ethnography focuses on raising consciousness in the hope of effecting
social change. Transformation is a key objective.
- Critical ethnographers attempt to increase the political dimensions of cultural
research and undermine oppressive systems
#2
- Feminist research
- Focuses on how gender domination and discrimination shape women’s lives and
their consciousness
- Participatory action research (PAR):
- Produces knowledge through close collaboration with groups or communities that
are vulnerable to control or oppression

KEY TAKEAWAYS

- In qualitative studies, the goal is generally to understand the multitude of causes that
account for the specific instance the researcher is investigating.
- In quantitative studies, the goal may be to understand the more general causes of some
phenomenon rather than the idiosyncrasies of one particular instance.
- Quantitative research may point qualitative research toward general causal relationships
that are worth investigating in more depth.
- In order for a relationship to be considered causal, it must be plausible and nonspurious,
and the cause must precede the effect in time.
- A unit of analysis is the item you wish to be able to say something about at the end of
your study, while a unit of observation is the item that you actually observe.
- When researchers confuse their units of analysis and observation, they may be prone to
committing either the ecological fallacy or reductionism.
- Hypotheses are statements, drawn from theory, which describe a researcher’s expectation
about a relationship between two or more variables.
- Qualitative research may point quantitative research toward hypotheses that are worth
investigating.

Summary Points

● Qualitative studies involve the exploration of phenomena.


● Ethnography, phenomenology and grounded theory are established
methods or designs used in the qualitative paradigm.
● Narrative inquiry, hermeneutics and the qualitative descriptive
approach are other fairly common methodologies for exploring qualitative
research questions.
● New approaches to qualitative research are continually emerging.
● Indigenous knowledge and ways of knowing are another emerging approach.

Narrative Research
- A study of individuals’ life experiences told to researchers or obtained in documents and
archival material
- Focuses on:
- Studying a single person
- Gathering data through the collection of stories
- Reporting individual experiences
- Discussing the meaning of those experiences for the individual
Types of narrative designs
- Who writes the story
- How much life is recorded and presented
- Theoretical lens being used
- Narrative form being combined

Who writes/records the story- biography vs. autobiography

Key characteristics of narrative design:


- Individual experiences
- Collecting individual stories
- Chronology of the experiences
- Restorying
- Coding for themes
- Context or setting
- Collaborating with participants

Ethical issues in gathering stories


- Distorting data
- Inability to tell the story because it is horrific
- Forgetting the story

UNIT 9
Sampling and Data Collection: In Quantitative Studies

Basic sampling concepts


#1
- Population (“P” in PICO questions)
- The entire group of interest based on eligibility criteria
- Sampling
- Selection of a portion of the population (a sample) to represent the entire
population
- Eligibility criteria
- The characteristics that define the population
- Inclusion criteria
- Exclusion criteria
#2
- Sampling bias: overrepresentation or underrepresentation of a population segment in terms of
key characteristics
- Strata: subpopulations of a population (e.g., male/female)
- Target population: the entire population of interest
- Accessible population
- The portion of the target population that is accessible to the researcher, from
which a sample is drawn

Sampling Goal in Quantitative Research


- Representative Sample
- A sample whose key characteristics closely approximate those of the
population- a sampling goal in quantitative research

- More easily achieved with:


- Probability sampling
- Homogeneous populations
- Larger samples achieved through power analysis
Sampling designs in quantitative studies
- Nonprobability sampling
- Does not involve selection of elements at random; is rarely
representative of the population
- Probability sampling
- Involves random selection of elements: each element has
an equal, independent chance of being selected.
- Allows researchers to estimate the magnitude of sampling
error (the difference between population values and sample
values)

Types of nonprobability sampling- quantitative research

- Convenience sampling: selecting the most conveniently available people as participants;
most vulnerable to sampling biases
- Quota sampling: identifying population strata and figuring out how many people are
needed from each stratum
- Consecutive sampling: recruiting all people from an accessible population over a specific
time interval
- Purposive sampling: handpicking sample members

Types of Probability Sampling

- Simple random sampling


- Researchers establish a sampling frame- a list of population elements
- Stratified random sampling
- The population is first divided into two or more strata, from which elements are
randomly selected.- enhances representativeness; cluster sampling is associated
with a larger sampling error but is considered more efficient.
- Systematic sampling
- Involves the selection of every kth case from a list, such as every 10th person on
a patient list (illustrated in the sketch below).
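A minimal Python sketch of the three selection approaches, using a hypothetical numbered patient list as the sampling frame (illustration only, not any study's actual procedure):

```python
import random

# Hypothetical sampling frame: a numbered patient list (illustration only)
frame = [f"patient_{i}" for i in range(1, 101)]

# Simple random sampling: each element has an equal, independent chance of selection
simple = random.sample(frame, 10)

# Stratified random sampling: divide into strata, then randomly select within each
strata = {"stratum_a": frame[:50], "stratum_b": frame[50:]}
stratified = [p for stratum in strata.values() for p in random.sample(stratum, 5)]

# Systematic sampling: every kth case (here k = 10) after a random start
k = 10
start = random.randrange(k)
systematic = frame[start::k]

print(len(simple), len(stratified), len(systematic))
```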
Sample size
- The number of study participants in the final sample
- Sample size adequacy is a key determinant of sample quality in quantitative
research
- Sample size needs can and should be estimated through power analysis
- The risk of “getting it wrong” (a threat to statistical conclusion validity) increases when
samples are too small

Critiquing Sampling plans: Considerations

- The type of sampling approach used (e.g., convenience, consecutive, random)


- The population and eligibility criteria for sample selection
- The sample size, with a rationale
- A description of the sample’s main characteristics (e.g., age, gender, clinical status, and
so on)

Data collection in quantitative research

- Basic decision is the use of:


- New data, collected specifically for research purposes, or
- Existing data
- Records (e.g., patient charts)
- Historical data
- Existing data set (secondary analysis)

Examples of records, documents and available data


- Hospital records (e.g., nurses’ shift reports)
- School records (e.g., student absenteeism)
- Corporate records (e.g., health insurance choices)
- Letters, diaries, minutes of meetings, etc.
- Photographs

Major types of data collection methods


- Self-reports
- Patient-reported outcomes
- Observation
- Biophysiologic measures

Overview of data collection and sources


- Structure
- Quantifiability
- Objectivity

Structured Self-Reports

- Data is collected with a formal instrument.


- Interview schedule
- Questions are prespecified but asked orally.
- Either face-to-face or by telephone
- Questionnaire
- Questions prespecified in written form, to be self-administered by
respondents
Types of questions in a structured instrument
- Closed-ended (fixed alternative) questions
- For example, “within the past 6 months, were you ever a member of a fitness
center or gym?” (yes/no)
- Open-ended questions
- For example, “why did you decide to join a fitness center or gym?”
Advantages of questionnaires (compared with interviews)
- Questionnaires are less costly and are advantageous for geographically dispersed
samples
- Questionnaires offer the possibility of anonymity, which may be crucial in obtaining
information about certain opinions or traits.
Advantages of interviews (compared with questionnaires)
- Higher response rates
- Appropriate for more diverse audiences
- Some people cannot fill out a questionnaire
- Opportunities to clarify questions or to determine comprehension
- Opportunity to collect supplementary data through observation

Composite psychosocial scales


- Scale- a device that assigns a numeric score to people along a continuum
- Used to make fine quantitative discriminations among people with different
attitudes, perceptions, or traits
- Likert scales: a type of summated rating scale (composite scale)

Likert scales
- Consist of several declarative statements (items) expressing viewpoints
- Responses are on an agree/disagree continuum (usually five or seven response
options).
- Responses to items are summed to compute a total scale score.

Visual Analog Scale (VAS)


- Used to measure subjective experiences (e.g., pain, nausea)
- Measurements are on a straight line measuring 100mm.
- End points labeled as extreme limits of sensation
- E.g: pain scale level 1-10

Response set biases


- Biases reflecting the tendency of some people to respond to items in characteristic
ways, independently of item content
- Examples
- Social desirability response set bias
- Extreme response set
- Acquiescence response set (yea-sayers)
Evaluation of self-report
- Strong on directness
- Allows access to information otherwise not available to researchers
- But can we be sure participants actually feel or act the way they say they do?
Observation
- Structured observation of prespecified behaviors
- Involves the use of formal instruments and protocols that dictate what to observe,
how long to observe it, and how to record the data
- Focus of observation
- Concealment
- Method of recording observations
Structured observations
- Category systems → checklists
- Formal systems for systematically recording the incidence or frequency of
prespecified behaviors or events
- Systems vary in their exhaustiveness.
- Exhaustive system: all behaviors of a specific type are recorded, and each
behavior is assigned to one mutually exclusive category.
- Nonexhaustive system: specific behaviors, but not all behaviors, are
recorded.
Rating scales
- Ratings are on a descriptive continuum, typically bipolar.
- Ratings can occur:
- At specific intervals
- Upon the occurrence of certain events
- After an observational session (global ratings)
Observational sampling
- Time sampling- sampling of time intervals for observation
- Examples:
- Random sampling of intervals of a given length
- Systematic sampling of intervals of a given length
- Event sampling- observation of integral events; requires researchers to either know
when events will occur or wait for their occurrence.
Evaluation of observational methods
- Excellent method for capturing many clinical phenomena and behaviors
- Potential problem of reactivity when people are aware that they are being observed
- Risk of observational biases- factors that can interfere with objective observations
- Observational biases probably cannot be eliminated, but they can be minimized
through careful observer training and assessment.
Biophysiologic measures
- In vivo measurements
- Performed directly within or on living organisms (e.g., blood pressure measures)
- In vitro measurements
- Performed outside the organism’s body (e.g., urinalysis)
Evaluation of biophysiologic measures
- Strong on accuracy, objectivity, validity, and precision
- May be cost-effective for nurse researchers
- But caution may be required for their use, and advanced skills may be needed for
interpretation
Factors affecting data quality in quantitative research
- Procedures used to collect the data
- Circumstances under which data were gathered
- Adequacy of instruments or scales used to measure constructs
- Psychometric assessment evaluates the measure’s measurement properties.
- Reliability: extent to which scores are free from measurement error
Data quality in quantitative research validity
- Face validity: whether the instrument looks like it is measuring the target construct
- Content validity: the extent to which the instrument’s content adequately captures the
construct
- Criterion validity: the extent to which the scores on a measure are a good reflection of a
“gold standard”
- Construct validity: the degree to which evidence about a measure’s scores in relation to
other variables supports the inference that the construct has been well represented

Sampling and Data Collection: In Qualitative Studies

Sampling in qualitative research


- Qualitative researchers are as concerned as quantitative researchers with the quality of
their samples, but they use different considerations in selecting study participants.
- Selection of sample members guided by desire for information-rich data sources
- “Representativeness” not a key issue
- Random selection not considered productive
Types of qualitative sampling
- Convenience (volunteer) sampling; not preferred approach but economical
- Snowball sampling (network sampling): sample might be restricted to a small network of
acquaintances
- Purposive sampling: researchers deliberately choose the cases that will best contribute
to the study
- Theoretical sampling: involves decisions about where to find data to develop an
emerging theory optimally

Types of purposive sampling in qualitative research


- Maximum variation sampling
- extreme/deviant case sampling
- Typical case sampling
- Criterion sampling
- Confirming and disconfirming cases
Theoretical sampling
- Preferred sampling method in grounded theory research
- Involves selecting sample members who best facilitate and contribute to the
development of the emerging theory

Sample size in qualitative research


- No explicit, formal criteria
- Sample size determined by informational needs
- Decisions to stop sampling guided by data saturation
- Data quality can affect sample size
Sampling in the main qualitative traditions
- Ethnography
- Mingling with many members of the culture- a “big net” approach
- Informal conversations with 25 to 50 informants
- Multiple interviews with a smaller number of key informants
- Typically involves sampling things as well as people
Sampling in phenomenology
- Relies on very small samples (often 10 or fewer)
- Two principles guide sample selection
- Participants must have experienced phenomenon of interest
- They must be able to articulate what it is like to have lived that experience
- May sample artistic or literary sources
Sampling in grounded theory
- Typically involves samples of 20 to 30 people
- Selection of participants who can contribute to emerging theory (usually theoretical
sampling)
Data collection in qualitative studies
- Data collection methods may change as study progresses
- In-depth interviews most common method
- Observation also common
Types of qualitative self-report techniques #1
- Unstructured interviews
- Conversational, totally flexible
- Use of grand tour questions
- Semistructured interviews
- Use of a topic guide
Types of qualitative self-report techniques #2
- Focus group interviews
- Interviews in small groups (5 to 10 people)
- Led by a moderator
- Diaries
- Source in historical research
- Provide intimate detail of everyday life
Types of qualitative self-report techniques #3
- Photo elicitation
- Interview stimulated and guided by photographic images
- Photovoice: asking participants to take photos themselves and interpret them

Qualitative observational methods


- Qualitative studies: unstructured observation in naturalistic settings
- Includes participant observation

Gathering qualitative self-report data


- Researchers gather narrative self-report data to develop a construction of a
phenomenon that is consistent with that of participants.
- This goal requires researchers to overcome communication barriers and to enhance the
flow of information.
Gathering participant observation data
- The physical setting
- The participants
- Activities
- Frequency and duration
- Process
- Outcomes
Recording observations
- Logs (field diaries)
- Field notes
- Descriptive (observational) notes
- Reflective notes
- Methodologic notes
- Theoretical notes (or analytical notes)
- Personal notes
Evaluation of unstructured observational methods
- Excellent method for capturing many clinical phenomena and behaviors
- Potential problem of reactivity when people are aware that they are being observed
- Risk of observational biases- factors that can interfere with objective observation

- Nonprobability samples might be used when researchers are conducting exploratory
research, by evaluation researchers, or by researchers whose aim is to make some
theoretical contribution.
- There are several types of nonprobability samples, including purposive
samples, snowball samples, quota samples, and convenience samples.
- In probability sampling, the aim is to identify a sample that resembles the
population from which it was drawn.
- There are several types of probability samples including simple random
samples, systematic samples, stratified samples, and cluster samples.
- A population is the group that is the focus of a researcher’s interest; a
sample is the group from whom the researcher collects data.

Key Takeaways QUANTITATIVE

● The goal of probability sampling is to identify a sample that resembles the population
from which it was drawn.
● There are several types of probability samples including simple random samples,
systematic samples, stratified samples, and cluster samples.
● Probability samples usually require a real list of elements in your sampling frame,
though cluster sampling can be conducted without one.

Glossary

Cluster sampling– a sampling approach that begins by sampling groups (or clusters) of
population elements and selecting elements from within those groups

Generalizability– the idea that a study’s results will tell us something about a group larger than
the sample from which the findings were generated

Periodicity– the tendency for a pattern to occur at regular intervals

Probability proportionate to size– in cluster sampling, giving clusters different chances of being
selected based on their size so that each element within those clusters has an equal chance of
being selected

Probability sampling– sampling approaches for which a person’s likelihood of being selected
from the sampling frame is known

Random selection– using randomly generated numbers to determine who from the sampling
frame gets recruited into the sample

Representative sample– a sample that resembles the population from which it was drawn in all
the ways that are important for the research being conducted

Sampling error– a statistical calculation of the difference between results from a sample and the
actual parameters of a population
Simple random sampling– selecting elements from a list using randomly generated numbers

Strata– the characteristic by which the sample is divided

Stratified sampling– dividing the study population into relevant subgroups and then drawing a
sample from each subgroup

Systematic sampling– selecting every kth element from a list

Key Takeaways QUALITATIVE

● Nonprobability samples might be used when researchers are conducting qualitative (or
idiographic) research, exploratory research, student projects, or pilot studies.
● There are several types of nonprobability samples including purposive samples,
snowball samples, quota samples, and convenience samples.

Glossary

Convenience sample– researcher gathers data from whatever cases happen to be convenient

Nonprobability sampling– sampling techniques for which a person’s likelihood of being selected
for membership in the sample is unknown

Purposive sample– when a researcher seeks out participants with specific characteristics
Quota sample– when a researcher selects cases from within several different subgroups

Snowball sample– when a researcher relies on participant referrals to recruit new participants

UNIT 10
STATISTICAL ANALYSIS OF QUANTITATIVE DATA

Purposes of Statistical Analysis in Quantitative Research


- To describe the data (e.g., sample characteristics)
- To test hypotheses
- To provide evidence regarding measurement properties of quantified variables

Levels of Measurement
- Nominal (N): lowest level; involves using numbers simply to categorize attributes. The
lowest level of measurement involving assigning characteristics into
categories (e.g., males = 1; females = 2).

- Ordinal (O): ranks people on an attribute. A measurement level that ranks or
orders objects on a scale; the distance between the objects on the scale is
unknown.

- Interval (I): ranks people on an attribute and specifies the distance between them. A
measurement level specifying the ranking of objects on a scale that has
equal distances between points on that scale (e.g., Celsius degrees)

- Ratio (R): highest level; ratio scales, unlike interval scales, have a meaningful zero and
provide information about the absolute magnitude of the attribute. A measurement
level with equal distances between scores and a true meaningful zero,
and that provides information about the magnitude of an attribute (e.g.,
weight).

- Many physical measures, such as a person’s weight, are ratio measures. Gender
is an example of a nominally measured variable. A measurement of ability to
perform ADLs is an example of ordinal measurement, and interval measurement
occurs when researchers can rank people on an attribute and specify the
distance between them (e.g., psychological testing).

Statistical analysis
- Descriptive statistics
- Used to describe and synthesize data
- Parameters: descriptor for a population
- Statistics: descriptive index from a sample
- Inferential statistics
- Used to make inferences about the population based on a sample data
- Draw conclusions about a population given data from a sample

Frequency distributions
- A systematic arrangement of numeric values on a variable from lowest to highest and a
count of the number of times (and/or percentage) each value was obtained
- Frequency distributions can be described in terms of:
- Shape
- Central tendency
- Variability
- Can be presented in a table (Ns and percentages) or graphically (e.g., frequency
polygons); see the tallying sketch below
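A small sketch (hypothetical score values) of how a frequency distribution’s Ns and percentages could be tallied:

```python
from collections import Counter

scores = [2, 3, 3, 3, 4, 5, 6, 7, 8, 9]            # hypothetical score values
counts = Counter(sorted(scores))                    # value -> N, arranged lowest to highest
n = len(scores)
for value, count in counts.items():
    print(value, count, f"{100 * count / n:.0f}%")  # N and percentage for each value
```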

Shapes of distributions #1
- Symmetry
- Symmetric
- Skewed (asymmetric)
- Positive skew (long tail points to the right)
- Negative skew (long tail points to the left)

Shapes of distributions #2
- Modality (number of peaks)
- Unimodal (1 peak)- a special distribution called the normal distribution (a bell-
shaped curve) is symmetric, unimodal, and not very peaked.
- Bimodal (2 peaks)
- Multimodal (more than 2 peaks)
Central tendency
- Index of “typicalness” of a set of scores that comes from the center of the distribution (see
the computational sketch after the examples below)
- Mode- the most frequently occurring score in a distribution
- Ex: 2,3,3,3,4,5,6,7,8,9, mode= 3
- Median- the point in a distribution above which and below which 50% of cases fall
- Ex: 2,3,3,3,4 | 5,6,7,8,9, median = 4.5
- Mean- equals the sum of all scores divided by the total number of scores
- Ex: 2,3,3,3,4,5,6,7,8,9, mean= 5.0
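A quick check of the three indexes, using Python’s standard library and the example scores above:

```python
from statistics import mean, median, mode

scores = [2, 3, 3, 3, 4, 5, 6, 7, 8, 9]

print(mode(scores))    # 3: the most frequently occurring score
print(median(scores))  # 4.5: midpoint between the 5th and 6th ordered scores
print(mean(scores))    # 5: sum of all scores divided by the number of scores
```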
Comparison of Measures of Central Tendency
- Mode: useful mainly as gross descriptor, especially of nominal measures
- Median: useful mainly as descriptor of typical value when distribution is skewed (e.g.,
household income)
- Mean: most stable and widely used indicator of central tendency
Variability
- The degree to which scores in a distribution are spread out or dispersed
- Homogeneity- little variability
- Heterogeneity- great variability
Indexes of Variability
- Range: highest value minus lowest value
- Standard deviation (SD): average deviation of scores in a distribution
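A brief sketch computing both indexes for the same example scores (values in the comments are approximate):

```python
from statistics import stdev

scores = [2, 3, 3, 3, 4, 5, 6, 7, 8, 9]

value_range = max(scores) - min(scores)   # highest value minus lowest value = 7
sd = stdev(scores)                        # sample standard deviation, roughly 2.4
print(value_range, round(sd, 2))
```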
Bivariate Descriptive Statistics
- Used for describing the relationship between two variables
- Two common approaches
- Crosstabs (contingency tables)
- Correlation coefficients

Correlation coefficients #1
Correlation coefficients can range from -1.00 to +1.00
- Negative relationship (0.00 to -1.00): one variable increases in value as the other
decreases, e.g., amount of exercise and weight.
- Positive relationship (0.00 to +1.00): the variables increase or decrease together, e.g.,
calorie consumption and weight.
Correlation Coefficients #2
- The greater the absolute value of the coefficient, the stronger the relationship:
Ex: r = -.45 is stronger than r = +.40.
- With multiple variables, a correlation matrix can be displayed to show all pairs of
correlations.
- Pearson’s r (the product-moment correlation coefficient): computed with continuous
measures
- A correlation coefficient (r) would be used to test the relationship between
two interval-level variables, e.g., scores on two tests (see the sketch below).
- Spearman’s rho: used for correlations between variables measured on an ordinal scale.
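A minimal sketch, assuming SciPy is available and using made-up exercise/weight values, of how Pearson’s r and Spearman’s rho could be computed:

```python
from scipy import stats

exercise_hours = [1, 2, 3, 4, 5, 6]          # hypothetical data
weight_kg = [90, 86, 84, 80, 78, 75]         # weight tends to fall as exercise rises

r, p = stats.pearsonr(exercise_hours, weight_kg)         # Pearson's r (continuous measures)
rho, p_rho = stats.spearmanr(exercise_hours, weight_kg)  # Spearman's rho (ordinal data)
print(round(r, 2), round(rho, 2))            # both strongly negative in this example
```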
Describing Risk
- Clinical decision making for EBP may involve the calculation of risk indexes, so that
decisions can be made about relative risks for alternative treatments or exposures.
- Some frequently used indexes
- Absolute risk
- Absolute risk reduction (ARR)
- Odds ratio (OR)
- Numbers needed to treat
The Odds Ratio (OR)
- The odds = the proportion of people with an adverse outcome relative to those without
it
- For example, the odds of…
- The odds ratio is computed to compare the odds of an adverse outcome for two groups
being compared (e.g., men vs women, experimentals vs. controls).
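A worked sketch with a hypothetical 2 x 2 table showing how the odds for each group, and then the odds ratio, could be computed:

```python
# Hypothetical counts: exposed group vs. unexposed group, adverse outcome yes/no
a, b = 10, 90   # exposed: 10 with the adverse outcome, 90 without
c, d = 5, 95    # unexposed: 5 with the adverse outcome, 95 without

odds_exposed = a / b            # 10/90, about 0.11
odds_unexposed = c / d          # 5/95, about 0.05
odds_ratio = odds_exposed / odds_unexposed
print(round(odds_ratio, 2))     # about 2.11: odds of the outcome roughly doubled with exposure
```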
Inferential Statistics
- Used to make objective decisions about population parameters using sample data
- Provide a means for drawing inferences about a population, given data from a sample
- Based on laws of probability
- Uses the concept of theoretical distributions
- For example, the sampling distribution of the mean
Sampling Distribution of the Mean
- A theoretical distribution of means for an infinite number of samples drawn from the
same population
- Is always normally distributed
- Its mean equals the population mean
- Its standard deviation is called the standard error of the mean (SEM).
- SEM is estimated from a sample SD and the sample size.
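A small sketch (hypothetical sample) of how the SEM is estimated from the sample SD and the sample size:

```python
from statistics import stdev
from math import sqrt

sample = [12, 15, 14, 10, 13, 16, 11, 14, 12, 13]   # hypothetical scores
sem = stdev(sample) / sqrt(len(sample))             # SEM = sample SD / square root of n
print(round(sem, 2))
```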
Estimation of Parameters
- Point estimation- a single descriptive statistic that estimates the population value (e.g., a
mean, percentage, or OR)
- Interval estimation- a range of values within which a population value probably lies
- Involves computing a confidence interval (CI)
- CIs reflect how much risk of being wrong researchers are willing to take.
Confidence Intervals
- CIs indicate the upper and lower confidence limits and the probability that the population
value is between those limits
- For example, a 95% CI of 40 to 50 for a sample mean of 45 indicates there is a
95% probability that the population mean is between 40 and 50.
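A minimal sketch, assuming SciPy and NumPy are available, of computing a 95% CI around a sample mean (hypothetical scores):

```python
import numpy as np
from scipy import stats

sample = np.array([42, 47, 44, 46, 43, 48, 45, 44, 46, 45])  # hypothetical scores
mean = sample.mean()
sem = stats.sem(sample)                                       # standard error of the mean
# 95% CI based on the t distribution with n - 1 degrees of freedom
low, high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)
print(round(mean, 1), round(low, 1), round(high, 1))          # mean with its 95% confidence limits
```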

Hypothesis Testing #1
- Based on rules of negative inference: Research hypotheses are supported if null
hypotheses can be rejected.
- Involves statistical decision making to either:
- Accept the null hypothesis or
- Reject the null hypothesis
Hypothesis Testing #2
- If the value of the test statistic indicates that the null hypothesis is improbable, then the
result is statistically significant.
- A nonsignificant result means that any observed difference or relationship could have
happened by chance
- Statistical decisions are either correct or incorrect.

Errors in Statistical Decisions


- Type I error: rejection of a null hypothesis when it should not be rejected; a false-positive
result
- Risk of error is controlled by the level of significance (alpha), e.g., α = .05 or .01.
- Type II error: failure to reject a null hypothesis when it should be rejected; a false-negative
result
- The risk of this error is beta (β).
- Power is the ability of a test to detect true relationships; power = 1 - β.
- By convention, power should be at least .80 (see the power-analysis sketch below).
- Larger samples = greater power
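A minimal power-analysis sketch, assuming the statsmodels package is available, estimating the per-group sample size for a two-group t-test with a moderate effect size:

```python
from statsmodels.stats.power import TTestIndPower

# alpha = .05, power = .80, moderate effect (d = .50); these are conventions, not study data
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))   # roughly 64 participants per group
```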
Overview of Hypothesis Testing Procedures
- Select an appropriate test statistic.
- Establish significance criterion (e.g., α = .05).
- Compute test statistic with actual data.
- Determine degrees of freedom (df) for the test statistic
- Compare the computed test statistic to a theoretical value.
- Make decision to accept or reject null hypothesis.
Bivariate Statistical Tests
- T- Tests
- Analysis of variance (ANOVA)
- Chi-squared test
- Correlation coefficients
- Effect size indexes
t-Test
- Tests the difference between two means
- t-Test for independent groups: between-subjects test
- For example, means for men vs. women
- t-Test for dependent (paired) groups: within-subjects test
- For example, means for patients before and after surgery
- A t-test would be used to test differences in the means between the two groups
of mothers on a ratio-level variable, birth weight (see the sketch below).
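A minimal sketch, assuming SciPy is available, of independent-groups and paired t-tests on hypothetical data:

```python
from scipy import stats

# Independent (between-subjects) t-test: hypothetical birth weights (kg) for two groups of mothers
group_1 = [3.1, 3.4, 2.9, 3.6, 3.2, 3.0]
group_2 = [2.7, 3.0, 2.8, 2.9, 3.1, 2.6]
t_ind, p_ind = stats.ttest_ind(group_1, group_2)

# Paired (within-subjects) t-test: hypothetical pain scores before and after surgery
before = [6, 7, 5, 8, 6, 7]
after = [4, 5, 4, 6, 5, 5]
t_rel, p_rel = stats.ttest_rel(before, after)

print(round(t_ind, 2), round(p_ind, 3), round(t_rel, 2), round(p_rel, 3))
```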

Analysis of Variance (ANOVA)


- Tests the difference among more than two means
- Sorts out the variability of an outcome variable into two components: variability
due to the independent variable and variability due to all other sources
- Variation between groups is contrasted with variation within groups to yield an F
ratio statistic.
- One-way ANOVA (e.g., three groups)
- Multifactor (e.g., two-way) ANOVA
- Repeated measures ANOVA (RM-ANOVA): within subjects
- ANOVA would be used to test differences in the means of the three
groups of patients on a variable measured on an interval-level scale,
anxiety scores (see the sketch below).
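A minimal sketch, assuming SciPy is available, of a one-way ANOVA comparing hypothetical anxiety scores across three patient groups:

```python
from scipy import stats

group_1 = [20, 22, 19, 24, 21]   # hypothetical anxiety scores
group_2 = [25, 27, 26, 24, 28]
group_3 = [30, 29, 31, 28, 32]

f_ratio, p_value = stats.f_oneway(group_1, group_2, group_3)  # between- vs. within-group variation
print(round(f_ratio, 2), round(p_value, 4))
```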

Chi- Squared Test


- Tests the difference in proportions in categories within a contingency table
- Compares observed frequencies in each cell with expected frequencies- the
frequencies expected if there were no relationship
- A chi-squared test would be used to test differences in proportions
between the two groups on a nominal-level variable, ever used versus
never used an illegal drug (see the sketch below).
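A minimal sketch, assuming SciPy is available, of a chi-squared test on a hypothetical 2 x 2 contingency table:

```python
from scipy.stats import chi2_contingency

# Rows = group 1 / group 2; columns = ever used / never used an illegal drug (hypothetical counts)
observed = [[30, 70],
            [45, 55]]

chi2, p, dof, expected = chi2_contingency(observed)  # expected = frequencies if no relationship
print(round(chi2, 2), round(p, 3), dof)
```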
Correlation Coefficients
- Pearson’s r is both a descriptive and an inferential statistic.
- Tests that the relationship between two variables is not zero
Effect Size
- Effect size is an important concept in power analysis
- Effect size indexes summarize the magnitude of the effect of the independent variable
on the dependent variable.
- In a comparison of two group means (i.e., in a t-test situation), the effect size index is d
(computed in the sketch below).
- By convention:
- d ≤ .20, small effect
- d = .50, moderate effect
- d ≥ .80, large effect
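A short sketch computing Cohen’s d for two hypothetical groups (mean difference divided by the pooled standard deviation):

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(group_1, group_2):
    """Effect size d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group_1), len(group_2)
    pooled_sd = sqrt(((n1 - 1) * stdev(group_1) ** 2 + (n2 - 1) * stdev(group_2) ** 2)
                     / (n1 + n2 - 2))
    return (mean(group_1) - mean(group_2)) / pooled_sd

print(round(cohens_d([6, 7, 5, 8, 6, 7], [5, 6, 4, 6, 5, 6]), 2))   # hypothetical scores
```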
Multivariate Statistical Analysis
- Statistical procedures for analyzing relationships among three or more variables
simultaneously
- Commonly used procedures in nursing research
- Multiple regression
- Analysis of covariance (ANCOVA)
- Logistic regression
- Multivariate analysis of variance (MANOVA)
Multiple Regression #1
- Used to predict a dependent variable based on two or more independent (predictor)
variables
- The statistic used in multiple regression is the multiple correlation coefficient, symbolized
as R.
- Dependent variable is continuous (interval or ratio-level data)
- Predictor variables are continuous (interval or ratio) or dichotomous.
Multiple Regression #2
Multiple Correlation Coefficient (R)
- The correlation index for a dependent variable and two or more independent
(predictor) variables: R
- Does not have negative values: shows strength of relationships, not direction
- R² is an estimate of the proportion of variability in the dependent variable accounted for
by all predictors (see the sketch below).
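A minimal sketch, assuming statsmodels and NumPy are available, of a multiple regression with two hypothetical predictors; R and R² are read from the fitted model:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: predicting systolic blood pressure from age and BMI
age = np.array([30, 40, 50, 60, 70, 35, 45, 55])
bmi = np.array([22, 27, 30, 28, 31, 24, 26, 29])
bp = np.array([118, 125, 134, 140, 148, 120, 128, 138])   # dependent (continuous) variable

X = sm.add_constant(np.column_stack([age, bmi]))          # two predictors plus an intercept
model = sm.OLS(bp, X).fit()
print(round(np.sqrt(model.rsquared), 2), round(model.rsquared, 3))   # R and R-squared
```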
Analysis of Covariance (ANCOVA)
- Extends ANOVA by removing the effect of the confounding variables (covariates) before
testing whether mean group differences are statistically significant
- Levels of measurement of variables
- Dependent variable is continuous- ratio or interval level
- Independent variable is nominal (group status).
- Covariates are continuous or dichotomous.
Logistic Regression
- Analyzes relationships between a nominal-level dependent variable and two or more
independent variables
- Yields an odds ratio- the risk of an outcome occurring given one condition versus the risk
of it occurring given a different condition
- The OR is calculated after first removing (statistically controlling) the effects of the
confounding variables (see the sketch below).
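A minimal sketch, assuming statsmodels is available, showing how exponentiated logistic-regression coefficients yield odds ratios (hypothetical data; confounders would be added as extra predictor columns):

```python
import numpy as np
import statsmodels.api as sm

smoker = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])    # hypothetical exposure
disease = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0])   # nominal (yes/no) dependent variable

X = sm.add_constant(smoker)                # intercept + exposure; add confounders as extra columns
result = sm.Logit(disease, X).fit(disp=0)  # disp=0 silences the fitting output
odds_ratios = np.exp(result.params)        # exponentiated coefficients = odds ratios
print(np.round(odds_ratios, 2))            # the exposure OR works out to about 6
```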
Measurement Statistics
- Reliability assessment
- Test- retest reliability
- Interrater reliability
- internal consistency reliability
- Validity assessment
- Content validity
- Construct validity
- Criterion validity
Research Article Information: Hypothesis Testing
- The test used
- The value of the calculated statistic
- Degrees of freedom
- Level of statistical significance

Interpretation and Clinical Significance in Quantitative Research

Interpretation and Quantitative Results


- The statistical results of a study, in and of themselves, do not communicate much
meaning.
- Statistical results must be interpreted to be of use to clinicians and other researchers.

Interpretive Task: Six Considerations


- The credibility and accuracy of the results
- The precision of the estimate of effects
- The magnitude of effects and importance of results
- The meaning of the results; especially causality
- The generalizability of the results
- The implications of the results for practice, theory, further research
Inference and interpretation
- Interpreting research results involves making a series of inferences.
- An inference involves drawing conclusions based on limited information, using logical
reasoning.
- We infer from study results “Truth in the real world.”
- The findings are “stand-ins” for the true state of affairs.

The Interpretive Mindset


- Evidence-based practice involves integrating research evidence into clinical decision
making.
- Approach the task of interpretation with a critical- and even skeptical- mindset.
- Test the “null hypothesis” that the results are wrong against the “research hypothesis”
that they are right.
- Show me!!! Expect researchers to provide strong evidence that their results are credible-
i.e., that the “null hypothesis” has no merit.
Credibility of Quantitative Results
- Proxies and interpretation
- Credibility and validity
- Credibility and bias
- Credibility and corroboration
CONSORT Guidelines
- Reporting guidelines have been developed so that readers can better evaluate
methodologic decisions and outcomes.
- The Consolidated Standards of Reporting Trials (CONSORT) include a flow chart for
documenting participant flow in a study.
Precision of the Results
- Results should be interpreted in light of the precision of the estimates (often
communicated through confidence intervals) and magnitude of effects (effect sizes).
- Considered especially important to clinical decision making
- In quantitative studies, results that support the researcher’s hypotheses are described as
significant.
- A careful analysis of study results involves evaluating whether, in addition to being
statistically significant, the effects are large and clinically important.
The Meaning of Results
- If the results are credible and of sufficient precision and importance, then inferences
must be made about what they mean.
- An interpretation of meaning requires understanding not only methodological issues but
also theoretical and substantive ones.
- Interpreting statistical results is easiest when hypotheses are supported, i.e., when there
are positive results.
Meaning and Causality
- Great caution is needed in drawing causal inferences- especially when the study is
nonexperimental (and cross-sectional).
- Critical maxim:
- CORRELATION DOES NOT PROVE CAUSATION.
Interpreting Hypothesized Results
- Greatest challenges to interpreting the meaning of results:
- Nonsignificant results
- Serendipitous significant results
- Mixed results
- Because statistical procedures are designed to provide support for research hypotheses
through the rejection of the null hypothesis, testing a research hypothesis that is a null
hypothesis is very difficult.
Clinical Significance
- The practical importance of research results in terms of whether they have genuine,
palpable effects on the daily lives of patients or on the health care decisions made on
their behalf.
Clinical Significance at the Group Level
- Group-level clinical significance (which is sometimes called practical significance)
typically involves using statistical information other than p values to draw conclusions
about the usefulness of research findings.
- The most widely used statistics for this purpose are:
- Effect size (ES) indexes
- Confidence intervals (CIs)
- Number needed to treat (NNT)
Clinical Significance at the Individual Level
- Involves establishing a benchmark (or threshold) that designates the score value on a
measure (or the value of a change score) that would be considered clinically important
- Conceptual definitions of clinical significance
- Operationalizing clinical significance: establishing the MIC (minimal important change) benchmark
- The focus is on the individual change scores, not differences between
groups.

Analysis of Qualitative Data

Qualitative Analysis Challenges


- No universal rules; no one set way to do an analysis correctly.
- Voluminous amount of narrative data= lots of intensive work
- Need for strong inductive powers and creativity
- Condensing rich data to fit into concise reports

Qualitative Data Management and Organization


- Transcribing data
- Developing a coding scheme
- Coding qualitative data
- Organizing the data
- Manual methods of organization (conceptual files)
- Computerized methods of organization using CAQDAS
A General Analytic Overview
- Identify themes or broad categories.
- A theme represents the labeling of similar ideas shared by the study
participants.
- A theme is an abstract entity that brings meaning and identity to a current
experience and its variant manifestations.
- A theme captures and unifies the nature or basis of the experience into a
meaningful whole.
- Search for patterns among themes, variations in the data.
- Develop charting devices, timelines.
- In some cases, use metaphors to evoke a visual analogy.
- Validate themes, patterns.
- Weave thematic pieces into an integrated whole.
Qualitative Content Analysis
1. Analyze the content of narrative data to identify prominent themes and patterns
among the themes
2. Break down data into smaller units
3. Code and name units according to content.
4. Group coded material based on shared content.
Ethnographic Analysis: Spradley’s 12- Step Method #1
1. Locating an informant
2. Interviewing an informant
3. Making an ethnographic record
4. Asking descriptive questions
5. Analyzing ethnographic interviews
6. Making a domain analysis (first level of analysis)
Ethnographic Analysis: Spradley’s 12- Step Method #2
7. Asking structural questions
8. Making a taxonomic analysis (second level)
9. Asking contrast questions
10. Making a componential analysis (third level)
11. Discovering cultural themes; theme analysis (fourth level)
12. Writing the ethnography
Phenomenological Analysis
- Three broad schools of phenomenology
- Duquesne School (descriptive phenomenology)
- Colaizzi
- Giorgi
- Van Kaam
- Utrecht School (description and interpretive phenomenology)
- Van Manen
- Heideggerian hermeneutics (interpretative)
- Gadamer
- Diekelmann, Allen, and Tanner
- Benner
Van Manen’s Phenomenological Method:
Six Activities
1. Turning to the nature of the lived experience
2. Exploring the experience as we live it
3. Reflecting on essential themes
4. Describing the phenomenon through the art of writing and rewriting
5. Maintaining a strong relation to the phenomenon
6. Balancing the research context by considering parts and whole
Benner’s Hermeneutic Analysis
- Search for paradigm cases
- Thematic analysis
- Analysis of exemplars
Grounded Theory Analysis
- Uses constant comparative method of analysis
- Two competing grounded theory strategies
- Glaser and Strauss (Glaserian)
- Strauss and Corbin (Straussian)
Coding: Glaserian Approach
- Substantive codes
- Open codes– ends when core category is identified
- One type of core category is a basic social process (BSP)
- Level I (in vivo) codes; level II codes; level III codes
- Selective codes– codes relating to core category
- Theoretical codes
Examples of Families of Theoretical Codes (Glaser)
- Process: stages, phases, passages, transitions
- Strategy: tactics, techniques, maneuverings
- Cutting point: boundaries, turning points
- The six Cs: causes, contexts, conditions, contingencies, consequences, and covariances
Strauss and Corbin’s Method of Grounded Theory
- Three types of coding
- Open coding
- Axial coding
- Selective coding– deciding on the central (or core) category
Constructivist Grounded Theory Approach
- Theories include researcher’s experience and involvements
- Initial coding: data are studied to learn what participants view as problematic.
- Focused coding: identify the most significant initial codes and then code theoretically

Trustworthiness and Integrity in Qualitative Research

Debates About Rigor and Validity


- Controversies about quality
- What should the key quality-related goals be, and what terminology should be
used?
- A major dispute has involved whether “validity” and “rigor” are
appropriate.
- Some reject these terms and concepts totally, some think they are
appropriate, and others have searched for parallel goals.
Terminology Proliferation and Confusion
- No common vocabulary exists.
- Goodness
- Truth value
- Integrity
- Trustworthiness
- Validity and rigor
Controversies in Qualitative Research
- Some frameworks and criteria aspire to being generic- to be applicable across
qualitative traditions.
- Other frameworks are specific to a tradition or even to a specific analytic approach within
a tradition.

Two Quality Frameworks


- Lincoln and Guba’s framework– often considered the “gold standard” and widely cited

Lincoln and Guba’s Framework


- Key goal: trustworthiness
- Concerns the “truth value” of qualitative data, analysis, and interpretation
- A parallel perspective, with analogs to quantitative criteria
- Encompasses four criteria
- Credibility
- Dependability
- Confirmability
- Transferability
Lincoln and Guba’s:
Credibility
- Involves two aspects: (1) carrying out the study in a way that enhances the believability
of the findings and (2) taking steps to demonstrate credibility to external readers
- Refers to confidence in the truth of the data and interpretations of them
- The analog of internal validity in quantitative research
- Arguably the most important criterion for assessing the quality and integrity of a
qualitative inquiry
Dependability
- Refers to stability of data over time and over conditions
- The analog of reliability in quantitative research
Confirmability
- Refers to objectivity– the potential for congruence between two or more independent
people about data accuracy, relevance, or meaning
- The analog of objectivity in quantitative research
- The criterion is concerned with establishing that the data represent the information
participants provided and that the interpretations of those data are not imagined by the
inquirer.
Transferability
- The extent to which qualitative findings can be transferred to other settings or groups
- The analog of generalizability or external validity in quantitative research
Authenticity
- The extent to which the researchers fairly and faithfully show a range of different realities
and convey the feeling/tone of participants’ lives as they are lived
- No analog in quantitative research
- Added to Lincoln and Guba’s framework at a later date
Strategies to Enhance Quality in Qualitative Inquiry
- Researchers can take many steps to enhance the quality of their inquiries.
- Consumers can assess quality-enhancement efforts by looking for these steps and
assessing their success in strengthening integrity/validity/trustworthiness.
Strategies During Data Collection #1:
- Prolonged engagement: investing sufficient time to have in-depth understanding
- Persistent observation: intensive focus on salience of data being gathered
- Reflexivity strategies: attending to researcher’s effect on data
Strategies During Data Collection #2:
- Comprehensive and vivid recording of information
- Maintenance of an audit trail, a systematic collection of documentation and materials,
and a decision trail that specifies decision rules
- Member checking: providing feedback to participants about emerging interpretations;
obtaining their reactions
- A controversial procedure– considered essential by some but inappropriate by
others
Data and Method Triangulation: Denzin
- Data triangulation: the use of multiple data sources to validate conclusions (time, space,
and person triangulation)
- Investigator triangulation: not relevant to data collection
- Method triangulation: the use of multiple methods of data collection to study the same
phenomenon (e.g., self-report, observation)
- Theory triangulation: not relevant to data collection
Strategies Relating to Coding and Analysis
- Search for disconfirming evidence as the analysis proceeds, through
purposive/theoretical sampling of cases that can challenge interpretations
- Negative case analysis: a specific search for cases that appear to discredit earlier
hypothesis
- Peer review and debriefing: sessions with peers specifically designed to elicit critical
feedback
- Inquiry audit: a formal scrutiny of the data and relevant supporting documents and
decisions by an external reviewer
Strategies Relating to Presentation
- Thick and contextualized description: vivid portrayal of study participants, their context,
and the phenomenon under study
- Researcher credibility: enhancing confidence by sharing relevant aspects of the
researcher’s experience, credentials, and motivation
Interpretation of Qualitative Findings
- Interpretation in qualitative inquiry– making meaning from the data– relies on adequate
incubation.
- The process of living the data
- Similar interpretive issues as in quantitative research: credibility, meaning, importance,
transferability, and implications

Summary Points

● Level of measurement (in quantitative research) is a way of classifying
measurement according to the type of mathematical operations involved,
including nominal, ordinal, interval, and ratio.
● Reliability refers to the ability of a scale to produce consistent results, while
validity refers to the degree to which a scale measures what it is designed to
measure.
● Generalizability is the extent to which findings can be generalized from the
sample to the population.
● Clinical significance is the importance of research findings in so far as they
affect the lives of patients or decisions of health care professionals.
● Credibility is the criterion for evaluating the integrity of qualitative research;
this involves confidence in the truth(s) conveyed by the data.
● Dependability is a measure for evaluating the integrity of qualitative
research; it signifies the stability of the findings over time and across
conditions.
● Confirmability refers to the trustworthiness of the qualitative study in
relation to the objectivity of the data and analysis.
● Transferability refers to the degree to which qualitative findings can be
transferred to other settings; it is similar to generalizability in quantitative
studies.

UNIT 11
Reading and Critiquing Research Articles

Types of research reports


- Presentations at professional conferences
- Oral presentations
- Poster sessions
- Journal articles
- Papers often subjected to peer review
- Peer reviews are often blind (reviewers are not told names of authors and vice
versa)
Content of research Journal Articles
- IMRAD Format
- Title and abstract
- Introduction
- Method
- Results
- And Discussion
- References
Title and Abstract
- Title
- Qualitative studies: title normally includes the central phenomenon and group
under investigation.
- Quantitative studies: title communicates key variables and the population (PICO
components)
- Abstract: brief description of major features of a study at the beginning of a journal article
Components of the Introduction
- Description of central phenomena, concepts, or variables
- Study purpose, research questions, or hypotheses
- Review of literature
- theoretical/conceptual framework
- Study significance, need for study
Method Section: Quantitative Studies
- Research design
- Sampling plan
- Methods of measuring variables and collecting data
- Study procedures, including procedures to protect participants
- Analytic methods and procedures
Method Section: Qualitative Studies
- Discuss many of the same issues as quantitative researchers but with different
emphases
- Provide more information about the research setting and the context of the study
- Describe the researchers’ efforts to enhance the integrity of the study
Results Section
- Findings
Quantitative studies
- The names of the statistical tests used
- The value of the calculated statistic
- Statistical significance
- Level of statistical significance
- Index of how probable it is that the findings are reliable
Qualitative studies
- Findings often organized according to major themes, processes, or categories
identified in the analysis
- Almost always includes raw data– direct quotes from study participants
Discussion Section
- Interpretation of the results
- Clinical and research implications
- Study limitations and ramifications for the believability of the results
Why Are Research Articles Hard to Read?
- Compactness–page constraints
- Jargon
- Objectivity, impersonality
- Statistical information
- Last two especially prominent in quantitative research articles
Tips on Reading Research Articles
- Read regularly, get used to style.
- Read copied articles– underline, highlight, and write notes
- Read slowly
- Read actively.
- Look up technical terms in the glossary.
- Don't be intimidated by statistics– grasp the gist of the story.
- “Translate” articles or abstracts
Research Critique
- Definition: an objective assessment of a study’s strengths and limitations
- Critiques to inform EBP focus on whether evidence is accurate, believable, and clinically
relevant.
- Careful and objective appraisals of the researcher’s major conceptual and methodologic
decisions
- Critiques of individual studies can be done for a variety of reasons (e.g., for a student
assignment, for making decisions about whether or not to publish a manuscript, for EBP
purposes)
- Vary in scope, length, and form, depending on purpose
Key research challenges
- Designing studies to support inferences that are:
- Reliable and valid (quantitative studies)
- Trustworthy (qualitative studies)
- An inference is a conclusion drawn from the study evidence using logical
reasoning and taking into account the methods used to generate that evidence.
Criteria for evaluating quantitative research (scientific merit)
- Reliability
- The accuracy and consistency of obtained information
- Validity
- The soundness of the evidence– whether findings are convincing, are well-
grounded, and support the desired inferences
Evaluative Criteria in Qualitative Studies
- Trustworthiness
- Credibility- a key criterion, achieved to the extent that researchers can engender
confidence in the truth of the data and their interpretations
- Confirmability
- Dependability
- Transferability
- Authenticity
Triangulation
- Triangulation is the use of multiple sources or referents to draw conclusions about what
constitutes the truth.
- Triangulation can contribute to credibility.
- Triangulation is a useful strategy in both qualitative and quantitative research
Bias
- A distortion or influence that results in an error in inference
- Examples of factors creating bias
- Lack of participants’ candor
- Faulty methods of data collection
- Researcher’s preconceptions
- Participants’ awareness of being in a special study
- Faulty study design
Research Control
- In quantitative studies, research control involves holding constant extraneous factors
(confounding variables) that influence the dependent variable to better understand
relationships between the independent and dependent variables.
- Research control is one method of addressing bias.
Bias Reduction: Randomness and Blinding
- Randomness– allowing certain aspects of the study to be left to chance rather than to
researcher or participant choice
- An important tool for achieving control over confounding variables and for
avoiding bias
- Blinding (or masking), which is used in some quantitative studies to prevent biases
stemming from people's awareness
- Blinding involves concealing information from participants, data collectors, or
care providers to enhance objectivity.
Reflexivity
- The process of reflecting critically on the self and of attending to personal values that
could affect data collection and interpretations of the data
- Qualitative researchers are trained to explore these issues, to be reflective about
decisions made during the inquiry, and to record their thoughts in personal diaries and
memos.
- Reflexivity can be a useful tool in quantitative as well as qualitative research– self
awareness and introspection can enhance the quality of any study.
Generalizability and Transferability
- Generalizability: the criterion used in quantitative studies to assess the extent to which
the findings can be applied to other groups and settings
- Transferability (qualitative research): the extent to which qualitative findings can be
transferred to other settings
- An important mechanism for promoting transferability is the amount of rich descriptive
information qualitative researchers provide about study contexts.

Summary

Terms to Know

● Statistical tests
● Statistical significance
● p values
● Themes and raw data
● Discussion and interpretation
● Clinical and research implications
● Study limitations

Summary Points

● Research studies are often peer reviewed by two reviewers (e.g., other
researchers) and are blind reviewed (i.e., reviewers do not know names of
authors of research).
● A research critique is an evaluation of a study’s strengths and limitations.
● An abstract is a brief description of a research article; it focuses on the
purpose of the study, its research questions, methods, key findings, and
implications.
● The introduction addresses the key issue or problem that will be studied,
the purpose of the study and its research question, an appraisal of related
research, a conceptual framework (if the study has one), and the importance
of the study.
● The methods section includes the research design, a description of how
sampling was conducted, methods of measuring variables or investigating
study phenomena and data collection, study procedures (including ethical
considerations), and data analysis.
● The results section includes findings of the study, a description of study
participants, statistical tests used and statistical significance (quantitative
research), or themes and raw data.
● The discussion includes an interpretation of results, clinical and research
implications, and study limitations.
