
PHINMA CAGAYAN DE ORO COLLEGE

COLLEGE OF ALLIED HEALTH SCIENCES


NUR 028 NURSING RESEARCH 2
PERIODICAL 1 REVIEW NOTES
AUGUST 02, 2022

INSTRUCTOR: PHOEBE JAENN TAN RN


patan.coc@phinmaed.com

Types of Research Data

1. Cross-sectional Data The researcher enters the study setting at a given point after the study
design is completed, then gathers data on events occurring at that time. Such data are limited
to the subjects at one point in time and are best gathered when the time frame is of short duration.
2. Retrospective Data Frequently called ex post facto studies. In this type, data is collected
on events in the past, before a study design is completed; that is, data collected “after the fact”.

3. Prospective Data Refers to future data, or events that occur after the study design has been
completed; the study is pursued over a relatively long period of time into the future.

Note: Prospective and retrospective studies are sometimes called longitudinal studies because they
extend over a long period of time. All experimental studies are prospective and have the advantage of
allowing the researcher to manipulate research variables and observe their effects after a period of time.

Categories of Data Collection

1. Primary data collection - In primary data collection, the researcher personally collects data from actual
respondents using methods such as interviews and questionnaires.

The key point here is that the data collected is unique to the researcher and the research. It is not
accessible to anyone else until it is published.

2. Secondary data collection - In secondary data collection, the researcher uses data that was collected for
another purpose, such as patients' medical records, employee or patient satisfaction surveys,
organizational business reports, and government databases.

By its nature, secondary data is easier to collect and gather than primary data. Its use is limited only by the
specific administrative procedures in different institutions.

Methods of Collecting Data

1. Use of already existing or available data These are pertinent reports and other documents of an institution,
which could be any of the following:

● Raw data from basic documents such as records of patients' admissions, birth dates, and discharges,
among others

● Tabular data indicating the number of patients admitted or discharged by year or month, or the total
number of deliveries, surgeries, or the workload of nurses.

2. Use of observers’ data These are gathered through actual observation and recording of events. For ethical
reasons, the subjects must be informed that they are being observed.

Types of Observers

I. Non-participant observer This observer does not share the same milieu with the subjects and is not a
member of the group or subjects of the study. Data from this source have the advantage of a high level of
precision because subjective judgement is minimal.

Types of non-participant observers

● Overt non-participant observer The observer identifies herself, explains her task of conducting
research, and informs the subjects of the types of data to be collected.

● Covert non-participant observer The observer does not identify herself to the subjects she
will observe. This may not be ethical, since subjects have the right to be informed of the
activities in which they are involved in the process of research investigation.

II. Participant observer The observer shares the same milieu and is better acquainted with the subjects. The
observer may be a member of the group assigned to collect data while taking part in the activity of the
subjects.

Types of Participant observers:

● Overt participant observer The observer is involved with the subjects, who have full knowledge and
awareness that they are being observed.

● Covert participant observer The observer interacts with the subjects and observes their behaviour
without their knowledge. This may be construed as “spying” by subjects who find out the
real purpose of the observer's activity. It has ethical implications similar to those of the
covert non-participant observer.

Two methods of observations

A. Structured Observations. These are done when the researcher has prior knowledge of the
phenomenon of interest. The behaviour checklist may help indicate the frequency of the subjects’
observed behaviour.

B. Unstructured Observation The researcher attempts to describe the events or behaviour with no
preconceived ideas of what will be seen or observed. This requires a high degree of attention and
concentration on the part of the researcher.

3. The Use of Self Recording or the Reporting Approach Self-recording and reporting methods of data
collection use specially prepared documents intended to collect data, called instruments. This method
describes tools, devices, tests and other measures used in data collection. It explains in detail how these are
applied and validated.

4. Use of Delphi Technique This technique uses a series of questionnaires to gather a consensus of opinions
and information from a group of experts. The process continues until a consensus is reached. Like
questionnaires, it includes a large number of subjects or respondents, but its primary objective is to gather a
consensus of opinions, judgments or choices.

The following are types of Delphi technique according to Benner & Ketefian (2008):

● Classic Delphi Questions are presented to a panel of informed individuals in a scientific field asking
their opinions on a particular issue or problem.
● Modified Delphi Uses interview or focus groups to gather their opinions on certain issues or trends.
● Policy Delphi Used mostly in organizations to examine and explore policy issues. A committee could
be used to formulate the argued policy
● Real-time Delphi Uses a structured real-time format, avoiding the delays of the pen-and-paper type;
hence a face-to-face analysis is done by the researcher.

● E-delphi Uses electronic means such as e-mail or online completion of information, following either
the classic or modified Delphi technique.

5. Critical Incident Technique This technique employs a set of principles for collecting data on observable
human activities. It has high value in nursing since the data is based on actual incidents and is not merely
hypothetical. It is flexible enough to examine interpersonal communication skills (Polit & Beck, 2008). The
researcher develops a codebook to define data before initiating data collection.

● Variables are qualities, properties or characteristics of persons, things, or situations that may change
or vary and can be manipulated, measured, or controlled.

● Measurement is a procedure for assigning numerical values to represent the amount of an attribute
present in an object or person.

Advantages of Measurement

1. It removes subjectivity and guesswork.
2. It obtains reasonably precise information.
3. It serves as a language of communication.

Levels of Measurement determine the type of statistical analysis that can be used and the type of
conclusions that can be drawn from the investigation.

1. Nominal measures are used to classify variables into categories that cannot be ranked; the categories are
mutually exclusive.

2. Ordinal measures are used to show relative rankings of variables and to order observations according to
magnitude or intensity: from most to least, highest to lowest.

● Likert Scale- consists of several declarative items that express a viewpoint on a topic. Respondents
typically are asked to indicate the degree to which they agree or disagree with the opinion expressed
by the statement
● Semantic Differential Scale -participants are asked to rate concepts on a series of bipolar adjectives.
Respondents place a check at the appropriate point on a 7-point scale that extends from one
extreme of the dimension to the other.
● Vignettes- brief case reports or descriptions of events to which respondents are asked to react.
Descriptions are structured to elicit information about respondents’ perceptions of some
phenomenon or their projected actions

● Q Sorts- participants are presented with a set of cards on which words or phrases are written; they
are told to sort the cards along a specified bipolar dimension. Typically, there are between 50 and 100
cards to be sorted into 9 or 11 piles.

3. Interval- rankings of variables on a scale with equal intervals between the numbers; consists of real
numbers. The zero point remains arbitrary, not absolute.

4. Ratio- rankings of variables on scales with equal intervals, with the distance between ranks specified down
to the zero point; the zero is absolute.

Sources of Measurement Errors

Measurement errors occur when there is a difference between what exists in reality and what is measured
by a research instrument (Burns & Grove, 2007). Instruments and objects being measured are influenced by
several factors which represent bias or can alter the resulting data.

1. Environmental contaminants Responses and scores are affected by the situation.


2. Variation in personal factors Response or scores are affected by the respondents’ personal states
which influence motivation to cooperate.

3. Quality of responses Characteristics and responses of respondents can interfere with accurate
measurement. Acquiescence happens when respondents agree to statements regardless of content.
4. Variation in data collection Different ways of collecting data from one person to another can cause
score variations.
5. Clarity of the instrument If instructions are poorly understood, then scores are affected by confusion
or misunderstanding.
6. Sampling of items Errors can occur as a result of the sampling of items used to measure a variable.

7. Format of the instrument The manner in which the instrument is prepared such as its technical
aspect, can influence measurement.

Reliability refers to the accuracy and precision of a research tool. When a measure is precise, then the
reader has a level of confidence that differences between groups are not explained by differences in the way the
trait was measured.

Methods of Testing Reliability

a. Stability of measurement This is the extent to which the same scores are obtained when the instrument is
used with the same samples on separate occasions. A stable research instrument is one that can be repeated
over and over (at different times) on the same research subject and will produce the same results.

Major limitation → it can only be done when you can assume that the trait being measured will remain
constant over time; it is not useful in the measurement of changeable or transient states

● Example of a stable concept = intelligence (it can be measured repeatedly, at regular intervals, and
will yield the same score)

● Example of an unstable concept = pain (changeable, subject to frequent fluctuations even in persons
with chronic pain)

Tests of Stability

1. Test – Retest- the repeated measurements over time using the same instrument on the same
subjects is expected to produce the same results. This is used in interviews, examinations, and
questionnaires. A reliable questionnaire will give you consistent results over time. If the results are
not consistent, the questionnaire is not considered reliable and will need to be revised until it does
measure consistently.
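As a sketch of how test–retest reliability is often quantified in practice (the notes above describe the idea only; correlating the two sets of scores is a standard convention, and the scores below are hypothetical):

```python
# Test-retest reliability: correlate scores from two administrations of the
# same questionnaire to the same subjects. A coefficient near 1.0 suggests
# a stable (reliable) instrument. Scores are hypothetical.

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 11, 18, 14, 16]   # scores at the first administration
time2 = [13, 14, 11, 17, 15, 16]   # scores two weeks later
r = pearson_r(time1, time2)
print(round(r, 3))  # a value near 1.0 supports test-retest reliability
```

If the trait is unstable (like pain, as noted above), a low coefficient may reflect real change rather than an unreliable tool.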

2. Repeated Observations- this method has the same basic elements as test/retest. The measurement
of the variable or trait is repeated over time, and the results at each measurement time are
expected to be very similar. If you get different ratings at each measurement time, you will question:
Whether or not you have a reliable rating; Whether or not you are measuring a stable trait or
characteristic; or, Whether or not you are observing the same way each time (i.e., whether you are a
consistent observer)

b. Internal Consistency The instrument shows that all indicators or subparts measure the same
characteristic or attribute of the variable. Internal consistency must be established before an instrument
can be used for research purposes.

● Any new instrument that you might develop will require pilot testing before using it in your research
project.
● If you revise an existing tool, you should treat it as a new tool for purposes of reliability testing.

● Even an established instrument should be tested for internal consistency each time it is used with a
new population.

Tests of Internal Consistency

1. Split-Half Correlations- scores on one half of a subject’s responses are compared to scores on the
other half. If all items are consistently measuring the overall concept, then the scores on the two
halves of the test should be highly correlated.

2. Cronbach’s alpha coefficient- a measure of internal consistency, that is, how closely related a set of
items are as a group. It is a useful device for establishing reliability in a highly structured quantitative
data collection instrument.
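A minimal sketch of the Cronbach's alpha computation, using the standard formula (alpha = k/(k−1) × (1 − sum of item variances / variance of total scores)); the 4-item questionnaire and the five respondents' scores are hypothetical:

```python
# Cronbach's alpha for a hypothetical 4-item questionnaire answered by
# 5 respondents (one row of item scores per respondent).

def variance(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])                  # number of items
    items = list(zip(*rows))          # one tuple of scores per item
    totals = [sum(r) for r in rows]   # each respondent's total score
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))

responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(responses), 2))  # values of .70 or higher are commonly deemed acceptable
```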

c. Equivalence Primarily concerns the degree to which two or more independent observers or coders agree
about scoring. If there is a high level of agreement, then the assumption is that measurement errors have
been minimized.

Tests of Equivalence

1. Alternate Form- two tests are developed based on the same content but the individual items are different.
When these two tests are administered to subjects at the same time, the results can be compared. This can
introduce new sources of error through subject fatigue and boredom. Obtaining similar results on the two
alternate forms of the instrument gives support for the reliability of both forms of the instrument.

2. Inter-Rater / Intra-Rater Reliability- this is the method of testing for equivalence when the design calls for
observation; it is used to determine whether two observers (inter-rater) using the same instrument at the
same time will obtain similar results. Intra-rater reliability is a measure of how consistent a single observer is
at measuring a constant phenomenon. A reliable instrument should produce the same results if both observers
are using it the same way.
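One simple way to express inter-rater equivalence, sketched here with hypothetical ratings, is percent agreement: the fraction of observations on which the two observers' codes match (more refined indices, such as Cohen's kappa, correct for chance agreement):

```python
# Percent agreement between two observers rating the same 10 events
# with the same behaviour checklist. Ratings are hypothetical.

def percent_agreement(rater_a, rater_b):
    """Fraction of observations on which the two raters' codes match."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "yes"]
print(percent_agreement(rater_a, rater_b))  # → 0.8 (agreement on 8 of 10 events)
```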

Validity is the degree to which the instrument measures what it intends to measure. Validity should prove
that the instrument will consistently measure the right variables to be investigated. It also refers to an
instrument's ability to actually measure/test what it is supposed to measure/test.

METHODS OF ESTABLISHING VALIDITY OF MEASUREMENT TECHNIQUE

A. SELF-EVIDENT MEASURES These methods of establishing validity deal with basic levels of knowledge
about the variable and look at an instrument’s apparent value as a measurement technique rather than at its
actual value. In other words, the instrument appears to measure what it is supposed to measure.

● Face Validity Refers to whether the instrument looks as though it is measuring the appropriate
construct. At the most basic level, when little or nothing is known about the variable being measured,
the level of validity obtainable is called face validity.
● Content Validity Concerns the degree to which an instrument has an appropriate sample of items for
the construct being measured and adequately covers the construct domain.
○ Use of Judge Panels → Face Validity You put together a group of people that you believe are
knowledgeable about the content you are testing or knowledgeable about the process of
developing questions. These people, called Panel of Experts or Subject Matter Experts
(SMEs), are asked to judge whether or not, “on the face of it,” your work appears to be
sound, that it will do what you want it to do.
○ CONTENT VALIDITY INDEX (CVI)

○ Rating parameters: 1 = Not Relevant; 2 = Somewhat Relevant; 3 = Quite Relevant; 4 = Highly Relevant

ITEM CONTENT VALIDITY INDEX (I-CVI) = Number of experts giving a rating of 3 or 4 ÷ Total number of
experts; an I-CVI of 0.78 or higher is acceptable.

SCALE CONTENT VALIDITY INDEX (S-CVI/Ave) = the average across all I-CVIs; an S-CVI/Ave of .90 or
higher is acceptable.
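The two CVI computations above can be sketched directly; the panel of five experts and their 1–4 relevance ratings below are hypothetical:

```python
# I-CVI and S-CVI/Ave for a hypothetical 3-item scale rated by 5 experts
# on the 1-4 relevance scale (3 = quite relevant, 4 = highly relevant).

def i_cvi(ratings):
    """Proportion of experts rating the item 3 or 4."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def s_cvi_ave(all_ratings):
    """Average of the item-level CVIs across the whole scale."""
    icvis = [i_cvi(r) for r in all_ratings]
    return sum(icvis) / len(icvis)

items = [
    [4, 4, 3, 4, 3],   # item 1: all 5 experts rate 3 or 4 -> I-CVI = 1.0
    [4, 3, 2, 4, 4],   # item 2: 4 of 5 rate 3 or 4        -> I-CVI = 0.8
    [3, 4, 4, 3, 4],   # item 3: all 5 rate 3 or 4         -> I-CVI = 1.0
]
for i, r in enumerate(items, 1):
    print(f"I-CVI item {i}: {i_cvi(r):.2f}")   # acceptable at 0.78 or higher
print(f"S-CVI/Ave: {s_cvi_ave(items):.2f}")    # acceptable at .90 or higher
```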

B. PRAGMATIC MEASURES

Also referred to as Criterion-Related Validity

● These test the practical value of a particular research instrument or tool and focus on the questions, “Does it
work?” “Does it do what it is supposed to do?”
● It involves determining the relationship between an instrument and an external criterion
● One requirement of this approach is the availability of a reliable and valid criterion with which measures on
the instrument can be compared
● The instrument is said to be valid if its scores correlate highly with scores on the criterion

1. Concurrent Validity Refers to an instrument's ability to distinguish individuals who differ on a present or
current criterion. Instruments that attempt to test a research participant on some current characteristic have
concurrent validity if the results are compared with, and correlate highly with, an established (tested)
measurement.

2. Predictive Validity Refers to the adequacy of an instrument in differentiating between people's performance
on some future criterion. Instruments that accurately predict some future occurrence have predictive validity.
Measures designed to predict success in educational programs fall into this category, as do aptitude tests.
They are designed to measure some current characteristic that is expected to predict something that will
occur sometime in the future.

C. CONSTRUCT VALIDITY

Attempts to answer the question – “What is the instrument really measuring?” Used mainly for measures of traits or
feelings, such as generosity, grief, or satisfaction. The theoretical base for the concept is tested by determining the
extent to which the instrument actually measures that concept.

1. Contrasted Groups or Known-Group Technique This is carried out by comparing two groups, one of which is
known to be very high on the concept being measured by the tool, and the other very low on that concept.
2. Experimental Manipulation An experiment is designed to test the theory or conceptual framework
underlying the instrument. The experiment would have hypotheses that predict the behaviour of people
who score at various levels on the tool.
3. Multitrait-Multimethod Matrix Method (MTMM) (by Campbell and Fiske) This approach is based on the
premise that different measures of the same construct should produce very similar results, and that
measures of different constructs should produce very different results.
3. Multitrait-Multimethod Matrix Method (MTMM) (by Campbell and Fiske) This approach is based on the
premise that different measures of the same construct should produce very similar results, and that
measures of different constructs should produce very different results.

The procedure involves the concepts of convergence and discriminability

● Convergent validity is the evidence that different methods of measuring a construct yield similar results.
● Discriminant validity is the ability to differentiate the construct from other similar constructs.

● MTMM is the preferred method of establishing construct validity whenever it is possible to use it. To
perform this type of validity, you must have access to more than one method of measuring the construct
under study, and you must be able to measure another construct at the same time. Thus, you have data
from two or more tools designed to measure the construct you are studying and one or more measures of a
different construct.

❖ Sensitivity is the ability of the instrument to correctly screen or identify the variables to be manipulated
and measured, and to diagnose the condition.

❖ Specificity is the ability of the instrument to correctly identify non-cases or extraneous variables and screen
out those conditions not necessary for manipulation.

Sample formula
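The sample formula itself did not survive in these notes; the standard screening-instrument formulas, sketched below with hypothetical counts, are Sensitivity = TP ÷ (TP + FN) and Specificity = TN ÷ (TN + FP):

```python
# Standard sensitivity/specificity formulas (a reconstruction; the original
# sample formula is missing from these notes). Counts are hypothetical.

def sensitivity(tp, fn):
    """True cases correctly identified: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Non-cases correctly screened out: TN / (TN + FP)."""
    return tn / (tn + fp)

tp, fn = 45, 5     # cases the instrument flagged vs. cases it missed
tn, fp = 90, 10    # non-cases correctly screened out vs. falsely flagged
print(sensitivity(tp, fn))  # → 0.9
print(specificity(tn, fp))  # → 0.9
```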

Internal Validity-This means the degree to which changes in the dependent variable (effects) can be
attributed to the independent variable (cause).

Threats to Internal Validity

● a. Selection Bias This exists when study results are attributed to the experimental treatment when,
in fact, the results are due to differences among the subjects even before the treatment.
● b. History This occurs when some event besides the experimental treatment takes place during the
course of the study and affects or influences the dependent variable.
● c. Maturation This takes place when changes occur within the subjects during the experimental study
and may influence study results.
● d. Testing This is a possible threat in studies in which a pre-test is a requisite. It refers to the
influence of the pre-test, which already projects the result of the post-test scores.
● e. Instrumentation Change The existence of a difference between pre-test and post-test results
caused by a change in the accuracy of the instrument or of the ratings, rather than by the
experimental treatment.
● f. Mortality This happens when a difference exists between the subject dropout rates of the
experimental group and the control group.

External Validity This is the degree to which study results can be generalized to, or are affected by, external
factors, populations and settings.

Threats to external validity

a. Hawthorne Effect occurs when study participants respond in a particular manner, or show an obvious
change of behaviour, because they are aware that they are being observed. This problem is handled by
having a control group that is subject to the same conditions as the treatment group and administering a
placebo to the control group.

The study is termed a blind experiment when the subject does not know whether he or she is receiving the
treatment or a placebo.

b. Experimenter Effect This refers to a threat to the study which results when the researcher’s behaviour
influences the behaviour of the subjects, such as the researcher’s facial expression, gender and clothing
among others.

c. Reactive Effect of the Pre-Test This occurs when the subjects have been sensitized to the treatment by
taking the pre-test and thereafter influence the post-test results.

d. Halo Effect This is the tendency of the researcher to rate the subject high or low because of the
impression he has of the latter.

● For the researcher to minimize threats to external validity, the double blind method may be used to
remove the observer’s bias. This means that neither the subject nor the observer knows the specific
research objective or the specific subjects who belong to the experimental or control group. Hence,
the observer cannot distort the data.
● If the double blind method is not feasible, the double observer method may be used to determine
the extent of bias between the two observers as they both observe and record the subjects’
performance on a dependent variable.

Construct Validity is the extent to which the measurements used, often questionnaires, actually test the
hypothesis or theory they are measuring.

Threats to Construct Validity

● Reactivity to the study situation When people's responses reflect, in part, their reaction to being
studied, those perceptions become part of the treatment construct under study.
● Researcher expectancies A similar threat stems from the researcher’s influence on participant
responses through subtle (or not so-subtle) communication about desired outcomes. When this
happens, the researcher’s expectations become part of the treatment (or non manipulated
independent variable) construct that is being tested.
● Novelty effects When a treatment is new, participants and research agents alike might alter their
behavior. People may be either enthusiastic or skeptical about new methods of doing things. Results
may reflect reactions to the novelty rather than to the intrinsic nature of an intervention, so the
intervention construct is clouded by novelty content.
● Compensatory effects In intervention studies, compensatory equalization can occur if health care
staff or family members try to compensate for the control group members’ failure to receive a
perceived beneficial treatment. The compensatory goods or services must then be part of the
construct description of the treatment conditions.
○ Compensatory rivalry is a related threat arising from the control group members’ desire to
demonstrate that they can do as well as those receiving a special treatment.
● Treatment diffusion or contamination Sometimes alternative treatment conditions can get blurred,
which can impede good construct descriptions of the independent variable. This may occur when
participants in a control group condition receive services similar to those available in the treatment
condition.

Threats to Statistical Conclusion Validity

One criterion for establishing causality is demonstrating that there is a relationship between the
independent and dependent variables. Statistical Methods are used to support inferences about whether
relationships exist.

1. Low Statistical Power- Statistical power refers to the ability to detect true relationships among variables.
Adequate statistical power can be achieved by using a sufficiently large sample. Another aspect of statistical
power is precision. This is achieved through accurate measuring tools, controls over confounding variables,
and powerful statistical methods.

2. Restriction of Range Although the control of extraneous variation through homogeneity is easy to use and
can help to clarify the relationship between key research variables, it can be risky. Not only does this
approach limit the generalizability of study findings, but it can also sometimes undermine statistical
conclusion validity.

3. Unreliable Implementation of a Treatment- Intervention fidelity (or treatment fidelity) concerns the extent
to which the implementation of an intervention is faithful to its plan. Interventions can be weakened by
various factors, which researchers can often influence. One issue concerns the extent to which the
intervention is similar from one person to the next.

● Usually, researchers strive for constancy of conditions in implementing a treatment because lack of
standardization adds extraneous variation and can diminish the intervention’s full force.
● This may involve a manipulation check to assess whether the treatment was in place, was
understood or was perceived in an intended manner.
● Another aspect of treatment fidelity for interventions designed to promote behavioral changes
concerns the concept of enactment. Enactment refers to participants' performance of the treatment-
related skills, behaviors and cognitive strategies in relevant real-life settings.
● Another issue is that participants often fail to receive the desired intervention due to lack of
treatment adherence. It is not unusual for those in the experimental group to elect not to
participate fully in the treatment. This might mean making the intervention as enjoyable as possible,
offering incentives, and reducing burden in terms of the intervention and data collection (Polit &
Gillespie, 2010).

Assessing Qualitative Data

Qualitative researchers are concerned with the quality of data reflecting the “true state” of human experience. The
truth value of the study is necessary to gain the acceptance of authorities in the field of qualitative research.
Lincoln and Guba (1985) suggested five (5) criteria to establish the trustworthiness of qualitative data: credibility,
dependability, confirmability, transferability and authenticity.

1. Credibility

Credibility refers to confidence in the truthfulness of data and their interpretations. There are various techniques
that can improve and document credibility of qualitative data, and this is done by carrying out investigation in a way
that believability is enhanced. The ways to demonstrate credibility are:

a. Prolonged Engagement

· The researcher is immersed in the group and spends time collecting in-depth data, engaging
with the group's views and understanding their feelings, ideas and opinions over a long
period of time.

b. Persistent Observations

· The researcher focuses on all aspects of the situation that are relevant to the phenomenon
being studied, up to the point of saturation.

c. Triangulation

· Credibility is enhanced through the use of multiple references to draw conclusions about what
constitutes the truth of the study. Triangulation aims to prevent the intrinsic bias that comes
from the single method, single observer and single theory studies (Denzin, 1989). Triangulation
helps capture a more complete and contextualized status of a phenomenon.

d. Peer Debriefing and Member Checks

· A time to meet and confer with peers to objectively review and explore various aspects of the
phenomenon. Member checks involve asking study participants’ reactions to preliminary
findings and interpretations of the inquiry.

e. Search for Disconfirming Evidence

· This involves a search for data that could challenge an emerging concept or descriptive theory.
This occurs with purposive sampling and is facilitated by prolonged engagement and debriefing.

The sampling of individuals who can give conflicting viewpoints can strengthen the qualitative
description of a phenomenon.

f. Comprehensive and Vivid Recording of Information

· This involves field notes with rich descriptions of what transpired in the field, descriptions of
participants' demeanor and behavior during interactions, and a description of the interview context.

2. Dependability (analogous to Reliability)

Dependability is concerned with the stability of qualitative data over a long period of time and is assessed by
stepwise replication, an approach using the split-half technique in which several researchers are divided into two
teams. The researchers conduct inquiries separately and then compare their data and conclusions. An inquiry audit
is also necessary to ensure the dependability of data and conclusions; it involves scrutiny of the data and supporting
documents by an external reviewer.

3. Confirmability

Confirmability refers to the objectivity and neutrality of data, showing congruence between two or more independent
sources to determine the accuracy, relevance and meaning of the data. To confirm the data, findings must reflect the
participants' opinions, feelings and attitudes regarding the inquiry and not the opinions, biases or perspective of the
researcher.

4. Transferability (analogous to Generalizability/External Validity)

Transferability of data refers to research findings which can be transferred or applied to other settings. This is similar
to the concept of generalizability of data, in which the researcher provides thorough and sufficient descriptive data
on the research setting, transactions and processes observed during the inquiry. These are all written in the research
report so that consumers can evaluate the applicability of the data to other contexts. It also requires thick
description.

Thick Description- a rich, thorough, and vivid description of the research context, the study participants, and the
experiences and processes observed during the inquiry

5. Authenticity

Authenticity is another criterion to establish the trustworthiness of qualitative studies. This refers to the extent to
which the researcher describes truthfully and accurately the varied existence of different realities. Authenticity
demonstrates the in-depth feelings and emotions of participants' lives as they are lived. This enables readers to
develop a heightened sensitivity to and understanding of the issues being portrayed, with a sense of the mood,
feeling of experience, language and context of those lives (Polit & Beck, 2008).

Statistics

Statistics is a branch of mathematics used to summarize, organize, present, analyze and interpret numerical data,
such as the numerical characteristics of sample parameters and the numerical characteristics of a population.

Statistics improves the quality of data through the design of experiments and survey sampling. Statistics also
provides tools for prediction and forecasting using data and statistical models. Statistics is applicable to a wide
variety of academic disciplines, including the natural and social sciences, government, business and nursing.

Kinds of Statistics

Statistical methods can be used to summarize or describe a collection of data; this is called descriptive statistics. This
is useful in research, particularly when communicating the results of experiments. In addition, randomness and
uncertainty in the observations may be used to draw inferences about the process or population being studied; this
is called inferential statistics.

Inference is a vital element of scientific research, since it will predict a phenomenon based on data leading to theory
development. These predictions are scientifically tested to further prove the theory. If the inference is correct, then
the descriptive statistics of the new data increase the testability of that hypothesis. Descriptive statistics and
inferential statistics or predictive statistics together compare applied statistics.

1. Descriptive statistics or Summary Statistics

These are statistics intended to organize and summarize numerical data from the population and sample.

· One-tailed test of significance is the test for a directional hypothesis, in which the extreme statistical values of interest occur on a single tail of the curve. It requires the researcher to have sufficient knowledge of the variables to predict whether the difference will fall in the tail above the mean or the tail below the mean.
· Two-tailed test of significance is the analysis of a non-directional hypothesis, in which extreme statistical values on either tail of the curve are of interest (Burns & Grove, 2007).

Statistical Tools for Treatment of Data

The following statistical techniques are used to treat research data for an in-depth solution of the problems raised in studies:

1. Percentage (P) is computed to determine the proportion of a part to a whole such as a given number
of respondents in relation to the entire population. For example, the percentage of female patients in a
hospital suffering from mental illness vis-à-vis the total number of mentally ill patients.
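The percentage computation above can be sketched in Python; the patient counts are hypothetical figures for illustration only.

```python
# Percentage (P) = (part / whole) * 100
# Hypothetical figures: 30 female patients suffering from mental illness
# out of 120 mentally ill patients in total.
part = 30
whole = 120
percentage = (part / whole) * 100
print(percentage)  # 25.0
```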

2. Ranking is used to determine the order of decreasing or increasing magnitude of variables. The largest frequency
is ranked 1, the second 2, and so on down to the last rank.
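Ranking by decreasing magnitude can be sketched as follows; the categories and frequencies are hypothetical.

```python
# Ranking: the largest frequency is ranked 1, the next 2, and so on.
# Hypothetical response frequencies per category.
frequencies = {"A": 12, "B": 45, "C": 7, "D": 30}

# Sort categories by decreasing frequency and assign ranks.
ordered = sorted(frequencies, key=frequencies.get, reverse=True)
ranks = {category: rank for rank, category in enumerate(ordered, start=1)}
print(ranks)  # {'B': 1, 'D': 2, 'A': 3, 'C': 4}
```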

3. Weighted mean (WM) refers to the overall average of the responses or perceptions of the study respondents. It is computed by multiplying each score by its frequency (weight), summing the products, and dividing by the total number of responses.
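A minimal sketch of the weighted-mean computation, using hypothetical Likert-style responses:

```python
# Weighted mean (WM) = sum(score * frequency) / total number of responses.
# Hypothetical 5-point Likert responses: score -> number of respondents.
responses = {5: 2, 4: 3, 3: 5}

total_weighted = sum(score * freq for score, freq in responses.items())
total_responses = sum(responses.values())
weighted_mean = total_weighted / total_responses
print(weighted_mean)  # 3.7
```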

4. Measures of Central Tendency

a. Arithmetic mean or average (X) describes the central tendency of the given criteria or variables. All the numbers in a series are added together and divided by the count of numbers in the series.

b. Median means central position. The ratings are arranged in an array from highest to lowest or vice versa; the median is the central value in the series. If the number of items is odd, the median is the middle item. If the number of items is even, the average of the two middle items is taken as the median.

Example: 70 73 76 = median is 73; 70 73 76 79 = median is (73 + 76) / 2 = 149 / 2 = 74.5

c. Mode is the score with the greatest number of observations or responses, that is, the most frequently occurring value in a series of numbers. When more than one value equally shares the highest frequency, the data are said to be multimodal.

Example:

Score   # of Responses
70      5
73      10
76      15

Mode = 76

In descriptive statistics the mode, median, and mean are used together to describe a set of data.
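All three measures of central tendency can be computed with Python's standard statistics module; the scores below are hypothetical.

```python
import statistics

# Hypothetical scores from six respondents.
scores = [70, 73, 73, 76, 76, 76]

mean = statistics.mean(scores)      # sum of all scores divided by their count
median = statistics.median(scores)  # average of the two middle values (even n)
mode = statistics.mode(scores)      # most frequently occurring score

print(mean, median, mode)
```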

Inferential Statistics

Researchers use inferential statistics to determine the probability that the null hypothesis is untrue.

Levels of Significance

This is used to indicate the chance that researchers are wrong in rejecting the null hypothesis. It is also called the level of probability, or p level. When p = .01, for example, the probability of finding the stated difference as a result of chance alone is only 1 in 100.

Revisiting Hypothesis

Research hypothesis: the research prediction that is tested (e.g., students in situation A will perform better than students in situation B).

Null hypothesis: a statement of “no difference” between the means of two populations (there will be no difference in the performance of students in situations A and B). The null hypothesis is a technical necessity of inferential statistics.


A Type I error is made when a researcher rejects the null hypothesis when it is actually true. The probability of making this type of error is equal to the level of significance. A Type II error is made when a researcher accepts the null hypothesis when it is actually false. As the level of significance increases, the likelihood of making a Type II error decreases.

Researchers generally look for levels of significance equal to or less than .05. If the desired level of significance is
achieved, the null hypothesis is rejected and we say that there is a statistically significant difference in the means.

Parametric Tests

This is used when the researcher can assume that the population values are normally distributed, the variances are equal, and the data are interval or ratio in scale.

t-test

It is the most common statistical procedure for determining the level of significance when two means are compared.
It generates a number that is used to determine the p-level of rejecting the null hypothesis and assumes equal
variability in both data sets. A t-test may also be used when a researcher wants to show that a correlation coefficient
is significantly different from 0 (which would indicate no correlation).
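An independent-samples t statistic can be sketched from first principles with the standard library; the two groups below are hypothetical scores, and in practice a library routine such as SciPy's ttest_ind would also report the p value.

```python
import math
import statistics

# Hypothetical scores for two independent groups.
group_a = [5, 6, 7, 8, 9]
group_b = [1, 2, 3, 4, 5]

n_a, n_b = len(group_a), len(group_b)
mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)

# Pooled variance: assumes equal variability in both data sets.
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)

# t = difference in means / standard error of the difference.
t = (mean_a - mean_b) / math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
print(t)  # 4.0
```

The resulting t value is then compared against a t distribution with n_a + n_b - 2 degrees of freedom to obtain the p level.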

Analysis of Variance (ANOVA)

Similar to a t-test, but used when there are more than two groups being compared. ANOVA is an extension of the t-
test and allows a researcher to examine differences in all population means simultaneously rather than conducting a
series of t-tests. It uses variances (rather than means) of groups to calculate a value that reflects the degree of
differences in the means

A "1x4 ANOVA" is a one-way ANOVA (i.e., one independent variable) comparing four group means.
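A one-way ANOVA F statistic can be sketched from first principles; the three groups below are hypothetical, and a library routine such as SciPy's f_oneway would yield the same F along with a p value.

```python
import statistics

# Hypothetical scores for three independent groups (a one-way ANOVA).
groups = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

all_scores = [x for g in groups for x in g]
grand_mean = statistics.mean(all_scores)

# Between-groups sum of squares: spread of group means around the grand mean.
ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
# Within-groups sum of squares: spread of scores around their own group mean.
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)

# F = mean square between / mean square within.
f = (ss_between / df_between) / (ss_within / df_within)
print(f)  # 27.0
```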

Factorial Analysis of Variance

This is used when there are two or more independent variables being analyzed simultaneously. A "2x3 ANOVA" indicates two independent variables, one with two levels and one with three, so that six group combinations are compared.

Post hoc test

Post hoc (“after this” in Latin) tests are used to uncover specific differences between three or more group means when an analysis of variance (ANOVA) is significant; they are statistical tests that tell the researcher which means differ. Common post hoc comparisons include Fisher’s LSD (least-significant difference), Duncan’s new multiple range test, the Newman-Keuls test, Tukey’s test, and Scheffé’s test. The choice of post hoc test depends on the characteristics of the data sets.

ANCOVA

Analysis of covariance (ANCOVA) is used when the researcher needs to adjust initial group differences statistically on
one or more variables that are related to the independent variable but uncontrolled, and to increase the likelihood
of finding a significant difference between two group means.

Example:

You might want to find out if a new drug works for depression. The study has three treatment groups and one
control group. A regular ANOVA can tell you if the treatment works. ANCOVA can control for other factors that might
influence the outcome. For example: family life, job status, or drug use.

Multivariate analysis

This is used to investigate problems in which the researcher is interested in studying more than one dependent
variable

Example:

“Attitudes towards MCN” is a complex construct that might involve things like enjoying MCN, valuing MCN, attitudes towards different concepts in MCN, Return Demonstration in Newborn Care, DR Nursing, etc.

Nonparametric Tests

A nonparametric test (sometimes called a distribution-free test) is used when a researcher does not assume anything about the underlying distribution (for example, that the data come from a normal distribution). The term “nonparametric” does not quite mean that you know nothing about the population; it usually means that you know the population data do not have a normal distribution. Nonparametric statistics are distribution-free statistics used with data at the nominal or ordinal level of measurement.

Chi square

A nonparametric procedure used when data are in nominal form. It is a way of answering questions about relationships based on the frequencies of observations in categories.

Example:

What is the relationship between year in college (freshman, sophomore, junior, senior) and use of campus counseling services? Responses to this question will involve a count of how many in each group use the counseling service. The independent variable is year in college, which has four categories.
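A chi-square statistic for a contingency table can be sketched from first principles. The 2x2 counts below are hypothetical (for simplicity, two groups rather than the four in the example above); SciPy's chi2_contingency would also report the p value and degrees of freedom.

```python
# Hypothetical 2x2 contingency table of observed frequencies,
# e.g., rows = year group, columns = uses service / does not.
observed = [[10, 20],
            [20, 10]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Expected frequency per cell = row total * column total / grand total.
chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

print(round(chi_square, 4))  # 6.6667
```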

Spearman rho

It is a non-parametric test used to measure the strength of association between two variables, where the value r = 1
means a perfect positive correlation and the value r = -1 means a perfect negative correlation.

Example:

Find out whether people's height and shoe size are correlated (they will be - the taller people are, the bigger their
feet are likely to be).
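Spearman's rho can be sketched with the rank-difference formula, which holds when there are no tied ranks; the height and shoe-size figures below are hypothetical.

```python
# Spearman rho = 1 - (6 * sum(d^2)) / (n * (n^2 - 1)), where d is the
# difference between paired ranks (formula for untied ranks).
heights = [150, 160, 170, 180, 190]
shoe_sizes = [36, 38, 40, 42, 44]

def ranks(values):
    # Rank 1 for the smallest value (no ties in this hypothetical data).
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

n = len(heights)
d_squared = sum((a - b) ** 2 for a, b in zip(ranks(heights), ranks(shoe_sizes)))
rho = 1 - (6 * d_squared) / (n * (n ** 2 - 1))
print(rho)  # 1.0 (a perfect positive correlation)
```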

Thematic Analysis (TA)

The analysis of qualitative materials typically begins with a search for broad categories or themes. In their thorough review of how the term theme is used among qualitative researchers, DeSantis and Ugarriza (2000) offered this definition: “A theme is an abstract entity that brings meaning and identity to a recurrent experience and its variant manifestations. As such, a theme captures and unifies the nature or basis of the experience into a meaningful whole.”

Thematic analysis often relies on what Spradley (1979) called the similarity principle and the contrast principle. The
similarity principle involves looking for units of information with similar content, symbols or meanings. The contrast
principle guides efforts to find out how content or symbols differ from other content or symbols – that is, to identify what is distinctive about emerging themes or categories.

Phases in Reflexive Thematic Analysis

In this TA, there are no rules to follow rigidly, but rather a series of conceptual and practice-oriented ‘tools’ that guide the analysis to facilitate a rigorous process of data interrogation and engagement. This approach is now called reflexive TA. The six phases are the following:

1. Familiarization with the data. This phase involves reading and re-reading the data, to become immersed and
intimately familiar with its content.

2. Coding. This phase involves generating succinct labels (codes!) that identify important features of the data that
might be relevant to answering the research question. It involves coding the entire dataset, and after that, collating
all the codes and all relevant data extracts, together for later stages of analysis.

3. Generating initial themes. This phase involves examining the codes and collated data to identify significant
broader patterns of meaning (potential themes). It then involves collating data relevant to each candidate theme, so
that you can work with the data and review the viability of each candidate theme.

4. Reviewing themes. This phase involves checking the candidate themes against the dataset, to determine that they tell a convincing story of the data, and one that answers the research question. In this phase, themes are typically refined, which sometimes involves splitting, combining, or discarding them. In this TA approach, themes are defined as patterns of shared meaning underpinned by a central concept or idea.

5. Defining and naming themes. This phase involves developing a detailed analysis of each theme, working out the
scope and focus of each theme, determining the ‘story’ of each. It also involves deciding on an informative name for
each theme.

6. Writing up. This final phase involves weaving together the analytic narrative and data extracts, and contextualizing
the analysis in relation to existing literature.
