
Types of Research Data

1. Cross-sectional data - data collected after the study design is completed, on events occurring AT
THAT TIME; covers only a short period

2. Prospective data - data collected after the study design is completed, BUT on events occurring AFTER the study begins, i.e. FUTURE DATA; ideal for long periods of time; used in experimental studies

3. Retrospective data - formerly known as EX-POST FACTO research design, data collected from the PAST
or "after the fact"

Prospective and Retrospective are both longitudinal studies because they extend over a long period of
time

Categories of data collection

1. Primary - first-hand information, unique to the researcher, who PERSONALLY collects fresh data

2. Secondary - second-hand information; the information is already out there, easier to collect

Methods of collecting data

1. Use of already available data - reports and documents already released, or obtained from organizations or offices

1.1 Raw data - birth dates, admissions, discharges/basic documents

1.2 Tabular data - data presented in tables indicating counts, e.g. number of patients, etc.

2. Use of OBSERVER'S data - gathering of data through actual observation and recording of events

Types of observers

1. Non-participant observer - observer DOES NOT share the same setting with the subjects being
observed and is not a member of the group; advantageous due to minimal subjective judgement
Types of non-participant observers

1.1 Overt non-participant - observer makes self known to the subjects; the task is conducting the study and informing the subjects that they are being observed

1.2 Covert non-participant - observer does not identify self to the subjects being observed; less advantageous, since consent from subjects is normally required

2. Participant observer - observer DOES share the same setting with the subjects and is acquainted with them; may become a member of the group to collect data and takes part in its activities

Types of participant observers

2.1 Overt participant - observer is involved with the group, and the subjects are aware that they are being observed

2.2 Covert participant - observer interacts with the subjects and gathers data, but without the subjects' knowledge; may also be referred to as "spying"; unethical most of the time

Two methods of observation

A. Structured - researcher has prior knowledge of the phenomenon; uses a checklist

B. Unstructured - researcher has no idea or prior knowledge of the phenomenon; requires high concentration and attention

3. Self-recording/Reporting approach - uses specially prepared documents, called INSTRUMENTS, to collect data; covers tools and tests

4. Delphi Technique - uses a series of questionnaires to gather consensus from a group of experts; continues until consensus is reached; includes a large number of participants

Types of Delphi technique

4.1 Classic - presented to a panel of INFORMED individuals who are asked for their opinion on a certain problem

4.2 Policy - used mostly in organizations, where a committee is involved

4.3 Real-time - uses structured, face-to-face analysis in real time

4.4 Modified - uses interviews and FGDs (focus group discussions) to gather opinions on a certain problem/issue

4.5 E-Delphi - uses electronic means such as email or online platforms, following either the modified or classic format

5. Critical Incident Technique - employs a set of principles for collecting data; data are based on ACTUAL
INCIDENTS AND NOT HYPOTHETICAL ones; the researcher develops a codebook to DEFINE DATA before starting
to collect data

5.1 Variables - qualities, properties, characteristics, things, or situations that MAY CHANGE/VARY and can be MANIPULATED, MEASURED, AND CONTROLLED

5.2 Measurement - procedure for assigning numerical values to represent the AMOUNT
of an attribute in a person/object

Levels of measurement

1. Nominal - classifies variables into categories BUT the categories cannot be ranked

2. Ordinal - used to show RELATIVE RANKINGS of variables, ordering observations according to MAGNITUDE or INTENSITY

2.1 Likert scale - respondents agree or disagree with statements expressing a viewpoint on a topic

2.2 Semantic differential scale - rates concepts on BIPOLAR ADJECTIVES using a 7-point scale, from one extreme to the other

2.3 Vignettes - brief case reports or DESCRIPTIONS OF EVENTS to which participants are asked to react, used to get respondents' PERCEPTIONS of a phenomenon

2.4 Q-sorts - respondents sort cards along a specified bipolar dimension; there may be 50 to 100 cards, sorted into 9 or 11 piles

3. Interval - rankings of variables on a scale with equal intervals between the numbers,
CONSISTS OF REAL NUMBERS, zero on the scale is not absolute

e.g. $50, $100, $150

4. Ratio - highest and most informative scale, precise scale, absolute zero point

e.g. 0.00, 2.00, 4.00

Sources of measurement errors

1. Environmental contaminants - responses/scores are AFFECTED BY SITUATION

2. Variation in personal factors - response is affected by PERSONAL STATES which influence motivation
to cooperate

3. Quality of responses - characteristics of respondents can interfere with accurate data, e.g. respondents who agree to statements regardless of content

4. Variation in data collection - DIFFERENT WAYS of collecting data can cause SCORE/RESPONSE
VARIATION

5. Clarity of the instrument - if instructions are poorly understood, results will be poor DUE TO
CONFUSION/MISUNDERSTANDING

6. Sampling of items - errors can result due to sampling of items to measure a variable

7. Format of the instrument - even TECHNICAL ASPECTS such as grammar, format and translation can
affect the response
RELIABILITY - refers to the ACCURACY and PRECISION of a tool

Methods of testing reliability

A. Stability of measurement - the measurement can be REPEATED OVER AND OVER at different times within the same research and will produce the same result

Major limitation - can only be done if that trait is CONSTANT OVER TIME; not useful for
changeable states

> Stable concept - intelligence (can be measured repeatedly)

> Unstable concept - pain (changeable over time)

Test of stability

1. Test-retest - repeated measurements over time using the same tool on the same
subjects are EXPECTED to produce the same results; used in interviews, examinations, and questionnaires that GIVE
CONSISTENT RESULTS (a small numeric sketch follows this list)

2. Repeated observations - same concept as test-retest, but applied to YOU as the observer: ask yourself whether you are consistent in observing, whether you observe the same way as before, and whether the trait you are observing is still the same
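To make the test-retest idea concrete, here is a minimal sketch (the scores are invented, not from these notes): a Pearson correlation between two administrations of the same tool on the same subjects, where a high r suggests stable measurement.

```python
# Minimal sketch of test-retest reliability: correlate two administrations
# of the same tool on the same subjects. Scores are invented for demonstration.

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

test1 = [12, 15, 9, 20, 17, 11, 14]   # first administration
test2 = [13, 14, 10, 19, 18, 10, 15]  # same subjects, same tool, later time

print(f"test-retest r = {pearson_r(test1, test2):.2f}")  # high r -> stable over time
```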

B. Internal consistency - the instrument should show that all indicators measure the same
characteristic or trait; must be established before an instrument can be used officially

NOTE:

> new instruments require pilot testing

> when revising a tool, it is considered AS NEW

> an instrument should be tested each time it is used with a new population/setting

Tests of internal consistency

1. Split-half correlations - scores on one half of respondents' answers are compared to the other half; if items are CONSISTENT, then scores of the two halves are HIGHLY CORRELATED (see the sketch after this list)

2. Cronbach's alpha coefficient - measures how closely related a set of items are as a
group; very useful for highly structured quantitative data collection tools
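Both internal-consistency tests can be sketched in a few lines. In the snippet below the 6x5 score matrix is invented, and the Spearman-Brown correction applied to the split-half correlation is standard practice even though the notes above do not name it:

```python
# Illustrative sketch of the two internal-consistency tests above.
# The 5-item, 6-respondent score matrix is invented for demonstration.
import numpy as np

scores = np.array([  # rows = respondents, columns = items (1-5 Likert)
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3],
    [4, 4, 4, 5, 4],
    [1, 2, 1, 2, 1],
])

# 1. Split-half: correlate odd-item totals with even-item totals, then apply
#    the Spearman-Brown correction to estimate full-length reliability.
odd, even = scores[:, 0::2].sum(axis=1), scores[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# 2. Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)
k = scores.shape[1]
alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum()
                       / scores.sum(axis=1).var(ddof=1))

print(f"split-half (Spearman-Brown) = {split_half:.2f}")
print(f"Cronbach's alpha            = {alpha:.2f}")
```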

C. Equivalence - degree to which two or more observers agree about scoring; a HIGH LEVEL OF
AGREEMENT means errors have been MINIMIZED

Tests of equivalence

1. Alternate form - two tests are developed with the SAME CONTENT but DIFFERENT
INDIVIDUAL ITEMS; obtaining similar results means the instrument is reliable in both forms

2. Inter-rater/intra-rater - used when the design calls for observation: whether two observers (inter-rater) using the same tool at the same time produce the same results, or whether one observer (intra-rater) measuring a CONSTANT PHENOMENON produces the same results on repeated occasions. If the observers use the tool the same way and produce the same results, then the tool is reliable
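As a rough illustration of equivalence, the sketch below computes simple percent agreement between two observers rating the same events (the ratings are invented; Cohen's kappa, which additionally corrects for chance agreement, is a common alternative not named in these notes):

```python
# Illustrative sketch: inter-rater agreement between two observers scoring
# the same events with the same tool. Ratings are invented for demonstration.

rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "yes", "no", "no",  "no", "no", "yes", "yes"]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = matches / len(rater_a)
print(f"percent agreement = {agreement:.0%}")  # high agreement -> errors minimized
```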

Validity - degree to which the tool measures what it INTENDS to measure; refers to an instrument's
ability to ACTUALLY MEASURE what it is SUPPOSED TO MEASURE

A. Self-evident measures - the instrument appears to measure what it is supposed to measure; deals with
basic levels of knowledge about the variable

>Face validity - refers to whether the tool looks AS THOUGH it is measuring the appropriate construct

>Content validity - degree to which an instrument has an appropriate sample of items for the variable being measured
a. Use of judge panels - used in face validity; you put together a group of people who are experts or knowledgeable on the topic to test whether your instrument is good to go, or to help you develop questions. Referred to as a Panel of Experts or SMEs

b. Content validity index (CVI)

Parameters: 1 = not relevant, 2 = somewhat relevant, 3 = quite relevant, 4 = highly relevant

ITEM CVI (I-CVI) = number of experts giving a rating of 3 or 4 / total number of experts; 0.78 or higher is acceptable

SCALE CVI (S-CVI) = average across the I-CVIs; 0.90 or higher is acceptable
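The two CVI formulas above translate directly into code. In this sketch the expert ratings are invented; an item counts toward the I-CVI when it receives a 3 or 4:

```python
# Sketch of the I-CVI and S-CVI formulas given above. The expert ratings
# (1-4 relevance scale) are invented for demonstration.

ratings = {  # item -> one rating per expert
    "item 1": [4, 4, 3, 4, 3],
    "item 2": [3, 4, 4, 4, 4],
    "item 3": [2, 3, 4, 3, 1],
}

item_cvi = {
    item: sum(r >= 3 for r in rates) / len(rates)  # experts rating 3 or 4 / all experts
    for item, rates in ratings.items()
}
scale_cvi = sum(item_cvi.values()) / len(item_cvi)  # S-CVI = average of I-CVIs

for item, cvi in item_cvi.items():
    print(f"{item}: I-CVI = {cvi:.2f} ({'ok' if cvi >= 0.78 else 'revise'})")
print(f"S-CVI = {scale_cvi:.2f} ({'ok' if scale_cvi >= 0.90 else 'revise'})")
```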

B. PRAGMATIC MEASURES - also referred to as Criterion-related validity

>it tests practical value of a research instrument and answers the questions of "does it work?" or "does
it do what it's supposed to do?"

>determines relationship between instrument and external criterion

>one requirement is the availability of a valid criterion which can be compared to the instrument being
measured

>instrument is valid IF SCORES CORRELATE HIGHLY WITH SCORES ON THE CRITERION

1. Concurrent validity - refers to an instrument's ability to distinguish individuals who differ on a present or current criterion; if scores for participants who have the current "characteristic" correlate highly with that criterion, the instrument has concurrent validity

2. Predictive validity - differentiates people's performance on a future criterion; instruments that
ACCURATELY PREDICT FUTURE OCCURRENCES; measures designed to predict success, e.g. in educational
programs, fall into this category

C. Construct validity - attempts to answer the question, "what is this instrument really measuring?";
used mainly for measures of TRAITS, FEELINGS, GRIEF, SATISFACTION; actually tests the hypothesis or
theory the instrument is measuring

1. Contrasted groups or known-group technique - comparing two groups, one of which is VERY HIGH on the concept and one VERY LOW

2. Experimental manipulation - designed to TEST THE THEORY/CONCEPTUAL FRAMEWORK underlying the instrument

3. Multi-trait/multi-method matrix method (MTMM) - different measures of the SAME CONSTRUCT produce VERY SIMILAR RESULTS, and measures of DIFFERENT CONSTRUCTS produce VERY DIFFERENT RESULTS

PROCEDURES:

1. Convergent validity - evidence that DIFFERENT METHODS yield SIMILAR RESULTS

2. Discriminant validity - ability to DIFFERENTIATE from OTHER SIMILAR CONSTRUCTS

3. MTMM - the PREFERRED METHOD, because you have two or more tools designed to
measure the construct being studied and one or more measures of a different construct; to do so, you
must have MORE THAN ONE METHOD OF MEASURING, and this can be DONE AT THE SAME TIME (see the sketch below)
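A tiny simulation can show the pattern MTMM looks for. Everything here is invented for illustration: two methods measuring the same construct should correlate highly (convergent validity), while measures of different constructs should not (discriminant validity):

```python
# Illustrative sketch of the pattern MTMM looks for, on invented data:
# two methods measuring the same construct (anxiety, by self-report and by
# observation) vs. a measure of a different construct (job satisfaction).
import numpy as np

rng = np.random.default_rng(0)
anxiety = rng.normal(size=200)                      # latent construct
self_report = anxiety + 0.4 * rng.normal(size=200)  # method 1 (noisy)
observation = anxiety + 0.4 * rng.normal(size=200)  # method 2 (noisy)
satisfaction = rng.normal(size=200)                 # unrelated construct

print(f"same construct, different methods: r = "
      f"{np.corrcoef(self_report, observation)[0, 1]:.2f}  (should be high)")
print(f"different constructs:              r = "
      f"{np.corrcoef(self_report, satisfaction)[0, 1]:.2f}  (should be low)")
```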

>Sensitivity - correctly screens or identifies the cases/variables to be measured in order to diagnose the condition

>Specificity - correctly identifies non-cases or extraneous variables and screens out those conditions not necessary for manipulation (both are sketched below)
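Sensitivity and specificity reduce to the standard screening formulas over true/false positives and negatives; the counts in this sketch are invented:

```python
# Sketch of sensitivity and specificity using the standard screening
# formulas (true/false positives and negatives); the counts are invented.

tp, fn = 45, 5    # true cases: correctly identified vs missed
tn, fp = 90, 10   # non-cases: correctly screened out vs wrongly flagged

sensitivity = tp / (tp + fn)  # proportion of true cases correctly identified
specificity = tn / (tn + fp)  # proportion of non-cases correctly screened out

print(f"sensitivity = {sensitivity:.0%}")  # 90%
print(f"specificity = {specificity:.0%}")  # 90%
```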
Threats to Construct validity:

a. Reactivity - when people's responses reflect, in part, their awareness of being studied; these perceptions become part of the treatment construct under study

b. Researcher expectancies - stems from the researcher's influence on participant responses through subtle (or not-so-subtle) communication about desired outcomes; because it is SUBTLE, it presents as a non-manipulated independent variable

c. Novelty effects - when a treatment is new, participants and research agents alike
MIGHT ALTER THEIR BEHAVIOR; people may be either enthusiastic or skeptical about new methods, thus
clouding the ACTUAL nature or intent of the treatment

d. Compensatory effects - usually happen in intervention studies, in which comparison groups not obtaining the preferred treatment are PROVIDED WITH COMPENSATIONS that make the comparison groups more equal than originally planned

Compensatory RIVALRY - a related threat arising from the control group members' desire to demonstrate that they can do as well as those receiving a special treatment

e. Treatment diffusion or contamination - may occur when participants in a control group condition receive services similar to those available in the treatment condition WHEN THEY ARE NOT MEANT TO, since we are measuring TWO different groups with different treatments

Internal validity - means the degree to which CHANGES in the DEPENDENT VARIABLE (effects) can be
ATTRIBUTED to the INDEPENDENT VARIABLE (cause)

Threats to Internal validity:

a. Selection Bias - when study results are attributed to the experimental treatment BUT
are ACTUALLY DUE TO DIFFERENCES AMONG SUBJECTS PRESENT EVEN FROM THE START

b. History - when some EVENT besides the experimental treatment TAKES PLACE
DURING the course of the study and INFLUENCES THE VARIABLE
c. Maturation - changes WITHIN the subjects occur during the study and affect the study
results

d. Testing - only happens when PRE-TESTING is a REQUISITE; taking the pre-test influences the post-test scores

e. Instrumentation Change - caused by a change in the accuracy of the instrument or of the ratings, resulting in a difference between pre-test and post-test results

f. Mortality - when a difference exists between the dropout rates of the experimental group and the control group

External validity - degree to which study results can be generalized to other populations and settings, or are influenced by external factors

Threats to external validity:

a. Hawthorne Effect - subjects noticeably change their behaviour because they are aware that they are
being observed

NOTE: The study is termed a blind experiment when the subject does not know whether
he or she is receiving the treatment or a placebo

b. Experimenter Effect - when the researcher's behaviour influences the behaviour of the subjects, such as the researcher's facial expression, gender, and clothing, among others

c. Reactive Effect of the Pre-Test - subjects are sensitized to the treatment by taking the pre-test, which thereafter influences the post-test results

d. Halo Effect - tendency of the researcher to rate the subject high or low because of the
impression he has of the latter; an error in reasoning; influences multiple judgements or ratings of
unrelated factors
How to avoid:

> Double blind method - to remove bias, neither the subject nor the observer knows the
specific research objective or the specific subjects who belong to the experimental or control group.
Hence, the observer cannot distort the data

> Double observer method - may be used to determine the extent of bias between the
two observers as they both observe and record the subjects’ performance on a dependent variable

Statistical Conclusion Validity - degree to which CONCLUSIONS about the relationships AMONG
VARIABLES based on the data are CORRECT OR REASONABLE; ensured by using adequate sampling
procedures, appropriate statistical tests, and reliable measurement procedures

Threats to statistical conclusion validity

a. Low Statistical Power - there are two aspects to this: first, statistical power, which
refers to the ability to detect true relationships among variables; and second, precision, which refers to
accurate measuring tools, control over confounding variables, and powerful statistical methods

b. Restriction of Range - case in which the observed sample IS NOT AVAILABLE ACROSS the
entire range of interest; the observed correlation will be lower than it would otherwise be, since the
sample is range-restricted (a small simulation appears at the end of these notes)

c. Treatment fidelity - also called procedural integrity; the extent to which the implementation of an intervention is faithful to its plan; refers to the methodological strategies used to EVALUATE the EXTENT to which an intervention is being implemented AS INTENDED.
Notes:

>This is actually not a threat, BUT if not adhered to and measured, it could be a threat to the
study and AFFECT INTERNAL VALIDITY or the study itself

>Lack of standardization adds extraneous variation and can diminish the intervention’s full force
>A manipulation check is important to assess whether the treatment was in place, was
understood, or was perceived in the intended manner.

>Another issue is that participants often fail to receive the desired intervention due to lack of
treatment adherence; related to this is ENACTMENT, which refers to participants' performance of the
treatment-related skills, behaviors, and cognitive strategies in relevant real-life settings.

> To promote adherence: make the intervention as enjoyable as possible, offer incentives, and reduce
burden in terms of the intervention and data collection
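Returning to threat (b), restriction of range, a small simulation makes the attenuation visible; the data and the truncation point are invented for illustration:

```python
# Illustrative simulation of restriction of range (threat b above): the
# correlation computed in a truncated sample is lower than in the full range.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2000)                  # e.g. an admission test score
y = 0.7 * x + 0.7 * rng.normal(size=2000)  # criterion related to x

full_r = np.corrcoef(x, y)[0, 1]

keep = x > 0.5                             # observe only the high scorers
restricted_r = np.corrcoef(x[keep], y[keep])[0, 1]

print(f"full-range r  = {full_r:.2f}")        # ~0.70
print(f"restricted r  = {restricted_r:.2f}")  # noticeably lower
```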
