
Practical Research 2

IDENTIFYING RESEARCH DESIGNS


Methods Used (Research Design)

 a brief description of the method of research used in doing the study


 quantitative or qualitative
What is a Research Design?

 “Research design is a master plan specifying the methods and procedures for collecting and
analyzing the needed information” (Zikmund, 1988).
 “Research design is the plan, structure, and strategy of investigation conceived so as to obtain
answers to research questions and to control variance” (Kerlinger, 1986).
What are Research Designs?

 are the specific procedures involved in the research process: data collection, data analysis, and
report writing (Creswell, 2012, p. 20).
TYPES OF QUANTITATIVE RESEARCH DESIGNS

Non-experimental
 Descriptive
 Correlational
 Causal-comparative

Experimental
 Pre-experimental
 True experimental
 Quasi-experimental
Experimental Designs

 concerned primarily with cause-and-effect relationships in studies that involve manipulation or


control of the independent variables (causes) and measurement of the dependent variables
(effects)
 also called intervention studies or group comparison studies
 are procedures in quantitative research in which the investigator determines whether an activity
or material makes a difference in results for participants
 you assess this impact by giving one group one set of activities (called an intervention) and
withholding the set from another group
KEY CHARACTERISTICS OF EXPERIMENTS

 Random assignment
 Control over extraneous variables
 Manipulation of the treatment conditions
 Outcome measures
 Group comparisons
 Threats to Validity

Random assignment

 process of assigning individuals at random to groups or to different groups in an experiment


 any bias in the personal characteristics of individuals in the experiment is distributed equally among
the groups
 equating the groups means that the researcher randomly assigns individuals to groups and equally
distributes any variability of individuals between or among the groups or conditions in the
experiment
Control over Extraneous Variables

 extraneous factors are any influences in the selection of participants, the procedures, the
statistics, or the design that are likely to affect the outcome and provide an alternative explanation
for the results other than the expected one
1) Pretests and Posttests
 pretest – provides a measure on some attribute or characteristic that you assess for participants
in an experiment before they receive a treatment
 posttest – measure on some attribute or characteristic that is assessed for participants in an
experiment after a treatment
2) Covariance
 covariates – variables that the researcher controls for using statistics and that relate to the
dependent variable but that do not relate to the independent variable
3) Matching of Participants
 process of identifying one or more personal characteristics that influence the outcome and
assigning individuals with that characteristic equally to the experimental and control groups
4) Homogenous Sample
 another way to make the groups comparable is to choose homogeneous samples by selecting people
who vary little in their personal characteristics
5) Blocking Variables
 a variable the researcher controls before the experiment starts by dividing (or “blocking”) the
participants into subgroups (or categories) and analyzing the impact of each subgroup on the
outcome
Manipulating Treatment Conditions

 once you select participants, you randomly assign them to either a control group or the
experimental group
 in an experiment, the researcher physically intervenes to alter the conditions experienced by the
experimental unit
Outcome Measures

 in all experimental situations, you assess whether a treatment condition influences an outcome or
dependent variable
 in experiments, the outcome (or response, criterion, or posttest) is the dependent variable that is the
presumed effect of the treatment variable
Group comparisons

 process of a researcher obtaining scores for individuals or groups on the dependent variable
and comparing the means and variance both within the group and between the groups
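As an illustrative sketch (not from the handout), a between-group comparison of means can be made with a two-sample t statistic; the posttest scores below are hypothetical:

```python
from statistics import mean, variance

def independent_t(group_a, group_b):
    """Two-sample t statistic with a pooled (equal-variance) estimate."""
    na, nb = len(group_a), len(group_b)
    # pooled variance combines the within-group variability of both groups
    sp2 = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

# hypothetical posttest scores for an experimental and a control group
treated = [85, 88, 90, 84, 87]
control = [80, 79, 82, 81, 78]
t = independent_t(treated, control)   # large |t| suggests a between-group difference
```

The t value would still be checked against a critical value (or p value) for the given degrees of freedom before drawing a conclusion.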
Threats to Validity
1) Internal Validity
 degree to which changes in the dependent variable can be attributed to the independent variable
 changes in the DV are caused by the IV (internally valid)
2) External Validity
 degree to which the results of the study can be generalized to other people, settings, and times
 results hold beyond the particular participants and conditions of the experiment
THREATS TO INTERNAL VALIDITY
Threats related to participants
 Selection bias
 this results when the subjects or respondents of the study are not randomly selected
 Maturation
 this happens when the experiment is conducted over a long period of time during which
most of the subjects undergo physical, emotional, and/or psychological changes
 History
 happens during the conduct of the study when an unusual event affects the result of an
experiment
 Mortality
 one or more subjects die, drop out, or transfer as in the case of a student who has not
completed his/her participation in the experiment
 Regression
 when researchers select individuals for a group based on extreme scores, those individuals will
naturally do better (or worse) on the posttest than the pretest regardless of the treatment
 Interactions with selection
 several of the threats mentioned thus far can interact (or relate) with the selection of participants
to add additional threats to an experiment

Threats related to treatments


 Diffusion of treatments
 when the experimental and control groups can communicate with each other, the control group
may learn from the experimental group information about the treatment and create a threat to
internal validity
 Compensatory equalization
 when only the experimental group receives a treatment, an inequality exists that may threaten
the validity of the study
 Compensatory rivalry
 if you publicly announce assignment to the control and experimental groups, compensatory
rivalry may develop between the groups because the control group feels that it is the “underdog”
 Resentful demoralization
 when a control group is used, individuals in this group may become resentful and demoralized
because they perceive that they receive a less desirable treatment than the other groups

Threats related to procedures


 Instrumentation
 the instrumentation used in gathering the data must not be changed or replaced during the
conduct of the study
 Testing
 occurs when taking the pretest itself familiarizes subjects with the measure and affects their
performance on the posttest

THREATS TO EXTERNAL VALIDITY


 Experimenter effect
 the characteristics of the researcher affect the behavior of the subjects or respondents
 Hawthorne effect
 the respondents or subjects respond artificially to the treatment because they know they are
being observed as part of a research study
 Measurement effect
 also called the reactive effects of the pretest
 Interaction of selection and treatment
 involves the inability to generalize beyond the groups in the experiment, such as other racial,
social, geographical, age, gender, or personality groups

 Interaction of setting and treatment


 arises from the inability to generalize from the setting where the experiment occurred to another
setting
 Interaction of history and treatment
 develops when the researcher tries to generalize findings to past and future situations
TYPES OF EXPERIMENTAL RESEARCH DESIGNS
R – random selection
O1 – pretest
O2 – posttest
X – intervention
Pre-experimental design
 is considered very weak because the researcher has little control over the research
 One-shot case study
 a single group is exposed to an experimental treatment and observed after the treatment
 the procedure is summarized as: X O
X – intervention
O – test
 One-group pretest-posttest design
 it provides a comparative description of a group of subjects before and after the experimental
treatment
 the procedure is summarized as: O1 X O2
O1 – pretest
X – intervention
O2 – posttest

True experimental design


 a design is considered a true experiment when:
1. the researcher manipulates the experimental variables
2. there is one experimental group and one comparison or control group
3. the subjects are randomly assigned either to the comparison or experimental group

 Pretest-posttest controlled group design


 subjects are randomly assigned to groups
 a pretest is given to both groups
 the experimental group receives treatment while control group does not
 a posttest is given to both groups
 the procedure is summarized as:
- R O1 X O2 (experimental group)
- R O1 O2 (control group)
R – random selection
O1 – pretest
X – intervention
O2 – posttest
 Posttest only controlled group design
 subjects are randomly assigned to groups
 the experimental group receives treatment while the control group does not
 a posttest is given to both groups
 the procedure is summarized as:
- R X O2 (experimental group)
- R O2 (control group)
R – random selection
X – intervention
O2 – posttest
 Solomon four-group design
 subjects are randomly assigned to groups
 two of the groups (experimental group 1 and control group 1) are pretested
 the two other groups (experimental group 2 and control group 2) are not pretested; both
experimental groups receive the treatment while both control groups do not
 the procedure is summarized as:
- R O1 X O2 (experimental group 1)
- R O1 O2 (control group 1)
- R X O2 (experimental group 2)
- R O2 (control group 2)
R – random selection
X – intervention
O1 – pretest
O2 – posttest

Quasi experimental design


 a design in which either there is no control group or the subjects are not randomly assigned to
groups
 Non-equivalent controlled group design
 this design is similar to the pretest-posttest control group design except that there is no random
assignment of subjects to the experimental and control groups
 the procedure is summarized as:
- O1 X O2 (experimental group)
- O1 O2 (control group)
X – intervention
O1 – pretest
O2 – posttest
 Time-series design
 the researcher periodically observes or measures the subjects
 the procedure is summarized as:
- O1 O2 O3 X O4 O5 O6
O1, O2, O3 – pretest (multiple observations)
O4, O5, O6 – posttest (multiple observations)

TYPES OF NON-EXPERIMENTAL RESEARCH DESIGNS


Survey Research Designs

 are procedures in quantitative research in which investigators administer a survey to a sample or to


the entire population of people to describe the attitudes, opinions, behaviors, or characteristics of
the population
 use survey research to describe trends, such as individual opinions about policy
issues (for example, whether male students need a haircut)
Survey/Descriptive Research Design

 used to describe a certain condition or phenomenon in a given sample using quantifiable


descriptors
 example:
- A teacher wants to determine the number of her students, grouped according to their sex, who are
still non-readers.
- A teacher wants to determine the general academic performance of her students in mathematics.

 Cross-Sectional Survey Designs


 the researcher collects data at one point in time
 uses:
- examine current attitudes, beliefs, opinions, or practices
- compare two or more educational groups
- measure community needs
- evaluate a program
- conduct a statewide study or a national survey

 Longitudinal Survey Designs


 involves the survey procedure of collecting data about trends with the same population, changes
in a cohort group or subpopulation, or changes in a panel group of the same individuals over
time
 the participants may be different or the same people
 Trend Studies – identifying a population and examining changes within that population over
time
 Cohort Study – identifies a subpopulation based on some specific characteristic and then
studies that subpopulation over time
 Panel Study – examines the same people over time

Correlational Research Design

 provide an opportunity for you to predict scores and explain the relationship among variables
 investigators use the correlation statistical test to describe and measure the degree of
association (or relationship) between two or more variables or sets of scores
 use this design when you seek to relate two or more variables to see if they influence each other

 Explanatory Design
 is a correlational design in which the researcher is interested in the extent to which two variables
(or more) co-vary, that is, where changes in one variable are reflected in changes in the other.
 Characteristics:
- The investigators correlate two or more variables.
- The researchers collect data at one point in time.
- The investigator analyzes all participants as a single group.
- The researcher obtains at least two scores for each individual in the group—one for each
variable.
- The researcher reports the use of the correlation statistical test (or an extension of it) in the
data analysis.
- The researcher makes interpretations or draws conclusions from the statistical test results.
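The correlation statistical test mentioned above is typically Pearson's r; a self-contained Python sketch with hypothetical scores for one group measured on two variables:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# hypothetical scores: hours of study and math grade for the same participants
hours  = [1, 2, 3, 4, 5]
grades = [70, 75, 80, 85, 90]
r = pearson_r(hours, grades)   # perfectly linear here, so r = 1.0
```

A positive r means the two variables co-vary in the same direction; the researcher then interprets its direction, form, and strength.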

 Prediction Design
 instead of simply relating variables, researchers seek to anticipate outcomes by using certain
variables as predictors
 prediction studies are useful because they help anticipate or forecast future behavior
 identify variables that will predict an outcome or criterion
 Characteristics:
- The authors typically include the word prediction in the title.
- The researchers typically measure the predictor variable(s) at one point in time and the
criterion variable at a later point in time.
- The authors forecast future performance
Key Characteristics

 Displays of scores (scatterplots and matrices)


 Associations between scores (direction, form, and strength)
 Multiple variable analysis (partial correlations and multiple regression)
IDENTIFYING SAMPLE AND SAMPLING TECHNIQUES
Population

 the totality and aggregate of the observations with which the researcher is concerned
 composed of persons or objects that possess some common characteristics that are of interest to the
researcher
TWO TYPES OF POPULATION
1. Target – consists of the entire group of people or objects to which the findings of the study generally
apply
2. Accessible – specific study population
Sample

 subset of the entire population or a group of individuals that represents the population and serves as
the respondents of the study
Sampling

 process of choosing a representative portion of the entire population


STRENGTHS OF SAMPLING
 Time, money, and effort are minimized. By sampling, the number of respondents, subjects, or
items to be studied becomes small yet still represents the population. As such, the work of data
collection, analysis, and interpretation is lessened.
 Sampling is more effective. Since every individual in the population is given a chance to be
selected through sampling, data are scientifically gathered, analyzed and interpreted.
 Research is made faster and cheaper. By selecting a small portion of the population as
representative, collection, analysis, and interpretation of data are faster and cheaper.
 Sampling makes research more accurate. Because of the small size of the data collected from a
small number of sources, collection, tabulation, presentation, analysis, and interpretation have fewer
errors as compared to voluminous data from the whole population.
 Sampling gives more comprehensive information. With a small sample representing a big
population, a thoroughly investigated study can yield results that give more comprehensive
information that allows generalization and conclusions.
WEAKNESS OF SAMPLING
 Due to the limited number of data sources, detailed subclassification must be prepared with utmost
care.
 Incorrect sampling design or incorrectly following the sampling plan will obtain results that are
misleading.
 Sampling requires an expert to conduct the study in an area, otherwise the results obtained will be
erroneous.
 The characteristic to be observed may occur rarely in a population, for example, teachers with over
30 years of teaching experience.
 Complicated sampling plans are laborious to prepare
FACTORS TO CONSIDER IN DETERMINING SAMPLE SIZE
Homogeneity of the Population

 the more homogeneous the population, the smaller the sample size that can be utilized; the greater
the variation within the population, the larger the sample needed
Degree of precision desired by the researcher

 a larger sample size will result in greater precision or accuracy of results


Types of sampling procedure

 probability sampling utilizes smaller sample sizes than non-probability sampling


THE USE OF FORMULAS
a. Slovin’s Formula

 It is used to compute the sample size (Sevilla, 2003). This formula is used when you have limited
information about the characteristics of the population and are using a non-probability sampling
procedure (Ellen, 2016).
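Slovin's formula is usually written as n = N / (1 + Ne²), where N is the population size and e is the margin of error (commonly 0.05). A minimal Python sketch with hypothetical figures:

```python
import math

def slovin(population_size, margin_of_error=0.05):
    """Slovin's formula: n = N / (1 + N * e^2), rounded up to a whole respondent."""
    N, e = population_size, margin_of_error
    return math.ceil(N / (1 + N * e ** 2))

# hypothetical population of 1,000 at a 5% margin of error
n = slovin(1000)   # 286 respondents
```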

b. Calmorin’s Formula

 This is used when the population is more than 100 and the researcher decides to utilize scientific
sampling (Calmorin & Calmorin, 2003).
Other considerations
a) Sample sizes as small as 30 are generally adequate to ensure that the sampling distribution of
the mean will approximate the normal curve (Shott, 1990).
b) When the total population is equal to or less than 100, this same number may serve as the
sample size. This is called universal sampling.
c) The following are the acceptable sizes for different types of research (Gay, 1976).
a. Descriptive – 10% - 20% may be required
b. Correlational – 30 subjects or respondents
c. Comparative – 15 subjects per group
d. Experimental – 15 – 30 subjects per group
Representative

 refers to the selection of individuals from a sample of a population such that the individuals selected
are typical of the population under study, enabling you to draw conclusions from the sample about the
population as a whole
Population

 is a group of individuals who have the same characteristic


Sample

 is a subgroup of the target population that the researcher plans to study for generalizing about the
target population
 The larger the sample, the smaller the potential error that the sample will differ from the
population.
 This difference between the sample estimate and the true population score is called sampling error.
TWO GENERAL TYPES OF SAMPLING METHODS/TECHNIQUES
1. Probability Sampling

 the researcher selects individuals from the population who are representative of that population
 a type of sampling in which all members of the population are given a chance of being selected. This
is also called scientific sampling

a) Simple Random Sampling


 the researcher selects participants (or units, such as schools, hospitals) for the sample so that any
individual has an equal probability of being selected from the population.
 It is made when all the members of the population are given a chance to be selected. Selection is
done by drawing lots or by using a table of random numbers.
 This is an unbiased way of selection as samples are drawn by chance.
 Get a list of the total population.
 Cut pieces of paper to small sizes (1 x 1 in.) that can be rolled.
 Put a number on each piece of paper corresponding to each member of the total population.
 Roll each piece of numbered paper and put them in a box.
 Shake well the box to give equal chance for every number to be chosen as sample.
 Pick out one rolled paper at a time and unroll it.
 Record the number of the unrolled paper.
 Repeat picking until the desired sample size is reached.
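The draw-lot steps above amount to drawing without replacement, which Python's standard library can sketch directly (the population size here is hypothetical):

```python
import random

# numbering the population replaces the rolled pieces of paper
population = list(range(1, 501))        # hypothetical list of 500 members
desired_sample_size = 30

# random.sample draws without replacement, so every member
# has an equal chance of selection, as in the box-and-draw steps
sample = random.sample(population, desired_sample_size)
```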

b) Stratified Random Sampling


 The population is first divided into different strata then sampling follows. Age, gender, and
educational qualifications are just some of the criteria used in dividing the population into strata.
 Example: Common Causes and Effects of Smoking among Senior High School Students.
› First Stratum: Public and Private Schools
› Second Stratum: Grade level
› Third Stratum: Gender
You use stratification when the population reflects an imbalance on a characteristic of interest
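A rough Python sketch of proportional stratified sampling; the strata and counts below are hypothetical, and each stratum contributes in proportion to its share of the population:

```python
import random

def stratified_sample(strata, total_n):
    """Draw from each stratum in proportion to its size (proportional allocation)."""
    N = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        n_h = round(total_n * len(members) / N)   # this stratum's share
        sample.extend(random.sample(members, n_h))
    return sample   # note: rounding can shift the total by one in general

# hypothetical strata: 300 public-school and 200 private-school students
strata = {"public": list(range(300)), "private": list(range(300, 500))}
picked = stratified_sample(strata, 50)   # 30 public + 20 private
```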
c) Cluster Sampling
 This is used when the population is homogeneous but scattered geographically in all parts of the
country and that there is no need to include all in the sampling.
 The researcher chooses a sample in two or more stages because either the researchers cannot
easily identify the population or the population is extremely large.
 Example:
› SHS honor students (n = 100)
› 10 public schools in the division; each cluster or school contributes 10 students to complete the
total sample of 100

d) Systematic Sampling
 It is a method of selecting every nth element of the population (e.g., every fifth, eighth, ninth, or
eleventh element). After the size of the sample has been determined, the selection of the sample
follows.
 This sampling scheme is used when there is a ready list of the total population.
 The steps in using these schemes are:
- Get the list of the total population.
- Divide the total population by the desired sample size to get the sampling interval.
 Example:
- If the total population is 5,000 and the desired sample is 100,
- Sampling interval = 5,000 ÷ 100 = 50
 Take No. 50 as the first sample, then every 50th person in the list (100, 150, etc.) until 100
respondents are completed.
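The interval arithmetic above can be sketched in Python; starting at a random point within the first interval is a common variant of the fixed No. 50 start described in the text:

```python
import random

def systematic_sample(population, n):
    """Select every k-th member, where k = N // n is the sampling interval."""
    k = len(population) // n             # e.g., 5000 // 100 = 50
    start = random.randrange(k)          # random start within the first interval
    return population[start::k][:n]

people = list(range(1, 5001))            # hypothetical numbered list of 5,000
chosen = systematic_sample(people, 100)  # 100 members, spaced 50 apart
```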
2. Non-probability Sampling

 A process of selecting respondents in which members of the entire population do not have an equal
chance of being selected as samples.
 The researcher selects individuals because they are available, convenient, and represent some
characteristic the investigator seeks to study.

a) Convenience Sampling
 It is also called accidental or incidental sampling.
 For example, after you have already determined the size of the sample from your population of
elementary pupils, the pupils who happen to be present during the visit will be chosen as
respondents.
 the researcher selects participants because they are willing and available to be studied
 the researcher cannot say with confidence that the individuals are representative of the population

b) Snowball Sampling
 the researcher asks participants to identify others to become members of the sample
c) Quota Sampling
 It is somewhat similar to stratified sampling in which the population is divided into homogenous
strata and then sample elements are selected from each stratum.

d) Purposive Sampling
 It involves handpicking of subjects.
 This is also called judgmental sampling.
ELEMENTS TO CONSIDER IN FORMULATING POPULATION AND SAMPLING TECHNIQUE
1. The total population and its parameters
2. The sample and its statistics
3. The sampling method, with references to support it
4. An explanation and discussion of the sampling method
5. An explanation of how the sampling is done
6. An enumeration of the qualifying criteria
7. The profile of the respondents
DEVELOPING AN INSTRUMENT AND PLANNING DATA COLLECTION
What is an Instrument?

 is a tool for measuring, observing, or documenting quantitative data.


 may be a test, questionnaire, tally sheet, log, observational checklist, inventory, or assessment
instrument.
 used to measure achievement, assess individual ability, observe behavior, develop a psychological
profile of an individual, or interview a person.
TYPES OF DATA AND MEASURES
 Performance measures
 used to assess an individual’s ability to perform on an achievement test, intelligence test, aptitude
test, interest inventory, or personality assessment inventory.
 Attitudinal measures
 used by researchers when they measure feelings
 Behavioral observations
 are made by selecting an instrument (or using a behavioral protocol) on which to record a behavior,
observing individuals for that behavior, and checking points on a scale that reflect the behavior
(behavioral checklists)
 Factual information
 or personal documents consist of numeric, individual data available in public records.
 examples of these types of data include grade reports, school attendance records, student
demographic data, and census information

Web-Based Electronic Data Collection

 In this approach, the participant in a study logs on to a computer, downloads a questionnaire from the
Internet, completes the questionnaire, and sends the completed questionnaire back to the researcher.
 Electronic data collection provides an easy, quick form of data collection.

What are the most frequently used DATA COLLECTION TECHNIQUES?

 Document Analysis
 This technique is used to analyze primary and secondary sources that are available mostly in
churches, schools, public or private offices, hospitals, or in community, municipal, and city halls. At
times, data are not available or are difficult to locate in these places, and the information gathered
tends to be incomplete or inconclusive.
 Interview
 The instrument used in this method is the interview schedule. The skill of the interviewer determines
if the interviewee is able to express his/her thoughts clearly.
 Three types: Unstructured, Structured, and Semi-structured
 Observation
 This process or technique enables the researcher to participate actively in the conduct of the
research. The instrument used in an observation is called the observation guide or observation
checklist.
 Two types: Structured and Unstructured
 Physiological Measures
 The technique involves the collection of physical data from the subjects. It is considered more
accurate and objective than other data-collection methods.
 Psychological Tests
 These include personality inventories and projective techniques.
 Personality inventories are self-reported measures that assess the differences in personality traits,
needs, or values of people.
 In projective techniques, the subject is presented with a stimulus designed to be ambiguous or vague
in meaning.
 Questionnaire
 It is the most commonly used instrument in research. It is a list of questions about a particular topic,
with spaces provided for the response to each question, and intended to be answered by a number
of persons (Good, 1984).
 It can be structured or unstructured.

What are the TYPES OF QUESTIONS?

 Yes or No Type
 Items are answerable by “yes” or “no”.

 Recognition Type
 Alternative responses are already provided, and the respondents simply choose among the given
choices. These are close-ended questions.

 Completion Type
 The respondents are asked to fill in the blanks with the necessary information. Questions are open-
ended.

 Coding Type
 Numbers are assigned to names, choices, and other pertinent data. This entails knowledge of
statistics on the part of the researcher, as the application of statistical formulas is necessary to arrive
at the findings. Example: On a scale of one (1) to ten (10), how will you rate the skills of the
manager?

 Subjective Type
 The respondents are free to give their opinions about an issue of concern. Examples: What can you
say about teachers who are deeply committed to their work? Will senior high school students be
allowed to change their specialization?

 Combination Type
 The questionnaire is a combination of two or more types of questions.

Wordings of Questions

1. State questions in an affirmative rather than in a negative manner.


2. Avoid ambiguous questions (e.g., those which contain many, always, usually, few)
3. Avoid negative questions (e.g., Don’t you disagree with the idea that minors not be allowed to drink
liquor?)

4. Avoid double-barreled questions (i.e., asking two questions in one question)
Example:
- Will you be happy joining the Division Quiz Bee and be given additional examinations afterwards?
- Do you want to run for the student council and aim to be valedictorian?

SCALES COMMONLY USED IN AN INSTRUMENT

Characteristics of a good data-collection instrument


1. It must be concise yet able to elicit the needed data.
2. It seeks information which cannot be obtained from other sources like documents that are available at
hand.
3. Questions must be arranged in sequence, from the simplest to the most complex.
4. It must also be arranged according to the questions posed in the statement of the problem.
5. It should pass validity and reliability.
6. It must be easily tabulated and interpreted.

WHAT INSTRUMENT WILL YOU USE TO COLLECT DATA?


1. Locate or Develop an Instrument
 Three options exist for obtaining an instrument to use:
- you can develop one yourself,
- locate one and modify it, or
- locate one and use it in its entirety.
 Modifying an instrument means locating an existing instrument, obtaining permission to change it,
and making changes in it to fit your requirements
 Developing an instrument consists of several steps, such as identifying the purpose of the
instrument, reviewing the literature, writing the questions, and testing the questions with individuals
similar to those you plan to study
2. Search for an Instrument
 Finding a good instrument that measures your independent, dependent, and control variables is not
easy.
 Whether you search for one instrument or several to use, several strategies can aid in your search:
- Look in published journal articles.
- Run an ERIC search.
- Examine guides to tests and instruments that are available commercially.

Criteria for Choosing a Good Instrument


1) Have authors developed the instrument recently, and can you obtain the most recent version?
2) Is the instrument widely cited by other authors?
3) Are reviews available for the instrument?
4) Is there information about the reliability and validity of scores from past uses of the instrument?
5) Does the procedure for recording data fit the research questions/hypotheses in your study?
6) Does the instrument contain accepted scales of measurement?

Validity vs Reliability
Validity – is the ability of an instrument to measure what it intends to measure
Reliability – refers to the consistency of results

Reliability and Validity

 Reliability means that scores from an instrument are stable and consistent. Scores should be nearly
the same when researchers administer the instrument multiple times at different times.
 Validity is the development of sound evidence to demonstrate that the test interpretation (of scores
about the concept or construct that the test is assumed to measure) matches its proposed use.
TYPES OF RELIABILITY
Test-retest reliability

 measures the consistency of results when you repeat the same test on the same sample at a
different point in time
 you use it when you are measuring something that you expect to stay constant in your sample
Interrater reliability

 also called interobserver reliability. It measures the degree of agreement between different people
observing or assessing the same thing
 you use it when data is collected by researchers assigning ratings, scores or categories to one or
more variables
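One common interrater statistic (not named in the text) is Cohen's kappa, which corrects raw percent agreement for agreement expected by chance; a sketch with hypothetical ratings:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    po = sum(a == b for a, b in zip(rater1, rater2)) / n       # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    pe = sum(c1[cat] * c2[cat] for cat in c1) / n ** 2         # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical on-task/off-task codes from two observers of the same pupils
obs1 = ["on", "on", "off", "on", "off", "on", "on", "off"]
obs2 = ["on", "on", "off", "off", "off", "on", "on", "off"]
kappa = cohens_kappa(obs1, obs2)   # 1.0 is perfect agreement, 0 is chance level
```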
Parallel forms reliability

 measures the correlation between two equivalent versions of a test


 you use it when you have two different assessment tools or sets of questions designed to measure
the same thing
Internal consistency

 assesses the correlation between multiple items in a test that are intended to measure the same
construct
 you can calculate internal consistency without repeating the test or involving other researchers, so
it’s a good way of assessing reliability when you only have one data set
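The usual internal-consistency statistic is Cronbach's alpha (the text does not name it): alpha = k/(k−1) × (1 − Σ item variances / variance of total scores). A sketch with a hypothetical three-item scale answered by five respondents:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists (population variances throughout)."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]   # each respondent's total
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# hypothetical 3-item scale, five respondents, scores listed per item
items = [
    [3, 4, 3, 5, 4],   # item 1
    [3, 5, 3, 4, 4],   # item 2
    [2, 4, 3, 5, 3],   # item 3
]
alpha = cronbach_alpha(items)   # values above ~0.7 are often taken as acceptable
```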
TYPES OF VALIDITY
Face validity

 It is also known as logical validity. It involves judging whether the instrument appears, on its
face, to measure what it intends to measure.
 The procedure calls only for intuitive judgment.
Content validity

 This is determined by studying the questions to see whether they are able to elicit the necessary
information.
 An instrument with high content validity has to meet the objectives of the research.
Construct validity

 This refers to whether the test corresponds to its theoretical construct.


 It is concerned with the extent to which a particular measure relates to other measures and its
consistency with theoretically derived hypotheses.
Criterion-related validity or equivalent test

 This type of validity is an expression of how scores from the test are correlated with an external
criterion.
a. Concurrent – deals with measures that can be administered and validated at the same time.
b. Predictive – refers to how well the test predicts some future behavior of the examinees.
Reliability – results are consistent
Validity – results satisfy objectives
