
ST. JOHN PAUL II INSTITUTE OF TECHNOLOGY
TRAINING ASSESSMENT AND LEARNING CENTER, INC.
FRA Building, Carmen West, Rosales, Pangasinan
Aguila Road, Brgy. Sevilla, San Fernando City, La Union

Practical Research 2
Quarter I – Module 4:
Understanding Data and Ways to
Systematically Collect Data
(Week 6)

P a g e 1 | 10
Subject: Practical Research 2
Grade & Section: Grade 12-ABM
Module No. 4
Week No. 6
Instructor: Ms. Camille N. Cornelio
Objectives:

At the end of the lesson, students should be able to:


1. Choose an appropriate quantitative research design;
2. Describe the sampling procedure and the sample;
3. Plan the data collection procedure;
4. Plan the data analysis using statistics and hypothesis testing;
5. Present a written research methodology; and
6. Implement design principles to produce creative work.

Lesson 1: Quantitative Research Design

QUANTITATIVE RESEARCH

If the researcher views quantitative design as a continuum, one end of the range
represents a design where the variables are not controlled at all and are only observed;
connections among variables are merely described. At the other end of the spectrum,
however, are designs with very close control of variables, where relationships
among those variables are clearly established. In the middle, with experimental design
shading from one type to the other, is a range that blends those two extremes together.

TYPES OF QUANTITATIVE RESEARCH

Quantitative research is a type of empirical investigation. That means the
research focuses on verifiable observation as opposed to theory or logic. Most often this
type of research is expressed in numbers. A researcher will represent and manipulate
certain observations that they are studying. They will attempt to explain what it is they
are seeing and what effect it has on the subject, and they will determine what the
changes may reflect. The overall goal is to convey numerically what is being seen in the
research and to arrive at specific and observable conclusions. (Klazema 2014)

A. Non-Experimental Research Design
Non-experimental research means there is a predictor variable or group of
subjects that cannot be manipulated by the experimenter. Typically, this means that
other routes must be used to draw conclusions, such as correlation, survey or case study.
(Kowalczyk 2015)

Types of Non-Experimental Research


1. Survey Research
Survey research uses interviews, questionnaires, and sampling polls to get a sense of
behavior with precision. It allows researchers to judge behavior and then
present the findings in an accurate way, usually expressed as a percentage.
Survey research can be conducted around one group specifically or used to compare
several groups. When conducting survey research, it is important that the people
questioned are sampled at random; this allows for more accurate findings across a
greater spectrum of respondents.

Remember!
 It is very important when conducting survey research that you work with
statisticians and field service agents who are reputable. Since there is a high level of
personal interaction in survey scenarios, as well as a greater chance for unexpected
circumstances to occur, it is possible for the data to be affected, which can heavily
influence the outcome of the survey.
 There are several ways to conduct survey research: in person, over the phone, or
through mail or email. In the last case the survey can be self-administered. When
conducted on a single group, survey research is its own category.
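The random sampling described above can be sketched in Python. The roster and sample size below are invented purely for illustration; the point is that every respondent has an equal chance of being selected:

```python
import random

# Hypothetical sampling frame: 200 students (names invented for illustration).
population = [f"Student {i}" for i in range(1, 201)]

# Fixed seed only so this example is repeatable.
random.seed(42)

# Draw a simple random sample of 30 without replacement, so every
# student has an equal chance of selection and no one is picked twice.
sample = random.sample(population, k=30)

print(len(sample))       # 30
print(len(set(sample)))  # 30 (no duplicates: sampling without replacement)
```

In practice the sampling frame would come from an actual list of respondents, and the sample size would be chosen based on the population size and the precision required.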

2. Correlational Research
Correlational research tests for relationships between two variables. It is performed
to establish what the effect of one variable on the other might be and how that
affects the relationship.

Remember!
 Correlational research is conducted in order to explain a noticed occurrence. In
correlational research the survey is conducted on a minimum of two groups. In most
correlational research there is a level of manipulation involved with the specific
variables being researched. Once the information is compiled, it is analyzed
mathematically to draw conclusions about the effect that one variable has on the other.
 Correlation does not always mean causation. For example, just because two data
points sync doesn't mean that there is a direct cause-and-effect relationship.
Typically, you should not make assumptions from correlational research alone.
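The mathematical analysis mentioned above usually means computing a correlation coefficient. As a sketch, the Pearson coefficient can be computed by hand; the paired measurements below (hours studied vs. test score) are invented for illustration:

```python
import math

# Hypothetical paired observations for 8 respondents (values invented).
hours  = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [52, 55, 61, 58, 70, 72, 75, 80]

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y divided by the
    product of their standard deviations. Ranges from -1 to +1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(hours, scores)
print(round(r, 2))  # close to +1: the two variables move together
```

Even with r close to +1 here, the "correlation is not causation" caveat still applies: the coefficient says nothing about which variable, if either, causes the other.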

3. Descriptive
As stated by Good and Scates, as cited by Sevilla (1998), the descriptive method is
oftentimes used as a survey or a normative approach to study prevailing conditions.
Remember!
 The descriptive method involves the description, recognition, analysis, and
interpretation of conditions that currently exist. Moreover, according to Gay (2007),
descriptive research design involves the collection of data in order to test

hypotheses or to answer questions concerning the current status of the subject of
the study. It determines and reports the way things are.

4. Comparative
Comparative researchers examine patterns of similarities and differences across a
moderate number of cases. The typical comparative study has anywhere from a
handful to fifty or more cases. The number of cases is limited because one of the
concerns of comparative research is to establish familiarity with each case included
in a study. (Ragin, Charles 2015)
 Like qualitative researchers, comparative researchers consider how the different
parts of each case - those aspects that are relevant to the investigation - fit together;
they try to make sense of each case. Thus, knowledge of cases is considered an
important goal of comparative research, independent of any other goal.

5. Ex Post Facto
According to Devin Kowalczyk (2015), ex post facto design is a quasi-experimental
study examining how an independent variable, present prior to the study, affects a
dependent variable.
Remember!
 A true experiment and an ex post facto study both attempt to say: this independent
variable is causing changes in a dependent variable. This is the basis of any
experiment: one variable is hypothesized to be influencing another. This is done by
having an experimental group and a control group. So if you're testing a new type of
medication, the experimental group gets the new medication, while the control
group gets the old medication. This allows you to test the efficacy of the new
medication. (Kowalczyk 2015)

B. Experimental Research
Though questions may be posed in the other forms of research, experimental research is
guided specifically by a hypothesis; sometimes experimental research can have several
hypotheses. A hypothesis is a statement to be proven or disproved. Once that statement
is made, experiments are begun to find out whether the statement is true or not. This
type of research is the bedrock of most sciences, in particular the natural sciences.
Quantitative research can be exciting and highly informative. It can be used to help
explain all sorts of phenomena. The best quantitative research gathers precise empirical
data and can be applied to gain a better understanding of several fields of study.
(Williams 2015)
Types of Experimental research
1. Quasi-experimental Research
This design involves selecting groups upon which a variable is tested, without any
random pre-selection process. For example, to perform an educational experiment,
a class might be arbitrarily divided by alphabetical selection or by seating
arrangement. The division is often convenient, especially in educational
situations, as it causes as little disruption as possible.
2. True Experimental Design
According to Yolanda Williams (2015), a true experiment is a type of
experimental design and is thought to be the most accurate type of experimental
research. This is because a true experiment supports or refutes a hypothesis using
statistical analysis. A true experiment is also thought to be the only experimental

design that can establish cause and effect relationships. So, what makes a true
experiment?
There are three criteria that must be met in a true experiment:
1. Control group and experimental group
2. Researcher-manipulated variable
3. Random assignment
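The third criterion, random assignment, can be sketched in Python. The roster below is hypothetical; the idea is that shuffling before splitting gives every participant an equal chance of landing in either group:

```python
import random

# Hypothetical class roster (names invented for illustration).
participants = [f"Participant {i}" for i in range(1, 21)]

random.seed(7)  # fixed seed only so this example is repeatable

# Shuffle a copy of the roster, then split it in half:
# first half becomes the experimental group, second half the control group.
shuffled = participants[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
experimental_group = shuffled[:half]
control_group = shuffled[half:]

print(len(experimental_group), len(control_group))  # 10 10
```

Contrast this with the quasi-experimental example above: dividing a class alphabetically or by seating arrangement skips this shuffle, which is exactly what keeps a quasi-experiment from being a true experiment.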

Lesson 2: Instrument Development

Developing a Research Instrument

Before the researchers collect any data from the respondents, the young
researchers will need to design new research instruments or adapt existing instruments
from other studies (the tools they will use to collect the data).

If the researchers are planning to carry out interviews or focus groups, they
will need to plan an interview schedule or topic guide. This is a list of
questions or topic areas that all the interviewers will use. Asking everyone the same
questions means that the data you collect will be much more focused and easier to
analyze.

If the group wants to carry out a survey, the young researchers will need to
design a questionnaire. This could be on paper or online (using free software such as
Survey Monkey). Both approaches have advantages and disadvantages.

If the group is collecting data from more than one “type” of person (such as young
people and teachers, for example), it may well need to design more than one interview
schedule or questionnaire. This should not be too difficult, as the young researchers can
adapt additional schedules or questionnaires from the original.

When designing the research instruments, ensure that:

 they start with a statement about:
- the focus and aims of the research project
- how the person's data will be used (to feed into a report?)
- confidentiality
- how long the interview or survey will take to complete
 they use appropriate language
 every question is brief and concise
 any questionnaires use appropriate scales; for young people, 'smiley face' scales
can work well

REMEMBER!
Questionnaires may ask people for relevant information about themselves, such as
their gender or age. Don't ask for so much detail that it would be possible to
identify individuals, though, if you have said that the survey will be anonymous.

THE INSTRUMENT

Instrument is the generic term that researchers use for a measurement device (survey,
test, questionnaire, etc.). To help distinguish between instrument and instrumentation,
consider that the instrument is the device and instrumentation is the course of action
(the process of developing, testing, and using the device).
Instruments fall into two broad categories, researcher-completed and subject-
completed, distinguished by those instruments that researchers administer versus those
that are completed by participants. Researchers choose which type of instrument, or
instruments, to use based on the research question. Examples are listed below:

Researcher-completed Instruments     Subject-completed Instruments

Rating scales                        Questionnaires
Interview schedules/guides           Self-checklists
Tally sheets                         Attitude scales
Flowcharts                           Personality inventories
Performance checklists               Achievement/aptitude tests
Time-and-motion logs                 Projective devices
Observation forms                    Sociometric devices

USABILITY
Usability refers to the ease with which an instrument can be administered, interpreted
by the participant, and scored/interpreted by the researcher. Example usability
problems include:

Students are asked to rate a lesson immediately after class, but there are only a few
minutes before the next class begins (problem with administration).
Students are asked to keep self-checklists of their after school activities, but the
directions are complicated and the item descriptions confusing (problem with
interpretation).

Teachers are asked about their attitudes regarding school policy, but some questions are
worded poorly which results in low completion rates (problem with
scoring/interpretation).

Validity and reliability concerns (discussed below) will help alleviate usability issues.
For now, we can identify five usability considerations:
 How long will it take to administer?
 Are the directions clear?
 How easy is it to score?
 Do equivalent forms exist?
 Have any problems been reported by others who used it?

VALIDITY
Validity is the extent to which an instrument measures what it is supposed to
measure and performs as it is designed to perform. It is rare, if not impossible,
that an instrument be 100% valid, so validity is generally measured in degrees. As a
process, validation involves collecting and analyzing data to assess the accuracy of
an instrument. There are numerous statistical tests and measures to assess the

validity of quantitative instruments, which generally involves pilot testing. The
remainder of this discussion focuses on external validity and content validity.

External validity is the extent to which the results of a study can be generalized
from a sample to a population. Establishing external validity for an instrument,
then, follows directly from sampling. Recall that a sample should be an accurate
representation of a population, because the total population may not be available.
An instrument that is externally valid helps obtain population generalizability, or
the degree to which a sample represents the population.

Content validity refers to the appropriateness of the content of an instrument.
In other words, do the measures (questions, observation logs, etc.) accurately
assess what you want to know? This is particularly important with achievement
tests. Consider that a test developer wants to maximize the validity of a unit test for
7th grade mathematics. This would involve taking representative questions from
each of the sections of the unit and evaluating them against the desired outcomes.

RELIABILITY
Reliability can be thought of as consistency. Does the instrument consistently
measure what it is intended to measure? It is not possible to calculate reliability
exactly; however, there are four general estimators that you may encounter in reading
research:

 Inter-Rater/Observer Reliability: The degree to which different
raters/observers give consistent answers or estimates.

 Test-Retest Reliability: The consistency of a measure evaluated over time.

 Parallel-Forms Reliability: The reliability of two tests constructed the same
way, from the same content.

 Internal Consistency Reliability: The consistency of results across items,
often measured with Cronbach's Alpha.
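As a sketch of the internal consistency estimator above, Cronbach's alpha can be computed from item scores using its standard formula: alpha = k/(k-1) × (1 - sum of item variances / variance of total scores). The Likert-scale responses below are invented for illustration:

```python
from statistics import pvariance

# Hypothetical data: 5 respondents answering a 4-item Likert scale
# (rows = respondents, columns = items; values invented for illustration).
responses = [
    [4, 5, 4, 4],
    [3, 3, 3, 2],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]

def cronbach_alpha(rows):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(rows[0])                 # number of items
    items = list(zip(*rows))         # one tuple of scores per item
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_vars / total_var)

alpha = cronbach_alpha(responses)
print(round(alpha, 2))  # a value near 1 indicates high internal consistency
```

Rules of thumb vary, but values around 0.7 or higher are commonly treated as acceptable internal consistency for a research instrument.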

Lesson 3: Guidelines in Writing a Research Methodology

Methodology is the systematic, theoretical analysis of the methods applied to a
field of study. It comprises the theoretical analysis of the body of methods and principles
associated with a branch of knowledge.
The methodology section is one of the parts of a research paper. This part is the core of
your paper, as it is proof that you used the scientific method. Through this section, your
study's validity is judged, so it is very important. Your methodology answers two main
questions:

Guiding questions to start writing a research methodology:

 How did you collect or generate the data?
 How did you analyze the data?

While writing this section, be direct and precise, and write it in the past tense. Include
enough information so that others could repeat the experiment and evaluate whether
the results are reproducible, and so that the audience can judge whether the results and
conclusions are valid.
The explanation of the collection and the analysis of your data is very important
because:
 Readers need to know the reasons why you chose a particular method or procedure
instead of others.
 Readers need to know that the collection or the generation of the data is valid in
the field of study.
 Discuss the anticipated problems in the process of the data collection and the steps
you took to prevent them.
 Present the rationale for why you chose specific experimental procedures.
 Provide sufficient information of the whole process so that others could replicate
your study.

You can do this by giving a completely accurate description of the data collection
equipment and techniques, and by explaining how you collected the data and analyzed
them.

Specifically:

 Present the basic demographic profile of the sample population like age, gender,
and the racial composition of the sample. When animals are the subjects of a study,
you list their species, weight, strain, sex, and age.
 Explain how you gathered the samples/subjects by answering these questions:
Did you use any randomization techniques? How did you prepare the samples?
 Explain how you made the measurements and answer this question:
What calculations did you make?
 Describe the materials and equipment that you used in the research.
 Describe the statistical techniques that you used upon the data.

The order of the methods section:


1. Describing the samples/participants
2. Describing the materials you used in the study
3. Explaining how you prepared the materials
4. Describing the research design
5. Explaining how you made measurements and what calculations you performed
6. Stating which statistical tests you used to analyze the data

Name: _____________________________ Score: _____________

Strand/Section/Grade: ___________________ Date: ______________

DIRECTIONS: Write a reflection relating reliability and validity, in at least 250 words.
(25 points)

Rubrics:
 Depth of Reflection: 25%
 Required Components: 25%
 Structure: 25%
 Evidence and Practice: 25%

References:

Yadipe University Writing Center, School of Foreign Languages
https://yuwritingcenter.wikispaces.com/How+to+Write+the+Methodology+of+a+Research+Paper

http://people.uwec.edu/piercech/researchmethods/data%20collection%20methods/data%20collection%20methods.htm

http://www.socialresearchmethods.net/kb/sampprob.php

http://www.stat.ncsu.edu/info/srms/survpamphlet.html

http://www.statcan.ca/english/edu/power/ch2/methods/methods.htm

http://www.statisticssolutions.com/quantitative-research-approach/

http://study.com/academy/lesson/true-experiment-definition-examples.html

http://study.com/academy/lesson/non-experimental-and-experimental-research-differences-advantages-disadvantages.html

