Res12 Module 2
STUDENT'S LEARNING MODULE
Content Standard:
The learner is able to describe quantitative research designs, sample, and data collection and analysis procedures.
Performance Standard:
The learner is able to make a research proposal for a quantitative research study.
Module No. 2
Time Frame: 3 weeks
Learning Targets: At the end of the lesson, I can
I. INTRODUCTION:
Panagdait! How are you today? This is your second module for this semester
of the subject Practical Research 2. Are you excited to start? I hope that you
are!
KEY CONCEPT
Read me!
1. Survey Research
When conducting survey research, it is important that the people questioned are sampled at random. This allows for more accurate findings across a greater spectrum of respondents.
Remember!
✓ It is very important when conducting survey research to work with statisticians and field service agents who are reputable. Because survey scenarios involve a high level of personal interaction and a greater chance of unexpected circumstances, the data can easily be affected, and this can heavily influence the outcome of the survey.
✓ There are several ways to conduct survey research: in person, over the phone, or through mail or email. In the last case, surveys can be self-administered. When conducted on a single group, survey research is its own category.
2. Correlational Research
Correlational research tests for relationships between two variables. It is performed to establish what effect one variable might have on the other and how that shapes the relationship.
Remember!
✓ Correlational research is conducted in order to explain a noticed occurrence. In correlational research, the survey is conducted on a minimum of two groups. There is no manipulation of the specific variables being researched; once the information is compiled, it is analyzed mathematically to draw conclusions about the effect that one variable has on the other.
✓ Correlation does not always mean causation. Just because two data points move together does not mean there is a direct cause-and-effect relationship. Typically, you should not make assumptions from correlational research alone.
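The mathematical analysis mentioned above typically begins with a correlation coefficient. A minimal sketch in Python (the study-hours and test-score figures are invented purely for illustration, not taken from any real study):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Numerator: sum of products of paired deviations from the means
    num = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    # Denominator: product of the two sums of squared deviations, square-rooted
    den = math.sqrt(sum((a - mean_x) ** 2 for a in x) *
                    sum((b - mean_y) ** 2 for b in y))
    return num / den

# Hypothetical data: hours of study vs. test score
hours = [1, 2, 3, 4, 5]
scores = [52, 60, 65, 71, 80]
r = pearson_r(hours, scores)  # close to +1: a strong positive correlation
```

A value of r near +1 or -1 shows a strong relationship, but, as noted above, even r = 1 by itself does not prove that one variable causes the other.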
3. Descriptive
As stated by Good and Scates, as cited by Sevilla (1998), the descriptive method is oftentimes referred to as a survey or a normative approach to studying prevailing conditions.
Remember!
✓ The descriptive method involves the description, recognition, analysis, and interpretation of conditions that currently exist. Moreover, according to Gay (2007), descriptive research design involves the collection of data in order to test hypotheses or to answer questions concerning the current status of the subject of the study. It determines and reports the way things are.
4. Comparative
Comparative researchers examine patterns of similarities and differences across a moderate
number of cases. The typical comparative study has anywhere from a handful to fifty or more
cases. The number of cases is limited because one of the concerns of comparative research is
to establish familiarity with each case included in a study. (Ragin, Charles 2015)
✓ Like qualitative researchers, comparative researchers consider how the different parts of
each case - those aspects that are relevant to the investigation - fit together; they try to
make sense of each case. Thus, knowledge of cases is considered an important goal of
comparative research, independent of any other goal.
TRENDS, NETWORKS, AND CRITICAL THINKING IN THE 21ST CENTURY CULTURE 4
5. Ex Post Facto
According to Devin Kowalczyk, ex post facto design is a quasi-experimental study examining how an independent variable, present prior to the study, affects a dependent variable.
✓ A true experiment and an ex post facto study are both attempting to say: this independent variable is causing changes in a dependent variable. This is the basis of any experiment - one variable is hypothesized to be influencing another. This is done by having an experimental group and a control group. So if you're testing a new type of medication, the experimental group gets the new medication, while the control group gets the old medication. This allows you to test the efficacy of the new medication. (Kowalczyk 2015)
6. Experimental Research
Though questions may be posed in the other forms of research, experimental research is guided
specifically by a hypothesis. Sometimes experimental research can have several hypotheses. A
hypothesis is a statement to be proven or disproved. Once that statement is made experiments
are begun to find out whether the statement is true or not. This type of research is the bedrock of
most sciences, in particular the natural sciences. Quantitative research can be exciting and highly
informative. It can be used to help explain all sorts of phenomena. The best quantitative research
gathers precise empirical data and can be applied to gain a better understanding of several fields
of study. (Williams 2015)
If the researcher/s is planning to carry out interviews or focus groups, the young researchers will
need to plan an interview schedule or topic guide. This is a list of questions or topic areas that all
the interviewers will use. Asking everyone the same questions means that the data you collect
will be much more focused and easier to analyze.
If the group wants to carry out a survey, the young researchers will need to design a questionnaire.
This could be on paper or online (using free software such as Survey Monkey). Both approaches
have advantages and disadvantages.
If the group is collecting data from more than one 'type' of person (such as young people and teachers, for example), it may well need to design more than one interview schedule or questionnaire. This should not be too difficult, as the young researchers can adapt additional schedules or questionnaires from the original.
When designing the research instruments, ensure that they start with a statement about:
• the focus and aims of the research project
• how the person's data will be used (to feed into a report?)
• confidentiality
• how long the interview or survey will take to complete.
Also check that:
• the language used is appropriate
• every question is brief and concise
• any questionnaires use appropriate scales.
For young people, 'smiley face' scales can work well.
REMEMBER!
Questionnaires may ask people for relevant information about themselves, such as their gender or age. Don't ask for so much detail that it would be possible to identify individuals, though, if you have said that the survey will be anonymous.
The Instrument
Instrument is the generic term that researchers use for a measurement device (survey, test,
questionnaire, etc.). To help distinguish between instrument and instrumentation, consider that
the instrument is the device and instrumentation is the course of action (the process of developing,
testing, and using the device).
Usability
Usability refers to the ease with which an instrument can be administered, interpreted by the
participant, and scored/interpreted by the researcher. Example usability problems include:
Students are asked to rate a lesson immediately after class, but there are only a few minutes
before the next class begins (problem with administration).
Students are asked to keep self-checklists of their after school activities, but the directions are
complicated and the item descriptions confusing (problem with interpretation).
Teachers are asked about their attitudes regarding school policy, but some questions are worded
poorly which results in low completion rates (problem with scoring/interpretation).
Validity and reliability concerns (discussed below) will help alleviate usability issues. For now, we
can identify five usability considerations:
How long will it take to administer?
A research question is proposed. For example, you might propose that if you give a student training in how to use a search engine, it will improve their success in finding information on the Internet.
You could then go on to explain why a particular answer is expected - you put forward a theory.
We can gather quantitative data in a variety of ways and from a number of different sources. Many
of these are similar to sources of qualitative data, for example:
✓ Questionnaires - a series of questions and other prompts for the purpose of gathering
information from respondents
✓ Interviews - a conversation between two or more people (the interviewer and the
interviewee) where questions are asked by the interviewer to obtain information from the
interviewee - a more structured approach would be used to gather quantitative data
✓ Observation - a group of participants or a single participant is manipulated by the researcher, for example, asked to perform a specific task or action. Observations are then made of their user behavior, user processes, workflows, etc., either in a controlled situation (e.g. lab-based) or in a real-world situation (e.g. the workplace)
✓ Transaction logs - recordings or logs of system or website activity
✓ Documentary research - analysis of documents belonging to an organization
Why do we do quantitative data analysis?
Once you have collected your data you need to make sense of the responses you have got back.
Quantitative data analysis enables you to make sense of data by:
• organizing them
• summarizing them
• doing exploratory analysis
And to communicate the meaning to others by presenting data as:
• tables
• graphical displays
• summary statistics
We can also use quantitative data analysis to see:
• where responses are similar, for example, we might find that the majority of students all go to the university library twice a week
• if there are differences between the things we have studied, for example, 1st year students might go once a week to the library, 2nd year students twice a week and 3rd year students three times a week
• if there is a relationship between the things we have studied. So, is there a relationship between
the number of times a student goes to the library and their year of study?
Using software for statistical analysis
Some key concepts
Before we look at types of analysis and tools we need to be familiar with a few concepts first:
• Population - all of the units of analysis that might be investigated; these could be students, cats, house prices, etc.
• Sample - the actual set of units selected for investigation and who participate in the research
• Variable - characteristics of the units/participants
• Value - the score/label/value of a variable, not the frequency of occurrence. For example, if age is a characteristic of a participant then the value would be the actual age, e.g. 21, 22, 25, 30 or 18, not how many participants are 21, 22, 25, 30 or 18.
• Case/subject - the individual unit/participant of the study/research.
Sampling
Sampling is complex and can be done in many ways, depending on 1) what you want to achieve from your research, and 2) practical considerations of who is available to participate. The type of statistical analysis you do will depend on the sample type you have. Most importantly, you cannot generalize your findings to the population as a whole if you do not have a random sample. You can still undertake some inferential statistical analysis, but you should report these as results of your sample, not as applicable to the population at large.
Common sampling approaches include:
• Random sampling
• Stratified sampling
• Cluster sampling
• Convenience sampling
• Accidental sampling
Steps in Quantitative Data Analysis
Baraceros (2016) identified the different steps in quantitative data analysis, noting that "no data organization means no sound data analysis".
Step 1: Coding System - to analyze data means to quantify or change the verbally expressed data into numerical information. By converting the words, images, or pictures into numbers, they become fit for any analytical procedure requiring knowledge of arithmetic and mathematical computations. It is not possible for the researcher to do mathematical operations such as division, multiplication, or subtraction at the word level unless you code the verbal responses and observation categories.
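As a rough illustration of such a coding system (the response labels and number codes below are invented for illustration, not taken from this module), a lookup table can convert each verbal response into a number fit for arithmetic:

```python
# Hypothetical codebook: Likert-type verbal responses mapped to numbers
codebook = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neutral", "agree"]
coded = [codebook[r] for r in responses]  # [4, 5, 3, 4] - now usable in arithmetic
average = sum(coded) / len(coded)         # operations impossible at the word level
```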
Step 2: Analyzing the Data
Data coding and tabulation are both essential in preparing for data analysis. Before interpreting every component of the data, the researcher first decides what kind of quantitative analysis to use: a simple descriptive statistical technique or an advanced analytical method. The first, which college students often use, tells some aspects of categories of data such as frequency distribution, measures of central tendency (mean, median and mode), and standard deviation. However, it does not give information about the population from which the sample came. The second, on the other hand, fits graduate-level studies because it involves complex statistical analysis requiring a good foundation and thorough knowledge of the data-gathering instrument used. The results of the analysis reveal the following aspects of an item in a set of data (Mogan 2014; Punch 2014; Walsh 2010, cited by Baraceros 2016):
• Frequency distribution - gives you the frequency of distribution and percentage of the occurrence of an item in a set of data. In other words, it gives you the number of responses given repeatedly for one question.
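A frequency distribution like the one described can be sketched with Python's standard library (the survey answers below are invented for illustration):

```python
from collections import Counter

# Hypothetical answers to one survey question
answers = ["yes", "no", "yes", "yes", "no", "yes"]

freq = Counter(answers)          # frequency: how often each answer occurs
total = sum(freq.values())
# percentage of occurrence of each item in the set of data
percent = {k: 100 * v / total for k, v in freq.items()}
```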
• Measure of Central Tendency – indicates the different positions or values of the items,
such that in a category of data, you find an item or items serving as the:
Mean – average of all the items or scores
Example: 3+8+9+2+3+10+3 = 38
38 ÷ 7 = 5.43 (Mean)
Median – the score in the middle of the set of items that cuts or divides the set into two
groups
Example: Arranged in order (2, 3, 3, 3, 8, 9, 10), the set in the Mean example has 3 as the Median.
Mode – refers to the item or score in the data set that has the most repeated appearance
in the set.
Example: Again, in the example above for the Mean, 3 is the Mode.
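The three measures can be checked with Python's statistics module, using this module's own data set; note that, with the items arranged in order (2, 3, 3, 3, 8, 9, 10), the middle score is 3:

```python
import statistics

data = [3, 8, 9, 2, 3, 10, 3]

mean = statistics.mean(data)      # 38 / 7, about 5.43
median = statistics.median(data)  # middle of the sorted set: 3
mode = statistics.mode(data)      # 3 appears most often
```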
• Standard Deviation – shows the extent of the difference of the data from the mean. Examining this gap between the mean and the data gives you an idea of the extent of the similarities and differences between the respondents. There is a series of mathematical operations that you have to perform to determine the standard deviation:
Step 1: Compute the Mean.
Step 2: Compute the deviation (difference) between each respondent's answer (data item)
and the mean. The sign is positive (+) if the data item is higher than the mean, and
negative (-) if it is lower.
Step 3: Compute the square of each deviation.
Step 4: Compute the sum of squares by adding the squared figures.
Step 5: Divide the sum of squares by the number of data items to get the variance.
Step 6: Compute the square root of variance figure to get standard deviation.
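The six steps above can be sketched directly in Python, using the same data set as the Mean example:

```python
import math

data = [3, 8, 9, 2, 3, 10, 3]

mean = sum(data) / len(data)             # Step 1: compute the mean
deviations = [x - mean for x in data]    # Step 2: deviation of each item from the mean
squares = [d ** 2 for d in deviations]   # Step 3: square each deviation
sum_of_squares = sum(squares)            # Step 4: add the squared figures
variance = sum_of_squares / len(data)    # Step 5: divide by the number of items
std_dev = math.sqrt(variance)            # Step 6: square root of the variance
```

Note that Step 5 divides by the number of items n (the population formula); studies that treat the data as a sample often divide by n - 1 instead.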
Simple Random Sampling Procedure
In simple random sampling, starting from a complete population list, the sample is drawn so that each person or item has an equal chance of being drawn during each selection round (Kanupriya, 2012).
To draw a simple random sample without introducing researcher bias, computerized sampling
programs and random number tables are used to impartially select the members of the population
to be sampled. Subjects in the population are sampled by a random process, using either a
random number generator or a random number table, so that each person remaining in the
population has the same probability of being selected for the sample (Friedrichs, 2008).
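A minimal sketch of this random process, using Python's built-in random number generator (the population list of 100 students is invented for illustration):

```python
import random

# Hypothetical sampling frame: a complete list of 100 students
population = [f"Student {i}" for i in range(1, 101)]

random.seed(42)  # fixed seed only so the draw is reproducible in class
# Each remaining member has the same probability of being selected
sample = random.sample(population, k=10)
```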
Systematic Sampling Procedure
The systematic sampling procedure is often used in place of simple random sampling. In systematic sampling, the researcher selects every nth member after randomly selecting the first through nth element as the starting point. For example, if the researcher decides to sample 20 respondents from a population of 100, every 5th member of the population will systematically be selected.
A researcher may choose to conduct a systematic sample instead of a simple random sample for several reasons. Firstly, systematic samples tend to be easier to draw and execute; secondly, the researcher does not have to go back and forth through the sampling frame to draw the members to be sampled; thirdly, a systematic sample may spread the members selected for measurement more evenly across the entire population than simple random sampling. Therefore, in some cases, systematic sampling may be more representative of the population and more precise (Groves et al., 2006).
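The every-nth selection can be sketched as follows, using the figures from the example above (20 respondents from a population of 100, so every 5th member after a random start):

```python
import random

population = list(range(1, 101))   # hypothetical sampling frame of 100 members
n = len(population) // 20          # sampling interval: every 5th member

random.seed(7)                     # fixed seed only for reproducibility
start = random.randrange(n)        # random starting point within the first interval
sample = population[start::n]      # then every nth member after the start
```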
Stratified Sampling Procedure
The stratified sampling procedure is the most effective method of sampling when a researcher wants to get a representative sample of a population. It involves categorizing the members of the population into mutually exclusive and collectively exhaustive groups. An independent simple random sample is then drawn from each group. Stratified sampling techniques can provide more precise estimates if the population surveyed is more heterogeneous than the categorized groups. This technique can enable the researcher to determine desired levels of sampling precision for each group, and can provide administrative efficiency. The main advantage of the approach is that it is able to give the most representative sample of a population (Hunt & Tyrrell, 2001).
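A minimal sketch of stratified sampling (the strata and group sizes are invented for illustration): the population is split into mutually exclusive groups, and an independent simple random sample is drawn from each:

```python
import random

# Hypothetical strata: year level -> list of student IDs
strata = {
    "1st year": [f"1Y-{i}" for i in range(40)],
    "2nd year": [f"2Y-{i}" for i in range(35)],
    "3rd year": [f"3Y-{i}" for i in range(25)],
}

random.seed(1)  # fixed seed only for reproducibility
sample = []
for group, members in strata.items():
    k = len(members) // 10                    # e.g. sample roughly 10% of each stratum
    sample.extend(random.sample(members, k))  # independent simple random sample per group
```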
Cluster Sampling Procedure
In cluster sampling, a cluster (a group of population elements) constitutes the sampling unit, instead of a single element of the population. The sampling in this technique is mainly geographically driven. The main reason for cluster sampling is cost efficiency (economy and feasibility). The sampling frame is also often readily available at cluster level, and listing and implementation take a short time. The technique is also suitable for surveys of institutions (Ahmed, 2009) or households within a given geographical area.
But the design is not without disadvantages. Some of the challenges that stand out are:
• it may not reflect the diversity of the community;
• elements in the same cluster may share similar characteristics;
• it provides less information per observation than an SRS (simple random sample) of the same size (redundant information: similar information from the others in the cluster);
• standard errors of the estimates are high, compared to other sampling designs with the same sample size.
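Cluster sampling can be sketched by randomly choosing whole clusters and then taking every element inside the chosen clusters (the cluster names and households below are invented for illustration):

```python
import random

# Hypothetical geographic clusters: barangay -> households
clusters = {
    "Barangay A": ["A1", "A2", "A3"],
    "Barangay B": ["B1", "B2"],
    "Barangay C": ["C1", "C2", "C3", "C4"],
    "Barangay D": ["D1", "D2", "D3"],
}

random.seed(3)  # fixed seed only for reproducibility
chosen = random.sample(list(clusters), k=2)        # randomly select 2 whole clusters
sample = [h for c in chosen for h in clusters[c]]  # every household in each chosen cluster
```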
Non-Probability Sampling Procedures
Non-probability sampling is used in some situations where the population may not be well defined. In other situations, there may not be great interest in drawing inferences from the sample to the population. The most common reason for using a non-probability sampling procedure is that it is less expensive than a probability sampling procedure and can often be implemented more quickly (Michael, 2011). It includes purposive, convenience and quota sampling procedures.
Purposive/Judgmental Sampling Procedure
In the purposive sampling procedure, the researcher chooses the sample based on who he/she thinks would be appropriate for the study. The main objective of purposive sampling is to arrive at a sample that can adequately answer the research objectives. The selection of a purposive sample is often accomplished by applying expert knowledge of the target population to select, in a nonrandom manner, a sample that represents a cross-section of the population (Henry, 1990).
A major disadvantage of this method is subjectivity, since another researcher is likely to come up with a different sample when identifying important characteristics and picking typical elements to be in the sample. Given the subjectivity of the selection mechanism, purposive sampling is generally considered most appropriate for the selection of small samples, often from a limited geographic area or from a restricted population definition. The knowledge and experience of the researcher making the selections is a key aspect of the "success" of the resulting sample (Michael, 2011). A case study research design, for instance, employs the purposive sampling procedure to arrive at a particular 'case' of study and a given group of respondents. Key informants are also selected using this procedure.
Convenience Sampling Procedure
Convenience sampling is sometimes known as opportunity, accidental or haphazard sampling. It is a type of non-probability sampling which involves the sample being drawn from that part of the population which is close to hand, that is, a population which is readily available and convenient. The researcher using such a sample cannot scientifically make generalizations about the total population from this sample because it would not be representative enough (Michael, 2011). This type of sampling is most useful for pilot testing.
Convenience sampling differs from purposive sampling in that expert judgment is not used to
select a representative sample. The primary selection criterion relates to the ease of obtaining a
sample. Ease of obtaining the sample relates to the cost of locating elements of the population,
the geographic distribution of the sample, and obtaining the interview data from the selected
elements (de Leeuw, Hox & Huisman, 2003).
SAMPLING TECHNIQUES
When sampling, you need to decide what units (i.e., what people, organizations, data, etc.) to
include in your sample and which ones to exclude. As you'll know by now, sampling techniques
act as a guide to help you select these units, and you will have chosen a specific probability or
non-probability sampling technique:
If you are following a probability sampling technique, you'll know that you require a list of the
population from which you select units for your sample. This raises potential data protection and
confidentiality issues because units in the list (i.e., when people are your units) will not necessarily
have given you permission to access the list with their details. Therefore, you need to check that
you have the right to access the list in the first place.
If using a non-probability sampling technique, you need to ask yourself whether you are
including or excluding units for theoretical or practical reasons. In the case of purposive sampling,
the choice of which units to include and exclude is theoretically-driven. In such cases, there are
few ethical concerns. However, where units are included or excluded for practical reasons, such
as ease of access or personal preferences (e.g., convenience sampling), there is a danger that
units will be excluded unnecessarily. For example, it is not uncommon, when selecting units using convenience sampling, that researchers' natural preferences (and even prejudices) will influence the selection process: the researcher might avoid approaching certain
groups (e.g., socially marginalized individuals, people who speak little English, disabled people,
etc.). Where this happens, it raises ethical issues because the picture being built through the
research can be excessively narrow, and arguably, unethically narrow. This highlights the
importance of using theory to determine the creation of samples when using non-probability
sampling techniques rather than practical reasons, whenever possible.
DRAW CONCLUSIONS AND RECOMMENDATIONS
Drawing Conclusions
For any research project and any scientific discipline, drawing conclusions is the final, and most
important, part of the process. Whichever reasoning processes and research methods were used,
the final conclusion is critical, determining success or failure. If an otherwise excellent experiment
is summarized by a weak conclusion, the results will not be taken seriously. Success or failure is
not a measure of whether a hypothesis is accepted or refuted, because both results still advance
scientific knowledge. (Shuttleworth 2014)
Failure lies in poor experimental design, or in flaws in the reasoning processes, which invalidate the results. As long as the research process is robust and well designed, then the findings are sound,
and the process of drawing conclusions begins. Generally, a researcher will summarize what they
believe has been learned from the research, and will try to assess the strength of the hypothesis.
Even if the null hypothesis is accepted, a strong conclusion will analyze why the results were not
as predicted. In observational research, with no hypothesis, the researcher will analyze the
findings, and establish if any valuable new information has been uncovered.
Generating Leads for Future Research
However, very few experiments give clear-cut results, and most research uncovers more
questions than answers.
The researcher can use these to suggest interesting directions for further study. If, for example,
the null hypothesis was accepted, there may still have been trends apparent within the results.
These could form the basis of further study, or experimental refinement and redesign.
Evaluation - Flaws in the Research Process
The researcher will then evaluate any apparent problems with the experiment. This involves
critically evaluating any weaknesses and errors in the design, which may have influenced the
results.
Even strict, 'true experimental,' designs have to make compromises, and the researcher must be
thorough in pointing these out, justifying the methodology and reasoning.
For example, when drawing conclusions, the researcher may think that another causal effect
influenced the results, and that this variable was not eliminated during the experimental process.
A refined version of the experiment may help to achieve better results, if the new effect is included
in the design process.
In the global warming example, the researcher might establish that carbon dioxide emission alone
cannot be responsible for global warming. They may decide that another effect is contributing, so
propose that methane may also be a factor in global warming. A new study would incorporate
methane into the model.
What Are the Clear-Cut Benefits of the Research?
The next stage is to evaluate the advantages and benefits of the research. In medicine and
psychology, for example, the results may throw out a new way of treating a medical problem, so
the advantages are obvious. However, all well-constructed research is useful, even if it is just
adding to the fount of human knowledge. An accepted null hypothesis has an important meaning
to science.
Suggestions Based Upon the Conclusions
The final stage is the researcher's recommendations based upon the results, depending upon the
field of study. This area of the research process can be based around the researcher's personal
opinion, and will integrate previous studies.
For example, a researcher into schizophrenia may recommend a more effective treatment. A
physicist might postulate that our picture of the structure of the atom should be changed. A
researcher could make suggestions for refinement of the experimental design, or highlight
interesting areas for further study. This final piece of the paper is the most critical, and pulls
together all of the findings.
The area of a research paper that causes the most intense and heated debate amongst scientists is the drawing of conclusions.
It is critical in determining the direction taken by the scientific community, but the researcher will
have to justify their findings.
Summary - The Strength of the Results
The key to drawing a valid conclusion is to ensure that the deductive and inductive processes are
correctly used, and that all steps of the scientific method were followed.
If your research had a robust design, questioning and scrutiny will be devoted to the experiment
conclusion, rather than the methods.
Recommendations
Other recommendations may also be appropriate. When preparing this section, remember that in
making your recommendations, you must show how your results support them. A
recommendation for a preferred alternative should include:
1. a specific statement of what should be done, the steps required to implement the policy, and the resources needed;
2. a discussion of the benefits to the organization and what problems would be corrected or avoided;
3. a discussion of the feasibility of the proposed policy; and
4. a general statement about the nature and timing of an evaluation plan that would be used to determine the effectiveness of the proposed policy.
Recommendations for Further Research
In this section, you finally have the opportunity to present and discuss the actions that future
researchers should take as a result of your Project. A well-thought-out set of recommendations
makes it more likely that the organization will take your recommendations seriously. Ideally you
should be able to make a formal recommendation regarding the alternative that is best supported
by the study. Present and discuss the kinds of additional research suggested by your Project. If
the preferred alternative is implemented, what additional research might be needed?
II. INTERACTION:
A. Learning Activities :
Assessment Technique:
1. When can you say that an instrument to be used for the data gathering is usable, valid and
reliable?
2. In writing the research methodology, what should be the order of the methods section?
4. Explain what you understand about probability sampling and non-probability sampling by
giving what is asked in the table below:
Category                    Sampling Procedure                         My Understanding    Example
Probability sampling        Systematic Sampling Procedure
Probability sampling        Stratified Sampling Procedure
Non-probability sampling    Purposive/Judgmental Sampling Procedure
Non-probability sampling    Convenience Sampling Procedure
III. INTEGRATION
A. Transfer of Learning
B. Reflection:
1. Why do you think drawing conclusions is considered one of the most important processes in research?