Res12 Module 2



Saint Columban College


SENIOR HIGH SCHOOL DEPARTMENT
7016 Pagadian City

STUDENT'S LEARNING MODULE

Student’s Name: ___________________________ Date: ____________________

Grade & Section:______________ Subject: _________________

Content Standard:
The learner is able to describe quantitative research designs, sample, and data collection and
analysis procedures.

Performance Standard:
The learner will make a research proposal for a quantitative study.

Learning Contents: Understanding data and ways to systematically collect data


Learning Resources:
Biay, Evelyn & Cortez, Shiahari (n.d.). Module in Practical Research 2. Retrieved from:
https://www.scribd.com/document/415119663/Practical-Research-2-Modular-Approach-Docx
Solano, I. P. & David, O. M. (2019). Practical Research 2. Makati City, Philippines: Diwa
Learning Systems INC.
Core Values: Inquisitiveness, Wisdom, Honesty, and Willingness to Cooperate with Others

Module No. 2
Time Frame: 3 weeks
Learning Targets: At the end of the lesson, I can

1. Choose an appropriate quantitative research design;
2. Describe the sampling procedure and the sample;
3. Plan the data collection procedure;
4. Plan the data analysis using statistics and hypothesis testing;
5. Present a written research methodology; and
6. Implement design principles to produce creative work.

I. INTRODUCTION:

Panagdait! How are you today? This is your second module for this semester
of the subject Practical Research 2. Are you excited to start? I hope that you
are!

KEY CONCEPT
Read me!

QUANTITATIVE DATA RESEARCH DESIGN


QUANTITATIVE RESEARCH
If the researcher views quantitative design as a continuum, one end of the range represents a design where the variables are not controlled at all and only observed; connections among the variables are merely described. At the other end of the spectrum, however, are designs which include very close control of variables, and relationships among those variables are clearly established. In the middle, with experimental design moving from one type to the other, is a range which blends those two extremes together.
TYPES OF QUANTITATIVE RESEARCH
Quantitative research is a type of empirical investigation. That means the research focuses on verifiable observation as opposed to theory or logic. Most often this type of research is expressed in numbers. A researcher will represent and manipulate certain observations that they are studying. They will attempt to explain what it is they are seeing and what effect it has on the subject. They will also determine what the changes may reflect. The overall goal is to convey numerically what is being seen in the research and to arrive at specific and observable conclusions. (Klazema 2014)
Non-Experimental Research Design
Non-experimental research means there is a predictor variable or group of subjects that cannot
be manipulated by the experimenter. Typically, this means that other routes must be used to draw
conclusions, such as correlation, survey or case study. (Kowalczyk 2015)
Types of Non-Experimental Research
1. Survey Research
Survey research uses interviews, questionnaires, and sampling polls to get a sense of behavior
with intense precision. It allows researchers to judge behavior and then present the findings in an
accurate way. This is usually expressed in a percentage. Survey research can be conducted
around one group specifically or used to compare several groups. When conducting survey
research it is important that the people questioned are sampled at random. This allows for more
accurate findings across a greater spectrum of respondents.
Remember!
✓ It is very important when conducting survey research that you work with statisticians and
field service agents who are reputable. Since there is a high level of personal interaction
in survey scenarios as well as a greater chance for unexpected circumstances to occur, it
is possible for the data to be affected. This can heavily influence the outcome of the
survey.
✓ There are several ways to conduct survey research. They can be done in person, over the
phone, or through mail or email. In the last instance, they can be self-administered. When conducted on a single group, survey research is its own category.
2. Correlational Research
Correlational research tests for the relationships between two variables. Correlational research is performed to establish what the effect of one variable on the other might be and how that affects the relationship.
Remember!
✓ Correlational research is conducted in order to explain a noticed occurrence. In
correlational research the survey is conducted on a minimum of two groups. In most
correlational research there is a level of manipulation involved with the specific variables
being researched. Once the information is compiled it is then analyzed mathematically to
draw conclusions about the effect that one has on the other.
✓ Correlation does not always mean causation. For example, just because two data points move together doesn't mean that there is a direct cause-and-effect relationship. Typically, you should not make assumptions from correlational research alone.
3. Descriptive
As stated by Good and Scates, as cited by Sevilla (1998), the descriptive method is oftentimes used as a survey or a normative approach to the study of prevailing conditions.
Remember!
✓ The descriptive method involves the description, recognition, analysis, and interpretation of conditions that currently exist. Moreover, according to Gay (2007), descriptive research design involves the collection of data in order to test hypotheses or to answer questions concerning the current status of the subject of the study. It determines and reports the way things are.
4. Comparative
Comparative researchers examine patterns of similarities and differences across a moderate
number of cases. The typical comparative study has anywhere from a handful to fifty or more
cases. The number of cases is limited because one of the concerns of comparative research is
to establish familiarity with each case included in a study. (Ragin, Charles 2015)
✓ Like qualitative researchers, comparative researchers consider how the different parts of
each case - those aspects that are relevant to the investigation - fit together; they try to
make sense of each case. Thus, knowledge of cases is considered an important goal of
comparative research, independent of any other goal.

5. Ex Post Facto
According to Devin Kowalczyk, ex post facto design is a quasi-experimental study examining how an independent variable, present prior to the study, affects a dependent variable.
✓ Both a true experiment and an ex post facto design attempt to say: this independent variable is causing changes in a dependent variable. This is the basis of any experiment - one variable is hypothesized to be influencing another. This is done by having an experimental group and a control group. So if you're testing a new type of medication, the experimental group gets the new medication, while the control group gets the old medication. This allows you to test the efficacy of the new medication. (Kowalczyk 2015)

Experimental Research
Though questions may be posed in the other forms of research, experimental research is guided
specifically by a hypothesis. Sometimes experimental research can have several hypotheses. A
hypothesis is a statement to be proven or disproved. Once that statement is made experiments
are begun to find out whether the statement is true or not. This type of research is the bedrock of
most sciences, in particular the natural sciences. Quantitative research can be exciting and highly
informative. It can be used to help explain all sorts of phenomena. The best quantitative research
gathers precise empirical data and can be applied to gain a better understanding of several fields
of study. (Williams 2015)

Types of Experimental Research


1. Quasi-Experimental Research
This design involves selecting groups upon which a variable is tested, without any random pre-selection process. For example, to perform an educational experiment, a class might be arbitrarily divided by alphabetical selection or by seating arrangement. This division is often convenient, especially in educational situations, because it causes as little disruption as possible.
2. True Experimental Design
According to Yolanda Williams (2015), a true experiment is a type of experimental design and
is thought to be the most accurate type of experimental research. This is because a true
experiment supports or refutes a hypothesis using statistical analysis. A true experiment is also
thought to be the only experimental design that can establish cause and effect relationships. So,
what makes a true experiment?
There are three criteria that must be met in a true experiment:
1. Control group and experimental group
2. Researcher-manipulated variable
3. Random assignment
Instrument Development
Developing a research instrument
Before the researchers collect any data from the respondents, the young researchers will need to design or devise new research instruments, or they may adapt instruments from other studies (the tools they will use to collect the data).

If the researchers are planning to carry out interviews or focus groups, they will need to plan an interview schedule or topic guide. This is a list of questions or topic areas that all
the interviewers will use. Asking everyone the same questions means that the data you collect
will be much more focused and easier to analyze.
If the group wants to carry out a survey, the young researchers will need to design a questionnaire.
This could be on paper or online (using free software such as Survey Monkey). Both approaches
have advantages and disadvantages.
If the group is collecting data from more than one 'type' of person (such as young people and
teachers, for example), it may well need to design more than one interview schedule or
questionnaire. This should not be too difficult as the young researchers can adapt additional
schedules or questionnaires from the original.
When designing the research instruments, ensure that they start with a statement about:
• the focus and aims of the research project;
• how the person's data will be used (to feed into a report?);
• confidentiality; and
• how long the interview or survey will take to complete.
Also make sure that:
• the language used is appropriate;
• every question is brief and concise; and
• any questionnaires use appropriate scales. For young people, 'smiley face' scales can work well.
REMEMBER!
Questionnaires may ask people for relevant information about themselves, such as their gender or age, if relevant. Don't ask for so much detail that it would be possible to identify individuals, though, if you have said that the survey will be anonymous.
The Instrument
Instrument is the generic term that researchers use for a measurement device (survey, test,
questionnaire, etc.). To help distinguish between instrument and instrumentation, consider that
the instrument is the device and instrumentation is the course of action (the process of developing,
testing, and using the device).
Usability
Usability refers to the ease with which an instrument can be administered, interpreted by the
participant, and scored/interpreted by the researcher. Example usability problems include:
• Students are asked to rate a lesson immediately after class, but there are only a few minutes before the next class begins (a problem with administration).
• Students are asked to keep self-checklists of their after-school activities, but the directions are complicated and the item descriptions confusing (a problem with interpretation).
• Teachers are asked about their attitudes regarding school policy, but some questions are worded poorly, which results in low completion rates (a problem with scoring/interpretation).
Validity and reliability concerns (discussed below) will help alleviate usability issues. For now, we
can identify five usability considerations:
• How long will it take to administer?
• Are the directions clear?
• How easy is it to score?
• Do equivalent forms exist?
• Have any problems been reported by others who used it?
Validity
Validity is the extent to which an instrument measures what it is supposed to measure and
performs as it is designed to perform. It is rare, if not impossible, for an instrument to be 100% valid, so validity is generally measured in degrees. As a process, validation involves collecting
and analyzing data to assess the accuracy of an instrument. There are numerous statistical tests
and measures to assess the validity of quantitative instruments, which generally involves pilot
testing. The remainder of this discussion focuses on external validity and content validity.
External validity is the extent to which the results of a study can be generalized from a sample to
a population. Establishing external validity for an instrument, then, follows directly from sampling.
Recall that a sample should be an accurate representation of a population, because the total
population may not be available. An instrument that is externally valid helps obtain population
generalizability, or the degree to which a sample represents the population.
Content validity refers to the appropriateness of the content of an instrument. In other words, do
the measures (questions, observation logs, etc.) accurately assess what you want to know? This
is particularly important with achievement tests. Consider that a test developer wants to maximize
the validity of a unit test for 7th grade mathematics. This would involve taking representative
questions from each of the sections of the unit and evaluating them against the desired outcomes.
Reliability
Reliability can be thought of as consistency. Does the instrument consistently measure what it
is intended to measure? It is not possible to calculate reliability exactly; however, there are four general
estimators that you may encounter in reading research:
Inter-Rater/Observer Reliability: The degree to which different raters/observers give consistent
answers or estimates.
Test-Retest Reliability: The consistency of a measure evaluated over time.
Parallel-Forms Reliability: The reliability of two tests constructed the same way, from the same
content.
Internal Consistency Reliability: The consistency of results across items, often measured with Cronbach's alpha (see the sketch below).
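
Internal consistency is the estimator that is easiest to compute by hand. Below is a minimal sketch of Cronbach's alpha in plain Python; the five respondents, the three Likert items, and their scores are all invented for illustration:

```python
# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
# Hypothetical data: rows are respondents, columns are three Likert items (scored 1-5).
scores = [
    [4, 5, 4],
    [3, 3, 2],
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
]

def variance(values):
    """Sample variance (n - 1 in the denominator)."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

k = len(scores[0])                                                  # number of items
item_variances = [variance([row[i] for row in scores]) for i in range(k)]
total_variance = variance([sum(row) for row in scores])             # variance of total scores

alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")                            # about 0.90 for this data
```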

Guidelines in Writing Research Methodology


Methodology is the systematic, theoretical analysis of the methods applied to a field of study. It
comprises the theoretical analysis of the body of methods and principles associated with a branch
of knowledge.
The methodology section is one of the parts of a research paper. This part is the core of your paper, as it is proof that you used the scientific method. Through this section, your study's validity is judged, so it is very important. Your methodology answers two main questions:
✓ How did you collect or generate the data?
✓ How did you analyze the data?


While writing this section, be direct and precise, and write it in the past tense. Include enough information that others could repeat the experiment and evaluate whether the results are reproducible, and that the audience can judge whether the results and conclusions are valid.
The explanation of the collection and the analysis of your data is very important because:
✓ Readers need to know the reasons why you chose a particular method or procedure
instead of others.
✓ Readers need to know that the collection or the generation of the data is valid in the field
of study.
✓ Discuss the anticipated problems in the process of the data collection and the steps you
took to prevent them.
✓ Present the rationale for why you chose specific experimental procedures.
✓ Provide sufficient information about the whole process so that others could replicate your study.
You can do this by giving a completely accurate description of the data collection equipment and techniques, and by explaining how you collected the data and analyzed them.
Specifically:
✓ Present the basic demographic profile of the sample population like age, gender, and the
racial composition of the sample. When animals are the subjects of a study, you list their
species, weight, strain, sex, and age.
✓ Explain how you gathered the samples/ subjects by answering these questions:
✓ Did you use any randomization techniques?
✓ How did you prepare the samples?
✓ Explain how you made the measurements by answering this question.
✓ What calculations did you make?
✓ Describe the materials and equipment that you used in the research.
✓ Describe the statistical techniques that you used upon the data.
The order of the methods section:
1. Describing the samples/participants;
2. Describing the materials you used in the study;
3. Explaining how you prepared the materials;
4. Describing the research design;
5. Explaining how you made measurements and what calculations you performed; and
6. Stating which statistical tests you used to analyze the data.
QUANTITATIVE DATA ANALYSIS
Quantitative Data Analysis
It is a systematic approach to investigation during which numerical data are collected and/or the researcher transforms what is collected or observed into numerical data. It often describes a situation or event, answering the 'what' and 'how many' questions you may have about something. This is research which involves measuring or counting attributes (i.e., quantities).
A quantitative approach is often concerned with finding evidence to either support or contradict
an idea or hypothesis you might have. A hypothesis is where a predicted answer to a research
question is proposed; for example, you might propose that if you give a student training in how to
use a search engine it will improve their success in finding information on the Internet.
You could then go on to explain why a particular answer is expected - you put forward a theory.
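
As a sketch of how such a hypothesis might then be examined statistically, the fragment below compares the search-success scores of a trained and an untrained group with a two-sample (Welch's) t statistic. The scores are invented for illustration, and a real analysis would also look up the p-value for the computed t:

```python
import math

# Hypothetical search-success scores (tasks completed out of 10)
trained = [8, 9, 7, 8, 9, 8]
untrained = [6, 7, 5, 6, 7, 6]

def mean(xs):
    return sum(xs) / len(xs)

def sample_variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t statistic for two independent samples
t = (mean(trained) - mean(untrained)) / math.sqrt(
    sample_variance(trained) / len(trained) + sample_variance(untrained) / len(untrained)
)
print(f"trained mean = {mean(trained):.2f}, untrained mean = {mean(untrained):.2f}, t = {t:.2f}")
```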
We can gather quantitative data in a variety of ways and from a number of different sources. Many
of these are similar to sources of qualitative data, for example:
✓ Questionnaires - a series of questions and other prompts for the purpose of gathering
information from respondents
✓ Interviews - a conversation between two or more people (the interviewer and the
interviewee) where questions are asked by the interviewer to obtain information from the
interviewee - a more structured approach would be used to gather quantitative data
✓ Observation - a group of participants or a single participant is manipulated by the researcher, for example, asked to perform a specific task or action. Observations are then made of their user behavior, user processes, workflows, etc., either in a controlled situation (e.g. lab based) or in a real-world situation (e.g. the workplace)
✓ Transaction logs - recordings or logs of system or website activity
✓ Documentary research - analysis of documents belonging to an organization
Why do we do quantitative data analysis?
Once you have collected your data you need to make sense of the responses you have got back.
Quantitative data analysis enables you to make sense of data by:
• organizing them
• summarizing them
• doing exploratory analysis
And to communicate the meaning to others by presenting data as:
• tables
• graphical displays
• summary statistics
We can also use quantitative data analysis to see:
• where responses are similar, for example, we might find that the majority of students all go to the university library twice a week
• if there are differences between the things we have studied, for example, 1st year students might go once a week to the library, 2nd year students twice a week, and 3rd year students three times a week
• if there is a relationship between the things we have studied. So, is there a relationship between
the number of times a student goes to the library and their year of study?
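
A minimal sketch of this kind of group comparison in Python; the library-visit counts and year-of-study tags are invented for illustration:

```python
from collections import defaultdict

# Hypothetical responses: (year of study, library visits per week)
responses = [(1, 1), (1, 1), (1, 2), (2, 2), (2, 2), (2, 3), (3, 3), (3, 3), (3, 4)]

visits_by_year = defaultdict(list)
for year, visits in responses:
    visits_by_year[year].append(visits)

# Compare the groups: average visits per year of study
for year in sorted(visits_by_year):
    group = visits_by_year[year]
    print(f"Year {year}: mean visits per week = {sum(group) / len(group):.1f}")
```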
Using software for statistical analysis
Some key concepts
Before we look at types of analysis and tools, we need to be familiar with a few concepts first:
• Population - the whole set of units of analysis that might be investigated; these could be students, cats, house prices, etc.
house prices etc.
• Sample - the actual set of units selected for investigation and who participate in the research
• Variable - characteristics of the units/participants
• Value - the score/label/value of a variable, not the frequency of occurrence. For example, if age is a characteristic of a participant then the value would be the actual age, e.g. 21, 22, 25, 30, 18, not how many participants are aged 21, 22, 25, 30, or 18.
• Case/subject - the individual unit/participant of the study/research.
Sampling
Sampling is complex and can be done in many ways, depending on 1) what you want to achieve from your research, and 2) practical considerations of who is available to participate. The type of statistical analysis you do will depend on the sample type you have. Most importantly, you cannot generalize your findings to the population as a whole if you do not have a random sample. You can still undertake some inferential statistical analysis, but you should report these as results of your sample, not as applicable to the population at large.
Common sampling approaches include:
• Random sampling
• Stratified sampling
• Cluster sampling
• Convenience sampling
• Accidental sampling
Steps in Quantitative Data Analysis
Baraceros (2016) identified the following steps in quantitative data analysis, noting that "no data organization means no sound data analysis".
1. Coding system – to analyzed data means to quantify of change the verbally expressed data
into numerical information. Converting the words, images, or pictures into numbers, they become
fit for any analytical procedures requiring knowledge of arithmetic and mathematical
computations. But it is not possible for the researcher to do the mathematical operations such as
division, multiplication, or subtraction in the word level, unless you code the verbal responses and
observation categories.
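
A minimal sketch of such a coding system in Python; the Likert labels and the verbal responses are hypothetical:

```python
# Hypothetical coding scheme: Likert-type verbal responses mapped to numbers
codes = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]

# Once the words are coded as numbers, arithmetic becomes possible
coded = [codes[r] for r in responses]
print(coded)                    # [4, 5, 3, 4, 2]
print(sum(coded) / len(coded))  # mean response: 3.6
```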
Step 2: Analyzing the Data
Data coding and tabulation are both essential in preparing for the data analysis. Before interpreting every component of the data, the researcher first decides what kind of quantitative analysis to use - whether a simple descriptive statistical technique or an advanced analytical method. The first, which college students often use, tells some aspects of categories of data such as: frequency distribution, measures of central tendency (mean, median, and mode), and standard deviation. However, it does not give information about the population from which the sample came. The second, on the other hand, fits graduate-level studies because it involves complex statistical analysis requiring a good foundation and thorough knowledge of the data-gathering instrument used. The results of the analysis reveal the following aspects of an item in a set of data (Mogan 2014; Punch 2014; Walsh 2010, as cited by Baraceros 2016):
• Frequency distribution – gives you the frequency of distribution and the percentage of the occurrence of an item in a set of data. In other words, it gives you the number of responses given repeatedly for one question.
• Measure of Central Tendency – indicates the different positions or values of the items, such that in a category of data, you find an item or items serving as the:
Mean – average of all the items or scores
Example: 3 + 8 + 9 + 2 + 3 + 10 + 3 = 38
38 ÷ 7 = 5.43 (Mean)
Median – the score in the middle of the ordered set of items that cuts or divides the set into two groups
Example: Arranged in order (2, 3, 3, 3, 8, 9, 10), the data set used for the Mean has 3 as the Median.
Mode – refers to the item or score in the data set that has the most repeated appearance in the set.
Example: Again, in the example above for the Mean, 3 is the Mode (it appears three times).

• Standard Deviation – shows the extent of the difference of the data from the mean. An examination of this gap between the mean and the data gives you an idea of the extent of the similarities and differences between the respondents. There are mathematical operations that you have to perform to determine the standard deviation, as shown in the example below.
Step 1: Compute the mean.
Step 2: Compute the deviation (difference) between each respondent's answer (data item) and the mean. A positive sign (+) appears before the number if the data item is higher than the mean; a negative sign (-), if it is lower.
Step 3: Compute the square of each deviation.
Step 4: Compute the sum of squares by adding the squared figures.
Step 5: Divide the sum of squares by the number of data items to get the variance.
Step 6: Compute the square root of the variance figure to get the standard deviation.

Example:
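A minimal Python sketch applying the measures above (frequency distribution, mean, median, mode, and the six standard-deviation steps) to the data set from the Mean example:

```python
from collections import Counter
import math

data = [3, 8, 9, 2, 3, 10, 3]   # the data set from the Mean example
n = len(data)

# Frequency distribution: how often each score occurs, with percentages
freq = Counter(data)
for score, count in sorted(freq.items()):
    print(f"score {score}: {count} time(s) ({100 * count / n:.1f}%)")

# Measures of central tendency
mean = sum(data) / n                  # Step 1: compute the mean (5.43)
median = sorted(data)[n // 2]         # middle item of the ordered set (odd n): 3
mode = freq.most_common(1)[0][0]      # most repeated score: 3
print(f"mean = {mean:.2f}, median = {median}, mode = {mode}")

# Standard deviation, following the six steps above
deviations = [x - mean for x in data]      # Step 2: deviation of each item from the mean
squares = [d ** 2 for d in deviations]     # Step 3: square each deviation
sum_of_squares = sum(squares)              # Step 4: sum of squares
variance = sum_of_squares / n              # Step 5: divide by the number of data items
std_dev = math.sqrt(variance)              # Step 6: square root of the variance
print(f"standard deviation = {std_dev:.2f}")
```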
2. Advanced Quantitative Analytical Methods – An analysis of quantitative data that involves the use of more complex statistical methods needing computer software like SPSS, STATA, or MINITAB, among others, usually occurs among graduate-level students taking their MA or PhD degrees. Some of the advanced methods of quantitative data analysis are the following (Argyous 2011; Levin & Fox 2014; Godwin 2014, as cited by Baraceros 2016), with a computational sketch after the list:
a) Correlation – uses statistical analysis to yield results that describe the relationship of two variables. The results, however, are incapable of establishing causal relationships.
b) Analysis of Variance (ANOVA) - is a statistical method used to test differences between two
or more means. It may seem odd that the technique is called "Analysis of Variance" rather than
"Analysis of Means." As you will see, the name is appropriate because inferences about means
are made by analyzing variance.
c) Regression - In statistical modeling, regression analysis is a statistical process for estimating
the relationships among variables. It includes many techniques for modeling and analyzing
several variables, when the focus is on the relationship between a dependent variable and one
or more independent variables (or 'predictors').
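
The computations behind these three methods can be sketched in a few lines of plain Python; the data sets below are invented for illustration, and a real study would use dedicated software:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

# --- a) Correlation: Pearson r for two hypothetical variables ---
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]
mx, my = mean(x), mean(y)
r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / math.sqrt(
    sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)
)
print(f"Pearson r = {r:.2f}")           # describes a relationship, not causation

# --- b) One-way ANOVA: F ratio for three hypothetical groups ---
groups = [[3, 4, 5], [5, 6, 7], [8, 9, 10]]
grand_mean = mean([v for g in groups for v in g])
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum(sum((v - mean(g)) ** 2 for v in g) for g in groups)
df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
f_ratio = (ss_between / df_between) / (ss_within / df_within)
print(f"ANOVA F = {f_ratio:.2f}")       # inferences about means made via variances

# --- c) Regression: least-squares line of y on x ---
slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
intercept = my - slope * mx
print(f"y = {intercept:.2f} + {slope:.2f}x")
```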
SAMPLING PROCEDURE
Sampling is a process or technique of choosing a sub-group from a population to participate in
the study; it is the process of selecting a number of individuals for a study in such a way that the
individuals selected represent the large group from which they were selected (Ogula, 2005).
There are two major sampling procedures in research. These include probability and non-
probability sampling.
Probability Sampling Procedures
In probability sampling, every unit in the population has a known chance (greater than zero) of being selected into the sample. There are four basic types of sampling procedures associated with probability samples: simple random, systematic, stratified, and cluster sampling.
Simple Random Sampling Procedure
Simple random sampling provides the base from which the other more complex sampling
methodologies are derived. To conduct a simple random sample, the researcher must first
prepare an exhaustive list (sampling frame) of all members of the population of interest. From this
list, the sample is drawn so that each person or item has an equal chance of being drawn during
each selection round (Kanupriya, 2012).
To draw a simple random sample without introducing researcher bias, computerized sampling
programs and random number tables are used to impartially select the members of the population
to be sampled. Subjects in the population are sampled by a random process, using either a
random number generator or a random number table, so that each person remaining in the
population has the same probability of being selected for the sample (Friedrichs, 2008).
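
A minimal sketch of a simple random draw in Python, with the built-in random number generator standing in for a random number table; the sampling frame of 100 student IDs is hypothetical:

```python
import random

# Hypothetical sampling frame: an exhaustive list of the population of interest
frame = [f"student_{i:03d}" for i in range(1, 101)]  # 100 students

random.seed(42)                        # fixed seed so the draw can be reproduced
sample = random.sample(frame, k=20)    # every member has an equal chance of selection
print(sample[:5])
```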
Systematic Sampling Procedure
The systematic sampling procedure is often used in place of simple random sampling. In systematic sampling, the researcher selects every nth member after randomly selecting one of the first through nth elements as the starting point. For example, if the researcher decides to sample 20 respondents from a population of 100, every 5th member of the population will systematically be selected.
A researcher may choose to conduct a systematic sample instead of a simple random sample for several reasons. Firstly, systematic samples tend to be easier to draw and execute. Secondly, the researcher does not have to go back and forth through the sampling frame to draw the members to be sampled. Thirdly, a systematic sample may spread the members selected for measurement more evenly across the entire population than simple random sampling. Therefore, in some cases, systematic sampling may be more representative of the population and more precise (Groves et al., 2006).
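
A minimal sketch of the every-nth-member rule in Python, reusing the hypothetical 20-from-100 example:

```python
import random

frame = [f"student_{i:03d}" for i in range(1, 101)]  # hypothetical frame of 100

random.seed(7)
interval = len(frame) // 20         # sampling interval: every 5th member
start = random.randrange(interval)  # random starting point within the first interval
sample = frame[start::interval]     # then every nth member after the start
print(len(sample), sample[:4])      # 20 members, spread evenly across the frame
```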
Stratified Sampling Procedure
The stratified sampling procedure is the most effective method of sampling when a researcher wants to get a representative sample of a population. It involves categorizing the members of the population into mutually exclusive and collectively exhaustive groups. An independent simple random sample is then drawn from each group. Stratified sampling techniques can provide more precise estimates if the population being surveyed is more heterogeneous than the categorized groups. This technique can enable the researcher to determine desired levels of sampling precision for each group, and can provide administrative efficiency. The main advantage of the approach is that it is able to give the most representative sample of a population (Hunt & Tyrrell, 2001).
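
A minimal sketch of stratified sampling in Python; the population, the strata (grade levels), and the 10% proportional allocation are all invented for illustration:

```python
import random

# Hypothetical population, each member tagged with a stratum (grade level)
population = [("G11", f"s{i}") for i in range(60)] + [("G12", f"s{i}") for i in range(60, 100)]

random.seed(1)
strata = {}
for stratum, member in population:   # mutually exclusive, collectively exhaustive groups
    strata.setdefault(stratum, []).append(member)

# An independent simple random sample from each stratum (proportional, 10%)
sample = []
for stratum, members in strata.items():
    sample.extend(random.sample(members, len(members) // 10))
print(len(sample))  # 6 from G11 + 4 from G12 = 10
```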
Cluster Sampling Procedure
In cluster sampling, a cluster (a group of population elements) constitutes the sampling unit, instead of a single element of the population. The sampling in this technique is mainly geographically driven. The main reason for cluster sampling is cost efficiency (economy and feasibility). The sampling frame is also often readily available at the cluster level, and it takes a short time to list and implement. The technique is also suitable for surveys of institutions (Ahmed, 2009) or households within a given geographical area.
But the design is not without disadvantages. Some of the challenges that stand out are: it may not reflect the diversity of the community; elements in the same cluster may share similar characteristics; it provides less information per observation than a simple random sample of the same size (redundant information: similar information from the others in the cluster); and standard errors of the estimates are high compared to other sampling designs with the same sample size.
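
A minimal sketch of cluster sampling in Python; the class sections and their members are hypothetical, and whole clusters are drawn at random:

```python
import random

# Hypothetical clusters: class sections, each a group of population elements
clusters = {
    "Section A": ["a1", "a2", "a3"],
    "Section B": ["b1", "b2", "b3"],
    "Section C": ["c1", "c2", "c3"],
    "Section D": ["d1", "d2", "d3"],
}

random.seed(3)
chosen = random.sample(list(clusters), k=2)  # the cluster, not the individual, is the sampling unit
sample = [m for section in chosen for m in clusters[section]]
print(chosen, sample)                        # every member of each chosen cluster is included
```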
Non-Probability Sampling Procedures
Non-probability sampling is used in some situations where the population may not be well defined. In other situations, there may not be great interest in drawing inferences from the sample to the population. The most common reason for using a non-probability sampling procedure is that it is less expensive than a probability sampling procedure and can often be implemented more quickly (Michael, 2011). It includes purposive, convenience, and quota sampling procedures.
Purposive/Judgmental Sampling Procedure
In the purposive sampling procedure, the researcher chooses the sample based on who they think would be appropriate for the study. The main objective of purposive sampling is to arrive at a sample that can adequately answer the research objectives. The selection of a purposive sample
is often accomplished by applying expert knowledge of the target population to select in a
nonrandom manner a sample that represents a cross-section of the population (Henry, 1990).
A major disadvantage of this method is subjectivity since another researcher is likely to come up
with a different sample when identifying important characteristics and picking typical elements to
be in the sample. Given the subjectivity of the selection mechanism, purposive sampling is
generally considered most appropriate for the selection of small samples often from a limited
geographic area or from a restricted population definition. The knowledge and experience of the researcher making the selections is a key aspect of the "success" of the resulting sample (Michael, 2011). A case study research design, for instance, employs the purposive sampling procedure to arrive at a particular "case" of study and a given group of respondents. Key informants are also selected using this procedure.
Convenience Sampling Procedure
Convenience sampling is sometimes known as opportunity, accidental or haphazard sampling. It
is a type of nonprobability sampling which involves the sample being drawn from that part of the
population which is close to hand, that is, a population which is readily available and convenient.
The researcher using such a sample cannot scientifically make generalizations about the total
population from this sample because it would not be representative enough (Michael, 2011). This
type of sampling is most useful for pilot testing.
Convenience sampling differs from purposive sampling in that expert judgment is not used to
select a representative sample. The primary selection criterion relates to the ease of obtaining a
sample. Ease of obtaining the sample relates to the cost of locating elements of the population,
the geographic distribution of the sample, and obtaining the interview data from the selected
elements (de Leeuw, Hox & Huisman, 2003).
SAMPLING TECHNIQUES
When sampling, you need to decide what units (i.e., what people, organizations, data, etc.) to
include in your sample and which ones to exclude. As you'll know by now, sampling techniques
act as a guide to help you select these units, and you will have chosen a specific probability or
non-probability sampling technique:
• If you are following a probability sampling technique, you'll know that you require a list of the
population from which you select units for your sample. This raises potential data protection and
confidentiality issues because units in the list (i.e., when people are your units) will not necessarily
have given you permission to access the list with their details. Therefore, you need to check that
you have the right to access the list in the first place.
• If using a non-probability sampling technique, you need to ask yourself whether you are
including or excluding units for theoretical or practical reasons. In the case of purposive sampling,
the choice of which units to include and exclude is theoretically-driven. In such cases, there are
few ethical concerns. However, where units are included or excluded for practical reasons, such
as ease of access or personal preferences (e.g., convenience sampling), there is a danger that
units will be excluded unnecessarily. For example, it is not uncommon, when selecting units using convenience sampling, for researchers' natural preferences (and even prejudices) to influence the selection process: the researcher may avoid approaching certain groups (e.g., socially marginalized individuals, people who speak little English, disabled people,
etc.). Where this happens, it raises ethical issues because the picture being built through the
research can be excessively narrow, and arguably, unethically narrow. This highlights the
importance of using theory to determine the creation of samples when using non-probability
sampling techniques rather than practical reasons, whenever possible.
DRAW CONCLUSIONS AND RECOMMENDATIONS
Drawing Conclusions
For any research project and any scientific discipline, drawing conclusions is the final, and most
important, part of the process. Whichever reasoning processes and research methods were used,
the final conclusion is critical, determining success or failure. If an otherwise excellent experiment
is summarized by a weak conclusion, the results will not be taken seriously. Success or failure is
not a measure of whether a hypothesis is accepted or refuted, because both results still advance
scientific knowledge. (Shuttleworth 2014)
Failure is poor experimental design, or flaws in the reasoning processes, which invalidate the
results. As long as the research process is robust and well designed, then the findings are sound,
and the process of drawing conclusions begins. Generally, a researcher will summarize what they
believe has been learned from the research, and will try to assess the strength of the hypothesis.
Even if the null hypothesis is accepted, a strong conclusion will analyze why the results were not
as predicted. In observational research, with no hypothesis, the researcher will analyze the
findings, and establish if any valuable new information has been uncovered.
Generating Leads for Future Research
However, very few experiments give clear-cut results, and most research uncovers more
questions than answers.
The researcher can use these to suggest interesting directions for further study. If, for example,
the null hypothesis was accepted, there may still have been trends apparent within the results.
These could form the basis of further study, or experimental refinement and redesign.
Evaluation - Flaws in the Research Process
The researcher will then evaluate any apparent problems with the experiment. This involves
critically evaluating any weaknesses and errors in the design, which may have influenced the
results.
Even strict 'true experimental' designs have to make compromises, and the researcher must be
thorough in pointing these out, justifying the methodology and reasoning.
For example, when drawing conclusions, the researcher may think that another causal effect
influenced the results, and that this variable was not eliminated during the experimental process.
A refined version of the experiment may help to achieve better results, if the new effect is included
in the design process.
In the global warming example, the researcher might establish that carbon dioxide emission alone
cannot be responsible for global warming. They may decide that another effect is contributing, so
propose that methane may also be a factor in global warming. A new study would incorporate
methane into the model.
What Are the Clear-Cut Benefits of the Research?
The next stage is to evaluate the advantages and benefits of the research. In medicine and
psychology, for example, the results may point to a new way of treating a medical problem, so
the advantages are obvious. However, all well-constructed research is useful, even if it is just
adding to the fount of human knowledge. An accepted null hypothesis has an important meaning
to science.
Suggestions Based Upon the Conclusions
The final stage is the researcher's recommendations based upon the results, depending upon the
field of study. This area of the research process can be based around the researcher's personal
opinion, and will integrate previous studies.
For example, a researcher into schizophrenia may recommend a more effective treatment. A
physicist might postulate that our picture of the structure of the atom should be changed. A
researcher could make suggestions for refinement of the experimental design, or highlight
interesting areas for further study. This final piece of the paper is the most critical, and pulls
together all of the findings.
The area of a research paper that causes the most intense and heated debate amongst scientists is the drawing of conclusions.
It is critical in determining the direction taken by the scientific community, but the researcher will
have to justify their findings.
Summary - The Strength of the Results
The key to drawing a valid conclusion is to ensure that the deductive and inductive processes are
correctly used, and that all steps of the scientific method were followed.
If your research had a robust design, questioning and scrutiny will be devoted to the experiment
conclusion, rather than the methods.
Recommendations
Other recommendations may also be appropriate. When preparing this section, remember that in
making your recommendations, you must show how your results support them. A
recommendation for a preferred alternative should include:
1. specifically stating what should be done, the steps required to implement the policy, and the resources needed;
2. a discussion of the benefits to the organization and what problems would be corrected or avoided;
3. a discussion of the feasibility of the proposed policy; and
4. a general statement about the nature and timing of an evaluation plan that would be used to determine the effectiveness of the proposed policy.
Recommendations for Further Research
In this section, you finally have the opportunity to present and discuss the actions that future
researchers should take as a result of your Project. A well-thought-out set of recommendations
makes it more likely that the organization will take your recommendations seriously. Ideally you
should be able to make a formal recommendation regarding the alternative that is best supported
by the study. Present and discuss the kinds of additional research suggested by your Project. If
the preferred alternative is implemented, what additional research might be needed?

Great job! After studying the concept notes, let's try to answer these activities!
II. INTERACTION:
A. Learning Activities:
Assessment Technique:
1. When can you say that an instrument to be used for the data gathering is usable, valid and
reliable?

2. In writing the research methodology, what should be the order of the methods section?

3. Explain briefly the steps in quantitative data analysis.


4. Explain what you understand about probability sampling and non-probability sampling by
giving what is asked in the table below:

Probability sampling:
• Simple Random Sampling Procedure – My Understanding: __________ Example: __________
• Systematic Sampling Procedure – My Understanding: __________ Example: __________
• Stratified Sampling Procedure – My Understanding: __________ Example: __________
• Cluster Sampling Procedure – My Understanding: __________ Example: __________

Non-probability sampling:
• Purposive/Judgmental Sampling Procedure – My Understanding: __________ Example: __________
• Convenience Sampling Procedure – My Understanding: __________ Example: __________

III. INTEGRATION
A. Transfer of Learning

THREE THINGS I HAVE LEARNED

TWO QUESTIONS I STILL HAVE

ONE OPINION THAT I HAVE

B. Reflection:

1. Why do you think drawing conclusions is considered one of the most important processes in research?

Great job! We are finally done with the module! I hope you enjoyed learning the topic! See you in our next journey! If you have questions, please feel free to contact me through my mobile number 09667016407.

Prepared by: Andres Kim D. Apoya
