Ways and Means of Research Method
Citation: Kumar, D.M. & Noor, N.A.M. (2013). Ways and Means of Research Method,
Research India Publications, 1st Edition, India.
“This book is dedicated to our teachers, who have shown us the path of ‘learning by
doing’ with hard work, sincerity, and commitment”
ACKNOWLEDGEMENT
First and foremost, I would like to thank GOD for HIS boundless support in the
accomplishment of this monograph.
I would like to thank my family members who have fully supported me throughout my
life. I would especially like to thank them for their love and direction.
I would like to thank Datuk Prof. Dr. Mohamed Mustafa Ishak, Vice-Chancellor,
Universiti Utara Malaysia, for providing a work culture conducive to getting ahead
in literary works.
I am in deep gratitude to my advisor, Prof. Dr. Noor Azizi Ismail, Dean, Othman Yeop
Abdullah Graduate School of Business, for his constant encouragement in my
publication endeavors. Thank you for sharing your knowledge and experience with me
and guiding me through this process with patience and encouragement.
FOREWORD
A number of books have been published on research tools and techniques, but very few
are tailor-made for readers in developing countries and attuned to their knowledge
and skills. This book starts with the concepts, differentiates the methods, and
fine-tunes the reader's knowledge of research processes, tools, and techniques,
making it simple for a reader or scholar to capture their meaning and applicability.
Pursuing a research topic, or arriving at managerial decisions with accompanying
proof and evidence, also needs the support of tools and techniques for authenticity
and accountability. I believe this book is an outstanding piece of work for any
scholar or professional who wants to be investigative, critical, logical, and
scientific in decision making and in inferences leading to policy and project
implementation.
The “ways and means of research methods” incorporated in this book will equip
readers, researchers, and professionals with scientific knowledge, a critical
mindset, and logical conclusions in an efficient and effective way. I presume this
book, titled “Ways and Means of Research Methods”, will extend better insight into
and understanding of research and research methodology to managers, teachers,
professionals, researchers, and the student community.
Principal
Meredian College
Ullal, Mangalore.
India.
PREFACE
Research is a logical and systematic search for new and useful information on a
particular issue, incident, situation, or collection of issues on a topic. The
exploration of such a topic is possible only through Research and Development
activities. To test the assumptions of theory and practice, the application of
scientific principles and research techniques with objectivity and empirical
observation is very much required. This monograph on Research and Research
Methodology focuses on helping students and faculty members develop an overview and
conceptual understanding of the nature and features of empirical research in the
social sciences, and of how tools and techniques support an effective orientation
to the research process. Though there are several concepts related to Research and
Research Methodology, this book represents a comprehensive and revised version of a
collection of literature and modules on Research and Research Methodology.
Chapter 5 covers the Research Process. The students get an idea of where they have
to commence the research and how they should conclude it.
Chapter 7 contains information related to Research Design. The students get an idea
of what a Research Plan of Action is and how various types of Research Designs
apply in Social Research.
Chapter 9 formally presents what a Hypothesis is and the different types of
Hypotheses formulated in Social Research. The students get a clear understanding of
how they can formulate a hypothesis and how they can plan their analysis further
with data collection.
Chapter 11 is devoted specifically to Item Analysis. The students get an idea of
how a tool in Research can be developed, how relevant items can be included in a
data collection tool, and thereby how they can obtain valuable information on the
Research topic.
Chapter 12 examines the tool administration process, data processing methods, and
how data analysis with various statistical methods can be done.
Chapter 13 finally discusses the method of report writing. It describes different
types of report writing based on the design adopted and gives the student a clear
guideline on how to write a research report.
COURSE CONTENT
MODULE 1
SCIENTIFIC RESEARCH
Learning objectives:
INTRODUCTION
As in many other spheres of human endeavor, research provides a key basis for
developing knowledge. In the physical sciences, physicists, biologists,
mathematicians, chemists, and so on, have long relied on and used research as a way
of helping to define and refine knowledge in their subject areas. It is only
comparatively recently that the social scientist has begun to use research for the same
purpose. Certainly, research in management is one of the newest areas of research.
This chapter concentrates on the basic concepts of research and research
methodology, supporting a better understanding of the steps, processes, approaches,
and methods in research and of the right way to apply them.
relations among the events in nature. Science is not a democratic process. Majority
rule does not determine what sound science is. Science does not accept notions
that are proven false by experiment.
INTERPRETATION: Drawing conclusions from organized data; establishing the
relationship between the data of a study, the study findings, and other scientific
knowledge.
FUNCTIONS OF SCIENCE
Verification: All information should be subject to verification. The facts and
figures gathered are to be verified with independent observations that withstand
subjectivity in arriving at an inference.
Definition: The facts and figures that are subject to scientific research should be
clear and definite. There should be no confusion regarding the variables the
researcher incorporates in the research.
Prediction: The inference arrived at from the analysis and interpretation of the
results should enable appropriate prediction of implications.
SCIENTIFIC METHOD
It is true that personal and cultural beliefs influence both our perceptions and our
interpretations of natural phenomena. The scientific method attempts to minimize
the influence of bias or prejudice in the experimenter when testing a hypothesis or a
theory. The scientific method distinguishes science from other forms of explanation
because of its requirement of systematic experimentation. The scientific method is a
formalized way of answering questions about causation in the natural world. In
principle, the scientific method has three main steps. The first step is observation of
phenomena that can be detected by the senses. Second, the scientist forms a
hypothesis, an idea about the cause of the phenomena that have been observed. The
third step is experimentation: performing tests designed to show whether one or
more of the hypotheses is more or less likely to be correct. These tests often
include numerical data so the results can be quantified.
So, the scientific method is the process by which scientists, collectively and over time,
endeavor to construct an accurate (that is, reliable, consistent, and non-arbitrary)
representation of the world.
Our personal and cultural beliefs can influence our perceptions and our
interpretations of phenomena. The scientific method minimizes the influence of
experimenter bias or prejudice and enables researchers, collectively and over time,
to construct a reliable, consistent, and non-arbitrary representation of the world.
Scientific Methods are Empirical: Information or facts about the world are based on
sensory experiences; that is, direct observation of the world, to see whether
scientific theories or speculations agree with the facts.
Scientific Methods are Systematic: All aspects of the research process are carefully
planned in advance, and nothing is done in a casual or haphazard fashion.
Scientific Methods are a Search for Causes: Scientists assume that there is order
in the universe, that there are ascertainable reasons for the occurrence of all
events, and that science can discover the orderly nature of the world.
Scientific Methods are Objective: Scientists attempt to remove their biases,
beliefs, preferences, wishes, and values from their scientific research.
Objectivity means the ability to see and accept facts as they are, not as one might
wish them to be.
Scientific Methods rely on evidence: Proof is important in every research effort.
Without proof, generalization of the facts is not possible. Scientific methods
support empirical observation, which is based on facts and figures.
Scientific Methods enable prediction: The inference arrived at from the analysis
and interpretation of the results should enable appropriate prediction of
implications.
indirectly). In educational research, the hypothesis is often a question about the
relationship between or among variables that may influence learning. The hypothesis
may be one that merely asks whether a relationship exists (correlational research),
or it may state a cause-and-effect relationship.
STEPS AT A GLANCE
The scientific method is characterized by many components: (1) order, (2) control,
(3) empiricism, and (4) generalization and theoretical formulation. The scientific
approach to problem solving requires the application of order and discipline to
instill confidence in the investigator's results. This requires the application of
the scientific method, in which a series of systematic steps is followed to solve
problems. The sequence is not rigid: steps may be deleted or skipped, or their
order may vary. The steps may include the following:
The success of science has more to do with an attitude common to scientists than with
a particular method. This attitude is one of inquiry, experimentation, and humility
before the facts (Hewitt, 1965)
Careful logical analysis: Research is logical and objective; every possible step is
taken to ensure the validity of procedure, tools and conclusions. The researcher
strives to eliminate personal feelings and bias through scientific method. Logical
analysis of the concepts and problems related to phenomena are envisaged for
better research.
Problem-based data collection: There may be a lot of information available. One of
the requisites of scientific research is “the segregation of relevant and
irrelevant data”. Proper application of scientific methods requires relevant data
for analysis and interpretation. The data should be limited to the problem in
focus.
LIMITATIONS OF THE SCIENTIFIC METHOD
1. Many of the crucial processes occurred in the past and are difficult to test
in the present.
2. Personal biases are especially strong on topics related to origins because
of their wider implications.
3. The hypothesis-testing process fails to eliminate most of the personal and
cultural biases of the community of investigators.
4. Researchers who follow the scientific path have often been referred to as
‘positivists’, and the positivist ideal is much harder to imagine.
5. Scientific studies in psychology are usually highly controlled, which makes
them highly artificial.
6. Due to the need to have completely controlled experiments to test a
hypothesis, science cannot prove everything.
Scientific attitudes can be regarded as a complex of values and norms which is held to
be binding on the man of science. The norms are expressed in the forms of
prescriptions, proscriptions, preferences, and permissions. They are legitimatized in
terms of institutional values (Barnes and Dolby, 1970). The norms and values are
supposed to be internalized by the scientist and thereafter they fashion his/her
scientific practice. The current set of scientific attitudes of objectivity, open-
mindedness, un-biasedness, curiosity, suspended judgment, critical mindedness, and
rationality has evolved from a systematic identification of scientific norms and values.
The earliest papers of any importance in the field of scientific attitudes are
those of Merton (1957).
1. Empiricism. Simply said, a scientist prefers to "look and see." You do not
argue about whether it is raining outside--just stick a hand out the window.
Underlying this is the belief that there is one real world following constant
rules of nature, and that we can probe that real world and build our
understanding--it will not change for us. Nor does the real world depend
upon our understanding--we do not "vote" on science.
2. Determinism. "Cause-and-effect" underlies everything. In simple
mechanisms, an action causes a reaction, and effects do not occur without
causes. This does not mean that some processes are not random or
chaotic. But a causative agent does not alone produce one effect today
and another tomorrow.
3. A belief that problems have solutions. Major problems have been tackled
in the past, from the Manhattan Project to sending a man to the moon.
Other problems such as pollution, war, poverty, and ignorance are seen as
having real causes and are therefore solvable--perhaps not easily, but
possible.
4. Parsimony. Prefer the simple explanation to the complex: when both the
complex earth-centered system with epicycles and the simple Copernican
sun-centered system explain apparent planetary motion, we choose the
simpler.
5. Scientific manipulation. Any idea, even though it may be simple and
conform to apparent observations, must usually be confirmed by the work
that teases out the possibility that the effects are caused by other factors.
6. Skepticism. Nearly all statements make assumptions of prior conditions. A
scientist often reaches a dead end in research and has to go back and
determine if all the assumptions made are true to how the world operates.
7. Precision. Scientists are impatient with vague statements: A virus causes
disease? How many viruses are needed to infect? Are any hosts immune
to the virus? Scientists are very exact and very "picky".
8. Respect for paradigms. A paradigm is our overall understanding about
how the world works. Does a concept "fit" with our overall understanding
or does it fail to weave in with our broad knowledge of the world? If it
doesn't fit, it is "bothersome" and the scientist goes to work to find out if
the new concept is flawed or if the paradigm must be altered.
9. A respect for the power of theoretical structure. Diederich describes how a
scientist is unlikely to adopt the attitude: "That is all right in theory but it
won't work in practice." He notes that the theory is "all right" only if it does
work in practice. Indeed the rightness of the theory is in the end what the
scientist is working toward; no scientific facts are accumulated at random.
(This is an understanding that many science fair students must learn!)
10. Willingness to change opinion. When Harold Urey, author of one textbook
theory on the origin of the moon's surface, examined the moon rocks
brought back from the Apollo mission, he immediately recognized this
theory did not fit the hard facts lying before him. "I've been wrong!" He
proclaimed without any thought of defending the theory he had supported
for decades.
11. Loyalty to reality. Urey above did not convert to just any new idea but
accepted a model that matched reality better. He would never have
considered holding an opinion just because it was associated with his
name.
12. Aversion to superstition and an automatic preference for scientific
explanation. No scientist can know all the experimental evidence
underlying current science concepts and therefore must adopt some views
without understanding their basis. A scientist rejects superstition and
prefers science paradigms out of an appreciation for the power of reality
based knowledge.
13. A thirst for knowledge, an "intellectual drive." Scientists are addicted
puzzle-solvers. The little piece of the puzzle that doesn't fit is the most
interesting. However, as Diederich notes, scientists are willing to live with
incompleteness rather than "...fill the gaps with off-hand explanations."
14. Suspended judgment. Again Diederich states: "A scientist tries hard not to
form an opinion on a given issue until he has investigated it, because it is
so hard to give up opinion already formed, and they tend to make us find
facts that support the opinions... There must be however, a willingness to
act on the best hypothesis that one has time or opportunity to form."
15. Awareness of assumptions. Diederich describes how a good scientist
starts by defining terms, making all assumptions very clear, and reducing
necessary assumptions to the smallest number possible. Often, we want
scientists to make broad statements about a complex world. But usually,
scientists are very specific about what they "know" or will say with
certainty: "When these conditions hold true, the usual outcome is such-
and-such."
16. Ability to separate fundamental concepts from the irrelevant or
unimportant. Some young science students get bogged down in
observations and data that are of little importance to the concept they
want to investigate.
17. Respect for quantification and appreciation of mathematics as a
language of science. Many of nature's relationships are best revealed by
patterns and mathematical relationships when reality is counted or
measured; and this beauty often remains hidden without this tool.
18. An appreciation of probability and statistics. Correlations do not prove
cause-and-effect, but some pseudoscience arises when a chance
occurrence is taken as "proof." Individuals who insist on an all-or-none
world and who have little experience with statistics will have difficulty
understanding the concept of an event occurring by chance.
19. An understanding that all knowledge has tolerance limits. All careful
analyses of the world reveal values that scatter at least slightly around the
average point; a human's core body temperature is about so many degrees and
objects fall at a certain rate of acceleration, but there is some variation.
There is no absolute certainty.
20. Empathy for the human condition. Contrary to popular belief, there is a
value system in science, and it is based on humans being the only
organisms that can "imagine" things that are not triggered by stimuli
present at the immediate time in their environment; we are, therefore, the
only creatures to "look" back on our past and plan our future. This is why
when you read a moving book, you imagine yourself in the position of
another person and you think "I know what the author meant and feels."
Practices that ignore this empathy and the resultant value of human life
produce inaccurate science. (Bronowski, 1978; Diederich, 1967; and
Whaley and Surratt 1967).
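Item 18 above warns that correlations do not prove cause-and-effect and that chance occurrences can masquerade as "proof." A minimal sketch (assuming Python; the data are pure simulated noise, invented for illustration) shows how often a "significant-looking" correlation appears between two streams that are independent by construction:

```python
import random
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
n, trials, threshold = 20, 1000, 0.44  # |r| > 0.44 is roughly p < .05 for n = 20
spurious = 0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]  # independent of xs by construction
    if abs(pearson_r(xs, ys)) > threshold:
        spurious += 1

# Even though every pair is independent noise, a few percent of trials
# still cross the "significant" threshold purely by chance.
print(f"{spurious} of {trials} noise pairs look 'significant'")
```

A researcher without this appreciation of probability would take each of those spurious correlations as evidence of a real relationship.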
Holton and Roller (1958) have found that the actual human characteristics exhibited
by scientists are quite distant from the attitudes ascribed to scientists. Anne Roe
(1961) reports that personal factors inevitably enter into scientific activity. They
influence a scientist's choice of what observations to make; they influence a scientist's
selective perception when making the observations. They also influence their
judgments about when there is sufficient evidence to be conclusive and
considerations as to whether discrepancies between experimental and theoretical
data are important or unimportant to their pet theories. Mitroff's study (1974) of the
behaviors of Apollo moon scientists shows that scientists are passionate, irrational
and strongly committed to their own favored theories. What this means is that
subjective characteristics of the scientists act as norms rather than the widely
accepted Mertonian norms.
Mitroff (1974) also noted that scientists are seldom objective; there is no such thing
as the disinterested observer. As Mitroff sees it, the real process of doing science is
much more complicated. It is filled with subjective and even irrational elements that
have been generally unacknowledged. Mitroff concludes by suggesting that “to
remove commitment and even bias may be to remove one of the strongest sustaining
force for both the discovery of scientific ideas and for their subsequent testing."
(Mitroff, 1973: 765). Quite often school science implies or depicts scientists as being
rational and critical in their scientific activities. This, however, may not always be the
case. Gauld (1973) admits that rationality does play a part in scientific activity
but is not always evident and not always practiced by all the members of a
scientific community.
Kirkut (1960) suggested that rational thinking is certainly exercised in judging
the products of those with whom one disagrees, although the same care may not be
lavished on the arguments of scientists whose views are closer to one's own.
Writings by Kuhn (1962) also provide insight into the factors and personal
characteristics that influence a scientist's activity. The degree of resistance,
stubbornness, jealousy, and rigid commitment witnessed among members of the
scientific community further undermines the total acceptance of scientific
attitudes.
The scientific method requires research questions and empirical observations that
lead to unbiased answers. The empirical observations of scientific research must be
well designed to provide accurate and repeatable (precise) results. The higher the
precision of a result, the higher the likelihood that the event will happen again;
results obtained through the scientific method thus have high predictability. The
findings can be repeated with test-retest observation and so have high reliability.
The scientific method enables us to test a hypothesis and distinguish between the
correlation of two or more things happening in association with each other and the
actual cause of the phenomenon we observe. The correlation of two variables cannot
explain the cause and effect of their relationship. Scientists design experiments using
a number of methods to ensure the results reveal the likelihood of the observation
happening (probability). Controlled experiments are used to analyze these
relationships and develop cause and effect relationships. Statistical analysis is
used to determine whether differences between treatments can be attributed to the
treatment applied, or whether they are artifacts of the experimental design or of
natural variation. In summary, the scientific method produces answers to questions posed in
the form of a working hypothesis that enables us to derive theories about what we
observe in the world around us. Its power lies in its ability to be repeated, providing
unbiased answers to questions to derive theories. This information is powerful and
offers opportunity to predict future events and phenomena (Ryan, 2002).
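The role of statistical analysis described above can be made concrete with a small sketch (assuming Python; the group names and data are invented for illustration): Welch's two-sample t statistic compares a treatment group against a control group to gauge whether the observed difference in means is larger than natural variation alone would suggest.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic: difference in means scaled by the pooled standard error.
    A large |t| suggests the difference is unlikely to be natural variation alone."""
    ma, mb = statistics.fmean(sample_a), statistics.fmean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    return (ma - mb) / se

# Hypothetical measurements from a controlled experiment (treatment vs. control).
treatment = [12.1, 13.4, 11.9, 14.2, 13.0, 12.8]
control   = [10.2, 11.1, 10.8, 11.5, 10.9, 11.3]
print(f"t = {welch_t(treatment, control):.2f}")
```

In practice the statistic would be compared against a t distribution to obtain the probability (p-value) of seeing such a difference by chance; statistical libraries perform that step directly.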
CONCLUSION
This chapter provides an understanding of basic terms which are frequently used in
research and research methodology. To gain an analytical understanding of research,
understanding and applying these terms in their right sense is very important. This
section further detailed the importance of understanding what science is, the
functions of science, the scientific method, and the steps to be followed in a
scientific undertaking. It further explained how a researcher can develop a
scientific attitude for effective scientific research.
DISCUSSION QUESTIONS
MODULE 2
INTRODUCTION TO RESEARCH
Learning objectives:
INTRODUCTION
The term ‘research’ has been viewed with mystique by many people. It is seen as the
preserve of academicians and the professional elite. In most people's minds, the
word ‘research’ conjures up the image of a scholar, laboratory work, a university,
or another ‘academic’ setting. But research is simply the process of asking
questions and answering them by survey or experiment in an organized way. It should
not be confined to academicians alone. Every thinking person has the capacity to do
research, and should. The fundamental requirement for research is an inquiring mind
that recognizes there are questions needing answers. The quest for knowledge, then,
is the basic idea behind research. The word research is derived from the Old French
recercher, meaning to search again.
DEFINITION
The Advanced Learner’s Dictionary of Current English (1952) lays down the meaning
of research as “a careful investigation or inquiry, especially through the search
for new facts in any branch of knowledge.” Redman and Mory (1923) define research
as a “systematized effort to gain new knowledge.” Slesinger and Stephenson, in the
Encyclopedia of Social Sciences (1930), define research as “the manipulation of
things, concepts or symbols for the purpose of generalizing to extend, correct or
verify knowledge, whether that knowledge aids in the construction of theory or in
the practice of an art.” According to Moser, social research is a systematized
investigation to gain new knowledge about social phenomena and problems.
SYSTEMATIC because there is a definite set of procedures and steps which you will
follow. There are certain things in the research process which are always done in order
to get the most accurate results.
QUESTIONS are central to the research. If there is no question, then the answer is of
no use. Research is focused on relevant, useful, and important questions. Without a
question, research has no focus, drive, or purpose.
ANALYSIS OF DEFINITIONS
Research is a systematic process of identifying the problems, defining the problems,
identifying the variables/ indicators to address these problems, collecting, compiling,
processing, and analyzing data to assess the inherent characteristics of the
phenomenon under study and to identify the objective basis for arriving at a
correct/reliable decision.
OBJECTIVES OF RESEARCH
TYPES OF RESEARCH
Research at broader level is divided into two categories: pure and applied. Applied
research has a practical problem-solving emphasis (Neuman, 2005). In business
research, it may help managers and researchers reveal answers to practical business
problems pertaining to performance, strategy, structure, culture, organizational
policy and the like. Pure or basic research is also problem solving, but in a different
sense. It obtains answers to questions of a theoretical nature and adds its contribution
to the existing body of knowledge (Sekaran and Bougie, 2010). Thus, both applied
and pure research are problem-based, but applied research helps managers in
organizational decision making.
Examples:
Applied (or action) research: Applied research is done to solve specific, practical
questions; its primary aim is not to gain knowledge for its own sake. Applied research
aims at finding a solution to an immediate problem facing a society or an
industrial/business organization. Research to identify social, economic or political
trends that may affect a particular institution or the copy research (research to find
out whether certain communications will be read and understood) or the marketing
research or evaluation research are examples of applied research. Thus, the central
aim of applied research is to discover a solution for some pressing practical problem.
Common areas of applied research include electronics, informatics, process
engineering and applied science.
Examples:
DESCRIPTIVE RESEARCH
Descriptive research includes surveys and fact-finding inquiries of different kinds. The
major purpose of descriptive research is a description of the situation as it exists at
present. The main characteristic of this method is that the researcher has no control
over the variables; he can only report what has happened or what is happening. Most
ex post facto research projects are used for descriptive studies in which the researcher
seeks to measure such items as, for example, frequency of shopping, preferences of
people, or similar data.
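A descriptive study of the kind mentioned often reduces to tabulating frequencies without manipulating any variable. A minimal sketch (assuming Python; the survey responses are invented for illustration):

```python
from collections import Counter

# Hypothetical survey responses: how often respondents shop per week.
responses = ["once", "twice", "once", "daily", "twice", "once", "never", "twice"]

freq = Counter(responses)  # frequency of each answer, no variable is controlled
total = len(responses)
for answer, count in freq.most_common():
    print(f"{answer:>6}: {count} ({100 * count / total:.0f}%)")
```

The researcher only reports what is happening in the data; nothing in the study design influences the responses themselves, which is the defining trait of descriptive research.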
Examples:
QUANTITATIVE
Examples:
QUALITATIVE RESEARCH
Examples:
CONCEPTUAL RESEARCH
Examples:
EMPIRICAL RESEARCH
stimulate the production of desired information. In such research, the researcher
must first provide himself with a working hypothesis or guess as to the probable
results. He then works to get enough facts (data) to prove or disprove his hypothesis.
He then sets up experimental designs which he thinks will manipulate the persons or
the materials concerned so as to bring forth the desired information. Such research is
thus characterized by the experimenter’s control over the variables under study and
his deliberate manipulation of one of them to study its effects. Evidence gathered
through experiments or empirical studies is today considered to be the most powerful
support possible for a given hypothesis.
Examples:
One-time research or longitudinal research (depending upon the time): Longitudinal
research is the case where the research is carried on over several time-periods. A
longitudinal study follows a group composed of the same people across a period of
the life span. The behavior of these individuals is observed and/or measured at several
intervals over time to study the changes in their behavior. Longitudinal studies may
cover a short time, such as a few weeks, or a long time, such as the entire life span.
From the point of view of time, we can have research either as one-time research or
longitudinal research. In the former case the research is confined to a single
time-period, whereas in the latter case the research is carried on over several
time-periods.
Example:
• Longitudinal studies in health-related behaviors and processes
over time.
• Longitudinal research in the study of children with chronic physical
conditions.
Examples;
SIMULATION RESEARCH
research is for theory development, prediction, education, training, performance etc.
Computer simulation is growing in popularity as a methodological approach for
organizational researchers.
Characteristics
Example:
Example:
EXPLORATORY RESEARCH
Example:
HISTORICAL RESEARCH
Historical research: The purpose of an historical method is "to reconstruct the past
objectively and accurately, often in relation to the tenability of an hypothesis" (Isaac
and Michael, 1977). For example, a study renovating practice in the pedagogy of
learning and development in Taiwan during the past 70 years would be considered a
historical study. It is that which utilizes historical sources like documents, remains, etc.
to study events or ideas of the past, including the philosophy of persons and groups
at any remote point of time. Historical research utilizes existing documents to study
the events of the past. Research can also be experimental or evaluative.
Examples:
CONCLUSION-ORIENTED RESEARCH
Characteristics:
• Sound statistical methods and formal research methodologies are used to
increase the reliability of the information
• Data sought tends to be specific & decisive
• Also more structured & formal than exploratory data
Example:
DECISION-ORIENTED RESEARCH
Example:
EVALUATIVE STUDIES
agencies are referred to as summative evaluations. State tests administered by states
to assess performance on state standards are an example of the information collected
to support a summative evaluation.
Examples:
ACTION RESEARCH
Examples:
3. Do classes with assigned seats have fewer discipline problems than classes
without assigned seating?
ANALYTICAL RESEARCH
Examples:
SURVEYS
Surveys: Surveys are fact-finding studies. Survey research provides information on a
given situation, incident, or phenomenon and aims to explain phenomena; it may also
be concerned with cause-and-effect relationships between variables. Here the
researcher often makes comparisons between demographic groups. Surveys cover a
definite geographical area and require a well-thought-out plan, careful analysis, and
rational interpretation of findings. In a census survey, by contrast, information is
gathered from every member of the population; for that reason, a census survey is
applicable when the population is relatively small and readily accessible.
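The distinction between a census survey and a sample survey can be sketched in Python; the satisfaction figures below are purely illustrative:

```python
import random

random.seed(7)

# A small, accessible population of 100 members (hypothetical responses)
population = ["satisfied"] * 70 + ["unsatisfied"] * 30

# Census survey: information is gathered from every member of the
# population, so the result is exact.
census_rate = population.count("satisfied") / len(population)

# Sample survey: only a subset is questioned, and the result is an
# estimate that is then generalized to the whole population.
sample = random.sample(population, 20)
sample_rate = sample.count("satisfied") / len(sample)
```

The census rate is exactly 70%, while the sample rate varies from sample to sample, which is why sample surveys demand the careful planning and interpretation described above.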
Example:
Advantages of surveys
3. Can be cost-effective (if you use the Internet, for example).
4. Can take less time for respondents to complete (compared to an
interview or focus group).
Disadvantages of surveys
CASE STUDIES
Examples:
3. Are often interesting to read.
RESEARCH APPROACHES
The above description of the types of research brings to light the fact that there are
two basic approaches to research, viz., quantitative approach and the qualitative
approach.
QUANTITATIVE APPROACH
The quantitative approach involves the generation of data in quantitative form which
can be subjected to rigorous quantitative analysis in a formal and rigid fashion. This
approach can be further sub-classified into inferential, experimental and simulation
approaches to research. The purpose of the inferential approach is to form a database
from which to infer the characteristics or relationships of a population. This usually
means survey research, where a sample of the population is studied (questioned or
observed) to determine its characteristics, and it is then inferred that the population
has the same characteristics.
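A minimal Python sketch of the inferential approach (using an invented population of incomes) shows how a sample statistic is used to infer a population characteristic:

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical population: 10,000 monthly incomes (figures are illustrative)
population = [random.gauss(3000, 500) for _ in range(10_000)]

# Survey research: study a sample, then infer the population's characteristics
sample = random.sample(population, 400)
sample_mean = statistics.mean(sample)
std_error = statistics.stdev(sample) / math.sqrt(len(sample))

# Approximate 95% confidence interval for the (unobserved) population mean
ci_low, ci_high = sample_mean - 1.96 * std_error, sample_mean + 1.96 * std_error
```

The interval quantifies how confidently the sample's characteristics can be attributed to the population, which is the core of the inferential approach.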
Characteristic:
Examples:
QUALITATIVE APPROACH
Characteristic:
Examples:
2. Phenomenology: Moral emotions and phenomenology.
3. Grounded theory: From the colony to the corporation: Studying
knowledge transfer across international boundaries.
MIXED METHOD
Mixed method: Certain research problems call for combining both quantitative and
qualitative methodologies. A researcher might therefore adopt a mixed-methods
approach, in which both quantitative and qualitative data collection techniques and
analytical procedures are used in the same research design (Saunders et al., 2009). A
good researcher chooses quantitative, qualitative, or sometimes a combination of the
two types of research based on the problem statement and the study objectives.
Objective:
Characteristics
Example:
EXPERIMENTAL APPROACH
The experimental approach is regarded as the most valid approach to the solution of
educational problems, both practical and theoretical, and to the advancement of
education as a science (Gay, 1992).
Example:
According to Cooper and Schindler (2003), a descriptive study mostly seeks answers to
who, what, when, where and sometimes how questions, while an explanatory study
seeks answers to how and, most importantly, why questions. Descriptive research is
used "to describe systematically the facts and characteristics of a given population or
area of interest, factually and accurately" (Isaac and Michael, 1977). Contrary to
exploratory research, a descriptive study is more rigid, preplanned and structured, and
is typically based on a large sample (Churchill and Iacobucci, 2004; Hair et al., 2003;
Malhotra, 1999).
For instance, a survey reporting the results of a public service commission rank list
would be considered a descriptive study. A study of this kind involves collecting data
that test the validity of hypotheses regarding the present status of the subjects of
the study. It does not seek to explain relationships or draw implications. Population
census studies, public opinion surveys, fact-finding surveys, observation studies, job
descriptions, surveys of the literature, reports, and test score analyses can be cited as
examples of descriptive research. An explanatory study is also called a causal study
because it seeks to study cause-and-effect relationships between different variables
in the study.
Characteristics
Examples:
3. Sales potential studies for particular geographic region or
population segment.
4. Determining perceptions of company brand or product
characteristics.
Advantages
Disadvantages
VARIABLES
The word 'variable', as defined by Collins Dictionary, refers to something that changes
quite often, usually with no fixed pattern to the changes. A variable is any observation
that can take different values. Examples: race, gender, curriculum used, student
outcomes, instructional strategies, computer labs, reading program, math program,
student attitude, parent satisfaction, readiness for first grade, etc.
According to Zikmund, Babin, Carr and Griffin (2009), there can be a number of
variables in a research study among which a researcher may want to study
relationships, such as:
1. Independent variable(s)
2. Dependent variables(s)
3. Moderating variable(s)
4. Mediating variable(s) and
5. Extraneous variable(s)
DEPENDENT VARIABLE
The result or outcome that is expected to occur from a treatment. The change in the
dependent variable is presumed to be caused by the independent variable. The
dependent variable is observed, measured, and analyzed to detect changes caused by
the independent variable. On a graph, the dependent variable is placed on the vertical
axis.
CONTROL VARIABLE
The characteristic that is controlled by the researcher in order to reduce any impact
this factor might have on the interpretation of the results. Controlled variables should
not be allowed to change. Controlled variables are not shown on a graph. Suppose
that the experiment is about the effect of color on mood. For theoretical reasons,
psychologists may decide to recruit male subjects within a particular age range only.
Consequently, the variables gender and age are represented at the same level in the
test conditions defined by the three colors present in the example. It is in this sense
that experimenters hold constant gender and age. Consequently, gender and age are
the control variables of the experiment.
MODERATOR VARIABLE
A characteristic that influences the impact of the independent variable upon the
dependent variable. In general terms, a moderator is a qualitative (e.g., sex, race,
class) or quantitative (e.g., level of reward) variable that affects the direction and/or
strength of the relation between an independent or predictor variable and a
dependent or criterion variable.
EXTRANEOUS VARIABLE
Any variable other than the independent variable that might influence the dependent
variable; such variables can confound results and need to be controlled for or held
constant in the research design.
INDEPENDENT VARIABLE
The independent variable is the presumed cause that the researcher manipulates or
selects; its value is not determined by what the subject does. In a hypothetical
relationship between advertising expenditures and sales growth, for example,
advertising expenditure is the independent variable, whereas sales growth is the
dependent variable. A moderator variable, or moderator, is one that has a strong
contingent effect on the relationship between the independent and dependent
variables (Sekaran and Bougie, 2010). In statistical terms, the effect of a moderating
variable on the relationship between independent and dependent variables is termed
an interaction (Cohen and Cohen, 2002). In the relationship between advertising
expenditures and sales growth, for example, 'type of media' may be considered a
moderating variable.
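The moderating ("interaction") effect can be made concrete with a small regression sketch in Python using only NumPy. The data are simulated, and the variable names (spend, media) are hypothetical stand-ins for advertising expenditure and type of media:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated data: advertising spend (independent variable) and media type
# (moderator, coded 0 = print, 1 = online; the coding is an assumption here).
spend = rng.uniform(10, 100, n)
media = rng.integers(0, 2, n).astype(float)

# Sales growth (dependent variable): the slope of spend differs by media
# type, which is exactly what the interaction term (spend * media) encodes.
sales = (2.0 + 0.30 * spend + 1.5 * media + 0.25 * spend * media
         + rng.normal(0, 2.0, n))

# Ordinary least squares with an interaction column
X = np.column_stack([np.ones(n), spend, media, spend * media])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
b_intercept, b_spend, b_media, b_interaction = coef
# A clearly non-zero b_interaction indicates moderation: the effect of
# spend on sales depends on the type of media.
```

A significant interaction coefficient is the statistical signature of a moderator: the regression slope of the independent variable changes across levels of the moderating variable.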
MEDIATING VARIABLE:
A mediating variable (also called an intervening variable) is one that surfaces between
the time the independent variable starts operating and the time its impact is felt on
the dependent variable, helping to explain how or why the effect occurs.
It is important for a researcher to understand the nature of the variables in his study
and how to measure them. Measurement of variables may be easy or difficult
depending upon their complexity. A variable may be uni-dimensional (a concept) such
as age or occupation, or multi-dimensional (a construct) such as personality, job stress
or motivation; for the measurement of the latter, the researcher needs to develop
operational definitions (Cooper and Schindler, 2006). Once a researcher develops
hypotheses or lists research questions, he has a clear idea about the related variables
in his research activity. Equally important for the researcher is to know how the
variables will be measured and what sort of data he will collect regarding each of the
concerned variables. The data may be nominal, ordinal, interval or ratio.
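The four data types can be illustrated with a small Python sketch; the survey fields and values are hypothetical:

```python
# Hypothetical survey record illustrating the four measurement levels
response = {
    "gender": "female",         # nominal: categories with no inherent order
    "satisfaction": "agree",    # ordinal: ordered categories, unequal gaps
    "temperature_c": 22.5,      # interval: equal intervals, no true zero
    "monthly_income": 3400.0,   # ratio: true zero, so ratios are meaningful
}

# Ordinal values can be ranked, but the distances between ranks are not
# meaningful quantities.
likert = ["strongly disagree", "disagree", "neutral", "agree",
          "strongly agree"]
satisfaction_rank = likert.index(response["satisfaction"])

# Ratio data supports meaningful arithmetic; nominal data supports only
# counting and classification.
double_income = response["monthly_income"] * 2
```

Recognizing the level of measurement matters because it determines which statistical operations (counting, ranking, differencing, or taking ratios) are legitimate for each variable.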
In any field of research, a researcher has to understand the general tone of research
and the direction of logic which is guided by either of the two approaches – deduction
and induction. A general distinction made between the two logical paths to knowledge
is that induction is the formation of a generalization derived from examination of a set
of particulars, while deduction is the identification of an unknown particular, drawn
from its resemblance to a set of known facts (Rothchild, 2006). Deduction is a process
which goes from the general to the specific, while induction goes from the specific to
the general (Decoo, 1996). In deduction, researchers use a "top down"
approach where conclusions follow logically from the premises and in induction,
researchers use “bottom up” approach where the conclusion is likely based on
premises (Aqil Burney, and Nadeem, 2006). However, deduction contains an element
of induction and the inductive process is likely to contain a smidgen of deduction
(Bryman and Bell, 2003).
DEDUCTIVE RESEARCH
Theory → Hypothesis → Observation → Confirmation
As Gill and Johnson (2002) assert, a deductive research method entails the
development of a conceptual and theoretical structure prior to its testing through
empirical observation. While providing insights on deductive research, Remenyi et al.
(1998) state that, in this approach the researcher may have deduced a new theory by
analyzing and then synthesizing ideas and concepts already present in the literature.
The emphasis on this type of research will be on the deduction of ideas or facts from
the new theory in the hope that it provides a better or a more coherent framework
than the theories that preceded it. However, by taking a slightly different perspective,
Gill and Johnson (2002) argue that, what is important is the logic of deduction and the
operationalization process, and how this involves the consequent testing of the theory
by its confrontation with the empirical world. According to Collis and Hussey (2003),
deduction is the dominant research approach in the natural sciences, where laws
present the basis of explanation, allow the anticipation of phenomena, predict their
occurrence and therefore permit them to be controlled.
Accordingly, Robson (2002) introduces five sequential stages through which deductive
research will be progressed:
• Testing a theory.
• Deductive reasoning works from the more general to the more specific.
• Sometimes this is informally called a "top-down" approach.
• There is a search to explain causal relationships between variables.
• Concepts need to be operationalized in a way that enables facts to be
measured quantitatively.
• The conclusion follows logically from the premises (available facts).
• Generalization. (Weijun, 2008)
Confirmation or Rejection
INDUCTIVE RESEARCH METHOD
Within the inductive approach, theory follows the data rather than vice versa as with
deduction. As Gill and Johnson (2002) describe, learning is done by reflecting upon
particular past experiences and through the formulation of abstract concepts and
theories; hence, induction corresponds to the right-hand side of Kolb's learning cycle.
In sharp contrast to the deductive tradition, theory is the outcome of induction.
Observation → Pattern → Tentative Hypothesis → Theory
Kuhn (1962) implies that deductive researchers are enslaved normal scientists, while
inductive researchers are paradigm-breaking revolutionaries (Glaser and Strauss,
1967). Although the debate between supporters of induction and supporters of
deduction has a long history, as Gill and Johnson (2002) claim, the modern
justification for taking an inductive approach in the social sciences tends to revolve
around two related arguments:
CHARACTERISTIC FEATURES OF INDUCTIVE METHODOLOGY
• Building theory.
• Inductive reasoning works the other way, moving from specific
observations to broader generalizations and theories.
• Informally, this is sometimes called a "bottom up" approach.
• The conclusion is likely, based on the premises.
• It involves a degree of uncertainty. (Tang Weijun, 2008)
DEDUCTION
• Select samples of sufficient size to generalize conclusions.

INDUCTION
• Less concern with the need to generalize.

(Major differences between deductive and inductive approaches to research:
adopted and modified from Saunders et al., 2007)
Research inculcates scientific and inductive thinking and it promotes the development
of logical habits of thinking and organization. Research has its special significance in
solving various operational and planning problems of business and industry.
Operations research and market research, along with motivational research, are
considered crucial and their results assist, in more than one way, in taking business
decisions.
CONCLUSION
This chapter has outlined the basic types of research, including qualitative and
quantitative research. An introduction to the deductive and inductive methods and to
the types of variables used in quantitative research (independent, dependent,
moderating, etc.) is provided for the beginner's understanding. The material in this
part further provides a detailed analysis of the epistemology of research and research
methodology. By gathering such knowledge, much confusion and misunderstanding
will be removed, and the researcher gains a better focus on the topic on which he/she
may have to concentrate.
DISCUSSION QUESTIONS
CHAPTER 3
Learning objectives:
INTRODUCTION
A paradigm delivers a conceptual framework for seeing and making sense of the social
world. According to Burrell and Morgan (1979), "to be located in a particular paradigm
is to view the world in a particular way." Indeed, a paradigm has been termed a
"world view" (Patton, 1990). Nevertheless, it was Kuhn (1970) who popularized the
term, defining paradigms as "universally recognized scientific achievements that for a
time provide model problems and solutions to a community of practitioners", and
suspecting that "something like a paradigm is a prerequisite to perception itself"
(Kuhn, 1970). In the postscript to his second edition, Kuhn (1970) provides a useful
definition: "it stands for the entire constellation of beliefs, values and techniques, and
so on shared by the members of a community." A paradigm thus refers to the progress
of scientific practice based on people's philosophies and assumptions about the world
and the nature of knowledge. Paradigms offer a framework comprising an accepted
set of theories, methods, and ways of defining data.
A paradigm determines the criteria according to which one selects and defines
problems for enquiry. Young scientists tend to be socialized into the precepts of the
prevailing paradigm which to them constitutes ‘normal science.’ In that respect a
paradigm could be regarded as a cultural artifact, reflecting the dominant notions of
behavior in a particular scientific community…
• Beliefs
• Assumptions
• Values
• Aims of social inquiry, self, society, human agency
2. How are these paradigms socially accomplished or constituted?
RESEARCH PHILOSOPHY
A research philosophy is a belief about the way in which data about a phenomenon
should be gathered, analyzed and used. The term epistemology (what is known to be
true) as opposed to doxology (what is believed to be true) encompasses the various
philosophies of research approach. The purpose of science, then, is the process of
transforming things believed into things known: doxa to episteme. Two major research
philosophies have been identified in the Western tradition of science, namely
positivist (sometimes called scientific) and interpretivist (also known as antipositivist)
(Galliers, 1991).
POSITIVISM
Positivists believe that reality is stable and can be observed and described from an
objective viewpoint (Levin, 1988), i.e. without interfering with the phenomena being
studied. They contend that phenomena should be isolated and that observations
should be repeatable. This often involves manipulation of reality with variations in
only a single independent variable so as to identify regularities in, and to form
relationships between, some of the constituent elements of the social world.
Predictions can be made based on the previously observed and explained realities and
their interrelationships. "Positivism has a long and rich historical tradition. It is so
embedded in our society that knowledge claims not grounded in positivist thought are
simply dismissed as ascientific and therefore invalid" (Hirschheim, 1985).
This view is indirectly supported by Alavi and Carlson (1992) who, in a review of 902
IS research articles, found that all the empirical studies were positivist in approach.
Positivism has also had a particularly successful association with the physical and
natural sciences. There has, however, been much debate on the issue of whether or
not this positivist paradigm is entirely suitable for the social sciences (Hirschheim,
1985), many authors calling for a more pluralistic attitude towards IS research
methodologies (see e.g. Kuhn, 1970; Bjørn-Andersen, 1985; Remenyi and Williams,
1996).
While we shall not elaborate on this debate further, it is germane to our study since
Information Systems, dealing as it does with the interaction of people and technology,
is considered to belong to the social sciences rather than the physical sciences
(Hirschheim, 1985). Indeed, some of the difficulties experienced in IS research, such
as the apparent inconsistency of results, may be attributed to the inappropriateness
of the positivist paradigm for the domain. Likewise, some variables or constituent
parts of reality might previously have been thought un-measurable under the
positivist paradigm, and hence went unresearched (Galliers, 1991).
INTERPRETIVISM
Interpretivists, in contrast, hold that the researcher's own assumptions and values
inevitably shape the way of conducting research. At a philosophical level,
organizational theories contrast in five sets of assumptions (Burrell and Morgan,
1979) along a subjectivist/objectivist dimension: ontological, epistemological,
axiological and methodological assumptions, and assumptions about human nature.
These assumptions trickle through to lower levels and influence the research process.
ONTOLOGY
Ontology is concerned with the nature of reality and with what kinds of things exist.
Without an account of the kinds of entities and relations that underlie knowledge,
there cannot be a vocabulary for representing knowledge. Thus, the first step in
devising an effective knowledge representation system, and vocabulary, is to perform
an effective ontological analysis of the field, or domain. Weak analyses lead to
incoherent knowledge bases (Chandrasekaran and Josephson, 1999).
Central questions:
EPISTEMOLOGY
Closely coupled with ontology and its consideration of what constitutes reality,
epistemology considers views about the most appropriate ways of enquiring into the
nature of the world (Easterby-Smith, Thorpe and Jackson, 2008) and ‘what is
knowledge and what are the sources and limits of knowledge’ (Eriksson and
Kovalainen, 2008). Questions of epistemology begin to consider the research method,
and Eriksson and Kovalainen go on to discuss how epistemology defines how
knowledge can be produced and argued for. Blaikie (1993) describes epistemology as
'the theory or science of the method or grounds of knowledge', expanding this into a
set of claims or assumptions about the ways in which it is possible to gain knowledge
of reality, how what exists may be known, what can be known, and what criteria must
be satisfied in order to be described as knowledge. Chia (2002) describes
epistemology as 'how and what it is possible to know' and the need to reflect on the
methods and standards through which reliable and verifiable knowledge is produced.
Hatch and Cunliffe (2006) summarize epistemology as 'knowing how you can know',
expanding this by asking how knowledge is generated, what criteria discriminate good
knowledge from bad knowledge, and how reality should be represented or described.
Epistemology is the branch of philosophy that deals with the nature of knowledge,
that is, with questions of what we know and how we know it. A researcher's
epistemology is a result of his or her ontological position and refers to assumptions
about the best ways of inquiring into the nature of the world and establishing 'truth'.
Epistemology distinguishes true knowledge from false knowledge. All philosophical
disciplines, such as ethics and metaphysics, depend on knowledge; therefore, any
philosophical inquiry has to address epistemological issues as well. Epistemology is
the study of the nature, source, limits, and validity of knowledge, and is interested in
developing criteria for evaluating the claims people make that they "know"
something.
Central questions:
• What is knowledge?
• What is the difference between knowledge and opinion or belief?
• If you know something, does that mean you are certain about it?
• Is knowledge really possible? (Creswell, 2005)
Hussey and Hussey (1997) argue that epistemology is concerned with the study of
knowledge and what we accept as being valid knowledge. This involves the
examination of the relationship between the researcher and that which is being
researched.
HUMAN NATURE
Human nature is the third assumption in the research process according to Burrell and
Morgan (1979). Hussey and Hussey (1997) conceptualized this stage in two parts:
axiological and rhetorical. According to Hussey and Hussey (1997), the axiological
assumption is concerned with values, while the rhetorical assumption is concerned
with the language of research. The human nature debate, according to Burrell and
Morgan (1979), revolves around the question of what model of man is reflected in any
given social scientific theory: man as value-free and unbiased, or man as value-laden
and biased.
AXIOLOGY
Axiology is concerned with the role of values in the research process. Demonstrating
an awareness of your own values can mean that your research is strengthened, in
terms of transparency, the opportunity to minimize bias and in defending your
choices, and the creation of a personal value statement is recommended (Saunders,
Lewis and Thornhill, 2007).
Central Question:
METHODOLOGICAL ASSUMPTION
The methodological assumption is concerned with the process of the research. Hussey
and Hussey (1997) argue that the choice of methodology to adopt is largely
determined by the chosen paradigm. Cooper and Schindler (2002) define methodology
as the overall approach to the research process. Assumptions about human nature are
deterministic or voluntarist: one view sees individuals as products of their
environment; the other believes individuals create their own environment (Putman,
1983). Finally, there are assumptions about the process of research, the methodology.
Nomothetic methodology focuses on an examination of regularities and relationships
to universal laws, while ideographic approaches center on the reasons why individuals
create and interpret their world in a particular way (Putman, 1983). On the
ideographic view, the social world can only be understood by obtaining first-hand
knowledge of the subject under investigation.
Central Question:
RHETORIC
Rhetoric is concerned with the language of research and has received considerable
elaboration in literature (Frye, 1957). Rhetoric denotes a broad category of linguistic
techniques people use when their primary objective is to influence beliefs, attitudes
and behaviors.
Central Question:
Denzin and Lincoln (1994) define qualitative research as multi-method in focus,
involving an interpretive, naturalistic approach to its subject matter. This means that
qualitative researchers study things in their natural settings, attempting to make
sense of or interpret phenomena in terms of the meanings people bring to them.
Qualitative research involves the studied use and collection of a variety of empirical
materials (case study, personal experience, introspection, life story, interview,
observational, historical, interactional and visual texts) that describe routine and
problematic moments and meanings in individuals' lives. Creswell (1994) defines it
thus: qualitative research is an inquiry process of understanding based on distinct
methodological traditions of inquiry that explore a social or human problem. The
researcher builds a complex, holistic picture, analyzes words, reports detailed views
of informants, and conducts the study in a natural setting.
The qualitative research methodology starts from the philosophical assumptions that
researchers bring with them their own world views and beliefs (Creswell, 2007). These
assumptions include ontological beliefs, epistemological beliefs, axiological beliefs,
rhetorical beliefs, and methodological beliefs (Creswell, 2007). Therefore, qualitative
researchers acknowledge that their views invariably influence their research and are
the basis for the research results. As such, qualitative researchers rely on their beliefs
and a variety of understandings in describing, interpreting, and explaining the
phenomena of interest (Creswell, 2007; Maxwell, 1992).
Qualitative researchers use the narrative research design when the study has a
specific contextual focus, such as classrooms and students or stories about
organizations, or when the subject is biographical, a life history, or an oral history of
personal reflections from one or more individuals. Qualitative researchers use
phenomenological research when the study is about the lived experiences of a
concept or phenomenon as experienced by one or more individuals. Researchers use
grounded theory research to generate or discover a theory grounded in the data of
the study. Qualitative researchers use ethnographic research when the subject
involves an entire cultural group. Qualitative researchers use the last design, case
study research, to study one or more cases within a bounded setting or context
(Creswell, 2007).
In qualitative research, just as in quantitative research, the validity of the study is the
most important consideration for the interpretivist in conducting research (Maxwell,
1992). As Maxwell posited, validity is a key issue in the debates over using qualitative
research over quantitative research, where quantitative proponents criticized the lack
of standards for assuring validity, such as the lack of explicit controls for validity
threats, quantitative measurement, and formal testing of hypotheses.
However, proponents of qualitative research argue that they have procedures for
attaining validity, and that they are simply different from the quantitative approaches.
QUANTITATIVE RESEARCH
The positivist researcher approaches or views the world as objective and seeks
measurable relationships among variables to test and verify their study hypotheses.
Their quantitative research process consists of five steps the researcher(s) perform
(Swanson and Holton, 2005).
The researcher may employ additional techniques for gathering the information, such
as surveys for descriptive research. Generalizability is applicable in any research
methodology, but it is a real advantage of quantitative research (Swanson and Holton,
2005).
Variable differentiation
In order to formulate the research questions, the researcher has to identify the
variables that need to be analyzed. The researcher(s) must determine the dependent
and independent variables and the quantity and quality of the data (variable) sources
(Miles and Huberman, 1994; Swanson and Holton, 2005). These types of variable
measures include categorical, continuous, and ordinal. In addition, the researcher(s)
must understand the concepts of validity and reliability in their determination of
measures, as failures to address validity and reliability often undermine or invalidate
research studies (Swanson and Holton, 2005).
In the statistical analyses, the researcher(s) determines, based on the overall research
design, how the variables describe, compare, associate, predict, and contribute to
explain the analysis results and to answer the propositions of the study (Cooper and
Schindler, 2008; Swanson & Holton, 2005). The researchers perform the
interpretation of the results of the analysis based on the statistical significance
determined (Swanson and Holton, 2005). Through this step-by-step process,
quantitative research arrives at the required validity and reliability of the results.
QUALITATIVE
• Contextualization
• Interpretation
• Understanding actors' perspectives
• The researcher interacts with that being researched.
• The researcher is value laden and biased.
• Theory can be causal or non-causal and is often inductive.
• Meaning is captured and discovered once the researcher is involved.
• Concepts are in the form of themes, motifs, generalizations and taxonomies.
• Data are in the form of words from documents, observations, and transcripts.
• There are generally few cases or judgments.
• Research procedures are particular, and replication is rare.
• Analysis proceeds by extracting themes or generalizations from evidence and
organizing data to present a coherent, consistent picture.
• The aim is a complete, detailed description.
• The researcher may only know roughly in advance what he/she is looking for.
• Recommended during earlier phases of research projects.
• The design emerges as the study unfolds.
• Subjective: the individual's interpretation of events is important, e.g., uses
participant observation, in-depth interviews etc.
• Qualitative data is more "rich", time consuming, and less able to be
generalized.
• The researcher tends to become subjectively immersed in the subject matter.

QUANTITATIVE
• Generalizability
• Prediction
• Causal explanations
• The researcher is independent.
• The researcher is value free and unbiased.
• Theory is largely causal and deductive.
• The hypotheses that the researcher begins with are tested.
• Concepts are in the form of distinct variables.
• Data are in the form of numbers from precise measurement.
• There are usually many cases or judgments.
• Procedures are standard, and replication is assumed.
• Analysis proceeds by using statistics, tables, or charts and discussing how
what they show relates to the hypotheses.
• The aim is to classify features, count them, and construct statistical models
in an attempt to explain what is observed.
• The researcher knows clearly in advance what he/she is looking for.
• Recommended during later phases of research projects.
• All aspects of the study are carefully designed before data is collected.
• Objective: seeks precise measurement and analysis of target concepts, e.g.,
uses surveys, questionnaires etc.
• Quantitative data is more efficient and able to test hypotheses, but may miss
contextual detail.
• The researcher tends to remain objectively separated from the subject
matter.
Some of the differences between these two paradigms, briefly explained by Campbell
and Kerlinger, are detailed above.
Over the years there has been a great deal of discussion among researchers about
which methodology is suitable for conducting their research: whether to pursue
qualitative or quantitative inquiry. The choice of methodology rests largely with the
topic and the phenomena the researcher has identified for study. In summary, for
every strength there appears to be a corresponding weakness in both quantitative
and qualitative research. It is this dilemma that has fuelled the debate over which
approach is superior (Duffy, 1986), and which method should therefore be adopted
for organizational research. Choosing just one methodology narrows a researcher's
perspective and deprives him or her of the benefits of building on the strengths
inherent in a variety of research methodologies (Duffy, 1986). The debate could be
seen as advantageous to organizational studies. Researchers are being forced to
consider the controversial issues of both methodologies, and this requires them to
have in-depth knowledge of epistemology and methodology and not to be restricted,
as in the past, to the tradition of the sciences (Duffy, 1985). Preference for a specific
research strategy is not just a technical choice; it is an ethical, moral, ideological and
political activity (Moccia, 1988). All methodologies have their specific strengths and
weaknesses; neither is better than the other, they are just different.
CONCLUSION
This chapter of the book explains the scientific paradigms of research. At one extreme, many researchers follow the positivist way of doing research; at the other, several follow the interpretivist method. Keep in mind, however, that the choice between a quantitative and a qualitative method depends on the topic the researcher takes up, and on whether the phenomena involved are explainable or quantifiable. Blind adoption of quantitative or qualitative research leads to many obstacles, especially during tool development, administration and analysis of data. Accordingly, a researcher should carefully consider his/her research topic before settling on a research method.
DISCUSSION QUESTIONS
MODULE: 4
QUALITATIVE RESEARCH
Learning outcome
INTRODUCTION
Qualitative research methods are gaining recognition outside the traditional academic social sciences, especially in international development research. The term qualitative implies an emphasis on examining processes and meanings that are not measured in terms of quantity, amount, or frequency (Labuschagne, 2003). The great strength of qualitative research is that it attempts to depict the fullness of experience in a meaningful and comprehensive way. Qualitative research, then, is most appropriate for projects where phenomena remain unexplained, where the nature of the research is uncommon or broad, where previous theories do not exist or are incomplete (Patton, 2002), and where the goal is deep narrative understanding or theory development (Hammersley and Atkinson, 1983).
researchers seek instead illumination, understanding, and extrapolation to similar
situations (Hoepfl, 1997).
Typically, qualitative methods produce a lot of detailed data about a small number of cases, and provide depth of detail through direct quotation, precise description of situations and close observation. The main concern of qualitative research is developing explanations of social phenomena, thereby understanding incidents, situations, persons and issues as they are. Qualitative research seeks to understand research issues from the perspectives of the local population by obtaining deep-rooted, culturally specific information about the values, opinions, behaviors, and social contexts of particular populations. Qualitative research is concerned with finding answers to questions which begin with why? how? in what way? Quantitative research, on the other hand, is more concerned with questions about how much? how many? how often? to what extent?
Qualitative Research is concerned with the social aspects of our world and seeks to
answer questions about:
QUALITATIVE RESEARCH
Qualitative research is a term having varying meaning and understanding in social and
managerial research.
natural (rather than experimental) settings, giving due emphasis to the meanings,
experiences, and views of all the participants (Banister et al., 1994)
5. Qualitative research is primarily interpretivist rather than positivist. This approach does not seek to verify a single "truth"; it incorporates manifold truths, realities and meanings to build descriptions.
6. Qualitative designs are "emergent" rather than fixed. Unlike survey and experimental research, the design steps in qualitative research are not linear; the study can unfold in multiple ways, and the design remains open and flexible to change.
7. Qualitative research is more general than specific. Hypothesis development is not a priority in many cases: such studies do not develop hypotheses a priori or test hypotheses or theory, although some studies do develop preliminary conceptual hypotheses and gather evidence to support or refute them.
8. The researcher cannot anticipate all issues or problems a priori and has to deal with them "in the field" (unlike surveys and experiments, where they are dealt with ahead of time or afterwards, statistically).
9. Different sampling techniques are used. In quantitative research,
sampling seeks to demonstrate representativeness of findings through
random selection of subjects.
10. Qualitative research is subjective and uses very different methods of
collecting information, including individual, in-depth interviews and
focus groups.
11. Qualitative sampling techniques are concerned with seeking information
from specific groups and subgroups in the population.
12. The researcher is the main data collection instrument. Researcher’s
beliefs, values, predispositions influence the entire process. So there is a
potential for bias here, and replication of findings is more difficult than
in quantitative research.
13. The intensive and time-consuming nature of data collection necessitates
the use of small samples.
14. Data are used to develop concepts and theories that help us to
understand the social world. This is an inductive approach to the
development of theory.
15. Qualitative data are collected through direct encounters with individuals,
through one-to-one interviews or group interviews or by observation.
Data collection is time consuming.
16. The goal is to produce an understanding or explanation that holds true for all the data, without error. This differs from experiments and surveys, and is inductive rather than deductive.
17. The criteria used to assess reliability and validity differ from those used
in quantitative research.
18. Not a standardized data collection.
19. The nature of this type of research is exploratory and open-ended.
20. The results of qualitative research are unpredictable.
Many data collection procedures are included in the qualitative research process. These can be detailed as follows:
A researcher considers going ahead with a qualitative approach in his/her study for the following reasons:
TYPOLOGIES OF QUALITATIVE RESEARCH
NARRATION
There are many ways to analyze qualitative data. Narrative analysis is one approach
to analyzing and interpreting qualitative data. Narrative is defined by the Concise
Oxford English Dictionary as ‘a spoken or written account of connected events; a story’
(Soanes and Stevenson, 2004). Traditionally, a narrative requires a plot, as well as some coherence. It has some sort of ordered sequence, often in linear form, with a beginning, middle, and end. Narratives also usually have a theme and a main point, or a moral, to the story.
The researcher should evaluate certain aspects before he/she carries forward the narrative research method.
write it up as a case study. Case studies usually use chronology as the main organizing device (Czarniawska, 1998).
• The Labov/Cortazzi model suggests six elements to every narrative: abstract, orientation, complication, evaluation, result, and conclusion.
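As a rough illustration, the six-element model can be treated as a coding template against which a transcript is checked. The story fragments and the helper function below are hypothetical, invented only to show the mechanics, and are not drawn from any cited study.

```python
# A coding template for the Labov/Cortazzi six-element narrative model.
# The interview fragments are invented placeholders.
LABOV_ELEMENTS = [
    "abstract",       # what, in brief, is the story about?
    "orientation",    # who, when, where?
    "complication",   # what happened?
    "evaluation",     # so what? why does it matter to the teller?
    "result",         # what finally happened?
    "conclusion",     # coda returning the listener to the present
]

def code_narrative(fragments):
    """Map transcript fragments to elements and flag any missing ones."""
    missing = [e for e in LABOV_ELEMENTS if e not in fragments]
    return {"coded": fragments, "missing_elements": missing}

interview = {
    "abstract": "She tells how she changed careers.",
    "orientation": "It was 2010; she worked at a bank in Kuala Lumpur.",
    "complication": "The branch closed without warning.",
    "result": "She retrained and opened a small consultancy.",
}
report = code_narrative(interview)
```

A template like this makes gaps visible: here the coder would see that the teller offered no evaluation and no coda, which may itself be analytically interesting.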
Advantages
Disadvantage
One disadvantage is that it can be very time consuming to collect life histories of
people, and even more time consuming to analyze them.
ETHNOGRAPHY
• Ethnographers immerse themselves in the lives of the people they study (Lewis, 1985) and seek to place the phenomena studied in their social and cultural context; and
• In ethnographic research, the context is what defines the situation and
makes it what it is.
Features of ethnography:
(a) People's behavior is studied in everyday contexts, rather than under experimental
conditions created by the researcher.
(b) Data are gathered from a range of sources, but observation and/or relatively
informal conversations are usually the main ones.
(c) The approach to data collection is "unstructured in the sense that it does not involve following through a detailed plan set up at the beginning; nor are the categories used for interpreting what people say and do pre-given or fixed. This does not mean that the research is unsystematic; simply that initially the data are collected in as raw a form, and on as wide a front, as feasible."
(d) The focus is usually a single setting or group, of relatively small scale. In life history
research the focus may even be a single individual.
(e) The analysis of the data involves interpretation of the meanings and functions of
human actions and mainly takes the form of verbal descriptions and explanations,
with quantification and statistical analysis playing a subordinate role at most.
Advantages:
Disadvantages:
• Researcher bias can shape the design of a study;
• Researcher bias can enter into data collection;
• Does not have much breadth;
• Some subjects may be previously influenced and affect the
outcome of the study;
• Study group may not be representative of the larger
population;
• Analysis of observations can be biased; and
• It can be difficult for some to write up the findings in a journal
article.
PHENOMENOLOGY
Definition
Characteristic feature
events, situations, experiences or concepts. We are surrounded by many phenomena,
which we are aware of but not fully understand. Our lack of understanding of these
phenomena may exist because the phenomenon has not been overtly described and
explained or our understanding of the impact it makes may be unclear.
Phenomenological research begins with the acknowledgement that there is a gap in
our understanding and that clarification or illumination will be of benefit.
Phenomenological research will not necessarily provide definitive explanations but it
does raise awareness and increases insight (Hancock, 2002).
Phenomenology aims for truth, logic, and rigorously self-critical thought. All forms of
knowledge including the sciences are regarded as being ultimately grounded on living
experience in relations of orderly, regular structures of consciousness.
Phenomenology starts with what appears: primarily non-verbal awareness, and
studies the overall relations of meaning that appears through sensation to verbalized
thought, which may also include the awareness of others, history, teleology, ethics
and values. In general, it attempts to ground any academic discourse in its definitive
experiences (Husserl, 1970, 1981).
it and the realist postulate that the external world exists independently of the activity
of the mind. The emphasis is upon the ‘intentionality’ of consciousness: i.e., the fact
that consciousness is always consciousness of something, is directed towards its
objects in acts not only of perception and cognition, identification and synthesis, but
also of willing, desiring, imagining, etc. (Spiegelberg, 1982).
Example:
Identified 11 Themes
• Inescapable death;
• Dreaded bodily destruction;
• Devouring life;
• Hoping for the right drug;
• Caring for oneself;
• Just a disease;
• Holding a wild cat;
• Magic of not thinking;
• Accepting AIDS;
• Turning to a higher power;
• Recouping with time.
METHODOLOGY OF PHENOMENOLOGY
1. Bracketing
2. Intuiting
3. Analyzing
4. Describing
Bracketing
Intuition
Intuition occurs when the researcher remains open to the meaning attributed to the
phenomenon by those who have experienced it. This process of intuition results in a
common understanding about the phenomenon that is being studied. Intuiting
requires that the researcher creatively varies the data until such an understanding
emerges. Intuiting requires that the researcher becomes totally immersed in the study
and the phenomenon.
Analyzing
Analysis involves such processes as coding (open, axial, and selective), categorizing
and making sense of the essential meanings of the phenomenon. As the researcher
works/lives with the rich descriptive data, then common themes or essences begin to
emerge. This stage of analysis basically involves total immersion for as long as it is
needed in order to ensure both a pure and a thorough description of the
phenomenon.
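The coding-and-immersion process described above can be caricatured in a few lines of code. The excerpts, open codes and theme labels below are invented placeholders (the theme names echo the AIDS-study themes listed earlier), not real study data; the point is only the mechanics of grouping coded excerpts into emerging themes.

```python
from collections import defaultdict

# Hypothetical open-coded interview excerpts: (excerpt, open code).
coded_excerpts = [
    ("I just try not to think about it", "avoidance"),
    ("Some days I forget I am ill at all", "avoidance"),
    ("I pray every night", "faith"),
    ("My church group keeps me going", "faith"),
    ("I watch my diet and try to rest", "self-care"),
]

# Axial step: group open codes under broader candidate themes.
theme_of_code = {
    "avoidance": "magic of not thinking",
    "faith": "turning to a higher power",
    "self-care": "caring for oneself",
}

# Selective step: collect the excerpts that support each theme.
themes = defaultdict(list)
for excerpt, code in coded_excerpts:
    themes[theme_of_code[code]].append(excerpt)

for theme, quotes in themes.items():
    print(f"{theme}: {len(quotes)} supporting excerpt(s)")
```

In practice the researcher iterates this grouping many times against the raw transcripts; the code only shows how coded fragments accumulate under themes until a pure and thorough description is possible.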
Description
At the descriptive stage, the researcher comes to understand and to define the
phenomenon. The aims of this final step are to communicate and to offer distinct,
critical description in written and verbal form.
Advantages
• Rich data from the experiences of individuals.
Disadvantages
GROUNDED THEORY
Definition
The aim of grounded theory is: 'to generate or discover a theory' (Glaser and Strauss, 1967). Grounded theory may be defined as: 'the discovery of theory from data systematically obtained from social research' (Glaser and Strauss, 1967).
Grounded theory is a research approach or method that calls for a continual interplay
between data collection and analysis to produce a theory during the research process.
A grounded theory is derived inductively through the systematic collection and
analysis of data pertaining to a phenomenon (Strauss and Corbin, 1990). Data
collection, analysis, and theory stand in reciprocal relationship with one another. An
essential feature of grounded theory research is the continuous cycle of collecting and
analyzing data. The researcher starts analyzing data as soon as it is collected and then
moves on to compare the analysis of one set of data with another. As the research
progresses and categories are developed, the researcher uses a form of analysis
known as selective coding. This means that the researcher reviews the collected data
by checking out whether the newly developed categories remain constant when the
data is analyzed specifically for these categories. As the research progresses, the
researcher continues to review the categories as further new data is collected, so as
to ensure that data is not being forced into the categories but rather that the
categories represent the data. This dynamic relationship between data collection and
analysis enables the researcher to check if preliminary findings remain constant when
further data is collected. Taken together, constant comparative analysis and data
collection offer the researcher an opportunity of generating research findings that
represent accurately the phenomena being studied (Elliott and Lazenbatt, 2005).
Assumptions about grounded theory research
Creswell, (1998) suggested that the following assumptions about grounded theory
research are widely shared: The aim of grounded theory research is to generate or
discover a theory;
It is a methodology that has been used to generate theory in areas where there is little
already known (Goulding, 1998). Its usefulness is also recognized where there is an
apparent lack of integrated theory in the literature (Goulding, 2002). Grounded
theory “adapts well to capturing the complexities of the context in which the action
unfolds…" (Locke, 2001: 95) and emphasizes process. In so doing it assists the
researcher in retaining the link between culture, language, social context and
construct (Gales, 2003). Therefore, grounded theory generates theory that is of direct
interest and relevance for practitioners in that it analyses a substantive topic and aims
at discovering a basic social process (BSP) which has the potential to resolve some of
the main concerns of a particular group (Jones, 2002).
Steps in grounded theory
1. Develop categories
Use data available to develop labeled categories that fit data closely.
2. Saturate categories
Accumulate examples of a given category until it is clear what future
instances would be located in this category.
3. Abstract definitions
Abstract a definition of the category by stating in a general form the criteria
for putting further instances into this category.
4. Use the definitions
Use definitions as a guide to emerging features of importance in further
field work and as a stimulus to theoretical reflection.
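The four steps above amount to a constant-comparison loop, which can be sketched very loosely in code. The incidents, the keyword-based criteria and the saturation threshold below are invented for illustration; real grounded theory coding is interpretive judgment, not keyword matching.

```python
# A toy constant-comparison loop for the four steps above.
SATURATION = 2  # stop accumulating examples for a category after this many

# Step 3: abstracted definitions - criteria for admitting new instances.
category_criteria = {
    "role_conflict": lambda text: "manager" in text and "conflict" in text,
    "peer_support": lambda text: "colleague" in text or "peer" in text,
}
categories = {name: [] for name in category_criteria}

def assign(incident):
    """Steps 1-2: fit an incident to a labeled category until saturated."""
    for name, fits in category_criteria.items():
        if fits(incident) and len(categories[name]) < SATURATION:
            categories[name].append(incident)
            return name
    # Step 4: unmatched incidents prompt theoretical reflection,
    # new fieldwork, or a new category.
    return None

fieldnotes = [
    "manager reported conflict over shift allocation",
    "a colleague stepped in to cover the workload",
    "peer mentoring helped the new hire settle in",
    "manager described conflict with head office",
]
for note in fieldnotes:
    assign(note)
```

The loop mirrors the reciprocal relationship described earlier: each new incident is compared against the existing categories, and what fails to fit feeds back into category development rather than being forced.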
Strauss and Corbin state that there are four primary requirements for judging a good
grounded theory: 1) It should fit the phenomenon, provided it has been carefully
derived from diverse data and is adherent to the common reality of the area; 2) It
should provide understanding, and be understandable; 3) Because the data is
comprehensive, it should provide generality, in that the theory includes extensive
variation and is abstract enough to be applicable to a wide variety of contexts; and 4)
It should provide control, in the sense of stating the conditions under which the theory
applies and describing a reasonable basis for action.
In clarifying their approach to grounded theory, Strauss and Corbin (1998) suggested
the following seven criteria be used for evaluating the research process:
Strauss and Corbin (1998) also suggested that the “empirical grounding of a study” be
evaluated to assess the development of relevant categories and concepts that are the
building blocks of the theory. The consideration of seven criteria for the assessment
of the grounding of a study includes an examination of the following:
Although few grounded theory research studies have been published in journals affiliated with HRD, several studies have focused on research questions relevant to
the field. Pertinent grounded theory research has included an examination of
individual responses to organizational change (Johansen, 1991, 2001), the exploration
of leadership values for quality in a manufacturing context (Franche’re, 1995), intra-
firm conflicts in the formation of business strategies in corporate settings (Shaffer and
Hillman, 2000), patient perceptions of the quality of care (Radwin, 2000), and the
impact of conflict and cohesion on organizational learning and performance (Cairns,
Burt, and Beech, 2001).
Advantages
Disadvantages
• The presentation of results - the highly qualitative nature of the results
can make them difficult to present in a manner that is usable by
practitioners.
CASE STUDIES
A case study is a research method which allows for an in-depth examination of events, phenomena, or other observations within a real-life context, whether for testing or simply as a tool for learning. Case studies often employ documents, artifacts, interviews and observations during the course of research. Case studies can be qualitative, quantitative or combined.
Case study research, through reports of past studies, allows the exploration and
understanding of complex issues. It can be considered a robust research method
particularly when a holistic, in-depth investigation is required. Recognized as a tool in
many social science studies, the role of case study method in research becomes more
prominent when issues about education (Gulsecen and Kubat, 2006), sociology
(Grassel and Schirmer, 2006) and community-based problems (Johnson, 2006), such
as poverty, unemployment, drug addiction, illiteracy, etc. were raised. One of the
reasons for the recognition of case study as a research method is that researchers
were becoming more concerned about the limitations of quantitative methods in
providing holistic and in-depth explanations of the social and behavioral problems in
question. Through case study methods, a researcher can go beyond the quantitative
statistical results and understand the behavioral conditions through the actor’s
perspective. By including both quantitative and qualitative data, case study helps
explain both the process and outcome of a phenomenon through complete
observation, reconstruction and analysis of the cases under investigation (Tellis,
1997).
Definition
Yin, (1984) defines the case study research method as an empirical inquiry that
investigates a contemporary phenomenon within its real-life context; when the
boundaries between phenomenon and context are not clearly evident; and in which
multiple sources of evidence are used.
1. A descriptive study
b. The main emphasis is always on the construction of verbal descriptions of
behavior or experience but quantitative data may be collected.
2. Narrowly focused.
b. Often the case study focuses on a limited aspect of a person, such as their
psychopathological symptoms.
3. The researcher may combine objective and subjective data: all are regarded as valid data for analysis, and as a basis for inferences within the case study.
4. Process-oriented.
a. The case study method enables the researcher to explore and describe the
nature of processes, which occur over time.
Yin, (1984) notes three categories, namely exploratory, descriptive and explanatory
case studies.
Exploratory
Exploratory case studies set to explore any phenomenon in the data which serves as
a point of interest to the researcher. For instance, a researcher conducting an
exploratory case study on an individual’s reading process may ask general questions,
such as, “Does a student use any strategies when he reads a text?” And “if so, how
often?”. These general questions are meant to open up the door for further
examination of the phenomenon observed. In exploratory case studies, fieldwork, and
data collection may be undertaken prior to the definition of the research questions
and hypotheses. This type of study has been considered as a prelude to some social
research. A good instrumental case does not have to defend its typicality.
Explanatory
Explanatory cases are appropriate for undertaking causal studies, particularly in very complex, multifaceted and multivariate cases. For instance, a researcher may ask why a student uses a particular strategy in reading (Zaidah, 2003). On the basis of the data, the researcher may then form a theory and set out to test it (McDonough and McDonough, 1997). Explanatory cases are also deployed for causal studies where pattern-matching can be used to investigate the phenomena. Yin and Moore (1988), for example, conducted a study to examine why some research findings get into practical use. The utilization outcomes were explained by three rival theories: a knowledge-driven theory, a problem-solving theory, and a social-interaction theory. Knowledge-driven theory means that ideas and discoveries from basic research eventually become commercial products; a similar notion underlies the problem-solving theory. The social-interaction theory, on the other hand, suggests that overlapping professional networks cause researchers and users to communicate frequently with each other.
Descriptive
Descriptive case studies set out to describe the natural phenomena which occur within
the data in question, for instance, what different strategies are used by a reader and
how the reader uses them. The goal set by the researcher is to describe the data as
they occur. In a descriptive case study, the investigator must begin with
a descriptive theory to support the description of the phenomenon or story. If this
fails there is the possibility that the description lacks rigor and that problems may
occur during the project. An example of a descriptive case study using pattern-
matching procedure is the one conducted by Pyecha, (1988) on special education
children. Through replication, data elicited from several states in the United States of
America were compared and hypotheses were formulated. In this case, descriptive
theory was used to examine the depth and scope of the case under study.
Designing Case Studies
Yin, (1994) identified five components of research design that are important for case
studies:
• The study questions;
• Its propositions, if any;
• Its unit(s) of analysis;
• The logic linking the data to the propositions; and
• The criteria for interpreting the findings (Yin, 1994).
There are certain protocols to be followed by the case study investigator. A case study
protocol contains procedures and general rules that should be followed in using the
instrument. The protocol is to be created prior to the data collection phase. Yin, (1994)
presented the protocol as a major component in asserting the reliability of the case
study research. A typical protocol should have the following sections:
Case study Evidence
Stake, (1995) and Yin, (1994) identified at least six sources of evidence in case studies.
The following is not an ordered list, but reflects the research of both Yin, (1994) and
Stake, (1995):
Disadvantages of the case study method
1. Replication not possible. The uniqueness of the data means that it is valid for only one person. While this is a strength in some forms of research, it is a weakness in others, because the findings cannot be replicated and so some types of reliability measures are very low.
2. The researcher's own subjective feelings may influence the case study (researcher bias), both in the collection of data and in its interpretation. This is particularly true of many of the famous case studies in psychology's history, especially the case histories reported by Freud. In unstructured or clinical case studies the researcher's own interpretations can influence the way the data are collected, i.e. there is a potential for researcher bias.
3. Memory distortions. The heavy reliance on memory when reconstructing the case history means that information about past experiences and events may be notoriously subject to distortion. Very few people have full documentation of all the various aspects of their lives, and there is always a tendency for people to focus on factors they find important themselves while remaining unaware of other possible influences.
4. Not possible to generalize the findings. There are serious problems in generalizing the results from a unique individual to other people, because the findings may not be representative of any particular population.
DELPHI METHOD
The Delphi concept may be viewed as one of the spin-offs of defense research. "Project Delphi" was the name given to an Air Force-sponsored Rand Corporation study, starting in the early 1950's, concerning the use of expert opinion (Linstone and Turoff, 1975). The objective of the original study was to "obtain the most reliable consensus of opinion of a group of experts ... by a series of intensive questionnaires interspersed with controlled opinion feedback."
Why Delphi?
Although many people label Delphi a forecasting procedure because of its significant use in that area, there is a surprising variety of other application areas. Among those already developed we find:
The panel of experts was composed of professionals with high knowledge and expertise in the field of internationalization who are closely associated with small and medium scale industries as consultants, business associates, partners, collaborators, owners of SMEs, researchers, academicians and senior-level managers. Members from industrial bodies such as chambers of commerce, professors from universities, senior researchers from research institutes, and members of governmental and financial institutions were identified through an exploratory method.
Data regarding the professors were collected from the academic departments of universities, and the professionals were reached through emails and direct interaction with their institutions. The majority of members in the expert group belong to the age group of 38 to 55 years. The specialized areas of these expert members include industries development, technology development, entrepreneurship, planning boards, SME development, supply chain management, production and operations management, information technology, international marketing, institutional finance, human resources, and research and industrial sociology. The participants comprised 30 male members (86%) and 5 female members (14%). This dynamic panel of experts is experienced and conversant enough to give pertinent opinions and a justifiable understanding of the internationalization process of Indian SMEs.
Rounds
Round 1: In the first round, the Delphi process traditionally begins with an open-ended
questionnaire. The open-ended questionnaire serves as the cornerstone of soliciting
specific information about a content area from the Delphi subjects (Custer, Scarcella,
and Stewart, 1999).
The questions:
Round 2:
In the second round, each Delphi participant receives a second questionnaire and is
asked to review the items summarized by the investigators based on the information
provided in the first round. Accordingly, Delphi panelists may be required to rate or
rank-order items to establish preliminary priorities among items. As a result of round
two, areas of disagreement and agreement are identified (Ludwig, 1994). In this
round, consensus begins forming and the actual outcomes can be presented among
the participants’ responses (Jacobs, 1996).
Round 3:
In the third round, each Delphi panelist receives a questionnaire that includes the
items and ratings summarized by the investigators in the previous round and are
asked to revise his/her judgments or “to specify the reasons for remaining outside the
consensus” (Pfeiffer, 1968). This round gives Delphi panelists an opportunity to make
further clarifications of both the information and their judgments about the relative
importance of the items.
1. A second-level screening identified, from the 224 items, those having a high or low influence on the internationalization of business.
2. The process further identified 199 items having a high or low influence on the internationalization of business.
3. The items were classified into 55 categories.
4. Thematic presentation and categorization of the items were completed.
Round 4:
In the fourth and often final round, the list of remaining items, their ratings, minority
opinions, and items achieving consensus are distributed to the panelists. This round
provides a final opportunity for participants to revise their judgments. It should be
remembered that the number of Delphi iterations depends largely on the degree of
consensus sought by the investigators and can vary from three to five (Delbecq, Van
de Ven, Gustafson, 1975; Ludwig, 1994).
1. In the third-level screening, the 182 items having a high or moderately high influence on internationalization were subjected to repeated discussion.
2. The core factors which influence the internationalization of business of SMEs were identified.
3. Expert opinion was sought on the appropriateness of the core factors selected for the study.
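One common way to operationalize "consensus" across Delphi rounds is to compute each item's median rating and interquartile range (IQR) and stop iterating when spread is small. The sketch below assumes five panelists, invented item names and ratings, and an IQR <= 1 stopping rule; all of these are illustrative choices, not requirements of the method.

```python
from statistics import median, quantiles

# Hypothetical round-two ratings (1 = no influence .. 5 = high influence)
# from five panelists for three candidate items; names are illustrative.
ratings = {
    "access to export finance": [4, 5, 4, 4, 5],
    "language barriers":        [2, 5, 1, 4, 3],
    "government incentives":    [4, 4, 3, 4, 4],
}

def iqr(scores):
    """Interquartile range of the panel's ratings for one item."""
    q1, _, q3 = quantiles(scores, n=4)
    return q3 - q1

# One common (not universal) stopping rule: consensus when IQR <= 1.
consensus = {item: iqr(r) <= 1 for item, r in ratings.items()}

# Medians are fed back to panelists in the next round.
feedback = {item: median(r) for item, r in ratings.items()}
```

Items that fail the rule (here the widely dispersed "language barriers") would be returned to the panel with the median as controlled feedback, inviting panelists to revise their ratings or justify staying outside the consensus.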
The researchers have pointed out the many advantages of the Delphi Technique. The Delphi technique:
1. Is relatively free of social pressure, personality influence, and individual dominance and is, therefore, conducive to independent thinking and the gradual formulation of reliable judgments or forecasts, and helps generate consensus or identify divergence of opinion among groups hostile to each other;
2. Helps keep attention directed on the issue;
3. Allows a number of experts to be called upon to provide a broad range of views on which to base analysis ("two heads are better than one"):
• Allows sharing of information and reasoning among participants;
• Iteration enables participants to review, re-evaluate and revise all their previous statements in light of comments made by their peers;
4. Is inexpensive.
FOCUS GROUP
A focus group, or focus group interview, is a qualitative research tool often used in
social research, business and marketing. Focus groups are "small group discussions,
addressing a specific topic, which usually involve 6-12 participants, either matched or
varied on specific characteristics of interest to the researcher". (Fern, 1982; Morgan
and Spanish, 1984). Methodologically, focus group interviews involve a group of 6–8
people who come from similar social and cultural backgrounds or who have similar
experiences or concerns. They gather together to discuss a specific issue with the help
of a moderator in a particular setting where participants feel comfortable enough to
engage in a dynamic discussion for one or two hours. Focus groups do not aim to reach
consensus on the discussed issues. Rather, focus groups ‘encourage a range of
responses which provide a greater understanding of the attitudes, behavior, opinions
or perceptions of participants on the research issues’ (Hennink, 2007). The focus
group method is different from group interviews since group interactions are treated
explicitly as ‘research data’ (Ivanoff and Hultberg, 2006). The participants are chosen
because they are able to provide valuable contributions to the research questions.
The discussion between participants provides the researchers with an opportunity to
hear issues which may not emerge from their interaction with the researchers alone.
The interaction among the participants themselves leads to more emphasis on the
points of view of the participants than those of the researchers (Gaiser, 2008).
Focus groups require skilled facilitators or moderators to guide the discussion and
maintain the focus. They are found to be most effective for learning about opinions
and attitudes, pilot testing materials for assessments and generating
recommendations.
Advantages:
• Focus groups offer possibilities for researchers to explore ‘the gap
between what people say and what they do’ (Conradson, 2005).
• The focus group setting also provides the researcher with opportunities
to follow up on comments and to crosscheck with the participants in a
more interactive manner than a questionnaire or individual interview can
offer.
• Focus group methodology is adopted widely in the field of development
in a cross-cultural context, especially in eliciting community viewpoints
and understanding community dynamics (Lloyd-Evans, 2006).
• One of the great advantages of the focus group method is its ability to
cultivate people’s responses to events as they evolve (Barbour, 2007).
Limitations:
• The findings may not represent the views of larger segments of the
population;
• Requires good facilitation skills, including the ability to handle the various
roles people may play ("expert", "quiet", "outsider", "friend", "hostile", etc.);
• Though rich, the data may be difficult to analyze because it is unstructured;
• Possible conformance, censoring, conflict avoidance, or other unintended
outcomes of the group process need to be addressed as part of the data
analysis (Carey, 1995).
TRIANGULATION
Why Triangulation?
The most fertile search for validity comes from a combined series of different
measures, each with its own idiosyncratic weaknesses, each pointed to a single
hypothesis. When a hypothesis can survive the confrontation of a series of
complementary methods of testing, it contains a degree of validity unattainable by
one tested within the more constricted framework of a single method (Webb et al.
1966).
Techniques for ensuring the quality of a study
In the tradition of Lincoln and Guba, Erlandson et al. (1993) describe the following
techniques for ensuring the quality of a study.
• Prolonged engagement;
• Persistent observation;
• Triangulation;
• Referential adequacy;
• Peer debriefing;
• Member checking;
• Reflexive journal;
• Thick description;
• Purposive sampling; and
• Audit trail.
Types of Triangulation:
Various researchers report that there are five basic types of triangulation employed
by qualitative researchers: data triangulation, investigator triangulation, theory
triangulation, methodological triangulation, and environmental triangulation (Guion,
Diehl, and McDonald, 2002).
For example, suppose a researcher is interviewing participants from a
nutrition program to learn what healthy lifestyle practice changes they
attribute to participating in a program. To triangulate the information, a
researcher could then share the transcripts with colleagues in different
disciplines (e.g., nutrition, nursing, pharmacy, public health education)
to see what their interpretations are (Guion, Diehl, and McDonald,
2002).
• For in-depth and longitudinal explorations of leadership phenomena;
and for more relevance and interest for practitioners.
Reliability is the extent to which results are consistent over time and accurately
represent the total population under study; if the results of a study can be
reproduced under a similar methodology, the research instrument is considered to
be reliable (Joppe, 2000). Wainer and Braun (1998) describe validity in quantitative
research as "construct validity". The construct is the initial concept, notion,
question or hypothesis that determines which data are to be gathered and how they
are to be gathered.
Although the term ‘Reliability’ is a concept used for testing or evaluating quantitative
research, the idea is most often used in all kinds of research. If we see the idea of
testing as a way of information elicitation then the most important test of any
qualitative study is its quality. A good qualitative study can help us “understand a
situation that would otherwise be enigmatic or confusing” (Eisner, 1991). The validity
and reliability of qualitative research depend on the researcher’s skill, sensitivity
and training in the field. There are additional specific methods a researcher can
perform to ensure data validity and reliability. Triangulation compares the results
from two or more different methods (i.e., data from interviews and observation), or
two or more data sources (interviews with members of different groups) to check for
consistency in answers and attitudes. Instead of using triangulation as a stringent
test of validity, it might be a more appropriate method for ensuring comprehensive
data collection – getting all sides of “the story,” for example, or understanding all
the shades of meaning in the answer to a question (Mays and Pope, 2000).
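Comparing two data sources for consistency, as described above, can be sketched in code. In this hedged example (the themes are hypothetical), coded themes from interviews are compared with themes from observation to see which findings converge and which appear in only one source:

```python
# Hypothetical coded themes from two data sources
interview_themes = {"time pressure", "lack of training", "peer support", "unclear policies"}
observation_themes = {"time pressure", "peer support", "informal workarounds"}

convergent = interview_themes & observation_themes       # corroborated by both methods
interview_only = interview_themes - observation_themes   # reported but not observed
observation_only = observation_themes - interview_themes # observed but not reported

print("Convergent findings:", sorted(convergent))
print("Interview only:", sorted(interview_only))
print("Observation only:", sorted(observation_only))
```

Consistent with Mays and Pope (2000), the non-overlapping themes are not "failures" of validity; they flag the sides of the story that each method alone would have missed.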
Ramos (1989) described three types of problems that may affect qualitative studies:
the researcher/participant relationship, the researcher’s subjective interpretations of
data, and the design itself.
and miss what I see as the greater value in triangulating. More accurately,
there are three outcomes that might result from a triangulation strategy.
The first is that which is commonly assumed to be the goal of triangulation
and that is convergence. The notion of convergence needs little explanation:
data from different sources, methods, investigators, and so on will provide
evidence that will result in a single proposition about some social
phenomenon.
• A second and probably more frequently occurring outcome from a
triangulation strategy is inconsistency among the data. When multiple
sources, methods, and so on are employed we frequently are faced with a
range of perspectives or data that do not confirm a single proposition about
a social phenomenon. Rather, the evidence presents alternative
propositions containing inconsistencies and ambiguities. With this outcome
it is not clear which claim or proposition about the social phenomenon is
valid.
• A third outcome is contradiction. It is possible for data not only to be
inconsistent but to be actually contradictory. When we have employed
several methods we are sometimes left with a data bank that results in
opposing views of the social phenomenon being studied. If one were to
accept the assumption that triangulation should result in a single claim
because bias is naturally cancelled out, outcomes of the second and
third type would not be useful in the research process (Mathison, 2008).
Ann DeVaney (2000) provided the following criteria, developed by numerous AECT
members, that are used to evaluate the quality of papers submitted for this award:
• Do the data collected adequately address the problem; do they make
explicit the researcher’s role and perspective; do the data collection
techniques have a “good fit” with the method and theory? (If
applicable.)
• Are the data aggregates and analysis clearly reported; do they make
explicit the interpretive and reasoning processes of the researcher?
(If applicable.)
• Does the discussion provide meaningful and warranted interpretations
and conclusions?
QUALIFICATIONS OF INVESTIGATORS
The investigators of qualitative research need to have the following abilities and skills.
The researchers;
CONCLUSION
DISCUSSION QUESTIONS
TEMPLATE
Introduction: Present the problem and introduce the topic to the
readers explicitly. Introduce the issue related to the sector, company,
or a theme, and how it affects the field, business, community, country,
sector or individual. Start with a general view and end with a specific
point justifying the topic selected for the study. All content should be
source-based; the sources should be listed in the references.

Review of literature (4 pages): Once you have your topic, and definitely
a problem you will work on, start gathering information. Nowadays there
are a lot of different sources that you are free to use for writing a case
study. Use books, newspapers, magazines, the Internet and any other
sources you are able to find. Identify the theory that will be used. All
content should be source-based; the sources should be listed in the
references.

Methodology (1 page): What methodologies have been used to arrive at
the case? Examples: interview, focus group discussion, field notes,
content analysis, narration, survey.

Formulate the problem (1 page): Based on the introduction and review of
literature, the case writer has to identify and formulate the problem in
this section. The rationale behind the identification of issues, and the
selection of an issue as a case study, should be clearly stated here so
that it gives utmost clarity.

Findings (1-2 pages): Identify the problems found in the case. Each
analysis of a problem should be supported by the facts given in the case
together with the relevant theory and course concepts. Here, it is
important to search for the underlying problems; for example,
cross-cultural conflict may be only a symptom of the underlying problem
of inadequate policies and practices within the company. This section is
often divided into sub-sections, one for each problem.

Discussion (4 pages):
• Summarize the major problem(s).
• Identify alternative solutions to the major problem(s) (there is
likely to be more than one solution per problem).
• Briefly outline each alternative solution and then evaluate it in
terms of its advantages and disadvantages.
• There is no need to refer to theory or coursework here.

References (length depends on your review of literature): Make sure all
references are cited correctly (follow APA style).
MODULE 5
RESEARCH PROCESS
Learning objectives:
INTRODUCTION
Research is an academic activity and, as such, the term should be used in a technical
sense. According to Clifford Woody, research comprises defining and redefining
problems; formulating hypotheses or suggested solutions; collecting, organizing and
evaluating data; making deductions and reaching conclusions; and, at last, carefully
testing the conclusions to determine whether they fit the formulated hypotheses.
RESEARCH PROCESS
Typically a research process consists of the following steps (Cooper and Schindler,
2003):
Although the research process is outlined above as a sequence, in practice it is
quite muddled, and a researcher may be required to work on more than one aspect
simultaneously and to revisit some of the steps (Saunders et al., 2009).
Therefore, given the uniqueness of every research activity, factors such as the
time and money required for the research, and the problems faced by the
researcher, vary among different research projects.
The research process by Graziano and Raulin (2009) also focuses on behavioral
science, but it is less strict than the process by Bordens and Abbott (2007).
Graziano and Raulin (2009) also define science as a process of enquiry.
Science acquires its knowledge through observation (empiricism), but also through
reasoning (rationalism) (Graziano and Raulin, 2009). The process begins with the
generation of an initial idea. Personal experience or existing research can serve as an
inspiration for a new research process. To explore the idea with the help of scientific
research, it has to be clearly defined. In the next step, therefore, the problem to be
addressed is described in the form of a research question. The research procedure
that should lead to the solution of the research question is defined in the procedure-
design phase. The resulting research design determines the study participants and
conditions as well as the data-collection and data analysis methods. After the
observation has been carried out, the data are analyzed and interpreted. The final
communication of the results to the scientific community can trigger a new research
process or stimulate the activity of other researchers. (Graziano and Raulin, 2009).
Björk (2007) describes a very complex model of a research process. Although Björk
does not clearly define his understanding of science, the terms he uses place the
model in behavioral science. The focus of the model is scientific communication. The
model therefore distinguishes activities that serve to acquire existing knowledge from
activities that generate new knowledge. The inputs of the process are "scientific
problems" and "existing knowledge". By studying the existing research knowledge,
the researchers devise a conceptual framework and hypotheses for further research.
Then, data from existing repositories are collected and analyzed. The researchers then
do experiments and make observations with selected scientific methods. The data
collected as well as the new empirical data are analyzed in order to draw conclusions
and create new scientific knowledge (Björk, 2007). The research process is further
embedded in a broader process called "Do research, communicate and apply the
results", consisting of the stages Fund R&D, Perform the research, Communicate the
results and Apply the knowledge (Björk, 2007).
Finally, Blumberg et al. (2008) describe a business research process. Their process
begins with the development and the exact definition of the research question. The
research question of the business research process has to be connected to an existing
management problem. Preceding the research design, researchers might have to
provide a written research proposal. The research proposal describes the exploration
of the management research question. The proposal can be used to obtain funding
for the research project. The next phase, the research design, describes the activities
leading to the fulfillment of the research objectives. Blumberg et al. point out the
benefits of using different methods to prevent bias. The research design begins with
the definition of an overall design strategy. Based on this, the relevant population
and sampling methods are determined.
RESEARCH PROCESS IN FLOW CHART
The flow chart outlines the steps of the research process, including:
(4) preparing the research design; and
(11) preparation of the report or presentation of the results, i.e., the formal
write-up of conclusions reached.
Role of research questions
At the very outset the researcher must single out the problem he wants to study.
Initially the problem may be stated in a broad, general way, and then any
ambiguities relating to it should be resolved. Then, the feasibility of a particular
solution has to be considered. The formulation of a general topic into a specific
research problem thus constitutes the first step in a scientific enquiry.
Essentially two steps are involved in formulating the research problem:
understanding the problem thoroughly, and rephrasing it in meaningful terms from
an analytical point of view. The researcher must at the same time examine all
available literature to become acquainted with the selected problem.
Once the problem is formulated, a brief summary of it should be written down. The
researcher should then undertake an extensive literature survey connected with the
problem. For this purpose, abstracting and indexing journals and published or
unpublished bibliographies are the first place to go to. Academic journals, conference
proceedings, government reports, books etc., must be tapped depending on the
nature of the problem. In this process, it should be remembered that one source will
lead to another. The earlier studies, if any, which are similar to the study in hand,
should be carefully studied. A good library will be a great help to the researcher at
this stage.
3. Development of working hypotheses: After the extensive literature survey,
researcher should state in clear terms the working hypothesis or hypotheses. The
working hypothesis is a tentative assumption made in order to draw out and test its
logical or empirical consequences. In most types of research, the development of
working hypothesis plays an important role. The hypothesis should be very specific
and limited to the piece of research in hand because it has to be tested. The role of
the hypothesis is to guide the researcher by delimiting the area of research and to
keep him on the right track. It sharpens his thinking and focuses attention on the
more important facets of the problem. It also indicates the type of data required and
the type of methods of data analysis to be used.
5. Determining sample design: All the items under consideration in any field of
inquiry constitute a ‘universe’ or ‘population’. A complete enumeration of all the
items in the ‘population’ is known as a census inquiry. Census inquiry is not possible
in practice under many circumstances. For instance, blood testing is done only on a
sample basis. Hence, quite often we select only a few items from the universe for our
study purposes. The items so selected constitute what is technically called a sample.
The researcher must decide the way of selecting a sample or what is popularly known
as the sample design. In other words, a sample design is a definite plan determined
before any data are actually collected for obtaining a sample from a given population.
The sample design to be used must be decided by the researcher taking into
consideration the nature of the inquiry and other related factors.
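The distinction between a census and a sample can be sketched concretely. The following is an illustrative example of one common sample design, simple random sampling, using an invented population of firms (the population size and sample size are assumptions for the sketch, not recommendations):

```python
import random

# Hypothetical universe: 500 numbered firms
population = [f"firm_{i}" for i in range(1, 501)]

random.seed(42)  # fixed seed so the selection is reproducible
sample = random.sample(population, k=50)  # simple random sample of 50 items

print(len(sample))       # 50 items drawn from the universe
print(len(set(sample)))  # no duplicates: sampling is without replacement
```

A census would instead enumerate all 500 firms; the sample design fixes, before any data are collected, both the selection mechanism (here, equal-probability random draws) and the sample size.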
6. Collecting the data: In dealing with any real life problem, it is often found that
data at hand are inadequate, and hence, it becomes necessary to collect data that
are appropriate. There are several ways of collecting the appropriate data which
differ considerably in the context of money costs, time and other resources at the
disposal of the researcher. Primary data can be collected either through experiment
or through survey. If the researcher conducts an experiment, he observes some
quantitative measurements, or the data, with the help of which he examines the
truth contained in his hypothesis. But in the case of a survey, data can be collected
by any one or more of the following ways: (i) By observation, (ii) Through personal
interview: The investigator follows a rigid procedure and seeks answers, (iii) Through
telephone interviews (iv) By mailing of questionnaires or (v) Through schedules. The
researcher should select one of these methods of collecting the data taking into
consideration the nature of investigation, objective and scope of the inquiry, financial
resources, available time and the desired degree of accuracy. Though he should pay
attention to all these factors, much depends upon the ability and experience of
the researcher.
8. Analysis of data: After the data have been collected, the researcher turns to
the task of analyzing them. The analysis of data requires a number of closely related
operations such as establishment of categories, the application of these categories
to raw data through coding, tabulation and then drawing statistical inferences. The
unwieldy data should necessarily be condensed into a few manageable groups and
tables for further analysis.
Coding operation is usually done at this stage through which the categories of data
are transformed into symbols that may be tabulated and counted.
Editing is the procedure that improves the quality of the data for coding. With coding
the stage is ready for tabulation.
The tabulation is a part of the technical procedure wherein the classified data are put
in the form of tables. The mechanical devices can be made use of at this juncture. A
great deal of data, especially in large inquiries, is tabulated by computers. Computers
not only save time but also make it possible to study a large number of variables
affecting a problem simultaneously. Analysis work after tabulation is generally based
on the computation of various percentages, coefficients, etc., by applying various
well defined statistical formulae. In the process of analysis, relationships or
differences supporting or conflicting with original or new hypotheses should be
subjected to tests of significance to determine with what validity data can be said to
indicate any conclusion(s). In brief, the researcher can analyze the collected data with
the help of various statistical measures.
11. Preparation of the report or the thesis: Finally, the researcher has to prepare
the report of what has been done by him. Writing of reports must be done with great
care keeping in view the following:
The layout of the report should be as follows: (i) the preliminary pages; (ii) the main
text, and (iii) the end matter. In its preliminary pages the report should carry the title
and date, followed by acknowledgements and the foreword. Then there should be a table
of contents followed by a list of tables and list of graphs and charts, if any, given in the
report.
The main text of the report should have the following parts:
III. Main report: The main body of the report should be presented in logical
sequence and broken-down into readily identifiable sections.
IV. Conclusion: Towards the end of the main text, the researcher should again
put down the results of his research clearly and precisely. In fact, it is the final
summing up. At the end of the report, appendices should be enlisted in
respect of all technical data.
V. Bibliography: This consists of all the references, i.e., the list of books,
journals, reports, etc., consulted, and should be given at the end. An index
should also be provided, especially in a published research report.
The report should be written in a concise and objective style in simple language,
avoiding vague expressions such as ‘it seems’ and ‘there may be’. Charts
and illustrations in the main report should be used only if they present the
information more clearly and forcibly. Calculated ‘confidence limits’ must be
mentioned, and the various constraints experienced in conducting the research may
also be stated.
CONCLUSION
DISCUSSION QUESTIONS
MODULE 6
PROBLEM FORMULATION
Learning objectives:
INTRODUCTION
Research always originates from a need felt by individuals in a social setting. The
problem is basically a “gap” between “what is” and “what ought to be”. When a
research is conducted to solve the problem as Tejero, (2004) puts it, a gap is filled in
and new knowledge evolved. However, a clear distinction between the problem and
the purpose should be made. The problem is the aspect the researcher worries about,
thinks about and wants to find a solution for. The purpose is to solve the problem, i.e.,
find answers to the question(s). If there is no clear problem formulation, the purpose
and methods are meaningless. This chapter provides better insight into, and
understanding of, the ways in which a researcher formulates the problem from
varied sources, develops the literature matrix, and finds the gap in the research.
RESEARCH PROBLEM
COMPONENTS OF A RESEARCH PROBLEM:
Tejero (2004) enumerated the following as some of the many sources of a problem:
2. Courses that have been taken;
3. Journals, books, magazines or abstracts;
4. Theses and dissertation (focused on recommendations);
5. Professors and colleagues; and
6. Confirmation of the research gap.
A research problem is the initial step and the most significant condition in the research
method. The research problem serves as the basis of a research study. According to
Kerlinger, ‘in order for one to solve a problem, one must know what the problem is’.
A large part of solving the problem is knowing what one is trying to do. A research
problem, and the way the researcher formulates it, regulates almost every step that
follows in the study. The formulation of the problem determines the quality of the
study’s output. In some cases research problems or questions are defined too vaguely
and too generally. An important point to keep in mind when defining or formulating a
research problem is that it should be specific rather than general. When a problem or
question is specific and focused, it becomes a more answerable research question than
if it remained general and unfocused (Bless et al., 2006).
For example it could be: "The frequency of job layoffs is creating fear, anxiety, and a
loss of productivity in middle management workers." While this problem statement is
just one sentence, it should be accompanied by a few paragraphs that elaborate on
the problem. The paragraphs could present persuasive arguments that make
the problem important enough to study. They could include the opinions of others
(politicians, futurists, other professionals); explanations of how the problem relates
to business, social or political trends via presentation of data that demonstrates the
scope and depth of the problem. A well-articulated statement of the problem
establishes the foundation for everything to follow in the proposal and will render less
problematic most of the conceptual, theoretical and methodological obstacles
typically encountered during the process of proposal development. This means that,
in subsequent sections of the proposal, there should be no surprises, such as
categories, questions, variables or data sources that come out of nowhere: if it can't
be found in the problem section, at least at the implicit level, then it either does not
belong in the study or the problem statement needs to be re-written (Bwisa, 2008).
WHERE DOES A PROBLEM STATEMENT ORIGINATE FROM?
ROLE OF LITERATURE
No research undertaking can skip an exhaustive literature search (Aslam, 2010). For a
researcher, it is extremely frustrating to learn at the end of the research exercise that
the knowledge gap that he or she is trying to address had already been answered and
published well before the start of the research. Therefore, before beginning any
research, colleagues should devote all their energies to confirm that their research
question has not been answered before…. However, there is always a time gap
ranging from one to several years before research results published in medical
journals and conference proceedings become part of textbooks (Scott, et al.
2011). The importance of publications not available in these conventional databases
(usually known as gray literature), i.e. dissertations, theses, scientific reports, and
non-indexed publications, cannot be overlooked, and every attempt should be made to
make the literature search exhaustive. It is recommended that this work be
performed by more than one investigator using two or more databases, and that it
include a manual search of the lists of selected references for gray literature.
Methods of performing such a search have been discussed elsewhere and must be
considered by colleagues according to their feasibility (Aslam, 2010; Scott, et al.
2011). It is important to keep in mind that the quality of research reports should be
carefully and critically judged using the guidelines available elsewhere, (Morris, 2008),
as unclear methods lead to uncertainty and may itself be a knowledge gap (Aslam,
2010). Nonetheless, we strongly suggest that at the end of the literature search,
colleagues should write a report with methods used for literature search, their
findings, and identified knowledge gaps, to stimulate discussions with other
colleagues to confirm and improve the research question in the light of these reports.
It is highly recommended that researchers, whether new or experienced, transform
their literature review reports into peer-reviewed comprehensive review articles
which can inform other researchers of the possible avenues of research and can be
used as an argument to support identified research questions via peer-review (Junaid,
Akhtar, Raza, and Ejaz, 2012).
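Searching two or more databases, as recommended above, inevitably returns overlapping records. The following is a minimal sketch (the database names and titles are invented for illustration) of merging hit lists and removing duplicates by normalized title before screening:

```python
# Hypothetical hit lists from two bibliographic databases
database_a = ["Delphi Methods in SME Research", "Focus Groups in Practice",
              "Triangulation Revisited"]
database_b = ["focus groups in practice", "Gray Literature and Research Gaps",
              "Triangulation Revisited"]

def normalize(title):
    """Case-fold and collapse whitespace so trivially different records match."""
    return " ".join(title.lower().split())

merged = {}
for title in database_a + database_b:
    merged.setdefault(normalize(title), title)  # keep the first spelling seen

unique_titles = list(merged.values())
print(len(unique_titles))  # 4 unique records out of 6 raw hits
```

Real reference managers use richer matching (DOI, author, year), but the principle is the same: document how many records each database returned and how many survived de-duplication, so the search report recommended above is reproducible.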
• Relevance
• Avoidance of duplication
• Urgency or timeliness
• Political acceptability of study
• Feasibility of study
• Applicability of results
• Ethical acceptability
Relevance
The topic you choose should be a priority problem. Questions to ask include: How
large or widespread is the problem? Who is affected? How severe is the problem? Try
to think of serious socioeconomic problems that affect a great number of people, or
of the most serious problems faced by managers in your area of work.
Avoidance of duplication
Before you decide to carry out a study, it is important that you find out whether the
suggested topic has been investigated before, either within the proposed study area
or in another area with similar conditions. If the topic has been researched, the results
should be reviewed to explore whether major questions that deserve further
investigation remain unanswered. If not, another topic should be chosen.
Urgency or timeliness
How urgently are the results needed for making a decision or developing interventions
at various levels (from community to policy)? Consider which research should be done
first and which can be done later.
Political acceptability
In general it is advisable to research a topic that has the interest and support of the
political /national authorities. This will increase the chance that the results of the
study will be implemented. Under certain circumstances, however, you may feel that
a study is required to show that the government's policy needs adjustment. If so, you
should make an extra effort to involve the policy makers concerned at an early stage,
in order to limit the chances of confrontation later.
Feasibility
Look at the project you are proposing and consider the complexity of the problem and
the resources you will require to carry out your study. Thought should be given first
to the manpower, time, equipment, and money that are locally available.
In situations where the local resources necessary to carry out the project are not
sufficient, you might consider resources available at the national level; for example,
in research units, research councils or local universities. Finally, explore the possibility
of obtaining technical and financial assistance from external sources.
Applicability of results
Is it likely that the recommendations from the study will be applied? This will depend
not only on the management capability within the team and the readiness of the
authorities, but also on the availability of resources for implementing the
recommendations. Likewise, the opinions of the potential clients and of responsible
staff will influence the implementation of recommendations.
Ethical acceptability
We should always consider the possibility that we may inflict harm on others while
carrying out research. Therefore, review the study you are proposing and consider
important ethical issues such as:
How acceptable is the research to those who will be studied? (Cultural sensitivity must
be given careful consideration). Is the problem shared by the target group and
management staff and researchers? Can informed consent be obtained from the
research subjects? Will the results be shared with those who are being studied? Will
the results be helpful in improving the living standard of those studied?
IMPORTANCE OF RESEARCH QUESTION
The first and most important decision in preparing a systematic review is to determine its focus. This
is best done by clearly framing the questions the review seeks to answer. Well-formulated questions
will guide many aspects of the review process, including determining eligibility criteria, searching for
studies, collecting data from included studies, and presenting findings (Jackson, 1980;
Cooper, 1984; Hedges, 1994). A research problem is often accompanied by research question(s).
A Research Question is a statement that identifies the phenomenon to be
studied. For example: What resources are helpful to new and minority
drug abuse researchers? (www.theresearchassistant.com).
A good research question is described by the acronym FINER (Hulley and Cummings,
1998):
• Feasible (adequate subjects, technical expertise, time and money, and scope);
• Interesting to the investigator;
• Novel (confirms or refutes previous findings, provides new findings);
• Ethical; and
• Relevant (to scientific knowledge, clinical and health policy, and future research
directions).
CHECKLIST
4. Is the question doable?
• Can information be collected in an attempt to answer the question?
• Do I have the skills and expertise necessary to access this information? If not,
can the skills be developed?
• Will I be able to get it all done within my time constraints?
• Are costs likely to exceed my budget?
• Are there any potential ethical problems?
a. Being recognized as an expert in some area.
9. Do not assume that outstanding, or even good, clinical research is easier than
outstanding basic research. Outstanding clinical research is not easy.
a. It is more difficult to design well-controlled and informative studies.
b. All procedures needed for an optimal design cannot always be
performed in a given population.
c. The studies usually take longer and are more complicated.
10. Focus, Focus, Focus.
a. Do not forget the need to focus.
b. Trying to make an impact in three or more different areas is extremely
difficult (Kahn, 1994).
CONCLUSION
Formulating the research problem and developing the proposition are major steps in
the research methodology. Formulating them in a meaningful way is not at all an easy
task. Problem formulation is the leading stage of research, and it is practically a
necessity to have basic knowledge and understanding of the phenomena under study,
which further supports making the right decisions. This section of the book discusses
these aspects and provides the researcher with meaningful insight into the problem
formulation process; it supports the researcher in fixing the unit of analysis, the time
and space limits, the features under study, and the specific conditions that must be
present in order to conduct the research process.
DISCUSSION QUESTIONS
1. How do you define a research problem? How do you formulate a research
problem?
2. What are the sources of research problems?
3. Discuss the importance of formulating a research problem in research.
4. What do you mean by a problem statement? How is it different from problem
formulation?
5. Discuss the importance of research questions in arriving at a problem
statement.
6. Explain the best criteria for choosing a research topic.
7. Discuss the checklist for forming good research questions.
8. What are the ten commandments of picking a research project?
MODULE 7
RESEARCH DESIGN
Learning objectives:
INTRODUCTION
How is the term 'research design' to be used in this chapter? An analogy might help.
When constructing a building, there is no point ordering materials or setting critical
dates for completion of project stages until we know what sort of building is being
constructed. The first decision is whether we need a high-rise office building, a factory
for manufacturing machinery, a school, a residential home or an apartment block.
Until this is decided, we cannot sketch a plan, obtain permits, work out a work schedule
or order materials. Similarly, social research needs a design or a structure before data
collection or analysis can commence.
A research design is not just a work plan. A work plan details what has to be done to
complete the project but the work plan will flow from the project's research design.
The function of a research design is to ensure that the evidence obtained enables us
to answer the initial question as unambiguously as possible. Obtaining relevant
evidence entails specifying the type of evidence needed to answer the research
question, to test a theory, to evaluate a program or to accurately describe some
phenomenon. In other words, when designing research we need to ask: given this
research question (or theory), what type of evidence is needed to answer the
question (or test the theory) in a convincing way?
DEFINITION
Research design essentially refers to the plan or strategy of shaping the research
(Henn, Weinstein and Foard, 2006). It might include the entire process of research,
from conceptualizing a problem and writing research questions, through data
collection, analysis, interpretation and report writing (Creswell, 2007). It provides the
framework for the collection and analysis of data and subsequently indicates which
research methods are appropriate (Walliman, 2006). The most common and useful
purposes and main aims of research are exploration, description and rational
explanation based on data (Richardson, 2005; Babbie, 2007).
According to Hussey and Hussey (1997), research design is the overall approach to the
research process, from the theoretical underpinning to the collection and analysis of
the data.
"A logical systematic plan prepared for directing a research study. It specifies the
objectives of the study, the methodology, and the techniques to be adopted for achieving
the objectives… it constitutes the blueprint for the collection, measurement and
analysis of the data." – Philips Bernard.
Saunders, Lewis and Thornhill (1997) state that the research design helps the
researcher to:
The statement of the problem, research questions and research objectives will call for
a specific research design (Saunders et al., 2009). Research design addresses
important issues relating to a research project such as purpose of study, location of
study, type of investigation, extent of researcher interference, time horizon and the
unit of analysis (Sekaran and Bougie, 2010). Research designs, however, will vary from
simple to complex depending on the nature of the study and the specific hypotheses
formulated for testing. Certain designs will ask for primary data and others for
secondary data. Some research designs may require the researcher to collect primary
as well as secondary data. Moreover, the data collection modes will differ for
different studies: some will require observation, while others may rely on surveys or
secondary data (Zikmund, 2000). Some will call for experiments where, based on the
results of the experiment, the theory on which the hypotheses and predictions were
based will be accepted or rejected (Goodwin, 2005).
Research design sets the scope of the study specifying whether it needs to be
descriptive, explanatory (or causal) or predictive. It is important for a researcher to be
familiar with the differences between the major research designs like experimental,
cross-sectional, longitudinal, case study and comparative (Bryman and Bell, 2003).
Hence a researcher must be very careful while selecting a research design, as it will
dictate the overall plan and techniques of the research project.
Research design is a master plan that specifies the methods and procedures for
collecting and analyzing the needed information. Research design 'deals with a logical
problem and not a logistical problem' (Yin, 1989). Before a builder or architect can
develop a work plan or order materials, they must first establish the type of building
required, its uses and the needs of the occupants. The work plan flows from this.
Similarly, in social research the issues of sampling, method of data collection (e.g.
questionnaire, observation, and document analysis) and design of questions are all
subsidiary to the matter of 'What evidence do I need to collect?' Without attending
to these research design matters at the beginning, the conclusions drawn will
normally be weak and unconvincing, and will fail to answer the research question.
Exploratory research is a type of research conducted because a problem has not been
clearly defined. Exploratory research helps determine the best research design, data
collection method and selection of subjects. Given its fundamental nature,
exploratory research often concludes that a perceived problem does not actually
exist. According to Sekaran (2000), an exploratory study is performed
when a researcher has little knowledge about the situation or has no information on
how similar problems or research issues have been solved in the past. It embarks on
investigating and finding the real nature of the problem. In addition, solutions and
new ideas can surface from this type of research (Richardson, 2005).
Exploratory research aims at investigating the full nature of a phenomenon, the
manner of its existence, other related factors and the characteristics of the subjects
thereof, in order to gain additional information on the situation or practice.
Exploratory research is done to increase the researchers’ knowledge on the field of
study and provides valuable baseline information for further investigation. The
method uses interviews and observational methods to collect data (Drummond, 1998;
Polit and Beck, 2006).
The main objective of exploratory research is to narrow the broad problem into a
specific problem statement and to generate possible hypotheses.
The exploratory studies are mainly used for:
Professionals, academicians, experts, managers, etc. are the individuals who can
provide substantial data on the research problem.
A survey of experienced individuals will help the researcher to identify the scope and
key issues. A well-organized survey of experienced individuals will help the researcher
in:
Sometimes popular cases are gone through to develop an understanding of the
given research problem. Case study research excels at bringing us to an understanding
of a complex issue or object, and can extend experience or add strength to what is
already known through previous research. Case studies highlight comprehensive
contextual analysis of a limited number of events or conditions and their
relationships. Researchers have used the case study research method for many years
across a variety of disciplines. Social scientists, in particular, have made wide use of
this qualitative research method to examine contemporary real-life situations and
provide the basis for the application of ideas and extension of methods. Yin (1984)
defines the case study research method as an empirical inquiry that investigates a
contemporary phenomenon within its real-life context; when the boundaries between
phenomenon and context are not clearly evident; and in which multiple sources of
evidence are used.
d) Case study
Case studies are multifaceted, since they generally comprise multiple sources of data,
may include manifold cases within a study, and produce large amounts of data for
analysis. Investigators from many areas use the case study technique to build upon
theory, to produce novel theory, to dispute or test theory, to clarify a condition, to
provide a basis for applying solutions to situations, to explore, or to describe an object
or phenomenon.
Advantages
• Useful in determining the best approach to achieve a researcher's
objectives; and
• In some cases can save a great deal of time and money.
Disadvantages
Because the human mind cannot extract the full import of a large mass
of raw data, descriptive statistics are very important in reducing the data to
manageable form. When in-depth, narrative descriptions of small numbers of cases
are involved, the research uses description as a tool to organize data into patterns
that emerge during analysis. Those patterns aid the mind in comprehending a
qualitative study and its implications.
In the majority of cases, quantitative research falls into two areas: studies that describe
events and studies aimed at discovering inferences or causal relationships. Descriptive
studies are aimed at finding out "what is," so observational and survey methods are
frequently used to collect descriptive data (Borg and Gall, 1989). Studies of this type
might describe the current state of multimedia usage in schools or patterns of activity
resulting from group work at the computer. An example of this is Cochenour, Hakes,
and Neal's (1994) study of trends in compressed video applications in education
and the private sector.
Description is used for frequencies, averages and other statistical calculations.
Often the best approach, prior to writing descriptive research, is to conduct a survey
investigation. Qualitative research often has the aim of description, and researchers
may follow up with examinations of why the observations exist and what the
implications of the findings are. In short, descriptive research deals with everything
that can be counted and studied. But there are always restrictions to that: your
research must have an impact on the lives of the people around you. For example, by
finding the most frequent disease that affects the children of a town, the reader of
the research will know what to do to prevent that disease; thus, more people will live
a healthy life.
Descriptive research can be both quantitative and qualitative. It describes the given
problem or phenomenon in order to establish the relationship between the factors.
• Large representative samples are observed in descriptive
research;
• Uses description as a tool to organize data into patterns that
emerge during analysis; and
• Often uses visual aids such as graphs and charts to aid the
reader.
Example:
As many researchers have noted, descriptive research designs are for the most part
quantitative in nature (Burns and Bush, 2002; Churchill and Iacobucci, 2004; Hair et al.
2003; Parasuraman, 1991). There are two basic techniques of descriptive research:
cross-sectional and longitudinal. Cross-sectional studies collect information from a
given sample of the population at only one point in time, while longitudinal studies
deal with the same sample units of the population over a period of time (Burns and
Bush, 2002; Malhotra, 1999).
Correlational Study
The correlational study is one of the subcategories of descriptive research. Best (1977)
defined descriptive research as that which "describes and interprets what is; it is
concerned with the conditions or relationships that exist; practices that prevail; beliefs,
points of view, or attitudes that are held; processes that are going on; effects that are
being felt; or trends that are developing." "Correlational studies are concerned with
determining the extent of the relationship existing between variables. They enable one
to measure the extent to which variation in one variable is associated with variation in
another. The magnitude of the relationship is determined through the use of the
coefficient of correlation…. The idea of such studies is exploration rather than theory
testing."
Characteristics
Example:
A study to find out the relationship between depression and academic
achievement among arts college students.
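The depression-achievement example can be made concrete with a small computation of Pearson's coefficient of correlation. The sketch below uses only the Python standard library, and all scores are invented for illustration, not taken from any real study:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson's coefficient of correlation between two equal-length samples."""
    assert len(x) == len(y) and len(x) > 1
    mx, my = mean(x), mean(y)
    # Sample covariance: sum of cross-products of deviations over (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical data: depression scores and exam marks for 8 students
depression = [12, 18, 9, 22, 15, 30, 7, 25]
achievement = [78, 65, 82, 58, 70, 45, 88, 55]

r = pearson_r(depression, achievement)
print(round(r, 2))  # a strong negative correlation on these invented data
```

A coefficient near −1 on such data would suggest that higher depression scores go with lower achievement, which is exactly the kind of association (not causation) a correlational study measures.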
Cross-sectional study
A cross-sectional study is a research method often used in psychology, sociology and
management, but also utilized in many other areas, including social science and
education. This kind of study uses different sets of individuals who differ in the variable
of interest but share other features such as socioeconomic status, educational
background, and ethnicity.
Characteristics
Example:
Advantages:
Limitations:
Longitudinal Study
Characteristics:
• data are collected for each item or variable for two or more
distinct periods;
• the subjects or cases analyzed are the same or broadly
comparable; and
• the analysis involves some comparison of data between or
among periods.
Example:
a. Trend study
A trend study is a type of longitudinal research design that looks into the dynamics of
a particular characteristic of the population over time. For example, a researcher
might want to study the people’s preference for projects, whether government or
non-government, in their community. Respondents of the study vary across study
periods.
b. Cohort study
A cohort study is a type of longitudinal research design in which a cohort is tracked over
extended periods of time. A cohort is a group of individuals who have shared a
particular experience during a particular time span, for example, a group of
indigenous peoples living in the forest for decades.
c. Panel study
A panel study is a type of longitudinal research design that involves the collection of
data from a panel, that is, the same set of people, over several points in time, by
measuring specific dependent variables identified by the researcher to achieve a study
objective. From the data gathered, it is possible to infer cause-effect relationships after
a given time. A panel study is usually done when it is difficult to analyze a case study,
which is only a one-shot deal. People's shifting attitudes and behavior can be detected.
For example, the cause-effect relationship may be investigated between the number of
faculty research outputs and the amount of time given for research as workload over
three years.
Advantages:
Disadvantages:
Prospective panel design: data may be collected at two or more distinct periods, for
those distinct periods, on the same set of cases and variables in each period.
Retrospective panel design: data may be collected in a single period for several
periods, usually including the period that ends at the time the data are
collected.
Cross-sectional Research Design
Descriptive research is the type of research that explores and describes the data or
characteristics needed for the research. It has several advantages. Some of them are
as given below:
Advantages:
The people being studied are unaware, so they act naturally, as they normally do in
everyday situations;
Meaning
This is similar to the descriptive study design but with a different focus. Diagnostic
research studies, on the other hand, determine the frequency with which something
occurs. A diagnostic design is concerned with the case as well as the treatment. It is
directed towards discovering what is happening, why it is happening and what can be
done about it. It aims at identifying the causes of the problem and the possible
solutions for it.
Purpose
A diagnostic study may also be concerned with discovering and testing whether certain
variables are associated; e.g., are persons hailing from rural areas more suitable for
manning the rural branches of banks? Do village voters, more than city voters, vote for
a particular party?
Requirements
Both descriptive and diagnostic study designs share common requirements. The main
objective of a descriptive design is to acquire knowledge. The design in such studies
needs to be rigid, focusing on the following:
A diagnostic study is not possible in areas where knowledge is not advanced enough
to make an adequate diagnosis possible. In such cases the social scientist limits his
effort to descriptive studies.
Experimental design seeks to find out the cause and effect relationship of the
phenomenon under study. Under this design, two similar groups, one called
'experimental group' and the other 'control group' are chosen. The experimental
group is exposed to predesigned procedures while the control group is kept constant.
At the end of the experiment, the two groups are compared to find out the resultant
effect of the experiment. The difference between the two groups is considered to
have been produced by the causative factors.
Before starting the experimentation process, it needs to be ensured that the two
groups are similar in almost every respect. The main techniques for making the two
groups similar are: (i) randomization and matching; and (ii) frequency distribution
control.
Definition:
Defining Characteristics:
• Unit of Assignment: The experiment applies to a specific ‘unit of
assignment’ (the model in which the conditions are tested).
• Comparison/ Control Group: Most experimental studies measure
the impact of treatments against a comparison or control group.
Sometimes the control condition is defined as one to which the
treatment is NOT applied. Sometimes different treatments are
compared against each other.
• Causality: The combined purpose of all previous features is to
credibly establish a cause-effect relationship. However, ‘causality’
is not always established at a uniform level. Experimental research
in environmental technology (such as a metal roof, for example) is
more likely to take causality for granted than research in socio-cultural aspects.
Causality, we begin to see, is more achievable where:
- laboratory settings control relevant variables;
- variables are inert (not likely to change, except as a consequence of the
treatment);
- explicit theories specify expected effects; and
- instruments are calibrated to measure expected effects.
With social behaviors, researchers are more explicit about how they have met basic
requirements (more, richer, deeper explanation). They also emphasize the conditions
and limitations of any causal interpretation (Campbell, 1966).
Basic principles of experimental designs
The basic principles of experimental designs are randomization, replication and local
control. These principles make a valid test of significance possible. Each of them is
described briefly in the following subsection (Fisher, 1960).
Randomization
Replication
(ii) to decrease the experimental error and thereby to increase precision, which is
a measure of the variability of the experimental error; and
since the standard error of a treatment mean equals s/√r, where s denotes the
standard deviation and r denotes the number of replications (Fisher, 1960).
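The precision gain from replication can be illustrated with a small simulation. Assuming normally distributed experimental errors (all numbers below are invented for illustration), the empirical spread of treatment means shrinks roughly as 1/√r as the number of replications r grows:

```python
import random
from statistics import mean, stdev

random.seed(42)

def spread_of_means(r, n_experiments=2000, sigma=10.0):
    """Empirical standard deviation of a treatment mean based on r replications."""
    means = [mean(random.gauss(0.0, sigma) for _ in range(r))
             for _ in range(n_experiments)]
    return stdev(means)

# Quadrupling the number of replications should roughly halve the standard error
se_4 = spread_of_means(4)    # theory: 10/sqrt(4)  = 5.0
se_16 = spread_of_means(16)  # theory: 10/sqrt(16) = 2.5
print(round(se_4, 1), round(se_16, 1))
```

The simulated spreads track the theoretical values s/√r closely, which is the sense in which replication "increases precision".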
Local Control
It has been observed that all extraneous sources of variation are not removed by
randomization and replication. This necessitates a refinement in the experimental
technique. In other words, we need to choose a design in such a manner that all
extraneous sources of variation are brought under control. For this purpose, we make
use of local control, a term referring to the amount of balancing, blocking and
grouping of the experimental units. Balancing means that the treatments should be
assigned to the experimental units in such a way that the result is a
balanced arrangement of the treatments. Blocking means that like experimental units
should be collected together to form a relatively homogeneous group. A block is also a
replicate. The main purpose of the principle of local control is to increase the efficiency
of an experimental design by decreasing the experimental error. The point to
remember here is that the term local control should not be confused with the
word control. The word control in experimental design is used for a treatment that
does not receive any treatment but against which the effectiveness of the other
treatments is compared (Fisher, 1960).
Examples:
The following are some examples to illustrate the experimental research (Isaac &
Michael, 1977):
Advantages and disadvantages
Advantages:
• Intuitive practice shaped by research;
• Teachers have bias but can be reflective;
• The researcher can have control over variables;
• Humans perform experiments anyway;
• Used to determine what is best for the population; and
• Provides for greater transferability than anecdotal research.
Disadvantages:
• Personal bias of the researcher may intrude;
• The sample may not be representative;
• Can produce artificial results;
• The results may only apply to one situation and may be difficult to replicate;
• Human response can be difficult to measure; and
• Political pressure may skew results.
(Source: http://writing.colostate.edu/guides/research/experiment/pop5d.cfm)
There are four types of experimental studies and accordingly four types of research design
dealing with each type of experimental study. The characteristics of these four
experimental designs have been summed up and presented in the following Table.
TYPES OF EXPERIMENTAL DESIGNS AND THEIR CHARACTERISTICS
Example: After-only experimental design
For example, schoolchildren are randomly assigned to two groups of 25 each. The
experimental (treatment) group receives an innovative teaching method during a special class
session. The second group (the control) receives an old-style teaching method during a special
class session. No pretest is used for either group. Issues such as existing grades, SAT scores,
and other factors are examined as covariates. The key difference in the posttest-only design
is that neither group is pretested, and only at the end of the study are both groups measured
on the dependent variable.
E   Ye
C   Yc
Difference = Ye − Yc
Y – DEPENDENT VARIABLE
X – INDEPENDENT VARIABLE
E – EXPERIMENTAL GROUP
C – CONTROL GROUP
For example, a classroom teacher gives her students a pretest, then implements an
instructional strategy, followed by a posttest. The teacher would be able to assess the
effectiveness of the instructional strategy through this experimental design. The teacher
could evaluate the actual condition of the students before implementing the instructional
strategy and, after implementation, what changes occurred in the students' performance
in academic accomplishment. Comparative analyses of the students' performance
levels are made possible with this approach. The effect of the treatment would be
equal to the level of the phenomenon after the treatment minus the level of the
phenomenon before the treatment.

              BEFORE   AFTER
TARGET GROUP    Y1       Y2
DIFFERENCE = Y2 − Y1
      BEFORE   AFTER
E      Ye1      Ye2
C      Yc1      Yc2
DIFFERENCE IN E = Ye2 − Ye1 = DE
DIFFERENCE IN C = Yc2 − Yc1 = DC
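The logic of this before-after design with a control group can be sketched numerically. The treatment effect is estimated as DE − DC, the experimental group's gain over and above the control group's gain; the scores below are hypothetical:

```python
# Hypothetical pretest/posttest means for the two groups
ye1, ye2 = 55.0, 70.0   # experimental group: before (Ye1) and after (Ye2)
yc1, yc2 = 54.0, 58.0   # control group: before (Yc1) and after (Yc2)

de = ye2 - ye1              # difference in E = Ye2 - Ye1
dc = yc2 - yc1              # difference in C = Yc2 - Yc1
treatment_effect = de - dc  # gain attributable to the treatment itself

print(de, dc, treatment_effect)  # 15.0 4.0 11.0
```

Subtracting DC removes the change that would have occurred anyway (maturation, history), leaving the 11-point gain attributable to the treatment on these invented numbers.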
1. One-shot;
2. One-Group, Pre-Post;
3. Static Group;
4. Random Group;
5. Solomon Four Group;
6. Randomized Block;
7. Factorial;
8. Latin Square; and
9. Historical.
One-shot
GP --T--O
General evaluation:
Diagrammed as:
X O1
kinds of treatments, etc., the experimenter might sensibly decide that it is not
necessary to undertake a more extensive design. Simplicity, ease, and low cost
represent strong potential advantages in the oft-despised one-shot.
One-Group, Pre-Post
GP--O--T—O
General evaluation:
• Subjects in the experimental group are measured before and after the
treatment is administered.
• No control group
• Offers comparison of the same individuals before and after the
treatment (e.g., training)
• If the time between the 1st and 2nd measurements is extended, the study
may suffer from maturation; it can also suffer from history, mortality, and
testing effects
• Diagrammed as:
O1 X O2
The usefulness of this design is similar to that of the one-shot, except that an
additional class of information is provided, i.e., pre-treatment condition or behavior.
This design is frequently used in clinical and educational research to determine if
changes occurred. It is typically analyzed with a matched pairs t-test.
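The matched pairs t-test mentioned above can be sketched in a few lines using only the Python standard library. The pre/post scores are hypothetical, invented purely to illustrate the calculation:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """t statistic for a matched pairs (paired samples) t-test.

    t = mean(d) / (stdev(d) / sqrt(n)), where d are post-minus-pre differences.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical scores for the same 6 subjects before and after the treatment
pre = [60, 55, 71, 64, 58, 66]
post = [66, 59, 74, 70, 61, 72]

t = paired_t(pre, post)
print(round(t, 2))  # compare against a t table with n - 1 = 5 degrees of freedom
```

A large positive t here would indicate that the post-treatment scores are reliably higher than the pre-treatment scores, though without a control group the design still cannot rule out history or maturation effects.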
Static Group
In this design, two intact groups are used, but only one of them is given the
experimental treatment. At the end of the treatment, both groups are observed or
measured to see if there is a difference between them as a result of the treatment.
The design is diagrammed as follows:
GP--T--O
GP------O
General Evaluation:
Diagrammed as:
Experimental Group X O1
Control Group O2
This design may provide information on some rival hypotheses. Whether it does or
not depends on the initial comparability of the two groups and whether their
experience during the experiment differs in relevant ways only by the treatment itself.
Almost 40 years ago, Solomon introduced a new form of experimental design, typically
referred to today as the Solomon four-group design (Solomon, 1949). Campbell and
Stanley (1963) discussed this design as one of three one-treatment-condition
experimental designs, the other two being the pretest-posttest control group design
and the posttest-only control group design. Each of these designs is adequate to
assess the effect of the treatment and is immune from most threats to internal
validity. The Solomon four-group design, however, adds the advantage of being the
only one of the three able to assess the presence of pretest sensitization. Pretest
sensitization means that "exposure to the pretest increases . . . the sensitivity to the
experimental treatment, thus preventing generalization of results from the pretested
sample to an un-pretested population" (Huck and Sandler, 1973). Thus, the Solomon
four-group design adds a higher degree of external validity in addition to its internal
validity and hence, according to Helmstadter (1970), is "the most desirable
of all the . . . basic experimental designs".
The Solomon Four Group design attempts to control for the possible "sensitizing"
effects of the pre-test or measurement by adding two groups who have not been a
part of the pre-test or pre-measurement process. It can be detailed as follows:
• Two experimental groups and two control groups for the
experiment;
• One experimental group and one control group can be given a
pre-test and post-test;
• The other two groups will be given a post-test alone;
• Analysis of different possibilities;
• True experimental design;
• Combines pretest-posttest with control group design and the
posttest-only with control group design;
• Provides means for controlling the interactive testing effect and
other sources of extraneous variation; and
• Does include random assignment.
EXPERIMENTAL GROUP 1:   O1   X   O2
CONTROL GROUP 1:        O3       O4
EXPERIMENTAL GROUP 2:        X   O5
CONTROL GROUP 2:                 O6
Although this design is not frequently used in clinical studies, it is frequently used in
both behavior and educational research and in medical studies involving the physical
activities of patients (physical therapy, for example where the pre-measurement
involves some sort of physical activity or testing). The additional cost of this design
must be justified by the need for information regarding the possible effects of the pre-
treatment measurement.
Example:
This program was intended to assist nurses in understanding vicarious traumatization
(VT), to aid in transforming and addressing VT, to identify signs, symptoms and
contributing factors of VT, and to develop a personal resource list in order to ensure
the development of a healthy self. The aim was to ensure that pretest sensitization
would not influence the effectiveness of the intervention program. Solomon’s four-
group design was implemented in this study (Braver and Braver, 1988). Sixty nurses
who work at the Free State Psychiatric Complex participated in the study. The four
groups consisted of two experimental and two control groups. The first experimental
group completed a pretest, participated in the intervention program and completed
a post-test. The second experimental group participated in the intervention program
and only completed a post-test. The first and second control groups did not participate
in the intervention program. The first control group completed the pre- and post-test
questionnaires, whereas the second control group completed only the post-test. In
general, based on the responses of those who participated in the program, it seems
that most nurses benefited from the intervention, in particular from the exercises that
addressed VT and integrated emotional experiences.
FACTORIAL DESIGN
The field of Design of Experiments (DoE) deals with methods for efficient
experimentation, i.e. deriving required information about, e.g. a process, at the least
expenditure of resources (Barker, 1994). Factorial designs are important tools in DoE
and are exhaustively treated in the literature (Box, et al. 2005; Montgomery, 2005).
WHAT IS A FACTOR?
A factor is a variable that is controlled and varied during the course of an experiment.
The factors in a Factorial Experiment potentially interact among themselves to
influence the resulting value of the dependent variable. The factorial is used when we
desire information concerning the effects of different kinds or intensities of
treatments. The factorial provides relatively economical information not only about
the effects of each treatment, level or kind, but also about interaction effects of the
treatment.
The most basic factorial design consists of two factors (independent variables), each
of which has two levels. This is how we describe a factorial design in a research paper:
a 2 x 2 between-subjects factorial design. Each number represents a factor, and the
value of the number indicates how many levels that factor has. In this design, there are
two factors, each having two levels.
ILLUSTRATION: FACTORIAL DESIGN
To find out the effect of manipulation of anxiety on task performance:
HIGH ANXIETY – A (Group A); LOW ANXIETY – B (Group B)
SIMPLE TASK – 1; COMPLEX TASK – 2

Induced anxiety        GROUP A (high)    GROUP B (low)
X1: SIMPLE TASK        score             score
X2: COMPLEX TASK       score             score
A factorial design is the most common way to study the effect of two or more
independent variables, although we will focus on designs that have only two
independent variables for simplicity. In a factorial design, all levels of each
independent variable are combined with all levels of the other independent variables
to produce all possible conditions.
Example:
Suppose we have more than one independent variable that we think is important. Can
we manipulate two (or more) things at once? Take the earlier research example
(Example 1), the effect of whether or not a stimulus person (shown in a photograph)
is smiling on ratings of the friendliness of that person. If the first independent variable
has three levels (not smiling, closed-mouth smile, open-mouth smile) and a second
independent variable has two levels, it would be a 3x2 factorial design. Note that the
number of distinct conditions formed by combining the levels of the independent
variables is always just the product of the numbers of levels. In a 2x2 design, there are
four distinct conditions. In a 3x2 design, there are 6.
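Since the conditions are just the cross-product of the factor levels, this product rule is easy to verify mechanically. A minimal sketch (the factor names are illustrative, not taken from any particular study):

```python
from itertools import product

def factorial_conditions(*factors):
    """Return every condition formed by crossing all factor levels."""
    return list(product(*factors))

# 2 x 2 design: anxiety (high/low) x task (simple/complex)
design_2x2 = factorial_conditions(["high", "low"], ["simple", "complex"])
print(len(design_2x2))  # 4 distinct conditions

# 3 x 2 design: smile type x a second two-level factor
design_3x2 = factorial_conditions(
    ["not smiling", "closed-mouth smile", "open-mouth smile"],
    ["level 1", "level 2"],
)
print(len(design_3x2))  # 6 distinct conditions
```

Each tuple in the returned list is one experimental condition, so the length of the list is always the product of the numbers of levels.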
Example 2:
A grocery store chain wants to use 12 of its stores to examine whether sales would
change at 3 different hours of operation and with 2 different types of sales promotion
➢ Sales promotion: food samples
This type of research design is called a 3 x 2 factorial design. Here the interaction
requires 6 experimental groups (3 x 2 = 6).
Ex post facto designs (the term ex post facto literally means "after the fact") provide
an alternative means by which a researcher can investigate the extent to which
specific independent variables (a virus, a modified curriculum, a history of family
violence, or a personality trait) may possibly affect the dependent variable(s) of
interest. Although experimentation is not feasible, the researcher identifies events
that have already occurred or conditions that were already present and then collects
data to investigate a possible relationship between these factors and subsequent
characteristics or behaviors.
Characteristic features:
• Where the independent variable lies outside the researcher’s control.
Stage Two: State the hypotheses and the assumptions or premises on which the
hypotheses and research procedures are based.
Stage Three: Select the subjects (sampling) and identify the methods for collecting the
data.
Stage Four: Identify the criteria and categories for classifying the data to fit the
purposes of the study.
Stage Five: Gather data on those factors which are always present in which the given
outcome occurs, and discard the data in which those factors are not always present.
Stage Six: Gather data on those factors which are always present in which the given
outcome does not occur.
Stage Seven: Compare the two sets of data (i.e. Subtract the former (Stage Five) from
the latter (Stage Six), in order to infer the causes that are responsible for the
occurrence or non-occurrence of the outcome (Cohen, Manion & Morrison, 2012).
Stage Eight: Analyze, interpret and report findings.
The researcher can establish some control over an ex post facto design, but such
designs have notable weaknesses:
• Can provide support for any number of different, even
contradictory, hypotheses.
• Correlation does not equal cause.
• Lack of control: the researcher cannot manipulate the independent
variable or randomize her subjects
• One cannot know for certain whether the causative factor has been
included or even identified.
• It may be that no single factor is the cause.
• A particular outcome may result from different causes on different
occasions.
• It is not possible to disconfirm a hypothesis.
• Classifying into dichotomous groups can be problematic.
• As the researcher attempts to match groups of key variables, this
leads to shrinkage of the sample.
• Conclusions may be based on too limited a sample or number of
occurrences.
• It may fail to single out the really significant factor(s) (Cohen, Manion
& Morrison, 2012).
Example of ex post facto study:
Definition
Example:
Treatment
Final conclusion
• Notes a change in attitude towards favorable conditions only when
workers' participation was ensured.
Advantages
Disadvantages
RANDOMIZED DESIGNS
The most basic type of statistical design for making inferences about treatment means
is the completely randomized design (CRD), where all treatments under investigation
are randomly allocated to the experimental units. The CRD is appropriate for testing
the equality of treatment effects when the experimental units are relatively
homogeneous with respect to the response variable. The completely randomized
design is a design in which treatments are randomly assigned to the experimental
units, or in which independent random samples of experimental units are selected for
each treatment (Shieh and Jan, 2004).
Features
Treatment
1. Very flexible design (i.e. number of treatments and replicates is only
limited by the available number of experimental units);
2. Statistical analysis is simple compared to other designs; and
3. Loss of information due to missing data is small compared to other
designs due to the large number of degrees of freedom for the error
source of variation.
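The allocation step of a CRD can be sketched in a few lines; the treatment labels and replicate count below are illustrative:

```python
import random

def completely_randomized(treatments, replicates, seed=42):
    """Randomly allocate each treatment `replicates` times across the units."""
    allocation = [t for t in treatments for _ in range(replicates)]
    rng = random.Random(seed)  # fixed seed only so the plan is reproducible
    rng.shuffle(allocation)
    return allocation          # allocation[i] = treatment given to unit i

plan = completely_randomized(["A", "B", "C"], replicates=4)
print(plan)  # 12 units; each treatment appears exactly 4 times, in random order
```

Because every unit is eligible for every treatment, no structure beyond the shuffle is needed; this is what makes the CRD appropriate only for relatively homogeneous units.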
When the experimental units are heterogeneous, the notion of blocking is used to
control the extraneous sources of variability. The major criteria of blocking are
characteristics associated with the experimental material and the experimental
setting. The purpose of blocking is to sort experimental units into blocks, so that the
variation within a block is minimized while the variation among blocks is maximized.
An effective blocking not only yields more precise results than an experimental design
of comparable size without blocking, but also increases the range of validity of the
experimental results. One can use a randomized complete block design (RCBD) to
compare treatment means when there is an extraneous source of variability. In such
cases, treatments are randomly assigned to experimental units within a block, with
each treatment appearing exactly once in every block (Shieh and Jan, 2004).
Blocking factor: performance group (3 x 3)

Treatment          Lower-level workers'   Plant-level workers'   Supervisors'
                   performance            performance            performance
Incentive Rs 100   X1                     X1                     X1
Incentive Rs 200   X2                     X2                     X2
Incentive Rs 300   X3                     X3                     X3
In a randomized block design the subjects are first divided into groups known as blocks.
Care should be taken that the population selected for the study is homogeneous with
respect to the selected variables, and there should be the same number of items in
each block. Subjects within each block should be assigned randomly to treatments,
and each treatment appears the same number of times in each block.
1. Complete flexibility.
2. Can have any number of treatments and blocks.
3. Provides more accurate results than the completely
randomized design due to grouping.
4. Relatively easy statistical analysis even with missing data.
5. Allows calculation of the unbiased error for specific treatments.
6. Generally more precise than the completely randomized design
(CRD).
7. No restriction on the number of treatments or replicates.
8. Some treatments may be replicated more times than others.
9. Missing plots are easily estimated.
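The within-block randomization described above can be sketched as follows (the block and treatment labels echo the incentive example and are illustrative):

```python
import random

def randomized_block(treatments, blocks, seed=1):
    """Assign every treatment exactly once within each block, in random order."""
    rng = random.Random(seed)
    plan = {}
    for block in blocks:
        order = list(treatments)
        rng.shuffle(order)  # independent randomization inside each block
        plan[block] = order
    return plan

plan = randomized_block(["X1", "X2", "X3"],
                        ["lower-level", "plant-level", "supervisors"])
for block, order in plan.items():
    print(block, order)  # each treatment appears exactly once per block
```

The shuffle is done separately per block, which is exactly the RCBD constraint: treatments are randomized to units within a block, never across blocks.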
A Latin square is a table filled with n x n different symbols in such a way that each
symbol occurs exactly once in each row and exactly once in each column. A Latin
Square design is an example of an incomplete block design where there is a single
treatment and two blocking variables, each with the same number of levels. The Latin
square design is used to analyze the influence of multiple independent variables on
the dependent variable. In this design there is no interaction between treatments and
blocking factors. When the experimental material is divided into rows and columns
and the treatments are allocated such that each treatment occurs only once in each
row and once in each column, the design is known as a Latin square design (LSD). The
design was originally used to eliminate fertility variation in field experiments through
a layout that controls variation in two perpendicular directions. [Latin square designs
are normally used in experiments where it is required to remove the heterogeneity of
experimental material in two directions. This design requires that the number of
replications (rows) equal the number of treatments.] In an LSD the number of rows
and the number of columns are equal, hence the arrangement forms a square.
Latin square designs are reasonable choices when it is impossible to use each
treatment level for the same combination of blocking levels. For example, consider an
experiment with four diets, each to be given to four cows in succession. If each cow
was given the diets in the same order, the treatment effect would be confounded with
the effect due to the order in which the diets were given. Each cow can only be given
a single diet during a single time period. It is common to use multiple Latin squares in
a single experiment (for example if you had 8, 12 or 16 cows to allocate)
(www.stat.wisc.edu).
Treatment (Latin square arrangement):
SUPERVISORS: X3 X1 X2 (each treatment occurring once per row and once per column)
Advantages
1. They handle the case when we have several nuisance factors and we
either cannot combine them into a single factor or we wish to keep
them separate;
2. They allow experiments with a relatively small number of runs;
3. With two way grouping or stratification LSD controls more of the
variation than C.R.D. or R.B.D.;
4. The statistical analysis is simple, though slightly more complicated than
for R.B.D. Even with missing data the analysis remains relatively simple;
5. More than one factor can be investigated simultaneously; and
6. The missing observations can be analyzed by using missing plot
technique.
Disadvantages
1. The number of levels of each blocking variable must equal the number
of levels of the treatment factor; and
2. The Latin square model assumes that there are no interactions
between the blocking variables or between the treatment variable and
the blocking variable.
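A standard cyclic construction produces a Latin square of any order, and the defining property (each symbol exactly once in every row and every column) can be checked directly. This is one common construction, not the only valid square:

```python
def latin_square(n):
    """Cyclic Latin square: row i is the symbols 0..n-1 shifted left by i."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_latin(square):
    """Check each symbol occurs exactly once in every row and every column."""
    n = len(square)
    symbols = set(range(n))
    rows_ok = all(set(row) == symbols for row in square)
    cols_ok = all({square[i][j] for i in range(n)} == symbols for j in range(n))
    return rows_ok and cols_ok

sq = latin_square(4)
print(sq)            # 4 rows of 4 symbols, each a cyclic shift of the previous
print(is_latin(sq))  # True
```

In practice the rows, columns and symbols of the square are then randomly permuted before mapping them onto the blocking levels and treatments, so that the layout itself does not introduce a systematic pattern.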
In a double-blind Latin square design, both the experimenter and the subject are
blinded: both are unaware of which is the 'true' versus the 'placebo' treatment.
Examples include experiments on the efficacy of newly developed strategies, drugs,
etc.
HISTORICAL RESEARCH
Historical research is the systematic collection and objective evaluation of data related
to past occurrences in order to test hypotheses concerning the causes, effects, or
trends of these events that may help to explain present events and anticipate future
events. (Gay, 1996).
• drawing from the past to understand the present and anticipate the
future.
Historical research has been defined as the systematic and objective location,
evaluation and synthesis of evidence in order to establish facts and draw conclusions
about past events (Borg, 1963). It involves exploring the meaning and relationship of
events, and as its resource it uses primary historical data in the form of historic
artefacts, records and writings. It attempts to find out what happened in the past and
to reveal reasons for why and how things happened.
Hill and Kerber (1967) indicate the value of historical research by listing the
relationships the past can have with the present and even the future.
Libraries and historical archives (HAs) are regarded as the main repositories for
preserving and maintaining historical documents. Their documents may constitute
either primary or secondary sources, and be maintained in the form of books (pages
bound together), manuscripts, single pages, photos, paintings, video etc. A source is
characterized as primary if it has been created during the period of interest, whereas
secondary sources are those created later on and are based on the analysis of primary
sources.
Historians conducting research systematically examine past events to give an account;
historical research may involve interpretation to recapture the nuances, personalities,
and ideas that influenced these events, and the expected research outcome is to
communicate an understanding of past events. Their main objective is to recreate the
past, through existing records and their interconnections. In this process, historians
employ their scientific knowledge, experience and intuition to decide which
information they will need to find and study during each next step, and subsequently
attempt to locate sources that contain this information.
The process for conducting historical research is the same as for other research.
E.g. In the field of library and information science, there is a vast array of topics that
may be considered for conducting historical research. For example, a researcher may
choose to answer questions about the development of school, academic or public
libraries, the rise of technology and the benefits/ problems it brings, the development
of preservation methods, famous personalities in the field, library statistics, or
geographical demographics and how they affect the library distribution.
Limitations
• Precise measurement and verifications may not be possible.
CONCLUSION
This section of the book details the significance of research design in research
applications. A research design incorporates the methods and processes employed in
a scientific study. To conduct research, the researcher needs a plan of action, just as
one needs a plan to construct a house. A good research design minimizes the errors
that can arise at several stages of research. Information about the varied typologies
of research design, their purposes and features, will help researchers gain better
control over the research in building up knowledge, analyzing it and interpreting it to
arrive at proper inferences. Research design has a substantial effect on the reliability
of the outcomes obtained. Thus research design acts as a stable footing for the whole
research.
DISCUSSION QUESTIONS
MODULE 8
SAMPLING
Learning objectives:
INTRODUCTION
Choosing a study sample is an important step in any research project since it is rarely
practical, efficient or ethical to study whole populations. The aim of all quantitative
sampling approaches is to draw a representative sample from the population, so that
the results of studying the sample can then be generalized back to the population.
Sampling is a good alternative for a complete census if:
With sampling the researcher can reliably use observations about the sample to make
a statement about the entire population. If a researcher desires to obtain information
about a population through questioning or testing, he/she has two basic options:
Contacting, questioning, and obtaining information from a large population is
extremely expensive, difficult, and time consuming. A properly designed probability
sample, however, provides a reliable means of inferring information about a
population without examining every member or element. Sampling will enable you
to collect a smaller amount of data that represent the whole group. This will save time,
money and other resources, while not compromising on reliability of information.
Sampling is the process of systematically choosing a sub-set of the total population
you are interested in surveying. With sampling, you produce findings that can be
generalized to the target population of your program.
Sample: A smaller representative of a larger whole. A set of cases that is drawn from
a larger pool and used to make generalizations about the population. A set of unit or
portion of an aggregate and material which has been selected in the belief that it will
be representative of the whole aggregate.
Sampling frame: A comprehensive list of all relevant elements or clusters that is used
to select a sample. A specific list that closely approximates all elements in the
population—from this the researcher selects units to create the study sample.
Element: The person from whom you will collect data; an element could be a young
person, a parent or a service provider. A case or a single unit that is selected from a
population and measured in some way—the basis of analysis (e.g., a person, thing,
specific time, etc.).
Sample or Target population: The aggregation of the population from which the
sample is actually drawn (e.g., University students and faculty in 2008-09 academic
year).
Census: A census is a study of all the individuals within a population, while the Census
is an official research activity carried out by the Government. The Census is an
important event for the research industry. The Government publishes a sample of
individual Census returns, and these help researchers to see behind the total data
Sampling enables the researcher to reduce costs, conduct research more efficiently
(speed), have greater flexibility, and achieve greater accuracy.
Characteristics of sampling
• Representativeness-valid
• Accuracy- unbiased
An accurate sample is one which exactly represents the population: it is free from bias,
i.e. from influences that cause a difference between the sample value and the
population value. The sample must yield precise estimates; the smaller the standard
error or standard deviation, the higher the precision.
• Size- reliable
A good sample must be adequate in size in order to be reliable. The sample should be
of such a size that the inferences drawn from it are accurate to a given level of
confidence.
SAMPLE SIZE
Numerous factors determine the best sample size, beginning with the actual size of the
population. The sample size should be big enough that it has a small sampling error,
which represents how close the results are to the true population value, within a
percentage. The sample size also needs to reflect the diversity in the universe, known
as the degree of variability. Finally, there needs to be a confidence level such that if
the population is repeatedly sampled, the results can be repeated.
The researcher addresses the following concerns when applying sampling procedures:
Statistical power refers to the probability that a treatment effect will be detected if it
is there. By convention, power is generally set at about 0.80, or an 80% probability
that a treatment effect will be detected if present. When a study is under-powered,
it has less than an 80% chance of detecting an existing treatment effect. When it is
over-powered, it has a greater than 80% chance of detecting a treatment effect. The
p level refers to the probability of detecting a statistically significant difference that is
the result of chance, not of the treatment. In other words, the p level determines the
probability of obtaining an erroneously significant result. In statistical language, this
error is called a Type I error. By convention, p levels generally are set at 0.05, a 5%
probability that a significant difference will occur by chance.
Two of the four parameters, power and p level, are pre-determined. The other two
parameters – treatment variability and error variability – must be estimated in order
to complete the sample size determination. Treatment and error variability can be
estimated in three ways.
Pilot Studies: The most accurate determination of sample size is obtained when the
investigator has collected relevant data from which an estimate of treatment
variability and an estimate of error variability can be made. These data generally are
obtained in a pilot or small-scale preliminary study. Note that the results of a pilot
study do not have to be statistically significant in order for the data to be used to
estimate treatment and error variability. This procedure is the best way to determine
sample size.
In general, if the variability associated with the treatment is large relative to the error
variability, then relatively few subjects will be required to obtain statistically
significant results. Conversely, if the variability associated with the treatment is small
relative to the error variability, then relatively more subjects will be required to obtain
statistically significant results. (DMS –S C G, January 2006).
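For a two-group comparison of means, the four parameters discussed above combine into the standard approximation n per group ≈ 2(z_alpha/2 + z_beta)²(sigma/delta)², where delta is the treatment effect and sigma the error standard deviation estimated from the pilot study. A sketch using only the Python standard library (the approximation ignores the small-sample t correction):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison of means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# If the pilot study suggests the treatment effect equals one error SD:
print(n_per_group(delta=1.0, sigma=1.0))  # 16 per group
```

The formula makes the text's point concrete: a large treatment effect relative to the error variability (sigma/delta small) needs few subjects, while a small relative effect needs many.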
allowable sampling error. The examples and definitions in this section are based on a
paper about determining the sample size (Israel, 1992).
The level of precision is sometimes called the sampling error. This is the range in which
the true value of the population is estimated to be. This value is usually expressed in
percentages (+/- 5%) that need to be determined by the investigator before sampling.
The level of precision can have a significant effect on the sample size required for a
given confidence level (Mora and Kloet, 2010).
The confidence level or risk level is based on the ideas encompassed under the Central
Limit Theorem. The key idea encompassed in the Central Limit Theorem is that when
a population is repeatedly sampled, the average value of the attribute obtained by
those samples is equal to the true population value (Saunders et al., 2004). This means
that if 95% is the selected confidence level, 95 out of the 100 samples will have the
true population value within the range of precision specified earlier. In practice a 95%
confidence level with a +/- 5% precision rate is assumed reliable.
Degree of variability
The degree of variability in the attributes being measured refers to the distribution of
attributes in the population. The more heterogeneous a population, the larger the
sample size required to obtain the given level of precision. The less variable a
population, the smaller the sample size. The level of variability is expressed using the
'proportion' or 'P'. A proportion of 0.5 (or 50%) indicates the greatest level of
variability, more than either 0.2 or 0.8. This is because 0.2 or 0.8 indicate that a large
majority do not or do, respectively, have the attribute of interest. Because a
proportion of 0.5 indicates the maximum variability in a population it is often used in
determining a more conservative sample size, that is, the sample size may be larger
than if the true variability of the population attribute were used. Here we use
formulas that assume a proportion of 0.5, so we can ignore the level of variability
without choosing overly optimistic sample sizes (Mora and Kloet, 2010).
There are several methods for determining the sample size. Here we present a simple
formula from Yamane (1967) to determine the sample size. This formula can be used
to determine the minimal sample size for a given population size.
The formula from Yamane is:
n = N / (1 + N(e)²)
Where:
n = sample size;
N = population size; and
e = level of precision.
This formula assumes a degree of variability (i.e. proportion) of 0.5 and a confidence
level of 95%. Example 1 shows an example where the population of some sort is 2000
and where a 5% level of precision is required. The sample to be examined by the
investigator is 333 items or objects.
Example 1:
n = 2000 / (1 + 2000 × (0.05)²) = 2000 / 6 ≈ 333
If we want a confidence level of 0.95, the statistical tables tell us z is 1.96. Substituting
this, with P = 0.5, into the original formula given below yields
n = 0.9604N / (0.9604 + N(e)²), which is approximately the simplified formula above.
Using the Yamane formula, we can easily determine the minimal sample size that we
have to investigate for any given population size. The downside to this formula,
however, is that it gives us at most a confidence level of 95%. If we want a higher (or
lower) confidence level than 95%, then we will have to use the original version of the
Yamane formula. To get this formula, we start with the original formula that the above
Yamane formula is based on (Yamane, 1967):
n = z²P(1−P)N / (z²P(1−P) + N(e)²)
Where:
n = sample size;
N = population size;
e = level of precision;
P = degree of variability (proportion); and
z = the value of the standard normal variable for the chosen confidence level
(e.g. z = 1.96 with a CL = 95% and z = 2.57 with a CL = 99%).
With P = 0.5 this reduces to:
n = 0.25(z²)N / (0.25(z²) + N(e)²)
For example, with a 99% confidence level (z = 2.57), N = 2000 and e = 0.05:
n = ((2.57)² × 0.25 × 2000) / ((2.57)² × 0.25 + 2000 × (0.05)²) ≈ 497
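Both versions of the Yamane formula are straightforward to compute. A sketch, rounding to the nearest whole unit as in the worked examples above:

```python
def yamane(N, e):
    """Simplified Yamane formula: assumes P = 0.5 and a 95% confidence level."""
    return round(N / (1 + N * e ** 2))

def yamane_full(N, e, z, p=0.5):
    """Original formula with an explicit z for the chosen confidence level."""
    return round(z ** 2 * p * (1 - p) * N
                 / (z ** 2 * p * (1 - p) + N * e ** 2))

print(yamane(2000, 0.05))             # 333, as in Example 1
print(yamane_full(2000, 0.05, 2.57))  # 497 at the 99% confidence level
```

Note that the simplified version is an approximation that replaces 0.25z² ≈ 0.9604 with 1, so the two functions give slightly different answers at the same 95% level.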
The above formulas are all based on the notion that we want to perform some
investigation on a sample, and use the results to say something about the entire
population. But what if we have a large dataset in which we have a relatively small
number of items that are of interest, for example to determine if fraudulent
transactions are present in a population. If we have 20,000 transactions and we
assume that from those transactions only 100 transactions are fraudulent, then we
can be reasonably certain that the likelihood of detecting the fraudulent transactions
with a sample size of 50 is not large. The following formula can be used to determine
exactly how likely it is that we detect a fraudulent transaction (Coderre, 2009).
P = 1 − (1 − n/N)^E
Where:
n = sample size;
N = population size; and
E = the number of items of interest (here, fraudulent transactions) in the population.
With a sample size of 50, P = 1 − (1 − 50/20000)^100 ≈ 0.22, i.e. only a 22% chance of
finding at least one fraudulent transaction. But if we increase the sample size to 400,
the probability of finding at least one fraudulent transaction rises to about 87%:
P = 1 − (1 − 400/20000)^100 ≈ 0.87
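This detection-probability formula can be evaluated directly; the population and sample sizes below follow the transaction example above:

```python
def detection_probability(n, N, E):
    """Probability that a sample of n units contains at least one of the
    E items of interest in a population of N (Coderre's approximation)."""
    return 1 - (1 - n / N) ** E

print(round(detection_probability(50, 20000, 100), 2))   # 0.22
print(round(detection_probability(400, 20000, 100), 2))  # 0.87
```

Trying a few values of n makes the trade-off visible: detection probability rises steeply at first and then flattens, so doubling an already large sample buys relatively little.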
The sample size needed for a prevalence study depends on how precisely you want to
measure the prevalence. (Precision is the amount of error in a measurement) The
bigger your sample, the less error you are likely to make in measuring the prevalence,
and therefore the better the chance that the prevalence you find in your sample will
be close to the real prevalence in the population. You can calculate the margin of
uncertainty around the findings of your study using confidence intervals. A confidence
interval gives you a maximum and a minimum plausible estimate of the true value you
were trying to measure.
The larger your sample, the less uncertainty you will have about the true prevalence.
However, you do not necessarily need a tiny margin of uncertainty. In an exploratory
study, for example, a margin of error of ±10% might be perfectly acceptable. A 10%
margin of uncertainty can be achieved with a sample of only 100. However, to get to
a 5% margin of error will require a sample of 384 (four times as large) (Conroy, 2008).
STEP 2: IS YOUR POPULATION FINITE?
Are you sampling a population which has a defined number of members? Such
populations might include, for example, physiotherapists in private practice or
pharmacies. If you have a finite population, the sample size you need can be
significantly smaller.
STEP 3: SIMPLY READ OFF YOUR REQUIRED SAMPLE SIZE FROM THE TABLE
Example 1: Sample size for a study of the prevalence of anxiety disorders in students
at a large university
Since the population is large (more than 5,000) she should use the first column in the
table. A sample size of 384 students will allow the study to determine the prevalence
of anxiety disorders with a confidence interval of ±5%. Note that if she wants extra
precision, she will have to sample over 1,000 for ±3%. Sample sizes increase rapidly
when very high precision is needed (Conroy, 2008).
Since the population is finite, with roughly 500 registrars and senior registrars, the
sample size will be smaller than she would need for a study of a large population. A
representative sample of 127 will give the study a margin of error (confidence interval)
of ±7.5% in determining the prevalence of bullying in the workplace, and 341 will
narrow that margin of error to ±3% (Conroy, 2008).
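The sample sizes quoted in these examples (384 at ±5% for a large population; 127 and 341 for the 500 registrars at ±7.5% and ±3%) can be reproduced from the standard precision formula with Cochran's finite population correction n0 / (1 + (n0 − 1)/N). A sketch (published tables round slightly differently in places):

```python
def prevalence_sample_size(e, N=None, z=1.96, p=0.5):
    """Sample size for estimating a prevalence to within +/- e.

    Applies the finite population correction n0 / (1 + (n0 - 1) / N)
    when a finite population size N is given.
    """
    n0 = z ** 2 * p * (1 - p) / e ** 2
    if N is not None:
        n0 = n0 / (1 + (n0 - 1) / N)
    return round(n0)

print(prevalence_sample_size(0.05))          # 384 for a large population
print(prevalence_sample_size(0.075, N=500))  # 127 for the 500 registrars
print(prevalence_sample_size(0.03, N=500))   # 341 for +/- 3%
```

The finite population correction is what makes the registrar study cheaper: with N = 500, a crude large-population calculation would demand far more respondents than the population can supply at high precision.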
Sampling Techniques
Sampling techniques divide into two branches: non-probability sampling techniques
and probability sampling techniques.
The major groups of sample designs are probability sampling and non-probability
sampling:
1. Probability sampling; and
2. Non-probability sampling.
The choice to use probability or non-probability sampling depends on the goal of the
research. When a researcher needs to have a certain level of confidence in the data
collection, probability sampling should be used (MacNealy). Frey, et al. indicates that
the two sampling methods “differ in terms of how confident we are about the ability
of the selected sample to represent the population from which it is drawn”.
Probability samples can be “rigorously analyzed to determine possible bias and likely
error” (Henry). Non-probability sampling does not provide this advantage but is useful
for researchers “to achieve particular objectives of the research at hand” (Henry).
These objectives may allow for selection of the sample acquired by accident, because
the sample “knows” the most, or because the sample is the most typical (Fink and
Kosecoff). Probability and non-probability sampling have advantages and
disadvantages and the use of each is determined by the researcher’s goals in relation
to data collection and validity. Each sampling category includes various methods for
the selection process.
PROBABILITY SAMPLING
Probability sampling (a term due to Deming) is a sampling process that utilizes some
form of random selection. In probability sampling, each unit is drawn with known
probability (Yamane, 1967), or has a nonzero chance of being selected in the sample
(Raj, 1968). Such samples are usually selected with the help of random numbers
(Cochran, 1953; Trochim). With probability sampling, a measure of sampling
variation can be obtained objectively from the sample itself.
In the former (probability sampling), the researcher knows the exact probability of
selecting each member of the population; in the latter, the chance of being included
in the sample is not known. A probability sample tends to be more difficult and costly
to conduct. However, probability samples are the only type of samples where the
results can be generalized from the sample to the population. In addition, probability
samples allow the researcher to calculate the precision of the estimates obtained from
the sample and to specify the sampling error.
PROBABILITY SAMPLE
Four basic types of methodologies are most commonly used for conducting
probability samples; these are simple random, stratified, cluster, and systematic
sampling. Simple random sampling provides the base from which the other more
complex sampling methodologies are derived.
The basic characteristic of random sampling is that all members of the population
have an equal and independent chance of being included in the sample (Ary, Cheser
Jacobs, and Razavieh, 1972). Simple random sampling involves the researcher
selecting a sample at random from the sampling frame using a random number table,
either manually or on a computer, or an online number generator (Saunders et al., 2009).
To conduct a simple random sample, the researcher must first prepare an exhaustive
list (sampling frame) of all members of the population of interest. From this list, the
sample is drawn so that each person or item has an equal chance of being drawn
during each selection round. Samples may be drawn with or without replacement. In
practice, however, most simple random sampling for survey research is done without
replacement; that is, a person or item selected for sampling is removed from the
population for all subsequent selections. At any draw, the process for a simple random
sample without replacement must provide an equal chance of inclusion to any
member of the population not already drawn. To draw a simple random sample
without introducing researcher bias, computerized sampling programs and random
number tables are used to impartially select the members of the population to be
sampled.
To conduct simple random sampling, the following steps may be followed:
5. For the selected number, look only at the number of digits assigned to
each population member
6. If the number corresponds to the number assigned to any of the
individuals in the population, then that individual is included in the
sample
7. Go to the next number in the column and repeat steps #5 and #6 until
the desired number of individuals has been selected for the sample.
8. Assign all individuals on the list a consecutive number from zero to the
required number. Each individual must have the same number of digits
as each other individual
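Steps 5 to 8 above amount to drawing numbers without replacement, which Python's random.sample does directly. A sketch with an illustrative roster as the sampling frame:

```python
import random

def simple_random_sample(frame, n, seed=2024):
    """Draw n elements from the sampling frame without replacement."""
    rng = random.Random(seed)  # fixed seed only so the draw is reproducible
    return rng.sample(frame, n)

frame = [f"member_{i:03d}" for i in range(500)]  # the full sampling frame
sample = simple_random_sample(frame, 25)
print(len(sample), len(set(sample)))  # 25 25 -> no duplicates
```

Delegating the draw to a library routine also removes the researcher-bias risk discussed below, since no human picks the numbers.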
Gay (1987) has given a good example to illustrate this sampling method.
Illustration
WHY IS RANDOM SAMPLING INAPPROPRIATE FOR QUALITATIVE STUDIES?
The process of selecting a random sample is well defined and rigorous, so why can the
same technique not be used for naturalistic studies? The answer lies in the aim of the
study; studying a random sample provides the best opportunity to generalize the
results to the population but is not the most effective way of developing an
understanding of complex issues relating to human behavior. There are both
theoretical and practical reasons for this.
• Simple random sampling is very difficult to conduct if the population
being studied is large;
• The observations that make up a systematic sample will, in most
cases, lack independence;
• The lack of independence makes it impossible to calculate the
sample variance and error variance without bias;
• It is not possible without a complete list of population members, and
even if a list is readily available, it may be challenging to gain access
to it; and
• It is potentially uneconomical to achieve; it can be disruptive to isolate
members from a group; the time-scale may be too long; and the data
or sample could change.
Researchers who choose simple random sampling must be cognizant of the numbers
that they choose. Researcher bias in regards to preferred numbers can be a problem
for the end results in regards to sample selection (Frey, et al. 2000). It is best to ask
other researchers to aid in the selection of the numbers to be used in the selection
process. It is also important to note that by using simple random sampling, the sample
selected may not include all “elements in the population that are of interest” (Fink,
1995).
Steps in Stratified Random Sampling
To conduct stratified random sampling, the following steps may be followed.
Illustration
The superintendent would follow these steps to create a stratified sample of his 5,000
teachers.
Example 3:
There are 9000 students at Stapleton College. The table below shows how the
students are distributed by course-type and gender.
(c) 5000/9000 x 400 = 222
The sample size of each stratum in this technique is proportionate to the population
size of the stratum when viewed against the entire population. This means that
each stratum has the same sampling fraction.
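The proportionate allocation in example (c) may be sketched as follows; since the full Stapleton College table is not reproduced here, the 3000/1000 split of the remaining students across two hypothetical course-types is assumed for illustration.

```python
# Proportionate stratified allocation: each stratum receives a share of the
# sample equal to its share of the population, so every stratum has the same
# sampling fraction. The 5000-student stratum is from the Stapleton College
# example; the other stratum sizes are assumed.
strata = {"arts": 5000, "science": 3000, "law": 1000}
total_sample = 400
pop_total = sum(strata.values())            # 9000 students in all

allocation = {name: round(size / pop_total * total_sample)
              for name, size in strata.items()}
print(allocation)   # arts: 5000/9000 * 400 = 222
```

Note that rounding each stratum separately may make the allocations sum to slightly more or less than the intended total sample size.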
Example:
(http://www.experiment-resources.com/stratified-sampling)
Illustration
Advantages
Disadvantages
3. More effort from the researcher is required to implement the sampling
design;
4. Stratified random sampling requires more time in the analysis and
interpretation; and
5. More costly, time-intensive, and complex than simple random
sampling.
CLUSTER SAMPLING
Cluster sampling, on the surface, is very similar to stratified sampling in that “survey
population members are divided into unique, non-overlapping groups prior to
sampling” (Henry, 1990). These groups are referred to as clusters instead of strata
because they are “naturally occurring groupings such as schools, households, or
geographic units” (Henry, 1990). Whereas a stratified sample “involves selecting a few
members from each group or stratum,” cluster sampling involves “the selection of a
few groups and data are collected from all group members” (Henry 1990). This
sampling method is used when no master list of the population exists but “cluster”
lists are obtainable (Babbie, 1990; Frey, et al. 2000; Henry, 1990.; Lohr, 1999;
MacNealy, 1999).
estimates derived from cluster sampling are as precise as those from simple random
sampling only when the clusters are internally heterogeneous, each mirroring the
population as a whole; when the clusters are internally homogeneous, precision declines.
Illustration
To conduct cluster sampling, the following steps may be followed.
1. The population is 5,000 teachers.
2. The sample size is 10%, or 500 teachers.
3. The logical cluster is the school.
4. The superintendent has a list of 100 schools in the district.
5. Although the clusters vary in size, there is an average of 50 teachers
per school.
6. The required number of clusters is obtained by dividing the sample size
(500) by the average size of the cluster (50). Thus, the number of
clusters needed is 500/50 = 10 schools.
7. The superintendent randomly selects 10 schools out of the 100.
8. Every teacher in the selected schools is included in the sample
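The superintendent's steps may be sketched in Python; the school and teacher names are invented, and every cluster is given exactly 50 teachers for simplicity (in practice cluster sizes vary around the average).

```python
import random

# Sketch of the superintendent example: 100 schools (clusters) of ~50 teachers
# each; to reach ~500 teachers, randomly pick 500/50 = 10 schools and then
# include every teacher in each chosen school.
schools = {f"school_{i:02d}": [f"teacher_{i:02d}_{j}" for j in range(50)]
           for i in range(100)}

n_clusters = 500 // 50                    # sample size / average cluster size
chosen = random.sample(list(schools), n_clusters)
sample = [t for school in chosen for t in schools[school]]

assert len(chosen) == 10
assert len(sample) == 500                 # every teacher in each chosen school
```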
Advantages
Disadvantages
DIFFERENCE BETWEEN STRATIFICATION AND CLUSTERING
Stratification:
1. Divide the population into groups different from each other: sexes, races, ages
2. Sample randomly from each group
3. Less error compared to simple random sampling
4. More expensive: stratification information must be obtained before sampling
Clustering:
1. Divide the population into comparable groups: schools, cities
2. Randomly sample some of the groups
3. More error compared to simple random sampling
4. Reduces costs: only some areas or organizations are sampled
http://www.ssc.wisc.edu
SYSTEMATIC SAMPLING
Systematic samples tend to be easier to draw and execute. The researcher does not
have to jump backward and forward through the sampling frame to draw the
members to be sampled. A systematic sample may spread the members selected for
measurement more evenly across the entire population than simple random
sampling. Therefore, in some cases, systematic sampling may be more representative
of the population and more precise.
Illustration
Steps in Systematic Sampling
To conduct systematic sampling, the following steps may be followed.
Example:
In the superintendent example used for simple random sampling, let us see how
the change can be incorporated.
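A sketch of the systematic draw for the same 5,000-teacher example: the sampling interval is k = N/n = 5,000/500 = 10, a random start is chosen within the first interval, and every tenth teacher is taken thereafter.

```python
import random

# Systematic sample of 500 from 5,000 teachers.
teachers = [f"teacher_{i:04d}" for i in range(5000)]

k = len(teachers) // 500        # sampling interval: N / n = 10
start = random.randrange(k)     # random start within the first interval
sample = teachers[start::k]     # every k-th teacher from the start onward

assert len(sample) == 500
```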
Advantages
• The population will be evenly sampled;
• With a random start, systematic sampling gives results comparable to
those of simple random sampling;
• External validity is high; internal validity is high;
• The sample is spread across the whole population selected for the
study;
• Researchers can avoid bias if they select units for the sample in a
systematic way;
• A grid does not necessarily have to be used; sampling just has to be at
uniform intervals; and
• Good coverage of the study area can be achieved more easily than
with random sampling.
Disadvantages
sampling statistics that provide information about the precision of the results. The
advantage of non-probability sampling is the ease with which it can be administered.
Non-probability samples tend to be less complicated and less time-consuming than
probability samples. If the researcher has no intention of generalizing beyond the
sample, one of the non-probability sampling methodologies will provide the desired
information.
Non-probability sampling – the elements that make up the sample are selected by
non-random methods. This type of sampling is less likely than probability sampling to
produce representative samples. Even so, researchers can and do use
non-probability samples. The three main methods are: 1) convenience, 2) quota, and
3) purposive.
NON-PROBABILITY SAMPLING
The three common types of non-probability samples are convenience sampling, quota
sampling, and judgmental sampling.
CONVENIENCE SAMPLING
Advantages:
• The researcher need not take a random sample of the
population;
• The sample is easy to access, requiring little effort on the part of
the researcher;
• It is an inexpensive way of ensuring sufficient numbers for a study;
• Extensively used and understood;
• No need for lists of population elements;
• Few rules govern how the sample should be collected; and
• Convenience sampling is often used in pilot studies to allow the
people conducting the study to obtain basic data and trends
Disadvantages
QUOTA SAMPLING
Quota sampling is a sort of stratified sampling but here, the selection of cases within
strata is purely non-random (Barnett, 1991). Quota sampling is often confused with
stratified and cluster sampling—two probability sampling methodologies. All of these
methodologies sample a population that has been subdivided into classes or
categories. The primary differences between the methodologies are that with
stratified and cluster sampling the classes are mutually exclusive and are isolated prior
to sampling. Thus, the probability of being selected is known, and members of the
population selected to be sampled are not arbitrarily disqualified from being included
in the results. In quota sampling, the classes cannot be isolated prior to sampling and
respondents are categorized into the classes as the survey proceeds. As each class fills
or reaches its quota, additional respondents that would have fallen into these classes
are rejected or excluded from the results.
In stratified sampling, by contrast, the researcher divides the population from
which the sample is drawn into mutually exclusive income categories prior to drawing
the sample. Bias can be introduced into this type of sample when the respondents
who are rejected, because the class to which they belong has reached its quota, differ
from those who are used.
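The quota-filling process described above may be sketched as follows; the classes, quotas, and arrival order are invented for illustration.

```python
# Quota sampling sketch: respondents arrive in an arbitrary (non-random) order
# and are categorized as the survey proceeds; once a class reaches its quota,
# further respondents from that class are rejected.
quotas = {"male": 3, "female": 3}
filled = {"male": [], "female": []}
rejected = []

arrivals = [("r1", "male"), ("r2", "male"), ("r3", "female"), ("r4", "male"),
            ("r5", "male"), ("r6", "female"), ("r7", "female"), ("r8", "female")]

for respondent, group in arrivals:
    if len(filled[group]) < quotas[group]:
        filled[group].append(respondent)
    else:
        rejected.append(respondent)      # quota already met for this class

assert rejected == ["r5", "r8"]          # turned away once their class was full
```

If the rejected respondents differ systematically from those accepted, the bias described above is introduced.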
Advantages
• Moderate cost;
• Very extensively used/understood;
• Ensures selection of adequate numbers of subjects with
appropriate characteristics;
• No need for lists of population elements; and
• Introduces some elements of stratification.
Disadvantages
JUDGMENTAL SAMPLING
Judgment sampling (also called purposive sampling) requires the researcher to use his
personal judgment to select cases that he thinks will best answer his research
questions and meet his research objectives (Saunders et al., 2009). In judgmental or
purposive sampling, the researcher employs his or her own "expert” judgment about
who to include in the sample frame. Prior knowledge and research skill are used in
selecting the respondents or elements to be sampled.
introduced by the researcher cannot be measured and statistics that measure the
precision of the estimates cannot be calculated.
Advantages
• Moderate cost;
• Commonly used/understood; and
• Sample will meet a specific objective.
Disadvantages
SNOWBALL SAMPLING
Advantages
• Low cost;
• Useful in specific circumstances;
• Useful for locating rare populations;
• Sometimes the only way to reach a group of people who are willing to
respond and who are especially knowledgeable; and
• Possible to include members of groups where no lists or identifiable
clusters even exist (e.g., drug abusers, criminals).
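Snowball sampling proceeds by following referrals outward from a few initial contacts. A minimal sketch, with an invented referral network:

```python
# Snowball sampling sketch: start from a seed contact and follow referrals,
# tracking who is already included so that loops do not add duplicates.
referrals = {
    "seed_a": ["p1", "p2"],
    "p1": ["p3"],
    "p2": [],
    "p3": ["p1"],        # referrals may loop back to earlier participants
}

sample, frontier = [], ["seed_a"]
seen = set(frontier)
while frontier:
    person = frontier.pop(0)
    sample.append(person)
    for contact in referrals.get(person, []):
        if contact not in seen:
            seen.add(contact)
            frontier.append(contact)

print(sample)   # the seed plus everyone reachable through referrals
```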
Disadvantages
Non-probability Sampling
Advantages:
• More flexible
• Less costly
• Less time-consuming
• Judgmentally representative samples may be preferred when small
numbers of elements are to be chosen
Disadvantages:
• Greater risk of bias
• May not be possible to generalize to the program target population
• Subjectivity can make it difficult to measure changes in indicators
over time
• No way to assess the precision or reliability of the data
IMPORTANCE OF SAMPLING
INTERNAL VALIDITY
There are three kinds of internal validity threats related to sampling. They are:
1. Bias;
2. Confounding; and
3. Random errors (Miettinen and Cook, 1970, 1981)
What is bias?
Any trend in the collection, analysis, interpretation, publication, or review of data that
can lead to conclusions that are systematically different from the truth (Last, 2001). It
is a process in any state of inference tending to produce results that depart
systematically from the true values (Fletcher, et al, 1988). A systematic error in design
or conduct of a study (Szklo, et al, 2000). The term 'bias' should be reserved for
differential or systematic error.
Types of Bias
• Selection bias is any bias arising from the way that study participants
are selected (or select themselves) from the source population.
• Information bias is any bias arising from errors in the classification of
the exposure or disease status of the study participants.
• Confounding occurs if (because of the lack of randomization) the
underlying risk of disease is different in the exposed and non-exposed
groups.
Self-selection bias:
Interviewer Bias – an interviewer’s knowledge may influence the structure of
questions and the manner of presentation, which may influence responses.
Loss to follow-up – those that are lost to follow-up or who withdraw from the
study may be different from those who are followed for the entire study.
Surveillance bias – the group with the known exposure or outcome may be
followed more closely or longer than the comparison group.
1. Subject variation.
2. Observer variation.
3. Deficiency of tools.
4. Technical errors in measurement.
Some of the ways with which the bias can be controlled are;
• Blinding: Blinding prevents investigators and interviewers from knowing
case/control or exposed/non-exposed status of a given participant
• Form of survey: Mail may impose less “white coat tension” than a phone or
face-to-face interview.
• Questionnaire: Use multiple questions that ask same information acts as a
built in double-check
• Accuracy: Multiple checks in medical records gathering diagnosis data from
multiple sources
Confounding distorts the true strength of an association:
1. Confounding bias: Confounding (Lat. confundere, to mix together) is the
distortion of a measure of the effect of an exposure on an outcome due to the
association of the exposure with other factors that influence the occurrence of
the outcome (Porta, 2008).
For example (1) if a study shows that there is a higher rate of pancreatic cancer in
coffee drinkers than in non-coffee drinkers, an investigator might conclude that
there is a causal relation between coffee consumption and pancreatic cancer.
However, if on further analysis, the investigator finds more cigarette smokers in
the coffee drinking group than the non-coffee drinking group, cigarette smoking
might be confounding the relation between coffee consumption and pancreatic
cancer since it is known that cigarette smoking is associated with pancreatic
cancer. Cigarettes satisfy the requirement that a confounder must be associated
with disease and the factor must be associated with exposure. In this situation,
people who smoke cigarettes tend to drink coffee more than non-smokers.
This control through the study design can be accomplished in three primary ways.
Random selection of the study and control groups is the most efficient means of
control, because random selection not only controls confounding bias, but also
other forms of bias. In the pancreatic example above, random sampling might
have assured equal numbers of cigarette smokers in both groups. Another
possibility might be to limit the study to non-smokers or to match smoking history
which would assure that the same proportion of cigarette smokers were in both
groups. The most common statistical control for confounding is stratification. In
stratification, the investigator would look at the pancreatic cancer rate in cigarette
smokers and nonsmokers separately and then if there is a difference, do a
weighted average across the strata. Advanced statistical methods such as logistic
regression can also be used.
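The stratified adjustment described above may be sketched numerically. The counts below are invented so that smoking is more common among coffee drinkers; the crude rate ratio then suggests an effect of coffee that disappears within each smoking stratum.

```python
# Stratification sketch for the coffee / pancreatic-cancer example.
# All counts are invented for illustration.
strata = {
    # stratum: (cases, n) for coffee drinkers, then (cases, n) for non-drinkers
    "smokers":     ((45, 1500), (15, 500)),
    "non_smokers": ((7, 3500), (9, 4500)),
}

def rate(cases, n):
    return cases / n

# Crude rate ratio: pool everyone, ignoring smoking status.
coffee_cases = sum(e[0] for e, _ in strata.values())
coffee_n = sum(e[1] for e, _ in strata.values())
plain_cases = sum(u[0] for _, u in strata.values())
plain_n = sum(u[1] for _, u in strata.values())
crude_rr = rate(coffee_cases, coffee_n) / rate(plain_cases, plain_n)

# Adjusted: rate ratio within each stratum, then a size-weighted average.
weights = {s: e[1] + u[1] for s, (e, u) in strata.items()}
total_w = sum(weights.values())
adjusted_rr = sum(rate(*e) / rate(*u) * weights[s] / total_w
                  for s, (e, u) in strata.items())

print(round(crude_rr, 2), round(adjusted_rr, 2))  # crude is elevated; adjusted is about 1
```

A simple weighted average is used here for illustration; in practice more refined weights (e.g. Mantel-Haenszel) or logistic regression would be applied.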
Example 2: The mothers of breast-fed babies are of higher social class, and the
babies thus have better hygiene, less crowding and perhaps other factors that
protect against gastroenteritis. Crowding and hygiene are truly protective against
gastroenteritis, but we mistakenly attribute their effects to breast feeding. This is
called confounding: the observation is correct, but its explanation is
wrong.
2. Random Errors: Deviations of results and inferences from the truth occurring
only as a result of the operation of chance; these can produce type 1 or type 2
errors. (Systematic, non-random deviation of results and inferences from the
truth is bias, as defined above, not random error.)
Example 1: In a cohort study, babies of women who bottle feed and women
who breast feed are compared, and it is found that the incidence of
gastroenteritis, as recorded in medical records, is lower in the babies who are
breast-fed. Lack of good information on feeding history results in some breast-
feeding mothers being randomly classified as bottle-feeding, and vice-versa. If
this happens, the study finding underestimates the true RR, whichever feeding
modality is associated with higher disease incidence, producing a type 2 error
Random error can work to falsely produce an association (type 1 error) or falsely not produce
an association (type 2 error). We protect ourselves against random misclassification
producing a type 2 error by choosing the most precise and accurate measures of exposure
and outcome.
We protect our study against random type 1 errors by establishing that the result must be
unlikely to have occurred by chance (e.g. p < .05). P-values are established entirely to protect
against type 1 errors due to chance, and do not guarantee protection against type 1 errors
due to bias or confounding. This is the reason we say statistics demonstrate association but
not causation.
We protect our study against random type 2 errors by providing adequate sample size, and
hypothesizing large differences. The larger the sample size, the easier it will be to detect a
true difference, and the largest differences will be the easiest to detect. (Imagine how hard it
would be to detect a 1% increase in the risk of gastroenteritis with bottle-feeding).
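The relationship between sample size and the chance of detecting a true difference can be illustrated by simulation; the event rates, sample sizes, replication count, and test (a simple two-proportion z-test) are all chosen for illustration.

```python
import random

# Estimate the power of a study by simulation: two groups with true event
# rates 0.10 and 0.20; count how often a two-proportion z-test reaches p < .05.
def detects(n, p1=0.10, p2=0.20, z_crit=1.96):
    x1 = sum(random.random() < p1 for _ in range(n))   # events in group 1
    x2 = sum(random.random() < p2 for _ in range(n))   # events in group 2
    p_pool = (x1 + x2) / (2 * n)
    se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
    if se == 0:
        return False
    return abs(x2 / n - x1 / n) / se > z_crit          # "significant" at .05

random.seed(1)
reps = 300
power_small = sum(detects(25) for _ in range(reps)) / reps
power_large = sum(detects(250) for _ in range(reps)) / reps
print(power_small, power_large)   # larger samples detect the true difference far more often
```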
The probability that a study will detect a true difference of a given size is called
the power of the study; larger samples give greater power.
variances of the outcome measures, which enter into statistical testing, are
decreased.
2. Having an adequately sized sample of study subjects
External Validity
External validity involves the extent to which the results of a study can be generalized
(applied) beyond the sample. In other words, can you apply what you found in your
study to other people (population validity) or other settings (ecological validity)?
Other validity criteria include temporal, treatment variation, and outcome validity.
Population Validity
How can we increase external validity? One way, based on the sampling model,
recommends that the researcher do a good job of drawing a sample from the
population. Population validity is the extent to which the results of a study can be
generalized from the specific sample that was studied to a larger group of subjects.
Inferential statistics contribute evidence towards establishing the population validity
of a set of research results. Population validity can only be achieved if the accessible
population is logically representative of the target population.
1. The extent to which one can generalize from the study sample to a defined
population - If the sample is drawn from an accessible population, rather
than the target population, generalizing the research results from the
accessible population to the target population is risky.
2. The extent to which personological variables interact with treatment
effects -- If the study is an experiment, it may be possible that different
results would be found with students at different grade levels (a
personological variable).
Ecological Validity
Ecological validity is the extent to which the results of an experiment can be
generalized from the set of environmental conditions created by the researcher to
other environmental conditions (settings and conditions).
• Posttest sensitization (taking the posttest may cause certain ideas presented
during the treatment to 'fall into place'). If the subjects had not taken a
posttest, the treatment might not have worked.
• Interaction of history and treatment effect (...to everything there is a time...)
Not only should researchers be cautious about generalizing to other
population, caution should be taken to generalize to a different time period.
As time passes, the conditions under which treatments work change.
• Measurement of the dependent variable (maybe only works with M/C tests)
A treatment may only be evident with certain types of measurements. A
teaching method may produce superior results when its effectiveness is tested
with an essay test, but show no differences when the effectiveness is
measured with a multiple choice test.
• The interaction of time of measurement and treatment effect (it takes a while
for the treatment to kick in). It may be that the treatment effect does not occur
until several weeks after the end of the treatment. In this situation, a posttest
at the end of the treatment would show no impact, but a posttest a month
later might show an impact. (Bracht, and Glass, 1968; Gall, Borg, and Gall,
1996).
Temporal Validity
Temporal validity is the extent to which the study results can be generalized across
time. For example, assume you find that a certain discipline technique works well
with many different kinds of children and in many different settings. After many years,
you might note that it is not working any more; you will need to conduct additional
research to make sure that the technique is robust over time, and if not to figure out
why and to find out what works better. Likewise, findings from far in the past often
need to be replicated to make sure that they still work.
Treatment variation validity is the degree to which one can generalize the results of
the study across variations of the treatment.
For example, if the treatment has varied a little, will the results be similar?
Outcome Validity
Outcome validity is the degree to which one can generalize the results of a study
across different but related dependent variables.
When to do research?
CASE
Sampling bias is a tendency to favor the selection of units that have particular
characteristics. Sampling bias is usually the result of a poor sampling plan. The most
notable is the bias of non-response when for some reason some units have no chance
of appearing in the sample. For example, take a hypothetical case where a survey was
conducted recently by the Cornell Graduate School to find out the level of stress that
graduate students were going through. A mail questionnaire was sent to 100
randomly selected graduate students. Only 52 responded and the results were that
students were not under stress at that time when the actual case was that it was the
highest time of stress for all students except those who were writing their thesis at
their own pace. Apparently, this is the group that had the time to respond. The
researcher who was conducting the study went back to the questionnaire to find out
what the problem was and found that all those who had responded were third- and
fourth-year PhD students. Bias can be very costly and has to be guarded against as much as
possible. For this case, $2,000 had been spent and there were no reliable results; in
addition, it cost the researcher his job, since his employer thought that, if he was
qualified, he should have known this beforehand and planned how to avoid it. A means of
selecting the units of analysis must be designed to avoid the more obvious forms of
bias. Another example would be where you would like to know the average income of
some community and you decide to use the telephone numbers to select a sample of
the total population in a locality where only the rich and middle class households have
telephone lines. You will end up with high average income which will lead to the wrong
policy decisions.
Conclusion
This section of the book concentrates on the sample, sampling, the sampling frame,
and the importance of sampling in research. A sample is representative of the study
population. Research that does not follow an appropriate sampling method and
process may produce imprudent outcomes (this bears on external validity: the degree
to which the sample's outcomes can be generalized to the population the researcher
cares about). Appropriate sampling is indispensable if a researcher wants to arrive at
conclusions which are valid for the entire study population. Thus, the sampling
process is used in a research study to widen the scope of generalization of the
findings to the target population. The knowledge gained from this chapter should
remove doubts pertaining to the sampling process and lead to sound research
practice.
DISCUSSION QUESTIONS
1. What is the purpose of sampling?
2. What is the difference between sample and sample frame?
3. Differentiate between sample and element?
4. What are the broad categories of sampling? Explain with suitable
examples?
5. What do you mean by probabilistic sampling? Explain two types of
probabilistic sampling?
6. What do you mean by non-probabilistic sampling? Explain two types of
non-probabilistic sampling?
7. What do you mean by convenient sampling? Explain with suitable
example.
8. What is quota sampling? Explain with suitable example.
9. What is judgmental sampling? Explain with suitable example.
10. What is snowball sampling? Explain with suitable example.
11. Explain simple random sampling method?
12. Differentiate simple random sampling method and stratified random
sampling method?
13. Explain cluster sampling? Explain with suitable example.
14. How you explain systematic sampling?
15. Explain the advantages and disadvantages of probabilistic sampling
method?
16. Explain the advantages and disadvantages of non-probabilistic
sampling method?
17. Why sampling is required in research? What is its inevitability?
MODULE 9
HYPOTHESES
Learning objectives:
INTRODUCTION
After formulating the statement of the problem, a researcher moves towards outlining
his research questions and developing research hypotheses. Research questions and
hypotheses provide a sound conceptual foundation for a research project. Developing
research questions is the most important task in your research project, as it influences
every aspect of your research, including the theory to be applied, the method to be
used, the data to be gathered, and the unit of analysis to be assessed. Well-thought-out
research questions give the researcher focus, determine what, when, where,
and how the data will be collected, and provide an important link between the
conceptual and logistic aspects of the research project (Ohab, 2010; Stuermer, 2009).
DEFINITION
“Hypotheses are single tentative guesses, good hunches – assumed for use in
devising a theory or planning experiments intended to be given a direct
experimental test when possible”. (Rogers, 1966)
A hypothesis is a conjectural statement of the relation between two or more
variables. (Kerlinger, 1956).
The key word is testable. That is, you will perform a test of how two variables might
be related. This is when you are doing a real experiment. You are testing variables.
WHAT IS HYPOTHESIS?
• Identify the research objectives;
• Identify the key abstract concepts involved in the research;
• Identify its relationship to both the problem statement and the
literature review;
• A problem cannot be scientifically solved unless it is reduced to
hypothesis form; and
• It is a powerful tool of advancement of knowledge, consistent with
existing knowledge and conducive to further enquiry.
NATURE OF HYPOTHESIS
An Example
You are a nutritionist working in a zoo, and one of your responsibilities is to develop a
menu plan for the group of monkeys. In order to get all the vitamins they need, the
monkeys have to be given fresh leaves as part of their diet. Choices you consider
include leaves of the following species: (a) A (b) B (c) C (d) D and (e) E. You know that
in the wild the monkeys eat mainly B leaves, but you suspect that this could be
because they are safe whilst feeding in B trees, whereas eating any of the other
species would make them vulnerable to predation. You design an experiment to find
out which type of leaf the monkeys actually like best: You offer the monkeys all five
types of leaves in equal quantities, and observe what they eat. There are many
different experimental hypotheses you could formulate for the monkey study. For
example:
When offered all five types of leaves, the monkeys will preferentially feed on B leaves.
This statement satisfies both criteria for experimental hypotheses.
Testable: Once you have collected and evaluated your data (i.e. observations of what
the monkeys eat when all five types of leaves are offered), you know whether or not
they ate more B leaves than the other types (http://www.public.asu.edu).
CHARACTERISTICS OF GOOD HYPOTHESIS
According to Cooper and Schindler (2006), a good research hypothesis fulfills three
conditions:
1. It is adequate for its purpose. It means that the hypothesis depicts the original
research problem, explains whether the research study is descriptive or explanatory
(causal), indicates suitable research design of the study and provides the framework
for organizing the conclusions that will result.
2. It is testable. It means that the hypothesis requires acceptable techniques, reveals
consequences that can be deduced for testing purposes, and needs few conditions or
assumptions.
3. It is better than its rivals. It means that the hypothesis is what the experts believe
is powerful enough to reveal more facts and greater variety or scope of information
than its competing hypothesis.
In a good research project, hypotheses flow naturally from the research questions
formulated. Hypothesis testing starts with an imaginary statement about population
parameter and is a procedure based on sample evidence and probability theory to
determine whether the hypotheses is right or wrong (Lind, Marchal and Wathen,
2008). The number of hypotheses formulated in a study depends on the number of
relationships being studied and the overall complexity of the research framework.
• It offers explanations for the relationships between those variables that can
be empirically tested;
• It furnishes proof that the researcher has sufficient background knowledge to
enable him/her to make suggestions in order to extend existing knowledge.
• It gives direction to an investigation;
• It structures the next phase in the investigation and therefore furnishes
continuity to the examination of the problem;
• It contributes to the development of theory;
• It determines appropriate techniques for the analysis of data;
• It suggests which type of research is likely to be most appropriate; and
• It specifies the source of the data which shall be studied and in what context they
shall be studied.
USABLE HYPOTHESIS
FORMULATING HYPOTHESIS
Once the research question has been stated, the next step is to define testable
hypotheses. Usually, a research question is a broad statement that is not directly
measurable by a research study. The research question needs to be broken down into
smaller units, called hypotheses, that can be studied. A hypothesis is a statement that
expresses the probable relationship between variables.
DEFINING VARIABLES
Dependent variable (or Criterion measure): This is the variable that is affected by the
independent variable. Responsiveness to pain reduction medication is the dependent
variable in the above example. The dependent variable is dependent on the
independent variable. Another example: If I praise you, you will probably feel good,
but if I am critical of you, you will probably feel angry. My response to you is the
independent variable, and your response to me is the dependent variable, because
what I say influences how you respond.
Control variable: A control variable is a variable that affects the dependent variable.
When we "control a variable" we wish to balance its effect across subjects and groups
so that we can ignore it, and just study the relationship between the independent and
the dependent variables. You control for a variable by holding it constant, e.g., keep
humidity the same, and vary temperature, to study comfort levels.
Extraneous variable: This is a variable that probably does influence the relationship
between the independent and dependent variables, but it is one that we do not
control or manipulate. For example, barometric pressure may affect pain thresholds
in some clients, but we do not know how this operates or how to control for it. Thus,
we note that this variable might affect our results, and then ignore it. Often research
studies do not find evidence to support the hypotheses because of unnoticed
extraneous variables that influenced the results. Extraneous variables which influence
the study in a negative manner are often called confounding variables.
ILLUSTRATION OF VARIABLES
Independent variables: role ambiguity, role overload, role conflict, role authority, rigid control.
Intervening variable: anxiety and tension.
Dependent variable: organisational adjustment.
DERIVATIONS OF HYPOTHESIS
INDUCTIVE
Researcher notes the observations of behavior, thinks about the problem, turns to the
literature for clues, makes additional observations, derives probable relationships,
and then hypothesizes an explanation. The hypothesis is then tested.
DEDUCTIVE
Researcher begins with a theory, deduces a testable hypothesis from it, and then gathers
observations to confirm or reject the hypothesis.
TYPES OF HYPOTHESIS
Descriptive hypothesis
Descriptive hypotheses aim to describe, not to explain; they are ways of answering
the question “what?” A descriptive hypothesis claims that all instances (of a given
type) of phenomenon X have observable feature Y. A descriptive hypothesis makes an
empirical claim about the generality of a condition. If one claims that all dogs have
tails, that the condition of having tails is valid for all dogs, then one is making a
descriptive hypothesis, which can of course be tested empirically.
Statistical hypothesis
A statistical hypothesis is an assumption about a population parameter. The only way
to determine with certainty whether a statistical hypothesis is true would be to examine
the entire population. Since that is often
impractical, researchers typically examine a random sample from the population. If
sample data is consistent with the statistical hypothesis, the hypothesis is accepted; if
not, it is rejected. There are two types of statistical hypotheses: the null hypothesis
and the alternative hypothesis.
Null hypothesis
The null hypothesis (H0) states that there is no relationship or difference between the
variables under study.
Example: There is no significant relation between the stress experienced on the job
and the job satisfaction of employees.
Alternative hypothesis
The alternative hypothesis (H1) states that there is a relationship or difference between
the variables; it is the hypothesis accepted when the null hypothesis is rejected.
Alternative hypotheses may be directional or non-directional.
a. Directional hypothesis
A directional hypothesis is never phrased as a question, but always as a statement. It
expresses the effect of an independent variable on a dependent variable and specifies
the direction of the predicted relationship, that is, whether the relationship between
the two variables will be positive or negative.
B: The mean height of all men in the city is greater than 5' 6''.
b. Non-Directional hypothesis
Example: “There is a difference in the anxiety level of the children of high IQ and those
of low IQ.”
Working hypothesis
While planning the study of a problem, hypotheses are formed. Initially they may not
be specific. In such cases they are referred to as "working hypotheses", which are
subject to modification as the investigation proceeds.
Relational hypothesis
A relational hypothesis is a proposition that describes the relationship between two
variables. The relationship suggested may be positive, negative, or causal.
Example:
2. Participative management promotes motivation among executives; and
3. Labour productivity decreases as working duration increases.
Causal hypothesis
A causal hypothesis states that the existence of, or a change in, one variable causes
or leads to an effect on another variable. The first variable is called the independent
variable and the latter the dependent variable. When dealing with a causal
relationship between variables, the researcher must consider the direction in which
such a relationship flows, i.e., which is the cause and which is the effect.
Common sense hypothesis
The common sense hypothesis indicates that there are empirical uniformities
perceived through day-to-day observation.
Complex hypothesis
Example: Members of the lower levels of a hierarchy suffer from oppression psychosis
(a purposeful distortion of empirical exactness).
Analytical hypothesis
TYPES OF ERRORS IN SIGNIFICANCE TESTING
There are two kinds of errors that can be made in significance testing: (1) a true null
hypothesis can be incorrectly rejected and (2) a false null hypothesis can fail to be
rejected. The former error is called a Type I error and the latter error is called a Type
II error. The probability of a Type I error is designated by the Greek letter alpha (a) and
is called the Type I error rate; the probability of a Type II error (the Type II error rate)
is designated by the Greek letter beta (ß). These two types of errors are defined in the
table.
Statistical Decision      True State of the Null Hypothesis
                          H0 True         H0 False
Reject H0                 Type I error    Correct
Do not reject H0          Correct         Type II error
TYPE I ERROR
Type I error, also known as an "error of the first kind", an α error, or a "false positive":
the error of rejecting a null hypothesis when it is actually true. Plainly speaking, it
occurs when we are observing a difference when in truth there is none, thus indicating
a test of poor specificity. An example of this would be if a test shows that a woman is
pregnant when in reality she is not. Type I error can be viewed as the error of excessive
credulity.
Other Examples: We can say that Type I error has been committed when:
TYPE II ERROR
Type II error, also known as an "error of the second kind", a β error, or a "false
negative": the error of failing to reject a null hypothesis when it is in fact not true. In
other words, this is the error of failing to observe a difference when in truth there is
one, thus indicating a test of poor sensitivity.
Example 1: If a test shows that a woman is not pregnant, when in reality, she is. The
type II error can be viewed as the error of excessive skepticism.
Example 2: Medicine, however, is one field where a Type II error is potentially
dangerous: telling a patient that they are free of disease when they are not.
Example 3: A calculator manufacturer receives shipments of printed circuits from a
supplier. Before the circuits are used in the production of the calculators, a sample
from each shipment is tested against the null hypothesis that the shipment is of
acceptable quality. If the null hypothesis is rejected, the entire shipment is returned
to the supplier due to inferior quality. (A shipment is defined to be of inferior quality
if it contains more than 5% defects.)
In this context, define Type I and Type II errors.
1. From the calculator manufacturer's point of view, which type of error would
be considered more serious, and why?
2. From the printed circuit supplier's point of view, which type of error would be
considered more serious, and why?
HYPOTHESIS TESTING
All hypothesis tests are conducted the same way. The researcher states a hypothesis
to be tested, formulates an analysis plan, analyzes sample data according to the plan,
and accepts or rejects the null hypothesis, based on the results of the analysis.
▪ State the hypotheses. Every hypothesis test requires the analyst to state a null
hypothesis and an alternative hypothesis. The hypotheses are stated in such a
way that they are mutually exclusive. That is, if one is true, the other must be
false; and vice versa.
▪ Formulate an analysis plan. The analysis plan describes how to use sample
data to accept or reject the null hypothesis. It should specify the following
elements.
• Significance level. Often, researchers choose significance levels equal
to 0.01, 0.05, or 0.10; but any value between 0 and 1 can be used.
• Test method. Typically, the test method involves a test statistic and a
sampling distribution. Computed from sample data, the test statistic
might be a mean score, proportion, difference between means, the
difference between proportions, z-score, t-score, chi-square, etc. Given
a test statistic and its sampling distribution, a researcher can assess
probabilities associated with the test statistic. If the test statistic
probability is less than the significance level, the null hypothesis is
rejected.
▪ Analyze sample data. Using sample data, perform computations called for in
the analysis plan.
▪ Interpret the results. If the sample findings are unlikely, given the null
hypothesis, the researcher rejects the null hypothesis. Typically, this
involves comparing the P-value to the significance level, and rejecting
the null hypothesis when the P-value is less than the significance level.
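The four steps above can be sketched as a one-sample z-test in Python. This is a minimal illustration, not a prescribed method: the hypothesized mean, the (assumed known) population standard deviation, and the sample figures are all hypothetical:

```python
from math import sqrt
from statistics import NormalDist

# Step 1: state the hypotheses.
# H0: population mean = 100    Ha: population mean != 100
mu0, sigma = 100, 15          # hypothesized mean; assumed known population SD

# Step 2: formulate the analysis plan.
alpha = 0.05                  # significance level
n, sample_mean = 36, 108      # hypothetical sample size and sample mean

# Step 3: analyze the sample data -- compute the test statistic.
z = (sample_mean - mu0) / (sigma / sqrt(n))

# Step 4: interpret the results -- two-tailed P-value vs. significance level.
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
decision = "reject H0" if p_value < alpha else "do not reject H0"
print(f"z = {z:.2f}, P-value = {p_value:.4f} -> {decision}")
```

Because the P-value falls below the chosen significance level here, the null hypothesis is rejected, matching the decision rule stated above.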
CONCLUSION
DISCUSSION QUESTIONS
1. Define hypothesis.
2. What are the characteristic features of a hypothesis?
3. Why is a hypothesis needed?
4. What is the scope of the hypothesis?
5. What do you mean by variables?
6. Differentiate between extraneous and control variables.
7. Differentiate between the inductive and deductive methods.
8. What are the different types of hypothesis?
9. How do you define a descriptive hypothesis? Give a suitable example.
10. How do you define a statistical hypothesis? Give a suitable example.
11. Differentiate between null and alternative hypotheses.
12. Differentiate between directional and non-directional hypotheses.
13. What is a relational hypothesis?
14. What do you mean by a causal hypothesis?
15. How do you conceptualize an analytical hypothesis?
16. Explain Type I and Type II errors.
MODULE 10
DATA COLLECTION
Learning objectives:
INTRODUCTION
Data collection is a term used to describe the process of preparing and collecting data,
for example as part of a process improvement or similar project. The purpose of data
collection is to obtain information to keep on record, to make decisions about
important issues, and to pass information on to others. Primarily, data are collected to
provide information regarding a specific topic.
Data collection usually takes place early in an improvement project and is often
formalized through a data collection plan. Prior to any data collection, pre-collection
activity is one of the most crucial steps in the process. Researchers often discover too
late that the value of their interview information is discounted as a consequence of
poor sampling of both questions and informants and of poor elicitation techniques.
After pre-collection activity is fully completed, data collection in the field, whether by
interviewing or other methods, can be carried out in a structured, systematic and
scientific way.
A formal data collection process is necessary because it ensures that the data gathered
are both defined and accurate, so that subsequent decisions based on arguments
embodied in the findings are valid. The process provides both a baseline from which
to measure and, in certain cases, a target for what to improve.
Basic concepts
Data
Data are facts, figures and other relevant materials, past and present serving as bases
for study and analysis.
Demographic data
Age, sex, race, social class, religion, marital status, education, income, occupation,
family size, location of households, lifestyle.
Sources of data
Primary Research
Primary Research (also called field research) involves the collection of data that does
not already exist, which is research to collect original data. Primary Research is often
undertaken after the researcher has gained some insight into the issue by collecting
secondary data. Here the researcher collects firsthand information and has greater
control over its collection and classification. Data can be collected in numerous forms,
including questionnaires, direct observation and telephone interviews, among others.
For example, you can investigate an issue specific to your business, get feedback about
your Web site, assess demand for a proposed service, gauge response to various
packaging options, and find out how much consumers will shell out for a new product.
Advantages:
• Addresses specific research issues, as the researcher controls the research design
to fit their needs;
• Greater control: primary research not only enables the researcher to focus on
specific subjects, it also gives the researcher greater control over how the
information is collected. The researcher can decide on such requirements as the
size of the project, the time frame and the location of the research;
• Efficient spending for information: primary data collection focuses on issues
specific to the researcher, improving the chances that the research funds are
spent efficiently; and
• Proprietary information: primary data collected by the researcher are the
researcher's own.
Secondary Research
Secondary research (also known as desk research) involves the summary, collation
and/or synthesis of existing research rather than primary research, where data is
collected from, for example, research subjects or experiments. Secondary data are
collected from the work of existing researchers or taken from already existing
databases. Secondary research describes information gathered from existing study
reports, surveys, previously conducted interviews, literature, publications, records,
published institutional documents, newspapers, journals, magazines and broadcast
media. Secondary research is
much easier to gather than primary research. Although secondary research is less
expensive than primary research, it's not as accurate, or as useful, as specific and
customized research. For instance, secondary research will tell you how much
teenagers spent last year on basketball shoes, but not how much they're willing to pay
for the shoe design your company has in mind.
Advantages
• If you have limited funds and time, other surveys may have the advantage of
samples drawn from larger populations;
• How much you use previously collected data is flexible; you might only extract
a few figures from a table, you might use the data in a subsidiary role in your
research, or even in a central role; and
• A network of data archives in which survey data files are collected and
distributed is readily available, making research for secondary analysis easily
accessible.
Disadvantages
• Since many surveys deal with national populations, if you are interested in
studying a well-defined minority subgroup you will have a difficult time finding
relevant data;
• Secondary analysis can be used in irresponsible ways. If variables aren't exactly
those you want, data can be manipulated and transformed in a way that might
lessen the validity of the original research; and
• Much research, particularly of large samples, can involve large data files and
difficult statistical packages.
4. Primary data are accumulated by the researcher specifically to meet the research
objective of the existing project, whereas secondary data are obtained from an
organization other than the one immediately interested in the current research
project; such data were collected and analyzed by that organization to meet the
requirements of its own research objectives.
In primary data collection, you collect the data yourself using methods such as
interviews and questionnaires. The key point here is that the data you collect is unique
to you and your research and, until you publish, no one else has access to it. There are
many methods of collecting primary data and the main methods include:
• Questionnaires.
• Schedules.
• Interviews.
• Focus group interviews.
• Observation.
• Case-studies.
• Diaries.
• Critical incidents; and
• Portfolios.
QUESTIONNAIRE
Questionnaires are a popular means of collecting data but are difficult to design and
often require many rewrites before an acceptable questionnaire is produced.
Advantages:
Disadvantages:
• Design problems.
• Questions have to be relatively simple.
• Historically low response rate (although inducements may help).
• Time delay whilst waiting for responses to be returned.
• Require a return deadline.
• Several reminders may be required.
• Assumes no literacy problems.
• No control over who completes it.
• Not possible to give assistance if required.
• Problems with incomplete questionnaires.
• Replies not spontaneous and independent of each other.
• Respondent can read all the questions beforehand and then decide whether or
not to complete the questionnaire, perhaps because it is too long, too complex,
uninteresting, or too personal.
Schedule
A set of questions which are asked and filled in by an interviewer in a face to face
situation with another person.
Objectives:
Interviews
Personal interview
Advantages:
• Possible in-depth questions;
• Interviewer in control and can give help if there is a problem;
• Can investigate motives and feelings;
• Can use recording equipment;
• Characteristics of respondent assessed – tone of voice, facial expression,
hesitation, etc.;
• Can use props; and
• Used to pilot other methods.
Disadvantages:
Types of interview
Structured
Semi-structured
The interview is focused by asking certain questions but with scope for the respondent
to express him or herself at length.
Unstructured
This is also called an in-depth interview. The interviewer begins by asking a general
question and then encourages the respondent to talk freely. The interviewer uses an
unstructured format, the subsequent direction of the interview being determined by
the respondent's initial reply. The interviewer then probes for elaboration: 'Why do
you say that?', 'That's interesting, tell me more' or 'Would you like to add anything
else?' are typical probes.
Observation
Observation involves recording the behavioral patterns of people, objects and events
in a systematic manner. Observational methods may be:
• structured or unstructured;
• disguised or undisguised;
• natural or contrived;
• personal;
• mechanical;
• non-participant; and
• participant, with the participant taking a number of different roles.
Structured or unstructured
Disguised or undisguised
In disguised observation, respondents are unaware that they are being observed and
thus behave naturally. Disguise is achieved, for example, by hiding, using hidden
equipment, or using people disguised as shoppers.
In undisguised observation, respondents are aware they are being observed. There is
a danger of the Hawthorne effect – people behave differently when being observed.
Natural or contrived
Personal
Mechanical
Mechanical devices (video, closed circuit television) record what is being observed.
These devices may or may not require the respondent’s direct participation. They are
used for continuously recording on-going behavior.
Non-participant
The observer does not normally question or communicate with the people being
observed. He or she does not participate.
Participant
In participant observation, the researcher becomes, or is, part of the group that is
being investigated. Participant observation has its roots in ethnographic studies (the
study of man and races) where researchers would live in tribal villages, attempting to
understand the customs and practices of that culture. It has a very extensive
literature, particularly in sociology (development, nature and laws of human society)
and anthropology (physiological and psychological study of man). Organizations can
be viewed as ‘tribes’ with their own customs and practices.
The role of the participant observer is not simple. There are different ways of
classifying the role:
• Researcher as employee.
• Researcher as an explicit role.
• Interrupted involvement.
• Observation alone.
Researcher as employee
The researcher works within the organization alongside other employees, effectively
as one of them. The role of the researcher may or may not be explicit and this will
have implications for the extent to which he or she will be able to move around and
gather information and perspectives from other sources. This role is appropriate when
the researcher needs to become totally immersed and experience the work or
situation at first hand.
There are a number of dilemmas. Do you tell management and the unions?
Friendships may compromise the research. What are the ethics of the process? Can
anonymity be maintained? Skill and competence to undertake the work may be
required. The research may be over a long period of time.
Researcher as an explicit role
The researcher is present every day over a period of time, but entry is negotiated in
advance with management and preferably with employees as well. The individual is
quite clearly in the role of a researcher who can move around, observe, interview and
participate in the work as appropriate. This type of role is the most favored, as it
provides many of the insights that the complete observer would gain, whilst offering
much greater flexibility without the ethical problems that deception entails.
Interrupted involvement
The researcher is present sporadically over a period of time, for example, moving in
and out of the organization to deal with other work or to conduct interviews with, or
observations of, different people across a number of different organizations. It rarely
involves much participation in the work.
Observation alone
Case-studies
The term case-study usually refers to a fairly intensive examination of a single unit
such as a person, a small group of people, or a single company. Case-studies involve
measuring what is there and how it got there. In this sense, it is historical. It can enable
the researcher to explore, unravel and understand problems, issues and relationships.
It cannot, however, allow the researcher to generalize, that is, to argue that the
results, findings or theory developed from one case-study apply to other similar
case-studies. The case looked at may be unique and therefore not representative of other
instances. It is, of course, possible to look at several case-studies to represent certain
features of management that we are interested in studying. The case-study approach
is often done to make practical improvements. Contributions to general knowledge
are incidental.
1. Determine the present situation.
2. Gather background information about the past and key variables.
3. Test hypotheses. The background information collected will have been
analyzed for possible hypotheses. In this step, specific evidence about each
hypothesis can be gathered. This step aims to eliminate possibilities which
conflict with the evidence collected and to gain confidence for the important
hypotheses. The culmination of this step might be the development of an
experimental design to test out more rigorously the hypotheses developed, or
it might be to take action to remedy the problem.
4. Take remedial action. The aim is to check that the hypotheses tested actually
work out in practice. Some action, correction or improvement is made and a
re-check carried out on the situation to see what effect the change has brought
about.
The case-study enables rich information to be gathered from which potentially useful
hypotheses can be generated. It can, however, be a time-consuming process. It is also
inefficient in researching situations which are already well structured and where the
important variables have been identified, and it lacks utility when attempting to reach
rigorous conclusions or determine precise relationships between variables.
Longitudinal Data
Longitudinal data are data collected from the same subjects at regular time intervals.
Longitudinal data can be further divided into two types:
to be transitory and situational. Use qualitative methods to capture what people say
about their meanings and interpretations. Qualitative research typically involves
qualitative data, i.e., data obtained through methods such as interviews, on-site
observations, and focus groups, that are in narrative rather than numerical form.
Such data are analyzed by looking for themes and patterns. It involves reading,
rereading, and exploring the data. How the data are gathered will greatly affect the
ease of analysis and utility of findings (Maxwell, 1996; Patton, 2002; Wholey, Hatry,
and Newcomer, 2004).
Quantitative inquiries use numerical and statistical processes to answer specific
questions. Statistics are used in a variety of ways to support inquiry or program
assessment/evaluation. Descriptive statistics are numbers used to describe a group of
items. Inferential statistics are computed from a sample drawn from a larger
population with the intention of making generalizations from the sample about the
whole population. The accuracy of inferences drawn from a sample is critically
affected by the sampling procedures used. It is important to start planning the
statistical analyses at the same time that planning for an inquiry begins. Decisions
about analysis techniques to use and statistics to report are affected by levels of
measurement of the variables in the study, the questions being addressed, and the
type and level of information that you expect to include in reporting on your
discoveries (Maxwell, 1996; Patton, 2002; Wholey, Hatry, and Newcomer, 2004).
CONCLUSION
This section of the book, in a nutshell, provides the researcher an insight into data
collection methods, forms of data, and the sources from which the data need to be
collected. Qualitative data collection methods are time-consuming, hence data are
usually collected from a smaller sample. On the contrary, quantitative data collection
methods focus on numbers and frequencies rather than on meaning. Whatever
method a researcher may adopt, the results of the research are strongly linked to the
way the data are collected from varied sources. The procedures used to collect data
can influence research validity. Knowledge of data collection helps the researcher
engage in the right research process and identify the sources from which the data are
to be collected.
DISCUSSION QUESTIONS
9. What do you mean by a questionnaire?
10. Discuss the advantages and disadvantages of the questionnaire as a tool of
data collection.
11. Differentiate between schedule and questionnaire.
12. What are the characteristics of a good questionnaire?
13. What are the characteristics of a good schedule?
14. How would the interview as a tool be beneficial to research?
15. Discuss various types of interviews.
16. Discuss the role of the researcher during an interview.
17. How did case studies come to be a method of managerial research?
18. What are the basic steps in case-study methodology?
19. Differentiate between cross-sectional and longitudinal data.
20. Discuss the data collection methods and tools in qualitative and
quantitative research.
MODULE 11
ITEM ANALYSIS
Learning objectives:
Introduction
Item analysis is a broad term that denotes the precise procedures used in research to
appraise test items, normally for the purpose of test construction and revision. Item
analysis is regarded as one of the most significant facets of test construction, and
attention to it in applied research is increasing. The main purpose of item analysis is
to improve tests by revising or eliminating ineffective items. There are numerous
procedures available for item analysis. The procedure employed in evaluating an
item's effectiveness is contingent to some extent on the researcher's predilection and
on the purpose of the test. The researcher should be aware that item analysis comes
after the first draft of a test has been constructed, administered to a sample, and
scored.
What is an Item?
An item is a single task or question that usually cannot be broken down into any
smaller units.
Item analysis
In item analysis, a set of items is pretested for difficulty and discrimination by
administering the items, in an experimental try-out, to a group of examinees fairly
representative of the target population for the test; computing an index of difficulty
and an index of discrimination for each item; and retaining for the final test those
items having the desired properties in the greater degree.
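As a sketch of the computation just described, the following Python fragment derives a difficulty index (the proportion of all examinees answering an item correctly) and a simple discrimination index (upper-group minus lower-group proportion correct). The response matrix is hypothetical, and for brevity examinees are split into halves rather than the customary upper and lower 27% groups:

```python
# Each row is one examinee: 1 = answered the item correctly, 0 = incorrect.
# Columns are items; an examinee's total score is the row sum.
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]

n = len(responses)
totals = [sum(row) for row in responses]
# Rank examinees by total score and split into upper and lower halves.
order = sorted(range(n), key=lambda i: totals[i], reverse=True)
upper, lower = order[: n // 2], order[n // 2:]

for item in range(len(responses[0])):
    # Difficulty index: proportion of all examinees answering correctly.
    difficulty = sum(row[item] for row in responses) / n
    # Discrimination index: upper-group minus lower-group proportion correct.
    d = (sum(responses[i][item] for i in upper) / len(upper)
         - sum(responses[i][item] for i in lower) / len(lower))
    print(f"Item {item + 1}: difficulty = {difficulty:.2f}, discrimination = {d:.2f}")
```

An item answered correctly by almost everyone (or no one) contributes little information, and an item with a low or negative discrimination index is a candidate for revision or removal.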
Scale
A scale is a device for measuring variables: numbers or scores are assigned to the
various degrees of opinions, attitudes and other concepts.
Scaling
In the social sciences, scaling is the process of measuring or ordering entities with
respect to quantitative attributes or traits. For example, a scaling technique might
involve estimating individuals' levels of extraversion, or the perceived quality of
products. Certain methods of scaling permit estimation of magnitudes on a
continuum, while other methods provide only for the relative ordering of the entities.
Scale evaluation
Reliability and validity examine the fitness of a measure. According to Whitelaw (2001),
reliability and validity can be conceptualized by taking a chocolate cake recipe as an
example.
Reliability
Reliability suggests that any person, provided that they follow the recipe, will produce
a reasonable chocolate cake, or at least something that you can identify as a chocolate
cake. Validity suggests that if the recipe includes chocolate, then the cake will look like
a chocolate cake, smell like a chocolate cake and taste like a chocolate cake; that is,
the proof is in the pudding. When we think of it in this way, it becomes clear that a
measurement device must first be reliable and then it must be valid. In other words,
you have to be confident that what you get is consistently reproducible (i.e. the recipe
consistently produces something like a cake). Once you are confident in the
consistency of the output, then you can scrutinize it to assess whether it is what it
purports to be (i.e. a chocolate cake).
Based on the above example, we can say that; Reliability is the degree to which a
measure is free from random error and therefore gives consistent results. It indicates
the internal consistency of the measurement device. It refers to the accuracy and
precision of a measurement procedure (Thorndike, Cunningham, Thorndike, and
Hagen, 1991) and can be expressed in terms of stability, equivalence, and internal
consistency (Cooper and Schindler, 2003).
Stability is the extent to which results obtained with the measurement device can be
reproduced. Stability can be assessed by administering the same measurement device
to the same respondents at two separate points in time (Zikmund, 2000). This is called
the test-retest method. However, as Zikmund (2000) elaborates, there are problems
associated with this method. The first measure may sensitize the respondents and
may influence the results of the second measure. Results may also suffer due to
inadequate time between test and retest measures.
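The test-retest method described above is usually summarized by the correlation between the two administrations: the closer the coefficient is to 1, the more stable the measure. A minimal sketch, with hypothetical scores from five respondents:

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from the same five respondents at two points in time.
test_1 = [12, 18, 15, 20, 10]
test_2 = [13, 17, 16, 19, 11]

stability = pearson(test_1, test_2)
print(f"Test-retest (stability) coefficient: {stability:.2f}")
```

A coefficient near 1 suggests the measurement device produces reproducible results, subject to the sensitization and timing caveats noted above.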
Equivalence shows the degree to which alternative forms of the same measures are
used to produce the same or similar results (Cooper and Schindler, 2003).
The alpha coefficient ranges in value from 0 to 1 and may be used to describe the
reliability of factors extracted from various types of questionnaires or scales. The
higher the score, the more reliable the generated scale is (Delafrooz, Paim and Khatibi,
2009). According to Nunnally (1978), a Cronbach's alpha score of 0.7 is considered an
acceptable reliability coefficient.
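Cronbach's alpha can be computed directly from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A sketch with a hypothetical 4-item scale answered by six respondents:

```python
from statistics import pvariance

# Hypothetical 4-item Likert scale; rows = respondents, columns = items.
responses = [
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
    [3, 3, 3, 3],
]

k = len(responses[0])                 # number of items
items = list(zip(*responses))         # one tuple of scores per item
item_vars = sum(pvariance(col) for col in items)
total_var = pvariance([sum(row) for row in responses])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```

For this toy data the coefficient is well above the 0.7 threshold cited from Nunnally (1978), indicating acceptable internal consistency.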
Validity
Validity is the degree to which a measurement device actually measures what it
purports to measure. Before administering any measurement device, a researcher
must ensure that the device has face validity, content validity, criterion validity and
construct validity (Neuman, 2005).
• The validity of the test is concerned with what the test measures and how
well it does so.
• The accuracy with which it measures or as the degree to which it
approaches infallibility in measuring what it purports to measure.
Types of validity
Content validity: a systematic examination of the content of the test to analyze
whether it covers a representative sample of the concept being measured. Content
validity is closely related to face validity. It makes sure that a measure includes an
adequate and representative set of items to cover a concept. It can also be ensured
by experts' agreement (Sekaran, 2003).
Criterion validity: obtained by comparing the test scores with scores obtained on a
criterion available at present or to be available in the future. It takes two forms:
a. Predictive validity.
b. Concurrent validity.
Construct validity: the extent to which the results obtained from the use of a measure
fit the theory around which the test is designed.
Predictive Validity
Predictive validity is the extent to which a new measure is able to predict a future
event. Here the researcher uses a future criterion instead of a contemporary one
(Bryman and Bell, 2003).
Convergent validity
Convergent validity results when two variables measuring the same construct
correlate highly (Straub, Boudreau and Gefen, 2004).
Discriminant Validity
A device has discriminant validity if using similar measures to research different
constructs results in relatively low intercorrelations (Cooper and Schindler, 2003).
Levels of measurement
Most research texts explain the four levels of measurement: nominal, ordinal, interval and
ratio, so the treatment given to them here will be brief.
Nominal scales
Which of the following food items do you tend to buy at least once per month?
(Please tick)
Okra Palm Oil Milled Rice
Peppers Prawns Pasteurized milk
The numbers have no arithmetic properties and act only as labels. The only
measure of average which can be used is the mode because this is simply a set of
frequency counts. Hypothesis tests can be carried out on data collected in the nominal
form. The most likely would be the Chi-square test. However, it should be noted that
the Chi-square is a test to determine whether two or more variables are associated
and the strength of that relationship. It can tell nothing about the form of that
relationship where one exists, i.e. it is not capable of establishing cause and effect.
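For instance, the Chi-square statistic for a cross-tabulation of nominal data can be computed from observed and expected frequencies. The 2 x 2 table below is hypothetical, and the statistic is compared against the conventional critical value of 3.841 for 1 degree of freedom at the 5% level.

```python
def chi_square(table):
    """Chi-square statistic for a two-way frequency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of the two variables
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = buys okra (yes/no), cols = region (A/B)
table = [[30, 10],
         [20, 40]]
stat = chi_square(table)
print(round(stat, 3))  # compare with 3.841 (df = 1, p = 0.05)
```

A statistic above the critical value indicates an association, but, as the text notes, says nothing about cause and effect.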
Ordinal scales
Ordinal scales involve the ranking of individuals, attitudes or items along the
continuum of the characteristic being scaled. For example, if a researcher asked
farmers to rank 5 brands of pesticide in order of preference he/she might obtain
responses like those in table below.
Order of preference   Brand
1                     Rambo
2                     R.I.P.
3                     Killalot
4                     D.O.A.
5                     Bugdeath
From such a table the researcher knows the order of preference but nothing about
how much more one brand is preferred to another; that is, there is no information
about the interval between any two brands. All of the information a nominal scale
would have given is available from an ordinal scale. In addition, positional statistics
such as the median, quartile and percentile can be determined.
It is possible to test for order correlation with ranked data. The two main methods are
Spearman's Ranked Correlation Coefficient and Kendall's Coefficient of Concordance.
Using either procedure one can, for example, ascertain the degree to which two or
more survey respondents agree in their ranking of a set of items. Consider again
the ranking of pesticides example in figure 3.2. The researcher might wish to measure
similarities and differences in the rankings of pesticide brands according to whether
the respondents' farm enterprises were classified as "arable" or "mixed" (a
combination of crops and livestock). The resultant coefficient takes a value in the
range 0 to 1. A zero would mean that there was no agreement between the two
groups, and 1 would indicate total agreement. It is more likely that an answer
somewhere between these two extremes would be found.
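As a sketch of this kind of rank comparison, Spearman's coefficient for two tie-free rankings can be computed from the familiar formula rs = 1 - 6*sum(d^2)/(n(n^2 - 1)); the two rankings below are hypothetical. (Note that Spearman's coefficient ranges from -1 to +1, whereas the concordance coefficient described above runs from 0 to 1.)

```python
def spearman_rho(rank_x, rank_y):
    """Spearman's rank correlation for two tie-free rankings."""
    n = len(rank_x)
    # Sum of squared differences between paired ranks
    d_squared = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical rankings of 5 pesticide brands by two groups of farmers
arable = [1, 2, 3, 4, 5]
mixed  = [2, 1, 3, 5, 4]
rho = spearman_rho(arable, mixed)
print(rho)
```

A value near +1 would indicate that the two groups rank the brands in much the same order.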
The only other permissible hypothesis testing procedures are the runs test and sign
test. The runs test (also known as the Wald-Wolfowitz test) is used to determine
whether a sequence of binomial data - meaning data that can take only one of two possible
values, e.g. African/non-African, yes/no, male/female - is random or contains
systematic 'runs' of one or the other value. Sign tests are employed when the objective is
to determine whether there is a significant difference between matched pairs of data.
The sign test tells the analyst if the number of positive differences in ranking is
approximately equal to the number of negative rankings, in which case the
distribution of rankings is random, i.e. apparent differences are not significant. The
test takes into account only the direction of differences and ignores their magnitude
and hence it is compatible with ordinal data.
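A sign test of this kind can be sketched as follows: only the direction of each paired difference is counted, and the two-sided binomial probability of a split at least that extreme is computed under the null hypothesis of equally likely plus and minus signs. The matched-pair ratings below are hypothetical.

```python
from math import comb

def sign_test(pairs):
    """Two-sided sign test on matched pairs (ties discarded)."""
    diffs = [a - b for a, b in pairs if a != b]
    n = len(diffs)
    plus = sum(1 for d in diffs if d > 0)
    k = min(plus, n - plus)
    # Two-sided binomial tail probability under p = 0.5
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical before/after ratings for 8 respondents
pairs = [(4, 3), (5, 3), (3, 4), (5, 2), (4, 2), (5, 4), (4, 1), (3, 2)]
p_value = sign_test(pairs)
print(round(p_value, 4))
```

Because magnitudes are ignored, the procedure is valid for ordinal data, as the text notes.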
Ratio scales
The highest level of measurement is a ratio scale. This has the properties of an interval
scale together with a fixed origin or zero point. Examples of variables which are ratio
scaled include weights, lengths and times. Ratio scales permit the researcher to
compare both differences in scores and the relative magnitude of scores. For instance
the difference between 5 and 10 minutes is the same as that between 10 and 15
minutes, and 10 minutes is twice as long as 5 minutes.
Given that sociological and management research seldom aspires beyond the interval
level of measurement, it is not proposed that particular attention be given to this level
of analysis. Suffice it to say that virtually all statistical operations can be performed on
ratio scales.
Interval scales
It is only with interval scaled data that researchers can justify the use of the
arithmetic mean as the measure of average. The interval or cardinal scale has equal
units of measurement, thus making it possible to interpret not only the order of scale
scores but also the distance between them. However, it must be recognized that the
zero point on an interval scale is arbitrary and is not a true zero. This of course has
implications for the type of data manipulation and analysis we can carry out on data
collected in this form. It is possible to add or subtract a constant to all of the scale
values without affecting the form of the scale but one cannot multiply or divide the
values. It can be said that two respondents with scale positions 1 and 2 are as far apart
as two respondents with scale positions 4 and 5, but not that a person with a score of 10
feels twice as strongly as one with a score of 5. Temperature is interval scaled, being
measured either in Centigrade or Fahrenheit. We cannot speak of 50°F being twice as
hot as 25°F since the corresponding temperatures on the centigrade scale, 10°C and -
3.9°C, are not in the ratio 2:1. Most of the common statistical methods of analysis
require only interval scales in order that they might be used. These are not recounted
here because they are so common and can be found in virtually all basic texts on
statistics. Interval scales may be either numeric or semantic. Study the examples
below in the figure.
Examples of interval scales in numeric and semantic formats
Please indicate your views on Balkan Olives by scoring them on a scale of 5 down to
1 (i.e. 5 = Excellent; 1 = Poor) on each of the criteria listed.
Balkan Olives are: Circle the appropriate score on each line
Succulence               5 4 3 2 1
Fresh tasting            5 4 3 2 1
Free of skin blemish     5 4 3 2 1
Good value               5 4 3 2 1
Attractively packaged    5 4 3 2 1
(a)
Please indicate your views on Balkan Olives by ticking the appropriate responses
below:
Excellent Very Good Good Fair Poor
Succulent
Freshness
Freedom from skin blemish
Value for money
Attractiveness of packaging
(b)
Measurement scales
The various types of scales used in marketing research fall into two broad categories:
comparative and non-comparative.
Comparative scales
Paired comparison: It is sometimes the case that marketing researchers wish to find
out which are the most important factors in determining the demand for a product.
Conversely, they may wish to know which factors are most important in
preventing the widespread adoption of a product. Take, for example, the very poor
farmer response to the first design of an animal-drawn mould board plough. A
combination of exploratory research and shrewd observation suggested that the
following factors played a role in shaping the attitudes of those farmers who
felt negatively towards the design:
Suppose the organization responsible wants to know which factor is foremost in the
farmer's mind. It may well be that if the factors most important to
the farmer are addressed, the remaining objections, being of a relatively minor nature, will cease to prevent
widespread adoption. The alternatives are to abandon the product's re-development
or to completely re-design it, which is not only expensive and time-consuming, but may
well be subject to a new set of objections.
The process of rank ordering the objections from most to least important is best
approached through the questioning technique known as 'paired comparison'. Each
of the objections is paired by the researcher so that with 5 factors, as in this example,
there are 10 pairs-
In 'paired comparisons' every factor has to be paired with every other factor in turn.
However, only one pair is ever put to the farmer at any one time.
• It proved too difficult to transport
In most cases the question, and the alternatives, would be put to the farmer verbally.
He/she then indicates which of the two was the more important and the researcher
ticks the box on his questionnaire. The question is repeated with a second set of
factors and the appropriate box ticked again. This process continues until all possible
combinations are exhausted, in this case 10 pairs. It is good practice to mix the pairs
of factors so that there is no systematic bias. The researcher should try to ensure that
any particular factor is sometimes the first of the pair to be mentioned and sometimes
the second. The researcher would never, for example, take the first factor (on this
occasion 'Does not ridge') and systematically compare it to each of the others in
succession. That is likely to cause systematic bias.
Below labels have been given to the factors so that the worked example will be easier
to understand. The letters A - E have been allocated as follows:
A = Does not ridge
B = Far too expensive
C = New technology too risky
D = Does not work for inter-cropping
E = Too difficult to carry.
The data are then arranged into a matrix. Assume that 200 farmers have been
interviewed and their responses are arranged in the grid below. Further assume that
the matrix is so arranged that we read from top to side. This means, for example, that
164 out of 200 farmers said the fact that the plough was too expensive was a greater
deterrent than the fact that it was not capable of ridging. Similarly, 174 farmers said
that the plough's inability to inter-crop was more important than the inability to ridge
when deciding not to buy the plough.
A preference matrix
      A    B    C    D    E
A   100  164  120  174  180
B    36  100  160  176  166
C    80   40  100  168  124
D    26   24   32  100  102
E    20   34   76   98  100
If the grid is carefully read, it can be seen that the rank order of the factors is -
One major advantage of this type of questioning is that whilst it is possible to obtain
a measure of the order of importance of five or more factors from the respondent, he
is never asked to think about more than two factors at any one time. This is especially
useful when dealing with illiterate farmers. Having said that, the researcher has to be
careful not to present too many pairs of factors to the farmer during the interview. If
he does, he will find that the farmer will quickly get tired and/or bored. It is as well to
remember the formula n(n - 1)/2. For ten factors, brands or product attributes this
would give 45 pairs. Clearly the farmer should not be asked to subject himself to
having the same question put to him 45 times. For practical purposes, six factors are
possibly the limit, giving 15 pairs.
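The rank ordering implicit in a preference matrix can be recovered by summing each column, each column total being the number of times that factor was judged the greater deterrent. The sketch below uses the matrix from the worked example and also checks the n(n - 1)/2 pair count.

```python
factors = ["A", "B", "C", "D", "E"]
matrix = [
    [100, 164, 120, 174, 180],
    [ 36, 100, 160, 176, 166],
    [ 80,  40, 100, 168, 124],
    [ 26,  24,  32, 100, 102],
    [ 20,  34,  76,  98, 100],
]

# Number of distinct pairs among n factors: n(n - 1)/2
n = len(factors)
pairs = n * (n - 1) // 2
print(pairs)  # 10 pairs for 5 factors

# Column totals: how often each factor was judged the greater deterrent
totals = {f: sum(row[j] for row in matrix) for j, f in enumerate(factors)}
ranking = sorted(factors, key=lambda f: totals[f], reverse=True)
print(ranking)
```

Reading off the column totals in this way turns 10 pairwise judgements into a single ordinal ranking of the five objections.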
It should be clear from the procedures described in these notes that the paired
comparison scale gives ordinal data.
Which of the following types of fish   How much more, in cents, would you be
do you prefer?                         prepared to pay for your preferred fish?
Fresh            Fresh (gutted)        $0.70
Fresh (gutted)   Smoked                 0.50
Frozen           Smoked                 0.60
Frozen           Fresh                  0.70
Smoked           Fresh                  0.20
Frozen (gutted)  Frozen
From the data above the preferences shown below can be computed as follows:
Fresh fish 0.70 + 0.70 + 0.20 =1.60
Smoked fish 0.60 + (-0.20) + (-0.50) = (-0.10)
Fresh fish (gutted) (-0.70) + 0.30 + 0.50 = 0.10
Frozen fish (-0.60) + (-0.70) + (-0.30) =(-1.60)
The procedure is to begin with a list of features which might possibly be offered as
'options' on the product, and alongside each you list its retail cost. A third column is
constructed, and this forms an index of the relative prices of each of the items. The
table below will help clarify the procedure. For the purposes of this example the basic
reaper is priced at $20,000 and some possible 'extras' are listed along with their prices.
The total value of these hypothetical 'extras' is $7,460 but the researcher tells the
farmer he has an equally hypothetical $3,950 or similar sum. The important thing is
that he should have considerably less hypothetical money to spend than the total
value of the alternative product features. In this way the farmer is encouraged to
reveal his preferences by allowing researchers to observe how he trades one
additional benefit off against another. For example, would he prefer a side rake
attachment on a 3 meter head rather than have a transporter trolley on either a
standard or 2.5m wide head? The farmer has to be told that any unspent money
cannot be retained by him so he should seek the best value-for-money he can get.
In cases where the researcher believes that mentioning specific prices might introduce
some form of bias into the results, then the index can be used instead. This is
constructed by taking the price of each item over the total of $ 7,460 and multiplying
by 100. Survey respondents might then be given a maximum of 60 points and then, as
before, are asked how they would spend these 60 points. In this crude example the
index numbers are not too easy to work with for most respondents, so one would
round them as has been done with the adjusted column. It is the relative and not the
absolute value of the items which is important, so the precision of the rounding need
not overly concern us.
Non-comparative scales
Continuous rating scales: The respondents are asked to give a rating by placing a mark
in the appropriate position on a continuous line. The scale can be written on the card
and shown to the respondent during the interview. Two versions of a continuous
rating scale are depicted in the figure.
Continuous rating scales
When version B is used, the respondent's score is determined either by dividing the
line into as many categories as desired and assigning the respondent a score based
on the category into which his/her mark falls, or by measuring the distance, in
millimeters or inches, from either end of the scale.
Whichever of these forms of the continuous scale is used, the results are normally
analyzed as interval scaled.
Line marking scale: The line marked scale is typically used to measure perceived
similarity differences between products, brands or other objects. Technically, such a
scale is a form of what is termed a semantic differential scale since each end of the
scale is labeled with a word/phrase (or semantic) that is opposite in meaning to the
other. The example below provides an illustration of such a scale.
Consider the products below which can be used when frying food. In the case of each
pair, indicate how similar or different they are in the flavour which they impart to the
food.
For some types of respondent, the line scale is an easier format because they do not
find that discrete numbers (e.g. 5, 4, 3, 2, 1) best reflect their attitudes/feelings. The line
marking scale is a continuous scale.
Itemized rating scales: With an itemized scale, respondents are provided with a scale
having numbers and/or brief descriptions associated with each category and are asked
to select one of the limited number of categories, ordered in terms of scale position,
that best describes the product, brand, company or product attribute being studied.
Examples of the itemized rating scale are illustrated below.
Itemized rating scales can take a variety of innovative forms as demonstrated by the
two illustrated below, which are graphic.
Whichever form of itemized scale is applied, researchers usually treat the data as
interval level.
Semantic scales: This type of scale makes extensive use of words rather than
numbers. Respondents describe their feelings about the products or brands on scales
with semantic labels. When bipolar adjectives are used at the end points of the scales,
these are termed semantic differential scales. The semantic scale and the semantic
differential scale are illustrated in figure.
Semantic and semantic differential scales
Likert scales: A Likert scale is what is termed a summated instrument scale. This
means that the items making up a Likert scale are summed to produce a total score.
In fact, a Likert scale is a composite of itemized scales. Typically, each scale item will
have 5 categories, with scale values ranging from -2 to +2 and 0 as the neutral response.
This explanation may be clearer from the example in figure.
The food industry spends a great deal of money making sure that its
manufacturing is hygienic.                                               1 2 3 4 5
Food companies should charge the same price for their products
throughout the country.                                                  1 2 3 4 5
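Summing a respondent's item scores, as the summated-scale description implies, can be sketched as follows. The item responses are hypothetical, and reverse-scoring of negatively worded items is an assumed (though common) practice, not something prescribed by the text.

```python
# Hypothetical responses on five Likert items
# (1 = strongly disagree ... 5 = strongly agree)
responses = [4, 2, 5, 3, 4]

# Negatively worded items must be reverse-scored first
# (assumption for illustration: the second item is negatively worded)
negatively_worded = {1}  # zero-based indices of reversed items
scored = [6 - r if i in negatively_worded else r
          for i, r in enumerate(responses)]

# The total Likert score is the sum of the (re)scored items
total = sum(scored)
print(scored, total)
```

The total, not any single item, is what is treated as the respondent's attitude score.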
Likert scales are treated as yielding interval data by the majority of marketing
researchers. The scales which have been described in this chapter are among the most
commonly used in marketing research. Whilst there are a great many more forms
which scales can take, if students are familiar with those described in this chapter they
will be well equipped to deal with most types of survey problem.
Factor Analysis
One of the other important tools in data analysis is the Factor Analysis. In most of the
empirical researches, there are a number of variables that characterize objects
(Rietveld and Van Hout, 1993). In social sciences, research may involve studying
abstract constructs such as personality, motivation, satisfaction, and job stress. In
study of mental ability for example, a questionnaire could be very lengthy as it would
aim to measure it through several subtests, like verbal skills tests, and logical
reasoning ability tests (Darlington, 2004). In a questionnaire, therefore, there can be
number of variables measuring aspects of same variable thus making the study
complicated. In order to deal with these problems, factor analysis was invented.
Factor analysis attempts to bring inter-correlated variables together under more
general, underlying variables. Hence, the objective of factor analysis is to reduce the
number of dimensions to limited number explaining a particular phenomena or
variable (Rietveld and Van Hout, 1993).
Exploratory factor analysis (EFA) and Confirmatory factor analysis (CFA) are two
techniques a researcher can use to conduct this analysis. According to DeCoster (1998),
the basic purpose of EFA is to find out the nature of the constructs which are influencing
a particular set of responses, while the purpose of CFA is to test whether a particular
set of constructs is influencing responses in a predicted way. The basic steps in a factor analysis are to:
1. Collect measurements;
2. Obtain the correlation matrix;
3. Select the number of factors for inclusion;
4. Extract your initial set of factors;
5. Rotate your factors to a final solution;
6. Interpret your factor structure; and
7. Construct factor scores for further analysis.
Hence EFA and CFA are powerful statistical techniques, and a researcher must
determine the type of analysis before answering his research questions (Suhr, 2006).
Another technique for a similar purpose, which is often confused with Exploratory Factor
Analysis, is Principal Component Analysis (PCA). According to DeCoster (1998), EFA and
PCA are different statistical techniques and produce different results when applied to
the same data. He asserts that the two are based on different models and that the direction
of influence between 'components' and 'measures' differs: in EFA we
assume that the variance in the measured variables can be separated into
common factors and unique factors, whereas in PCA the principal components
developed include both common and unique variance. In conclusion, we conduct EFA
when we want to extract the factors or constructs which are responsible for a
particular set of responses, while we use PCA when the purpose is simply to reduce
the data.
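The data-reduction idea behind PCA can be sketched for two correlated variables using only the standard library: centre the data, form the 2 x 2 covariance matrix, and take its leading eigenvalue and eigenvector analytically. This is a generic illustration, not any particular author's procedure; real analyses with many variables would use a numerical library, and the sub-test scores here are hypothetical.

```python
from math import atan2, cos, sin, sqrt

# Hypothetical scores on two correlated sub-tests
x = [2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 2.9, 4.2, 4.8, 6.0]
n = len(x)

mx, my = sum(x) / n, sum(y) / n
# Sample covariance matrix entries
sxx = sum((a - mx) ** 2 for a in x) / (n - 1)
syy = sum((b - my) ** 2 for b in y) / (n - 1)
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)

# Leading eigenvalue of the 2x2 covariance matrix, in closed form
trace, det = sxx + syy, sxx * syy - sxy ** 2
lam1 = trace / 2 + sqrt(trace ** 2 / 4 - det)
# Direction of the first principal component
theta = atan2(lam1 - sxx, sxy)
pc1 = (cos(theta), sin(theta))

# Share of total variance captured by the first component
explained = lam1 / trace
print(round(explained, 3))
```

When one component explains nearly all of the variance, as here, the two variables can be replaced by a single score with little loss of information, which is precisely the data-reduction use of PCA described above.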
Furthermore, depending upon the nature of data, the researcher can make use of
other advanced statistical techniques as well such as discriminant analysis, cluster
analysis, and time series analysis.
CONCLUSION
One of the most important stages of any research is the phase of instrument development. How
far the instrument is valid enough to collect the right responses from the respondents
determines the validity of the research outcome. The external validity of the
findings depends on the internal validity of the instrument. In order to ensure the
soundness of the instrument, a researcher should have a grounding in item analysis and in
instrument validity and reliability. This section of the book details how
construct, content, criterion, and face validity support researchers in
giving attention to the items they develop in their instruments, which ensures better
internal validity of the instrument. With a validity and reliability orientation during the
research process, the researcher gains better control over the phenomena under
study.
DISCUSSION QUESTIONS
MODULE 12
TOOL ADMINISTRATION,
DATA PROCESSING AND DATA ANALYSIS
Learning objectives:
Introduction
A major aspect of research is the finalization of the research tools and their
administration to the right population selected for the study. It is difficult to plan a
major study or project without adequate knowledge of its subject matter: the
population it is to cover, the level of knowledge and understanding required, and the like. What
are the issues involved? What are the concepts associated with the subject matter?
How can it be operationalized? How long will the study take? How much money will it
cost? Such questions call for a good deal of knowledge of the subject matter of an
extensive study and its dimensions. In order to gain such knowledge, a preliminary
investigation is conducted. This is called a pre-test.
Pilot study
A pre-test or pilot study serves as a trial run that allows us to identify potential
problems in the proposed study. Although this means extra effort at the beginning of
a research project, the pre-test and/or pilot study enables us, if necessary, to revise
the methods and logistics of data collection before starting the actual fieldwork. As a
result, a good deal of time, effort and money can be saved in the long run. Pre-testing
is simpler and less time-consuming and costly than conducting an entire pilot study.
Therefore, we will concentrate on pre-testing as an essential step in the development
of research projects.
• How much time is needed to locate individuals to be included in the
study.
4. Staffing and activities of the research team can be checked, while all are
participating in the pre-test, to determine:
o How successful the training of the research team has been.
o What the work output of each member of the staff is.
o How well the research team works together.
o Whether logistical support is adequate.
o The reliability of the results when instruments or tests are
administered by different members of the research team.
o Whether staff supervision is adequate.
5. Procedures for data processing and analysis can be evaluated during the pre-
test. Items that can be assessed include:
o Appropriateness of data master sheets and dummy tables and their ease
of use.
o Effectiveness of the system for quality control of data collection.
o Appropriateness of statistical procedures (if used).
o Clarity and ease with which the collected data can be interpreted.
6. The proposed work plan and budget for research activities can be assessed
during the pre-test. Issues that can be evaluated include:
o Appropriateness of the amount of time allowed for the different
activities of planning, implementation, supervision, co-ordination and
administration.
o Accuracy of the scheduling of the various activities.
The data collection and data-analysis process should be pre-tested 1-2 weeks before
starting the fieldwork, with the whole research team.
Depending on how closely the pre-test situation resembles the area in which
the actual field work will be carried out, it may be possible to pre-test:
• The time needed to carry out interviews, observations or
measurements.
• The feasibility of the designed sampling procedures.
• The feasibility of the designed procedures for data processing and
analysis.
Data Collection
After pre-testing the tools selected for the study, it is time to implement the tools
in the field. The techniques discussed in the chapters above may be utilized
in the data collection process. Care should be taken in each aspect to prevent technical
and human error during data collection.
Data Processing
Data processing is an intermediary stage of work between data collection and data
analysis. The completed instruments of data collection, viz. the questionnaires,
schedules or data sheets, contain a vast mass of data. They cannot straight away
provide answers to the research questions. Like raw material, they need processing.
Data processing involves the classification and summarization of data in order to make
them amenable to analysis.
1. Editing;
2. Classification;
3. Coding; and
4. Tabulation.
Editing
a. Field editing
Data processing and analysis should start in the field, with checking for
completeness of the data and performing quality control checks, while sorting the
data by instrument used and by group of informants. Data of small samples may even
be processed and analyzed as soon as it is collected.
b. Office Editing
2. Classification
Classification of the data should be done soon after the editing process is over. The
classification of data involves arranging things in groups or classes according to their
resemblances or affinities, and gives expression to the unity of attributes that may
subsist among the diversity of individuals.
Objectives of classification
• To simplify data;
• To distinguish between similarities and dissimilarities;
• To make the data comparable; and
• To make a basis of tabulation.
Examples:
B. Ordinal or Ranked Data: One value is greater or less than another, but the
magnitude of the difference is unknown.
Examples:
3. Income (<$9,999, $10,000-$19,999, $20,000-$49,999, >$50,000)
I. Height;
II. Weight;
III. Light-years; and
IV. Blood pressure.
The researcher should be able to recognize what Data Types are used in the research.
Types of classification
Methods of Classification
3. Classification done according to more than two attributes or variables is
known as manifold classification.
Examples
1. One-way classification
Number of students who secured more than 60% in various sections of the same
course.
2. Two – way classification
Classification according to sex of students who secured more than 60%.
3. Manifold classification.
Classification of employees according to skill, sex and education.
3. Coding
If the data will be entered into a computer for subsequent processing and analysis, it
is essential to develop a Coding System. For computer analysis, each category of a
variable can be coded with a letter, a group of letters or word, or be given a number.
For example, the answer ‘yes’ may be coded as ‘Y’ or 1; ‘no’ as ‘N’ or 2 and ‘no
response’ or ‘unknown’ as ‘U’ or 9. The codes should be entered on the questionnaires
(or checklists) themselves. When finalizing your questionnaire, for each question you
should insert a box for the code in the right margin of the page. These boxes should
not be used by the interviewer. They are only filled in afterwards during data
processing. Take care that you have as many boxes as the number of digits in each
code. If the analysis is done by hand using data master sheets, it is useful to code your
data as well (see section 3 below)
For example:
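A coding system of the kind described can be sketched as a simple mapping applied during data entry; the numeric codes below follow the yes/no/unknown example in the text, and the raw answers are hypothetical.

```python
# Codebook: category label -> numeric code (9 reserved for unknown/no response)
CODES = {"yes": 1, "no": 2, "unknown": 9, "no response": 9}

def encode(answer):
    """Translate a raw questionnaire answer into its numeric code."""
    # Unrecognized or missing answers fall back to the 'unknown' code
    return CODES.get(answer.strip().lower(), 9)

raw_answers = ["Yes", "no", "  NO  ", "refused to answer", "Unknown"]
coded = [encode(a) for a in raw_answers]
print(coded)
```

Applying one agreed codebook at entry time keeps the coded data consistent across interviewers, which is exactly why the codes are filled in during data processing rather than in the field.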
Codes for open-ended questions (in questionnaires) can be done only after examining
a sample of (say 20) questionnaires. You may group similar types of responses into
single categories, so as to limit their number to at most 6 or 7. If there are too many
categories it is difficult to analyze the data.
Finally, it should be borne in mind that the personnel responsible for computer analysis
should be consulted very early in the study, i.e., as soon as the questionnaire and
dummy tables are finalized. In fact, the research team needs to work closely with the
computer analyst or statistician throughout the design and the implementation of the
study.
4. Tabulation
The process of placing classified data in tabular form is known as tabulation. A table is
a symmetric arrangement of statistical data in rows and columns. Rows are horizontal
arrangements whereas columns are vertical arrangements. It may be simple, double
or complex depending upon the type of classification.
Rules of Tabulation
There are no hard and fast rules for the tabulation of data, but for constructing a good
table, the following general rules should be observed while tabulating statistical data:
• First of all, there should be a proper title for each table. Table number and title
of the table must be written above the table.
• The table should suit the size of the paper and, therefore, the width of the
column should be decided beforehand.
• Number of columns and rows should neither be too large nor too small.
• Captions, heading or sub-headings of columns and heading and subheadings
of rows must be self-explanatory.
• Each column and row must be given a title. Title of column is called caption
and title of the row is called stub.
• As far as possible figures should be approximated before tabulation. This
would reduce unnecessary details.
• Items should be arranged either in alphabetical, chronological, or geographical
order or according to size.
• The units of measurement under each heading or sub-heading must always be
indicated.
• Footnotes can be written, if necessary, using signs like * etc.
• The sub-total and total of the items on the table must be written.
• Percentages are given in the tables if necessary.
• Ditto marks should not be used in a table because sometimes it creates
confusion.
• A table should be simple and attractive.
Table Number:
When a table or a book contains more than one table, each table must have a number.
The tables are numbered in a sequence so that they may be easily referred to. The
number of the table should be placed in the middle on the top of the table.
Title:
Every table must have a suitable heading. The heading should be short, clear and
convey the purpose of the table. It should contain four types of information:
Besides the main heading, there may be some sub-headings also.
The title should be so worded that it permits one and only one interpretation. Its
letters should be the most prominent of any lettering on the table.
Long titles cannot be read as promptly as short titles, but they may have to be used
for the sake of clarity when necessary. In such a situation a "catch title" may be given
above the main title.
Captions and stubs: Captions refer to the vertical column headings, whereas stubs
refer to the horizontal row headings. Captions generally give the basis of classification
e.g. sex, occupation, meters, kms, etc. It may consist of one or more column headings.
Under a column heading, there may be sub-heads. The captions should be clearly
defined and placed at the middle of the column. It is desirable to number each column
and row for reference and to facilitate comparisons.
Head notes: Head Note is a statement given below the title which clarifies the
contents of the table. It gives an explanation concerning the entire table or main parts
of it, e.g., the units of measurement are usually expressed in a head note such as 'in
hectares', 'in millions', 'in quintals' etc.
Body: The body of the table contains the figures that are to be presented to the
readers. The table must contain sub totals of each separate class of data and the grand
total for the combined classes.
Source: The source is given in case of secondary data. It gives the sources from which
the data were obtained. The source should give the name of the book, page number,
table number etc. from which the data have been collected.
Types of Tabulation:
When the data are tabulated according to two characteristics at a time, it
is said to be double or two-way tabulation.
For example: tabulation of data on the population of the world classified by two
characteristics, such as religion and sex, is an example of double tabulation.
c. Complex Tabulation:
Difference between Classification and Tabulation
In spite of the fact that classification and tabulation are closely related, they differ as
follows:
• Classification: data are divided into groups and sub-groups on the basis of
similarities and dissimilarities.
• Tabulation: the data are listed according to a logical sequence of related
characteristics.
Limitations of Tabulation
Data Analysis:
When the researcher has finished collecting data, data analysis begins, which again
raises numerous issues to be resolved. Importantly, the data should be accurate,
complete, and suitable for further analysis (Sekaran and Bougie, 2010). The
researcher has to record and arrange the data and then apply various descriptive and
inferential statistics or econometric concepts to describe the data and draw
inferences (Saunders et al., 2009). Selecting an inappropriate statistical technique or
econometric model may lead to wrong interpretations. This may in turn result in
failure to solve the research problem and answer the research questions. The
researcher's task is incomplete if the study objectives are not met.
According to Lind et al. (2008), researchers can use a number of descriptive statistics
concepts to summarize data, such as frequency distributions, cumulative frequency
distributions, frequency polygons, histograms, various types of charts (bar charts,
pie charts), scatter diagrams, and box plots.
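As a sketch of the first of these, a frequency distribution and a cumulative frequency distribution can be built with nothing more than Python's standard library. The scores below are invented purely for illustration:

```python
from collections import Counter
from itertools import accumulate

# Hypothetical survey responses (assumed data, for illustration only)
scores = [3, 1, 4, 4, 5, 2, 4, 3, 5, 4, 2, 3]

freq = Counter(scores)                 # frequency distribution
values = sorted(freq)                  # ordered class values
counts = [freq[v] for v in values]     # frequency of each value
cum = list(accumulate(counts))         # cumulative frequency distribution

for v, f, c in zip(values, counts, cum):
    print(f"{v}\t{f}\t{c}")
```

The same counts could then be handed to a charting library to produce the histograms and bar charts the text mentions.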
The researcher can make inferences and draw conclusions based on inferential
statistics. Two main objectives of inferential statistics are (1) to estimate a population
parameter and (2) to test hypotheses or claim about a population parameter (Triola,
2008). The researcher has to select carefully among a variety of inferential statistical
techniques to test hypotheses. For example, based on whether the researcher is
using a sample or a census, he has the choice of using either t-tests or z-tests (Cooper
& Schindler, 2006). In hypothesis testing, depending upon the number and nature of
the samples, the researcher has to decide among a one-sample t-test, a two-sample
(independent or dependent) t-test, or ANOVA/MANOVA (Lind et al., 2008).
The Pearson correlation coefficient is computed as

    r = Σ(X − X̄)(Y − Ȳ) / [(n − 1) · s_x · s_y]

where X̄ and Ȳ are the sample means and s_x and s_y are the sample standard
deviations of X and Y.
The value of r always lies between -1 and +1 inclusive. If it lies near to -1, it shows a
strong negative correlation but if it lies near to +1, it shows a strong positive
correlation.
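A minimal implementation of this formula, with invented paired observations:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation, computed from the formula above."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / (n - 1))  # sample std dev of X
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y) / (n - 1))  # sample std dev of Y
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / ((n - 1) * sx * sy)

# Hypothetical paired observations (illustrative only)
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
r = pearson_r(x, y)   # ≈ 0.775 for these points
```

For these invented points r is about 0.775, a fairly strong positive correlation, and the value necessarily stays within the −1 to +1 bounds described above.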
Quantitative
Advantages
• Allow for a broader study, involving a greater number of subjects and enhancing
the generalizability of the results
• Can allow for greater objectivity and accuracy of results. Generally, quantitative
methods are designed to provide summaries of data that support
generalizations about the phenomenon under study. In order to accomplish this,
quantitative research usually involves a few variables and many cases, and
employs prescribed procedures to ensure validity and reliability
• Using standardized procedures means that the research can be replicated, and
then analyzed and compared with similar studies.
• Personal bias can be avoided by researchers keeping a 'distance' from
participating subjects and employing subjects unknown to them.
Disadvantages
Statistical Analysis
There are three (3) general areas that make up the field of statistics: descriptive
statistics, relational statistics, and inferential statistics.
3. Inferential statistics, also called inductive statistics, fall into one of two
categories: tests for difference of means and tests for statistical significance. The
latter are further subdivided into parametric and nonparametric tests, depending
upon whether the data meet the distributional assumptions the test requires
(parametric) or not (nonparametric). The purpose of difference-of-means tests is
to test hypotheses, and the most common techniques are Z-tests. The most
common parametric tests of significance are the F-test, t-test, ANOVA, and regression.
The most common nonparametric tests of significance are chi-square, the Mann-
Whitney U-test, and the Kruskal-Wallis test.
To summarize:
• Descriptive statistics (central tendency, dispersion)
• Relational statistics (correlation, multiple correlation)
• Inferential tests for difference of means (Z-tests)
• Inferential parametric tests for significance (F-tests, t-tests, ANOVA,
regression)
• Inferential nonparametric tests for significance (chi-square, Mann-Whitney,
Kruskal-Wallis)
Central Tendency
Measures of central tendency are generally known as averages and include such
measures as the mean and median. The mean is calculated by summing the values in
a set of data and dividing the total by the number of values. If the data are arrayed in
order from the highest value to the lowest, the median is the middle value, where half
of the values are higher and the other half are lower.
Dispersion
Measures of spread or dispersion include the range, which is the difference between
the highest and lowest values in the data, and the standard deviation. The latter
measure is more complex to calculate and generally requires a computer or at least a
calculator. The standard deviation is the square root of the variance, which is the
mean of the sum of squared deviations from the mean score.
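Following these definitions (the population form of the variance is shown; divide by n − 1 instead for a sample), a short sketch with made-up scores:

```python
import math

def dispersion(data):
    """Range, variance, and standard deviation as defined above (population form)."""
    n = len(data)
    mean = sum(data) / n
    # variance: the mean of the squared deviations from the mean score
    variance = sum((x - mean) ** 2 for x in data) / n
    return max(data) - min(data), variance, math.sqrt(variance)

data = [4, 8, 6, 5, 3, 4]          # hypothetical scores, illustrative only
rng, var, sd = dispersion(data)    # range 5; variance and std dev follow the formula
```

A statistics package would report the same numbers without the hand computation, but the sketch shows exactly what the standard deviation summarizes.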
The most commonly used measure of central tendency is the mean. To compute
the mean, you add up all the numbers and divide by how many numbers there are.
It is the arithmetic average, but not a halfway point: it is a kind of center that
balances high numbers with low numbers. For this reason, it is most often reported
along with some simple measure of dispersion, such as the range, which is expressed
as the lowest and highest numbers. The median is the number that falls in the middle
of a range of numbers. It is not the average; it is the halfway point: there are always
just as many numbers above the median as below it. In cases where there is an even
set of numbers, you average the two middle numbers. The median is best suited for
data that are ordinal, or ranked, and is also useful when you have extremely low or
high scores. The mode is the most frequently occurring number in a list of numbers.
It is the closest thing to what people mean when they say something is 'average' or
'typical'. The mode does not even have to be a number: it will be a category when
the data are nominal or qualitative. The mode is useful when you have a highly
skewed set of numbers, mostly low or mostly high. You can also have two modes
(a bimodal distribution) when one group of scores is mostly low and the other group
is mostly high, with few in the middle.
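The three measures can be illustrated with Python's standard statistics module. The data are invented; note the even-sized list, so the median is the average of the two middle values:

```python
import statistics

data = [2, 3, 3, 5, 7, 10]          # hypothetical scores, illustrative only
mean = statistics.mean(data)        # 30 / 6 = 5
median = statistics.median(data)    # average of the two middle values: (3 + 5) / 2 = 4
mode = statistics.mode(data)        # most frequently occurring value: 3
```

Here the mean (5) sits above the median (4) because the high score of 10 pulls it upward, which is exactly the skew situation in which the text recommends the median or mode.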
Measures of Dispersion
Correlation
The most widely used relational statistic is correlation, a measure of the strength of
the relationship between two variables, not of causality. Interpretation of a
correlation coefficient does not allow even the slightest hint of causality. The most a
researcher can say is that the variables share something in common, that is, they are
related in some way. The more two things have in common, the more strongly they
are related. There can also be negative relationships, but the most important quality
of a correlation coefficient is not its sign but its absolute value. A correlation of −.58
is stronger than a correlation of .43, even though the former relationship is negative.
The following table lists interpretations for various correlation coefficients:
.0 to .2 very weak
.2 to .4 weak
The most frequently used correlation coefficient in data analysis is the Pearson
product moment correlation.
Z-tests refer to a variety of tests used for inferential purposes; they are not to be
confused with z-scores. Z-tests come in a variety of forms, the most popular being:
(1) tests of the significance of correlation coefficients; and (2) tests of the
equivalence of a sample proportion to a population proportion, for example whether
the number of minorities in your sample is proportionate to the number in the
population. Z-tests essentially check for linearity and normality, allow some
rudimentary hypothesis testing, and help guard against Type I and Type II errors.
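A sketch of the second form mentioned above, testing a sample proportion against a population proportion. The counts and the population share are invented for illustration:

```python
import math

def one_proportion_z(successes, n, p0):
    """One-sample z-test: is the sample proportion consistent with p0?"""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)   # standard error under the null hypothesis
    z = (p_hat - p0) / se
    # two-tailed p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented example: 18 minority respondents in a sample of 200,
# tested against an assumed population share of 12%
z, p = one_proportion_z(18, 200, 0.12)
```

For these invented numbers z is about −1.31; since |z| < 1.96, the difference is not significant at the .05 level (two-tailed).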
F-tests are much more powerful, as they allow one to assess the variance in one
variable accounted for by variance in another. In this sense they are very much like
the coefficient of determination. One really needs a full statistics course to gain a
thorough understanding of F-tests, so suffice it to say here that you find them most
commonly with regression and ANOVA techniques. F-tests require interpretation
using a table of critical values.
T-tests are like little F-tests, and similar to Z-tests. The t-test is appropriate for
smaller samples and is relatively easy to interpret, since by rule of thumb any
calculated t over 2.0 is significant. T-tests can be one-sample or two-sample, and
one-tailed or two-tailed. You use a two-tailed test if there is any possibility of
bi-directionality in the relationship between your variables.
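A sketch of the pooled-variance, independent two-sample t-test; the two small groups are invented for illustration:

```python
import math

def two_sample_t(a, b):
    """Independent two-sample t-test with pooled variance (equal-variance form)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variance of group a
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)   # sample variance of group b
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)   # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2                            # t statistic and degrees of freedom

# Invented groups, illustrative only
t, df = two_sample_t([1, 2, 3], [4, 5, 6])
```

Here t is about −3.67 with 4 degrees of freedom; by the rule of thumb above (|t| over 2.0), the difference between the group means is significant.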
A typical ANOVA summary table looks like this:

Source    SS      df    MS        F        P
Between   1800     1    1800      10.80    <.05
Within    1000     6    166.67
Total     2800     7
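A one-way F statistic of the kind summarized in such a table can be computed by hand; the two groups below are invented for illustration:

```python
def one_way_anova(*groups):
    """One-way ANOVA by hand: (SS_between, SS_within, df_between, df_within, F)."""
    all_data = [x for g in groups for x in g]
    grand_mean = sum(all_data) / len(all_data)
    # between-groups sum of squares: group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-groups sum of squares: scores around their own group mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    dfb, dfw = len(groups) - 1, len(all_data) - len(groups)
    f = (ss_between / dfb) / (ss_within / dfw)   # ratio of the two mean squares
    return ss_between, ss_within, dfb, dfw, f

# Invented groups, illustrative only
ssb, ssw, dfb, dfw, f = one_way_anova([1, 2, 3], [4, 5, 6])
```

For these groups F = 13.5 with (1, 4) degrees of freedom; as the text notes, the calculated F is then interpreted against a table of critical values.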
Regression
According to Triola (2008), the following cautions should be observed when using
regression equations:
Regression is the closest thing to estimating causality in data analysis, because it
predicts how well the numbers fit a projected straight line. There are also advanced
regression techniques for curvilinear estimation. The most common form, however,
is linear regression, which uses the least-squares method to find the equation of the
line that best fits the data, representing what is called the regression of y on x. The
procedure is similar to finding a minimum in calculus: instead of finding a perfect
number, however, one is interested in finding the perfect line, such that there is one
and only one line (represented by an equation) that best fits the data, regardless of
how scattered the data points are. The slope of the line provides information about
predicted directionality, and the estimated coefficients (or beta weights) on the
independent variables indicate the strength of their relationship with the dependent
variable. Regression also produces a number called R-squared, a conservative yet
powerful coefficient of determination. Interpretation of R-squared is somewhat
controversial, but it generally uses the same strength table as correlation
coefficients, and at a minimum researchers say it represents 'variance explained.'
Chi-Square
A technique designed for less than interval level data is chi-square (pronounced kye-
square), and the most common forms of it are the chi-square test for contingency and
the chi-square test for independence. Other varieties exist, such as Cramer's V,
Proportional Reduction in Error (PRE) statistics, Yule's Q, and Phi. Essentially, all
chi-square type tests involve arranging the frequency distribution of the data in what
is called a contingency table of rows and columns. From the marginals (the row and
column totals), expected frequencies, the cell counts predicted under the null
hypothesis, are computed; these are subtracted from the observed frequencies
(observed minus expected), and the squared differences, expressed relative to the
expected frequencies, are summed into the chi-square statistic. Chi-square tests are
frequently seen in the literature, can easily be done by hand, and are run
automatically by computers whenever a contingency table is requested.
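A sketch of the observed-minus-expected computation for a contingency table; the 2 x 2 counts are invented for illustration:

```python
def chi_square(table):
    """Chi-square statistic for a contingency table (observed minus expected)."""
    row_totals = [sum(row) for row in table]             # the marginals
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # expected frequency under the null hypothesis of independence
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (obs - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

# Invented 2 x 2 table of counts, illustrative only
chi2, df = chi_square([[10, 20], [20, 10]])
```

Here chi-square is about 6.67 with 1 degree of freedom, which exceeds the .05 critical value of 3.84, so the two classifications would be judged related.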
The Mann-Whitney U test is similar to chi-square and the t-test, and is used
whenever you have ranked, ordinal-level measurement. As a significance test it
requires two samples; you rank the pooled scores (say, from 1 to 15), noting the
ties. A z-table is then used to compare the calculated and tabled values of U. The
interpretation is usually that any significant difference is due to the variables you
have selected.
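A sketch of the rank-sum computation of U, with average ranks for ties; the two small samples are invented:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U from rank sums (tied values share the mean rank)."""
    sorted_vals = sorted(a + b)

    def rank(v):
        # average 1-based rank of value v in the pooled, sorted data
        idxs = [i + 1 for i, x in enumerate(sorted_vals) if x == v]
        return sum(idxs) / len(idxs)

    n1, n2 = len(a), len(b)
    r1 = sum(rank(v) for v in a)             # rank sum for the first sample
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - r1
    u2 = n1 * n2 - u1
    return min(u1, u2)                       # the smaller U is compared with the table

# Invented ordinal samples, illustrative only
u = mann_whitney_u([1, 4, 5], [2, 3, 6])
```

The smaller U is compared with a table of critical values (or, as the text notes, with a z-table via a normal approximation for larger samples).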
The Kruskal-Wallis H test is similar to ANOVA and the F-test, and also uses ordinal
data from multiple samples. It is most commonly seen when raters are used to judge
research subjects or research content. Rank-based calculations are compared to a
chi-square table, and the interpretation is usually that there are some significant
differences and grounds for rejecting the null hypothesis.
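A sketch of the H computation from rank sums (no tie correction, for simplicity); the three groups are invented for illustration:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H from rank sums over the pooled data (no tie correction)."""
    all_vals = sorted(v for g in groups for v in g)
    n = len(all_vals)

    def rank(v):
        # average 1-based rank of value v in the pooled, sorted data
        idxs = [i + 1 for i, x in enumerate(all_vals) if x == v]
        return sum(idxs) / len(idxs)

    # H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)
    h = 12 / (n * (n + 1)) * sum(
        sum(rank(v) for v in g) ** 2 / len(g) for g in groups
    )
    return h - 3 * (n + 1)

# Invented rating groups, illustrative only
h = kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
```

For these groups H = 7.2; compared against a chi-square table with k − 1 = 2 degrees of freedom (critical value 5.99 at .05), the differences among the groups are significant.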
Presenting Findings
You can present the results of your statistical analyses in the form of tables or graphs.
Spreadsheet programs such as Excel can perform most basic statistical analyses, as
well as present the findings in tables or graphs. Excel can perform a variety of
statistical procedures, both basic and advanced. Spreadsheet programs, however, are
not specifically designed for more complicated analyses. Many scientists and
university researchers use specialized statistical software packages such as SPSS and
SAS to analyze data.
Although tabulation is a very good technique for presenting data, diagrams are a
more advanced technique for representing it. A layman cannot easily understand
tabulated data, but with a single glance at a diagram he gets a complete picture of
the data presented. According to Moroney, diagrams register a meaningful
impression almost before we think.
• Diagrams give a very clear picture of data. Even a layman can understand it
very easily and in a short time.
• We can make comparison between different samples very easily. We don't
have to use any statistical technique further to compare.
• This technique can be used universally at any place and at any time. This
technique is used almost in all the subjects and other various fields.
• Diagrams also have impressive value: tabulated data makes less of an
impression than a diagram, and a common man is easily impressed by a good
diagram.
• This technique can be used for the numerical type of statistical analysis, e.g.,
to locate Mean, Mode, Median or other statistical values.
• It not only saves time and energy but is also economical: not much money is
needed to prepare even good diagrams.
• These give us much more information as compared to tabulation. The
technique of tabulation has its own limits.
• Diagrams are easily remembered; they leave a more lasting impression than
other presentation techniques.
• Data can be condensed with diagrams. A simple diagram can present what
cannot be conveyed even in 10,000 words.
General Guidelines for Diagrammatic Presentation
• The diagram should be properly drawn at the outset. The pith and substance
of the subject matter must be made clear under a broad heading that properly
conveys the purpose of the diagram.
• The size of the scale should neither be too big nor too small. If it is too big, it
may look ugly. If it is too small, it may not convey the meaning. In each
diagram, the size of the paper must be taken note-of. It will help to determine
the size of the diagram.
• To clarify ambiguities, notes should be added at the foot of the diagram;
these give the reader insight into the diagram.
• Diagrams should be neat and clean. There should be no vagueness or
overwriting on the diagram.
• The diagram should be simple, conveying its meaning clearly and easily at
first sight.
• The scale must be presented along with the diagram.
• It must be Self-Explanatory. It must indicate the nature, place and a source of
data presented.
• Different shades, colors can be used to make diagrams more easily
understandable.
• Vertical diagrams should generally be preferred to horizontal ones.
• It must be accurate. Accuracy must not be sacrificed to make the diagram
attractive or impressive.
Types of Diagrams
(a) Line Diagrams
In these diagrams only a line is drawn to represent each value of the variable. The
lines may be vertical or horizontal, and they are drawn so that their length is
proportional to the value of the item, so that comparison can be done easily.
(b) Simple Bar Diagrams
Like line diagrams, these figures are used where only a single dimension, length,
can present the data. The procedure is almost the same, except that the thickness
(breadth) of the bars is also set. Bars can be drawn either vertically or horizontally.
The breadth of the bars should be equal, and likewise the distance between them;
both should be chosen according to the space available on the paper.
(c) Multiple Bar Diagram
This diagram is used when we have to compare two or more variables. In the case
of 2 variables, a pair of bars is drawn; for 3 variables, triple bars; and so on. The bars
are drawn on the same proportionate basis as simple bars, the same shade is given
to the same item, and the distance between groups of bars is kept constant.
(d) Sub-divided Bar Diagram
The data presented by a multiple bar diagram can also be presented by this diagram.
In this case, the different components for a period are stacked on a single bar, as
shown in the following examples. The components must be kept in the same order
in each bar. This diagram is most effective when the number of components is small,
i.e. 3 to 5.
(e) Percentage Bar Diagram
Like the sub-divided bar diagram, the data for one particular period or variable are
put on a single bar, but in terms of percentages. The components are kept in the
same order in each bar for easy comparison.
(f) Deviation Bar Diagram
In this case the diagram extends to both sides of the base line, i.e. to the left and
right, or above and below.
(g) Broken Bar Diagram
This diagram is used when the value of some variable is very high or very low
compared to the others. In this case the bars for the extreme items may be shown
broken.
Use of Software
The advent of many software packages, particularly those supporting regression and
other econometric techniques, such as SPSS, SAS, STATA, MINITAB, ET, LIMDEP,
SHAZAM, NVivo, Microfit and others, has made it easy for the researcher to work on
the analysis part of his research project (Gujarati and Porter, 2008).
CONCLUSION
DISCUSSION QUESTIONS
1. What is the role of a pilot study in research?
2. What aspects of your research methodology can be evaluated during pre-testing?
3. Which components should be assessed during the pre-test?
4. What do you mean by data processing?
5. What are the different types of closely related operations in data processing?
6. How do you define data classification?
7. Which types of data occur at the data processing stage?
8. What do you mean by classification?
9. What are the different types of classification?
10. Explain the different ways of classification.
11. What do you mean by coding?
12. Explain tabulation.
13. What are the objectives and importance of tabulation?
14. Discuss the role of tabulation.
15. Which are the different parts of a table?
16. Explain the different types of tabulation.
17. Differentiate between classification and tabulation.
18. What are the advantages and disadvantages of tabulation?
19. Define statistics.
20. Differentiate the various types of statistical applications in research.
21. Explain central tendency.
22. What do you mean by measures of dispersion?
23. Explain correlation.
24. How do you interpret various correlation coefficients?
25. Explain the Z-test, F-test and t-test.
26. Evaluate the role of regression and chi-square in research.
27. What is the role of the Kruskal-Wallis test in research analysis?
28. How do you present data?
29. Discuss the role of diagrams in the presentation of data.
30. What are the general guidelines for diagrammatic presentation?
31. What are the different types of diagrams?
32. Differentiate between simple bar diagrams and multiple bar diagrams.
33. Discuss the limitations of the diagrammatic presentation of data.
Module 13
REPORT WRITING
Learning objectives:
Introduction
The final stage of any research process is the compilation of all the data and findings
and their submission. This stage is called report writing and finalization. The
researcher needs to submit a report to his/her supervisors, sponsors, or institution,
who evaluate the report and take further action based on the findings incorporated.
A research report may be centered on practical work, on research by reading and
observation, or on a study of an institution or an industrial/workplace situation. The
objective of the report is to write clearly and concisely about the research topic so
that the reader can effortlessly comprehend the purpose and findings of the
research. A research report can be qualitative or quantitative in its content and
nature. This section of the book elaborates the concept, types, process, and method
of report writing.
Characteristics
A research report has the major function of presenting the problem studied by
the researcher. The report should specifically mention the methods and techniques
used for collecting and analyzing data. It serves as basic reference material for
future research by scholars and professionals. A research report is the means for
judging the quality of the work done by the researcher, and it acts as the means for
evaluating the researcher's ability and competence to do research. Further, the
report provides a factual base for formulating policies and strategies related to the
subject matter, and it provides systematic knowledge of the problem.
The writing style is designed to facilitate easy and rapid reading and
understanding of the research findings and recommendations.
Interim Report
When there is a time lag between data collection and the presentation of results, an
interim report needs to be prepared, to inform the sponsors about the progress of
the project/research. Interim reports keep the agency's interest alive and remove
misunderstandings about the delay. In an interim report the researcher provides a
narration of what has been done so far; it presents the first results of the analysis
rather than the final outcomes (summary of findings).
The typical structure of a research report is:

Table of Contents (not always required): List of major sections and headings with
page numbers
Abstract/Synopsis: Concise summary of main findings
Introduction: Why and what you researched
Literature Review (sometimes included in the Introduction): Other relevant research
in this area
Methodology: What you did and how you did it
Results: What you found
Discussion: Relevance of your results and how they fit with other research in the area
Conclusion: Summary of results/findings
Recommendations (sometimes included in the Conclusion): What needs to be done
as a result of your findings
References or Bibliography: All references used in your report or referred to for
background information
Appendices: Any additional material which will add to your report

Summary Report
The summary report is for the general public. It addresses general interests and is
written in non-technical, simple language, with liberal use of pictorial charts. It is a
kind of short report of two or three pages. This kind of research report consists of
the contents, a brief reference to the objectives of the study, the major findings, and
their implications.
RESEARCH REPORT: CONTENT OF INDIVIDUAL SECTIONS
Title of Report:
Make sure this is clear and indicates exactly what you are researching.
Table of Contents:
List all sections, sub-headings, tables/graphs and appendices, and give page numbers.
If you have many tables or figures, it is helpful to list these as well, in a
table-of-contents format with page numbers.
If abbreviations or acronyms are used in the report, these should be stated in full in
the text the first time they are mentioned. If there are many, they should be listed in
alphabetical order as well. The list can be placed before the first chapter of the report.
The table of contents and lists of tables, figures, abbreviations should be prepared
last, as only then can you include the page numbers of all chapters and sub-sections
in the table of contents. Then you can also finalize the numbering of figures and tables
and include all abbreviations.
Introduction
The introduction states the purpose of your report; the thesis statement will be
useful here. Background information may include a brief review of the literature
already available on the topic, so that you are able to 'place' your research in the
field. Also include some brief details of your methods and an outline of the structure
of the report.
Literature Review
If asked to write a separate literature review, you must carefully structure your
findings. It may be useful to use a chronological format, discussing the research from
the earliest to the latest and placing your own research appropriately in the
chronology. Alternatively, you could write in a thematic way, outlining the various
themes you discovered in the research on the topic. Again, you will need to state
where your research fits.
Methodology
Here you clearly outline what methodology you used in your research, i.e. what you
did and how you did it. It must be written clearly so that it would be easy for another
researcher to duplicate your research if they wished to. It is usually written in a
‘passive’ voice (e.g. the participants were asked to fill in the questionnaire attached)
rather than an ‘active’ voice (e.g. I asked the participants to fill in the questionnaire
attached). Clearly reference any material you have used from other sources. Clearly
label and number any diagrams, charts, and graphs. Ensure that they are relevant to
the research and add substance to the text rather than just duplicating what you have
said. You do not include or discuss the results here. The methodology section should
include a description of:
Results
This is where you indicate what you found in your research. You give the results of
your research, but do not interpret them. The systematic presentation of your findings
in relation to the research objectives is the crucial part of your report.
Discussion
The crux of the report is the analysis and interpretation of the results. This is where
you discuss the relevance of your results and how your findings fit with other research
in the area. It will relate back to your literature review and your introductory thesis
statement. It answers many questions like, What do the results mean? How do they
relate to the objectives of the project? To what extent have they resolved the
problem? Because the "Results" and "Discussion" sections are interrelated, they can
often be combined as one section.
Recommendations
This includes suggestions for what needs to be done as a result of your findings.
Recommendations are usually listed in order of priority. In making recommendations,
use not only the findings of your study but also supportive information from other
sources. The recommendations should take into consideration the local
characteristics of the setting under study, constraints, feasibility, and the usefulness
of the proposed solutions, and they should be discussed with all concerned before
being finalized.
Conclusion
References or Bibliography
This includes all references used in your report or referred to for background
information.
Appendix
These should add extra information to the report. If you include appendices, they
must be referred to in the body of the report and must have a clear purpose for being
included. Each appendix must be named and numbered.
CONCLUSION
A research report typically includes an introduction, methodology, analysis and
results, discussion, and references. A researcher should follow this flow in reporting
the research materials to the stakeholders. This book provides an overall guide to
writing reports about the scientific research a researcher has performed.
DISCUSSION QUESTIONS
GLOSSARY
Action research - occurs when investigators plan a field experiment, collect the
data, and feed it back to the activists (i.e. participants), both as feedback and as a
way of shaping the next stage of the experiment.
After-only design - The after-only design is achieved by changing the independent
variable and, after some period of time, measuring the dependent variable. It is
diagrammed as follows: X O1 where X represents the change in the independent
variable (putting all of the apples in end-aisle displays) and the distance between X
and O represents the passage of some time period.
Age-specific rate - the rate or frequency of occurrence of an event in a defined age group.
Analysis - The breakdown of something that is complex into smaller parts in such a
way that leads to a better understanding of the whole.
Analysis of variance (ANOVA) - Significance test for comparing the means of a
quantitative variable between three or more groups (an extension of the independent
samples t-test).
Analytic induction - use of constant comparison specifically in developing hypotheses,
which are then tested in further data collection and analysis.
Before–after with control group - The before–after with control group design may be
achieved by randomly dividing subjects of the experiment (in this case, supermarket)
into two groups: the control group and the experimental group.
Bivariate statistics - Descriptive statistics for the analysis of the association between
two variables (e.g. contingency tables, correlation).
Brand-switching studies - Studies examining how many consumers switched brands
are known as brand-switching studies.
Case - A single unit in a study (e.g. a person or setting, such as a clinic, hospital).
Case analysis - By case analysis, we refer to a review of available information about a
former situation(s) that has some similarities to the present research problem.
Case study - a research method which focuses on the circumstances, dynamics and
complexity of a single case, or a small number of cases.
Categorical variable - Variable whose values represent different categories or classes
of the same feature.
Causal hypothesis - A statement that it is predicted that one phenomenon will be the
result of one or more other phenomena that precede it in time.
Causal relationships - Observed changes ('the effect’) in one variable are the result of
prior changes in another.
Causality - Causality may be thought of as understanding a phenomenon in terms of
conditional statements of the form “If x, then y.”
Central tendency - (a) Mean: the arithmetic mean, or average, is a measure of central
tendency in a population or sample. The mean is defined as the sum of the values
divided by the total number of cases involved. (b) Median: this is the middle value of
the observations when listed in ascending order; it bisects the observations (i.e. the
point below which 50 per cent of the observations fall). (c) Mode: a measure of central
tendency based on the most common value in the distribution (i.e. the value of X with
the highest frequency).
Classify - Grouping things together based on specific characteristics.
Closed question - the question is followed by predetermined response choices into
which the respondent’s reply must be placed.
Cluster - A sample unit which consists of a group of elements, for example a school.
Cluster sampling - Probability sampling involves the selection of groupings (clusters)
and selecting the sample units from the clusters.
Coding - The assignation of (usually numerical) codes for each category of each
variable.
Compare - To examine the different and/or similar characteristics of things or events.
Concept - An abstraction representing an object or phenomenon.
Confidence interval - A confidence interval calculated from a sample is interpreted as
a range of values which contains the true population value with the probability
specified.
Confounding factors - An extraneous factor (a factor other than the variables under
study) that is not controlled for and that distorts the results. An extraneous factor
confounds only when it is related to both the dependent and the independent
variables under investigation; it makes them appear connected when their
association is, in fact, spurious.
Consensus methods - Include Delphi and nominal group techniques and consensus
development conferences. They provide a way of synthesizing information and
dealing with conflicting evidence, with the aim of determining extent of agreement
within a selected group.
Content analysis - A form of analysis which usually counts and reports the frequency
of concepts/words/behaviours held within the data. The researcher develops brief
descriptions of the themes or meanings, called codes. Similar codes may at a later
stage in the analysis be grouped together to form categories.
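The counting side of content analysis can be sketched in a few lines of Python; the transcript text is hypothetical and words stand in for codes:

```python
from collections import Counter

transcript = "green products cost more but green buyers accept the cost"

# Count the frequency of each word (a very crude unit of analysis)
frequencies = Counter(transcript.lower().split())
print(frequencies.most_common(2))  # [('green', 2), ('cost', 2)]
```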
Continuous panels - Continuous panels ask panel members the same questions on
each panel measurement.
Control - The group or subject that is used as a standard for comparison in an
experiment.
Control group - By control group, we mean a group whose subjects have not been
exposed to the change in the independent variable.
Control variable - A variable used to test the possibility that an empirically observed
relationship between an independent and dependent variable is spurious.
Controlled test markets - Controlled test markets are conducted by outside research
firms that guarantee distribution of the product through pre-specified types and
numbers of distributors.
Correlation - Linear association between two quantitative or ordinal variables,
measured by a correlation coefficient.
Correlation coefficient - Measure of the linear association between quantitative or
ordinal variables.
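Pearson's r, the most common such coefficient, can be computed directly from its definition (the sum of products of deviations divided by the product of the root sums of squares); the data below are hypothetical:

```python
import math

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]  # y = 2x, so the association is perfectly linear

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Numerator: sum of products of deviations from the means.
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
# Denominator: product of the root sums of squared deviations.
sx = math.sqrt(sum((a - mx) ** 2 for a in x))
sy = math.sqrt(sum((b - my) ** 2 for b in y))

r = cov / (sx * sy)
print(round(r, 6))  # 1.0
```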
Critical thinking - Thinking that uses specific sets of skills to carefully analyze problems
step-by-step; scientific methods are one type of critical thinking.
Cross-sectional - Study type of observational study with subjects being observed on
just one occasion.
Cross-sectional studies - Cross-sectional studies measure units from a sample of the
population at only one point in time.
Data - Information, measurements and materials gathered from observations that are
used to help answer questions.
Data analysis - A systematic process of working with the data to provide an
understanding of the research participant’s experiences. While there are several
methods of qualitative analysis that can be used, the aim is always to provide an
understanding through the researcher’s interpretation of the data.
Data cleaning - After the data have been entered onto the computer they are checked
to detect and correct errors and inconsistent codes.
Deduction - A theoretical or mental process of reasoning by which the investigator
starts off with an idea, and develops a theory and hypothesis from it; then phenomena
are assessed in order to determine whether the theory is consistent with the
observations.
Degrees of freedom - Measure used in significance tests and other statistical
procedures, which reflects the sample size(s) of the study group(s) used in an
investigation.
Descriptive research - Descriptive research is undertaken to describe answers to
questions of who, what, where, when, and how.
Discontinuous panels - Discontinuous panels vary questions from one panel
measurement to the next.
Discourse analysis - The linguistic analysis of naturally occurring connected speech or
written discourse. It is also concerned with language use in social contexts, and in
particular with interaction or dialogue between speakers.
Dispersion - A summary figure for the spread of cases (measures include quartiles,
percentiles, deciles, the standard deviation and the range).
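A few of these measures, computed with Python's statistics module on hypothetical data (statistics.quantiles requires Python 3.8 or later):

```python
import statistics

data = [4, 8, 15, 16, 23, 42]

value_range = max(data) - min(data)           # the range
sd = statistics.stdev(data)                   # sample standard deviation
q1, q2, q3 = statistics.quantiles(data, n=4)  # quartiles

print(value_range, round(sd, 2), (q1, q2, q3))  # 38 13.49 (7.0, 15.5, 27.75)
```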
Electronic test markets - Electronic test markets are those in which a panel of
consumers has agreed to carry identification cards that each consumer presents when
buying goods and services.
Empirical - Based on observation.
Ethics - Research ethics relate to the standards that should be upheld to guard
participants from harm or risk. Ethical considerations should be made at each stage of
the research design and include informed consent, voluntary participation and respect
for confidentiality.
Ethnography - A qualitative research methodology that enables a detailed description
and interpretation of a cultural or social group to be generated. Data collection is
primarily through participant observation or through one-to-one interviews. The
importance of gathering data on context is stressed, as only in this way can an
understanding of social processes and the behavior that comes from them be
developed.
Experience surveys - Experience surveys refer to gathering information from those
thought to be knowledgeable on the issues relevant to the research problem.
Experiment - An experiment is defined as manipulating an independent variable to
see how it affects a dependent variable, while also controlling the effects of additional
extraneous variables.
Experimental design - An experimental design is a procedure for devising an
experimental setting such that a change in a dependent variable may be attributed
solely to the change in an independent variable.
Experimental error - Incorrect data in an experiment that may result from a variety of
causes.
Experimental group - The group that is exposed to the independent variable
(intervention) in experimental research.
Exploratory research - Exploratory research is most commonly unstructured, informal
research that is undertaken to gain background information about the general nature
of the research problem.
External validity - External validity refers to the extent that the relationship observed
between the independent and dependent variables during the experiment is
generalizable to the “real world.”
Extraneous variables - Extraneous variables are those that may have some effect on
a dependent variable but yet are not independent variables.
Factor analysis - Multivariate method that analyses correlations between sets of
observed measurements, with the view to estimating the number of different factors
which explain these correlations.
Field experiments - Field experiments are those in which the independent variables
are manipulated and the measurements of the dependent variable are made on test
units in their natural setting.
Field notes - A collective term for records of observation, talk, interview transcripts,
or documentary sources. Typically includes a field diary, which provides a
chronological record of events and the development of the research, as well as the
researcher’s own reactions to, feelings about, and opinions of the research process.
Field research - Research which takes place in a natural setting.
Focus groups - An increasingly popular method of conducting exploratory research is
through focus groups, which are small groups of people brought together and guided
by a moderator through an unstructured, spontaneous discussion for the purpose of
gaining information relevant to the research problem.
Grounded theory - A qualitative research methodology with systematic guides for the
collection and analysis of data, that aims to generate a theory that is ‘grounded in’ or
formed from the data and is based on inductive reasoning. This contrasts with other
approaches that stop at the point of describing the participants’ experiences.
Heterogeneity (or lack of homogeneity) - Term usually employed in the context of
meta-analyses, when the results or estimates from individual studies appear to have
different magnitudes. Tests of heterogeneity are available to assess the extent of this
variation.
Hypothesis - A tentative solution to a research question, expressed in the form of a
prediction about the relationship between the dependent and independent variables.
Induction - Begins with the observation and measurement of phenomena and then
develops ideas and general theories about the universe of interest.
Inference - A logical explanation or conclusion based on observations and/or facts.
Internal validity - Internal validity is concerned with the extent to which the change
in the dependent variable was actually due to the independent variable.
Interpretative - Exploration of the human experiential interpretation of any observed
phenomena. Enables researchers to gain a better understanding of the underlying
processes that may influence behavior.
Interviewing - A data collection strategy in which participants are asked to talk about
the area under consideration.
Iteration (an iterative process) - Relates to the process of repeatedly returning to the
source of the data to ensure that the understandings truly come from the data. In
practice this means a constant cycle of collecting data, carrying out a preliminary
analysis, using that analysis to guide the next piece of data collection, and continuing
this pattern until the data collection is complete.
Longitudinal studies - Longitudinal studies repeatedly measure the same sample units
of a population over a period of time.
Market tracking studies - Market tracking studies are those that measure some
variable(s) of interest, such as market share or unit sales, over time.
Measure - To compare the characteristics of something (such as mass, length, volume)
with a standard (such as grams, meters, liters).
Methods - An ordered series of steps followed to help answer a question.
Natural setting (naturalistic research) - The normal environment of the research
participants for the issues being researched.
Observation - A strategy for data collection, involving the process of watching
participants directly in the natural setting. Observation can be participative (i.e. taking
part in the activity) or non-participative (the researcher watches from the outside). It
is noticing objects or events using the five senses.
One-group, before–after design - The one-group, before–after design is achieved by
first measuring the dependent variable, then changing the independent variable, and,
finally, taking a second measurement of the dependent variable.
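A minimal numeric sketch of this design, with hypothetical pretest and posttest scores (note that nothing here controls for extraneous variables, which is precisely the design's weakness):

```python
pretest = [50, 55, 48, 60]   # dependent variable measured before the change
posttest = [58, 60, 55, 66]  # dependent variable measured after the change

# The apparent effect is the mean pretest-to-posttest difference
differences = [post - pre for pre, post in zip(pretest, posttest)]
effect = sum(differences) / len(differences)
print(effect)  # 6.5
```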
Panels - Panels represent sample units who have agreed to answer questions at
periodic intervals.
Phenomenology - An approach that allows the meaning of having experienced the
phenomenon under investigation to be described, as opposed to a description of what
the experience was. This approach allows the reader to have a better understanding
of what it was like to have experienced a particular phenomenon.
Post-test - When a measurement of the dependent variable is taken after changing
the independent variable, the measurement is sometimes called a posttest.
Prediction - A statement made about the future outcome of an experiment based on
past experiences or observations.
Pretest - When a measurement of the dependent variable is taken prior to changing
the independent variable, the measurement is sometimes called a pretest.
Procedure - An ordered series of steps followed to help answer a question.
Projective techniques - Projective techniques, borrowed from the field of clinical
psychology, seek to explore hidden consumer motives for buying goods and services
by asking participants to project themselves into a situation and then to respond to
specific questions regarding the situation.
Qualitative data - Data that is based on observable characteristics of things or events
that can be collected using the five senses. Example: “The juice tastes sweet to me.”
Quantitative data - Data that is based on measurable characteristics of things or
events such as mass, volume, length, and quantity. Example: There is one liter of juice
in the carton.
Quasi-experimental designs - Designs that do not properly control for the effects of
extraneous variables on our dependent variable are known as quasi-experimental
designs.
Reflexivity - The open acknowledgement by the researcher of the central role they
play in the research process. A reflexive approach considers and makes explicit the
effect the researcher may have had on the research findings.
Replication - Repeated trials on more than one subject, as well as controls, in
experimental tests.
Representativeness - One of three criteria useful for selecting test market cities:
representativeness, degree of isolation, and ability to control distribution and
promotion.
Research design - After thoroughly considering the problem and research objectives,
researchers select a research design, which is a set of advance decisions that makes
up the master plan specifying the methods and procedures for collecting and
analyzing the needed information.
Respondent validation - Refers to seeking the participants’ views of the initial
interpretations of the data. The aim is not to ensure that the researcher and
participants are in agreement as to the meaning of the data, but that the researcher
has the opportunity to incorporate the participants’ responses into the analysis.
Sample surveys - Sample surveys are cross-sectional studies whose samples are drawn
in such a way as to be representative of a specific population.
Science - The study of nature and the physical world using the methods of science, or
a “special method of finding things out”.
Scientific method(s) - A process of critical thinking that uses observations and
experiments to investigate testable predictions about the physical universe.
Scientific theory - A causal explanation for generalized patterns in nature that is
supported by much scientific evidence based on data collected using scientific
methods.
Scientist - A person who “does” science and uses the methods of science.
Secondary data analysis - By secondary data analysis, we refer to the process of
searching for and interpreting existing information relevant to the research
objectives.
Triangulation - Process by which the area under investigation is looked at from
different (two or more) perspectives. These can include two or more methods, sample
groups or investigators. Used to ensure that the understanding of an area is as
complete as possible or to confirm interpretation through the comparison of different
data sources.
True experimental design - A “true” experimental design is one that truly isolates the
effects of the independent variable on the dependent variable while controlling for
effects of any extraneous variables.
Variable - Something that can affect a system being examined and is therefore a factor
that may change in an experiment.
Variable, dependent - A factor that responds to changes in other variables in an
experiment; “it changed” variables.
Variable, independent - A factor that can be changed or manipulated in an
experiment by the scientist; “you change it” variables.
Variation - Slight differences among objects, organisms or events that are all of the
same basic type.
BIBLIOGRAPHY/REFERENCES
Agarwal, Y.P. (1986). Statistical Methods: Concepts, Applications and Computations.
New Delhi: Sterling Publication.
Fairfax County Department of Systems Management for Human Services (2003).
Overview of Sampling Procedures, information brochure.
http://www.fhi.org/NR/rdonlyres/etdgabwszyyk2hnkqosvl2mieeatan6rrj4l4lfuv52dlbt7knrewo6qfzosuzq7raxy63chxkz32c/Chapter6.pdf.
Alexander Jakob (2001). On the Triangulation of Quantitative and Qualitative Data.
Typological Social Research, 2(1). Citation in: Triangulation: How and Why
Triangulated Research Can Help Grow Market Share and Profitability A white
paper by Sharon Bailey-Beckett & Gayle Turner Beckett Advisors, Inc.
Allchin, D. (2001). Error Types. Perspectives on Science, 9(1), 38–58.
Aslam, F., Qayyum, M.A., Mahmud, H., Qasim, R., & Haque, I.U. (2004). Attitudes and
practices of postgraduate medical trainees towards research--a snapshot from
Faisalabad. Journal of Pakistan Medical Association, 54: 534-6.
Babbie, Earl (1989). The Practice of Social Research (5th ed.). Belmont, CA:
Wadsworth.
Babbie, Earl (1990). Survey Research Methods (2nd ed.). Belmont, CA: Wadsworth
Publishing Company.
Barrow, J. (1991). Theories of Everything. Oxford University Press.
Bausell, R.B. (1991). Advanced research methodology: an annotated guide to sources.
Metuchen, N.J.: Scarecrow Press.
Bech-Larsen, T. (1996). Danish consumers’ attitudes to the functional and
environmental characteristics of food packaging. Journal of Consumer Policy,
19, 339-63.
Bell, A. (1994) Climate of opinion: public and media discourse on the global
environment. Discourse and Society, 5, 33–64.
Bell, J. (1993). Doing your research project: a guide for first-time researchers in
education and social science (2nd ed.). Buckingham; Philadelphia: Open
University Press.
Berg, B.L. (1995). Qualitative research methods for the social sciences (2nd ed.).
Boston: Allyn and Bacon.
Berry, W.D., & Lewis-Beck, M.S. (Eds.). (1986). New tools for social scientists:
advances and applications in research methods. Beverly Hills: Sage.
Best John W., & Kahn James V. (2010). Research in Education. New Delhi: PHI Learning
Pvt.Ltd.
Hancock, Beverley (2002). An Introduction to Qualitative Research. Produced by Trent
Focus for Research and Development in Primary Health Care. URL:
http://faculty.cbu.ca/pmacintyre/course_pages/MBA603/MBA603_files/IntroQualitativeResearch.pdf
Bracht, G. H., & Glass, G. V. (1968). The external validity of experiments. American
Education Research Journal, 5, 437-474.
Brain M. (2004). Type I error, Type II error, Consumer's Risk and Producer's Risk.
Citation in URL: http://www.brainmass.com/homework-help/statistics/alltopics/108614.
Bronowski (1978), Diederich (1967), and Whaley & Surratt (1967). Taken from The
Kansas School Naturalist, 35(4). (Retrieved April 2012).
Brownlee, K. A. (1960). Statistical theory and methodology in science and
engineering. New York: Wiley, 1960.
Burton, N., Brundrett, M. & Jones, M. (2008). Doing Your Education Research Project.
UK: Sage Publications.
Busha, Charles & Stephen P. Harter. (1980). Research Methods in Librarianship:
techniques and Interpretations. Academic Press: New York, NY.
Campbell, D. & Stanley, J. (1963). Experimental and quasi-experimental designs for
research and teaching. In Gage (Ed.), Handbook of research on teaching. Chicago:
Rand McNally & Co.
Lastrucci, Carlos L. (1963). The Scientific Approach: Basic Principles of the Scientific
Method, 7.
Carter M. (2004). Basic Business Research Methods, http://www.mapnp.org
/library/research/research.htm (Accessed November 2012)
Chandra, S. S. & Sharma, R. K. (2007). Research in Education. New Delhi: Atlantic
Publishers.
Chandrasekaran, B., Josephson, J. R., & Benjamins, V. R. (1999). What Are Ontologies,
and Why Do We Need Them? IEEE Intelligent Systems.
Chaudhary, C.M. (2009). Research Methodology. Jaipur: RBSA Publishers.
Cochran, W. G. (1963). Sampling Techniques (2nd ed.). New York: John Wiley & Sons,
Inc. Library of Congress Catalog Card Number: 63-7553.
Coffey, A., & Atkinson, P. (1996). Making sense of qualitative data. Thousand Oaks,
CA: Sage.
Cope, D. & Winward, J. (1991), Information failures in green consumerism, Consumer
Policy Review, 1(2), 83-6.
Cornfield, J. & Tukey, J. W. (1956). Average values of mean squares in factorials.
Annals of Mathematical Statistics, 27, 907-949.
Cox, D. R. (1958). Planning of experiments. New York: Wiley.
Creswell, J. W. (1998). Qualitative inquiry and research design: Choosing among five
traditions. London: Sage.
Creswell, J.W. (2003). Research Design: Qualitative, Quantitative, and Mixed Methods
Approaches (2nd Edition). Thousand Oaks: Sage Publications.
Lal, D.K. (2005). Designs of Social Research. Jaipur: Rawat Publications.
Ebel, R. L. & Frisbie, D. A. (1986). Essentials of education measurement. Englewood
Cliffs, NJ: Prentice Hall.
Coderre, David (2009). Computer-Aided Fraud Prevention and Detection, pp. 224-225.
Wigder, David (2007). Marketing green. Citation URL:
www.climatebiz.com/bio/david-wigder.
David, F.N. (1949). Probability Theory for Statistical Methods. Cambridge University
Press. p. 28.
Des, R. (1968). Sampling Theory. New York: McGraw-Hill Book Company.
Des, R. (1972). The Design of Sample Surveys. New York: McGraw-Hill Book Company.
Ary, D., Jacobs, L. C., & Razavieh, A. Introduction to Research in Education. New York:
Holt, Rinehart and Winston, Inc., p. 160.
Campbell, D. (1966). Experimental and Quasi-Experimental Designs for Research.
Houghton Mifflin Company.
Donna Molloy and Kandy Woodfield (2002). Longitudinal qualitative research
approaches in evaluation studies. Longitudinal qualitative research
approaches in evaluation studies: A study carried out on behalf of the
Department for Work and Pensions: Working Paper, 7, Citation URL:
http://research.dwp.gov.uk/asd/asd5/WP7.pdf (retrieved on 5th May 2012)
Dunlap, R.E. & Van Liere, K. (1978). The new environmental paradigm: a proposed
measuring instrument and preliminary results, Journal of Environmental
Education, 9, 10-9.
Eagly, A.H. & Kulesa, P. (1997). Attitudes, attitude structure, and resistance to change,
in Experiment Resources (2003) Citation: http://www.experiment-
resources.com/null-hypothesis.html
Husserl, Edmund (1931). Ideas: General Introduction into phenomenology (trans. W.
R. Boyce). Allen & Unwin, London.
Finger, M. (1994). From knowledge to action? Exploring the relationships between
environmental experiences, learning, and behavior. Journal of Social Issues,
50, 179-97.
Fink, Arlene (1995). How to Sample in Surveys (Vol. 6). London: Sage Publications.
Fisher, R. A. (1935). The design of experiments. (1st ed.) London: Oliver & Boyd.
Fisher, R.A. (1966). The design of experiments (8th ed.). Edinburgh: Hafner. (For an
example, see http://davidmlane.com/hyperstat/A73079.html.)
Fowler, F. J., Jr. (1993). Survey Research Methods (2nd ed.). London: Sage
Publications.
Frey, Lawrence R., Carl H. Botan, & Gary L. Kreps. (2000). Investigating
Communication: An Introduction to Research Methods. 2nd ed. Boston: Allyn
and Bacon.
Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction.
White Plains, NY: Longman.
Gay, L. R. (1987). Educational Research: Competencies for Analysis and Application
(3rd ed.). Columbus, Ohio: Merrill Publishing Company, 101.
Gay, L. R. (1992). Educational research (4th Ed.). New York: Merrill.
Ghosh B.N. (1985). Scientific Method & Social Research, Sterling Publishers (P) Ltd.,
New Delhi.
Girden, E.R. (1996). Evaluating research articles from start to finish. Thousand Oaks,
CA: Sage
Glenn D. Israel (1992), Determining Sample Size, document PEOD6.
Google Trends. (2008). Green Marketing Trend. Citation URL:
www.redfusionmedia.com/.../green-marketing-trend.
Grove, S.J. & Fisk, R.P. (1996). Going green in the Service Sector. European Journal of
Marketing, 30(5), 56-67
Gubrium, J.F., & Holstein, J.A. (1997). The New language of qualitative method. New
York: Oxford University Press.
Shieh, Gwowen & Jan, Show-Li (2004). The effectiveness of randomized complete
block design. Statistica Neerlandica, 58(1), 111-124.
Hammersley, M., & Atkinson, P. (1983). Ethnography, principles in practice. London,
New York: Tavistock.
Linstone, H. A. & Turoff, M. (2002). The Delphi Method: Techniques and Applications.
Citation URL: http://is.njit.edu/pubs/delphibook/delphibook.pdf.
Hayes, N. (2000) Doing Psychological Research. Gathering and analysing data.
Buckingham: Open University Press. p- 134.
Henry M. B. (2008). How to write a statement problem: Your proposal writing
companion. Citation in URL:
http://www.professorbwisa.com/new/free_downloads/problem_statement.pdf
(retrieved on 4th June 2012)
Henry, G. T. (1990). Practical Sampling (Vol. 21). London: Sage Publications.
Heron, J., & Reason, P. (1997). A Participatory Inquiry Paradigm. Qualitative Inquiry,
3(3), 274-294.
Hines, J.M., Hungerford, H.R. & Tomera, A.N. (1987). Analysis and synthesis of
research on responsible environmental behavior: a meta-analysis. Journal of
Environmental Education, 18, 1-8.
Hopfenbeck, W. (1993). Direccio´ny Marketing Ecolo´gicos. Ediciones Deusto, Madrid.
Hoy W. K. (2010). Quantitative Research in Education: A Primer. UK: Sage Publication.
Hult, C.A. (1996). Researching and writing in the social sciences. Boston: Allyn and
Bacon.
IGNOU (2005). MSO 002 Block 2, Research Methodologies and Methods, Unit 12-14.
Johansson, G. (2002). Success Factors for the Integration of Eco-Design in Product
Development: A Review of State of the Art. Environmental Management and
Health, 13(1), 98-108.
Jones, R.A. (1996). Research methods in the social and behavioral sciences (2nd ed.).
Sunderland, MA: Sinauer Associates.
Kahn, C.R. (1994). Picking a research problem - the critical decision. NEJM, 330,
1530-1533.
Kalafatis, S.P., Pollard, M., East, R. & Tsogas, M.H. (1999), “Green marketing and
Ajzen’s theory of planned behavior: a cross-market examination”, Journal of
Consumer Marketing, 16(5), 441-60.
King, G., Keohane, R.O., & Verba, S. (1994). Designing social inquiry: scientific
inference in qualitative research. Princeton, NJ: Princeton University Press.
Kinnear, T.C., Taylor, J.R. & Ahmed, S.A. (1974). Ecologically concerned consumers:
who are they? Journal of Marketing, 38, 20-4.
Kothari, C.R. (1985). Research Methodology: Methods and Techniques. Wiley Eastern
Publication. (Chapters 1-3, pp. 1-67).
Kothari. C.R. (2010). Research Methodology: Methods and Techniques. New Delhi:
New Age International Pvt. Ltd.
Kotler, P., Adam, S., Brown, L. & Armstrong, G. (2006). Principles of Marketing (3rd
ed.). Frenchs Forest, NSW: Prentice Hall.
Schutt, Russell K. Investigating the Social World (5th ed.). Pine Forge Press.
Koul L. (2010). Methodology of Educational Research. New Delhi: Vikas Publishing
House Pvt. Ltd.
Kristin Anderson Moore. (2008). Quasi-Experimental Evaluations: Part 6 in a Series on
Practical Evaluation Methods. Citation URL:
http://www.childtrends.org/Files/Child_Trends-2008_01_16_Evaluation6.pdf
(Retrieve on May 6th 2012)
Kuhn, T. (1962). The Structure of Scientific Revolutions. University of Chicago Press.
Kumar, R. (2011). Research Methodology. New Delhi: Sage Publications India Pvt. Ltd.
Redman, L.V. & Mory, A.V.H. (1923). The Romance of Research, p. 10.
Labuschagne, A. (2003). Qualitative research - airy fairy or fundamental? The
Qualitative Report, 8(1).
Leinhardt, G., & Leinhardt, S. (1980). Exploratory data analysis: new tools for the
analysis of empirical data. In D.C. Berliner (Ed.), Review of Research in Education, 8,
85-157. Washington, D.C.: American Educational Research Association.
Leming, M. R. Research & Sampling Designs: Techniques for Evaluating Hypotheses.
Found at: http://www.stolaf.edu/people/leming/soc371res/research.html.
Lewis-Beck, Michael S., Bryman, Alan, & Liao, Tim Futing (Eds.) (2004). The Sage
Encyclopedia of Social Science Research Methods (Vols. 1-3). New Delhi: Sage
Publications.
Liverpool University document (2006).
http://www.accessexcellence.org/LC/TL/filson/writhypo.php.
Lohr, Sharon L. Sampling: Design and Analysis. Albany: Duxbury Press, p. 1.
Louis Cohen, Lawrence Manion & Keith Morrison (2012). Co-relational and criterion
groups designs; Characteristics of ex post facto research; Citation URL:
cw.routledge.com/textbooks/cohen7e/data/Chapter15.ppt
Maloney, M.P. & Ward, M.P. (1973). Ecology: let’s hear from the people. An objective
scale for the measurement of ecological attitudes and knowledge, American
Psychologist, 28(7), 583-6.
Maria R. (2002). The Scientific Method, Cooperative Extension, University of Nevada
document.
Marion, R. (2004). Defining variables and formulating hypotheses. In the whole world
art of deduction: Research skills for new scientists. Retrieved April 10, 2007,
from UTMB School of Allied Health Sciences:
http://www.sahs.utmb.edu/pellinore/intro_to_ research/wad/vars_hyp.htm
(retrieved on 7th July 2012).
Denzin, N. K. & Lincoln, Y. S. (1998). Strategies of Qualitative Inquiry. London: Sage
Publications.
Marshall, C., & Rossman, G.B. (1995). Designing qualitative research (2nd ed.).
Newbury Park, CA: Sage.
Martin, N. M. (1996). Sampling for qualitative research. Family Practice—an
international journal, 13(6), 522-525.
Mason, J. (1996). Qualitative reasoning. London; Thousand Oaks, CA: Sage.
Meffert, H. & Kirchgeorg, M. (1993). Marktorientiertes Umweltmanagement.
Stuttgart: Schaeffer-Poeschel.
Maxwell, J.(1996). Qualitative research design: An interactive approach. Thousand
Oaks, CA. Sage.
Mikkelsen, Britha (1995). Methods for Development Work and Research. New Delhi:
Sage Publications.
Moore, D., & McCabe, D. (1993). Introduction to the practice of statistics. New York:
Freeman.
Morris, C. (2008). The EQUATOR Network: promoting the transparent and accurate
reporting of research. Developmental Medicine & Child Neurology, 50, 723.
Nachmias, C., & Nachmias, D. (1992). Research methods in the social sciences (4th
ed.). New York: St. Martin's Press.
Elliott, N., & Lazenbatt, A. (2005). How to recognize a ‘quality’ grounded theory
research study? Australian Journal of Advanced Nursing, 22(3), p. 50.
Neyman, J., & Pearson, E.S. (1967) [1928]. On the Use and Interpretation of Certain
Test Criteria for Purposes of Statistical Inference, Part I. Joint Statistical Papers.
Cambridge University Press, 1-66.
Neyman, J. & Pearson, E.S. (1967) [1933]. The testing of statistical hypotheses in
relation to probabilities a priori. Joint Statistical Papers. Cambridge University
Press. 186–202.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand
Oaks, CA: Sage Publications.
Pearson, E.S., & Neyman, J. (1967) [1930]. On the Problem of Two Samples. Joint
Statistical Papers. Cambridge University Press, 100.
Peattie, K. (1995). Environmental Marketing Management. Meeting the green
challenge, Pitman Publishing, London.
Ravichandran, K. & Nakkiran, S. (2009). Introduction to Research Methods in Social
Sciences. New Delhi: Abhijeet Publications.
Meir, Robert C., Newell, William T. & Dazier, Harold L. Simulation in Business and
Economics. Citation in: Manvendra Narayan Mishra and Vinay Kr. (2012).
Mathematical Model to Simulate Infectious Disease. VSRD-TNTJ, 3(2), 60-68.
Robert, S. M. (2002). Strategies for Educational Inquiry: Inquiry & Scientific Method,
Education Science Weekly, 5520 - 5982
Robson, C. (1993). Real world research: a resource for social scientists and
practitioner-researchers. Oxford; Cambridge, MA: Blackwell.
Roger, S. & Victor, J. (2006). Data Collection & Analysis. London: Sage Publications.
ISBN 076195046X.
Ronán, C. (2008). Sample size: a rough guide for studies measuring a percentage or
proportion. Citation URL:
www.beaumontethics.ie/docs/application/samplesizecalculation.pdf (retrieved on
3rd of June 2012).
Rosenthal, R., & Rosnow, R. L.(1975). The Volunteer Subject. John Wiley & Sons, Inc.
New York.
Salant, P. A. & Dillman, D. A. (1994). How To Conduct Your Own Survey. John Wiley &
Sons, Inc. New York.
Saravanavel, P. (2011). Research Methodology. New Delhi: Kitab Mahal Publishers.
Saunders, M., Lewis, P. & Thornhill, A. (2004). Research Methods for Business
Students (3rd ed.). Pearson.
Schlegelmilch, B.B., Bohlen, G.M. & Diamantopoulos, A. (1996). The link between
green purchasing decisions and measures of environmental consciousness,
European Journal of Marketing, 30(5), 35-55.
Scott, S.D., Albrecht, L., O'Leary, K., Ball, G.D., Dryden, D.M., & Hartling, L. (2011). A
protocol for a systematic review of knowledge translation strategies in the
allied health professions. Implementation Science, 6, 58.
Shastri, V. K. (2008). Research Methodology in Education. New Delhi: Authors Press.
Shields, P., & Hassan, T. (2006). Intermediate Theory: The Missing Link in Successful
        Student Scholarship. Journal of Public Affairs Education, 12(3), 313-334.
        http://ecommons.txstate.edu/polsfacp/39/
Singh, Y.K., Sharma, T.K. & Upadhya B. (2012). Educational Technology: Techniques of
Tests and Evaluation. New Delhi: APH Publishing Corporation.
Smith, S. M., Haugtvedt, C. P., & Petty, R. E. (1994). Attitudes and recycling: does the
        measurement of affect enhance behavioral prediction? Psychology and
        Marketing, 11(4), 359-74.
Spiegel, M. R. (1999). Schaum's Outline of Theory and Problems of Statistics (3rd ed.).
McGraw-Hill. ISBN 0070602816.
Spiegelberg, H. (1982). The Phenomenological Movement (3rd ed.).
Sproull, N. L. (1995). Handbook of Research Methods: A Guide for Practitioners and
        Students in the Social Sciences (2nd ed.). Metuchen, NJ: Scarecrow Press.
Stern, P.C., & Kalof, L. (1996). Evaluating social science research. New York: Oxford
University Press.
Stone, G., Barnes, J. H., & Montgomery, C. (1995). Ecoscale: a scale for the
        measurement of environmentally responsible consumers. Psychology and
        Marketing, 12(7), 595-612.
Sukhatme, P. V., & Sukhatme, B. V. (1970). Sampling Theory Of Surveys With
Applications. ISBN: 0-8138-1370-0. Indian Society of Agricultural Statistics,
New Delhi, India and the Iowa State University Press, Ames, Iowa, U.S.A.
Swenson, M. R., & Wells, W. D. (1997). Useful correlates of pro-environmental
        behavior. In Goldberg, M. E., Fishbein, M., & Middlestadt, S. E. (Eds.), Social
        Marketing: Theoretical and Practical Perspectives (pp. 91-109). Mahwah, NJ:
        Lawrence Erlbaum.
Yamane, T. (1967). Elementary Sampling Theory. Englewood Cliffs, NJ: Prentice-Hall.
Tesch, R. (1990). Qualitative research: analysis types and software tools. New York:
Falmer Press.
The Advanced Learner’s Dictionary of Current English, Oxford, (1952), 1069.
The Encyclopaedia of Social Science (1930). MacMillan, 9.
Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Cited in: Dellapenna, J. W.
        (2000). Science, Technology, and International Law. Villanova University
        School of Law, Public Law and Legal Theory, Research Paper No. 2000-6.
Tjärnemo, H. (2001). Eco-marketing & Eco-management – Exploring the eco-
orientation – performance link in food retailing, Lund Business Press, Institute
of Economic Research, Sweden.
Trochim, W. M. (2012). Bill Trochim's Center for Social Research Methods, Cornell
        University. URL: trochim.human.cornell.edu/kb/sampnon.htm
Vijayalakshmi, G. & Sivapragasam, C. (2009). Research Methods: Tips and Techniques.
Chennai: MJP Publishers.
Warner, C. (undated). How to write a case study. http://www.cpcug.org/user/houser/
        advancedwebdesign/Tips_on_Writing_the_Case_Study.html (accessed
        November 2004).
Weimer, J. (Ed.). (1995). Research Techniques in Human Engineering. Englewood
        Cliffs, NJ: Prentice Hall. ISBN 0130970727.
Weller, S., Romney, A. (1988). Systematic Data Collection (Qualitative Research
Methods Series 10). Thousand Oaks, CA: SAGE Publications, ISBN 0803930747.
Wellington J. (2000). Educational Research: Contemporary Issues and Practical
Approaches. London: Continuum International publishing group.
Wholey, J., Hatry, H., & Newcomer, K. (Eds.). (2004). Handbook of Practical Program
        Evaluation. San Francisco, CA: Jossey-Bass.
Wiersma, W. & Jurs, S. G. (2009). Research Methods in Education: An Introduction.
New Delhi: Pearson.
Wilson, E. B. (1952). An Introduction to Scientific Research. McGraw-Hill.
Winer, B. J. (1962). Statistical principles in experimental design. New York: McGraw-
Hill.
Yamane, T. (1967). Statistics: An Introductory Analysis (2nd ed.). New York: Harper
        and Row.
Website Referred
http://portal.acs.org/portal/fileFetch/C/CTP_005606/pdf/CTP_005606.pdf
http://statistics.berkeley.edu/~stark/SticiGui/Text/gloss.htm#null_hypothesis
http://writing.colostate.edu/guides/research/survey/com4c2a.cfm
http://www.adelaide.edu.au/clpd/all/learning_guides/learningGuide_writingAResearchReport.pdf
http://www.adfoster.com/primary_secondary_data_what_s_the_difference
http://www.allbusiness.com/marketing/market-research/1310-1.html
http://www.businessdictionary.com/definition/null-hypothesis.html
http://www.ehow.com/about_5092840_basics-statistical-analysis.html
http://www.experiment-resources.com/type-I-error.html#ixzz0sO1SPzm3
http://www.idrc.ca/en/ev-56466-201-1-DO_TOPIC.html
http://www.idrc.ca/en/ev-56623-201-1-DO_TOPIC.html
http://www.learnhigher.ac.uk/analysethis/main/quantitative.html
http://www.nlm.nih.gov/nichsr/hta101/ta101014.html
http://www.nmmu.ac.za/robert/reshypoth.htm
http://www.rajputbrotherhood.com/knowledge-hub/statistics/diagrammatic-presentation-of-data.html
http://www.rajputbrotherhood.com/knowledge-hub/statistics/tabulation.html
http://www2.uiah.fi/projects/metodi (exploratory study diagram)
http://www.is.cityu.edu.hk/staff/isrobert/phd/ch3.pdf (Research philosophy)
http://www.public.asu.edu/~kdooley/papers/simchapter.PDF
http://www.air.org/files/eval.pdf
http://jayaram.com.np/wp-content/uploads/2009/12/reserch.pdf
http://www.socsci.uci.edu/ssarc/sshonors/webdocs/Qual%20and%20Quant.pdf
http://pharmaquest.weebly.com/uploads/9/9/4/2/9942916/introduction_to_research.pdf
http://www.ekmekci.com/Publicationdocs/RM/ResMet1/5RESEARCHDesign.pdff
http://cogprints.org/2643/1/EOLSSrm.pdf
http://www.drtang.org
en.wiktionary.org/wiki/case_study
http://www.socialresearchmethods.net/tutorial/Cho2/panel.html
wordnetweb.princeton.edu/perl/webwn
www.nyu.edu/classes/bkg/methods/005847ch1.pdf
http://www.blurtit.com/q122012.html
http://data.fen-om.com/int460/research-experimental.pdf
http://www.emathzone.com/tutorials/basic-statistics/basic-principles-of-experimental-designs.html
http://medical-dictionary.thefreedictionary.com/experimental+design
http://psych.csufresno.edu/psy144/Content/Design/Experimental/factorial.html
http://files.ecpe.org/Research%20Methodology%20(261703)/handouts/ch10%20experimental%20designs.pdf
http://www.alseaf.com/wp-content/uploads/2011/04/crd.pdf
http://www.stat.wisc.edu/courses/st572-larget/Spring2007/handouts17-4.pdf
Investigative Techniques Glossary,
http://www.pbs.org/opb/historydetectives/techniques/glossary.html (historical
research)
Primary and Secondary sources, http://ipr.ues.gseis.ucla.edu/info/definition.html
(historical research)
http://web.ncifcrf.gov/rtp/lasp/intra/acuc/fred/Determination_of_Sample.pdf
http://www.bioterrorism.slu.edu/bt/products/bio_epi/scripts/mod13.pdf
http://www.dorak.info
http://publichealth.massey.ac.nz/publications/introepi_teaching/asite2_chapter%206.pdf
http://explorations.sva.psu.edu/lapland/LitRev/prob1.html#anchor2210644
Blurb
Numerous books on the market cover the academic foundations of research and
research methodology. This book, however, is a combined module that gives
beginners a conceptual, theoretical, and practical grounding in the application of
research methods. Research methodology applies not only in the sciences but also
in many social and economic disciplines rooted in the exploration and
understanding of groups and individuals. Unlike other books, this one provides
integrated coverage of three kinds of research methodology: qualitative,
quantitative, and mixed methods. To establish, generalize, and disseminate any
finding, one must first undertake field research and gather primary data, and then
discuss the output against past observations and research. In this book, the
authors orient beginners on how to understand research, recognize the different
modes of research method, use the various tools and techniques, write a research
report, and develop a good synopsis. This stimulating and insightful book is ideal
for managers, consultants, trainers, research scholars, and a new generation of
students seeking the fundamentals of research method.