


Pre-Print Version

WAYS AND MEANS OF


RESEARCH METHOD
Research India Publications, India. ISBN: 978-81-89476-05-8
(2013)

Prof. Dr. Dileep Kumar M


Professor
Othman Yeop Abdullah Graduate School of Business
Universiti Utara Malaysia

Dr. Noor Azila Mohd Noor


Associate Professor
Othman Yeop Abdullah Graduate School of Business
Universiti Utara Malaysia

Citation: Kumar, D. M., & Noor, N. A. M. (2013). Ways and Means of Research Method,
Research India Publications, 1st Edition, India.
“This book is dedicated to our teachers, who have shown us the path of ‘learning by
doing’ with hard work, sincerity, and commitment.”

ACKNOWLEDGEMENT

First and foremost, I would like to thank GOD for HIS countless support in the
accomplishment of this monograph.

I would like to thank my family members who have fully supported me throughout my
life. I would especially like to thank them for their love and direction.

I would like to thank Datuk Prof. Dr. Mohamed Mustafa Ishak, Vice Chancellor,
Universiti Utara Malaysia, for providing a work culture which is appropriate to get
ahead in literary works.

I am in deep gratitude to my advisor, Prof. Dr. Noor Azizi Ismail, Dean, Othman Yeop
Abdullah Graduate School of Business, for his constant encouragement in my
publication endeavors. Thank you for sharing your knowledge and experience with me
and guiding me through this process with patience and encouragement.

I am grateful to the Sultanah Bahiyah Library - Universiti Utara Malaysia, for providing


a congenial atmosphere to gather appropriate literature from textbooks, journals,
periodicals, dissertations, and other secondary and tertiary documents.

FOREWORD

Knowledge of research is an ocean; where to start and where to end is unpredictable.
The dynamics of research are moving from pure research to action research. Adding
value to theories and existing literature is the old approach; rather, producing
action-oriented outcomes of research is fundamental to the new world order. Many
developing countries are struggling to catch up with the fundamentals of research.
They are still starting from scratch in doing research and producing actionable results.
A significant step one can take in this empowerment process is to make knowledge of
research available in a palpable form to the common man, so that they can start
somewhere rather than nowhere. If the fundamentals of research are strong enough,
then the new generation of researchers can be part of the global advancement and
growth of any nation.

A number of books have come out on research tools and techniques, but very few are
tailor-made for developing countries and on par with their knowledge and skills.
Starting with the concepts, differentiating the methods, and fine-tuning knowledge of
research with process, tools, and techniques, the material incorporated in this book
makes it simple for a reader or scholar to capture the meaning of research and its
applicability. Pursuing a research topic, or arriving at managerial decisions, with
accompanying proof and evidence also needs the support of tools and techniques for
authenticity and accountability. I believe this book is an outstanding work for any
scholar or professional who wants to be investigative, critical, logical, and scientific
in decision making and in inferences leading to policy and project implementation.

The “ways and means of research methods” incorporated in this book will equip
readers, researchers, and professionals with scientific knowledge, a critical mindset,
and logical conclusions in an efficient and effective way. I presume this book, titled
“Ways and Means of Research Method”, will extend better insight into and
understanding of research and research methodology to managers, teachers,
professionals, researchers, and the student community.

Prof. Dr. Joby E. C.

Principal

Meredian College

Ullal, Mangalore.

India.

PREFACE

Research is a logical and systematic search for new and useful information on a
particular issue, incident, situation, or collection of issues on a topic. The exploration
of such a topic is possible only through research and development activities. To test
the assumptions of theory and practice, the application of scientific principles and
research techniques, with objectivity and empirical observation, is very much required.
This monograph on Research and Research Methodology focuses on helping students
and faculty members develop an overview and conceptual understanding of the
nature and features of empirical research in the social science field, and of how tools
and techniques support an effective orientation to the research process. Though there
are several concepts related to Research and Research Methodology, this book
represents a consolidated and revised version of a collection of literature and modules
on Research and Research Methodology.

Chapter 1 contains an introduction to scientific methods and their importance in
developing a scientific attitude among students.

Chapter 2 introduces the subject of Research and Research Methodology. It includes a
general overview of research, its objectives, and the different types of research, and
gives a general orientation to students.

Chapter 3 contains an introduction to research paradigms, viz., positivism and
interpretivism, that support students in the selection of proper research methods.

Chapter 4 contains an introduction to qualitative research, including the varied
typologies of methods involved in conducting qualitative research.

Chapter 5 consists of the research process. The students get an idea of where they
have to commence the research and how they should conclude it.

Chapter 6 consists of information relating to problem formulation.

Chapter 7 contains the information related to research design. The students get an
idea of what a research plan of action is and how various types of research designs
apply in social research.

Chapter 8 discusses the concepts of sampling techniques. The content provides an
elaborate discussion of how different types of sampling techniques can be applied to
various topics of research.

Chapter 9 formally presents what a hypothesis is and the different types of hypotheses
formulated in social research. The students get a clear understanding of how they can
formulate a hypothesis and how they can plan their analysis further with data
collection.

Chapter 10 covers information related to the data collection process and the tools and
techniques of data collection used to gather relevant information, facts, and figures
for the study.

Chapter 11 is devoted specifically to item analysis. The students get an idea of how a
tool in research can be developed, how relevant items can be included in a data
collection tool, and thereby how they can obtain valuable information on the research
topic.

Chapter 12 examines the tool administration process, data processing methods, and
how data analysis can be done with various statistical methods.

Chapter 13 finally discusses the method of report writing. It describes different types
of report writing based on the design adopted and gives the student a clear guideline
on how to write a research report.

Today's students of social science must understand a variety of research methods. We
hope this book is an ideal introduction to researching in a social science context for
students at both undergraduate and postgraduate levels, and that it will be a
must-have for anyone taking a research methods course or doing a research project
for themselves.

Dr. Dileep Kumar M


Professor
Othman Yeop Abdullah Graduate School of Business (OYAGSB)
Universiti Utara Malaysia, Sintok,
Malaysia.

Dr. Noor Azila Mohd Noor


Associate Professor
Othman Yeop Abdullah Graduate School of Business (OYAGSB)
Universiti Utara Malaysia, Sintok,
Malaysia.

COURSE CONTENT

MODULE TITLE OF THE CHAPTER


MODULE 1 SCIENTIFIC RESEARCH
MODULE 2 INTRODUCTION TO RESEARCH
MODULE 3 SCIENTIFIC PARADIGMS OF RESEARCH
MODULE 4 QUALITATIVE RESEARCH
MODULE 5 RESEARCH PROCESS
MODULE 6 PROBLEM FORMULATION
MODULE 7 RESEARCH DESIGN
MODULE 8 SAMPLING
MODULE 9 HYPOTHESIS
MODULE 10 DATA COLLECTION TOOLS AND TECHNIQUES
MODULE 11 ITEM ANALYSIS
MODULE 12 TOOL ADMINISTRATION, DATA PROCESSING AND DATA ANALYSIS
MODULE 13 REPORT WRITING
APPENDIX: SYNOPSIS MODEL
REFERENCE

MODULE 1

SCIENTIFIC RESEARCH

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand and appreciate the meaning of research techniques.
2. Learn the scientific methods for conducting research.
3. Apply the scientific method in research.
4. Understand the common errors in experiments.
5. Evaluate the pre-requisites for scientific research.
6. Understand the qualities of a scientific researcher.

INTRODUCTION

As in many other spheres of human endeavor, research provides a key basis for
developing knowledge. In the physical sciences, physicists, biologists,
mathematicians, chemists, and so on, have long relied on and used research as a way
of helping to define and refine knowledge in their subject areas. It is only
comparatively recently that the social scientist has begun to use research for the same
purpose. Certainly, research in management is one of the newest areas of research.
This chapter concentrates on the basic concepts of research and research
methodology that help one gain a better understanding of the steps, processes,
approaches, and methods in research and of the right way to apply them.

BASIC CONCEPTS IN SCIENTIFIC RESEARCH

Before we begin to understand “what is research”, it is necessary to be aware of the
related concepts of research and research methodology. This gives us more insight
into the right way to interpret each concept in research.

SCIENCE: “A way of investigation” (Green). It is concerned with the method not


with the subject matter. This is the method of obtaining objective knowledge
about the world through systematic observation. Science is a method people use
to study the natural world. It is the process that uses observation and investigation
to gain knowledge. It is a disposition to deal with the facts rather than with what
someone has said about them. It is a search for order, for uniformities, for lawful
relations among the events in nature. Science is not a democratic process. Majority
rule does not determine what sound science is. Science does not accept notions
that are proven false by experiment.

METHOD: A purposeful use of insights and understandings based upon a body of


knowledge and principles. A way of doing something; underneath it we discover
an integrated arrangement of knowledge, understanding, and principles.

PROCESS: A progressive series of steps, means, or transactions used in an orderly way
to obtain or satisfy an end.

SYSTEM: an organized or complex whole; an assemblage or combination of things


or parts forming a complex unitary whole. An assemblage of things connected or
interdependent, to form a complex unity; a whole composed of parts in orderly
arrangement, according to some scheme or plan.

FACTS: A set of systematically interrelated concepts, definitions, and propositions that
are advanced to explain and predict phenomena (Kerlinger). A set of systematically
related propositions specifying causal relationships among variables (Black et al.).

CONCEPTS: Concepts are abstractions or generalizations of facts. They are abstract
symbols of a phenomenon, not the phenomenon itself. Concepts are created from
sense impressions and percepts.

THEORIES: relationship between facts or the ordering of them in a meaningful


way. (Goode and Hatt)

SEQUENCE: Organizing activities or arranging activities in an ascending or


descending manner.

SYSTEMATIC: Arrangement of body of knowledge and activities in an orderly way


so that it gives proper meaning and direction to a vision or mission.

CLASSIFICATION: Arranging things in groups or classes according to their resemblances
and affinities, giving expression to the unity of attributes that may subsist amongst a
diversity of population.

OBSERVATION: Observation is accurate watching and noting of phenomena as they
occur in nature with regard to cause-and-effect mutual relations. It is a systematic
viewing of a specific phenomenon in its proper setting for the specific purpose of
gathering data for a particular study. Observation is empirical, experimental, or direct
in its nature.

INTERPRETATION: Drawing conclusions from organized data; establishing the
relationship between the data of a study, the study findings, and other scientific
knowledge.

TESTING: Assessment of probability of assumptions or data from a study based on


inferential analysis.

LOGICAL: Explanations based on reasoning.

TOOLS AND TECHNIQUES: Research tools and statistical tools.

CAUSAL EXPLANATIONS: Establishing the relationship between cause and effect.


Explanations based on different probabilities. Clarification of probabilities based
on scientific perception and understanding.

FUNCTIONS OF A SCIENCE

The functions of science include the following. A science should provide for:

Testing: All the information should be subjected to scientific testing. Without


scientific testing that information cannot be generalized.

Verification: All information should be subject to verification. The facts and figures
gathered are to be verified with independent observations that withstand subjectivity
in arriving at an inference.

Definition: The facts and figures that are subject to scientific research should be clear
and definite. There should not be any confusion regarding the variables the
researcher has incorporated in the research.

Classification: The information collected for the research should be open to


adequate grouping and regrouping. It would help with segregated analysis of the
information collected.

Organization: The information collected should be arranged in a structured way


that supports perfect classification and codification of data for analysis and
interpretation.

Prediction: The inferences arrived at from the analysis and interpretation of the results
should enable appropriate prediction of implications.

Application: The findings should facilitate application in its utilitarian perspective.

SCIENTIFIC METHOD

It is true that personal and cultural beliefs influence both our perceptions and our
interpretations of natural phenomena. The scientific method attempts to minimize
the influence of bias or prejudice in the experimenter when testing a hypothesis or a
theory. The scientific method distinguishes science from other forms of explanation
because of its requirement of systematic experimentation. The scientific method is a
formalized way of answering questions about causation in the natural world. In
principle, the scientific method has three main steps. The first step is observation of
phenomena that can be detected by the senses. Second, the scientist forms a
hypothesis, or idea about the cause of the phenomena that has been observed. The
third step is experimentation, performing tests designed to show that one or more of
the hypotheses is more or less likely to be correct. These tests often include numerical
data so the results can be quantified.

So, the scientific method is the process by which scientists, collectively and over time,
endeavor to construct an accurate (that is, reliable, consistent, and non-arbitrary)
representation of the world.

WHY DO RESEARCHERS USE THE SCIENTIFIC METHOD?

Our personal and cultural beliefs can influence our perceptions and our
interpretations of phenomena. The scientific method minimizes the influence of
experimenter bias or prejudice. The scientific method enables researchers,
collectively and over time, to construct a reliable, consistent, and non-arbitrary
representation of the world.

CHARACTERISTICS OF THE SCIENTIFIC METHOD

The characteristics of scientific methods include the following factors, which can be
detailed as follows:

Scientific Methods are Empirical: Information or facts about the world based on
sensory experiences. That is direct observation of the world, to see whether
scientific theories or speculations agree with the facts.

Scientific Methods are Systematic: All aspects of the research process are carefully
planned in advance, and nothing is done in a casual or haphazard fashion

Scientific Methods are subject to Replication: Repeating studies numerous times


to determine if the same results will be obtained.

Scientific Methods are Search for Causes: Scientists assume that there is order in
the universe, that there are ascertainable reasons for the occurrence of all events,
and that science can discover the orderly nature of the world.

Scientific Methods are Provisional: Scientific conclusions are always accepted as


tentative and subject to question and possible refutation.

Scientific Methods are Objective: Scientists attempt to remove their bias, belief,
preferences, wishes, and values from their scientific research. It means the ability
to see and accept facts as they are, not as one might wish them to be.

Scientific Methods rely on evidence: Proof is important in every research. Without
proof, generalization of the facts may not be possible. The scientific method supports
empirical observation, which is based on facts and figures.

Scientific Methods are for Inter-subjective Testability: Deductive reasoning (a


priori assumption) is where a conclusion is inferred from more abstract premises
or propositions (Monette et al, 1994). Inductive reasoning involves the derivation
of general principles from direct observation-from instances to general principles
(Rubin and Babbie, 1997).

Scientific Methods enable prediction: The inferences arrived at from the analysis and
interpretation of the results should enable appropriate prediction of implications.

Scientific Methods are for Generalization: The research findings are supposed to apply
at all levels. The scope of the findings should not be limited to the studied field alone.
Scientific methods support such generalization based on empirical observations.

FOUR STEPS OF SCIENTIFIC METHOD

The scientific method may be described as consisting of four steps:

Observation and description of a phenomenon or a group of related


phenomena. When an investigator has little prior information, the first step is
description, or as Steiner characterizes it, “natural history.” This is the stage
when qualitative methods may provide considerable information about what is.

Formulation of hypotheses to explain the phenomena. Based on qualitative


and/or descriptive studies, investigators begin to speculate about which
variables might be related to other variables and in what manner (directly or
indirectly). In educational research, the hypothesis is often a question about the
relationship between or among variables that may influence learning. The hypothesis
may be one that merely asks whether a relationship exists (correlational research), or
the hypothesis may state a cause-and-effect relationship.

Predict the existence of other phenomena using the hypothesis, or predict


the results of new observations.

Conduct experimental tests of the predictions by several independent


experimenters who use proper experimental methods (King, et al. 1994).

STEPS AT A GLANCE

(Source: Trimpe, 2003 http://sciencespot.net)

The scientific method is characterized by many components like (1) order (2) control
(3) empiricism (4) generalization and theoretical formulation. The scientific approach
to problem solving requires the application of order and discipline to instill confidence
in the investigator’s results. This requires the application of the scientific method, in
which a series of systematic steps are followed to solve problems. The steps can be
adapted: some may be skipped, and their order may vary. The steps may include the
following:

1. Identification of the problem to be investigated;


2. Review of existing literature and information concerning the problem;
3. Formation of the hypothesis (Null and Alternate) /statement of purpose of
study;
4. Designing the experiment;
5. Collection of data;
6. Analyzing the data;
7. Drawing conclusions;
8. Replicating the investigation; and
9. Dissemination of results.

Research is worthless without the incorporation of these steps; a brief worked sketch
of steps 4 to 7 is given below.
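
To make the steps concrete, here is a minimal, hedged sketch in Python. It is not part
of the original text: it covers only steps 4 to 7, the experiment is a simulated two-group
comparison, and the group names, sample sizes, and significance level are illustrative
assumptions rather than recommendations.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Steps 4-5: design a simple two-group experiment and "collect" the data.
    # The data here are simulated purely for illustration; in a real study they
    # would come from the designed experiment.
    control = rng.normal(loc=50, scale=10, size=30)     # scores without treatment
    treatment = rng.normal(loc=56, scale=10, size=30)   # scores with treatment

    # Step 6: analyze the data with an independent-samples t-test.
    t_stat, p_value = stats.ttest_ind(treatment, control)

    # Step 7: draw a conclusion against a significance level fixed in advance.
    alpha = 0.05
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < alpha:
        print("Reject the null hypothesis: the groups appear to differ.")
    else:
        print("Fail to reject the null hypothesis.")

Steps 8 and 9 would then correspond to repeating the study on fresh data and
disseminating the result.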

REQUISITES OF SCIENTIFIC METHOD

The success of science has more to do with an attitude common to scientists than with
a particular method. This attitude is one of inquiry, experimentation, and humility
before the facts (Hewitt, 1965)

Critical perception of the problem: Critical thinking involves determining the


meaning and significance of what is observed or expressed, or, concerning a given
inference or argument, determining whether there is adequate justification to
accept the conclusion as true.

Careful logical analysis: Research is logical and objective; every possible step is
taken to ensure the validity of procedure, tools and conclusions. The researcher
strives to eliminate personal feelings and bias through the scientific method. Logical
analysis of the concepts and problems related to the phenomena is envisaged for
better research.

Formulation of concepts wherever possible: Explanations of phenomena are
important in every research. The concepts should be well defined and understandable
even to the layman. They should not create any confusion at a later stage.

Problem-based data collection: There may be a lot of information available. One of the
requisites in scientific research is the segregation of relevant and irrelevant data. A
proper application of scientific methods requires relevant data for analysis and
interpretation. The data should be limited to the problem in focus.

Classification of data: The data or information should be arranged in groups or classes
according to their resemblances and affinities, giving expression to the unity of
attributes that may subsist amongst a diversity of population, for better scientific
application of research tools and techniques.

Quantitative expression of variables: Both dependent and independent variables


should be explained in a simple way that is understandable to the layman. The
meaning of the variables may have resemblances. Well defined concepts are
important in scientific research.

Exacting experimental or statistical procedure in the analysis: The generalization


of the findings depends on the dependability of the findings. In particular, qualitative
phenomena should be converted into quantitative form. For this conversion, the
applicability of appropriate statistical tools is to be ensured in scientific research.

LIMITATIONS OF SCIENTIFIC RESEARCH

1. Many of the crucial processes occurred in the past and are difficult to test
in the present.
2. Personal biases are especially strong on topics related to the origins
because of the wider implication
3. Hypothesis-testing process fails to eliminate most of the personal and
cultural biases of the community of investigators
4. Researchers who follow the scientific path have often been referred to as
‘positivists’, and the positivist ideal is much harder to imagine
5. Scientific studies in psychology are usually highly controlled, which makes
them highly artificial.
6. Due to the need to have completely controlled experiments to test a
hypothesis, science cannot prove everything.

WHAT IS SCIENTIFIC ATTITUDE?

Scientific attitudes can be regarded as a complex of values and norms which is held to
be binding on the man of science. The norms are expressed in the forms of
prescriptions, proscriptions, preferences, and permissions. They are legitimatized in
terms of institutional values (Barnes and Dolby, 1970). The norms and values are
supposed to be internalized by the scientist and thereafter they fashion his/her
scientific practice. The current set of scientific attitudes of objectivity, open-
mindedness, un-biasedness, curiosity, suspended judgment, critical mindedness, and
rationality has evolved from a systematic identification of scientific norms and values.
The earliest papers of any importance in the field of scientific attitudes are those of
Merton (1957).

TWENTY SCIENCE ATTITUDES

1. Empiricism. Simply said, a scientist prefers to "look and see." You do not
argue about whether it is raining outside--just stick a hand out the window.
Underlying this is the belief that there is one real world following constant
rules of nature, and that we can probe that real world and build our
understanding--it will not change for us. Nor does the real world depend
upon our understanding--we do not "vote" on science.
2. Determinism. "Cause-and-effect" underlies everything. In simple
mechanisms, an action causes a reaction, and effects do not occur without
causes. This does not mean that some processes are not random or
chaotic. But a causative agent does not alone produce one effect today
and another tomorrow.
3. A belief that problems have solutions. Major problems have been tackled
in the past, from the Manhattan Project to sending a man to the moon.
Other problems such as pollution, war, poverty, and ignorance are seen as
having real causes and are therefore solvable--perhaps not easily, but
possible.
4. Parsimony. Prefer the simple explanation to the complex: when both the
complex earth-centered system with epicycles and the simple Copernican
sun-centered system explain apparent planetary motion, we choose the
simpler.
5. Scientific manipulation. Any idea, even though it may be simple and
conform to apparent observations, must usually be confirmed by the work
that teases out the possibility that the effects are caused by other factors.
6. Skepticism. Nearly all statements make assumptions of prior conditions. A
scientist often reaches a dead end in research and has to go back and
determine if all the assumptions made are true to how the world operates.
7. Precision. Scientists are impatient with vague statements: A virus causes
disease? How many viruses are needed to infect? Are any hosts immune
to the virus? Scientists are very exact and very "picky".
8. Respect for paradigms. A paradigm is our overall understanding about
how the world works. Does a concept "fit" with our overall understanding
or does it fail to weave in with our broad knowledge of the world? If it
doesn't fit, it is "bothersome" and the scientist goes to work to find out if
the new concept is flawed or if the paradigm must be altered.

9. A respect for power of theoretical structure. Diederich describes how a
scientist is unlikely to adopt the attitude: "That is all right in theory but it
won't work in practice." He notes that the theory is "all right" only if it does
work in practice. Indeed the rightness of the theory is in the end what the
scientist is working toward; no scientific facts are accumulated at random.
(This is an understanding that many science fair students must learn!)
10. Willingness to change opinion. When Harold Urey, author of one textbook
theory on the origin of the moon's surface, examined the moon rocks
brought back from the Apollo mission, he immediately recognized this
theory did not fit the hard facts lying before him. "I've been wrong!" He
proclaimed without any thought of defending the theory he had supported
for decades.
11. Loyalty to reality. Urey above did not convert to just any new idea but
accepted a model that matched reality better. He would never have
considered holding an opinion just because it was associated with his
name.
12. Aversion to superstition and an automatic preference for scientific
explanation. No scientist can know all the experimental evidence
underlying current science concepts and therefore must adopt some views
without understanding their basis. A scientist rejects superstition and
prefers science paradigms out of an appreciation for the power of reality
based knowledge.
13. A thirst for knowledge, an "intellectual drive." Scientists are addicted
puzzle-solvers. The little piece of the puzzle that doesn't fit is the most
interesting. However, as Diederich notes, scientists are willing to live with
incompleteness rather than "...fill the gaps with off-hand explanations."
14. Suspended judgment. Again Diederich states: "A scientist tries hard not to
form an opinion on a given issue until he has investigated it, because it is
so hard to give up opinion already formed, and they tend to make us find
facts that support the opinions... There must be however, a willingness to
act on the best hypothesis that one has time or opportunity to form."
15. Awareness of assumptions. Diederich describes how a good scientist
starts by defining terms, making all assumptions very clear, and reducing
necessary assumptions to the smallest number possible. Often, we want
scientists to make broad statements about a complex world. But usually,
scientists are very specific about what they "know" or will say with
certainty: "When these conditions hold true, the usual outcome is such-
and-such."
16. Ability to separate fundamental concepts from the irrelevant or
unimportant. Some young science students get bogged down in
observations and data that are of little importance to the concept they
want to investigate.

17. Respect for quantification and appreciation of mathematics as a
language of science. Many of nature's relationships are best revealed by
patterns and mathematical relationships when reality is counted or
measured; and this beauty often remains hidden without this tool.
18. An appreciation of probability and statistics. Correlations do not prove
cause-and-effect, but some pseudoscience arises when a chance
occurrence is taken as "proof." Individuals who insist on an all-or-none
world and who have little experience with statistics will have difficulty
understanding the concept of an event occurring by chance.
19. An understanding that all knowledge has tolerance limits. All careful
analyses of the world reveal the values that scatter at least slightly around
the average point; a human's core body temperature is about so many
degrees and objects fall at a certain rate of acceleration, but there is some
variation. There is no absolute certainty.
20. Empathy for the human condition. Contrary to popular belief, there is a
value system in science, and it is based on humans being the only
organisms that can "imagine" things that are not triggered by stimuli
present at the immediate time in their environment; we are, therefore, the
only creatures to "look" back on our past and plan our future. This is why
when you read a moving book, you imagine yourself in the position of
another person and you think "I know what the author meant and feels."
Practices that ignore this empathy and the resultant value of human life
produce inaccurate science. (Bronowski, 1978; Diederich, 1967; and
Whaley and Surratt 1967).

LIMITATIONS OF SCIENTIFIC ATTITUDE

Holton and Roller (1958) have found that the actual human characteristics exhibited
by scientists are quite distant from the attitudes ascribed to scientists. Anne Roe
(1961) reports that personal factors inevitably enter into scientific activity. They
influence a scientist's choice of what observations to make; they influence a scientist's
selective perception when making the observations. They also influence their
judgments about when there is sufficient evidence to be conclusive and
considerations as to whether discrepancies between experimental and theoretical
data are important or unimportant to their pet theories. Mitroff 's study (1974) of the
behaviors of Apollo moon scientists shows that scientists are passionate, irrational
and strongly committed to their own favored theories. What this means is that
subjective characteristics of the scientists act as norms rather than the widely
accepted Mertonian norms.

Mitroff (1974) also noted that scientists are seldom objective; there is no such thing
as the disinterested observer. As Mitroff sees it, the real process of doing science is
much more complicated. It is filled with subjective and even irrational elements that
have been generally unacknowledged. Mitroff concludes by suggesting that “to
remove commitment and even bias may be to remove one of the strongest sustaining
force for both the discovery of scientific ideas and for their subsequent testing."
(Mitroff, 1973: 765). Quite often school science implies or depicts scientists as being
rational and critical in their scientific activities. This, however, may not always be the
case. Gauld (1973) admits that rationality does play a part in scientific activity but it is
not always evident and not always practiced by all the members of a scientific
community.

Kirkut (1960) suggested that rational thinking is certainly exercised in judging the
products of those with whom one disagrees, although the same care may not be
lavished on the arguments of scientists whose views are closer to one's own. Writings
by Kuhn (1962) also provide an insight into the factors and personal characteristics
that influence a scientist's activity. The degree of resistance, stubbornness, jealousy,
and rigid commitment witnessed among the members of the scientific community
further undermines the total acceptance of scientific attitudes.

WHAT DO WE NEED TO CONSIDER WHEN USING THE SCIENTIFIC METHOD?

The scientific method requires research questions and empirical observations that
will lead to unbiased answers. The empirical observations of scientific research must
be well designed to provide accurate and repeatable (precise) results. The higher the
precision of a result, the greater the likelihood that the event will happen again. This
indicates that results obtained through the scientific method have high predictability.
The findings will be repeated under test-retest observation and will therefore have
high reliability.

The scientific method enables us to test a hypothesis and distinguish between the
correlation of two or more things happening in association with each other and the
actual cause of the phenomenon we observe. The correlation of two variables cannot
explain the cause and effect of their relationship. Scientists design experiments using
a number of methods to ensure the results reveal the likelihood of the observation
happening (probability). Controlled experiments are used to analyze these
relationships and to develop cause-and-effect relationships. Statistical analysis is used
to determine whether differences between treatments can be attributed to the
treatment applied, or whether they are artifacts of the experimental design or of
natural variation. In summary, the scientific method produces answers to questions posed in
the form of a working hypothesis that enables us to derive theories about what we
observe in the world around us. Its power lies in its ability to be repeated, providing
unbiased answers to questions to derive theories. This information is powerful and
offers opportunity to predict future events and phenomena (Ryan, 2002).
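
The distinction between correlation and causation can be illustrated with a small,
hedged Python sketch. It is not drawn from the text, and the variable names and
numbers are assumptions: two variables are made to depend on a third, unobserved
variable, so they are strongly correlated even though neither causes the other.
Correlation alone cannot expose this; only a controlled experiment, in which the
suspected cause is manipulated while other factors are held constant, can.

    import numpy as np

    rng = np.random.default_rng(7)

    # z is an unobserved common cause (a confounder); x and y both depend on z,
    # but neither depends on the other.
    z = rng.normal(size=500)
    x = 2.0 * z + rng.normal(size=500)
    y = 3.0 * z + rng.normal(size=500)

    r = np.corrcoef(x, y)[0, 1]
    print(f"Correlation between x and y: {r:.2f}")  # high, yet x does not cause y

In a controlled experiment, randomly assigning the level of x would break its link to z,
so any remaining association with y could be attributed to x itself.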

CONCLUSION

This particular chapter provides an understanding of the basic terms which are
frequently used in research and research methodology. In order to get an analytical
understanding of research, understanding and applying these terms in their right
sense is very much important. This section further detailed the importance of
understanding what science is, the functions of science, the scientific method, and the
steps to be followed in a scientific undertaking. Further, this portion explained how a
researcher can develop a scientific attitude for effective scientific research.

DISCUSSION QUESTIONS

1. Explain the difference between method and process?


2. What do you mean by science?
3. Define the term theory?
4. Explain and discuss causal explanations in research?
5. What are the functions of science?
6. Explain the scientific method?
7. What are the characteristics of the scientific method?
8. What are the requisites of scientific method?

MODULE 2

INTRODUCTION TO RESEARCH

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand and appreciate the meaning of research.


2. Evaluate different types of research.
3. Analyze different types of variables.
4. Understand inductive and deductive research.

INTRODUCTION

The term ‘research’ has been viewed with mystique by many people. It is seen to be
the preserve of academicians and the professional elite. In most people’s minds, the
word ‘research’ conjures up the image of a scholar, laboratory work, a university or other
‘academic’ setting. But research is simply the process of asking questions and
answering them by survey or experiment in an organized way. It should not be
confined to academicians alone. Every thinking person has the capacity and should do
research. The fundamental requirement for research is an inquiring mind in order to
recognize that there are questions that need answers. The quest for knowledge then
is the basic idea behind the research. The word research is derived from the Latin word
meaning to know.

DEFINITION

“Research is a systematic inquiry aimed at providing information to guide decision


making and to solve managerial problems” (Cooper and Schindler, 2000).

The Advanced Learner’s Dictionary of Current English (1952) lays down the meaning
of research as “a careful investigation or inquiry especially through the search for new
facts in any branch of knowledge.” Redman and Mory (1923) define research as a
“systematized effort to gain new knowledge.” Slesinger and Stephenson, in the
Encyclopedia of Social Sciences (1930), define research as “the manipulation of things,
concepts or symbols for the purpose of generalizing to extend, correct or verify
knowledge, whether that knowledge aids in the construction of theory or in the
practice of an art.” According to Moser, social research is a systematized investigation
to gain new knowledge about social phenomena and problems.

Research is a scientific undertaking which, by means of logical and systematic


techniques aims to:

• Discover new facts or verify or test old facts;


• Analyze their sequences and interrelationships and causal explanations;
• Develop new scientific tools, concepts and theories, which would facilitate
valuable study of human behavior (Young, P.V.).

Research is the systematic, critical investigation of questions in the business fields


with the purpose of yielding answers or solutions to them (Sekaran, 2003).

SYSTEMATIC because there is a definite set of procedures and steps which you will
follow. There are certain things in the research process which are always done in order
to get the most accurate results.

ORGANIZED in that there is a structure or method in going about doing research. It is


a planned procedure, not a spontaneous one. It is focused and limited to a specific
scope.

FINDING ANSWERS is the end of all research. Whether it is the answer to an


assumption or even a simple question, research is successful when we find answers.
Sometimes the answer is no, but it is still an answer.

QUESTIONS are central to the research. If there is no question, then the answer is of
no use. Research is focused on relevant, useful, and important questions. Without a
question, research has no focus, drive, or purpose.

ANALYSIS OF DEFINITIONS

• Research is a systematic and critical investigation of phenomena;


• The research aims at describing, interpreting and explaining phenomena;
• Research is objective in its approach and logical in analysis and conclusion;
• Based on empirical evidence;
• Finding answers to pertinent questions;
• Finding answers and solutions to issues, situations, incidents, needs, etc.;
• Emphasis on generalization; and
• Stand against criticism.

Research is a systematic process of identifying the problems, defining the problems,
identifying the variables/ indicators to address these problems, collecting, compiling,
processing, and analyzing data to assess the inherent characteristics of the
phenomenon under study and to identify the objective basis for arriving at a
correct/reliable decision.

OBJECTIVES OF RESEARCH

The purpose of research is to discover answers to questions through the application


of scientific procedures. The main aim of research is to find out the truth which is
hidden and which has not been discovered as yet. Though each research study has its
own specific purpose, we may think of research objectives as falling into the following
broad groupings:

1. To ensure the scientific study of facts, figures, incidents, situations and


issues in a systematic way;
2. To gain familiarity with a phenomenon or to achieve new insights into it
(studies with this object in view are termed as exploratory or formulative
research studies);
3. To portray accurately the characteristics of a particular individual,
situation or a group (studies with this object in view are known as
descriptive research studies);
4. To analyze interrelationships between variables which are incorporated in
the study.
5. To determine the frequency with which something occurs or with which it
is associated with something else (studies with this object in view are
known as diagnostic research studies);
6. To test a hypothesis of a causal relationship between variables (such
studies are known as hypothesis-testing research studies).
7. To establish generalizations based on scientific tools and techniques
through empirical observations.
8. To enable reliable predictions about a particular individual, situation,
or group which is subject to research;
9. To make provision of adequate solutions to the problems under study;
10. To contribute to the development of research by developing new tools,
concepts, and theories;
11. To contribute to the overall development of the individual and society at
large.

TYPES OF RESEARCH

Research at a broader level is divided into two categories: pure and applied. Applied
research has a practical problem-solving emphasis (Neuman, 2005). In business
research, it may help managers and researchers reveal answers to practical business
problems pertaining to performance, strategy, structure, culture, organizational
policy and the like. Pure or basic research is also problem solving, but in a different
sense. It obtains answers to questions of a theoretical nature and adds its contribution
to the existing body of knowledge (Sekaran and Bougie, 2010). Thus, both applied and
pure research are problem-based, but applied research helps managers in
organizational decision making.

BASIC / FUNDAMENTAL RESEARCH

Basic / Fundamental research is directed towards finding information that has a


broad base of applications and thus, adds to the already existing organized body of
scientific knowledge. The terms “basic” or “fundamental” research indicate that,
through theory generation, basic research provides the foundation for further, often
applied research. Fundamental research is mainly concerned with generalizations
and with the formulation of a theory. Such research aimed at certain conclusions (say,
a solution) facing a concrete social or business problem is an example of applied
research. “Gathering knowledge for knowledge’s sake is termed ‘pure’ or ‘basic’
research.” Research concerning some natural phenomenon or relating to pure
mathematics or human behavior carried on with a view to make generalizations are
examples of fundamental research.

Examples:

• How does the memory system work?


• How do language skills develop?
• How does one learn psychomotor skills?

APPLIED (OR ACTION) RESEARCH

Applied (or action) research is done to solve specific, practical
questions; its primary aim is not to gain knowledge for its own sake. Applied research
aims at finding a solution to an immediate problem facing a society or an
industrial/business organization. Research to identify social, economic or political
trends that may affect a particular institution or the copy research (research to find
out whether certain communications will be read and understood) or the marketing
research or evaluation research are examples of applied research. Thus, the central
aim of applied research is to discover a solution for some pressing practical problem.

Common areas of applied research include electronics, informatics, process
engineering and applied science.

Examples:

1. Does computer aided instruction improve student learning?


2. What is the effect of immediate feedback and delayed feedback on student
achievement?

DESCRIPTIVE RESEARCH

Descriptive research includes surveys and fact-finding inquiries of different kinds. The
major purpose of descriptive research is a description of the situation as it exists at
present. The main characteristic of this method is that the researcher has no control
over the variables; he can only report what has happened or what is happening. Most
ex post facto research projects are used for descriptive studies in which the researcher
seeks to measure such items as, for example, frequency of shopping, preferences of
people, or similar data.

Examples:

• What are the characteristics of agricultural education students?


• What is the level of job satisfaction of extension agents?
• Why do teachers leave teaching?

QUANTITATIVE

Quantitative: Quantitative research is based on the measurement of quantity or


amount. It is applicable to phenomena that can be expressed in terms of quantity.
Quantitative research designs use statistical analyses to examine whether there are
significant differences between groups on various indicators. Quantitative research is
an inquiry into an identified problem, based on testing a theory, measured with
numbers, and analyzed using statistical techniques. The goal of quantitative methods
is to determine whether the predictive generalizations of a theory hold true. A small
numerical sketch follows the examples below.

Examples:

• Effect of entrepreneurial orientation on work attitude.


• Impact of organizational culture upon employees’ commitment.
• Effect of green marketing on consumer consumption behavior.
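
As noted above, here is a small, hedged numerical sketch of a quantitative design in
Python. It is not taken from the text: the "entrepreneurial orientation" and "work
attitude" scores are simulated stand-ins for survey data, and ordinary least squares is
just one of many analyses a quantitative study might use.

    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated survey scores (assumptions for illustration only).
    orientation = rng.uniform(1, 5, size=100)                    # predictor, 1-5 scale
    attitude = 0.6 * orientation + rng.normal(0, 0.5, size=100)  # outcome

    # Fit a straight line (ordinary least squares) and inspect the slope,
    # i.e. the estimated effect of orientation on attitude in this sample.
    slope, intercept = np.polyfit(orientation, attitude, deg=1)
    print(f"Estimated effect (slope): {slope:.2f}, intercept: {intercept:.2f}")

In a real study, the slope would be reported together with a significance test and
effect size, and the data would come from measured indicators rather than simulation.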

QUALITATIVE RESEARCH

Qualitative research: Qualitative research, on the other hand, is concerned with


qualitative phenomena, i.e., phenomena relating to or involving quality or kind. This
type of research aims at discovering the underlying motives and desires, using in
depth interviews for the purpose. Other techniques of such research are word
association tests, sentence completion tests, story completion tests and similar other
projective techniques. Attitude or opinion research i.e., research designed to find out
how people feel or what they think about a particular subject or institution is also
qualitative research.

Examples:

• Why an ethnic group within a particular socioeconomic group that
should be performing well is not
• How learning leadership is important to excelling in business
• Research about “cross-boundary disruption” – the phenomenon
related to companies from different industries entering
successfully into other industries

CONCEPTUAL RESEARCH

Conceptual research: Conceptual research is that related to some abstract idea(s) or


theory. It is generally used by philosophers and thinkers to develop new concepts or
to reinterpret existing ones. Conceptual research tries to explore phenomena in a
more deep-rooted way, giving insights into the basics to be considered. Conceptual
research attempts to provide more insight and understanding of the phenomena
under study.

Examples:

1. How do employee incentives affect employee motivation?


2. Positioning information systems among the business discipline
3. Creation of a domain model for start-up technology companies

EMPIRICAL RESEARCH

Empirical research: Empirical research relies on experience or observation alone,


often without due regard for system and theory. It is data-based research, coming up
with conclusions which are capable of being verified by observation or experiment.
We can also call it an experimental type of research. In such research it is necessary
to get at facts first hand, at their source, and actively go about doing certain things to
stimulate the production of desired information. In such research, the researcher
must first provide himself with a working hypothesis or guess as to the probable
results. He then works to get enough facts (data) to prove or disprove his hypothesis.
He then sets up experimental designs which he thinks will manipulate the persons or
the materials concerned so as to bring forth the desired information. Such research is
thus characterized by the experimenter’s control over the variables under study and
his deliberate manipulation of one of them to study its effects. Evidence gathered
through experiments or empirical studies is today considered to be the most powerful
support possible for a given hypothesis.

The objectives of experimental research, in a nutshell, can be stated as follows:

• To assess the effects of particular variables on a phenomenon by


keeping the other variables constant or controlled.
• An observation under controlled conditions
• Analyze the relationship between dependent and independent
variables.

Examples:

1. Strategies for international expansion of Multinational


Corporation’s performance and survival of foreign subsidiaries
2. Effects of different levels of light on plant growth
3. Study of organizational action of multinational enterprises on new
ventures/international entrepreneurship

ONE TIME RESEARCH OR LONGITUDINAL RESEARCH

One time research or longitudinal research (depending upon the time): Longitudinal
research is the case where the research is carried on over several time-periods. A
longitudinal study follows a group composed of the same people across a period of
the life span. The behavior of these individuals is observed and/or measured at several
intervals over time to study the changes in their behavior. Longitudinal studies may
cover a short time, such as a few weeks, or a long time, such as the entire life span.
From the point of view of time, we can think of research either as one-time research
or longitudinal research. In the former case the research is confined to a single time-
period, whereas in the latter case the research is carried on over several time-periods.

Example:

• Longitudinal Research in pediatric psychology.

• Longitudinal studies in health-related behaviors and processes
over time.
• Longitudinal research in the study of children with chronic physical
conditions.

FIELD-SETTING RESEARCH OR LABORATORY RESEARCH

Field-setting research or Laboratory research (depending upon the environment):


Research carried out in the laboratory or field. Field research is a methodology used
by the researchers to acquire direct exposure from the field of phenomena which is
under investigation. For example, to provide practical exposure, business or
professional students are required to visit an organization for field work. Students
have to write and submit a field work report, using the format approved by the school,
after they work in the organization for a specific period of time. Field work can be
described as a systematic and organized effort to study and observe a specific
organizational situation in hand.

Examples:

1. Education and Development: The changing perspectives of the


role of education in the Highland Guatemalan community of
San Antonio Palopó.
2. A study on plant breeding and propagation.
3. French Students and Their Relations to Culture.

SIMULATION RESEARCH

Simulation research: Research carried out in a simulated environment. Simulation
allows researchers to treat the inherent complexity of organizational systems as a
given. If other methods answer the questions “What happened, and how, and why?”,
simulation helps answer the question “What if?” The simulation approach involves
the construction of an artificial environment within which relevant information and
data can be generated. This permits an observation of the dynamic behavior of a
system (or its sub-system) under controlled conditions. The term ‘simulation’ in the
context of business and social sciences applications refers to “the operation of a
numerical model that represents the structure of a dynamic process. Given the values
of initial conditions, parameters and exogenous variables, a simulation is run to
represent the behavior of the process over time.” The simulation approach can also be
useful in building models for understanding future conditions. Simulation enables
studies of more complex systems because it creates observations by “moving
forward” into the future, whereas other research methods attempt to look backwards
across history to determine what happened, and how. The purposes of simulation
research include theory development, prediction, education, training, and
performance assessment. Computer simulation is growing in popularity as a
methodological approach for organizational researchers. A minimal worked sketch
follows the examples below.

Characteristics

• Involves the construction of computer modeling to generate data.


• Controlled environment.
• Variables and parameters can be manipulated to study their
effects on behavior.
• Useful in building models to understand the future conditions.

Example:

1. Human performances of health care professionals


2. Scheduling of production in flow lines, assembly shops, and job
shops
3. Simulation models in project portfolio management
4. Earthquake simulation, wind effects on tall buildings.
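
To make the idea of “moving forward into the future” concrete, here is a minimal,
hedged Monte Carlo sketch in Python. It is only an illustration: the three-task project,
the triangular duration distributions, and the 12-day deadline are all assumptions, not
figures from the text.

    import numpy as np

    rng = np.random.default_rng(3)
    runs = 10_000

    # Uncertain task durations in days, drawn from assumed triangular
    # distributions (minimum, most likely, maximum).
    task_a = rng.triangular(2, 3, 6, size=runs)
    task_b = rng.triangular(4, 5, 9, size=runs)
    task_c = rng.triangular(1, 2, 4, size=runs)

    total = task_a + task_b + task_c
    deadline = 12
    print(f"Mean project duration: {total.mean():.1f} days")
    print(f"Chance of finishing within {deadline} days: {(total <= deadline).mean():.2%}")

Each run is one possible future; repeating the model many times turns the “What if?”
question into an estimated probability.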

CLINICAL OR DIAGNOSTIC RESEARCH

Clinical or diagnostic research: This type of research follows case-study methods or


in-depth approaches to reach the basic causal relations. Such studies usually go deep
into the causes of things or events that interest us, using very small samples and very
deep probing data gathering devices.

Example:

• B-type Natriuretic peptides in the detection of heart failure.


• The diagnostic work-up for patients suspected of deep vein thrombosis
(DVT).
• A diagnostic survey of cattle herd size and household size in
Matabeleland, Zimbabwe.

EXPLORATORY RESEARCH

Exploratory research: Exploratory research is the study of an unacquainted problem.


The researcher has little or no knowledge about the problem. The objective of
exploratory research is the development of hypotheses rather than their testing,
whereas formalized research studies are those with substantial structure and with
specific hypotheses to be tested.

Example:

1. Exploratory Study on Grocery Shopping Motivations.


2. Exploratory Study on Operational reasons to budget.
3. Exploratory Study on International Research and Development
Strategies.

HISTORICAL RESEARCH

Historical research: The purpose of an historical method is "to reconstruct the past
objectively and accurately, often in relation to the tenability of an hypothesis" (Isaac
and Michael, 1977). For example, a study renovating practice in the pedagogy of
learning and development in Taiwan during the past 70 years would be considered a
historical study. It is that which utilizes historical sources like documents, remains, etc.
to study events or ideas of the past, including the philosophy of persons and groups
at any remote point of time. Historical research utilizes existing documents to study
the events of the past. Research can also be experimental or evaluative.

Examples:

1. What was the predecessor of the cooperative extension service?


2. What does John Dewey say about the integration of academic and
vocational education?
3. What did 4-H members read in the past and what are the implications for
the present?
4. How did farm life schools differ from regular high schools?

CONCLUSION-ORIENTED RESEARCH

Conclusion-oriented research: When undertaking conclusion-oriented research, a
researcher is free to pick up a problem, redesign the enquiry as he proceeds, and is
prepared to conceptualize as he wishes. This contrasts with decision-oriented
research, discussed below, in which the research is carried out for the needs of a
decision maker.

Characteristics:

• Provide specific information that aids the decision maker in evaluating


alternative courses of action
• Sound statistical methods and formal research methodologies are used to
increase the reliability of the information
• The data sought tend to be specific and decisive
• Also more structured and formal than exploratory research

Example:

1. A researcher into schizophrenia may recommend a more effective


treatment.
2. A physicist might postulate that our picture of the structure of the atom
should be changed.
3. A researcher could make suggestions for refinement of the experimental
design

DECISION-ORIENTED RESEARCH

Decision-oriented research: Decision-oriented research is always undertaken for the needs of a
decision maker, and the researcher in this case is not free to embark upon research
according to his own inclination. Operations research is an example of decision-
oriented research since it is a scientific method of providing executive departments
with a quantitative basis for decisions regarding operations under their control.

Example:

1. Marketing Segmentation of Tata Nano in India and its targeting and


positioning Strategy.
2. A review of benefit segmentation of tourism research.
3. Attitude monitoring of the blogosphere using market research tools.

EVALUATIVE STUDIES

Evaluative studies: Evaluative research is a type of applied research. In evaluative


research, the cost effectiveness of a program is examined. Such research addresses
the question of the efficiency of a program. Such research studies are useful in taking
policy decisions on issues such as whether the program is effective, whether it needs
modification or re-orientation, and whether it should be continued. This research is for
assessing the effectiveness of a business program or action implemented. It is further
used for assessing the impact of a business project. The assessment may take the form of a
concurrent, periodic, or terminal evaluation. Evaluations focused on assessing program quality,
implementation, and impact in order to provide feedback and information for internal
improvement, without external consequences, are called formative evaluations.
Evaluation studies designed to provide information on program impact to external
agencies are referred to as summative evaluations. State tests administered by states
to assess performance on state standards are an example of the information collected
to support a summative evaluation.

Examples:

1. Environmental assessment and evaluation research: examples from


mental health and substance abuse programs.
2. Participatory learning and action with drug using populations.
3. Effectiveness of family planning programs.

ACTION RESEARCH

Action Research: This type of research is a concurrent evaluation study of an action
program. It is about solving a problem in order to improve an existing business situation, and it
covers all developmental programs. The origins of action research, and the ways
in which action research is both perceived and conducted today, are open to dispute,
yet it "has been a distinctive form of inquiry since the 1940s" (Elden and Chisholm,
1993). The term 'action research' is popularly attributed to Kurt Lewin (1946), though
other authors at the same time were calling for similar action-oriented approaches to
research (e.g., Collier, 1945; Corey, 1953). Elden and Chisholm (1993) go on to
note that action research is change oriented, seeking to introduce changes with
positive social values, the key focus of the practice being on a problem and its solution.
Thus, Sanford (1970) views action research as a form of problem-centered research
that bridges the divide between theory and practice, enabling the researcher to
develop applicable knowledge in the problem domain (Peters and Robinson, 1984).
Palmer and Jacobson (1971) see action research as a means of using research to
promote social action. Further to these descriptions, Rapoport (1970) identifies action
research as a form of inquiry that seeks to address both the practical problems of
people and the goals of social science within a "mutually acceptable ethical
framework" (Susman, 1985). Among contemporary action researchers, Argyris et al.'s
(1985) description is most informative, viz.: "Action science is an inquiry into how
human beings design and implement action in relation to one another. Hence it is a
science of practice …” Action research may thus be said to occur when scientists
engage with participants in a collaborative process of critical inquiry into problems of
social practice in a learning context.

Examples:

1. Do “flash cards” of horticultural plants with scientific names improve
student learning?
2. Do leaf collections really help students learn tree identification?
3. Do classes with assigned seats have fewer discipline problems than classes
without assigned seating?

ANALYTICAL RESEARCH

Analytical Research: Analytical research aims at testing the hypothesis. It tries to


specify and interpret the relationships between variables. This research is to examine
relationships among variables from various angles. It applies statistical tools to
convert qualitative data into quantitative data for approximate measurement.
Analytical research is a continuation of descriptive research: the researcher goes beyond
describing the features to analyzing and explaining why or how the phenomenon being
studied is happening. Thus, analytical research aims to understand phenomena by
discovering and measuring causal relations among them.

Examples:

1. The relationship between pop cultures and students' attitude.


2. Role of innovation in competition and productivity.
3. Role of advertisement in consumer surplus.

SURVEYS

Surveys: Surveys are fact-finding studies. Survey research may simply provide
information on a certain situation, incident, or phenomenon, or it may aim to explain
phenomena and examine the cause-and-effect relationships between variables. Here
the researcher often tries to make a comparison of demographic groups. Surveys cover
a definite geographical area, and they require a well-thought-out plan, careful
analysis, and rational interpretation of findings. In a census survey, however,
information is gathered from every member of the population. For that reason, a
census survey is applicable when the population is relatively small and readily
accessible.

Example:

1. Population survey in a country


2. Market survey on experiences with a certain consumer product
3. Surveys with citizens regarding their perceptions, awareness and
actions stimulated by advertisements

Advantages of surveys

1. Good for comparative analysis.


2. Can get lots of data in a relatively short space of time.

3. Can be cost-effective (if you use the Internet, for example).
4. Can take less time for respondents to complete (compared to an
interview or focus group).

Disadvantages of surveys

1. Responses may not be specific.


2. Questions may be misinterpreted.
3. May not get as many responses as you need.
4. You don’t get the full story.

CASE STUDIES

Case studies: The case study methodology is an in-depth comprehensive study of


phenomena. It is concerned with contextual analysis of similar situations and provides
qualitative rather than quantitative data. It is conducted as an independent study or as a
supplement to other methods. The case study method supports the examination of
complex factors. Case studies are defined in various ways and a standard
does not exist. However, a definition compiled from a number of sources (Stone, 1978;
Benbasat, 1984; Yin, 1984; Bonoma, 1985 and Kaplan, 1985) in Benbasat et al. (1987),
runs as follows: A case study examines a phenomenon in its natural setting, employing
multiple methods of data collection to gather information from one or a few entities
(people, groups or organizations). The boundaries of the phenomenon are not clear
at the outset of the research and no experimental control or manipulation is used. The
case study is considered by Benbasat et al. (1987) to be viable for three reasons: (1) it is
necessary to study the phenomenon in its natural setting; (2) the researcher can ask
"how" and "why" questions, to understand the nature and complexity of the
processes taking place; and (3) research is being conducted in an area where few, if any,
previous studies have been undertaken.

Examples:

• Case study on service quality of University Education System.


• Planned environmental watering - Lachlan catchment, New South
Wales.
• Case study of Managerial Support influence on performance
enhancement: University Library Employees.

Advantages of case studies

1. Specific concrete example;


2. Can help with problem solving;

3. Are often interesting to read.

Disadvantages of case studies

1. Can take time to develop;


2. Depending on format, may need some level of good writing skills;
3. Do not usually give a broad overview of the issue at hand.

RESEARCH APPROACHES

The above description of the types of research brings to light the fact that there are
two basic approaches to research, viz., quantitative approach and the qualitative
approach.

QUANTITATIVE APPROACH

The quantitative approach involves the generation of data in quantitative form which
can be subjected to rigorous quantitative analysis in a formal and rigid fashion. This
approach can be further sub-classified into inferential, experimental and simulation
approaches to research. The purpose of inferential approach to research is to form a
database from which to infer characteristics or relationships of population. This
usually means survey research where a sample of population is studied (questioned
or observed) to determine its characteristics, and it is then inferred that the
population has the same characteristics.

Quantitative research relies on deductive reasoning or deduction (Sekaran and


Bougie, 2010) and makes use of a variety of quantitative analysis techniques that
range from providing simple descriptions of the variables involved, to establishing
statistical relationships among variables through complex statistical modeling
(Saunders et al., 2009). Quantitative research calls for typical research designs where
the focus of research is to describe, explain and predict phenomena, uses probability
sampling and relies on larger sample sizes as compared to qualitative research designs
(Cooper and Schindler, 2006). By using particular methodologies and techniques,
quantitative research quantifies relationships between different variables. In
quantitative research, involving two variables, for example, the aim of researcher is
to study the relationship between an independent (predictor) variable and a
dependent (criterion) variable in a population (Hopkins, 2000). There are several types
of quantitative research. For instance, it can be classified as 1) survey research, 2)
correlational research, 3) experimental research and 4) causal-comparative research.
Each type has its own typical characteristics.
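As a purely illustrative sketch of quantifying the relationship between a predictor and a criterion variable, the short Python program below computes a Pearson correlation and a least-squares slope for a small, made-up data set; the variable names and figures are hypothetical and stand in for, say, advertising expenditure and sales growth.

import math

# Hypothetical data: advertising expenditure (predictor) and sales growth (criterion).
advertising = [10, 12, 15, 18, 20, 22, 25, 30]           # e.g., in thousands
sales_growth = [2.1, 2.4, 2.9, 3.4, 3.7, 4.0, 4.6, 5.3]  # e.g., in percent

def mean(values):
    return sum(values) / len(values)

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def slope(x, y):
    """Least-squares slope of y on x: the change in y per unit change in x."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

print("r =", round(pearson_r(advertising, sales_growth), 3))
print("slope =", round(slope(advertising, sales_growth), 3))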

Characteristic:

• The research is based on the measurement of quantity or amount.


• Applicable to phenomena that can be expressed in terms of quantity.

Examples:

1. Effect of service marketing on consumer buying behavior


2. Influence of green marketing on brand perception
3. Effect of marketing ethics on consumer buying behavior

QUALITATIVE APPROACH

Qualitative approach to research is concerned with subjective assessment of


attitudes, opinions and behavior. Research in such a situation is a function of
researcher’s insights and impressions. Such an approach to research generates results
either in non-quantitative form or in forms that are not subjected to rigorous
quantitative analysis. Generally, the techniques of focus group interviews, projective
techniques and depth interviews are used.

In comparison to quantitative research, qualitative research uses inductive reasoning


(Sekaran and Bougie, 2010) and aims to acquire an in-depth understanding of human
behavior and the reasons for the occurrence of that behavior. Qualitative research can also
be called interpretive research, as its primary objective is not generalization but
to provide a deep interpretation of the phenomena (Cooper and Schindler, 2006). It is
used in many academic disciplines such as the social sciences and market research
(Denzin and Lincoln, 2005), particularly where the objective is to probe into human
behaviors and personalities. Given the unique purposes of qualitative research, it adopts
typical research designs, uses non-probability sampling, relies on smaller samples
(Cooper and Schindler, 2006), and uses different data collection and analysis
techniques as compared to quantitative research (Neuman, 2005).

Characteristic:

• The research is based on the measurement of quality: attitude,


opinions, behavior.
• Applicable to qualitative phenomena relating to quality or kind.

Examples:

1. Ethnographic research: Ethnography of ambulance dispatch work in a


large UK metropolitan region.

2. Phenomenology: Moral emotions and phenomenology.
3. Grounded theory: From the colony to the corporation: Studying
knowledge transfer across international boundaries.

MIXED METHOD

Mixed method: Certain research problems call for combining both quantitative and
qualitative methodologies. A researcher might therefore adopt a mixed methods
approach where both quantitative and qualitative data collection techniques and
analytical procedures are used in the same research design (Saunders et al., 2009). A
good researcher chooses to go for quantitative or qualitative or sometimes, a
combination of the two types of research based on his problem statement and the
study objectives.

Objective:

To find solutions to immediate problems / research questions

Characteristics

• Integrate / mix different methods.


• Integrate experimental and numerical methods.
• Used when one research method alone cannot fully explain the behavior under study.

Example:

1. Talent acquisition: Qualitative parameters and quantitative assessment


in IT sector
2. Teamwork, infrastructure, resources, and training in mixed methods
research
3. Beyond the R Series – High-quality mixed methods activities in
successful fellowship, career, training, and center grant applications

EXPERIMENTAL APPROACH

Experimental approach is characterized by much greater control over the research


environment and in this case some variables are manipulated to observe their effect
on other variables. Moore and McCabe (1993) indicate that the best method — indeed
the only fully compelling method — of establishing causation is to conduct a carefully
designed experiment in which the effects of possible lurking
variables are controlled. To experiment means to actively change the X and to observe
the response in Y. The experimental method is the only method of research that can
truly test hypotheses concerning cause-and-effect relationships. It represents the

most valid approach to the solution of educational problems, both practical and
theoretical, and to the advancement of education as a science— (Gay, 1992).

Unique Features of Experiments:

• The investigator manipulates a variable directly (the independent variable).


• Empirical observations based on experiments provide the strongest
argument for cause-effect relationships (handouts: Michael).
• The research is based on the measurement of hard evidence from direct
observations from experiments.
• Data gathered from experiments are considered the most powerful
evidences to support a given hypothesis.
• Control of variables in the experiments.
• Some variables can be manipulated to observe their effects on other
variables.

Example:

An example of a true experiment involving an educational technology application is


the study by Clariana and Lee (2001) on the use of different types of feedback in
computer delivered instruction. Graduate students were randomly assigned to one of
five feedback treatments, approximately 25 subjects per group, comprised of (a) a
constructed-response (fill-in-the-blank) study task with feedback and recognition
(multiple-choice) tasks with (b) single-try feedback, (c) multiple response feedback,
(d) single-try feedback with overt responding, and (e) multiple-try feedback with overt
responding. All subjects were treated identically, with the exception of the
manipulation of the assigned feedback treatment. The major outcome variable
(observation) was a constructed-response achievement test on the lesson material.
Findings favored the recognition-study treatments with feedback followed by overt
responding. Given the true experimental design employed, the authors could infer
that the learning advantages obtained were due to the properties of the overt
responding (namely, in their opinion, that it better matched the posttest measure of
learning) rather than extraneous factors relating to the lesson, environment, or
instructional delivery. In research parlance, “causal” inferences can be made
regarding the effects of the independent (manipulated) variable (in this case, the type
of feedback strategy) on the dependent (outcome) variable (in this case, the degree
of learning).
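The defining step of such a true experiment is the random assignment of subjects to treatment groups. The following minimal Python sketch shows one way this step could be carried out; the roster of 125 hypothetical students and the five treatment labels are illustrative only and are not taken from the study described above.

import random

def randomly_assign(subjects, treatments, seed=7):
    """Randomly assign each subject to one treatment group.

    Shuffling the roster and dealing subjects round-robin keeps group sizes as
    equal as possible, so later differences between groups can be attributed to
    the manipulated treatment rather than to who happened to be in which group.
    """
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    groups = {t: [] for t in treatments}
    for i, subject in enumerate(shuffled):
        groups[treatments[i % len(treatments)]].append(subject)
    return groups

subjects = [f"student_{n:03d}" for n in range(1, 126)]
treatments = ["constructed-response study", "single-try feedback",
              "multiple-response feedback", "single-try feedback + overt responding",
              "multiple-try feedback + overt responding"]
for name, members in randomly_assign(subjects, treatments).items():
    print(name, len(members))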

DESCRIPTIVE AND EXPLANATORY

According to Cooper and Schindler (2003), descriptive study mostly seeks answers to
who, what, when, where and sometimes how questions, whereas an explanatory study seeks
answers to how and, most importantly, ‘why’ questions. This is used "to describe
systematically the facts and characteristics of a given population or area of interest,
factually and accurately" (Isaac and Michael, 1977). Contrary to an exploratory
research, a descriptive study is more rigid, preplanned and structured, and is typically
based on a large sample (Churchill and Iacobucci, 2004; Hair, et al. 2003; Malhotra,
1999).

For instance, a survey reporting the results of a public service commission rank list
would be considered a descriptive study. A study of this kind involves collecting data
that test the validity of the hypotheses regarding the present status of the subjects of
the study. It does not seek or try to explain relationships or make implications.
Population census studies, public opinion surveys, fact-finding surveys, observation
studies, job descriptions, surveys of the literature, reports, test score analyses can be
cited as examples of a descriptive research. Explanatory study is also called causal
study because it seeks to study cause and effect relationship between different
variables in the study.

Characteristics

• Build on previous information


• Show relationships between variables
• No manipulation of independent variable;
• Representative samples required
• Structured research plans
• Involves gathering data that describe events and then organizes,
tabulates, depicts, and describes the data.
• Uses description as a tool to organize data into patterns that
emerge during analysis.
• Often uses visual aids such as graphs and charts to aid the reader
• Conclusive findings.
• Can involve collecting quantitative information.
• Can describe categories of qualitative information such as patterns
of interaction when using technology in the classroom.
• Does not fit neatly into either the quantitative or the qualitative category.

Examples:

1. Descriptive research design for measuring the attributes of


successful sales people, or evaluate a training program or a retailing
situation.
2. Market segmentation studies, i.e., describe characteristics of
various groups, size of market, buying power of consumers.

3. Sales potential studies for particular geographic region or
population segment.
4. Determining perceptions of company brand or product
characteristics.

Advantages

• The subject is being observed in a completely natural and
unchanged environment.
• Often used as a pre-cursor to quantitative research designs, the
general overview giving some valuable pointers as to what
variables are worth testing quantitatively.

Disadvantages

• Because there are no variables manipulated, there is no way to


statistically analyze the results.
• Many scientists regard this type of study as very unreliable and
‘unscientific’.
• The outcomes of observational studies are not repeatable, and so
there can be no replication of the experiment and reviewing of the
results.

VARIABLES

The meaning of the word ‘variable’, as given by Collins Dictionary, refers to something
that changes quite often, usually with no fixed pattern to these changes. A variable is any
observation that can take different values. Examples: race, gender, curriculum used,
student outcomes, instructional strategies, computer labs, reading program, math
program, student attitude, parent satisfaction, readiness for first grade, etc.

According to Zikmund, Babin, Carr and Griffin (2009), there can be a number of variables
in a research study among which a researcher may want to study relationships such
as:

1. Independent variable(s)
2. Dependent variables(s)
3. Moderating variable(s)
4. Mediating variable(s) and
5. Extraneous variable(s)

DEPENDENT VARIABLE

The result or outcome that is expected to occur from a treatment. The change in the
dependent variable is presumed to be caused by the independent variable. The
dependent variable is observed, measured, and analyzed to detect changes caused by
the independent variable. On a graph, the dependent variable is placed on the vertical
axis.

CONTROL VARIABLE

The characteristic that is controlled by the researcher in order to reduce any impact
this factor might have on the interpretation of the results. Controlled variables should
not be allowed to change. Controlled variables are not shown on a graph. Suppose
that the experiment is about the effect of color on mood. For theoretical reasons,
psychologists may decide to recruit male subjects within a particular age range only.
Consequently, the variables gender and age are represented at the same level in the
test conditions defined by the three colors present in the example. It is in this sense
that experimenters hold constant gender and age. Consequently, gender and age are
the control variables of the experiment.

MODERATOR VARIABLE

A characteristic that influences the impact of the independent variable upon the
dependent variable. In general terms, a moderator is a qualitative (e.g., sex, race,
class) or quantitative (e.g., level of reward) variable that affects the direction and
/or strength of the relation between an independent or predictor variable and a
dependent or criterion variable.

EXTRANEOUS VARIABLE

The extraneous variables of the experiment are identified by exclusion. Specifically,


any variable that is neither the independent nor the dependent nor the control
variable is an extraneous variable. Consequently, there are logically an infinite number
of extraneous variables. Factors that produce uncontrolled, unpredictable impacts on
the dependent variable. Extraneous variables may cause confusion if they are not
controlled. They are contaminating factors that are a threat to the validity of the
study.

INDEPENDENT VARIABLE

An independent variable is the predictor variable which is supposed to be the cause


of change in the dependent variable (criterion variable). This variable is called the
“independent” variable because the formation of the test conditions is not
determined by what the subject does. In a hypothetical relationship between
advertising expenditures and sales growth, for example, advertising expenditures is
independent variable, whereas sales growth is a dependent variable. A moderator
variable or moderator is one that has a strong contingent effect on the relationship
between independent and dependent variable (Sekaran and Bougie, 2010). In
statistical terms, the effect of a moderating variable in the relationship between
independent and dependent variable is termed as interaction (Cohen & Patricia,
2002). In the relationship between advertising expenditures and sales growth, for
example, ‘type of media’ may be considered as a moderating variable.
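To see what a moderating (interaction) effect looks like in practice, the hedged Python sketch below simulates data in which the strength of the advertising-to-sales relationship depends on a hypothetical media type, and then estimates the slope separately for each level of the moderator; all coefficients and labels are invented for illustration.

import random

rng = random.Random(0)

def slope(x, y):
    """Least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

# Simulate a moderated relationship: the effect of advertising spend on sales
# growth is stronger for one (hypothetical) media type than for the other.
data = []
for _ in range(200):
    spend = rng.uniform(5, 50)
    media = rng.choice(["online", "print"])
    effect = 0.9 if media == "online" else 0.3   # the moderator changes the strength
    sales = 1.0 + effect * spend + rng.gauss(0, 2)
    data.append((spend, media, sales))

for media_type in ("online", "print"):
    xs = [row[0] for row in data if row[1] == media_type]
    ys = [row[2] for row in data if row[1] == media_type]
    print(media_type, "estimated slope:", round(slope(xs, ys), 2))

Because the estimated slope differs markedly between the two media types, the analyst would conclude that media type moderates the advertising-sales relationship; in a regression framework this corresponds to a significant interaction term.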

MEDIATING VARIABLE:

A mediating variable, also called as mediator or intervening variable is one that


provides a causal link between independent and dependent variable (Sekaran and
Bougie, 2010). In the relationship between advertising expenditures and sales growth,
for example, ‘product awareness’ may be considered as a mediating variable.

It is important for a researcher to understand the nature of variables in his study and
how to measure those variables. Measurement of variables may be easy or difficult
depending upon their complexity. A variable may be uni-dimensional (concept) such
as age or occupation, or multi-dimensional (construct) such as personality, job stress
or motivation; for the measurement of the latter, the researcher needs to develop
operational definitions (Cooper and Schindler, 2006). Once a researcher develops
hypotheses or lists research questions, he has a clear idea about the related variables
in his research activity. Equally important for the researcher is to know that how the
variables will be measured and what sort of data he will collect regarding each of the
concerned variables. The data may be nominal, ordinal, interval or ratio.
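The snippet below is a small, purely illustrative reminder of the four levels of measurement just mentioned; the variable names are hypothetical.

# Hypothetical variables tagged with their level of measurement.
measurement_levels = {
    "gender": "nominal",                   # categories with no natural order
    "job_satisfaction_rating": "ordinal",  # ordered categories (low / medium / high)
    "temperature_celsius": "interval",     # equal intervals, but no true zero point
    "monthly_sales": "ratio",              # equal intervals with a true zero point
}

for variable, level in measurement_levels.items():
    print(variable, "->", level)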

DEDUCTION AND INDUCTION

In any field of research, a researcher has to understand the general tone of research
and the direction of logic which is guided by either of the two approaches – deduction
and induction. A general distinction made between the two logical paths to knowledge
is that induction is the formation of a generalization derived from examination of a set
of particulars, while deduction is the identification of an unknown particular, drawn
from its resemblance to a set of known facts (Rothchild, 2006). Deduction is a
process which goes from the general to the specific, while induction is a process which goes
from the specific to the general (Decoo, 1996). In deduction, researchers use a “top down”
approach where conclusions follow logically from the premises, and in induction,
researchers use a “bottom up” approach where the conclusion is likely based on the
premises (Aqil Burney and Nadeem, 2006). However, deduction contains an element

of induction and the inductive process is likely to contain a smidgen of deduction
(Bryman and Bell, 2003).

DEDUCTIVE RESEARCH

Deductive research is based on deductive thought which transforms


general theory into specific hypothesis suitable for testing. In this case the
researcher's thinking runs from the general to the specific. He may have a guess about
human behavior, or some other social phenomenon and he proceeds to collect data
to prove it right or wrong. Working deductively, he first states the theory in the form
of a hypothesis and then selects a method by which to test it. If the data support the
hypothesis, he concludes that the theory is correct.

A SCHEMATIC REPRESENTATION OF DEDUCTIVE REASONING

Theory → Hypothesis → Observation → Confirmation

As Gill and Johnson (2002) assert, a deductive research method entails the
development of a conceptual and theoretical structure prior to its testing through
empirical observation. While providing insights on deductive research, Remenyi et al.
(1998) state that, in this approach the researcher may have deduced a new theory by
analyzing and then synthesizing ideas and concepts already present in the literature.
The emphasis on this type of research will be on the deduction of ideas or facts from
the new theory in the hope that it provides a better or a more coherent framework
than the theories that preceded it. However, by taking a slightly different perspective,
Gill and Johnson (2002) argue that, what is important is the logic of deduction and the
operationalization process, and how this involves the consequent testing of the theory
by its confrontation with the empirical world. According to Collis and Hussey (2003),
deduction is the dominant research approach in the natural sciences, where laws
present the basis of explanation, allow the anticipation of phenomena, predict their
occurrence and therefore permit them to be controlled.

Accordingly, Robson (2002) introduces five sequential stages through which deductive
research will be progressed:

1. Deducing a hypothesis from the theory.


2. Expressing the hypothesis in operational terms.
3. Testing the operational hypothesis.
4. Examining the specific outcome of the inquiry; and
5. If necessary, modifying the theory.

CHARACTERISTIC FEATURES OF DEDUCTIVE METHODOLOGY

Several important characteristics:

• Testing a theory
• Deductive reasoning works from the more general to the more specific.
• Sometimes this is informally called a "top-down" approach.
• There is the search to explain causal relationships between variables.
• Concepts need to be operationalized in a way that enables facts to be
measured quantitatively.
• The conclusion follows logically from the premises (available facts)
• Generalization. (Weijun, 2008)

Example of Deductive method

Theory: Social Representation Theory.

Hypothesis: The representation of rural English life is stereotyped in current British
television drama.

Observation: Observe random episodes of Monarch of the Glen, Emmerdale, Harbour
Lights, Heartbeat and Where the Heart Is.

Confirmation or rejection of the hypothesis.

INDUCTIVE RESEARCH METHOD

Inductive research is based on inductive thought or reasoning which transforms


specific observations into a general theory. Here the researcher's thinking goes from
the specific to the general. If he observes a pattern in society, he may form a
hypothesis on it, conduct surveys or experiments to verify his hypothesis, and thus reach
a conclusion.

Within inductive approach, the theory would follow the data rather than vice versa as
with deduction. As Gill and Johnson (2002) describe, learning is done by reflecting
upon particular past experiences and through the formulation of abstract concepts
and theories, hence, induction corresponds to the right-hand side of Kolb’s learning
cycle. In sharp contrast to the deductive tradition, theory is the outcome of
induction.

A SCHEMATIC REPRESENTATION OF INDUCTIVE REASONING

Observation → Pattern → Tentative Hypothesis → Theory

Kuhn (1962) implies that deductive researchers are enslaved normal scientists, while
inductive researchers are paradigm-breaking revolutionaries (Glaser and Strauss,
1967). Although the debate between supporters of induction and supporters of
deduction has a long history, as Gill and Johnson (2002) claim, the modern
justification for taking an inductive approach in the social sciences tends to revolve
around two related arguments:

1. The explanation of social phenomena grounded in observation and


experience
2. Critique of some of the philosophical assumptions embraced by
positivism

CHARACTERISTIC FEATURE OF INDUCTIVE METHODOLOGY

• Building theory
• Inductive reasoning works the other way, moving from specific
observations to broader generalizations and theories.
• Informally, this is sometimes called a "bottom-up" approach.
• The conclusion is likely based on the premises.
• Involves a degree of uncertainty. (Weijun, 2008)

Example of inductive method:

DIFFERENCE BETWEEN DEDUCTION AND INDUCTION

DEDUCTION vs. INDUCTION

• Moving from theory to data vs. moving from data to theory.
• Common with the natural sciences vs. common with the social sciences.
• A highly structured approach vs. a flexible structure that permits changes.
• Explaining causal relationships between variables vs. understanding the meanings
humans attach to events.
• Selecting samples of sufficient size to generalize conclusions vs. less concern with
the need to generalize.
(Major differences between deductive and inductive approaches to research:
adapted and modified from Saunders et al., 2007)

SIGNIFICANCE OF RESEARCH METHOD

Research inculcates scientific and inductive thinking and it promotes the development
of logical habits of thinking and organization. Research has its special significance in
solving various operational and planning problems of business and industry.
Operations research and market research, along with motivational research, are
considered crucial and their results assist, in more than one way, in taking business
decisions.

Market research is the investigation of the structure and development of a market


for the purpose of formulating efficient policies for purchasing, production and sales.

Operations research refers to the application of mathematical, logical and analytical


techniques to the solution of business problems of cost minimization or of profit
maximization or what can be termed as optimization problems.

Motivational research, which seeks to determine why people behave as they do, is mainly
concerned with market characteristics. In other words, it is concerned with the
determination of the motivations underlying the consumer (market) behavior. All
these are of great help to people in business and industry who are responsible for
taking business decisions.

Research is equally important for social scientists in studying social relationships


and in seeking answers to various social problems. It provides the intellectual
satisfaction of knowing a few things just for the sake of knowledge and also has
practical utility for the social scientist to know for the sake of being able to do
something better or in a more efficient manner. Research in the social sciences is
concerned both with knowledge for its own sake and with knowledge of what it can
contribute to practical concerns.

CONCLUSION

A beginner in research should have a clear understanding of what research is, the
typologies of research, and especially the difference between qualitative and quantitative
research, as this gives researchers a better orientation to do research with a clear
path and focus. This section of the book details all of this information and removes
many sources of confusion, so that a beginner can engage in the research process with
clarity and concentration. The information incorporated in this section gives a better
understanding of the way the research design is to be made. A researcher cannot
follow the same form of methodology in varied forms of research, especially
qualitative and quantitative research. An introduction to the deductive and inductive
methods, and the types of variables used in quantitative research, such as independent,
dependent, and moderating variables, are all described for the beginner's understanding.
The material incorporated in this part further provides a detailed analysis of the
epistemology of research and research methodology. Several confusions and
misunderstandings will be removed, and the researcher gains better focus on the topic
on which he/she has to concentrate by gathering such knowledge.

DISCUSSION QUESTIONS

1. Explain the term research. Explain significance of research.


2. “Research is the systematic, critical investigation of questions in the
business fields with the purpose of yielding answers to them.” Discuss.
3. What are the characteristic features of research?
4. Explain the objectives of research.
5. What are the different types of research? Explain two research
typologies in detail.
6. What do you mean by empirical research?
7. Differentiate qualitative and quantitative research.
8. How do you conceptualize experimental research?
9. Differentiate survey and research.
10. What are the different types of research process?
11. How do you differentiate the case study approach and the experimental
approach?

CHAPTER 3

SCIENTIFIC PARADIGM IN RESEARCH

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand the philosophy of social science research


2. Learn the paradigms of research methods
3. Understand and appreciate the applications of quantitative research
4. Understand and appreciate the applications of qualitative research
5. Evaluate different types of research
6. Understand inductive and deductive research

INTRODUCTION

A paradigm delivers a conceptual framework for seeing and making sense of the social
world. According to Burrell and Morgan (1979), "to be located in a particular paradigm
is to view the world in a particular way.” Indeed, paradigm has been termed a (Patton,
1990) "world view". Nevertheless, it was Kuhn (1970); who familiarized the term as
"universally recognized scientific achievements that for a time provide model
problems and solutions to a community of practitioners", and suspected that (Khun,
1970) "something like a paradigm is a prerequisite to perception itself". In the
postscript to his second edition, Khun, (1970) provides a useful definition; "it stands
for the entire constellation of beliefs, values and techniques, and so on shared by the
members of a community." “Refers to the progress of scientific practice based on
people’s philosophies and assumptions about the world and the nature of
knowledge”. Paradigms offer a framework comprising an accepted set of theories,
methods, and ways of defining data.

VALUE OF THE TERM PARADIGM

Husen, (1988) explains the value of the term paradigm as follows:

A paradigm determines the criteria according to which one selects and defines
problems for enquiry. Young scientists tend to be socialized into the precepts of the
prevailing paradigm which to them constitutes ‘normal science.’ In that respect a

paradigm could be regarded as a cultural artifact, reflecting the dominant notions of
behavior in a particular scientific community…

A ‘revolution’ in the world of scientific paradigms occurs when one or several


researchers at a given time encounter anomalies, for instance, making observations,
which in a striking way do not fit the prevailing paradigm. Such anomalies can give rise
to a crisis after which the universe under study is perceived in an entirely new light.
Previous theories and facts become subject to thorough rethinking and reevaluation.

A paradigm is, therefore, perceived to be the underlying philosophical concept that


structures thinking in disciplines. Since the research approaches initially used in
education drew on that used in Psychology and Biology, the traditional research
paradigm can be historically described as "scientific" (Tall, 1994).

TWO CRITICAL QUESTIONS

1. What constitutes different methodological paradigms or disciplinary


matrices?

• Beliefs
• Assumptions
• Values
• Aims of social inquiry, self, society, human agency
2. How are these paradigms socially accomplished or constituted?

RESEARCH PHILOSOPHY

A research philosophy is a belief about the way in which data about a phenomenon
should be gathered, analyzed and used. The term epistemology (what is known to be
true) as opposed to doxology (what is believed to be true) encompasses the various
philosophies of research approach. The purpose of science, then, is the process of
transforming things believed into things known: doxa to episteme. Two major research
philosophies have been identified in the Western tradition of science, namely
positivist (sometimes called scientific) and interpretivist (also known as antipositivist)
(Galliers, 1991).

POSITIVISM

Positivists believe that reality is stable and can be observed and described from an
objective viewpoint (Levin, 1988), i.e. without interfering with the phenomena being
studied. They contend that phenomena should be isolated and that observations
should be repeatable. This often involves manipulation of reality with variations in

only a single independent variable so as to identify regularities in, and to form
relationships between, some of the constituent elements of the social world.

Predictions can be made based on the previously observed and explained realities and
their interrelationships. "Positivism has a long and rich historical tradition. It is so
embedded in our society that knowledge claims not grounded in positivist thought are
simply dismissed as ascientific and therefore invalid" (Hirschheim, 1985).

This view is indirectly supported by Alavi and Carlson (1992) who, in a review of 902
IS research articles, found that all the empirical studies were positivist in approach.
Positivism has also had a particularly successful association with the physical and
natural sciences. There has, however, been much debate on the issue of whether or
not this positivist paradigm is entirely suitable for the social sciences (Hirschheim,
1985), many authors calling for a more pluralistic attitude towards IS research
methodologies (see e.g. Kuhn, 1970; Bjørn-Andersen, 1985; Remenyi and Williams,
1996).

While we shall not elaborate on this debate further, it is germane to our study since
it is also the case that Information Systems, dealing as it does with the interaction of
people and technology, is considered to be of the social sciences rather than the
physical sciences (Hirschheim, 1985). Indeed, some of the difficulties experienced in
IS research, such as the apparent inconsistency of results, may be attributed to the
inappropriateness of the positivist paradigm for the domain. Likewise, some variables
or constituent parts of reality might previously have been thought un-measurable
under the positivist paradigm - and hence went unresearched (Galliers, 1991).

INTERPRETIVISM

Interpretivists contend that only through the subjective interpretation of and


intervention in reality can that reality be fully understood. The study of phenomena
in their natural environment is key to the interpretivist philosophy, together with the
acknowledgement that scientists cannot avoid affecting those phenomena they study.
They admit that there may be many interpretations of reality but maintain that these
interpretations are themselves a part of the scientific knowledge they are pursuing.
Interpretivism has a tradition that is no less glorious than that of positivism,
nor is it shorter.

PHILOSOPHICAL ASSUMPTIONS AND PARADIGMS

In management or organizational research the term paradigm encompasses three


levels: the philosophical level, concerning basic beliefs about the world we live in; the social level,
where guidelines exist as to how a researcher should conduct his or her endeavors; and,
lastly, the technical level, concerning the methods and techniques ideally adopted when
conducting research. At a philosophical level, organizational theories contrast in five
sets of assumptions (Burrell and Morgan, 1979) along a subjectivist/objectivist
dimension; ontological, epistemological, axiological, methodological assumptions and
assumptions about human nature. These assumptions trickle through to lower levels
and influence the research process.

1. What is the nature of reality? (Ontology)
2. What is the nature of knowledge? (Epistemology)
3. What is the role of values? (Axiological assumption)
4. How is knowledge obtained? (Methodology)
5. What is the language of research? (Rhetorical assumption)

ONTOLOGY

Ontology is the term referring to the shared understanding of some domains of


interest, which is often conceived as a set of classes (concepts), relations, functions,
axioms and instances (Gruber, 1993). Now in the knowledge representation
community, the commonly used or highly cited ontology definition is from Gruber
(1993): “an ontology is a formal, explicit specification of a shared conceptualization.
‘Conceptualization’ refers to an abstract model of phenomena in the world by having
identified the relevant concepts of those phenomena. That Ontology is a "shared
conceptualization" states that Ontologies aim to represent consensual knowledge
intended for the use of a group. Ideally the Ontology captures knowledge
independently of its use and in a way that can be shared universally, but practically
different tasks and uses call for different representations of the knowledge in
an ontology. Burrell and Morgan (1979) believe that the ontological assumption is
concerned with the very essence of the phenomena under investigation. They believe
that the basic ontological questions are whether the reality to be investigated is
external to the individual-imposing itself on individual consciousness from without or
the product of individual consciousness, whether reality is of an ‘objective’ nature or
the product of individual cognition, whether reality is a given ‘out there; in the world,
or the product of one’s mind. Ontological analysis clarifies the structure of knowledge.
Given a domain, its ontology forms the heart of any system of knowledge
representation for that domain. Without ontologies, or the conceptualizations that

underlie knowledge, there cannot be a vocabulary for representing knowledge. Thus,
the first step in devising an effective knowledge representation system, and
vocabulary, is to perform an effective ontological analysis of the field, or domain.
Weak analyses lead to incoherent knowledge bases (Chandrasekaran and Josephson,
1999).

Central questions:

• Question: What is the nature of reality?


• Characteristics: Reality is subjective and multiple, as seen by participants
in the study
• Implications for Practice: Researcher uses quotes and themes in the words
of participants and provides evidence of different perspectives (Creswell,
2005)

EPISTEMOLOGY

Closely coupled with ontology and its consideration of what constitutes reality,
epistemology considers views about the most appropriate ways of enquiring into the
nature of the world (Easterby-Smith, Thorpe and Jackson, 2008) and ‘what is
knowledge and what are the sources and limits of knowledge’ (Eriksson and
Kovalainen, 2008). Questions of epistemology begin to consider the research method,
and Eriksson and Kovalainen go on to discuss how epistemology defines how
knowledge can be produced and argued for. Blaikie, (1993) describes epistemology as
‘the theory or science of the method or grounds of knowledge’ expanding this into a
set of claims or assumptions about the ways in which it is possible to gain knowledge
of reality, how what exists may be known, what can be known, and what criteria must
be satisfied in order to be described as knowledge. Chia (2002) describes
epistemology as ‘how and what it is possible to know’ and the need to reflect on
methods and standards through which reliable and verifiable knowledge is produced
and Hatch and Cunliffe, (2006) summarize epistemology as ‘knowing how you can
know’ and expand this by asking how knowledge is generated, what criteria
discriminate good knowledge from bad knowledge, and how should reality be
represented or described.

Epistemology is the branch of philosophy that deals with the nature of knowledge,
that is, with questions of what we know and how we know it. A Researcher’s
Epistemology is a result of his/her Ontological Position and refers to: his/her
Assumptions about the Best Ways of Inquiring into the Nature of the World and
Establishing ‘Truth’. Epistemology distinguishes true knowledge from false
knowledge. All philosophical disciplines such as ethics, metaphysics, etc. depend on
knowledge. Therefore, any philosophical inquiry has to address epistemological issues

as well. Epistemology is the study of the nature, source, limits, and validity of
knowledge. It is interested in developing criteria for evaluating claims people make
that they “know” something.

Central questions:

• What is knowledge?
• What is the difference between knowledge and opinion or belief?
• If you know something, does that mean that you are certain about it?
• Is knowledge really possible? (Creswell, 2005)

Hussey and Hussey (1997) argue that Epistemology is concerned with the study of
knowledge and what we accept as being valid knowledge. This involves the
examination of the relationship between the researcher and that which is being
researched.

HUMAN NATURE

Human nature is the third assumption in the research process according to Burrell and
Morgan (1979). Hussey and Hussey (1997) conceptualized this stage into two parts:
the axiological and the rhetorical. According to Hussey and Hussey (1997), the axiological
assumption is concerned with values, while the rhetorical assumption is concerned
with the language of research. The human nature debate, according to Burrell and
Morgan (1979), revolves around the issue of what model of man is reflected in any
given social scientific theory: man as value free and unbiased, or man as value-laden
and biased.

AXIOLOGY

The basic anatomy of an inquiry paradigm can be described in terms of three fundamental kinds of
question: the ontological question about the nature of reality, the epistemological
question about the nature of knowing, and the methodological question about
how to know and what sorts of injunctions to follow. The question of value is not included
as part of the definition of an inquiry paradigm (Heron and Reason, 1997). It is
important to consider how the individual values of the researcher may come into play at each
stage of the research process. Our values are the guiding reason for our actions.
Further, articulating one's values as a basis for making judgments about the research
topic and research approach is a demonstration of axiological skill. For example,
using surveys rather than interviews would suggest that the rich personal interaction
is not something that is valued as highly as the need to gather a large data set. It is
argued that through understanding and being aware of your own values and
transparently recognizing and articulating these as part of the research process will

mean that your research is strengthened, in terms of transparency, the opportunity
to minimize bias or in defending your choices, and the creation of a personal value
statement is recommended (Saunders, Lewis and Thornhill, 2007).

Central Question:

• Question: What is the role of values?


• Characteristics: Researchers acknowledge that research is value laden
and that biases are present
• Implications for Practice: Researchers openly discuss the values that
shape the narrative and include their own interpretation in conjunction with
the interpretation of participants

METHODOLOGICAL ASSUMPTION

The methodological assumption is concerned with the process of the research. Hussey
and Hussey (1997) argue that the choice of methodology to adopt is largely
determined by the choice of one paradigm. Cooper and Schindler (2002) defined
methodology as the overall approach to the research process. Assumptions about
human nature are deterministic or voluntarist. One views individuals as products of
their environment; the other believes individuals create their own environment
(Putman, 1983). Finally there are assumptions about the process of research, the
methodology. Nomothetic methodology focuses on an examination of regularities
and relationships to universal laws, while ideographic approaches center on the
reasons why individuals create and interpret their world in a particular way (Putman,
1983). The social world can only be understood by obtaining first-hand knowledge of
the subject under investigation.

Central Question:

• Question: What is the process of research?


• Characteristics: Researchers use inductive logic, study the topic within
its context, and use an emerging design
• Implications for Practice: Researchers work with particulars (details) before
generalizations, describe the context of the study in detail, and continually
revise questions from experiences in the field (Creswell, 2005)

RHETORIC

Rhetoric is the art of speaking or writing effectively. It refers generally to how


language is employed, but it has come to mean the insincere or even manipulative use
of the words. Technically it includes the art of persuasion and decoration or

elaboration in literature (Frye, 1957). Rhetoric denotes a broad category of linguistic
techniques people use when their primary objective is to influence beliefs and
attitudes and behaviors.

Central Question:

• Question: What is the language of research?
• Characteristics: Researchers tend to write in a literary, informal style
using the personal voice
• Implications for Practice: Researchers use an engaging narrative style and
qualitative terms with limited definitions (Creswell, 2005)

QUALITATIVE AND QUANTITATIVE PARADIGM


QUALITATIVE RESEARCH

Denzin and Lincoln (1994) define qualitative research as follows: Qualitative research is multi-
method in focus, involving an interpretive, naturalistic approach to its subject matter.
This means that qualitative researchers study things in their natural settings,
attempting to make sense of or interpret phenomena in terms of the meanings people
bring to them. Qualitative research involves the studied use and collection of a variety
of empirical materials - case study, personal experience, introspection, life story,
interview, observational, historical, interactional, and visual texts - that describe
routine and problematic moments and meanings in individuals' lives. Creswell (1994)
defines it as: Qualitative research is an inquiry process of understanding based on
distinct methodological traditions of inquiry that explore a social or human problem.
The research builds a complex, holistic picture, analyzes words, reports detailed views
of informants, and conducts the study in a natural setting.

The qualitative research methodology starts from the philosophical assumptions that
researchers bring with them their own world views and beliefs (Creswell, 2007). These
assumptions include ontological beliefs, epistemological beliefs, axiological beliefs,
rhetorical beliefs, and methodological beliefs (Creswell, 2007). Therefore, qualitative
researchers acknowledge that their views invariably influence their research and are
the basis for the research results. As such, qualitative researchers rely on their beliefs
and a variety of understandings in describing, interpreting, and explaining the
phenomena of interest (Creswell, 2007; Maxwell, 1992).

Five general designs in qualitative research

Qualitative research consists of five general designs: narrative, phenomenology,


grounded theory, ethnography, and case study. Qualitative researchers use narrative

design when the study has a specific contextual focus, such as classrooms and
students or stories about organizations, when the subject is biographical or a life
history, or an oral history of personal reflections from one or more individuals. The
qualitative researchers use phenomenological research when the study is about the
life experiences of a concept or phenomenon experienced by one or more individuals.
Researchers use grounded theory qualitative research to generate or discover a
theory based on the study. The qualitative researchers use ethnographical research
when the subject involves an entire cultural group. Qualitative researchers use the
last design, case study research, to study one or more cases within a bounded setting or
context (Creswell, 2007).

Validity in qualitative research

In qualitative research, just as in quantitative research, the validity of the study is
the most important consideration for the interpretivist in conducting research
(Maxwell, 1992). As Maxwell posited, validity is a key issue in the debates over using
qualitative research over quantitative research, where quantitative proponents
criticized the lack of standards for assuring validity, such as lack of explicit controls for
validity threats, quantitative measurement, and formal testing of the hypotheses.
However, proponents of qualitative research argue that they have procedures for
attaining validity, and that they are simply different from the quantitative approaches.

QUANTITATIVE RESEARCH

The positivist researcher approaches or views the world as objective and seeks
measurable relationships among variables to test and verify their study hypotheses.
Their quantitative research process consists of five steps that the researcher(s) perform
(Swanson and Holton, 2005).

Categorization of quantitative research

Researchers have generally categorized quantitative research as experimental, quasi-


experimental, correlational, or descriptive (Swanson and Holton, 2005). In
experimental research, the researchers design the specific conditions to test their
theories or propositions, controlling the experiment and collecting their data to
isolate the relationships between their defined independent variables and dependent
variables. These causal relationships are the systematic conjunction of two elements,
which logically follow from one to the other (Lin, 1998). The non-experimental
types of research - quasi-experimental, correlational, and descriptive - all require the
researchers to study phenomena without the ability to control or manipulate
variables. These methods require the researcher(s) to collect information from
existing data and determine relationships without inferring causality, or to develop
additional techniques for gathering the information, such as surveys for descriptive
research. Generalizability is applicable in any research methodology, but it is a real
advantage in quantitative research (Swanson and Holton, 2005).

Variable differentiation

In order to formulate the research questions, the researcher has to identify the variables that need to be analyzed. The researcher(s) must determine the dependent and independent variables and the quantity and quality of the data (variable) sources (Miles and Huberman, 1994; Swanson and Holton, 2005). The types of variable measures include categorical, continuous, and ordinal. In addition, the researcher(s) must understand the concepts of validity and reliability in their determination of measures, as failures to address validity and reliability often undermine or invalidate research studies (Swanson and Holton, 2005).
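
As a minimal sketch of how these measurement levels might be declared in practice, the fragment below uses Python with the pandas library and an entirely hypothetical survey dataset; the variable names and values are invented for illustration only.

    import pandas as pd

    # Hypothetical survey data: one row per respondent
    df = pd.DataFrame({
        "gender": ["male", "female", "female", "male"],       # categorical (nominal)
        "satisfaction": ["low", "high", "medium", "high"],    # ordinal
        "monthly_income": [2100.0, 3400.0, 2800.0, 5150.0],   # continuous
    })

    # Declare the measurement level explicitly so that later analyses
    # treat each variable appropriately
    df["gender"] = df["gender"].astype("category")
    df["satisfaction"] = pd.Categorical(
        df["satisfaction"], categories=["low", "medium", "high"], ordered=True
    )

    print(df.dtypes)                 # categorical, ordered categorical, and float64
    print(df["satisfaction"].min())  # ordered categories support order comparisons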

Application of Statistical tools and interpretation

In the statistical analyses, the researcher(s) determine, based on the overall research design, how the variables describe, compare, associate with, and predict one another, and how they contribute to explaining the analysis results and answering the propositions of the study (Cooper and Schindler, 2008; Swanson and Holton, 2005). The researchers interpret the results of the analysis based on the statistical significance determined (Swanson and Holton, 2005). Through this step-by-step process, quantitative research arrives at the required validity and reliability of the results.
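
As an illustration of the kind of significance testing and interpretation described above, the short Python sketch below compares the means of two hypothetical groups with an independent-samples t-test (using the scipy library) and reads the p-value against a conventional 0.05 threshold; the scores and the threshold are assumptions made purely for the example.

    from scipy import stats

    # Hypothetical data: satisfaction scores for two training groups
    group_a = [72, 65, 80, 74, 69, 77]
    group_b = [61, 58, 70, 66, 63, 59]

    # Compare group means; the p-value is judged against the chosen alpha level
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Reject the null hypothesis: the group means differ significantly.")
    else:
        print("Fail to reject the null hypothesis at the 0.05 level.")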

DIFFERENCE BETWEEN QUALITATIVE AND QUANTITATIVE RESEARCH

QUALITATIVE
• Contextualization
• Interpretation
• Understanding actors' perspectives
• The researcher interacts with that being researched.
• The researcher is value laden and biased.
• Theory can be causal or non-causal and is often inductive.
• Meaning is captured and discovered once the researcher becomes involved.
• Concepts are in the form of themes, motifs, generalizations and taxonomies.
• Data are in the form of words from documents, observations, and transcripts.
• There are generally few cases or judgments.
• Research procedures are particular and replication is rare.
• The analysis proceeds by extracting themes or generalizations from evidence and organizing data to present a coherent, consistent picture.
• The aim is a complete, detailed description.
• The researcher may only know roughly in advance what he/she is looking for.
• Recommended during earlier phases of research projects.
• The design emerges as the study unfolds.
• Subjective - the individual's interpretation of events is important, e.g., uses participant observation, in-depth interviews etc.
• Qualitative data is more "rich", time consuming, and less able to be generalized.
• The researcher tends to become subjectively immersed in the subject matter.

QUANTITATIVE
• Generalizability
• Prediction
• Causal explanations
• The researcher is independent.
• The researcher is value free and unbiased.
• Theory is largely causal and deductive.
• The hypotheses that the researcher begins with are tested.
• Concepts are in the form of distinct variables.
• Data are in the form of numbers from precise measurement.
• There are usually many cases or judgments.
• Procedures are standard and replication is assumed.
• Analysis proceeds by using statistics, tables, or charts and discussing how what they show relates to the hypotheses.
• The aim is to classify features, count them, and construct statistical models in an attempt to explain what is observed.
• The researcher knows clearly in advance what he/she is looking for.
• Recommended during later phases of research projects.
• All aspects of the study are carefully designed before data is collected.
• Objective - seeks precise measurement and analysis of target concepts, e.g., uses surveys, questionnaires etc.
• Quantitative data is more efficient and able to test hypotheses, but may miss contextual detail.
• The researcher tends to remain objectively separated from the subject matter.

Some of the differences between these two paradigms are briefly explained by Campbell and Kerlinger. They can be detailed as follows:

1. Qualitative research is a method of inquiry applied in many different
academic disciplines, traditionally in the social sciences, but also in market
research and further contexts.
2. Qualitative researchers aim to gather an in-depth understanding of human
behavior and the reasons that govern such behavior.
3. Qualitative method investigates the why and how of decision making, not
just what, where, when. Hence, smaller but focused samples are most
often needed, rather than large samples.
4. “All research ultimately has a qualitative grounding“ - Campbell
5. In the social sciences, quantitative research refers to the systematic
empirical investigation of quantitative properties and phenomena and
their relationships.
6. The objective of quantitative research is to develop and employ
mathematical models, theories and/or hypotheses pertaining to
phenomena.
7. The process of measurement is central to quantitative research because it
provides the fundamental connection between empirical observation and
mathematical expression of quantitative relationships.

WHICH METHODOLOGY IS RIGHT FOR YOU?

Over the years there has been a large amount of discussion among researchers about which methodology is suitable for conducting their research: whether to go with qualitative inquiry or with quantitative inquiry. The identification of the methodology rests largely with the topic and the phenomena the researcher has chosen to study. In summary, for every strength there appears to be a corresponding weakness in both quantitative and qualitative research. It is this dilemma that has fuelled the debate over which approach is superior (Duffy, 1986), and which method should therefore be adopted for organizational research. Choosing just one methodology narrows a researcher’s perspective, and deprives him or her of the benefits of building on the strengths inherent in a variety of research methodologies (Duffy, 1986). The debate could be seen as advantageous to organizational studies. Researchers are being forced to consider the controversial issues of both methodologies, and this requires them to have in-depth knowledge of epistemology and methodology and not to be restricted, as in the past, to the tradition of the sciences (Duffy, 1985). Preference for a specific research strategy is not just a technical choice; it is an ethical, moral, ideological and political activity (Moccia, 1988). All methodologies have their specific strengths and weaknesses; neither is better than the other – they are just different.

CONCLUSION

This chapter of the book explains the scientific paradigms of research. At one extreme, many researchers follow the positivist way of doing research; on the other hand, several follow the interpretivist method. However, keep in mind that the choice of a quantitative or qualitative method depends on the topic a researcher takes up for research and on whether it is attached to explainable or quantifiable phenomena. Blind adoption of quantitative or qualitative research leads to many obstacles, especially during tool development, administration and analysis of data. Accordingly, a researcher should carefully consider his/her research topic and research method.

DISCUSSION QUESTIONS

1. What do you mean by paradigms of research methods?
2. How do you define the value of the term paradigm?
3. How do you explain the philosophy of social science based on the research
paradigm?
4. How far do quantitative and qualitative research vary in their approaches?
5. Differentiate positivism and interpretivism.
6. How do you explain epistemology?
7. “There are certain research topics where we cannot quantify and explore the
phenomenon.” Discuss the statement.
8. How do you evaluate the significance of mixed methodology based on the
qualitative - quantitative paradigm?

MODULE: 4

QUALITATIVE RESEARCH

Learning outcome

By the end of this chapter, you will be able to:

1. Link your research topic to an appropriate methodology.


2. Understand the advantages and disadvantages of both qualitative and
quantitative methods.
3. Recognize the value of (sometimes) using quantitative data in
qualitative research.
4. Understand the diverse approaches underlying contemporary
qualitative research.

INTRODUCTION

Qualitative research methods are gaining recognition outside the traditional academic social sciences, especially in international development research. The term qualitative implies an emphasis on the examination of processes and meanings that are not measured in terms of quantity, amount, or frequency (Labuschagne, 2003). The great strength of qualitative research is that it attempts to depict the fullness of experience in a meaningful and comprehensive way. Qualitative research, then, is most appropriate for those projects where phenomena remain unexplained, where the nature of the research is uncommon or broad, where previous theories do not exist or are incomplete (Patton, 2002), and where the goal is deep narrative understanding or theory development (Hammersley and Atkinson, 1983).

Qualitative research uses a naturalistic approach that seeks to understand phenomena in context-specific settings, such as a "real world setting [where] the researcher does not attempt to manipulate the phenomenon of interest" (Patton, 2001). Qualitative research, broadly defined, means "any kind of research that produces findings not arrived at by means of statistical procedures or other means of quantification" (Strauss and Corbin, 1990); it is, instead, the kind of research that produces findings derived from real-world settings where the "phenomenon of interest unfold naturally" (Patton, 2001). Unlike quantitative researchers, who seek causal determination, prediction, and generalization of findings, qualitative researchers seek instead illumination, understanding, and extrapolation to similar situations (Hoepfl, 1997).

Typically, qualitative methods produce a lot of detailed data about a small number of cases, and provide a depth of detail through direct quotation, precise description of situations and close observation. The main concern of qualitative research is developing explanations of social phenomena, and thereby understanding incidents, situations, persons and issues as they are. Qualitative research seeks to understand research issues from the perspectives of the local population by obtaining deep-rooted information and integrating culturally specific information about the values, opinions, behaviors, and social contexts of particular populations. Qualitative research is concerned with finding the answers to questions which begin with why? how? and in what way?, and is less concerned with questions about how much? how many? how often? and to what extent?

QUALITATIVE RESEARCH QUESTIONS

Qualitative Research is concerned with the social aspects of our world and seeks to
answer questions about:

• How are communities affected by opinions and suggestions?
• Why do people behave in certain ways?
• How do culture and belief systems influence the attitudes of people?
• How are opinions and attitudes formed?
• Why do issues related to companies vary from one another?
• How do strategies influence organizational behavior?
• How do religions influence a community's belief system?
• Why are there differences among social groups?

QUALITATIVE RESEARCH

Qualitative research is a term with varying meanings and understandings in social and managerial research.

An umbrella term covering an array of interpretive techniques which seek to describe, decode, translate, and otherwise come to terms with the meaning, not the frequency, of certain naturally occurring phenomena in the social world (Van Maanen, 1979).

Qualitative research is “the interpretive study of a specified issue or problem in which the researcher is central to the sense that is made”. “The goal of qualitative research is the development of concepts which help us to understand social phenomena in natural (rather than experimental) settings, giving due emphasis to the meanings, experiences, and views of all the participants” (Banister et al., 1994).

Shank (2002) defines qualitative research as “a form of systematic empirical inquiry into meaning”. By systematic he means “planned, ordered and public”, following rules agreed upon by members of the qualitative research community. By empirical, he means that this type of inquiry is grounded in the world of experience. Inquiry into meaning means that researchers try to understand how others make sense of their experience. Denzin and Lincoln (2000) claim that qualitative research involves an interpretive and naturalistic approach: “This means that qualitative researchers study things in their natural settings, attempting to make sense of, or to interpret, phenomena in terms of the meanings people bring to them”. Borg and Gall (1989) suggest that the term is often used interchangeably with terms such as naturalistic, ethnographic, subjective, and post-positivistic. Goetz and LeCompte (1984) choose to
use the term ethnographic as an overall rubric for research using qualitative methods
and for ethnographies. Qualitative research methods typically include interviews and
observations but may also include case studies, surveys, and historical and document
analyses. Case study and survey research are also often considered methods on their
own. Survey research and historical and document analysis are covered in other
chapters in this book; therefore they are not extensively discussed in this chapter.

CHARACTERISTICS OF QUALITATIVE RESEARCH

The characteristic features of qualitative research include;

1. Qualitative research is more about giving answers to “What is going on there
in the research situation?” It focuses on people’s perceptions and
experiences. The data are what people say, believe, and do, the feelings
they articulate, and the descriptions they give.
2. Qualitative research is concerned with the opinions, experiences and
feelings of individuals producing subjective data. Qualitative Research is
collecting, analyzing, and interpreting data by observing what people do
and say. Qualitative research refers to the meanings, concepts,
definitions, characteristics, metaphors, symbols, and descriptions of
things.
3. Understanding of a situation is gained through a holistic perspective.
Quantitative research depends on the ability to identify a set of variables.
4. Qualitative research describes social phenomena as they occur naturally.
No attempt is made to manipulate the situation under study as is the
case with experimental quantitative research.

5. Qualitative research is primarily interpretivist rather than positivist. This approach
doesn’t seek to verify some “truth”. This approach incorporates manifold
truths, realities, meanings, etc. to build descriptions.
6. Qualitative designs are “emergent” rather than fixed. Unlike survey and
experimental research, the design steps in qualitative research are not
fixed in advance; the study can unfold in multiple ways. The design is
open and flexible to change.
7. Qualitative research tends to be general rather than too specific.
Hypothesis development is not always a priority: it often does not
develop hypotheses a priori or test hypotheses or theory, although some
studies develop preliminary conceptual hypotheses and gather evidence
to support or refute them.
8. The researcher cannot anticipate all issues/problems a priori and has to
deal with them “in the field” (unlike surveys and experiments, where they
are dealt with ahead of time or afterwards, statistically).
9. Different sampling techniques are used. In quantitative research,
sampling seeks to demonstrate representativeness of findings through
random selection of subjects.
10. Qualitative research is subjective and uses very different methods of
collecting information, including individual, in-depth interviews and
focus groups.
11. Qualitative sampling techniques are concerned with seeking information
from specific groups and subgroups in the population.
12. The researcher is the main data collection instrument. Researcher’s
beliefs, values, predispositions influence the entire process. So there is a
potential for bias here, and replication of findings is more difficult than
in quantitative research.
13. The intensive and time-consuming nature of data collection necessitates
the use of small samples.
14. Data are used to develop concepts and theories that help us to
understand the social world. This is an inductive approach to the
development of theory.
15. Qualitative data are collected through direct encounters with individuals,
through one-to-one interviews or group interviews or by observation.
Data collection is time consuming.
16. The goal is to produce an understanding/explanation that is true for all
the data, without error. This differs from experiments and surveys, and the
approach is inductive rather than deductive.
17. The criteria used to assess reliability and validity differ from those used
in quantitative research.
18. Not a standardized data collection.
19. The nature of this type of research is exploratory and open-ended.

20. The results of qualitative research are unpredictable.

LIST OF QUALITATIVE DATA COLLECTION PROCEDURES

Many data collection procedures are included in the qualitative research process. These can be detailed as follows:

• Gather observational notes by observing as a participant;


• Gather observational notes by observing as an observer;
• Conduct unstructured, open-ended interview and take interview
notes;
• Conduct unstructured interview, audiotape the interview, and
transcribe the interview;
• Keep a journal during the research study;
• Have a participant keep a journal during the research study;
• Optically scan newspaper accounts;
• Collect personal letters from participants;
• Analyze public documents (e.g., official memos, minutes, records,
archival materials);
• Examine autobiographies and biographies;
• Have a participant write his or her autobiography;
• Write your own (the researcher’s) autobiography;
• Have participants take photographs or videotapes;
• Examine physical trace evidence (e.g., footprints);
• Videotape a social situation or an individual/group;
• Examine photographs or videotapes;
• Collect sounds (e.g., musical sounds, crowd noises);
• Collect e-mail or electronic messages;
• Examine possessions or objects to elicit views during an interview; and
• Collect smells, tastes, or sensations through touch (Creswell, 2003).

APPROPRIATENESS OF QUALITATIVE RESEARCH

A researcher considers going ahead with a qualitative approach in his/her study for the following reasons:

• When variables cannot be quantified;


• When variables are best understood in their natural settings;
• When variables are studied over time;
• When studying roles, processes, and groups; and
• When the paramount objective is “understanding”.

TYPOLOGIES OF QUALITATIVE RESEARCH

Qualitative observational research consists of over 30 different approaches which often overlap and whose distinctions are subtle. The type of approach used depends on the research question and/or the discipline the researcher belongs to. For instance, anthropologists commonly employ ethno-methodology and ethnography, while sociologists often use symbolic interaction and philosophers frequently use concept analysis (Marshall and Rossman, 1995).

NARRATION

There are many ways to analyze qualitative data. Narrative analysis is one approach
to analyzing and interpreting qualitative data. Narrative is defined by the Concise
Oxford English Dictionary as ‘a spoken or written account of connected events; a story’
(Soanes and Stevenson, 2004). Traditionally, a narrative requires a plot, as well as
some coherence. It has some sort of ordered sequence, often in linear form, with a
beginning, middle, and end. Narratives also usually have a theme and a main point, or
a moral, to the story.

Approaches to narrative analysis

Various approaches of making narrative analysis in qualitative research include:

• Writing versus reading narrative;


• Top-down versus bottom-up;
• Realist, constructivist and critical;
• Genres (adventure story, fairy tale, romance, tragedy ...);
• Voice (authoritative, supportive, joint voice); and
• Postmodern narrative and ante-narrative approaches
(deconstruction, grand narrative, microstoria ...).

How to use narrative analysis?

The researcher should evaluate certain aspects before he/she carries the narrative research method forward.

• If the researchers are planning to collect narratives during interviews
(and hence narratives will be your main source of data), then you need
to work at inviting stories from your informants.
• If the researchers are planning to write a narrative of an organization,
then the typical form in management and organization studies is to
write it up as a case study. Case studies usually use chronology as
the main organizing device (Czarniawska, 1998).
• The Labov/Cortazzi model suggests six elements to every narrative:
abstract, orientation, complication, evaluation, result, and conclusion
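
As an illustration of how the model can serve as a simple coding frame, the Python sketch below stores one segment of an entirely invented interview story under each of the six elements, so the analyst can check that a collected narrative is complete; the quoted excerpts are hypothetical.

    # A hypothetical interview narrative segmented into the six Labov/Cortazzi elements
    narrative = {
        "abstract":     "You want to know how I ended up running my own firm?",
        "orientation":  "In 2008 I was a production engineer at a textile plant.",
        "complication": "The plant closed overnight and I lost my job.",
        "evaluation":   "That was the most frightening period of my life.",
        "result":       "With a small loan I started a workshop of my own.",
        "conclusion":   "Looking back, losing the job pushed me to become independent.",
    }

    # Review the segmented story element by element
    for element, excerpt in narrative.items():
        print(f"{element:>12}: {excerpt}")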

Advantages

Narrative analysis is very useful in the creation or critique of organizational narratives. It is an in-depth approach to analyzing qualitative data. Narrative analysis is potentially an excellent way in which we can enter into a dialogue with managers and business people in organizations (Czarniawska, 1998). It is one way of making our research more relevant to practice.

Disadvantage

One disadvantage is that it can be very time consuming to collect life histories of
people, and even more time consuming to analyze them.

ETHNOGRAPHY

Ethnography is a social science research method. It relies heavily on up-close, personal experience and possible participation, not just observation, by researchers trained in the art of ethnography. These ethnographers often work in multidisciplinary teams. The ethnographic focal point may include intensive language and culture learning, intensive study of a single field or domain, and a blend of historical, observational, and interview methods. Typical ethnographic research employs three kinds of data collection: interviews, observation, and documents. This in turn produces three kinds of data: quotations, descriptions, and excerpts of documents, resulting in one product, narrative description. This narrative often includes charts, diagrams and additional artifacts that help to tell “the story” (Hammersley, 1990). Ethnographic research is one of the most in-depth research methods possible. An ethnographer sees what people are doing as well as what they say they are doing. It provides researchers with rich insights into the human, social and cultural aspects of organizations.

Why do researchers adopt ethnography?

Researchers adopt ethnography because of the following factors:

• The main purpose of ethnography is to obtain a deep understanding of
people and their culture;
• One of its distinguishing features is fieldwork;
• Ethnographers immerse themselves in the life of the people they study
(Lewis, 1985) and seek to place the phenomena studied in their social and
cultural context; and
• In ethnographic research, the context is what defines the situation and
makes it what it is.

Features of ethnography:

In terms of method, generally speaking, the term "ethnography" refers to social research that has most of the following features (Hammersley, 1990).

(a) People's behavior is studied in everyday contexts, rather than under experimental
conditions created by the researcher.

(b) Data are gathered from a range of sources, but observation and/or relatively
informal conversations are usually the main ones.

(c) The approach to data collection is "unstructured" in the sense that it does not involve following through a detailed plan set up at the beginning; nor are the categories used for interpreting what people say and do pre-given or fixed. This does not mean that the research is unsystematic; simply that initially the data are collected in as raw a form, and on as wide a front, as feasible.

(d) The focus is usually a single setting or group, of relatively small scale. In life history
research the focus may even be a single individual.

(e) The analysis of the data involves interpretation of the meanings and functions of
human actions and mainly takes the form of verbal descriptions and explanations,
with quantification and statistical analysis playing a subordinate role at most.

Advantages:

• One of the most valuable aspects is the depth of understanding;


• Can challenge ‘taken for granted’ assumptions;
• Account for the complexity of group behaviors;
• Reveal interrelationships among multifaceted dimensions of
group interactions; and
• Provide context for behaviors.

Disadvantages:

• Takes a long time;

• Researcher bias can affect the design of a study;
• Researcher bias can enter into data collection;
• Does not have much breadth;
• Some subjects may be previously influenced and affect the
outcome of the study;
• Study group may not be representative of the larger
population;
• Analysis of observations can be biased; and
• It can be difficult for some to write up the findings in a journal
article.

PHENOMENOLOGY

Phenomenology is a qualitative research method originally developed by the philosopher Edmund Husserl. Husserl broadened the concepts and methods of modern science to include the study of consciousness, profoundly influencing philosophy, the other humanities, and the social sciences throughout the 20th century. This approach, most often used by psychologists, seeks to explain the "structure and essence of the experiences" of a group of people (Banning, 1995).

Definition

Phenomenology is a current in philosophy that takes intuitive experience of phenomena (what presents itself to us in conscious experience) as its starting point and tries to extract the essential features of experiences and the essence of what we experience (Husserl, Merleau-Ponty, and Heidegger).

Characteristic feature

• A philosophy that puts experience above conceptualizations about it.


• The branch of existentialism which deals with phenomena with no attempt
at explanation.
• A philosophical doctrine in which considerations of objective reality are not
taken into account.
• A philosophy or method of inquiry based on the premise that reality consists
of objects and events as they are perceived or understood in human
consciousness and not of anything independent of human consciousness.
• The focus of phenomenology inquiry is what people experience in regard to
some phenomenon or other and how they interpret those experiences

Phenomenology literally means the study of phenomena. It is a way of describing something that exists as part of the world in which we live. Phenomena may be events, situations, experiences or concepts. We are surrounded by many phenomena, which we are aware of but do not fully understand. Our lack of understanding of these
phenomena may exist because the phenomenon has not been overtly described and
explained or our understanding of the impact it makes may be unclear.
Phenomenological research begins with the acknowledgement that there is a gap in
our understanding and that clarification or illumination will be of benefit.
Phenomenological research will not necessarily provide definitive explanations but it
does raise awareness and increases insight (Hancock, 2002).

What does phenomenology aim at?

Phenomenology aims for truth, logic, and rigorously self-critical thought. All forms of
knowledge including the sciences are regarded as being ultimately grounded on living
experience in relations of orderly, regular structures of consciousness.
Phenomenology starts with what appears: primarily non-verbal awareness, and
studies the overall relations of meaning that appears through sensation to verbalized
thought, which may also include the awareness of others, history, teleology, ethics
and values. In general, it attempts to ground any academic discourse in its definitive
experiences (Husserl, 1970, 1981).

What is phenomenology concerned with?

A phenomenologist is concerned with understanding certain group behaviors from that group's point of view. Phenomenological inquiry requires that researchers go through a series of steps in which they try to eliminate their own assumptions and biases, examine the phenomenon without presuppositions, and describe the "deep structure" of the phenomenon based on internal themes that are discovered (Marshall and Rossman, 1995). Phenomenology greatly overlaps with ethnography but, as Bruyn (1970) points out, some phenomenologists assert that they "study symbolic meanings as they constitute themselves in human consciousness".

Phenomenology investigates the ground and constitution of meaning. It involves an intuitive and reflective scrutiny of the sense-giving acts of consciousness prior to their
conceptual elaboration, and a description of phenomena in the various modes in
which they are present to consciousness. The complementary relationship of
consciousness and its objects implies that things are as they appear to us: being and
appearing coincide. Phenomenology argues against the view that there are hidden
‘things-in-themselves’ which lie beyond phenomena; it attempts to transcend the
opposition between the idealist reduction of the world to the knowledge we have of it and the realist postulate that the external world exists independently of the activity
of the mind. The emphasis is upon the ‘intentionality’ of consciousness: i.e., the fact
that consciousness is always consciousness of something, is directed towards its
objects in acts not only of perception and cognition, identification and synthesis, but
also of willing, desiring, imagining, etc. (Spiegelberg, (1982).

Example:

1. What is the essential nature of a lived experience?
2. Cognitive Representations of AIDS: cognitive representations of illness
determine behavior.
3. How persons living with AIDS image their disease might be key to
understanding medication adherence and other health behaviors. The
authors’ purpose was to describe AIDS patients’ cognitive representations
of their illness (Anderson and Spencer, in Creswell).
• What is your experience with AIDS?
• Do you have a mental image of HIV/AIDS?
• How would you describe HIV/AIDS?
• What feelings come to mind?
• What meaning does it have in your life?
• Draw your image of AIDS and tell me what it means?

Identified 11 Themes

• Inescapable death;
• Dreaded bodily destruction;
• Devouring life;
• Hoping for the right drug;
• Caring for oneself;
• Just a disease;
• Holding a wild cat;
• Magic of not thinking;
• Accepting AIDS;
• Turning to a higher power;
• Recouping with time.

METHODOLOGY OF PHENOMENOLOGY

A phenomenological study often involves the four steps:

1. Bracketing

2. Intuiting
3. Analyzing
4. Describing

Bracketing

Bracketing is the process of identifying and holding in abeyance any preconceived beliefs and opinions that one may have about the phenomenon that is being
researched. The researcher 'brackets out' (as in mathematics) the world and any
presuppositions that he or she may have in an effort to confront the data in as pure a
form as possible. This is the central component of phenomenological reduction - the
isolation of the pure phenomenon versus what is already known about the
phenomenon.

Intuition

Intuition occurs when the researcher remains open to the meaning attributed to the
phenomenon by those who have experienced it. This process of intuition results in a
common understanding about the phenomenon that is being studied. Intuiting
requires that the researcher creatively varies the data until such an understanding
emerges. Intuiting requires that the researcher becomes totally immersed in the study
and the phenomenon.

Analyzing

Analysis involves such processes as coding (open, axial, and selective), categorizing
and making sense of the essential meanings of the phenomenon. As the researcher
works/lives with the rich descriptive data, then common themes or essences begin to
emerge. This stage of analysis basically involves total immersion for as long as it is
needed in order to ensure both a pure and a thorough description of the
phenomenon.
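
As a minimal illustration of this stage, the Python sketch below groups a handful of hypothetical coded interview segments under their emerging themes (the theme labels are borrowed from the AIDS example earlier in this chapter), so the analyst can re-read each group together and check that every segment still fits its category.

    from collections import defaultdict

    # Hypothetical coded segments: (participant id, theme label assigned during coding)
    coded_segments = [
        ("P01", "inescapable death"),
        ("P01", "hoping for the right drug"),
        ("P02", "inescapable death"),
        ("P03", "caring for oneself"),
        ("P03", "hoping for the right drug"),
    ]

    # Group the segments under each emerging theme so they can be re-read together
    themes = defaultdict(list)
    for participant, theme in coded_segments:
        themes[theme].append(participant)

    for theme, participants in sorted(themes.items()):
        print(f"{theme}: {len(participants)} segment(s) from {sorted(set(participants))}")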

Description

At the descriptive stage, the researcher comes to understand and to define the
phenomenon. The aims of this final step are to communicate and to offer distinct,
critical description in written and verbal form.

Advantages

The advantages of phenomenology method include;

• In-depth understanding of individual phenomena; and

• Rich data from the experiences of individuals.

Disadvantages

• The subjectivity of the information or data leads to difficulties in
establishing the reliability and validity of approaches and information.
• It is hard to identify or to prevent researcher-induced bias.
• It can be difficult to ensure pure bracketing - this can lead to
interference in the interpretation of the data.
• The presentation of results - the highly qualitative nature of the results
can make them difficult to present in a manner that is usable by
practitioners.
• Phenomenology does not produce generalizable data.
• Due to small sample size, we can seldom say that the experiences are
typical.
• On a practical note, it is important to consider the possible difficulties of
participants expressing themselves.
• Intangible factors such as foreign language, age, brain damage, and
embarrassment can influence the process, making it difficult to induce
interest among participants and for participants to articulate their
experiences clearly.

GROUNDED THEORY

Qualitative research comprises a variety of methodological approaches with different disciplinary origins and instruments. Grounded theory is a general research methodology used in building naturalistic theory, with its roots in sociology (Strauss and Corbin, 1994). With these origins, grounded theory emphasizes the
importance of developing an understanding of human behavior through a process of
discovery and induction rather than from the more traditional quantitative research
process of hypothesis testing and deduction. A grounded theory approach provides
nurses with a viable means of generating theory about dominant psychosocial
processes that present within human interactions, indeed, a theory that is grounded
in the realities of everyday clinical practice (Streubert-Speziale and Carpenter, 2003).

Definition

The aim of grounded theory is: ‘to generate or discover a theory’ (Glaser and
Strauss, 1967). Grounded theory may be defined as: ‘the discovery of theory from data systematically obtained from social research’ (Glaser and Strauss
1967).

Characteristic features of grounded theory method

A number of features are shared by all grounded theory studies:

• Simultaneous collection and analysis of data;


• Creation of analytic codes and categories developed from the data and not
by pre-existing conceptualizations (theoretical sensitivity);
• Discovery of basic social processes in the data;
• Inductive construction of abstract categories;
• Theoretical sampling to refine the categories;
• Writing analytical memos as the stage between coding and writing; and
• The integration of categories into a theoretical framework (Charmaz, 1995,
2002).

Grounded theory is a research approach or method that calls for a continual interplay
between data collection and analysis to produce a theory during the research process.
A grounded theory is derived inductively through the systematic collection and
analysis of data pertaining to a phenomenon (Strauss and Corbin, 1990). Data
collection, analysis, and theory stand in reciprocal relationship with one another. An
essential feature of grounded theory research is the continuous cycle of collecting and
analyzing data. The researcher starts analyzing data as soon it is collected and then
moves on to compare the analysis of one set of data with another. As the research
progresses and categories are developed, the researcher uses a form of analysis
known as selective coding. This means that the researcher reviews the collected data
by checking out whether the newly developed categories remain constant when the
data is analyzed specifically for these categories. As the research progresses, the
researcher continues to review the categories as further new data is collected, so as
to ensure that data is not being forced into the categories but rather that the
categories represent the data. This dynamic relationship between data collection and
analysis enables the researcher to check if preliminary findings remain constant when
further data is collected. Taken together, constant comparative analysis and data
collection offer the researcher an opportunity of generating research findings that
represent accurately the phenomena being studied (Elliott and Lazenbatt, 2005).

Assumptions about grounded theory research

Creswell (1998) suggested that the following assumptions about grounded theory research are widely shared:

• The aim of grounded theory research is to generate or discover a theory;

• The researcher has to set aside theoretical ideas to allow a “substantive”
theory to emerge;
• The theory focuses on how individuals interact in relation to the
phenomenon under study;
• The theory asserts a plausible relation between concepts and sets of
concepts;
• Theory is derived from data acquired through fieldwork, interviews,
observations, and documents;
• Data analysis is systematic and begins as soon as data become available;
• Data analysis proceeds through identifying categories and connecting
them;
• Further data collection (or sampling) is based on emerging concepts;
• These concepts are developed through constant comparison with
additional data;
• Data collection can stop when new conceptualizations emerge;
• Data analysis proceeds from “open coding” (identifying categories,
properties, and dimensions) through “axial coding” (examining
conditions, strategies, and consequences) to selective coding around an
emerging story line; and
• The resulting theory can be reported in a narrative framework or as a set
of propositions.

Why grounded theory?

It is a methodology that has been used to generate theory in areas where there is little
already known (Goulding, 1998). Its usefulness is also recognized where there is an
apparent lack of integrated theory in the literature (Goulding, 2002). Grounded
theory “adapts well to capturing the complexities of the context in which the action
unfolds…” (Locke, 2001:95) and emphasizes process. In so doing it assists the
researcher in retaining the link between culture, language, social context and
construct (Gales, 2003). Therefore, grounded theory generates theory that is of direct
interest and relevance for practitioners in that it analyses a substantive topic and aims
at discovering a basic social process (BSP) which has the potential to resolve some of
the main concerns of a particular group (Jones, 2002).

Steps in grounded theory

1. Develop categories
Use data available to develop labeled categories that fit data closely.
2. Saturate categories
Accumulate examples of a given category until it is clear what future
instances would be located in this category.
3. Abstract definitions
Abstract a definition of the category by stating in a general form the criteria
for putting further instances into this category.
4. Use the definitions
Use definitions as a guide to emerging features of importance in further
field work and as a stimulus to theoretical reflection.
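
To make the cycle of comparing new data with emerging categories more concrete, the fragment below is a rough Python sketch, built on entirely hypothetical categories, properties, and an arbitrary threshold, of how a researcher might track when a category stops yielding new properties, one informal indicator that it is approaching saturation.

    SATURATION_THRESHOLD = 5   # illustrative: instances in a row that add nothing new

    categories = {}       # category label -> set of properties observed so far
    unchanged_runs = {}   # category label -> instances since a new property last appeared

    def compare_instance(label, properties):
        """Compare a newly coded instance with its category (constant comparison)."""
        known = categories.setdefault(label, set())
        new_properties = set(properties) - known
        if new_properties:
            known.update(new_properties)
            unchanged_runs[label] = 0
        else:
            unchanged_runs[label] = unchanged_runs.get(label, 0) + 1
        return unchanged_runs[label] >= SATURATION_THRESHOLD   # True once saturated

    # Two hypothetical interview excerpts coded under the same category
    compare_instance("role conflict", {"time pressure", "unclear expectations"})
    print(compare_instance("role conflict", {"time pressure"}))   # False: not yet saturated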

Evaluating Grounded Theory

Strauss and Corbin state that there are four primary requirements for judging a good
grounded theory: 1) It should fit the phenomenon, provided it has been carefully
derived from diverse data and is adherent to the common reality of the area; 2) It
should provide understanding, and be understandable; 3) Because the data is
comprehensive, it should provide generality, in that the theory includes extensive
variation and is abstract enough to be applicable to a wide variety of contexts; and 4)
It should provide control, in the sense of stating the conditions under which the theory
applies and describing a reasonable basis for action.

In clarifying their approach to grounded theory, Strauss and Corbin (1998) suggested
the following seven criteria be used for evaluating the research process:

• Rationale for the selection of the original sample;


• Elaboration of the major categories that emerge;
• The events, incidents, or actions pointing to the major categories
identified;
• An explanation of how theoretical formulations influenced or guided
the data collection;
• The elaboration regarding the hypotheses and justifications for the
establishment of relationships between categories and the approach to
validation;
• The accounting for discrepancies in the data and resulting theoretical
modifications; and
• The rationale for the selection of the core or central category.

Strauss and Corbin (1998) also suggested that the “empirical grounding of a study” be
evaluated to assess the development of relevant categories and concepts that are the
building blocks of the theory. The consideration of seven criteria for the assessment
of the grounding of a study includes an examination of the following:

• The quality of the concepts generated;


• The systematic relationships between the concepts;
• The clarity and density of conceptual linkages;
• The inclusion of variation into the theory;
• A clear description of the conditions under which variation can
be found; and
• An account of the research process; and
• The significance of the theoretical findings.

Some studies published through grounded theory method

Although few grounded theory research studies have been printed in the publications
affiliated with HRD, several studies have focused on research questions relevant to
the field. Pertinent grounded theory research has included an examination of
individual responses to organizational change (Johansen, 1991, 2001), the exploration
of leadership values for quality in a manufacturing context (Franche’re, 1995), intra-
firm conflicts in the formation of business strategies in corporate settings (Shaffer and
Hillman, 2000), patient perceptions of the quality of care (Radwin, 2000), and the
impact of conflict and cohesion on organizational learning and performance (Cairns,
Burt, and Beech, 2001).

Advantages

Grounded theory provides for:

• A systematic and rigorous procedure;


• Rich data from the experiences of individuals.

Disadvantages

The disadvantages of using grounded theory for your research are:

• The subjectivity of the data leads to difficulties in establishing reliability
and validity of approaches and information.
• It is difficult to detect or to prevent researcher-induced bias.

• The presentation of results - the highly qualitative nature of the results
can make them difficult to present in a manner that is usable by
practitioners.

CASE STUDIES

A case study is a research method which allows for an in-depth examination of events, phenomena, or other observations within a real-life context, whether for testing theory or simply as a tool for learning. Case studies often employ documents, artifacts, interviews and observations during the course of research. Case studies can be qualitative, quantitative, or combined.

Case study research, through reports of past studies, allows the exploration and
understanding of complex issues. It can be considered a robust research method
particularly when a holistic, in-depth investigation is required. Recognized as a tool in
many social science studies, the role of case study method in research becomes more
prominent when issues about education (Gulsecen and Kubat, 2006), sociology
(Grassel and Schirmer, 2006) and community-based problems (Johnson, 2006), such
as poverty, unemployment, drug addiction, illiteracy, etc. were raised. One of the
reasons for the recognition of case study as a research method is that researchers
were becoming more concerned about the limitations of quantitative methods in
providing holistic and in-depth explanations of the social and behavioral problems in
question. Through case study methods, a researcher can go beyond the quantitative
statistical results and understand the behavioral conditions through the actor’s
perspective. By including both quantitative and qualitative data, case study helps
explain both the process and outcome of a phenomenon through complete
observation, reconstruction and analysis of the cases under investigation (Tellis,
1997).

Definition

Yin, (1984) defines the case study research method as an empirical inquiry that
investigates a contemporary phenomenon within its real-life context; when the
boundaries between phenomenon and context are not clearly evident; and in which
multiple sources of evidence are used.

The main characteristics of the case study

1. A descriptive study

a. The data collected constitute descriptions of psychological processes and
events, and of the contexts in which they occurred (qualitative data).

b. The main emphasis is always on the construction of verbal descriptions of
behavior or experience but quantitative data may be collected.

c. High levels of detail are provided.

2. Narrowly focused.

a. Typically a case study offers a description of only a single individual, and
sometimes of groups.

b. Often the case study focuses on a limited aspect of a person, such as their
psychopathological symptoms.

3. Combines objective and subjective data

The researcher may combine objective and subjective data: All are regarded
as valid data for analysis, and as a basis for inferences within the case study.

I. The objective description of behavior and its context.

II. Details of the subjective aspect, such as feelings, beliefs, impressions or
interpretations. In fact, a case study is uniquely able to offer a means of
achieving an in-depth understanding of the behavior and experience of a single
individual.

4. Process-oriented.

a. The case study method enables the researcher to explore and describe the
nature of processes, which occur over time.

b. In contrast to the experimental method, which basically provides a still
‘snapshot’, the case study can follow processes as they continue over time, for
example the development of language in children (Hayes, 2000).

Types of Case Studies

Yin, (1984) notes three categories, namely exploratory, descriptive and explanatory
case studies.

Exploratory

Exploratory case studies set out to explore any phenomenon in the data which serves as a point of interest to the researcher. For instance, a researcher conducting an exploratory case study on an individual’s reading process may ask general questions,
such as, “Does a student use any strategies when he reads a text?” And “if so, how
often?”. These general questions are meant to open up the door for further
examination of the phenomenon observed. In exploratory case studies, fieldwork, and
data collection may be undertaken prior to the definition of the research questions
and hypotheses. This type of study has been considered as a prelude to some social
research. A good instrumental case does not have to defend its typicality.

Explanatory

Explanatory case studies are appropriate for undertaking causal studies, particularly in multifaceted and multivariate cases. For instance, a researcher may ask the reason why a student uses an influencing strategy in reading (Zaidah, 2003). On the basis of the data, the researcher may then form a theory and set out to test this theory (McDonough and McDonough, 1997). Furthermore, explanatory cases are deployed for causal studies where pattern-matching can be used to investigate certain phenomena in very complex and multivariate cases. Yin and Moore (1988), for example, conducted a study to examine the reason why some research findings get into practical use. The utilization outcomes were explained by three rival theories: a knowledge-driven theory, a problem-solving theory, and a social-interaction theory. Knowledge-driven theory means that ideas and discoveries from
basic research eventually become commercial products. Similar notions can be said
for the problem-solving theory. The social-interaction theory, on the other hand,
suggests that overlapping professional networks cause researchers and users to
communicate frequently with each other.

Descriptive

Descriptive case studies set out to describe the natural phenomena which occur within the data in question, for instance, what different strategies are used by a reader and how the reader uses them. The goal set by the researcher is to describe the data as they occur. The task in a descriptive case study is that the investigator must begin with
a descriptive theory to support the description of the phenomenon or story. If this
fails there is the possibility that the description lacks rigor and that problems may
occur during the project. An example of a descriptive case study using the pattern-
matching procedure is the one conducted by Pyecha (1988) on special education
children. Through replication, data elicited from several states in the United States of
America were compared and hypotheses were formulated. In this case, descriptive
theory was used to examine the depth and scope of the case under study.

Designing Case Studies

Yin, (1994) identified five components of research design that are important for case
studies:
• A study's questions;
• Its propositions, if any;
• Its unit(s) of analysis;
• The logic linking the data to the propositions; and
• The criteria for interpreting the findings (Yin, 1994).

Case study methods

Yin, (1984) identifies these methods as including:

• Direct observation of activities and phenomena and their
environment;
• Indirect observation or measurement of process related
phenomena;
• Interviews - structured or unstructured;
• Documentation, such as written, printed or electronic information
about the company and its operations; also newspaper cuttings;
and
• Records and charts about previous use of technology relevant to
the case.

Case study protocol

There are certain protocols to be followed by the case study investigator. A case study
protocol contains procedures and general rules that should be followed in using the
instrument. The protocol is to be created prior to the data collection phase. Yin, (1994)
presented the protocol as a major component in asserting the reliability of the case
study research. A typical protocol should have the following sections:

• An overview of the case study project (objectives, issues, topics being
investigated)
• Field procedures (credentials and access to sites, sources of
information)
• Case study questions (specific questions that the investigator must
keep in mind during data collection)
• A guide for case study report (outline, format for the narrative) (Yin,
1994).

Case study Evidence

Stake, (1995) and Yin, (1994) identified at least six sources of evidence in case studies.
The following is not an ordered list, but reflects the research of both Yin, (1994) and
Stake, (1995):

• Documents: letters, memoranda, agendas, administrative
documents, newspaper articles;
• Archival records: service records, organizational records, lists of
names, survey data, and other such records;
• Interviews: open-ended, focused, and structured or survey interviews;
• Direct observation: a field visit is conducted during the case study;
• Participant-observation: makes the researcher into an active
participant in the events being studied; and
• Physical artifacts: tools, instruments, or any other physical
evidence that may be collected.

Advantages of Case study method

Advantages of the case study method (Searle, 1999)

1. Stimulating new research. A case study can sometimes highlight extraordinary
behavior, which can stimulate new research.
2. Contradicting established theory. Case studies may sometimes contradict
established psychological theories; for example, a case study may challenge
the established theory that the early years of life are a critical period for
human social development.
3. Giving new insight into phenomena or experience. Because case studies are so
rich in information, they can give insight into phenomena, which we could not
gain in any other way.
4. Permitting investigation of otherwise inaccessible situations: the case study
gives psychological researchers the possibility to investigate cases, which could
not possibly be engineered in research laboratories.

Disadvantages of the case study method

Searle (1999) identified a number of disadvantages in case study research.

1. Replication not possible. The uniqueness of the data means that they are valid for only
one person. While this is a strength in some forms of research, it is a weakness
in others, because it means that the findings cannot be replicated and so some
types of reliability measures are very low.
2. The researcher’s own subjective feelings may influence the case study
(researcher bias), affecting both the collection of data and their interpretation.
This is particularly true of many of the famous case studies in psychology’s
history, especially the case history reported by Freud. In unstructured or
clinical case studies the researcher’s own interpretations can influence the
way that the data are collected, i.e. there is a potential for researcher bias.
3. Memory distortions. The heavy reliance on memory when reconstructing the
case history means that the information about past experiences and events
may be notoriously subject to distortion. Very few people have full
documentation of all various aspects of their lives, and there is always a
tendency that people focus on factors which they find important themselves
whiles they may be unaware of other possible influences.
4. Not possible to generalize the findings. There are serious problems in generalizing the
results of a unique individual to other people because the findings may not be
representative of any particular population.

DELPHI METHOD

The Delphi concept may be viewed as one of the spin-offs of defense research. "Project Delphi" was the name given to an Air Force-sponsored Rand Corporation study, starting in the early 1950's, concerning the use of expert opinion. The objective of the original study was to "obtain the most reliable consensus of opinion of a group of experts ... by a series of intensive questionnaires interspersed with controlled opinion feedback".

The Delphi technique is ‘a method for structuring a group communication process so that the process is effective in allowing a group of individuals, as a whole, to deal with
a complex problem’ (Linstone and Turoff, 1975). Furthermore, it is ‘a method for the
systematic solicitation and collation of judgments on a particular topic through a set
of carefully designed sequential questionnaires interspersed with summarized
information and feedback on opinions derived from earlier responses’ (Delbecq et al.
1975).

Why Delphi?

Many people label Delphi a forecasting procedure because of its significant use in that area, but there is a surprising variety of other application areas. Among those already developed we find:

• Gathering current and historical data not accurately known or available;


• Examining the significance of historical events;
• Evaluating possible budget allocations;
• Exploring urban and regional planning options;
• Planning university campus and curriculum development;
• Putting together the structure of a model;
• Delineating the pros and cons associated with potential policy options;
• Developing causal relationships in complex economic or social
phenomena;
• Distinguishing and clarifying real and perceived human motivations; and
• Exposing priorities of personal values, social goals.

Procedural steps in Delphi Technique

The procedural steps in adopting the Delphi technique were as follows:

“Example Research Topic: Factors Wavering Internationalization of SMEs”

Expert Panel identification:

The panel of experts was drawn from professionals having high knowledge and expertise in the field of internationalization who are closely associated with small and medium scale industries as consultants, business associates, partners, collaborators, owners of SMEs, researchers, academicians and senior-level managers. The members from industrial bodies such as the chamber of commerce, professors from universities, senior researchers from research institutes, and members from governmental and financial institutions were identified through an exploratory method.

The data regarding the professors were collected from the academic departments of
universities, and the other professionals were reached through emails and direct
interaction with their institutions. Most members of the expert group were between
38 and 55 years old. The specialized areas of these expert members included
industrial development, technology development, entrepreneurship, planning boards,
SME development, supply chain management, production and operations
management, information technology, international marketing, institutional finance,
human resources, research and industrial sociology. The participants comprised 30
male members (86%) and 5 female members (14%). This dynamic panel of experts
was experienced and conversant enough to give pertinent opinions and a justifiable
understanding of the internationalization process of Indian SMEs.

Rounds

Round 1: In the first round, the Delphi process traditionally begins with an open-ended
questionnaire. The open-ended questionnaire serves as the cornerstone of soliciting
specific information about a content area from the Delphi subjects (Custer, Scarcella,
and Stewart, 1999).
The questions:

1. How do you define internationalization of business in relation to SMEs?


2. How do you relate the term with SMEs considering your nature of business?
3. How do you take a decision on the expansion of your Small Scale business at
international level?
4. What are the external business environment factors influencing the decision
on the expansion of your business at the international level?
5. What are the internal business environment factors influencing the decision
on the expansion of your business at the international level?
6. Do you integrate these external and internal factors to analyze the decision on
the expansion of your business at international level?
7. Contextualizing the topic to the Indian scenario, which are the powerful
factors that you feel are most closely knit with the business owner's decision
not to go with global operations?

Round 2:

In the second round, each Delphi participant receives a second questionnaire and is
asked to review the items summarized by the investigators based on the information
provided in the first round. Accordingly, Delphi panelists may be required to rate or
rank-order items to establish preliminary priorities among items. As a result of round
two, areas of disagreement and agreement are identified (Ludwig, 1994). In this
round, consensus begins forming and the actual outcomes can be presented among
the participants’ responses (Jacobs, 1996).

1. Information regarding the influential factors of the internationalization of
business was collected from the respondents.
2. The process identified 224 items having a high or low influence on the
internationalization of business.
3. A rating process was then carried out on the items identified (a minimal
sketch of how such ratings might be aggregated is shown below).

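To make the rating and consensus-checking idea concrete, the short sketch below shows one
plausible way of aggregating a round of panel ratings. It is a minimal illustration only: the
item names, the 1-to-5 importance scale and the consensus rule (an interquartile range of 1
or less) are assumptions made for this example, not details taken from the study described
above.

# Hypothetical sketch: aggregating one round of Delphi ratings (Python).
# Item labels and the 1-5 importance scale are invented for illustration.
from statistics import median, quantiles

ratings = {
    "Access to export finance":      [5, 4, 5, 4, 5, 4, 5],
    "Managerial foreign experience": [3, 4, 2, 3, 4, 3, 3],
    "Domestic market saturation":    [2, 1, 2, 5, 1, 4, 2],
}

for item, scores in ratings.items():
    med = median(scores)
    q1, _, q3 = quantiles(scores, n=4)   # quartiles of the panel's ratings
    iqr = q3 - q1                        # spread = degree of disagreement
    verdict = "consensus" if iqr <= 1 else "feed back to panel in the next round"
    print(f"{item}: median={med:.1f}, IQR={iqr:.2f} -> {verdict}")

The median summarizes the panel's central judgment, while the interquartile range gives a
simple measure of agreement that can be reported back to the experts in the following round.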
Round 3:

In the third round, each Delphi panelist receives a questionnaire that includes the
items and ratings summarized by the investigators in the previous round and are
asked to revise his/her judgments or “to specify the reasons for remaining outside the
consensus” (Pfeiffer, 1968). This round gives Delphi panelists an opportunity to make
further clarifications of both the information and their judgments about the relative
importance of the items.

1. A second-level screening of the 224 items having a high or low influence
on the internationalization of business was carried out.
2. The process further identified 199 items having a high or low influence
on the internationalization of business.
3. The items were classified into 55 categories.
4. Thematic presentation and categorization of the items were done.

Round 4:

In the fourth and often final round, the list of remaining items, their ratings, minority
opinions, and items achieving consensus are distributed to the panelists. This round
provides a final opportunity for participants to revise their judgments. It should be
remembered that the number of Delphi iterations depends largely on the degree of
consensus sought by the investigators and can vary from three to five (Delbecq, Van
de Ven, Gustafson, 1975; Ludwig, 1994).

1. During the third-level screening, the 182 items having a high or
moderately high influence on internationalization were subjected to
repeated discussion.
2. The core factors which influence the internationalization of the business
of SMEs were identified.
3. Expert opinion was sought on the appropriateness of the core factors
selected for the study.

Advantages of Delphi Technique

The researchers have pointed out the many advantages of the Delphi
technique. The Delphi technique:

1. Is conducted in writing and does not require face-to-face meetings:


• Responses can be made at the convenience of the participant;
• Individuals from diverse backgrounds or from remote locations can
work together on the same problems;

• Is relatively free of social pressure, personality influence, and
individual dominance and is, therefore, conducive to independent
thinking and gradual formulation of reliable judgments or
forecasting of results;
• Helps generate consensus or identify divergence of opinions among
groups hostile to each other;
2. Helps keep attention directly on the issue:
3. Allows a number of experts to be called upon to provide a broad range
of views, on which to base analysis — “two heads are better than one”:
• Allows sharing of information and reasoning among
participants;
• Iteration enables participants to review, re-evaluate and revise
all their previous statements in light of comments made by their
peers;
4. Is inexpensive.

Why does Delphi fail?

Some of the common reasons for the failure of a Delphi are:

1. Imposing monitor views and preconceptions of a problem upon the
respondent group by over-specifying the structure of the Delphi and
not allowing for the contribution of other perspectives related to the
problem;
2. Assuming that Delphi can be a surrogate for all other human
communications in a given situation;
3. Poor techniques of summarizing and presenting the group response
and ensuring common interpretations of the evaluation scales utilized
in the exercise;
4. Ignoring and not exploring disagreements, so that discouraged
dissenters drop out and an artificial consensus is generated;
5. Underestimating the demanding nature of a Delphi and the fact that
the respondents should be recognized as consultants and properly
compensated for their time if the Delphi is not an integral part of their
job function (Linstone and Turoff);
6. Information comes from a selected group of people and may not be
representative;
7. Tendency to eliminate extreme positions and force a middle-of-the-
road consensus;
8. More time-consuming than group process methods;
9. Requires skill in written communication;
10. Requires adequate time and participant commitment.

FOCUS GROUP

A focus group, or focus group interview, is a qualitative research tool often used in
social research, business and marketing. Focus groups are "small group discussions,
addressing a specific topic, which usually involve 6-12 participants, either matched or
varied on specific characteristics of interest to the researcher". (Fern, 1982; Morgan
and Spanish, 1984). Methodologically, focus group interviews involve a group of 6–8
people who come from similar social and cultural backgrounds or who have similar
experiences or concerns. They gather together to discuss a specific issue with the help
of a moderator in a particular setting where participants feel comfortable enough to
engage in a dynamic discussion for one or two hours. Focus groups do not aim to reach
consensus on the discussed issues. Rather, focus groups ‘encourage a range of
responses which provide a greater understanding of the attitudes, behavior, opinions
or perceptions of participants on the research issues’ (Hennink, 2007). The focus
group method is different from group interviews since group interactions are treated
explicitly as ‘research data’ (Ivanoff and Hultberg, 2006). The participants are chosen
because they are able to provide valuable contributions to the research questions.
The discussion between participants provides the researchers with an opportunity to
hear issues which may not emerge from their interaction with the researchers alone.
The interaction among the participants themselves leads to more emphasis on the
points of view of the participants than those of the researchers (Gaiser, 2008).

Focus groups require skilled facilitators or moderators to guide the discussion and
maintain the focus. They are found to be most effective for learning about opinions
and attitudes, pilot testing materials for assessments and generating
recommendations.

Why focus groups in qualitative research?

• Focus group methodology is useful in exploring and examining what
people think, how they think, and why they think the way they do about
the issues of importance to them, without pressuring them into making
decisions or reaching a consensus (Kitzinger, 2005);
• As a research method, focus groups are valuable in two main
perspectives (Conradson, 2005). They offer the researchers a means of
obtaining an understanding (insight) of a wide range of views that people
have about a specific issue as well as how they interact and discuss the
issue. A focus group, for example, could be used to find out how
consumers perceive health care and services, both in terms of their own
opinions and in relation to others. For example, how individuals who live
in urban areas see health care in comparison with those who live in rural
settings (Conradson, 2005).

• As such, focus groups offer possibilities for researchers to explore ‘the
gap between what people say and what they do’ (Conradson, 2005).
• The focus group setting also provides the researcher with opportunities
to follow up the comments and to crosscheck with the participants in a
more interactive manner than a questionnaire or individual interview can
offer.
• Focus group methodology is adopted widely in the field of development
in a cross-cultural context, especially in eliciting community viewpoints
and understanding community dynamics (Lloyd-Evans, 2006).
• One of the great advantages of the focus group method is its ability to
cultivate people’s responses to events as they evolve (Barbour, 2007).

Advantages:

The advantages of the focus group method include:

• Relatively easy to assemble, inexpensive and flexible in terms of format,


types of questions and desired outcomes
• Good for groups with lower literacy levels (e.g. young children, English as
a second language)
• Open records allow participants to confirm their contributions
• Provide rich data through direct interaction between researcher and
participants,
• Spontaneous, participants not required to answer every question; able
to build on one another's responses;
• Help people build new connections

Limitations:

The limitations of the focus group method include:

• The findings may not represent the views of larger segments of the
population;
• Requires good facilitation skills, including the ability to handle the various
roles people may play (“expert”, “quiet”, “outsider”, “friend”, “hostile”, etc.);
• Though rich, the data may be difficult to analyze because they are unstructured;
• Possible conformance, censoring, conflict avoidance, or other unintended
outcomes of the group process need to be addressed as part of the data
analysis (Carey, 1995)

TRIANGULATION

Triangulation is an approach to research that uses a combination of more than one


research strategy in a single investigation. Qualitative researchers may choose
triangulation as a research strategy to assure completeness of findings or to confirm
findings. By combining multiple observers, theories, methods, and empirical
materials, sociologists can hope to overcome the weakness or intrinsic biases and the
problems that come from single method, single-observer, single-theory studies. Often
the purpose of triangulation in specific contexts is to obtain confirmation of findings
through the convergence of different perspectives. The point at which the
perspectives converge is seen to represent reality.

Why Triangulation?

The researchers make use of triangulation in qualitative research due to various


reasons. They are;

• By combining different strategies, researchers confirm findings by


overcoming the limitations of a single strategy;
• Method to enhance the validity & reliability of qualitative research;
• Enhances accuracy of interpretation;
• Confirms that the data collected is not due to chance or circumstances; and
• Uncovering the same information from more than one vantage point helps
researchers describe how the findings occurred under different
circumstances and assists them to confirm the validity of the findings.

The most fertile search for validity comes from a combined series of different
measures, each with its own idiosyncratic weaknesses, each pointed to a single
hypothesis. When a hypothesis can survive the confrontation of a series of
complementary methods of testing, it contains a degree of validity unattainable by
one tested within the more constricted framework of a single method (Webb et al.
1966).


Techniques for ensuring the quality of a study

In the tradition of Lincoln and Guba, Erlandson et al. (1993) describe the following
techniques for ensuring the quality of a study.

• Prolonged engagement;
• Persistent observation;
• Triangulation;
• Referential adequacy;
• Peer de-briefing;
• Member checking;
• Reflexive journal;
• Thick description;
• Purposive sampling; and
• Audit trail.

Types of Triangulation:

Various researchers reported that there are five basic types of triangulation methods
employed by the qualitative researchers:

• Data triangulation, involving time, space, and persons.


For example, the research process would start by identifying the
stakeholder groups such as youth in the program, their parents, school
teachers, and school administrators. In-depth interviews could be
conducted with each of these groups to gain insight into their
perspectives on program outcomes. During the analysis stage, feedback
from the stakeholder groups would be compared to determine areas of
agreement as well as areas of divergence (Guion, Diehl, and McDonald,
2002).
• Investigator triangulation, which consists of the use of multiple, rather
than single, observers; for example, suppose a researcher is conducting
pre- and post-observations of youth in the 4-H public speaking program
to assess changes in nonverbal communication and public speaking skills.
In order to triangulate the data, it would be necessary to line up different
colleagues in the same field to serve as evaluators. They would be given
the same observation check sheet for pre- and post-observations, and
after analysis, validity would be established for the practices and skills that
were identified by each observer (Guion, Diehl, and McDonald, 2002).
• Theory triangulation, which consists of using more than one theoretical
scheme in the interpretation of the phenomenon;

For example, suppose a researcher is interviewing participants from a
nutrition program to learn what healthy lifestyle practice changes they
attribute to participating in a program. To triangulate the information, a
researcher could then share the transcripts with colleagues in different
disciplines (i.e., nutrition, nursing, pharmacy, public health education,
etc.) to see what their interpretations are (Guion, Diehl, and McDonald,
2002).

• Methodological triangulation; which involves using more than one


method and may consist of within-method or between-method
strategies.
For example, suppose a researcher is conducting a case study of a
Welfare-to-Work participant to document changes in her life as a result
of participating in the program over a one-year period. A researcher
would use interviewing, observation, document analysis, or any other
feasible method to assess the changes. A researcher could also survey the
participant, her family members, and case workers as a quantitative
strategy. If the findings from all of the methods draw the same or similar
conclusions, then validity has been established (Guion, Diehl, and
McDonald, 2002).
• Multiple triangulation, when the researcher combines in one
investigation multiple observers, theoretical perspectives, sources of
data, and methodologies.
Example: Burr (1998) used multiple triangulation to obtain a more
comprehensive view of family needs in critical care. Through the use of
questionnaires and selective participant interviews, this researcher found
that family members who were interviewed found the sessions
therapeutic, but those who were not interviewed could only
communicate their frustrations on questionnaires (Thurmond, 2001).

Advantages of qualitative research

The advantages of doing qualitative research on leadership include (Conger, 1998;


Bryman et al, 1988; Alvesson, 1996):

• Flexibility to follow unexpected ideas during research and explore
processes effectively;
• Sensitivity to contextual factors;
• Ability to study symbolic dimensions and social meaning;
• Increased opportunities to develop empirically supported new ideas and
theories;
• Increased opportunities for in-depth and longitudinal explorations of
leadership phenomena; and
• Increased opportunities for more relevance and interest for practitioners.

RELIABILITY OF QUALITATIVE RESEARCH

Reliability is the extent to which results are consistent over time and are an accurate
representation of the total population under study; if the results of a study can be
reproduced under a similar methodology, then the research instrument is considered
to be reliable (Joppe, 2000). Wainer and Braun (1998) describe validity in quantitative
research as “construct validity”. The construct is the initial concept, notion, question
or hypothesis that determines which data are to be gathered and how they are to be
gathered.

Although the term ‘Reliability’ is a concept used for testing or evaluating quantitative
research, the idea is most often used in all kinds of research. If we see the idea of
testing as a way of information elicitation then the most important test of any
qualitative study is its quality. A good qualitative study can help us “understand a
situation that would otherwise be enigmatic or confusing” (Eisner, 1991). The validity
and reliability of qualitative research depend on the researcher’s skill, sensitivity
and training in the field. There are additional specific methods a researcher can
perform to ensure data validity and reliability. Triangulation compares the results
from two or more different methods (i.e., data from interviews and observation), or
two or more data sources (interviews with members of different groups) to check for
consistency in answers and attitudes. Instead of using triangulation as a stringent
test of validity, it might be a more appropriate method for ensuring comprehensive
data collection – getting all sides of “the story,” for example, or understanding all
the shades of meaning in the answer to a question (Mays and Pope, 2000).
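
One simple and widely used way of putting the idea of consistency into practice, which is
not prescribed by the discussion above but is compatible with it, is to have two analysts
code the same material independently and compute how often they agree. The sketch below
assumes hypothetical code labels applied to six transcript segments.

# Hedged example: percent agreement between two coders (Python).
# The code labels and the six segments are invented for illustration.
coder_a = ["fear", "anxiety", "coping", "fear", "support", "coping"]
coder_b = ["fear", "anxiety", "coping", "anxiety", "support", "coping"]

agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = 100 * agreements / len(coder_a)
print(f"Inter-coder agreement: {percent_agreement:.1f}%")   # 83.3% for this toy data

A low agreement figure would prompt the coders to discuss and refine their coding frame
before continuing, which is one practical way of strengthening the dependability of
qualitative findings.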

ISSUES OF QUALITATIVE RESEARCH

Ramos (1989) described three types of problems that may affect qualitative studies:
the researcher/participant relationship, the researcher’s subjective interpretations of
data, and the design itself.

• In practice, triangulation as a strategy provides a rich and complex picture


of some social phenomenon being studied, but rarely does it provide a clear
path to a singular view of what is the case. I suggest that triangulation as a
strategy provides evidence for the researcher to make sense of some social
phenomenon, but that the triangulation strategy does not, in and of itself,
do this.
• Because of the predominance of the assumption that triangulation will
result in a single valid proposition we look for the convergence of evidence

and miss what I see as the greater value in triangulating. More accurately,
there are three outcomes that might result from a triangulation strategy.
The first is that which is commonly assumed to be the goal of triangulation
and that is convergence. The notion of convergence needs little explanation:
data from different sources, methods, investigators, and so on will provide
evidence that will result in a single proposition about some social
phenomenon.
• A second and probably more frequently occurring outcome from a
triangulation strategy is inconsistency among the data. When multiple
sources, methods, and so on are employed we frequently are faced with a
range of perspectives or data that do not confirm a single proposition about
a social phenomenon. Rather, the evidence presents alternative
propositions containing inconsistencies and ambiguities. With this outcome
it is not clear what the valid claim or proposition about the phenomenon is.
• A third outcome is contradiction. It is possible for data not only to be
inconsistent but actually to be contradictory. When we have employed
several methods we are sometimes left with a data bank that results in
opposing views of the social phenomenon being studied. If one were to
accept the assumptions that triangulation should result in a single claim
because bias is naturally cancelled out, the outcomes of the second and
third type would not be useful in the research process (Sandra Mathison,
2008).

HOW TO MAKE BETTER EVALUATION OF QUALITATIVE RESEARCH?

Ann DeVaney (2000) provided the following criteria, developed by numerous AECT
members that are used to evaluate the quality of papers submitted for this award:

• Is the problem clearly stated; does it have theoretical value and


currency; does it have practical value?
• Is the problem or topic situated in a theoretical framework; is the
framework clear and accessible; does the document contain competing
epistemologies or other basic assumptions that might invalidate
claims?
• Is the literature review a critique or simply a recapitulation; is it
relevant; does it appear accurate and sufficiently comprehensive?
• Are the theses stated in a clear and coherent fashion; are they
sufficiently demonstrated in an accessible manner; are there credible
warrants to claims made about the theses? (If applicable)
• Does the method fit the problem and is it an appropriate one given the
theoretical framework? (If applicable)

• Do the data collected adequately address the problem; do they make
explicit the researcher’s role and perspective; do the data collection
techniques have a “good fit” with the method and theory? (If
applicable).
• Are the data aggregates and analysis clearly reported; do they make
explicit the interpretive and the reasoning process of the researcher?
(If applicable)
• Does the discussion provide meaningful and warranted interpretations
and conclusions?

QUALIFICATIONS OF INVESTIGATORS

The investigators of qualitative research need to have the following abilities and skills.
The researchers;

• Must have requisite knowledge and skills about methodology, setting


and the nature of the issue.
• Must be familiar with own biases, assumptions, expectations, and
values.
• Must be empathic, intelligent, energetic, and interested in listening
• Must be open to embracing multiple realities.
• Must be prepared to produce detailed, comprehensive, and sometimes
lengthy reports.

CONCLUSION

The term qualitative implies an emphasis on the examination of processes and
meanings that are not measured in terms of quantity. The researcher needs to explain
and interpret the phenomena or issues under study. Here the instrument of research is
the researcher, whose observations are taken into consideration to describe the
phenomenon under study. Several methodologies of qualitative research are
incorporated in this section of the book; depending upon the topic selected for the
study, a researcher needs to choose an appropriate qualitative method. In
comparison with quantitative research, the objective of qualitative research is to
explain phenomena rather than to generalize findings. A researcher should follow
varied methods of data collection to gain acceptance of the research observations.
Usually qualitative research is conducted for explorative and deep-rooted studies
where variables are less identifiable at the beginning.

DISCUSSION QUESTIONS

1. “Qualitative research makes use of a naturalist approach.” Discuss.
2. How do you formulate qualitative research questions?
3. Define qualitative research.
4. Explain the characteristic features of qualitative research.
5. What are the different types of data collection procedures in qualitative
research?
6. How do you decide the appropriateness of qualitative research?
7. How do you make use of narration in qualitative research?
8. Why do researchers adopt ethnography?
9. What does phenomenology aim at in qualitative research?
10. Explain the steps involved in the methodology of phenomenology.
11. Explain grounded theory. Discuss why researchers should make use of
grounded theory.
12. Discuss the role of case studies in qualitative research.
13. How do you make use of the Delphi method in qualitative research?
14. Explain why focus groups are used in qualitative research.
15. What is the purpose of making use of triangulation in qualitative research?
16. How does a researcher ensure the reliability of qualitative research findings?
17. Discuss the issues of qualitative research.
18. What are the ways to make an effective evaluation of qualitative research?

TEMPLATE

HOW TO WRITE A CASE RESEARCH - GUIDELINE

Each section of a case research report, what is expected in it, and the suggested
length are outlined below.

Selection of a problem (half a page):
• The problem should be contemporary and realistic for your organization or
another industry.
• The problem invites attention from top management.
• The problem is sensitive enough to call for a better solution.
• The topic can also be the best practices of an industry, or vice versa.

Introduction (1 page):
A brief introduction is the first part of the case study. Remember that the purpose of
the introduction is to give a general view of the problem and introduce the topic to
the readers explicitly. Introduce the issue related to the sector, company, or theme
and how it affects the field, business, community, country, sector or individual. Start
with a general view and end with a specific point justifying the topic selected for the
study. All content should be source based, and the sources should be given in the
references.

The profile of the company / industry (1 page, if the case is related to an institution,
company or industry):
Introduce the company profile to the readers. Data related to the company, such as
when it was established, its vision, mission, work culture, employees, management
approaches, leadership, and profit turnover, etc., are to be incorporated based on
their relevance to the reader.

Review of literature (4 pages):
Just after you have got your topic, and definitely a problem you will work on, start
gathering information. Nowadays there are a lot of different sources that you are free
to use for writing a case study. Use books, newspapers, magazines, the Internet and
any other sources you are able to find. Identify the theory that will be used. All
content should be source based, and the sources should be given in the references.

Methodology (1 page):
What methodologies have been used to arrive at the case? Examples: interview,
focus group discussion, field notes, content analysis, narration, survey.

Formulate the problem (1 page):
Based on the introduction and review of literature, the case writer has to identify and
formulate the problem in this section. The rationale behind the identification of
issues and the selection of an issue as a case study is to be clearly mentioned in this
section, so that it gives utmost clarity.

Statement of the problem (quarter of a page):
The problem identified by the researcher/case writer is to be clearly stated in this
section.

Findings (1-2 pages):
Identify the problems found in the case. Each analysis of a problem should be
supported by the facts given in the case together with the relevant theory and course
concepts. Here, it is important to search for the underlying problems; for example,
cross-cultural conflict may be only a symptom of the underlying problem of
inadequate policies and practices within the company. This section is often divided
into sub-sections, one for each problem.

Discussion (4 pages):
• Summarize the major problem/s.
• Identify alternative solutions to this/these major problem/s (there is likely to be
more than one solution per problem).
• Briefly outline each alternative solution and then evaluate it in terms of its
advantages and disadvantages.
• There is no need to refer to the theory or coursework here.

Recommendations (1 page):
• Choose which of the alternative solutions should be adopted.
• Briefly justify your choice, explaining how it will solve the major problem/s. This
should be written in a forceful style, as this section is intended to be convincing.
• Here the integration of theory and coursework is appropriate.

Implementation (1 page):
• Explain what should be done, by whom and by when.
• If appropriate, include a rough estimate of costs (both financial and time).

Conclusion (1 page):
• Sum up the main points from the findings and discussion.

References (length depends on your review of literature):
Make sure all references are cited correctly (follow APA style).

Appendices (if any)

MODULE 5

RESEARCH PROCESS

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand and appreciate the meaning of the research process


2. Analyze the significance of each research step in getting the desired research
output
3. Learn how to formulate the research process

INTRODUCTION

Research is an academic activity and as such the term should be used in a technical
sense. According to Clifford Woody research comprises defining and redefining
problems, formulating hypothesis or suggested solutions; collecting, organizing and
evaluating data; making deductions and reaching conclusions; and at last carefully
testing the conclusions to determine whether they fit the formulated hypothesis.

Research, as a systematic enquiry leading to the construction of new knowledge


(compare Bordens and Abbott, 2007; Graziano and Raulin, 2009), does not take place
solely in science. It is also carried out daily by each individual (compare Bannister &
Fransella, 2003). However, scientific research (also scholarly research or academic
research) follows particular guidelines and procedures to ensure the quality of
research results. Scientific research intends to create scientific knowledge in a particular
field (Hockey, 2000) through the process of systematic scientific enquiry, the research
process (Clark and Hockey, 1989). The research process as well as the research results
have to fulfill certain standards (Heinrich, 1993; Shugan, 2004). Among others,
scientific research must be public, replicable, unprejudiced and independent and it
must advance the state of the art (Heinrich, 1993; Shugan, 2004). Due to this crucial
role of the research process in science, the understanding as well as the theoretical
analysis of the research process is relevant for any research directed towards
improving and supporting science (compare Söldner, Haller, Bullinger and Möslein,
2009).

RESEARCH PROCESS

Typically a research process consists of the following steps (Cooper and Schindler,
2003):

• Identification of the problem.


• Defining the management question and the research question(s).
• Conducting exploratory study (if required), to clarify the problem and/or
to refine management question/research question(s).
• Developing a research proposal.
• Outlining a research design specifying type, purpose, time frame, scope,
and environment of research.
• Developing an instrument for data collection and conducting a pilot study
to test the instrument for validity and reliability.
• Data collection (through observation, experiments, or surveys – the choice
of data collection method depends on the nature of the study).
• Data Analysis.
• Reporting research results

Although the research process is outlined as a sequence above, in practice it is quite
muddled, and a researcher may be required to work simultaneously on more than one
aspect at a time and to revisit some of the steps (Saunders et al., 2009). Therefore,
given the uniqueness of every research activity, factors like the time and money
required for the research, and the problems faced by the researcher, would vary
among different research projects.

The research process by Graziano and Raulin (2009) also focuses on behavioral
science, but it is less strict than the process by Bordens and Abbott (2007). Graziano
and Raulin (2009) also define science as a process of enquiry.
Science acquires its knowledge through observation (empiricism), but also through
reasoning (rationalism) (Graziano and Raulin, 2009). The process begins with the
generation of an initial idea. Personal experience or existing research can serve as an
inspiration for a new research process. To explore the idea with the help of scientific
research, it has to be clearly defined. In the next step, therefore, the problem to be
addressed is described in the form of a research question. The research procedure
that should lead to the solution of the research question is defined in the procedure-
design phase. The resulting research design determines the study participants and
conditions as well as the data-collection and data analysis methods. After the
observation has been carried out, the data are analyzed and interpreted. The final
communication of the results to the scientific community can trigger a new research
process or stimulate the activity of other researchers. (Graziano and Raulin, 2009).

Björk (2007) describes a very complex model of a research process. Although Björk
does not clearly define his understanding of science, the terms he uses place the
model in behavioral science. The focus of the model is scientific communication. The
model therefore distinguishes activities that serve to acquire existing knowledge from
activities that generate new knowledge. The inputs of the process are “scientific
problems” and “existing knowledge”. By studying the existing research knowledge,
the researchers devise a conceptual framework and hypotheses for further research.
Then, data from existing repositories are collected and analyzed. The researchers then
do experiments and make observations with selected scientific methods. The data
collected as well as the new empirical data are analyzed in order to draw conclusions
and create new scientific knowledge (Björk, 2007). The research process is further
embedded in a broader process called “Do research, communicate and apply the
results”, consisting of the stages Fund R&D, Perform the research, Communicate the
results and Apply the knowledge (Björk, 2007).

Finally, Blumberg et al. (2008) describe a business research process. Their process
begins with the development and the exact definition of the research question. The
research question of the business research process has to be connected to an existing
management problem. Preceding the research design, researchers might have to
provide a written research proposal. The research proposal describes the exploration
of the management research question. The proposal can be used to obtain funding
for the research project. The next phase, the research design, describes the activities
leading to the fulfillment of the research objectives. Blumberg et al. point out the
benefits of using different methods to prevent bias. The research design begins with
the definition of an overall design strategy. Based on this, the relevant population and
sampling methods are determined.

Before embarking on the details of research methodology and techniques, it seems


appropriate to present a brief overview of the research process. The research process
consists of a series of actions or steps necessary to carry out research effectively,
together with the desired sequence of these steps. The research process consists of a
number of closely related activities, shown as steps (1) to (11) below. The following
order of steps provides a useful procedural guideline regarding the research process:

RESEARCH PROCESS IN FLOW CHART

(1) Formulating the research problem;

(2) Extensive literature survey;

(3) Developing the hypothesis;

(4) Preparing the research design;

(5) Determining sample design;

(6) Collecting the data;

(7) Execution of the project;

(8) Analysis of data;

(9) Hypothesis testing;

(10) Generalizations and interpretation, and

(11) preparation of the report or presentation of the results, i.e., formal write-up of
conclusions reached.

DETAILED RESEARCH PROCESS

1. Formulating the research problem

Working through these steps presupposes a reasonable level of knowledge in


the broad subject area within which the study is to be undertaken. Without such
knowledge it is difficult to clearly and adequately ‘dissect’ a subject area.

Step 1 Identify a broad field or subject area of interest to you.

Step 2 Dissect the broad area into sub areas.

Step 3 Select what is of most interest to you.

Step 4 Raise research questions: A research question is one that yields


hard facts to help solve a problem, produce new research, add to
theory, or improve services. A question that yields opinions can be
interesting but is not generally researchable.

Step 5 Formulate objectives.

Step 6 Assess your objectives.

Step 7 Double check.

Role of research questions

At the very outset the researcher must single out the problem he wants to study.
Initially the problem may be stated in a broad general way, and then the ambiguities,
if any, relating to the problem should be resolved. Then, the feasibility of a particular
solution has to be considered. The formulation of a general topic into a specific
research problem, thus, constitutes the first step in a scientific enquiry.

Essentially two steps are involved in formulating the research problem, viz.,
understanding the problem thoroughly, and rephrasing it in meaningful terms from
an analytical point of view. The researcher must at the same time examine all
available literature to get himself acquainted with the selected problem.

He may review two types of literature—the conceptual literature concerning the


concepts and theories, and the empirical literature consisting of studies made earlier
which are similar to the one proposed. The basic outcome of this review will be the
knowledge as to what data and other materials are available for operational
purposes which will enable the researcher to specify his own research problem in a
meaningful context. After this the researcher rephrases the problem into analytical
or operational terms i.e., to put the problem in as specific terms as possible.

This task of formulating, or defining, a research problem is a step of greatest


importance in the entire research process. The problem to be investigated must be
defined unambiguously, for that will help in discriminating relevant data from
irrelevant ones.

2. Extensive literature survey: Reviewing literature can be time-consuming,


daunting and frustrating, but is also rewarding. Its functions are:
• Bring clarity and focus to your research problem;
• Improve your methodology;
• Broaden your knowledge;
• Contextualize your findings.

Once the problem is formulated, a brief summary of it should be written down. The
researcher should undertake extensive literature survey connected with the
problem. For this purpose, the abstracting and indexing journals and published or
unpublished bibliographies are the first place to go to. Academic journals, conference
proceedings, government reports, books etc., must be tapped depending on the
nature of the problem. In this process, it should be remembered that one source will
lead to another. The earlier studies, if any, which are similar to the study in hand,
should be carefully studied. A good library will be a great help to the researcher at
this stage.

3. Development of working hypotheses: After the extensive literature survey,
researcher should state in clear terms the working hypothesis or hypotheses. The
working hypothesis is a tentative assumption made in order to draw out and test its
logical or empirical consequences. In most types of research, the development of
working hypothesis plays an important role. The hypothesis should be very specific
and limited to the piece of research in hand because it has to be tested. The role of
the hypothesis is to guide the researcher by delimiting the area of research and to
keep him on the right track. It sharpens his thinking and focuses attention on the
more important facets of the problem. It also indicates the type of data required and
the type of methods of data analysis to be used.

4. Preparing the research design: Once the research problem is formulated in
clear-cut terms, the researcher will be required to prepare a research design, i.e., he will
have to state the conceptual structure within which research would be conducted.
The preparation of such a design facilitates research to be as efficient as possible
yielding maximal information. In other words, the function of research design is to
provide for the collection of relevant evidence with minimal expenditure of effort,
time and money. But how all these can be achieved depends mainly on the research
purpose. Research purposes may be grouped into four categories, viz., (i)
Description, (ii) Exploration, (iii) Diagnosis, and (iv) Experimentation. A flexible
research design which provides opportunity for considering many different aspects
of a problem is considered appropriate if the purpose of the research study is that of
exploration. But when the purpose happens to be an accurate description of a
situation or of an association between variables, the suitable design will be one that
minimizes bias and maximizes the reliability of the data collected and analyzed.

5. Determining sample design: All the items under consideration in any field of
inquiry constitute a ‘universe’ or ‘population’. A complete enumeration of all the
items in the ‘population’ is known as a census inquiry. Census inquiry is not possible
in practice under many circumstances. For instance, blood testing is done only on a
sample basis. Hence, quite often we select only a few items from the universe for our
study purposes. The items so selected constitute what is technically called a sample.
The researcher must decide the way of selecting a sample or what is popularly known
as the sample design. In other words, a sample design is a definite plan determined
before any data are actually collected for obtaining a sample from a given population.
The sample design to be used must be decided by the researcher taking into
consideration the nature of the inquiry and other related factors.
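
As a small illustration of how a sample might actually be drawn once the sample design is
decided, the sketch below selects a simple random sample without replacement. The
population of 500 numbered records and the 10 per cent sample size are arbitrary
assumptions made for the example.

# Illustrative sketch: a simple random sample drawn without replacement (Python).
import random

population = list(range(1, 501))        # e.g. 500 numbered records in the sampling frame
random.seed(42)                         # fixed seed so the draw can be reproduced
sample = random.sample(population, 50)  # a 10% simple random sample
print(sorted(sample)[:10])              # first few selected record numbers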

6. Collecting the data: In dealing with any real life problem, it is often found that
data at hand are inadequate, and hence, it becomes necessary to collect data that
are appropriate. There are several ways of collecting the appropriate data which
differ considerably in the context of money costs, time and other resources at the
disposal of the researcher. Primary data can be collected either through experiment

or through survey. If the researcher conducts an experiment, he observes some
quantitative measurements, or the data, with the help of which he examines the
truth contained in his hypothesis. But in the case of a survey, data can be collected
by any one or more of the following ways: (i) By observation, (ii) Through personal
interview: The investigator follows a rigid procedure and seeks answers, (iii) Through
telephone interviews (iv) By mailing of questionnaires or (v) Through schedules. The
researcher should select one of these methods of collecting the data taking into
consideration the nature of investigation, objective and scope of the inquiry, financial
resources, available time and the desired degree of accuracy. Though he should pay
attention to all these factors, much depends upon the ability and experience of
the researcher.

7. Execution of the project: Execution of the project is a very important step in


the research process. If the execution of the project proceeds on correct lines, the
data to be collected would be adequate and dependable. The researcher should see
that the project is executed in a systematic manner and in time. If the survey is to be
conducted by means of structured questionnaires, data can be readily machine-
processed. In such a situation, questions as well as the possible answers may be
coded. If the data are to be collected through interviewers, arrangements should be
made for proper selection and training of the interviewers. A careful watch should
be kept for unanticipated factors in order to keep the survey as realistic as
possible. This, in other words, means that steps should be taken to ensure that the
survey is under statistical control so that the collected information is in accordance
with the pre-defined standard of accuracy.

8. Analysis of data: After the data have been collected, the researcher turns to
the task of analyzing them. The analysis of data requires a number of closely related
operations such as establishment of categories, the application of these categories
to raw data through coding, tabulation and then drawing statistical inferences. The
unwieldy data should necessarily be condensed into a few manageable groups and
tables for further analysis.

Coding operation is usually done at this stage through which the categories of data
are transformed into symbols that may be tabulated and counted.
Editing is the procedure that improves the quality of the data for coding. With coding
the stage is ready for tabulation.

The tabulation is a part of the technical procedure wherein the classified data are put
in the form of tables. The mechanical devices can be made use of at this juncture. A
great deal of data, especially in large inquiries, is tabulated by computers. Computers
not only save time but also make it possible to study a large number of variables
affecting a problem simultaneously. Analysis work after tabulation is generally based
on the computation of various percentages, coefficients, etc., by applying various
well defined statistical formulae. In the process of analysis, relationships or

differences supporting or conflicting with original or new hypotheses should be
subjected to tests of significance to determine with what validity data can be said to
indicate any conclusion(s). In brief, the researcher can analyze the collected data with
the help of various statistical measures.
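
The small sketch below illustrates the coding, tabulation and percentage computations
described above, using the pandas library. The survey variables and category labels are
invented for the example; they are not data from any study in this book.

# Hedged illustration of coding and tabulation with pandas.
import pandas as pd

responses = pd.DataFrame({
    "gender":       ["M", "F", "F", "M", "F", "M", "F", "M"],
    "satisfaction": ["high", "low", "high", "high", "low", "low", "high", "high"],
})

# Coding: transform text categories into symbols that can be counted.
responses["satisfaction_code"] = responses["satisfaction"].map({"low": 0, "high": 1})

# Tabulation: put the classified data in the form of a table.
table = pd.crosstab(responses["gender"], responses["satisfaction"])
print(table)

# Percentages within each gender group.
print(pd.crosstab(responses["gender"], responses["satisfaction"], normalize="index") * 100)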

9. Hypothesis-testing: After analyzing the data as stated above, the researcher is


in a position to test the hypotheses, if any, he had formulated earlier. Do the facts
support the hypotheses, or do they happen to be contrary? This is the usual question
which should be answered while testing hypotheses. Various tests, such as the chi-square
test, t-test and F-test, have been developed by statisticians for the purpose. The
hypotheses may be tested using one or more of such tests, depending upon the
nature and object of research inquiry. The testing of hypothesis results in either
accepting the hypothesis or in rejecting it. It is observed that if the investigator had
no hypotheses to start with, generalizations established on the basis of data may be
stated as hypotheses to be tested by subsequent research in times to come.
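
To show what such tests look like in practice, the hedged sketch below runs a chi-square
test of independence and an independent-samples t-test with scipy. The contingency table
and the two sets of scores are made-up numbers used only to illustrate the mechanics.

# Hypothetical example: hypothesis testing with standard significance tests.
from scipy import stats

# Chi-square test of independence on an observed frequency table
# (for instance, gender versus satisfaction counts from the tabulation step).
observed = [[18, 7],
            [12, 13]]
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# Independent-samples t-test comparing the mean scores of two groups.
group_a = [72, 85, 78, 90, 66, 81]
group_b = [65, 70, 68, 74, 60, 71]
t_stat, p_val = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")

If the p-value falls below the chosen significance level (commonly 0.05), the researcher
would reject the null hypothesis; otherwise the hypothesis is retained.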

10. Generalizations and interpretation: If a hypothesis is tested and upheld


several times, it may be possible for the researcher to arrive at generalization, i.e., to
build a theory. As a matter of fact, the real value of research lies in its ability to arrive
at certain generalizations. If the researcher had no hypothesis to start with, he might
seek to explain his findings on the basis of some theory. It is known as interpretation.
The process of interpretation may quite often trigger off new questions which in turn
may lead to further researches.

11. Preparation of the report or the thesis: Finally, the researcher has to prepare
the report of what has been done by him. Writing of reports must be done with great
care keeping in view the following:

The layout of the report should be as follows: (i) the preliminary pages; (ii) the main
text, and (iii) the end matter. In its preliminary pages the report should carry the title
and date, followed by acknowledgements and a foreword. Then there should be a table
of contents followed by a list of tables and list of graphs and charts, if any, given in the
report.

The main text of the report should have the following parts:

I. Introduction: It should contain a clear statement of the objective of the


research and an explanation of the methodology adopted in accomplishing
the research. The scope of the study along with various limitations should as
well be stated in this part.
II. Summary of findings: After the introduction there would appear a statement
of findings and recommendations in non-technical language. If the findings
are extensive, they should be summarized.

III. Main report: The main body of the report should be presented in logical
sequence and broken-down into readily identifiable sections.
IV. Conclusion: Towards the end of the main text, the researcher should again
put down the results of his research clearly and precisely. In fact, it is the final
summing up. At the end of the report, appendices should be listed in
respect of all technical data.
V. Bibliography: This consists of all the references, i.e., the list of books, journals,
reports, etc. consulted, and should be given at the end. An index should
also be given, especially in a published research report.

The report should be written in a concise and objective style in simple language
avoiding vague expressions such as ‘it seems’, ‘there may be’, and the like. Charts
and illustrations in the main report should be used only if they present the information
more clearly and forcibly. Calculated ‘confidence limits’ must be mentioned and the
various constraints experienced in conducting research operations may as well be
stated.

CONCLUSION

Research is a systematic intervention into an issue, problem, phenomenon, or


incident, where a researcher needs to make use of scientific methods and tools to
explore the scenario in order to arrive at appropriate findings. This attempt includes
defining and redefining problems, formulating hypothesis or suggested solutions;
collecting, organizing and evaluating data; making deductions and reaching
conclusions. Careful planning and selection of research process make the research
smooth enough to control the phenomena under study and arrive at an appropriate
conclusion. This section of the book provides detailed information about the research
process that is to be followed by the researcher.

DISCUSSION QUESTIONS

1. Explain the research process.

2. Explain the importance of the research process in qualitative and quantitative
research.
3. How do you formulate a research problem?
4. How is a literature survey helpful in formulating a problem?
5. Differentiate between research design and sample design.
6. How do you plan to execute a research project?
7. “Writing of the report must be done with great care.” Explain.

MODULE 6

PROBLEM FORMULATION

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand the importance of forming research questions.


2. Understand the importance of forming research problem.
3. Analyze the process involved in the research problem formulation.
4. Understand how to arrive at a sound problem statement.

INTRODUCTION

Research always originates from a need felt by individuals in a social setting. The
problem is basically a “gap” between “what is” and “what ought to be”. When
research is conducted to solve the problem, as Tejero (2004) puts it, a gap is filled in
and new knowledge evolves. However, a clear distinction between the problem and
the purpose should be made. The problem is the aspect the researcher worries about,
thinks about and wants to find a solution for. The purpose is to solve the problem, i.e.,
to find answers to the question(s). If there is no clear problem formulation, the purpose
and methods are meaningless. This particular chapter provides better insight into and
understanding of the ways in which a researcher formulates the problem from
varied sources, develops the literature matrix and finds the gap in research.

RESEARCH PROBLEM

As Northrop, (1966) writes, “Inquiry starts only when something is unsatisfactory,


when traditional beliefs are inadequate or in question, when the facts necessary to
resolve one’s uncertainties are not known, when the most likely relevant hypotheses
are not even imagined. What one has at the beginning of the inquiry is merely the
problem”.

A research problem refers to some difficulty that a researcher experiences in the


context of either a theoretical or practical situation and wants to obtain a solution
for the same.

COMPONENTS OF A RESEARCH PROBLEM:

The components of research problem include:

1. There must be an individual or a group which has some difficulty or


the problem;
2. There must be some objective(s) to be attained. If one wants
nothing, one cannot have a problem;
3. There must be alternative means (courses of action) for obtaining the
objective;
4. There must remain some doubt in the mind of a researcher with
regard to the selection of alternatives. Research must answer the
relative efficiency of the possible alternatives; and
5. There must be some environment(s) to which the efficiency pertains.

CHARACTERISTICS OF A GOOD RESEARCH PROBLEM

A problem exists when there is an absence of information resulting in a “gap” in


knowledge; when there are contradictory results; and when a fact exists and you
intend to make your study explain it (Mc Gingan, 1978 in Tejero, 2004).

A good research problem as presented by Tejero (2004) is characterized as follows:

1. A good problem should be of great interest to you.


2. A researcher needs to be highly motivated and interested because
research is a long and arduous process.
3. It should be useful for the concerned people in a particular field.
4. A researcher should select a topic within his field of endeavor so that he
can share the benefits of his research work with other people in that
particular field of interest.
5. A good problem should be novel. The research topic should be something
new so that one can be sure that it really contributes to the formation of
new knowledge and not just a mere repetition of what has been done
already.
6. It should lend itself to complex research designs.
7. It should be capable of being completed in the allotted time.
8. It should not carry ethical or moral impediments

SOURCES OF SCIENTIFIC PROBLEMS

Tejero (2004) enumerated the following as some of the many sources of a problem:

1. Experiences and observations.

2. Courses that have been taken;
3. Journals, books, magazines or abstracts;
4. Theses and dissertation (focused on recommendations);
5. Professors and colleagues; and
6. Confirm the research GAP.

What is problem formulation?

A research problem is the initial step and the most significant element in the research
method; it serves as the basis of a research study. According to Kerlinger, ‘in order
for one to solve a problem, one must know what the problem is’. A large part of
solving the problem lies in knowing what one is trying to do. A research problem, and
the way the researcher formulates it, regulates almost every step that follows in the
study. The formulation of the problem determines the quality of the output of the
study. In some cases research problems or questions are defined too vaguely and too
generally. An important point to keep in mind when defining or formulating a research
problem is that it should be specific rather than general. When a problem or question is
specific and focused, it becomes a more answerable research question than if it
remained general and unfocused (Bless et al., 2006).

A well formulated problem is already a half-solved problem. A research problem is


expressed as a general question about the relationship between two or more
variables. The formulation of a problem introduces the necessity of defining clearly all
concepts used and of determining the variables and their relationships.

IMPORTANCE OF FORMULATING RESEARCH PROBLEM:

According to Kumar (2005), research formulation is the identification of a destination
before undertaking a journey. Just as, in the absence of a destination, it is impossible to
identify the shortest (or indeed any) route, in the absence of a clear research problem
a clear and economical plan is impossible. A research problem is like the foundation of a
building. The type and design of a building depend upon the foundation of the
building.

WHAT IS A PROBLEM STATEMENT?

A problem statement is the description of an issue currently existing which needs to


be addressed. It provides the context for the research study and generates the
questions which the research aims to answer. The statement of the problem is the
focal point of any research. A good problem statement is just one sentence (with
several paragraphs of elaboration).

For example, it could be: "The frequency of job layoffs is creating fear, anxiety, and a
loss of productivity in middle management workers." While this problem statement is
just one sentence, it should be accompanied by a few paragraphs that elaborate on
the problem. The paragraphs could present persuasive arguments that make
the problem important enough to study. They could include the opinions of others
(politicians, futurists, other professionals) and explanations of how the problem relates
to business, social or political trends via presentation of data that demonstrates the
scope and depth of the problem. A well-articulated statement of the problem
establishes the foundation for everything to follow in the proposal and will render less
problematic most of the conceptual, theoretical and methodological obstacles
typically encountered during the process of proposal development. This means that,
in subsequent sections of the proposal, there should be no surprises, such as
categories, questions, variables or data sources that come out of nowhere: if it can't
be found in the problem section, at least at the implicit level, then it either does not
belong in the study or the problem statement needs to be re-written (Bwisa, 2008).

WHERE DOES A PROBLEM STATEMENT ORIGINATE FROM?

A good problem originates from a research question formulated out of observation of


the reality. A literature review and a study of previous experiments, and research, are
good sources of research questions that are converted to statements of the problem.
Many scientific researchers look at an area where a previous researcher generated
some interesting results, but never followed up. It could be an interesting area of
research, which nobody else has fully explored. The research question is formulated
and then restated in the form of a statement that notes the adverse consequences of
the problem. The type of study determines the kinds of question you should
formulate: Is there something wrong in society, theoretically unclear or in dispute, or
historically worth studying? Is there a program, drug, project, or product that needs
evaluation? What do you intend to create or produce and how will it be of value to
you and society? (Bwisa, 2008).

ROLE OF LITERATURE

No research undertaking can skip an exhaustive literature search (Aslam, 2010). For a
researcher, it is extremely frustrating to know at the end of the research exercise that
the knowledge gap that he or she is trying to address had already been answered and
published well before the start of the research. Therefore, before beginning any
research, colleagues should devote all their energies to confirm that their research
question has not been answered before…. However, there is always a time gap
ranging from one to several years before the research results published in medical
journals and conference proceedings become the part of textbooks (Scott, et.al.
2011). The importance of publications not available in these conventional databases
(usually known as gray literature) i.e. dissertation, theses, scientific reports, and non-
indexed publications cannot be subsided and every attempt should be made to make
the literature search exhaustive. It is recommended that this work should be
performed by more than one investigator while using two or more databases and
should include a manual search of the list of selected references for gray literature.
Methods of performing such a search have been discussed elsewhere and must be
considered by the colleagues according to their feasibility (Aslam 2010; Scott, et.al.
2011). It is important to keep in mind that the quality of research reports should be
carefully and critically judged using the guidelines available elsewhere, (Morris, 2008),
as unclear methods lead to uncertainty and may itself be a knowledge gap (Aslam,
2010). Nonetheless, we strongly suggest that at the end of the literature search,
colleagues should write a report with methods used for literature search, their
findings, and identified knowledge gaps, to stimulate discussions with other
colleagues to confirm and improve the research question in the light of these reports.
It is highly recommended that researchers whether new or experienced to transform
their literature review reports into peer reviewed comprehensive review articles
which can inform other researchers of the possible avenues of research and can be
used as an argument to support identified research questions via peer-review (Junaid,
Akhtar, Raza, and Ejaz, 2012).

CRITERIA FOR CHOOSING A RESEARCH TOPIC

Since development research is intended to provide information for decision making to
promote development, the selection and analysis of the research topic should involve
those who are responsible for the development of the community. This would include
managers in service delivery, workers in different development sectors and
community leaders, as well as researchers. Each topic that is proposed for research
has to be judged according to certain guidelines or criteria. There may be several ideas
to choose from. The guidelines or criteria for selecting a research topic are as follows:

• Relevance
• Avoidance of duplication
• Urgency or timeliness
• Political acceptability of study
• Feasibility of study
• Applicability of results
• Ethical acceptability

Relevance

The topic you choose should be a priority problem. Questions to be asked include: How
large or widespread is the problem? Who is affected? How severe is the problem? Try
to think of serious socioeconomic problems that affect a great number of people, or
of the most serious problems that are faced by the managers in the area of your work.

Avoidance of duplication

Before you decide to carry out a study, it is important that you find out whether the
suggested topic has been investigated before, either within the proposed study area
or in another area with similar conditions. If the topic has been researched, the results
should be reviewed to explore whether major questions that deserve further
investigation remain unanswered. If not, another topic should be chosen.

Urgency of data needed (timeliness)

How urgently are the results needed for making a decision or developing interventions
at various levels (from community to policy)? Consider which research should be done
first and which can be done later.

Political acceptability

In general it is advisable to research a topic that has the interest and support of the
political /national authorities. This will increase the chance that the results of the
study will be implemented. Under certain circumstances, however, you may feel that
a study is required to show that the government's policy needs adjustment. If so, you
should make an extra effort to involve the policy makers concerned at an early stage,
in order to limit the chances of confrontation later.

Feasibility

Look at the project you are proposing and consider the complexity of the problem and
the resources you will require to carry out your study. Thought should be given first
to the manpower, time, equipment, and money that are locally available.

In situations where the local resources necessary to carry out the project are not
sufficient, you might consider resources available at the national level; for example,
in research units, research councils or local universities. Finally, explore the possibility
of obtaining technical and financial assistance from external sources.

Applicability of possible results/recommendations

Is it likely that the recommendations from the study will be applied? This will depend
not only on the management capability within the team and the readiness of the
authorities but also on the availability of resources for implementing the
recommendations. Likewise, the opinion of the potential clients and of responsible
staff will influence the implementation of recommendations.

Ethical acceptability

We should always consider the possibility that we may inflict harm on others while
carrying out research. Therefore, review the study you are proposing and consider
important ethical issues such as:

How acceptable is the research to those who will be studied? (Cultural sensitivity must
be given careful consideration). Is the problem shared by the target group and
management staff and researchers? Can informed consent be obtained from the
research subjects? Will the results be shared with those who are being studied? Will
the results be helpful in improving the living standard of those studied?

IMPORTANCE OF RESEARCH QUESTION

The first and most important decision in preparing a systematic review is to determine its focus. This
is best done by clearly framing the questions the review seeks to answer. Well-formulated questions
will guide many aspects of the review process, including determining eligibility criteria, searching for
studies, collecting data from included studies, and presenting findings (Jackson, 1980, Cooper, 1984,
and Hedges, 1994). A research problem is often accompanied by research question(s).
A Research Question is a statement that identifies the phenomenon to be
studied. For example: What resources are helpful to new and minority
drug abuse researchers? (www.theresearchassistant.com).

CRITERIA FOR A GOOD RESEARCH QUESTION

A good research question is described by the acronym FINER (Hulley and Cummings,
1998)

• Feasible (adequate subjects, technical expertise, time and money, and scope);
• Interesting to the investigator;
• Novel (confirms or refutes previous findings, provides new findings);
• Ethical and
• Relevant (to scientific knowledge, clinical and health policy, future research
directions).

CHECKLIST

1. Is the question right for me?
• Will the question hold my interest?
• Can I manage any potential biases/subjectivities I may have?

2. Is the question right for the field?
• Will the findings be considered significant?
• Will it make a contribution?

3. Is the question well-articulated?
• Are the terms well-defined?
• Are there any unchecked assumptions?

4. Is the question doable?
• Can information be collected in an attempt to answer the question?
• Do I have the skills and expertise necessary to access this information? If not, can the skills be developed?
• Will I be able to get it all done within my time constraints?
• Are costs likely to exceed my budget?
• Are there any potential ethical problems?

5. Does the question get the tick of approval from those in the know?
• Does my supervisor think I am on the right track?
• Do ‘experts’ in the field think my question is relevant/important/doable?

TEN COMMANDMENTS FOR PICKING A RESEARCH PROJECT

1. Anticipate the results before doing the study.


a. First and foremost, think of the results you might obtain.
2. Pick an area on the basis of interest in the outcome.
a. Pick something that is not only of interest to you but to a large part of
the scientific community.
3. Look for an under occupied niche that has potential.
a. There are often important aspects in a field that are not being studied.
4. Go to talks and read papers outside your area of interest.
a. Good ideas come from listening to talks and reading papers outside
your area of interest.
5. Build on a theme.
a. Seminal or important discoveries create the need for further research.
6. Find a balance between low-risk and high-risk projects.
a. High-risk, higher-interest projects allow you to make seminal
observations.
7. Be prepared to pursue a project to any depth necessary.
a. To be recognized for important research accomplishments often
requires a researcher to acquire new skills, call in a consultant,
whatever it takes to complete the project.
8. Differentiate yourself.
a. Become recognized as an expert in some area.
9. Do not assume that outstanding, or even good, clinical research is easier than
outstanding basic research. Outstanding clinical research is not easy.
a. It is more difficult to design well-controlled and informative studies.
b. All procedures needed for an optimal design cannot always be
performed in a given population.
c. The studies usually take longer and are more complicated.
10. Focus, Focus, Focus.
a. Do not forget the need to focus.
b. Trying to make an impact in three or more different areas is extremely
difficult (Kahn, 1994).

CONCLUSION

Formulating the research problem and developing the proposition is a major step in
the research methodology. The process of formulating them in a meaningful way is
not at all an easy task. In research, the leading stage that comes into play is
formulating the research problem, and it becomes practically a necessity to have
basic knowledge and understanding of the phenomena under study to support
making the right decisions. This section of the book discusses these
aspects and provides the researcher with meaningful insight into the problem formulation
process; it also supports the researcher in fixing the unit of analysis, the time and space limits,
the features under study, and the specific conditions that must be present in order to conduct
the research process.

DISCUSSION QUESTIONS

1. How do you define a research problem? How do you formulate a research problem?
2. What are the sources of a research problem?
3. Discuss the importance of formulating a research problem in research?
4. What do you mean by problem statement? How is it different from problem
formulation?
5. Discuss the importance of research questions in arriving at problem
statement?
6. Explain best criteria for choosing a research topic?
7. Discuss the checklist of forming good research questions?
8. What are the ten commandments of picking up a research project?

MODULE 7

RESEARCH DESIGN

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand and appreciate the meaning of research design.


2. Identify various typologies of research design.
3. Analyze the importance of research design in research framework.
4. Understand the essentials of good research design.

INTRODUCTION

How is the term 'research design' to be used in this chapter? An analogy might help.
When constructing a building there is no point ordering materials or setting critical
dates for completion of project stages until we know what sort of building is being
constructed. The first decision is whether we need a high rise office building, a factory
for manufacturing machinery, a school, a residential home or an apartment block.
Until this is done, we cannot sketch a plan, obtain permits, work out a work schedule
or order materials. Similarly, social research needs a design or a structure before data
collection or analysis can commence.

A research design is not just a work plan. A work plan details what has to be done to
complete the project but the work plan will flow from the project's research design.
The function of a research design is to ensure that the evidence obtained enables us
to answer the initial question as unambiguously as possible. Obtaining relevant
evidence entails specifying the type of evidence needed to answer the research
question, to test a theory, to evaluate a program or to accurately describe some
phenomenon. In other words, when designing research we need to ask: given this
research question (or theory), what type of evidence is needed to answer the
question (or test the theory) in a convincing way?

DEFINITION

Research design essentially refers to the plan or the strategy for shaping the research
(Henn, Weinstein and Foard, 2006), which might include the entire process of research,
from conceptualizing a problem and writing research questions through to data
collection, analysis, interpretation and report writing (Creswell, 2007). It provides the
framework for the collection and analysis of data and subsequently indicates which
research methods are appropriate (Walliman, 2006). The most common and useful
purposes and main aims of research are exploration, description and rational
explanation based on data (Richardson, 2005; Babbie, 2007).

According to Hussey and Hussey (1997), research design is the overall approach to the
research process, from the theoretical underpinning to the collection and analysis of
the data.

“A logical systematic plan prepared for directing a research study. It specifies the
objectives of the study, the methodology, and techniques to be adopted for achieving
the objectives… it constitutes the blueprint for the collection, measurement and
analysis of the data.” – Philips Bernard.

Saunders, Lewis and Thornhill, (1997) state that the research design helps the
researcher to:

• make an informed decision about the research methodology (The


researcher has to decide how data are to be collected and analyzed
and needs an overall configuration of the research process to ensure
success).
• Adapt the research design to cater for limitations and constraints
(These include limited access to data or insufficient knowledge of the
subject or an inadequate understanding of the subject or time
constraints.)
• Determine which research methods would be appropriate for the
particular study (Proper research methods should help to explain the
why’s, how’s and what’s of the subject (Saunders, et. al., 1997).

The statement of the problem, research questions and research objectives will call for
a specific research design (Saunders et al., 2009). Research design addresses
important issues relating to a research project such as purpose of study, location of
study, type of investigation, extent of researcher interference, time horizon and the
unit of analysis (Sekaran and Bougie, 2010). Research designs, however, will vary from
simple to complex depending on the nature of the study and the specific hypotheses
formulated for testing. Certain designs will ask for primary data and others for
secondary data. Some research designs may require the researcher to collect primary
as well as secondary data. Moreover, the data collection modes will be different for
different studies. Some designs will require observation; others may rely on
surveys or secondary data (Zikmund, 2000). Some designs will call for
experiments where, based on the results of the experiment, the theory on which the
hypotheses and predictions were based will be accepted or rejected (Goodwin, 2005).

Research design sets the scope of the study specifying whether it needs to be
descriptive, explanatory (or causal) or predictive. It is important for a researcher to be
familiar with the differences between the major research designs like experimental,
cross-sectional, longitudinal, case study and comparative (Bryman and Bell, 2003).
Hence a researcher must be very careful while selecting a research design, as it will
dictate the overall plan and methodology of the research project.

Research design is a master plan that specifies the methods and procedures for
collecting and analyzing needed information. 'Research design deals with a logical
problem and not a logistical problem' (Yin, 1989). Before a builder or architect can
develop a work plan or order materials, they must first establish the type of building
required, its uses and the needs of the occupants. The work plan flows from this.
Similarly, in social research the issues of sampling, method of data collection (e.g.
questionnaire, observation, and document analysis) and design of questions are all
subsidiary to the matter of 'What evidence do I need to collect?' Without attending
to these research design matters at the beginning, the conclusions drawn will
normally be weak and unconvincing and fail to answer the research question.

ESSENTIALS OF A GOOD RESEARCH DESIGN

• a plan that specifies the objectives of the study


• a plan that specifies the hypothesis to be tested
• a blue print specifying the methods to be adopted for data collection
• a scheme which tests generalizability.

Systematic research encompasses specific methods to collect data, deliberation on


the significance of the results obtained, and an explanation of any limitations
experienced. The primary focus of research should be to increase knowledge of a
particular topic in order to help solve relevant problems (Saunders et al., 1997).

RESEARCH DESIGN: TYPOLOGY


EXPLORATORY RESEARCH DESIGN

Exploratory research is a type of research conducted because a problem has not been
clearly defined. Exploratory research helps determine the best research design, data
collection method and selection of subjects. Given its fundamental nature,
exploratory research often concludes that a perceived problem does not actually
exist. According to Sekaran (2000), an exploratory study is performed
when a researcher has little knowledge about the situation or has no information on
how similar problems or research issues have been solved in the past. It embarks on
investigating and finding the real nature of the problem. In addition, solutions and
new ideas could surface from this type of research (Richardson, 2005).

Exploratory research aims at investigating the full nature of a phenomenon, the
manner of its existence, other related factors and the characteristics of the subjects
thereof, in order to gain additional information on the situation or practice.
Exploratory research is done to increase the researchers’ knowledge on the field of
study and provides valuable baseline information for further investigation. The
method uses interviews and observational methods to collect data (Drummond, 1998;
Polit and Beck, 2006).

Exploratory research often relies on secondary research such as reviewing available


literature and/or data, or qualitative approaches such as informal discussions with
consumers, employees, management or competitors, and more formal approaches
through in-depth interviews, focus groups, projective methods, case studies or pilot
studies. The outcomes of exploratory research are not usually useful for decision-making
by themselves, but they can provide significant insight into a given situation. Even though
the results of qualitative research can give some indication as to the "why", "how" and
"when" something occurs, it cannot tell us "how often" or "how many." Exploratory
research is not typically generalizable to the population at large.

The central purpose is to formulate hypotheses regarding potential problems and


opportunities present in the decision situation. The hypotheses can be tested at a later
phase with a conclusive research design (Leinhardt and Leinhardt, 1980). Exploratory
research design applies when the research objectives include the following:

1. identifying problems (threats or opportunities);


2. developing a more precise formulation of a vaguely identified problem
(threat or opportunity);
3. gaining perspective regarding the breadth of variables operating in a
situation;
4. establishing priorities regarding the potential significance of various
problems (threats or opportunities);
5. gaining management and researcher perspective concerning the
character of the problem situation;
6. identifying and formulating alternative courses of action; and gathering
information on the problems associated with doing conclusive research;
7. identification of problems (threats or opportunities) can be assisted
through the following:
• Searching secondary sources;
• Interviewing knowledgeable persons;
• Compiling case histories.

The main objective of exploratory research is to narrow the broad problem into a
specific problem statement and generate possible hypotheses.

The exploratory studies are mainly used for:

1. Providing information to enable a more precise problem definition;


2. Establishing research priorities;
3. Making the researcher aware of the problem situation; and
4. Collecting information on data collection tools, methods and techniques.

The generally used methods in exploratory research are:

a) Survey of existing Research literature

A survey of the existing research literature provides an overview and a critical evaluation
of a body of literature relating to a research topic or a research problem. Such a survey
analyzes the literature in order to classify it by themes or categories, rather than simply
discussing individual works one after another. The literature search presents the research
and ideas of the field rather than each individual work or author by itself. The researcher explores the published and
unpublished books, journals, magazines and other literature available on the problem
definition. This usually does not provide a solution to the problem, but it can certainly
provide direction to the research process.

b) Survey of experienced individuals

The professionals, academicians, experts, managers etc. are the individuals who can
provide substantial data on the research problem.

A survey of experienced individuals will help the researcher to identify the scope and
key issues. A well-organized survey of experienced individuals will help the researcher
to:

• Recognize which authors are interested in your specialism and those


who take a generalist’s view;
• Probe writers who are prominent in your subject and who can help
the researcher justify the importance of your research idea; and
• Include the authors who would or could contradict your ideas.

c) Analysis of selected case situations

Sometimes well-known cases are examined to develop an understanding of the
given research problem. Case study research excels at bringing us to an understanding
of a complex issue or object and can extend experience or add strength to what is
already known through previous research. Case studies highlight comprehensive
contextual analysis of a limited number of occasions or conditions and their
relationships. Researchers have used the case study research method for many years
across a variety of disciplines. Social scientists, in particular, have made wide use of
this qualitative research method to examine contemporary real-life situations and
provide the basis for the application of ideas and extension of methods. The case
study research method is an empirical inquiry that investigates a contemporary
phenomenon within its real-life context, when the boundaries between phenomenon
and context are not clearly evident, and in which multiple sources of evidence are
used (Yin, 1984).

d) Case study

Case studies are multifaceted since they generally comprise multiple sources of data,
may include manifold cases within a study, and produce large amounts of data for
analysis. Investigators from many areas use the case study technique to build upon
theory, to produce novel theory, to dispute or test theory, to clarify a condition, to
provide a basis to apply solutions to situations, to explore, or to describe an object or
phenomenon.

e) Secondary data analysis

Secondary data analysis is another form of exploratory research. Secondary data


analysis can be literally defined as “second-hand” analysis. It is the analysis of data or
information that was either gathered by someone else (e.g., researchers, institutions,
other NGOs, etc.) or for some other purpose than the one currently being considered,
or often a combination of the two (Cnossen, 1997). Secondary data are also helpful in
designing subsequent primary research and, as well, can provide a baseline with which
to compare your primary data collection results. Therefore, it is always wise to begin
any research activity with a review of the secondary data (Novak, 1996). It is observed
that many document archives in which survey information files are gathered and
disseminated are readily available, making studies based on secondary analysis easily
accessible.

Advantages

The advantages of exploratory research include;

• It narrows the scope of the research topic;


• It transforms ambiguous problems into well-defined ones;
• Increase a researcher's understanding of a subject;
• By giving flexibility to the investigator, data can be collected from many
sources;
• Useful in determining the best approach to achieve a researcher's
objectives; and
• In some cases can save a great deal of time and money.

Disadvantages

The disadvantages of exploratory research include;

• the data may not fit the problem perfectly;


• exploratory studies are not conclusive studies;
• the design of the study is highly flexible and informal;
• rarely ever does formal design exist in case of exploratory studies; and
• structured and/or standardized questionnaires are replaced by
judgment and intuitive inference drawing on the basis of collected
data; and
• Convenience sampling rather than probability sampling characterizes
exploratory designs having less reliability.

DESCRIPTIVE RESEARCH DESIGN

Descriptive research is research that describes a phenomenon (Salkind, 2000), documenting
and describing the phenomenon of interest (Marshall and Rossman, 2006) and
providing a clear answer to the who, what, when, where, why and how (the six Ws) of the
research problem; data are typically collected through a questionnaire survey,
interviews or observation(s) (Gay and Diehl, 1992).

Descriptive research, also known as statistical research, describes data and


characteristics about the population or phenomenon being studied. Descriptive
research answers the questions who, what, where, when and how. Although the data
description is factual, accurate and systematic, the research cannot describe what
caused a situation. Thus, descriptive research cannot be used to establish a causal
relationship, where one variable affects another. In other words, descriptive research
can be said to have a low requirement for internal validity.

Descriptive research can be either quantitative or qualitative. It can involve collections


of quantitative information that can be tabulated along a continuum in numerical
form, such as scores on a test or the number of times a person chooses to use a certain
feature of a multimedia program, or it can describe categories of information such as
gender or patterns of interaction when using technology in a group situation.
Descriptive research involves gathering data that describe events and then organizes,
tabulates, depicts, and describes the data collection (Glass & Hopkins, 1984). It often
uses visual aids such as graphs and charts to aid the reader in understanding the data
distribution. Because the human mind cannot extract the full import of a large mass
of raw data, descriptive statistics are very important in reducing the data to
manageable form. When in-depth, narrative descriptions of small numbers of cases
are involved, the research uses description as a tool to organize data into patterns
that emerge during analysis. Those patterns aid the mind in comprehending a
qualitative study and its implications.
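
As a simple illustration of how descriptive statistics reduce a mass of raw data to manageable form, the following Python sketch summarizes a small set of hypothetical test scores and prints a rough frequency distribution; the scores and variable names are invented purely for demonstration.

import statistics
from collections import Counter

# Hypothetical raw data: test scores of 12 students (illustrative only)
scores = [56, 72, 68, 90, 45, 72, 81, 68, 77, 60, 68, 85]

# Reduce the raw data to a few descriptive measures
print("N      :", len(scores))
print("Mean   :", round(statistics.mean(scores), 2))
print("Median :", statistics.median(scores))
print("St.dev :", round(statistics.stdev(scores), 2))
print("Range  :", min(scores), "-", max(scores))

# A simple frequency distribution, the kind often displayed as a chart
bands = Counter((score // 10) * 10 for score in scores)
for band in sorted(bands):
    print(f"{band}-{band + 9}: {'*' * bands[band]}  ({bands[band]})")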

In the majority of cases, quantitative research falls into two areas: studies that describe
events and studies aimed at discovering inferences or causal relationships. Descriptive
studies are aimed at finding out "what is," so observational and survey methods are
frequently used to collect descriptive data (Borg and Gall, 1989). Studies of this type
might describe the current state of multimedia usage in schools or patterns of activity
resulting from group work at the computer. An example of this is Cochenour, Hakes,
and Neal's (1994) study of trends in compressed video applications with education
and the private sector.

The description is used for frequencies, averages and other statistical calculations.
Often the best approach, prior to writing descriptive research, is to conduct a survey
investigation. Qualitative research often has the aim of description, and researchers
may follow up with examinations of why the observations exist and what the
implications of the findings are. In short, descriptive research deals with everything
that can be counted and studied. But there are always restrictions to that: your
research must have an impact on the lives of the people around you. For example,
a study may identify the most frequent disease that affects the children of a town; the
reader of the research will then know what to do to prevent that disease, and thus more
people will live a healthy life.

Descriptive research can be both quantitative and qualitative. It describes the given
problem/phenomenon to establish the relationship between the factors.

Characteristics of descriptive research

Major characteristic include:

• Can involve collecting quantitative information;


• Can describe categories of qualitative information such as
patterns of interaction when using technology in the classroom;
• Does not fit neatly into either category;
• Descriptive research is more rigid than exploratory research;
• In descriptive research the problem is well understood;
• Descriptive research tests specific hypotheses;
• Descriptive research is formal and structured;
• Large representative samples are observed in descriptive
research;
• Uses description as a tool to organize data into patterns that
emerge during analysis; and
• Often uses visual aids such as graphs and charts to aid the
reader.

Purpose of descriptive study design

The descriptive research is used for the following purposes:

• To describe the characteristics of certain groups of interest to the marketer, like users of the product;
• To estimate the target customer in the given population;
• To make predictions for the specific future periods;
• Focus directly on theoretical points;
• Verifying focal concepts;
• Contribute to the development of young science;
• Can highlight important methodological consideration;
• Useful for prediction; and
• Planning business action programs.

Example:

A Descriptive Study of Human Resource Operations in Higher Education. Little


research exists that examines whether human resource operations in higher
education are adopting strategic, value-added approaches to service delivery. The
purpose of this descriptive survey study was to examine the perceptions of 1,422
college and university presidents about whether human resource operations in their
institutions were adopting value-added service delivery approaches. Specifically, an
assessment tool from the HR Value Proposition Model (Ulrich and Brockbank, 2005)
was adapted to the higher education environment and then administered using e-mail
and web technologies. Results indicated that most higher education CEOs believed
their human resource operations are transitioning towards adoption of this paradigm
of service delivery.

Types of descriptive research

As many researchers have noted, descriptive research designs are for the most part
quantitative in nature (Burns and Bush, 2002; Churchill and Iacobucci, 2004; Hair et al.
2003; Parasuraman, 1991). There are two basic techniques of descriptive research:
cross-sectional and longitudinal. Cross-sectional studies collect information from a
given sample of the population at only one point in time, while the latter deals with
the same sample units of population over a period of time (Burns and Bush 2002;
Malhotra 1999).

Correlational Study

The correlational study is one of the subcategories of descriptive research. Best (1977)
defined descriptive research as research that “describes and interprets what is concerned with
the conditions or relationships that exist; practices that prevail; beliefs, points of view,
or attitudes that are held; a process that is going on; effects that are being felt; or
a trend that is developing”. “Correlational studies are concerned with determining the
extent of relationship existing between variables. They enable one to measure the
extent to which variation in one variable is associated with another. The magnitude of
the relationship is determined through the use of the coefficient of correlation…. The
idea of such studies is exploration rather than theory testing.”

Characteristics

• Two variables are measured and recorded for each individual.


• The measurements are then reviewed to identify any patterns of
relationship that exist between the two variables and to measure the
strength of the relationship.
• Can be used for making predictions.

Steps in Correlational Research

1. Select the problem;


2. Define the population, select the sample;
3. Select the instruments;
4. Collect the data;
5. Analyze and interpret data; and
6. Report results and conclusions.

Example:

A study attempting to find out the relationship between depression and academic
achievement among arts college students.
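
By way of illustration, the following Python sketch computes Pearson's coefficient of correlation for a small set of hypothetical paired scores of the kind such a study might collect; the figures are invented and serve only to show how the magnitude and direction of the relationship are obtained.

import math

# Hypothetical paired measurements for ten students (illustrative only)
depression  = [12, 18, 9, 22, 15, 30, 7, 25, 11, 20]   # depression scale score
achievement = [78, 65, 82, 58, 70, 50, 88, 55, 80, 62]  # grade average

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

r = pearson_r(depression, achievement)
print(f"Pearson r = {r:.2f}")  # a negative value indicates an inverse relationship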

Cross-sectional study

The cross-sectional study is also referred to as a sample survey; that is, selected


individuals are asked to respond to a set of standardized and structured questions
about what they think, feel and do (Hair, et al. 2003). Cross-sectional research is a
research method often used in psychology, sociology and management, but also
utilized in many other areas including social science and education. This kind of study
uses different sets of individuals who differ in the variable of interest, but share other
features such as socioeconomic status, educational background, and ethnicity.
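
As a minimal sketch of how cross-sectional data collected at a single point in time might be summarized across groups that differ in the variable of interest, the following Python example compares hypothetical weekly study hours across age groups; the data are invented for illustration only.

from statistics import mean

# Hypothetical one-shot survey: each record is (age_group, weekly_study_hours)
responses = [
    ("18-24", 14), ("18-24", 10), ("18-24", 12),
    ("25-34", 8),  ("25-34", 9),  ("25-34", 11),
    ("35-44", 6),  ("35-44", 7),  ("35-44", 5),
]

# Group the single-point-in-time data by age group and compare averages
groups = {}
for age_group, hours in responses:
    groups.setdefault(age_group, []).append(hours)

for age_group in sorted(groups):
    values = groups[age_group]
    print(f"{age_group}: n={len(values)}, mean study hours={mean(values):.1f}")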

Characteristics

1. Cross-sectional study is taking place at a single point in time.


2. Cross-sectional study does not involve manipulating variables.
3. Cross-sectional study permits investigators to look at many
things at once (age, income, gender).
4. Cross-sectional study used to look at the occurrence of
something in a given population.
5. Cross-sectional studies are observational in nature.
6. These studies are not causal or relational.
7. Researchers record the information but do not manipulate
variables.
8. Cross-sectional studies are often used to make inferences
about possible relationships or to gather preliminary data to
support further research and experimentation.

Example:

The examples of cross-sectional design include:

1. Christian couples are twice as likely to divorce as Jewish couples;


2. High-Risk Drinking habit of youth in colleges; and
3. The international epidemic of multiple births, prematurity and low birth
weight caused by assisted reproductive technologies.

Advantages:

The advantages of cross sectional design include:

• Data on many variables;


• Data from a large number of subjects;
• Data from dispersed subjects;
• Data on attitudes and behaviors;
• Answers questions who, what, where, when;
• Good for exploratory research;
• Generates hypotheses for future research; and
• Data useful to many different researchers.

Limitations:

The limitations of cross sectional study include:

• It is not possible to say whether the exposure or the disease/outcome is the cause and which is the effect;
• Confounding factors may not be equally distributed between the
groups being compared and this unequal distribution may lead to bias
and subsequent misinterpretation;
• Cross-sectional studies, for example within a dietary survey, may measure current
diet in a group of people with a disease; current diet may be altered by
the presence of the disease; and
• A further limitation of cross-sectional studies may be due to errors in
recall of the exposure and possibly outcome.

Longitudinal Study

‘Longitudinal’ is a very broad term. Basically, it can be defined as research in which:


(a) data are collected for each item or variable for two or more distinct periods; (b)
the subjects or cases analyzed are the same, or at least comparable, from one period
to the next; (c) the analysis involves some comparison of data between or among
periods (Menard, 1991).

Longitudinal research is a kind of study method used to determine interactions


between variables that are not connected to various background variables. This
observational research method includes reviewing the same group of individuals over
a prolonged period of time. Data are first collected at the beginning of the study, and
may then be gathered repeatedly throughout the span of the study. The longitudinal
data’s heuristic potential is indeed immense. Such data not only permit analysis of
duration, but also facilitate the measurement of differences, or changes, in a variable
from one period to another and the testing of the direction (positive or negative and
from Y to X or from X to Y) and magnitude of causal relationships (Menard, 1991)

Characteristics:

Ruspini, (1999) defines the common characteristics as being where:

• data are collected for each item or variable for two or more
distinct periods;
• the subjects or cases analyzed are the same or broadly
comparable; and
• the analysis involves some comparison of data between or
among periods.

5 Objectives of Longitudinal Research

Five objectives of longitudinal research include;

1. Direct identification of intra-individual change (change from


one period to another);
2. Direct identification of intra-individual similarities, or
differences in intra-individual change (whether individuals
change in the same or different ways);
3. Analysis of interrelationships in behavior change (whether
certain changes are correlated with each other);
4. Analysis of causes or determinants of intra-individual change
(why individuals change from one period to another);
5. Analysis of causes or determinants of inter-individual
similarities or differences in intra-individual change (why
different individuals change in different ways from one period
to another).

Example:

Longitudinal approaches to social research share the common characteristic of


involving the collection of data over a ‘long’ period of time, although as Hakim notes:
“… what is considered a long period of observation depends upon the subject matter
and context, and the issues addressed” (Hakim, 1987). So, for example, a study such
as Thirty Families (Ritchie, 1985) which set out to explore the effects of
unemployment on living standards in thirty families had a duration of five years, with
families being visited by researchers on two occasions. In contrast, studies exploring
the nature of family formation across generations may have longer time spans across
twenty or thirty years. A classic example of a long term longitudinal study is the social
history documentary 7 Up conducted by Michael Apted, which followed the life stories
of fourteen children who were interviewed at ages 7, 14, 21, 28, 35 and 42 (Apted,
1999)

Types of Longitudinal Study:

a. Trend study

A trend study is a type of longitudinal research design that looks into the dynamics of
a particular characteristic of the population over time. For example, a researcher
might want to study the people’s preference for projects, whether government or
non-government, in their community. Respondents of the study vary across study
periods.

b. Cohort study

A cohort study is a type of longitudinal research design where a cohort is tracked over
extended periods of time. A cohort is a group of individuals who have shared a
particular time together during a particular time span, for example, a group of
indigenous peoples living in the forest for decades.

c. Panel study

A panel study is a type of longitudinal research design that involves collection of data
from a panel, or the same set of people, over several points in time by measuring a
specific dependent variable identified by the researcher to achieve a study objective.
From the data gathered, it is possible to predict a cause-effect relationship after a given
time. A panel study is usually done when it is difficult to analyze a case study, which is
only a one-shot deal. People’s shifting attitudes and behavior can be detected. For
example, the cause - effect relationship may be investigated between the number of
faculty research outputs and the amount of time given for research as work load over
three years.
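
A minimal sketch of the panel logic described above is given below, assuming two hypothetical measurement waves for the same set of respondents; because the cases are identical across waves, individual-level change can be computed directly. All identifiers and figures are invented.

# Hypothetical two-wave panel: the same respondents measured in 2020 and 2023
# (respondent id -> number of research outputs in that year; figures invented)
wave_2020 = {"R01": 1, "R02": 0, "R03": 3, "R04": 2, "R05": 1}
wave_2023 = {"R01": 2, "R02": 2, "R03": 3, "R04": 1, "R05": 4}

# Because the same cases appear in both waves, intra-individual change
# can be measured directly - something a one-shot study cannot do.
changes = {rid: wave_2023[rid] - wave_2020[rid] for rid in wave_2020}

for rid, delta in changes.items():
    direction = "increase" if delta > 0 else "decrease" if delta < 0 else "no change"
    print(f"{rid}: {delta:+d} ({direction})")

print("Average change across the panel:",
      sum(changes.values()) / len(changes))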

Advantages:

1. Reveals individual level changes.


2. Establishes time order of variables.
3. Can show how relationships emerge

Disadvantages:

1. Difficult to obtain initial sample of subjects.


2. Difficult to keep the same subjects over time.
3. Repeated measures may influence subjects’ behavior.

Prospective panel design: data may be collected at two or more distinct periods, for
those distinct periods, on the same set of cases and variables in each period.

Retrospective panel research: data may be collected in a single period for several
periods, usually including the period that ends with the time at which the data are
collected.

Cross-sectional Research Design

A cross-sectional research design is a common research design used by social


scientists. It gathers data from a cross-section of a population. For example, a
contingent valuation study asks a sample of a population regarding their willingness-
to-pay to preserve a given forest ecosystem accessible to them.

Advantages of descriptive study design

Descriptive research is the type of research that explores and describes the data or
characteristics needed for the research. It has several advantages. Some of them are
as given below:

Advantages:

• The people being studied are unaware, so they act naturally or as they normally do in everyday situations;
• It is less expensive and time consuming than quantitative experiments;
• It collects a large amount of data for detailed study;
• As it is used to describe and not to draw conclusions, it is easier to start the research with it; and
• It can identify further areas of study.

The selection of which research approach is appropriate in a given study should be


based upon the problem of interest, resources available, the skills and training of the
researcher, and the audience for the research. Choosing the correct research design
will enable the researcher to gain a better understanding of social phenomena. Thus,
familiarity with these different research designs is a requisite for a well-guided
research study.

DIAGNOSTIC RESEARCH DESIGN

Meaning

This is similar to the descriptive study design, but with a different focus. Diagnostic
research studies, on the other hand, determine the frequency with which something
occurs. A diagnostic design is concerned with the case as well as the treatment. It is
directed towards discovering what is happening, why is it happening and what can be
done about it. It aims at identifying the causes of the problem and possible solutions
for it.

Purpose

A diagnostic study may also be concerned with discovering and testing whether certain
variables are associated; e.g., are persons hailing from rural areas more suitable for
manning the rural branches of banks? Do village voters, more than city voters, vote for a
particular party?

Requirements

Both descriptive and diagnostic study designs share common requirements. The main
objective of a descriptive design is to acquire knowledge. The design in such studies
needs to be rigid, focusing on the following:

• formulation of the objective;


• designing the methods of data collection;
• selecting the sample;
• collecting the data;
• processing and analyzing the data;
• test of significance;
• reporting the findings; and
• providing solutions.

A diagnostic study is not possible in areas where knowledge is not advanced enough
to make an adequate diagnosis possible. In such cases the social scientist limits his or her
effort to descriptive studies.

EXPERIMENTAL RESEARCH DESIGN

Experimental design seeks to find out the cause and effect relationship of the
phenomenon under study. Under this design, two similar groups, one called
'experimental group' and the other 'control group' are chosen. The experimental
group is exposed to predesigned procedures while the control group is kept constant.
At the end of the experiment, the two groups are compared to find out the resultant
effect of the experiment. The difference between the two groups is considered to
have been produced by the causative factors.

Before starting the experimentation process, it needs to be ensured that the two
groups are similar in almost every respect. The main techniques for making the two
groups similar are: (i) randomization and matching; and (ii) frequency distribution
control.

Definition:

Experimental study design is a study design used to test cause-and-effect relationships


between variables. The classic experimental design specifies an experimental group
and a control group. The independent variable is administered to the experimental
group and not to the control group, and both groups are measured on the same
dependent variable. Subsequent experimental designs have used more groups and
more measurements over longer periods. True experiments must have control,
randomization, and manipulation (Mosby's Medical Dictionary, 8th edition, 2009).

An experimental design is a plan for assigning experimental units to treatment levels


and the statistical analysis associated with the plan (Kirk, 1995: 1). The design of an
experiment involves a number of inter-related activities.

1. Formulation of statistical hypotheses that are germane to the


scientific hypothesis. A statistical hypothesis is a statement about:
(a) one or more parameters of a population or (b) the functional
form of a population. Statistical hypotheses are rarely identical to
scientific hypotheses—they are testable formulations of scientific
hypotheses.
2. Determination of the treatment levels (independent variable) to
be manipulated, the measurement to be recorded (dependent
variable), and the extraneous conditions (nuisance variables) that
must be controlled.
3. Specification of the number of experimental units required and
the population from which they will be sampled.
4. Specification of the randomization procedure for assigning the
experimental units to the treatment levels.
5. Determination of the statistical analysis that will be performed
(Kirk, 1995).

Defining Characteristics:

The characteristic features of experimental design include;

• Treatments (Independent Variables): The research looks at the


impact of one or more specific identifiable variables on the studied
phenomenon. These variables are always manipulated or
controlled.
• Outcome Variables: Apart from the input variables, there are also
certain outcome measures/ variables.
• Unit of Assignment: The experiment applies to a specific ‘unit of
assignment’ (the model in which the conditions are tested).
• Comparison/ Control Group: Most experimental studies measure
the impact of treatments against a comparison or control group.
Sometimes the control condition is defined as one to which the
treatment is NOT applied. Sometimes different treatments are
compared against each other.
• Causality: The combined purpose of all previous features is to
credibly establish a cause-effect relationship. However, ‘causality’
is not always established at a uniform level. Experimental research
in environmental technology (such as a metal roof, for example) is
more likely to take causality for granted than research in socio-
cultural aspects. Causality, we begin to see, is more achievable
where:
- laboratory settings control relevant variables;
- variables are inert (not likely to change, except as a
consequence of the treatment);
- explicit theories specify expected effects; and
- instruments are calibrated to measure expected effects.

With social behaviors, researchers are more explicit about how they have met basic
requirements (more, richer, deeper explanation). They also emphasize conditions and
limitations of any causal interpretation (Campbell, 1966).

Why Experimental Design?

Experimental designs are used in research to:

• Assess the effects of particular variables on a phenomenon by keeping the other variables constant or controlled;
• Observe a phenomenon under controlled conditions and
• Analyze the relationship between dependent and independent
variables.

Conditions in experimental design:

To meet the criteria of experimental design, the research should ensure:

• the selection of exactly identical groups;
• a target group amenable to experimentation;
• that it is possible to identify all the independent variables; and
• that non-experimental variables are kept constant.

Basic principles of experimental designs

The basic principles of experimental designs are randomization, replication and local
control. These principles make a valid test of significance possible. Each of them is
described briefly in the following subsection (Fisher, 1960).

Randomization

The first principle of an experimental design is randomization, which is a random


process of assigning treatments to the experimental units. The random process
implies that every possible allotment of treatments has the same probability. An
experimental unit is the smallest division of the experimental material and a
treatment means an experimental condition whose effect is to be measured and
compared. The purpose of randomization is to remove bias and other sources of
extraneous variation, which are not controllable. Another advantage of
randomization (accompanied by replication) is that it forms the basis of any valid
statistical test. Hence the treatments must be assigned at random to the experimental
units. Randomization is usually done by drawing numbered cards from a well-shuffled
pack of cards, or by drawing numbered balls from a well-shaken container or by using
tables of random numbers (Fisher, 1960).
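
The same random assignment can, of course, be carried out with a pseudo-random number generator instead of cards or random-number tables. The following Python sketch randomly allocates twelve hypothetical experimental units to three treatment levels; the unit and treatment names are invented for illustration.

import random

# Twelve hypothetical experimental units (e.g. plots or subjects)
units = [f"unit_{i:02d}" for i in range(1, 13)]
treatments = ["control", "treatment_A", "treatment_B"]

random.seed(42)          # fixed seed only so the example is reproducible
random.shuffle(units)    # random ordering removes systematic bias in assignment

# Allocate an equal number of units to each treatment level
allocation = {t: units[i::len(treatments)] for i, t in enumerate(treatments)}

for treatment, assigned in allocation.items():
    print(treatment, "->", sorted(assigned))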

Replication

The second principle of an experimental design is replication, which is a repetition of


the basic experiment. In other words, it is a complete run for all the treatments to be
tested in the experiment. In all experiments, some variation is introduced because the
experimental units such as individuals or plots of land in agricultural experiments
cannot be physically identical. This type of variation can be removed by using several
experimental units. We therefore perform the experiment more than once, i.e., we
repeat the basic experiment. An individual repetition is called a replicate. The number,
the shape and the size of replicates depend upon the nature of the experimental
material. A replication is used
(i) to secure more accurate estimate of the experimental error, a term which
represents the differences that would be observed if the same treatments were
applied several times to the same experimental units;

(ii) to decrease the experimental error and thereby to increase precision, which is
a measure of the variability of the experimental error; and

(iii) to obtain a more precise estimate of the mean effect of a treatment,
since the variance of a treatment mean is σ²/r (equivalently, its standard error
is σ/√r), where r denotes the number of replications (Fisher, 1960).

Local Control

It has been observed that all extraneous sources of variation are not removed by
randomization and replication. This necessitates a refinement in the experimental
technique. In other words, we need to choose a design in such a manner that all
extraneous sources of variation are brought under control. For this purpose, we make
use of local control, a term referring to the amount of balancing, blocking and
grouping of the experimental units. Balancing means that the treatments should be
assigned to the experimental units in such a way that the result is a
balanced arrangement of the treatments. Blocking means that like experimental units
should be collected to form a relatively homogeneous group. A block is also a
replicate. The main purpose of the principle of local control is to increase the efficiency
of an experimental design by decreasing the experimental error. The point to
remember here is that the term local control should not be confused with the
word control. The word control in experimental design is used for a treatment which
does not receive any treatment, but which is needed in order to find out the
effectiveness of the other treatments through comparison (Fisher, 1960).

Examples:

The following are some examples to illustrate the experimental research (Isaac &
Michael, 1977):

1. To investigate the effects of two methods of teaching a twelfth-grade


history program as a function of class size (large and small) and levels
of student intelligence (high, average, low), using random assignment
of teachers and students-by-intelligence-level to method and class size.
2. To investigate the effects of a new drug abuse prevention program on
the attitudes of junior high school students using experimental and
control groups who are either exposed or not exposed to the program
respectively, and using a pretest-posttest design in which only half of
the students randomly receive the pretest to determine how much of
an attitude change can be attributed to pretesting or the educational
program.

Advantages and disadvantages

Advantages | Disadvantages
Gain insight into methods of instruction | Subject to human error
Intuitive practice shaped by research | Personal bias of the researcher may intrude
Teachers have bias but can be reflective | The sample may not be representative
The researcher can have control over variables | Can produce artificial results
Humans perform experiments anyway | The results may only apply to one situation and may be difficult to replicate
Can be combined with other research methods for rigor | Groups may not be comparable
Use to determine what is best for the population | Human response can be difficult to measure
Provides for greater transferability than anecdotal research | Political pressure may skew results

(Source: http://writing.colostate.edu/guides/research/experiment/pop5d.cfm)

There are four types of experimental studies and accordingly four types of research design
dealing with each type of experimental study. The characteristics of these four
experimental designs have been summed up and presented in the following Table.

TYPES OF EXPERIMENTAL DESIGNS AND THEIR CHARACTERISTICS

MAJOR TYPOLOGY OF EXPERIMENTAL STUDY DESIGN INCLUDES:

After Only Experimental Design: The Experimental Group (EG) and the Control Group (CG)
are similar. The EG is exposed to the causal variable (X) but the CG is not. After
experimentation, both groups are compared and some effect (Y) is, say, produced in the
EG but not in the CG. X is then regarded as the cause and Y is regarded as the effect.

Before – After Experimental Design: The effect is measured both before and after the
exposure of the group(s) to the experiment. The difference that is produced after the
experiment is said to be the effect (Y) of the experimental variable (X). This design
provides evidence of concomitant variation between X and Y by making a comparison of Y
in the group exposed to X with Y in the group not exposed to X. The EG and CG are
measured at the beginning and end of the experimental period, and the difference
between the two groups is regarded as the effect of the experimental variable alone.

Ex Post Facto Experimental Design: When it is difficult to divide the population of a
country into two clear and similar groups, then through a comparative study of the
conditions of two countries the researcher may be able to find out the cause of a
particular event. This is known as an ex post facto study. In such situations, the past
is studied through the present. It is difficult to find two similar groups or countries
which are comparable.

Panel Study Experimental Design: A particular subject is studied by using different
kinds of data over time. The researcher obtains direct evidence of the time relationship
among variables. It involves repeated observations of the same subjects at different
periods of time. It is a type of time-series study. The common subjects observed and
studied again and again constitute a panel.

Example: After only Experimental design

Research Topic: Implementation of New Teaching Method

For example, schoolchildren are randomly assigned to two groups of 25 each. The
experimental (treatment) group receives an innovative teaching method during a special class
session. The second group (the control) receives an old-style teaching method during a special
class session. No pretest is used for each group. Issues such as existing grades, SAT scores,
and other factors are examined as covariates. The key difference in the posttest-only design
is that neither group is pretested, and only at the end of the study are both groups measured
on the dependent variable.

AFTER ONLY EXPERIMENTATION

E (exposed to X)      Ye
C (not exposed)       Yc

Difference = Ye - Yc

Y - DEPENDENT VARIABLE
X - INDEPENDENT (TREATMENT) VARIABLE
E - EXPERIMENTAL GROUP
C - CONTROL GROUP

Example: Before After Experimental design

Research Topic: Effectiveness of instructional strategy on High School Students


Performance

For example: a classroom teacher gives her students a pretest then implements an
instructional strategy followed by a posttest. The teacher would be able to assess the
effectiveness of instructional strategy through this experimental design. The teacher
could evaluate the actual condition of the students before the instructional strategy was
given and, after its implementation, what changes occurred in the students' performance
in academic accomplishment. Comparative analyses of the students' performance
level can be made possible with this approach. The effect of the treatment would be
equal to the level of the phenomenon after the treatment minus the level of the
phenomenon before the treatment.

ONE GROUP BEFORE AFTER EXPERIMENT

BEFORE After

TARGET GROUP Y1 Y2

DIFFERENCE = Y2-Y1

BEFORE AFTER DESIGN WITH CONTROL GROUP

          Before   After
E         Ye1      Ye2
C         Yc1      Yc2

DIFFERENCE IN E = Ye2 - Ye1 = De
DIFFERENCE IN C = Yc2 - Yc1 = Dc

NEW DIFFERENCE = De - Dc
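A small numerical sketch of the computation above, with made-up before and after scores for the experimental and control groups:

# Hypothetical pre- and post-treatment means for the experimental (E) and control (C) groups.
Ye1, Ye2 = 52.0, 68.0   # experimental group: before, after
Yc1, Yc2 = 50.0, 55.0   # control group: before, after

De = Ye2 - Ye1          # change in the experimental group
Dc = Yc2 - Yc1          # change in the control group (captures extraneous influences)
effect = De - Dc        # the "new difference" attributed to the experimental variable

print(f"De = {De}, Dc = {Dc}, treatment effect = {effect}")

Subtracting the control group's change removes the part of the before-after difference that would have occurred even without the treatment.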

Basic Experimental Designs

Eleven commonly used experimental designs will be described. They include:

1. One-Shot;
2. One-Group, Pre-Post;
3. Static Group;
4. Solomon Four-Group;
5. Factorial;
6. Ex Post Facto;
7. Quasi-Experimental;
8. Completely Randomized;
9. Randomized Block;
10. Latin Square; and
11. Historical.

(The abbreviations: GP- group, T – Treatment, O - Observation)

One-shot

The One-Shot is a design in which a group of subjects is administered a treatment and


then measured (or observed). In experimental research, an experimental treatment
should be given to the subjects, and then the measurement or observation made.
Usually, with this design, an intact group of subjects is given the treatment and then
measured or observed. No attempt is made to randomly assign subjects to the groups,
nor does the design provide for any additional groups for comparison. Thus, one group
will be given one treatment and one "observation." This design is diagrammed as
follows:

GP --T--O

General evaluation:

• A single measure is recorded after the treatment is administered;


• The study lacks any comparison or control of extraneous influences;
• No measure of test units not exposed to the experimental treatment;
and
• May be the only viable choice in taste tests.

Diagrammed as:

X O1

The One-Shot Design is highly useful as an inexpensive measure of a new treatment


of the group in question. If there is some question as to whether any expected effects
will result from the treatment, then a one-shot may be an economical route. In cases
where other studies, or the cumulative knowledge in the field provide information
about either pre-treatment baseline measurements or behavior, the effects of other

kinds of treatments, etc., the experimenter might sensibly decide that it is not
necessary to undertake a more extensive design. Simplicity, ease, and low cost
represent strong potential advantages in the oft-despised one-shot.

One-Group, Pre-Post

In this design, one group is given a pre-treatment measurement or observation, the


experimental treatment, and a post-treatment measurement or observation. Here in
this form, the post-treatment measures are compared with their pre-treatment
measures. This design is diagrammed as follows:

GP--O--T—O

General evaluation:

• Subjects in the experimental group are measured before and after the
treatment is administered.
• No control group
• Offers comparison of the same individuals before and after the
treatment (e.g., training)
• If the time between the 1st and 2nd measurements is extended, the design may
suffer from maturation; it can also suffer from history, mortality, and testing
effects.
• Diagrammed as:
O1 X O2

The usefulness of this design is similar to that of the one-shot, except that an
additional class of information is provided, i.e., pre-treatment condition or behavior.
This design is frequently used in clinical and educational research to determine if
changes occurred. It is typically analyzed with a matched pairs t-test.
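As an illustration of that matched pairs t-test, a minimal Python sketch using SciPy's ttest_rel; the pre- and post-treatment scores are invented for demonstration:

from scipy import stats

# Hypothetical pre- and post-treatment scores for the same ten subjects.
pre  = [54, 61, 48, 59, 66, 50, 57, 63, 45, 52]
post = [60, 65, 50, 64, 70, 55, 59, 68, 47, 58]

t_stat, p_value = stats.ttest_rel(post, pre)   # matched pairs (dependent samples) t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Because each subject serves as his or her own comparison, the test is run on the paired differences rather than on two independent groups.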

Static Group

In this design, two intact groups are used, but only one of them is given the
experimental treatment. At the end of the treatment, both groups are observed or
measured to see if there is a difference between them as a result of the treatment.
The design is diagrammed as follows:

GP--T--O

GP------O

General Evaluation:

• Experimental group is measured after being exposed to the


experimental treatment;
• A control group is measured without having been exposed to the
experimental treatment;
• No pre-measure is taken; and
• Major weakness is a lack of assurance that the groups were equal on
variables of interest prior to the treatment.

Diagrammed as:

Experimental Group X O1

Control Group O2

This design may provide information on some rival hypotheses. Whether it does or
not depends on the initial comparability of the two groups and whether their
experience during the experiment differs in relevant ways only by the treatment itself.

SOLOMON FOUR GROUP DESIGN

Almost 40 years ago, Solomon introduced a new form of experimental design typically
referred to today as the Solomon four-group design (Solomon, 1949). Campbell and
Stanley, (1963) discussed this design as one of three one-treatment condition
experimental designs, the other two being the pre- and posttest control group design
and the posttest-only control group design. Each of these designs is adequate to
assess the effect of the treatment and is immune from most threats to internal
validity. The Solomon four-group design, however, adds the advantage of being the
only one of the three able to assess the presence of pretest sensitization. Pretest
sensitization means that "exposure to the pretest increases . . . the sensitivity to the
experimental treatment, thus preventing generalization of results from the pretested
sample to an un-pretested population" (Huck and Sandler, 1973). Thus, the Solomon
four-group design adds a higher degree of external validity in addition to its internal
validity, and hence, according to Helmstadter (1970), is "the most desirable
of all the . . . basic experimental designs".

The Solomon Four Group design attempts to control for the possible "sensitizing"
effects of the pre-test or measurement by adding two groups who have not been a
part of the pre-test or pre-measurement process. It can be detailed as follows:

• Two experimental groups and two control groups are used for the
experiment;
• One experimental group and one control group can be given a
pre-test and post-test;
• The other two groups will be given a post-test alone;
• Analysis of different possibilities;
• True experimental design;
• Combines pretest-posttest with control group design and the
posttest-only with control group design;
• Provides means for controlling the interactive testing effect and
other sources of extraneous variation; and
• Does include random assignment.

GROUP              PRETEST   TREATMENT   POSTTEST

EXPERIMENTAL 1     O1        X           O2
CONTROL 1          O3                    O4
EXPERIMENTAL 2               X           O5
CONTROL 2                                O6

Although this design is not frequently used in clinical studies, it is frequently used in
both behavior and educational research and in medical studies involving the physical
activities of patients (physical therapy, for example where the pre-measurement
involves some sort of physical activity or testing). The additional cost of this design
must be justified by the need for information regarding the possible effects of the pre-
treatment measurement.

Example:

A program to assist nurses exposed to vicarious traumatization:

This program was intended to assist nurses in understanding vicarious traumatization
(VT), to aid in transforming and addressing VT, to identify signs, symptoms and
contributing factors of VT, and to develop a personal resource list in order to ensure
the development of a healthy self. The aim was to ensure that pretest sensitization
would not influence the effectiveness of the intervention program. Solomon’s four-
group design was implemented in this study (Braver and Braver, 1988). Sixty nurses
who work at the Free State Psychiatric Complex participated in the study. The four
groups consisted of two experimental and two control groups. The first experimental
group completed a pretest, participated in the intervention program and completed
a post-test. The second experimental group participated in the intervention program
and only completed a post-test. The first and second control groups did not participate
in the intervention program. The first control group completed the pre- and post-test
questionnaires, whereas the second control group completed only the post-test. In
general, based on the responses of those who participated in the program, it seems
that most nurses benefited from the intervention, in particular from the exercises that
addressed VT and the integration of emotional experiences.

FACTORIAL DESIGN

The field of Design of Experiments (DoE) deals with methods for efficient
experimentation, i.e. deriving required information about, e.g. a process, at the least
expenditure of resources (Barker, 1994). Factorial designs are important tools in DoE
and are exhaustively treated in the literature (Box, et al. 2005; Montgomery, 2005).

Factorial design is a particular way of combining factors to create study conditions. In


factorial design, every level of each factor is combined with every level of every other
factor. A factorial design is a design in which two or more independent variables are
tested in such a way that every level of one variable occurs with every level of the
others, each different combination of levels defining a condition. Conventional
experimental designs allow the manipulation of one variable at a time, holding all other
conditions constant. Factorial designs allow the manipulation of more than one
independent variable at a time. Furthermore, interaction effects between variables are
revealed and the simultaneous testing of multiple hypotheses is thus allowed. The
investigator can study the effects of each of the variables and the interactions between
them, or their joint effects, on the dependent variable.

WHAT IS A FACTOR?

A factor is a variable that is controlled and varied during the course of an experiment.
The factors in a Factorial Experiment potentially interact among themselves to
influence the resulting value of the dependent variable. The factorial is used when we
desire information concerning the effects of different kinds or intensities of

treatments. The factorial provides relatively economical information not only about
the effects of each treatment, level or kind, but also about interaction effects of the
treatment.

Meaning of Factorial Design:

Factorial design involves following aspects:

• The study should choose 2 or more factors to test;


• The study should choose 2 or more levels for each factor;
• The study should measure the response using various combinations of
factors and levels;
• The study should not vary one factor at a time while keeping the others
constant; and
• One determines which factors have the largest effects on the
response, and whether there are interactions between factors.

What are the uses of Factorial Design?

Factorial designs are used in the following cases:

• Recognize factors with significant effects on the response;
• Identify interactions among factors;
• Detect which factors have the most important effects on the response;
• Decide whether further investigation of a factor effect is justified; and
• Examine the functional dependence of a response on multiple factors
simultaneously (if many levels of each factor are tested).

2x2 factorial designs

Treatment in factor analysis:

The most basic factorial design consists of two factors (independent variables), each
of which has 2 levels. This is how we describe such a factorial design in a research paper:
a 2 x 2 between-subjects factorial design. Each number represents a factor, and the value
of the number indicates how many levels of that factor there are. In this design, there
are two factors, each having two levels.

ILLUSTRATION: 2 x 2 FACTORIAL DESIGN

Factors: anxiety (high = A, low = B) and task complexity (simple = 1, complex = 2);
dependent variable: task performance. Anxiety is manipulated by giving negative
feedback to Group A (high anxiety) and no such feedback to Group B (low anxiety);
each anxiety condition is crossed with the simple and the complex task, and a
performance score is recorded in each of the four cells.

                    INDUCED (HIGH) ANXIETY (X1)   LOW ANXIETY (X2)
SIMPLE TASK (1)               score                     score
COMPLEX TASK (2)              score                     score

To find out:

• the main effect of anxiety;
• the main effect of task complexity; and
• the interaction between anxiety and task complexity.
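If several subjects' scores were collected in each of the four cells above, the two main effects and the interaction could be tested with a two-way analysis of variance. A minimal, hypothetical sketch using Python and statsmodels (the data values and column names are invented for illustration):

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical performance scores for a 2 x 2 (anxiety x task complexity) design.
data = pd.DataFrame({
    "anxiety": ["high"] * 6 + ["low"] * 6,
    "task":    ["simple", "simple", "simple", "complex", "complex", "complex"] * 2,
    "performance": [78, 74, 80, 55, 52, 58, 70, 68, 72, 66, 63, 69],
})

# 'C(anxiety) * C(task)' expands to both main effects plus the interaction term.
model = ols("performance ~ C(anxiety) * C(task)", data=data).fit()
print(anova_lm(model, typ=2))

The interaction row of the ANOVA table indicates whether the effect of anxiety depends on task complexity, which is the distinctive question a factorial design can answer.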

A factorial design is the most common way to study the effect of two or more
independent variables, although we will focus on designs that have only two
independent variables for simplicity. In a factorial design, all levels of each
independent variable are combined with all levels of the other independent variables
to produce all possible conditions.

Example 1:

A researcher might be interested in the effect of whether or not a stimulus person


(shown in a photograph) is smiling or not on ratings of the friendliness of that
person. The researcher might also be interested in whether or not the stimulus
person is looking directly at the camera makes a difference. In a factorial design, the
two levels of the first independent variable (smiling and not smiling) would be
combined with the two levels of the second (looking directly or not) to produce four
distinct conditions: smiling and looking at the camera, smiling and not looking at the
camera, not smiling and looking at the camera, and not smiling and not looking at the
camera (Dr. Price & Dr. Oswald, 2006).

3x2 factorial designs

Suppose we have more than one independent variable that we think is important. Can
we manipulate two (or more) things at once? Take the research example above
(Example 1), the effect of whether or not a stimulus person (shown in a photograph) is
smiling on ratings of the friendliness of that person. If the first
independent variable had three levels (not smiling, closed-mouth smile, open-mouth
smile), then it would be a 3x2 factorial design. Note that the number of distinct
conditions formed by combining the levels of the independent variables is always just
the product of the numbers of levels. In a 2x2 design, there are four distinct
conditions. In a 3x2 design, there are 6.

Example 2:

A grocery store chain wants to use 12 of its stores to examine whether sales would
change under 3 different hours of operation and 2 different types of sales promotion.

• The dependent variable is change in sales


• Independent variables

➢ Store open 6 am to 6 pm.


➢ Store open 6 am to midnight.
➢ Store open 24 hours/day.
➢ Sales promotion: samples for a free gift.

➢ Sales promotion: food samples

This type of research design is called a 3 x 2 factorial design. Here, to study the factors
and their interaction, 6 experimental groups are needed (3 x 2 = 6). A small sketch of how
these conditions are formed is shown below.
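The following illustrative sketch shows how the six conditions of this 3 x 2 design are formed as the product of the factor levels (the labels are shorthand for the store hours and promotions described above):

from itertools import product

hours      = ["6 am-6 pm", "6 am-midnight", "24 hours"]   # factor 1: 3 levels
promotions = ["free-gift samples", "food samples"]         # factor 2: 2 levels

conditions = list(product(hours, promotions))               # 3 x 2 = 6 experimental groups
for i, (h, p) in enumerate(conditions, start=1):
    print(f"Group {i}: hours = {h}, promotion = {p}")
print("Number of conditions:", len(conditions))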

Factorial Design Advantages

The Advantages of factorial designs include;

1. Simplifies the process and allows many levels of analysis;
2. More efficient than one-factor-at-a-time experiments;
3. Factorial design is essential when interactions may be present to avoid
misleading inferences;
4. Factorial designs allow the effects of a factor to be estimated at several
levels of the other factors, yielding inferences that are valid over a
range of experimental conditions;
5. One can examine main effects and interaction effects;
6. Uses fewer subjects than separate experiments;
7. Reducing the possibility of experimental error and confounding
variables;
8. Avoids between-experiment comparisons;
9. Helps guard against selection confounds; and
10. Helps guard against cohort effects.

Factorial Design disadvantages

The disadvantages of factorial designs include;

1. Large designs require a large number of participants;


2. Between subjects design lack statistical power;
3. Researchers must address selection issue (e.g., random assignment to
treatments);
4. The difficulty of experimenting with more than two factors, or many
levels; and
5. An error in one of the levels, or in the general operationalization, will
jeopardize a great amount of work.

EX POST FACTO DESIGN

Ex post facto designs (the term ex post facto literally means "after the fact") provide
an alternative means by which a researcher can investigate the extent to which
specific independent variables (a virus, a modified curriculum, a history of family
violence, or a personality trait) may possibly affect the dependent variable(s) of

interest. Although experimentation is not feasible, the researcher identifies events
that have already occurred or conditions that were already present and then collects
data to investigate a possible relationship between these factors and subsequent
characteristics or behaviors.

In many situations, it is unethical or impossible to manipulate certain variables in


order to investigate their potential influence on other variables. For example, one
cannot introduce a new virus, withhold instruction, ask parents to abuse their
children, or modify a person's personality to compare the effects of these factors on
the dependent variables in one's research problem. In an ex post facto design there is no
manipulation of the independent variable in the lab or field setting. Here, subjects
already exposed to a stimulus and those not so exposed are studied.

Characteristic features:

The ex post facto research:

• Ex post facto research is sometimes called causal comparative


research;
• Explores possible causes and effects;
• Ex post facto research is research that takes place after the fact;
• In which the independent variable is not manipulated, it has already
been applied;
• It focuses first on the effect, then attempts to determine what
caused the observed effect;
• Often ex post facto research is used to explain something in the present
from data collected sometime in the past; and
• In ex post facto research the researcher takes the effect (or dependent
variable) and examines the data retrospectively to establish the causes,
relationships or associations, and their meanings.

When is this study design useful?

The ex post facto research is useful when:

• It is impossible, impractical, costly or unethical to conduct an


experiment;
• It is not possible to select, control and manipulate the factors
necessary to study cause-and-effect relationships directly;
• When the control of all variables except a single independent variable
may be unrealistic and artificial; and

• Where the independent variable lies outside the researcher’s control.

Design and procedures in an ex post facto investigation

1. Identify the problem area to be investigated;


2. Formulate a hypothesis to be tested or questions to be answered;
3. Make explicit the assumptions on which the hypothesis and
subsequent procedures will be based; and
4. Review of the research literature will follow to ascertain the kinds of
issues, problems, obstacles and findings disclosed by previous studies
in the area.

5. Plan the actual investigation:

• Identify the population and samples;


• Select and construct techniques for collecting data;
• Establish categories for classifying the data.

6. Describe, analyze and interpret the findings.

Stages of an ex post facto investigation

Stage One: Define the problem and survey the literature.

Stage Two: State the hypotheses and the assumptions or premises on which the
hypotheses and research procedures are based.

Stage Three: Select the subjects (sampling) and identify the methods for collecting the
data.

Stage Four: Identify the criteria and categories for classifying the data to fit the
purposes of the study.

Stage Five: Gather data on those factors which are always present in which the given
outcome occurs, and discard the data in which those factors are not always present.

Stage Six: Gather data on those factors which are always present in which the given
outcome does not occur.

Stage Seven: Compare the two sets of data (i.e. subtract the former, Stage Five, from
the latter, Stage Six) in order to infer the causes that are responsible for the
occurrence or non-occurrence of the outcome (Louis Cohen, Lawrence Manion &
Keith Morrison, 2012).

Stage Eight: Analyze, interpret and report findings.

How to get control over the research design?

The researcher can establish control over the design and study by:

• matching the subjects in the experimental and control groups
where the design is causal-comparative;
• building the extraneous independent variables into the design and
then using an analysis of variance technique;
• selecting samples that are as homogeneous as possible on a given
variable; and
• stating and testing alternative hypotheses that might be plausible
explanations for the empirical outcomes of the study (Louis
Cohen, Lawrence Manion & Keith Morrison, 2012).

Advantages of ex post facto study design:

The advantages of ex post facto research include:

• Useful where the more rigorous experimental approach is not


possible;
• Useful to study what goes with what and under what
conditions;
• Useful where the setting up of the latter would introduce a note
of artificiality into research proceedings;
• Useful where simple cause-and-effect relationships are being
explored; and
• It can give a sense of direction and provide a source of
hypotheses that can subsequently be tested by the most
rigorous experimental method. (Louis Cohen, Lawrence Manion
and Keith Morrison, 2012).

Disadvantages of ex post facto design:

The disadvantages of ex post facto research include:

• The direction of causality difficult to establish: what caused what.


• Lack of control of the independent variable or variables.
• Impossible to isolate and control every possible variable, or to
know with absolute certainty which are the most crucial variables.
• Randomization impossible.

• Can provide support for any number of different, even
contradictory, hypotheses.
• Correlation does not equal cause.
• Lack of control: the researcher cannot manipulate the independent
variable or randomize her subjects
• One cannot know for certain whether the causative factor has been
included or even identified.
• It may be that no single factor is the cause.
• A particular outcome may result from different causes on different
occasions.
• It is not possible to disconfirm a hypothesis.
• Classifying into dichotomous groups can be problematic.
• As the researcher attempts to match groups of key variables, this
leads to shrinkage of the sample.
• Conclusions may be based on too limited a sample or number of
occurrences.
• It may fail to single out the really significant factor(s) (Louis Cohen,
Lawrence Manion and Keith Morrison, 2012).

Further Issues related to this design

Certain other issues related to ex post facto design include;

• Is the cause that you hypothesize correct?


• Many causes may be interrelated or the result of more than one
variable interacting.
• Extraneous variables may not be accounted for.
• Participants are self-selected. What puts them into these
categories?

Example of ex post facto study:

A simple ex post facto design can be depicted as follows:

• a training program was introduced two years back;
• some people have completed it and some might not have;
• performance data will be collected from both groups;
• the study does not immediately follow the training program but
takes place much later; and
• it is therefore an ex post facto study.

QUASI EXPERIMENTAL STUDIES

Definition

“A study involving an intervention in which subjects are not randomly assigned to


treatment conditions, but the researcher exercises certain controls to enhance the
study’s internal validity ” (Polit and Beck, 2006)

The quasi-experiment, also known as ‘field-experiment’ or ‘in-situ experiment’, is a


type of experimental design in which the researcher has limited leverage and control
over the selection of study participants. Specifically, in quasi-experiments, the
researcher does not have the ability to randomly assign the participants and/or ensure
that the sample selected is as homogeneous as desirable. Additionally, in numerous
investigations, including those conducted in information systems research,
randomization may not be feasible, leaving the researcher with pre-assigned group
assignments. Accordingly, the ability to fully control all the study variables and the
application of the treatment to the study group(s) may be limited. Nevertheless,
quasi-experiments still provide fruitful information for the advancement of research
(Leedy and Ormrod, 2010).

Quasi-experimental studies also examine outcomes; however, they do not involve


randomly assigning participants to treatment and control groups. A quasi-
experimental study might compare outcomes for individuals receiving program
activities with outcomes for a similar group of individuals not receiving program
activities. This type of study also might compare outcomes for one group of individuals
before and after the group’s involvement in a program (known as “pre-test/post-test
design”). Quasi experimental studies can inform discussions of cause and effect, but,
unlike true experiments, they cannot definitively establish this link (Anderson and
Moore, 2008).

Example:

An example of a quasi-experiment is found in the work of Panko (2007) that included


measuring the confidence of students in reducing spreadsheet errors and their actual
number of spreadsheet errors. His study entailed two quasi-experiments. In one, he
compared different types of spreadsheet development (individual development vs.
triad development), while in the second, he looked at two groups of students,
treatment and control, and measured the confidence level and performance (i.e.
reduction in spreadsheet errors) in spreadsheet development before, during, and
after spreadsheet training as part of a Management Information Systems (MIS)
course. The measures were done at the start of the course, right before the treatment
(training/spread sheet development), after the treatment, and at the end of the
course.

Treatment

Example: Attitude of employees towards a management decision to change working
conditions.

• There is resistance from the employees;
• The researcher may persuade the management to involve a few
representatives of the employees in decision making;
• The employee attitude is tested again; and
• A change in the employees' behavior is noted.

But the result may not convince the management, because of which:

• The researcher keeps on testing employee attitude through continuous research.

Final conclusion

• A change in attitude towards the favorable condition is noted only when the
workers' participation was ensured.

Advantages

1. Provide designs for studying those situations in which random assignment is


not possible while only minimally compromising the internal validity of the
study
2. More practical: Ease of implementation
3. More feasible: Resources, subjects, time, setting
4. More generalizable: Comparable to practice

Disadvantages

1. Increased potential for bias associated with sampling


2. Threats to internal validity

COMPLETELY RANDOMIZED DESIGN

The most basic type of statistical design for making inferences about treatment means
is the completely randomized design (CRD), where all treatments under investigation
are randomly allocated to the experimental units. The CRD is appropriate for testing
the equality of treatment effects when the experimental units are relatively
homogeneous with respect to the response variable. The completely randomized
design is a design in which treatments are randomly assigned to the experimental
units or in which independent random samples of experimental units are selected for
each treatment (Gwowen Shieh and Show-Li Jan, 2004).

Features

• Simplest design to use.


• Design can be used when experimental units are essentially
homogeneous.
• Because of the homogeneity requirement, it may be difficult to use
this design for field experiments.
• The CRD is best suited for experiments with a small number of
treatments.
• Subjects are randomly assigned to experimental treatments.
• Gives equal opportunity for the subjects to be selected. If 10 subjects,
5 employees under treatment ‘a’ and 5 under treatment ‘b’.

Treatment

Employees      Performance before   Treatment (Incentive)   Performance after
Group one      O1                   X1                      O2
Group two      O3                   X1                      O4
Group three    O5                   X1                      O6

Advantages of a completely randomized design

The advantages of completely randomized design include:

1. Very flexible design (i.e. number of treatments and replicates is only
limited by the available number of experimental units);
2. Statistical analysis is simple compared to other designs; and
3. Loss of information due to missing data is small compared to other
designs due to the large number of degrees of freedom for the error
source of variation.

Disadvantages of a completely randomized design

The disadvantages of completely randomized design include:

1. If experimental units are not homogeneous and you fail to minimize
this variation using blocking, there may be a loss of precision;
2. Usually the least efficient design unless experimental units are
homogeneous; and
3. Not suited for a large number of treatments.

RANDOMIZED BLOCK DESIGN

When the experimental units are heterogeneous, the notion of blocking is used to
control the extraneous sources of variability. The major criteria of blocking are
characteristics associated with the experimental material and the experimental
setting. The purpose of blocking is to sort experimental units into blocks, so that the
variation within a block is minimized while the variation among blocks is maximized.
An effective blocking not only yields more precise results than an experimental design
of comparable size without blocking, but also increases the range of validity of the
experimental results. One can use a randomized complete block design (RCBD) to
compare treatment means when there is an extraneous source of variability. In such
cases, treatments are randomly assigned to experimental units within a block, with
each treatment appearing exactly once in every block (Gwowen Shieh and Show-Li
Jan, 2004).

Example: Incentives and level of performance

Blocking factor: employee level; Treatment: incentive; Response: performance (3 x 3)

Treatment (Incentive)   Lower-level workers'   Plant-level workers'   Supervisors'
                        performance            performance            performance
Incentive 100 Rs        X1                     X1                     X1
Incentive 200 Rs        X2                     X2                     X2
Incentive 300 Rs        X3                     X3                     X3

In a randomized block design the subjects are first divided into groups known as blocks.
Care should be taken that the subjects selected for the study are homogeneous in respect
of some selected variables. There should be the same number of items in each block, the
subjects within a block should be assigned randomly to the treatments, and each
treatment should appear the same number of times in each block.
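A brief, hypothetical Python sketch of randomizing treatments within blocks, following the incentive example above (the block and treatment labels are illustrative):

import random

blocks = ["lower-level workers", "plant-level workers", "supervisors"]   # blocking factor
treatments = ["incentive 100 Rs", "incentive 200 Rs", "incentive 300 Rs"]

random.seed(1)
design = {}
for block in blocks:
    order = treatments[:]            # every treatment appears exactly once per block
    random.shuffle(order)            # but in a fresh random order within each block
    design[block] = order

for block, order in design.items():
    print(block, "->", order)

Randomizing separately within each block keeps the comparison of treatments free of the block-to-block differences that blocking is meant to remove.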

Advantages of randomized complete block designs

The advantages of the randomized complete block design include:

1. Complete flexibility.
2. Can have any number of treatments and blocks.
3. Provides more accurate results than the completely
randomized design due to grouping.
4. Relatively easy statistical analysis even with missing data.
5. Allows calculation of the unbiased error for specific treatments.
6. Generally more precise than the completely randomized design
(CRD).
7. No restriction on the number of treatments or replicates.
8. Some treatments may be replicated more times than others.
9. Missing plots are easily estimated.

Disadvantages of randomized complete block designs

The disadvantages of the randomized complete block design include:

1. Not suitable for large numbers of treatments because blocks become


too large;
2. Not suitable when complete block contains considerable variability;
3. Interactions between block and treatment effects increase error;
4. Error degrees of freedom are smaller than that for the CRD (problem
with a small number of treatments);
5. Large variation between experimental units within a block may result
in a large error term; and
6. If there are missing data, a RCBD experiment may be less efficient than
a CRD.

LATIN SQUARE DESIGN

A Latin square is a table filled with n x n different symbols in such a way that each
symbol occurs exactly once in each row and exactly once in each column. A Latin
Square design is an example of an incomplete block design where there is a single
treatment and two blocking variables, each with the same number of levels. The Latin
square design is used to analyze the influence of multiple independent variables on the
dependent variable. Here there is assumed to be no interaction between treatments and
blocking factors. When the experimental material is divided into rows and columns and the
treatments are allocated such that each treatment occurs only once in a row and once
in a column, the design is known as a Latin square design (LSD). In agricultural
experiments, for example, this design eliminates fertility variations through an
experimental layout which controls variation in two perpendicular directions. [Latin
square designs are normally used in experiments where it is required to remove the
heterogeneity of experimental material in two directions. This design requires that the
number of replications (rows) equal the number of treatments.] In an LSD the number of
rows and the number of columns are equal. Hence the arrangement will form a square.

Why researchers use Latin Square Design

Latin square designs are reasonable choices when it is impossible to use each
treatment level for the same combination of blocking levels. For example, consider an
experiment with four diets, each to be given to four cows in succession. If each cow
was given the diets in the same order, the treatment effect would be confounded with
the effect due to the order in which the diets were given. Each cow can only be given
a single diet during a single time period. It is common to use multiple Latin squares in
a single experiment (for example, if you had 8, 12 or 16 cows to allocate)
(www.stat.wisc.edu).
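A minimal sketch (an illustration added here, not taken from the text) of constructing an n x n Latin square by cyclically rotating the treatment order, so that each treatment occurs exactly once in every row and every column:

def latin_square(treatments):
    """Build a cyclic Latin square: row i is the treatment list rotated by i positions."""
    n = len(treatments)
    return [[treatments[(i + j) % n] for j in range(n)] for i in range(n)]

# e.g. four diets given to four cows over four time periods
square = latin_square(["A", "B", "C", "D"])
for row in square:
    print(" ".join(row))

In practice the rows, columns and treatment labels of such a square would themselves be randomized before use, but the cyclic construction shows the defining row-and-column property.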

Example: Level of employees and performance

Treatment

3- Months Performance (3x3)


Employees June July August
LOWER LEVEL WORKERS X1 X2 X3
PLANT LEVEL WORKERS X2 X3 X1

SUPERVISORS X3 X1 X2

Advantages

The advantages of Latin square design include:

1. They handle the case when we have several nuisance factors and we
either cannot combine them into a single factor or we wish to keep
them separate;
2. They allow experiments with a relatively small number of runs;
3. With two way grouping or stratification LSD controls more of the
variation than C.R.D. or R.B.D.;
4. The statistical analysis is simple, though slightly more complicated than for
R.B.D. Even with missing data the analysis remains relatively simple;
5. More than one factor can be investigated simultaneously; and
6. The missing observations can be analyzed by using missing plot
technique.

Disadvantages

The disadvantages of Latin square design include:

1. The number of levels of each blocking variable must equal the number
of levels of the treatment factor; and
2. The Latin square model assumes that there are no interactions
between the blocking variables or between the treatment variable and
the blocking variable.

In double-blind studies using a Latin square design, both the experimenter and the
subject are blinded. Here both the experimenter and the subject are unaware of which is
the 'true' versus the 'placebo' treatment, for example when testing the efficacy of newly
developed strategies, drugs, etc.

HISTORICAL RESEARCH

Historical research is the systematic collection and objective evaluation of data related
to past occurrences in order to test hypotheses concerning the causes, effects, or
trends of these events that may help to explain present events and anticipate future
events. (Gay, 1996).

Historical research is the:

• study of past records;


• origin and development of an organization, movement or system;
• descriptive;
• based on secondary rather than direct observation; and
• drawing from the past to understand the present and anticipate the
future.

Historical research has been defined as the systematic and objective location,
evaluation and synthesis of evidence in order to establish facts and draw conclusions
about past events (Borg, 1963). It involves exploring the meaning and relationship of
events, and as its resource it uses primary historical data in the form of historic
artefacts, records and writings. It attempts to find out what happened in the past and
to reveal reasons for why and how things happened.

Why is historical research conducted?

Historical research is conducted to:

• Uncover the unknown;


• Answer questions;
• Identify the relationship that the past has to the present;
• Record and evaluate accomplishments of individuals, agencies, or
institutions; and
• Aid in understanding the culture in which we live.

Values of Historical Research

Hill and Kerber (1967) indicate the values of historical research by listing the
relationships the past can have with the present and even the future.

• It enables solutions to contemporary problems to be sought in the past;


• It throws light on present and future trends;
• It stresses the relative importance and the effects of the interactions
that are found within all cultures; and
• It allows for the revaluation of data supporting selected hypotheses,
theories and generalizations that are presently held about the past.

Libraries and historical archives (HAs) are regarded as the main repositories for
preserving and maintaining historical documents. Their documents may constitute
either primary or secondary sources, and be maintained in the form of books (pages
bound together), manuscripts, single pages, photos, paintings, video etc. A source is
characterized as primary if it has been created during the period of interest, whereas
secondary sources are those created later on and are based on the analysis of primary
sources.

Historians conducting research systematically examine past events to give an account;
historical research may involve interpretation to recapture the nuances, personalities,
and ideas that influenced these events, and the expected research outcome is to
communicate an understanding of past events. Their main objective is to recreate the
past, through existing records and their interconnections. In this process, historians
employ their scientific knowledge, experience and intuition to decide which
information they will need to find and study during each next step, and subsequently
attempt to locate sources that contain this information.

The Process for Conducting Historical Research

The process for conducting historical research is the same as for other research.

1. The recognition of a historical problem or the identification of a need


for certain historical knowledge;
2. The gathering of as much relevant information about the problem or
topic as possible;
3. If suitable, the development of the hypothesis that tentatively clarify
relationships between historical factors;
4. The rigorous collection and organization of evidence, and the
verification of the authenticity and veracity of information and its
sources; and
5. The selection, organization, and analysis of the most pertinent
collected evidence, and the drawing of conclusions; and the recording
of conclusions in a meaningful narrative. (Busha and Harter, 1980).

E.g. In the field of library and information science, there is a vast array of topics that
may be considered for conducting historical research. For example, a researcher may
choose to answer questions about the development of school, academic or public
libraries, the rise of technology and the benefits/ problems it brings, the development
of preservation methods, famous personalities in the field, library statistics, or
geographical demographics and how they affect the library distribution.

Limitations

The limitation of the historical study design can be mentioned as follows:

• The past data may not be available in its authentic form;


• Difficult to test the genuineness and authenticity of the sources and
the data available from them;
• Difficulty to establish the time order of events;
• The dispersal of documents is another limitation; and

• Precise measurement and verifications may not be possible.

CONCLUSION

This section of the book details the significance of research design in research
applications. A research design incorporates the methods and processes employed in
a scientific study. In order to conduct research, the researcher should have as a base a
plan of action, just as we need a plan to construct a house. A good research
design minimizes the errors which can creep in at several stages of research. The
information about the varied typologies of research design, their purposes and features,
will help researchers to have better control over the research in building up
knowledge, analyzing it and interpreting it to arrive at proper inferences. Research design
has a substantial effect on the reliability of the outcomes obtained. Thus research
design acts as a stable footing for the whole research.

DISCUSSION QUESTIONS

1. What is the purpose of research design?
2. Define research design. Illustrate with an example.
3. What are the essentials of a good research design?
4. Explain exploratory research design. Illustrate with an example.
5. What is the purpose of exploratory research design?
6. Explain descriptive research design. Illustrate with an example.
7. Explain diagnostic research design. Illustrate with an example.
8. Explain in detail the characteristic features of experimental study
design.
9. What do you mean by a completely randomized design? Illustrate with an
example.
10. How would you describe the Solomon four-group design?
11. Explain ex post facto research with a suitable example.
12. What is historical research?
13. How do you decide which type of design is required for your research?
Discuss.

MODULE 8

SAMPLING

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand and appreciate the meaning of sampling.


2. Understand the process of determining sample size.
3. Analyze probabilistic and non-probabilistic sampling.
4. Understand inevitability of sampling in quantitative research.

INTRODUCTION

Choosing a study sample is an important step in any research project since it is rarely
practical, efficient or ethical to study whole populations. The aim of all quantitative
sampling approaches is to draw a representative sample from the population, so that
the results of studying the sample can then be generalized back to the population.
Sampling is a good alternative for a complete census if:

• researching of the entire population is not possible in practice;


• budget limitation makes it impossible to examine the total
population;
• time limits make it impossible to research the entire population;
and
• all data has been collected, but you need to produce results quickly.

With sampling the researcher can reliably use observations about the sample to make
a statement about the entire population. If a researcher desires to obtain information
about a population through questioning or testing, he/she has two basic options:

• every member of the population can be questioned or tested, a
census; or
• a sample can be conducted; that is, only selected members of the
population are questioned or tested.

Sampling is thus a 'short-cut' method for investigating a whole population: data are
gathered on a small part of the whole 'parent population' or 'sampling frame' and used
to indicate what the whole picture is like.

Contacting, questioning, and obtaining information from a large population is
extremely expensive, difficult, and time consuming. A properly designed probability
sample, however, provides a reliable means of inferring information about a
population without examining every member or element. Sampling will enable you
to collect a smaller amount of data that represent the whole group. This will save time,
money and other resources, while not compromising on reliability of information.
Sampling is the process of systematically choosing a sub-set of the total population
you are interested in surveying. With sampling, you produce findings that can be
generalized to the target population of your program.

KEY SAMPLING TERMS

Subject: A single member of the sample.

Sample: A smaller representative of a larger whole. A set of cases that is drawn from
a larger pool and used to make generalizations about the population. A set of unit or
portion of an aggregate and material which has been selected in the belief that it will
be representative of the whole aggregate.

Sampling frame: A comprehensive list of all relevant elements or clusters that is used
to select a sample. A specific list that closely approximates all elements in the
population—from this the researcher selects units to create the study sample.

Element: The person from whom you will collect data; an element could be a young
person, a parent or a service provider. A case or a single unit that is selected from a
population and measured in some way—the basis of analysis (e.g., a person, thing,
specific time, etc.).

Heterogeneous: A population whose elements have dissimilar characteristics.
Heterogeneity is the state of being dissimilar.

Homogeneous: A population whose elements have similar characteristics.
Homogeneity is the state of being similar.

Universe: the theoretical aggregation of all possible elements—unspecified to time


and space (e.g., University).

Population: The universe or collection of all elements (persons, businesses, et cetera)


being described or measured from a sample. The theoretical aggregation of specified
elements as defined for a given survey defined by time and space (e.g., UI students
and staff in 2008).

Sample or Target population: The aggregation of the population from which the
sample is actually drawn (e.g., University students and faculty in 2008-09 academic
year).

Census: A census is a study of all the individuals within a population, while the Census
is an official research activity carried out by the Government. The Census is an
important event for the research industry. The Government publishes a sample of
individual Census returns, and these help researchers to see behind the total data

Respondent: An element or member of the population selected to be sampled.

Standard Error: The standard deviation of the sampling distribution of a statistic is


known as its standard error and is denoted by (S.E)

Sampling: The sample method involves taking a representative selection of the


population and using the data collected as research information. A sample is a
“subgroup of a population” (Frey et al.). It has also been described as a representative
“taste” of a group (Berinstein). The sample should be “representative in the sense that
each sampled unit will represent the characteristics of a known number of units in the
population” (Lohr). It's the process of selecting a sufficient number of elements from
the population so that a study of the sample and understanding of its properties or
characteristics would make it possible for us to generalize such properties or
characteristics to the population. Sampling is the process of selecting a group of
subjects for a study in such a way that the individuals represent the larger group from
which they were selected. (Ary, Cheser Jacobs, and Razavieh, 1972).

The foremost objective of quantitative research is to generalize. In every quantitative


research, it may not be possible for the researcher to study the whole population of
interest. To get information about the population of interest and to draw inferences
about the population, researchers use sample which is a subgroup of the population
(Lind, Marchal and Wathen, 2008). By using sample, researchers save a lot of time and
money, get more detailed information, and they are able to get information which
may not be available otherwise (Bluman, 2009). Although there are a number of
sampling methods to use, the choice of the right method is dictated by the nature of
the study and the specific research questions and hypotheses. Researchers can select
from the broad categories of probability and non-probability samples. According to Lind
et al. (2008), the following probability sampling methods are available to a researcher.

Sampling theory is important to understand in regard to selecting a sampling method


because it seeks to "make sampling more efficient" (Cochran). Cochran posits that
using correct sampling methods allows researchers to reduce research
costs, conduct research more efficiently (speed), have greater flexibility, and achieve
greater accuracy.

Characteristics of sampling

• Representativeness-valid

A sample must be representative of the population. The validity of the sample


depends on its accuracy and precision. Probability sampling gives representative
sample.

• Accuracy- unbiased

An accurate sample is one which exactly represents the population. It is free from bias,
that is, from influences that cause any difference between the sample value and the
population value.

• Precise- help of statistics

The sample must yield a precise estimate. The smaller the standard error or standard
deviation, the higher the precision.

• Size- reliable

A good sample must be adequate in size in order to be reliable. The sample should be
of such a size that the inferences drawn from it are accurate to a given level of
confidence.

RANDOMIZATION

The key to building representative samples is randomization. “Randomization” is the


process of randomly selecting population members for a given sample, or randomly
assigning subjects to one of several experimental groups, or randomly assigning
experimental treatments to groups. In the context of this chapter, it is selecting
subjects for a sample in such a way that every member of the population has an equal
chance at being selected. By randomly selecting subjects from a population, you
statistically equalize all variables simultaneously.

FACTORS DETERMINING SAMPLE SIZE

Numerous factors control the best sample size, beginning with the actual size of the population. The sample size should be big enough to give a small sampling error, which represents how close the results are to the true population value, expressed as a percentage. The sample size also needs to reflect the diversity in the universe, known as the degree of variability. Finally, there needs to be a confidence level such that if the population is repeatedly sampled, the results can be repeated.

Determining the size of a sample in a study is one of the most challenging responsibilities of researchers. Investigators rely on a number of approaches to help in deciding the right sample size. Two possible ways in this regard include:

1. Take a similar study and use that study's sample size; or
2. Make use of a series of formulas that calculate a sample size using the size of the population and the desired sampling error and confidence level.

The researcher addresses the following concerns when he/she applies the sampling
procedures:

• Larger sample sizes are more accurate representations of the whole;


• The sample size chosen is a balance between obtaining a statistically
valid representation, and the time, energy, money, labor, equipment
and access available; and
• A sampling strategy made with the minimum of bias is the most
statistically valid.
• Most approaches assume that the parent population has a ‘normal distribution’, where most items or individuals are clustered close to the mean, with few extremes.
• A 95% probability or confidence level is usually assumed, e.g. 95% of items or individuals will lie within plus or minus 2 standard deviations of the mean.
• This also means that up to 5% may lie outside this range; sampling, no matter how good, can only ever be claimed to give a very close estimate.

PARAMETERS FOR SAMPLE SIZE DETERMINATION

Sample size determinations depend on four parameters. These parameters are:

1. the desired level of statistical power;


2. the p level;
3. treatment variability; and
4. error variability.

Statistical power refers to the probability that a treatment effect will be detected if it
is there. By convention, power is generally set at about 0.80, or an 80% probability
that a treatment effect will be detected if present. When a study is under-powered, it has less than an 80% chance of detecting an existing treatment effect. When it is over-powered, it has a greater than 80% chance of detecting a treatment effect. The p level refers to the probability of detecting a statistically significant difference that is the result of chance – not the result of the treatment. In other words, the p level
determines the probability of obtaining an erroneously significant result. In statistical
language, this error is called a Type I error. By convention, p levels generally are set
at 0.05, or a 5% probability that a significant difference will occur by chance.

Two of the four parameters – power and p level -- are pre-determined. The other two
parameters – treatment variability and error variability – must be estimated in order
to complete the sample size determination. Treatment and error variability can be
estimated in three ways.

Pilot Studies: The most accurate determination of sample size is obtained when the
investigator has collected relevant data from which an estimate of treatment
variability and an estimate of error variability can be made. These data generally are
obtained in a pilot or small-scale preliminary study. Note that the results of a pilot
study do not have to be statistically significant in order for the data to be used to
estimate treatment and error variability. This procedure is the best way to determine
sample size.

Relevant Literature: Another means of making treatment and error variability


estimates is to use the relevant scientific literature. Estimates could be made from
the published work of investigators who have conducted similar studies or who have
addressed related questions. This is the second-best way to determine sample size.

Rule-of-Thumb Estimates: The third means of making variability estimates is to use


rough approximations or rules-of-thumb that are accepted in a particular field in the
absence of data or published work. This procedure is, by far, the least accurate means
of determining sample size, but sometimes must be used in the absence of data and
relevant literature.

In general, if the variability associated with the treatment is large relative to the error
variability, then relatively few subjects will be required to obtain statistically
significant results. Conversely, if the variability associated with the treatment is small
relative to the error variability, then relatively more subjects will be required to obtain
statistically significant results. (DMS –S C G, January 2006).

DETERMINE SAMPLE SIZE

Before we can take a sample in a digital forensic investigation, we need to determine


the sample size. The sample size depends on a number of factors, including the purpose of the study, the population size, the risk of selecting a bad sample and the allowable sampling error. The examples and definitions in this section are based on a
paper about determining the sample size (Glenn and Israel, 1992).

The level of precision

The level of precision is sometimes called the sampling error. This is the range in which
the true value of the population is estimated to be. This value is usually expressed in
percentages (e.g. +/- 5%) and needs to be determined by the investigator before sampling, because the level of precision can have a significant effect on the sample size at a given confidence level (Mora and Kloet, 2010).

The confidence level

The confidence level or risk level is based on the ideas encompassed under the Central
Limit Theorem. The key idea encompassed in the Central Limit Theorem is that when
a population is repeatedly sampled, the average value of the attribute obtained by
those samples is equal to the true population value (Saunder, et al, 2004). This means
that if 95% is the selected confidence level 95 out of the 100 samples will have the
true population value within the range of precision specified earlier. In practice a 95%
confidence level with a +/- 5% precision rate is assumed reliable.

Degree of variability

The degree of variability in the attributes being measured refers to the distribution of
attributes in the population. The more heterogeneous a population, the larger the
sample size required to obtain the given level of precision. The less variable a
population, the smaller the sample size. The level of variability is expressed using the
'proportion' or 'P'. A proportion of 0.5 (or 50%) indicates the greatest level of
variability, more than either 0.2 or 0.8. This is because 0.2 or 0.8 indicate that a large
majority do not or do, respectively, have the attribute of interest. Because a
proportion of 0.5 indicates the maximum variability in a population it is often used in
determining a more conservative sample size, that is, the sample size may be larger
than if the true variability of the population attribute were used. In this paper we will
use formulas that assume a proportion of 0.5, so we can ignore the level of variability
without choosing overly optimistic sample sizes (Mora and Kloet, 2010).

Using formulas to calculate a sample size

There are several methods for determining the sample size. In this paper we will
present a simple formula from Yamane to determine the sample size (Yamane, Taro.
1967). This formula can be used to determine the minimal sample size for a given
population size.

The formula from Yamane is:

n = N / (1 + N·e²)

Where

n = sample size;

N = population size;

e = the level of precision

This formula assumes a degree of variability (i.e. proportion) of 0.5 and a confidence
level of 95%. Example 1 shows an example where the population of some sort is 2000
and where a 5% level of precision is required. The sample to be examined by the
investigator is 333 items or objects

n = 2000 / (1 + 2000 × (0.05)²) = 333

Example 1

If we want a confidence level of 0.95, then the statistical tables tell us z is 1.96. If we
substitute this in the above formula, we get:

n = 0.96N / (0.96 + N·e²), which turns into the Yamane formula if we round 0.96 to 1 (here 0.96 ≈ z²P(1−P) = 1.96² × 0.25).

The Yamane formula can be applied in this way to a wide range of investigation scenarios, including digital forensic investigations.

Using the Yamane formula, we can easily determine the minimal sample size that we
have to investigate for any given population size. The downside to this formula,
however, is that it gives us at most a confidence level of 95%. If we want a higher (or
lower) confidence level than 95%, then we will have to use the original version of the
Yamane formula. To get this formula, we start with the original formula that the above
Yamane formula is based on (Yamane, 1967):

n = z²P(1−P)N / (z²P(1−P) + N·e²)

Where:

n = sample size

N = population size

e = the level of precision

z = the value of the standard normal variable for the chosen confidence level (e.g. z = 1.96 with a CL of 95% and z = 2.57 with a CL of 99%)

P = the proportion or degree of variability

If we assume a proportion P of 0.5, then this formula can be simplified to:

n = 0.25z²N / (0.25z² + N·e²)

So what do we do when we want a 99% confidence level with a population of 2000?


By using the statistical tables we can determine that we then have a z-value of 2.57,
which gives us the following outcome:

n = (2.57)² × 0.25 × 2000 / ((2.57)² × 0.25 + 2000 × (0.05)²) ≈ 496
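
These formulas are easy to wrap into small helper functions. The following is a minimal sketch in Python (the function names are ours, chosen for illustration); it reproduces the two worked examples above.

    def yamane(N, e):
        """Yamane's simplified formula: assumes P = 0.5 and a 95% confidence level."""
        return N / (1 + N * e * e)

    def sample_size(N, e, z=1.96, P=0.5):
        """General formula: n = z^2 P(1-P) N / (z^2 P(1-P) + N e^2)."""
        k = z * z * P * (1 - P)
        return k * N / (k + N * e * e)

    print(round(yamane(2000, 0.05)))               # 333, as in Example 1
    print(round(sample_size(2000, 0.05, z=2.57)))  # 496, the 99% confidence example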

The above formulas are all based on the notion that we want to perform some
investigation on a sample, and use the results to say something about the entire
population. But what if we have a large dataset in which we have a relatively small
number of items that are of interest, for example to determine if fraudulent
transactions are present in a population. If we have 20,000 transactions and we
assume that from those transactions only 100 transactions are fraudulent, then we
can be reasonably certain that the likelihood of detecting the fraudulent transactions
with a sample size of 50 is not large. The following formula can be used to determine
exactly how likely it is that we detect a fraudulent transaction (Coderre, 2009).

P = 1 − (1 − n/N)^E

Where

n = sample size

N = population size

P = probability of selecting an error in the sample

E = the number of items of interest in the population, e.g. fraudulent transactions

So if we assume that in a population of 20,000 transactions only 100 are fraudulent and a sample of 50 is selected, the probability of finding a fraudulent transaction in the selected sample is:

P = 1 − (1 − 50/20000)^100 ≈ 22%

But if we increase the sample size to 400, the probability of finding at least one fraudulent transaction rises to about 87%:

P = 1 − (1 − 400/20000)^100 ≈ 87%
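
A minimal sketch of this calculation in Python (the function name is ours; the population size, error count and sample sizes are those of the example above):

    def detection_probability(n, N, E):
        """Probability that a sample of n items from a population of N items
        contains at least one of the E items of interest."""
        return 1 - (1 - n / N) ** E

    print(round(detection_probability(50, 20000, 100), 2))   # ~0.22
    print(round(detection_probability(400, 20000, 100), 2))  # ~0.87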

SAMPLE SIZE FOR SIMPLE PREVALENCE STUDIES

The sample size needed for a prevalence study depends on how precisely you want to
measure the prevalence. (Precision is the amount of error in a measurement) The
bigger your sample, the less error you are likely to make in measuring the prevalence,
and therefore the better the chance that the prevalence you find in your sample will
be close to the real prevalence in the population. You can calculate the margin of
uncertainty around the findings of your study using confidence intervals. A confidence
interval gives you a maximum and a minimum plausible estimate of the true value you
were trying to measure.

STEP 1: DECIDE ON AN ACCEPTABLE MARGIN OF ERROR

The larger your sample, the less uncertainty you will have about the true prevalence.
However, you do not necessarily need a tiny margin of uncertainty. In an exploratory
study, for example, a margin of error of ±10% might be perfectly acceptable. A 10%
margin of uncertainty can be achieved with a sample of only 100. However, to get to
a 5% margin of error will require a sample of 384 (four times as large) (Conroy, 2008).

STEP 2: IS YOUR POPULATION FINITE?

Are you sampling a population which has a defined number of members? Such
populations might include physiotherapists in private practice or pharmacies. If you
have a finite population, the sample size you need can be significantly smaller.

STEP 3: SIMPLY READ OFF YOUR REQUIRED SAMPLE SIZE FROM TABLE

Acceptable margin of error    Size of population
                              Large    5000    2500    1000    500    200
±20%                             24      24      24      23     23     22
±15%                             43      42      42      41     39     35
±10%                             96      94      93      88     81     65
±7.5%                           171     165     160     146    127     92
±5%                             384     357     333     278    217    132
±3%                            1067     880     748     516    341    169
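
The figures in this table can be closely reproduced with the usual formula for estimating a proportion, n0 = z²p(1−p)/e², combined with the finite population correction n = n0 / (1 + (n0 − 1)/N). The sketch below is an illustrative Python version under that assumption (z = 1.96, p = 0.5); it is not taken from Conroy's paper itself.

    def prevalence_sample_size(e, N=None, z=1.96, p=0.5):
        """Sample size needed to estimate a prevalence within +/- e,
        with an optional finite population correction for population size N."""
        n0 = z * z * p * (1 - p) / (e * e)
        if N is None:              # "large" population: no correction
            return n0
        return n0 / (1 + (n0 - 1) / N)

    print(round(prevalence_sample_size(0.05)))          # 384 (large population)
    print(round(prevalence_sample_size(0.05, N=500)))   # 217
    print(round(prevalence_sample_size(0.03, N=1000)))  # 516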

SAMPLE SIZES FOR PREVALENCE STUDIES

Example 1: Sample size for a study of the prevalence of anxiety disorders in students
at a large university

A researcher is interested in carrying out a prevalence study using simple random


sampling from a population of over 11,000 university students. She would like to
estimate the prevalence to within 5% of its true value.

Since the population is large (more than 5,000) she should use the first column in the
table. A sample size of 384 students will allow the study to determine the prevalence
of anxiety disorders with a confidence interval of ±5%. Note that if she wants extra
precision, she will have to sample over 1,000 for ±3%. Sample sizes increase rapidly
when very high precision is needed (Ronán, 2008).

Example 2: Sample size for a study of a finite population

A researcher wants to study the prevalence of bullying in registrars and senior


registrars working in Ireland. She is willing to accept a margin of uncertainty of ±7.5%.

Since the population is finite, with roughly 500 registrars and senior registrars, the
sample size will be smaller than she would need for a study of a large population. A
representative sample of 127 will give the study a margin of error (confidence interval)
of ±7.5% in determining the prevalence of bullying in the workplace, and 341 will
narrow that margin of error to ±3% (Conroy, 2008).

CLASSIFICATION OF SAMPLING TECHNIQUES

Sampling techniques fall into two broad families:

Nonprobability sampling techniques: convenience sampling, judgmental sampling, quota sampling, and snowball sampling.

Probability sampling techniques: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and other sampling techniques.

The major groups of sample designs are probability sampling and non-probability
sampling:

1. Probability sampling and

2. Non-probability sampling.

The choice to use probability or non-probability sampling depends on the goal of the
research. When a researcher needs to have a certain level of confidence in the data
collection, probability sampling should be used (MacNealy). Frey, et al. indicates that
the two sampling methods “differ in terms of how confident we are about the ability
of the selected sample to represent the population from which it is drawn”.
Probability samples can be “rigorously analyzed to determine possible bias and likely
error” (Henry). Non-probability sampling does not provide this advantage but is useful
for researchers “to achieve particular objectives of the research at hand” (Henry).
These objectives may allow for selection of the sample acquired by accident, because
the sample “knows” the most, or because the sample is the most typical (Fink and
Kosecoff). Probability and non-probability sampling have advantages and
disadvantages and the use of each is determined by the researcher’s goals in relation
to data collection and validity. Each sampling category includes various methods for
the selection process.

PROBABILITY SAMPLING

Probability sampling (a term due to Deming) is a sampling process that utilizes some
form of random selection. In probability sampling, each unit is drawn with known
probability, [Yamane, 1967] or has a nonzero chance of being selected in the sample.
[Raj, 1968] Such samples are usually selected with the help of random numbers.
(Cochran, 1953 and Trochim) With probability sampling, a measure of sampling
variation can be obtained objectively from the sample itself.

In the former (probability sampling), the researcher knows the exact probability of
selecting each member of the population; in the latter, the chance of being included
in the sample is not known. A probability sample tends to be more difficult and costly
to conduct. However, probability samples are the only type of samples where the
results can be generalized from the sample to the population. In addition, probability
samples allow the researcher to calculate the precision of the estimates obtained from
the sample and to specify the sampling error.

Probability sampling provides an advantage because of a researcher’s ability to


calculate specific bias and error in regards to the data collected. Probability sampling
is defined as having the “distinguishing characteristic that each unit in the population
has a known, nonzero probability of being included in the sample” (Henry, 1990). It is
described more clearly as “every subject or unit has an equal chance of being
selected” from the population (Fink, 1995). It is important to give everyone an equal
chance of being selected because it “eliminates the danger of researchers biasing the
selection process because of their own opinions or desires” (Frey, et al. 2000). When
bias is eliminated, the results of the research may be generalized from the sample to
the whole of the population because “the sample represents the population” (Frey, et
al. 2000).

Probability sampling – includes some form of random selection in choosing the


elements. Greater confidence can be placed in the representativeness of probability
samples. This type of sampling involves a selection process in which each element in
the population has an equal and independent chance of being selected. Four main
methods include: 1) simple random, 2) stratified random, 3) cluster, and 4)
systematic.

PROBABILITY SAMPLE

Four basic types of methodologies are most commonly used for conducting
probability samples; these are simple random, stratified, cluster, and systematic
sampling. Simple random sampling provides the base from which the other more
complex sampling methodologies are derived.


SIMPLE RANDOM SAMPLING

The basic characteristic of random sampling is that all members of the population
have an equal and independent chance of being included in the sample (Ary, Cheser
Jacobs, and Razavieh, 1972). Simple random sampling involves the researcher selecting a sample at random from the sampling frame, using either a random number table
manually or on a computer, or by an online number generator (Saunders et al., 2009).

To conduct a simple random sample, the researcher must first prepare an exhaustive
list (sampling frame) of all members of the population of interest. From this list, the
sample is drawn so that each person or item has an equal chance of being drawn
during each selection round. Samples may be drawn with or without replacement. In
practice, however, most simple random sampling for survey research is done without
replacement; that is, a person or item selected for sampling is removed from the
population for all subsequent selections. At any draw, the process for a simple random
sample without replacement must provide an equal chance of inclusion to any
member of the population not already drawn. To draw a simple random sample
without introducing researcher bias, computerized sampling programs and random
number tables are used to impartially select the members of the population to be
sampled.

Steps in Simple Random Sampling

To conduct simple random sampling, the following steps may be followed:

1. Identify and define the population;
2. Determine the desired sample size;
3. List all members of the population;
4. Assign all individuals on the list a consecutive number from zero to the required number, giving each individual the same number of digits;
5. Select an arbitrary number in the table of random numbers;
6. For the selected number, look only at the number of digits assigned to each population member;
7. If the number corresponds to the number assigned to any of the individuals in the population, then that individual is included in the sample; and
8. Go to the next number in the column and repeat steps 6 and 7 until the desired number of individuals has been selected for the sample.

An example of a simple random sample would be a survey of County employees. An


exhaustive list of all County employees as of a certain date could be obtained from
the Department of Human Resources. If 100 names were selected from this list using
a random number table or a computerized sampling program, then a simple random
sample would be created. Such a random sampling procedure avoids bias and enables the
researcher to estimate sampling errors and the precision of the estimates derived
through statistical calculations.

Gay, (1987) has given a good example to understand this sample method.

1. The population is 5,000 teachers in the system.


2. The desired sample size is 10%, or 500 teachers.
3. The superintendent has a directory which lists all 5,000 teachers
alphabetically. He assigns numbers from 0000 to 4999 to the teachers.
4. A table of random numbers is entered at an arbitrarily selected number
such as the one underlined below: 59058 11859 53634 48708 71710
5. Since his population has only 5000 members, he is interested only in the
last 4 digits of the number, 3634.
6. The teacher assigned #3634 is selected for the sample.
7. The next number in the column is 48708. The last four digits are 8708. No
teacher is assigned #8708 since there are only 5000 skip this number.
8. Applying these steps to the remaining numbers shown in the column,
teachers 1710, 3942, and 3278 would be added to the sample.
9. This procedure continues down this column and succeeding columns until
500 teachers have been selected.
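
In practice, the random number table can be replaced by a computerized draw. Below is a minimal sketch in Python of the superintendent's selection; the teacher identifiers are simply the numbers 0 to 4999 standing in for the alphabetical directory.

    import random

    random.seed(42)                    # fix the seed so the draw is reproducible
    teachers = list(range(5000))       # stand-in for the directory, numbered 0000-4999

    # Draw 500 teachers without replacement; every teacher has an equal chance
    sample = random.sample(teachers, k=500)
    print(len(sample), sample[:5])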

Illustration
WHY IS RANDOM SAMPLING INAPPROPRIATE FOR QUALITATIVE STUDIES?

The process of selecting a random sample is well defined and rigorous, so why can the
same technique not be used for naturalistic studies? The answer lies in the aim of the
study; studying a random sample provides the best opportunity to generalize the
results to the population but is not the most effective way of developing an
understanding of complex issues relating to human behavior. There are both
theoretical and practical reasons for this.

1. First, samples for qualitative investigations tend to be small, for


reasons explained later in this article. Even if a representative
sample was desirable, the sampling error of such a small sample is
likely to be so large that biases are inevitable.
2. Secondly, for a true random sample to be selected the
characteristics under study of the whole population should be
known; this is rarely possible in a complex qualitative study.
3. Thirdly, random sampling of a population is likely to produce a
representative sample only if the research characteristics are
normally distributed within the population. There is no evidence
that the values, beliefs and attitudes that form the core of
qualitative investigation are normally distributed, making the
probability approach inappropriate.
4. Fourthly, it is well recognized by sociologists that people are not
equally good at observing, understanding and interpreting their
own and other people's behavior (Marshal, 1996).

Advantages of simple random sampling

The advantages of simple random sampling include:

• Simple and easy to execute;


• Decrease the potential for human bias in the selection of cases
• The sample has a high representation of the population being studied;
• Allows effective generalizations (i.e. statistical inferences) from the sample to the population; and
• Greater external validity.

Disadvantages of simple random sampling

The disadvantages of simple random sampling include:

• A simple random sampling is very difficult to conduct if the size of the
population being studied is large;
• The observation that makes up the systematic sample will, in most
cases, lack independence;
• The lack of independence makes it impossible to calculate the
sample variance and error variance without bias; and
• Not possible without complete list of population members. Even if a list
is readily available, it may be challenging to gain access to that list.
• Potentially uneconomical to achieve; can be disruptive to isolate
members from a group; time-scale may be too long, data/sample could
change

Researchers who choose simple random sampling must be cognizant of the numbers
that they choose. Researcher bias in regards to preferred numbers can be a problem
for the end results in regards to sample selection (Frey, et al. 2000). It is best to ask
other researchers to aid in the selection of the numbers to be used in the selection
process. It is also important to note that by using simple random sampling, the sample
selected may not include all “elements in the population that are of interest” (Fink,
1995).

STRATIFIED RANDOM SAMPLING

Stratified random sampling involves a process of stratification (different strata are


made on the basis of different factors such as life stages, income levels, management
level etc.) and a random sample is then drawn from each stratum (Sekaran and
Bougie, 2010). Additionally, a stratum is homogenous from within but heterogeneous
with other strata. Stratified random sampling involves categorizing the members of
the population into mutually exclusive and collectively exhaustive groups. An
independent simple random sample is then drawn from each group. Stratified
sampling techniques can provide more precise estimates if the population being
surveyed is more heterogeneous than the categorized groups, can enable the
researcher to determine desired levels of sampling precision for each group, and can
provide administrative efficiency.

Steps in Stratified Random Sampling

To conduct stratified random sampling, the following steps may be followed:

1. Identify and define the population;


2. Determine the desired sample size;
3. Identify the variable and subgroups (strata) for which you want to
guarantee appropriate, equal representation;
4. Classify all members of the population as members of one identified
subgroup; and
5. Randomly select (using a table of random numbers) an “appropriate” number of individuals from each of the subgroups, “appropriate” meaning an equal number of individuals.

Illustration

Example 1: As part of a different survey on attitudes to school meals in Lower-School


a stratified random sample of size 40 is to be taken. Find the sample size for each year
group.

The total number of Lower-School students = 560

Year 7 sample size = 156/560 x 40 = 11 students

Year 8 sample size = 180/560 x 40 = 13 students

Year 9 sample size = 224/560 x 40 = 16 students

Example: 2. An example of a stratified sample would be a sample conducted to


determine the average income earned by families in the United States. To obtain more
precise estimates of income, the researcher may want to stratify the sample by
geographic region (northeast, mid-Atlantic, et cetera) and/or stratify the sample by
urban, suburban, and rural groupings. If the differences in income among the regions
or groupings are greater than the income differences within the regions or groupings,
the precision of the estimates is improved. In addition, if the research organization
has branch offices located in these regions, the administration of the survey can be
decentralized and perhaps conducted in a more cost-efficient manner.

The superintendent would follow these steps to create a stratified sample of his 5,000
teachers.

1. The population is 5,000 teachers;


2. The desired sample size is 10%, or 500 teachers;
3. The variable of interest is teaching level. There are three subgroups:
elementary, junior high, and senior high;
4. Classify the 5,000 teachers into the subgroups. In this case, 65% or
3,250 are elementary teachers, 20% or 1,000 are junior high teachers,
and 15% or 750 are senior high teachers.;
5. The superintendent wants 500 teachers in the sample. So 65% of the
sample (325 teachers) should be elementary, 20% (100) should be
junior high teachers, and 15% (75) should be senior high teachers;
6. This is a proportionally stratified sample. A non-proportional stratified
sample would randomly select 167 subjects from each of the three
groups; and
7. The superintendent now has a sample of 500 (325+100+75) teachers,
which is representative of the 5,000 and which reflects proportionally
each teaching level.
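
The proportional allocation in this example is easy to compute programmatically. A minimal sketch in Python follows; the stratum names and counts are the ones from the superintendent example.

    population = {"elementary": 3250, "junior high": 1000, "senior high": 750}
    total = sum(population.values())          # 5,000 teachers
    sample_size = 500                         # 10% sample

    # Proportionate allocation: each stratum keeps its share of the population
    allocation = {stratum: round(sample_size * count / total)
                  for stratum, count in population.items()}
    print(allocation)   # {'elementary': 325, 'junior high': 100, 'senior high': 75}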

Example3:

There are 9000 students at Stapleton College. The table below shows how the
students are distributed by course-type and gender.

A sample of 400 students is to be taken to obtain their views on the quality of


education that they receive. Calculate the number of students in the sample that
should be: (a) Females on full-time courses (b) Male (c) Part-time students

(a) 2100/9000 x 400 = 93

(b) 4200/9000 x 400 = 187

(c) 5000/9000 x 400 = 222

PROPORTIONATE STRATIFIED RANDOM SAMPLING

The sample size of each stratum in this technique is proportionate to the population
size of the stratum when viewed against the entire population. This means that each stratum has the same sampling fraction.

Example:

Let us say a sample of 250 companies is required to conduct research on “strategic planning” practices among managers. The total company population is 550, but the sample frame obtained is 290. The researcher decides to take 25% of cases from each stratum. Sampling intensity = 13.5%.

(http://www.experiment-resources.com/stratified-sampling)

DISPROPORTIONATE STRATIFIED RANDOM SAMPLING

The only difference between proportionate and disproportionate stratified random


sampling is their sampling fractions. In disproportionate sampling, the different strata
have different sampling fractions. The precision of this design is highly dependent on
the sampling fraction allocation of the researcher. If the researcher commits mistakes
in allotting sampling fractions, a stratum may either be overrepresented or
underrepresented which will result in skewed results

Let us say a sample of 250 companies is required to conduct research on “strategic planning” practices among managers. The total company population is 550, but the sample frame obtained is 290. Sampling intensity = 45.5%.

Illustration

Advantages

The advantages of Stratified random sampling include:

• Stratified samples yield smaller random sampling errors;


• Gain in precision;
• Stratified samples tend to be more representative of a population;
• Flexible in the choice of the sample design for different strata;
• Able to get estimates of each stratum in addition to the population
estimates;
• Can ensure that specific groups are represented, even proportionally, in
the sample(s) (e.g., by gender), by selecting individuals from strata list
• Allows different research methods and procedures to be used in different
strata;
• Has a greater ability to make inferences; and
• Administrative convenience in carrying out the study.

Disadvantages

The disadvantages of Stratified random sampling include:

1. Acquiring information about the varied proportion of strata may be


time-consuming and costly;
2. If the study includes a large number of variables, the selection of stratification variables becomes difficult;
3. More effort from the researcher is required to implement the sampling
design;
4. Stratified random sampling invites more time in the analysis and
interpretation; and
5. More costly, time-intensive, and complex than simple random
sampling.

CLUSTER SAMPLING

Cluster sampling, on the surface, is very similar to stratified sampling in that “survey
population members are divided into unique, non-overlapping groups prior to
sampling” (Henry, 1990). These groups are referred to as clusters instead of strata
because they are “naturally occurring groupings such as schools, households, or
geographic units” (Henry, 1990). Whereas a stratified sample “involves selecting a few
members from each group or stratum,” cluster sampling involves “the selection of a
few groups and data are collected from all group members” (Henry 1990). This
sampling method is used when no master list of the population exists but “cluster”
lists are obtainable (Babbie, 1990; Frey, et al. 2000; Henry, 1990.; Lohr, 1999;
MacNealy, 1999).

In cluster sampling, the population is divided into clusters (a cluster is a natural


aggregation of elements in a population) and then randomly some clusters are drawn
from the group. In a selected cluster, all elements may be selected for study or a
random sample can be further drawn from the cluster (Sekaran and Bougie, 2010).

Cluster sampling is similar to stratified sampling because the population to be sampled


is subdivided into mutually exclusive groups. However, in cluster sampling the groups
are defined so as to maintain the heterogeneity of the population. It is the
researcher’s goal to establish clusters that are representative of the population as a
whole, although in practice this may be difficult to achieve. After the clusters are
established, a simple random sample of the clusters is drawn and the members of the
chosen clusters are sampled. If all of the elements (members) of the clusters selected
are sampled, then the sampling procedure is defined as one-stage cluster sampling. If
a random sample of the elements of each selected cluster is drawn, then the sampling
procedure is defined as two-stage cluster sampling. Cluster sampling is frequently
employed when the researcher is unable to compile a comprehensive list of all the
elements in the population of interest. A cluster sample might be used by a researcher
attempting to measure the age distribution of persons residing in Fairfax County. It
would be much more difficult for the researcher to compile a list of every person
residing in Fairfax County than to compile a list of residential addresses. In this
example, each address would represent a cluster of elements (persons) to be sampled.
If the elements contained in the clusters are as heterogeneous as the population, then estimates derived from cluster sampling are as precise as those from simple random sampling. However, if the clusters are internally homogeneous and less heterogeneous than the population, the estimates will be less precise than those from a simple random sample of the same size.

Illustration

Steps in Cluster Sampling

To conduct cluster sampling, the following steps may be followed:

1. Identify and define the population;


2. Determine the desired sample size;
3. Identify and define a logical cluster;
4. List all clusters (or obtain a list) that make up the population of clusters;
5. Estimate the average number of population members per cluster;
6. Determine the number of clusters needed by dividing the sample size
by the estimated size of a cluster;
7. Randomly select the needed number of clusters by using a table of
random numbers; and
8. Include in your study all population members in each selected cluster.

Example: Let’s apply this approach to the superintendent’s study.

190
1. The population is 5,000 teachers.
2. The sample size is 10%, or 500 teachers.
3. The logical cluster is the school.
4. The superintendent has a list of 100 schools in the district.
5. Although the clusters vary in size, there is an average of 50 teachers
per school.
6. The required number of clusters is obtained by dividing the sample size
(500) by the average size of the cluster (50). Thus, the number of
clusters needed is 500/50 = 10 schools.
7. The superintendent randomly selects 10 schools out of the 100.
8. Every teacher in the selected schools is included in the sample
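
A minimal sketch of this one-stage cluster draw in Python is shown below; the school identifiers and the 50-teachers-per-school figure are the illustrative values from the example.

    import random

    random.seed(7)
    schools = {f"school_{i}": [f"school_{i}_teacher_{j}" for j in range(50)]
               for i in range(100)}         # 100 schools x 50 teachers = 5,000 teachers

    clusters_needed = 500 // 50             # sample size / average cluster size = 10 schools
    chosen_schools = random.sample(list(schools), k=clusters_needed)

    # One-stage cluster sampling: every teacher in a chosen school enters the sample
    sample = [teacher for school in chosen_schools for teacher in schools[school]]
    print(len(sample))                      # 500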

Advantages

The advantages of Cluster Sampling include:

• Low cost/high frequency of use;


• Requires list of all clusters, but only of individuals within chosen
clusters;
• Can estimate characteristics of both cluster and population; and
• For multistage, has strengths of used methods.

Disadvantages

The disadvantages of Cluster Sampling include:

• Larger error for comparable size than other probability methods;


• Multistage very expensive and validity depends on other methods
used;
• Fewer sampling points make it less like that the sample is
representative; and
• The sampling error is higher than for a simple random sample of the same size.
DIFFERENCE BETWEEN STRATIFICATION AND CLUSTERING

Stratification:
1. Divide the population into groups different from each other: sexes, races, ages
2. Sample randomly from each group
3. Less error compared to simple random sampling
4. More expensive to obtain stratification information before sampling

Clustering:
1. Divide the population into comparable groups: schools, cities
2. Randomly sample some of the groups
3. More error compared to simple random sampling
4. Reduces costs by sampling only some areas or organizations

http://www.ssc.wisc.edu

SYSTEMATIC SAMPLING

In systematic sampling, the researcher begins sampling with a random selection of an element in the range of 1 to k, and then every kth element in the population is selected for the sample (Cooper & Schindler, 2006). The skip interval k is calculated as: k = skip interval = population size / sample size

Systematic sampling, a form of one-stage cluster sampling, is often used in place of


simple random sampling. In systematic sampling, the researcher selects every nth
member after randomly selecting the first through nth element as the starting point.
For example, if the researcher decides to sample every 20th member of the
population, a 5 percent sample, the starting point for the sample is randomly selected
from the first 20 members. A systematic sample is a type of a cluster sample because
each of the first 20 members of the sampling frame defines a cluster that contains 5
percent of the population. A researcher may choose to conduct a systematic sample
instead of a simple random sample for several reasons.

Systematic samples tend to be easier to draw and execute. The researcher does not
have to jump backward and forward through the sampling frame to draw the
members to be sampled. A systematic sample may spread the members selected for
measurement more evenly across the entire population than simple random
sampling. Therefore, in some cases, systematic sampling may be more representative
of the population and more precise.

Illustration
Steps in Systematic Sampling

To conduct systematic sampling, the following steps may be followed:

1. Identify and define the population;


2. Determine the desired sample size;
3. Obtain a list of the population;
4. Determine what K is equal to by dividing the size of the population by
the desired sample size;
5. Start at some random place in the population list. Close your eyes and
point your finger to a name;
6. Starting at that point, take every Kth name on the list until the desired
sample size is reached; and
7. If the end of the list is reached before the desired sample is reached,
go back to the top of the list.

Example:

In the same example of superintend in the simple random sampling, let us see how
the change can be incorporated.

The superintendent in our example would employ systematic sampling as


follows:

1. The population is 5,000 teachers;


2. The sample size is 10%, or 500 teachers;
3. The superintendent has a directory which lists all 5,000 teachers in
alphabetical order;
4. The sampling interval (K) is determined by dividing the population
(5000) by the desired sample size (500). K = 5000/500 = 10;
5. A random number between 0 and 9 is selected as a starting point.
Suppose the number selected is “3”; and
6. Beginning with the 3rd name, every 10th name is selected throughout the population of 5000 names. Thus, teachers 3, 13, 23, 33 ... 4993 would be chosen for the sample (Gay).
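
A minimal sketch of the same systematic draw in Python (the starting point is chosen at random between 0 and k − 1, as in the example):

    import random

    random.seed(3)
    N, n = 5000, 500
    k = N // n                          # skip interval = 10
    start = random.randrange(k)         # random starting point between 0 and k - 1

    sample_ids = list(range(start, N, k))    # every kth teacher in the directory
    print(len(sample_ids), sample_ids[:4])   # 500 teachers, e.g. [3, 13, 23, 33] if start = 3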

Advantages

The advantages of systematic sampling include:

• It is more straight-forward than random sampling;


• Systematic sampling is easier, simpler, and less time-consuming;
• Population will be evenly sampled;
• Through randomization, the systematic sampling will give the same
result that of simple random sampling;
• External validity high; internal validity high;
• The sample is spread evenly across the population selected for the study;
• Researcher can avoid bias if they select units for the sample in a
systematic way;
• A grid doesn't necessarily have to be used, sampling just has to be at
uniform intervals; and
• A good coverage of the study area can be more easily achieved than
using random sampling.

Disadvantages

The disadvantages of systematic sampling include:

• Requires sampling frame;


• The main disadvantage is that if there is an ordering (monotonic trend
or periodicity) in the list which is unknown to the researcher, this may
bias the resulting estimates;
• There is a problem of estimating variance from systematic sampling-
variance is biased; and
• It may therefore lead to over or under representation of a particular
pattern.

NON – PROBABILITY SAMPLING

The advantage of non-probability sampling is that it is a convenient way for researchers


to assemble a sample with little or no cost and/or for those research studies that do
not require representativeness of the population (Babbie, 1990). Non-probability
sampling is a good method to use when conducting a pilot study, when attempting to
question groups who may have sensitivities to the questions being asked and may not
want to answer those questions honestly, and for those situations when ethical concerns
may keep the researcher from speaking to every member of a specific group (Fink,
1995). In nonprobability sampling, subjective judgments play a specific role (Henry,
1990.).

Non-probability sampling, in contrast, does not allow the study's findings to be


generalized from the sample to the population. When discussing the results of a non-
probability sample, the researcher must limit his/her findings to the persons or
elements sampled. This procedure also does not allow the researcher to calculate
sampling statistics that provide information about the precision of the results. The
advantage of non-probability sampling is the ease in which it can be administered.
Non-probability samples tend to be less complicated and less time consuming than
probability samples. If the researcher has no intention of generalizing beyond the
sample, one of the non-probability sampling methodologies will provide the desired
information.

Nonprobability sampling or judgment sampling depends on subjective judgment.


(Salant) The nonprobability method of sampling is a process where probabilities
cannot be assigned to the units objectively, and hence it becomes difficult to
determine the reliability of the sample results in terms of probability (Yamane).

Non-probability sampling – the elements that make up the sample are selected by nonrandom methods. This type of sampling is less likely than probability sampling to produce representative samples. Even though this is true, researchers can and do use non-probability samples. The three main methods are: 1) convenience, 2) quota, and
3) purposive.

NON-PROBABILITY SAMPLING

The three common types of non-probability samples are convenience sampling, quota
sampling, and judgmental sampling.

CONVENIENCE SAMPLING

As the name implies, convenience sampling involves choosing respondents at the


convenience of the researcher. In convenience sampling (also known as haphazard or
accidental sampling), a sample of units or people is obtained, who are most
conveniently available (Zikmund, 2000).

Examples of convenience samples include people-in-the street interviews—the


sampling of people to which the researcher has easy access, such as a class of
students; and studies that use people who have volunteered to be questioned as a
result of an advertisement or other type of promotion. A drawback to this
methodology is the lack of sampling accuracy. Because the probability of inclusion in
the sample is unknown for each respondent, none of the reliability or sampling
precision statistics can be calculated. Convenience samples, however, are employed
by researchers because the time and cost of collecting information can be reduced.

Advantages:

The advantages of convenience samples include;

• One should not worry about taking random samples of the
population;
• Is easy to access, requiring little effort on the part of
the researcher;
• Inexpensive way of ensuring sufficient numbers for a study;
• Extensively used/understood;
• No need for lists of population elements;
• Few rules governing how the sample should be collected; and
• Convenience sampling is often used in pilot studies to allow the
people conducting the study to obtain basic data and trends

Disadvantages

The disadvantages of convenience samples include;

• Variability and bias cannot be measured or controlled;


• Skewing the results quite radically and rendering any conclusive
data hard to make convincing;
• Projecting data beyond sample not justified;
• Can be highly unrepresentative;
• Lack of sampling accuracy; and
• Regarded as a form of sampling bias.

QUOTA SAMPLING

Quota sampling is a sort of stratified sampling but here, the selection of cases within
strata is purely non-random (Barnett, 1991). Quota sampling is often confused with
stratified and cluster sampling—two probability sampling methodologies. All of these
methodologies sample a population that has been subdivided into classes or
categories. The primary differences between the methodologies are that with
stratified and cluster sampling the classes are mutually exclusive and are isolated prior
to sampling. Thus, the probability of being selected is known, and members of the
population selected to be sampled are not arbitrarily disqualified from being included
in the results. In quota sampling, the classes cannot be isolated prior to sampling and
respondents are categorized into the classes as the survey proceeds. As each class fills
or reaches its quota, additional respondents that would have fallen into these classes
are rejected or excluded from the results.

An example of a quota sample would be a survey in which the researcher desires to


obtain a certain number of respondents from various income categories. Generally,
researchers do not know the incomes of the persons they are sampling until they ask
about income. Therefore, the researcher is unable to subdivide the population from
which the sample is drawn into mutually exclusive income categories prior to drawing
the sample. Bias can be introduced into this type of sample when the respondents
who are rejected, because the class to which they belong has reached its quota, differ
from those who are used.

Advantages

The advantages of quota sample include:

• Moderate cost;
• Very extensively used/understood;
• Ensures selection of adequate numbers of subjects with
appropriate characteristics;
• No need for lists of population elements; and
• Introduces some elements of stratification.

Disadvantages

The disadvantages of quota sample include;

• Variability and bias cannot be measured or controlled


(classification of subjects);
• Projecting data beyond sample not justified; and
• Not possible to prove that the sample is representative of
designated population.

JUDGMENTAL SAMPLING

Judgment sampling (also called purposive sampling) requires the researcher to use his
personal judgment to select cases that he thinks will best answer his research
questions and meet his research objectives (Saunder, et al., 2009). In judgmental or
purposive sampling, the researcher employs his or her own "expert” judgment about
who to include in the sample frame. Prior knowledge and research skill are used in
selecting the respondents or elements to be sampled.

An example of this type of sample would be a study of potential users of a new


recreational facility that is limited to those persons who live within two miles of the
new facility. Expert judgment, based on past experience, indicates that most of the
use of this type of facility comes from persons living within two miles. However, by
limiting the sample to only this group, usage projections may not be reliable if the
usage characteristics of the new facility vary from those previously experienced. As
with all non-probability sampling methods, the degree and direction of error
introduced by the researcher cannot be measured and statistics that measure the
precision of the estimates cannot be calculated.

Advantages

The advantages of judgment sampling include:

• Moderate cost;
• Commonly used/understood; and
• Sample will meet a specific objective.

Disadvantages

The disadvantages of judgment sampling include;

• High probability of bias; and


• Projecting data beyond sample not justified.

SNOWBALL SAMPLING

Snowball sampling (also known as network, chain referral, or reputational sampling), based on the idea of a rolling snowball, is where one or a few people are initially sampled and then the sample spreads out on the basis of links to the initial people (Neuman, 2005). Data is collected from a small group of people with special characteristics, who are then asked to identify other people like them. Data is collected from these referrals, who are also asked to identify other people like them. This process continues until a target sample size has been reached, or until additional data collection yields no new information. This method is also known as network or chain referral sampling.

Advantages

The advantages of snow ball sampling include:

• Low cost;
• Useful in specific circumstances;
• Useful for locating rare populations;
• Often the only way to get a group of people who are willing to respond and who are especially knowledgeable;
• Possible to include members of groups where no lists or identifiable
clusters even exist (e.g., drug abusers, criminals).

Disadvantages

The disadvantages of snow ball sampling include:

• Bias because sampling units not independent;


• Projecting data beyond sample not justified;
• Good qualitative material but poor in terms of generating reliable data;
• Loses the benefits of random sampling, so no probability (i.e., statistical) analysis can be done; and
• No way of knowing whether the sample is representative of the
population.

PROBABILITY AND NON-PROBABILITY SAMPLING METHODS: ADVANTAGES AND


DISADVANTAGES

The advantages and disadvantages of the Probability and Non-Probability Sampling


Methods are detailed below in the table:

ADVANTAGES AND DISADVANTAGES OF PROBABILITY AND NON-PROBABILITY


SAMPLING METHODS

Probability Sampling

Advantages:
• Less prone to bias
• Allows estimation of the magnitude of sampling error, from which you can determine the statistical significance of changes/differences in indicators

Disadvantages:
• Requires that you have a list of all sample elements
• More time-consuming
• More costly
• No advantage when small numbers of elements are to be chosen

Non-probability Sampling

Advantages:
• More flexible
• Less costly
• Less time-consuming
• Judgmentally representative samples may be preferred when small numbers of elements are to be chosen

Disadvantages:
• Greater risk of bias
• May not be possible to generalize to the program target population
• Subjectivity can make it difficult to measure changes in indicators over time
• No way to assess the precision or reliability of the data

IMPORTANCE OF SAMPLING

The importance of the sampling can be detailed as follows:

1. Sampling makes possible an intensive study of selected items;
2. Sampling makes it possible to study a larger area;
3. Sampling makes possible the study of representative units;
4. Sampling supports the attainment of sufficient results for the analysis and interpretation of data;
5. Sampling reduces time and cost;
6. Sampling saves labor;
7. Sampling allows better coverage of information, so a good quality of result is made possible;
8. Accuracy of results is expected from sampling;
9. Sampling supports administrative convenience;
10. Sampling is the single dependable procedure for an infinite population; and
11. Sampling ensures economy of resources.

INTERNAL VALIDITY

Three kinds of internal validity threats are related to sampling. They are:

1. Bias;
2. Confounding; and
3. Random errors (Miettinen and Cook, 1970, 1981).

What is bias?

Any trend in the collection, analysis, interpretation, publication, or review of data that
can lead to conclusions that are systematically different from the truth (Last, 2001). It
is a process at any stage of inference tending to produce results that depart systematically from the true values (Fletcher, et al., 1988). It is a systematic error in the design or conduct of a study (Szklo, et al., 2000). The term 'bias' should be reserved for differential or systematic error.

Types of Bias

• Selection bias is any bias arising from the way that study participants
are selected (or select themselves) from the source population.
• Information bias is any bias arising from errors in the classification of
the exposure or disease status of the study participants.
• Confounding occurs if (because of the lack of randomization) the
underlying risk of disease is different in the exposed and non-exposed
groups.

Selection bias: Unrepresentative nature of sample

• Selective differences between comparison groups that impact on


the relationship between exposure and outcome. Usually results
from comparative groups not coming from the same study base and
not being representative of the populations they come from.

Self-selection bias:

Example 1: The researcher wants to determine the prevalence of HIV


infection. He/she asks for volunteers for testing and finds no HIV. Is it correct to conclude that there is no HIV in this location?

Example 2: Healthy worker effect: Another form of self-selection bias.


“Self-screening” process – people who are unhealthy “screen” themselves
out of the active worker population.

Information (misclassification) bias: Any aspect of the way information is collected in


the study that creates a systematic difference between the compared populations
that is not due to the association under study (some call this measurement bias). For
example, errors in measurement of exposure of disease. Method of gathering
information is inappropriate and yields systematic errors in the measurement of
exposures or outcomes.

Interviewer Bias – an interviewer’s knowledge may influence the structure of
questions and the manner of presentation, which may influence responses.

Recall Bias – those with a particular outcome or exposure may remember


events more clearly or amplify their recollections.

Observer Bias – observers may have preconceived expectations of what they


should find on an examination.

Loss to follow-up – those that are lost to follow-up or who withdraw from the
study may be different from those who are followed for the entire study.

Hawthorne effect – an effect first documented at a Hawthorne manufacturing


plant; people act differently if they know they are being watched.

Surveillance bias – the group with the known exposure or outcome may be
followed more closely or longer than the comparison group.

Misclassification bias – errors are made in classifying either disease or


exposure status.

Types of Misclassification Bias

Non-differential: If misclassification of exposure (or disease) is unrelated to disease


(or exposure) then the misclassification is non-differential. Errors in assignment of
group happen in more than one direction. This will dilute the study findings. Bias
toward the null.

Differential: Errors in measurement are one way only. If misclassification of exposure


(or disease) is related to disease (or exposure) then the misclassification is differential.
Example: Measurement bias – instrumentation may be inaccurate, such as using only
one size blood pressure cuff to take measurements on both adults and children

Sources of information bias in this case:

1. Subject variation.
2. Observer variation.
3. Deficiency of tools.
4. Technical errors in measurement.

How to control information Bias in this case?

Some of the ways with which the bias can be controlled are;

• Blinding: Blinding prevents investigators and interviewers from knowing
case/control or exposed/non-exposed status of a given participant
• Form of survey: Mail may impose less “white coat tension” than a phone or
face-to-face interview.
• Questionnaire: Use multiple questions that ask for the same information; this acts as
a built-in double-check
• Accuracy: Make multiple checks in medical records, gathering diagnosis data from
multiple sources

1. Confounding bias: Confounding distorts the true strength of association.
Confounding (Lat. confundere, to mix together) is the
distortion of a measure of the effect of an exposure on an outcome due to the
association of the exposure with other factors that influence the occurrence of
the outcome (Porta, 2008).
For example (1) if a study shows that there is a higher rate of pancreatic cancer in
coffee drinkers than in non-coffee drinkers, an investigator might conclude that
there is a causal relation between coffee consumption and pancreatic cancer.
However, if on further analysis, the investigator finds more cigarette smokers in
the coffee drinking group than the non-coffee drinking group, cigarette smoking
might be confounding the relation between coffee consumption and pancreatic
cancer since it is known that cigarette smoking is associated with pancreatic
cancer. Cigarettes satisfy the requirement that a confounder must be associated
with disease and the factor must be associated with exposure. In this situation,
people who smoke cigarettes tend to drink coffee more than non-smokers.
This control through the study design can be accomplished in three primary ways.
Random selection of the study and control groups is the most efficient means of
control, because random selection not only controls confounding bias, but also
other forms of bias. In the pancreatic example above, random sampling might
have assured equal numbers of cigarette smokers in both groups. Another
possibility might be to limit the study to non-smokers or to match smoking history
which would assure that the same proportion of cigarette smokers were in both
groups. The most common statistical control for confounding is stratification. In
stratification, the investigator would look at the pancreatic cancer rate in cigarette
smokers and nonsmokers separately and then if there is a difference, do a
weighted average across the strata. Advanced statistical methods such as logistic
regression can also be used.
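To see how stratification exposes a confounder, the following rough Python sketch works
through hypothetical counts (all numbers are invented for illustration, not taken from
any real study): the crude risk ratio suggests that coffee roughly doubles the risk, yet
within each smoking stratum the risk ratio is 1, so smoking rather than coffee explains
the crude association. A weighted average of the stratum-specific ratios would likewise
be about 1.

# Hypothetical counts per smoking stratum:
# (cases among coffee drinkers, coffee drinkers, cases among non-drinkers, non-drinkers)
strata = {
    "smokers":     (16, 800, 4, 200),
    "non-smokers": (1, 200, 4, 800),
}

# Crude (unstratified) risk ratio, ignoring smoking altogether
coffee_cases = sum(s[0] for s in strata.values())
coffee_n = sum(s[1] for s in strata.values())
other_cases = sum(s[2] for s in strata.values())
other_n = sum(s[3] for s in strata.values())
crude_rr = (coffee_cases / coffee_n) / (other_cases / other_n)
print(f"crude RR = {crude_rr:.2f}")              # about 2.1 - looks like an effect

# Stratified analysis: risk ratio within each smoking stratum
for name, (a, n1, b, n0) in strata.items():
    stratum_rr = (a / n1) / (b / n0)
    print(f"RR among {name}: {stratum_rr:.2f}")  # 1.00 in both strata - no real effect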

Example 2: The mothers of breast-fed babies are of higher social class, and the
babies thus have better hygiene, less crowding and perhaps other factors that
protect against gastroenteritis. The better hygiene and reduced crowding are truly
protective against gastroenteritis, but we mistakenly attribute their effects to breast
feeding. This is called confounding, because the observation is correct, but its
explanation is wrong.

2. Random Errors: Deviation of results and inferences from the truth, occurring
only as a result of the operation of chance. Random error can produce type 1 or
type 2 errors. Bias, in contrast, is a systematic, non-random deviation of results and
inferences from the truth, or a process leading to such deviation: any trend in the
collection, analysis, interpretation, publication or review of data that can lead to
conclusions which are systematically different from the truth.
Example 1: In a cohort study, babies of women who bottle feed and women
who breast feed are compared, and it is found that the incidence of
gastroenteritis, as recorded in medical records, is lower in the babies who are
breast-fed. Lack of good information on feeding history results in some breast-
feeding mothers being randomly classified as bottle-feeding, and vice-versa. If
this happens, the study finding underestimates the true RR, whichever feeding
modality is associated with higher disease incidence, producing a type 2 error

Protection against random error and random misclassification

Random error can work to falsely produce an association (type 1 error) or falsely not produce
an association (type 2 error). We protect ourselves against random misclassification
producing a type 2 error by choosing the most precise and accurate measures of exposure
and outcome.

Protection against type 1 errors

We protect our study against random type 1 errors by establishing that the result must be
unlikely to have occurred by chance (e.g. p < .05). P-values are established entirely to protect
against type 1 errors due to chance, and do not guarantee protection against type 1 errors
due to bias or confounding. This is the reason we say statistics demonstrate association but
not causation.

Protection against type 2 errors

We protect our study against random type 2 errors by providing adequate sample size, and
hypothesizing large differences. The larger the sample size, the easier it will be to detect a
true difference, and the largest differences will be the easiest to detect. (Imagine how hard it
would be to detect a 1% increase in the risk of gastroenteritis with bottle-feeding).

Two ways to increase power

The probability that a study will detect a true difference of a given size is called the
power of the study.

1. Choosing the most precise and accurate measures of exposure and outcome
increases the power of our study, because the variances of the outcome measures,
which enter into statistical testing, are decreased.
2. Having an adequately sized sample of study subjects (both points are illustrated
in the sketch below).
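The following rough simulation illustrates both points. The scenario and numbers
(gastroenteritis risks of 10% versus 20%, or 10% versus 11%) are invented for
illustration, and the function name simulated_power is not a standard library routine;
it simply repeats a two-proportion z-test on simulated data and counts how often the
null hypothesis is rejected.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulated_power(n_per_group, p1, p2, alpha=0.05, sims=2000):
    # Estimate power by simulation: the share of repetitions in which a
    # two-proportion z-test rejects H0 at the given alpha level.
    rejections = 0
    for _ in range(sims):
        x1 = rng.binomial(n_per_group, p1)
        x2 = rng.binomial(n_per_group, p2)
        p_hat1, p_hat2 = x1 / n_per_group, x2 / n_per_group
        p_pool = (x1 + x2) / (2 * n_per_group)
        se = np.sqrt(p_pool * (1 - p_pool) * 2 / n_per_group)
        if se == 0:
            continue
        z = (p_hat1 - p_hat2) / se
        if 2 * norm.sf(abs(z)) < alpha:
            rejections += 1
    return rejections / sims

# A large hypothesized difference (20% vs 10% risk) is easy to detect
print(simulated_power(200, 0.20, 0.10))    # roughly 0.8 with 200 babies per group
# A 1% increase (11% vs 10%) is very hard to detect at the same sample size
print(simulated_power(200, 0.11, 0.10))    # barely above alpha
# A much larger sample is needed before the small difference becomes detectable
print(simulated_power(20000, 0.11, 0.10))  # noticeably higher power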

Summary of Validity Issues

The summary of the internal validity issues can be detailed as:

• Reduce random error by making the study as large as possible and
through appropriate study design;
• Minimize selection bias by having a good response rate (and
selecting controls appropriate in a case-control study);
• Ensure that information bias is non-differential and keep it as small
as possible;
• Minimize confounding in the study design and control for it in the
analysis;
• Study design always involves a compromise between these issues,
e.g. obtaining better exposure information may reduce
information bias, but may increase random error if the study size is
thereby reduced;
• Confounding is often weaker than is “expected”; and
• Non-differential information bias cannot usually cause “false
positive” findings

External Validity

External validity involves the extent to which the results of a study can be generalized
(applied) beyond the sample. In other words, can you apply what you found in your
study to other people (population validity) or settings (ecological validity)? The other
validity criteria include the temporal, treatment variation and outcome validity.

Threats to External Validity

Population Validity

How can we increase the external validity? One way, based on the sampling model,
recommends that the researcher should do a good job of drawing a sample from a
population. Population validity is the extent to which the results of a study can be
generalized from the specific sample that was studied to a larger group of subjects.
Inferential statistics contribute evidence to establish population validity of a set of
research results. Population validity can only be achieved if the accessible population
is logically representative of the target population.

1. The extent to which one can generalize from the study sample to a defined
population - If the sample is drawn from an accessible population, rather
than the target population, generalizing the research results from the
accessible population to the target population is risky.
2. The extent to which personological variables interact with treatment
effects – if the study is an experiment, it may be possible that different
results might be found with students at different grades (a personological
variable).

Ecological Validity

Ecological validity is the extent to which the results of an experiment can be generalized
from the set of environmental conditions created by the researcher to other
environmental conditions (settings and conditions).

• Explicit description of the experimental treatment (not sufficiently described
for others to replicate)
• If the researcher fails to adequately describe how he or she conducted a study,
it is difficult to determine whether the results are applicable to other settings.
• Multiple-treatment interference (catalyst effect) If a researcher were to apply
several treatments, it is difficult to determine how well each of the treatments
would work individually. It might be that only the combination of the
treatments is effective.
• Hawthorne effect (attention causes differences) Subjects perform differently
because they know they are being studied. "...The external validity of the
experiment is jeopardized because the findings might not generalize to a
situation in which researchers or others who were involved in the research are
not present" (Gall, Borg, and Gall, 1996)
• Novelty and disruption effect (anything different makes a difference) A
treatment may work because it is novel and the subjects respond to the
uniqueness, rather than the actual treatment. The opposite may also occur,
the treatment may not work because it is unique, but given time for the
subjects to adjust to it, it might have worked.
• Experimenter effect (it only works with this experimenter). The treatment
might have worked because of the person implementing it. Given a different
person, the treatment might not work at all.
• Pretest sensitization (pretest sets the stage) A treatment might only work if a
pretest is given. Because they have taken a pretest, the subjects may be more
sensitive to the treatment. Had they not taken a pretest, the treatment would
not have worked.
• Posttest sensitization (posttest helps treatment "fall into place"). The posttest
can become a learning experience. "For example, the posttest might cause
certain ideas presented during the treatment to 'fall into place' ". If the
subjects had not taken a posttest, the treatment would not have worked.
• Interaction of history and treatment effect (...to everything there is a time...)
Not only should researchers be cautious about generalizing to other
population, caution should be taken to generalize to a different time period.
As time passes, the conditions under which treatments work change.
• Measurement of the dependent variable (maybe only works with M/C tests)
A treatment may only be evident with certain types of measurements. A
teaching method may produce superior results when its effectiveness is tested
with an essay test, but show no differences when the effectiveness is
measured with a multiple choice test.
• The interaction of time of measurement and treatment effect (it takes a while
for the treatment to kick in). It may be that the treatment effect does not occur
until several weeks after the end of the treatment. In this situation, a posttest
at the end of the treatment would show no impact, but a posttest a month
later might show an impact. (Bracht, and Glass, 1968; Gall, Borg, and Gall,
1996).

Temporal Validity

Temporal validity is the extent to which the study results can be generalized across
time. For example, assume you find that a certain discipline technique works well
with many different kinds of children and in many different settings. After many years,
you might note that it is not working any more; you will need to conduct additional
research to make sure that the technique is robust over time and, if not, to figure out
why and to find out what works better. Likewise, findings from far in the past often
need to be replicated to make sure that they still work.

Treatment Variation Validity

Treatment variation validity is the degree to which one can generalize the results of
the study across variations of the treatment.

For example, if the treatment has varied a little, will the results be similar?

• One reason this is important is that when an intervention is administered by
practitioners in the field, it is unlikely that the intervention will be administered
exactly as it was by the original researchers.
• This is, by the way, one reason that interventions that have been shown to
work end up failing when they are broadly applied in the field.

Outcome Validity

Outcome validity is the degree to which one can generalize the results of a study
across different but related dependent variables.

• For example, if a study shows a positive effect on self-esteem, will it also
show a positive effect on the related construct of self-efficacy?
• A good way to understand the outcome validity of your research study is
to include several outcome measures so that you can get a more complete
picture of the overall effect of the treatment or intervention.

When to do research?

If sampling is found appropriate for a research, the researcher, then:

1. Identifies the target population as precisely as possible, and in a way that
makes sense in terms of the purpose of study (Salant and Dillman, 1994).
2. Puts together a list of the target population from which the sample will be
selected. (Salant and Dillman, 1994).
3. This list is termed as a frame (more appropriately list frame) by many
statisticians (Raj, 1972).
4. Selects the sample and decides on a sampling technique (Salant and Dillman,
1994), and makes an inference about the population (Raj, 1972) – see the sketch
below.
All these four steps are interwoven and cannot be considered in isolation from
one another. Simple random sampling, systematic sampling and stratified sampling fall
into the category of simple sampling techniques. Complex sampling techniques are
used only in the presence of large experimental data sets, when efficiency is required,
and when making precise estimates about relatively small groups within large
populations (Salant).
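As a simple illustration of these steps, the sketch below draws a simple random sample
(and, for comparison, a systematic sample) from a hypothetical list frame; the frame of
1,200 students and the sample size of 120 are invented for illustration.

import random

# Hypothetical list frame: identifiers for every unit in the target population
frame = [f"student_{i:04d}" for i in range(1, 1201)]   # 1,200 students

random.seed(7)                        # fixed seed so the draw can be reproduced
sample = random.sample(frame, k=120)  # simple random sample without replacement
print(len(sample), sample[:3])

# A systematic sample instead takes every k-th unit after a random start
step = len(frame) // 120
start = random.randrange(step)
systematic = frame[start::step]       # also 120 units
print(len(systematic), systematic[:3])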

CASE

Sampling bias is a tendency to favor the selection of units that have particular
characteristics. Sampling bias is usually the result of a poor sampling plan. The most
notable is the bias of non-response when for some reason some units have no chance
of appearing in the sample. For example, take a hypothetical case where a survey was
conducted recently by the Cornell Graduate School to find out the level of stress that
graduate students were going through. A mail questionnaire was sent to 100
randomly selected graduate students. Only 52 responded and the results were that
students were not under stress at that time when the actual case was that it was the
highest time of stress for all students except those who were writing their thesis at
their own pace. Apparently, this is the group that had the time to respond. The
researcher who was conducting the study went back to the questionnaire to find out
what the problem was and found that all those who had responded were third- and
fourth-year PhD students. Bias can be very costly and has to be guarded against as much
as possible. For this case, $2000.00 had been spent and there were no reliable results;
in addition, it cost the researcher his job, since his employer thought that if he was
qualified, he should have known that beforehand and planned how to avoid it. A means of
selecting the units of analysis must be designed to avoid the more obvious forms of
bias. Another example would be where you would like to know the average income of
some community and you decide to use the telephone numbers to select a sample of
the total population in a locality where only the rich and middle class households have
telephone lines. You will end up with high average income which will lead to the wrong
policy decisions.

Conclusion

This particular section of the book concentrates on the sample, sampling, the sampling
frame, and the importance of sampling in research. A sample is representative of the
study population. A research study that does not follow an appropriate sampling method
and process may produce imprudent outcomes (this concerns external validity – the
degree to which the sample's outcomes can be generalized to the population the
researcher cares about). Appropriate sampling is indispensable if a researcher wants to
arrive at conclusions which are valid for the entire study population. Thus, the sampling
process is used in a research study to widen the scope of generalization of the findings
to the target population. The knowledge obtained from this chapter removes doubts
pertaining to the sampling process and will help the reader engage in sound research
practice.

DISCUSSION QUESTIONS
1. What is the purpose of sampling?
2. What is the difference between sample and sample frame?
3. Differentiate between sample and element?
4. What are the broad categories of sampling? Explain with suitable
examples?
5. What do you mean by probabilistic sampling? Explain two types of
probabilistic sampling?
6. What do you mean by non-probabilistic sampling? Explain two types of
non-probabilistic sampling?
7. What do you mean by convenient sampling? Explain with suitable
example.
8. What is quota sampling? Explain with suitable example.
9. What is judgmental sampling? Explain with suitable example.
10. What is snowball sampling? Explain with suitable example.

11. Explain simple random sampling method?
12. Differentiate simple random sampling method and stratified random
sampling method?
13. Explain cluster sampling? Explain with suitable example.
14. How would you explain systematic sampling?
15. Explain the advantages and disadvantages of probabilistic sampling
method?
16. Explain the advantages and disadvantages of non-probabilistic
sampling method?
17. Why is sampling required in research? Why is it indispensable?

MODULE 9
HYPOTHESES

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand and appreciate the concept of hypothesis.


2. Understand different types of hypothesis.
3. Learn the scientific methods in hypothesis formation.
4. Analyze the steps in hypothesis testing.

INTRODUCTION

After formulating the statement of the problem, a researcher moves towards outlining his
research questions and developing research hypotheses. Research questions and
hypotheses provide a sound conceptual foundation for a research project. Developing
research questions is the most important task in your research project, as it influences
every aspect of your research, including the theory to be applied, the method to be used,
the data to be gathered and the unit of analysis to be assessed. Well thought out
research questions provide focus to a researcher, determine what, when, where
and how the data will be collected, and provide an important link between the conceptual
and logistic aspects of the research project (Stuermer, 2009; Ohab, 2010).

A hypothesis (from Greek, plural hypotheses) is a proposed explanation for an
observable phenomenon. The term derives from the Greek hypotithenai, meaning "to
put under" or "to suppose".

DEFINITION

“A tentative generalization, the validity of which remains to be tested”. – Lundberg

“Hypotheses are single tentative guesses, good hunches – assumed for use in
devising a theory or planning experiments intended to be given a direct
experimental test when possible”. (Rogers, 1966)

A hypothesis may be precisely defined as a tentative proposition suggested as
a solution to a problem or as an explanation of some phenomenon. (Ary,
Jacobs and Razavieh, 1984).

A hypothesis is a conjectural statement of the relation between two or more
variables. (Kerlinger, 1956).

“A hypothesis can be defined as a tentative explanation of the research
problem, a possible outcome of the research, or an educated guess about the
research outcome.” (Sarantakos, 1993, 1991)

“An hypothesis is a statement or explanation that is suggested by knowledge
or observation but has not, yet, been proved or disproved.” (Clark and Hockey, 1981).

A hypothesis is a formal statement that presents the expected relationship
between an independent and dependent variable. (Creswell, 1994)

A hypothesis relates theory to observation and observation to theory. (Ary,
Jacobs and Razavieh, 1984)

Hypotheses are relational propositions. (Kerlinger, 1956)

The key word is testable. That is, you will perform a test of how two variables might
be related. This is when you are doing a real experiment. You are testing variables.

WHAT IS HYPOTHESIS?

A hypothesis is an explanation for a phenomenon which can be tested in some way
which ideally either proves or disproves the hypothesis. When someone formulates a
hypothesis, he or she does so with the intention of testing it, and he or she should not
know the outcome of potential tests before the hypothesis is made. When
formulating a hypothesis, the ideals of the scientific method are often kept in mind,
so the hypothesis is designed to be testable in a way which could be replicated by
other people. It is also kept clear and simple, and the hypothesis relies on known
information and reasoning. A hypothesis does not have to be right or wrong, but the
person formulating the hypothesis does have to be prepared to test the theory to its
limits. If someone hypothesizes that exposure to X causes Y in lab rats, for example,
he or she must see if exposure to other things also causes Y. When scientists publish
results which support a hypothesis, they often detail the steps they took to disprove
the hypothesis as well as the steps which confirmed it, to make the case that much
stronger.

The hypothesis is a clear statement of what is intended to be investigated. It should
be specified before research is conducted and openly stated in reporting the results.
This allows the researcher to:

• Identify the research objectives;
• Identify the key abstract concepts involved in the research;
• Identify its relationship to both the problem statement and the
literature review;
• A problem cannot be scientifically solved unless it is reduced to
hypothesis form; and
• It is a powerful tool of advancement of knowledge, consistent with
existing knowledge and conducive to further enquiry.

NATURE OF HYPOTHESIS

• It can be tested – verifiable or falsifiable;


• Hypotheses are not moral or ethical questions;
• It is neither too specific nor too general;
• It is a prediction of consequences; and
• It is considered valuable even if proven false

An Example

Imagine the following situation:

You are a nutritionist working in a zoo, and one of your responsibilities is to develop a
menu plan for the group of monkeys. In order to get all the vitamins they need, the
monkeys have to be given fresh leaves as part of their diet. Choices you consider
include leaves of the following species: (a) A (b) B (c) C (d) D and (e) E. You know that
in the wild the monkeys eat mainly B leaves, but you suspect that this could be
because they are safe whilst feeding in B trees, whereas eating any of the other
species would make them vulnerable to predation. You design an experiment to find
out which type of leaf the monkeys actually like best: You offer the monkeys all five
types of leaves in equal quantities, and observe what they eat. There are many
different experimental hypotheses you could formulate for the monkey study. For
example:

When offered all five types of leaves, the monkeys will preferentially feed on B leaves.
This statement satisfies both criteria for experimental hypotheses.

Prediction: It predicts the anticipated outcome of the experiment.

Testable: Once you have collected and evaluated your data (i.e. observations of what
the monkeys eat when all five types of leaves are offered), you know whether or not
they ate more B leaves than the other types (http://www.public.asu.edu).
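As a sketch of how such data could be evaluated (the feeding counts below are invented
for illustration), a chi-square goodness-of-fit test compares the observed choices with
the equal preference expected under the null hypothesis of no preference.

from scipy import stats

# Hypothetical counts of feeding bouts directed at each of the five leaf species A-E
observed = [62, 210, 45, 51, 32]       # species B is chosen far more often
expected = [sum(observed) / 5] * 5     # equal preference under the null hypothesis

chi2, p_value = stats.chisquare(observed, expected)
print(chi2, p_value)   # a very small p-value supports the preference hypothesis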

CHARACTERISTICS OF GOOD HYPOTHESIS

According to Cooper and Schindler (2006), a good research hypothesis fulfills three
conditions:

1. It is adequate for its purpose. It means that the hypothesis depicts the original
research problem, explains whether the research study is descriptive or explanatory
(causal), indicates suitable research design of the study and provides the framework
for organizing the conclusions that will result.
2. It is testable. It means that the hypothesis requires acceptable techniques, reveals
consequences that can be deduced for testing purposes and needs few conditions or
assumptions.
3. It is better than its rivals. It means that the hypothesis is what the experts believe
is powerful enough to reveal more facts and greater variety or scope of information
than its competing hypothesis.

In a good research project, hypotheses flow naturally from the research questions
formulated. Hypothesis testing starts with an assumed statement about a population
parameter and is a procedure based on sample evidence and probability theory to
determine whether the hypothesis is right or wrong (Lind, Marchal and Wathen,
2008). The number of hypotheses formulated in a study depends on the number of
relationships being studied and the overall complexity of the research framework.

GENERAL CHARACTERISTICS OF A HYPOTHESIS

A hypothesis has the following general characteristics:

• It should have elucidating power.


• It should strive to furnish an acceptable explanation of the phenomenon.
• It must be verifiable.
• It must be formulated in simple, understandable terms.
• It should correspond with existing knowledge.

THE PURPOSE AND FUNCTION OF HYPOTHESIS

Various functions of hypothesis include:

• It offers explanations for the relationships between those variables that can
be empirically tested;
• It furnishes proof that the researcher has sufficient background knowledge to
enable him/her to make suggestions in order to extend existing knowledge.
• It gives direction to an investigation;

• It structures the next phase in the investigation and therefore furnishes
continuity to the examination of the problem;
• It contributes to the development of theory;
• It determines appropriate techniques for the analysis of data;
• It suggests which type of research is likely to be most appropriate; and
• It specifies the source of the data which shall be studied and in what context they
shall be studied.

USABLE HYPOTHESIS

The usable hypotheses are described as;

• It must have explanatory power;


• It must state the expected relationship between variables;
• It must be testable;
• It should be consistent with the existing body of knowledge; and
• It should be stated as simply and concisely as possible.

FORMULATING HYPOTHESIS

Once the research question has been stated, the next step is to define testable
hypotheses. Usually, a research question is a broad statement that is not directly
measurable by a research study. The research question needs to be broken down into
smaller units, called hypotheses that can be studied. A hypothesis is a statement that
expresses the probable relationship between variables.

DEFINING VARIABLES

The purpose of a research study is to discover unknown qualities of persons or things.


To measure these qualities, we define variables. In a study there are several classes of
variables.

Independent (or experimental) variable: There are two types of independent
variables: active and attribute. If the independent variable is an active variable then
we manipulate the values of the variable to study its effect on another variable. In the
above example, we alter the anxiety level to see if responsiveness to pain reduction
medication is enhanced. The anxiety level is the active independent variable. An
attribute variable is a variable where we do not alter the variable during the study.
For example, we might want to study the effect of age on weight. We cannot change
a person's age, but we can study people of different ages and weights.

Dependent variable (or Criterion measure): This is the variable that is affected by the
independent variable. Responsiveness to pain reduction medication is the dependent
variable in the above example. The dependent variable is dependent on the
independent variable. Another example: If I praise you, you will probably feel good,
but if I am critical of you, you will probably feel angry. My response to you is the
independent variable, and your response to me is the dependent variable, because
what I say influences how you respond.

Control variable: A control variable is a variable that affects the dependent variable.
When we "control a variable" we wish to balance its effect across subjects and groups
so that we can ignore it, and just study the relationship between the independent and
the dependent variables. You control for a variable by holding it constant, e.g., keep
humidity the same, and vary temperature, to study comfort levels.

Extraneous variable: This is a variable that probably does influence the relationship
between the independent and dependent variables, but it is one that we do not
control or manipulate. For example, barometric pressure may affect pain thresholds
in some clients, but we do not know how this operates or how to control for it. Thus,
we note that this variable might affect our results, and then ignore it. Often research
studies do not find evidence to support the hypotheses because of unnoticed
extraneous variables that influenced the results. Extraneous variables which influence
the study in a negative manner are often called confounding variables.

ILLUSTRATION OF VARIABLES

Variables must be defined in terms of measurable behaviors. The operational
definition of a variable describes the variable.

JOB STRESS AND ADJUSTMENT

The original diagram, summarized here, links job stress to adjustment: the independent
variables are role ambiguity, role overload, role conflict, role authority and rigid control;
anxiety and tension form the intervening variable; and organisational adjustment is the
dependent variable.
DERIVATIONS OF HYPOTHESIS
INDUCTIVE
The researcher notes observations of behavior, thinks about the problem, turns to the
literature for clues, makes additional observations, derives probable relationships,
and then hypothesizes an explanation. The hypothesis is then tested.

• May be limited in scope.


• Can lead to unconnected findings, which could explain a little
about the research.

Observations → Study → Probable relationship → Hypothesis → Theory

DEDUCTIVE

The researcher begins by selecting a theory, then derives a hypothesis leading to
deductions derived through symbolic logic or mathematics. These deductions are then
presented in the form of statements accompanied by an argument or a rationale for
the particular proposition.

• Theories are not speculations but are previously known facts.


• The process is a technique to test the adequacy of the theory.

Theory → Hypothesis → Study → Deduction → Statement

TYPES OF HYPOTHESIS

Descriptive hypothesis

Descriptive hypotheses are propositions that describe the characteristics of a variable -
an object, person, organization, situation or event. E.g. public enterprises are more
amenable to centralized planning.

Descriptive hypotheses aim to describe, not to explain; they are ways of answering
the question “what?” A descriptive hypothesis claims that all instances (of a given
type) of phenomenon X have observable feature Y. A descriptive hypothesis makes an
empirical claim about the generality of a condition. If one claims that all dogs have
tails — that the condition of having tails is valid for all dogs — here one is making a
descriptive hypothesis, which can of course be tested empirically.

Statistical hypothesis

A statistical hypothesis is an assumption about a population parameter. This
assumption may or may not be true. The best way to determine whether a statistical
hypothesis is true would be to examine the entire population. Since that is often
impractical, researchers typically examine a random sample from the population. If
sample data is consistent with the statistical hypothesis, the hypothesis is accepted; if
not, it is rejected. There are two types of statistical hypotheses: the null hypothesis
and the alternative hypothesis.

Example: Group A’s performance in team work is better than that of Group B.

Null hypothesis

The null hypothesis characteristically proposes a common or default position, such as
that there is no association between two measured phenomena, or that a potential
treatment has no effect. The term was initially coined by English geneticist and
statistician Ronald Fisher. A null hypothesis may read, “There is no difference
between…..” Ho states the opposite of what the experimenter would expect or
predict. The final conclusion of the investigator will either retain a null hypothesis or
reject a null hypothesis in favor of the alternative hypothesis. Here, not rejecting Ho
does not really mean that Ho is true. There might not be enough evidence against Ho.

Example: “There is no significant difference in the anxiety level of children of high IQ
and those of low IQ.”

Example: There is no significant relation between the stress experienced on the job
and the job satisfaction of employees.

H0: μ = μ0 = 100, where μ is the population mean and μ0 is the hypothesized mean.

Alternative Hypothesis

An alternative hypothesis is a statement that suggests a potential outcome that the
researcher may expect (H1 or HA).

The alternative hypothesis comes from prior literature or studies. It is established only
when a null hypothesis is rejected. Often the alternative hypothesis is the desired
conclusion of the investigator.

The two types of alternative hypothesis are:

1. Directional hypothesis; and
2. Non-directional hypothesis.

a. Directional Hypothesis
The second type of hypothesis is a directional hypothesis. Directional hypotheses are
never phrased as a question, but always as a statement. Directional hypotheses
measure the effects of two variables on each other; in other words, they measure the
direction of variation of two variables. This effect of one variable on the other can be
in a positive or a negative direction. Directional hypotheses always express the effect
of an independent variable on a dependent variable. A directional hypothesis specifies
the direction of the predicted relationship, that is, whether the predicted relationship
will be positive or negative.

Consider the following example:

A: The mean height of all men in the city is 5' 6'';

B: The mean height of all men in the city is greater than 5' 6’’; and

B is a directional hypothesis because it specifies a direction (greater than).

b. Non-Directional hypothesis

A non-directional hypothesis is a type of alternative hypothesis in which no definite
direction of the expected findings is specified. The researcher may not know what can
be predicted from the past literature. It may read, “...there is a difference between...”

Example: “There is a difference in the anxiety level of the children of high IQ and those
of low IQ.”
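In practice the difference shows up in whether a one-tailed or a two-tailed test is used.
The sketch below simulates anxiety scores for the two IQ groups (all figures are
invented, and the alternative argument of scipy.stats.ttest_ind requires a reasonably
recent version of SciPy).

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
high_iq = rng.normal(48, 10, 30)   # hypothetical anxiety scores of 30 high-IQ children
low_iq = rng.normal(52, 10, 30)    # hypothetical anxiety scores of 30 low-IQ children

# Non-directional hypothesis: the anxiety levels differ (two-tailed test)
t_two, p_two = stats.ttest_ind(high_iq, low_iq)

# Directional hypothesis: high-IQ children have lower anxiety (one-tailed test)
t_one, p_one = stats.ttest_ind(high_iq, low_iq, alternative="less")

# When the observed difference lies in the predicted direction, the one-tailed
# p-value is half the two-tailed one.
print(p_two, p_one)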

Working hypothesis

While planning the study of problem, hypotheses are formed. Initially they may not
be specific. In such cases they are referred to as “working hypothesis” which are
subject to modification as the investigation proceeds.

Example: The management system is not oriented to human resource development.

Relational hypothesis

Relational hypotheses are propositions that describe the relationship between two
variables. The relationship suggested may be positive, negative or causal.

Example:

1. Families with higher incomes spend more on recreation;

2. Participative management promotes motivation among executives; and
3. Labour productivity decreases as working duration increases.

Causal hypothesis

The causal hypothesis states that the existence of, or a change in, one variable causes
or leads to an effect on another variable. The first variable is called the independent
variable and the latter the dependent variable. When dealing with the causal
relationship between variables the researcher must consider the direction in which
such a relationship flows, i.e., which is the cause and which is the effect.

Example: A feeling of inequality develops dissatisfaction among employees.

Common sense hypothesis

The common sense hypothesis indicates that there are empirical uniformities
perceived through day to day observation.

Example: Employees of the upper class have low adjustment in the organization
compared to those of the lower class.

Complex hypothesis

A complex hypothesis aims at testing the existence of logically derived relationships
between empirical uniformities.

Example: Members of the lower level hierarchy suffer from oppression psychosis-
(Purposeful distortion of empirical exactness).

Analytical hypothesis

An analytical hypothesis states the relationship between analytical variables. It specifies
the relationship between changes in one property and changes in another.

TYPE I AND TYPE 2 ERROR

There are two kinds of errors that can be made in significance testing: (1) a true null
hypothesis can be incorrectly rejected and (2) a false null hypothesis can fail to be
rejected. The former error is called a Type I error and the latter error is called a Type
II error. The probability of a Type I error is designated by the Greek letter alpha (α) and
is called the Type I error rate; the probability of a Type II error (the Type II error rate)
is designated by the Greek letter beta (β). These two types of errors are defined in the
table.

                          True State of the Null Hypothesis
Statistical Decision      H0 True            H0 False
Reject H0                 Type I error       Correct
Do not reject H0          Correct            Type II error
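A quick simulation illustrates the meaning of α. The figures below (a true mean of 100,
a standard deviation of 15, samples of 30) are invented for illustration: when the null
hypothesis is true, a test carried out at the 0.05 level still rejects it in roughly 5% of
repetitions, and every one of those rejections is a Type I error.

import numpy as np

rng = np.random.default_rng(1)
alpha, n, sims = 0.05, 30, 10_000
false_positives = 0
for _ in range(sims):
    sample = rng.normal(loc=100, scale=15, size=n)   # H0 (mu = 100) is true here
    z = (sample.mean() - 100) / (15 / np.sqrt(n))    # z test with known sigma
    if abs(z) > 1.96:                                # reject H0 at the 0.05 level
        false_positives += 1

print(false_positives / sims)   # close to alpha, i.e. the Type I error rate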

TYPE I ERROR

Type I error, also known as an "error of the first kind", an α error, or a "false positive":
the error of rejecting a null hypothesis when it is actually true. Plainly speaking, it
occurs when we are observing a difference when in truth there is none, thus indicating
a test of poor specificity. An example of this would be if a test shows that a woman is
pregnant when in reality she is not. Type I error can be viewed as the error of excessive
credulity.

Other Examples: We can say that Type I error has been committed when:

1. an intelligent student is not promoted to the next class;


2. a good player is not allowed to play the match;
3. an innocent person is punished;
4. a driver is punished for no fault of his own; and
5. a good worker is not paid his salary in time.

TYPE II ERROR

Type II error, also known as an "error of the second kind", a β error, or a "false
negative": the error of failing to reject a null hypothesis when it is in fact not true. In
other words, this is the error of failing to observe a difference when in truth there is
one, thus indicating a test of poor sensitivity.

Example 1: If a test shows that a woman is not pregnant, when in reality, she is. The
type II error can be viewed as the error of excessive skepticism.

Example 2: In medicine, a type II error can be especially serious; telling a patient that
they are free of disease, when they are not, is potentially dangerous.

CASE: TYPE 1 ERROR, TYPE II ERROR:


CONSUMERS RISK AND PRODUCERS RISK
A manufacturer of handheld calculators receives very large shipments of printed
circuits from a supplier. It is too costly and time-consuming to inspect all incoming
circuits, so when each shipment arrives, a sample of circuits is tested in order to test
H0: pi = 0.05 versus HA: pi > 0.05, where pi is the true proportion of defects in the
shipment. If the null hypothesis is not rejected, the shipment is accepted, and the
circuits are used in the production of the calculators. If the null hypothesis is rejected,
the entire
shipment is returned to the supplier due to inferior quality. (A shipment is defined to
be of inferior quality if it contains more than 5% defects.)
In this context, define type I and type II errors.
1. From the calculator manufacturers' point of view, which type of error would
be considered more serious and explain why?
2. From the printed circuit suppliers' point of view, which type of error would be
considered more serious and explain why?
HYPOTHESIS TESTING

A GENERAL PROCEDURE FOR CONDUCTING HYPOTHESIS TESTS

The original flowchart can be summarized as the following sequence of steps:

1. Specify the level of significance (the α level) and decide the correct sampling
distribution.
2. Select a random sample and work out an appropriate value (the test statistic).
3. Calculate the probability of obtaining such a value under the null hypothesis.
4. Compare that probability with α (in the case of a one-tailed test) or α/2 (in the case
of a two-tailed test). If the probability is equal to or smaller than this value, reject H0,
thereby running the risk of committing a Type I error; otherwise accept H0, thereby
running the risk of committing a Type II error.

All hypothesis tests are conducted the same way. The researcher states a hypothesis
to be tested, formulates an analysis plan, analyzes sample data according to the plan,
and accepts or rejects the null hypothesis, based on the results of the analysis.

▪ State the hypotheses. Every hypothesis test requires the analyst to state a null
hypothesis and an alternative hypothesis. The hypotheses are stated in such a
way that they are mutually exclusive. That is, if one is true, the other must be
false; and vice versa.
▪ Formulate an analysis plan. The analysis plan describes how to use sample
data to accept or reject the null hypothesis. It should specify the following
elements.
• Significance level. Often, researchers choose significance levels equal
to 0.01, 0.05, or 0.10; but any value between 0 and 1 can be used.
• Test method. Typically, the test method involves a test statistic and a
sampling distribution. Computed from sample data, the test statistic
might be a mean score, proportion, difference between means, the
difference between proportions, z-score, t-score, chi-square, etc. Given
a test statistic and its sampling distribution, a researcher can assess
probabilities associated with the test statistic. If the test statistic
probability is less than the significance level, the null hypothesis is
rejected.
▪ Analyze sample data. Using sample data, perform computations called for in
the analysis plan.

Test statistic. When the null hypothesis involves a mean or proportion,
use either of the following equations to compute the test statistic.
Test statistic = (Statistic - Parameter) / (Standard deviation of statistic)
Test statistic = (Statistic - Parameter) / (Standard error of the statistic)
Where Parameter is the value appearing in the null hypothesis, and
Statistic is the point estimate of the parameter. As part of the analysis,
you may need to compute the standard deviation or standard error of
the statistic. Previously, we presented common formulas for the
standard deviation and standard error.
When the parameter in the null hypothesis involves categorical data,
you may use a chi-square statistic as the test statistic. Instructions for
computing a chi-square test statistic are presented in the lesson on the
chi-square goodness of fit test.
P-value. The P-value is the probability of observing a sample statistic
as extreme as the test statistic, assuming the null hypothesis is true.

Interpret the results. If the sample findings are unlikely, given the null
hypothesis, the researcher rejects the null hypothesis. Typically, this
involves comparing the P-value to the significance level, and rejecting
the null hypothesis when the P-value is less than the significance level.
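A brief worked sketch of this procedure, using invented figures (a sample of 36
employees with a mean job-stress score of 104 and a standard deviation of 12, tested
against a hypothesized population mean of 100), computes the test statistic and
P-value as described above.

import math
from scipy import stats

n, sample_mean, sample_sd, mu_0 = 36, 104.0, 12.0, 100.0

standard_error = sample_sd / math.sqrt(n)                # standard error of the mean
test_statistic = (sample_mean - mu_0) / standard_error   # (Statistic - Parameter) / SE

# Two-tailed P-value from the t distribution with n - 1 degrees of freedom
p_value = 2 * stats.t.sf(abs(test_statistic), df=n - 1)

print(test_statistic)   # 2.0
print(p_value)          # about 0.053, so H0 is not quite rejected at the 0.05 level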
CONCLUSION

The hypothesis turns out to be the crux of any research undertaken in quantitative
research. Hypotheses are also significant since they support a researcher in locating the
information needed to resolve the research problem or sub-problem. The major
objective behind hypothesis formation is to provide direction to the research and the
researcher. As this section indicates, the role of the hypothesis is to provide clarification
on the propositions formulated in the research and on the problem by bridging the
gap between theory and experiential inquiry. Knowledge of hypothesis formulation
and hypothesis testing is important for a researcher to arrive at appropriate research
conclusions.

DISCUSSION QUESTIONS

1. Define hypothesis?
2. What are the characteristic features of hypothesis?
3. Why hypothesis?
4. What is the scope of the hypothesis?
5. What do you mean by variables?
6. Differentiate extraneous variable and control variable?
7. Differentiate inductive and deductive method?
8. What are the different types of hypothesis?
9. How would you define a descriptive hypothesis? Give a suitable example.
10. How would you define a statistical hypothesis? Give a suitable example.
11. Differentiate null and alternative hypothesis?
12. Differentiate directional and non- directional hypothesis?
13. What is relational hypothesis?
14. What do you mean by causal hypothesis?
15. How you conceptualize analytical hypothesis?
16. Explain type one and type two errors?

MODULE 10

DATA COLLECTION TOOLS AND TECHNIQUES

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand and appreciate the meaning of data collection.


2. Analyze the tools of data collection.
3. Understand varied techniques of data collection; and
4. Learn the Scientific methods for conducting research.

INTRODUCTION

Data collection is a term used to describe the process of preparing and collecting data –
for example, as part of a process improvement or similar project. The purpose of data
collection is to obtain information to keep on record, to make decisions about
important issues, and to pass information on to others. Primarily, data are collected to
provide information regarding a specific topic.

Data-collection techniques allow us to systematically collect information about our
objects of study (people, objects, phenomena) and about the settings in which they
occur. In the collection of data we have to be systematic. If data are collected
occur. In the collection of data we have to be systematic. If data are collected
haphazardly, it will be difficult to answer our research questions in a conclusive way.

Data collection usually takes place early on in an improvement project, and is often
formalized through a data collection plan which often contains the following activities:

1. Pre collection activity – Agree goals, target data, definitions, methods.


2. Collection – data collection.
3. Present Findings – usually involves some form of sorting analysis
and/or presentation.

Prior to any data collection, the pre-collection activity is one of the most crucial steps in
the process. It is often discovered too late that the value of interview information is
discounted as a consequence of poor sampling of both questions and informants and
poor elicitation techniques. After the pre-collection activity is fully completed, data
collection in the field, whether by interviewing or other methods, can be carried out
in a structured, systematic and scientific way.

A formal data collection process is necessary as it ensures that the data gathered are
both defined and accurate, and that subsequent decisions based on arguments embodied
in the findings are valid. The process provides both a baseline from which to measure
and, in certain cases, a target for what to improve.

Basic concepts

Data

Data are facts, figures and other relevant materials, past and present serving as bases
for study and analysis.

Demographic data

Age, sex, race, social class, religion, marital status, education, income, occupation,
family size, location of households, lifestyle.

Sources of data

Primary Research

Primary Research (also called field research) involves the collection of data that does
not already exist, which is research to collect original data. Primary Research is often
undertaken after the researcher has gained some insight into the issue by collecting
secondary data. Here the researcher collects firsthand information and he has greater
control over the collection and classification. This can be through numerous forms,
including questionnaires, direct observation and telephone interviews amongst
others. This information may be collected in things like questionnaires and interviews.

For example, you can investigate an issue specific to your business, get feedback about
your Web site, assess demand for a proposed service, gauge response to various
packaging options, and find out how much consumers will shell out for a new product.

Advantages:

• Addresses specific research issues, as the researcher controls the research design
to fit their needs;
• Great control: not only does primary research enable the marketer to focus
on specific subjects, it also enables the researcher to have greater control over
how the information is collected. Taking this into account, the researcher can
decide on such requirements as the size of the project, the time frame and the
location of the research;
research;
• Efficient spending for information: primary data collection focuses on issues
specific to the researcher, improving the chances that the research funds are
spent efficiently; and
• Proprietary information: primary data collected by the researcher are their own.

Disadvantages:

• Compared to secondary research, primary data may be very expensive in
preparing and carrying out the research;
• In order to be done properly, primary data collection requires the
development and execution of a research plan. It takes longer to undertake
primary research than to acquire secondary data;
• Some research projects, while potentially offering information that could
prove quite valuable, may not be within the reach of a researcher;
• May be very expensive because many people need to be confronted;
• By the time the research is complete it may be out of date;
• People may have to be employed or avoid their primary duties for the duration
of the research; and
• People may not reply if emails or letters are used.

Secondary Research

Secondary research (also known as desk research) involves the summary, collation
and/or synthesis of existing research rather than primary research, where data is
collected from, for example, research subjects or experiments. Secondary data are data
collected from the work of an existing researcher or taken from already existing
databases. Secondary research describes information gathered from existing study
reports, surveys, previously conducted interviews, literature, publications, records,
published institutional documents, newspapers, journals, magazines and broadcast
media. Secondary research is
much easier to gather than primary research. Although secondary research is less
expensive than primary research, it's not as accurate, or as useful, as specific and
customized research. For instance, secondary research will tell you how much
teenagers spent last year on basketball shoes, but not how much they're willing to pay
for the shoe design your company has in mind.

Advantages

• Considerably cheaper and faster than doing original studies;


• You can benefit from the research from some of the top scholars in your field,
which for the most part ensures quality data;

• If you have limited funds and time, other surveys may have the advantage of
samples drawn from larger populations;
• How much you use previously collected data is flexible; you might only extract
a few figures from a table, you might use the data in a subsidiary role in your
research, or even in a central role; and
• A network of data archives in which survey data files are collected and
distributed is readily available, making research for secondary analysis easily
accessible.

Disadvantages

• Since many surveys deal with national populations, if you are interested in
studying a well-defined minority subgroup you will have a difficult time finding
relevant data;
• Secondary analysis can be used in irresponsible ways. If variables aren't exactly
those you want, data can be manipulated and transformed in a way that might
lessen the validity of the original research; and
• Much research, particularly of large samples, can involve large data files and
difficult statistical packages.

DIFFERENCE BETWEEN PRIMARY AND SECONDARY RESEARCH

1. Primary: Primary research is first-hand, original research; the data are collected by
the researcher himself or herself by interacting with people.
Secondary: Secondary research describes information gathered from existing study
reports, surveys, etc.

2. Primary: Primary research entails the use of immediate data. The popular ways to
collect primary data are surveys, interviews and focus groups, which involve a direct
relationship between respondents and researcher.
Secondary: Secondary research, by contrast, is a means to reprocess and reuse
collected information as an indication for betterment of the service or product. Both
primary and secondary data are useful for businesses, but they differ from each other
in various aspects.

3. Primary: Primary data are more accommodating as they show the latest information.
Secondary: In secondary data, the information relates to a past period; hence it lacks
aptness and therefore has unsatisfactory value.

4. Primary: Primary data are accumulated by the researcher specifically to meet the
research objective of the existing project.
Secondary: Secondary data are obtained from an organization other than the one
immediately interested in the current research project; they were collected and
analyzed by that organization to meet the requirements of various other research
objectives.

5. Primary: Primary data are completely tailor-made and there is no problem of
adjustments.
Secondary: A firm from which secondary data are obtained may not accommodate the
exact needs and particular requirements of the current research study. Many a time,
alteration or modification to the exact needs of the investigator may not be possible;
to that extent the usefulness of the secondary data is lost.

6. Primary: Primary data take a lot of time to collect and the unit cost of such data is
relatively high.
Secondary: Secondary data are available effortlessly, rapidly and inexpensively.

PRIMARY DATA COLLECTION METHODS

In primary data collection, you collect the data yourself using methods such as
interviews and questionnaires. The key point here is that the data you collect is unique
to you and your research and, until you publish, no one else has access to it. There are
many methods of collecting primary data and the main methods include:

• Questionnaires.
• Schedules.
• Interviews.
• Focus group interviews.
• Observation.
• Case-studies.
• Diaries.
• Critical incidents and
• Portfolios.

QUESTIONNAIRE

Questionnaires are a popular means of collecting data but are difficult to design and
often require many rewrites before an acceptable questionnaire is produced.

Characteristics of good questionnaire

• A questionnaire consists of more analytical questions.


• There should be perfect clarity to get a response.
• There should be an adequate number of questions in the instrument.
• Answer related questions to be avoided.
• Standard and technical term usage should be avoided.
• Questions should be planned and ordered.
• The questionnaire should be attractive in its physical form.
• Emotional questions should be avoided.

Advantages:

• Can be used as a method in its own right or as a basis for interviewing or a
telephone survey.
• Can be posted, emailed or faxed.
• Can cover a large number of people or organizations.
• Wide geographic coverage.
• Relatively cheap.
• No prior arrangements are needed.
• Avoids embarrassment on the part of the respondent.
• Respondent can consider responses.
• Possible anonymity of respondent.
• No interviewer bias.

Disadvantages:

• Design problems.
• Questions have to be relatively simple.
• Historically low response rate (although inducements may help).
• Time delay whilst waiting for responses to be returned.
• Require a return deadline.
• Several reminders may be required.
• Assumes no literacy problems.
• No control over who completes it.
• Not possible to give assistance if required.
• Problems with incomplete questionnaires.

• Replies not spontaneous and independent of each other.
• Respondent can read all questions beforehand and then decide whether to
complete or not. For example, perhaps because it is too long, too complex,
uninteresting, or too personal.

Schedule

A set of questions which are asked and filled in by an interviewer in a face to face
situation with another person.

Objectives:

• To collect information directly;


• To help in memory; and
• To facilitate and help in the work of tabulation and analysis of data.

Characteristics of Good Schedule

• There should be accurate communication items;


• Separate suggestive questions should be included;
• The researcher should get an accurate response;
• The size of the schedule should be attractive;
• Try to avoid ambiguous questions;
• Try to avoid emotional questions;
• The schedule should be free from subjective questions; and
• The information sought should be capable of being tabulated.

Interviews

Interviewing is a technique that is primarily used to gain an understanding of the
underlying reasons and motivations for people’s attitudes, preferences or behaviour.
Interviews can be undertaken on a personal one-to-one basis or in a group. They can
be conducted at work, at home, in the street or in a shopping center, or some other
agreed location.

Personal interview

Advantages:

• Serious approach by respondent resulting in accurate information;


• Good response rate;
• Completed and immediate;

• Possible in-depth questions;
• Interviewer in control and can give help if there is a problem;
• Can investigate motives and feelings;
• Can use recording equipment;
• Characteristics of respondent assessed – tone of voice, facial expression,
hesitation, etc.;
• Can use props; and
• Used to pilot other methods.

Disadvantages:

• Need to set up interviews;


• Time consuming;
• Geographic limitations;
• Can be expensive;
• Normally need a set of questions;
• Respondent bias – tendency to please or impress, create false personal image,
or end interview quickly;
• Embarrassment possible if personal questions;
• Transcription and analysis can present problems – subjectivity; and
• If many interviewers, training required.

Types of interview

Structured

• Based on a carefully worded interview schedule;


• Frequently require short answers with the answers being ticked off;
• Useful when there are a lot of questions which are not particularly contentious
or thought provoking; and
• Respondent may become irritated by having to give over-simplified answers.

Semi-structured

The interview is focused by asking certain questions but with scope for the respondent
to express him or herself at length.

Unstructured

This is also called an in-depth interview. The interviewer begins by asking a general
question. The interviewer then encourages the respondent to talk freely. The
interviewer uses an unstructured format, the subsequent direction of the interview
being determined by the respondent's initial reply. The interviewer then probes for elaboration – 'Why do you say that?' or, 'That's interesting, tell me more' or, 'Would you like to add anything else?' being typical probes.

Observation

Observation involves recording the behavioral patterns of people, objects and events in a systematic manner. Observational methods may be:

• structured or unstructured;
• disguised or undisguised;
• natural or contrived;
• personal;
• mechanical;
• non-participant; and
• participant, with the participant taking a number of different roles.

Structured or unstructured

In structured observation, the researcher specifies in detail what is to be observed and how the measurements are to be recorded. It is appropriate when the problem is clearly defined and the information needed is specified.

In unstructured observation, the researcher monitors all aspects of the phenomenon that seem relevant. It is appropriate when the problem has yet to be formulated precisely and flexibility is needed in observation to identify key components of the problem and to develop hypotheses. The potential for bias is high. Observation findings should be treated as hypotheses to be tested rather than as conclusive findings.

Disguised or undisguised

In disguised observation, respondents are unaware they are being observed and thus
behave naturally. Disguise is achieved, for example, by hiding, or using hidden
equipment or people disguised as shoppers.

In undisguised observation, respondents are aware they are being observed. There is
a danger of the Hawthorne effect – people behave differently when being observed.

Natural or contrived

Natural observation involves observing behavior as it takes place in the environment, for example, eating hamburgers in a fast food outlet.

In contrived observation, the respondents' behavior is observed in an artificial environment, for example, a food tasting session.

Personal

In personal observation, a researcher observes actual behavior as it occurs. The observer does not normally attempt to control or manipulate the phenomenon being observed; he or she merely records what takes place.

Mechanical

Mechanical devices (video, closed circuit television) record what is being observed.
These devices may or may not require the respondent’s direct participation. They are
used for continuously recording on-going behavior.

Non-participant

The observer does not normally question or communicate with the people being
observed. He or she does not participate.

Participant

In participant observation, the researcher becomes, or is, part of the group that is
being investigated. Participant observation has its roots in ethnographic studies (the
study of man and races) where researchers would live in tribal villages, attempting to
understand the customs and practices of that culture. It has a very extensive
literature, particularly in sociology (development, nature and laws of human society)
and anthropology (physiological and psychological study of man). Organizations can
be viewed as ‘tribes’ with their own customs and practices.

The role of the participant observer is not simple. There are different ways of
classifying the role:

• Researcher as employee.
• Researcher as an explicit role.
• Interrupted involvement.
• Observation alone.

Researcher as employee

The researcher works within the organization alongside other employees, effectively
as one of them. The role of the researcher may or may not be explicit and this will
have implications for the extent to which he or she will be able to move around and
gather information and perspectives from other sources. This role is appropriate when
the researcher needs to become totally immersed and experience the work or
situation at first hand.

There are a number of dilemmas. Do you tell management and the unions?
Friendships may compromise the research. What are the ethics of the process? Can
anonymity be maintained? Skill and competence to undertake the work may be
required. The research may be over a long period of time.

Researcher as an explicit role

The researcher is present every day over a period of time, but entry is negotiated in
advance with management and preferably with employees as well. The individual is
quite clearly in the role of a researcher who can move around, observe, interview and
participate in the work as appropriate. This type of role is the most favored, as it
provides many of the insights that the complete observer would gain, whilst offering
much greater flexibility without the ethical problems that deception entails.

Interrupted involvement

The researcher is present sporadically over a period of time, for example, moving in
and out of the organization to deal with other work or to conduct interviews with, or
observations of, different people across a number of different organizations. It rarely
involves much participation in the work.

Observation alone

The observer role is often disliked by employees since it appears to be 'eavesdropping'. The inevitable detachment prevents the degree of trust and friendship forming between the researcher and respondent, which is an important component in other methods.

Case-studies

The term case-study usually refers to a fairly intensive examination of a single unit
such as a person, a small group of people, or a single company. Case-studies involve
measuring what is there and how it got there. In this sense, it is historical. It can enable
the researcher to explore, unravel and understand problems, issues and relationships.
It cannot, however, allow the researcher to generalize, that is, to argue that from one
case-study the results, findings or theory developed apply to other similar case-
studies. The case looked at may be unique and, therefore not representative of other
instances. It is, of course, possible to look at several case-studies to represent certain
features of management that we are interested in studying. The case-study approach
is often done to make practical improvements. Contributions to general knowledge
are incidental.

The case-study method has four steps:

1. Determine the present situation.
2. Gather background information about the past and key variables.
3. Test hypotheses. The background information collected will have been
analyzed for possible hypotheses. In this step, specific evidence about each
hypothesis can be gathered. This step aims to eliminate possibilities which
conflict with the evidence collected and to gain confidence for the important
hypotheses. The culmination of this step might be the development of an
experimental design to test out more rigorously the hypotheses developed, or
it might be to take action to remedy the problem.
4. Take remedial action. The aim is to check that the hypotheses tested actually
work out in practice. Some action, correction or improvement is made and a
re-check carried out on the situation to see what effect the change has brought
about.

The case-study enables rich information to be gathered from which potentially useful
hypotheses can be generated. It can be a time-consuming process. It is also inefficient
in researching situations which are already well structured and where the important
variables have been identified. Case-studies also lack utility when the aim is to reach rigorous conclusions or to determine precise relationships between variables.

Secondary Sources of data collection

Secondary Data can be of two types. These are:

1. Cross Sectional Data.


2. Longitudinal Data.

Cross Sectional Data

It is data collected at the same point in time from different places or subjects.

Longitudinal Data

It is the data collected at regular time intervals. Longitudinal Data can be further
divided into two types:

a. Data collected through Panel Study.


b. Data collected through Repeated Design.

DATA COLLECTION IN QUALITATIVE AND QUANTITATIVE RESEARCH

Qualitative research is grounded in the assumption that individuals construct social reality in the form of meanings and interpretations, and that these constructions tend to be transitory and situational. Qualitative methods are used to capture what people say about their meanings and interpretations. Qualitative research typically involves qualitative data, i.e., data obtained through methods such as interviews, on-site observations, and focus groups, that are in narrative rather than numerical form. Such data are analyzed by looking for themes and patterns. It involves reading, rereading, and exploring the data. How the data are gathered will greatly affect the ease of analysis and the utility of findings (Maxwell, 1996; Patton, 2002; Wholey, Hatry, and Newcomer, 2004).
Quantitative inquiries use numerical and statistical processes to answer specific
questions. Statistics are used in a variety of ways to support inquiry or program
assessment/evaluation. Descriptive statistics are numbers used to describe a group of
items. Inferential statistics are computed from a sample drawn from a larger
population with the intention of making generalizations from the sample about the
whole population. The accuracy of inferences drawn from a sample is critically
affected by the sampling procedures used. It is important to start planning the
statistical analyses at the same time that planning for an inquiry begins. Decisions
about analysis techniques to use and statistics to report are affected by levels of
measurement of the variables in the study, the questions being addressed, and the
type and level of information that you expect to include in reporting on your
discoveries (Maxwell, 1996; Patton, 2002; Wholey, Hatry, and Newcomer, 2004).

CONCLUSION
This section of the book, in a nutshell, provides the researcher with an insight into data collection methods, forms of data, and the sources from which the data need to be collected. Qualitative data collection methods are very time consuming, hence data are usually collected from a smaller sample. By contrast, quantitative data collection methods focus on numbers and frequencies rather than on meaning. Whatever method a researcher adopts, the result of the research is strongly linked to the way the data are collected from the varied sources. The procedures used to collect data can influence research validity. Knowledge of data collection supports the researcher in engaging in the right research process and in identifying the sources from which the data should be collected.
DISCUSSION QUESTIONS

1. What do you mean by data collection?


2. What are the basic activities in data collection?
3. What are the sources of data?
4. Differentiate primary and secondary data?
5. Why is primary data important in research?
6. What would be the use of secondary data?
7. What are the advantages of secondary data?
8. What do you mean by tools of data collection?

9. What do you mean by questionnaire?
10. Discuss the advantages and disadvantages of questionnaire as a tool of
data collection?
11. Differentiate between schedule and questionnaire?
12. What are the characteristics of a good questionnaire?
13. What are the characteristics of a good schedule?
14. How interview as a tool would be beneficial to research?
15. Discuss various types of interviews?
16. Discuss the role of the researcher during an interview?
17. How did case studies come to be a method of managerial research?
18. What are the basic steps in case study methodology?
19. Differentiate cross sectional data and longitudinal data?
20. Discuss the data collection methods and tools in qualitative and
quantitative research.

MODULE 11

ITEM ANALYSIS

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand and appreciate the meaning of research techniques.
2. Learn the scientific methods for conducting research.
3. Understand item analysis.
4. Understand reliability and validity.

Introduction

Item analysis is a broad term that denotes the precise procedures used in research to appraise test items, normally for the purpose of test construction and revision. Item analysis is regarded as one of the most significant facets of test construction, and it is receiving increasing attention in applied research. The main purpose of item analysis is to improve tests by revising or eliminating ineffective items. There are numerous procedures available for item analysis. The procedure employed in evaluating an item's effectiveness depends to some extent on the researcher's preference and on the purpose of the test. The researcher should be aware that item analysis is carried out after the first draft of a test has been constructed, administered to a sample, and scored.

What is an Item?

An item is a single task or question that usually cannot be broken down into any smaller units.

Item analysis

In item analysis, a set of items is pretested for difficulty and discrimination by administering them, in an experimental design, to a group of examinees fairly representative of the target population for the test, computing an index of difficulty and an index of discrimination for each item, and retaining for the final test those items having the desired properties in the greatest degree.
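A minimal illustration of these two indices, written in Python, is sketched below. It assumes dichotomously scored items (1 = correct, 0 = incorrect) and uses the common upper/lower group method for the discrimination index; the scores are invented purely for illustration.

import numpy as np

# Hypothetical pilot-test scores: rows = examinees, columns = items
scores = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
])

totals = scores.sum(axis=1)          # each examinee's total score
difficulty = scores.mean(axis=0)     # proportion answering each item correctly

# Discrimination: difference in item performance between the top and
# bottom scoring groups (here, the top and bottom thirds of examinees)
order = np.argsort(totals)
n_group = max(1, len(totals) // 3)
lower, upper = order[:n_group], order[-n_group:]
discrimination = scores[upper].mean(axis=0) - scores[lower].mean(axis=0)

for i, (p, d) in enumerate(zip(difficulty, discrimination), start=1):
    print(f"Item {i}: difficulty = {p:.2f}, discrimination = {d:.2f}")

Items with very high or very low difficulty values, or with low (or negative) discrimination, are candidates for revision or removal.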

Scale

A scale is a device for measuring variables: a set of numbers or scores assigned to the various degrees of opinion, attitude, or other concepts.

Scaling

In the social sciences, scaling is the process of measuring or ordering entities with
respect to quantitative attributes or traits. For example, a scaling technique might
involve estimating individuals' levels of extraversion, or the perceived quality of
products. Certain methods of scaling permit estimation of magnitudes on a
continuum, while other methods provide only for the relative ordering of the entities.

Scale evaluation

Scales should be tested for reliability, generalizability, and validity. Generalizability is the ability to make inferences from a sample to the population, given the scale you have selected.

Validity and Reliability

A research design calls for a particular data collection methodology. In a research study, the data can be collected through surveys (interviewing, questionnaires), observation or experiments (Cooper and Schindler, 2003). The data to be collected may be primary or secondary or both (Sekaran and Bougie, 2010). If the research study requires the researcher to collect primary data, then one of the most important tasks awaiting him is to develop an instrument. Instrument development can be a time-consuming process and it raises several important issues. One of the important aspects of instrument development is checking for its validity and reliability through pilot testing (Saunders et al., 2009).

Reliability and validity examine the fitness of a measure. According to Whitelaw (2001), reliability and validity can be conceptualized by taking a chocolate cake recipe as an example.

Reliability

Reliability suggests that any person, provided that they follow the recipe, will produce a reasonable chocolate cake, or at least something that you can identify as a chocolate cake. Validity suggests that if the recipe includes chocolate, then the cake will look like a chocolate cake, smell like a chocolate cake and taste like chocolate. That is, the proof is in the pudding. When we think of it in this way, it becomes clear that a measurement device must first be reliable and then it must be valid. In other words, you have to be confident that what you get is consistently reproducible (i.e. the recipe consistently produces something like a cake). Once you are confident in the consistency of the output, then you can scrutinize it to assess whether it is what it purports to be (i.e. a chocolate cake).

Based on the above example, we can say that; Reliability is the degree to which a
measure is free from random error and therefore gives consistent results. It indicates
the internal consistency of the measurement device. It refers to the accuracy and
precision of a measurement procedure (Thorndike, Cunningham, Thorndike, and
Hagen, 1991) and can be expressed in terms of stability, equivalence, and internal
consistency (Cooper and Schindler, 2003).

Stability is the extent to which results obtained with the measurement device can be
reproduced. Stability can be assessed by administering the same measurement device
to the same respondents at two separate points in time (Zikmund, 2000). This is called
a test-retest method. However, as Zikmund (2000) elaborates, there are problems
associated with this method. The first measure may sensitize the respondents and
may influence the results of the second measure. Results may also suffer due to
inadequate time between tests and retest measures.

Equivalence shows the degree to which alternative forms of the same measures are
used to produce the same or similar results (Cooper and Schindler, 2003).

Internal Consistency is the extent to which items in a scale or measurement device are homogeneous and reflect the same underlying construct (Cooper and Schindler, 2003). The most commonly used method to evaluate internal consistency is Cronbach's Coefficient Alpha (Cronbach, 1951). Most survey instruments use Likert-type scales, and it is important to calculate Cronbach's alpha when the researcher decides to use this type of scale. If one does not report Cronbach's alpha, the reliability of the items is considered to be low or unknown.

The alpha coefficient ranges in value from 0 to 1 and may be used to describe the reliability of factors extracted from various types of questionnaires or scales. The higher the score, the more reliable the generated scale is (Delafrooz, Paim and Khatibi, 2009). According to Nunnaly (1978), a Cronbach's alpha score of 0.7 is considered to be an acceptable reliability coefficient.
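As an illustration, Cronbach's alpha can be computed directly from the item variances and the variance of the total score. The following Python sketch uses invented Likert-type responses and is not tied to any particular statistical package.

import numpy as np

def cronbach_alpha(items):
    # items: 2-D array, rows = respondents, columns = items of one scale
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")

The resulting value can then be compared against the 0.7 benchmark cited above.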

Validity

Validity is the extent to which a score truthfully represents a concept. Simply speaking, it is the accuracy of the measurement device and represents the ability of a scale to measure what it is intended to measure (Cooper and Emory, 1994; Zikmund, 2000). Validity is expressed in two types: external and internal (Saunders et al., 2009). External validity is about generalization, while internal validity ensures that a researcher's research design closely follows the principle of cause and effect. While administering any measurement device a researcher must ensure that his device has face validity, content validity, criterion validity and construct validity (Neuman, 2005).

• The validity of the test is concerned with what the test measures and how
well it does so.
• It refers to the accuracy with which the test measures, or the degree to which it approaches infallibility in measuring, what it purports to measure.

Types of validity

Content validity: established through a systematic examination of the content of the test (considering both intrinsic and extrinsic aspects) to determine whether it covers a representative sample of the domain to be measured. Content validity is closely related to face validity. It makes sure that a measure includes an adequate and representative set of items to cover a concept. It can also be ensured by experts' agreement (Sekaran, 2003).

Criterion validity: obtained by comparing the test scores with scores on a criterion available at present or to be available in the future. Criterion validity is thus the degree of correlation of a measure with other standard measures of the same construct (Zikmund, 2000). It has two types:

a. Predictive validity.
b. Concurrent validity.

Construct validity:

• The extent to which the test may be said to measure a theoretical construct or trait.
• It indicates how well the measure relates to the construct as defined by the underlying theory.

In concurrent validity, the researcher employs a criterion on which cases (individuals, organizations or countries, for example) are known to differ and that is relevant to the concept in question (Bryman and Bell, 2003).

Predictive Validity

Predictive validity is the extent to which a new measure is able to predict a future event. Here the researcher uses a future criterion instead of a contemporary one (Bryman and Bell, 2003).

Convergent validity

Convergent validity results when two variables measuring the same construct are highly correlated (Straub, Boudreau and Gefen, 2004).

Discriminant Validity

A device has discriminant validity if using similar measures to research different constructs results in relatively low intercorrelations (Cooper and Schindler, 2003).

Levels of measurement

Most research texts explain the four levels of measurement (nominal, ordinal, interval and ratio), so the treatment given to them here will be brief.

Nominal scales

This, the crudest of measurement scales, classifies individuals, companies, products, brands or other entities into categories where no order is implied. Indeed it is often referred to as a categorical scale. It is a system of classification and does not place the entity along a continuum. It involves a simple count of the frequency of the cases assigned to the various categories, and if desired numbers can be nominally assigned to label each category, as in the example below:

An example of a nominal scale

Which of the following food items do you tend to buy at least once per month?
(Please tick)
Okra Palm Oil Milled Rice
Peppers Prawns Pasteurized milk

The numbers have no arithmetic properties and act only as labels. The only
measure of average which can be used is the mode because this is simply a set of
frequency counts. Hypothesis tests can be carried out on data collected in the nominal
form. The most likely would be the Chi-square test. However, it should be noted that the Chi-square test only determines whether two or more variables are associated and the strength of that association. It can tell nothing about the form of the relationship, even where one exists, i.e. it is not capable of establishing cause and effect.
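As an illustration of testing association between two nominal variables, the sketch below applies a Chi-square test using scipy.stats.chi2_contingency; the cross-tabulated counts (region by whether okra is bought monthly) are invented for the example.

from scipy.stats import chi2_contingency

# Hypothetical counts: rows = region, columns = buys okra monthly (yes / no)
observed = [
    [45, 30],
    [25, 50],
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
# A small p-value suggests the two variables are associated, but the test
# says nothing about the form of the relationship or about cause and effect.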

Ordinal scales

Ordinal scales involve the ranking of individuals, attitudes or items along the
continuum of the characteristic being scaled. For example, if a researcher asked
farmers to rank 5 brands of pesticide in order of preference he/she might obtain
responses like those in the table below.

An example of an ordinal scale used to determine farmers' preferences among 5 brands of pesticide:

Order of preference    Brand
1                      Rambo
2                      R.I.P.
3                      Killalot
4                      D.O.A.
5                      Bugdeath

From such a table the researcher knows the order of preference but nothing about how much more one brand is preferred to another; that is, there is no information about the interval between any two brands. All of the information a nominal scale would have given is available from an ordinal scale. In addition, positional statistics such as the median, quartile and percentile can be determined.

It is possible to test for order correlation with ranked data. The two main methods are Spearman's Ranked Correlation Coefficient and Kendall's Coefficient of Concordance. Using either procedure one can, for example, ascertain the degree to which two or more survey respondents agree in their ranking of a set of items. Consider again the ranking of pesticides in the example above. The researcher might wish to measure similarities and differences in the rankings of pesticide brands according to whether the respondents' farm enterprises were classified as "arable" or "mixed" (a combination of crops and livestock). The resultant coefficient takes a value in the range 0 to 1. A zero would mean that there was no agreement between the two groups, and 1 would indicate total agreement. It is more likely that an answer somewhere between these two extremes would be found.
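To illustrate, the sketch below computes rank correlations for two hypothetical rankings of the five pesticide brands (one from "arable" and one from "mixed" respondents) using scipy.stats. Note that it shows Spearman's rho and Kendall's tau for a single pair of rankings; Kendall's Coefficient of Concordance (W), used when more than two sets of rankings are compared, would need a separate computation.

from scipy.stats import spearmanr, kendalltau

# Hypothetical ranks given to the brands Rambo, R.I.P., Killalot, D.O.A., Bugdeath
arable_ranks = [1, 2, 3, 4, 5]
mixed_ranks  = [2, 1, 3, 5, 4]

rho, p_rho = spearmanr(arable_ranks, mixed_ranks)
tau, p_tau = kendalltau(arable_ranks, mixed_ranks)
print(f"Spearman's rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Kendall's tau  = {tau:.2f} (p = {p_tau:.3f})")
# Values near 1 indicate the two groups rank the brands very similarly.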

The only other permissible hypothesis testing procedures are the runs test and the sign test. The runs test (also known as the Wald-Wolfowitz test) is used to determine whether a sequence of binomial data - meaning it can take only one of two possible values, e.g. African/non-African, yes/no, male/female - is random or contains systematic 'runs' of one or other value. Sign tests are employed when the objective is to determine whether there is a significant difference between matched pairs of data. The sign test tells the analyst whether the number of positive differences in ranking is approximately equal to the number of negative rankings, in which case the distribution of rankings is random, i.e. apparent differences are not significant. The test takes into account only the direction of differences and ignores their magnitude, and hence it is compatible with ordinal data.
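The sign test can be carried out as a binomial test on the direction of the paired differences. The sketch below assumes a reasonably recent SciPy (which provides scipy.stats.binomtest); the matched-pair ratings are invented.

from scipy.stats import binomtest

before = [3, 5, 2, 4, 4, 3, 5, 2]
after  = [4, 5, 3, 5, 3, 4, 5, 3]

diffs = [a - b for a, b in zip(after, before)]
positives = sum(d > 0 for d in diffs)
non_zero  = sum(d != 0 for d in diffs)   # ties are ignored by the sign test

result = binomtest(positives, non_zero, p=0.5)
print(f"{positives} positive differences out of {non_zero}, p = {result.pvalue:.3f}")
# A large p-value means positive and negative differences are roughly balanced,
# i.e. the apparent differences are not significant.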

Ratio scales

The highest level of measurement is a ratio scale. This has the properties of an interval
scale together with a fixed origin or zero point. Examples of variables which are ratio
scaled include weights, lengths and times. Ratio scales permit the researcher to
compare both differences in scores and the relative magnitude of scores. For instance
the difference between 5 and 10 minutes is the same as that between 10 and 15
minutes, and 10 minutes is twice as long as 5 minutes.

Given that sociological and management research seldom aspires beyond the interval
level of measurement, it is not proposed that particular attention be given to this level
of analysis. Suffice it to say that virtually all statistical operations can be performed on
ratio scales.

Interval scales

It is only with interval scaled data that researchers can justify the use of the
arithmetic mean as the measure of average. The interval or cardinal scale has equal
units of measurement, thus making it possible to interpret not only the order of scale
scores but also the distance between them. However, it must be recognized that the
zero point on an interval scale is arbitrary and is not a true zero. This of course has
implications for the type of data manipulation and analysis we can carry out on data
collected in this form. It is possible to add or subtract a constant to all of the scale
values without affecting the form of the scale but one cannot multiply or divide the
values. It can be said that two respondents with scale positions 1 and 2 are as far apart
as two respondents with scale positions 4 and 5, but not that a person scoring 10 feels twice as strongly as one scoring 5. Temperature is interval scaled, being
measured either in Centigrade or Fahrenheit. We cannot speak of 50°F being twice as
hot as 25°F since the corresponding temperatures on the centigrade scale, 10°C and -
3.9°C, are not in the ratio 2:1. Most of the common statistical methods of analysis
require only interval scales in order that they might be used. These are not recounted
here because they are so common and can be found in virtually all basic texts on
statistics. Interval scales may be either numeric or semantic. Study the examples
below in the figure.

Examples of interval scales in numeric and semantic formats

Please indicate your views on Balkan Olives by scoring them on a scale of 5 down to 1 (i.e. 5 = Excellent; 1 = Poor) on each of the criteria listed.
Balkan Olives are: (circle the appropriate score on each line)

Succulence                5 4 3 2 1
Fresh tasting             5 4 3 2 1
Free of skin blemish      5 4 3 2 1
Good value                5 4 3 2 1
Attractively packaged     5 4 3 2 1
(a)

Please indicate your views on Balkan Olives by ticking the appropriate responses below:

                              Excellent   Very Good   Good   Fair   Poor
Succulent
Freshness
Freedom from skin blemish
Value for money
Attractiveness of packaging
(b)

Measurement scales

The various types of scales used in marketing research fall into two broad categories:
comparative and non-comparative.

In comparative scaling, the respondent is asked to compare one brand or product against another. With noncomparative scaling, respondents need only evaluate a single product or brand. Their evaluation is independent of the other products and/or brands which the marketing researcher is studying.

Noncomparative scaling is frequently referred to as monadic scaling and this is the most widely used type of scale in commercial marketing research studies.

Comparative scales

Paired comparison: It is sometimes the case that marketing researchers wish to find out which are the most important factors in determining the demand for a product. Conversely, they may wish to know which factors are most important in preventing the widespread adoption of a product. Take, for example, the very poor farmer response to the first design of an animal-drawn mould board plough. A combination of exploratory research and shrewd observation suggested that the following factors played a role in shaping the attitudes of those farmers who felt negatively towards the design:

• Does not ridge;


• Does not work for inter-cropping;
• Far too expensive;
• New technology too risky; and
• Too difficult to carry.

Suppose the organization responsible wants to know which factor is foremost in the farmer's mind. It may well be the case that if the factors that are most important to the farmer are addressed, then the others, being of a relatively minor nature, will cease to prevent widespread adoption. The alternatives are to abandon the product's re-development or to completely re-design it, which is not only expensive and time-consuming but may well be subject to a new set of objections.

The process of rank ordering the objections from most to least important is best
approached through the questioning technique known as 'paired comparison'. Each
of the objections is paired by the researcher so that with 5 factors, as in this example,
there are 10 pairs.

In 'paired comparisons' every factor has to be paired with every other factor in turn.
However, only one pair is ever put to the farmer at any one time.

The question might be put as follows:


Which of the following was the more important in making you decide not to buy the
plough?

· The plough was too expensive
· It proved too difficult to transport

In most cases the question, and the alternatives, would be put to the farmer verbally.
He/she then indicates which of the two was the more important and the researcher
ticks the box on his questionnaire. The question is repeated with a second set of
factors and the appropriate box ticked again. This process continues until all possible
combinations are exhausted, in this case 10 pairs. It is good practice to mix the pairs
of factors so that there is no systematic bias. The researcher should try to ensure that
any particular factor is sometimes the first of the pair to be mentioned and sometimes
the second. The researcher would never, for example, take the first factor (on this
occasion 'Does not ridge') and systematically compare it to each of the others in
succession. That is likely to cause systematic bias.

Below labels have been given to the factors so that the worked example will be easier
to understand. The letters A - E have been allocated as follows:
A = Does not ridge
B = Far too expensive
C = New technology too risky
D = Does not work for inter-cropping
E = Too difficult to carry.

The data are then arranged into a matrix. Assume that 200 farmers have been interviewed and their responses are arranged in the grid below. The matrix is arranged so that each cell shows the number of farmers who said the factor at the top of the column was a greater deterrent than the factor at the side of the row. This means, for example, that 164 out of 200 farmers said the fact that the plough was too expensive was a greater deterrent than the fact that it was not capable of ridging. Similarly, 174 farmers said that the plough's inability to inter-crop was more important than its inability to ridge when deciding not to buy the plough.

A preference matrix

A B C D E
A 100 164 120 174 180
B 36 100 160 176 166
C 80 40 100 168 124
D 26 24 32 100 102
E 20 34 76 98 100

If the grid is carefully read, it can be seen that the rank order of the factors is:

Most important     E   Too difficult to carry
                   D   Does not inter-crop
                   C   New technology/high risk
                   B   Too expensive
Least important    A   Does not ridge

It can be seen that it is more important for designers to concentrate on improving transportability and, if possible, to give the plough an inter-cropping capability rather than focusing on its ridging capabilities (remember that the example is entirely hypothetical).

One major advantage to this type of questioning is that whilst it is possible to obtain
a measure of the order of importance of five or more factors from the respondent, he
is never asked to think about more than two factors at any one time. This is especially
useful when dealing with illiterate farmers. Having said that, the researcher has to be
careful not to present too many pairs of factors to the farmer during the interview. If
he does, he will find that the farmer will quickly get tired and/or bored. It is as well to
remember the formula n (n - 1) /2. For ten factors, brands or product attributes this
would give 45 pairs. Clearly the farmer should not be asked to subject himself to
having the same question put to him 45 times. For practical purposes, six factors are
possibly the limit, giving 15 pairs.

It should be clear from the procedures described in these notes that the paired
comparison scale gives ordinal data.
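One simple way to turn the preference matrix above into a rank order is to count, for each factor, the number of head-to-head comparisons in which a majority of the 200 farmers preferred it. The Python sketch below reproduces the ranking E, D, C, B, A from the hypothetical matrix; other scoring rules are possible.

factors = ["A", "B", "C", "D", "E"]
matrix = {   # matrix[row][col] = number of farmers preferring col over row
    "A": {"A": 100, "B": 164, "C": 120, "D": 174, "E": 180},
    "B": {"A": 36,  "B": 100, "C": 160, "D": 176, "E": 166},
    "C": {"A": 80,  "B": 40,  "C": 100, "D": 168, "E": 124},
    "D": {"A": 26,  "B": 24,  "C": 32,  "D": 100, "E": 102},
    "E": {"A": 20,  "B": 34,  "C": 76,  "D": 98,  "E": 100},
}

wins = {f: 0 for f in factors}
for row in factors:
    for col in factors:
        if row != col and matrix[row][col] > 100:   # col preferred by a majority
            wins[col] += 1

ranking = sorted(factors, key=lambda f: wins[f], reverse=True)
print("Most to least important:", ranking)   # E, D, C, B, A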

Dollar Metric Comparisons: This type of scale is an extension of the paired comparison method in that it requires respondents to indicate both their preference and how much they are willing to pay for their preference. This scaling technique gives the marketing researcher an interval-scaled measurement. An example is given in the figure.

An example of a dollar metric scale

Which of the following types of fish do you prefer?    How much more, in cents, would you be prepared to pay for your preferred fish?
Fresh            Fresh (gutted)      $0.70
Fresh (gutted)   Smoked               0.50
Frozen           Smoked               0.60
Frozen           Fresh                0.70
Smoked           Fresh                0.20
Frozen (gutted)  Frozen

From the data above the preferences shown below can be computed as follows:
Fresh fish            0.70 + 0.70 + 0.20          = 1.60
Smoked fish           0.60 + (-0.20) + (-0.50)    = (-1.10)
Fresh fish (gutted)   (-0.70) + 0.30 + 0.50       = 0.10
Frozen fish           (-0.60) + (-0.70) + (-0.30) = (-1.60)

The Unity-sum-gain technique: A common problem with launching new products is that of reaching a decision as to which options, and how many options, to offer. Whilst a company may be anxious to meet the needs of as many market segments as possible, it has to ensure that each segment is large enough to enable it to make a profit. It is always easier to add products to the product line but much more difficult to decide which models should be deleted. One technique for evaluating the options which are likely to prove successful is the unity-sum-gain approach.

The procedure is to begin with a list of features which might possibly be offered as 'options' on the product, and alongside each you list its retail cost. A third column is then constructed and this forms an index of the relative prices of each of the items. The table below will help clarify the procedure. For the purposes of this example the basic reaper is priced at $20,000 and some possible 'extras' are listed along with their prices.

The total value of these hypothetical 'extras' is $7,460 but the researcher tells the
farmer he has an equally hypothetical $3,950 or similar sum. The important thing is
that he should have considerably less hypothetical money to spend than the total
value of the alternative product features. In this way the farmer is encouraged to
reveal his preferences by allowing researchers to observe how he trades one
additional benefit off against another. For example, would he prefer a side rake
attachment on a 3 meter head rather than have a transporter trolley on either a
standard or 2.5m wide head? The farmer has to be told that any unspent money
cannot be retained by him so he should seek the best value-for-money he can get.

In cases where the researcher believes that mentioning specific prices might introduce
some form of bias into the results, then the index can be used instead. This is
constructed by taking the price of each item over the total of $ 7,460 and multiplying
by 100. Survey respondents might then be given a maximum of 60 points and then, as
before, are asked how they would spend these 60 points. In this crude example the
index numbers are not too easy to work with for most respondents, so one would
round them as has been done with the adjusted column. It is the relative and not the absolute value of the items which is important, so the precision of the rounding need not overly concern us.

The unity-sum-gain technique

Item                                          Additional Cost ($s)   Index   Adjusted Index
2.5m wide rather than standard 2m                     2,000             27          30
Self-lubricating chain rather than belt                 200             47          50
Side rake attachment                                    350              5          10
Polymer heads rather than steel                         250              3           5
Double rather than single edged cutters                 210            2.5           5
Transporter trolley for reaper attachment               650              9          10
Automatic leveling of table                             300              4           5

The unity-sum-gain technique is useful for determining which product features are more important to farmers. The design of the final market version of the product can then reflect the farmers' needs and preferences. Practitioners treat data gathered by this method as ordinal.
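The construction of the index column can be expressed as a short computation. The sketch below simply divides each option's cost by the total cost of all options and scales to 100; the rounding rule (to the nearest 5, with a minimum of 5) is an assumption made for illustration, and the resulting figures will only approximate those in the table above.

options = {
    "2.5m wide head": 2000,
    "Self-lubricating chain": 200,
    "Side rake attachment": 350,
    "Polymer heads": 250,
    "Double edged cutters": 210,
    "Transporter trolley": 650,
    "Automatic leveling": 300,
}

total = sum(options.values())
for name, cost in options.items():
    index = cost / total * 100
    adjusted = max(5, 5 * round(index / 5))   # round to the nearest 5, minimum 5
    print(f"{name:25s} index = {index:5.1f}   adjusted = {adjusted}")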

Noncomparative scales

Continuous rating scales: The respondents are asked to give a rating by placing a mark
in the appropriate position on a continuous line. The scale can be written on the card
and shown to the respondent during the interview. Two versions of a continuous
rating scale are depicted in the figure.

Continuous rating scales

When version B is used, the respondent's score is determined either by dividing the line into as many categories as desired and assigning the respondent a score based on the category into which his/her mark falls, or by measuring the distance, in millimeters or inches, from either end of the scale.

Whichever of these forms of the continuous scale is used, the results are normally
analyzed as interval scaled.

Line marking scale: The line marking scale is typically used to measure perceived similarities and differences between products, brands or other objects. Technically, such a scale is a form of what is termed a semantic differential scale, since each end of the scale is labeled with a word/phrase (or semantic) that is opposite in meaning to the other. The example below provides an illustration of such a scale.

Consider the products below which can be used when frying food. In the case of each
pair, indicate how similar or different they are in the flavour which they impart to the
food.

An example of a line marking scale

For some types of respondent, the line scale is an easier format because they do not feel that discrete numbers (e.g. 5, 4, 3, 2, 1) best reflect their attitudes/feelings. The line marking scale is a continuous scale.

Itemized rating scales: With an itemized scale, respondents are provided with a scale
having numbers and/or brief descriptions associated with each category and are asked
to select one of the limited number of categories, ordered in terms of scale position,
that best describes the product, brand, company or product attribute being studied.
Examples of the itemized rating scale are illustrated below.

Itemized rating scales

Itemized rating scales can take a variety of innovative forms as demonstrated by the
two illustrated below, which are graphic.

Graphic itemized scales

Whichever form of itemized scale is applied, researchers usually treat the data as
interval level.

Semantic scales: This type of scale makes extensive use of words rather than
numbers. Respondents describe their feelings about the products or brands on scales
with semantic labels. When bipolar adjectives are used at the end points of the scales,
these are termed semantic differential scales. The semantic scale and the semantic
differential scale are illustrated in figure.

Semantic and semantic differential scales

Likert scales: A Likert scale is what is termed a summated instrument scale. This means that the items making up a Likert scale are summed to produce a total score. In fact, a Likert scale is a composite of itemized scales. Typically, each scale item will have 5 categories, with scale values ranging from -2 to +2 with 0 as the neutral response. This explanation may be clearer from the example in the figure.

The Likert scale

                                                          Strongly   Agree   Neither   Disagree   Strongly
                                                          Agree                                    Disagree
If the price of raw materials fell, firms would
reduce the price of their food products.                      1         2        3          4         5
Without government regulation the firms would
exploit the consumer.                                         1         2        3          4         5
Most food companies are so concerned about making
profits they do not care about quality.                       1         2        3          4         5
The food industry spends a great deal of money making
sure that its manufacturing is hygienic.                      1         2        3          4         5
Food companies should charge the same price for their
products throughout the country.                              1         2        3          4         5

Likert scales are treated as yielding Interval data by the majority of marketing
researchers. The scales which have been described in this chapter are among the most
commonly used in marketing research. Whilst there are a great many more forms
which scales can take, if students are familiar with those described in this chapter they
will be well equipped to deal with most types of survey problem.
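Scoring a summated (Likert) scale amounts to summing each respondent's item scores, after reverse-coding any negatively worded items. The sketch below is illustrative only: the responses and the choice of which item is reverse-coded are invented.

import numpy as np

# Rows = respondents, columns = five Likert items scored 1-5
responses = np.array([
    [1, 2, 4, 2, 3],
    [2, 1, 5, 1, 2],
    [4, 5, 2, 4, 5],
])

reverse_coded = [2]                   # e.g. the third item is negatively worded
scored = responses.copy()
scored[:, reverse_coded] = 6 - scored[:, reverse_coded]   # 1<->5, 2<->4 on a 5-point scale

totals = scored.sum(axis=1)           # one summated score per respondent
print("Summated scores:", totals)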

Factor Analysis

One of the other important tools in data analysis is factor analysis. In most empirical research there are a number of variables that characterize objects (Rietveld and Van Hout, 1993). In the social sciences, research may involve studying abstract constructs such as personality, motivation, satisfaction, and job stress. In a study of mental ability, for example, a questionnaire could be very lengthy as it would aim to measure the construct through several subtests, like verbal skills tests and logical reasoning ability tests (Darlington, 2004). In a questionnaire, therefore, there can be a number of variables measuring aspects of the same construct, thus making the study complicated. Factor analysis was developed to deal with these problems. Factor analysis attempts to bring inter-correlated variables together under more general, underlying variables. Hence, the objective of factor analysis is to reduce the number of dimensions to a limited number that explain a particular phenomenon or variable (Rietveld and Van Hout, 1993).

Exploratory factor analysis (EFA) and Confirmatory factor analysis (CFA) are two
techniques a researcher can use to conduct his analysis. According to DeCoster (1998),
the basic purpose of EFA is to find out the nature of constructs which are influencing
a particular set of responses while the purpose of CFA is testing whether a particular
set of constructs is influencing responses in a predicted way.

DeCoster (1998) suggests following steps to perform EFA:

1. Collect measurements;
2. Obtain the correlation matrix;
3. Select the number of factors for inclusion;

4. Extract your initial set of factors;
5. Rotate your factors to a final solution;
6. Interpret your factor structure; and
7. Construct factor scores for further analysis.

Similarly, DeCoster (1998) has specified the following steps to perform CFA:


1. Define the factor model;
2. Collect measurements;
3. Obtain the correlation matrix;
4. Fit the model to the data;
5. Evaluate model adequacy; and
6. Compare with other models.

Hence EFA and CFA are powerful statistical techniques, and a researcher must
determine the type of analysis before answering his research questions (Suhr, 2006).

Another technique for a similar purpose, which is often confused with exploratory factor analysis, is Principal Component Analysis (PCA). According to DeCoster (1998), EFA and PCA are different statistical techniques and produce different results when applied to the same data. He asserts that the two are based on different models, that the direction of influence between 'components' and 'measures' differs, and that in EFA we assume that the variance in the measured variables can be separated into common factors and unique factors, whereas in PCA the principal components developed include both common and unique variance. In conclusion, we conduct EFA when we want to extract certain factors or constructs which are responsible for a particular set of responses, while we use PCA when the purpose is simply to reduce data.
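The contrast between the two techniques can be seen in how they are run on the same data set. The sketch below uses scikit-learn's FactorAnalysis and PCA classes on random, purely illustrative data; with real questionnaire data the factor loadings, not random noise, would be the object of interpretation, and a rotation step would normally follow EFA.

import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 6))      # 100 respondents, 6 measured variables

efa = FactorAnalysis(n_components=2).fit(data)
pca = PCA(n_components=2).fit(data)

print("EFA loadings (common variance):")
print(np.round(efa.components_.T, 2))
print("PCA loadings (common + unique variance):")
print(np.round(pca.components_.T, 2))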

Furthermore, depending upon the nature of data, the researcher can make use of
other advanced statistical techniques as well such as discriminant analysis, cluster
analysis, and time series analysis.

CONCLUSION

One of the most important stages of any research is the phase of instrument development. How valid the instrument is in collecting the right responses from the respondents determines the validity of the research outcome. The external validity of the findings depends on the internal validity of the instrument. In order to ensure the soundness of the instrument, a researcher should have a grounding in item analysis and in instrument validity and reliability. The information this section of the book provides regarding construct, content, criterion, and face validity supports researchers in paying attention to the items they develop in their instruments and thereby ensures better internal validity of the instrument. With a validity and reliability orientation during the research process, the researcher will gain better control over the phenomena under study.

DISCUSSION QUESTIONS

1. What do you mean by an item in an instrument?


2. How do you explain item analysis?
3. What do you mean by scaling?
4. Why should scales be evaluated?
5. What do you mean by validity?
6. What are the different types of validity?
7. Differentiate between nominal and ordinal scales?
8. Differentiate between ratio and interval scales?
9. Discuss paired comparison?
10. What do you mean by continuous rating scale?
11. What do you mean by itemized rating scale?
12. Explain Likert scale and discuss its importance in research?

MODULE 12

TOOL ADMINISTRATION,
DATA PROCESSING AND DATA ANALYSIS

Learning objectives:

1. Understand and appreciate the meaning of research techniques.


2. Learn the scientific methods for conducting research.
3. Understand various ways in processing the data.
4. Understand effective ways in which the data can be analyzed.
5. Learn how to present the results.

Introduction

A major aspect of the research is the finalization of the research tools and their administration to the right population selected for the study. It is difficult to plan a major study or project without adequate knowledge of its subject matter: the population it is to cover, the level of knowledge and understanding required, and the like. What are the issues involved? What are the concepts associated with the subject matter? How can they be operationalized? How long will the study take? How much money will it cost? Such questions call for a good deal of knowledge of the subject matter of an extensive study and its dimensions. In order to gain such knowledge a preliminary investigation is to be conducted. This is called a pre-test.

Pilot study

A PRE-TEST usually refers to a small-scale trial of particular research components. A PILOT STUDY is the process of carrying out a preliminary study, going through the entire research procedure with a small sample.

WHY do we carry out a pre-test or pilot study?

A pre-test or pilot study serves as a trial run that allows us to identify potential
problems in the proposed study. Although this means extra effort at the beginning of
a research project, the pre-test and/or pilot study enables us, if necessary, to revise
the methods and logistics of data collection before starting the actual fieldwork. As a
result, a good deal of time, effort and money can be saved in the long run. Pre-testing
is simpler and less time-consuming and costly than conducting an entire pilot study.

Therefore, we will concentrate on pre-testing as an essential step in the development
of research projects.

WHAT aspects of your research methodology can be evaluated during pre-testing?

1. The reactions of the respondents to the research procedures can be observed in the pre-test to determine:
o availability of the study population and how respondents’ daily work
schedules can best be respected;
o acceptability of the methods used to establish contact with the study
population;
o acceptability of the questions asked; and
o willingness of the respondents to answer the questions and collaborate
with the study.
2. The data-collection tools can be pre-tested to determine:
o Whether the tools you use allow you to collect the information you
need and whether those tools are reliable. You may find that some of
the data collected is not relevant to the problem or is not in a form
suitable for analysis. This is the time to decide not to collect this data
or to consider using alternative techniques that will produce data in a
more usable form.
o How much time is needed to administer the interview
guide/questionnaire, to conduct observations or group interviews,
and/or to make measurements.
o Whether there is any need to revise the format or presentation of
interview guides/ questionnaires, including whether:

• The sequence of questions is logical.


• The wording of the questions is clear.
• Translations are accurate.
• Space for answers is sufficient.
• There is a need to pre-categorize some answers or to change
closed questions into open-ended questions.
• There is a need to adjust the coding system.
• There is a need for additional instructions for interviewers (e.g.,
guidelines for ‘probing’ certain open questions).

3. Sampling procedures can be checked to determine:

• Whether the instructions concerning how to select the sample are followed in the same way by all staff involved.

• How much time is needed to locate individuals to be included in the
study.

4. Staffing and activities of the research team can be checked, while all are
participating in the pre-test, to determine:
o How successful the training of the research team has been.
o What the work output of each member of the staff is.
o How well the research team works together.
o Whether logistical support is adequate.
o The reliability of the results when instruments or tests are
administered by different members of the research team.
o Whether staff supervision is adequate.
5. Procedures for data processing and analysis can be evaluated during the pre-
test. Items that can be assessed include:
o Appropriateness of data master sheets and dummy tables and the ease
of use.
o Effectiveness of the system for quality control of data collection.
o Appropriateness of statistical procedures (if used).
o Clarity and ease with which the collected data can be interpreted.
6. The proposed work plan and budget for research activities can be assessed
during the pre-test. Issues that can be evaluated include:
o Appropriateness of the amount of time allowed for the different
activities of planning, implementation, supervision, co-ordination and
administration.
o Accuracy of the scheduling of the various activities.

It is advisable to pre-test the data collection and data-analysis process 1-2 weeks before starting the fieldwork, with the whole research team.

Which components should be assessed during the pre-test?

Depending on how closely the pre-test situation resembles the area in which
the actual field work will be carried out, it may be possible to pre-test:

• The reactions of respondents to the research procedures and questions related to sensitive issues.
• The appropriateness of study type(s) and research tools selected
for the purpose of the study (e.g., validity: Do they collect the
information you need? and reliability: Do they collect the data in a
precise way?).
• The appropriateness of format and wording of questionnaires and
interview schedules and the accuracy of the translations.

• The time needed to carry out interviews, observations or
measurements.
• The feasibility of the designed sampling procedures.
• The feasibility of the designed procedures for data processing and
analysis.

Data Collection

After pre-testing the tools selected for the study, it is time to implement them in the field. The techniques discussed in the above chapters may be utilized in the data collection process. Care should be taken in each aspect to prevent technical and human error during data collection.

Data Processing

Data processing is an intermediary stage of work between data collection and data analysis. The completed instruments of data collection, viz. the questionnaires, schedules or data sheets, contain a vast mass of data. They cannot straight away provide answers to research questions; like raw material, they need processing. Data processing involves the classification and summarization of data in order to make them amenable to analysis.

Data processing consists of a number of closely related operations, viz.:

1. Editing;

2. Classification and coding; and

3. Tabulation.

1. Editing

The first step in the processing of data is the editing of completed questionnaires/schedules. Editing is the process of checking to detect and correct errors and omissions. Editing is done in two stages: first at the field work stage and second at the office.

a. Field editing

Data processing and analysis should start in the field, with checking for
completeness of the data and performing quality control checks, while sorting the
data by instrument used and by group of informants. Data of small samples may even
be processed and analyzed as soon as it is collected.

b. Office Editing

Office editing is done to check:

1. The completeness of the answer to each question. If there is an omission, the editor may be able to deduce the proper answer from other related data on the instrument.
2. The accuracy of each recorded answer. Arithmetic errors and clear inconsistencies should be rectified.
3. Uniformity in the interpretation of questions.

2. Classification

Classification of the data should be done soon after the editing process is over. The classification of data involves arranging things in groups or classes according to their resemblances or affinities, and gives expression to the unity of attributes that may subsist among a diversity of individuals.

Objectives of classification

• To simplify the data;
• To distinguish between similarities and dissimilarities;
• To make the data comparable; and
• To provide a basis for tabulation.

Types of Data or Data Classification


I. Qualitative Data

A. Nominal, Attribute, or Categorical Data

Examples:

1. Gender (female, male)


2. Medication (Aspirin, Tylenol, Advil, none)
3. Religion (Buddhist, Islamic, Jewish, Christian, Hindu, none, etc.)
4. Countries (Iraq, Iran, Israel, Zimbabwe, Canada, etc.)

B. Ordinal or Ranked Data: one value is greater or less than another, but the magnitude of the difference is unknown.
Examples:

1. Muscle response (none, partial, complete)
2. Tree vigor (healthy, sick, dead)
3. Income (<$9,999, $10,000-$19,999, $20,000-$49,999, >$50,000)

II. Quantitative or Interval Data (measurements)

A. Discrete Data (whole number counts)


Examples:

1. Number of petals on flower


2. Number of pets at home
3. Number of children in family

B. Continuous measurements (rational numbers, limited by the accuracy of your measurements)
Examples:

1. Height
2. Weight
3. Light-years
4. Blood pressure

The researcher should be able to recognize which data types are used in the research.

Types of classification

The very important types are:


1. Geographical classification: Data are classified according to region.
2. Chronological classification: Data are classified according to the time of its
occurrence.
3. Conditional classification: Data are classified according to certain conditions.
4. Qualitative classification: Classification of data that is not measurable.
E.g. Sex of a person, marital status, colour etc.
5. Quantitative classification: Classification of data that is measurable either in
discrete or continuous form.
6. Statistical Series: Data arranged logically according to size, time of occurrence, or some other measurable or non-measurable characteristic.

Methods of Classification

1. Classification done according to a single attribute or variable is known as one-way classification.
2. Classification done according to two attributes or variables is known as two-way classification.
3. Classification done according to more than two attributes or variables is known as manifold classification.

Examples

1. One-way classification
No. of students who secured more than 60 % in various sections of same
course.
2. Two – way classification
Classification of students according to sex who secured more than 60 %.
3. Manifold classification.
Classification of employees according to skill, sex and education.

3. Coding

If the data will be entered into a computer for subsequent processing and analysis, it
is essential to develop a Coding System. For computer analysis, each category of a
variable can be coded with a letter, a group of letters or word, or be given a number.
For example, the answer ‘yes’ may be coded as ‘Y’ or 1; ‘no’ as ‘N’ or 2 and ‘no
response’ or ‘unknown’ as ‘U’ or 9. The codes should be entered on the questionnaires
(or checklists) themselves. When finalizing your questionnaire, for each question you
should insert a box for the code in the right margin of the page. These boxes should
not be used by the interviewer. They are only filled in afterwards during data
processing. Take care that you have as many boxes as the number of digits in each
code. If the analysis is done by hand using data master sheets, it is useful to code your
data as well (see section 3 below)

For example:

Yes (or positive response) code - Y or 1


No (or negative response) code - N or 2
Don’t know code - D or 8
No response/unknown code - U or 9

Codes for open-ended questions (in questionnaires) can be done only after examining
a sample of (say 20) questionnaires. You may group similar types of responses into
single categories, so as to limit their number to at most 6 or 7. If there are too many
categories it is difficult to analyze the data.
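
By way of illustration only (this sketch is not from the original text), a coding system can be applied programmatically once the codes are fixed. The Python fragment below assumes the yes/no/don't know/no response scheme shown above; the variable names are hypothetical.

# Hypothetical coding scheme, following the Y/N/D/U example above
CODES = {"yes": 1, "no": 2, "don't know": 8, "no response": 9}

raw_answers = ["yes", "no", "don't know", "yes", "no response"]
coded_answers = [CODES[answer] for answer in raw_answers]
print(coded_answers)   # [1, 2, 8, 1, 9]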

Finally it should be borne in mind that the personnel responsible for computer analysis
should be consulted very early in the study, i.e., as soon as the questionnaire and

dummy tables are finalized. In fact the research team needs to work closely with the
computer analyst or statistician throughout the design and the implementation of the
study.

4. Tabulation

The process of placing classified data in tabular form is known as tabulation. A table is a systematic arrangement of statistical data in rows and columns. Rows are horizontal arrangements, whereas columns are vertical arrangements. Tabulation may be simple, double or complex, depending upon the type of classification.

Objects and Importance of Tabulation

Tabulation is a technique to present and interpret complex information in a simple and systematic form. The main objectives of tabulation are as follows:

• The main purpose of tabulation is to simplify complex information so that it can be easily understood.
• Under tabulation, data is divided into various parts, and for each part there are totals and sub-totals. Therefore, the relationship between different parts can be easily known.
• Since data are arranged in a table with a title and a number, they can be easily identified and used for the required purpose.
• Tabulation makes the data brief. Therefore, it can be easily presented in the form of graphs.
• Tabulation presents the numerical figures in an attractive form.
• Tabulation makes complex data simple and, as a result, it becomes easy to understand the data.
• This form of presentation of data is helpful in finding mistakes.
• Tabulation is useful in condensing the collected data.
• Tabulation makes it easy to analyze the data from tables.
• Tabulation is a very economical mode of presenting data. It saves time as well as space.
• Tabulation is a device to summarize large scattered data, so the maximum information may be gathered from these tables.

Rules of Tabulation

There are no hard and fast rules for the tabulation of data, but the following general rules should be observed when constructing a good table:

• First of all, there should be a proper title for each table. Table number and title
of the table must be written above the table.
• The table should suit the size of the paper and, therefore, the width of the
column should be decided beforehand.
• Number of columns and rows should neither be too large nor too small.
• Captions, heading or sub-headings of columns and heading and subheadings
of rows must be self-explanatory.
• Each column and row must be given a title. Title of column is called caption
and title of the row is called stub.
• As far as possible figures should be approximated before tabulation. This
would reduce unnecessary details.
• Items should be arranged either in alphabetical, chronological, or geographical
order or according to size.
• The units of measurement under each heading or sub-heading must always be
indicated.
• Footnotes may be added, if necessary, using appropriate reference signs.
• The sub-total and total of the items on the table must be written.
• Percentages are given in the tables if necessary.
• Ditto marks should not be used in a table because sometimes it creates
confusion.
• A table should be simple and attractive.

Parts of a Table or Preparation of a Table

Preparation of a table is an art which needs expert handling of data. The following general principles may be followed for preparing a good table:

Table Number:

When a table or a book contains more than one table, each table must have a number.
The tables are numbered in a sequence so that they may be easily referred to. The
number of the table should be placed in the middle on the top of the table.

Title:

Every table must have a suitable heading. The heading should be short, clear and
convey the purpose of the table. It should contain four types of information:

• The subject matter;


• Time;
• Basis of classification; and
• Sources.

Besides the main heading, there may be some sub-headings also.

The title should be so worded that it permits one and only one interpretation. Its
letters should be the most prominent of any lettering on the table.

Long titles cannot be read as promptly as short titles, but they may have to be used
for the sake of clarity when necessary. In such a situation a "catch title" may be given
above the main title.

Captions and stubs: Captions refer to the vertical column headings, whereas stubs
refer to the horizontal row headings. Captions generally give the basis of classification
e.g. sex, occupation, meters, kms, etc. It may consist of one or more column headings.
Under a column heading, there may be sub-heads. The captions should be clearly
defined and placed at the middle of the column. It is desirable to number each column
and row for reference and to facilitate comparisons.

Head notes: Head Note is a statement given below the title which clarifies the
contents of the table. It gives an explanation concerning the entire table or main parts
of it, e.g., the units of measurement are usually expressed in a head note such as 'in
hectares', 'in millions', 'in quintals' etc.

Body: The body of the table contains the figures that are to be presented to the
readers. The table must contain sub totals of each separate class of data and the grand
total for the combined classes.

Source: The source is given in case of secondary data. It gives the sources from which
the data were obtained. The source should give the name of the book, page number,
table number etc. from which the data have been collected.

Types of Tabulation:

a. Simple Tabulation or One-way Tabulation

When the data are tabulated according to one characteristic, it is said to be simple tabulation or one-way tabulation.

For Example: Tabulation of data on the population of the world classified by one characteristic, like religion, is an example of simple tabulation.

b. Double Tabulation or Two-way Tabulation:

When the data are tabulated according to two characteristics at a time, it is said to be double tabulation or two-way tabulation.

For Example: Tabulation of data on the population of the world classified by two characteristics, like religion and sex, is an example of double tabulation.

c. Complex Tabulation:

When the data are tabulated according to many characteristics, it is said to be complex tabulation.

For Example: Tabulation of data on the population of the world classified by several characteristics, like religion, sex and literacy, is an example of complex tabulation.
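
As a hedged illustration (not part of the original text), one-way and two-way tabulation can be produced directly from a coded data set. The sketch below uses Python's pandas library; the data and column names are invented.

import pandas as pd

data = pd.DataFrame({
    "religion": ["Hindu", "Christian", "Hindu", "Buddhist", "Christian", "Hindu"],
    "sex":      ["F",     "M",         "M",     "F",        "F",         "M"],
})

# Simple (one-way) tabulation: counts by a single characteristic
print(data["religion"].value_counts())

# Double (two-way) tabulation: counts by two characteristics at a time
print(pd.crosstab(data["religion"], data["sex"], margins=True))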

Basic differences between classification and Tabulation

In spite of the fact that they are closely related, the differences are as follows.

• Classification is the basis for tabulation, whereas tabulation is the basis for further analysis.
• Classification is the basis for simplification, whereas tabulation is the basis for presentation.
• In classification, data is divided into groups and sub-groups on the basis of similarities and dissimilarities, whereas in tabulation the data is listed according to a logical sequence of related characteristics.

Limitations of Tabulation

Tabulation suffers from the following limitations:

• Tables contain only numerical data. They do not contain details.


• Qualitative expression is not possible through tables.
• Tables can be used by experts only to draw conclusions. Common men do not
understand them properly.

Data Analysis

When the researcher is finished with the data collection, he has to start data analysis
which again involves numerous issues to be answered. Importantly, the data should

be accurate, complete, and suitable for further analysis (Sekaran and Bougie, 2010).
Researcher has to record and arrange the data and then apply various descriptive and
inferential statistics or econometrics concepts to explain the data and draw
inferences (Saunders et al., 2009). Selection of an inappropriate statistical technique or
econometric model may lead to wrong interpretations. This may in turn result in
failure to solve the research problem and answer research questions. The researcher’s
task is incomplete if the study objectives are not met.

According to Lind et al. (2008), researchers can use a number of descriptive statistics concepts to explain data, such as frequency distributions or cumulative frequency
distributions, frequency polygons, histograms, various types of charts like bar charts
& pie charts, scatter diagrams, box plots etc.

The researcher can make inferences and draw conclusions based on inferential
statistics. Two main objectives of inferential statistics are (1) to estimate a population
parameter and (2) to test hypotheses or claim about a population parameter (Triola,
2008). The researcher has to carefully select between varieties of inferential statistical
techniques to test his hypotheses. For example, based on whether the researcher is
using sample or census, he has the choice of using either t-tests or z-tests (Cooper &
Schindler, 2006). In hypotheses testing, depending upon the number and nature of
samples, researcher has to decide among using either one sample t-test, or two
sample (independent or dependent) t-tests, or doing ANOVA/MANOVA (Lind et al.,
2008).

In an explanatory or causal study, researchers must rely on statistical or econometric techniques having the ability to test correlation or causation. The Pearson Coefficient of
Correlation is the most commonly used measure of finding correlations between two
or more variables. A correlation exists between two variables when one of them is
related to the other in some way (Triola, 2008).

The formula to compute the Linear Correlation Coefficient is:

r = Σ(X – X̄)(Y – Ȳ) / [(n – 1) sx sy]

where X̄ and Ȳ are the sample means of X and Y, and sx and sy are their sample standard deviations.

The value of r always lies between -1 and +1 inclusive. If it lies near to -1, it shows a
strong negative correlation but if it lies near to +1, it shows a strong positive
correlation.
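
The computation can be delegated to statistical software. The following minimal Python sketch (an illustration only, with invented numbers) obtains r and its p-value using the scipy library.

from scipy import stats

x = [2, 4, 5, 7, 9]
y = [10, 14, 15, 19, 25]

r, p_value = stats.pearsonr(x, y)      # r lies between -1 and +1
print(round(r, 3), round(p_value, 4))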

Quantitative

It is a systematic approach to investigation during which numerical data are collected and/or the researcher transforms what is collected or observed into numerical data.
It often describes a situation or event; answering the 'what' and 'how many' questions
you may have about something. This is research which involves measuring or counting
attributes (i.e. quantities). A quantitative approach is often concerned with finding
evidence to either support or contradict an idea or hypothesis you might have. A
hypothesis is where a predicted answer to a research question is proposed.

Advantages

• Allow for a broader study, involving a greater number of subjects, and enhancing
the generalization of the results
• Can allow for greater objectivity and accuracy of results. Generally, quantitative
methods are designed to provide summaries of data that support
generalizations about the phenomenon under study. In order to accomplish this,
quantitative research usually involves a few variables and many cases, and
employs prescribed procedures to ensure validity and reliability
• Using standards means that the research can be replicated, and then analyzed
and compared with similar studies.
• Personal bias can be avoided by researchers keeping a 'distance' from
participating subjects and employing subjects unknown to them.

Disadvantages

• Collect a much narrower and sometimes superficial dataset.


• Results are limited as they provide numerical descriptions rather than detailed
narrative and generally provide less elaborate accounts of human perception.
• The research is often carried out in an unnatural, artificial environment so that
a level of control can be applied to the exercise. This level of control might not
normally be in place in the real world yielding laboratory results as opposed to
real world results.
• In addition preset answers will not necessarily reflect how people really feel
about a subject and in some cases might just be the closest match.
• The development of standard questions by researchers can lead to 'structural' bias and false representation, where the data actually reflects the researchers' view instead of that of the participating subjects.

Statistical Analysis

Statistical analysis is about making sense of a set of data or a series of observations. Most people, whether they realize it or not, have conducted some kind of statistical analysis, even something as basic as balancing a checkbook. Statistical
analysis can summarize and even illuminate a set of data, depending on the type of
analysis performed. Techniques of analysis range from simple measures, such as
means and standard deviations, to more complex analyses such as regression.

There are three (3) general areas that make up the field of statistics: descriptive
statistics, relational statistics, and inferential statistics.

1. Descriptive statistics fall into one of two categories: measures of central tendency (mean, median, and mode) or measures of dispersion (standard deviation
and variance). Their purpose is to explore hunches that may have come up during the
course of the research process, but most people compute them to look at the
normality of their numbers. Examples include descriptive analysis of sex, age, race,
social class, and so forth.

2. Relational statistics fall into one of three categories: univariate, bivariate, and multivariate analysis. Univariate analysis is the study of one variable for a
subpopulation, for example, age of murderers, and the analysis is often descriptive,
but you'd be surprised how many advanced statistics can be computed using just one
variable. Bivariate analysis is the study of a relationship between two variables, for
example, murder and meanness, and the most commonly known technique here is
the correlation. Multivariate analysis is the study of the relationship between three or
more variables, for example, murder, meanness, and gun ownership, and for all
techniques in this area, you simply take the word "multiple" and put it in front of the
bivariate technique used, as in multiple correlation.

3. Inferential statistics, also called inductive statistics, fall into one of two categories: tests for difference of means and tests for statistical significance. The latter are further subdivided into parametric tests, which assume that the data follow a particular distribution (typically the normal distribution), and nonparametric tests, which make no such distributional assumption. The purpose of difference of means tests is
to test hypotheses, and the most common techniques are called Z-tests. The most
common parametric tests of significance are the F-test, t-test, ANOVA, and regression.
The most common nonparametric tests of significance are chi-square, the Mann-
Whitney U-test, and the Kruskal-Wallis test.

To summarize:

• Descriptive statistics (mean, median, mode; standard deviation, variance)

• Relational statistics (correlation, multiple correlation)
• Inferential tests for difference of means (Z-tests)
• Inferential parametric tests for significance (F-tests, t-tests, ANOVA,
regression)
• Inferential nonparametric tests for significance (chi-square, Mann-Whitney,
Kruskal-Wallis)

Central Tendency

Measures of central tendency are generally known as averages and include such
measures as the mean and median. The mean is calculated by summing the values in
a set of data and dividing the total by the number of values. If the data are arrayed in
order from the highest value to the lowest, the median is the middle value, where half
of the values are higher and the other half are lower.

Dispersion

Measures of spread or dispersion include the range, which is the difference between
the highest and lowest values in the data, and the standard deviation. The latter
measure is more complex to calculate and generally requires a computer or at least a
calculator. The standard deviation is the square root of the variance, which is the
mean of the sum of squared deviations from the mean score.

Measures Of Central Tendency

The most commonly used measure of central tendency is the mean. To compute
the mean, you add up all the numbers and divide by how many numbers there are.
It is what most people call the average, but it is not a halfway point; rather, it is a kind of center that balances high numbers with low numbers. For this reason, it's more often reported along with some
simple measure of dispersion, such as the range, which is expressed as the lowest and
highest number. The median is the number that falls in the middle of a range of
numbers. It's not the average; it's the halfway point. There are always just as many
numbers above the median as below it. In cases where there is an even set of
numbers, you average the two middle numbers. The median is best suited for data
that are ordinal, or ranked. It is also useful when you have extremely low or high
scores. The mode is the most frequently occurring number in a list of numbers. It's
the closest thing to what people mean when they say something is average or typical.
The mode doesn't even have to be a number. It will be a category when the data are
nominal or qualitative. The mode is useful when you have a highly skewed set of
numbers, mostly low or mostly high. You can also have two modes (bimodal
distribution) when one group of scores are mostly low and the other group is mostly
high, with few in the middle.
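
For illustration only (this example is not from the original text), the three measures can be computed with Python's built-in statistics module; the scores are invented.

import statistics

scores = [4, 7, 7, 8, 10, 12, 15]

print(statistics.mean(scores))     # arithmetic mean (9)
print(statistics.median(scores))   # middle value (8)
print(statistics.mode(scores))     # most frequently occurring value (7)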

Measures Of Dispersion

In data analysis, the purpose of computing a measure of dispersion is to discover the extent to which scores differ, cluster, or spread around a measure
of central tendency. The most commonly used measure of dispersion is the standard
deviation. You first compute the variance, which is calculated by subtracting the mean
from each number, squaring it, and dividing the grand total (Sum of Squares) by how
many numbers there are. The square root of the variance is the standard
deviation. The standard deviation is important for many reasons. One reason is that,
once you know the standard deviation, you can standardize by it. Standardization is
the process of converting raw scores into what are called standard scores, which allow
you to better compare groups of different sizes. Standardization isn't required for data
analysis, but it becomes useful when you want to compare different subgroups in your
sample, or between groups in different studies. A standard score is called a z-score
(not to be confused with a z-test), and is calculated by subtracting the mean from each
and every number and dividing by the standard deviation. Once you have converted
your data into standard scores, you can then use probability tables that exist for
estimating the likelihood that a certain raw score will appear in the population. This is
an example of using a descriptive statistic (standard deviation) for inferential
purposes.
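
The sketch below (an invented illustration, not from the original text) computes the variance, the standard deviation and z-scores in Python; note that the statistics module uses the sample formula, dividing by n - 1 rather than by n.

import statistics

scores = [52, 60, 61, 68, 74, 80]
mean = statistics.mean(scores)
sd = statistics.stdev(scores)            # sample standard deviation
variance = statistics.variance(scores)   # sd squared

z_scores = [(x - mean) / sd for x in scores]   # standardized scores
print(round(variance, 2), round(sd, 2))
print([round(z, 2) for z in z_scores])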

Correlation

The most used relational statistic is correlation and it's a measure of the strength of
any relationship between two variables, not causality. Interpretation of a correlation
coefficient does not even allow the slightest hint of causality. The most a researcher
can say is that the variables share something in common; that is, are related in some
way. The more two things have something in common, the more strongly they are
related. There can also be negative relations, but the most important quality of
correlation coefficients is not their sign, but their absolute value. A correlation of -.58 is stronger than a correlation of .43, even though with the former the relationship is negative.

The following table lists the interpretations for various correlation coefficients:

.8 to 1.0 very strong


.6 to .8 strong
.4 to .6 moderate

.2 to .4 weak

.0 to .2 very weak

The most frequently used correlation coefficient in data analysis is the Pearson
product moment correlation.

Z-Tests, F-Tests, AND T-Tests

These refer to a variety of tests for inferential purposes. Z-tests are not to be
confused with z-scores. Z-tests come in a variety of forms, the most popular being: (1)
to test the significance of correlation coefficients; (2) to test for equivalence of sample
proportions to population proportions, as to whether the number of minorities you've
got in your sample is proportionate to the number in the population. Z-tests
essentially check for linearity and normality, allow some rudimentary hypothesis
testing, and allow the ruling out of Type I and Type II error.

F-tests are much more powerful, as they allow explanation of variance in one variable
accounted for by variance in another variable. In this sense, they are very much like
the coefficient of determination. One really needs a full-fledged statistics course to
gain an understanding of F-tests, so suffice it to say here that you find them most
commonly with regression and ANOVA techniques. F-tests require interpretation by
using a table of critical values.

T-tests are kind of like little F-tests, and similar to Z-tests. They are appropriate for smaller samples, and relatively easy to interpret, since any calculated t over 2.0 is, by rule of thumb, significant. T-tests can be used for one sample, two samples, one tail, or two-
tailed. You use a two-tailed test if there's any possibility of bi-directionality in the
relationship between your variables.
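
A brief Python sketch of a two-sample (independent) t-test is given below for illustration; the group values are invented and the scipy library is assumed to be available.

from scipy import stats

group_a = [23, 25, 28, 30, 27]
group_b = [31, 33, 29, 35, 34]

t_stat, p_value = stats.ttest_ind(group_a, group_b)   # independent two-sample t-test
print(round(t_stat, 2), round(p_value, 4))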

Analysis of Variance (ANOVA) is a data analytic technique based on the idea of comparing explained variance with unexplained variance, kind of like a comparison of
the coefficient of determination with the coefficient of alienation. It uses a rather
unique computational formula which involves squaring almost every column of
numbers. What is called the Between Sum of Squares (BSS) refers to variance in one
variable explained by variance in another variable, and what is called the Within Sum
of Squares (WSS) refers to variance that is not explained by variance in another
variable. An F-test is then conducted on the number obtained by dividing BSS by WSS.
The results are presented in what's called an ANOVA source table, which looks like the
following:

Source SS df MS F P
Total 2800
Between 1800 1 1800 10.80 <.05
Within 1000 6 166.67
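
As an illustrative sketch only (the data are invented), a one-way ANOVA can be run in Python with scipy; f_oneway returns the F statistic and its p-value, which correspond to the F and P columns of the source table above.

from scipy import stats

group1 = [18, 20, 22, 25]
group2 = [28, 30, 27, 31]
group3 = [35, 38, 33, 36]

f_stat, p_value = stats.f_oneway(group1, group2, group3)   # one-way ANOVA
print(round(f_stat, 2), round(p_value, 4))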

Regression

Regression models can be used in an explanatory study where the researcher is interested in predicting the value of the dependent variable based on the value of the independent variable. The researcher can use simple linear regression if there is only one independent variable in the study, while in case of more than one independent variable the researcher has to make use of multiple regression models (Lind et al., 2008). Importantly, however, regression does not by itself imply causation (Gujarati and Porter, 2008).

A general multiple regression equation can be expressed as:

Ŷ = b0 + b1x1 + b2x2 + . . . + bnxn

According to Triola (2008), the following cautions should be observed while using regression equations:

• Don't use the regression equation if there is no linear correlation between the variables. Sometimes the relationship between
independent variable and dependent variable is curvilinear. In those cases
other techniques must be used.
• Stay within the scope of the available sample data when using the regression
equation for predictions.
• A regression equation based on old data may not necessarily be valid later on.
• Don’t make predictions about a population that is different from the
population from which the sample data were drawn.

Regression is the closest thing to estimating causality in data analysis, and that's
because it predicts how much the numbers "fit" a projected straight line. There are
also advanced regression techniques for curvilinear estimation. The most common
form of regression, however, is linear regression, which uses the least squares method to find an equation that best fits a line representing what is called the regression of y on x.
The procedure is similar to computing calculus minima (if you've had a math course in
calculus). Instead of finding the perfect number, however, one is interested in finding
the perfect line, such that there is one and only one line (represented by equation)
that perfectly represents, or fits the data, regardless of how scattered the data points.
The slope of the line (equation) provides information about predicted directionality,
and the estimated coefficients (or beta weights) for x and y (independent and
dependent variables) indicate the power of the relationship. Use of a regression
formula (not shown here because it's too large; only the generic regression equation
is shown) produces a number called R-squared, which is a kind of conservative, yet
powerful coefficient of determination. Interpretation of R-squared is somewhat

controversial, but generally uses the same strength table as correlation coefficients,
and at a minimum, researchers say it represents "variance explained."
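
For illustration only (invented data, not the author's own procedure), a simple least-squares regression of y on x can be fitted in Python with scipy; squaring the returned correlation coefficient gives the R-squared, or "variance explained", discussed above.

from scipy import stats

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]

result = stats.linregress(x, y)              # least-squares fit of y on x
print(result.slope, result.intercept)        # fitted line: y = intercept + slope * x
print(round(result.rvalue ** 2, 3))          # R-squared ("variance explained")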

Chi-Square

A technique designed for less than interval level data is chi-square (pronounced kye-
square), and the most common forms of it are the chi-square test for contingency and
the chi-square test for independence. Other varieties exist, such as Cramer's V,
Proportional Reduction in Error (PRE) statistics, Yule's Q, and Phi. Essentially, all chi-
square type tests involve arranging the frequency distribution of the data in what is
called a contingency table of rows and columns. Marginals, which are estimates of
error in predicting concordant pairs in the rows and columns (based on the null
hypothesis), are then computed, subtracted from one another, and expressed in the
form of a ratio, or contingency coefficient. Predicted scores based on the null
hypothesis are called expected frequencies, and these are subtracted from observed
frequencies (Observed minus Expected). Chi-square tests are frequently seen in the literature, can be easily done by hand, and are run by computers automatically whenever a contingency table is asked for.

The chi-square test for contingency is interpreted as a strength of association measure, while the chi-square test for independence (which requires two samples) is
a nonparametric test of significance that essentially rules out as much sampling error
and chance as possible.
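
A minimal Python sketch of a chi-square test on a contingency table is shown below for illustration; the observed frequencies are invented, and chi2_contingency also returns the expected frequencies computed under the null hypothesis.

from scipy.stats import chi2_contingency

observed = [[30, 10],     # invented 2 x 2 contingency table of observed frequencies
            [20, 40]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(round(chi2, 2), round(p_value, 4), dof)
print(expected)           # expected frequencies under the null hypothesis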

Mann-Whitney and Kruskal-Wallis Tests

The Mann-Whitney U test is similar to chi-square and the t-test, and used whenever
you have ranked ordinal level measurement. As a significance test, you need two
samples, and you rank (say, from 1 to 15) the scores in each group, looking at the
number of ties. A z-table is used to compare calculated and table values of U. The
interpretation is usually along the lines of any significant difference being due to the
variables you've selected.

The Kruskal-Wallis H test is similar to ANOVA and the F-test, and also uses ordinal,
multisampling data. It's most commonly seen when raters are used to judge research
subjects or research content. Rank calculations are compared to a chi-square table,
and interpretation is usually along the lines that there are some significant
differences, and grounds for accepting research hypotheses.
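
The two tests can likewise be run in Python with scipy, as in the hedged sketch below; the ranked-style sample data are invented for illustration.

from scipy import stats

sample1 = [3, 5, 2, 6, 4]
sample2 = [7, 8, 6, 9, 10]
sample3 = [12, 11, 13, 10, 14]

u_stat, p_u = stats.mannwhitneyu(sample1, sample2)        # Mann-Whitney U (two samples)
h_stat, p_h = stats.kruskal(sample1, sample2, sample3)    # Kruskal-Wallis H (three samples)
print(round(u_stat, 2), round(p_u, 4))
print(round(h_stat, 2), round(p_h, 4))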

Presenting Findings

You can present the results of your statistical analyses in the form of tables or graphs.
Spreadsheet programs such as Excel can perform most basic statistical analyses, as

well as present the findings in tables or graphs. Excel can perform a variety of
statistical procedures, both basic and advanced. Spreadsheet programs, however, are
not specifically designed for more complicated analyses. Many scientists and
university researchers use specialized statistical software packages such as SPSS and
SAS to analyze data.

Diagrammatic Presentation of Data

Although tabulation is a very good technique to present data, diagrams are a more advanced technique to represent data. A layman cannot easily understand tabulated data, but with only a single glance at a diagram one gets a complete picture of the data presented. According to Moroney, diagrams register a meaningful impression almost before we think.

Importance or utility of Diagrams

• Diagrams give a very clear picture of data. Even a layman can understand it
very easily and in a short time.
• We can make comparison between different samples very easily. We don't
have to use any statistical technique further to compare.
• This technique can be used universally at any place and at any time. This
technique is used almost in all the subjects and other various fields.
• Diagrams have impressive value also. Tabulated data does not make as much of an impression as diagrams. A common man is impressed easily by good diagrams.
• This technique can be used for numerical types of statistical analysis, e.g., to locate the Mean, Mode, Median or other statistical values.
• It not only saves time and energy but is also economical. Not much money is needed to prepare even better diagrams.
• Diagrams give us much more information as compared to tabulation. The technique of tabulation has its own limits.
• Diagrammatic data is easily remembered. Diagrams leave a lasting impression much more than other data techniques.
• Data can be condensed with diagrams. A simple diagram can present what cannot be presented even in 10,000 words.

General Guidelines for Diagrammatic presentation

• The diagram should be properly drawn at the outset. The pith and substance
of the subject matter must be made clear under a broad heading which
properly conveys the purpose of a diagram.

• The size of the scale should neither be too big nor too small. If it is too big, it
may look ugly. If it is too small, it may not convey the meaning. In each
diagram, the size of the paper must be taken note-of. It will help to determine
the size of the diagram.
• For clarifying certain ambiguities, some notes should be added at the foot of the diagram. This will provide visual insight into the diagram.
• Diagrams should be neat and clean. There should be no vagueness or overwriting on the diagram.
• Simplicity is essential; the diagram should convey the meaning clearly and easily at first sight.
• The scale must be presented along with the diagram.
• It must be Self-Explanatory. It must indicate the nature, place and a source of
data presented.
• Different shades, colors can be used to make diagrams more easily
understandable.
• Vertical diagram should be preferred to Horizontal diagrams.
• It must be accurate. Accuracy must not be done away with to make it attractive
or impressive.

Types of Diagrams

(a) Line Diagrams

In these diagrams only a line is drawn to represent one variable. These lines may be vertical or horizontal. The lines are drawn such that their length is in proportion to the value of the terms or items, so that comparison may be done easily.

(b) Simple Bar Diagram

Like line diagrams, these figures are also used where only a single dimension, i.e. length, can present the data. The procedure is almost the same, the only difference being the thickness of the bars. These can also be drawn either vertically or horizontally. The breadth of these lines or bars should be equal. Similarly, the distance between the bars should be equal. The breadth and distance between them should be chosen according to the space available on the paper.

(c) Multiple Bar Diagrams

This diagram is used when we have to make a comparison between two or more variables. The number of variables may be 2, 3, 4 or more. In case of 2 variables, pairs of bars are drawn. Similarly, in case of 3 variables, we draw triple bars. The bars are drawn on the same proportionate basis as in the case of simple bars. The same shade is given to the same item. The distance between pairs is kept constant.

(d) Sub-divided Bar Diagram

The data which is presented by multiple bar diagram can be presented by this diagram.
In this case we add different variables for a period and draw it on a single bar as shown
in the following examples. The components must be kept in the same order in each
bar. This diagram is more efficient if number of components are less i.e. 3 to 5.

(e) Percentage Bar Diagram

Like the sub-divided bar diagram, in this case also the data of one particular period or variable is put on a single bar, but in terms of percentages. The components are kept in the same order in each bar for easy comparison.

(f) Duo-directional Bar Diagram

In this case the diagram is drawn on both sides of the base line, i.e. to the left and right, or above and below.

(g) Broken Bar Diagram

This diagram is used when the value of some variable is very high or low as compared
to others. In this case the bars with bigger terms or items may be shown broken.
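
As a purely illustrative sketch (not part of the original text), simple and multiple bar diagrams can be drawn with Python's matplotlib library; the categories and values below are invented.

import numpy as np
import matplotlib.pyplot as plt

categories = ["Section A", "Section B", "Section C"]
values_2021 = [30, 45, 25]
values_2022 = [35, 40, 30]
x = np.arange(len(categories))

plt.bar(x - 0.2, values_2021, width=0.4, label="2021")   # first series of bars
plt.bar(x + 0.2, values_2022, width=0.4, label="2022")   # second series, side by side
plt.xticks(x, categories)
plt.legend()
plt.title("Multiple bar diagram (illustrative data)")
plt.show()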

Limitations of Diagrammatic Presentation

• Diagrams do not present the small differences properly.


• These can easily be misused.
• Only an artist can draw multi-dimensional diagrams.
• In statistical analysis, diagrams are of no use.
• Diagrams are only supplementary to tabulation.
• Only a limited set of data can be presented in the form of a diagram.
• Diagrammatic presentation of data is a more time-consuming process.
• Diagrams present only preliminary conclusions.
• Diagrammatic presentation of data shows only an estimate of the actual behaviour of the variables.

Use of Software

As discussed before, the researcher must carefully select the statistical, mathematical and econometric tools most suitable for the type of analysis he wants to do with his data (Lind et al., 2008). In data analysis, a good researcher never emphasizes the quantity of analysis but its quality. He must conduct all the analysis sufficient to provide the interpretations and conclusions required by the study objectives. The advent of many software packages, particularly
relating to regression and other econometric techniques, such as SPSS, SAS, STATA,
MINITAB, ET, LIMDEP, SHAZAM, Nvivo, Microfit and others have made it easy for the
researcher to work on the analysis part of his research project (Gujarati and Porter,
2008).

CONCLUSION

This particular chapter discusses the implementation of the tools and techniques of research, how to process the data, and how a researcher can make an appropriate data analysis. The stage that follows data collection is data processing and analysis. The information incorporated in this chapter is the fundamental knowledge required for a researcher to complete the research. The quantitative data is converted once again into a qualitative form for effective interpretation. In this process the data need to be tabulated and, with the support of statistical software such as SPSS, SAS, AMOS or PLS, thoroughly analyzed. The better the understanding a researcher has of data collection, processing and analysis, the better he/she can avoid human and technical errors. This chapter provided the researcher an overview of the tools, data collection, analysis and processing.

DISCUSSION QUESTIONS
1. What is the role of a pilot study in research?
2. What aspects of your research methodology can be evaluated during pre-
testing?
3. Which components should be assessed during the pre-test?
4. What do you mean by data processing?
5. What are the different types of closely related operations in data
processing?
6. How do you define data classification?
7. Which are different types of data in the data processing stage?
8. What do you mean by classification?
9. What are different types of classification?
10. Explain different ways of classification?
11. What do you mean by coding?
12. Explain tabulation?
13. What are the objects and the importance of tabulation?
14. Discuss the role of tabulation?
15. Which are different parts of a table?
16. Explain different types of tabulation?
17. Differentiate between classification and tabulation?
18. What are the advantages and disadvantages of tabulation?

19. Define statistics?
20. Differentiate various types of statistical application in research?
21. Explain central tendency?
22. What do you mean by measures of dispersion?
23. Explain correlation?
24. How do you interpret various correlation coefficients?
25. Explain Z test, F test and T test?
26. Evaluate the role of regression and chi-square in research?
27. What is the role of the Kruskal-Wallis test in research analysis?
28. How do you present data?
29. Discuss the role of diagrams in the presentation of data?
30. What are the general guidelines for diagrammatic presentation?
31. What are different types of diagrams?
32. Differentiate simple bar diagram and multiple bar diagram?
33. Discuss the limitation of diagrammatic presentation of data?

Module 13

REPORT WRITING

Learning objectives:

By the end of this chapter, you will be able to:

1. Understand and appreciate the meaning of a research report.
2. Learn about different types of reports and research reports.
3. Learn how to write research reports.
4. Learn how to write a research synopsis.

Introduction

The final stage of any research process is the compilation and submission of all the data and findings for external validation. This stage is called report writing and finalization. The researcher needs to submit a report to his/her supervisors, sponsors or institutions so that they can evaluate the report and take further action based on the findings incorporated. A research report may be centered on practical work, research by reading and observation, or a study of an institution or industrial/workplace situation. The objective of the report is to write clearly and concisely about the research topic so that the reader can effortlessly comprehend the purpose and findings of the research. A research report can be qualitative or quantitative in its content and nature. This particular section of the book elaborates the concept, types, process and method of report writing.

What is a research report?

A research report is a formal statement of the research process and research results. It narrates the problem studied, the methods used, and the findings and conclusions.

A research report is a written document, or an oral presentation based on a written document, that communicates the purpose, scope, objective(s), hypotheses, methodology, findings, limitations and, finally, recommendations of a research project to others.

Characteristics

• A research report is a narrative and authoritative document of the outcome of the research.
• It presents highly specific information; and
• It is a simple, readable, and accurate form of communication.

Function of Research Report

The functions of the research report can be detailed as follows.

A research report has the major function of presenting the problem studied by the researcher. The report should specifically mention the methods and techniques used for collecting and analyzing data. It serves as a basic reference material for future research for scholars and professionals. A research report is the means for judging the quality of the work done by the researcher. It acts as the means for evaluating the researcher's ability and competence to do research. Further, the report provides a factual base for formulating policies and strategies related to the subject matter. The report provides systematic knowledge of the problem.

Two types of reports:

Technical Report: Suitable for a target audience of researchers, research managers or other people familiar with and interested in technicalities such as research design, sampling methods, statistical details, etc.

Popular Report: Suitable for a more general audience, interested mainly in the research findings, as it is non-technical in nature.

• Used for non-technical users (journalistic);


• Readers less concerned with methodology;
• Interested in studying quickly the major findings and conclusions
• Brief, abstract of findings;
• No presentation of complicated statistical procedures; and
• Use of pictorial devices.

The writing style is designed to facilitate easy and rapid reading and
understanding of the research findings and recommendations.

Interim Report

When there is a time lag between data collection and presentation, an interim report needs to be prepared. This is to inform the sponsors about the progress of the project/research. Interim reports keep alive the agency's interest and remove misunderstanding about the delay. In an interim report the researcher has to provide a narration of what has been done so far. It contains the first results of the analysis and a presentation of the outcomes so far (a summary of findings).

Summary Report

The summary report is meant for the general public. It is written in non-technical and simple language, with liberal use of pictorial charts. It is a kind of short report of two or three pages, consisting of the contents, a brief reference to the objectives of the study, the major findings and their implications.

The individual sections of a research report, and the content of each section, can be summarized as follows:

Title of Report: Concise heading indicating what the report is about
Table of Contents (not always required): List of major sections and headings with page numbers
Abstract/Synopsis: Concise summary of main findings
Introduction: Why and what you researched
Literature Review (sometimes included in the Introduction): Other relevant research in this area
Methodology: What you did and how you did it
Results: What you found
Discussion: Relevance of your results, how it fits with other research in the area
Conclusion: Summary of results/findings
Recommendations (sometimes included in the Conclusion): What needs to be done as a result of your findings
References or Bibliography: All references used in your report or referred to for background information
Appendices: Any additional material which will add to your report

RESEARCH REPORT: CONTENT OF INDIVIDUAL SECTIONS

DRAFT THE PRELIMINARY MATERIAL

Title of Report:

Make sure this is clear and indicates exactly what you are researching.

Table of Contents:

List all sections, sub-headings, tables/graphs and appendices, and give page numbers.

List of tables, figures

If you have many tables or figures it is helpful to list these also, in a ‘table of contents’
type of format with page numbers.

List of abbreviations (optional)

If abbreviations or acronyms are used in the report, these should be stated in full in
the text the first time they are mentioned. If there are many, they should be listed in
alphabetical order as well. The list can be placed before the first chapter of the report.

The table of contents and lists of tables, figures, abbreviations should be prepared
last, as only then can you include the page numbers of all chapters and sub-sections
in the table of contents. Then you can also finalize the numbering of figures and tables
and include all abbreviations.

DRAFT THE BODY OF YOUR REPORT

Introduction

The introduction consists of the purpose of your report. The thesis statement will be
useful here. Background information may include a brief review of the literature
already available on the topic so that you are able to ‘place’ your research in the field.
Some brief details of your methods and an outline of the structure of the report should also be included.

Literature Review

If asked to do a separate literature review, you must carefully structure the findings. It may be useful to use a chronological format, discussing the research from the earliest to the latest and placing your own research appropriately in the chronology. Alternatively, you could write in a thematic way, outlining the various themes that

you discovered in the research regarding the topic. Again, you will need to state where
your research fits.

Methodology

Here you clearly outline what methodology you used in your research, i.e. what you
did and how you did it. It must be written clearly so that it would be easy for another
researcher to duplicate your research if they wished to. It is usually written in a
‘passive’ voice (e.g. the participants were asked to fill in the questionnaire attached)
rather than an ‘active’ voice (e.g. I asked the participants to fill in the questionnaire
attached). Clearly reference any material you have used from other sources. Clearly
label and number any diagrams, charts, and graphs. Ensure that they are relevant to
the research and add substance to the text rather than just duplicating what you have
said. You do not include or discuss the results here. The methodology section should
include a description of:

• the study type;


• major study themes or variables (a more detailed list of variables on
which data was collected may be annexed);
• the study population(s), sampling method(s) and the size of the
sample(s);
• data-collection techniques used for the different study populations;
• how the data was collected and by whom; and
• procedures used for data analysis, including statistical tests (if
applicable).

Results

This is where you indicate what you found in your research. You give the results of
your research, but do not interpret them. The systematic presentation of your findings
in relation to the research objectives is the crucial part of your report.

The description of findings should offer a good combination or triangulation of data from qualitative and quantitative components of the study.

Discussion

The crux of the report is the analysis and interpretation of the results. This is where
you discuss the relevance of your results and how your findings fit with other research
in the area. It will relate back to your literature review and your introductory thesis
statement. It answers many questions like, What do the results mean? How do they
relate to the objectives of the project? To what extent have they resolved the

problem? Because the "Results" and "Discussion" sections are interrelated, they can
often be combined as one section.

Recommendations

This includes suggestions for what needs to be done because of your findings.
Recommendations are usually listed in order of priority.

Conclusion

A separate section outlining the main conclusions of the project is appropriate if conclusions have not already been stated in the "Discussion" section. Directions for future work are also suitably expressed here. A lengthy report, or one in which the findings are complex, usually benefits from a paragraph summarizing the main features
of the report - the objectives, the findings, and the conclusions. The last paragraph of
text in manuscripts prepared for publication is customarily dedicated to
acknowledgments. However, there is no rule about this, and research reports or
senior theses frequently place acknowledgments following the title page.

References or Bibliography

This includes all references used in your report or referred to for background
information. In making recommendations, use not only the findings of your study, but
also supportive information from other sources. The recommendations should take
into consideration the local characteristics of the health system, constraints, feasibility
and usefulness of the proposed solutions. They should be discussed with all concerned
before they are finalized.

Appendix

These should add extra information to the report. If you include appendices, they
must be referred to in the body of the report and must have a clear purpose for being
included. Each appendix must be named and numbered.

CONCLUSION

The major responsibility of any researcher is to communicate his/her findings on the research topic to the stakeholders and society through proper reports and theses. As incorporated elsewhere in this section, there are varied forms of reports and report writing. Care should be taken when a researcher wants to communicate the findings to the stakeholders, so that the report is appealing and attractive enough to be well received. The report needs to be prepared in a scientific manner. A good research report consists of seven typical components: introduction, literature review, methodology, analysis and results, discussion, conclusion, and references. A researcher should follow a flow in reporting the research materials to the stakeholders. This book provides an overall guide to writing reports about the scientific research that a researcher has performed.

DISCUSSION QUESTIONS

1. What is a research report?
2. Explain the features of a research report?
3. What are the functions of a research report?
4. Differentiate technical and popular research reports?
5. Differentiate summary and interim reports?
6. Explain the individual sections and contents of a research report?

GLOSSARY
Action research - happens when investigators plan a field experiment, collect the data, and feed it back to the activists (i.e. participants) both as feedback and as a way of modeling the next stage of the experiment.
After-only design - The after-only design is achieved by changing the independent
variable and, after some period of time, measuring the dependent variable. It is
diagrammed as follows: X O1 where X represents the change in the independent
variable (putting all of the apples in end-aisle displays) and the distance between X
and O represents the passage of some time period.
Age-specific rate - Frequency of occurrence of an event in a defined age group.
Analysis - The breakdown of something that is complex into smaller parts in such a
way that leads to a better understanding of the whole.
Analysis of variance (ANOVA) - Significance test for comparing the means of a
quantitative variable between three or more groups (an extension of the independent
samples t-test).
Analytic induction - use of constant comparison specifically in developing hypotheses,
which are then tested in further data collection and analysis.
Before–after with control group - The before–after with control group design may be
achieved by randomly dividing subjects of the experiment (in this case, supermarket)
into two groups: the control group and the experimental group.
Bivariate statistics - Descriptive statistics for the analysis of the association between
two variables (e.g. contingency tables, correlation).
Brand-switching studies - Studies examining how many consumers switched brands
are known as brand-switching studies.
Case - A single unit in a study (e.g. a person or setting, such as a clinic, hospital).
Case analysis - By case analysis, we refer to a review of available information about a
former situation(s) that has some similarities to the present research problem.
Case study - a research method which focuses on the circumstances, dynamics and
complexity of a single case, or a small number of cases.
Categorical variable - Variable whose values represent different categories or classes
of the same feature.
Causal hypothesis - A statement predicting that one phenomenon will be the result of one or more other phenomena that precede it in time.
Causal relationships - Observed changes ('the effect’) in one variable are the result of
prior changes in another.

Causality - Causality may be thought of as understanding a phenomenon in terms of
conditional statements of the form “If x, then y.”
Central tendency - (a) Mean: the arithmetic mean, or average, is a measure of central
tendency in a population or sample. The mean is defined as the sum of the values
divided by the total number of cases involved. (b) Median: this is the middle value of
the observations when listed in ascending order; it bisects the observations (i.e. the
point below which 50 per cent of the observations fall). (c) Mode: a measure of central
tendency based on the most common value in the distribution (i.e. the value of X with
the highest frequency).
Classify - Grouping things together based on specific characteristics.
Closed question - the question is followed by predetermined response choices into
which the respondent’s reply must be placed.
Cluster - A sample unit which consists of a group of elements, for example a school.
Cluster sampling - Probability sampling involves the selection of groupings (clusters)
and selecting the sample units from the clusters.
Coding - The assignation of (usually numerical) codes for each category of each
variable.
Compare - To examine the different and/or similar characteristics of things or events.
Concept - An abstraction representing an object or phenomenon.
Confidence interval - A confidence interval calculated from a sample is interpreted as
a range of values which contains the true population value with the probability
specified.
Confounding factors - An extraneous factor (a factor other than the variables under study) that is not controlled for and distorts the results. An extraneous factor only confounds when it is related to both the dependent and the independent variables under investigation. It makes them appear connected when their association is, in fact, spurious.
Consensus methods - Include Delphi and nominal group techniques and consensus
development conferences. They provide a way of synthesizing information and
dealing with conflicting evidence, with the aim of determining extent of agreement
within a selected group.
Content analysis - A form of analysis which usually counts and reports the frequency
of concepts/words/behaviours held within the data. The researcher develops brief
descriptions of the themes or meanings, called codes. Similar codes may at a later
stage in the analysis be grouped together to form categories.
Continuous panels - Continuous panels ask panel members the same questions on
each panel measurement.

Control - The group or subject that is used as a standard for comparison in an
experiment.
Control group - By control group, we mean a group whose subjects have not been
exposed to the change in the independent variable.
Control variable - A variable used to test the possibility that an empirically observed
relationship between an independent and dependent variable is spurious.
Controlled test markets - Controlled test markets are conducted by outside research
firms that guarantee distribution of the product through pre-specified types and
numbers of distributors.
Correlation - Linear association between two quantitative or ordinal variables,
measured by a correlation coefficient.
Correlation coefficient - Measure of the linear association between quantitative or
ordinal variables.
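A short Python sketch computing Pearson's correlation coefficient for two small hypothetical variables:

    x = [1, 2, 3, 4, 5]                                # hypothetical values of variable X
    y = [2, 4, 5, 4, 6]                                # hypothetical values of variable Y
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    r = cov / (var_x * var_y) ** 0.5
    print(round(r, 2))                                 # about 0.85: a fairly strong positive linear association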
Critical thinking - Thinking that uses specific sets of skills to carefully analyze problems
step-by-step; scientific methods are one type of critical thinking.
Cross-sectional study - A type of observational study in which subjects are observed on
just one occasion.
Cross-sectional studies - Cross-sectional studies measure units from a sample of the
population at only one point in time.
Data - Information, measurements and materials gathered from observations that are
used to help answer questions.
Data analysis - A systematic process of working with the data to provide an
understanding of the research participant’s experiences. While there are several
methods of qualitative analysis that can be used, the aim is always to provide an
understanding through the researcher’s interpretation of the data.
Data cleaning - After the data have been entered onto the computer they are checked
to detect and correct errors and inconsistent codes.
Deduction - A theoretical or mental process of reasoning by which the investigator
starts off with an idea, and develops a theory and hypothesis from it; then phenomena
are assessed in order to determine whether the theory is consistent with the
observations.
Degrees of freedom - Measure used in significance tests and other statistical
procedures, which reflects the sample size(s) of the study group(s) used in an
investigation.
Descriptive research - Descriptive research is undertaken to describe answers to
questions of who, what, where, when, and how.

Discontinuous panels - Discontinuous panels vary questions from one panel
measurement to the next.
Discourse analysis - The linguistic analysis of naturally occurring connected speech or
written discourse. It is also concerned with language use in social contexts, and in
particular with interaction or dialogue between speakers.
Dispersion - A summary of a spread of cases in a figure (measures include quartiles,
percentiles, deciles, standard deviation and the range).
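For example, the range and standard deviation of a small set of hypothetical observations can be computed in Python:

    from statistics import pstdev, stdev

    values = [2, 4, 4, 4, 5, 5, 7, 9]      # hypothetical observations
    print(max(values) - min(values))       # range = 7
    print(pstdev(values))                  # population standard deviation = 2.0
    print(stdev(values))                   # sample standard deviation, about 2.14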
Electronic test markets - Electronic test markets are those in which a panel of
consumers has agreed to carry identification cards that each consumer presents when
buying goods and services.
Empirical - Based on observation.
Ethics - Research ethics relate to the standards that should be upheld to guard
participants from harm or risk. Ethical considerations should be made at each stage of
the research design and include informed consent, voluntary participation and respect
for confidentiality.
Ethnography - A qualitative research methodology that enables a detailed description
and interpretation of a cultural or social group to be generated. Data collection is
primarily through participant observation or through one-to-one interviews. The
importance of gathering data on context is stressed, as only in this way can an
understanding of social processes and the behavior that comes from them be
developed.
Experience surveys - Experience surveys refer to gathering information from those
thought to be knowledgeable on the issues relevant to the research problem.
Experiment - An experiment is defined as manipulating an independent variable to
see how it affects a dependent variable, while also controlling the effects of additional
extraneous variables.
Experimental design - An experimental design is a procedure for devising an
experimental setting such that a change in a dependent variable may be attributed
solely to the change in an independent variable.
Experimental error - Incorrect data in an experiment that may result from a variety of
causes.
Experimental group - The group whose subjects are exposed to the change in the
independent variable (the intervention) in experimental research.

Exploratory research - Exploratory research is most commonly unstructured, informal
research that is undertaken to gain background information about the general nature
of the research problem.
External validity - External validity refers to the extent that the relationship observed
between the independent and dependent variables during the experiment is
generalizable to the “real world.”
Extraneous variables - Variables that may have some effect on a dependent variable
yet are not the independent variables under study.
Factor analysis - Multivariate method that analyses correlations between sets of
observed measurements, with the view to estimating the number of different factors
which explain these correlations.
Field experiments - Field experiments are those in which the independent variables
are manipulated and the measurements of the dependent variable are made on test
units in their natural setting.
Field notes - A collective term for records of observation, talk, interview transcripts,
or documentary sources. Typically includes a field diary, which provides a record of
chronological events and development of research as well as the researcher’s own
reactions to, feeling about, and opinions of the research process
Field research - Research which takes place in a natural setting.
Focus groups - An increasingly popular method of conducting exploratory research is
through focus groups, which are small groups of people brought together and guided
by a moderator through an unstructured, spontaneous discussion for the purpose of
gaining information relevant to the research problem.
Grounded theory - A qualitative research methodology with systematic guides for the
collection and analysis of data, that aims to generate a theory that is ‘grounded in’ or
formed from the data and is based on inductive reasoning. This contrasts with other
approaches that stop at the point of describing the participants’ experiences.
Heterogeneity (or lack of homogeneity) - Term usually employed in the context of
meta-analyses, when the results or estimates from individual studies appear to have
different magnitude. Tests of heterogeneity are available to assess the extent of this
variation.
Hypothesis - A tentative solution to a research question, expressed in the form of a
prediction about the relationship between the dependent and independent variables.
Induction - Begins with the observation and measurement of phenomena and then
develops ideas and general theories about the universe of interest.
Inference - A logical explanation or conclusion based on observations and/or facts.

Internal validity - Internal validity is concerned with the extent to which the change
in the dependent variable was actually due to the independent variable.
Interpretative - Exploration of the human experiential interpretation of any observed
phenomena. Enables researchers to gain a better understanding of the underlying
processes that may influence behavior.
Interviewing - A data collection strategy in which participants are asked to talk about
the area under consideration.
Iteration (an iterative process) - relates to the process of repeatedly returning to the
source of the data to ensure that the understandings are truly coming from the data.
In practice this means a constant process of collecting data, carrying out a preliminary
analysis, and using that to guide the next piece of data collection and continuing this
pattern until the data collection is complete.
Longitudinal studies - Longitudinal studies repeatedly measure the same sample units
of a population over a period of time.
Market tracking studies - Market tracking studies are those that measure some
variable(s) of interest, that is, market share or unit sales over time.
Measure - To compare the characteristics of something (such as mass, length, volume)
with a standard (such as grams, meters, liters).
Methods - An ordered series of steps followed to help answer a question.
Natural setting (naturalistic research) - the normal environment for the research
participants for the issues being researched.
Observation - A strategy for data collection, involving the process of watching
participants directly in the natural setting. Observation can be participative (i.e. taking
part in the activity) or non-participative (the researcher watches from the outside). It
is noticing objects or events using the five senses.
One-group, before–after design - The one-group, before–after design is achieved by
first measuring the dependent variable, then changing the independent variable, and,
finally, taking a second measurement of the dependent variable.
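A minimal Python simulation of this design, with a hypothetical treatment effect added between the two measurements:

    import random

    def measure(subjects, effect=0.0):
        # hypothetical measurement of the dependent variable for each subject
        return [random.gauss(50 + effect, 5) for _ in subjects]

    subjects = range(30)
    pretest = measure(subjects)              # O1: measure the dependent variable
    # X: the independent variable is changed (treatment applied) here
    posttest = measure(subjects, effect=4)   # O2: measure the dependent variable again
    diff = sum(posttest) / 30 - sum(pretest) / 30
    print(diff)                              # observed before-after difference (about 4 on average)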
Panels - Panels represent sample units who have agreed to answer questions at
periodic intervals.
Phenomenology - An approach that allows the meaning of having experienced the
phenomenon under investigation to be described, as opposed to a description of what
the experience was. This approach allows the reader to have a better understanding
of what it was like to have experienced a particular phenomenon.
Post-test - When a measurement of the dependent variable is taken after changing
the independent variable, the measurement is sometimes called a posttest.

Prediction - A statement made about the future outcome of an experiment based on
past experiences or observations.
Pretest - When a measurement of the dependent variable is taken prior to changing
the independent variable, the measurement is sometimes called a pretest.
Procedure - An ordered series of steps followed to help answer a question.
Projective techniques - Projective techniques, borrowed from the field of clinical
psychology, seek to explore hidden consumer motives for buying goods and services
by asking participants to project themselves into a situation and then to respond to
specific questions regarding the situation.
Qualitative data - Data that is based on observable characteristics of things or events
that can be collected using the five senses. Example: “The juice tastes sweet to me.”
Quantitative data - Data that is based on measurable characteristics of things or
events such as mass, volume, length, and quantity. Example: There is one liter of juice
in the carton.
Quasi-experimental designs - Designs that do not properly control for the effects of
extraneous variables on our dependent variable are known as quasi-experimental
designs.
Reflexivity - The open acknowledgement by the researcher of the central role they
play in the research process. A reflexive approach considers and makes explicit the
effect the researcher may have had on the research findings.
Replication - Repeated trials on more than one subject, as well as controls, in
experimental tests.
Representativeness - There are three criteria that are useful for selecting test market
cities: representativeness, degree of isolation, and ability to control distribution and
promotion.
Research design - After thoroughly considering the problem and research objectives,
researchers select a research design, which is a set of advance decisions that makes
up the master plan specifying the methods and procedures for collecting and
analyzing the needed information.
Respondent validation - Refers to seeking the participants’ views of the initial
interpretations of the data. The aim is not to ensure that the researcher and the
participants are in agreement as to the meaning of the data, but that the researcher
has the opportunity to incorporate the participants’ responses into the analysis.
Sample surveys - Sample surveys are cross-sectional studies whose samples are drawn
in such a way as to be representative of a specific population.
Science - The study of nature and the physical world using the methods of science, or
a “special method of finding things out”.

Scientific method(s) - A process of critical thinking that uses observations and
experiments to investigate testable predictions about the physical universe.
Scientific theory - A causal explanation for generalized patterns in nature that is
supported by much scientific evidence based on data collected using scientific
methods.
Scientist - A person who “does” science and uses the methods of science.
Secondary data analysis - By secondary data analysis, we refer to the process of
searching for and interpreting existing information relevant to the research
objectives.
Triangulation - Process by which the area under investigation is looked at from
different (two or more) perspectives. These can include two or more methods, sample
groups or investigators. Used to ensure that the understanding of an area is as
complete as possible or to confirm interpretation through the comparison of different
data sources.
True experimental design - A “true” experimental design is one that truly isolates the
effects of the independent variable on the dependent variable while controlling for
effects of any extraneous variables.
Variable - Something that can affect a system being examined and is therefore a factor
that may change in an experiment.
Variable, dependent - A factor that responds to changes in other variables in an
experiment; “it changed” variables.
Variable, independent - A factor that can be changed or manipulated in an
experiment by the scientist; “you change it” variables.
Variation - Slight differences among objects, organisms or events that are all of the
same basic type.

BIBLIOGRAPHY/REFERENCES
Agarwal. Y.P. (1986). Statistical Methods, Concepts, Applications and Computations.
New Delhi: Sterling Publication.
Fairfax County Department of Systems Management for Human Services (2003). Overview
of Sampling Procedures, information brochure, http://www.fhi.org/NR/rdonlyres/
etdgabwszyyk2hnkqosvl2mieeatan6rrj4l4lfuv52dlbt7knrewo6qfzosuzq7raxy63chxkz32c/Chapter6.pdf.
Alexander Jakob (2001). On the Triangulation of Quantitative and Qualitative Data.
Typological Social Research, 2(1). Citation in: Triangulation: How and Why
Triangulated Research Can Help Grow Market Share and Profitability A white
paper by Sharon Bailey-Beckett & Gayle Turner Beckett Advisors, Inc.
Allchin, D. (2001). Error Types. Perspectives on Science, 9(1), 38–58.
Aslam, F., Qayyum, M.A., Mahmud, H., Qasim, R., & Haque, I.U. (2004). Attitudes and
practices of postgraduate medical trainees towards research--a snapshot from
Faisalabad. Journal of Pakistan Medical Association, 54: 534-6.
Babbie, Earl. (1989). The Practice of Social Research. 5th edition. Belmont CA:
Wadsworth
Babbie, Earl. (1990). Survey Research Methods. Belmont, California: Wadsworth
Publishing Company, 2nd ed.
Barrow, J. (1991). Theories of Everything. Oxford University, Press.
Bausell, R.B. (1991). Advanced research methodology: an annotated guide to sources.
Metuchen, N.J.: Scarecrow Press.
Bech-Larsen, T. (1996). Danish consumers’ attitudes to the functional and
environmental characteristics of food packaging. Journal of Consumer Policy,
19, 339-63.
Bell, A. (1994) Climate of opinion: public and media discourse on the global
environment. Discourse and Society, 5, 33–64.
Bell, J. (1993). Doing your research project: a guide for first-time researchers in
education and social science (2nd ed.). Buckingham; Philadelphia: Open
University Press.
Berg, B.L. (1995). Qualitative research methods for the social sciences (2nd ed.).
Boston: Allyn and Bacon.
Berry, W.D., & Lewis-Beck, M.S. (Eds.). (1986). New tools for social scientists:
advances and applications in research methods. Beverly Hills: Sage.
Best John W., & Kahn James V. (2010). Research in Education. New Delhi: PHI Learning
Pvt.Ltd.

Beverley H. (2002). An Introduction to Qualitative Research. Produced by Trent Focus
for Research and Development in Primary Health Care. URL:
http://faculty.cbu.ca/pmacintyre/course_pages/MBA603/MBA603_files/IntroQualitativeResearch.pdf
Bracht, G. H., & Glass, G. V. (1968). The external validity of experiments. American
Education Research Journal, 5, 437-474.
Brain M. (2004). Type 1 error, Type II error, Consumers Risk and Producers Risk.
Citation in URL: http://www.brainmass.com/homework-help/statistics/alltopics/108614.
Bronowski (1978), Diederich (1967) and Whaley & Surratt (1967). Taken from The
Kansas School Naturalist, 35(4). (Retrieved on April 2012).
Brownlee, K. A. (1960). Statistical theory and methodology in science and
engineering. New York: Wiley, 1960.
Burton Neil., Brundrett, M. & Jones M. (2008). Doing Your Education Research Project.
UK: Sage Publication.
Busha, Charles & Stephen P. Harter. (1980). Research Methods in Librarianship:
techniques and Interpretations. Academic Press: New York, NY.
Campbell, D. & Stanley J. Experimental and quasi-experimental designs for research
and teaching. In Gage (Ed.), Handbook on research on teaching. Chicago: Rand
McNally & Co., 1963.
Carlos L. Lastrucci (1963). The Scientific Approach: Basic Principles of the Scientific
Method, 7.
Carter M. (2004). Basic Business Research Methods, http://www.mapnp.org/library/research/research.htm
(Accessed November 2012).
Chandra S. S. & Sharma R. K .(2007). Research in Education. New Delhi: Atlantic.
Publishers.
Chandrasekaran, B., Josephson, J. R., & Benjamins, V. R. (1999). What Are Ontologies, and
Why Do We Need Them? IEEE Intelligent Systems.
Chaudhary. C.M. (2009). Research Methodology. Jaipur: RBSA Publishers.
Cochran, W. G. (1963). Sampling Techniques (2nd ed.; 1st ed. 1953). New York: John Wiley
& Sons, Inc. Library of Congress Catalog Card Number: 63-7553.
Coffey, A., & Atkinson, P. (1996). Making sense of qualitative data. Thousand Oaks,
CA: Sage.

Cope, D. & Winward, J. (1991), Information failures in green consumerism, Consumer
Policy Review, 1(2), 83-6.
Cornfield, J. & Tukey, J. W. (1956). Average values of mean squares in factorials.
Annals of Mathematical Statistics, 27, 907-949.
Cox, D. R. (1958). Planning of experiments. New York: Wiley.
Creswell, J. W. (1998). Qualitative inquiry and research design: Choose among five
traditions. London: Sage.
Creswell, J.W. (2003). Research Design: Qualitative, Quantitative, and Mixed Methods
Approaches (2nd Edition). Thousand Oaks: Sage Publications.
Lal. D.K.(2005). Designs of Social Research. Jaipur: RAWAT Publication.
Ebel, R. L. & Frisbie, D. A. (1986). Essentials of education measurement. Englewood
Cliffs, NJ: Prentice Hall.
David Coderre (2009). Computer-Aided Fraud Prevention & Detection, pp. 224-225.
David Wigder (2007). Marketing green. Citation URL: www.climatebiz.com/bio/david-wigder.
David, F.N. (1949). Probability Theory for Statistical Methods. Cambridge University
Press. p. 28.
Des Raj (1968). Sampling Theory. New York: McGraw-Hill Book Company.
Des Raj (1972). The Design of Sample Surveys. New York: McGraw-Hill Book Company.
Donald Ary, Lucy Cheser Jacobs, and Asghar Razavieh, Introduction to Research in
Education (New York: Holt, Rinehart and Winston, Inc.), 160.
Donald Campbell (1966). Experimental and Quasi-Experimental Designs for Research.
Houghton Mifflin Company.
Donna Molloy and Kandy Woodfield (2002). Longitudinal qualitative research
approaches in evaluation studies. Longitudinal qualitative research
approaches in evaluation studies: A study carried out on behalf of the
Department for Work and Pensions: Working Paper, 7, Citation URL:
http://research.dwp.gov.uk/asd/asd5/WP7.pdf (retrieved on 5th May 2012)
Dunlap, R.E. & Van Liere, K. (1978). The new environmental paradigm: a proposed
measuring instrument and preliminary results, Journal of Environmental
Education, 9, 10-9.
Eagly, A.H. & Kulesa, P. (1997). Attitudes, attitude structure, and resistance to change,
in Experiment Resources (2003). Citation: http://www.experiment-resources.com/null-hypothesis.html

Edmund Husserl (1931). Ideas: General Introduction to Phenomenology (trans. W. R.
Boyce), Allen & Unwin, London.
Finger, M. (1994). From knowledge to action? Exploring the relationships between
environmental experiences, learning, and behavior”, Journal of Social Issues,
50, 179-97.
Fink, Arlene. How to Sample in Surveys. 6. London: Sage Publications, 1995.
Fisher, R. A. (1935). The design of experiments. (1st ed.) London: Oliver & Boyd.
Fisher, R.A. (1966). The design of experiments. 8th edition. Hafner: Edinburgh.
(For example, see http://davidmlane.com/hyperstat/A73079.html.)
Fowler, Jr. & Floyd J. (1993). Survey Research Methods. 2nd ed. 1. London: Sage
Publications.
Frey, Lawrence R., Carl H. Botan, & Gary L. Kreps. (2000). Investigating
Communication: An Introduction to Research Methods. 2nd ed. Boston: Allyn
and Bacon.
Gall, M. D., Borg, W. R., & Gall, J. P. (1996). Educational research: An introduction.
White Plains, NY: Longman.
Gay, L. R. (1987). Educational Research: Competencies for Analysis and Application,
3rd ed. Columbus, Ohio: Merrill Publishing Company, 101.
Gay, L. R. (1992). Educational research (4th Ed.). New York: Merrill.
Ghosh B.N. (1985). Scientific Method & Social Research, Sterling Publishers (P) Ltd.,
New Delhi.
Girden, E.R. (1996). Evaluating research articles from start to finish. Thousand Oaks,
CA: Sage
Glenn D. Israel (1992), Determining Sample Size, document PEOD6.
Google Trends. (2008). Green Marketing Trend. Citation URL:
www.redfusionmedia.com/.../green-marketing-trend.
Grove, S.J. & Fisk, R.P. (1996). Going green in the Service Sector. European Journal of
Marketing, 30(5), 56-67
Gubrium, J.F., & Holstein, J.A. (1997). The New language of qualitative method. New
York: Oxford University Press.
Gwowen S. & Show-Li Jan (2004). The effectiveness of randomized complete block
design, Statistica Neerlandica 58(1), 111–124.
Hammersley, M., & Atkinson, P. (1983). Ethnography, principles in practice. London,
New York: Tavistock.

Harold A. L. & Murray T. (2002). The Delphi Method: Techniques and Applications,
Citation URL: http://is.njit.edu/pubs/delphibook/delphibook.pdf.
Hayes, N. (2000) Doing Psychological Research. Gathering and analysing data.
Buckingham: Open University Press. p- 134.
Hayes, N. (2000) Doing Psychological Research. Gathering and analysing data.
Buckingham: Open University Press. P-1336.
Henry M. B. (2008). How to write a statement problem: Your proposal writing
companion. Citation in URL: http://www.professorbwisa.com/new/free_downloads/problem_statement.pdf
(retrieved on 4th June 2012).
Henry, G. T. (1990). Practical Sampling. 21. London: Sage Publications, 1990.
Heron, J., & Reason, P. (1997). A Participatory Inquiry Paradigm. Qualitative Inquiry,
3(3), 274-294.
Hines, J.M., Hungerford, H.R. & Tomera, A.N. (1987). Analysis and synthesis of
research on responsible environmental behavior: a meta-analysis”, Journal of
Environmental Education, 18, 1-8.
Hopfenbeck, W. (1993). Direccio´ny Marketing Ecolo´gicos. Ediciones Deusto, Madrid.
Hoy W. K. (2010). Quantitative Research in Education: A Primer. UK: Sage Publication.
Hult, C.A. (1996). Researching and writing in the social sciences. Boston: Allyn and
Bacon.
IGNOU (2005). MSO 002 Block 2, Research Methodologies and Methods, Unit 12-14.
Johansson, G. (2002). Success Factors for the Integration of Eco-Design in Product
Development: A Review of State of the Art. Environmental Management and
Health, 13(1), 98-108.
Jones, R.A. (1996). Research methods in the social and behavioral sciences (2nd ed.).
Sunderland, MA: Sinauer Associates.
Kahn CR.(1994). Picking a research problem - the critical decision. NEJM 330:1530-
1533.
Kalafatis, S.P., Pollard, M., East, R. & Tsogas, M.H. (1999), “Green marketing and
Ajzen’s theory of planned behavior: a cross-market examination”, Journal of
Consumer Marketing, 16(5), 441-60.
King, G., Keohane, R.O., & Verba, S. (1994). Designing social inquiry: scientific
inference in qualitative research. Princeton, NJ: Princeton University Press.

Kinnear, T.C., Taylor, J.R. & Ahmed, S.A. (1974), Ecologically concerned consumers:
who are they?”, Journal of Marketing, 38, 20-4.
Kothari C.R (1985): Research Methodology - Methods and-Techniques, Wiley Eastern
Publication. (Chapter 1-3 pp 1 to 67).
Kothari. C.R. (2010). Research Methodology: Methods and Techniques. New Delhi:
New Age International Pvt. Ltd.
Kotler, P., Adam, S., Brown, L. & Armstrong, G. (2006). Principles of Marketing, 3rd edn.
Prentice Hall, Frenchs Forest, NSW.
Russell K. Schutt, Investigating the Social World, 5th ed. Pine Forge Press.
Koul L. (2010). Methodology of Educational Research. New Delhi: Vikas Publishing
House Pvt. Ltd.
Kristin Anderson Moore. (2008). Quasi-Experimental Evaluations: Part 6 in a Series on
Practical Evaluation Methods. Citation URL:
http://www.childtrends.org/Files/Child_Trends-2008_01_16_Evaluation6.pdf
(Retrieve on May 6th 2012)
Kuhn, T. (1962). The Structure of Scientific Revolutions. University of Chicago Press.
Kumar, R. (2011). Research Methodology. New Delhi: Sage Publications India Pvt. Ltd.
L.V. Redman and A.V.H. Mory, The Romance of Research, 1923, p.10.
Labuschagne, A. (2003). Qualitative research - airy fairy or fundamental? The
Qualitative Report, 8(1).
Leinhardt,G., & Leinhardt, S.(1980). Exploratory data analysis: new tools for the
analysis of empirical data. In D.C. Berliner, Review of Research in Education, 8,
85-157, Washington, D.C.: American Educational Research Association.
Leming, M. R. Research & Sampling Designs: Techniques for Evaluating Hypotheses.
Found at: http://www.stolaf.edu/people/leming/soc371res/research.html.
Lewis-Beck Michael S; Bryman Am, Tim Futing, & Liao (Ed) (2004). The Sage
Encyclopedia of Social Sciences Research Method, (1, 2 & 3), Sage Publications,
New Delhi.
Liverpool University document (2006). http://www.accessexcellence.org/LC/TL/filson/writhypo.php.
Lohr, Sharon L. Sampling: Design and Analysis. Albany: Duxbury Press, 1.

Louis Cohen, Lawrence Manion & Keith Morrison (2012). Co-relational and criterion
groups designs; Characteristics of ex post facto research; Citation URL:
cw.routledge.com/textbooks/cohen7e/data/Chapter15.ppt
Maloney, M.P. & Ward, M.P. (1973). Ecology: let’s hear from the people. An objective
scale for the measurement of ecological attitudes and knowledge, American
Psychologist, 28(7), 583-6.
Maria R. (2002). The Scientific Method, Cooperative Extension, University of Nevada
document.
Marion, R. (2004). Defining variables and formulating hypotheses. In the whole world
art of deduction: Research skills for new scientists. Retrieved April 10, 2007,
from UTMB School of Allied Health Sciences:
http://www.sahs.utmb.edu/pellinore/intro_to_research/wad/vars_hyp.htm
(retrieved on 7th July 2012).
Denzin, N K. & Yvonna S. L (1998). Strategies of Qualitative Inquiry. Sage Publications:
London.
Marshall, C., & Rossman, G.B. (1995). Designing qualitative research (2nd ed.).
Newbury Park, CA: Sage.
Marshall, M. N. (1996). Sampling for qualitative research. Family Practice—an
international journal, 13(6), 522-525.
Mason, J. (1996). Qualitative researching. London; Thousand Oaks, CA: Sage.
Meffert, H. and Kirchgeorg, M. (1993). Marktorientiertes Umweltmanagement.
Schaeffer-Poeschel, Stuttgart.
Maxwell, J.(1996). Qualitative research design: An interactive approach. Thousand
Oaks, CA. Sage.
Mikkalsen Britha (1995). Methods for Development Work and Research. Sage
Publications, New Delhi.
Moore, D., & McCabe, D. (1993). Introduction to the practice of statistics. New York:
Freeman.
Morris C. (2008). The EQUATOR Network: promoting the transparent and accurate
reporting of research. Developmental Medical Child Neurology, 50, 723
Nachmias, C., & Nachmias, D. (1992). Research methods in the social sciences (4th
ed.). New York: St. Martin's Press.
Naomi Elliott, & Anne Lazenbatt, (2005). How to recognize a ‘quality’ grounded theory
research study? Australian Journal of Advanced Nursing, 22(3), p. 50.

Neyman, J.; and Pearson, E.S. (1967) [1928]. On the Use and Interpretation of Certain
Test Criteria for Purposes of Statistical Inference, Part I. Joint Statistical Papers.
Cambridge University Press, 1–66.
Neyman, J. & Pearson, E.S. (1967) [1933]. The testing of statistical hypotheses in
relation to probabilities a priori. Joint Statistical Papers. Cambridge University
Press. 186–202.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand
Oaks, CA: Sage Publications.
Pearson, E.S., & Neyman, J. (1967, 1930). On the Problem of Two Samples. Joint
Statistical Papers. Cambridge University Press, 100.
Peattie, K. (1995). Environmental Marketing Management. Meeting the green
challenge, Pitman Publishing, London.
Ravichandran. K. & Nakkiran.S (2009). Introduction to Research Methods in Social
Sciences. New Delhi: Abhijeet Publications.
Robert C. Meir, William T. Newell & Harold L. Dazier,(2012). Simulation in Business
and Economics. Citation in: Manvendra Narayan Mishra and Vinay Kr. (2012).
Mathematical Model to Simulate Infectious Disease. VSRD-TNTJ, 3(2), 60-68.
Robert, S. M. (2002). Strategies for Educational Inquiry: Inquiry & Scientific Method,
Education Science Weekly, 5520 - 5982
Robson, C. (1993). Real world research: a resource for social scientists and
practitioner-researchers. Oxford; Cambridge, MA: Blackwell.
Roger, S. & Victor, J. (2006). Data Collection & Analysis. Sage Publication, London.
URL: amazone.co.uk. ISBN 076195046X.
Ronán C. (2008). Sample size: A rough guide for studies measuring a percentage or
proportion. Citation URL: www.beaumontethics.ie/docs/application/samplesizecalculation.pdf
(retrieved on 3rd of June 2012).
Rosenthal, R., & Rosnow, R. L.(1975). The Volunteer Subject. John Wiley & Sons, Inc.
New York.
Salant, P. A. & Dillman, D. A. (1994). How To Conduct Your Own Survey. John Wiley &
Sons, Inc. New York.
Saravanavel .P. (2011). Research Methodology. New Delhi: Kitab Mahal Publishers.
Saunders, M., Lewis, P. and Thornhill, A. (2004). Research methods for business
students, 3rd edn. Pearson.

Schlegelmilch, B.B., Bohlen, G.M. & Diamantopoulos, A. (1996). The link between
green purchasing decisions and measures of environmental consciousness,
European Journal of Marketing, 30(5), 35-55.
Scott SD., Albrecht, L., O'Leary, K., Ball, G.D., Dryden, D.M., & Hartling L, (2001). A
protocol for a systematic review of knowledge translation strategies in the
allied health professions. Implement Science, 6, 58.
Shastri. V.K.(2008). Research Methodology in Education. New Delhi: Authors Press.
Shields, P. & Hassan T. (2006). Intermediate Theory: The Missing Link in Successful
Student Scholarship. Journal of Public Affairs Education. 12(3). 313-334.
http://ecommons.txstate.edu/polsfacp/39/
Singh, Y.K., Sharma, T.K. & Upadhya B. (2012). Educational Technology: Techniques of
Tests and Evaluation. New Delhi: APH Publishing Corporation.
Smith, S.M., Haugtvedt, C.P. & Petty, R.E. (1994), Attitudes and recycling: does the
measurement of affect enhance behavioral prediction?, Psychology and
Marketing, 11 (4), 359-74.
Spiegel, M. R. (1999). Schaum's Outline of Theory and Problems of Statistics (3rd ed.).
McGraw-Hill. ISBN 0070602816.
Spiegelberg, H. (1983). The Phenomenological Movement, 3rd edn. (1982).
Sproull, N.L. (1995). Handbook of research methods: a guide for practitioners and
students in the social sciences (2nd ed.). Metuchen, NJ: Scarecrow Press.
Stern, P.C., & Kalof, L. (1996). Evaluating social science research. New York: Oxford
University Press.
Stone, G., Barnes, J.H. & Montgomery, C. (1995). Ecoscale: a scale for the
measurement of environmentally responsible consumers”, Psychology and
Marketing, 12(7), 595-612.
Sukhatme, P. V., & Sukhatme, B. V. (1970). Sampling Theory Of Surveys With
Applications. ISBN: 0-8138-1370-0. Indian Society of Agricultural Statistics,
New Delhi, India and the Iowa State University Press, Ames, Iowa, U.S.A.
Swenson, M.R. and Wells, W.D. (1997). Useful correlates of pro-environmental
behavior, in Goldberg, M.E., Fishbein, M. and Middlestadt, S.E. (Eds), Social
Marketing, Theoretical and Practical Perspectives, Lawrence Erlbaum,
Mahwah, NJ, 91-109.
Taro Yamane (1967). Elementary Sampling Theory. Englewood Cliffs, N.J.: Prentice-Hall, Inc.
Tesch, R. (1990). Qualitative research: analysis types and software tools. New York:
Falmer Press.

The Advanced Learner’s Dictionary of Current English, Oxford, (1952), 1069.
The Encyclopaedia of Social Science (1930). MacMillan, 9.
Thomas, K. (1962). The Structure of Scientific Revolutions traces an interesting history
and analysis of the enterprise of research. Citation in: Joseph W. Dellapenna
(2000). Science, Technology, and International Law. Villanova University
School of Law, Public Law and Legal Theory, Research Paper No. 2000-6
Tjärnemo, H. (2001). Eco-marketing & Eco-management – Exploring the eco-
orientation – performance link in food retailing, Lund Business Press, Institute
of Economic Research, Sweden.
Trochim, M. W. (2012). Citation in: URL: trochim.human.cornell.edu/kb/sampnon.htm,
Bill Trochim’s Center for Social Research Methods, Cornell University.
Vijayalakshmi, G. & Sivapragasam, C. (2009). Research Methods: Tips and Techniques.
Chennai: MJP Publishers.
Warner, C. (undated). How to write a case study.
http://www.cpcug.org/user/houser/advancedwebdesign/Tips_on_Writing_the_Case_Study.html
(Accessed November 2004).
Weimer, J. (ed.) (1995). Research Techniques in Human Engineering. Englewood
Cliffs, NJ: Prentice Hall ISBN 0130970727.
Weller, S., Romney, A. (1988). Systematic Data Collection (Qualitative Research
Methods Series 10). Thousand Oaks, CA: SAGE Publications, ISBN 0803930747.
Wellington J. (2000). Educational Research: Contemporary Issues and Practical
Approaches. London: Continuum International publishing group.
Wholey, J., Hatry, H., & Newcomer, K. (eds). (2004). Handbook of practical program
evaluation. San Francisco, CA. Jossey-Bass.
Wiersma, W. & Jurs, S. G. (2009). Research Methods in Education: An Introduction.
New Delhi: Pearson.
Wilson, E. Bright. (1952). An Introduction to Scientific Research (McGraw-Hill).
Winer, B. J. (1962). Statistical principles in experimental design. New York: McGraw-
Hill.
Yamane, Taro. 1967. Statistics: An Introductory Analysis, 2nd Ed, New York: Harper
and Row.
Websites Referred
http://portal.acs.org/portal/fileFetch/C/CTP_005606/pdf/CTP_005606.pdf
http://statistics.berkeley.edu/~stark/SticiGui/Text/gloss.htm#null_hypothesis

http://writing.colostate.edu/guides/research/survey/com4c2a.cfm
http://www.adelaide.edu.au/clpd/all/learning_guides/learningGuide_writingAResearchReport.pdf
http://www.adfoster.com/primary_secondary_data_what_s_the_difference
http://www.allbusiness.com/marketing/market-research/1310-1.html
http://www.businessdictionary.com/definition/null-hypothesis.html
http://www.ehow.com/about_5092840_basics-statistical-analysis.html
http://www.experiment-resources.com/type-I-error.html#ixzz0sO1SPzm3
http://www.idrc.ca/en/ev-56466-201-1-DO_TOPIC.html
http://www.idrc.ca/en/ev-56623-201-1-DO_TOPIC.html
http://www.learnhigher.ac.uk/analysethis/main/quantitative.html
http://www.nlm.nih.gov/nichsr/hta101/ta101014.html
http://www.nmmu.ac.za/robert/reshypoth.htm
http://www.rajputbrotherhood.com/knowledge-hub/statistics/diagrammatic-presentation-of-data.html
http://www.rajputbrotherhood.com/knowledge-hub/statistics/tabulation.html
http://www2.uiah.fi/projects/metodi Exploratory study (diagram)
http://www.is.cityu.edu.hk/staff/isrobert/phd/ch3.pdf (Research philosophy)
http://www.public.asu.edu/~kdooley/papers/simchapter.PDF
http://www.air.org/files/eval.pdf
http://jayaram.com.np/wp-content/uploads/2009/12/reserch.pdf
http://www.socsci.uci.edu/ssarc/sshonors/webdocs/Qual%20and%20Quant.pdf
http://pharmaquest.weebly.com/uploads/9/9/4/2/9942916/introduction_to_research.pdf
http://www.ekmekci.com/Publicationdocs/RM/ResMet1/5RESEARCHDesign.pdff
http://cogprints.org/2643/1/EOLSSrm.pdf
http://www.drtang.org
en.wiktionary.org/wiki/case_study
http://www.socialresearchmethods.net/tutorial/Cho2/panel.html
wordnetweb.princeton.edu/perl/webwn
www.nyu.edu/classes/bkg/methods/005847ch1.pdf

http://www.blurtit.com/q122012.html
http://data.fen-om.com/int460/research-experimental.pdf
http://www.emathzone.com/tutorials/basic-statistics/basic-principles-of-experimental-designs.html
http://medical-dictionary.thefreedictionary.com/experimental+design
http://psych.csufresno.edu/psy144/Content/Design/Experimental/factorial.html
http://files.ecpe.org/Research%20Methodology%20(261703)/handouts/ch10%20experimental%20designs.pdf
http://www.alseaf.com/wp-content/uploads/2011/04/crd.pdf
http://www.stat.wisc.edu/courses/st572-larget/Spring2007/handouts17-4.pdf
Investigative Techniques Glossary,
http://www.pbs.org/opb/historydetectives/techniques/glossary.html (historical research)
Primary and Secondary sources, http://ipr.ues.gseis.ucla.edu/info/definition.html
(historical research)
http://web.ncifcrf.gov/rtp/lasp/intra/acuc/fred/Determination_of_Sample.pdf
http://www.bioterrorism.slu.edu/bt/products/bio_epi/scripts/mod13.pdf
http://www.dorak.info
http://publichealth.massey.ac.nz/publications/introepi_teaching/asite2_chapter%206.pdf
http://explorations.sva.psu.edu/lapland/LitRev/prob1.html#anchor2210644

Blurb
Numerous books on the market cover the academic foundations of research and research
methodology. This book, however, is a combined module that gives beginners in research
a conceptual, theoretical, and practical base from which to gain an appropriate
understanding of how research methods are applied. The applications of research
methodology lie not only in the science disciplines but also in many social and
economic subjects whose roots lie in the exploration and understanding of groups and
individuals. Unlike other books, this one provides integrated knowledge of three kinds
of research methodology: qualitative, quantitative, and mixed methods. In order to
establish, generalize, and disseminate any findings, one first needs to undertake field
research and gather primary data, and then discuss the output against past observations
and research results. In this book, the authors have tried to orient beginners on how
to understand research, how to become aware of the different modes of research method,
what the different tools and techniques are, how to write a research report, and how to
develop a good synopsis. This stimulating and insightful book on research is ideal for
managers, consultants, trainers, research scholars, and a new generation of students
who want to grasp the fundamentals of research method.
