
Subject Name – Business Research Methods

Topic Name – Data & Measurement


Student Name – Kashish Shaikh

Class – First Year MBA


Division – C
Academic Year – 2022-23
Roll No - 139
Data collection
Data collection is a systematic process of gathering observations or measurements. Whether you are performing
research for business, governmental or academic purposes, data collection allows you to gain first-hand knowledge
and original insights into your research problem.

While methods and aims may differ between fields, the overall process of data collection remains largely the same.
Before you begin collecting data, you need to consider:

• The aim of the research
• The type of data that you will collect
• The methods and procedures you will use to collect, store, and process the data

To collect high-quality data that is relevant to your purposes, follow these four steps.
• Step 1 – Define the aim of your research
• Step 2 – Choose your data collection method
• Step 3 – Plan your data collection procedures
• Step 4 – Collect the data
Step 1: Define the aim of your research
Before you start the process of data collection, you need to identify exactly what you want to achieve. You can start
by writing a problem statement: what is the practical or scientific issue that you want to address and why does it
matter?
Next, formulate one or more research questions that precisely define what you want to find out. Depending on your
research questions, you might need to collect quantitative or qualitative data:
• Quantitative data is expressed in numbers and graphs and is analyzed through statistical methods.
• Qualitative data is expressed in words and analyzed through interpretations and categorizations.
If your aim is to test a hypothesis, measure something precisely, or gain large-scale statistical insights, collect
quantitative data. If your aim is to explore ideas, understand experiences, or gain detailed insights into a specific
context, collect qualitative data. If you have several aims, you can use a mixed methods approach that collects both
types of data.
Examples of quantitative and qualitative research aims
You are researching employee perceptions of their direct managers in a large organization.
• Your first aim is to assess whether there are significant differences in perceptions of managers across different
departments and office locations.
• Your second aim is to gather meaningful feedback from employees to explore new ideas for how managers can
improve.
You decide to use a mixed-methods approach to collect both quantitative and qualitative data.
Step 2: Choose your data collection method
Based on the data you want to collect, decide which method is best suited for your research.
• Experimental research is primarily a quantitative method.
• Interviews, focus groups, and ethnographies are qualitative methods.
• Surveys, observations, archival research and secondary data collection can be quantitative or qualitative
methods.
Carefully consider what method you will use to gather data that helps you directly answer your research questions.

Step 3: Plan your data collection procedures


When you know which method(s) you are using, you need to plan exactly how you will implement them. What
procedures will you follow to make accurate observations or measurements of the variables you are interested in?
For instance, if you’re conducting surveys or interviews, decide what form the questions will take; if you’re
conducting an experiment, make decisions about your experimental design (e.g., determine inclusion and
exclusion criteria).
Operationalization
Sometimes your variables can be measured directly: for example, you can collect data on the average age of
employees simply by asking for dates of birth. However, often you’ll be interested in collecting data on more
abstract concepts or variables that can’t be directly observed.
Operationalization means turning abstract conceptual ideas into measurable observations. When planning how
you will collect data, you need to translate the conceptual definition of what you want to study into the operational
definition of what you will actually measure.
Example of operationalization
You have decided to use surveys to collect quantitative data. The concept you want to measure is the leadership of
managers. You operationalize this concept in two ways:
• You ask managers to rate their own leadership skills on 5-point scales assessing the ability to delegate,
decisiveness and dependability.
• You ask their direct employees to provide anonymous feedback on the managers regarding the same topics.
Using multiple ratings of a single concept can help you cross-check your data and assess the test validity of your
measures.
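As a rough illustration (a Python sketch with entirely hypothetical rating data), the cross-check could correlate each manager's self-ratings with the average rating given by their direct employees:

```python
import numpy as np

# Hypothetical data: one value per manager.
# Self-ratings and averaged employee ratings on the same 5-point items.
self_ratings = np.array([4.2, 3.8, 4.5, 2.9, 3.5, 4.0])
employee_ratings = np.array([3.9, 3.5, 4.4, 2.5, 3.8, 3.7])

# A reasonably high positive correlation between the two sources
# supports the validity of the leadership measure.
r = np.corrcoef(self_ratings, employee_ratings)[0, 1]
print(f"Correlation between self- and employee ratings: {r:.2f}")
```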
Sampling
You may need to develop a sampling plan to obtain data systematically. This involves defining a population, the
group you want to draw conclusions about, and a sample, the group you will actually collect data from.
Your sampling method will determine how you recruit participants or obtain measurements for your study. To
decide on a sampling method you will need to consider factors like the required sample size, accessibility of the
sample, and timeframe of the data collection.
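A minimal sketch of simple random sampling, assuming a hypothetical list of employee IDs as the sampling frame:

```python
import random

# Hypothetical sampling frame: the population of 1,200 employee IDs.
population = [f"EMP{i:04d}" for i in range(1, 1201)]

random.seed(42)  # fixed seed so the draw can be reproduced
sample = random.sample(population, k=300)  # simple random sample of 300

print(sample[:5])  # inspect the first few sampled IDs
```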

Standardizing procedures
If multiple researchers are involved, write a detailed manual to standardize data collection procedures in your
study.
This means laying out specific step-by-step instructions so that everyone in your research team collects data in a
consistent way – for example, by conducting experiments under the same conditions and using objective criteria to
record and categorize observations. This helps you avoid common research biases like omitted variable
bias or information bias.
This helps ensure the reliability of your data, and you can also use it to replicate the study in the future.

Creating a data management plan


Before beginning data collection, you should also decide how you will organize and store your data.
• If you are collecting data from people, you will likely need to anonymize and safeguard the data to prevent leaks
of sensitive information (e.g. names or identity numbers).
• If you are collecting data via interviews or pencil-and-paper formats, you will need to perform transcriptions or
data entry in systematic ways to minimize distortion.
• You can prevent loss of data by having an organization system that is routinely backed up.
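As one illustration of safeguarding identities, a minimal Python sketch that pseudonymizes respondent names before storage (the salt value and record fields are hypothetical):

```python
import hashlib

SALT = "project-specific-secret"  # hypothetical salt; keep it separate from the data

def pseudonymize(name: str) -> str:
    """Replace a respondent's name with a stable, non-reversible code."""
    digest = hashlib.sha256((SALT + name).encode("utf-8")).hexdigest()
    return digest[:10]  # short code stored in place of the name

record = {"name": "A. Respondent", "rating": 4}
record["name"] = pseudonymize(record["name"])
print(record)  # the stored record no longer contains the raw name
```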
Step 4: Collect the data
Finally, you can implement your chosen methods to measure or observe the variables you are interested in.
Examples of collecting qualitative and quantitative data
• To collect data about perceptions of managers, you administer a survey with closed- and open-ended questions
to a sample of 300 company employees across different departments and locations.
• The closed-ended questions ask participants to rate their manager’s leadership skills on scales from 1–5. The data produced is numerical and can be statistically analyzed for averages and patterns (a small sketch follows this list).
• The open-ended questions ask participants for examples of what the manager is doing well now and what they
can do better in the future. The data produced is qualitative and can be categorized through content analysis for
further insights.
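A minimal sketch of the quantitative part of this analysis, using hypothetical department and item names:

```python
import pandas as pd

# Hypothetical closed-ended responses: one row per employee, 1-5 ratings.
df = pd.DataFrame({
    "department":   ["Sales", "Sales", "IT", "IT", "HR", "HR"],
    "delegation":   [4, 3, 5, 4, 2, 3],
    "decisiveness": [5, 4, 4, 3, 2, 4],
})

# Average rating per item, overall and broken down by department.
print(df[["delegation", "decisiveness"]].mean())
print(df.groupby("department")[["delegation", "decisiveness"]].mean())
```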
To ensure that high quality data is recorded in a systematic way, here are some best practices:
• Record all relevant information as and when you obtain data. For example, note down whether or how lab
equipment is recalibrated during an experimental study.
• Double-check manual data entry for errors.
• If you collect quantitative data, you can assess the reliability and validity to get an indication of your data
quality.
Measurement
The process of assigning numbers or labels to different objects under study to represent them
quantitatively or qualitatively is called measurement. It can be understood as a means of denoting the
amount of a particular attribute that a particular object possesses. Certain rules define the
process of measurement; for example, the number 1 might be assigned to people from South India and
the number 2 to people from North India. Measurement is applied to the attributes of the units under
study, not the units themselves: the height, weight, age, or other such attributes of a person are
measured, not the person.
This represents a limited use of the term measurement. In statistics, the term measurement is used
more broadly and is more appropriately termed scales of measurement. Scales of measurement refer
to ways in which variables/numbers are defined and categorized. Each scale of measurement has
certain properties, which in turn determines the appropriateness for use of certain statistical analyses.
Concepts
A researcher has to know what to measure before knowing how to measure something. The problem
definition process should suggest the concepts that must be measured. A concept can be thought of as
a generalized idea that represents something of meaning. Concepts like age, sex, education, and
number of siblings are relatively concrete properties. They present few problems in either definition
or measurement. Other concepts are more abstract. Concepts such as loyalty, personality, trust,
customer satisfaction, and so on are more difficult to both define and measure. For example, loyalty
has been measured as a combination of customer share and commitment. Thus, we can see that
loyalty consists of two components, the first is behavioral and the second is attitudinal.
Variables
Researchers use variance in concepts to make diagnoses. From our previous understanding, we know that
variables capture different concept values. For practical purposes, once a research project is underway, there is
little difference between a concept and a variable. Consider the following hypothesis:
H1: Experience is positively related to job performance.
The hypothesis implies a relationship between two variables, experience and job performance. The variables
capture variance in the experience and performance concepts. One employee may have 15 years of experience
and be a top performer. A second may have 10 years’ experience and be a good performer. The scale used to
measure experience is quite straightforward in this case and would involve simply providing the number of
years an employee has been with the company. Job performance, on the other hand, can be quite complex.
Constructs
Sometimes, a single variable cannot capture a concept alone. Using multiple variables to measure one concept
can often provide a more complete account of some concept than could any single variable. Even in the
physical sciences, multiple measurements are often used to make sure an accurate representation is obtained.
In social science, many concepts are measured with multiple measurements.
A construct is a term used for concepts that are measured with multiple variables. For instance, when a
business researcher wishes to measure the customer orientation of a salesperson, several variables like these
may be used, each captured on a 1–5 scale (a short scoring sketch follows the list):
•I offer the product that is best suited to a customer’s problem.
•A good employee has to have the customer’s best interests in mind.
•I try to find out what kind of products will be most helpful to a customer.
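A minimal sketch of scoring such a construct, assuming hypothetical responses to the three items above:

```python
import numpy as np

# Hypothetical responses: each row is a salesperson,
# each column one of the three customer-orientation items (1-5).
items = np.array([
    [5, 4, 5],
    [3, 3, 4],
    [4, 5, 4],
])

# A common scoring approach: average a construct's items per respondent.
construct_score = items.mean(axis=1)
print(construct_score)  # one customer-orientation score per salesperson
```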
Levels of scales of measurement
Nominal Scale: Nominal (also called categorical) variables can be placed into categories. They don’t have numeric
value and so cannot be added, subtracted, divided or multiplied. They also have no order; if they appear to have
an order, then you probably have ordinal variables instead. For example: color of the eyes (Black, Green, Aqua, Hazel, etc.),
gender (Male/Female), True/False.
Ordinal Scale: The ordinal scale contains things that you can place in order. For example, hottest to coldest,
lightest to heaviest, richest to poorest. Basically, if you can rank data by 1st, 2nd, 3rd place (and so on), then you
have data that’s on an ordinal scale.
Ordinal scales tell us relative order, but give us no information regarding the differences between the categories. For
example, if Ram takes first place in a race and Vinod takes second, we do not know by how many seconds the
competition was decided.
Another example is a restaurant rating survey that asks “How satisfied are you with the dining experience?” with
response options from 0 to 10, 0 being extremely dissatisfied and 10 being extremely satisfied.
Interval Scale: An interval scale has ordered numbers with meaningful divisions; the magnitude between
consecutive intervals is equal. Interval scales do not have a true zero: in Celsius, 0 degrees does not mean
the absence of heat. For example, on a Fahrenheit/Celsius thermometer, 90° is hotter than 45°,
and the difference between 10° and 30° is the same as the difference between 60° and 80°.
Measurement of sea level is another example of an interval scale. With each of these scales there is a direct,
measurable quantity with equality of units. In addition, zero does not represent the absolute lowest value.
Rather, it is a point on the scale with numbers both above and below it (for example, -10 degrees Fahrenheit).
Ratio Scale: The ratio scale of measurement is similar to the interval scale in that it also represents quantity and has equality of units, with one major difference: zero is meaningful (no numbers exist below zero). The true zero allows us to know how many times greater one case is than another. Ratio scales have all of the characteristics of the nominal, ordinal and interval scales. The simplest example of a ratio scale is the measurement of length. Having zero length or zero money means that there is no length and no money, but zero temperature is not an absolute zero. The following table summarizes the properties of the four levels of measurement scales.

Scale      Categories   Order   Equal intervals   True zero
Nominal    Yes          No      No                No
Ordinal    Yes          Yes     No                No
Interval   Yes          Yes     Yes               No
Ratio      Yes          Yes     Yes               Yes
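As an illustrative sketch with hypothetical values, the level of measurement determines which summary operations are meaningful:

```python
import statistics

eye_color = ["Black", "Green", "Hazel", "Green"]  # nominal: categories only
satisfaction = [2, 5, 3, 4, 4]                    # ordinal: order is meaningful
temperature_c = [10, 30, 60, 80]                  # interval: differences are meaningful
length_cm = [0.0, 5.0, 10.0, 20.0]                # ratio: true zero, ratios are meaningful

print(statistics.mode(eye_color))           # mode is the only meaningful average for nominal data
print(statistics.median(satisfaction))      # median respects order without assuming equal intervals
print(temperature_c[1] - temperature_c[0])  # interval: differences are valid, but 30° is not "3x" 10°
print(length_cm[3] / length_cm[2])          # ratio: 20 cm is meaningfully twice 10 cm
```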
Criteria for good measurement
The three major criteria for evaluating measurements are reliability, validity, and sensitivity.
Reliability
Reliability is an indicator of a measure’s internal consistency. Consistency is the key to understanding
reliability. A measure is reliable when different attempts at measuring something converge on the same
result. For example, consider an exam that has three parts: 25 multiple-choice questions, 2 essay
questions, and a short case. If a student gets 20 of the 25 (80 percent) multiple-choice questions correct,
we would expect she would also score about 80 percent on the essay and case portions of the exam.
Further, if a professor’s research tests are reliable, a student should tend toward consistent scores on all
tests. In other words, a student who makes an 80 percent on the first test should make scores close to 80
percent on all subsequent tests. Another way to look at this is that the student who makes the best score
on one test will exhibit scores close to the best score in the class on the other tests. If it is difficult to
predict what students would make on a test by examining their previous test scores, the tests probably
lack reliability or the students are not preparing the same way each time.
So, the concept of reliability revolves around consistency. Think of a scale to measure weight. You would
expect this scale to be consistent from one time to the next. If you stepped on the scale and it read 140
pounds, then got off and back on, you would expect it to again read 140. If it read 110 the second time,
while you may be happier, the scale would not be reliable.
Internal Consistency
Internal consistency represents a measure’s homogeneity. An attempt to measure trustworthiness, for example, may
require asking several similar but not identical questions. The set of items that
make up a measure are referred to as a battery of scale items. Internal consistency of a multiple-item
measure can be measured by correlating scores on subsets of items making up a scale.
Split-half method
This method of checking reliability is performed by taking half of the items from a scale (for example, odd-
numbered items) and checking them against the results from the other half (even-numbered items). The two
scale halves should produce similar scores and correlate highly. The problem with the split-half method is
determining the two halves. Should it be even- and odd-numbered questions? Questions 1–3 compared to 4–6?
Coefficient alpha, discussed next, provides a solution to this problem.
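A minimal sketch of the split-half check, assuming a hypothetical six-item scale scored 1–5:

```python
import numpy as np

# Hypothetical responses: rows = respondents, columns = six scale items (1-5).
scores = np.array([
    [4, 5, 4, 5, 4, 4],
    [2, 3, 2, 2, 3, 2],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 4, 3, 3, 3],
])

odd_half = scores[:, 0::2].sum(axis=1)   # items 1, 3, 5
even_half = scores[:, 1::2].sum(axis=1)  # items 2, 4, 6

# The two halves should correlate highly on an internally consistent scale.
r = np.corrcoef(odd_half, even_half)[0, 1]
print(f"Split-half correlation: {r:.2f}")
```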
Coefficient alpha
It is the most commonly applied estimate of a multiple-item scale’s reliability. Coefficient alpha represents
internal consistency by computing the average of all possible split-half reliabilities for a multiple-item scale.
The coefficient demonstrates whether or not the different items converge. Although coefficient alpha does not
address validity, many researchers use alpha as the sole indicator of a scale’s quality. Coefficient alpha ranges
in value from 0, meaning no consistency, to 1, meaning complete consistency (all items yield corresponding
values). Generally speaking, scales with a coefficient alpha between 0.80 and 0.95 are considered to have very
good reliability. Scales with a coefficient alpha between 0.70 and 0.80 are considered to have good reliability,
and an alpha value between 0.60 and 0.70 indicates fair reliability. When coefficient alpha is below 0.60, the
scale is considered to have poor reliability. Most statistical software packages, such as SPSS, can easily
compute coefficient alpha.
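Coefficient alpha can also be computed directly from its standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with hypothetical data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 responses: rows = respondents, columns = items.
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
])
print(f"Coefficient alpha: {cronbach_alpha(scores):.2f}")  # ~0.94 here
```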
Test-Retest Reliability
The test-retest method of determining reliability involves administering the same scale or measure to the same
respondents at two separate times to test for stability. If the measure is stable over time, the test, administered under the
same conditions each time, should obtain similar results. Test-retest reliability represents a measure’s repeatability.
Suppose a researcher at one time attempts to measure buying intentions and finds that 12 percent of the population is
willing to purchase a product. If the study is repeated a few weeks later under similar conditions, and the researcher again
finds that 12 percent of the population is willing to purchase the product, the measure appears to be reliable. High
stability correlation or consistency between two measures at time 1 and time 2 indicates high reliability.
Measures of test-retest reliability pose two problems that are common to all longitudinal studies. First, the initial
measure may sensitize the respondents to their participation in a research project and subsequently influence the results
of the second measure. Second, if the time between measures is long, there may be an attitude change or other
maturation of the subjects. Thus, a reliable measure can indicate a low or a moderate correlation between the first and
second administration, but this low correlation may be due to an attitude change over time rather than to a lack of
reliability in the measure itself.
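A minimal sketch of the stability check, with hypothetical buying-intention scores for the same respondents at two points in time:

```python
import numpy as np

# Hypothetical scores for the same six respondents, measured at time 1
# and again a few weeks later at time 2 under similar conditions.
time1 = np.array([7, 3, 8, 5, 6, 4])
time2 = np.array([6, 3, 8, 4, 7, 4])

# A high correlation between the two administrations indicates stability.
stability = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest (stability) correlation: {stability:.2f}")
```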
Validity
The ability of a scale or a measuring instrument to measure what it is intended to measure is termed the validity of the
measurement. Reliability represents how consistent a measure is, in that the different attempts at measuring the same
thing converge on the same point. Accuracy deals more with how a measure assesses the intended concept. Validity is the
accuracy of a measure or the extent to which a score truthfully represents a concept. In other words, are we accurately
measuring what we think we are measuring? The four basic approaches to establishing validity are face validity, content
validity, criterion validity, and construct validity.
Face Validity
Face validity refers to the subjective agreement among experts that a scale logically reflects the concept being measured. Do the
test items look like they make sense given a concept’s definition? When an inspection of the test items convinces experts that the
items match the definition, the scale is said to have face validity.
Clear questions like “How many children do you have?” generally are agreed to have face validity. But it becomes more
challenging to assess face validity in regard to more complicated business phenomena. For instance, consider the concept of
customer loyalty. Does the statement “I prefer to purchase my groceries at ABC Foods” appear to capture loyalty? How about “I
am very satisfied with my purchases from ABC Fine Foods”? What about “ABC Fine Foods offers very good value”? While the first
statement appears to capture loyalty, it can be argued the second question is not loyalty but rather satisfaction. What does the
third statement reflect? Do you think it looks like a loyalty statement?
In scientific studies, face validity might be considered a first hurdle. In comparison to other forms of validity, face validity is
relatively easy to assess. However, researchers are generally not satisfied with simply establishing face validity. Because of the
elusive nature of attitudes and other business phenomena, additional forms of validity are sought.
Content Validity
It refers to the degree that a measure covers the domain of interest. Do the items capture the entire scope of the
concept we are measuring, without going beyond it? If an exam is supposed to cover chapters 1–5, it is fair for students to expect that questions should
come from all five chapters, rather than just one or two. It is also fair to assume that the questions will not come from chapter 6.
Thus, when students complain about the material on an exam, they are often claiming it lacks content validity. Similarly, an
evaluation of an employee’s job performance should cover all the important aspects of the job, but not something outside of the
employee’s specified duties.
Criterion Validity
Criterion validity addresses the question, “How well does my measure work in practice?” Because of this, criterion validity is
sometimes referred to as pragmatic validity. In other words, is my measure practical? Criterion validity may be classified as
either concurrent validity or predictive validity, depending on the time sequence in which the new measurement scale and the criterion measure are correlated: concurrent validity is established when the measure and the criterion are taken at the same time, while predictive validity is established when the measure predicts a future outcome.
Construct Validity
Construct validity exists when a measure reliably measures and truthfully represents a unique concept. Construct validity consists
of several components, including face validity, content validity, criterion validity, convergent validity, and discriminant
validity.
We have discussed face validity, content validity, and criterion validity. The remaining two, convergent validity and
discriminant validity, represent how related or distinct a measure is. Convergent validity requires that concepts that should be related are indeed related. For example, in business we
believe customer satisfaction and customer loyalty are related. If we have measures of both, we would expect them to be positively
correlated. If we found no significant correlation between our measures of satisfaction and our measures of loyalty, it would bring
into question the convergent validity of these measures. On the other hand, our customer satisfaction measure should not
correlate too highly with the loyalty measure if the two concepts are truly different. If the correlation is too high, we have to ask if
we are measuring two different things, or if satisfaction and loyalty are actually one concept. As a rough rule of thumb, when two
scales are correlated above 0.75, discriminant validity may be questioned. So, we expect related concepts to display a significant
correlation (convergent validity), but not to be so highly correlated that they are not independent concepts (discriminant
validity).
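A minimal sketch of checking convergent and discriminant validity, with hypothetical construct scores for the same customers:

```python
import numpy as np

# Hypothetical construct scores (e.g., averaged multi-item scales).
satisfaction = np.array([4.0, 2.5, 4.5, 3.0, 3.5, 4.2])
loyalty = np.array([3.8, 2.0, 4.4, 2.8, 3.9, 4.0])

r = np.corrcoef(satisfaction, loyalty)[0, 1]
print(f"Correlation between satisfaction and loyalty: {r:.2f}")

# Convergent validity: related concepts should correlate significantly.
# Discriminant validity: but not so highly that they are one concept.
if r > 0.75:
    print("Discriminant validity may be questioned (rule of thumb: r > 0.75).")
```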
Attitude Measurement
For social scientists, an attitude is a stable disposition to respond consistently to specific aspects of the world, including actions,
people, or objects. One way to understand an attitude is to break it down into its components. Consider this brief statement:
“Sally likes shopping at Wal-Mart. She believes the store is clean, conveniently located, and has low prices. She intends to shop
there every Thursday.” This simple example demonstrates three components of attitude: affective, cognitive, and behavioral. The
affective component refers to an individual’s general feelings or emotions toward an object. A person’s attitudinal feelings are
driven directly by his/her beliefs or cognitions. This cognitive component represents an individual’s knowledge about attributes
and their consequences. The behavioral component of an attitude reflects a predisposition to action by reflecting an individual’s
intentions.
Thank You
