
Journal of Business Research 67 (2014) 403–406


Construct measurement in management research: The importance of match between levels of theory and measurement

Bo Bernhard Nielsen
Copenhagen Business School, Department of Strategic Management and Globalization, Kilen 14, 2nd floor, 2000 Frederiksberg, Denmark
University of Technology Sydney, School of Marketing, Australia

Article history: Received 1 October 2012; received in revised form 1 December 2012; accepted 1 December 2012; available online 5 January 2013.

Keywords: Construct measurement; Management research; JV relatedness; Multilevel research; SIC code; Alliance termination

Abstract
Far too often do management scholars resort to crude and often inappropriate measures of fundamental constructs in their research, an approach which calls into question the interpretation and validity of their findings. Scholars often legitimize poor choices in measurement by pointing to a lack of availability of better measures and/or by arguing that they are simply following existing research in adopting previously published measures, without critically assessing the validity, appropriateness, and applicability of such measures for the focal study. Motivated by a recent dialog in the Journal of Business Research, this research note raises important questions about the use of proxies in management research and argues for greater care in operationalizing constructs, with particular attention to matching levels of theory and measurement.

© 2012 Elsevier Inc. All rights reserved.
http://dx.doi.org/10.1016/j.jbusres.2012.12.020

1. Introduction

Scholars seeking to investigate important research questions and develop new theories often face challenges with respect to the measurement of the constructs that are pivotal to their frameworks. In seeking to test novel theoretical frameworks empirically, researchers employ a variety of data collection approaches that may compromise construct validity. One frequently used approach involves approximation, which enables the researcher to investigate important research questions at the expense of precise measures. It is often argued that approximation is reasonable given the data constraints and that a lack of precise measures should not prevent us from investigating important research questions (e.g., Cui & Kumar, 2012b). In this commentary, I argue that data limitations should prevent us from analyzing research questions (testing theory) for which we do not have appropriate data or measures; better to accept this limitation than to run the risk of providing evidence of key relationships that may not hold up to closer empirical scrutiny.

Cui and Kumar (2012a) follow previous research and use a 2-digit SIC code as a proxy for JV relatedness in order to investigate to what extent alliance termination differs among related and unrelated joint ventures. Essentially, theirs is a contingency approach which specifies a set of factors at different levels that may influence the termination rate of JVs. Chief among these variables is JV relatedness, and they advance the argument that the impact of various factors (at multiple theoretical levels) on termination varies between related and unrelated JVs. The underlying theoretical logic is that the evolution (and thus the probability of termination) of related and unrelated JVs is likely to differ because of important differences in the motives of the firms to form such JVs.

As a commentary (Nielsen, 2012) to Cui and Kumar (2012a) notes, however, industry-level SIC codes say very little about the underlying motives behind alliance formation, nor do they approximate firm-level resources and capabilities well. Yet, the authors argue in their rejoinder (Cui & Kumar, 2012b) to the commentary that SIC codes provide a good proxy for firm resources and capabilities because businesses in the same industry tend to have similar assets and operations, as well as similar intangible resources. However, using a 2-digit industry-level SIC code to proxy firm intangible resources and capabilities is highly problematic, as such industry overlap says absolutely nothing about firm-level operations, efficiencies, procedures, knowledge, or experience, let alone assets or the motivation behind JV formation. Moreover, within industries there is likely to be variance between firms in terms of resources, capabilities, and strategic motives for JV formation. This example, which is by no means unique in management research (see Boyd, Gove, & Hitt, 2005), raises a number of fundamental questions about measurement in management research, the importance of matching the levels of theory and measurement, and the utility of previously established measures.

The author thanks Jacob Lyngsie, Copenhagen Business School, and Stewart Miller, University of Texas at San Antonio, for their valuable comments and suggestions. Any and all errors are the sole responsibility of the author.
E-mail address: bn.smg@cbs.dk.


2. Construct measurement in management research

Unobserved constructs (such as capabilities or managerial processes) lie at the core of management phenomena, which puts a premium on the researcher's ability to develop sound strategies for operationalizing and testing constructs that are unobservable (Godfrey & Hill, 1995). More fundamentally, I suggest that management scholars should strive harder to ensure the construct validity of their measures and seek to acquire direct measures rather than relying on poor proxies when examining complex latent constructs. Proxies for variables are often seemingly selected without concern for their reliability or validity, and future researchers must employ measurement strategies, such as cross-validation, in order to establish that a model's constructs are appropriately measured.

The use of dichotomous variables is an extreme example of how simple measures are used to proxy complex relationships in management research. For instance, prior experience with the partner is often coded as a dummy based on any type of previous relationship between the parties and theorized to reflect relational quality between partners. Yet such binary variables are imperfect measures of the nature (type of transaction) and quality (positive or negative) of interaction between JV partners and thus fail to capture the underlying theoretical construct. Using dichotomous variables as crude representations of more complex relations therefore allows for little variance, which likely biases results.

When constructs are complex and multi-faceted, multi-dimensional construct measurement is typically warranted in order to improve validity and fit with theory. A single measure provides no structure on which to evaluate construct validity. That is not to say that single measures cannot be valid; however, one must establish such validity by showing that results obtained from a single measure would be the same if other measures within the domain were used, or by showing that a single measure indeed captures the underlying construct better than a host of ambiguous items (for an example of trust in alliances, see Nielsen & Nielsen, 2009). Notwithstanding, combining several measures provides greater construct validity and, importantly, improves generalizability relative to using a single measure.

Construct measurement is of pivotal importance if we are to advance management research, and scholars must (at a minimum) demonstrate that (1) the measures employed plausibly capture the theoretical constructs and (2) the theoretical and empirical levels of analysis for the proposed construct match (Lawrence, 1997).
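To make the contrast between single-indicator proxies and multi-item measures concrete, the following sketch (illustrative only; the items and data are hypothetical, not drawn from the studies discussed) computes Cronbach's alpha for a three-item relational-quality scale. A binary prior-ties dummy offers no comparable internal-consistency check.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability of a multi-item scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent_quality = rng.normal(4, 1, 200)          # latent relational quality (hypothetical)
items = pd.DataFrame({f"rq{i}": latent_quality + rng.normal(0, 0.8, 200)
                      for i in range(1, 4)})    # three hypothetical survey items

# A multi-item measure permits a reliability estimate ...
print(f"Cronbach's alpha for the 3-item scale: {cronbach_alpha(items):.2f}")

# ... whereas a prior-ties dummy (any previous relationship = 1) cannot be
# checked for reliability and collapses the nature and quality of past
# interaction into a single bit.
prior_ties = (rng.random(200) < 0.5).astype(int)
print("Prior-ties dummy takes only the values:", sorted(np.unique(prior_ties)))
```

With three or more indicators, convergent and discriminant validity can also be probed, which no single dummy allows.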
3. Levels of theory and measurement

Researchers need to be much more watchful of potential mismatches between theory and measurement when operationalizing difficult-to-measure management phenomena; levels-of-analysis ambiguity may seriously misrepresent the relationships a researcher would have found if data had been collected and analyzed at the same level as the theory (Klein, Dansereau, & Hall, 1994; Lawrence, 1997). Level of theory refers to the focal unit or target (e.g., firm or alliance) that a researcher aims to explain; it is the level to which generalizations are made (Rousseau, 1985: 4). The focal unit, in turn, determines the appropriate level associated with the key constructs of interest in a study (e.g., JV relatedness is at the JV level). Explicit attention must be paid to the level of theory (and measurement) because neglecting to do so increases the risk of falsely attributing effects at one level (e.g., the industry level) to another (e.g., the firm or JV level) and thus committing fallacies of the wrong level (Rousseau, 1985: 5). Relatedly, level of measurement refers to the actual source of the data from which inferences are made; in other words, the entities from which the data are drawn or to which the data are attached (e.g., firms, dyads, and industries). The level of measurement should correspond to the level of the constructs specified for the focal unit in order to increase the variability predicted by the theory.

For instance, if the theory specifies within-group homogeneity (e.g., firms within industries are similar in terms of motives for forming JVs), data collection should be conducted at the firm level in order to assess whether such homogeneity exists. However, theories like the RBV assert that firms differ in terms of resources and capabilities, and that this heterogeneity determines variation in firm strategy and performance even within the same industry. Such firm-level heterogeneity, in turn, is likely to influence the motivational intent underlying JV formation and, as a result, the probability of JV termination. In other words, measuring JV-firm relatedness at the industry level assumes no firm heterogeneity within the same 2-digit SIC code, yet such heterogeneity is clearly possible (indeed likely, according to theory), as not all firms with similar 2-digit SIC codes are similar (homogeneous) in terms of resources, capabilities, or motivational intent. Testing such theories is best accomplished by (a) using measures that (like the theory) highlight the position of each individual firm relative to the JV and (b) maximizing within-group variability (Nielsen, 2010).

Before higher-level (e.g., industry) data are used to measure lower-level (e.g., JV or firm) phenomena, psychometric evidence of the suitability of using the data in this way must be demonstrated. Different types of validity evidence exist depending on the nature of the constructs in question (see Chan, 1998; Chen, Mathieu, & Bliese, 2004). More generally, however, researchers need to be explicit about how data collected at one level of analysis are related to constructs at a lower or higher level of analysis (see also Klein & Kozlowski, 2000). Here it should be noted that simply arguing that more appropriate measures were difficult to obtain and that this reflects the constraints a researcher typically faces when conducting research (Cui & Kumar, 2012b) is a poor proxy for rigorous and valid research. If theory (and data) indeed warrants multilevel treatment, then researchers need to account for this in adequate ways, both theoretically and empirically. Building multilevel theories and measuring and testing them accordingly is difficult but necessary if one wishes to answer challenging and interesting research questions. Conceptualizing at multiple levels yet building and testing a single-level model, without paying due attention to the levels of theory and measurement in the study design, may mislead the reader and lead to erroneous conclusions due to level-related confounds (Klein & Kozlowski, 2000; Nielsen, 2010).
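Before industry-level data are used to stand in for firm-level constructs, a simple variance-decomposition check can show how much firm-level heterogeneity the aggregation would discard. The sketch below is illustrative only: the data, the variable names, and the choice of the one-way ICC(1) formula are assumptions, not taken from the studies discussed.

```python
import numpy as np
import pandas as pd

def icc1(df: pd.DataFrame, group: str, value: str) -> float:
    """One-way ANOVA ICC(1): share of variance in `value` attributable to `group`."""
    g = df.groupby(group)[value]
    sizes = g.size()
    k = sizes.mean()                                   # average group size
    grand_mean = df[value].mean()
    msb = (sizes * (g.mean() - grand_mean) ** 2).sum() / (g.ngroups - 1)
    msw = (g.var(ddof=1) * (sizes - 1)).sum() / (len(df) - g.ngroups)
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical firm-level data: intangible-resource intensity by 2-digit SIC industry.
rng = np.random.default_rng(1)
firms = pd.DataFrame({
    "sic2": rng.choice(["28", "35", "36", "73"], size=400),
    "intangibles": rng.normal(0.3, 0.15, 400),         # mostly within-industry variation
})

print(f"ICC(1) for 2-digit SIC membership: {icc1(firms, 'sic2', 'intangibles'):.2f}")
# A value near zero means industry membership explains little of the firm-level
# heterogeneity, so an industry-level proxy would discard most of the variation
# the theory is actually about.
```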
4. The utility of existing measures

Moreover, as is often the practice, it can be dangerous to follow existing research in its operationalization of measures, since legitimizing the use of a potentially poor measure by referencing its previous use in the literature does not ensure its appropriateness or, indeed, its validity. Poor measurement oftentimes appears to be the result of relying on previously published work, regardless of the quality and purpose of that measurement approach, and may be responsible for the persistent and pervasive reliance on poor measures in a particular field (Boyd et al., 2005). To be sure, adopting existing measures can be very valuable; however, doing so without assessing the applicability and meaning of such measures may lead to a poor match between theory and measurement. Meaningfulness depends upon context, and thus researchers must take extra steps to ensure validity when utilizing existing measures in different contextual settings. Hence, although it is normally thought to lend credibility and legitimacy to a study that its measures (and particularly its proxies) have been established previously in the literature, we, as researchers, have to be more critical of the way we adopt and apply such measures in our work.

First of all, be careful in adopting existing measures at face value without evaluating to what extent the original study was within the same domain theoretically and empirically (e.g., levels of theory and measurement). For instance, while a number of scholars have used industry similarity measured by SIC codes to infer business relatedness, the fact remains that industry overlap (in terms of 2- or even 4-digit SIC codes) says very little about the underlying motivation behind JV formation, or about the complexity of managing such JVs as a function of firm-level resources and capabilities. In order to capture the actual influence of a particular unobserved theoretical construct more accurately, one must strive to measure it directly. For instance, in the case of JV relatedness, studies have developed direct measures of complementarity, operationalized as competence similarity (e.g., Nielsen, 2007) or resource overlap (e.g., Hill & Hellriegel, 1994; Suarez & García-Canal, 2003). The question here is not so much about the transitive nature of the construct as it is a basic issue of mismatch between theory and measurement in the first place. In the context of the Cui and Kumar (2012a) study of JV termination, such a mismatch is particularly important given that differences in the probability of JV termination are theorized to be contingent upon the relatedness of the JV to the (focal) parent firm, where relatedness is thought to relate directly to the motive behind the JV as well as to the firm's tangible and intangible resources. Clearly, a crude industry-level measure of overlap constitutes a mismatch with the underlying theory.
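To illustrate how much information the industry-level dummy discards, the sketch below (hypothetical data and a hypothetical overlap formula, not the operationalizations used in the studies cited) constructs both the crude 2-digit SIC relatedness dummy and a simple firm-level resource-overlap measure for the same set of JVs.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 200

# Hypothetical JV-level data: each row is a JV with its two parents'
# 4-digit SIC codes and firm-level resource profiles (shares summing to 1).
jv = pd.DataFrame({
    "parent_a_sic": rng.choice(["2834", "3559", "3674", "7372"], n),
    "parent_b_sic": rng.choice(["2834", "3559", "3674", "7372"], n),
})
profiles_a = rng.dirichlet(np.ones(5), n)   # resource mix of parent A
profiles_b = rng.dirichlet(np.ones(5), n)   # resource mix of parent B

# Crude industry-level proxy: related = same 2-digit SIC code (a single bit).
jv["related_sic2"] = (
    jv["parent_a_sic"].str[:2] == jv["parent_b_sic"].str[:2]
).astype(int)

# Firm-level alternative: continuous resource overlap between the parents
# (here, the sum of the minimum shares across resource categories).
jv["resource_overlap"] = np.minimum(profiles_a, profiles_b).sum(axis=1)

# Within each value of the dummy the firm-level measure still varies widely --
# exactly the heterogeneity the industry proxy assumes away.
print(jv.groupby("related_sic2")["resource_overlap"].describe().round(2))
```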
5. Conclusion

Theories have two equally important components: (1) the measurement component, which dictates what constructs are to be measured and how, and (2) the structural component, which describes the properties of the resulting measures in terms of how constructs interrelate. Unfortunately, management research seems to be predominantly preoccupied with the latter, often at the expense of the former. It is strongly recommended that scholars pay more attention to measurement issues in general, and to construct validity, the appropriateness of measures across contexts, and the matching of levels of measurement to theory in particular. Indeed, structural equation modeling techniques, which are often used in management research when dealing with latent constructs, advocate a two-stage procedure in which the researcher first must establish a good measurement model (via confirmatory factor analysis) before moving on to testing relationships in the structural regression model (Anderson & Gerbing, 1988). This strong focus on the measurement instrument (and measurement error) is a particular strength of structural equation modeling and should be adopted more often before moving on to hypothesis testing via regression.
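The two-step logic can be sketched as follows. This is a deliberately simplified stand-in, not a full CFA/SEM: the data and variable names are hypothetical, a composite score replaces the latent variable, and dedicated SEM software would estimate both steps jointly with latent constructs. The point is simply the order of operations: assess the measurement model first, then the structural relationship.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 300

# Hypothetical latent construct (e.g., directly measured JV relatedness)
# with three observed indicators and a downstream outcome.
relatedness = rng.normal(0, 1, n)
indicators = pd.DataFrame(
    {f"rel{i}": relatedness + rng.normal(0, 0.6, n) for i in range(1, 4)}
)
termination_risk = -0.5 * relatedness + rng.normal(0, 1, n)

# Step 1 (measurement model): do the indicators load on a single factor?
fa = FactorAnalysis(n_components=1).fit(indicators)
loadings = pd.Series(fa.components_[0], index=indicators.columns)
print("Factor loadings:\n", loadings.round(2))

# Step 2 (structural model): only after the measurement model looks sound,
# estimate the hypothesized relationship using the construct score.
score = indicators.mean(axis=1)                  # simple composite for the sketch
ols = sm.OLS(termination_risk, sm.add_constant(score)).fit()
print(ols.params.round(2))
```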
As management advances as a scientific field, we need to raise the criteria for the validity of measures. For instance, while it was traditionally (up until the 1990s) thought appropriate to mix levels of theory and measurement and to use simple proxies for certain difficult-to-measure variables, such as R&D intensity as a proxy for absorptive capacity, we must now hold management scholars to a higher standard in their attempts to capture these often latent constructs.

While one can certainly appreciate the difficulty associated with obtaining and validating critical measures of management phenomena, we cannot as a field be satisfied with the use of poor proxies; rather, management scholars must recognize the crucial importance of construct operationalization and measurement as a determinant of the scientific validity of their studies. Just as convenience sampling is unacceptable for a number of reasons, so too are poor approximations excused by a lack of data availability or legitimized through prior use. This does not, however, free us from the inherent dangers of equating the measurability of a construct with its relevance in explanation and treating as important only that which is readily accessible to measurement (Hayek, 1989). Yet, as noted by Lawrence (1997), the use of proxies moves the researcher farther and farther away, both empirically and theoretically, from the actual mechanisms underlying observed relationships.

Management as a science invariably involves functional relations among measured variables, and thus management research can progress no faster than the measurement of its key variables. Future management research must raise the bar of measurement and avoid falling into the trap of simply applying existing proxies for critical constructs. Instead, researchers must pay particular attention to consistency between the levels of theory and measurement and to the appropriateness of the operationalization of important research constructs in order to increase the validity and interpretability of results.

In practice, this entails that researchers should increase their concern for the reliability and construct validity of their measures in order to improve the quality of measurement. One way of doing so is to use multiple rather than single indicators for specific constructs, in particular complex and unobserved constructs. The use of multiple items or indicators allows the reliability of the measure to be evaluated. While the use of single-item indicators (proxies) is common practice with archival data, such research is not exempt from the requirements of validation and reliability. The use of proxies implicitly emphasizes measurement reliability over construct validity, yet such choices must be explicitly stated and, more importantly, justified. Indeed, future management research should seek to triangulate between data sources (be they archival or survey-based) in order to ensure the reliability and validity of the construct measures used. In addition, constructs must be measured at the appropriate level in order to ensure conformity of the data to the level of theory; doing so increases their predictive power (Nielsen, 2010). Moreover, crude proxies should be avoided unless strictly necessary (e.g., in early exploratory research), as the value of a theory is bound by its measurement; good theories work precisely because of their close connection to empirical observation (Lawrence, 1997).
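As a small illustration of such triangulation (hypothetical data; the variables are placeholders rather than measures from the studies discussed), one can check how well an archival proxy converges with a direct, survey-based measure of the same construct before relying on the proxy alone.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 250

# Hypothetical direct measure: survey-based resource-overlap score (0-1)
# for each pair of JV parents.
survey_overlap = rng.uniform(0, 1, n)

# Hypothetical archival proxy: 1 if both parents share a 2-digit SIC code,
# only weakly tied to the direct measure by construction.
same_sic2 = (rng.random(n) < 0.3 + 0.2 * survey_overlap).astype(int)

df = pd.DataFrame({"survey_overlap": survey_overlap, "same_sic2": same_sic2})

# Convergent-validity check: a low correlation warns that the archival proxy
# and the direct measure are not capturing the same construct.
print(df.corr().round(2))
```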
Finally, although relying on past research for measures provides opportunities for cross-study comparisons, without proper validation and assessment of reliability such practice only serves to ensure that the measurement problem persists. Essentially, more attention must be paid to research design and process, to construct validation, and to more sophisticated analytical strategies in order to avoid measurement problems (Bergh, 2001). Validation is an unending process, and we, as management scholars, need to continuously assess and re-evaluate the usefulness of a particular measure in order to move management research forward.

References

Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411–423.
Bergh, D. D. (2001). Diversification strategy research at a crossroads. In M. A. Hitt, R. E. Freeman, & J. S. Harrison (Eds.), Handbook of strategic management. Oxford, UK: Blackwell.
Boyd, B. K., Gove, S., & Hitt, M. A. (2005). Construct measurement in strategic management research: Illusion or reality? Strategic Management Journal, 26, 239–257.
Chan, D. (1998). Functional relations among constructs in the same content domain at different levels of analysis: A typology of composition models. Journal of Applied Psychology, 83, 234–246.
Chen, G., Mathieu, J. E., & Bliese, P. D. (2004). A framework for conducting multilevel construct validation. In F. J. Dansereau & F. Yammarino (Eds.), Research in multi-level issues: The many faces of multi-level issues, Vol. 3 (pp. 273–303). Oxford, UK: Elsevier Science.
Cui, A. S., & Kumar, M. V. S. (2012a). Termination of related and unrelated joint ventures: A contingency approach. Journal of Business Research, 65(8), 1202–1208.
Cui, A. S., & Kumar, M. V. S. (2012b). Advancing multilevel alliance research: Rejoinder to commentary on Cui and Kumar (2012) and future directions. Journal of Business Research, http://dx.doi.org/10.1016/j.jbusres.2012.05.017.
Godfrey, P. C., & Hill, C. W. L. (1995). The problem of unobservables in strategic management research. Strategic Management Journal, 16, 519–533.
Hayek, F. A. (1989). The pretense of knowledge. American Economic Review, 79(6), 3–7.
Hill, R. C., & Hellriegel, D. (1994). Critical contingencies in joint venture management: Some lessons from managers. Organization Science, 5, 594–607.
Klein, K. J., Dansereau, F., & Hall, R. J. (1994). Levels issues in theory development, data collection, and analysis. Academy of Management Review, 19, 195–229.
Klein, K. J., & Kozlowski, S. W. J. (2000). From micro to meso: Critical steps in conceptualizing and conducting multilevel research. Organizational Research Methods, 3, 211–236.
Lawrence, B. S. (1997). The black box of organizational demography. Organization Science, 8(1), 1–22.
Nielsen, B. B. (2007). Determining international strategic alliance performance: A multidimensional approach. International Business Review, 16(3), 337–361.
Nielsen, B. B. (2010). Multilevel issues in strategic alliance research. In T. K. Das (Ed.), Researching strategic alliances: Emerging issues (pp. 1–26). Charlotte, NC: Information Age Publishing.
Nielsen, B. B. (2012). What determines joint venture termination? A commentary essay. Journal of Business Research, 65(8), 1109–1111.
Nielsen, B. B., & Nielsen, S. (2009). Learning and innovation in international strategic alliances: An empirical test of the role of trust and tacitness. Journal of Management Studies, 46(6), 1031–1056.
Rousseau, D. M. (1985). Issues of level in organizational research: Multi-level and cross-level perspectives. Research in Organizational Behavior, 7, 1–37.
Suarez, M. V., & García-Canal, E. (2003). Complementarity and leverage as drivers of stock market reactions to global alliance formation. Long Range Planning, 36(6), 565–578.
