
Industrial and Corporate Change, 2018, 1–24

doi: 10.1093/icc/dty054
Original Article

Downloaded from https://academic.oup.com/icc/advance-article-abstract/doi/10.1093/icc/dty054/5245299 by Swinburne Library user on 03 February 2019


Toward a dynamic capabilities scale: measuring
organizational sensing, seizing, and transforming
capacities
Barbara Kump,1,* Alexander Engelmann,2 Alexander Kessler,3 and
Christina Schweiger4
1 Institute for SME-Management & Entrepreneurship, WU Vienna University of Economics & Business, Welthandelsplatz 1, Building D1, Vienna, 1020, Austria. e-mail: bkump@wu.ac.at
2 Department of Management, Vienna University of Applied Sciences, Waehringer Guertel 97, Vienna, 1180, Austria. e-mail: alexander.engelmann@fh-wien.ac.at
3 Department of Global Business and Trade, Research Institute for Family Business, WU Vienna University of Economics and Business, Welthandelsplatz 1, Building D1, Vienna, 1020, Austria. e-mail: alexander.kessler@wu.ac.at
4 Department of Management, Vienna University of Applied Sciences, Waehringer Guertel 97, Vienna, 1180, Austria. e-mail: christina.schweiger@fh-wien.ac.at
*Main author for correspondence.

Abstract
To date, no standard scale exists for measuring dynamic capabilities. This limits the comparability of empirical findings and impairs data-based theory development. This article presents the development of a 14-item scale based on Teece’s (2007, Strategic Management Journal, 28, 1319–1350) well-established dynamic capability framework, assessing sensing, seizing, and transforming capacities. It describes the rigorous empirical scale development procedure comprising the steps of (i) item generation, (ii) scale purification (n = 269), and (iii) scale confirmation (n = 307). The scale shows high reliability and validity and is a solid predictor of business and innovation performance.
JEL classifications: O31, C83

1. Introduction
Since its introduction approximately two decades ago (Helfat, 1997; Teece et al., 1997; Eisenhardt and Martin,
2000), the concept of dynamic capabilities (DC) has received great attention from management scholars (for reviews
see Ambrosini and Bowman, 2009; Barreto, 2010; Di Stefano et al., 2010; Vogel and Güttel, 2013; Wilden et al.,
2016; Schilke et al., 2018). Defined as organizational capabilities that allow firms to “build and renew resources and
assets [. . .], reconfiguring them as needed to innovate and respond to (or bring about) changes in the market and in
the business environment” (Teece, 2014: 332), DC are crucial for firms to achieve strategic change and renewal
(Helfat et al., 2007; Agarwal and Helfat, 2009).
A rigorous literature review by Schilke et al. (2018) revealed two important observations: first, in contrast with
earlier days when most DC studies were conceptual, the majority of DC research is now empirical; second, while

© The Author 2018. Published by Oxford University Press on behalf of Associazione ICC. All rights reserved.
researchers have recently started to explore the potential of employing proxies for measuring DC (Stadler et al., 2013; Girod and Whittington, 2017), 33% of all the DC publications they reviewed still report findings from survey studies. However, nearly every one of these studies uses its own survey instrument, and no standard scale exists for measuring DC that would expose the concept to systematic empirical investigation. Moreover, in hardly any of these quantitative studies measuring DC as a dependent or independent variable (for example, Marcus and Anderson, 2006; Naldi et al., 2014; Lin et al., 2016; Lopez-Cabrales et al., 2017) were the employed DC scales developed in line with acknowledged good practices of scale construction in organizational contexts, as suggested, for instance, by Churchill (1979) or Hinkin (1995, 1998); one exception is Wilden et al.’s (2013) scale. Both the absence of a standard scale and the lack of ultimate rigor in scale development limit the comparability of findings in the growing number of quantitative survey studies and the applicability of meta-analyses for further improving conceptual clarity; findings from different studies remain fragmented and disconnected. Only by employing standardized, validated instruments that incorporate the assumptions underlying the concept of DC can empirical research reveal both theoretical insights and well-founded implications for managers. Our research aims to contribute to the development of a standardized DC scale by applying a rigorous scale development procedure.
The development of a scale should start from a well-established construct (Clark and Watson, 1995) and follow a systematic, multi-stage procedure in which the operationalizations are gradually reduced to a set of items that most consistently reflect that construct. According to Schilke et al.’s (2018) study, the most widely acknowledged and most frequently cited DC framework is Teece’s (2007) model (a further development of Teece et al., 1997), which conceptualizes DC as sensing, seizing, and transforming capacities. Hence, we took Teece’s (2007) conceptualization as the starting point for our scale development.
Our main contribution is a carefully designed and empirically validated DC scale measuring Teece’s (2007) sensing, seizing, and transforming capacities. The resulting scale comprises 14 items. The subscales show high internal consistency, and the overall scale reveals high construct validity. The capacities are solid predictors of different facets of business performance and innovation performance, indicating substantial criterion validity of the scale. Although the current scale may have to be extended, revised, or even partly discarded in future iterations, it may serve as a starting point for further consolidating empirical research.
This article is organized as follows. We first provide the theoretical rationale of our approach to measuring DC based on Teece’s (2007) framework. Then, we review existing scales for measuring Teece’s DC concept. The core body describes the process of scale development: we explain how we formulated the items of the scale based on precise operationalizations (item generation), how we assessed the dimensionality and internal consistency of the preliminary scale with an exploratory factor analysis (EFA) based on a sample of 269 companies, and how we purified it by removing items of low psychometric quality (scale purification). We further describe how we cross-validated the factor structure of the items in a second-order confirmatory factor analysis (CFA) based on a new sample of 307 companies and tested the criterion validity by predicting various performance indicators (scale confirmation). We conclude with a discussion of our scale compared to other approaches for measuring DC (scales and secondary data), and some theoretical implications of our research.
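The dimensionality check that an EFA of this kind performs can be illustrated with a minimal sketch. The snippet below is not the authors’ analysis: the item responses are simulated under an assumed three-factor structure (a stand-in for sensing, seizing, and transforming subscales), and the Kaiser eigenvalue-greater-than-one rule is only one common heuristic for deciding how many factors to retain.

```python
import numpy as np

# Simulate 269 respondents answering 9 items that load on three
# correlated latent factors (three items per factor).
rng = np.random.default_rng(7)
factor_corr = np.array([[1.0, 0.4, 0.4],
                        [0.4, 1.0, 0.4],
                        [0.4, 0.4, 1.0]])
factors = rng.normal(size=(269, 3)) @ np.linalg.cholesky(factor_corr).T
loadings = np.zeros((9, 3))
for f in range(3):
    loadings[3 * f:3 * f + 3, f] = 0.8          # simple structure
items = factors @ loadings.T + rng.normal(0, 0.6, size=(269, 9))

# Eigenvalues of the item correlation matrix; the Kaiser criterion
# retains factors whose eigenvalues exceed 1.
eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(items, rowvar=False)))[::-1]
n_factors = int((eigvals > 1).sum())
print("eigenvalues:", np.round(eigvals, 2))
print("retained factors:", n_factors)
```

In an actual study, a full EFA with rotation (and, at the confirmatory stage, a second-order CFA) would of course be used rather than raw eigenvalues alone.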

2. Theoretical basis for the development of a DC scale


2.1 Different perspectives on DC
Since Teece et al.’s (1997: 516) original introduction of the term dynamic capabilities as capabilities to “integrate,
build, and reconfigure internal and external competences to address rapidly changing environments,” the concept of
DC has become one of the most important theoretical lenses in contemporary management scholarship (for recent
and comprehensive reviews see Wilden et al., 2016, and Schilke et al., 2018). There is consensus in the literature that
the role of DC is to modify a firm’s existing resource base and to transform it intentionally and in alignment with
strategic assumptions in such a way that a new bundle or configuration of organizational resources is created (Zahra
et al., 2006; Helfat et al., 2007; Ambrosini and Bowman, 2009). This role is also reflected in the distinction between
dynamic and “ordinary” capabilities, as outlined by Teece (2014; see also Winter, 2003, or Zahra et al., 2006): ordinary capabilities are responsible for generating value for a firm (e.g., supply chain management of a car manufacturer; delivery of high-quality management education in a business school). In contrast, DC extend, modify, and create ordinary capabilities. There is agreement in the literature that through this modification of ordinary capabilities, DC may contribute to competitive advantage, but they are not sufficient for sustained firm performance (Helfat et al., 2007; Wilden et al., 2016; Schilke et al., 2018).
Besides these commonalities, some important differences exist between different streams in the field, and in the past, the DC framework has been the subject of numerous theoretical debates (for overviews, see Arend and Bromiley, 2009; Ambrosini and Bowman, 2009; Easterby-Smith et al., 2009; Barreto, 2010; Di Stefano et al., 2014; Wilden et al., 2016). Most importantly, Peteraf et al. (2013) demonstrated a theoretical divide between a stream of research that builds on Teece et al.’s (1997) conceptualization (further developed by Helfat et al., 2007; Teece, 2007) and a stream that relies on Eisenhardt and Martin’s (2000) view. However, based on an in-depth theoretical analysis of the two approaches, Peteraf et al. (2013) showed that these are merely differences in perspective (as regards boundary conditions and the attainment of sustainable competitive advantage) that can be combined under certain circumstances. A few years later, Wilden et al. (2016) observed that these two camps have moved more closely together since 2012—which may be partly due to Peteraf et al.’s (2013) integrative perspective. Similarly, Schilke et al. (2018) demonstrated that there is now considerable conceptual convergence in the field: on the basis of a content analysis of 298 articles, they came to the conclusion that the three most influential definitions by Teece et al. (1997), Eisenhardt and Martin (2000), and Helfat et al. (2007) are complementary and build on one another, and that other frequently used definitions are highly consistent with those three.
Yet, for measuring DC, researchers have to specify which perspective their measure takes: Teece’s (and Helfat et al.’s) or Eisenhardt and Martin’s view. Teece et al.’s (1997) approach (and its further developments) is rather broad and general (e.g., “the firm’s ability to integrate, build, and reconfigure [. . .] competences”; Teece et al., 1997: 516), and seeks to specify generic micro-foundations: sensing, seizing, and transforming capacities (Teece, 2007). In contrast, Eisenhardt and Martin (2000, and those building on their work) do not provide a generic set of capacities but present a list of examples of DC, including product development routines, strategic decision-making routines, routines for replication and brokering, resource allocation routines, and so forth.
For the present scale development, we chose Teece’s (2007) conceptualization for a theoretical and a practical reason. The theoretical reason is that we aimed at measuring the general DC of a firm; Teece’s (2007) model provides general types of processes in which DC are engaged (sensing, seizing, and transforming) rather than specific functional domains of DC (e.g., alliancing, product development), as Eisenhardt and Martin’s (2000) perspective would imply. Thereby, in line with Schilke et al. (2018), we regard Teece’s (2007) work not as a supersession but as an elaboration of Teece et al.’s (1997) typology. The practical reason for relying on Teece’s perspective is that it has been employed in the majority of empirical DC studies so far (Schilke et al., 2018). In the following subsections, we outline the theoretical and methodical implications of this decision to rely on Teece’s (2007) view for scale development.

2.2 Implications of Teece’s DC conceptualization for scale development


Teece et al.’s (1997) and Eisenhardt and Martin’s (2000) approaches (and their respective further developments) are in agreement in many regards. Most importantly, both originate from the resource-based view, and both take a multilevel perspective in that they combine managerial and organizational processes. One seeming difference is whether DC are considered capacities or routines (Di Stefano et al., 2010, 2014). Perspectives that are based on Teece et al.’s (1997) definition regard DC as capacities; those based on Eisenhardt and Martin (2000) view DC as routines. However, Di Stefano et al. (2014) showed that the two views can be combined: capacities are latent and can be observed only once they are put into action, whereas routines and their constituent elements are more observable. That is, DC are seen as latent capacities that are manifested in (observable) routines and their outcomes. Only through these routines do DC enable strategic renewal in a continuous and reliable way. Hence, both perspectives agree on the crucial role of routines (Peteraf et al., 2013).
Moreover, while Teece et al.’s (1997) and Eisenhardt and Martin’s (2000) views seem to diverge in (i) whether they regard DC as useful for firms in highly dynamic environments, (ii) whether DC are a source of sustainable advantage, and (iii) whether DC are a source of competitive advantage, Peteraf et al. (2013) showed that the two perspectives can be combined in the sense of a contingency approach, by defining boundary conditions under which both their assumptions hold true: in moderately dynamic environments, DC as “best practices” (as conceptualized by Teece et al., 1997) may be a source of sustainable competitive advantage if they are idiosyncratic in their details. In the context of highly dynamic environments, the assumptions of the two perspectives can still be in alignment, for example, if DC take the shape of higher-order capabilities (e.g., capabilities in rapid and continuous product innovation). These higher-order capabilities may enable firms to deploy and modify lower-order DC in the form of simple rules. As a second circumstance under which the approaches are in alignment even in high-velocity markets, Peteraf et al. (2013) mention the presence of DC that are not specific but generic in the sense that they remain useful to the firm even as market conditions change.
Consequently, to build a scale based on Teece’s (2007) view that still does not contradict Eisenhardt and Martin’s (2000) perspective, a few methodical implications must be considered. First, in line with Teece’s rather general conceptualization of DC, the scale should capture broad and general DC for integrating, building, and reconfiguring ordinary capabilities, and not measure examples of DC (e.g., product development routines and strategic decision-making routines). Second, to be effective in high-velocity markets (the area of application targeted by Teece et al., 1997; Teece, 2007, 2014), DC should be seen as generic in the sense that they remain useful even if the market changes dramatically. Third, to potentially enable competitive advantage (in line with the VRIN criteria: valuable, rare, inimitable, non-substitutable), these capabilities might be formulated as “best practices,” but only if the definitions of these best practices leave room for idiosyncrasy (as suggested by Eisenhardt and Martin, 2000). That is, the definitions should acknowledge that one and the same latent aspect of DC (e.g., the capacity to continuously identify trends in the firm’s environment) may manifest in different routines across different firms. Hence, firms that possess high levels of certain aspects of DC may have established reliable routines that lead to similar DC outcomes (e.g., awareness of market trends), but they may differ with regard to the concrete methods and structures through which they achieve these DC outcomes.

2.3 Dimensions of DC: sensing, seizing, and transforming


Teece (2007; see also Teece, 2014) further refined DC into generic sensing, seizing, and transforming capacities, which need to be closely aligned with a firm’s strategy. Teece’s conceptualizations of these capacities are rather broad: sensing includes as many aspects as the “identification, development, codevelopment and assessment of technological opportunities in relationship to customer needs” (Teece, 2014: 332), seizing involves the “mobilization of resources to address needs and opportunities, and to capture value from doing so” (Teece, 2014: 332), and transforming means nothing less than “continued renewal” (Teece, 2014: 332). To operationalize them, Teece’s (2007, 2014) broad conceptualizations have to be further specified.
Sensing refers to an organization’s capacity to continuously scan the organizational environment (Teece, 2007, 2014; see also Pavlou and El Sawy, 2011; Makkonen et al., 2014). According to Teece, sensing refers to accumulating and filtering information from the environment “to create a conjecture or a hypothesis about the likely evolution of technologies, customer needs, and marketplace responses” and “involves scanning and monitoring internal and external technological developments and assessing customer needs, expressed and latent” (Teece, 2007: 1323), in addition to shaping market opportunities and monitoring threats. Some researchers (Babelytė-Labanauskė and Nedzinskas, 2017) have pointed out that sensing does not only have an external focus but also an internal aspect: it may, for example, involve the identification of new developments and opportunities within the firm. This theoretical difference between external and internal sensing is reflected in further developments of the cognitive micro-foundations of sensing: some scholars in the strategic management field (Helfat and Peteraf, 2015) have focused on perception and attention (i.e., a rather external perspective), while others (Hodgkinson and Healey, 2011) have mainly looked at the need for reflection/reflexion (i.e., a rather internal perspective). Nevertheless, Teece’s (2007, 2014) original conceptualization is more oriented toward the organization’s external environment. Hence, in the present article, we stick closely to Teece’s original model and understand sensing mainly as external sensing. This sensing component involves both recognizing opportunities and anticipating competitive threats (Helfat and Peteraf, 2015). Sensing may take place formally (e.g., through systematic market research) or informally (e.g., through self-motivated reading of industry newspapers by staff members). Refining Teece’s broad definition, we posit that an organization with high sensing capacity is able to continuously and reliably acquire strategically relevant information from its environment, including market trends, best practices, and competitors’ activities.
Seizing refers to developing and selecting business opportunities that fit the organization’s environment and its strengths and weaknesses (Teece, 2007). Seizing thus means that market opportunities are successfully exploited and threats are eluded. Seizing bridges external and internal information and knowledge, and it is closely linked with strategic decision-making, particularly regarding investment decisions. Seizing capacity starts from a strategy that enables the recognition of valuable knowledge. This evaluation is based on prior knowledge, and it results in a selection from a variety of strategic options. Seizing capacity within an organization is high if the organization is able to decide whether some information is of potential value, to transform valuable information into concrete business opportunities that fit its strengths and weaknesses, and to make decisions accordingly.
Transforming, according to Teece (2007: 1319), includes “enhancing, combining, protecting, and, when necessary, reconfiguring the business enterprise’s intangible and tangible assets,” such that path dependencies and inertia are avoided. That is, transforming refers to putting decisions for new business models or product or process innovations into practice by implementing the required structures and routines, providing the infrastructure, ensuring that the workforce has the required skills, and so forth. Transforming is characterized by the actual realization of strategic renewal within the organization through the reconfiguration of resources, structures, and processes. Teece (2007: 1335) describes transforming (reconfiguring) as the “ability to recombine and to reconfigure assets and organizational structures as the enterprise grows, and as markets and technologies change.” Thereby, transforming is similar to Li and Liu’s (2014) implementation capacity, which is defined as “the ability to execute and coordinate strategic decision and corporate change, which involves a variety of managerial and organizational processes, depending on the nature of the objective” (Li and Liu, 2014: 2794). Implementing thus refers to communicating, interpreting, adopting, and enacting strategic plans (Noble, 1999). Only through implementation does renewal come into being; otherwise, new information and ideas within an organization remain theoretical inputs and potential changes. An organization with a high transforming capacity consistently implements decided renewal activities by assigning responsibilities, allocating resources, and ensuring that the workforce possesses the newly required knowledge.

3. Existing scales for measuring DC based on Teece (2007)


As outlined above, even though no standard scale exists, several researchers have developed and employed various DC scales based on Teece (2007). To gain an overview of existing scales, we systematically searched peer-reviewed journals in the ProQuest database for the period between 1997 (the year of publication of Teece et al.’s original conceptualization) and January 2018. The central search term was dynamic capabilit*, mentioned at least in the title, in the abstract, or as a keyword. As an additional constraint, we added the condition “scale OR measure* OR survey OR empirical*” to the query. The search revealed 325 articles, of which we excluded those that were from different fields (e.g., chemistry and biology), addressed DC only theoretically without actually measuring them, or employed empirical measures other than scales (e.g., secondary data, financial data, and data from qualitative interviews). The result was a collection of 125 articles presenting survey-based measures of DC.
Of the studies described in these articles, 75 employed surveys for specific examples of DC (in the sense of Eisenhardt and Martin’s perspective), such as quality of the scientific team (Deeds et al., 2000); market disruptiveness capability (McKelvie and Davidsson, 2009); market orientation (Ma and Todorovic, 2011); supply-chain integration (Fawcett et al., 2011; Vickery et al., 2013); R&D capabilities (Singh, Oberoi, and Ahuja, 2013); dynamic collaboration capability (Allred et al., 2011); alliance management (Schilke, 2014); managerial capabilities (Townsend and Busenitz, 2015); new product development (Barrales-Molina et al., 2015); marketing competence, R&D competence, technological competence, and customer competence (Danneels, 2016); dynamic service capabilities (Raman and Bharadwaj, 2017); or networking capability in supplier networks (Mitrega, Forkmann, Zaefarian, and Henneberg, 2017).
Another 22 studies (Marcus and Anderson, 2006; Wu, 2010; Drnevich and Kriauciunas, 2011; Cheng et al., 2014; Lee et al., 2016; Monteiro et al., 2017; Wamba et al., 2017) investigated DC as an overall construct, without distinguishing its dimensions. Of the remaining 28 studies in which components of DC were measured, 15 were based on their own newly developed models or on models other than Teece’s (Agarwal and Selen, 2009; Hou and Chien, 2010; Jiao et al., 2010; Simon, 2010; Cui and Jiao, 2011; Zheng et al., 2011; Ali et al., 2012; Li and Liu, 2014; Makkonen et al., 2014; Karimi and Walter, 2015; Wang et al., 2015; Lin et al., 2016; Verreynne et al., 2016; Wohlgemuth and Wenzel, 2016; Battisti and Deakins, 2017).
The remaining 13 survey-based measures building on Teece’s view are listed in Table 1. The table is split into two time periods: (i) January 1997 to December 2015, the time before scale development, and (ii) January 2016 to January 2018, the time during scale development. Scales from the first period, that is, before 2016, were taken into account for item generation.
Table 1. Existing scales for measuring DC as conceptualized by Teece

Findings from the literature review between January 1997 and December 2015 (basis for scale development):

1. Pavlou and El Sawy (2011). Underlying model: Teece, Pisano, and Shuen (1997). Operationalization of DC: sensing (four items), learning (five items), integrating (five items), coordinating (five items), reconfiguration (two items; second-order construct). Predictive validity: modification in operational capabilities, new product development performance.
2. Protogerou et al. (2012). Underlying model: Teece et al. (1997). Operationalization of DC: coordination (three items), learning (three items), competitive responses (four items). Predictive validity: firm performance.
3. Hawass (2010). Underlying model: part of Teece’s (2007) model. Operationalization of DC: reconfiguration (four items). Predictive validity: n/a (reconfiguration is the criterion variable).
4. Naldi et al. (2014). Underlying model: part of Teece’s (2007) model. Operationalization of DC: sensing (four items), seizing (four items). Predictive validity: innovative performance.
5. Wilden and Gudergan (2015). Underlying model: part of Teece’s (2007) model. Operationalization of DC: sensing (four items), reconfiguring (seven items). Predictive validity: firm performance.
6. Nedzinskas et al. (2013). Underlying model: Teece (2007). Operationalization of DC: sensing (two items), seizing (seven items), reconfiguring (two items). Predictive validity: relative nonfinancial performance, relative financial performance.
7. Wilden et al. (2013). Underlying model: Teece (2007). Operationalization of DC: sensing (four items), seizing (four items), reconfiguring (four items). Predictive validity: financial solvency.

Findings from the literature review between January 2016 and January 2018 (during scale development):

8. Mandal (2017). Underlying model: Teece et al. (1997). Operationalization of DC: visibility for sensing (five items), visibility for learning (five items), visibility for coordinating (five items), visibility for integrating (five items). Predictive validity: hospital–supplier collaboration, hospital supply chain performance.
9. Pandit et al. (2017). Underlying model: Teece et al.’s (1997) model; items partly based on Pavlou and El Sawy (2011). Operationalization of DC: learning (two items), integrating (three items), coordinating (one item). Predictive validity: disruptive innovation.
10. Rashidirad et al. (2017). Underlying model: Teece et al. (1997). Operationalization of DC: sensing (six items), learning (seven items), integrating (seven items), coordinating (seven items). Predictive validity: novelty, lock-in, complementarities, efficiency.
11. Babelytė-Labanauskė and Nedzinskas (2017). Underlying model: Teece (2007). Operationalization of DC: sense (four aspects to be rated), seize (five aspects to be rated), reconfigure (four aspects to be rated). Predictive validity: R&D performance, innovation performance.
12. Lopez-Cabrales et al. (2017). Underlying model: Teece’s (2007) model; items partly based on Pavlou and El Sawy (2011). Operationalization of DC: sensing (three items), seizing (four items), reconfiguration (five items). Predictive validity: n/a (DC are criterion variables).
13. Shafia et al. (2016). Underlying model: Teece (2007). Operationalization of DC: sensing (three items), seizing (five items), reconfiguration (six items). Predictive validity: competitiveness of research and technology organizations.

Note: The scale purification study took place at the beginning of 2016, and only scales published before 2016 could be taken into account for scale development. The literature review was extended to January 2018 to make sure that more recent developments are also captured.

As shown in Table 1, two measures in the first time period (Rows 1 and 2; Pavlou and El Sawy, 2011; Protogerou et al., 2012) and three measures in the second time period (Rows 8–10; Mandal, 2017; Pandit et al., 2017; Rashidirad et al., 2017) were derived from Teece et al.’s (1997) theoretical assumptions, with only loose references to Teece’s (2007) concept.

The remaining eight measures (Rows 3–7 and 11–13 in Table 1) build upon sensing, seizing, and transforming capacities as conceptualized by Teece (2007). Of these, Hawass (2010) measures reconfiguration (transforming) capacity but neglects sensing and seizing. Naldi et al. (2014) include sensing and seizing, but not transforming, and Wilden and Gudergan (2015) take into account sensing and reconfiguring, but not seizing.
Overall, five studies remain that employ scales for sensing, seizing, and transforming capacities as conceptualized by Teece (2007)—two in the first time period (Rows 6–7; Nedzinskas et al., 2013; Wilden et al., 2013) and three in the second time period (Rows 11–13; Shafia et al., 2016; Babelytė-Labanauskė and Nedzinskas, 2017; Lopez-Cabrales et al., 2017).
Nedzinskas et al. (2013: 385) measure all three dimensions, sensing, seizing, and transforming; they employ items “adapted from Teece (2007)” but do not provide details on the scale development process, the exact wording of the items, or the response format they used. The main goal of Nedzinskas et al.’s work was to measure the impact of DC aspects on SME performance, and not to systematically develop and test a DC scale. Likewise, all three of the more recent articles that report DC scales aim at linking DC with other variables: Babelytė-Labanauskė and Nedzinskas (2017) predict R&D and innovation performance, Shafia et al. (2016) explain the competitiveness of research and technology organizations, and Lopez-Cabrales et al. (2017) link the development of DC with characteristics of HR systems. None of them had the aim to develop a DC scale following a rigorous scale development procedure.
One study that explicitly intended to develop a DC scale following a systematic and transparent scale development process, and that includes all three dimensions—sensing, seizing, and reconfiguring (transforming)—is Wilden et al.’s (2013) scale. (Part of the scale is reused in Wilden and Gudergan’s (2015) study, where the authors measure sensing and reconfiguring activities, but not seizing.) Wilden et al.’s (2013) scale was inspired by Jantunen’s (2005) and Danneels’s (2008) measures of the sensing component and asks for activities and processes, such as the adoption of new management methods and the renewal of business routines (Wilden et al., 2013: 80f)—for instance, “People participate in professional association activities” (sensing). The scale asks about the frequency of these activities (e.g., “How often have you carried out the following activities between 2004 and 2008?”) and provides response options from “rarely (1)” to “very often (7).” Besides its rigorous procedure of scale development, Wilden et al.’s activity- and frequency-oriented approach (“How often do you. . .?”) has another major advantage: asking respondents what they do frequently may be less prone to social desirability bias than asking respondents how good they are at something.
However, we argue that this activity- and frequency-oriented approach also has one important limitation: even if it provides an objective measure of the frequency of a certain DC-related activity, this does not immediately indicate the actual outcome of that activity. For example, frequent participation in professional association meetings (an activity related to sensing) does not necessarily result in high awareness of market trends (an outcome of sensing-related activity). An analogy from the world of music would be to ask a piano player how frequently she practices (activity) and to take her answer as a measure of her capability to play the piano. Hence, while acknowledging the merits of Wilden et al.’s (2013) scale, we argue that a perspective on the outcomes of capabilities, or successfully established practices, represents an important complement to asking for frequencies of activities. In other words, given that Wilden et al.’s (2013) scale is still the only instrument for measuring DC based on Teece (2007) that has been developed in line with the principles of a rigorous scale development procedure, a systematically developed scale that measures DC outcomes constitutes a relevant contribution to the field.
4. Scale development
4.1 Overall procedure
Even though the suggested approaches for scale development (Churchill, 1979; Hinkin, 1995, 1998) differ in their details, they agree on the general process and the quality criteria to be taken into account. We integrated their methodological considerations and employed a three-step procedure, which is described in detail below. First, in the item-generation step, we developed indicators that should reflect Teece’s (2007) sensing, seizing, and transforming capacities. To provide a complementary perspective to Wilden et al.’s (2013) scale, our items focus on the general existence of DC routines and outcomes (instead of the frequencies of DC activities).

We considered starting from existing (sub-)scales and combining them into one new instrument that measures all of the aspects suggested by Teece (2007). However, we encountered two obstacles, one conceptual and one methodological.

As for the conceptual obstacle, we realized that different authors employed different interpretations (and implicit definitions) of the DC dimensions. For example, both Hawass (2010) and Naldi et al. (2014) combine seizing and transforming. As for the methodological problem, our review of the existing subscales and items from different authors revealed that they are heterogeneous regarding both the operationalization of capacities and the response format (e.g., rarely (1) to very often (7), Wilden et al., 2013; strongly disagree (1) to strongly agree (7), Hawass, 2010).
Hence, we concluded that a mere combination of subscales from different instruments was not feasible, and that scale development required (i) refined operationalizations of Teece’s (2007) sensing, seizing, and transforming capacities, and (ii) homogeneous phrasing of the items in the different subscales with a focus on the actual presence of capacities. Nevertheless, our aim was that the new scale build on existing scales as much as possible.
In a second step of scale development, scale purification, we collected empirical data to investigate the dimensionality of the scale by employing an EFA to assess the internal consistency of the sensing, seizing, and transforming subscales
scales and to remove items of low psychometric quality, if necessary. At this stage, we investigated which factors the
EFA would reveal and whether they correspond to the theoretically assumed DC dimensions. We expected that the
EFA would reveal distinct, robust factors representing the three different capacities. Moreover, we expected (i) internal consistency within each factor to be high, indicating that the developed items are measuring one and the same
construct, and (ii) moderate subscale intercorrelations because the capacities were assumed to be distinct but
interrelated.
In a third step, scale confirmation, we collected data from a new sample to statistically test and confirm the dimensionality of the DC construct as identified in the EFA by means of a second-order CFA (cross-validation of the factorial structure). Moreover, in the scale confirmation step, we wanted to establish criterion validity by predicting innovation and business performance criteria. We hypothesized (i) that the factor structure resulting from the EFA would also show a significant model fit in the second-order CFA; (ii) positive regression coefficients of moderate magnitude for business and innovation performance criteria, due to the indirect nature of the relationship between DC and firm performance; and (iii) moderately positive intercorrelations among the DC subscales.

4.2 Item generation (content validity)

4.2.1 Test-theoretical considerations


We regard DC as a multidimensional construct reflected by the conceptually independent but interrelated capacities
of sensing, seizing, and transforming. Further, we view DC as latent capacities that manifest in observable routines
and their outcomes. While previous researchers have suggested measures for DC that ask for frequencies of activities
that potentially contribute to DC (e.g., “People participate in professional association activities,” Wilden et al.,
2013: 83; the item may contribute to sensing capacity but is not a necessary condition for it), our items measure DC
more directly by either asking for the outcome of the capability (e.g., “Our company knows the best practices in the
market,” indicating that sensing capacity is high), or for routines that directly indicate the existence of the capability
(e.g., “Our company systematically searches for information on the current market situation,” indicating that the
firm has established systematic sensing routines). Because DC constitute organizational phenomena, not individual
ones, all items are formulated in a depersonalized way and ask for organizational instead of individual attitudes and
outcomes (e.g., “We are always up-to-date with market trends” instead of “I am always up-to-date with market
trends”). Furthermore, rather than a dichotomous (“have it” or “have it not”) concept, DC should be understood as
continuous, allowing for more variable descriptions of the configurations of DC of different firms (Barreto, 2010).
Hence, as in most other DC scales, our items ask for gradual responses, not “yes” or “no” answers. As the answering
format, we chose a six-point Likert scale ranging from “strongly disagree (1)” to “strongly agree (6).”

4.2.2 Operationalization of DC
Sensing. As outlined in the theoretical considerations above, we posit that an organization with high sensing capacity
is able to systematically, continuously, and reliably acquire strategically relevant information from the environment,
including market trends, best practices, and competitors’ activities, that is, information from outside the organization. Concepts related to sensing capacity have been termed knowledge acquisition (Jantunen, 2005) or environmental scanning (Danneels, 2008). The systematic monitoring of the environment increases the chances of
becoming aware of upcoming markets, trends, and technology developments and of tapping into new business areas

(Daft et al., 1988). Sensing may take place via different channels, such as specialist literature or participation in knowledge networks (Danneels, 2008), and it may occur formally (e.g., in dedicated processes) or informally (e.g., by chatting with customers). To operationalize sensing capacity, we built on existing scales by
Danneels (2008), Makkonen et al. (2014), Wilden et al. (2013), and Jantunen (2005), but we adapted items to focus
on established routines and outcomes. For example, we developed items asking for the extent to which the organization is up-to-date on the current market situation, or to which the organization knows how to access new information (see Table 2).

Seizing. Further concretizing Teece’s (2007) definition, we assume that the capacity for seizing within an organization is high if it is able to decide whether some information is of potential value, to transform valuable information into concrete business opportunities that fit the organization’s strengths and weaknesses, and to make decisions accordingly. To operationalize seizing, we built on the content of existing scales but, for several reasons, could not directly reuse existing items: Naldi et al. (2014) do not use actual survey items but other proxies (e.g., the number of newsletters a firm subscribed to). Jantunen (2005) and Makkonen et al. (2014) measure seizing but combine it with sensing. Nedzinskas et al. (2013) employ items for seizing but do not report them. We also integrated content from scales for measuring absorptive capacity, such as Flatten et al. (2011), but adapted the items to the aspect of seizing. To measure seizing, we used items such as “We recognize what new information can be utilized in our company,” or “Our company is capable of turning new technological knowledge into process and product innovation.”

Transforming. In line with Teece’s (2007) ideas, we assume that an organization with a high transforming capacity consistently implements decided renewal activities by assigning responsibilities, allocating resources, and ensuring that the workforce possesses the newly required knowledge. To operationalize transforming capacity, we reviewed existing reconfiguration and transforming (sub-)scales. Of the scales building on Teece’s model, Nedzinskas et al. (2013) do not report items. Hawass’s (2010) reconfiguration scale focuses on the consequences of the transformation rather than on the transformation itself (e.g., “We are more successful than competitors in diversifying into new markets by deploy [sic!] in existing technologies”; 426). Wilden and Gudergan (2015) ask how often specific activities have been carried out (e.g., the frequency of modifications in strategy and of renewal of business processes) as an indicator of transforming capacity. None of these scales explicitly asks for the capacity for transforming processes and structures within an organization. One scale that measures transforming capacity more directly is Li and Liu’s (2014) change implementation subscale. However, their items focus on conditions for transforming (e.g., “We help each other in strategic change implementation”), not on its outcome. Our operationalizations of transforming capacity focus on how successfully strategic renewal is actually implemented and achieved within the organization and include items such as “Decisions on planned changes are pursued consistently in our company,” or “By defining clear responsibilities, we successfully implement plans for changes in our company.”

4.2.3 Item pool and preliminary scale


Based on the refined definitions, we developed a preliminary scale in two main stages: (1) theory-based development of an initial item pool by a group of five researchers in the field of DC (including three of the coauthors of this manuscript), and (2) refinement of wording and reduction of redundancy based on expert opinions. The items were developed in German (targeting a German-speaking audience). Later, the scale was translated into English by a team of native English- and German-speaking researchers.
In the strongly theory-based step (Phase 1), an initial pool of 21 items was developed by the five researchers with the aim of formulating indicators that would reflect each of the three DC dimensions. This phase required extensive engagement with the DC literature and with existing DC scales. When formulating the items, the following challenges had to be addressed: (i) the items should be generic in the sense that they could be applied to a broad range of industries and thus could not include industry-specific aspects (such as regularly visiting trade fairs, as an aspect of sensing), while at the same time remaining understandable and answerable; (ii) the scale should be economical in that it should contain only a few items per subscale (and thus could not include every single aspect of a DC dimension),

Table 2. Pattern matrix of items for measuring sensing (SE), seizing (SZ), and transforming (T) capacities (EFA)

No. Item Factor

SE SZ T

SE1 Our company knows the best practices in the market 0.72
SE2 Our company is up-to-date on the current market situation 0.82
SE3 Our company systematically searches for information on the current market situation 0.95
SE4 As a company, we know how to access new information 0.83
SE5 Our company always has an eye on our competitors’ activities 0.70
SE6a Our company quickly notices changes in the market 0.40 0.48
SZ1 Our company can quickly relate to new knowledge from the outside 0.87
SZ2 We recognize what new information can be utilized in our company 0.71
SZ3 Our company is capable of turning new technological knowledge into process and product innovation 0.84
SZ4 Current information leads to the development of new products or services 0.73
T1 By defining clear responsibilities, we successfully implement plans for changes in our company 0.89
T2 Even when unforeseen interruptions occur, change projects are seen through consistently in our company 0.90
T3 Decisions on planned changes are pursued consistently in our company 0.61
T4 In the past, we have demonstrated our strengths in implementing changes 0.60
T5 In our company, change projects can be put into practice alongside the daily business 0.72
T6a In our company, plans for change can be flexibly adapted to the current situation 0.44 0.55

Note: Extraction method: principal component analysis; rotation method: oblique rotation (promax) with Kaiser normalization; the rotation converged in six iterations. Factor loadings < 0.30 are suppressed in the table.
a Item removed from survey after exploratory factor analysis due to factor cross-loadings.

while at the same time reflecting the DC dimension well. Regarding inter-rater agreement, only those items were retained for which all five researchers agreed that they reflected an important aspect of Teece’s original concept.
The aims of Phase 2 were to augment content validity (Rossiter, 2008), enhance comprehensibility, and minimize perceived redundancy of the items developed in the first phase. Therefore, we conducted a first systematic walkthrough with an experienced top-level manager from a large firm in the IT industry. In a simplified version of the cognitive lab technique (Wilson, 2004), the manager was asked to express his spontaneous associations and to comment on the process of choosing an answer to ensure comprehension, logic, and relevance. The results were documented and formed the basis of another round of discussion among the five researchers. In the course of this discussion, the items were slightly rephrased based on the manager’s feedback regarding comprehensibility, thereby ensuring that they were still in line with theoretical assumptions (e.g., “In our company, anything new is easily transformed into ideas for changes” was rephrased into “Our company is capable of turning new technological knowledge into process and product innovation”). Moreover, items that, according to the manager, asked for the same information were reconsidered and merged, if necessary, reducing the overall number of items to 19. Then, a second walkthrough was carried out with another experienced manager, the CEO of a small business, who again provided feedback on the items by “thinking aloud.” The second manager’s responses were also documented for further discussion among the team of researchers. The subsequent final round of discussion led to further slight rephrasing and a reduction of items asking for the same information (e.g., “We do not miss new developments that could have an influence on our business” and “Our company is up-to-date on the current market situation”), resulting in a “final” preliminary scale of 16 items.

4.3 Scale purification (dimensionality and internal consistency)

4.3.1 Data collection


The purpose of the scale purification study was to explore the dimensionality (i.e., the presumed three-factor structure) and internal consistency (i.e., whether all items load high on the intended subscales and low on other subscales) of our scale. Concretely, we tested whether all items would meet the high test-theoretical criteria we employed, in order to purify the scale if needed. At this stage, the following test-theoretical considerations underlay our sampling strategy: (i) the scale should basically be applicable to firms in all industries; (ii) the scale should be applicable regardless of firm size. Moreover, if there is large variance in the data, especially variation in configurations of capabilities, a factor analysis reveals more robust factors. Thus, a sampling strategy was chosen to ensure maximum variance in the data, both with regard to industries and with regard to firm size.

Variables. We developed an electronic questionnaire that included the 16 items of the three subscales sensing, seizing, and transforming (see Table 2). Moreover, we queried firm size, firm age, the position of the respondent, and industry at the end of the questionnaire.

Setting and respondents. For the pilot study, we had the opportunity to distribute an electronic version of the preliminary scale to 100 CEOs of randomly selected firms (50 small and 50 large firms) in each of the seven sections of the Austrian Federal Economic Chamber (i.e., 700 firms in total). Each recipient in this pilot study was asked to complete the survey and to forward it to other persons in leading positions in their own industry network. This snowball-sampling technique was intended to increase the number of responses within sectors through invitations of other CEOs by their peers. While the drawback was that we could not track the number of forwards to other respondents, the snowball sampling was efficient, as it yielded 269 complete responses.
Of these 269 questionnaires, 29.6% had been completed by CEOs, 35.9% by senior executives, and the remaining 34.6% by other staff members in leading positions. Approximately half (51.1%) of the companies had fewer than 250 employees. Most respondents (79.4%) worked for a company that was 10 or more years old. Among those who indicated their industries, the distribution across the industrial sectors of the Austrian Federal Economic Chamber was as follows: 11.0% bank and insurance, 13.3% crafts and trades, 11.9% commerce, 22.5% industry, 25.7% information and consulting, 11.9% tourism and leisure industries, and 3.7% transportation and communications. Overall, the goal of the sampling strategy to maximize variety across firm sizes and industries was achieved.

4.3.2 Analyses and results


To investigate the factorial structure of the scale, we employed an EFA with principal component analysis, an extraction method that uncovers the pattern of inter-item relationships (Thompson, 2004). The EFA was employed to reconstruct the underlying structure of the DC concepts in an exploratory manner and to develop an initial understanding of whether the model structure deviated substantially from the intended structure (i.e., whether the items loaded on factors other than the theoretically assumed ones, or on multiple factors). Only factors with an eigenvalue greater than 1 were considered, to ensure that each factor explained more variance than any single item. Oblique rotation was used to achieve better interpretability. This rotation is indicated when factors are expected to be correlated (Thompson, 2004), as we assumed to be the case for the dimensions of DC.
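The extraction logic just described (principal components with the eigenvalue-greater-than-1, or Kaiser, criterion) can be sketched in a few lines of Python/NumPy. The sketch below is an illustrative reimplementation on synthetic data, not the authors’ original analysis; it omits the promax rotation applied in the actual study, and all function and variable names are ours:

```python
import numpy as np

def principal_component_loadings(data, eigenvalue_cutoff=1.0):
    """Extract unrotated principal-component loadings from an item matrix.

    Components are retained by the Kaiser criterion: an eigenvalue of the
    inter-item correlation matrix above 1.0 means the component explains
    more variance than any single standardized item.
    """
    corr = np.corrcoef(data, rowvar=False)        # inter-item correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)       # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]             # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > eigenvalue_cutoff            # Kaiser criterion
    # Loadings = eigenvectors scaled by the square roots of their eigenvalues
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return eigvals, loadings

# Synthetic data: six items forming two correlated three-item clusters
rng = np.random.default_rng(42)
f1, f2 = rng.normal(size=(2, 500))
items = np.column_stack(
    [f1 + rng.normal(scale=0.5, size=500) for _ in range(3)]
    + [f2 + rng.normal(scale=0.5, size=500) for _ in range(3)]
)
eigvals, loadings = principal_component_loadings(items)
print(loadings.shape)  # → (6, 2): two components retained for six items
```

In this synthetic setup, exactly two eigenvalues exceed 1, mirroring how the criterion separates substantive factors from noise components.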
The principal component analysis yielded three distinct factors—sensing (SE), seizing (SZ), and transforming (T)—accounting for approximately 66% of the inter-item variance. Table 2 displays all loadings above 0.3 in the EFA. As can be seen from Table 2, except for two items that showed cross-loadings (SE6 and T6), each item loaded highly on the intended factor and not on other factors. All items without cross-loadings reached factor loadings above 0.6, with 0.5 being the recommended cutoff value for exploratory research (Nunnally, 1978). Altogether, these findings reveal that the scale has good factorial validity.
Then, we purified the preliminary scale by removing the two items with cross-loadings from the subscales. Of the remaining 14 items, 5 measured sensing, 4 seizing, and 5 transforming. To assess the reliability (internal consistency) of each subscale and the overall scale, we calculated Cronbach’s alpha (Cronbach, 1951) coefficients (Table 3). Because an alpha above 0.8 is regarded as good and one above 0.9 as excellent, and because all alpha coefficients of the subscales range from 0.83 to 0.88 and the overall alpha coefficient is 0.91, the reliability of the scale can be regarded as good to excellent. As expected, pairwise subscale intercorrelation coefficients were moderate to high, ranging from 0.49 to 0.66 (Table 3; all coefficients are significant at P < 0.01). These findings indicate that the subscales for sensing, seizing, and transforming measure highly related constructs.
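Cronbach’s alpha can be computed directly from raw item responses. The following minimal sketch uses made-up response data (not the study’s data) to illustrate the standard formula:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's (1951) alpha: k/(k-1) * (1 - sum of item variances / variance of sum score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the (sub)scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Made-up responses of five respondents to a three-item subscale (1-6 Likert)
responses = np.array([
    [5, 5, 4],
    [4, 4, 4],
    [2, 3, 2],
    [6, 5, 6],
    [3, 3, 3],
])
print(round(cronbach_alpha(responses), 2))  # → 0.96
```

High inter-item consistency (here, respondents answer the three items very similarly) drives the item variances down relative to the variance of the sum score, pushing alpha toward 1.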

Table 3. Descriptives, correlations, and alpha coefficients for sensing (SE), seizing (SZ), and transforming (T) subscales (n = 269)

Subscale            Mean   SD     Number of items   SE       SZ       T
Sensing (SE)        4.76   0.885  5                 (0.88)
Seizing (SZ)        4.30   0.928  4                 0.58     (0.83)
Transforming (T)    4.16   0.886  5                 0.49     0.66     (0.86)

Note: All correlations are significant at P < 0.01. Alpha coefficients are given in parentheses on the diagonal. Descriptives, correlations, and alpha coefficients are given for the purified scale after the exclusion of items due to cross-loadings.

4.4 Scale confirmation (factorial validity and criterion validity)


The purpose of the scale confirmation study was twofold. First, we wanted to cross-validate and confirm the factor structure of our DC scale on a new sample (factorial validity). Second, because DC are closely related to strategic renewal, which in turn is a predictor of business performance, we aimed at testing whether the DC subscales would predict diverse performance outcomes (criterion validity). While the sampling strategy in the scale purification study was driven by methodological (test-theoretical) considerations, in the scale confirmation study we targeted firms from industries facing elevated environmental dynamism and, hence, an augmented need for strategic renewal—those firms for which the concept of DC was originally developed by Teece et al. (1997).

4.4.1 Data collection


Variables. In addition to the 14 items of the DC scale and the firm size, firm age, industry, and position of the respondent (at the end of the scale), our electronic questionnaire in the scale confirmation study contained items measuring business and innovation performance as well as international orientation (IO).
Business performance was assessed with 12 standard items based on Hult et al. (2004), Wiklund and Shepherd (2005), and Ottenbacher (2007), with three items each addressing market performance (attraction of new customers; opening of new markets; development of market shares), customer-related performance (image; customer satisfaction; customer loyalty), financial performance (growth in sales; growth in profits; profitability), and employee-related performance (employee satisfaction; employee commitment; long-term staff membership/low employee fluctuation). All business performance items had to be appraised relative to the most important competitors on a six-point Likert scale ranging from “much worse than the most important competitors” (1) to “much better than the most important competitors” (6).
Innovation performance was measured with five standard items (I1: percentage of sales from innovations introduced on the market within the past 3 years; I2: percentage of profits from innovations introduced on the market within the past 3 years; I3: number of innovations introduced on the market within the past 3 years; I4: innovation expenditure as a percentage of sales; I5: percentage of costs saved by implementing process innovations) suggested by Dömötör, Franke, and Hienerth (2007). I1 and I2 were queried on a 10-point scale ranging from 0 to 100%; I3, I4, and I5 were assessed on six-point scales with constant intervals.
For validation purposes and potential future research, we also included items measuring the firm’s IO, which
could later serve as a marker variable to test the extent of common method variance (Richardson et al., 2009).
International orientation was measured with four standard items (Knight and Kim, 2009) asking about the firm’s
mission, focus, culture, and resources with regard to international markets [answers ranging from “strongly disagree
(1)” to “strongly agree (6)”].

Setting and respondents. To reach firms from innovative industries, we reviewed the standard industry classification of the European Union, the Nomenclature statistique des Activités économiques dans la Communauté Européenne (NACE), together with the results of the Eurostat community innovation survey (CIS) 2014 (Eurostat, 2014) and selected subclasses of industries that face rather high environmental dynamism and contain innovative enterprises, namely, Classes C–K, M, and Q. Based on contact information in the standard Austrian firm database Aurelia, we contacted 5229 companies within these subsections via e-mail. We specifically addressed the CEOs or middle-line managers responsible for innovation issues.

We received 307 fully completed questionnaires, a response rate of 5.9%. Even if it is rather low, this response rate is of similar magnitude as, and the absolute sample size exceeds that of, other comparable DC studies (Wilden et al., 2013, received 228 fully completed surveys, a response rate of 8.3%; Nedzinskas et al., 2013, obtained 360 responses through phone calls; Hawass, 2010, had a response rate of 13.8% with an absolute sample size of 83 surveys). The slightly lower response rate compared to the other studies is plausible given that in our case most of the e-mail addresses in the Aurelia database were office addresses (and not the personal e-mail addresses of the CEOs, etc.), and that we did not (additionally) call participants to ask for their participation.
Of our 307 fully completed questionnaires, 188 had been filled in by CEOs, 97 by senior executives, and 22 by other staff members in leading positions. Most (84.7%) of the companies had fewer than 250 employees, and most (85.7%) of the respondents worked for a company that was 10 or more years old. The distribution across industries was as follows: 18.6% belonged to the manufacturing industry (NACE category C), 1.0 to energy supply (D), 1.0 to water supply (E), 15.0 to construction (F), 10.1 to wholesale and retail trade (G), 6.8 to transportation and storage (H), 8.5 to accommodation and food services (I), 10.1 to information and communication industries (J), 3.6 to financial and insurance branches (K), 17.3 to professional scientific and technical activities (M), and 8.1 to human health and social work activities (Q).
We compared the distribution across size, age, and industries of the firms in our sample with the characteristics of the population in the Aurelia database. A nonsignificant χ² test reveals that our sample is true to scale regarding firm size (χ² = 1.96, df = 1, P = 0.194): of the originally contacted firms, 81.2% had fewer than 250 employees. Regarding firm age, the test (χ² = 6.72, df = 1, P = 0.010) indicates a rather small deviation from the distribution in the population: of the contacted firms, 91.2% were 10 or more years old. Regarding the distribution across industries, the corresponding χ² test was significant (χ² = 79.70, df = 10, P < 0.001) due to deviations from the original population in two (out of 11 surveyed) industries: in our sample, the information and communication industries are overrepresented (only 4.6% of the contacted firms were classified as J in the Aurelia database), while the manufacturing industries are underrepresented (35.1% of the contacted firms were classified as C in Aurelia). The reason may be that firms in the information and communication industries are more likely than average to participate in electronic surveys, while manufacturing firms are less likely than others to do so. In summary, we assume that the structure of the sample reflects the structure of the contacted firms fairly well regarding the distribution across size, age, and industry.
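Representativeness checks of this kind compare the sample’s category counts against counts implied by the population shares. The following SciPy sketch illustrates the firm-size comparison; the counts are illustrative stand-ins consistent with the percentages reported above (84.7% small firms in the sample, 81.2% among the contacted firms), not the study’s exact test:

```python
from scipy.stats import chisquare

# Hypothetical firm-size counts: 260 of 307 responding firms are small,
# tested against a population share of 81.2% small firms among contacted firms
observed = [260, 47]                  # small firms, large firms in the sample
n = sum(observed)
expected = [0.812 * n, 0.188 * n]     # counts implied by the population shares

stat, p = chisquare(observed, f_exp=expected)
# A nonsignificant result (p > 0.05) suggests the sample mirrors the population
print(p > 0.05)  # → True
```

The same pattern extends to the age and industry comparisons by supplying more categories (and hence more degrees of freedom) to `chisquare`.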

4.4.2 Analyses and results


Confirmation of the factor structure. We conducted a second-order CFA using AMOS 18 and maximum likelihood estimation to test the theoretical assumptions that (i) the items of our scale loaded on the three capacity factors identified in the EFA (sensing, seizing, and transforming) and (ii) the three capacities (first-order constructs) reflected facets of an overall DC capacity (second-order construct). As recommended (Hu and Bentler, 1999; Thompson, 2004), we computed several descriptive (normed fit index, NFI; comparative fit index, CFI; incremental fit index, IFI; Tucker–Lewis index, TLI; standardized root mean squared residual, SRMR) and inferential statistics (χ² and root mean square error of approximation, RMSEA). With NFI = 0.94, CFI = 0.97, IFI = 0.97, TLI = 0.96, and SRMR = 0.04, the descriptive indices reveal an adequate fit (Bentler and Bonett, 1980; Hu and Bentler, 1999; Marsh et al., 2004; Thompson, 2004). In addition, the inferential statistics indicate a reasonably good model fit (χ² = 158.18, df = 76, P < 0.001, χ²/df = 2.08; Byrne et al., 1989). Also, the RMSEA value is 0.06 (P-close = 0.07), as recommended for a close inferential model fit (Browne and Cudeck, 1992).
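Two of the inferential statistics reported above follow directly from the χ² value, its degrees of freedom, and the sample size. The sketch below recomputes the normed χ² and the RMSEA point estimate from the reported numbers, using the standard point-estimate formula for RMSEA (the function names are ours):

```python
import math

def normed_chi2(chi2, df):
    """Chi-square divided by degrees of freedom; values near 2 indicate acceptable fit."""
    return chi2 / df

def rmsea(chi2, df, n):
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Values reported for the second-order CFA (n = 307)
print(round(normed_chi2(158.18, 76), 2))  # → 2.08
print(round(rmsea(158.18, 76, 307), 2))   # → 0.06
```

Both recomputed values match the figures reported in the text, which is a useful sanity check when reading CFA fit tables.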
Figure 1 displays the factor loadings of each item and squared multiple correlations of the three first-order factors.
As observed from Figure 1, all composite reliabilities are 0.85 or higher. The squared multiple correlations (R2) of
the three dimensions and the indicator reliabilities for each item exceed the recommended minimum of 0.4 (Bagozzi
and Baumgartner, 1994; Figure 1). Altogether, the findings confirm the theoretically assumed structure in the data
and imply a high factorial validity of our scale.
Table 4 shows the alpha reliabilities (Cronbach, 1951) of the subscales and the subscale intercorrelations. Alpha coefficients are 0.84 (sensing), 0.84 (seizing), and 0.87 (transforming), indicating high internal consistencies. As in the scale purification study, the overall alpha coefficient is 0.91. Also as in the scale purification step, the pairwise coefficients for subscale intercorrelations are high (ranging from 0.50 to 0.70), indicating that the subscales for the sensing, seizing, and transforming capacities measure distinct, but strongly related, constructs.

Regarding internal convergent validity, the average variance extracted (AVE) per factor, ranging from 0.59 to
0.63, exceeds the recommended value of 0.5 for good convergent validity (Fornell and Larcker, 1981), suggesting that
the items assigned to one and the same factor measure one underlying construct. To establish internal discriminant
validity, Fornell and Larcker (1981) suggest that, for any two constructs, the AVEs should be greater than the
shared variance obtained from the square of the correlation between these two constructs. As can be seen from Table
4, the subdimensions of the DC scale fulfill this criterion, because all AVEs are higher than any squared correlation
between the subdimensions. The subdimensions can thus be assumed to be sufficiently different, indicating internal
discriminant validity.
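The Fornell–Larcker checks described above reduce to two simple comparisons. The sketch below uses the subscale correlations from Table 4 and the reported AVE range; the AVE value for transforming (0.61) is an assumed midpoint for illustration, not a figure from the article.

```python
# Sketch of the Fornell-Larcker (1981) validity checks. AVE for transforming
# is an assumed midpoint of the reported 0.59-0.63 range (illustrative only).

ave = {"sensing": 0.59, "seizing": 0.63, "transforming": 0.61}
corr = {("sensing", "seizing"): 0.59,
        ("sensing", "transforming"): 0.50,
        ("seizing", "transforming"): 0.70}

# convergent validity: AVE > 0.5 for every factor
convergent_ok = all(v > 0.5 for v in ave.values())

# discriminant validity: each AVE exceeds the shared variance (squared r)
discriminant_ok = all(
    ave[a] > r ** 2 and ave[b] > r ** 2 for (a, b), r in corr.items()
)
print(convergent_ok, discriminant_ok)  # → True True
```

Even the highest subscale correlation (0.70, shared variance 0.49) stays below the lowest AVE (0.59), which is why the criterion is met.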

Potential method biases. Even though we strove for maximum rigor during the process of scale development,
our approach to data collection is open to biases from systematic measurement error, because our
findings are based on self-reported measures, and all information for one case (i.e., one firm) stems from a single respondent (Podsakoff et al., 2003). To test for common method bias, we employed a CFA marker technique
(Richardson et al., 2009; Williams et al., 2010) based on structural equation modeling, with IO as the marker variable.
One major advantage of this technique is that it allows evaluating method variance at both the model and item level. In
the marker-based procedure, a baseline model, in which the method factor loadings are forced to be zero (i.e., a model
without method factor loadings), is compared with a constrained model, in which the method factor loadings are constrained to equal values, and an unconstrained model, in which the method factor loadings are freely estimated. The result of the χ² difference test between the baseline model and the constrained model reveals significant differences
(Δχ² = 98.67, df = 1; P < 0.001, n = 187), indicating the existence of a method factor. The comparison of the constrained model and the unconstrained model (Δχ² = 147.10, df = 14; P < 0.001, n = 187) shows that the impact of
the marker variable is not equal across items (loadings range between 0.18 and 0.38). Of the three models, the
unconstrained model best accounts for marker variance related to the items of the scale (model fit: χ² = 191.44,
df = 126; χ²/df = 1.52, P < 0.001, CFI = 0.97, RMSEA = 0.05, n = 187). Despite the method factor in the unconstrained model, all items also load significantly on the DC constructs (sensing, seizing, and transforming) they are intended
to measure, and the model yields a good overall fit. The median amount of method variance attributable to
the marker variable is 9%, but the item loadings on the DC dimensions are still very high (ranging from 0.60 to 0.80).
Altogether, these results seem more than acceptable, as measures in psychology, sociology, marketing, business, and
education have been found to contain on average about one quarter (26%) of method variance (Cote and
Buckley, 1987).
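The nested-model comparisons above rest on the fact that the difference in χ² between two nested models is itself χ²-distributed, with df equal to the difference in df. A minimal sketch using the reported Δχ² values (SciPy assumed available; the function name is ours):

```python
from scipy.stats import chi2

# Sketch of the nested-model chi-square difference tests used in the
# CFA marker technique described above.

def chi2_diff_test(chi2_diff, df_diff):
    """P-value for a chi-square difference between nested models."""
    return chi2.sf(chi2_diff, df_diff)   # survival function = 1 - CDF

p1 = chi2_diff_test(98.67, 1)    # baseline vs constrained model
p2 = chi2_diff_test(147.10, 14)  # constrained vs unconstrained model
print(p1 < 0.001 and p2 < 0.001)  # → True: both differences significant
```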
To test for nonresponse bias, we carried out a median split of early and late respondents (independent variable)
and calculated t-tests with sensing, seizing, and transforming as dependent variables. The tests revealed nonsignificant
effects for sensing and seizing, and a significant effect of small size (Cohen’s d = 0.21; Cohen, 1988) for transforming (t = 1.99; P < 0.05), indicating that early respondents had slightly higher values (M = 4.45; SD = 0.85) than
late respondents (M = 4.28; SD = 0.80) on this one subscale. From these findings, we conclude that there is
no overall systematic nonresponse bias.
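The effect size behind this check can be reproduced from the reported summary statistics. The sketch below assumes an even 154/153 median split of the n = 307 sample (the exact group sizes are not reported); the function name is ours.

```python
import math

# Sketch of Cohen's d for the transforming subscale, early vs late respondents,
# using the reported means/SDs. Group sizes 154/153 are an assumption.

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d with a pooled standard deviation."""
    pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled

d = cohens_d(4.45, 0.85, 154, 4.28, 0.80, 153)
print(round(d, 2))  # → 0.21, the small effect reported for transforming
```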

Criterion validity. To test the criterion validity of our scale, we employed the mean values of the sensing, seizing, and
transforming capacities as predictors of business performance and innovation performance in linear regression models (for descriptive statistics, see Table 4). We conducted an ordinary least squares regression analysis (Gelman and
Hill, 2007), controlling for demographic variables (i.e., firm age and firm size) and the partial correlations between the
dimensions sensing, seizing, and transforming. In all calculations, we entered the demographic variables in the first
block and the dimensions of DC in the second.
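This two-block ("hierarchical") setup can be sketched with ordinary least squares: demographics enter first, the three DC dimensions are added second, and adjusted R² is compared across blocks. The data below are simulated purely for illustration; coefficients and variable names are ours, not the study's.

```python
import numpy as np

# Sketch of blockwise OLS with adjusted R-squared compared across blocks.
# All data are simulated; effect sizes are illustrative only.

def adjusted_r2(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])        # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - resid.var() / y.var()
    n, k = X1.shape
    return 1 - (1 - r2) * (n - 1) / (n - k)           # penalize extra predictors

rng = np.random.default_rng(42)
n = 307
age, size = rng.normal(size=(2, n))
sensing, seizing, transforming = rng.normal(size=(3, n))
# performance driven mainly by the DC dimensions, weakly by demographics
perf = (0.05 * age + 0.2 * sensing + 0.2 * seizing + 0.25 * transforming
        + rng.normal(scale=0.8, size=n))

block1 = adjusted_r2(np.column_stack([age, size]), perf)
block2 = adjusted_r2(np.column_stack([age, size, sensing, seizing, transforming]), perf)
print(block2 > block1)  # DC dimensions add explanatory power beyond demographics
```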
As Table 5 shows, the regression model for business performance explains 33% (adjusted R²) of the variance in
the aggregate index of business performance. It also explains substantial variance in the distinct aspects of business
performance, namely, 30% of market performance, 17% of financial performance, 29% of employee-related performance, and 17% of customer-related performance.
Of the demographic variables, the firm’s age has a significant, slightly negative effect on aggregate (b = -0.04;
P < 0.01), market (b = -0.06; P < 0.01), and financial performance (b = -0.06; P < 0.05), indicating that as the firm’s
Figure 1. Second-order CFA with sensing (SE), seizing (SZ), and transforming (T) as first-order constructs and DC as second-order construct.
Note: Model fit indices are NFI = 0.94, CFI = 0.97, IFI = 0.97, TLI = 0.96, and SRMR = 0.04.

Table 4. Descriptive statistics and correlations for DC, and business and innovation performance indicators (n = 307)

Variable                  Mean   SD     α     SE      SZ      T       Market  Fin.    Emp.    Cust.   I1      I2      I3      I4
DC                                      0.91
Sensing (SE)              4.54   0.837  0.84  1
Seizing (SZ)              4.33   0.877  0.84  0.59**  1
Transforming (T)          4.31   0.876  0.87  0.50**  0.70**  1
Business performance                    0.90
Market                    4.16   0.814  0.86  0.43**  0.48**  0.48**  1
Financial (Fin.)          3.89   1.053  0.91  0.36**  0.37**  0.33**  0.60**  1
Employee-related (Emp.)   4.53   0.947  0.83  0.35**  0.48**  0.52**  0.62**  0.42**  1
Customer-related (Cust.)  4.66   0.895  0.82  0.30**  0.37**  0.36**  0.82**  0.42**  0.64**  1
Innovation performance                  0.80
I1                        2.60   1.994        0.07    0.27**  0.21**  0.17**  0.10    0.10    0.05    1
I2                        2.17   1.935        0.06    0.25**  0.19**  0.18**  0.15**  0.13*   0.04    0.80**  1
I3                        3.36   1.167        0.18**  0.36**  0.25**  0.26**  0.19**  0.15**  0.08    0.55**  0.45**  1
I4                        2.25   1.627        0.16**  0.38**  0.30**  0.29**  0.19**  0.19**  0.15**  0.50**  0.45**  0.46**  1
I5                        2.66   1.285        0.30**  0.35**  0.30**  0.37**  0.33**  0.19**  0.23**  0.24**  0.26**  0.33**  0.37**  1

*Correlation is significant at P < 0.05; **correlation is significant at P < 0.01.
I1 = Percentage of sales 2013 from innovations introduced on the market within the past 3 years.
I2 = Percentage of profits 2013 from innovations introduced on the market within the past 3 years.
I3 = Number of innovations introduced on the market in 2013 within the past 3 years.
I4 = Innovation expenditure in R&D in percent of sales (2013).
I5 = Percentage of costs saved (reduced) within 1 year by implementing process innovations in 2013.
age increases, its overall, market, and financial performance decrease in comparison with its most important competitors. The firm’s size has a statistically significant, slightly negative effect on customer-related performance
(b = -0.06; P < 0.05), indicating that as the firm’s size decreases, customer-related performance increases. However,
the b weights for both firm age and firm size are very low, indicating that, overall, firm age and size do not seem to
have much predictive power for business performance.
In contrast, DC are strong predictors of business performance: sensing capacity significantly predicts overall business performance (b = 0.18; P < 0.01), market performance (b = 0.20; P < 0.01), financial performance (b = 0.26;
P < 0.01), and customer-related performance (b = 0.15; P < 0.05). Seizing capacity shows high predictive power for
overall business performance (b = 0.19; P < 0.01), market performance (b = 0.17; P < 0.05), financial performance
(b = 0.20; P < 0.05), employee-related performance (b = 0.21; P < 0.01), and customer-related performance
(b = 0.18; P < 0.05). Transforming capacity has a highly significant positive effect on aggregate business performance
(b = 0.23; P < 0.001), employee-related performance (b = 0.37; P < 0.001), and market performance (b = 0.23; P < 0.001),
and a significant effect on customer-related performance (b = 0.18; P < 0.05). The overall model explains
substantial variance, and all subscales (sensing, seizing, and transforming) significantly predict different business
performance aspects.
The regression model for innovation performance explains between 12% and 20% (adjusted R²) of the variability
in the different innovation performance aspects (Table 5). Regarding demographic variables, the firm’s size is a positive
predictor of the sales revenue from new products (I1, b = 0.15; P < 0.05) and of the percentage of costs saved by implementing process innovations (I5, b = 0.14; P < 0.001): as the firm’s size increases, so do the sales revenue from new
products and the percentage of costs saved through process innovations. The firm’s age has a highly significant negative effect on the percentage of the 2013 sales revenue from innovations (I1, b = -0.30; P < 0.001), the 2013 profit
from innovations (I2, b = -0.24; P < 0.001), the percentage of R&D expenditure (I4, b = -0.21; P < 0.001), and the
percentage of costs saved within 1 year by implementing process innovations in 2013 (I5, b = -0.12; P < 0.001).
The findings concerning DC reveal a number of interesting insights. Seizing capacity is highly positively associated
with all innovation performance indicators: it predicts the percentage of sales revenue from innovations introduced
(I1; b = 0.63; P < 0.001), profits from innovations introduced (I2; b = 0.61; P < 0.001), the number of innovations
introduced on the market (I3; b = 0.51; P < 0.001), the percentage of innovation expenditure in R&D (I4; b = 0.67;
P < 0.001), and the percentage of costs saved by implementing process innovations (I5; b = 0.29; P < 0.05): as firms
improve at deciding which innovations are compatible with them, they spend more on R&D, introduce more innovations, gain more sales revenue and profit from innovation, and reduce costs through process innovation.
Surprisingly, transforming capacity explains no variance in innovation performance (none of its coefficients is significant). Sensing capacity even has a negative effect on both the percentage of sales (I1; b = -0.34; P < 0.05) and the
percentage of profit (I2; b = -0.33; P < 0.05) from innovations. One possible explanation for this unexpected finding
is that sensing capacity, as operationalized in our scale, measures responsive rather than proactive market orientation (Narver et al., 2004). Following this line of argument, firms with a high awareness of market
trends, customer needs, and so forth may be more prone to adopt existing trends than to create new ones.
Overall, these findings also indicate the criterion validity of the DC scale with regard to innovation performance:
the employed regression model explains substantial variance in different innovation performance indicators (particularly innovation expenditure in R&D), and the seizing dimension in particular is strongly positively
related to multiple aspects of innovation performance.

5. Discussion
In the previous sections, we have documented our endeavor to systematically develop a scale for measuring DC as
conceptualized by Teece (2007). We will now discuss (i) the psychometric quality of the developed scale, (ii) implications for developing the concept of DC further, and (iii) benefits and drawbacks of our scale compared to other
instruments.

5.1 Psychometric quality of the scale


Even if it may have to be developed further to become a standardized measure, our DC scale already met all psychometric standards in its current form: both the overall scale and its subscales showed high reliability (alpha
Table 5. Regression analysis for business performance and innovation performance variables (n = 307)

Predictors        Market          Financial       Employee-rel.   Customer-rel.   Aggregated      I1               I2               I3              I4               I5
(Constant)        1.88***         1.58***         1.73***         2.75***         1.98***         2.00***          1.60*            1.45**          0.60             0.14
Firm size         0.01 (0.03)     0.04 (0.04)     0.04 (0.03)     -0.06* (0.03)   0.01 (0.02)     0.15* (0.07)     0.11 (0.07)      0.07 (0.04)     0.09 (0.06)      0.14** (0.04)
Firm age          -0.06** (0.02)  -0.06* (0.03)   0.01 (0.02)     0.02 (0.03)     -0.04* (0.02)   -0.30*** (0.06)  -0.24*** (0.07)  0.02 (0.03)     -0.21*** (0.05)  -0.12*** (0.04)
Sensing (SE)      0.20** (0.06)   0.26** (0.08)   0.10 (0.07)     0.15* (0.07)    0.18** (0.06)   -0.34* (0.16)    -0.33* (0.16)    0.11 (0.10)     0.21 (0.13)      0.14 (0.10)
Seizing (SZ)      0.17* (0.07)    0.20* (0.10)    0.21** (0.08)   0.18* (0.08)    0.19** (0.06)   0.63*** (0.18)   0.61*** (0.18)   0.51*** (0.11)  0.67*** (0.15)   0.29* (0.12)
Transforming (T)  0.23*** (0.06)  0.13 (0.10)     0.37*** (0.07)  0.18* (0.08)    0.23*** (0.06)  0.15 (0.17)      0.11 (0.17)      0.02 (0.10)     0.15 (0.14)      0.14 (0.11)
R                 0.56            0.44            0.55            0.43            0.58            0.40             0.36             0.38            0.46             0.43
R²                0.31            0.19            0.31            0.18            0.34            0.16             0.13             0.14            0.21             0.19
Adjusted R²       0.30            0.17            0.29            0.17            0.33            0.15             0.12             0.13            0.20             0.17

Note: Unstandardized b coefficients are given, with standard errors in parentheses.
*P < 0.05; **P < 0.01; ***P < 0.001.
I1 = Percentage of sales 2013 from innovations introduced on the market within the past 3 years.
I2 = Percentage of profits 2013 from innovations introduced on the market within the past 3 years.
I3 = Number of innovations introduced on the market in 2013 within the past 3 years.
I4 = Innovation expenditure in R&D in percent of sales (2013).
I5 = Percentage of costs saved (reduced) within 1 year by implementing process innovations in 2013.
coefficients) in two different samples. Validity was also high; the theoretically assumed three-factor structure was
identified in the EFA (scale purification step) and confirmed through cross-validation in a new sample (scale confirmation step). Finally, the suggested DC scale showed solid criterion validity; it explained substantial amounts of variance in business and innovation performance criteria.
In both the scale purification and scale confirmation study, we found high pairwise intercorrelations among the
sensing, seizing, and transforming subscales. Although this could have indicated a threat to construct validity (i.e.,
the subscales may not measure distinct dimensions of DC), the findings in the regression analyses clearly demonstrate
that the subscales predict different facets of business and innovation performance. Hence, it can be concluded that
the subscales measure different, analytically distinct but functionally related aspects of one and the same concept—as
theoretically assumed in the DC literature.
Like all empirical research, our study has several limitations. One limitation identified in the scale confirmation study is that assessing DC with our scale, as holds for most scales in social research, involves a certain
amount of method variance (common method bias). The degree of common method bias (9% in our study)
seems acceptable, however, given an estimated method bias of 26% in other areas of social research (Cote and
Buckley, 1987); the tested model still yields a good fit on all relevant indices, and all items still load substantially on the intended DC dimensions.
Another limitation is that the business and innovation performance criteria collected in the scale confirmation
study are based on survey data rather than hard financial data, such as actual figures on R&D expenditure or sales
revenue from innovations. Even though previous research has found high correlations between self-assessed and objective performance data (Dess and Robinson, 1984; Wall et al., 2004), future scale validation activities should
find ways to include more objective data on business and innovation performance to increase the validity of these
external criteria.
A further potential limitation, shared with similar studies in the area of DC (Wilden et al., 2013), is the small
response rate (5.9%) in the scale confirmation study. The means and SDs we obtained may be biased in
the sense that they only reflect the views of persons interested in DC and strategic renewal, or in survey research.
Although they may suffice for construct validation purposes, our data may not be suitable for drawing conclusions
about the “true” extent of DC in each participating firm, and the means and SDs may not be representative of the
whole population. Moreover, in our sample, the manufacturing industry (C) is underrepresented, the information
and communication industries (J) are overrepresented, and older firms are slightly overrepresented. To standardize
the scale (e.g., to diagnose high vs. low DC) and to provide information on expected values, confidence intervals, and
so forth, further research is required with a sample that is not biased through self-selection.

5.2 Theoretical implications for further developing the concept of DC


Building on earlier work, in their extensive review, Schilke et al. (2018: 416) came to the conclusion that “[n]ow may
also be a good time to move beyond these established procedural typologies and enrich the dynamic capabilities
framework with additional organizational processes that may have been previously overlooked.” We are convinced
that scale development is one important vehicle for moving beyond existing typologies: the operationalization of a concept into concrete items requires clear, consistent definitions; vice versa, empirical testing of a scale’s factor structure
and the correlations of its subscales with other variables may provide starting points for conceptual
refinement. Our research serves as an example of this mutual relationship between test construction and theory development. The most salient result is that Teece’s theoretically assumed DC dimensions were empirically shown to form
three different factors that predict different business performance criteria. Furthermore, our findings point to several
potential avenues for further refining Teece’s (2007) framework. For example, as mentioned above, one interpretation of sensing being negatively related to sales revenues and profits from product innovation is that the operationalization we chose measures responsive rather than proactive market orientation (Narver et al.,
2004) and, in Wang and Ahmed’s (2007) terminology, that firms that are well aware of market trends and customer wishes might develop an adaptive instead of an innovative capability. An additional subscale could be developed
to measure proactive market orientation, for instance by asking about awareness of future business opportunities and latent customer needs rather than current ones.
Seizing capacity was found to be positively associated with all performance indicators, with the highest
beta weights for the percentage of sales from innovations, the percentage of profits from innovations, and innovation
expenditure in R&D. Among the DC facets, seizing had the highest predictive power across all measured performance indicators. This seems plausible, as seizing is the capacity most closely linked to a strong strategy,
and it is also in line with Teece’s (2007) view of seizing as the capacity for making investment decisions based on business opportunities that fit the firm’s strengths.
Transforming capacity, as defined in our scale, is positively associated with aggregated, market, customer-related,
and employee-related performance, but it does not predict innovation performance indicators. This result may be
due in part to our items’ focus on the capacity for transforming organizational processes and structures but not on
the transformation of human resources in the sense of employee knowledge and skills, which might have been a more
effective predictor of innovation performance. The aspect of resource transformation capacity (as, for instance, partly reflected in Makkonen et al.’s (2014) learning capability subscale) might, therefore, be included as a subscale in
future versions of the scale. Notably, as firms become better at transforming, employee satisfaction and commitment,
as well as customer satisfaction, increase. The validity of these relationships and their causal direction—for example,
whether a more satisfied staff is better at implementing planned changes or whether the successful implementation of
planned changes increases staff satisfaction—will remain subject to future analyses.
As we described in the theoretical considerations underlying the dimensions of DC, some scholars have argued
that Teece’s (2007) sensing concept may have both an external and an internal aspect; internal sensing would be required
to identify the need (or opportunities) for adaptation and change based on information from within the firm. The necessity of reflecting on and articulating implicitly held knowledge for learning, continuous optimization, and renewal
has also been recognized by strategic management scholars (Hodgkinson and Healey, 2011), especially in the context
of strategic alliances (Kale and Singh, 2007) and of developing multiple capabilities at the same time
(Bingham et al., 2015). Integrating the aspect of reflection into Teece’s model could be an interesting way forward.

5.3 Other ways of measuring DC


To date, according to Schilke et al. (2018), about one-third of DC studies have employed survey scales. However, as
we have outlined, Wilden et al.’s (2013) scale is the only instrument for measuring DC as conceptualized by Teece
(2007) that has been developed in line with the highest standards of scale development. Compared with our own
scale, Wilden et al.’s (2013) scale has the benefit that asking for frequencies of DC activities may reduce social desirability bias and misjudgments. Nevertheless, as we have argued extensively in the theory section, an additional outcome-oriented approach is needed. Hence, our DC scale can be seen as a complement to Wilden et al.’s: even if
the two scale developments differed in their detailed procedures, the scales are comparable in
methodical rigor, sample size, and the high psychometric quality criteria they both achieved.
Both scales have also been successfully applied to show relationships between DC aspects and various kinds of per-
formance. The presence of two DC scales that take two perspectives—a frequency- or activity-oriented perspective
(Wilden et al.’s scale) and an outcome-oriented perspective (our scale)—now opens up multiple avenues for triangula-
tion in future research. For example, one interesting next step would be to investigate the link between DC activities,
DC outcomes, and various kinds of firm performance.
Techniques other than surveys have also been employed to measure DC. For example, scholars have used proximal
measures as indicators of DC as a remedy for potential informant biases (above all, social desirability). In this context, Stadler et al. (2013) looked at the level of certain oil drilling capabilities to infer an oil firm’s level of
“technological sophistication,” which then served as a proxy for DC; the rationale behind this proxy is
the assumption that a high level of technological sophistication implies a high level of DC. Another recent example is Girod and Whittington’s (2017) study, which used proxies for measuring reconfiguration capacity: they
counted the number of structural reconfigurations within a firm under consideration (e.g., additions of new
business units, recombinations of units, and splits of business units) and treated the number of reconfigurations within 1 year as a continuous measure of reconfiguration capacity. These examples illustrate both the advantages
and the limitations of a proxy-based measure of DC compared with a survey scale. Clearly, the main advantage (in both
cases where proxies were used) is that they provide an objective, unbiased picture of “what is going on” within an organization, regardless of (socially) desirable outcomes. Besides the fact that such data are hard to collect in many
cases, one disadvantage is that proxies can be highly specific to the firm or industry (as in Stadler et al.’s 2013 case),
thereby limiting their applicability as a measure in other businesses. Another potential limitation is that they may
measure DC only very indirectly. To take Girod and Whittington’s (2017) example, frequently changing a business’s
structure may require (and thus indicate) transforming capacity; nevertheless, as Arend and Bromiley (2009) argued,
not changing does not necessarily imply an incapacity to change.

In conclusion, all methods for measuring DC—frequency- and activity-based surveys (Wilden et al., 2013),
proxy-based measures (Stadler et al., 2013; Girod and Whittington, 2017), and our outcome-oriented survey—have
their benefits and drawbacks, and none of them will be the “answer to everything.” Instead, as suggested by Schilke
et al. (2018), mixed-method approaches (e.g., combining proxy-based studies with survey-based and other measures)
should be applied to take advantage of each method’s strengths while balancing their limitations. Moreover, future
methodical research could also investigate correlations between different types of instruments, to work toward standard measures in the area of DC.

6. Conclusion and future outlook


The starting point of our research was the call for a more standardized approach to measuring DC. As yet, no
agreed-upon scale exists for measuring the dimensions of DC, and hardly any of the scales employed in quantitative
studies so far have been developed following a rigorous scale development process. We contribute to DC research
by providing a first version of a DC scale based on Teece’s (2007) model of sensing, seizing, and transforming, focusing on outcomes of latent capacities (e.g., how good the firm is at sensing new trends) instead of frequencies of activities (e.g., how, and how often, it searches the environment). Because of the cross-validation of the factor structure
found in the EFA, the rather large sample sizes (269 for scale purification and 307 for scale confirmation), and the
large variety of performance indicators, we conclude that the scale is already of high psychometric quality (different
forms of reliability and validity) in its current version. Nonetheless, it should be employed and further improved in
future scale development processes.
Our work implies several avenues for future research. We discussed the outcomes of the confirmation study
against the backdrop of our concrete item operationalizations and elaborated on the findings that might have emerged if other
aspects of the same capacities (e.g., sensing and transforming) had been operationalized. These discussions point to
potential additional aspects that could be integrated to further refine the DC scale. For example, sensing could be
extended by proactive market orientation, process innovation could be highlighted more strongly in the operationalizations of
the seizing subscale, and the aspect of transforming firm resources (e.g., through learning and development) could be
included in the transforming subscale. Their inclusion could also contribute to a stepwise refinement of the assumptions underlying Teece’s (2007) model. Moreover, in a next step, external construct validity should be established by
comparing the outcomes of the scale with similar scales (convergent validity), such as Wilden et al.’s (2013) scale, or
with scales measuring different constructs (discriminant validity). Further potential outcomes of DC could be used
for criterion-related validation, such as organizational learning or strategic renewal. Overall, the ambition behind
our research is that in the future, there will be a valid, unified scale for measuring various, clearly specified dimensions of DC. We are confident that this approach of concretizing capacities and distinguishing different aspects may
also contribute to future theory development by further unraveling the multifaceted concept of DC.

References
Agarwal, R. and C. E. Helfat (2009), ‘Strategic renewal of organizations,’ Organization Science, 20(2), 281–293.
Agarwal, R. and W. Selen (2009), ‘Dynamic capability building in service value networks for achieving service innovation,’ Decision
Sciences, 40(3), 431–475.
Ali, S., L. D. Peters and F. Lettice (2012), ‘An organizational learning perspective on conceptualizing dynamic and substantive capa-
bilities,’ Journal of Strategic Marketing, 20(7), 589–607.
Allred, C. R., S. E. Fawcett, C. Wallin and G. M. Magnan (2011), ‘A Dynamic Collaboration Capability as a source of competitive ad-
vantage,’ Decision Sciences, 42(1), 129–161.
Ambrosini, V. and C. Bowman (2009), ‘What are dynamic capabilities and are they a useful construct in strategic management?,’
International Journal of Management Reviews, 11(1), 29–49.
Arend, R. J. and P. Bromiley (2009), ‘Assessing the dynamic capabilities view: spare change, everyone?,’ Strategic Organization, 7(1),
75–90.
Babelytė-Labanauskė, K. and S. Nedzinskas (2017), ‘Dynamic capabilities and their impact on research organizations’ R&D and innovation performance,’ Journal of Modelling in Management, 12(4), 603–630.
Bagozzi, R. P. and H. Baumgartner (1994), ‘The evaluation of structural equation models and hypothesis testing,’ in R. P. Bagozzi
and H. Baumgartner (eds), Principles of Marketing Research. Blackwell Publishers: Cambridge, pp. 386–422.
Barrales-Molina, V., F. J. L. Montes and L. J. Gutierrez-Gutierrez (2015), ‘Dynamic capabilities, human resources and operating routines,’ Industrial Management and Data Systems, 115(8), 1388–1411.
Barreto, I. (2010), ‘Dynamic capabilities: a review of past research and an agenda for the future,’ Journal of Management, 36(1),
256–280.
Battisti, M. and D. Deakins (2017), ‘The relationship between dynamic capabilities, the firm’s resource base and performance in a
post-disaster environment,’ International Small Business Journal, 35(1), 78–98.
Bentler, P. M. and D. G. Bonett (1980), ‘Significance tests and goodness of fit in the analysis of covariance structures,’ Psychological
Bulletin, 88(3), 588–606.
Bingham, C., K. H. Heimeriks, M. Schijven and S. Gates (2015), ‘Concurrent learning: how firms develop multiple dynamic capabil-
ities in parallel,’ Strategic Management Journal, 36(12), 1802–1825.
Browne, M. W. and R. Cudeck (1992), ‘Alternative ways of assessing model fit,’ Sociological Methods and Research, 21(2), 230–258.
Byrne, B. M., R. J. Shavelson and B. Muthén (1989), ‘Testing for the equivalence of factor covariance and mean structures: the issue
of partial measurement invariance,’ Psychological Bulletin, 105(3), 456–466.
Cheng, J.-H., M.-C. Chen and C.-H. Huang (2014), ‘Assessing inter-organizational innovation performance through relational gov-
ernance and dynamic capabilities in supply chains,’ Supply Chain Management: An International Journal, 19(2), 173–186.
Churchill, G. A. J. (1979), ‘A paradigm for developing better measures of marketing constructs,’ Journal of Marketing Research,
16(1), 64–73.
Clark, L. A. and D. Watson (1995), ‘Constructing validity: basic issues in objective scale development,’ Psychological Assessment,
7(3), 309–319.
Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd edn. Lawrence Erlbaum Associates: New Jersey.
Cote, J. A. and M. R. Buckley (1987), ‘Estimating trait, method, and error variance: generalizing across 70 construct validation studies,’ Journal of Marketing Research, 24(3), 315–318.
Cronbach, L. J. (1951), ‘Coefficient alpha and the internal structure of tests,’ Psychometrika, 16(3), 297–334.
Cui, Y. and H. Jiao (2011), ‘Dynamic capabilities, strategic stakeholder alliances and sustainable competitive advantage: evidence
from China,’ Corporate Governance, 11(4), 386–398.
Daft, R. L., J. Sormunen and D. Parks (1988), ‘Chief executive scanning, environmental characteristics, and company performance:
an empirical study,’ Strategic Management Journal, 9(2), 123–139.
Danneels, E. (2008), ‘Organizational antecedents of second-order competences,’ Strategic Management Journal, 29(5), 519–543.
Danneels, E. (2016), ‘Survey measures of first- and second-order competences,’ Strategic Management Journal, 37(10), 2174–2188.
Deeds, D. L., D. De Carolis and J. Coombs (2000), ‘Dynamic capabilities and new product development in high technology ventures:
an empirical analysis of new biotechnology firms,’ Journal of Business Venturing, 15(3), 211–229.
Dess, G. G. and R. B. Robinson (1984), ‘Measuring organizational performance in the absence of objective measures: the case of the privately-held firm and conglomerate business unit,’ Strategic Management Journal, 5(3), 265–273.
Di Stefano, G., M. Peteraf and G. Verona (2010), ‘Dynamic capabilities deconstructed: a bibliographic investigation into the origins,
development, and future directions of the research domain,’ Industrial and Corporate Change, 19(4), 1187–1204.
Di Stefano, G., M. Peteraf and G. Verona (2014), ‘The organizational drivetrain: a road to integration of dynamic capabilities re-
search,’ Academy of Management Perspectives, 28(4), 307–327.
Dömötör, R., N. Franke and C. Hienerth (2007), ‘What a difference a DV makes… The impact of conceptualizing the dependent variable in innovation success factor studies,’ Journal of Business Economics (formerly Zeitschrift für Betriebswirtschaft), 23–46.
Drnevich, P. L. and A. P. Kriauciunas (2011), ‘Clarifying the conditions and limits of the contributions of ordinary and dynamic capa-
bilities to relative firm performance,’ Strategic Management Journal, 32(3), 254–279.
Easterby-Smith, M., M. A. Lyles and M. Peteraf (2009), ‘Dynamic capabilities: current debates and future directions,’ British Journal
of Management, 20, S1–S8.
Eisenhardt, K. M. and J. A. Martin (2000), ‘Dynamic capabilities: what are they?,’ Strategic Management Journal, 21(10–11),
1105–1121.
Eurostat (2014), ‘Results of the community innovation survey 2014 (CIS2014)’, http://ec.europa.eu/eurostat/cache/metadata/en/inn_
cis9_esms.htm
Fawcett, S. E., C. Wallin, C. Allred, A. M. Fawcett and G. M. Magnan (2011), ‘Information technology as an enabler of supply chain collaboration: a dynamic-capabilities perspective,’ Journal of Supply Chain Management, 47, 22.
Flatten, T. C., A. Engelen, S. A. Zahra and M. Brettel (2011), ‘A measure of absorptive capacity: scale development and validation,’
European Management Journal, 29(2), 98–116.
Fornell, C. and D. F. Larcker (1981), ‘Evaluating structural equation models with unobservable variables and measurement error,’
Journal of Marketing Research, 18(1), 39–50.
Gelman, A. and J. Hill (2007), Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press:
New York.
Girod, S. J. G. and R. Whittington (2017), ‘Reconfiguration, restructuring and firm performance: dynamic capabilities and environmental dynamism,’ Strategic Management Journal, 38(5), 1121–1133.
Hawass, H. H. (2010), ‘Exploring the determinants of the reconfiguration capability: a dynamic capability perspective,’ European
Journal of Innovation Management, 13, 409–438.
Helfat, C. E. (1997), ‘Know-how and asset complementarity and dynamic capability accumulation: the case of R&D,’ Strategic
Management Journal, 18, 339–360.
Helfat, C. E., S. Finkelstein, W. Mitchell, M. A. Peteraf, H. Singh, D. J. Teece and S. G. Winter (2007), Dynamic Capabilities:
Understanding Strategic Change in Organizations. Wiley-Blackwell: Oxford.
Helfat, C. E. and M. A. Peteraf (2015), ‘Managerial cognitive capabilities and the microfoundations of dynamic capabilities,’
Strategic Management Journal, 36(6), 831–850.
Hinkin, T. R. (1995), ‘A review of scale development practices in the study of organizations,’ Journal of Management, 21(5),
967–988.
Hinkin, T. R. (1998), ‘A brief tutorial on the development of measures for use in survey questionnaires,’ Organizational Research
Methods, 1(1), 104–121.
Hodgkinson, G. P. and M. P. Healey (2011), ‘Psychological foundations of dynamic capabilities: reflexion and reflection in strategic
management,’ Strategic Management Journal, 32(13), 1500–1516.
Hou, J.-J. and Y.-T. Chien (2010), ‘The effect of market knowledge management competence on business performance: a dynamic capabilities perspective,’ International Journal of Electronic Business Management, 8, 96–109.
Hu, L. and P. M. Bentler (1999), ‘Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives,’ Structural Equation Modeling, 6(1), 1–55.
Hult, G. T. M., R. F. Hurley and G. A. Knight (2004), ‘Innovativeness: its antecedents and impact on business performance,’
Industrial Marketing Management, 33(5), 429–438.
Jantunen, A. (2005), ‘Knowledge-processing capabilities and innovative performance: an empirical study,’ European Journal of
Innovation Management, 8(3), 336–349.
Jiao, H., J. Wei and Y. Cui (2010), ‘An empirical study on paths to develop dynamic capabilities: from the perspectives of entrepre-
neurial orientation and organizational learning,’ Frontiers of Business Research in China, 4(1), 47–72.
Kale, P. and H. Singh (2007), ‘Building firm capabilities through learning: the role of the alliance learning process in alliance capabil-
ity and firm-level alliance success,’ Strategic Management Journal, 28(10), 981–1000.
Karimi, J. and Z. Walter (2015), ‘The role of dynamic capabilities in responding to digital disruption: a factor-based study of the newspaper industry,’ Journal of Management Information Systems, 32(1), 39–81.
Knight, G. A. and D. Kim (2009), ‘International business competence and the contemporary firm,’ Journal of International Business
Studies, 40(2), 255–273.
Lee, P.-Y., M.-L. Wu, C.-C. Kuo and C.-S. J. Li (2016), ‘How to deploy multiunit organizations’ dynamic capabilities?,’ Management
Decision, 54(4), 965–980.
Li, D. and J. Liu (2014), ‘Dynamic capabilities, environmental dynamism, and competitive advantage: evidence from China,’ Journal
of Business Research, 67(1), 2793–2799.
Lin, H.-F., J.-Q. Su and A. Higgins (2016), ‘How dynamic capabilities affect adoption of management innovations,’ Journal of
Business Research, 69(2), 862–876.
Lopez-Cabrales, A., M. Bornay-Barrachina and M. Diaz-Fernandez (2017), ‘Leadership and dynamic capabilities: the role of HR sys-
tems,’ Personnel Review, 46(2), 255–276.
Ma, J. and Z. Todorovic (2011), ‘Making universities relevant: market orientation as a dynamic capability within institutions of higher learning,’ Academy of Marketing Studies Journal, 15, 1–15.
Makkonen, H., M. Pohjola, R. Olkkonen and A. Koponen (2014), ‘Dynamic capabilities and firm performance in a financial crisis,’
Journal of Business Research, 67(1), 2707–2719.
Mandal, S. (2017), ‘The influence of dynamic capabilities on hospital-supplier collaboration and hospital supply chain performance,’
International Journal of Operations & Production Management, 37(5), 664–684.
Marcus, A. A. and M. H. Anderson (2006), ‘A general dynamic capability: does it propagate business and social competencies in the retail food industry?,’ Journal of Management Studies, 43(1), 19–46.
Marsh, H. W., Z. Wen and K.-T. Hau (2004), ‘Structural equation models of latent interactions: evaluation of alternative estimation
strategies and indicator construction,’ Psychological Methods, 9(3), 275–300.
McKelvie, A. and P. Davidsson (2009), ‘From resource base to dynamic capabilities: an investigation of new firms,’ British Journal of Management, 20, S63–S80.
Mitrega, M., D. Forkmann, G. Zaefarian and S. C. Henneberg (2017), ‘Networking capability in supplier relationships and its impact on product innovation and firm performance,’ International Journal of Operations & Production Management, 37(5), 577–606.
Monteiro, A. P., A. M. Soares and O. L. Rua (2017), ‘Linking intangible resources and export performance,’ Baltic Journal of
Management, 12(3), 329–347.
Naldi, L., P. Wikström and M. B. von Rimscha (2014), ‘Dynamic capabilities and performance,’ International Studies of Management and Organization, 44(4), 63–82.
Narver, J. C., S. F. Slater and D. L. Mac Lachlan (2004), ‘Responsive and proactive market orientation and new product success,’ The
Journal of Product Innovation Management, 21(5), 334–347.
Nedzinskas, S., A. Pundzienė, S. Buožiūtė-Rafanavičienė and M. Pilkienė (2013), ‘The impact of dynamic capabilities,’ Baltic Journal of Management, 8(4), 376–396.
Noble, C. H. (1999), ‘The eclectic roots of strategy implementation research,’ Journal of Business Research, 45(2), 119–134.
Nunnally, J. C. (1978), Psychometric Theory. McGraw-Hill: New York.
Ottenbacher, M. C. (2007), ‘Innovation management in the hospitality industry: different strategies for achieving success,’ Journal of
Hospitality & Tourism Research, 31(4), 431–454.
Pandit, D., M. P. Joshi, R. K. Gupta and A. Sahay (2017), ‘Disruptive innovation through a dynamic capabilities lens: an exploration
of the auto component sector in India,’ International Journal of Indian Culture and Business Management, 14(1), 109–130.
Pavlou, P. A. and O. A. El Sawy (2011), ‘Understanding the elusive black box of dynamic capabilities,’ Decision Sciences, 42(1), 239–273.
Peteraf, M., G. Di Stefano and G. Verona (2013), ‘The elephant in the room of dynamic capabilities: bringing two diverging conversations together,’ Strategic Management Journal, 34(12), 1389–1410.
Podsakoff, P. M., S. B. MacKenzie, J.-Y. Lee and N. P. Podsakoff (2003), ‘Common method biases in behavioral research: a critical re-
view of the literature and recommended remedies,’ Journal of Applied Psychology, 88(5), 879–903.
Protogerou, A., Y. Caloghirou and S. Lioukas (2012), ‘Dynamic capabilities and their indirect impact on firm performance,’
Industrial and Corporate Change, 21(3), 615–647.
Raman, A. and S. S. Bharadwaj (2017), ‘Dynamic service capabilities enabling agile services,’ Journal of Enterprise Information
Management, 30(1), 166–187.
Rashidirad, M., H. Salimian, E. Soltani and Z. Fazeli (2017), ‘Competitive strategy, dynamic capability, and value creation: some em-
pirical evidence from UK telecommunications firms,’ Strategic Change, 26(4), 333–342.
Richardson, H. A., M. J. Simmering and M. C. Sturman (2009), ‘A tale of three perspectives: examining post hoc statistical techniques for detection and correction of common method variance,’ Organizational Research Methods, 12(4), 762–800.
Rossiter, J. R. (2008), ‘Content validity of measures of abstract constructs in management and organizational research,’ British
Journal of Management, 19(4), 380–388.
Schilke, O. (2014), ‘Second-order dynamic capabilities: how do they matter?,’ The Academy of Management Perspectives, 28(4),
368–380.
Schilke, O., S. Hu and C. Helfat (2018), ‘Quo vadis, dynamic capabilities? A content-analytic review of the current state of knowledge
and recommendations for future research,’ Academy of Management Annals, 12(1), 390–439.
Shafia, M. A., S. Shavvalpour, M. Hosseini and R. Hosseini (2016), ‘Mediating effect of technological innovation capabilities between
dynamic capabilities and competitiveness of research and technology organisations,’ Technology Analysis and Strategic
Management, 28(7), 811–826.
Simon, A. (2010), ‘Resources, dynamic capabilities and Australian business success,’ Journal of Global Business and Technology, 6, 12–31.
Singh, D., J. S. Oberoi and I. S. Ahuja (2013), ‘An empirical investigation of dynamic capabilities in managing strategic flexibility in
manufacturing organizations,’ Management Decision, 51(7), 1442–1461.
Stadler, C., C. E. Helfat and G. Verona (2013), ‘The impact of dynamic capabilities on resource access and development,’
Organization Science, 24(6), 1782–1804.
Teece, D. J. (2007), ‘Explicating dynamic capabilities: the nature and microfoundations of (sustainable) enterprise performance,’
Strategic Management Journal, 28(13), 1319–1350.
Teece, D. J. (2014), ‘The foundation of enterprise performance: dynamic and ordinary capabilities in an (economic) theory of firms,’
The Academy of Management Perspectives, 28(4), 328–352.
Teece, D. J., G. Pisano and A. Shuen (1997), ‘Dynamic capabilities and strategic management,’ Strategic Management Journal, 18(7),
509–533.
Thompson, B. (2004), Exploratory and Confirmatory Factor Analysis: Understanding Concepts and Applications. American
Psychological Association: Washington DC.
Townsend, D. M. and L. W. Busenitz (2015), ‘Turning water into wine? Exploring the role of dynamic capabilities in early-stage capitalization processes,’ Journal of Business Venturing, 30(2), 292–306.
Verreynne, M.-L., D. Hine, L. Coote and R. Parke (2016), ‘Building a scale for dynamic learning capabilities: the role of resources,
learning, competitive intent and routine patterning,’ Journal of Business Research, 69(10), 4287–4303.
Vickery, S. K., X. Koufteros and C. Droge (2013), ‘Does product platform strategy mediate the effects of supply chain integration on performance? A dynamic capabilities perspective,’ IEEE Transactions on Engineering Management, 60(4), 750–762.
Vogel, R. and W. H. Güttel (2013), ‘The dynamic capability view in strategic management: a bibliometric review,’ International
Journal of Management Reviews, 15, 426–446.
Wall, T. D., J. Michie, M. Patterson, S. J. Wood, M. Sheehan, C. W. Clegg and M. West (2004), ‘On the validity of subjective measures of company performance,’ Personnel Psychology, 57(1), 95–118.
Wamba, S. F., A. Gunasekaran, S. Akter, S. J. Ren, R. Dubey and S. J. Childe (2017), ‘Big data analytics and firm performance: effects
of dynamic capabilities,’ Journal of Business Research, 70, 356–365.
Wang, C. L. and P. K. Ahmed (2007), ‘Dynamic capabilities: a review and research agenda,’ International Journal of Management
Reviews, 9(1), 31–51.
Wang, C. L., C. Senaratne and M. Rafiq (2015), ‘Success traps, dynamic capabilities and firm performance,’ British Journal of Management, 26(1), 26–44.
Wiklund, J. and D. Shepherd (2005), ‘Entrepreneurial orientation and small business performance: a configurational approach,’
Journal of Business Venturing, 20(1), 71–91.
Wilden, R. and S. P. Gudergan (2015), ‘The impact of dynamic capabilities on operational marketing and technological capabilities:
investigating the role of environmental turbulence,’ Journal of the Academy of Marketing Science, 43(2), 181–199.
Wilden, R., S. P. Gudergan, B. B. Nielsen and I. Lings (2013), ‘Dynamic capabilities and performance: strategy, structure and environment,’ Long Range Planning, 46(1–2), 72–96.
Wilden, R., T. M. Devinney and G. R. Dowling (2016), ‘The architecture of dynamic capability research: identifying the building
blocks of a configurational approach,’ Academy of Management Annals, 10(1), 997–1076.
Williams, L. J., N. Hartman and F. Cavazotte (2010), ‘Method variance and marker variables: a review and comprehensive CFA
marker technique,’ Organizational Research Methods, 13(3), 477–514.
Wilson, M. (2004), Constructing Measures: An Item Response Modeling Approach. Lawrence Erlbaum Associates: New Jersey.
Winter, S. G. (2003), ‘Understanding dynamic capabilities,’ Strategic Management Journal, 24(10), 991–995.
Wohlgemuth, V. and M. Wenzel (2016), ‘Dynamic capabilities and routinization,’ Journal of Business Research, 69(5), 1944–1948.
Wu, L.-Y. (2010), ‘Applicability of the resource-based and dynamic-capability views under environmental volatility,’ Journal of
Business Research, 63(1), 27–31.
Zahra, S. A., H. J. Sapienza and P. Davidsson (2006), ‘Entrepreneurship and dynamic capabilities: a review, model and research
agenda,’ Journal of Management Studies, 43(4), 917–955.
Zheng, S., W. Zhang and J. Du (2011), ‘Knowledge based dynamic capabilities and innovation in networked environments,’ Journal
of Knowledge Management, 15(6), 1035–1051.
