Toward A DC Scale
Abstract. To date, no standard scale exists for measuring dynamic capabilities. This
limits the comparability of empirical findings and impairs data-based theory development.
This article presents the development of a 14-item scale based on Teece’s (2007) well-established sensing, seizing, and transforming capacities. It describes the rigorous empirical scale development procedure comprising the
steps of (i) item generation, (ii) scale purification (n = 269), and (iii) scale confirmation (n =
307). The scale shows high reliability and validity and is a solid predictor of business and
innovation performance.
Please cite as: Kump, B., Engelmann, A., Keßler, A., & Schweiger, C. (2019). Toward a dynamic capabilities scale: Measuring organizational sensing, seizing, and transforming capacities.
INTRODUCTION
Since its introduction approximately two decades ago (Eisenhardt and Martin, 2000; Helfat,
1997; Teece, Pisano, and Shuen, 1997), the concept of dynamic capabilities (DC) has received
great attention from management scholars (for reviews see, e.g., Ambrosini and Bowman,
2009; Barreto, 2010; Di Stefano, Peteraf, and Verona, 2010; Schilke, Hu, and Helfat, 2018;
Vogel and Güttel, 2013; Wilden, Devinney, and Dowling, 2016). Defined as organizational
capabilities that allow firms to ‘build and renew resources and assets […], reconfiguring them
as needed to innovate and respond to (or bring about) changes in the market and in the
business environment’ (Teece, 2014: 332), DC are crucial for firms to achieve strategic
change and renewal (Agarwal and Helfat, 2009; Helfat et al., 2007).
Two observations characterize the current state of the field: First, in contrast with earlier days when most DC studies were conceptual, the majority of DC research is now empirical. Second, while researchers (e.g., Girod and Whittington, 2017; Stadler, Helfat, and Verona, 2013) have recently started to explore the potential of employing proxies for measuring DC, 33 percent of the reviewed publications on DC still report findings from survey studies. However, nearly every one of these studies uses its own survey instrument, and no standard scale exists for measuring DC that would expose the concept to rigorous, comparable empirical testing. Although numerous studies exist measuring DC as dependent or independent variables (e.g., Lin, Su, and Higgins, 2016; Naldi, Wikström, and Von Rimscha, 2014), the employed DC scales have typically been developed in an ad-hoc manner, without following the systematic procedures suggested, for instance, by Churchill (1979) or Hinkin (1995, 1998); one exception is Wilden, Gudergan, Nielsen, and Lings’ (2013) scale. Both the absence of a standard scale and the lack
of ultimate rigor in scale development limit the comparability of findings in the growing
number of quantitative survey studies and the applicability of meta-analyses for further
improving conceptual clarity; findings from different studies remain fragmented and difficult to integrate. Only if measures adequately reflect the theoretical assumptions underlying the concept of DC can empirical research reveal both theoretical insights and well-founded implications for managers. Our research aims to contribute to the consolidation of the field by providing such a measure.
The development of a scale should start from a well-established construct (Clark and
Watson, 1995) and follow a systematic multi-staged procedure, where the operationalizations
are gradually reduced to a set of items that most consistently reflect that construct. According
to Schilke et al.’s (2018) study, the most widely acknowledged and most frequently referred-to DC framework is Teece’s (2007; further developed from Teece, Pisano, and Shuen’s, 1997) model of sensing, seizing, and transforming capacities. We therefore took Teece’s (2007) conceptualization as a starting point for our scale development.
We developed a scale measuring Teece’s (2007) sensing, seizing, and transforming capacities. The resulting scale comprises 14 items. The subscales show high internal consistency, and the overall scale reveals high construct validity. The capacities are solid predictors of different facets of business and innovation performance. We conclude by discussing the limitations of the scale. Although the actual scale may have to be extended, revised, or even partly discarded
in future iterations, it may serve as a starting point for further consolidating empirical
research.
This article is organized as follows. We first provide the theoretical rationale of our approach and give an overview of existing scales for measuring Teece’s DC concept. The core body describes the process of scale
development: We explain how we formulated the items of the scale based on precise operationalizations of the three capacities (item generation), how we examined the dimensionality and internal consistency of the preliminary scale with an exploratory factor analysis based on a sample of
269 companies, and how we purified it by removing items of low psychometric quality (scale
purification). We further describe how we cross-validated the factor structure of the items in a
second-order confirmatory factor analysis (CFA) based on a new sample of 307 companies
and tested the criterion validity by predicting various performance indicators (scale
confirmation). We conclude with a discussion of our scale compared to other approaches for
measuring DC (scales and secondary data), and some theoretical implications of our research.
Different perspectives on DC
Since Teece et al.’s (1997: 516) original introduction of the term dynamic capabilities as
capabilities to ‘integrate, build, and reconfigure internal and external competences to address
rapidly changing environments’, the concept of DC has become one of the most important research streams in strategic management (for reviews see Wilden et al., 2016, and Schilke et al., 2018). There is consensus in the literature
that the role of DC is to modify a firm’s existing resource base and to transform it
intentionally and in alignment with strategic assumptions in such a way that a new bundle or configuration of resources emerges (Helfat et al., 2007; Zahra, Sapienza, and Davidsson, 2006). This role is also reflected in the distinction
between dynamic and ‘ordinary’ capabilities, as outlined by Teece (2014; see also Winter,
2003, or Zahra et al., 2006): Ordinary capabilities are responsible for generating value for a firm under its current conditions, whereas DC serve to modify and renew these ordinary capabilities. There is agreement in the literature that through this modification of
ordinary capabilities, DC may contribute to competitive advantage, but they are not sufficient
for sustained firm performance (Helfat et al., 2007; Schilke et al., 2018; Wilden et al., 2016).
At the same time, DC research has been one of the most vividly discussed streams in the field, and in the past, the DC framework has been the subject of numerous theoretical debates (for overviews, see, e.g., Arend and Bromiley, 2009; Ambrosini and Bowman, 2009; Barreto, 2010; Easterby-Smith, Lyles, and Peteraf, 2009; Di Stefano et al., 2014; Wilden et al., 2016). Most importantly, Peteraf et al. (2013) demonstrated a theoretical
divide between a stream of research that builds on Teece et al.’s (1997) conceptualization
(further developed by Teece, 2007; Helfat et al., 2007), and a stream that relies on Eisenhardt
and Martin’s (2000) view. However, based on an in-depth theoretical analysis of the two
approaches, Peteraf et al. (2013) showed that these are merely differences in perspective (as regards boundary conditions and the attainment of sustainable and competitive advantage) that can be
combined under certain circumstances. A few years later, Wilden et al. (2016) observed that
these two camps have moved more closely together since 2012 – which may be partly due to
Peteraf et al.’s (2013) integrative perspective. Similarly, Schilke et al. (2018) demonstrated
that there is now considerable conceptual convergence in the field: On the basis of a content
analysis of 298 articles, they came to the conclusion that the three most influential definitions
by Teece et al. (1997), Eisenhardt and Martin (2000), and Helfat et al. (2007) are
complementary and build on one another, and that other frequently used definitions are highly consistent with these three. Yet, for measuring DC, researchers have to specify which perspective they take with their measure: Teece’s (and Helfat et al.’s) view, or Eisenhardt and Martin’s. Teece et al.’s
(1997) approach (and its further developments) is rather broad and general (e.g., ‘the firm’s
ability to integrate, build, and reconfigure […] competences’; Teece et al., 1997: 516), and was later specified in terms of generic sensing, seizing, and transforming capacities (Teece, 2007). In contrast, Eisenhardt and Martin (2000), and those building on their work, do not provide a generic set of capacities, but present a list of examples for DC, including product development routines, strategic decision-making routines, and routines for replication, among others.
For the present scale development, we chose Teece’s (2007) conceptualization for a
theoretical and a practical reason: The theoretical reason is that we aimed at measuring
general DC of a firm; Teece’s (2007) model provides general types of processes in which DC
are engaged (sensing, seizing, and transforming) and not specific functional domains of DC
(e.g., alliancing, product development), as Eisenhardt and Martin’s (2000) perspective would
imply. Thereby, in line with Schilke et al. (2018), we regard Teece’s (2007) work not as a
supersession but as an elaboration of Teece et al.’s (1997) typology. The practical reason for
relying on Teece’s perspective is that it has been employed in the majority of empirical DC
studies so far (Schilke et al., 2018). In the following sub-sections, we will outline the
theoretical and methodical implications of this decision to rely on Teece’s (2007) view for
scale development.
Teece et al.’s (1997) and Eisenhardt and Martin’s (2000) approaches (and their respective
further developments) are in agreement in many regards. Most importantly, both originate
from the resource-based view, and both take a multi-level perspective in that they combine several levels of analysis. The two views differ, however, in whether DC are considered capacities or routines (Di Stefano et al., 2010; 2014). Perspectives that are based
on Teece et al.’s (1997) definition regard DC as capacities; those based on Eisenhardt and
Martin (2000) view DC as routines. However, Di Stefano et al. (2014) showed that the two
views can be combined: Capacities are latent and can be observed only once they are put into
action, whereas routines and their constituent elements are more observable. That is, DC are
seen as latent capacities, which are manifested in (observable) routines and their outcomes.
Only through these routines do DC enable strategic renewal in a continuous and reliable way.
Hence, both perspectives agree on the crucial role of routines (Peteraf et al., 2013).
Moreover, while Teece et al.’s (1997) and Eisenhardt and Martin’s (2000) views seem to
diverge in (a) whether they regard DC as useful for firms in highly dynamic environments, (b)
whether DC are a source of sustainable advantage, and (c) whether DC are a source of
competitive advantage, Peteraf et al. (2013) showed that the two perspectives can be reconciled by identifying circumstances under which both their assumptions hold true: In moderately dynamic environments, DC as ‘best practices’ may still enable sustainable competitive advantage, if they are idiosyncratic in their details. For the context of highly
dynamic environments, the assumptions of the two perspectives can still be in alignment, for
example, if DC take the shape of higher-order capabilities (e.g., capabilities in rapid and
continuous product innovation). These higher-order capabilities may enable firms to deploy
and modify lower-order DC in the form of simple rules. As a second circumstance under
which the approaches are in alignment also in high-velocity markets, Peteraf et al. (2013)
mention the presence of DC that are not specific but generic in the sense that they remain useful even if the market changes dramatically.
Consequently, in order to build a scale based on Teece’s (2007) view that still does not
contradict Eisenhardt and Martin’s (2000) perspective, a few methodical implications must be
considered: First, in line with Teece’s rather general conceptualization of DC, the scale should capture rather broad and general dynamic capabilities for integrating, building, and reconfiguring competences, not specific functional domains. Second, in order to remain applicable also in high-velocity markets (the area of application targeted by Teece et al., 1997; Teece, 2007, 2014),
DC should be seen as generic in the sense that they remain useful even if the market changes
dramatically. Third, in order to potentially enable competitive advantage (in line with the
VRIN criteria) these capabilities might be formulated as ‘best practices’, but only if the
definitions of these best practices would leave room for idiosyncrasy (as suggested by
Eisenhardt and Martin, 2000): That is, the definitions should acknowledge that one and the
same latent aspect of DC (e.g., the capacity to continuously identify trends in the firm’s
environment) may manifest in different routines across different firms. Hence, firms that
possess high levels of certain aspects of DC may have established reliable routines that lead to
similar DC outcomes (e.g., awareness of market trends), but they may differ with regard to the concrete routines through which these outcomes are achieved.
Teece (2007; see also Teece, 2014) provided further refinements of DC into generic sensing,
seizing, and transforming capacities, which need to be closely aligned with a firm’s strategy.
Teece’s conceptualizations of these capacities are rather broad: Sensing includes as many activities as the ‘identification, development, co-development, and assessment of technological opportunities in relationship to customer needs’ (Teece, 2014: 332), seizing involves the
‘mobilization of resources to address needs and opportunities, and to capture value from
doing so’ (ibid, 2014: 332), and transforming means nothing less than ‘continued renewal’
(ibid, 2014: 332). In order to operationalize them, Teece’s (2007, 2014) broad conceptualizations must be further concretized. In our understanding, sensing concerns the observation and evaluation of the firm’s external environment (Teece, 2007, 2014; see also Makkonen et al., 2014; Pavlou and El Sawy, 2011).
According to Teece, sensing refers to accumulating and filtering information from the
environment ‘to create a conjecture or a hypothesis about the likely evolution of technologies,
customer needs, and marketplace responses’ and ‘involves scanning and monitoring internal
and external technological developments and assessing customer needs, expressed and latent’
(Teece, 2007: 1323), in addition to shaping market opportunities and monitoring threats.
Some researchers (e.g., Babelytė-Labanauskė and Nedzinskas, 2017) have outlined that
sensing does not only have an external focus but also has an internal aspect: It may, for
example, involve the identification of new developments and opportunities within the firm.
This theoretical difference between external and internal sensing is reflected in further research: Some authors in the management field (e.g., Helfat and Peteraf, 2015) have focused on perception and attention
(i.e., a rather external perspective), while others (e.g., Hodgkinson and Healey, 2011) have
mainly looked at the need for reflection/reflexion (i.e., a rather internal perspective).
Nevertheless, Teece’s (2007, 2014) original conceptualization is more oriented towards the
organization’s external environment. Hence, in the present article, we stick closely with
Teece’s original model and understand sensing mainly as external sensing. This sensing involves scanning, perceiving, and attending to the firm’s environment (Helfat and Peteraf, 2015). It may take place formally (e.g., through systematic market research) or informally (e.g., through personal networks of organizational members). Refining Teece’s broad definition, we posit that an organization with high sensing
capacity is able to continuously and reliably acquire strategically relevant information from
the environment, including market trends, best practices, and competitors’ activities.
Seizing refers to developing and selecting business opportunities that fit with the
organization’s environment and its strengths and weaknesses (Teece, 2007). Seizing thus
means that market opportunities are successfully exploited and that threats are averted. Seizing
bridges external and internal information and knowledge, and it is closely linked with
strategic decision making, particularly regarding investment decisions. Seizing capacity starts
from a strategy that enables the recognition of valuable knowledge. This evaluation is based
on prior knowledge, and it results in a selection from a variety of strategic options. Seizing
capacity within an organization is high if the organization is able to recognize, among a variety of options, those opportunities that fit its strengths and weaknesses and to make decisions accordingly.
Transforming, finally, refers to ‘enhancing, combining, protecting, and, when necessary, reconfiguring the business enterprise’s intangible and tangible assets’ (Teece, 2007: 1319), such that path dependencies and inertia are avoided. That is, transforming
refers to putting decisions for new business models, product or process innovations into
practice by implementing the required structures and routines, providing the infrastructure,
ensuring that the workforce has the required skills, and so forth. Transforming is characterized
by the actual realization of strategic renewal within the organization, through the ability ‘to recombine and to reconfigure assets and organizational structures as the enterprise grows, and as markets and technologies change’ (Teece, 2007). Relatedly, implementation capacity has been defined as ‘the ability to execute and coordinate strategic decision and corporate change,
which involves a variety of managerial and organizational processes, depending on the nature of the change’. Implementation comprises communicating, adopting, and enacting strategic plans (Noble, 1999). Only through implementation does
renewal come into being; otherwise, new information and ideas within an organization remain
theoretical inputs and potential changes. An organization with a high transforming capacity is able to put planned changes into practice by implementing the required structures and routines, allocating resources, and ensuring that the workforce possesses the newly required knowledge.
As outlined above, even if no standard scale exists, several researchers have developed and
employed various DC scales based on Teece (2007). To gain an overview of existing scales,
we systematically searched peer-reviewed journals from the database ProQuest for the time
between 1997 (the year of publication of Teece et al.’s original conceptualization) and January 2018. The central search term was dynamic capabilit*, mentioned at least in the title, in the abstract, or in the keywords. To focus on measurement, we added ‘scale* OR measure* OR survey OR empirical*’ to the query. The search revealed 325 articles, of which
we excluded those that were from different fields (e.g., chemistry, biology), addressed DC
only theoretically without actually measuring them, or employed empirical measures other
than scales (e.g., secondary data, financial data, data from qualitative interviews). The resulting set of articles was then inspected in detail.
Of the studies described in these articles, 75 employed surveys measuring specific examples of
DC (in the sense of Eisenhardt and Martin’s perspective) such as quality of the scientific team
(Deeds, DeCarolis, and Coombs, 2000), market disruptiveness capability (McKelvie and
Davidsson, 2009), market orientation (Ma and Todorovic, 2011), supplier-chain integration
(Fawcett et al., 2011; Vickery, Koufteros, and Droge, 2013), R&D capabilities (Singh,
Oberoi, and Ahuja, 2013), dynamic collaboration capability (Allred et al., 2011), alliance
management (Schilke, 2014), managerial capabilities (Townsend and Busenitz, 2015), first- and second-order competences (Danneels, 2016), dynamic service capabilities (Raman and Bharadwaj, 2017), or networking capabilities.
Another 22 studies (e.g., Cheng, Chen, and Huang, 2014; Drnevich and Kriauciunas, 2011; Lee, Wu, Kuo, and Li, 2016; Marcus and Anderson, 2006; Monteiro, Soares, and Rua,
2017; Wamba et al., 2017; Wu, 2010) investigated DC as an overall construct, without differentiating between sub-dimensions. Of the studies in which DC dimensions were separately measured, 15 were based on newly developed own models or on models other than Teece’s
(Agarwal and Selen, 2009; Ali, Peters, and Lettice, 2012; Battisti and Deakins, 2017; Cui and
Jiao, 2011; Hou and Chien, 2010; Jiao, Wei, and Cui, 2010; Karimi and Walter, 2015; Li and
Liu, 2014; Makkonen et al., 2014; Simon, 2010; Lin et al., 2016; Verreynne, Hine, Coote, and
Parker, 2016; Wang, Senaratne, and Rafiq, 2015; Wohlgemuth and Wenzel, 2016; Zheng et al.).
The remaining 13 survey-based measures building on Teece’s view are listed in Table 1.
The table is split into two time periods: (i) January 1997 to December 2015, the time before
scale development, and (ii) January 2016 to January 2018, the time during scale development.
Scales from the first period, that is, before 2016, were taken into account for item generation.
As shown in Table 1, two measures in the first time period (rows 1 and 2; Pavlou and El
Sawy, 2011; Protogerou, Caloghirou, and Lioukas, 2012) and three measures in the second
time period (rows 8-10; Mandal, 2017; Pandit, Joshi, Gupta, and Sahay, 2017; Rashidirad,
Salimian, Soltani, and Fazeli, 2017) were derived from Teece et al.’s (1997) theoretical framework rather than from Teece’s (2007) refined conceptualization.
The remaining eight measures (rows 3-7 and 11-13 in Table 1) build upon sensing, seizing, and transforming capacities, but three of them cover only some of these dimensions: Hawass (2010) measures reconfiguration (transforming) capacity but neglects sensing and seizing. Naldi et al. (2014) include sensing and seizing, but not transforming, and Wilden and Gudergan (2015) take into account sensing and reconfiguring, but not seizing.
Overall, five scales remain that cover sensing, seizing, and transforming capacities as conceptualized by Teece (2007) – two in the first time period (rows 6-7; Nedzinskas et al., 2013; Wilden, Gudergan, Nielsen, and Lings, 2013) and three in the second time period (rows 11-13; Babelytė-Labanauskė and Nedzinskas, 2017, among others).
Nedzinskas et al. (2013: 385) measure all three dimensions, sensing, seizing, and
transforming; they employ items ‘adapted from Teece (2007)’ but do not provide details on
the scale development process, the exact wording of the items, or the response format they
used. The main goal of Nedzinskas et al.’s work was to measure the impact of DC aspects on
SME performance, and not to systematically develop and test a DC scale. Likewise, all three articles from the second time period that report DC scales aim at linking DC with other variables: Babelytė-Labanauskė and Nedzinskas (2017) predict R&D and innovation performance, Shafia et al. relate DC to further outcomes, and a third study (2017) links the development of DC with characteristics of HR systems. None of them had the aim to develop a DC scale following a rigorous scale development procedure.
One study that explicitly intended to develop a DC scale following a systematic and
transparent scale development process and that includes all three dimensions – sensing,
seizing, and reconfiguring (transforming) – is Wilden et al.’s (2013) scale. (Part of the scale is re-used in Wilden and Gudergan’s (2015) study, where the authors measure sensing and reconfiguring activities, but not seizing.) Wilden et al.’s (2013) scale was inspired by Jantunen’s (2005) and Danneels’ (2008) measures of the sensing component and asks for
activities and processes, such as the adoption of new management methods and renewal of
business routines (Wilden et al., 2013: 80f) — for instance, ‘People participate in
professional association activities’ (sensing). The scale asks for the frequency of these
activities (e.g., ‘How often have you carried out the following activities between 2004 and
2008?’) and provides response options from ‘rarely (1)’ to ‘very often (7)’. Besides its systematic development, this activity- and frequency-oriented approach (‘How often do you…?’) has another major advantage: Asking respondents what
they do frequently may be less biased due to social desirability than asking respondents about how good their firm is at something.
However, we argue that this activity- and frequency-oriented approach also has one major shortcoming: Even if an organization frequently performs a certain DC-related activity, this does not immediately indicate the actual outcome of that activity. For example, frequent participation in professional association activities (an activity related to sensing) does not necessarily result in high awareness of market trends (outcome of the sensing-related activity). An analogy from the world of music would be to ask a piano player how frequently she practices (activity) and to take this as a measure of her capability to play the
piano. Hence, acknowledging the merits of Wilden et al.’s (2013) scale, we argue that a complementary, outcome-oriented instrument is needed, especially given that Wilden et al.’s (2013) scale still is the only instrument for measuring DC based on Teece (2007) that has been developed in line with the principles of a rigorous scale development procedure.
SCALE DEVELOPMENT
Overall procedure
Even if the suggested approaches for scale development (e.g., Churchill, 1979; Hinkin,
1995, 1998) differ in their details, they agree on the general process and quality criteria to be
taken into account. We integrated their methodological considerations and employed a three-
step procedure, which will be described in detail below. First, in the item-generation step, we
developed indicators that should reflect Teece’s (2007) sensing, seizing, and transforming
capacities. To provide a complementary perspective to Wilden et al.’s (2013) scale, our items
should focus on the general existence of DC routines and outcomes (instead of frequencies of
DC activities).
We considered starting from existing (sub-)scales and combining them into one new
instrument that measures all of the aspects suggested by Teece (2007). However, we
encountered two obstacles, one conceptual and one methodical. As for the conceptual
obstacle, we realized that the different authors employed different interpretations (and implicit
definitions) of the DC dimensions. For example, both Hawass (2010) and Naldi et al. (2014)
compound seizing and transforming. As for the methodical problem, our review of the
existing subscales and items from different authors revealed that they are heterogeneous
regarding both the operationalization of capacities and the answering format (e.g., rarely (1)
to very often (7), Wilden et al., 2013; strongly disagree (1) to strongly agree (7), Hawass,
2010). Hence we concluded that a mere combination of subscales from different instruments
is not feasible, and that scale development requires (a) refined operationalizations of Teece’s
(2007) sensing, seizing, and transforming capacities, and (b) homogeneous phrasing of the
items in the different subscales with a focus on the actual presence of capacities.
Nevertheless, our aim was that the new scale build on existing scales as much as possible.
Second, in the scale-purification step, we used data from a sample of 269 companies to investigate the dimensionality of the scale by employing an exploratory factor analysis (EFA),
to assess the internal consistency of the sensing, seizing, and transforming subscales and to
remove items of low psychometric quality, if necessary. At this stage, we investigated which
factors the EFA would reveal and whether they correspond to the theoretically assumed DC
dimensions. We expected that the EFA would reveal distinct, robust factors representing the
three different capacities. Moreover, we expected (i) internal consistency within each factor to
be high, indicating that the developed items are measuring one and the same construct, and
(ii) moderate subscale inter-correlations because the capacities were assumed to be distinct
but interrelated.
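The internal-consistency criterion used in this purification step can be illustrated with Cronbach’s alpha, computed directly from an item-response matrix. The sketch below uses synthetic data and a hypothetical four-item subscale; it illustrates the statistic only and is not the authors’ actual analysis:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic illustration: four items driven by one latent factor,
# so the hypothetical 'subscale' should show high internal consistency.
rng = np.random.default_rng(42)
latent = rng.normal(size=(300, 1))
responses = latent + 0.6 * rng.normal(size=(300, 4))
alpha = cronbach_alpha(responses)   # high (around .9) for these data
```

In a purification step of this kind, items whose removal would raise alpha, or which load on an unexpected factor in the EFA, are candidates for elimination.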
In a third step, scale confirmation, we collected data from a new sample to statistically test and confirm the dimensionality of the DC construct as identified in the EFA by means of a second-order confirmatory factor analysis (CFA), and to test criterion validity against performance measures. We expected (i) a good model fit of the factor structure that resulted from the EFA also in the second-order CFA, and (ii) positive regression coefficients of moderate magnitude for business and innovation performance criteria, due to the indirect nature of the relationship between DC and firm performance.
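The second-order structure tested in this step can be written down as a standard measurement-model specification. The sketch below expresses such a model in lavaan/semopy-style syntax inside a plain Python string; the item labels (x1 to x14) and their allocation to the three subscales are placeholders, not the authors’ actual item numbering:

```python
# Illustrative second-order CFA specification in lavaan/semopy-style syntax.
# Item names x1..x14 and their split across subscales are placeholders;
# fitting it would require a SEM package (e.g., semopy or R's lavaan).
model_desc = """
sensing      =~ x1 + x2 + x3 + x4
seizing      =~ x5 + x6 + x7 + x8 + x9
transforming =~ x10 + x11 + x12 + x13 + x14
DC           =~ sensing + seizing + transforming
"""
```

Here the three first-order factors load on a single second-order DC factor, mirroring the assumption that sensing, seizing, and transforming are distinct but interrelated facets of one overall construct.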
Test-theoretical considerations
In line with Di Stefano et al. (2014), we regard DC as latent capacities that manifest in observable routines and their outcomes. While previous researchers have suggested measures for DC that ask for frequencies of activities that potentially contribute to DC (e.g., ‘People participate in professional association activities’; Wilden et al., 2013: 83; the item may contribute to sensing capacity but is not a necessary condition for
it), our items measure DC more directly by either asking for the outcome of the capability
(e.g., ‘Our company knows the best practices in the market’, indicating that sensing capacity
is high), or for routines that directly indicate the existence of the capability (e.g., ‘Our
company systematically searches for information on the current market situation’, indicating
that the firm has established systematic sensing routines). Because DC constitute
organizational phenomena, not individual ones, all items are formulated in a depersonalized
way and ask for organizational instead of individual attitudes and outcomes (e.g., ‘We are
always up-to-date with market trends’ instead of ‘I am always up-to-date with market trends’).
Furthermore, rather than a dichotomous (‘have it’ or ‘have it not’) concept, DC should be understood as a matter of degree that allows for comparisons of different firms (Barreto, 2010). Hence, as in most other DC scales, our items ask for
gradual responses, not ‘yes’ or ‘no’ answers. As the answering format, we chose a six-point
Likert scale ranging from ‘strongly disagree (1)’ to ‘strongly agree (6)’.
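Under this answering format, a firm’s subscale score is simply the mean of its item responses. A minimal sketch (the responses and item counts per dimension below are invented purely for illustration):

```python
# One firm's hypothetical answers on the 6-point Likert scale
# (1 = strongly disagree ... 6 = strongly agree); item counts are illustrative.
responses = {
    "sensing":      [5, 6, 4, 5],
    "seizing":      [4, 4, 5, 3],
    "transforming": [5, 4, 4, 4, 5],
}
# Subscale score = mean of the item responses for that dimension.
subscale_scores = {dim: sum(vals) / len(vals) for dim, vals in responses.items()}
```

Note that the item-level responses, not these aggregate scores, are what enter the factor analyses; the subscale means are a descriptive summary per firm.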
Operationalization of DC
Sensing
As outlined in the theoretical considerations above, we posit that an organization with high sensing capacity is able to continuously and reliably acquire strategically relevant information from the environment, including market trends, best practices, and competitors’ activities, that is, information from outside the organization. Concepts related to sensing capacity have been termed knowledge acquisition (e.g., Jantunen, 2005) or
environmental scanning (e.g., Danneels, 2008). The systematic monitoring of the environment
increases the chances of becoming aware of upcoming markets, trends, and technology
developments and of tapping into new business areas (Daft, Sormunen, and Parks, 1988).
Thereby, sensing may take place via different channels, such as specialist literature or
participation in networks of knowledge (Danneels, 2008), and it may take place formally or informally. For the sensing items, we built on the scales of Makkonen et al. (2014), Wilden et al. (2013), and Jantunen (2005), but we adapted items to focus on established routines and outcomes. For example, we developed items asking for the extent to which the organization is up-to-date on the current market situation, or to which it systematically searches for information in its environment.
Seizing
Further concretizing Teece’s (2007) definition, we assume that the capacity for seizing within an organization is high if the organization is able to transform valuable information into concrete business opportunities that fit the organization’s strengths and weaknesses. For the seizing items, we built on the content of existing scales but, for several reasons, could not directly reuse existing
items: Naldi et al. (2014) do not use actual survey items but other proxies (e.g., number of
newsletters a firm subscribed to). Jantunen (2005), and Makkonen et al. (2014) measure
seizing but combine it with sensing. Nedzinskas et al. (2013) employ items for seizing but do
not report them. We also integrated contents from scales for measuring absorptive capacity,
such as Flatten et al. (2011), but adapted the items to the aspect of seizing. To measure
seizing, we employed items such as ‘We recognize what new information can be utilized in
our company’, or ‘Our company is capable of turning new technological knowledge into process and product innovation’.
Transforming
In line with Teece’s (2007) ideas, we assume that an organization with a high transforming capacity is able to put planned changes into practice by implementing the required structures and routines, allocating resources, and ensuring that the workforce possesses the newly required knowledge. For item generation, we again reviewed existing reconfiguration and transforming (sub-)scales. Of the scales building on Teece’s model, Nedzinskas et al.
(2013) do not report items. Hawass’ (2010) reconfiguration scale focuses on the consequences
of the transformation rather than on the transformation itself (e.g., ‘We are more successful
than competitors in diversifying into new markets by deploy [sic!] in existing technologies’;
426). Wilden and Gudergan (2015) ask how often specific activities have been carried out; none of these scales directly asks for the capacity for transforming processes and structures within an organization. One
scale that measures transforming capacity more directly is Li and Liu’s (2014) change
implementation subscale. However, their items focus on conditions for transforming (e.g., ‘We
help each other in strategic change implementation’), not on its outcome. Our transforming subscale, in contrast, asks whether decided changes are actually implemented and achieved within the organization, and includes items such as ‘Decisions on planned changes are pursued consistently in our company’, or ‘By defining
Based on the refined definitions, we developed a preliminary scale in two main stages: (1)
theory-based development of an initial item pool by a group of five researchers in the field of
DC (including three of the co-authors of this manuscript); and (2) refinement of wording, and
reduction of redundancy through expert opinions. The items were developed in German (targeting a German-speaking audience); later, the scale was translated into English.
In the strongly theory-based step (1), an initial pool of 21 items was developed by the
five researchers with the aim to formulate indicators that would reflect each of the three DC
dimensions. This phase required extensive engagement with the DC literature, and with
existing DC scales. When formulating the items, the following challenges had to be
addressed: (a) The items should be generic in the sense that they could be applied to a broad
range of industries and thus could not include industry-specific aspects (such as regularly
visiting trade fairs, as an aspect of sensing), while at the same time remain understandable and
answerable; (b) The scale should be economical in that it should contain only a few items per
subscale (and thus could not include every single aspect of a DC dimension), while at the
same time reflecting the DC dimension well. Regarding inter-rater agreement, only those items were retained on which all five researchers agreed that they reflected an important aspect of the respective DC dimension.
The aims of phase (2) were to augment content validity (Rossiter, 2008), enhance
comprehensibility, and minimize perceived redundancy of the items developed in the first stage. A first walkthrough was carried out with an experienced manager from a large firm in the IT industry. In a simplified version of the cognitive lab technique (Wilson, 2004), the manager was asked to express his spontaneous associations with each item and to comment on its comprehensibility and relevance. The results were documented and formed the basis of another round of discussion
amongst the five researchers. In the course of this discussion, the items were slightly
rephrased based on the manager’s feedback regarding comprehensibility, thereby ensuring that
they were still in line with theoretical assumptions (e.g., ‘In our company, anything new is
easily transformed into ideas for changes’ was rephrased into ‘Our company is capable of
turning new technological knowledge into process and product innovation’). Moreover, items
that, according to the manager, were asking for the same information were re-considered and
merged, if necessary, reducing the overall number of items to 19. Then, a second walkthrough
was carried out with another experienced manager, the CEO of a small business, who again
provided feedback on the items by ‘thinking aloud’. Also, the second manager’s responses
were documented for further discussion amongst the team of researchers. The subsequent
final round of discussion led to further slight rephrasing and to a merging of items that asked for the same information (e.g., ‘We do not miss new developments that could have an influence on our business’ and ‘Our company is up-to-date on the current market situation’), resulting in the preliminary 16-item scale.
Data collection
The purpose of the scale purification study was to explore the dimensionality (i.e. the
presumed three-factor structure) and internal consistency (i.e. whether all items load high on
the intended sub-scales, and low on other sub-scales) of our scale. Specifically, we tested whether all items met the strict test-theoretical criteria we employed, so that the scale could be purified if needed. At this stage, the following test-theoretical considerations underlay
our sampling strategy: (a) the scale should basically be applicable to firms in all industries; (b)
the scale should be applicable regardless of firm size. Moreover, if there is large variance in
the data, especially variations in configurations of capabilities, a factor analysis reveals more
robust factors. Thus, a sampling strategy was chosen that should ensure maximum variance in
the data, both with regard to industries and with regard to firm size.
Variables
We developed an electronic questionnaire which included the 16 items of the three sub-scales
sensing, seizing, and transforming (see Table 2). Moreover, we queried the firm size, firm age, industry, and the position of the respondent.
For the pilot study, we had the opportunity to distribute an electronic version of the
preliminary scale to 100 CEOs of randomly selected firms (50 small and 50 large firms) in
each of the seven sections of the Austrian Federal Economic Chamber (i.e., in total 700
firms). Each recipient in this pilot study was asked to complete the survey and to forward it to
other persons in leading positions in their own industry network. This snowball-sampling technique was intended to increase the number of responses within sectors through invitations from peers. While the drawback was that we could not track the number of forwards (and hence could not compute an exact response rate), the procedure yielded 269 completed responses.
Of these 269 questionnaires, 29.6 percent had been completed by CEOs, 35.9 percent by
senior executives, and the remaining 34.6 percent by other staff members in leading positions.
Approximately half (51.1 %) of the companies had fewer than 250 employees. Most
respondents (79.4 %) worked for a company that was 10 or more years old. The distribution
within the industrial sectors of the Austrian Federal Economic Chamber of those who
indicated their industries was as follows: 11.0 percent bank and insurance, 13.3 percent crafts
and trades, 11.9 percent commerce, 22.5 percent industry, 25.7 percent information and
consulting, 11.9 percent tourism and leisure industries, and 3.7 percent transportation and
communications. Overall, the goal of the sampling strategy to maximize variety across firm sizes and industries was achieved.
To investigate the factorial structure of the scale, we employed an EFA with principal
component analysis, an extraction method that uncovers the pattern of inter-item relationships
(Thompson, 2004). The EFA was employed to reconstruct the underlying structure of the DC construct and to examine whether the empirical structure deviated substantially from the intended structure (i.e., whether items loaded on factors other than the theoretically assumed ones, or on multiple factors). Only factors
with an eigenvalue greater than 1 were considered, to ensure that each factor explained more
variance than any single item. Oblique rotation was used to achieve better interpretability.
This rotation is indicated when factors are expected to be correlated (Thompson, 2004), as we expected the DC dimensions to be.
The principal component analysis yielded three distinct factors – sensing (SE), seizing
(SZ), and transforming (T) – accounting for approximately 66 percent of the inter-item
variance. Table 2 displays all loadings above 0.3 in the EFA. With the exception of two items that showed factor cross-loadings (SE6 and T6), each item loaded high on
the intended factor and not on other factors. All items without cross-loadings reached factor
loadings above 0.6, with 0.5 being the recommended cut-off value for exploratory research
(Nunnally, 1978). Altogether, these findings reveal that the scale has good factorial validity.
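The extraction logic described above (principal components of the inter-item correlation matrix, retaining components with eigenvalues greater than 1) can be sketched as follows. The data are simulated purely for illustration and do not represent the study's survey responses; the oblique rotation step is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated responses: 300 respondents x 6 items forming two correlated blocks
# (illustrative data only, not the study's survey responses).
latent = rng.normal(size=(300, 2))
noise = 0.5 * rng.normal(size=(300, 6))
items = np.hstack([latent[:, [0]].repeat(3, axis=1),
                   latent[:, [1]].repeat(3, axis=1)]) + noise

R = np.corrcoef(items, rowvar=False)      # inter-item correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)      # eigh: R is symmetric
order = np.argsort(eigvals)[::-1]         # sort eigenvalues descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = int(np.sum(eigvals > 1.0))    # Kaiser criterion: eigenvalue > 1
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
```

With two latent dimensions driving the six simulated items, the Kaiser criterion recovers exactly two components, and the unrotated loading matrix has one column per retained component.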
Then, we purified the preliminary scale by removing the two items with cross-loadings
from the subscales. Of the remaining 14 items, five measured sensing, four seizing, and five
transforming. To assess the reliability (internal consistency) for each of the subscales and the
overall scale, we calculated Cronbach’s alpha (Cronbach, 1951) coefficients (Table 3).
An alpha above 0.8 is regarded as good and an alpha above 0.9 as excellent; because all alpha coefficients of the subscales range from 0.83 to 0.88 and the overall alpha coefficient is 0.91, the reliability of the scale can be considered good to excellent. As
expected, pairwise subscale inter-correlation coefficients were moderate to high, ranging from
0.49 to 0.66 (Table 3; all coefficients are significant with p < 0.01). These findings indicate
that the subscales for sensing, seizing, and transforming measure highly related constructs.
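Cronbach's alpha can be computed directly from raw item scores; the subscale below is simulated for illustration and does not correspond to any of the reported coefficients:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items score matrix."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)       # variance of each item
    total_var = X.sum(axis=1).var(ddof=1)   # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Illustrative subscale: five items driven by one latent score plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))
subscale = latent + 0.5 * rng.normal(size=(300, 5))
alpha = cronbach_alpha(subscale)
```

Because the five simulated items share a common latent driver, the resulting alpha falls in the "excellent" range discussed above.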
The purpose of the scale confirmation study was twofold. First, we wanted to cross-validate
and confirm the factor structure of our DC scale on a new sample (factorial validity). Second,
because DC have been closely related with strategic renewal, which again is a predictor for
business performance, we aimed at testing whether the DC subscales would be able to predict
diverse performance outcomes (criterion validity). While the sampling strategy in the scale purification study aimed at maximum variance, in the scale confirmation study we targeted firms facing rather high environmental dynamism and, hence, an augmented need for strategic renewal – those firms for which the DC concept is particularly relevant.
Variables
In addition to the 14 items of the DC scale, and the firm size, firm age, industry and the
position of the respondents (at the end of the scale), our electronic questionnaire in the scale
confirmation study contained items measuring business and innovation performance and
international orientation.
Business performance was assessed with 12 standard items based on Hult, Hurley, and
Knight (2004), Wiklund and Shepherd (2005), and Ottenbacher (2007), with three items each addressing market performance (e.g., attraction of new customers; opening of new markets), financial, employee-related, and customer-related performance. All items were appraised relative to the most important competitors on a six-point Likert scale ranging from
‘much worse than the most important competitors’ (1) to ‘much better than the most important competitors’ (6).
Innovation performance was measured with five standard items (I1: Percentage of sales
from innovations introduced on the market within the past three years; I2: Percentage of
profits from innovations introduced on the market within the past three years; I3: Number of
innovations introduced on the market within the past three years; I4: Percentage of innovation expenditure in R&D; I5: Percentage of costs saved by implementing process innovations), as suggested by Dömötör, Franke and Hienerth (2007). I1 and I2 were queried on a ten-point
scale ranging from 0 to 100 percent; I3, I4, and I5 were assessed on six-point scales with
constant intervals.
For validation purposes and potential future research, we also included items measuring
the firm’s international orientation, which could later serve as a marker variable to test for common method bias. International orientation was measured with four standard items (Knight and Kim, 2009)
asking about the firm’s mission, focus, culture, and resources with regard to international
markets (answers ranging from ‘strongly disagree (1)’ to ‘strongly agree (6)’).
Data collection
To reach firms from innovative industries, we reviewed the European standard industry classification NACE (Nomenclature statistique des activités économiques dans la Communauté européenne) together with the results of the Eurostat community innovation survey (CIS) 2014 (Eurostat, 2014) and selected sub-classes of industries facing rather high environmental dynamism and containing many innovative enterprises, namely classes C to K, M, and Q. Based on contact information in the standard Austrian firm database Aurelia, we contacted 5229 companies within these sub-sections via e-mail. We specifically addressed the CEOs and other persons in leading positions.
We received 307 fully completed questionnaires, a response rate of 5.9 percent. Although rather low, this response rate is of similar magnitude as that of other, comparable DC studies, and the absolute sample size exceeds theirs (e.g., Wilden et al., 2013, received 228 fully
completed surveys, a response rate of 8.3 percent; Nedzinskas et al., 2013, obtained 360
responses through phone calls; Hawass, 2010 had a response rate of 13.8 percent, an absolute
sample size of 83 surveys). The slightly lower response rate compared to the other studies is
plausible given that in our case most of the e-mail addresses in the Aurelia database were
office addresses (and not the personal e-mail addresses of the CEOs etc.), and that we did not send reminders.
Of our 307 fully completed questionnaires, 188 had been filled by CEOs, 97 by senior
executives, and 22 by other staff members in leading positions. Most (84.7 %) of the
companies had fewer than 250 employees, and most (85.7 %) of the respondents worked for a
company that was ten or more years old. The distribution across industries was as follows:
18.6 percent belonged to the manufacturing industry (NACE category C), 1.0 to energy
supply (D), 1.0 to water supply (E), 15.0 to construction (F), 10.1 to wholesale and retail trade
(G), 6.8 to transportation and storage (H), 8.5 to accommodation and food services (I), 10.1 to
information and communication industries (J), 3.6 to financial and insurance branches (K),
17.3 to professional scientific and technical activities (M), and 8.1 to human health and social work activities (Q).
We compared the distribution across size, age, and industries of firms in our sample
with characteristics of the population in the Aurelia database. The non-significant χ² test
reveals that our sample is representative of the population regarding firm size (χ² = 1.96, df = 1, p = 0.194): Of the
originally contacted firms, 81.2% of the firms had fewer than 250 employees. Regarding firm
age, the test (χ² = 6.72, df = 1, p = 0.010) indicates a rather small deviation from the
distribution in the population: Of the contacted firms, 91.2% were ten or more years old.
Regarding the distribution across industries, the corresponding χ² test was significant (χ² = 79.70,
df = 10, p < 0.001) due to deviations from the original population in two (out of eleven
surveyed) industries: In our sample, the information and communication industries are
overrepresented (only 4.6% of the contacted firms were classified as J), while the
manufacturing industries are underrepresented (35.1% of the contacted firms were classified
as C). A likely reason is that firms in the information and communication industries are more willing than average firms to participate in electronic surveys, while manufacturing firms are less willing to do so. In summary, we assume that the structure
of the sample reflects the structure of the contacted firms fairly well regarding the distributions of firm size, firm age, and industry.
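The representativeness checks follow the familiar χ² goodness-of-fit logic; the sketch below uses hypothetical rounded counts (not the study's exact data) against the reported population proportion of small firms:

```python
import numpy as np
from scipy.stats import chisquare

# Observed firm-size counts in a sample of 307 (hypothetical rounded numbers)
# versus the proportion of small firms (81.2 %) among the contacted population.
observed = np.array([260, 47])                        # < 250 vs >= 250 employees
expected = np.array([0.812, 0.188]) * observed.sum()  # expected under population
stat, p = chisquare(observed, f_exp=expected)
# p > 0.05 would indicate no detectable distortion on this variable.
```

A non-significant result (statistic below the 3.84 critical value for df = 1) supports the conclusion that the sample mirrors the contacted population on the tested characteristic.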
To confirm the factor structure, we conducted a confirmatory factor analysis (CFA), testing the theoretical assumptions that (a) the items of our scale loaded on the three factors identified in the EFA (sensing, seizing, and transforming) and (b) the three capacities together constitute the overarching DC construct. Model fit was evaluated using descriptive (normed fit index, NFI; comparative fit index, CFI; incremental fit index, IFI; Tucker-Lewis Index, TLI; standardized root mean squared residual, SRMR) and inferential statistics (χ² and root mean square error of approximation, RMSEA). With NFI = 0.94, CFI =
0.97, IFI = 0.97, TLI = 0.96, and SRMR = 0.04, the descriptive indices reveal an adequate fit
(Bentler and Bonett, 1980; Hu and Bentler, 1999, Marsh, Wen, and Hau, 2004; Thompson,
2004). In addition, the inferential statistics indicate a reasonably good model fit (χ² = 158.18,
df = 76, p < 0.001, χ²/df = 2.08; Byrne, Shavelson and Muthén, 1989). Also, the RMSEA value
is 0.06 (p-close = 0.07), as recommended for a close inferential model fit (Browne and
Cudeck, 1992).
Figure 1 displays the factor loadings of each item and squared multiple correlations of
the three first-order factors. As observed from Figure 1, all composite reliabilities (CR) are
0.85 or higher. The squared multiple correlations (R²) of the three dimensions and the
indicator reliabilities for each item exceed the recommended minimum of 0.4 (Bagozzi and
Baumgartner, 1994; Figure 1). Altogether, the findings confirm the theoretically assumed
structure in the data and imply a high factorial validity of our scale.
Table 4 shows the alpha reliabilities (Cronbach, 1951) of the subscales and the subscale
inter-correlations. Alpha coefficients are 0.84 (sensing), 0.84 (seizing), and 0.87
(transforming), indicating high internal consistencies. As in the scale purification study, the overall alpha coefficient is 0.91, and the pairwise coefficients for subscale inter-correlations are high (ranging from 0.50 to 0.70), indicating that the subscales
for the sensing, seizing, and transforming capacities measure distinct, but strongly related
constructs.
Regarding internal convergent validity, the average variance extracted (AVE) per factor, ranging from 0.59 to 0.63, exceeds the recommended value of 0.5 for good convergent validity
(Fornell and Larcker, 1981), suggesting that the items assigned to one and the same factor
measure one underlying construct. To establish internal discriminant validity, Fornell and Larcker (1981) suggest that, for any two constructs, the AVEs should be greater than the
shared variance obtained from the square of the correlation between these two constructs. As
can be seen from Table 4, the sub-dimensions of the DC scale fulfill this criterion, because all
AVEs are higher than any squared correlation between the sub-dimensions. The sub-dimensions of the DC scale thus also show good internal discriminant validity.
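The indices used here (composite reliability, AVE, and the Fornell-Larcker criterion) can be computed from standardized loadings. The loading values and the inter-factor correlation below are illustrative placeholders, not the estimates from Figure 1 or Table 4:

```python
import numpy as np

def composite_reliability(loadings):
    """CR from standardized loadings, assuming uncorrelated error terms."""
    lam = np.asarray(loadings, dtype=float)
    s = lam.sum() ** 2
    return float(s / (s + np.sum(1.0 - lam ** 2)))

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    lam = np.asarray(loadings, dtype=float)
    return float(np.mean(lam ** 2))

# Hypothetical standardized loadings for two subscales (illustrative values).
sensing = [0.75, 0.80, 0.78, 0.76, 0.79]
seizing = [0.77, 0.81, 0.74, 0.79]
r = 0.70  # hypothetical inter-factor correlation

# Fornell-Larcker criterion: every AVE must exceed the squared correlation.
discriminant_ok = min(ave(sensing), ave(seizing)) > r ** 2
```

With these loadings, both AVEs exceed 0.5 (convergent validity) and the squared correlation of 0.49 (discriminant validity), mirroring the pattern reported for the actual scale.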
Even though we were striving for maximum rigor during the process of scale development,
our approach to data collection contains the possibility of biases due to systematic
measurement error, because our findings are based on self-reported measures, and all
information for one case (i.e. one firm) stems from one respondent (Podsakoff, MacKenzie,
and Podsakoff, 2003). To test for common method bias, we employed a CFA Marker
technique (Richardson et al., 2009; Williams, Hartman, and Cavazotte, 2010) based on
structural equation modeling, with international orientation (IO) as marker variable. One
major advantage of this technique is that it allows evaluating method variance on both model-
and item-level. In the marker-based procedure, a baseline model, where the method factor
loadings are forced to be zero (i.e. a model without method factor loadings), is compared with
a constrained model, where the method factor loadings are constrained to have equal values,
and an unconstrained model, where the method factor loadings are estimated. The result of
the χ² difference test between the baseline model and the constrained model reveals significant
differences (χ²dif = 98.67, df = 1; p < 0.001, n = 187), indicating the existence of a method
factor. The comparison of the constrained model and the unconstrained model (χ²dif = 147.10, df = 14; p < 0.001, n = 187) shows that the impact of the marker variable is not equal across
items (loadings are ranging between 0.18 and 0.38). Of the three models, the unconstrained
model accounts best for marker variance related to the items of the scale (model fit: χ² =
191.44, df = 126; χ²/df = 1.52, p < 0.001, CFI = 0.97, RMSEA = 0.05, n = 187). Despite the
method factor in the unconstrained model, all items also load significantly on the DC
constructs (sensing, seizing, transforming) they intend to measure, and the model yields a
good overall model fit. The median amount of method variance attributable to the marker
variable is 9 percent, but the item loadings on the DC dimensions are still very high (ranging
from 0.60 to 0.80). Altogether, these results seem more than acceptable, as measures in
psychology, sociology, marketing, business, and education have been found to contain on
average even about one quarter (26 percent) of method variance (Cote and Buckley, 1987).
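The model comparisons in the marker technique rest on χ² difference (likelihood-ratio) tests between nested models. A minimal sketch follows; the fit values are illustrative, not the estimates reported above:

```python
from scipy.stats import chi2

def chi2_difference_test(chi2_restricted, df_restricted, chi2_free, df_free):
    """Likelihood-ratio (chi-square difference) test for nested SEM models.
    The restricted model (e.g., method loadings fixed to zero or constrained
    to be equal) has the larger chi-square and the larger degrees of freedom."""
    diff = chi2_restricted - chi2_free
    df_diff = df_restricted - df_free
    return diff, df_diff, chi2.sf(diff, df_diff)

# Illustrative values, not the fit statistics reported in the text.
diff, df_diff, p = chi2_difference_test(250.0, 140, 191.4, 126)
```

A significant p-value here means the restriction worsens fit, i.e., the freer model (with estimated method loadings) should be retained.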
To test for non-response bias, we carried out a median split of early and late respondents
(independent variable) and calculated t-tests with sensing, seizing, and transforming
(dependent variables). The tests revealed non-significant effects for sensing and seizing, and a
significant effect of small effect size (Cohen’s d = 0.21; Cohen, 1988) for transforming
(t = 1.99; p < 0.05), indicating that early respondents had slightly higher values (M = 4.45;
SD = 0.85) than late respondents (M = 4.28; SD = 0.80) on this one sub-scale. From these results, we conclude that non-response bias does not pose a serious threat to our findings.
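The non-response check can be sketched as follows; the group sizes and distribution parameters are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.stats import ttest_ind

# Simulated subscale means for early vs. late respondents after a median split
# (group sizes and parameters are illustrative, not the study's data).
rng = np.random.default_rng(0)
early = rng.normal(4.45, 0.85, size=150)
late = rng.normal(4.28, 0.80, size=150)

t, p = ttest_ind(early, late)                         # independent-samples t-test
pooled_sd = np.sqrt((early.var(ddof=1) + late.var(ddof=1)) / 2)
cohens_d = (early.mean() - late.mean()) / pooled_sd   # standardized difference
```

Cohen's d expresses the early/late difference in standard-deviation units, so even a statistically significant effect can be judged small in practical terms, as in the transforming subscale above.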
Criterion validity
To test the criterion validity of our scale, we employed the mean values of the sensing, seizing, and transforming subscales as predictors of business and innovation performance in linear regression models (for descriptive statistics, see Table 4). We conducted ordinary least squares regression analyses (Gelman and Hill, 2007), entering the demographic control variables (firm age and firm size) in a first block and the three DC dimensions in a second block.
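The two-block (hierarchical) regression logic can be sketched as follows; the simulated effect sizes are illustrative only and are not taken from Table 5:

```python
import numpy as np

def adjusted_r2(X, y):
    """Adjusted R-squared of an OLS fit with intercept."""
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta
    r2 = 1.0 - resid.var() / y.var()
    n_obs, k = Xc.shape
    return 1.0 - (1.0 - r2) * (n_obs - 1) / (n_obs - k)

# Simulated data: the DC dimensions affect performance, demographics do not.
rng = np.random.default_rng(0)
n = 300
firm_age, firm_size = rng.normal(size=n), rng.normal(size=n)
sensing, seizing, transforming = rng.normal(size=(3, n))
performance = (0.2 * sensing + 0.2 * seizing + 0.25 * transforming
               + rng.normal(scale=0.8, size=n))

block1 = adjusted_r2(np.column_stack([firm_age, firm_size]), performance)
block2 = adjusted_r2(np.column_stack([firm_age, firm_size,
                                      sensing, seizing, transforming]),
                     performance)
```

The increase in adjusted R² from the demographics-only block to the full block quantifies the incremental explanatory power of the DC dimensions.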
As Table 5 shows, the regression model for business performance explains 33 percent (adjusted R²) of the variance in the aggregate index of business performance, and it also explains substantial variance in the single performance facets.
Of the demographic variables, the firm’s age has a significant, slightly negative effect on aggregate (b = -0.04; p < 0.01), market (b = -0.06; p < 0.01), and financial performance (b = -0.06; p < 0.05), indicating that as the firm’s age increases, its overall, market, and financial performance decrease in comparison with its most important competitors. The firm’s size has a small negative effect on customer-related performance (p < 0.05), indicating that as the firm’s size decreases, customer-related performance increases. However, the b weights for both firm age and firm size are very low, indicating that, in all, firm age and size do not seem to have much predictive power for business performance.
Regarding the DC dimensions, sensing capacity significantly predicts overall business performance (b = 0.18; p < 0.01), market performance (b = 0.20; p < 0.01), financial performance (b = 0.26; p < 0.01), and customer-related performance (b = 0.15; p < 0.05). Seizing capacity shows high predictive power for overall business performance (b = 0.19; p < 0.01), market performance (b = 0.17; p < 0.05), financial performance (b = 0.20; p < 0.05), and employee-related performance (b = 0.21; p < 0.01). Transforming capacity has a significant positive effect on aggregate business performance (b = 0.23; p < 0.001), employee-related performance (b = 0.37; p < 0.001), market performance (b = 0.23; p < 0.001), and a slightly significant effect on customer-related performance (b = 0.18; p < 0.05). The overall model explains substantial variance, and all subscales (sensing, seizing, and transforming) contribute to the prediction of business performance.
The regression model for innovation performance explains between 12 and 20 percent
(adjusted R²) of the variability in different innovation performance aspects (Table 5).
Regarding demographic variables, the firm’s size is a positive predictor of the sales revenue
from new products (I1, b = 0.15; p < 0.05) and the percentage of costs saved by implementing
process innovations (I5, b = 0.14; p < 0.001): as the firm’s size increases, so do the sales
revenue from new products and the percentage of costs saved through process innovations.
The firm’s age has a highly significant negative effect on the percentage of the 2013 sales revenue from innovations (I1, b = -0.30; p < 0.001), the 2013 profit from innovations (I2, b = -0.24; p < 0.001), the percentage of R&D expenditure (I4, b = -0.21; p < 0.001), and the percentage of costs saved within one year by implementing process innovations in 2013 (I5). Regarding the DC dimensions, seizing capacity is highly positively associated with all innovation performance indicators: It predicts the
percentage of sales revenue from innovations introduced (I1; b = 0.63; p < 0.001), profits from innovations introduced (I2; b = 0.61; p < 0.001), the number of innovations introduced on the market (I3; b = 0.51; p < 0.001), the percentage of innovation expenditure in R&D (I4; b = 0.67; p < 0.001), and the percentage of costs saved by implementing process innovations (I5; b = 0.29; p < 0.05): as firms improve at deciding which innovations are compatible with the firm, they spend more on R&D, introduce more innovations, gain more sales revenue and profit based on innovation, and reduce costs through process innovation. Surprisingly, transforming capacity explains no variance in innovation performance (none of its coefficients is significant).
Sensing capacity even has a negative effect on both the percentage of sales (I1; b = -0.34; p < 0.05) and the percentage of profit (I2; b = -0.33; p < 0.05) from innovations. One possible explanation for this unexpected finding is that sensing capacity, as operationalized in our scale, measures responsive market orientation rather than proactive market orientation (Narver, Slater, and MacLachlan, 2004). Following this line of argument, firms with a high awareness of market trends, customer needs, and so forth may be more prone to adopt existing market solutions than to develop genuinely new offerings.
Overall, these findings also indicate the criterion validity of the DC scale with regard to innovation performance: The employed regression model explains substantial variance in the innovation performance indicators, and particularly the dimension of seizing seems to be strongly positively related to multiple innovation outcomes.
DISCUSSION
The aim of our research was to develop a reliable and valid scale for measuring DC as conceptualized by Teece (2007). We will now discuss (a) the psychometric quality of the developed scale, (b) implications for developing the concept of DC further, and (c) benefits and drawbacks of our scale compared to other instruments.
Even if it may have to be developed further to become a standardized measure, our DC scale already meets all psychometric standards in its current form: both the overall scale and its subscales showed high reliability (alpha coefficients) in two different samples. Validity was also high; the theoretically assumed three-factor structure was identified in the exploratory factor analysis (scale purification step) and confirmed through cross-validation in a new sample (scale confirmation step). Finally, the suggested DC scale showed solid criterion validity, predicting various business and innovation performance criteria.
In both the scale purification and scale confirmation study, we found high pairwise
inter-correlations among the sensing, seizing, and transforming subscales. Although this could
have indicated a threat to construct validity (i.e., the subscales may not measure distinct
dimensions of DC), the findings in the regression analyses clearly demonstrate that the
subscales predict different facets of business and innovation performance. Hence, it can be
concluded that the subscales measure different, analytically distinct but functionally related
aspects of one and the same concept—as theoretically assumed in the DC literature.
Like all empirical research, our study has several limitations. One limitation, identified in the scale confirmation study, is that assessing DC with our scale – as holds for most scales in social research – involves a certain extent of method variance (common
method bias). The degree of common method bias – 9 percent in our study – however, seems
acceptable given an estimated method bias of 26 percent in other areas of social research
(Cote and Buckley, 1987); the tested model still yields a good fit as specified by all relevant
indices, and all items still load substantially on the intended DC dimensions.
A second limitation is that the external performance criteria in the scale confirmation study are based on survey data instead of hard financial facts, such as
actual figures on R&D expenditure or sales revenue from innovations. Even though previous
research has found high correlations between self-assessed and objective performance data
(e.g., Dess and Robinson, 1984; Wall et al., 2004), to increase the validity of the measures for
these external criteria, future scale validation activities should seek ways to include more objective data sources.
One further potential limitation that we share with similar studies in the area of DC
(e.g., Wilden et al., 2013) is that of a small response rate (5.9 percent) in the scale
confirmation study. The means and standard deviations we obtained may be biased in the
sense that they only reflect the views of persons who are interested in DC and strategic
renewal, or survey research. Although they may suffice for construct validation purposes, our
data may not be useful for drawing conclusions on the ‘true’ extent of DC of each firm
involved, and the means and standard deviations may not be representative of the whole population. As reported above, manufacturing firms (C) are underrepresented in our sample, while information and communication industries (J) are overrepresented, and older firms are slightly overrepresented. To standardize the scale (e.g., in order to diagnose high vs. low DC)
and to provide information on expected values, confidence intervals, and so forth, further
research is required that involves a sample that is not biased through self-selection.
Building on earlier work, in their extensive review, Schilke et al. (2018: 416) came to the
conclusion that ‘[n]ow may also be a good time to move beyond these established procedural
typologies and enrich the dynamic capabilities framework with additional organizational
processes that may have been previously overlooked’. We are convinced that scale development can contribute to such enrichment: the operationalization of a concept into concrete items requires clear, consistent definitions; vice versa, empirical testing of the factor structure of a scale, and of the correlations of its
subscales with other variables, may provide starting points for conceptual refinement. Our
research serves as an example of this mutual relationship between test construction and theory
development. The most salient result is that Teece’s theoretically assumed DC dimensions
were empirically shown to form three different factors that predict different business
performance criteria. Furthermore, our findings point to several potential avenues for further
refining Teece’s (2007) framework. For example, as mentioned above, one interpretation of sensing being negatively related to sales revenues and profits from product innovation is that the operationalization we chose measures responsive market orientation rather than proactive market orientation (Narver et al., 2004) and, using Wang and Ahmed’s (2007) terminology, that firms that are well aware of market trends and customer wishes might develop an adaptive rather than an innovative capability. Future refinements could therefore include items measuring proactive market orientation, for instance, by asking about the awareness of future business opportunities and latent customer needs, rather than current ones.
Seizing capacity was found to be positively associated with all performance indicators, with the highest beta weights for the percentage of sales from innovations, the percentage of profits from innovations, and innovation expenditure in R&D. Amongst the DC
facets, seizing had the highest predictive power with regard to all measured performance
indicators. This seems plausible, as seizing is the capacity that is most closely linked to a
strong strategy and is also in line with Teece’s (2007) view of seizing as the capacity for
making investment decisions based on business opportunities that fit the firm’s strengths.
Transforming capacity, in contrast, was not related to the innovation performance indicators. This result may be due in part to our items’ focus on the
capacity for transforming organizational processes and structures but not on the
transformation of human resources in the sense of employee knowledge and skills, which
might have been a more effective predictor of innovation performance. The aspect of human resource transformation could thus be integrated into future versions of the scale. Notably, as firms become better at transforming, employee satisfaction and
commitment, as well as customer satisfaction, increase. The validity of these relationships and their causal direction – for example, whether a more satisfied staff is better at implementing planned changes or whether the successful implementation of planned changes increases staff satisfaction – remains to be examined in future research. Furthermore, some scholars have argued that Teece’s (2007) sensing concept may have an external and an
internal aspect; internal sensing would be required to identify the need (or opportunities) for
adaptation and change based on information from within the firm. The necessity of reflection
and articulation of implicitly held knowledge for learning, continuous optimization, and
renewal has also been recognized by strategic management scholars (e.g., Hodgkinson and
Healey, 2011), especially in the context of strategic alliances (e.g., Kale and Singh, 2007), and
in the context of developing multiple capabilities at the same time (Bingham, Heimeriks,
Schijven, and Gates, 2015). Integrating the aspect of reflection into Teece’s model could be another promising avenue for refining the framework.
To date, according to Schilke et al. (2018), about one third of DC studies have employed
survey scales. However, as we have outlined, Wilden et al.’s (2013) scale is the only
instrument for measuring DC as conceptualized by Teece (2007) that has been developed in
line with the highest standards of scale development. In comparison with our own scale,
Wilden et al.’s (2013) scale has the benefit that it may reduce social desirability bias, because it asks about the frequency of concrete activities rather than for evaluative self-assessments.
Hence, our DC scale can be seen as a complement to Wilden et al.’s scale: Even if the scale
developments varied in their detailed procedures, Wilden et al.’s and our scale are comparable
with regard to methodological rigor, sample size, and also with respect to the high psychometric
quality criteria they both achieved. Both scales have also been successfully applied to show
relationships between DC aspects and various kinds of performance. The presence of two DC
scales that take two perspectives – a frequency/activity oriented perspective (Wilden et al.’s
scale) and an outcome-oriented perspective (our scale) – now opens up multiple avenues for
triangulation in future research. For example, one interesting next step would be to investigate
the link between DC activities, DC outcomes, and various kinds of firm performance.
Approaches other than surveys have also been employed to measure DC. For example, scholars have used proximal measures as indicators of DC as a remedy for potential informant biases (above all, social desirability). In this context, Stadler et al. (2013) looked at the level of certain oil drilling capabilities to infer an oil firm’s level of ‘technological sophistication’, which then served as a proxy for DC; the rationale behind this proxy is the assumption
that a high level of technological sophistication would imply a high level of DC. Another
recent example is Girod and Whittington’s (2017) study, which used proxies for measuring the reconfiguration activities of the firms under consideration (e.g., number of additions of new business units, number of re-combinations of units, splits of business units) and treated the number of reconfigurations as an indicator of DC rather than employing a survey scale. Clearly, the main advantage (in both cases where proxies were used) is that they
provide an objective, unbiased picture of ‘what is going on’ within an organization, regardless
of (socially) desirable outcomes. Besides the fact that such data are hard to collect in many
cases, one disadvantage is that proxies can be highly specific to the firm or industry (as in
Stadler et al.’s 2013 case), thereby limiting their applicability as a measure in other
businesses. Another potential limitation is that they may measure DC very indirectly. To take
Girod and Whittington’s (2017) example, frequently changing a business’s structure may
require (and thus indicate) transforming capacity; nevertheless, as Arend and Bromiley (2009) have cautioned, such changes may also occur without any underlying dynamic capability. In sum, all of these measurement approaches – activity-oriented surveys (e.g., Wilden et al., 2013), proxy-based measures (e.g., Stadler et al., 2013; Girod and Whittington, 2017), and our outcome-oriented survey – have their benefits and drawbacks,
and none of them will be the ‘answer to everything’. Instead, as suggested by Schilke et al. (2018), mixed-method approaches (e.g., combining proxy-based studies with survey-based and other measures) should be applied to take advantage of each method’s strengths while balancing their limitations. Moreover, future methodological research could also investigate
correlations between different types of instruments, to work towards standard measures in the
area of DC.
The starting point of our research was the claim for a more standardized approach to
measuring DC. As yet, no agreed-upon scale exists for measuring dimensions of DC, and hardly any of the scales employed in quantitative studies so far have been developed in line with rigorous standards of scale development. We have addressed this gap by providing a first version of a DC scale based on Teece’s (2007) model of sensing, seizing, and
transforming, focusing on outcomes of latent capacities (e.g., how good is the firm at sensing
new trends) instead of frequencies of activities (e.g., how and how often do they search the
environment). Because of the cross-validation of the factor structure found in the exploratory
factor analysis, the rather large sample size (269 for scale purification and 307 for scale
confirmation) and the large variety of performance indicators, we conclude that the scale is
already of high psychometric quality (different forms of reliability and validity) in its current
version. Nonetheless, it should be employed and further improved in future scale development
processes.
Our work implies several avenues for future research. We discussed the outcomes from
the confirmation study against the backdrop of our concrete item operationalizations and
elaborated on possible alternative findings if other aspects of the same capacities (e.g.,
sensing, transforming) had been operationalized. These provide the basis for potential
additional aspects that could be integrated to further refine the DC scale. For example, additional facets of sensing and seizing could be reflected in the operationalizations of the respective subscales, and the aspect of transforming firm
resources (e.g., through learning and development) could be included in the transforming
subscale. Their inclusion could also contribute to a step-wise refinement of the assumptions
underlying Teece’s (2007) model. Moreover, in a next step, external construct validity should
be established by comparing the outcomes of the scale with similar scales (convergent
validity), such as Wilden et al.’s (2013) scale, or with scales measuring different but related constructs for discriminant validation, such as organizational learning or strategic renewal. Overall, the ambition behind
our research is that in the future, there will be a valid, unified scale for measuring various,
clearly specified dimensions of DC. We are confident that this approach of concretizing capacities and distinguishing different aspects may also contribute to future theory development.
REFERENCES
Flatten, T. C., A. Engelen, S.A. Zahra and M. Brettel (2011), ‘A measure of absorptive
capacity: Scale development and validation’, European Management Journal, 29, 98–
116.
Fornell, C. and D. F. Larcker (1981), ‘Evaluating structural equation models with
unobservable variables and measurement error’, Journal of Marketing Research, 18, 39–
50.
Gelman, A. and J. Hill (2007), Data analysis using regression and multilevel/hierarchical
models. Cambridge University Press: New York.
Girod, S. J. G. and R. Whittington (2017), ‘Reconfiguration, restructuring and firm performance: Dynamic capabilities and environmental dynamism’, Strategic Management Journal, 38, 1121–1133.
Hawass, H. H. (2010), ‘Exploring the determinants of the reconfiguration capability: a
dynamic capability perspective’, European Journal of Innovation Management, 13, 409–
438.
Helfat, C. E. (1997), ‘Know-how and asset complementarity and dynamic capability
accumulation: The case of R&D’, Strategic Management Journal, 18, 339–360.
Helfat, C. E., S. Finkelstein, W. Mitchell, M. A. Peteraf, H. Singh, D. J. Teece and S. G. Winter (2007), Dynamic capabilities: Understanding strategic change in organizations. Wiley-Blackwell: Oxford.
Helfat, C. E. and M. A. Peteraf (2015), ‘Managerial cognitive capabilities and the microfoundations of dynamic capabilities’, Strategic Management Journal, 36, 831–850.
Hinkin, T. R. (1995), ‘A review of scale development practices in the study of organizations’,
Journal of Management, 21, 967–988.
Hinkin, T. R. (1998), ‘A brief tutorial on the development of measures for use in survey
questionnaires’, Organizational Research Methods, 1, 104–121.
Hodgkinson, G. P. and M. P. Healey (2011), ‘Psychological foundations of dynamic
capabilities: Reflexion and reflection in strategic management’, Strategic Management
Journal, 32, 1500–1516.
Hou, J.-J. and Y.-T. Chien (2010), ‘The effect of market knowledge management competence
on business performance: A Dynamic Capabilities perspective’, International Journal of
Electronic Business Management, 8, 96–109.
Høyrup, S. (2004), ‘Reflection as a core process in organisational learning’, Journal of
Workplace Learning, 16, 442–454.
Hu, L. and P. M. Bentler (1999), ‘Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives’, Structural Equation Modeling, 6, 1–55.
Hult, G. T. M., R. F. Hurley and G. A. Knight (2004), ‘Innovativeness: Its antecedents and
impact on business performance’, Industrial Marketing Management, 33, 429–438.
Jantunen, A. (2005), ‘Knowledge-processing capabilities and innovative performance: An
empirical study’, European Journal of Innovation Management, 8, 336–349.
Jiao, H., J. Wei and Y. Cui (2010), ‘An empirical study on paths to develop dynamic
capabilities: From the perspectives of entrepreneurial orientation and organizational
learning’, Frontiers of Business Research in China, 4, 47–72.
Kale, P. and H. Singh (2007), ‘Building firm capabilities through learning: The role of the
alliance learning process in alliance capability and firm-level alliance success’, Strategic
Management Journal, 28, 981–1000.
Karimi, J. and Z. Walter (2015), ‘The role of Dynamic Capabilities in responding to digital
disruption: A factor-based study of the newspaper industry’, Journal of Management
Information Systems, 32, 39–81.
Knight, G. A. and D. Kim (2009), ‘International business competence and the contemporary firm’, Journal of International Business Studies, 40, 255–273.
Lee, P.-Y., M.-L. Wu, C.-C. Kuo and C.-S. J. Li (2016), ‘How to deploy multiunit
organizations’ dynamic capabilities?’ Management Decision, 54, 965–80.
Li, D. and J. Liu (2014), ‘Dynamic capabilities, environmental dynamism, and competitive
advantage: Evidence from China’, Journal of Business Research, 67, 2793–2799.
Lin, H.-F., J.-Q. Su and A. Higgins (2016), ‘How dynamic capabilities affect adoption of
management innovations’, Journal of Business Research, 69, 862–76.
Lopez-Cabrales, A., M. Bornay-Barrachina and M. Diaz-Fernandez (2017), ‘Leadership and
dynamic capabilities: The role of HR systems’, Personnel Review, 46, 255–276.
Ma, J. and Z. Todorovic (2011), ‘Making universities relevant: Market orientation as a
Dynamic Capability within institutions of higher learning’, Academy of Marketing
Studies Journal, 15, 1–15.
Makkonen, H., M. Pohjola, R. Olkkonen and A. Koponen (2014), ‘Dynamic capabilities and
firm performance in a financial crisis’, Journal of Business Research, 67, 2707–2719.
Mandal, S. (2017), ‘The influence of dynamic capabilities on hospital-supplier collaboration
and hospital supply chain performance’, International Journal of Operations &
Production Management, 37, 664–84.
Marcus, A. A. and M. H. Anderson (2006), ‘A general Dynamic Capability: Does it propagate
business and social competencies in the retail food industry?’, Journal of Management
Studies, 43, 19–46.
Marsh, H. W., Z. Wen and K.-T. Hau (2004), ‘Structural equation models of latent
interactions: Evaluation of alternative estimation strategies and indicator construction’,
Psychological Methods, 9, 275–300.
McKelvie, A. and P. Davidsson (2009), ‘From resource base to Dynamic Capabilities: an
investigation of new firms’, British Journal of Management, 20, S63–S80.
Mitrega, M., D. Forkmann, G. Zaefarian and S. C. Henneberg (2017), ‘Networking capability
in supplier relationships and its impact on product innovation and firm performance’,
International Journal of Operations & Production Management, 37, 577–606.
Monteiro, A. P., A. M. Soares and O. L. Rua (2017), ‘Linking intangible resources and export
performance’, Baltic Journal of Management, 12, 329–347.
Naldi, L., P. Wikström and M. B. von Rimscha (2014), ‘Dynamic Capabilities and
performance’, International Studies of Management & Organization, 44, 63–82.
Narver, J. C., S. F. Slater and D. L. Mac Lachlan (2004), ‘Responsive and proactive market
orientation and new product success’, The Journal of Product Innovation Management,
21, 334–347.
Nedzinskas, Š., A. Pundzienė, S. Buožiūtė-Rafanavičienė and M. Pilkienė (2013), ‘The impact of dynamic capabilities’, Baltic Journal of Management, 8, 376–396.
Noble, C. H. (1999), ‘The eclectic roots of strategy implementation research’, Journal of
Business Research, 45, 119–134.
Nunnally, J. C. (1978), Psychometric theory. McGraw-Hill: New York.
Ottenbacher, M. C. (2007), ‘Innovation management in the hospitality industry: Different
strategies for achieving success’, Journal of Hospitality & Tourism Research, 31, 431–
454.
Pandit, D., M. P. Joshi, R. K. Gupta and A. Sahay (2017), ‘Disruptive innovation through a
dynamic capabilities lens: An exploration of the auto component sector in India’,
International Journal of Indian Culture and Business Management, 14, 109–130.
Pandza, K. and R. Holt (2007), ‘Absorptive and transformative capacities in nanotechnology innovation systems’, Journal of Engineering and Technology Management, 24, 347–365.
Peteraf, M., G. Di Stefano and G. Verona (2013), ‘The elephant in the room of dynamic capabilities: Bringing two diverging conversations together’, Strategic Management Journal, 34, 1389–1410.
Pavlou, P. A. and O. A. El Sawy (2011), ‘Understanding the elusive black box of Dynamic
Capabilities’, Decision Sciences, 42, 239–273.
Podsakoff, P. M., S. B. MacKenzie, J.-Y. Lee and N. P. Podsakoff (2003), ‘Common method
biases in behavioral research: A critical review of the literature and recommended
remedies’, Journal of Applied Psychology, 88, 879–903.
Protogerou, A., Y. Caloghirou and S. Lioukas (2012), ‘Dynamic capabilities and their indirect
impact on firm performance’, Industrial and Corporate Change, 21, 615–647.
Raman, A. and S. S. Bharadwaj (2017), ‘Dynamic service capabilities enabling agile services’, Journal of Enterprise Information Management, 30, 166–187.
Rashidirad, M., H. Salimian, E. Soltani and Z. Fazeli (2017), ‘Competitive strategy, dynamic
capability, and value creation: Some empirical evidence from UK telecommunications
firms’, Strategic Change, 26, 333–42.
Richardson, H. A., M. J. Simmering and M. C. Sturman (2009), ‘A tale of three perspectives:
Examining post hoc statistical techniques for detection and correction of common
method variance’, Organizational Research Methods, 12, 762–800.
Rossiter, J. R. (2008), ‘Content validity of measures of abstract constructs in management and
organizational research’, British Journal of Management, 19, 380–388.
Schilke, O. (2014), ’Second-order dynamic capabilities: How do they matter?’, The Academy
of Management Perspectives, 28, 368–380.
Schilke, O., S. Hu and C. Helfat (2018), ‘Quo vadis, dynamic capabilities? A content-analytic
review of the current state of knowledge and recommendations for future research’,
Academy of Management Annals, 12, 390–439.
Shafia, M. A., S. Shavvalpour, M. Hosseini and R. Hosseini (2016), ‘Mediating effect of
technological innovation capabilities between dynamic capabilities and competitiveness
of research and technology organisations’, Technology Analysis & Strategic
Management, 28, 811–26.
Simon, A. (2010), ‘Resources, Dynamic Capabilities and Australian business success’,
Journal of Global Business and Technology, 6, 12–31.
Singh, D., J. S. Oberoi and I. S. Ahuja (2013), ‘An empirical investigation of dynamic
capabilities in managing strategic flexibility in manufacturing organizations’,
Management Decision, 51, 1442–1461.
Stadler, C., C. E. Helfat and G. Verona (2013), ‘The impact of dynamic capabilities on
resource access and development’, Organization Science, 24, 1782–1804.
Teece, D. J. (2007), ‘Explicating dynamic capabilities: The nature and microfoundations of
(sustainable) enterprise performance’, Strategic Management Journal, 28, 1319–1350.
Teece, D. J. (2014), ‘The foundation of enterprise performance: Dynamic and ordinary
capabilities in an (economic) theory of firms’, The Academy of Management
Perspectives, 28, 328–352.
Teece, D. J., G. Pisano and A. Shuen (1997), ‘Dynamic capabilities and strategic
management’, Strategic Management Journal, 18, 509–533.
Thompson, B. (2004), Exploratory and confirmatory factor analysis: Understanding concepts
and applications. American Psychological Association: Washington DC.
Townsend, D. M. and L. W. Busenitz (2015), ‘Turning water into wine? Exploring the role of dynamic capabilities in early-stage capitalization processes’, Journal of Business Venturing, 30, 292–306.
Verreynne, M.-L., D. Hine, L. Coote and R. Parke (2016), ‘Building a scale for dynamic
learning capabilities: The role of resources, learning, competitive intent and routine
patterning’, Journal of Business Research, 69, 4287–4303.
Vickery, S. K., X. Koufteros and C. Droge (2013), ‘Does product platform strategy mediate
the effects of supply chain integration on performance? A dynamic capabilities
perspective’, IEEE Transactions on Engineering Management, 60, 750–762.
Vogel, R. and W. H. Güttel (2013), ‘The dynamic capability view in strategic management: A
bibliometric review’, International Journal of Management Reviews, 15, 426–446.
Wall, T. D., J. Michie, M. Patterson, S. J. Wood, M. Sheehan, C. W. Clegg and M. West (2004), ‘On the validity of subjective measures of company performance’, Personnel Psychology, 57, 95–118.
Wamba, S. F., A. Gunasekaran, S. Akter, S. J. Ren, R. Dubey and S. J. Childe (2017), ‘Big data analytics and firm performance: Effects of dynamic capabilities’, Journal of Business Research, 70, 356–365.
Wang, C. L. and P. K. Ahmed (2007), ‘Dynamic capabilities: A review and research agenda’,
International Journal of Management Reviews, 9, 31–51.
Wang, C. L., C. Senaratne and M. Rafiq (2015), ‘Success traps, Dynamic Capabilities and
firm performance’, British Journal of Management, 26, 26–44.
Wiklund, J. and D. Shepherd (2005), ‘Entrepreneurial orientation and small business
performance: A configurational approach’, Journal of Business Venturing, 20, 71–91.
Wilden, R. and S. P. Gudergan (2015), ‘The impact of dynamic capabilities on operational
marketing and technological capabilities: Investigating the role of environmental
turbulence’, Journal of the Academy of Marketing Science, 43, 181–199.
Wilden, R., S. P. Gudergan, B. B. Nielsen and I. Lings (2013), ‘Dynamic Capabilities and
performance: Strategy, structure and environment’, Long Range Planning, 46, 72–96.
Wilden, R., T. M. Devinney and G. R. Dowling (2016), ‘The architecture of dynamic capability research: Identifying the building blocks of a configurational approach’, Academy of Management Annals, 10, 997–1076.
Williams, L. J., N. Hartman and F. Cavazotte (2010), ‘Method variance and marker variables:
A review and comprehensive CFA marker technique’, Organizational Research
Methods, 13, 477–514.
Wilson, M. (2004), Constructing measures: An item response modeling approach. Lawrence
Erlbaum Associates: New Jersey.
Winter, S. G. (2003), ‘Understanding dynamic capabilities’, Strategic Management Journal,
24, 991–995.
Wohlgemuth, V. and M. Wenzel (2016), ‘Dynamic capabilities and routinization’, Journal of
Business Research, 69, 1944–48.
Wu, L.-Y. (2010), ‘Applicability of the resource-based and dynamic-capability views under
environmental volatility’, Journal of Business Research, 63, 27–31.
Zahra, S. A., H. J. Sapienza and P. Davidsson (2006), ‘Entrepreneurship and dynamic
capabilities: A review, model and research agenda’, Journal of Management Studies, 43,
917–955.
Zheng, S., W. Zhang and J. Du (2011), ‘Knowledge-based dynamic capabilities and innovation in networked environments’, Journal of Knowledge Management, 15, 1035–1051.
Table 1: Existing scales for measuring Dynamic Capabilities (DC) as conceptualized by Teece
Findings from the literature review between January 1997 and December 2015 (basis for scale development)

Table 1 (continued): Existing scales for measuring Dynamic Capabilities (DC) as conceptualized by Teece
Findings from the literature review between January 2016 and January 2018 (during scale development)
Note: The literature review was extended to January 2018 to make sure that more recent developments are also captured.
Table 2: Pattern matrix of items for measuring sensing (SE), seizing (SZ), and transforming (T)

No.  | Item                                                                                               | SE   | SZ   | T
-----|----------------------------------------------------------------------------------------------------|------|------|-----
SE1  | Our company knows the best practices in the market.                                                | 0.72 |      |
SE2  | Our company is up-to-date on the current market situation.                                         | 0.82 |      |
SE3  | Our company systematically searches for information on the current market situation.               | 0.95 |      |
SE4  | As a company, we know how to access new information.                                               | 0.83 |      |
SE5  | Our company always has an eye on our competitors’ activities.                                      | 0.70 |      |
SE6* | Our company quickly notices changes in the market.                                                 | 0.40 | 0.48 |
SZ1  | Our company can quickly relate to new knowledge from the outside.                                  |      | 0.87 |
SZ2  | We recognize what new information can be utilized in our company.                                  |      | 0.71 |
SZ3  | Our company is capable of turning new technological knowledge into process and product innovation. |      | 0.84 |
SZ4  | Current information leads to the development of new products or services.                          |      | 0.73 |
T1   | By defining clear responsibilities, we successfully implement plans for changes in our company.    |      |      | 0.89
T2   | Even when unforeseen interruptions occur, change projects are seen through consistently in our company. | |    | 0.90
T3   | Decisions on planned changes are pursued consistently in our company.                              |      |      | 0.61
T4   | In the past, we have demonstrated our strengths in implementing changes.                           |      |      | 0.60
T5   | In our company, change projects can be put into practice alongside the daily business.             |      |      | 0.72
T6*  | In our company, plans for change can be flexibly adapted to the current situation.                 |      | 0.44 | 0.55

Note: Extraction method: principal component analysis; rotation method: oblique rotation (promax) with Kaiser normalization; the rotation converged in 6 iterations. Factor loadings < .30 are suppressed in the table.
* Item removed from survey after exploratory factor analysis due to factor cross-loadings.
Table 3: Descriptives, correlations, and alpha coefficients for sensing (SE), seizing (SZ), and transforming (T)
(Reported per subscale: number of items, mean, SD, inter-subscale correlations, and alpha coefficients.)
Note: Descriptives, correlations, and alpha coefficients are given for the purified scale after the exclusion of items due to cross-loadings.
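The alpha coefficients reported for each subscale are Cronbach’s alpha, computable directly from the item responses. A minimal sketch with made-up responses (not the study’s data; the 3-item response matrix below is hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    # items: (n_respondents, k_items)
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# hypothetical responses of 6 firms to a 3-item subscale (6-point Likert)
X = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
    [6, 5, 6],
])
print(round(cronbach_alpha(X), 2))  # → 0.93
```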
Table 4: Descriptive statistics and correlations for DC, and business and innovation performance indicators (n = 307)

Variable               | Mean | SD    | α    | SE     | SZ     | T      | mkt    | fin    | emp    | cus    | I1     | I2     | I3     | I4     | I5
-----------------------|------|-------|------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|----
DC                     |      |       | 0.91 |        |        |        |        |        |        |        |        |        |        |        |
Sensing (SE)           | 4.54 | 0.837 | 0.84 | 1      |        |        |        |        |        |        |        |        |        |        |
Seizing (SZ)           | 4.33 | 0.877 | 0.84 | 0.59** | 1      |        |        |        |        |        |        |        |        |        |
Transforming (T)       | 4.31 | 0.876 | 0.87 | 0.50** | 0.70** | 1      |        |        |        |        |        |        |        |        |
Business performance   |      |       | 0.90 |        |        |        |        |        |        |        |        |        |        |        |
  market (mkt)         | 4.16 | 0.814 | 0.86 | 0.43** | 0.48** | 0.48** | 1      |        |        |        |        |        |        |        |
  financial (fin)      | 3.89 | 1.053 | 0.91 | 0.36** | 0.37** | 0.33** | 0.60** | 1      |        |        |        |        |        |        |
  employee-rel. (emp)  | 4.53 | 0.947 | 0.83 | 0.35** | 0.48** | 0.52** | 0.62** | 0.42** | 1      |        |        |        |        |        |
  customer-rel. (cus)  | 4.66 | 0.895 | 0.82 | 0.30** | 0.37** | 0.36** | 0.82** | 0.42** | 0.64** | 1      |        |        |        |        |
Innovation performance |      |       | 0.80 |        |        |        |        |        |        |        |        |        |        |        |
  I1                   | 2.60 | 1.994 |      | 0.07   | 0.27** | 0.21** | 0.17** | 0.10   | 0.10   | 0.05   | 1      |        |        |        |
  I2                   | 2.17 | 1.935 |      | 0.06   | 0.25** | 0.19** | 0.18** | 0.15** | 0.13*  | 0.04   | 0.80** | 1      |        |        |
  I3                   | 3.36 | 1.167 |      | 0.18** | 0.36** | 0.25** | 0.26** | 0.19** | 0.15** | 0.08   | 0.55** | 0.45** | 1      |        |
  I4                   | 2.25 | 1.627 |      | 0.16** | 0.38** | 0.30** | 0.29** | 0.19** | 0.19** | 0.15** | 0.50** | 0.45** | 0.46** | 1      |
  I5                   | 2.66 | 1.285 |      | 0.30** | 0.35** | 0.30** | 0.37** | 0.33** | 0.19** | 0.23** | 0.24** | 0.26** | 0.33** | 0.37** | 1

*Correlation is significant at p < 0.05; **correlation is significant at p < 0.01.
I1 = Percentage of sales 2013 from innovations introduced on the market within the past three years
I2 = Percentage of profits 2013 from innovations introduced on the market within the past three years
I3 = Number of innovations introduced on the market in 2013 within the past three years
I4 = Innovation expenditure in R&D in percent of sales (2013)
I5 = Percentage of costs saved (reduced) within one year by implementing process innovations in 2013
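The significance stars in Table 4 follow from r and n alone, because the test statistic for a Pearson correlation is t = r·√(n−2)/√(1−r²). The sketch below hard-codes approximate two-sided critical t-values for df = 305 (about 1.968 for p < .05 and 2.592 for p < .01) as assumptions; it reproduces the starring convention but is not the authors’ code.

```python
import math

def star(r, n, t_05=1.968, t_01=2.592):
    # t statistic for H0: rho = 0; critical values approximate df = n - 2 = 305
    t = abs(r) * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    if t >= t_01:
        return f"{r:.2f}**"
    if t >= t_05:
        return f"{r:.2f}*"
    return f"{r:.2f}"

print(star(0.59, 307))  # SE-SZ correlation from Table 4 → "0.59**"
print(star(0.10, 307))  # a non-significant entry → "0.10"
```

Applied to Table 4’s entries, r = 0.13 earns one star and r = 0.15 two, which matches the reported pattern at n = 307.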
Table 5: Regression analysis for business performance and innovation performance variables (n = 307)
I1 = Percentage of sales 2013 from innovations introduced on the market within the past three years
I2 = Percentage of profits 2013 from innovations introduced on the market within the past three years
I3 = Number of innovations introduced on the market in 2013 within the past three years
I4 = Innovation expenditure in R&D in percent of sales (2013)
I5 = Percentage of costs saved (reduced) within one year by implementing process innovations in 2013
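Regressions like those in Table 5 relate the subscale scores to a performance indicator. A self-contained ordinary-least-squares sketch on synthetic data (coefficients, noise level, and variable roles are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 307
# synthetic subscale scores, roughly matching the 1-6 response range
X = rng.uniform(1, 6, size=(n, 3))        # columns: sensing, seizing, transforming
beta_true = np.array([0.25, 0.35, 0.30])  # made-up coefficients
y = 1.0 + X @ beta_true + rng.normal(0, 0.1, n)

# OLS with intercept via least squares
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef
r2 = 1 - resid.var() / y.var()
print(coef.round(2), round(r2, 2))
```

With (nearly) noiseless synthetic data, the estimated intercept and slopes recover the generating coefficients, which is a useful sanity check before running the model on real survey scores.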
Figure 1 (values recoverable from the extracted path diagram):
- Sensing (SE): standardized item loadings 0.75, 0.83, 0.71, 0.80, 0.73 (SE1–SE5); AVE = 0.59; CR = 0.88; second-order loading 0.79 (R² = 0.63)
- Seizing (SZ): standardized item loadings 0.82, 0.80, 0.78, 0.66 (SZ1–SZ4); AVE = 0.59; CR = 0.85; second-order loading 0.96 (R² = 0.93)
- Transforming (T): standardized item loadings 0.83, 0.82, 0.75, 0.88, 0.69 (T1–T5); AVE = 0.63; CR = 0.89; second-order loading 0.83 (R² = 0.69)
Note: Model fit indices are NFI = 0.94, CFI = 0.97, IFI = 0.97, TLI = 0.96, and SRMR = 0.04
Figure 1: Second-order confirmatory factor analysis (CFA) with sensing (SE), seizing (SZ),
and transforming (T) as first-order constructs and dynamic capabilities (DC) as second-
order construct
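The AVE and CR values in Figure 1 follow directly from the standardized loadings: AVE is the mean squared loading, and composite reliability is (Σλ)² / ((Σλ)² + Σ(1 − λ²)) (Fornell and Larcker, 1981). A quick check against the sensing and seizing loadings as read from the extracted diagram (the item-to-loading assignment is inferred from the garbled figure):

```python
import numpy as np

def ave(loadings):
    # average variance extracted: mean of squared standardized loadings
    l = np.asarray(loadings)
    return np.mean(l ** 2)

def composite_reliability(loadings):
    # composite reliability in the Fornell & Larcker (1981) sense
    l = np.asarray(loadings)
    num = l.sum() ** 2
    return num / (num + np.sum(1 - l ** 2))

se = [0.75, 0.83, 0.71, 0.80, 0.73]  # SE1-SE5, as read from Figure 1
sz = [0.82, 0.80, 0.78, 0.66]        # SZ1-SZ4

print(round(ave(se), 2), round(composite_reliability(se), 2))  # → 0.59 0.88
print(round(ave(sz), 2), round(composite_reliability(sz), 2))  # → 0.59 0.85
```

For transforming, the same formulas give AVE = 0.63 and CR ≈ 0.90 versus the reported 0.89, a difference attributable to rounding of the displayed loadings.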