by
B.S., History and Naval Engineering, May 1985, United States Naval Academy
A Dissertation submitted to
The Faculty of
Graduation Date
In the unlikely event that the author did not send a complete manuscript
and there are missing pages, these will be noted. Also, if material had to be removed,
a note will indicate the deletion.
UMI 3614805
Published by ProQuest LLC (2014). Copyright in the Dissertation held by the Author.
Microform Edition © ProQuest LLC.
All rights reserved. This work is protected against
unauthorized copying under Title 17, United States Code
ProQuest LLC.
789 East Eisenhower Parkway
P.O. Box 1346
Ann Arbor, MI 48106 - 1346
Copyright © 2006
ABSTRACT
almost-daily basis within the business world. Much attention has been paid to the theory
organizational practice. However, much less attention has been paid to the more specific
financial benefit or payoff that could potentially result from an effective system
management systems (KMS) are becoming more prevalent. A KMS is often considered
processes produces an excessive time lag, rigor, and formality which will “disrupt” the
desired free flow of knowledge. Professor Michael Stankosky of GWU has posited a
more flexible variation of the usual systems engineering (SE) approach, tailored to the
KM domain, called Enterprise Management Engineering (EME). This approach takes the
four major pillars of KM as identified by GWU research
in this area – Leadership, Organization, Technology, and Learning – and adapts eighteen
key SE steps to accommodate the more flexible and imprecise nature of “knowledge”.
Anecdotal study of successful KMS developments has shown that many of the more
integrated systems engineering process tailored specifically to the KM domain should
have followed some or all of the steps in this process will have designed and deployed
more “successful” KMS than those organizations that have not done so. To support and
refine this approach, a survey was developed to determine the usage of the 18 steps
identified in EME. These results were then analyzed against an objective financial
implementation.
For the financial measurement data, the subject list of organizations for this study
was drawn using a measure of intangible valuation developed by Professor Baruch Lev of
NYU called Knowledge Capital Earnings© (KCE). This is the amount of earnings that a company
with good “knowledge” has left over once its earnings based on tangible financial and
physical assets have been subtracted from overall earnings. KCE can then be used to
determine the Knowledge Capital (KC) of an organization. This in turn provides two
quantitative measures (one relative, one absolute) that can be used to define a successful
knowledge company.
For this study, Lev’s research from 2001 was updated, using more recent financial
data. Several of these organizations completed a survey instrument based upon the 18
points of the EME approach. The results for the 18 steps were compared against each
other and against each organization’s KC scores. The results show that there is a
significant correlation between EME and the relative KC measurement, and select EME
steps do correlate significantly with a high KC value. Although this study, being the first
validation effort, does not show provable causation, it does demonstrate a quantifiable
turn should contribute to the slim body of objective knowledge on the design,
DEDICATION
- My father, George Fain, who so bravely resisted the temptation to ask me when
- My in-laws, Patricia and (particularly) Edward Hymson, who had no such qualms
- My sister, Kathy Fain, who still does not believe that I have actually finished this
- My husband, Ken, who tolerated endless nights of false starts and stops and took
care of our daughter countless evenings and weekends so that I could finish this
- My daughter, Kelly, who someday will wonder what all of the big deal was about
this
Most of all, I want to dedicate this to my mother, Dr. Lin Fain, who went back to
school after twenty years and completed her own dissertation while working full-time:
Although she is not here to see me finish this in person, I know that she is monitoring
every keystroke, wincing at every awkward sentence, and trying to correct every
typographical error.
I also hope that she is proud that I will be wearing her cap and gown when I receive
this degree.
ACKNOWLEDGEMENTS
I would like to thank the following people who have been instrumental to my
- Professor Baruch Lev, of the Stern School of Business at NYU, who graciously
this document. Any errors or misinterpretations are mine and mine alone.
ago
world’s longest-serving doctoral candidate, who helped me to find not one but
two new dissertation topics when my others became obsolete, who encouraged me
to stay with it all these years, and who also granted me permission to cite his
- Professors Jack Harrald and Lile Murphree and Lecturers Frank Calabrese and
long it took
Engineering Department at GWU, who finally, firmly, and graciously told me that
- Ms. Zoe Dansan, who provided the outstanding coordination and support that
allowed me to get through all of the university wickets for this project despite my
professionals for the firms contacted for this study, for taking the time and the
degree and for giving me the support (and the small office supplies) needed to
- My supervisors and managers over the years – Dr. Steve Cohen, CAPT USN
(retired), and Joe Vrabel, Don Zugby, and Lori Scherer of MITRE – who politely
but repeatedly egged me on both to begin the degree and to complete the
dissertation
- My coworkers, customers, and sponsors, who provided their own share of strong
- My friends, especially the GNO crowd of Susan Keuch, Karen Detweiler, and
Heidi Avery, who hung placeholders on my office walls and constantly asked me
when I was going to finish my degree (and Karen who is soon to finish her own
doctorate)
- Thanks to all of you and to anyone else I forgot to name for all of your support
over the years – I could not have done this without all of your help.
TABLE OF CONTENTS
Dedication .................................................................................................................... vi
2.1 Integrated Approaches to Systems Engineering................................................ 8
2.5 Knowledge Scorecard/Knowledge Capital ..................................................... 60
Success? .............................................................................................. 71
4.5.4 Different Approaches to KM and KMS Projects ................................ 87
5.3.2 H2: Does More EME Correlate to More KM Success? .................... 110
Chapter 6 Conclusions, Discussion, And Suggestions For Further Research ... 121
6.2 Areas for Future Research............................................................................. 129
LIST OF FIGURES
Figure 2-2 -- Vee Model (Forsberg Mooz and Cotterman) [45] ....................................... 13
Figure 2-8 -- Stankosky’s Original Integrative KMS Engineering Model [121] .............. 29
Figure 2-13 -- Human Resource Cost Accounting (HRCA) (Brummet, Flamholtz, Pyle) 45
Figure 5-3 -- EME Area Scores Per Company................................................................ 107
LIST OF TABLES
Table 2-3 -- Perceived vs. Actual Intangible Value Factors (VCI) ................................... 52
Table 2-4 -- Knowledge Capital Across Non-Financial Industry Sectors, 1999 [102] ..... 64
Table 2-5 -- Lev's Top 50 Knowledge Companies – Knowledge Capital (2001) ............. 65
Table 5-11 – Data for H3 Statistical Comparison, “EME1 – Enterprise Definition”...... 116
Table 5-12 -- EME and KM Correlations By EME Phase (α=0.05, n=8 ranked pairs).. 116
Table C- 1 -- Total Physical and Financial Asset Values ............................................... 177
Table E- 20 -- H3 Statistical Comparison, “EME 13a – Formal Organizational Structure”
................................................................................................................................. 232
LIST OF ACRONYMS
BV Book Value
CV Comprehensive Value
DK Don’t Know
ER Economic Return
FC Financial Capital
IAM Intangible Asset Management
KC Knowledge Capital
KM Knowledge Management
Management
MV Market Value
PC Physical Capital
CHAPTER 1 INTRODUCTION
“Knowledge is Power”
on almost a daily basis within the business world. Yet unlike many previous fads that
came and left, knowledge management has stayed. In many organizations, KM has
substantially improved the capabilities of the organization – to the point where the
people and people to the knowledge they need to effectively act and create
- “Because knowledge is personalized, in order for one person’s knowledge to
more effective and productive in their work.” (Maryam Alavi and Dorothy Leidner
of the projects selected, the people, process, and technology issues need to be
[20]
Much attention has been paid to the theory and justification of knowledge
much less has been paid to the more tactical issues of effective implementation of
engineering process to ensure that it meets its intended requirements within its given
and integration of an enterprise’s intellectual assets to maximize its efficiency,
Systems engineering (SE) approaches are often used to build KM systems (indeed,
many organizations claim to use formal SE processes to build all their systems).
However, many KM system deployments are not considered successful. Sometimes the
initial definition of “system” (or of SE) is so formal and restrictive as to doom the
knowledge project to failure. Sometimes not all of the elements of the SE approach are
used. In other cases, organizations believed that the rigor and standardization of a
typical formal process could disrupt the free flow of knowledge and so deliberately
ignored any sort of formal engineering process. Stankosky [122] has posited a KM-
tailored approach, Enterprise Management Engineering (EME). This approach adapts
essential systems engineering concepts to the more
system.
There may be a link between “shorting” the SE approach and building unsuccessful
so. The organizations that implemented the projects informally, with little planning or
This study attempted to validate the EME model by correlating the “success” of
organizational KM systems with the use of aspects of the EME model. Subjects for the
figures.
Existing literature supports the use of a formal process to design, develop, and
operate a successful knowledge management system. It also supports the claim that
effective project management [39]. This is particularly important given the seemingly-
contradictory requirements for advanced planning and rapid flexibility when developing
KM systems [121]. However, it had not yet been validated that the EME process, as
defined by Stankosky in his initial iteration, results in a more successful KMS. Part of
the issue is the question of “successful”; in many similar past efforts, success has been
self-reported. This study used external and objective financial measures – knowledge
is successful in an organization.
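The knowledge-based financial measure described in the abstract (earnings left over after imputing "normal" returns to tangible assets, capitalized into Knowledge Capital) can be sketched as below. The rates of return (7% physical, 4.5% financial, 10.5% discount) are assumptions drawn from common accounts of Lev's methodology, and the company figures are invented for illustration:

```python
def knowledge_capital_earnings(normalized_earnings, physical_assets,
                               financial_assets,
                               r_physical=0.07, r_financial=0.045):
    """KCE: earnings remaining after imputing 'normal' after-tax returns
    to tangible physical and financial assets (rates are assumptions)."""
    imputed = r_physical * physical_assets + r_financial * financial_assets
    return normalized_earnings - imputed


def knowledge_capital(kce, discount_rate=0.105):
    """Capitalize the KCE stream into a stock of Knowledge Capital (KC)."""
    return kce / discount_rate


# Hypothetical firm, figures in $M: 900 normalized earnings,
# 3000 in physical assets, 2000 in financial assets.
kce = knowledge_capital_earnings(900.0, 3000.0, 2000.0)  # 900 - 210 - 90 = 600
kc = knowledge_capital(kce)                              # 600 / 0.105 ≈ 5714
```

The resulting KC figure supplies the absolute measure; a relative measure such as the Comprehensive Value Ratio (CVR) used later in the study scales KC against the firm's other valuations.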
- H1₀: Organizations that use any of the EME approach show no difference in KM
The alternative hypothesis to this is:
- H1ₐ: Organizations that use any of the EME approach show a more positive
If the null hypothesis can be disproved, the conclusion would be that the existence of
a high knowledge-related financial return and thus would indicate success for a KMS
overall EME process and the successful KMS. In order to try to validate the specific
EME model, a correlation should be made between how much of the EME process was
with KM return.
Disproving this null hypothesis would support the contention that more usage of EME
directly correlates to more KM return and thus more successful system implementation.
This still leaves one hypothesis to address – which parts of the EME process correlate
H3: Do Different Parts of EME Correlate More Strongly to KM Success?
- H3₀: Organizations that use specific parts of the EME process show no difference
in KM returns from organizations that use other parts of the EME approach.
- H3ₐ: Organizations that use specific parts of the EME process show a positive
If this null hypothesis can be disproved, the conclusion would be that specific EME
If all three null hypotheses can be rejected, EME can be correlated at a high level with
successful knowledge management, and the EME model therefore should be extended to
1.5 Overview
This paper begins with a review of the relevant literature on the evolution of
combined with project management concepts. It then reviews the need for knowledge
management engineering and summarizes the EME construct [122] and the Four Pillars
of KM. The last phase of the literature review lists several approaches for defining
knowledge management success metrics and ends with the measure used for defining KM
system success for this study – the Knowledge Capital (KC) valuation approach,
The next two chapters of the study address the relevance of the study topics and
outline the three hypothesis sets in additional detail. This is followed by a description of
the scope and methodology for the study itself, outlining the criteria for choosing subject
organizations, the nature of the financial measures used to define success, and the
structure and description of the survey instrument used to measure EME among the
subject organizations.
The last section of this paper summarizes the data, results, and conclusions. Eight
companies ultimately returned surveys within the deadline period. The first hypothesis
set could not be addressed, since no participant reported zero use of the EME
of KC and EME (although it is a very strong inference). However, based upon the results
of the latter two hypothesis comparisons, it can be stated that there is indeed a statistical
correlation between the relative measure of KC, the Comprehensive Value Ratio (CVR),
and the use of EME. There are also several statistically-significant correlations for
specific EME areas and both measures of KC. The paper then identifies areas of the
EME construct that should be added or reexamined and highlights potential additional
corporate financial and KM communities and the “academic” and corporate definitions of
both knowledge and system, one which should be addressed if KM systems are to become
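The ranked-pair comparisons reported here (Table 5-12 lists α=0.05 with n=8 ranked pairs) point to a rank-order correlation between EME usage and KC; the text does not name the exact statistic, but a Spearman rank correlation over eight company pairs is one plausible reading. The scores below are invented for illustration:

```python
def ranks(values):
    """Rank values 1..n, smallest first (assumes no ties, as with
    eight distinct company scores)."""
    ordered = sorted(values)
    return [ordered.index(v) + 1 for v in values]


def spearman_rho(xs, ys):
    """Spearman rank correlation: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    n = len(xs)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))


# Hypothetical EME usage scores and CVR values for eight companies.
eme_scores = [72, 85, 60, 90, 55, 78, 66, 81]
cvr_values = [1.2, 1.8, 0.9, 2.1, 1.0, 1.5, 1.1, 1.7]
rho = spearman_rho(eme_scores, cvr_values)  # ≈ 0.976 for this sample
```

With only eight pairs, significance at α=0.05 would be judged against tabulated small-sample critical values for the rank statistic rather than a large-sample approximation.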
CHAPTER 2 REVIEW OF RELATED LITERATURE
“One day Alice came to a fork in the road and saw a Cheshire cat in a tree.
Lewis Carroll
requirements of defense and space systems to handle multiple dimensions and analyses by
the mid 1960s [5][73]. The “systems approach” allowed engineers and scientists to view
the entire problem set and increased their ability to model potential solutions to complex
implement, and operate those potential solutions. Soon the approach became mandatory
in the defense and aerospace fields and spread from there to other areas dealing with
technically- or operationally-complex issues, particularly into the area of computer and
One of the issues is the tautology that it is extremely difficult to define “systems
engineering” without the use of the word “system”. Eisner [39] defines the systems
which, in turn, requires a systematic and repeatable process for designing, developing,
and operating the system.” The International Council on Systems Engineering (INCOSE)
In fact, the U. S. Department of Defense even issued a draft military standard (MIL-
STD-499B) specifically calling out processes and procedures required for systems
originally issued in the early 1970s and defined systems engineering as “an
of system product and process solutions that satisfy customer needs” [33]. Although this
document was never formally issued (due to a DoD drive in the 1990s toward commercial
specifications), a “demilitarized” version of this standard in fact became the core for a
Winston Royce first put forth the waterfall model for information systems design in
approach, borrowed from traditional construction projects, assumes that each step would
be completed before work begins on the next step. In cases where the variables and
technical problems are well-defined and understood, it allows relatively low-risk progress
through each of the stages [39]. It provides relatively tight control and allows easy
monitoring of cost and schedule variables, thus making the project easier to manage than
Although many software practitioners still use this model (Neill and Laplante believe
that the oft-trumpeted recent “death” of the waterfall model is another urban myth [98]),
there are some serious issues restricting its utility for less-well-defined or more time-
constrained projects. This approach was more than adequate for large-scale, slow-
evolution development tasks; it began to show problems with the move away from
hardware-based systems to more adaptive (and rapidly-evolving) software projects [98].
The dependence on the completion of each step prior to initiation of the next one is a
critical tenet of the waterfall approach [39]; although this requirement encourages
stability and thorough design, it also restricts development flexibility and response times
[75]. In a perfect waterfall implementation, each step concludes with a thorough review
of all the progress accomplished up until that point, and no step begins until the results of
the previous one have been accepted [73]. In order to keep the project on schedule, large
amounts of time-consuming documentation are required in order for later steps to begin
their planning. All requirements must be documented, evaluated, and prioritized [107].
Documentation in turn can become its own goal, keeping project staff from working to
For a knowledge management system, where the “information” within the system is
more perishable and harder to code as data, the waterfall model shows cracks in the initial
upfront in any design project regardless of the methodology [31], the rigid formality of
the waterfall requirements process often removes end users from the process, producing a
system designed by its builders to meet the builders’ concept of user requirements. The
same issue recurs in the software requirements and analysis phases – since each data item
must be completely resolved and thoroughly analyzed prior to its incorporation into the
system design, the resulting constructs are effectively bulletproof [107]. However, the
defined system requirements can also be obsolete or irrelevant to the needs and desires of
The earliest “KM” systems, rule-based expert systems, were often built using this
approach [94]; however, those systems were generally processing structured data, not
sequential process can stifle the ability of a KM system designer to ensure user
involvement, knowledge accuracy, and currency, three traits that improve user acceptance
Shortly after the waterfall model became widely accepted, a variation on this model
began to emerge, again from the aerospace industry. One of the issues with the waterfall,
requirements at the start of the process; it also became harder to test and evaluate whether
those requirements had actually been accomplished. The Vee model, popularized by
Essentially, the orderly progression of the waterfall process continues as before. The
Vee (Figure 2-2 below) allows the beginning of the process to address the requirements at
determine system specifications that lead to part-level drawings. Each step in the process
drives out additional detail until, at the bottom of the Vee, the systems engineer is dealing
with the smallest details and actually building system components [45]. As the engineer
moves back up the Vee, smaller parts become assemblies, which become subsystems,
which become systems. Each phase of increasing integration on the right side tracks
directly across to the decomposition on the left side, so as each level is assembled, it is
tested against the design documentation appropriate for that level. This makes testing and
verification much easier; it also allows project managers to continue to have insight and
While this process is more responsive to incorporating users and managers into the
review and testing process and allows developers and assemblers to move through the
cycle more rapidly and efficiently, it is still essentially a process that assumes that the
system being built consists primarily of hardware and software to execute defined tasks,
usually based upon either data or information [1]. The second generation of KM systems,
often object-oriented, were built using this approach. Systems that used chaining and
inference to work through the processes of transmitting and storing information (as
opposed to data) required this type of design and test relationship to ensure that the
between components [94]. However, knowledge differs from both data and information
in that it both requires and generates “context”. While it does allow and support
gradually-increasing levels of detail and refinement, the Vee process does not easily
accommodate the varying levels of detail and flexibility required for effective knowledge
codification.
In 1998, Bahill and Gissing [9] proposed a model that integrated a re-evaluation
function throughout the entire systems engineering process and removed the inherent
single lifecycle approach of previous models. The process was made repetitive, non-
sequential, and more generic. Requirements definition, always the bane of KM systems
designers, became the more generic “State the Problem”. System analysis became
“Alternative Investigations” and “Modeling”. Systems were no longer built; they were
“Integrated” and “Launched”. At the end of the entire process, the system’s performance
was consciously assessed, and at each of the phases in the project, the progress and status
were reevaluated to ensure that the project continued to meet customer needs [113].
This model is also called by its acronym, SIMILAR: State, Investigate, Model,
Integrate, Launch, Assess and Re-evaluate (Figure 2-3 below). Although the steps are
shown in a left-to-right sequence, the intent of the SIMILAR model is that any or all of
these steps could be going on at a particular point in time [9]. As with the Vee model,
additional levels of detail are driven out as the system continues its development. As
more detail becomes known, each of the six areas of development becomes richer and
better defined.
INCOSE currently (2006) has this model posted on its website as its preferred
definition of systems engineering [62]. It provides much more flexibility than the Vee or
the waterfall, allowing system designers and engineers to revise their plans and products
as more information becomes known [7]. SIMILAR provides an excellent framework for
building any information technology system, allowing user input, management oversight,
decomposition and integration, and continual feedback and revision throughout the
process. The system building aspects of systems engineering have finally all been
2.1.4 Integration of Process -- Software Engineering Institute/Capability Maturity
Model (CMM)
definitions into a single document, EIA 732 [40], in 1998. Known as the Systems
Engineering Capability Model (SECM), the document attempted to consolidate and relate
the Systems Engineering Capability Maturity Model (SE CMM), the INCOSE Systems
Processes for Engineering a System. The SECM defined systems engineering as “an
[40]. It was intended to serve as an interim standard and a draft for the future Capability
Maturity Model (CMM) and included a list of systems engineering and project
management tasks which could be assessed against the CMM levels. These Focus Areas
Table 2-1 -- EIA Systems Engineering Focus Areas [40]
The Defense Systems Management College (DSMC) in 2001 stated that “systems
verifies an integrated, life-cycle balanced (emphases added) set of system solutions that
The Software Engineering Institute (SEI) at Carnegie-Mellon University has been
assessments for several years now [25]. Their Capability Maturity Model (CMM) defines
specific quantitative process and performance metrics that must be met in order for an
organization to claim that it has a particular level of maturity. There are five levels in the
As organizations move through the process and achieve each successive level, each
level builds on the one before it. The requirements measured by the CMM assessments
become an integral part of the everyday business process. The first CMM assessments
were used to document and assess software development processes but later expanded
introduced the Capability Maturity Model (CMM®) Integration℠ (CMMI). The CMMI
was formed to address the problem of having to use multiple Capability Maturity Models.
CMMI 1.1 combines four CMMs into a single source and addresses software engineering,
systems engineering, integrated product development, and supplier sourcing. CMMI 1.2
will be released in 2006 and will further integrate systems engineering, acquisition, and
supplier sourcing in an integrated architecture [26]. The user selects which of the areas
they wish to address and can use the CMMI standards to conduct a detailed assessment of
that area. The process can be implemented as a continuous representation, where the
implementation of the CMMI process itself is tailored and flexible. This can involve
Figure 2-4 -- Continuous Representation, CMMI 1.1 [25]
organization moves up the levels as a whole in a predefined sequence. This allows the
use of a single standard and fits more easily into firms more familiar with software
development. Figure 2-5 below shows the staged representation, which is more narrowly
focused and allows an organization to concentrate its resources but still requires intense
Figure 2-5 -- Staged Representation, CMMI 1.1 [25]
Both versions allow for objective external process measurement and evaluation
through the Standard CMMI Appraisal Method for Process Improvement (SCAMPI℠)
process [27]. The CMMI can be implemented in one, two, three, or all of its four parts,
into a product solution and support that solution throughout the product’s life.
This includes the definition of technical performance measures, the integration of
The current CMMI standard for the software engineering and systems engineering
models, v1.1 [24], addresses only software and systems engineering. It is over 600 pages
long and provides extremely detailed information as to how to establish, measure, and
processes. While the CMMI framework demonstrates one direction in which the study
not see the need to address for an internal corporate knowledge management system.
Society issued a revised standard (IEEE 1220-2005) for the “Application and
Management of the Systems Engineering Process” [58]. This version was drafted to
begin to bring the IEEE standards into line with the international ISO/IEC 15288 standard
and is representative of the current state of the art in “integrative” systems engineering
[35]. It is intended to ensure that all aspects of the system, including the inputs of the end
users and the system stakeholders, are incorporated into the design and build process.
The systems engineering cycle and the project management process have merged to
The concurrency of these efforts is a major difference from the “cycles” of the
previous models. No longer does a project start and stop – it now continues indefinitely,
having constant inputs and course corrections to ensure that user requirements are current
and the technology is up to date [58]. This approach is a good start for evaluating an
approach for building a KMS, for it ensures continuous updating and user feedback, but it
is still lacking emphasis on the “softer” aspects of KM. A KMS approach needs to
processes, and tools that define and enable the instantiation and continuation of
be divided into four “pillars”– leadership, organization, technology, and learning [123] –
the actual system required to implement effective KM often does not receive the same
degree of planning, design, and implementation as the knowledge content itself. Often an
organization will concentrate on one of those three aspects – usually the tools as those
are often easiest for an organization to “sell” – ignoring the fact that each of the three
major areas of a knowledge management system (KMS) is as critical in its own way
as the underlying knowledge content. As shown in Figure 2-7 below, all three elements
Figure 2-7 -- The Three Components of a Knowledge Management System (Policies, Processes, Tools)
Policies come into play as the organization begins to examine how and why a KMS
would be installed and used. No knowledge management system can possibly hope to
succeed without the continual backing of senior management and the inculcation of KM
and objectives are often required for a KM project to succeed. However, the policy
changes are often considered among the hardest, and so those are often delayed or ignored
outright.
Processes define whether the system will be used in such a way that it supports the
inefficient process merely increases the speed at which a user can become frustrated
[103]. The process of using the system – and the focus of the system and the knowledge
within it – must remain upon the organization’s key objective, whatever that may be
[126]. Without an effective process in place to implement the policies decided with the
tools provided, the system will not be used and so it cannot be successful. As Cho,
Jarrell, and Landay pointed out in 2000 “unless you have a Workforce willing and able to
Tools are needed to provide, store, and process the knowledge in the system. Often a
project manager believes that simply purchasing and installing Lotus Notes/intranets/
automatically and painlessly cause all of the knowledge to flow from those who have it to
those who need it. However, knowledge is not the same as information or data. It adds
context, and context adds complexity [1]. Once the subject matter begins to get complex,
and the system changes from calculating and evaluating objective data to assessing and
decreases as the resource and schedule costs of coding and maintenance begin to
dramatically increase [56]. Much of this problem stems from the fact that, once the
systems begin to support and display complex behavior, they become incredibly
requires the constant involvement of both subject matter experts (to validate the content)
These systems require planning, design, development, and maintenance procedures
just as would any other complex systems project. All three dimensions should be
addressed, and they should be addressed from a systems life-cycle perspective while still
retaining the flexibility needed to elicit, store, and use knowledge in a timely, efficient,
and effective manner. This brings us to the need for a knowledge systems engineering
construct.
Stankosky and Baldanza in 2001 [123] cited the following stages in their proposed
4. Assess and define the “system-level” relevant requirements in each of the four
areas
5. Define a functional architecture that meets the objectives of Step Two and the
These stages track closely with the successive stages of systems engineering defined
(steps 1, 2, 3, 4), architecture design/decomposition (steps 5/6), system and subsystem
design (step 7), and prototype (8). The steps missing (but assumed) in the above
definition are those for support and feedback (operations and maintenance) of the system,
and those steps should probably be added to the proposed KSE cycle. However, there is
from any other type of complex system. The key is to remember not only to use a
systems engineering view but also to ensure that the entire knowledge management
processes, and tools, the traditional integrative systems engineering approach should
The original KMS systems engineering model is shown in Figure 2-8 below [121]:
Figure 2-8 -- Stankosky’s Original Integrative KMS Engineering Model [121]
The model shown above has since been expanded to Enterprise Management
Engineering (EME), the current instantiation of Stankosky’s model. EME (Figure 2-9
below) illustrates the concepts represented in the above figure, but refines the model,
imposes it over the traditional systems development life cycle, and pulls together all the
requisite concepts for “integrative management” into a single entity operative throughout
the system.
Figure 2-9 -- Stankosky's Enterprise Management Engineering Model
The eighteen system design and development activities needed to implement the EME
model are summarized in the list below [122]:
o Identify critical environmental changes that would impact strategic objectives
processes.
intellectual assets.
x resources required
x persons responsible
x timetable.
o Discuss change implementation and management
If the above steps are followed, and the system is appropriately defined, the EME
One of the issues in explaining and justifying knowledge management systems is that
higher productivity, motivate the workforce, and improve the organization. In practice,
however, KM systems have historically suffered not only from being IT projects but also
from trying to automate something not easily measured or encoded [72]. Data are easier
severable from the “value” of the workforce itself (and the opportunity cost of having to
role in the world economy [6][7], corporations need to find an accurate measure for
representing the “value” of their knowledge to the outside world [36][117].
Intangible assets are defined by the Financial Accounting Standards Board as all non-
financial assets that lack physical substance [42]. Intangible assets can be further divided into:
x Brands
x Intellectual Property
x Goodwill
As can be seen from the next section, several models exist for defining and evaluating
KM (or progress in KM), and most of them tend to be divided into information and
and systems) and external processes and assets (brands, sustainability, and goodwill).
Consequently, for purposes of this study, “knowledge” is defined as all five of the above
The American Productivity and Quality Center (APQC) produces studies and analyses
implementation and a series of targeted reports on various aspects of KM. One particular
study of APQC sponsor and partner organizations [3] produced four guidelines to
metrics approach:
x KM measures must be appropriate for the knowledge management approach,
making the measurements and meeting the objectives are essential to monitoring
itself, its employees, and its management of the benefits of successful KM.
While these guidelines are not in themselves useful for the KM success screening
required for this study, they do provide a useful context for evaluating KM measurement
models. Several approaches have been proposed; these are described in detail in the
following sections.
Edvinsson and Malone [36] describe an Intellectual Capital Report (ICR), consisting
of 111 indices that, taken as a whole, purport to represent the knowledge contained within
the workforce and the organization as well as the financial status that that intellectual
capital produces. Organizations tailor and select these indices to meet their reporting and
management requirements. They then report these figures, and management and analysts
can theoretically review them to identify trends, areas needing improvement, or areas
Edvinsson and Malone based most of their work on the example of the Skandia
Corporation, a Swedish financial management and insurance firm which is one of the
Skandia). Edvinsson and Skandia later promulgated their “Navigator” approach, which
attempts to streamline the original list of indices down to about 30 indicators, measured
annually. These indicators are still divided into the same five categories, but the
Figure 2-10 -- Skandia Navigator (Edvinsson) [36]
The Navigator indices are quite specific and quantitative; they fall into five areas.
Some of these indices are shown below along with their unit of measure:
x Financial Focus
o Income/employee ($)
x Customer Focus
o Days spent visiting customers (#)
x Process Focus
o IT expense/employee ($)
x Human Focus
along the KM path. However, the Navigator is not an appropriate measure for use in this
study since the measures could (and indeed are designed to) vary from company to
company. Furthermore, they are far too detailed for the initial comparison needed to
Karl-Erik Sveiby uses a slightly different approach. His “Invisible Balance Sheet”,
first proposed in 1988, attempts to measure what he calls “invisible equity”. This was
developed to try to explain the difference between the physical valuation of a company
and the value that could actually be received if the company were sold (in the past this
has often been accounted for as “goodwill”). The Invisible Balance Sheet has evolved
into a concept known as Intangible Asset Management, or IAM [128]. IAM divides an
organization’s value into tangible and intangible indicators and then allocates the
is then assessed against four possible criteria: growth, renewal, efficiency, and stability.
Organizations select indicators for each of these criteria that can show progress in each
area.
Figure 2-11 -- Intangible Asset Monitor Model (Sveiby) [128]
As with the Navigator model, indicators can (and should) be tailored for a particular
“flows” as Sveiby terms it. Specific measurements in each category are similar to the
Navigator, but they are grouped to indicate trends and to measure one of the four criteria.
Knowledge in this view is a fluid process more than an endstate, and therefore it should
x Organic growth (increase in income less the portion of that increase due to
acquisitions)
x Efficiency
x Win/loss index
x Sales per customer
x Stability
x Customer longevity
x Efficiency
x Stability
x Efficiency
x Stability
x Average age
As with the Navigator, though, this approach would not work well for this study. The
to track and what the unit is capable of recording. Furthermore, this measurement again
is more for tracking a single organization’s internal progress along a KM path and does
The Balanced Scorecard (BSC) is an attempt to formalize the feedback process that is
only implicit in the previous two models. It was developed independently from the
Navigator and the IAM by Robert Kaplan and David Norton in 1992 [68]. It has received
considerable attention in recent years since the U.S. Government has chosen to use the
concept for measuring and reporting on the performance of federal agencies and departments.
The BSC looks at four major areas: financial, internal, customer (or external processes),
and learning and growth. Each of these four areas is constantly interacting with the others
in order to drive and shape organizational vision and strategy as shown in Figure 2-12
below:
Figure 2-12 -- Balanced Scorecard (Kaplan and Norton) [68]
Some metrics that can be used as part of the BSC approach to measuring KM include:
x Financial returns to shareholders
x Market potential
x Market share
x Customer acquisition
x Customer profitability
x Product/service profitability
x Productivity
x Waste
x Organizational capabilities
x Infrastructure capabilities
x Stakeholder capabilities
This is essentially an extension of the Total Quality Management (TQM) model.
TQM attempted to improve quality (and effectiveness) by measuring certain key state
indicators that show outputs. The BSC approach takes this one step further and provides
feedback not only on the state indicators but also on the processes used to get there – BSC
measures outcomes. Taken together, the BSC uses a “double-loop feedback approach”.
This measurement goes beyond the indicator reporting of the Navigator and IAM
approaches to add progress reporting. This “partial credit” approach can be useful
(particularly for large organizations that are slow to change) but again, unless all of the
participating organizations share the same goals and strategy, using the BSC to compare
and contrast multiple organizations would not be appropriate. The BSC sets targets for
organizations, and the organizations are then measured against their own targets. This
highly customizable approach can be used to show progress toward organizational goals
and strategies, something all KM systems should do. However, the strength of the BSC
(its ability to show intermediate progress if tailored correctly) is also the reason why it
would not be an appropriate metric for this study (the fact that it can be tailored means
Human Resource Costing and Accounting (HRCA) was first proposed in 1968 by R.
Lee Brummet, Eric Flamholtz, and William Pyle [16][46]. This approach attempts to
quantify the actual costs of hiring, replacing, retaining, and motivating experienced staff –
the past costs associated with obtaining the staff holding and generating the corporate
knowledge. Initially, the concept merely attempted to quantify the actual costs incurred
through those activities. As time went on, the concept was extended to also include
Essentially, this approach (Figure 2-13 below) takes into account company
investments in human resources as both a loss and a contributor to profit and posits that
most organizations severely undervalue benefits and costs associated with human
resources. The HRCA model quantifies financial benefits and costs associated with staff
vacancies and opportunity costs, and turnover. These figures have been added to standard
corporate profit and loss statements in several European companies [66] but this has not
Figure 2-13 -- Human Resource Cost Accounting (HRCA) (Brummet, Flamholtz, Pyle)
This is an interesting addition to the literature on costing intangible assets, but it is not
clear that this approach necessarily fits the definition of knowledge as used in this study.
Unlike many of the other models presented, this one could be used to compare some
organizations against others, but the emphasis is overwhelmingly on the costs (and cost
avoidance) of human resources management actions, not necessarily the benefits of the
satisfied staff (and correspondingly low turnover and HR costs) would score quite highly
reused.
France, Finland, Norway, Spain and Sweden [65]. The intent was to observe and
and reporting as part of the EU efforts to produce standard guidelines for corporate
reporting. The final report of the project was issued in 2000. The business model is
composed of three parts as shown in the figure below: identification, measurement and
resources, activities, and indicators into one of the following three categories:
human and structural capital that affects or interacts with outside stakeholders.
Figure 2-14 -- Intellectual Capital Management Model (MERITUM) [65]
Intellectual Capital Report as first defined by the Skandia Navigator. However, since the
specific indicators used to measure intellectual capital are designed and intended to vary
from organization to organization and situation to situation, their contextual nature makes
them impractical for the widespread common denominator needed for this study.
The DATI Guidelines were created for use by the Danish Agency for Trade and
Industry (DATI) in 2000 [28]. The DATI Guidelines explain the use of evaluation criteria
(66 are provided as examples of metrics that can be used, depending on the organization
and the market segment). Seventeen Danish companies volunteered to support the effort
and worked with DATI to attempt to represent their own position in their own
“Intellectual Capital Reports”. The Guidelines stress more the need to track and report on
Figure 2-15 -- Intellectual Capital Statement (DATI) [28]
As shown above, the DATI approach itself concentrates more on measuring and
intellectual capital as the entire difference between “market value” and “book value”,
where book value is the value of the assets in the annual report. DATI also views
intellectual capital as being synonymous with knowledge capital. In this respect, the
principles beneath the DATI guidelines are similar to those in Lev’s Knowledge Capital
(KC) computation. There can be other explanations for differences in market to book
value – market value, after all, is a reflection of what the stock market thinks of a
company, which may or may not consider its intellectual capital. This approach may
work well for regulated organizations where stock prices reflect fairly certain predictions
of future performance. However, this approach still needs the ability to (1) account for
return.
Christopher Ittner and David Larcker from the Wharton School of Business have been
studying value-based accounting; they argue that appropriate measures and performance
evaluation are key to increasing value. This holds as true for
Figure 2-16 -- Value-Based Accounting (Ittner and Larcker) [64]
In fact, Ittner and Larcker feel that non-financial data can and should be used to
supplement “traditional” accounting data; they better reflect the strategic direction of a
values even after accounting measures are taken into account. These non-financial or
intangible criteria that affect the market value of a firm are those that could be
knowledge-based.
2.4.8 Value Creation Index (VCI)
This approach was tested in a joint effort by Forbes magazine, the Wharton Business
School, and the Cap Gemini Ernst & Young Center for Business Innovation. One of the
major intentions of this effort was to analyze intangible factors affecting corporate value
both against actual performance (which factors had the most effect on corporate value)
and against perceived value (which factors did corporate executives feel were more
important for corporate value). The results are shown in the table below:
What conclusions can be drawn from this analysis? The factors offered by the
executives echo many of the concepts posited in the other models and approaches
discussed in the previous section. Models for
However, they do not necessarily translate into improved market performance. In fact,
the bottom two factors within the “actual” category are tied for last place – neither
technology nor customer satisfaction showed any statistical correlation to the market
value.
publicly claim KM and which show measurable benefits from KM are more information-
oriented [93]; they claim that valuation results and models based on “old economy”
models do not and should not apply [102]. Yet many of the older companies are
leveraging and managing their knowledge just as well as the newer ones [10].
Furthermore, these results held valid with a 0.9 level correlation for e-commerce
This version of the Value Creation Index is essentially a rank ordering of non-
financial factors that affect corporate value. This concept is especially interesting in that
it runs counter to many of the other KM approaches which are based on either technology
or customer satisfaction. However, these non-financial measures are also generally non-
quantifiable and must be assessed on a relative basis. That approach will not work for
this study.
In 1997, A.T. Kearney separately developed a concept they call the Value Creation
Index (VCI) [70]. Their Value Creation Indices are completely quantitative and are
(EC), and the NPV of pipeline products/economic capital. One distinction is that
economic return and economic capital incorporate the research and development costs
and equipment as assets (since it is assumed that they will eventually pay off) instead of
costs. The “NPV of marketed products/economic capital” figure accounts for current
performance; the “NPV of pipeline products/economic capital” figure takes into account
expected future performance. Therefore, this VCI model supports long-term R&D and
for industries with lengthy R&D cycles. The VCI formula is shown in Equation 2-1
below:
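Based on the description above (current performance from marketed products plus expected future performance from pipeline products, each scaled by economic capital), one plausible form of the index is the following; this reconstruction is an assumption, not necessarily Kearney's exact published Equation 2-1:

```latex
% Assumed form of Equation 2-1; Kearney's published version may combine the terms differently.
\mathrm{VCI} = \frac{\mathrm{NPV}(\text{marketed products})}{\text{economic capital}}
             + \frac{\mathrm{NPV}(\text{pipeline products})}{\text{economic capital}}
```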
Figure 2-17 below shows how this value is affected by
various factors and the percentage of influence that each has on the result (this model is
for the pharmaceutical industry); companies then have something more concrete to target
Figure 2-17 -- Value Creation Index (VCI) Value Tree (Kearney) [70]
This approach has promise, but it is really only appropriate for firms which (1) have a
long product development cycle, and (2) have a reasonable expectation that their
the development cycle is not necessarily any longer due to the need to create -- new drugs,
new chemicals, or new technologies can all either take long periods of time or can occur
quickly. In this industry, the product deployment cycle is artificially long due to the need
for testing, review, and approval. Most knowledge firms operate with a faster turnaround
time for both their products and their processes; using this model as a KM “screener” to
compare firms (as this study requires) would restrict the study unnecessarily.
2.4.10 Framework of Intangible Valuation Areas (FIVA)
Dr. Annie Green of GWU proposed the Framework of Intangible Valuation Areas
(FIVA) as part of her dissertation in 2004 [49]. It reconciles existing anecdotal and
subjective intangible asset models with quantitative and process-oriented value creation
or value chain such as those in the previous section. If value can be added to tangible
products through these processes, it stands to reason that value can also be added to
intangible ones. The issue in the past has been the inability of the marketplace to
objectively organize and assess those assets. FIVA takes the stepwise approach of value
creation but applies that concept not to the physical products but to the intangible ones.
The intangible value drivers are shown in the bottom row of the figure below as
Alternatives. The middle level, Criteria, are actually knowledge management objectives,
Learning. However, these criteria were derived by Green by analysis of the importance of
Figure 2-18 -- Framework of Intangible Assets (Green) [49]
While FIVA shows great promise as a global means of binning, assessing, and
integrating all intangibles in support of an overall strategic goal, Green herself admits that
intangibles include more than just “knowledge”; they also include such assets as the
social interactions and contexts [49]. Consequently, her framework was deemed too
The Value Chain Scoreboard (VCS) was developed by Baruch Lev of the NYU Stern
School of Business [79]. As with the other models previously described, it is also a matrix of
non-financial indicators. However, it is intended to address the disconnect that Lev sees
between “traditional” accounting and “value addition”. As companies have become more
knowledge intensive, actions that produce (or reduce) corporate knowledge occur further
and further in time from the events that actually are recorded on balance sheets. A
discovery or invention – using knowledge – takes longer to get to market and sell –
producing income. Unlike the customary industrial model, knowledge is not always
immediately apparent and does not always provide a visible short-term return. To reflect
this emphasis on the development cycle, the VCS classifies metrics into the cycle of
prospects.
Lev took the management metrics proposed in several of the earlier models and
placed them in the product life-cycle context. His definition of a “value chain” equates to
stock market analysts’ definition of a “business model” [79]. That information has
understandable manner.
Figure 2-19 -- Value Chain Scorecard (Lev) [79]
Not all of the bullets within each of the above boxes would necessarily apply to each
company or industry/market sector; ten to twelve of these for each company should be
able to represent the answers to questions most frequently asked by analysts when
comprehensive view of the company and its ability to create value based upon its inherent
knowledge and its utilization of it. In addition, these indicators share three requisite
Lev’s proposed report is intended to complement, not replace, existing rigid profit and
loss reporting requirements. The idea is to place the objective and familiar numbers in a
context that shows the ability of a company to leverage its intellectual capital. This is a
complex concept, as one form of information display should be able to support the other.
For purposes of this study, the VCS is considerably more complex than the simple “does
KM work in this company” measurement required for this analysis; instead, it will serve
as the basis for the chosen KM screening metric – Knowledge Capital (KC).
Knowledge Capital (KC), also created by Lev [92] is a more streamlined version of
the VCS that can be used as a gross measurement of the effectiveness and value of an
organization’s intangible assets. This metric, instead of being used to measure the
problems with knowledge management and traditional accounting methods is that assets
are ultimately only worth what the market will bear; unless knowledge can be
by averaging the past three years' actual figures and the next three years'
predictions. However, since knowledge increases in value over time, and since
this is an attempt to value the future, the future predictions are weighted twice as
heavily as past performance. The sum is then divided by nine. This produces a
normalized earnings per share, EPSn.
3. Multiply EPSn by the number of Common Shares Outstanding to obtain the total
“normalized” earnings
4. Using a constant for expected after-tax return on financial assets (about 4.5%
5. Using a constant for expected after-tax return on physical assets (about 7% from
6. Subtract financial and physical earnings from total "normalized" earnings. The
result is the knowledge capital earnings (KCE).
The KCE represents the amount of a company’s earnings that is generated from the
use of its knowledge assets [93]. To determine the value for KC, Lev suggests using a
“knowledge capital discount rate” (KCDR) as well. This is analogous to the 7% average
return on investment for a physical asset or the going 4.5% return for a financial one.
Dividing the KCE by the KCDR produces the asset value for the “knowledge capital”.
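The computation just described can be expressed as a short sketch. This is an illustration, not Lev's exact implementation: the function name and all input figures are hypothetical, while the 4.5%, 7%, and 10.5% constants are the ones cited in the text.

```python
def knowledge_capital(past_eps, future_eps, shares,
                      financial_assets, physical_assets,
                      r_fin=0.045, r_phys=0.07, kcdr=0.105):
    """Sketch of Lev's Knowledge Capital (KC) calculation.

    past_eps / future_eps: three years of actual and three years of
    forecast earnings per share; forecasts are weighted twice as heavily.
    """
    # Normalized EPS: (3 past years + 2 x 3 forecast years) / 9
    eps_n = (sum(past_eps) + 2 * sum(future_eps)) / 9
    # Total normalized earnings = EPSn x common shares outstanding
    normalized = eps_n * shares
    # Remove the earnings attributable to financial (4.5%) and physical (7%)
    # assets; what remains is the knowledge capital earnings (KCE)
    kce = normalized - r_fin * financial_assets - r_phys * physical_assets
    # Capitalize KCE at the knowledge capital discount rate (10.5%)
    return kce / kcdr

# Hypothetical company: EPS history and forecast, 1M shares,
# $5M financial assets, $10M physical assets
kc = knowledge_capital([2.0, 2.2, 2.4], [2.6, 2.8, 3.0],
                       shares=1_000_000,
                       financial_assets=5_000_000,
                       physical_assets=10_000_000)
```

For these hypothetical inputs, EPSn works out to 2.6, KCE to $1.675M, and the capitalized knowledge capital to roughly $16M.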
Lev calculated the KCDR based upon the 10.5% average after-tax profits for the
software and biotechnology sectors over the past several years [50][93][94]. These two
sectors are generally considered more knowledge-intensive (and less of anything else)
The market validates these figures. The correlation between a calculated KC for a
company and the actual measured total return from a stock is 0.53 [92]. The comparable
correlation between traditional “earnings” and stock value is only 0.29 [50]. Clearly the
KC shows the absolute amount of the knowledge assets within a company – large
firms will generally have larger numbers regardless of the industry or of their specific use
of KM. It can be used to show what percentage of a company’s overall financial numbers
can be attributed to knowledge, and for this reason, it is a useful measure within
companies for measuring the value of their knowledge efforts as shown in their bottom
line [93]. However, this figure alone can be misleading, as larger companies will almost
always generate higher earnings overall and so any large company will have higher
knowledge-intensive sectors are comprised of smaller companies with smaller total assets
To address this issue, there is a companion measurement for KC, and that is the
“Comprehensive Value” [93]. The market value of a company is the number of shares of
common stock multiplied by the current market price for that stock. The book value of a
company is the value of all assets of an organization less all liabilities, debts, and
intangible assets – as shown in official financial records [14]. Financial markets compare
the two numbers (through either the “market-to-book ratio” also referred to as “price to
book ratio” or its inverse, the “book-to-market ratio”) to see whether a company has
overvalued or undervalued stock. If the market value is higher than the book value, then
the market believes that a company is worth more than it is stating [14].
Lev defines comprehensive value as book value plus knowledge capital [93]. Using
larger companies. This in turn can show not only whether the market believes that a
stock is over- or undervalued but also what percentage of its stock price can
be derived from its KC. Two companies with similar KC values can have wildly
higher level (or believes the other company is being less than forthcoming in determining
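The two ratios just described reduce to simple arithmetic; the sketch below uses wholly hypothetical figures, and the function name is illustrative only.

```python
def valuation_ratios(share_price, shares_outstanding, book_value, knowledge_capital):
    """Market-to-book (MV/BV) and market-to-comprehensive-value (MV/CV)
    ratios, with comprehensive value = book value + knowledge capital."""
    market_value = share_price * shares_outstanding
    comprehensive_value = book_value + knowledge_capital
    return market_value / book_value, market_value / comprehensive_value

# Hypothetical company: $40 shares, 1M shares outstanding,
# $10M book value, $15M knowledge capital
mv_bv, mv_cv = valuation_ratios(40.0, 1_000_000, 10_000_000, 15_000_000)
```

In this example MV/BV is 4.0 while MV/CV is only 1.6: once knowledge capital is counted as an asset, far less of the stock price looks unexplained.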
In 1999, 2000, and 2001 Lev and Marc Bothwell (Credit-Suisse) applied this KC
concept to companies in specific industry sectors for CFO magazine in a concept called
firms in 22 industry sectors as shown in Table 2-4 below [102]. He was able to show
through this measurement that the sectors thought to be higher in knowledge usage (biotech,
Table 2-4 -- Knowledge Capital Across Non-Financial Industry Sectors, 1999 [102]
Industry | Knowledge Capital ($M) | Knowledge Earnings ($M) | Market Value/Book Value (MV/BV) | Market Value/Comprehensive Value (MV/CV) | KC Rank | MV/CV Rank
companies across industry sectors. For the 2001 listing of the Fortune 500, Lev took the
top 50 publicly-held non-financial companies that showed a profit that year (in order to
companies, the lowest ranking one on the list came in at 99 overall [126]. Again it can be
seen that the distinction between the revenue of the company (which determines the
Fortune 500 ranking) [126] and the knowledge values support the general assessments of
some of these companies. GM, AT&T, and WorldCom, while all large companies, did
not rank nearly as high in knowledge assets as GE, Pfizer, and Microsoft, nor were they
Table 2-5 -- Lev's Top 50 Knowledge Companies – Knowledge Capital (2001)
KC Rank (2001) | MV/CV Rank (2001) | Overall 500 Rank (2001) | Company Name | Knowledge Capital ($M) | Knowledge Earnings ($M) | MV/BV | MV/CV
1 43 5 GE 254,381 12,435 8.10 1.34
2 33 53 Pfizer 219,202 7,686 15.10 1.03
3 40 79 Microsoft 204,515 8,735 6.90 1.16
4 4 11 Philip Morris 188,538 10,226 7.00 0.52
5 41 1 Exxon Mobil 176,409 10,050 4.10 1.17
6 29 41 Intel 173,964 7,339 5.10 0.91
7 20 14 SBC 155,402 7,897 4.80 0.78
8 32 8 IBM 148,679 7,534 8.10 0.99
9 18 10 Verizon 141,471 7,494 3.80 0.74
10 38 30 Merck 139,494 7,497 11.50 1.11
11 48 2 Wal-Mart 99,973 5,018 6.90 1.63
12 39 58 Johnson & Johnson 94,769 4,976 6.90 1.14
13 42 88 Bristol-Myers Squibb 85,837 4,709 12.50 1.21
14 45 93 Coca-Cola 73,976 3,906 12.70 1.42
15 21 48 Dell 72,985 2,499 11.40 0.81
16 24 66 BellSouth 71,269 4,004 4.50 0.86
17 35 31 Procter & Gamble 66,609 3,931 6.90 1.07
18 14 4 Ford 59,311 4,310 2.90 0.70
19 5 71 Honeywell 52,798 2,533 3.30 0.52
20 19 15 Boeing 51,179 2,427 4.20 0.75
21 36 94 PepsiCo 50,614 2,607 8.80 1.10
22 37 52 UPS 48,508 2,470 6.60 1.10
Table 2-5 (cont) -- Lev's Top 50 Knowledge Companies – Knowledge Capital (2001)
KC Rank (2001) | MV/CV Rank (2001) | Overall 500 Rank (2001) | Company Name | Knowledge Capital ($M) | Knowledge Earnings ($M) | MV/BV | MV/CV
23 47 23 Home Depot 48,389 1,952 6.70 1.58
24 30 19 Hewlett-Packard 48,226 2,500 4.30 0.97
25 22 67 Walt Disney 46,960 2,307 2.40 0.82
26 25 20 Chevron 46,606 2,585 2.90 0.86
27 7 27 Compaq 38,569 1,671 2.60 0.62
28 9 77 Alcoa 37,158 1,702 2.70 0.64
29 23 64 United Technologies 34,866 1,832 4.60 0.83
30 49 9 AT&T 34,411 1,443 2.50 1.64
31 15 78 Dow Chemical 31,085 1,751 3.10 0.70
32 16 46 Safeway 30,754 1,408 4.80 0.72
33 1 92 Loews 30,675 1,639 1.00 0.26
34 34 56 DuPont 29,939 1,946 3.40 1.04
35 10 34 Motorola 29,002 1,125 1.70 0.66
36 3 69 Lockheed Martin 25,428 1,433 2.30 0.50
37 31 16 Texaco 24,110 1,286 2.70 0.97
38 12 18 Kroger 24,103 1,150 6.00 0.68
39 27 3 GM 24,032 1,564 1.60 0.89
40 2 29 Sears 23,685 1,463 1.80 0.41
41 46 7 Enron 22,297 1,022 4.30 1.47
42 28 17 Duke Energy 21,797 1,248 2.80 0.89
43 13 44 Conoco 21,153 1,196 3.20 0.68
44 26 96 Sara Lee 19,398 1,249 14.50 0.86
45 6 89 Phillips Petroleum 18,886 1,011 2.30 0.56
46 11 99 Caterpillar 17,443 1,083 2.80 0.67
47 44 37 Target 16,597 853 4.90 1.38
48 17 32 Worldcom 14,411 (124) 0.90 0.72
49 50 90 Walgreen 14,243 661 10.10 2.30
50 8 68 Conagra Foods 13,765 853 3.50 0.62
While this is a simplistic way of assessing the dollar value of knowledge, it can be
the current precepts of Enterprise Management Engineering becomes the next step in this
study. EME may not be the only way in which to introduce successful knowledge
management, but the anecdotal evidence to date supports the idea that it is a logical
CHAPTER 3 RESEARCH PROBLEM
“If knowledge can create problems, it is not through ignorance that we can solve
them.”
Isaac Asimov
3.1 Problem
Much research has been done on various systems engineering paradigms, and recently
the field has shifted toward a more integrative approach, combining multiple
accepted that it has moved from its origins in the design of rigid rule-based artificial
The major problem now has become determining how to define knowledge
product; when combined with the amorphous concepts many use for valuing
“knowledge”, the combination can mean that formal systems engineering discipline falls
by the wayside.
This need not happen. Formal systems engineering has a place in the design and
SE discipline needs to adapt to the specific domain for which it is being used, but
At the same time, it is difficult to postulate the success or failure of the EME
approach (or any other one) unless and until there exists an objective and accepted
method of quantifying knowledge management success. One of the best ways to prove
does have a value; as shown by Lev’s research, that value can be recognized and
The experiment for this study was designed to link the two concepts of Enterprise
3.2 Hypotheses
In order to begin to prove a linkage between the two concepts, a relationship should
be defined between them. Since it is always easier to disprove a hypothesis than to prove
one, this study used three null hypotheses, each of which postulated that there would be
no connection at all between results measured from a survey of EME practices and results
3.2.1 H1: Does EME Correlate to KM Success?
x H10: Organizations that use any part of the EME approach show no difference in KM
x H1a: Organizations that use any part of the EME approach show a more positive
The null hypothesis for this set says that no degree of integrative systems engineering
affects the odds of success for a KMS within that organization. However, this merely
systems engineering process and higher measured knowledge capital and earnings. In
order to try to validate this specific EME model, portions of Stankosky’s EME construct
need to have actually been used to build the KMS, and a correlation should be made
between how much of the EME process was used and how “successful” the KMS
implementation was.
with KM return.
If this hypothesis can be disproved with a one-tailed test, the conclusion would be that
Still, one last series of analyses is needed. Some portions of the EME process are more
widely used than others; it would seem that some EME steps might have higher rates of
knowledge “payback”.
x H30: Organizations that use specific parts of the EME process show no difference
in KM returns from organizations that use other parts of the EME approach.
x H3a: Organizations that use specific parts of the EME process show a positive
The last area of analysis addresses which EME steps are more widely used and which
specific ones correlate to a higher KC, a higher CVR, or both. If this hypothesis can be
disproved, these steps identified, and additional correlations made, the EME development
process can be refined to emphasize and further expand those steps with a higher
KC/CVR correlation.
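As one illustration of how such a hypothesis could be tested with a one-tailed test, the sketch below runs a one-tailed permutation test of the correlation between an EME-usage score and a measured KC value. All scores, values, and function names here are invented for illustration; they are not data or methods from this study.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def one_tailed_p(xs, ys, trials=10_000, seed=1):
    """One-tailed permutation test of H0 'no positive correlation':
    the p-value is the fraction of random shuffles whose correlation
    is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = pearson(xs, ys)
    shuffled = list(ys)
    hits = 0
    for _ in range(trials):
        rng.shuffle(shuffled)
        if pearson(xs, shuffled) >= observed:
            hits += 1
    return observed, hits / trials

# Hypothetical data: EME-usage score per organization vs. its KC ($B)
eme_scores = [3, 5, 8, 10, 12, 14, 15, 17]
kc_values  = [1.0, 2.5, 2.0, 4.0, 3.5, 5.0, 6.5, 6.0]
r, p = one_tailed_p(eme_scores, kc_values)
```

With these strongly associated hypothetical data the observed correlation is high and the one-tailed p-value small, so the null hypothesis of "no positive correlation" would be rejected.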
If all three hypotheses can be disproven, there is a high probability that the specific
EME process does have some direct correlation with KC and measurable knowledge
management. EME can be extended to the KMS engineering realm and the model would
3.3 Approach
As mentioned in Chapter Two, there are eighteen steps in the EME construct. The
study used a survey approach to ask certain targeted corporations whether they had used
any or all of the eighteen steps and to what extent (there were 22 major question areas,
addressing the eighteen steps, each of the four KM pillars, and distinguishing between
KM approaches and effectiveness and formal and informal organization). The study also
asked additional detail about each of the eighteen areas in an effort to further expand the
detail of the model. The result was a survey with 118 questions addressing the eighteen
areas (not all questions were always asked – several of them were branching questions,
dependent upon the answers to previous ones). The detailed survey is included as
Appendix A; the mapping between the EME tenets and the questions is shown in Table
3-1 below:
Table 3-1 -- Mapping of EME and Survey Instrument Questions (recoverable rows)
List the critical intellectual assets of the organization – Q3a, 3b, 3c, 3d, 3e, 3f
Identify the sources of your intellectual assets – Q4a, 4b, 4c, 4d, 4e, 4f
Identify critical environmental changes – Q6a, 6b, 6c, 6d, 6e, 6f, 6g, 6h, 6i
List the intellectual assets required to support the strategic functions and processes – Q13a, 13b, 13c, 13d, 13e
List the sources of these intellectual assets – Q14a, 14b, 14c, 14d, 14e
Diagram the formal and informal organization – Q17a, 17b, 17c, 17d, 17e, 17f, 18a, 18b, 18c, 18d, 18e
[The remaining rows of Table 3-1 are not recoverable from this copy.]
3.4 Next Steps
The survey described above addresses the entire EME construct. Once organizations were assessed in each of these areas, they were ranked both overall and by each EME area. Organizations were also ranked using the financial data needed to determine KC and CVR scores for each of the subject companies; these rankings were then compared to determine whether a correlation did indeed exist. In addition, answers for each of the areas were also tabulated and analyzed to determine the use and effectiveness of each area with respect to the others. The next chapter describes the survey methodology and process.
CHAPTER 4 RESEARCH METHODOLOGY
4.1 Subjects
The initial goal of this effort was to attempt to analyze 10-20 organizational subjects. A larger sample would provide greater statistical validity for any claims of correlation, and a higher sample size would lend additional strength to the statistical outcome. Consequently, the initial sample population was intended to use the same 50 organizations as Lev used in his 2001 study, the top fifty firms from that analysis.
Since the KC metric requires the public availability of certain corporate financial statements (specifically, actual and predicted corporate earnings and the value of corporate tangible – physical and financial – assets), organizations in this study were limited to publicly traded firms. The list of subjects was also restricted to those firms that had actually generated earnings per share for the past three years, as at this point, Lev's calculations do not easily apply to financial, unprofitable, non-profit, academic, or governmental institutions (this is a recognized limitation of the approach).
The original list of companies cited by Lev in 2001 [126] and based on figures from
2000 has changed since then due to corporate reorganizations. Ten firms which had
reorganized since then, one which had gone bankrupt, one which showed no profit in
2004, and one which had no data in CompuStat for 2004 were all dropped from the
potential survey list, leaving an initial survey group of 37 companies. In order to try to
replicate Lev’s results as closely as possible, the study proceeded to request participation
from the 37 remaining companies. This list of initial candidate organizations was ranked and analyzed using both the Knowledge Capital (KC) and Market Value/Comprehensive Value Ratio (CVR) measures.
The subjects were each contacted and sent a survey based upon Stankosky’s EME
construct. The survey itself was organized to reflect the different concepts that have
greater weight at different points in the KMS engineering life-cycle. It was designed to be completed as quickly and easily as possible, and answers were usually requested on a Likert scale from 1 (positive, well, or fully) to 5 (negative, poorly, or not at all). Although respondents may interpret subjective scales differently, a Likert scale helps to minimize this issue by making all answers relative only to others provided by the same respondents. Participants also had the option to answer N/A or "Don't Know" to most questions, and there were some optional open-ended response questions.
The survey asked 117 questions on the 18 EME topics, divided into three areas
corresponding to the life-cycle phases of a KMS project. The questions are summarized in Table 4-1 below.

Table 4-1 -- EME KMS Survey Areas
[Table rows not recoverable from this copy.]
The survey itself is attached as Appendix A. The online version was worded slightly differently to accommodate screen display conventions and the introduction of branching logic, but the content was identical.
Since no survey existed that measured the specific systems engineering constructs
required for validation of the EME model, the one that was developed required
assessment and piloting (due to the lack of existing validated test data, this instrument
was not formally validated). The instrument was originally created as a list of binary (yes/no) questions; at the recommendation of the research committee, the survey was revised to incorporate a more variable Likert scale, as it was felt that this would generate more answers. Since this
survey was also an initial attempt to gather general information about real-world
experience with the EME construct, several free text optional response questions were
also added.
The revised survey was then reviewed by several associates of the researcher,
including three systems engineers and one retired forms designer. Most of their
suggestions were incorporated into a redesign of the survey form. There was a general
feeling that the survey was too long; the author chose to keep the length in order to
address each of the 18 EME areas but revised the survey to combine questions where
possible, to make answers optional, and to add the "Not Applicable" and "Don't Know" response options.
The survey was then posted in an online version and the website link sent to the
reviewers. Comments addressing clarity and logic were incorporated where appropriate.
The survey itself was submitted to the George Washington University Office of
Health Research (OHR) Institutional Review Board (IRB) in November of 2004. After
removing several initial requests for subject identification data from the survey, the IRB granted approval.
4.4 Design and Procedure
Since the data for this study came from two sources, the design and procedure for each are described separately below.
The original data in the 2001 comparison needed updating in order to bring them into
line with the timeframe of the EME assessment (early 2006). The earnings results and
predictions used by Lev were from I/B/E/S International; the financial data used for this
study used the same sources as closely as possible. The needed data were obtained from
the online databases for Institutional Brokers Estimate System (I/B/E/S) (for estimated
and actual earnings data) [59] and Standard and Poor’s CompuStat Industrial Annual
Reports (for asset value and market data) [118]. This both ensured that the corporate figures were as objective as possible and also allows access by others to the same datasets should that be desired. Online access to these datasets for purposes of this study was provided through GWU's subscription to the Wharton Research Data Services at the University of Pennsylvania.
Each organization on the subject list was contacted and given the option of completing the survey online or by means of a hard copy version with a stamped return envelope. Each survey was identified through a dropbox listing with one of the 37 firms selected for the study. Points of contact within
each organization were identified and a copy of the survey sent to at least one respondent
(by name) within each organization that agreed to participate. Results were requested, with follow-up reminders by telephone and/or email, in order to get at least the minimum number of respondents. Eight surveys in all were received that could be used (more were received, but some companies' responses could not be included).
The instrument itself had 117 questions addressing the 18 steps of EME. The
questions themselves were divided into the following five types as shown in Table 4-2
below.

Table 4-2 [table rows not recoverable from this copy]

Most of the questions used continuous ordinal Likert scales. Although a qualitative Likert scale like this one should
not be used for parametric statistics, it is still possible to examine Yes and No answers to
develop some indications of relevant items or events within the EME cycle. It is also
possible to see if there are any steps that have an unusually high number of "N/A" or "Don't Know" responses – while these could indicate confusion with the survey question,
they could also indicate a proposed concept within EME that should be reexamined.
Four major types of analysis were performed. The first, an attempt to disprove H10
by calculating a significant positive correlation between the overall concept of EME and
KC/CVR figures, used basic parametric correlation analysis with Pearson’s r since it was
judged that the binary answers and the financial figures were objective and measurable
quantities.
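The Pearson product-moment statistic used in this first analysis can be sketched in a few lines. This is a minimal illustration only; the function name and data values are invented, not taken from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson's r: the covariance sum divided by the root of the variance sums."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    ss_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    ss_x = sum((x - mean_x) ** 2 for x in xs)
    ss_y = sum((y - mean_y) ** 2 for y in ys)
    return ss_xy / math.sqrt(ss_x * ss_y)

# Perfectly linear illustrative data yields r = 1.0.
print(round(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]), 4))  # -> 1.0
```

In practice the study would pair each company's binary EME-presence score with its KC or CVR figure and feed those two lists to a function like this one.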
The second analysis used nonparametric analysis (more appropriate given the
relatively subjective nature of the EME scoring and ranking process) to attempt to
disprove H20. This analysis sought to compare and correlate financial rankings based on
KM and CVR with the ranks derived from summary EME survey assessments through
use of the Spearman rho coefficient; this can be used to infer whether more EME usage and higher KC/CVR rankings are correlated.
The third analysis focused within the EME survey itself, to determine which of the 18
steps were implemented more or less frequently within these organizations' KMS development efforts. The mean, median, and standard deviation were calculated for each of the 18 areas of the EME construct, the four pillars, and formal and informal organization (for a total of 22 assessed areas).
The last stage was to attempt to disprove H30 by correlating KC and CVR
performance with each of the 18 EME construct areas to determine which ones had a
higher correlation (and thus more success). This was actually another nonparametric analysis of correlation similar to the attempt to disprove H20, except this time it was
performed with ranks based upon the EME means from specific areas within the survey instrument. Results from all four of these analyses are summarized in Chapters 5 and 6.
There were some limitations and assumptions made as part of this survey that may affect the results; the major ones are discussed below.
The passage of the Sarbanes-Oxley Act in 2002 caused changes in GAAP. The Act
generally forced more accuracy and clarity in financial reporting, causing many
companies to change the way in which they valued and reported intangible assets [105].
4.5.1.1 Risk
In order to accurately replicate and update Lev's KC approach, the companies from the original 2001 analysis (with 13 exceptions) were the same companies
analyzed in 2005, using the same financial data sources. This allowed replication of the
financial analysis and results but comparing figures before and after the Act introduced
the possibility of the subjects having changed their financial reporting to comply.
4.5.1.2 Mitigation/Assumption
Most companies are still wrestling with the requirements of the new legislation and
have not yet fully implemented the new reporting systems required by the law. According
to an IDG survey in 2005, companies have indeed been revising their reporting
procedures and standards, but that is more due to ongoing business process changes than
to the legislation [105]. The historical figures used for the calculations do not show any
major changes between 2003 and 2004, and the other figures are all as of December 2004
or later.
Unfortunately, in several cases, the systems themselves had been developed several years earlier, and no one could be found at the end of the project who had been there at the beginning.
4.5.2.1 Risk
Respondents could try to answer questions for which they have no data and could thereby introduce inaccurate responses.
4.5.2.2 Mitigation/Assumption
A good KM system retains corporate knowledge about how to do business, and that includes information on how the KM system was developed. It was assumed that companies would have access to this historical knowledge if needed to answer the survey; it was also assumed much more
likely that any company with a good KM system already had that history close to hand at
all times.
In many cases, the KMS process itself within an organization is continual, and there is no discrete start or completion. Certain steps of the EME process may not apply to such an ongoing effort.
4.5.3.1 Risk
This could make it difficult to accurately answer questions about activities during the inception or completion phases of the project.
4.5.3.2 Mitigation/Assumption
Part of the purpose of this study is to validate EME against real-life KM systems
development situations. If there are EME steps that no one follows, that is a useful and
relevant data point. The life-cycle questions were all designed to include the “Not
Applicable" response. The intention is that respondents whose projects had no discrete inception or completion would simply select that answer. As a fallback position, none of the
life-cycle questions on the survey is mandatory and they can be left blank. While this
may lower the number of life-cycle responses received, at least the ones received will be
accurate.
4.5.4 Different Approaches to KM and KMS Projects
In some organizations, the KM effort took a completely different view of systems success and the development process from the IT-centric one in EME. In other organizations, there was no "corporate standard" approach to implementing KM systems and the number of potential subject systems was quite large.
4.5.4.1 Risk
The survey was sent to companies generally through their information technology
organization. It is possible that this organization has a different view of the success or
failure of a KM project than corporate management or the regular employee users of the
system. It is also possible that the system used as a basis for completing the survey does
not accurately represent the corporate KM system “culture”. Lastly, a broader definition
of KM/KMS could affect the answers by including data and information management
systems.
4.5.4.2 Mitigation/Assumption
Given the generally high level of the questions asked in the survey, and the fact that the survey asks few questions about technology, the IT department would probably not be a significantly biased source of responses.
A mitigation strategy could not be found for those cases in which the responses for a single system did not necessarily extend across the enterprise. Any results for an individual system rest on the assumption that they are representative projects – additional research will be required to generalize the results. The measurement of KM "success" used for this study – the measurement of all assets and market value not directly attributable to financial or physical assets – allows the use of a broad definition of KM. Any single system reported on is still assumed to reflect the corporate approach to systems engineering, and so the
The survey instrument is lengthy and addresses all eighteen areas in some detail. It
also uses terms as defined in the EME construct that might not be familiar to all
respondents.
4.5.5.1 Risk
There were concerns that respondents might misunderstand the terms or might choose not to complete the survey, given the average completion time observed by the validators on the final version.
4.5.5.2 Mitigation/Assumption
Each survey was preceded by a lengthy description of its intent and purpose as well as
email and telephone interviews and question/answer sessions. This was done both to
explain the purpose of the study project and to identify the desired individual
respondent(s). As part of this introduction the respondents were informed that the survey
could be lengthy but were asked to fill out as much as possible. They were also informed
that there were no “right answers” and that negative responses were perfectly acceptable.
These measures appear to have limited noncompletions; of ten respondents who began the survey process, six of them finished.
CHAPTER 5 RESEARCH FINDINGS
"It is of the highest importance in the art of detection to be able to recognise out of a number of facts which are incidental and which vital."
The KC and EME data were initially collected separately from each other (but in
parallel). Since the initial data could have proven useful even if the hypotheses could not
be disproven, both sets of data were analyzed to produce a set of intermediate findings for
both the KC measures and the EME survey prior to the requisite attempt to determine correlation between the two.
The original data in the 2001 comparison needed updating in order to bring them into line with the timeframe of the EME assessment (early 2006) and so were obtained anew from the online databases described in Chapter 4.
Earnings per share (EPS) data for all companies were based on actual data for 2003,
2004, and 2005 and on estimates for 2006, 2007, and 2008 (all estimates were as of the
month of December). Balance sheet data, including stock prices, were for year-end 2004, as 2005 reports had not yet been posted. Specific data fields requested and their
definitions are shown in Appendix B; data received and used for knowledge capital
calculations are shown in Appendix C. Data were initially requested, received, and
analyzed for all 37 subject companies; in order to minimize confidentiality issues, data
included in this study are only for those companies that participated in the EME survey.
The databases were queried for several asset value variables needed to generate the asset values and earnings estimate information necessary for the correlation. "Gross Physical Assets" include the gross (with depreciation) values reported for property, plant, and equipment [119]. "Financial Assets" include all long-term financial investments and advances [119]. Specific results are shown in Appendix C.
It should be noted that the timeframe of December 2004 was chosen as it was the
most recent for which data had been posted. However, it also has the benefit of having
been a relatively quiet period for the markets. The Enron bankruptcy fallout from December 2001 had worked its way through the markets by then; the oil price shocks of
2005 had not yet occurred. The major event of the month was the tsunami in Southeast
Asia; however, since that occurred at the end of the month during the holiday period and
none of the surveyed firms were dramatically affected by either the event itself or the
recovery efforts, the impact on the surveyed firms by 31 December 2004 was negligible.
The major financial market event was the issuance by the U.S. Financial Accounting Standards Board (FASB) of the new rules requiring that employee stock options be recorded at fair market value vice the "intrinsic" bases used previously; however, those
rules did not take effect until 15 June 2005 and so did not yet affect 2004 reported data.
Data used for the EME assessment can be divided into two categories: respondents and their responses.
Thirty-seven companies were identified for this study. Given the small sample size,
the lack of need for randomization among the study respondents, and the need to ensure that knowledgeable individuals received the survey, a direct identification and targeting approach was used. Corporate points of contact were
identified, and those in turn provided specific names to whom the survey would be sent.
Extensive literature, personal contacts, and online searches identified several possible
candidates within each of the 37 companies. This first phase identified 37 potential
points of contact within 26 of the subject companies. When no specific name or names could be found for a company, the position of the Chief Information Officer, Chief Technology Officer, or Chief eBusiness Officer was selected and more online searches were performed. This provided an additional set of candidate names, for a total of 46.
For these 46 names, corporate website and online searches provided only 7 corporate
email addresses and telephone numbers, several of which were for main switchboards (it was decided not to use personal addresses or phone numbers in order to reinforce to the recipients the professional nature of the contact). The remaining 39 names were then searched against business contact data from the Jigsaw online service, which provided
valid email addresses and telephone numbers. In addition, Jigsaw offered the option to
search their database against management level and department criteria, so CIOs, CTOs,
and similar titles could be identified. Again, for each identified contact, a name, email
address, and telephone number were obtained. At the end of the initial respondent data
acquisition period, the contact listing covered all 37 companies with at least one contact each.
As the initial survey alert message was sent out (to determine who in each company
should actually complete the survey), all 49 addresses were used. 8 messages were
returned from 8 companies as undeliverable. After additional research with Jigsaw, more
contacts were added to the list; this process repeated four more times. Eventually a valid
(non-bouncing) email address was obtained for each of the 37 companies except for one whose mail system rejected unsolicited email messages such as the one used for this study. A call to the corporate switchboard provided an alternate address, which in turn produced a valid "message sent", thus accomplishing the initial goal of at least one valid contact per company. The initial contacts were then asked to identify further potential points of contact. As each replied they were in turn added to the contact listing.
The final contact listing had 98 valid names and email addresses; 39 of those responded.
The survey was open for five weeks in late January and early February 2006. To
allow for travel and internal routing of the initial message, firms were given one week to
reply to the contact initiation email, and then a second email was sent. If there was still
no response after a second week, a third email was sent. After an additional 72 hours,
telephone contact was made. For subsequent emails and telephone calls, if no response
was received within 72 hours, a second email was sent. After 24 hours, a telephone call
was made. If at any point someone expressed a desire not to participate, he or she was immediately removed from the list. This resulted in an overall participation rate of 21.6%. Table 5-1 below shows how many of the 37 firms replied to the initial contact, how many requested the survey, how many agreed to participate in the survey effort, and how many completed it:

Table 5-1 -- Survey Participation
Contacted   Replied   Requested Survey   Agreed   Completed
37          22        20                 13       8
Several companies requested confidentiality as a condition of participation in this effort. Consequently, company names were removed from the
surveys and an identifying letter assigned instead. General industry sectors for the eight respondents are shown in Table 5-2 below; although the final sample size was small, it
was diverse enough to offer a relatively broad view of Fortune 500 firms.
Table 5-2 -- Respondent Industry Sectors

A Automotive
B Industrial
C Home Products
D Industrial
E Automotive
F Telecommunications
G Semiconductors
H Electric Utility
Once the survey was ready for distribution, it was posted in an online version as well as prepared in document form. One respondent chose to complete the survey through a phone interview; this survey was subsequently entered into the online collection after its review and acceptance by the respondent. All other respondents were sent a Word or PDF version of the survey along with the online link, as alternative means of response.
Surveys were returned within five weeks after the initial contacts were made. Six of
them were entered directly into the QuestionPro website by the respondents. Two of
them were returned via email. The cutoff date for data collection for this effort was
initially three weeks after the first round of contacts began; collection was held open an additional twelve days after the original deadline in order to add additional inputs,
increase the sample size, and receive inputs from specific organizations. Data collection was then closed.
5.2.1.1 Equations
In accordance with the hypotheses described in Chapter Four, the knowledge capital and
comprehensive value ratio were calculated for each of the responding organizations.
NEPS = [(FY02 + FY03 + FY04) + 2 × (FY05 + FY06 + FY07)] / 9        (5.1)

Knowledge Capital: KC = KCE / 0.105        (5.6)

Comprehensive Value Ratio: CVR = MV / CV        (5.10)
According to Lev, the average rate of return for physical and financial assets across industrial sectors is 7.0% for physical assets and 4.5% for financial ones [92]. Lev has separately
determined that KC has a higher rate of return in the early years (before competition
erases the competitive advantage of knowledge equity) but it still continues to earn at a
premium over other assets for as long as ten years [92][93]. When this “knowledge
premium” is discounted back to present-day values, the effective discount rate is about
10.5%; this calculation has also been supported by historical analysis [50].
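The chain of calculations above (Equations 5.1, 5.6, and 5.10, using the 7.0%, 4.5%, and 10.5% rates) can be sketched as follows. This is a minimal illustration, not the study's actual code: the KCE expression follows the column structure of Table 5-3, CV = BV + KC is inferred from the columns of Table 5-4, and all figures in the example are invented.

```python
def normalized_eps(actual_eps, estimated_eps):
    """Eq. 5.1: three actual years plus double-weighted three estimated years, over 9."""
    return (sum(actual_eps) + 2 * sum(estimated_eps)) / 9.0

def knowledge_capital(normalized_earnings, physical_assets, financial_assets):
    """Knowledge Capital Earnings (KCE): earnings left after imputing a 7.0% return
    to physical assets and 4.5% to financial assets (per Table 5-3's columns).
    KC = KCE / 0.105, the knowledge discount rate (Eq. 5.6)."""
    kce = normalized_earnings - 0.07 * physical_assets - 0.045 * financial_assets
    return kce / 0.105

def comprehensive_value_ratio(market_value, book_value, kc):
    """Eq. 5.10: CVR = MV / CV, assuming CV = BV + KC (inferred from Table 5-4)."""
    return market_value / (book_value + kc)

# Illustrative figures only, not data from the study:
kc = knowledge_capital(100.0, 500.0, 200.0)  # KCE = 100 - 35 - 9 = 56
print(round(kc, 1))  # -> 533.3
```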
5.2.1.3 Calculations
The tables below show only the calculations for those companies that actually participated in the study. Values for Gross Asset Value, Physical Assets (PA), Financial
Assets (FA), and Normalized Earnings Per Share (NEPS) were calculated earlier in the
study process as described above in Section 5.1.1. Appendix C shows the intermediate
financial data definitions and results and calculations for the companies who chose to
participate.
Table 5-3 -- Knowledge Capital Calculations
(Columns: Rank, Name, Total Normalized Earnings, Physical Asset Earnings (at 7% return), Financial Asset Earnings (at 4.5% return), Knowledge Capital Earnings, Knowledge Capital.)
[Table rows not recoverable from this copy.]
Table 5-4 -- Comprehensive Value Ratio Calculations (CVR)
(Columns: Rank, Name, Market Value (MV), Book Value (BV), Comprehensive Value (CV), MV/CV.)
[Table rows not recoverable from this copy.]
5.2.1.4 Results
The results show that Lev’s rankings from 2001 still generally hold true in 2006. In
2006 the distinction between KC rankings and CVR rankings is not so pronounced; the
companies tracked together more closely than they did in 2001. This is probably due to
the changes in the financial markets since 2001; however, this in turn may be partly attributable to changes in financial reporting practices over the same period.
5.2.2.1 Calculations
In Table 5-5 below are the summary means for each of the participating organizations in each of the assessed EME areas (specific answers for each of the 117 questions are included in the appendices).
Table 5-5 -- Summary EME Means by Participating Organization
(Columns: Rank, Company, then the mean score in each of the 22 assessed EME areas. The rotated column headings could not be fully recovered from this copy; recovered area labels include: 1 – Enterprise Definition; 2 – Value Proposition; 3 – Critical Asset Identification; 5 – Strategic Objectives; 6 – Environmental Changes; 7a – Leadership Support; 7b – Organizational Support; 7c – Technology Support; 7d – Learning Support; 8 – Strategic Functions; 9 – Functional Processes; 10 – Required Assets; 11 – Asset Sources; 12 – KM Strategy Identification; 13a – Formal Organizational Structure; 13b – Informal Organizational Structure; 14 – KM Technology Listing; 16 – Legacy System Integration; 17 – Risk Management/Mitigation; 18 – Change Management.)

5 B 1.80 3.00 3.00 2.25 1.00 N/A 2.36 3.42 1.71 2.60 2.33 2.67 3.67 2.67 2.00 2.20 4.00 N/A 2.00 1.13 4.00 N/A
2 C 2.20 2.00 2.50 2.25 2.14 3.00 1.57 1.92 2.13 1.75 1.67 1.67 2.00 2.33 2.00 1.80 N/A 2.50 1.00 1.44 2.25 2.50
8 D 3.00 1.00 5.00 5.00 2.67 5.00 2.14 1.50 1.78 1.00 5.00 5.00 5.00 5.00 3.00 1.60 N/A N/A 4.00 N/A 3.00 N/A
4 E 2.25 2.00 1.25 1.75 1.43 1.50 3.00 2.00 2.33 2.20 2.67 2.67 2.67 2.33 3.00 2.60 4.00 N/A 3.00 1.38 2.67 2.00
7 F 2.80 3.33 3.00 3.00 3.00 4.00 2.29 2.75 2.50 3.00 3.67 1.00 3.00 2.67 2.00 2.60 4.00 1.00 1.00 2.44 5.00 N/A
3 G 3.20 2.33 2.25 2.25 2.43 N/A 2.00 2.33 2.06 1.75 2.00 2.00 2.00 2.00 2.00 1.80 2.00 N/A 2.00 1.56 3.67 3.00
1 H 1.00 1.33 1.50 2.00 2.29 1.00 1.14 1.50 1.81 1.20 1.33 1.33 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.33 2.50 1.00
[The row for the remaining participating company is not recoverable from this copy.]
5.2.2.2 Mean Scores and Deviations for EME Areas
Below are the summary means for each of the assessed EME areas (specific means for each of the 117 questions are shown in Appendix E). For purposes of computing the mean score μ, only the actual Likert scale scores were used (1 through 5); blank answers, N/A answers, and Don't Know answers were all ignored in the mean calculations.
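The scoring rule described above can be sketched as follows; this is a minimal illustration (the function name and data are invented, not taken from the study):

```python
def eme_area_mean(responses):
    """Mean of the 1-5 Likert answers only; blank (None), "N/A", and
    "Don't Know" responses are excluded rather than treated as zeros."""
    scores = [r for r in responses if isinstance(r, (int, float)) and 1 <= r <= 5]
    # None mirrors the N/A cells in Table 5-5 when no usable answers remain.
    return sum(scores) / len(scores) if scores else None

print(eme_area_mean([3, 1, "N/A", 2, None, "Don't Know"]))  # -> 2.0
```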
For this step, mean scores for each of the areas were computed and standard
deviations were calculated. The EME areas were then ranked according to their average
mean as shown:
Table 5-6 -- EME Effectiveness Ratings by Area
(Columns: Rank, Relevant EME Area, EME Average Score, Std Dev.)
[Table rows not recoverable from this copy.]
[Chart: EME Score (y-axis, 0.00 to 4.00) plotted for EME Areas 1 through 23 (x-axis).]
Figure 5-2 shows how each company scored in its own self-assessment of EME across the board. Remembering that low scores equate to good EME performance, it is apparent that some companies have difficulty implementing formal engineering and system development processes (they also tend to view KM as a training concept). Company H, on the other hand, is doing quite well at implementing this concept (not surprising given that they are a computer hardware firm).
[Chart: mean EME Score (y-axis, 0.00 to 3.50) by Company (A through H).]
5.2.3 Results
Based upon the survey, the most self-rated effective EME areas are (18) Change Management and (13b) Informal Organizational Structure. The least effective ones are (in descending order) (13a) Formal Organizational
Structure, (6) Environmental Changes, and (16) Legacy System Integration. It should
also be noted that five EME items had at least one respondent answer "N/A" or leave a major EME area (not just individual questions) completely blank, thus indicating either confusion with the question or an EME concept that needs reexamination. Among these was (6) Environmental Changes, with two such responses.
This could easily be a reflection of the small sample size, but it is also possible that additional work needs to be done in either refining the survey instrument or defining the underlying EME concepts more clearly.
EME scores by company and by area are shown below in the next two charts. The
first shows all of a company’s scores on each of the 18 areas. In this chart, the X axis is
the list of respondents. This chart allows some generalizations about scoring biases or tendencies within an individual organization (for example, Companies D and H, who define KM as a training issue, seem to feel that they have EME working well throughout).
Scores By Company
[Chart: each company's scores (y-axis, 0.00 to 6.00) in EME areas 1, 2, 3, 4, 5, 6, 7a, 7b, 7c, 7d, 8, 9, 10, 11, 12a, 12b, 13a, 13b, 14, 15, 16, 17, and 18; the x-axis is the list of respondents.]
The next chart, Figure 5-4, tracks each of the eight companies throughout the EME
assessment process. In this chart the X-axis is the series of EME areas. This both highlights individual organizational biases and allows a quick but meaningful comparison across companies within each EME area.
Figure 5-4 -- Company Scores By EME Area
[Chart: scores (y-axis, 0.00 to 5.00) for companies A through H tracked across EME areas 1, 2, 3, 4, 5, 6, 7a, 7b, 7c, 7d, 8, 9, 10, 11, 12a, 12b, 13a, 13b, 14, 15, 16, 17, and 18 (x-axis).]
Other data, including the raw data, free text responses, and additional graphs, may be found in the appendices.
Although the financial and EME data gathered in the previous two sections are each informative on their own, for this study they must be correlated to determine if the results of these calculations are independent from each other. The three hypotheses tested for rejection were:
• H10: Organizations that use any of the EME approach show no difference in KM return from organizations that do not use it.
• H20: Organizations that use more EME steps show no difference in KM return from organizations that use fewer steps.
• H30: Organizations that use specific parts of the EME process show no difference in KM returns from organizations that use other parts of the EME approach.
For the first hypothesis set, the study set out to disprove the statement:
• H10: Organizations that use any of the EME approach show no difference in KM return from organizations that do not use it.
This breaks down into two sub-questions:
  o Does any use of the EME approach correlate to knowledge capital (KC)?
  o Does any use of the EME approach correlate to the comprehensive value ratio (CVR)?
In the previous data collection phase, eight companies were surveyed; all of them
reported some use of the EME construct (EME Present = 1). Consequently, no
comparison could be made and this hypothesis test was terminated. The study moved on to the second hypothesis set.
5.3.2 H2: Does More EME Correlate to More KM Success?
While the first hypothesis set attempted to analyze whether the existence of EME and KM success were correlated, the second hypothesis set extended the concept by testing degree:
  o Does a higher EME mean (across all 18 areas) correlate at all to KC?
  o Does a higher EME mean (across all 18 areas) correlate at all to CVR?
  o Does a higher EME mean (across all 18 areas) correlate positively to KC?
  o Does a higher EME mean (across all 18 areas) correlate positively to CVR?
For H2, the variables of interest are the level of overall observance of the EME
principles (EME mean, calculated across all 18 areas and 70 Likert-scale questions) and
the overall level of KC or CVR within a company. Since the Likert means are means of
ordinal variables, they are still ordinal variables. Since the financial figures would be converted to ranks for comparison, it was determined that EME and KC or CVR could be correlated using the non-parametric Spearman's rho test.
There are additional tests for nonparametric rankings, primarily the Friedman test and
the "coefficient of concordance" (a normalized version of the Friedman test, also known as Kendall's W). These both test more than two sets of ranks, such as the EME
rankings, the KC rankings, and the CVR rankings, in a single calculation to see if there is
correlation. However, since CVR and KC are already mathematically related (as shown above, CVR is derived in part from KC), there is no need to test the correlation of all three sets of rankings, and the Spearman's rho test was sufficient for this study.
Again, there were eight successful samples (companies that both completed the EME
survey and had knowledge capital data computed). Each company was ranked 1 to 8 in
each of the three critical measurements: EME mean, level of knowledge capital (KC), and
the market value/comprehensive value ratio (CVR). Ranking results are shown in the
table below:
Table 5-8 – Data for H2 Statistical Comparison
Company Name   EME Rank   KC Rank   CVR Rank
Company A      6          7         7
Company B      5          5         2
Company C      2          2         5
Company D      8          6         8
Company E      4          8         3
Company F      7          4         6
Company G      3          1         1
Company H      1          3         3
Since the rankings are based on ordinal relative assessment data, traditional
parametric statistics are not appropriate for this analysis. Although the financial data are
quantitative and objective, the EME ratings were primarily scored on a five-point Likert
scale. Although the scales were consistent in direction (1 was always considered
“compliant” or “good”), they still could not be used as a true objective statistical
measure. As for the two candidate measures of KM
success, it can be seen in Table 5-8 above that the measures generally produce different
rankings. For this analysis, both measures were compared to the level of EME
observance to determine if one had a higher level of correlation than the other did.
For this analysis, the Spearman’s rho test (also known as the rank correlation
coefficient) was used:

    rs = SSxy / √(SSx × SSy)    (5.11)

Since there were no ties in the rankings, this can be simplified to:

    rs = 1 − (6 × Σi=1..n di²) / (n × (n² − 1))    (5.12)
Company Name   EME Rank   KC Rank   d    d²
Company A      6          7         -1   1
Company B      5          5         0    0
Company C      2          2         0    0
Company D      8          6         2    4
Company E      4          8         -4   16
Company F      7          4         3    9
Company G      3          1         2    4
Company H      1          3         -2   4
N = 8                               Σd² = 38
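The tie-free form of Eq. (5.12) can be checked directly against the tabulated ranks. This is an illustrative sketch, not code from the study:

```python
# Spearman's rho per Eq. (5.12) for rankings without ties, applied to
# the EME-vs-KC ranks tabulated above (n = 8, sum of d^2 = 38).
def spearman_rho(ranks_x, ranks_y):
    """Tie-free Spearman rank correlation coefficient."""
    n = len(ranks_x)
    d_squared = sum((x - y) ** 2 for x, y in zip(ranks_x, ranks_y))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

eme_ranks = [6, 5, 2, 8, 4, 7, 3, 1]  # Companies A..H
kc_ranks  = [7, 5, 2, 6, 8, 4, 1, 3]

print(round(spearman_rho(eme_ranks, kc_ranks), 4))  # 0.5476
```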
The measured value for rs (rm) for n=8 ranked pairs is 0.5476. For n=8 pairs and a
level of significance (α) of 0.05 (95% level of confidence), the critical value for rs (rc) is
0.6429 [87]. Since rm < rc, no significance can be ascribed to the correlation, and the H20
null hypothesis cannot be rejected for KC – there is no statistically-significant correlation
between the level of EME usage and the level of knowledge capital within a corporation.
Company Name   EME Rank   CVR Rank   d    d²
Company A      6          7          1    1
Company B      5          2          -3   9
Company C      2          5          3    9
Company D      8          8          0    0
Company E      4          3          -1   1
Company F      7          6          -1   1
Company G      3          1          -2   4
Company H      1          3          2    4
N = 8                                Σd² = 29
The measured value for rs (rm) for n=8 ranked pairs is 0.6602. For n=8 pairs and a
level of significance (α) of 0.05 (95% level of confidence), the critical value for rs (rc) is
again 0.6429 [87]. Since rm > rc, statistical significance can be ascribed to the correlation,
and the H20 null hypothesis can be disproven for CVR – there is a statistically-significant
correlation between the level of EME usage and the market’s recognition and valuation of
corporate knowledge.
Since the EME-CVR null hypothesis was disproven, the null hypothesis H20 was
rejected and the alternative H2a was accepted: organizations with a higher level of EME
usage show a positive correlation with KM returns.
It appears that CVR is a better method for comparison when used with EME. The
EME-CVR correlation at 0.6602 was higher than the KC correlation (at 0.5476), and so
for this area of analysis, EME-CVR would be the stronger measure of EME-KM success.
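The significance decision used above reduces to a comparison against the tabulated critical value (rc = 0.6429 for n = 8 pairs at α = 0.05, one-tailed, from [87]). The helper function below is an illustrative sketch of that decision rule:

```python
# Critical value for Spearman's rho, n = 8 pairs, alpha = 0.05, one-tailed.
R_CRITICAL = 0.6429

def is_significant(r_measured, r_critical=R_CRITICAL):
    """Reject the null hypothesis only when r_m exceeds r_c."""
    return r_measured > r_critical

print(is_significant(0.5476))  # False: EME-KC correlation not significant
print(is_significant(0.6602))  # True: EME-CVR correlation is significant
```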
While the first hypothesis set analyzed whether the existence of EME and KM
success were correlated, and the second hypothesis set tested whether more usage of EME
correlates with more successful KM systems, the third attempted to determine which of
the individual EME areas drive that correlation. The study set out to disprove the statement:
x H30: Organizations that use specific parts of the EME process show no difference
in KM returns compared to organizations that use other parts of the EME approach.
Since the calculations for resolving H1 and H2 above concluded that there is some
correlation between EME and KM success, the study now turned toward identifying
o Which EME areas correlate positively to KC and how do they rank (higher correlations ranking first)?
o Which EME areas correlate positively to CVR and how do they rank?
For H3, the variables of interest are the level of overall observance of the EME
principles (EME mean within each EME area based on the 70 Likert-scale questions) and
the overall level of KC or CVR within a company. Since this analysis is also comparing
ranks, the non-parametric Spearman’s rho was used for determining correlation.
As with the other analyses, there were eight successful samples (companies that both
completed the EME survey and had knowledge capital data computed). As above, each
company was ranked 1 to 8 in each of the three critical measurements: EME observance,
level of knowledge capital, and the market value/comprehensive value ratio. However,
this time the EME rankings were for mean Likert scores on each of the areas of the EME
construct. As an example, ranking results for the first of the measured areas are shown in
Table 5-11 – Data for H3 Statistical Comparison, “EME1 – Enterprise Definition”
Company Name   EME1 Rank   KC Rank   CVR Rank
Company A      6           7         7
Company B      2           5         2
Company C      3           2         5
Company D      6           6         8
Company E      4           8         3
Company F      5           4         6
Company G      8           1         1
Company H      1           3         3
Once these data were calculated for each of the EME areas, they were tabulated and
summary results placed in Table 5-12 below (note: areas with insufficient data generated
an rm greater than one; those values were ignored when assigning statistical significance):
EME Element   EME-KC Measured ρm   EME-CV Measured ρm   Critical Value ρc   EME-KC Rank   EME-CV Rank   Significant? (EME-KC)   Significant? (EME-CV)
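Because some per-area rankings contain ties (Table 5-11, for example, gives two companies an EME1 rank of 6), the per-area coefficients ρm need the general form of Eq. (5.11) rather than the tie-free shortcut. The sketch below applies that form to the printed Table 5-11 ranks; the resulting value only illustrates the computation and is not asserted to match the study's tabulated ρm:

```python
from math import sqrt

# General rank-correlation form of Eq. (5.11): Pearson's formula applied
# to the rank values, which remains usable when ranks repeat.
def rank_correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    ss_xy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ss_x = sum((a - mx) ** 2 for a in x)
    ss_y = sum((b - my) ** 2 for b in y)
    return ss_xy / sqrt(ss_x * ss_y)

eme1_ranks = [6, 2, 3, 6, 4, 5, 8, 1]  # Companies A..H, Table 5-11
kc_ranks   = [7, 5, 2, 6, 8, 4, 1, 3]

print(round(rank_correlation(eme1_ranks, kc_ranks), 4))  # -0.0125
```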
5.3.3.1 Alternative Hypothesis
Since both the EME-KC and EME-CVR null hypotheses were disproven for several
EME areas, and since both of the analyses were positively correlated at the α = 0.05 level
for a one-tailed test, the null hypothesis H30 was rejected and the alternative H3a was
accepted:
x H3a: Organizations that use specific parts of the EME process show a positive
difference in KM returns compared to organizations that use other parts of the
EME approach.
Figure 5-5 below shows how the KC and CVR correlation coefficients compare for each of the
EME areas.

Figure 5-5 -- Correlation Coefficient (KC and CVR) by EME Area
[Chart of correlation coefficients from -1.0 to 1.0, with the critical value rc marked in both the positive and negative directions; labeled areas include 1. Enterprise Definition, 3. Critical Asset Identification, 5. Strategic Objectives, 6. Environmental Changes, 9. Functional Processes, 13b. Informal Organizational Structure, and 18. Change Management]
The five EME areas that showed statistically-significant correlation with KC success
were:
The areas that showed the lowest correlation with KC success were Environmental
Changes and Risk Management (the former was not significant, and Risk Management, with a high number of N/A responses, is suspect).
For CVR success, there were six EME areas that were statistically-significant:
The areas that showed the lowest correlation with CVR success were (7d) Learning
Support (rm = 0.0119), (7a) Leadership Support (rm = -0.0746), and (7b) Organizational
Support (rm = -0.0833). This does not necessarily mean that these were negative
influences but they had no apparent correlation with a successful outcome. This lends
some support to the anecdotal finding that KM efforts often succeeded for reasons
other than top-down sponsorship.
Those organizations that reported corporate-wide efforts chartered and led by senior
management generally reported lower levels of perceived success on the EME survey,
and this lack of correlation shows that it does not necessarily affect financial success
either. The organizational section of the survey asked specifically about the use of
formal structures and senior sponsorship; respondents reporting heavier use
of those concepts generally self-reported lower levels of EME success, and the financial
figures show no effect. This leads to the possibility that the most effective KM
efforts are driven from the working level rather than from the top;
while leadership in all cases supported the KM efforts, their level of involvement did not
seem to affect the financial outcome. This is an area which deserves additional analysis.
The data from the survey and results of the exercises were summarized in this
Chapter. Specific raw survey scores are in Appendix D. Intermediate and final
calculations based on those data are in Appendix E. Additional graphs and charts are in
Appendix F.
CHAPTER 6 CONCLUSIONS, DISCUSSION, AND SUGGESTIONS FOR
FURTHER RESEARCH
“People do not like to think. If one thinks, one must reach conclusions. Conclusions
are not always pleasant.”
Helen Keller
6.1 Conclusions
The results of this study show that two of the three null hypotheses can be
rejected. Since all of the respondents indicated at least some usage of the basic tenets of
EME, it could not be tested whether there is any correlation between the presence of
EME and KM success. A positive correlation was, however, established between
Enterprise Management Engineering and both the Comprehensive Value Ratio and
Knowledge Capital measures of KM success. The correlation
between EME and the relative CVR measure is a stronger one but both measures do have
some validity. The next logical step would be to determine whether EME could be
applied prescriptively to improve KM success.
The other relevant conclusions can be divided into three groups: conclusions based on
the EME/KC correlation, conclusions based upon the KC data alone, and conclusions
about the EME model itself.
In addition to the major conclusion cited above, that EME and KC/CVR do positively
correlate, there are three other lesser conclusions that can be drawn from this effort, each
of which is discussed below.
6.1.1.1 The EME/KC Valuation Approach Only Works on a Corporate Level
The reason for this is that the KC data are only objectively available at the corporate level – there is no
standard repository today, nor is there a requirement, for corporations to report this type
of annual report data for subsidiaries or major operating divisions. In order to keep the
accounting data objective, the assumption is that they should be publicly available from
an objective body. One organization with division-level KM efforts highlighted that its
division-level approaches were so different (based on differing missions for the division
units) that an aggregate answer would have cancelled itself out. This is a flaw in the
study methodology, which currently requires that all assessments be made at the overall
corporate level – it does not extend well if there is no corporate approach to KM.
A future refinement could make use of internal data generated at lower levels. In the above example, this would allow
correlations to be made at the division level instead of the corporate one. These data may
be less objective than publicly-reported figures, however, and would require careful validation.
6.1.1.2 Definitions of Knowledge Vary Widely
Although the KM community is slowly coming to grips with it, the contacted corporations used for this
study (those who completed the study and those who did not) all varied dramatically on
their own ideas of what constituted knowledge. For purposes of the study the definition
of knowledge, KM, and KMS was deliberately kept as broad as possible, both to
encourage more reporting and response and to allow a more accurate use of the KC
valuation metric. This metric assigns all corporate earnings not attributable to long-term
tangible or financial assets to knowledge capital, a category that includes not
only the “traditional” knowledge definition of “corporate information in context” but also
items such as training systems and course material, data and distribution rights, patents
and copyrights, and mineral exploitation rights (depending on the accounting standards).
In discussions with the survey respondents, they indicated that from their perspectives
those items did count as knowledge. This definition and Lev’s definition align closely,
but both are broader than those used in many of the papers and documents published in this
field.
6.1.1.3 Few Organizations Have Corporate KM “Systems”
KM systems imposed from the top have a low record of success by any metric. Those
corporations that stated that they have KM systems generally identified them as being at
the division or working level rather than the corporate level. Even those
organizations that state that they have corporate-level KM did not like calling it a
“system” in the terms used by traditional systems engineering (as shown in the literature
review in Chapter Two). The correlation results support the idea that the more leadership
and business management concepts (as one respondent put it, “trends”) get involved, the
less successful the effort is. As expected, respondents who worked in the IT organization
tended to discount the importance of non-IT factors in the KM system process, and those
who were not from the IT department tended to view the concept of system as too rigid
and formal.
There were some conclusions that could be drawn from the analysis of the financial data used
for the KC valuation itself. Lev’s valuation method continued to
produce rankings (both for absolute KC values and for relative CVR measures) that both
repeat his original conclusions and reflect common market wisdom for the subject
companies.
6.1.2.1 Accounting and Organizational Changes Can Cause Invalid Figures
Companies that restructure or have an unusual or bad year suffer poor KC
valuations from a one-time snapshot of balance sheet figures. This study attempted to exclude
financially-struggling companies, but the blanket exclusion criteria were not always
applicable – nor should they have been (useful data were still gathered from one company
even though this snapshot would show that this company is not being recognized for their
KM).
While the financial industry takes this into account (as an example, for this study, for
one company, the earnings per share were forecast to have one negative year out of six
but overall it received a positive earnings prediction), this is because the figures are
viewed over time and anomalies are essentially discounted. If the KC valuation metric
will be used for measuring the efficacy of KM, KM practitioners and corporate
decisionmakers will need to come to consensus on the meaning of this figure over time.
The four years between financial snapshots for this study produced some major
changes in how companies chose to classify and report certain types of assets or earnings.
Having said that, the changes imposed by a U.S. Government push toward more rigid
accounting standards and the overall global push toward a common global method of
accounting are forcing many companies to change their annual reporting methodology.
This push toward more and different disclosure may eventually result in a more
transparent means of valuating knowledge resources, but in the interim, it will make
meaningful comparisons across time periods far more difficult to do with meaningful
precision.
In addition, the study also came to several conclusions about the EME model separate and apart from
the correlation results.
The EME model has 18 steps, 15 of which are concentrated in the design and build
phase. Respondents repeatedly cited that one of the keys to effective KM systems is the
constant update, renewal, and retirement of outdated, revised, or inaccurate data. Based
upon the respondents’ experiences and their need to accommodate constantly changing
inputs, the KM system engineering process as shown in the EME construct should be
presented more like a continuous process cycle and less like a sequential set of events
resulting in a single (final) deliverable. This in turn requires that as much attention be
paid to the later operation and renewal steps as to the design and build steps.
6.1.3.2 EME Terms of Reference Are Too “Traditional”
This conclusion came about from repeated explanations and discussions with
corporate respondents on the terms used in the survey and on the overall terminology for
the model itself. Respondents from industries
traditionally associated with “classical” system engineering (aerospace, utilities, IT) had
no issue with the terminology used to describe the model; however, respondents from
industries with little or no exposure to defense or aerospace clients or staff found the
terminology formal and restrictive. The model might benefit from broader definitions of
its terms of reference.
One intent of the EME survey was to identify areas that are more and less widely used
and to correlate those areas with the KM valuation process. A related intent was to
identify areas that were universally used (or not used) and to solicit independent feedback
from respondents on the EME construct. One interesting observation is that the EME
areas that the respondents rated the three highest on their surveys (the “perceived”
successes) did not necessarily correlate with the highest actual KM successes as did other
areas:
Table 6-1 -- Perceived vs Actual EME "Success" Areas
9. Functional Processes 19 2 6
8. Strategic Functions 13 3 9
1. Enterprise Definition 15 5 2
Based on those responses, the following changes should be considered for the EME
model:
x Revise and expand the section on change management and its use
x Revise and expand the section on asset source identification, its purpose, and its
justification
x Revise and expand the section on enterprise definition, its purpose, and its
justification
x Risk management and mitigation should be considered as part of the upfront
strategic planning process rather than as part of the integrative plan; however, its
x The section on environmental changes, their anticipation, and their place in the
x EME should consider aligning itself more closely with the CVR measure of
knowledge success. This means that more emphasis should be placed on those
areas that correlated more highly with the CVR measurement, in order to ensure a
stronger link to measured KM success.
6.2 Suggestions for Further Research
As stated throughout this study, this effort is an initial survey and a broad-brushed
validation. There are several areas for potential follow-up, identified both through the
literature review and through the results of this study effort. These areas include:
x Readdress the first hypothesis set to correlate the presence of EME with the
presence of KM success (this would require a broader and
more representative sample set than the top-of-the-Fortune-500 set used for
this study).
x Identifying and proving causation (not just correlation) between EME and its
measured KM outcomes;
x Developing and assessing concrete and objective definitions for the stages of
EME along with metrics of progress and accomplishment for each stage of the
construct; and
x Development of a suggested set of EME and CVR reporting standards that would
make future measurements more consistent and comparable.
One additional observation came from charts from this study such as Figure 5-2 (which charted average overall EME scores by
company). Given that the eight respondents came from several industry sectors, replacing
the company codes (which are meaningless anyway) with the applicable sector names
produces Figure 6-1 below. It would be worthwhile to
pursue these divergences (especially the counterintuitive ones, such as home products
outscoring semiconductors) to try to determine whether they are real and if so, why they
are.
Figure 6-1 -- Industrial Sector EME Scores
[Bar chart of average EME score, on a 0.00 to 3.50 scale, by industry sector]
Research in any of these areas would go a long way toward providing some additional
quantitative proof of the actual processes, costs, and benefits of successful knowledge
management.
6.3 Discussion
It should be remembered that this survey was a self-assessment of projects that were
generally at least the partial responsibility of the respondents (or in some cases their
predecessors). The subjective bias was deemed preferable to not being able to get
accurate information about the actual implementation process for a project; however, as
with all project lessons learned, the self-assessments will be relative and may not indicate
totally objective results. Objective assessments of EME application, akin to the CMMI
SCAMPI survey and accreditation process, would improve the validity of this research
and allow more specific analyses. Although the increasing acceptance of the CMMI and
the INCOSE/SEI system engineering standards is beginning to move beyond the narrow
defense and aerospace base, developing an assessment
standard widely applicable across multiple industries will require additional research and
coordination.
Since this is the first survey to attempt to validate the specific EME model, the
analysis was relatively high-level and still includes a degree of uncertainty. The findings
from this effort are more akin to indicators of trends or possible concepts than to
definitive statements of causation and relevance. However, the findings from the
statistical analysis portion of this study are supported both by the original literature
review and by the free text and anecdotal responses of the survey respondents. Although
perceptions of KM success vary
from organization to organization and from researcher to researcher, the overall trends as
indicated by the findings of this study are relatively robust. KM is reaching a crossroads;
a common taxonomy and objective performance metrics are critical to the acceptance of
the discipline.
In order for KM systems developers and practitioners to truly be able to make the
business case for their future systems, they will need to work with the financial managers
to develop and implement accurate reporting and measurements of the true cost and
impacts of their systems. The broad valuation method used in this study produced an
initial assessment but is by no means a complete inventory of the current state of EME
implementation or knowledge valuation even across the entire Fortune 500, let alone the
global business environment. Furthermore, this study was confined only to large
profitable public and non-financial companies – hardly a result that can be easily
generalized. Follow-on efforts will need to extend
and refine the new standards so that more accurate intangible measurements can be put
into practice. KM risks becoming a
failed buzzword because too much has been promised with no proof of delivery. Without
such proof, interest will fade. For KM to
become more than just another failed business or IT buzzword, it will need to combine
the pragmatism of the management perspective with the discipline of the IT systems
view. Hopefully this effort is the first of many to begin to forge that synthesis and bring
KM from the trend list into the permanent organizational management lexicon.
REFERENCES
[1] Alavi, M. and Leidner, D., 1999, “Knowledge Management Systems: Issues,
[2] Alvesson, M. & Karreman, D., 2001, “Odd Couple: Making Sense Of The
[3] American Productivity and Quality Center, 2003, “Measuring the Impact of
B%20Exec%20Summ.pdf?paf_gear_id=contentgearhome&paf_dm=full&pagesel
[4] Andrade, J., Ares, J., Garcia, R., Rodriguez, S., and Suarez, S., 2003, “Lessons
[5] Aris, J., 2000, “Inventing Systems Engineering”, IEEE Annals of the History of
[6] Aston, A., 2002, “Brainpower on the Balance Sheet,” Business Week, 26 August
2002,
http://www.businessweek.com:/print/magazine/content/02_34/b3796624.htm?mz,
[7] Aston, A., and Lev, B., 2002, “Recalculating the Balance Sheet”, Business Week,
26 August 2002,
http://www.businessweek.com:/magazine/content/02_34/b3796625.htm, site
[10] Baum, G., 2000, “Introducing the New Value Creation Index,” Forbes ASAP, 01
24 March 2004.
[11] Blanchard, B. S., 2003, Systems Engineering Management, Third Edition, Wiley-
[12] Blumentritt, R. and Johnston, R., 1999, “Towards A Strategy For Knowledge
[13] Bourdreau, A. and Couillard, G., 1999, “Systems Integration And Knowledge
[14] Brigham, E. F., and Gapenski, L. C., 1988, Financial Management: Theory and
[15] Brooking, A., 1999, Corporate Memory: Strategies for Knowledge Management,
[16] Brummet, R. L., Flamholtz, E., and Pyle, W. C., 1968, “Human Resource
224.
[17] Bukh, P. N. and Johanson, U., 2003, “Research And Knowledge Interaction:
[19] Carper, W. B., 2002, “The Early Development Of Human Resource Accounting
Including The Impact Of Evolving Asset Valuation Theory,” paper submitted for
March 2004.
[20] Cho, G., Jerrell, H., and Landay, W., 1999, “Program Management 2000: Know
The Way: How Knowledge Management Can Improve DoD Acquisition,” Report
Of the DSMC 1998–1999 Military Research Fellows, Ft. Belvoir VA: Defense
[21] Chourides, P., Longbottom, D., and Murphy, W., 2003, “Excellence In
[22] Civi, E., 2000, “Knowledge Management As A Competitive Asset: A Review,”
[24] CMMI Product Team, 2001, Capability Maturity Model Integration (CMMI) for
[25] CMMI Product Team, 2002, Capability Maturity Model Integration (CMMI) for
Chapters 2, 4, and 7.
[26] CMMI Product Team, 2005, Capability Maturity Model Integration (CMMI)
Overview, http://www.sei.cmu.edu/cmmi/adoption/pdf/cmmi-overview05.pdf ,
2005.
[28] Danish Agency for Trade and Industry, 2000, A Guideline For Intellectual Capital
http://www.efs.dk/publikationer/rapporter/guidelineICS/download.htm, site
[29] Daum, J., 2001, “How Accounting Gets More Radical In Measuring What Really
[30] Davenport, T., and Prusak, L., 1998, Working Knowledge: How Organizations
Manage What They Know, Harvard Business School Press, Boston, Chapters 7, 8,
and 9.
[32] Deng, Z., Lev, B., and Narin, F., 2003, “Science and Technology As Predictors of
Stock Performance,” Intangible Assets: Values, Measures, and Risks, J. Hand and
December 2005.
[35] Doran, T., 2004, “IEEE Std 1220-1998 Revised,” paper presented at USAF
[36] Edvinsson, L. and Malone, M., 1997, Intellectual Capital: Navigating in the New
London, Chapter 8.
[38] Edwards, J. S., Shaw, D., and Collier, P. M., 2005, “Knowledge Management
[39] Eisner, H., 2002, Essentials of Project and Systems Engineering Management,
Second Edition, John Wiley and Sons, New York, Chapters 3, 7, 8, and 13.
[41] European Forum for Quality Management, 2003, “2003 EFQM Excellence
[42] Federal Accounting Standards Board, 2001, “FASB Statement 142: Goodwill and
[43] Felton, S. M. and Finnie, W. C., 2003, “Knowledge Is Today's Capital: Strategy &
Leadership Interviews Thomas A Stewart,” Strategy & Leadership, 31(2), pp. 48-
55.
6, and 14.
Management, Second Edition, John Wiley and Sons, Indianapolis, Chapters 3 and
7.
[46] Gebauer, M., 2002, “Human Resource Accounting: Measuring the Value of
Human Assets and the Need for Information Management,” Badovinac, B. et al.
http://notesweb.uniwh.de/wg/wiwi/wgwiwi.nsf/1084c7f3cb4b7f4ac1256b6f00585
1c5/3d0ea9dde1e99e6ac1256c140044dbbe/$FILE/Gebauer%20(2002)%20HRA.p
[47] Gold, A. H., Malhotra, A., and Segars, A. H., 2001, “Knowledge Management:
[48] Gordon, J., 2004, “Making Knowledge Management Work,” Training, 42(8), pp.
16-21.
[49] Green, A., and Ryan, J. J. C. H., 2005, “Framework of Intangible Valuation Areas
[50] Gu, Feng, and Lev, Baruch, 2001, “Intangible Assets -- Measurement, Drivers,
Systems: Issues And Challenges For Theory And Practice,” Proceedings Of The
[52] Harrison, S., Sullivan, P. H., and Castagna, M. J., 2001, “Intellectual Capital
pp. 155-165.
[53] Harrison, S., Sullivan, P. H., and Castagna, M. J., 2001, “Intellectual Capital
Reporting, B. Lev, ed., The Brookings Institution, Washington DC, Chapter Five.
[54] Hendricks, K., and Singhal, V., 2000, “The Impact of Total Quality Management
March 2004.
[55] Hildebrand, C., 1999, “Making KM Pay Off,” CIO Enterprise Magazine, (12)9,
pp. 64-66.
[56] Huber, G. P., 1990, "A Theory of the Effects of Advanced Information
[58] IEEE Computer Society, Software Engineering Standards Committee, 2005, IEEE
[60] Institutional Brokers Estimate System, 2000, The I/B/E/S Glossary – A Guide to
Understanding I/B/E/S Terms and Conventions, New York, I/B/E/S, pp. 10-48,
19 January 2006.
[62] International Council on Systems Engineering, 2006, “What is System
2006.
[63] Ittner, C. D. and Larcker, D.F., 1998, “Are Nonfinancial Measures Leading
[65] Jensen, H., 2003, “Overview of Tools and Frameworks for Managing
15Jan05.
[67] Kankanalli, A. and Tan, B. C. Y., 2004, “A Review of Metrics for Knowledge
the 37th IEEE Hawaii International Conference on System Sciences, Honolulu HI,
pp. 238-245.
[68] Kaplan, R., and Norton, D., 1996, The Balanced Scorecard: Translating Strategy
[69] Kay, R., 2002, “QuickStudy: System Development Life Cycle,” ComputerWorld,
28 December 2005.
[70] Kearney, A.T., 2002, “The 2002 Value Creation Index – A Prescription for
http://www.atkearney.com/shared_res/pdf/2002_Value_Creation_Index_S.pdf,
[71] Kemp, L.L., Nidiffer, K.E., Rose, L.C., Small, R., and Stankosky, M., 2001,
pp. 66-68.
[72] King, W. R., Marks, P.V., and McCoy, S., 2002, “The Most Important Issues In
[73] Kossiakoff, A. and Sweet, W.N., 2002, Systems Engineering Principles and
[74] Koulopoulos, T., and Frappaolo, C., 1999, “Why Do A Knowledge Audit?” J. W.
Cortada and J.A. Woods (Eds.), The Knowledge Management Yearbook, 2000-
[75] Laplante, P. A. and Neill, C. J., 2004, “’The Demise of the Waterfall Model Is
http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=110
[77] Lev, B. and Daum, J.H., 2004, “The Dominance Of Intangible Assets:
pp. 6-17.
[78] Lev, B. and Sougiannis, T., 2003, “The Capitalization, Amortization, and Value-
Relevance of R&D,” Intangible Assets: Values, Measures, and Risks, J. Hand and
[79] Lev, B. and Zarowin, P., 1999, “The Boundaries of Financial Reporting and How
[80] Lev, B., 2000, “New Accounting for the New Economy,” online paper,
[82] Lev, B., 2003, “What Then Must We Do?” Intangible Assets: Values, Measures,
and Risks, J. Hand and B. Lev, eds., Oxford University Press, London, pp. 511-
524.
[83] Lev, B., 2005, “Intangible Assets: Concept and Measurements”, Encyclopedia of
2, pp. 299-305.
[84] Lev, B., 2005, “The Valuation of Organizational Capital,” online paper:
http://pages.stern.nyu.edu/~blev/docs/The%20Valuation%20of%20Organization
pp. 54-67.
[86] Liebowitz, J., and Wright, K., 1999, “A Look Toward Valuating Human Capital,”
[87] Lindley, D.V. and Scott, W.F., 1984, New Cambridge Statistical Tables, Second
[88] Marr, B., Schiuma, G., and Neely, A., 2004, “Intellectual Capital - Defining Key
[90] McAdam, R. and McCreedy, S., 1999, “A Critical Review Of Knowledge
224-234.
[93] Mintz, S.L., 2000, “CFO's Second Annual Knowledge Capital Scorecard: A
[96] Mouritsen, J., Larsen, H. T., and Bukh, P. N., 2005, “Dealing With The
[97] Mouritsen, J., Larsen, H. T., Bukh, P. N., and Johansen, M. R., 2001, “Reading
[98] Ndofor, H. A., and Levitas, E., 2004, “Signaling the Strategic Value of
[99] Neill, C. J. and Laplante, P.A., 2003, “Requirements Engineering: The State Of
[100] Norton, D. P., 2002, “Measuring Value Creation With The Balanced Scorecard,”
[101] O’Sullivan, K. and Stankosky, M., 2004, “The Impact of Knowledge Management
[102] Osterlund, A., 2001, “Grey Matters: CFO's Third Annual Knowledge Capital
http://www.cfo.com/article.cfm/2992913/c_3046504?f=magazine_featured, site
[105] Revenue Recognition staff, 2006, “Sarbox Has Widespread Impact On Revenue
January 2006.
[106] Rodgers, W., 2003, “Measurement And Reporting Of Knowledge-Based Assets,”
[107] Royce, W. W., 1987, “Managing the Development of Large Software Systems:
[109] Salisbury, M. W., 2003. “Putting Theory Into Practice To Build Knowledge
[110] Schultze, U. and Boland, R.J., 2000, “Knowledge Management Technology And
[111] Shapiro, C., and Varian, H., 1999, Information Rules: A Strategic Guide to the
[112] Shapiro, C., and Varian, H., 2003, “The Information Economy,” Intangible
Assets: Values, Measures, and Risks, J. Hand and B. Lev, eds., Oxford University
[113] Sheard, S. and Lake, J. G., 1998, “Systems Engineering Standards and Models
28 December 2005.
[115] Skyrme, D. J., 1998, “Knowledge Management Solutions – The IT Contribution”,
[116] Smith, G. V., and Parr, R. L., 2005, Intellectual Property: Valuation,
Exploitation, and Infringement Damages, John Wiley and Sons, New York,
[119] Standard and Poor’s, 2003, CompuStat® Users Guide, Standard and Poor’s,
http://wrds.wharton.upenn.edu/support/docs/compustat/user_guide/user_all.pdf,
[120] Standfield, K., 2002, Intangible Management: Tools for Solving the Accounting
and Management Crisis, The Academic Press, San Diego, Chapters 12, 13, 14,
and 15.
[123] Stankosky, M., and Baldanza, C., 2001, “A Systems Approach to Engineering a
[124] Stephenson, M., and Davies, T., 1999, “Technology Support for Sustainable
[125] Stewart, T. A., 1997, Intellectual Capital, Currency Doubleday, New York,
[126] Stewart, T. A., 2001, “The Fortune 500/Accounting Gets Radical, ” Fortune,
[127] Sunassee, N. and Sewry, D., 2002, “A Theoretical Framework for Knowledge
[128] Sveiby, K. E., 1997, The New Organizational Wealth, Berrett-Koehler Publishers,
[129] Tiwana, A. and Ramesh, B., 2001, “Integrating Knowledge on the Web,” IEEE
[130] Tiwana, A., 2000, The Knowledge Management Toolkit, Prentice Hall Publishing,
[131] University of Pennsylvania, 2006, Wharton Research Data Services,
[132] Upton, W., 2003, “Challenges From the New Economy for Business and Financial
Reporting,” Intangible Assets: Values, Measures, and Risks, J. Hand and B. Lev,
[134] Wiig, K. M., 1999, “Introducing Knowledge Management Into the Enterprise,”
[135] Yu, L., 2005, “Does Knowledge Sharing Pay Off?” MIT Sloan Management
Review, 46(3), 5.
[137] Zhou, A. Z. and Fink, D., 2003, “The Intellectual Capital Web: A Systematic
APPENDIX A -- SURVEY INSTRUMENT
APPENDIX B – FINANCIAL DATA VARIABLES AND DEFINITIONS
Data Source   Required Data   Definition   Database Field Name(s)
I/B/E/S [59] Actual Earnings Per The reported annual earnings per EPS
Share (EPS) for share for a company for the year
Dec2002 indicated. Earnings per share is a
corporation’s net income from
continuing operations divided by
the weighted average number of
shares outstanding for the year
[60].
I/B/E/S [59] Actual Earnings Per The reported annual earnings per EPS
Share (EPS) for share for a company for the year
Dec2003 indicated. Earnings per share is a
corporation’s net income from
continuing operations divided by
the weighted average number of
shares outstanding for the year
[60].
I/B/E/S [59] Actual Earnings Per The reported annual earnings per EPS
Share (EPS) for share for a company for the year
Dec2004 indicated. Earnings per share is a
corporation’s net income from
continuing operations divided by
the weighted average number of
shares outstanding for the year
[60].
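The EPS definition above is a simple ratio; a minimal sketch (the function name and sample figures are illustrative, not from the source):

```python
def earnings_per_share(net_income_continuing, weighted_avg_shares):
    """EPS as defined above: a corporation's net income from continuing
    operations divided by the weighted average number of shares
    outstanding for the year."""
    return net_income_continuing / weighted_avg_shares

# e.g. $50M of net income over a weighted average of 25M shares -> 2.0
```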
CompuStat [118] | Common Equity – Total ($MM) | This item represents the common shareholders’ interest in the company [119]. | DATA60
CompuStat [118] | Depreciation, Depletion, and Amortization | This item represents the total portion of asset cost written off by periodic depreciation charges since the assets were acquired [119]. | DATA196
Calculated | Gross Assets | Assets – Total/Liabilities and Stockholders’ Equity – Total + Depreciation, Depletion, and Amortization | DATA6+DATA196
Calculated | Total Earnings | EPS Basic * Shares for EPS | DATA53*DATA54
Calculated | Book Value | BPS * Common Shares Outstanding | DATA60*BPS
Calculated | Market Value | Price CY04 * Common Shares Outstanding | DATA24*DATA25
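The calculated variables can be expressed directly in terms of the database fields listed above; a minimal sketch, assuming the raw fields arrive as a dict keyed by field name (the function and dict interface are illustrative, not from the source):

```python
def derived_variables(r):
    """Compute the calculated financial variables from the raw
    CompuStat/I/B/E/S fields, using the field combinations exactly
    as listed in the table above."""
    return {
        # Gross Assets = Assets - Total ... + Depreciation, Depletion, and Amortization
        "gross_assets": r["DATA6"] + r["DATA196"],
        # Total Earnings = EPS Basic * Shares for EPS
        "total_earnings": r["DATA53"] * r["DATA54"],
        # Book Value = BPS * Common Shares Outstanding (field combination as printed)
        "book_value": r["DATA60"] * r["BPS"],
        # Market Value = Price CY04 * Common Shares Outstanding
        "market_value": r["DATA24"] * r["DATA25"],
    }
```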
Additional Information:
This item represents the total portion of asset cost written off by periodic depreciation charges since the assets were acquired. Not all types of property, plant & equipment included in Net Property, Plant and Equipment on the Balance Sheet will have a corresponding depreciation that can be collected in this data item. These types of property, plant and equipment include funds for construction and property plant &
This item includes:
3. Depletion
4. Depreciation
5. Reserve for possible future loss on disposals (when included in depreciation and
amortization)
subsidiaries, affiliates and joint ventures in which the parent company has significant
mentioned
2. Investments of more than one percent to 19 percent when the “cost” or “equity”
3. Joint ventures not yet operating (included in Investments and Advances – Other)
4. Joint ventures when there is no indication of the equity method (included in
This item represents long-term receivables and other investments, and advances
3. mentioned
4. Banks and savings and loans’ investment securities (available for sale and held for maturity)
7. Brokerage firms’
13. Land or property held for resale (for companies whose primary business is not
3. Equity in consolidated joint ventures when held for loan collateral (included in
Assets – Other)
4. Film production companies’ film costs (included in Property, Plant, and
Method)
7. Land development companies’ land held for development and sale (included in
9. Receivables from officers and directors, employees and all holders of equity
APPENDIX C – FINANCIAL DATA RECEIVED
Table C-1 -- Total Physical and Financial Asset Values
Table C-2 -- Earnings Estimates and "Normalized" Earnings
APPENDIX D – RAW SURVEY DATA
Table D-1 -- Raw Survey Data
Respondent: A B C D E F G H
1. Enterprise Definition
1a. At the start of this effort, how well did the organization define and illustrate the enterprise to be included in the proposed KMS? 3 2 2 5 3 3 4 1
1e. If this definition changed during the course of the project, how well was this change reflected in the project requirements (if no change, please select N/A)? 3 3 3 1 N/A 3 2 1
1. Enterprise Definition 3.00 1.80 2.20 3.00 2.25 2.80 3.20 1.00
2c. If this definition changed during the course of the project, how well was this change in definition reflected in the project requirements (if no change, please select N/A)? 3 3 3 1 N/A 4 3 1
2. Value Proposition 3.67 3.00 2.00 1.00 2.00 3.33 2.33 1.33
3a. Before starting system development, how well were the critical intellectual assets needed to make strategic decisions documented?
3b. How well was this list used in the project inception phase? 2 3 2 N/A 1 4 2 1
3c. How well was this list used in the system development phase? 2 3 2 N/A 1 3 3 2
3d. How well was this list used in the deployment and maintenance phase? 2 3 4 N/A 2 2 2 2
3e. Was this list changed at any point in the process? Y N/A DK N/A N Y Y Y
3f. Why and when? See Table N/A N/A N/A N/A See Table See Table See Table
3. Critical Asset Identification 2.00 3.00 2.50 5.00 1.25 3.00 2.25 1.50
4a. Before starting system development, how well were the sources of the intellectual assets needed for strategic decision-making documented?
4b. How well was this list used in the project inception phase? 3 2 2 N/A 2 3 2 2
4c. How well was this list used in the system development phase? 2 2 2 N/A 1 2 2 2
4d. How well was this list used in the deployment and maintenance phase? 2 3 3 N/A 2 4 2 2
4e. Was this list changed at any point in the process? Y N/A N/A N/A N Y DK Y
4f. Why and when? See Table N/A N/A N/A N/A See Table N/A See Table
4. Asset Source Identification 2.75 2.25 2.25 5.00 1.75 3.00 2.25 2.00
5. Strategic Objectives
5a. How well did the organization define (or have defined for it) one or more strategic objectives? 2 1 1 2 1 4 1 1
5c. Please list some of the metrics used to measure progress toward these objectives. See Table N/A N/A N/A See Table See Table See Table See Table
5h. (if applicable) How well were any of these objectives actually measured during ongoing/steady-state operations? 2 1 2 N/A 1 4 2 1
5. Strategic Objectives 2.43 1.00 2.14 2.67 1.43 3.00 2.43 2.29
6a. How well did the organization identify and/or define potential environmental changes (other than the potential KMS project) that could impact achievement of the strategic objectives?
6b. How many potential changes were identified (enter N/A or DK if needed)? N/A N/A N/A N/A N/A 20 See Table F-1 N/A 10-20?
6c. How well did the organization define sensors or methods to monitor those changes? 4 N/A N/A N/A N/A 3 N/A
6d. What were those sensors? See Table N/A N/A N/A N/A See Table N/A See Table
6e. How often were they monitored or assessed? yearly N/A N/A N/A N/A weekly N/A See Table F-1
6f. Did any of the potential environmental changes actually occur? Y N/A DK N/A Y N N/A Y
6g. How much did these changes affect accomplishment of the strategic objectives? N/A N/A DK N/A 2 5 N/A
6h. Did the sensors or methods work as expected? DK N/A N/A N/A DK Y N/A Y
6i. If they did not work as expected, please describe the reason(s) why they did not meet expectations (answer N/A or DK as needed). OK…stay tuned N/A N/A N/A N/A N/A N/A See Table F-1
6. Environmental Changes 3.50 N/A 3.00 5.00 1.50 4.00 N/A 1.00
7a. At KMS project inception, did the organization have the commitment of senior leadership to the need for a KM project?
7b(v). Growth 1 1 2 3 4 3 2 1
7b(vii). Communications 2 4 2 2 2 1 2 1
7c(v). Growth 2 2 1 3 4 3 2 1
7c(vii). Communications 2 3 2 1 2 1 2 1
7a. Leadership Support 1.50 2.36 1.57 2.14 3.00 2.29 2.00 1.14
8. Organizational Commitment
8a. At KMS project inception, did the organization support the implementation of a KM project?
7b. Organizational Support 2.58 3.42 1.92 1.50 2.00 2.75 2.33 1.50
9. KMS Technologies
9a. Did the organization provide appropriate technology for a KM system?
7c. Technology Support 2.67 1.71 2.13 1.78 2.33 2.50 2.06 1.81
10f. How much is knowledge exchange officially rewarded/penalized?
10g. How much is knowledge exchange unofficially rewarded/penalized?
7d. Learning Support 2.40 2.60 1.75 1.00 2.20 3.00 1.75 1.20
11a. During the system development stage, how well did the organization list the functions needed to accomplish the strategic objectives? 2 2 1 5 2 3 2 2
11b. How much was this list used in the system development phase? 3 2 1 N/A 2 4 2 1
11c. How much was this list used in the deployment and maintenance phase? 3 3 3 N/A 4 4 2 1
11d. Was this list changed at any point in the process? N DK DK N/A N Y DK N
11e. Why and when? N/A N/A N/A N/A N/A See Table F-1 N/A N/A
8. Strategic Functions 2.67 2.33 1.67 5.00 2.67 3.67 2.00 1.33
12a. During the system development stage, how well did the organization list the operational processes required to accomplish these functions? 2 2 1 5 2 1 2 2
12b. How much was this list used in the system development phase? 2 3 1 N/A 2 1 2 1
12c. How much was this list used in the deployment and maintenance phase? 2 3 3 N/A 4 1 2 1
12d. Was this list changed at any point in the process? Y DK Y N/A N Y N N
12e. Why and when? See Table N/A See Table N/A N/A See Table N/A N/A
9. Functional Processes 2.00 2.67 1.67 5.00 2.67 1.00 2.00 1.33
13a. During the system development stage, how well did the organization list the intellectual assets required to accomplish both functions and processes? 2 2 2 5 2 3 2 1
13b. How much was this list used in the system development phase? 3 4 2 N/A 2 3 2 1
13c. How much was this list used in the deployment and maintenance phase? 3 5 2 N/A 4 3 2 1
13d. Was this list changed at any point in the process? N DK N N/A N N N Y
13e. Why and when? N/A N/A N/A N/A N/A N/A N/A See Table F-1
10. Required Assets 2.67 3.67 2.00 5.00 2.67 3.00 2.00 1.00
14a. During the system development stage, how well did the organization list the sources of these intellectual assets? 2 2 2 5 1 2 2 1
14b. How much was this list used in the system development phase? 2 2 2 N/A 2 3 2 1
14c. How much was this list used in the deployment and maintenance phase? 2 4 3 N/A 4 3 2 1
14d. Was this list changed at any point in the process? N DK N N/A N N N Y
14e. Why and when? N/A N/A N/A N/A N/A N/A N/A See Table F-1
11. Asset Sources 2.00 2.67 2.33 5.00 2.33 2.67 2.00 1.00
15a. Did this KMS use a knowledge codification strategy?
15b. Did this KMS use a knowledge personalization strategy?
15c. If both strategies were used for this KMS, how often were they both used? 1 2 2 3 3 2 2 1
12a. KM Strategy Identification 1.00 2.00 2.00 3.00 3.00 2.00 2.00 1.00
16. How well did the selected strategy/strategies address the following areas (assurance, generation, codification, transfer, use)?
12b. KM Strategy Effectiveness 3.00 2.20 1.80 1.60 2.60 2.60 1.80 1.00
17a. Did an accurate diagram or representation of the formal organizational structure exist at any point in the KMS implementation process?
17b. If not, why not? N/A N/A N/A DK N/A N/A N/A N/A
17c. How much was this used in the system development phase?
17d. How much was this used in the deployment and maintenance phase?
17f. Why and when? See Table N/A N/A DK See Table See Table N/A N/A
13a. Formal Organizational Structure 2.50 4.00 N/A N/A 4.00 4.00 2.00 1.00
18a. Was there a diagram or representation of the informal organizational structure? 1
18b. Did it show where it supported the formal one? Y N/A N DK N/A N/A N/A 1
18c. If not, why not? N/A N/A DK N/A N/A N/A N/A
18d. How much was this used in the system development phase? 2 N/A 2 DK N/A 1 DK 1
18e. How much was this used in the deployment and maintenance phase? 2 N/A 3 DK N/A 1 DK 1
18f. Was this changed at any point in the process? Y N/A N DK N/A N DK Y
18g. Why and when? See Table F-1 N/A N/A DK N/A N/A N/A See Table F-1
13b. Informal Organizational Structure 2.00 N/A 2.50 N/A N/A 1.00 N/A 1.00
19b. Was it used in support of the KMS implementation effort?
19c. If not, why not? N/A N/A N/A N/A N/A N/A N/A N/A
19d. How much was this list used in the system development phase?
19e. How much was this list used in the deployment and maintenance phase? 4 2 1 4 3 1 2 1
19g. Why and when? See Table N/A N/A N/A See Table N/A N/A See Table
14. KM Technology Listing 4.00 2.00 1.00 4.00 3.00 1.00 2.00 1.00
20b. How integrative was the management plan that was developed and documented for the KMS? 1 DK 2 DK DK 2 2 1
20c. If there was no plan, why not? N/A N/A N/A N/A N/A N/A N/A N/A
20d. If there was a plan, how well did the plan include any or all of the following:
20e. What other major topics were covered in the project or system management plan? See Table N/A N/A N/A N/A See Table N/A See Table
15. Integrative Management Plan 1.78 1.13 1.44 N/A 1.38 2.44 1.56 1.33
21a. Was there a lot of planning to integrate the KMS with legacy components?
21b. If there was no plan, why not? N/A N/A N/A N/A N/A See Table F-1 N/A N/A
16. Legacy System Integration 5.00 4.00 2.25 3.00 2.67 5.00 3.67 2.50
17. Risk Management/Mitigation N/A N/A 2.50 N/A 2.00 N/A 3.00 1.00
21d. What other major topics were covered in the legacy system integration plan? N/A N/A N/A See Table N/A N/A N/A N/A
22b. If not, why not? See Table F-1 N/A N/A N/A N/A N/A N/A N/A
22d. If so, how successful would you say that the plan has been? N/A 2 2 1 N/A 1 N/A 1
22e. Please list reason(s) why or why not? N/A N/A N/A See Table F-1 N/A See Table F-1 N/A See Table
18. Change Management N/A 2.00 2.00 1.00 N/A 1.00 N/A 1.00
APPENDIX E – INTERMEDIATE STATISTICAL CALCULATIONS AND CONCLUSIONS
Table E-1 -- Summary of All Question Responses
Survey Question | Relevant EME Area | Type | #Yes | #No | Average Score | Standard Deviation | Variance | #N/A | Don’t Know
(#Yes/#No apply to binary questions; Average Score, Standard Deviation, and Variance apply to Likert-scale questions.)
KMS Project Inception Phase

1. Enterprise Definition
1a. At the start of this effort, how well did the organization define and illustrate the enterprise to be included in the proposed KMS? | Enterprise Definition | Likert | - | - | 2.88 | 1.2464 | 1.55 | - | -
1b. How well did this definition identify key relationships? | Enterprise Definition | Likert | - | - | 2.63 | 1.0607 | 1.13 | - | -
1c. How well did this definition identify key stakeholders? | Enterprise Definition | Likert | - | - | 1.75 | 0.7071 | 0.50 | - | -
1d. How well was this definition documented? | Enterprise Definition | Likert | - | - | 2.50 | 1.1952 | 1.43 | - | -
1e. If this definition changed during the course of the project, how well was this change reflected in the project requirements (if no change, please select N/A)? | Enterprise Definition | Likert | - | - | 2.29 | 0.9512 | 0.90 | 1 | -
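The per-question statistics in Table E-1 can be reproduced from the raw responses in Table D-1 by excluding N/A and DK answers before computing the sample statistics; a minimal sketch (the helper name and interface are illustrative, not from the source):

```python
from statistics import mean, stdev, variance

def likert_stats(responses):
    """Summarize one Likert question as in Table E-1: numeric scores are
    averaged, while 'N/A' and 'DK' answers are counted separately and
    excluded from the mean, standard deviation, and variance."""
    scores = [r for r in responses if isinstance(r, (int, float))]
    summary = {
        "n_na": sum(1 for r in responses if r == "N/A"),
        "n_dk": sum(1 for r in responses if r == "DK"),
    }
    if len(scores) >= 2:  # sample statistics need at least two scores
        summary["avg"] = mean(scores)
        summary["sd"] = stdev(scores)
        summary["var"] = variance(scores)
    return summary

# Question 1e raw responses (Table D-1): 3 3 3 1 N/A 3 2 1
# -> avg 2.29, sd 0.9512, var 0.90, one N/A, matching the row above.
```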
2. KMS Project Value
2a. How well did the organization state the value proposition for the KMS implementation? | Value Proposition | Likert | - | - | 2.13 | 0.9910 | 0.98 | - | -
2b. How well was this definition documented? | Value Proposition | Likert | - | - | 2.57 | 1.1339 | 1.29 | - | -
2c. If this definition changed during the course of the project, how well was this change in definition reflected in the project requirements (if no change, please select N/A)? | Value Proposition | Likert | - | - | 2.57 | 1.1339 | 1.29 | - | 1
3. Definition of Critical Intellectual Assets (N/A: 1)
3a. Before starting system development, how well were the critical intellectual assets needed to make strategic decisions documented? | Identify Critical Intellectual Assets | Likert | - | - | 2.38 | 1.3025 | 1.70 | - | -
3b. How well was this list used in the project inception phase? | Identify Critical Intellectual Assets | Likert | - | - | 2.14 | 1.0690 | 1.14 | - | -
3c. How well was this list used in the system development phase? | Identify Critical Intellectual Assets | Likert | - | - | 2.29 | 0.7559 | 0.57 | - | -
3d. How well was this list used in the deployment and maintenance phase? | Identify Critical Intellectual Assets | Likert | - | - | 2.43 | 0.7868 | 0.62 | 1 | -
3e. Was this list changed at any point in the process? | Identify Critical Intellectual Assets | Binary | 4 | 1 | - | - | - | 1 | -
3f. Why and when? | Identify Critical Intellectual Assets | Free Text | - | - | - | - | - | 1 | -
4. Sources of Critical Intellectual Assets (N/A: 2, DK: 1)
4a. Before starting system development, how well were the sources of the intellectual assets needed for strategic decision-making documented? | Identify Intellectual Asset Sources | Likert | - | - | 2.88 | 1.1260 | 1.27 | 4 | -
4b. How well was this list used in the project inception phase? | Identify Intellectual Asset Sources | Likert | - | - | 2.29 | 0.4880 | 0.24 | - | -
4c. How well was this list used in the system development phase? | Identify Intellectual Asset Sources | Likert | - | - | 1.86 | 0.3780 | 0.14 | - | -
4d. How well was this list used in the deployment and maintenance phase? | Identify Intellectual Asset Sources | Likert | - | - | 2.57 | 0.7868 | 0.62 | - | -
4e. Was this list changed at any point in the process? | Identify Intellectual Asset Sources | Binary | 3 | 1 | - | - | - | 1 | -
4f. Why and when? | Identify Intellectual Asset Sources | Free Text | - | - | - | - | - | 1 | -
5. Strategic Objectives (N/A: 1)
5a. How well did the organization define (or have defined for it) one or more strategic objectives? | Strategic Objectives | Likert | - | - | 1.63 | 1.0607 | 1.13 | 3 | 1
5b. How well were these defined in measurable terms? | Strategic Objectives | Likert | - | - | 2.00 | 1.0690 | 1.14 | 5 | -
5c. Please list some of the metrics used to measure progress toward these objectives | Strategic Objectives | Free Text | - | - | - | - | - | - | -
5d. How well were these objectives defined prior to the KMS project inception phase? | Strategic Objectives | Likert | - | - | 2.38 | 1.1877 | 1.41 | - | -
5e. How well were any of these objectives actually measured prior to KMS project inception? | Strategic Objectives | Likert | - | - | 3.00 | 1.2910 | 1.67 | - | -
5f. How well were any of these objectives actually measured during KMS project development? | Strategic Objectives | Likert | - | - | 2.14 | 1.0690 | 1.14 | - | -
5g. How well were any of these objectives actually measured after initial KMS deployment? | Strategic Objectives | Likert | - | - | 2.00 | 1.0000 | 1.00 | 3 | -
5h. (if applicable) How well were any of these objectives actually measured during ongoing/steady-state operations? | Strategic Objectives | Likert | - | - | 1.86 | 1.0690 | 1.14 | - | -
6. Environmental Changes and Sensors (N/A: 1)
6a. How well did the organization identify and/or define potential environmental changes (other than the potential KMS project) that could impact achievement of the strategic objectives? | Identify Environmental Changes | Likert | - | - | 2.83 | 1.6021 | 2.57 | 1 | -
6b. How many potential changes were identified (enter N/A or DK if needed)? | Identify Environmental Changes | Free Text | - | - | - | - | - | 1 | -
6c. How well did the organization define sensors or methods to monitor those changes? | Identify Environmental Changes | Likert | - | - | 2.67 | 1.5275 | 2.33 | 1 | -
6d. What were those sensors? | Identify Environmental Changes | Free Text | - | - | - | - | - | - | -
6e. How often were they monitored or assessed? | Identify Environmental Changes | Free Text | - | - | - | - | - | - | -
6f. Did any of the potential environmental changes actually occur? | Identify Environmental Changes | Binary | 3 | 1 | - | - | - | - | 2
6g. How much did these changes affect accomplishment of the strategic objectives? | Identify Environmental Changes | Likert | - | - | - | - | - | 5 | -
6h. Did the sensors or methods work as expected? | Identify Environmental Changes | Binary | - | - | - | - | - | 5 | -
6i. If they did not work as expected, please describe the reason(s) why they did not meet expectations (answer N/A or DK as needed) | Identify Environmental Changes | Free Text | - | - | - | - | - | 5 | -
7. Senior Leadership Commitment (N/A: 5)
7a. At KMS project inception, did the organization have the commitment of senior leadership to the need for a KM project? | Leadership Pillar | Likert | 8 | - | - | - | - | 3 | 1
7b. How strong were each of the following leadership factors within the organization? (N/A: 4, DK: 1)
(i) Business Culture | Leadership Pillar | Likert | - | - | 1.63 | 0.5175 | 0.27 | 4 | 2
(ii) Strategic Planning | Leadership Pillar | Likert | - | - | 2.00 | 1.1952 | 1.43 | 6 | -
(iii) Organizational Vision and Goals | Leadership Pillar | Likert | - | - | 1.50 | 0.5345 | 0.29 | 2 | -
(iv) Organizational Climate | Leadership Pillar | Likert | - | - | 2.25 | 1.0351 | 1.07 | - | -
(v) Growth | Leadership Pillar | Likert | - | - | 2.13 | 1.1260 | 1.27 | - | -
(vi) Organizational Segmentation or Restructuring | Leadership Pillar | Likert | - | - | 2.38 | 0.5175 | 0.27 | - | -
(vii) Communications | Leadership Pillar | Likert | - | - | 2.00 | 0.9258 | 0.86 | - | -
7c. How well did each of the following leadership factors within the organization support the KMS project?
(i) Business Culture | Leadership Pillar | Likert | - | - | 2.00 | 1.0690 | 1.14 | - | -
(ii) Strategic Planning | Leadership Pillar | Likert | - | - | 2.00 | 1.0690 | 1.14 | - | -
(iii) Organizational Vision and Goals | Leadership Pillar | Likert | - | - | 1.75 | 0.7071 | 0.50 | - | -
(iv) Organizational Climate | Leadership Pillar | Likert | - | - | 2.13 | 0.6409 | 0.41 | - | -
(v) Growth | Leadership Pillar | Likert | - | - | 2.25 | 1.0351 | 1.07 | - | -
(vi) Organizational Segmentation or Restructuring | Leadership Pillar | Likert | - | - | 2.25 | 0.7071 | 0.50 | - | -
(vii) Communications | Leadership Pillar | Likert | - | - | 1.75 | 0.7071 | 0.50 | - | -
8. Organizational Commitment
8a. At KMS project inception, did the organization support the implementation of a KM project? | Organization Pillar | Likert | 5 | 3 | - | - | - | - | -
8b. How strongly were one or more of the following organizational paradigms implemented within your organization?
(i) Business Process Reengineering | Organization Pillar | Likert | - | - | 1.88 | 0.6409 | 0.41 | - | -
(ii) Performance Metrics | Organization Pillar | Likert | - | - | 1.43 | 0.5345 | 0.29 | - | -
(iii) Management by Objective | Organization Pillar | Likert | - | - | 1.57 | 0.5345 | 0.29 | - | -
(iv) Total Quality Management/Leadership (TQM/TQL) | Organization Pillar | Likert | - | - | 2.00 | 0.8165 | 0.67 | - | -
(v) Automated Workflow Management | Organization Pillar | Likert | - | - | 3.43 | 1.2724 | 1.62 | - | -
(vi) Communications methods | Organization Pillar | Likert | - | - | 2.25 | 1.2817 | 1.64 | - | -
8c. How well did one or more of the following organizational paradigms support the KMS project?
(i) Business Process Reengineering | Organization Pillar | Likert | - | - | 2.38 | 1.1877 | 1.41 | - | -
(ii) Performance Metrics | Organization Pillar | Likert | - | - | 1.86 | 0.8997 | 0.81 | 1 | -
(iii) Management by Objective | Organization Pillar | Likert | - | - | 2.43 | 1.2724 | 1.62 | 1 | -
(iv) Total Quality Management/Leadership (TQM/TQL) | Organization Pillar | Likert | - | - | 2.57 | 1.2724 | 1.62 | 1 | -
(v) Automated Workflow Management | Organization Pillar | Likert | - | - | 3.43 | 1.3973 | 1.95 | 1 | -
(vi) Communications methods | Organization Pillar | Likert | - | - | 2.63 | 1.4079 | 1.98 | - | -
9. KMS Technologies
9a. Did the organization provide appropriate technology for a KM system? | Technology Pillar | Likert | 8 | - | - | - | - | - | -
9b. Which of the following technologies were involved in the KMS and if so, when were they introduced? (N/A: 1)
(i) E-mail | Technology Pillar | Likert | - | - | 2.00 | - | - | 1 | -
(ii) On-Line Analytic Processing (OLAP) | Technology Pillar | Likert | - | - | 2.25 | 0.5000 | 0.25 | 1 | -
(iii) Data Warehousing/Data Mart/Data Mining Tools | Technology Pillar | Likert | - | - | 2.40 | 0.5477 | 0.30 | 1 | -
(iv) Search Engines | Technology Pillar | Likert | - | - | 2.43 | 0.5345 | 0.29 | - | -
(v) Other Decision Support Systems | Technology Pillar | Likert | - | - | 2.75 | 0.9574 | 0.92 | - | -
(vi) Business Process Modeling | Technology Pillar | Likert | - | - | 2.40 | 0.5477 | 0.30 | - | -
(vii) Other Knowledge Management Tools | Technology Pillar | Likert | - | - | 2.63 | 0.9161 | 0.84 | - | -
(viii) Other Communication Systems or Processes | Technology Pillar | Likert | - | - | 2.50 | 0.5477 | 0.30 | - | -
9c. Did they help or hurt? (N/A: 1)
(i) E-mail | Technology Pillar | Likert | - | - | 1.57 | 1.1339 | 1.29 | 4 | -
(ii) On-Line Analytic Processing (OLAP) | Technology Pillar | Likert | - | - | 1.75 | 0.5000 | 0.25 | 3 | -
(iii) Data Warehousing/Data Mart/Data Mining Tools | Technology Pillar | Likert | - | - | 1.40 | 0.5477 | 0.30 | 1 | -
(iv) Search Engines | Technology Pillar | Likert | - | - | 2.00 | 1.0690 | 1.14 | 4 | -
(v) Other Decision Support Systems | Technology Pillar | Likert | - | - | 1.75 | 0.9574 | 0.92 | 3 | -
(vi) Business Process Modeling | Technology Pillar | Likert | - | - | 2.25 | 0.5000 | 0.25 | - | -
(vii) Other Knowledge Management Tools | Technology Pillar | Likert | - | - | 1.86 | 0.6901 | 0.48 | 2 | -
(viii) Other Communication Systems or Processes | Technology Pillar | Likert | - | - | 2.14 | 0.6901 | 0.48 | - | -
10. Organizational Learning (N/A: 1)
10a. Is there a process for organizational learning? | Learning Pillar | Binary | 5 | 1 | - | - | - | 4 | -
10b. How formal is the process? | Learning Pillar | Likert | - | - | 2.00 | 1.0954 | 1.20 | 3 | -
10c. Is there an environment which encourages it? | Learning Pillar | Binary | 7 | 1 | - | - | - | - | -
10d. How collaborative is the environment? | Learning Pillar | Likert | - | - | 1.63 | 0.7440 | 0.55 | 4 | -
10e. How virtual is the environment? | Learning Pillar | Likert | - | - | 1.63 | 0.5175 | 0.27 | 3 | 1
10f. How much is knowledge exchange officially rewarded/penalized? | Learning Pillar | Likert | - | - | 2.75 | 1.3887 | 1.93 | - | 1
10g. How much is knowledge exchange unofficially rewarded/penalized? | Learning Pillar | Likert | - | - | 2.00 | 0.7559 | 0.57 | 1 | -
KMS Project Development Phase

11. Functional Definition
11a. During the system development stage, how well did the organization list the functions needed to accomplish the strategic objectives? | Functions for Strategic Objectives | Likert | - | - | 2.38 | 1.1877 | 1.41 | 1 | 1
11b. How much was this list used in the system development phase? | Functions for Strategic Objectives | Likert | - | - | 2.14 | 1.0690 | 1.14 | 2 | -
11c. How much was this list used in the deployment and maintenance phase? | Functions for Strategic Objectives | Likert | - | - | 2.86 | 1.0690 | 1.14 | - | -
11d. Was this list changed at any point in the process? | Functions for Strategic Objectives | Binary | 1 | 3 | - | - | - | - | -
11e. Why and when? | Functions for Strategic Objectives | Free Text | - | - | - | - | - | - | -
12. Operational Process Definition
12a. During the system development stage, how well did the organization list the operational processes required to accomplish these functions? | Functional Processes | Likert | - | - | 2.13 | 1.2464 | 1.55 | - | -
12b. How much was this list used in the system development phase? | Functional Processes | Likert | - | - | 1.71 | 0.7559 | 0.57 | - | -
12c. How much was this list used in the deployment and maintenance phase? | Functional Processes | Likert | - | - | 2.29 | 1.1127 | 1.24 | - | -
12d. Was this list changed at any point in the process? | Functional Processes | Binary | 3 | 2 | - | - | - | - | -
12e. Why and when? | Functional Processes | Free Text | - | - | - | - | - | - | -
13. Intellectual Asset Listing (N/A: 1)
13a. During the system development stage, how well did the organization list the intellectual assets required to accomplish both functions and processes? | Asset Listing | Likert | - | - | 2.38 | 1.1877 | 1.41 | 1 | -
13b. How much was this list used in the system development phase? | Asset Listing | Likert | - | - | 2.43 | 0.9759 | 0.95 | 1 | 3
13c. How much was this list used in the deployment and maintenance phase? | Asset Listing | Likert | - | - | 2.86 | 1.3452 | 1.81 | 7 | -
13d. Was this list changed at any point in the process? | Asset Listing | Binary | 1 | 5 | - | - | - | - | -
13e. Why and when? | Asset Listing | Free Text | - | - | - | - | - | - | -
14. Intellectual Asset Sourcing
14a. During the system development stage, how well did the organization list the sources of these intellectual assets? | Asset Sourcing | Likert | - | - | 2.13 | 1.2464 | 1.55 | 1 | -
14b. How much was this list used in the system development phase? | Asset Sourcing | Likert | - | - | 2.00 | 0.5774 | 0.33 | 1 | -
14c. How much was this list used in the deployment and maintenance phase? | Asset Sourcing | Likert | - | - | 2.71 | 1.1127 | 1.24 | 1 | 1
14d. Was this list changed at any point in the process? | Asset Sourcing | Binary | 1 | 5 | - | - | - | 5 | -
14e. Why and when? | Asset Sourcing | Free Text | - | - | - | - | - | - | -
15. KMS Implementation Strategies
15a. Did this KMS use a knowledge codification strategy? | KM Strategy Identification | Likert | 8 | - | - | - | - | - | -
15b. Did this KMS use a knowledge personalization strategy? | KM Strategy Identification | Likert | 8 | - | - | - | - | 1 | -
15c. If both strategies were used for this KMS, how often were they both used? | KM Strategy Identification | Likert | - | - | 2.00 | 0.7559 | 0.57 | 1 | -
16. KMS Implementation Effectiveness (N/A: 1, DK: 1)
How well did the selected strategy/strategies address the following areas (N/A: 7):
16a. Assurance (is the knowledge correct?) | KM Strategy Identification | Likert | - | - | 2.25 | 1.2817 | 1.64 | - | -
16b. Generation (does the knowledge exist?) | KM Strategy Identification | Likert | - | - | 2.50 | 1.0690 | 1.14 | - | -
16c. Codification (how is the knowledge stored and displayed?) | KM Strategy Identification | Likert | - | - | 1.75 | 0.4629 | 0.21 | - | -
16d. Transfer (how can the knowledge get to the person seeking it?) | KM Strategy Identification | Likert | - | - | 1.88 | 0.9910 | 0.98 | 1 | -
16e. Use (how was the knowledge utilized?) | KM Strategy Identification | Likert | - | - | 2.00 | 1.1952 | 1.43 | 1 | -
17. Formal Organizational Structure (N/A: 1, DK: 1)
17a. Did an accurate diagram or representation of the formal organizational structure exist at any point in the KMS implementation process? | Organizational Structures | Binary | 5 | 2 | - | - | - | 7 | -
17b. If not, why not? | Organizational Structures | Free Text | - | - | - | - | - | - | -
17c. How much was this used in the system development phase? | Organizational Structures | Likert | - | - | 2.83 | 1.3292 | 1.77 | - | -
17d. How much was this used in the deployment and maintenance phase? | Organizational Structures | Likert | - | - | 3.00 | 1.2649 | 1.60 | - | -
17e. Was this changed at any point in the process? | Organizational Structures | Binary | 3 | 2 | - | - | - | - | -
17f. Why and when? | Organizational Structures | Free Text | - | - | - | - | - | - | -
18. Informal Organizational Structure
18a. Was there a diagram or representation of the informal organizational structure? | Organizational Structures | Binary | 2 | 4 | - | - | - | - | -
18b. Did it show where it supported the formal one? | Organizational Structures | Binary | 1 | 1 | - | - | - | - | -
18c. If not, why not? | Organizational Structures | Free Text | - | - | - | - | - | - | -
18d. How much was this used in the system development phase? | Organizational Structures | Likert | - | - | 1.50 | 0.5774 | 0.33 | - | -
18e. How much was this used in the deployment and maintenance phase? | Organizational Structures | Likert | - | - | 1.75 | 0.9574 | 0.92 | - | -
18f. Was this changed at any point in the process? | Organizational Structures | Binary | 2 | 2 | - | - | - | - | -
18g. Why and when? | Organizational Structures | Free Text | - | - | - | - | - | - | -
19. Technology Definition
19a. Was there a list of the technology needed to support the planned KM strategies? | KM Technologies | Binary | 8 | - | - | - | - | - | -
19b. Was it used in support of the KMS implementation effort? | KM Technologies | Binary | 8 | - | - | - | - | - | -
19c. If not, why not? | KM Technologies | Free Text | - | - | - | - | - | - | -
19d. How much was this list used in the system development phase? | KM Technologies | Likert | - | - | 2.25 | 1.2817 | 1.64 | - | -
19e. How much was this list used in the deployment and maintenance phase? | KM Technologies | Likert | - | - | 2.25 | 1.2817 | 1.64 | - | 1
19f. Was this changed at any point in the process? | KM Technologies | Binary | 4 | 2 | - | - | - | 7 | 1
19g. Why and when? | KM Technologies | Free Text | - | - | - | - | - | - | 2
KMS Fielding and Support Phase (DK: 2)

20. Integrative Management Plan (DK: 3)
20a. How comprehensive was the management plan that was developed and documented for the KMS? | Integrative Management Plan | Likert | - | - | 1.86 | 0.6901 | 0.48 | 4 | 1
20b. How integrative was the management plan that was developed and documented for the KMS? | Integrative Management Plan | Likert | - | - | 1.60 | 0.5477 | 0.30 | 2 | -
20c. If there was no plan, why not? | Integrative Management Plan | Free Text | - | - | - | - | - | - | -
20d. If there was a plan, how well did the plan include any or all of the following (DK: 1):
(i) Deliverables | Integrative Management Plan | Likert | - | - | 1.29 | 0.4880 | 0.24 | 4 | 1
(ii) Metrics for project control | Integrative Management Plan | Likert | - | - | 1.29 | 0.4880 | 0.24 | 6 | 1
(iii) Metrics for defining project success | Integrative Management Plan | Likert | - | - | 1.57 | 0.7868 | 0.62 | 2 | 2
(iv) Expected benefits | Integrative Management Plan | Likert | - | - | 1.86 | 0.6901 | 0.48 | 2 | 2
(v) Required resources (time, money, personnel) | Integrative Management Plan | Likert | - | - | 2.00 | 1.1547 | 1.33 | 2 | 2
(vi) Responsible persons | Integrative Management Plan | Likert | - | - | 1.71 | 1.2536 | 1.57 | 5 | 1
(vii) Schedule | Integrative Management Plan | Likert | - | - | 1.14 | 0.3780 | 0.14 | 4 | -
20e. What other major topics were covered in the project or system management plan? | Integrative Management Plan | Free Text | - | - | - | - | - | - | -
21. Legacy Integration Planning - -
21a. Was there a lot of planning
Legacy
to integrate the KMS with legacy Likert - - 3.63 0.9161 0.84 - -
Integration
components?
21b. If there was no plan, why Legacy Free Text - - - - - 8 -
Don’t
#N/A
Relevant EME Question (for binary) (for Likert Scales) Know
Survey Question
Area Type Average Standard
#Yes #No Variance
Score Deviation
not? Integration
21c. If there was legacy
integration planning, how well
- -
did the planning address any or
all of the following:
Legacy
(i) Current functions Likert - - 2.33 1.5275 2.33 - -
Integration
Legacy
(ii) Current processes Likert - - 2.25 1.2583 1.58 1 1
Integration
(iii) Formal organizational Legacy
Likert - - 3.33 1.1547 1.33 5 -
structure Integration
(iv) Informal organizational Legacy
Likert - - 3.50 1.0000 1.00 - -
structure(s) Integration
Legacy
(v) IT systems Likert - - 2.50 1.2910 1.67 - -
Integration
Legacy
(vi) Risk identification Likert - - 2.25 0.9574 0.92 - -
Integration
(vii) Risk management and/or Legacy
Likert - - 2.00 0.8165 0.67 - 1
mitigation Integration
21d. What other major topics
Legacy
were covered in the legacy Free Text - - - - - - 2
Integration
system integration plan?
22. Change Management
8 -
Planning
Don’t
#N/A
Relevant EME Question (for binary) (for Likert Scales) Know
Survey Question
Area Type Average Standard
#Yes #No Variance
Score Deviation
22a. Did the organization
develop a plan for evaluating, Change
Binary 6 1 - - - - -
implementing, and Managing Management
change?
Change
22b. If not, why not? Free Text - - - - - 1 -
Management
Change
22c. Has this plan been used? Binary - - - - - 1 -
Management
22d. If so, how successful would
Change
you say that the plan has Likert - - 1.40 0.5477 0.30 1 -
Management
been?
22e. Please list reason(s) why Change
Free Text - - - - - 1 -
or why not? Management
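The Average Score, Standard Deviation, and Variance columns above are consistent with sample (n-1) statistics, since each row's variance equals the square of its standard deviation (e.g., 1.1547² ≈ 1.33). A minimal sketch of that summary, using a hypothetical response set [1, 1, 3, 3] chosen only because it reproduces the row for item 20d(v):

```python
import statistics

def likert_stats(responses):
    """Summarize Likert responses as the survey tables do:
    average score, sample standard deviation, sample variance."""
    return (statistics.mean(responses),
            statistics.stdev(responses),      # n-1 denominator
            statistics.variance(responses))   # n-1 denominator

# Hypothetical responses reproducing Average 2.00, Std. Dev. 1.1547, Variance 1.33:
avg, sd, var = likert_stats([1, 1, 3, 3])
print(round(avg, 2), round(sd, 4), round(var, 2))   # 2.0 1.1547 1.33
```

The actual per-company responses behind each row are not reproduced in this appendix, so the input list here is illustrative only.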
Table E- 2 -- Company-level EME Survey Area Responses

| EME Area | A | B | C | D | E | F | G | H | Avg | StdDev | Rank | N/A |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Enterprise Definition | 3.00 | 1.80 | 2.20 | 3.00 | 2.25 | 2.80 | 3.20 | 1.00 | 2.61 | 0.5232 | 15 | 0 |
| 2. Value Proposition | 3.67 | 3.00 | 2.00 | 1.00 | 2.00 | 3.33 | 2.33 | 1.33 | 2.48 | 0.9201 | 13 | 0 |
| 3. Critical Asset Identification | 2.00 | 3.00 | 2.50 | 5.00 | 1.25 | 3.00 | 2.25 | 1.50 | 2.71 | 1.1764 | 16 | 0 |
| 4. Asset Source Identification | 2.75 | 2.25 | 2.25 | 5.00 | 1.75 | 3.00 | 2.25 | 2.00 | 2.75 | 1.0704 | 18 | 0 |
| 5. Strategic Objectives | 2.43 | 1.00 | 2.14 | 2.67 | 1.43 | 3.00 | 2.43 | 2.29 | 2.16 | 0.7056 | 7 | 0 |
| 6. Environmental Changes | 3.50 | N/A | 3.00 | 5.00 | 1.50 | 4.00 | N/A | 1.00 | 3.40 | 1.2942 | 22 | 2 |
| 7a. Leadership Support | 1.50 | 2.36 | 1.57 | 2.14 | 3.00 | 2.29 | 2.00 | 1.14 | 2.12 | 0.5096 | 5 | 0 |
| 7b. Organizational Support | 2.58 | 3.42 | 1.92 | 1.50 | 2.00 | 2.75 | 2.33 | 1.50 | 2.36 | 0.6305 | 10 | 0 |
| 7c. Technology Support | 2.67 | 1.71 | 2.13 | 1.78 | 2.33 | 2.50 | 2.06 | 1.81 | 2.17 | 0.3551 | 8 | 0 |
| 7d. Learning Support | 2.40 | 2.60 | 1.75 | 1.00 | 2.20 | 3.00 | 1.75 | 1.20 | 2.10 | 0.6602 | 4 | 0 |
| 8. Strategic Functions | 2.67 | 2.33 | 1.67 | 5.00 | 2.67 | 3.67 | 2.00 | 1.33 | 2.86 | 1.1362 | 19 | 0 |
| 9. Functional Processes | 2.00 | 2.67 | 1.67 | 5.00 | 2.67 | 1.00 | 2.00 | 1.33 | 2.43 | 1.2724 | 11 | 0 |
| 10. Required Assets | 2.67 | 3.67 | 2.00 | 5.00 | 2.67 | 3.00 | 2.00 | 1.00 | 3.00 | 1.0541 | 20 | 0 |
| 11. Asset Sources | 2.00 | 2.67 | 2.33 | 5.00 | 2.33 | 2.67 | 2.00 | 1.00 | 2.71 | 1.0440 | 16 | 0 |
| 12a. KM Strategy Identification | 1.00 | 2.00 | 2.00 | 3.00 | 3.00 | 2.00 | 2.00 | 1.00 | 2.14 | 0.6901 | 6 | 0 |
| 12b. KM Strategy Effectiveness | 3.00 | 2.20 | 1.80 | 1.60 | 2.60 | 2.60 | 1.80 | 1.00 | 2.23 | 0.5219 | 9 | 0 |
| 13b. Informal Organizational Structure | 2.00 | N/A | 2.50 | N/A | N/A | 1.00 | N/A | 1.00 | 1.83 | 0.7638 | 3 | 4 |
| 14. KM Technology Listing | 4.00 | 2.00 | 1.00 | 4.00 | 3.00 | 1.00 | 2.00 | 1.00 | 2.43 | 1.2724 | 11 | 0 |
| 15. Integrative Management Plan | 1.78 | 1.13 | 1.44 | N/A | 1.38 | 2.44 | 1.56 | 1.33 | 1.62 | 0.4571 | 2 | 1 |
| 16. Legacy System Integration | 5.00 | 4.00 | 2.25 | 3.00 | 2.67 | 5.00 | 3.67 | 2.50 | 3.65 | 1.0891 | 23 | 0 |
| 17. Risk Management/Mitigation | N/A | N/A | 2.5 | N/A | 2.00 | N/A | 3.00 | 1.00 | 2.50 | 0.5000 | 14 | 4 |
| 18. Change Management | N/A | 2.00 | 2.00 | 1.00 | N/A | 1.00 | N/A | 1.00 | 1.50 | 0.5774 | 1 | 3 |
| EME Score | 2.62 | 2.49 | 2.03 | 3.19 | 2.32 | 2.68 | 2.23 | 1.32 | | | | |
| EME Rank | 5 | 4 | 1 | 7 | 3 | 6 | 2 | 1 | | | | |
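The Rank column orders the EME areas by their average score, with the lowest average ranked first (e.g., Change Management at 1.50 ranks 1; Legacy System Integration at 3.65 ranks last). A minimal sketch of that score-to-rank conversion, using a hypothetical three-item subset of the scores above:

```python
def rank_ascending(scores):
    """Map each item's score to its rank (1 = lowest score).
    Assumes no ties; ties would need a convention the table does not state."""
    ordered = sorted(scores, key=scores.get)
    return {item: i + 1 for i, item in enumerate(ordered)}

# Hypothetical subset of the overall EME scores by company:
print(rank_ascending({"C": 2.03, "D": 3.19, "H": 1.32}))
# {'H': 1, 'C': 2, 'D': 3}
```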
[Garbled in extraction: a per-company rank listing for Companies A through H across the EME areas (1 – Enterprise Definition through 18 – Change Management). The per-area company ranks appear in Tables E-4 through E-26 below.]
Table E- 4 -- H3 Statistical Comparison, “EME1 - Enterprise Definition”
Company A 6 7 7
Company B 2 5 2
Company C 3 2 5
Company D 6 6 8
Company E 4 8 3
Company F 5 4 6
Company G 8 1 1
Company H 1 3 3
Table E- 5 -- H3 Statistical Comparison, “EME2 – Value Proposition”
Company A 8 7 7
Company B 6 5 2
Company C 3 2 5
Company D 1 6 8
Company E 3 8 3
Company F 7 4 6
Company G 5 1 1
Company H 2 3 3
Table E- 6 -- H3 Statistical Comparison, “EME3 – Critical Asset Identification”
Company A 3 7 7
Company B 6 5 2
Company C 5 2 5
Company D 8 6 8
Company E 1 8 3
Company F 6 4 6
Company G 4 1 1
Company H 2 3 3
Table E- 7 -- H3 Statistical Comparison, “EME4 – Asset Source Identification”
Company A 6 7 7
Company B 3 5 2
Company C 3 2 5
Company D 8 6 8
Company E 1 8 3
Company F 7 4 6
Company G 3 1 1
Company H 2 3 3
Table E- 8 -- H3 Statistical Comparison, “EME5 – Strategic Objectives”
Company A 5 7 7
Company B 1 5 2
Company C 3 2 5
Company D 7 6 8
Company E 2 8 3
Company F 8 4 6
Company G 5 1 1
Company H 4 3 3
Table E- 9 -- H3 Statistical Comparison, “EME6 – Environmental Changes”
Companies that answered “N/A” or “DK” for an entire section are left blank
Company A 4 7 7
Company B 5 2
Company C 3 2 5
Company D 6 6 8
Company E 2 8 3
Company F 5 4 6
Company G 1 1
Company H 1 3 3
Table E- 10 -- H3 Statistical Comparison, “EME7a – Leadership Support”
Company A 2 7 7
Company B 7 5 2
Company C 3 2 5
Company D 5 6 8
Company E 8 8 3
Company F 6 4 6
Company G 4 1 1
Company H 1 3 3
Table E- 11 -- H3 Statistical Comparison, “EME7b – Organizational Support”
Company A 6 7 7
Company B 8 5 2
Company C 3 2 5
Company D 1 6 8
Company E 4 8 3
Company F 7 4 6
Company G 5 1 1
Company H 1 3 3
Table E- 12 -- H3 Statistical Comparison, “EME7c – Technology Support”
Company A 8 7 7
Company B 1 5 2
Company C 5 2 5
Company D 2 6 8
Company E 6 8 3
Company F 7 4 6
Company G 4 1 1
Company H 3 3 3
Table E- 13 -- H3 Statistical Comparison, “EME7d – Learning Support”
Company A 6 7 7
Company B 7 5 2
Company C 3 2 5
Company D 1 6 8
Company E 5 8 3
Company F 8 4 6
Company G 3 1 1
Company H 2 3 3
Table E- 14 -- H3 Statistical Comparison, “EME8 – Strategic Functions”
Company A 5 7 7
Company B 4 5 2
Company C 2 2 5
Company D 8 6 8
Company E 5 8 3
Company F 7 4 6
Company G 3 1 1
Company H 1 3 3
Table E- 15 -- H3 Statistical Comparison, “EME9 – Functional Processes”
Company A 4 7 7
Company B 6 5 2
Company C 3 2 5
Company D 8 6 8
Company E 6 8 3
Company F 1 4 6
Company G 4 1 1
Company H 2 3 3
Table E- 16 -- H3 Statistical Comparison, “EME10 – Required Assets”
Company A 4 7 7
Company B 7 5 2
Company C 2 2 5
Company D 8 6 8
Company E 4 8 3
Company F 6 4 6
Company G 2 1 1
Company H 1 3 3
Table E- 17 -- H3 Statistical Comparison, “EME11 – Asset Sources”
Company A 2 7 7
Company B 6 5 2
Company C 4 2 5
Company D 8 6 8
Company E 4 8 3
Company F 6 4 6
Company G 2 1 1
Company H 1 3 3
Table E- 18 -- H3 Statistical Comparison, “EME12a – KM Strategy Identification”
Company A 1 7 7
Company B 3 5 2
Company C 3 2 5
Company D 7 6 8
Company E 7 8 3
Company F 3 4 6
Company G 3 1 1
Company H 1 3 3
Table E- 19 -- H3 Statistical Comparison, “EME12b – KM Strategy Effectiveness”
Company A 8 7 7
Company B 5 5 2
Company C 3 2 5
Company D 2 6 8
Company E 6 8 3
Company F 6 4 6
Company G 3 1 1
Company H 1 3 3
Table E- 20 -- H3 Statistical Comparison, “EME 13a – Formal Organizational Structure”
Companies that answered “N/A” or “DK” for an entire section are left blank
Company A 3 7 7
Company B 4 5 2
Company C 2 5
Company D 6 8
Company E 4 8 3
Company F 4 4 6
Company G 2 1 1
Company H 1 3 3
Table E- 21 -- H3 Statistical Comparison, “EME13b – Informal Organizational Structure”
Companies that answered “N/A” or “DK” for an entire section are left blank
Company A 3 7 7
Company B 5 2
Company C 4 2 5
Company D 6 8
Company E 8 3
Company F 1 4 6
Company G 1 1
Company H 1 3 3
Table E- 22 -- H3 Statistical Comparison, “EME 14 – KM Technology Listing”
Company A 7 7 7
Company B 4 5 2
Company C 1 2 5
Company D 7 6 8
Company E 6 8 3
Company F 1 4 6
Company G 4 1 1
Company H 1 3 3
Table E- 23 -- H3 Statistical Comparison, “EME15 – Integrative Management Plan”
Companies that answered “N/A” or “DK” for an entire section are left blank
Company A 6 7 7
Company B 1 5 2
Company C 4 2 5
Company D 6 8
Company E 3 8 3
Company F 7 4 6
Company G 5 1 1
Company H 2 3 3
Table E- 24 -- H3 Statistical Comparison, “EME 16 – Legacy System Integration”
Company A 7 7 7
Company B 6 5 2
Company C 1 2 5
Company D 4 6 8
Company E 3 8 3
Company F 7 4 6
Company G 5 1 1
Company H 2 3 3
Table E- 25 -- H3 Statistical Comparison, “EME17 – Risk Management/Mitigation”
Companies that answered “N/A” or “DK” for an entire section are left blank
Company A 7 7
Company B 5 2
Company C 3 2 5
Company D 6 8
Company E 2 8 3
Company F 4 6
Company G 4 1 1
Company H 1 3 3
Table E- 26 -- H3 Statistical Comparison, “EME 18 – Change Management”
Companies that answered “N/A” or “DK” for an entire section are left blank
Company A 7 7
Company B 4 5 2
Company C 4 2 5
Company D 1 6 8
Company E 8 3
Company F 1 4 6
Company G 1 1
Company H 1 3 3
APPENDIX F -- ADDITIONAL GRAPHS AND CHARTS
Figure F - 1 -- EME/KC/CVR Rank Correlations
[Chart: "Rank Correlations" -- KC Rank, CVR Rank, and EME Rank plotted by Company; y-axis: Rank.]
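The agreement among rankings plotted in this figure can be quantified with Spearman's rank correlation. A minimal sketch of the standard formula; the rank vectors below are hypothetical, not the survey's:

```python
def spearman_rho(rank_x, rank_y):
    """Spearman rank correlation: 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    Valid when both inputs are untied rankings of 1..n."""
    n = len(rank_x)
    d2 = sum((x - y) ** 2 for x, y in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Identical rankings agree perfectly; a reversed ranking disagrees perfectly.
print(spearman_rho([1, 2, 3, 4], [1, 2, 3, 4]))   # 1.0
print(spearman_rho([1, 2, 3, 4], [4, 3, 2, 1]))   # -1.0
```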
Figure F - 2 -- Trendlines -- EME and CVR
[Chart: CVR Score and Overall EME Score by Company, with a linear trendline for each series; y-axis: Raw Scores.]
Figure F - 3 -- Correlation Between KC and CVR Rankings
[Chart: KC Rank and CVR Rank plotted by Company; y-axis: Rank.]
Table F- 1 -- Free Text Comments From EME Survey
Question Answers
least one specific example identified where training had a positive impact
on plant or human performance? (3)Was all needed training completed
during the measured quarter to meet plant performance needs? (4)Were
all training-related PIP corrective actions due in the measured quarter
completed or statused in a timely manner? (5)Were there any Category 1
or Category 2 PIPs assigned to the organization related to plant
performance with Training identified as the Culpable Organization in the
Problem Evaluation section? (6)Was training expertise utilized during
root cause investigation(s) of human performance problems to help
identify and differentiate between training and non-training solutions in
the measured quarter? (7)Were all training-related Action Tracking
Registers (specific discipline Training and specific discipline TPRC) due
in the measured quarter completed or statused in a timely manner? (8)
Are identified negative human performance trends, which result from a
knowledge or skill deficiency being addressed by training? (9)Are
methods to determine effectiveness of training considered prior to
development of training for performance improvement? Please note
these metrics are used solely for Objective 1 (i.e. there are five other
Objectives and supporting metrics.) This measurement is performed
quarterly.
6. Environmental Changes and Sensors
(How well did the organization identify and/or define potential environmental changes (other
x 20
x 3-4 Risks were identified before and during the project

(During the system development stage, how well did the organization list the operational processes required to accomplish these functions? Was this list changed at any point in the process?) 12e. Why and when?
x As we discovered gaps and non-value-added processes, we evolved our understanding
x Adjustments were made in iterative cycles
x Similar to the prior item, new functionality creates new functionality and operations. Client-Support was not an initial operation required, but the overwhelming need of the business for How-To, FAQ, Getting Started, etc. required a new operational element
13. Intellectual Asset Listing
(During the system development stage, how well did the organization list the intellectual assets required to accomplish these functions? Was this list changed at any point in the process?) 13e. Why and when?
x Early in the program's implementation, knowledge and/or skill deficiencies were identified mainly through the identification of operational problems and/or commitment of human error. This was especially true with early plant designs. Training organizations had to adapt by learning how to provide just-in-time training to compensate for operator knowledge and skill deficiencies. Training's improved performance over the past 20 years in this area has reduced the probability of similar issues becoming problems.
14. Intellectual Asset Sourcing
(During the system development stage, how well did the organization list the sources of these intellectual assets? Was this list changed at any point in the process?) 14e. Why and when?
x Infrequent source-issues have occurred over the years. One that comes to mind was the need to have an efficient, non-costly way to permit personnel to practice the skill of 'self-verification'. (Self-verification is a 'tool' used by individuals to help them reduce the rate at which human error is committed.) Simulator personnel developed a small (3 feet wide by 5 feet tall) simulator on wheels with labels, switches, and procedure filled with human interface problems. The idea was to make the process difficult to complete without committing human error. The software could be driven by a simple 386 computer (old model). Plans and software for the 'self-verification' simulator were given free of charge to all plants in the U.S. This is a great example because of the ingenuity and simplicity of the resolution to the problem, the teamwork of a Training organization with line organizations, and the free-sharing of a great idea with other organizations. The implementation of this idea has led to further investigation and research into this and other human performance related themes and behaviors.
17. Formal Organizational Structure
(Did an accurate diagram or representation of the formal organizational structure exist at any point in the KMS implementation process? Was this changed at any point in the process?) 17f. Why and when?
x to reflect organizational changes
x Often, as people joined and left the organization or as organizational change occurred as a result of business changes
x Additional skills and knowledge were needed on the customer service side of the implementation.
18. Informal Organizational Structure
(Did an accurate diagram or representation of the informal organizational structure exist at any point in the KMS implementation process? Was this changed at any point in the process?) 18g. Why and when?
x As the org changed and processes changed, the informal structure needed to be modified
x Creation and maintenance of training performance criteria and assessment methodology for non-accredited training programs support was recognized as a way to reduce safety and human performance issues, refine assessment and observation programs, and assist in the