

A VALIDATION OF THE

ENTERPRISE MANAGEMENT ENGINEERING APPROACH TO

KNOWLEDGE MANAGEMENT SYSTEMS ENGINEERING

by

Ellen F. Mac Garrigle

B.S., History and Naval Engineering, May 1985, United States Naval Academy

M.E.A., Marketing of Technology, February 1990, the George Washington University

A Dissertation submitted to

The Faculty of

The School of Engineering and Applied Science

of the George Washington University in partial satisfaction

of the requirements of the degree of Doctor of Science

Graduation Date

May 30, 2006

Dissertation directed by:

Dr. Michael A. Stankosky

Associate Professor of Engineering Management and Systems Engineering


Copyright © 2006

Ellen F. Mac Garrigle

ABSTRACT

Knowledge management is one of the current “buzzwords” gaining popularity on an

almost-daily basis within the business world. Much attention has been paid to the theory

and justification of knowledge management (KM) as an effective business and

organizational practice. However, much less attention has been paid to the more specific

issues of effective implementation of knowledge management, or to the financial benefit or payoff that could result from an effective system

implementation. As the concept of KM becomes more generally accepted, knowledge

management systems (KMS) are becoming more prevalent. A KMS is often considered

simply another information system to be designed, built, and supported by the IT

department. In actual implementation, many KM system development efforts are not

successful. There is frequently a perception that strict adherence to development

processes produces excessive delay, rigor, and formality which will “disrupt” the

desired free flow of knowledge. Professor Michael Stankosky of GWU has posited a

more flexible variation of the usual systems engineering (SE) approach, tailored

specifically to the KM domain and known as Enterprise Management Engineering©

(EME). This approach takes the four major pillars of KM as identified by GWU research

in this area – Leadership, Organization, Technology, and Learning -- and adapts eighteen

key SE steps to accommodate the more flexible and imprecise nature of “knowledge”.

Anecdotal study of successful KMS developments has shown that many of the more

formal processes imposed by systems engineering (such as defining strategic objectives

before beginning system development) serve a useful purpose. Consequently, an

integrated systems engineering process tailored specifically to the KM domain should

lead to more successful implementations of KM systems. If this is so, organizations that

have followed some or all of the steps in this process will have designed and deployed

more “successful” KMS than those organizations that have not done so. To support and

refine this approach, a survey was developed to determine the usage of the 18 steps

identified in EME. These results were then analyzed against an objective financial

measurement of organizational KM to determine whether a correlation exists. This study

is intended to test the efficacy of the EME approach to KM

implementation.

For the financial measurement data, the list of subject organizations for this study was drawn using a measure of intangible valuation developed by Professor Baruch Lev of NYU called

Knowledge Capital Earnings © (KCE). This is the amount of earnings that a company

with good “knowledge” has left over once its earnings based on tangible financial and

physical assets have been subtracted from overall earnings. KCE can then be used to

determine the Knowledge Capital (KC) of an organization. This in turn provides two

quantitative measures (one relative, one absolute) that can be used to define a successful

knowledge company.

For this study, Lev’s research from 2001 was updated, using more recent financial

data. Several of these organizations completed a survey instrument based upon the 18

points of the EME approach. The results for the 18 steps were compared against each

other and against each organization’s KC scores. The results show that there is a

significant correlation between EME and the relative KC measurement, and select EME

steps do correlate significantly with a high KC value. Although this study, being the first

validation effort, does not show provable causation, it does demonstrate a quantifiable

correlation and association between EME and successful KM implementation. This in

turn should contribute to the slim body of objective knowledge on the design,

deployment, and measurement of KM systems.

DEDICATION

I would like to dedicate this to my family:

• My father, George Fain, who so bravely resisted the temptation to ask me when (if) I was ever going to finish

• My in-laws, Patricia and (particularly) Edward Hymson, who had no such qualms

• My sister, Kathy Fain, who still does not believe that I have actually finished this

• My husband, Ken, who tolerated endless nights of false starts and stops and took care of our daughter countless evenings and weekends so that I could finish this

• My daughter, Kelly, who someday will wonder what all of the big deal was about this

Most of all, I want to dedicate this to my mother, Dr. Lin Fain, who went back to

school after twenty years and completed her own dissertation while working full-time:

• She never thought education could – or should – be optional

• She never believed that engineering and history were incompatible

• She never let circumstances interfere with her plans

Although she is not here to see me finish this in person, I know that she is monitoring

every keystroke, wincing at every awkward sentence, and trying to correct every

typographical error.

I also hope that she is proud that I will be wearing her cap and gown when I receive

this degree.

ACKNOWLEDGEMENTS

I would like to thank the following people who have been instrumental to my

completing this incredibly long journey:

• Professor Baruch Lev, of the Stern School of Business at NYU, who graciously granted me permission to cite his theories and works on knowledge valuation in this document. Any errors or misinterpretations are mine and mine alone.

• Professor Barry Silverman, formerly of GWU and now of the University of Pennsylvania, for originally taking me on as a doctoral candidate all those years ago

• Professor Mike Stankosky, GWU, who subsequently agreed to take me on as the world’s longest-serving doctoral candidate, who helped me to find not one but two new dissertation topics when my others became obsolete, who encouraged me to stay with it all these years, and who also granted me permission to cite his works and theory on enterprise management engineering. As with Professor Lev, any errors or misinterpretations are mine and mine alone.

• Professors Jack Harrald and Lile Murphree and Lecturers Frank Calabrese and Charlie Bixler of GWU, for agreeing to serve on my committee no matter how long it took

• Professor Thomas Mazzuchi, Chair of the Engineering Management and Systems Engineering Department at GWU, who finally, firmly, and graciously told me that enough was enough and that I needed to complete this degree

• Ms. Zoe Dansan, who provided the outstanding coordination and support that allowed me to get through all of the university wickets for this project despite my having a full-time job and rarely setting foot on campus

• The hard-working information technology and knowledge management professionals at the firms contacted for this study, for taking the time and the interest to reply to my repeated requests for information (and I resolve never to ignore an email from a student again)

• My employer, the MITRE Corporation, McLean, Virginia, for encouraging this degree and for giving me the support (and the small office supplies) needed to complete this task

• My supervisors and managers over the years – Dr. Steve Cohen, CAPT USN (retired), and Joe Vrabel, Don Zugby, and Lori Scherer of MITRE – who politely but repeatedly egged me on both to begin the degree and to complete the dissertation

• My coworkers, customers, and sponsors, who provided their own share of strong support and pointed questions

• My friends, especially the GNO crowd of Susan Keuch, Karen Detweiler, and Heidi Avery, who hung placeholders on my office walls and constantly asked me when I was going to finish my degree (and Karen who is soon to finish her own doctorate)

• Thanks to all of you and to anyone else I forgot to name for all of your support over the years – I could not have done this without all of your help.

TABLE OF CONTENTS

Abstract .................................................................................................................... iii

Dedication .................................................................................................................... vi

Acknowledgements ......................................................................................................... vii

Table of Contents ............................................................................................................. ix

List of Figures ................................................................................................................. xiv

List of Tables .................................................................................................................. xvi

List of Acronyms ............................................................................................................ xix

Chapter 1 Introduction ............................................................................................... 1

1.1 Background of the Problem .............................................................................. 1

1.2 Statement of the Problem .................................................................................. 2

1.3 Purpose of the Study ......................................................................................... 3

1.4 Statement of the Hypotheses ............................................................................. 4

H1: Does use of EME affect KM success? ........................................................ 4

H2: Does More EME Correlate to More KM Success? .................................... 5

H3: Do Different Parts of EME Correlate More Strongly to KM Success? ...... 6

1.5 Overview ........................................................................................................... 6

Chapter 2 Review of Related Literature ................................................................... 8

2.1 Integrated Approaches to Systems Engineering................................................ 8

2.1.1 Before Integration – The Waterfall ....................................................... 9

2.1.2 Integration of Control and Oversight – the Vee .................................. 12

2.1.3 Integration of Support and Management – SIMILAR ........................ 14

2.1.4 Integration of Process -- Software Engineering Institute/Capability

Maturity Model (CMM) ...................................................................... 16

2.1.5 Integration of Stakeholder – the CMMI .............................................. 19

2.1.6 Integration of Human Components -- The IEEE 2005 SE Standard... 22

2.2 Knowledge Management Systems .................................................................. 24

2.3 Knowledge Systems Engineering ................................................................... 27

2.4 Measuring the “Success” of Knowledge Management ................................... 32

2.4.1 Skandia Navigator ............................................................................... 34

2.4.2 Intangible Asset Management ............................................................. 38

2.4.3 Balanced Scorecard ............................................................................. 41

2.4.4 Human Resource Costing and Accounting (HRCA) .......................... 44

2.4.5 MERITUM Project ............................................................................. 46

2.4.6 Danish Agency for Trade and Industry (DATI) .................................. 48

2.4.7 Value Based Management/Value Creation Index (VCI)..................... 50

2.4.8 Value Creation Index (VCI) ................................................................ 52

2.4.9 Value Creation Index (A.T. Kearney version) .................................... 53

2.4.10 Framework of Intangible Valuation Areas (FIVA) ............................. 56

2.4.11 Value Chain Scoreboard (VCS) .......................................................... 57

2.5 Knowledge Scorecard/Knowledge Capital ..................................................... 60

Chapter 3 Research Problem ................................................................................... 68

3.1 Problem ........................................................................................................... 68

3.2 Hypotheses ...................................................................................................... 69

3.2.1 H1: Does EME Correlate to KM Success? .......................................... 70

3.2.2 H2: Does More EME Correlate to More KM Success? ...................... 70

3.2.3 H3: Do Different Parts of EME Correlate More Strongly to KM

Success? .............................................................................................. 71

3.3 Approach ......................................................................................................... 72

3.4 Next Steps ....................................................................................................... 75

Chapter 4 Research Methodology ........................................................................... 76

4.1 Subjects ........................................................................................................... 76

4.2 Instrument Development ................................................................................. 77

4.3 Instrument Development and Piloting ............................................................ 79

4.4 Design and Procedure ..................................................................................... 81

4.4.1 Knowledge Capital Data ..................................................................... 81

4.4.2 EME survey......................................................................................... 81

4.5 Key Risks, Assumptions and Limitations ....................................................... 84

4.5.1 Sarbanes-Oxley Act ............................................................................ 84

4.5.2 Age of KMS Projects .......................................................................... 85

4.5.3 Informality of KMS Projects ............................................................... 86

4.5.4 Different Approaches to KM and KMS Projects ................................ 87

4.5.5 Complexity/Comprehension of the EME Survey ............................... 88

Chapter 5 Research Findings ................................................................................... 90

5.1 Data Collection ............................................................................................... 90

5.1.1 Knowledge Capital Data ..................................................................... 90

5.1.2 Enterprise Management Engineering Assessment Data ..................... 92

5.2 Data Analysis .................................................................................................. 96

5.2.1 Knowledge Capital Data ..................................................................... 96

5.2.2 Enterprise Management Engineering Survey Data ............................. 99

5.2.3 Results ............................................................................................... 106

5.3 Correlation and Hypothesis Tests ................................................................. 108

5.3.1 H1: Does EME Correlate to KM Success? ........................................ 109

5.3.2 H2: Does More EME Correlate to More KM Success? .................... 110

5.3.3 H3 -- Do Different Parts of EME Correlate More Strongly to KM

Success? ............................................................................................ 114

5.4 Additional Data ............................................................................................. 120

Chapter 6 Conclusions, Discussion, And Suggestions For Further Research ... 121

6.1 Conclusions ................................................................................................... 121

6.1.1 EME/KC Correlation ........................................................................ 122

6.1.2 KC Valuation Method Conclusions .................................................. 124

6.1.3 EME Survey Conclusions ................................................................. 126

6.2 Areas for Future Research............................................................................. 129

6.3 Discussion ..................................................................................................... 131

References ................................................................................................................. 134

Appendix A -- Survey Instrument ............................................................................... 153

Appendix B –Financial Data Variables and Definitions ........................................... 166

Appendix C – Financial Data Received ...................................................................... 176

Appendix D – Raw Survey Data .................................................................................. 179

Appendix E – Intermediate Statistical Calculations and Conclusions..................... 198

Appendix F -- Additional Graphs And Charts .......................................................... 236

LIST OF FIGURES

Figure 2-1 -- Waterfall Model (Royce) [107] ................................................................... 10

Figure 2-2 -- Vee Model (Forsberg Mooz and Cotterman) [45] ....................................... 13

Figure 2-3 -- SIMILAR Model (Bahill and Gissing) [9] .................................................. 15

Figure 2-4 -- Continuous Representation, CMMI 1.1 [25] ............................................... 20

Figure 2-5 -- Staged Representation, CMMI 1.1 [25] ........................................... 21

Figure 2-6 -- IEEE Systems Engineering Process (2005) [58] ......................................... 23

Figure 2-7 -- The Three Components of a Knowledge Management System................... 25

Figure 2-8 -- Stankosky’s Original Integrative KMS Engineering Model [121] .............. 29

Figure 2-9 -- Stankosky's Enterprise Management Engineering Model ........................... 30

Figure 2-10 -- Skandia Navigator (Edvinsson) [36] ......................................................... 36

Figure 2-11 -- Intangible Asset Monitor Model (Sveiby) [128] ....................................... 39

Figure 2-12 -- Balanced Scorecard (Kaplan and Norton) [68].......................................... 42

Figure 2-13 -- Human Resource Cost Accounting (HRCA) (Brummet, Flamholtz, Pyle) 45

Figure 2-14 -- Intellectual Capital Management Model (MERITUM) [65] ..................... 48

Figure 2-15 -- Intellectual Capital Statement (DATI) [28] ............................................... 49

Figure 2-16 -- Value-Based Accounting (Ittner and Larcker) [64] ................... 51

Figure 2-18 -- Framework of Intangible Assets (Green) [49] ........................................... 57

Figure 2-19 -- Value Chain Scorecard (Lev) [79] ............................................................. 59

Figure 5-1 -- Mean Scores By EME Area ....................................................................... 104

Figure 5-2 -- Company Overall EME Scores .................................................................. 105

Figure 5-3 -- EME Area Scores Per Company................................................................ 107

Figure 5-4 -- Company Scores By EME Area ................................................................ 108

Figure 5-5 -- EME-CVR and EME-KC Correlation ....................................................... 118

Figure 6-1 -- Industrial Sector EME Scores .................................................................... 131

Figure F - 1 -- EME/KC/CVR Rank Correlations .......................................................... 237

Figure F - 2 -- Trendlines -- EME and CVR ................................................................... 238

Figure F - 3 -- Correlation Between KC and CVR Rankings ......................................... 239

LIST OF TABLES

Table 2-1 -- EIA Systems Engineering Focus Areas [40] ................................................. 17

Table 2-2 -- Capability Maturity Model Levels ................................................................ 18

Table 2-3 -- Perceived vs. Actual Intangible Value Factors (VCI) ................................... 52

Table 2-4 -- Knowledge Capital Across Non-Financial Industry Sectors, 1999 [102] ..... 64

Table 2-5 -- Lev's Top 50 Knowledge Companies – Knowledge Capital (2001) ............. 65

Table 3-1 -- Mapping of EME and Survey Instrument Questions .................................... 72

Table 4-1 -- EME KMS Survey Areas .............................................................................. 78

Table 4-2 -- Survey Question Types ................................................................................. 82

Table 5-1 -- Survey Respondents ...................................................................................... 94

Table 5-2 – Characterization of Survey Respondents ....................................................... 95

Table 5-3 -- Knowledge Capital Calculations ($M) ......................................................... 98

Table 5-4 – Comprehensive Value Ratio Calculations (CVR)........................... 99

Table 5-6 -- EME Effectiveness Ratings by Area ........................................................... 103

Table 5-7 -- Unused or Unknown EME Areas ............................................................... 106

Table 5-8 – Data Used for H2 Statistical Comparison .................................................... 111

Table 5-9 – Data Used for H2 rs – EME vs KC .............................................................. 112

Table 5-10 – Data Used for H2 rs – EME vs KC ............................................................ 113

Table 5-11 – Data for H3 Statistical Comparison, “EME1 – Enterprise Definition”...... 116

Table 5-12 -- EME and KM Correlations By EME Phase (α=0.05, n=8 ranked pairs).. 116

Table 6-1 -- Perceived vs Actual EME "Success" Areas ................................................ 128

Table C- 1 -- Total Physical and Financial Asset Values ............................................... 177

Table C-2 -- Earnings Estimates and "Normalized" Earnings ........................................ 178

Table D- 1 -- Raw Survey Data....................................................................................... 180

Table E- 1 -- Summary of All Question Responses ........................................................ 199

Table E- 2 -- Company-level EME Survey Area Responses .......................................... 221

Table E- 3-- Company Ranks by EME Criteria .............................................................. 223

Table E- 4 -- H3 Statistical Comparison, “EME1 - Enterprise Definition” .................... 224

Table E- 5 -- H3 Statistical Comparison, “EME2 - Value Proposition”......................... 224

Table E- 6 -- H3 Statistical Comparison, “EME3 – Critical Asset Identification”......... 225

Table E- 7 -- H3 Statistical Comparison, “EME4 – Asset Source Identification” ......... 225

Table E- 8 -- H3 Statistical Comparison, “EME5 – Strategic Objectives”..................... 226

Table E- 9 -- H3 Statistical Comparison, “EME6 – Environmental Changes” .............. 226

Table E- 10 -- H3 Statistical Comparison, “EME7a – Leadership Support” .................. 227

Table E- 11 -- H3 Statistical Comparison, “EME7b – Organizational Support” ........... 227

Table E- 12 -- H3 Statistical Comparison, “EME7c – Technology Support” ................ 228

Table E- 13 -- H3 Statistical Comparison, “EME 7d – Learning Support” .................... 228

Table E- 14 -- H3 Statistical Comparison, “EME8 – Strategic Functions” .................... 229

Table E- 15 -- H3 Statistical Comparison, “EME9 – Functional Processes” ................. 229

Table E- 16 -- H3 Statistical Comparison, “EME10 – Required Assets” ....................... 230

Table E- 17 -- H3 Statistical Comparison, “EME11 – Asset Sources” .......................... 230

Table E- 18 -- H3 Statistical Comparison, “EME12a – KM Strategy Identification” .... 231

Table E- 19 -- H3 Statistical Comparison, “EME12b – KM Strategy Effectiveness” .... 231

Table E- 20 -- H3 Statistical Comparison, “EME 13a – Formal Organizational Structure”

................................................................................................................................. 232

Table E- 21 -- H3 Statistical Comparison, “EME 13b – Informal Organizational

Structure” ................................................................................................................ 232

Table E- 22 -- H3 Statistical Comparison, “EME 14 – KM Technology Listing” ......... 233

Table E- 23 -- H3 Statistical Comparison, “EME 15 – Integrative Management Plan” . 233

Table E- 24 -- H3 Statistical Comparison, “EME 16 – Legacy System Integration” ..... 234

Table E- 25 -- H3 Statistical Comparison, “EME 17 – Risk Management/Mitigation” . 234

Table E- 26 -- H3 Statistical Comparison, “EME 18 – Change Management” .............. 235

Table F- 1 -- Free Text Comments From EME Survey .................................................. 240

LIST OF ACRONYMS

APQC American Productivity and Quality Center

BSC Balanced Scorecard

BV Book Value

CMM Capability Maturity Model

CMMI Capability Maturity Model – Integrated

CV Comprehensive Value

CVR Comprehensive Value Ratio

DATI Danish Agency for Trade and Industry

DAU Defense Acquisition University (US DoD)

DK Don’t Know

DSMC Defense Systems Management College (US DoD)

EIA Electronic Industries Association

EME Enterprise Management Engineering

ER Economic Return

FASB Financial Accounting Standards Board (US)

FC Financial Capital

FIVA Framework of Intangible Valuation Areas

GAAP Generally Accepted Accounting Principles

GWU George Washington University

HRCA Human Resources Cost Accounting

IAM Intangible Asset Management

INCOSE International Council on Systems Engineering

KC Knowledge Capital

KCDR Knowledge Capital Discount Rate

KCE Knowledge Capital Earnings

KM Knowledge Management

KMS Knowledge Management System

KMSE Knowledge Management Systems Engineering

MERITUM Measuring Intangibles To Understand and Improve Innovation Management

MV Market Value

N/A Not Applicable

NPV Net Present Value

NYU New York University

PC Physical Capital

SEI Software Engineering Institute

VCI Value Creation Index

VCS Value Chain Scorecard

WRDS Wharton Research Data Services

CHAPTER 1 INTRODUCTION

“Knowledge is Power”

Sir Francis Bacon

1.1 Background of the Problem

Knowledge management (KM) is one of the current “buzzwords” gaining popularity

on almost a daily basis within the business world. Yet unlike many previous fads that

came and went, knowledge management has stayed. In many organizations, KM has

substantially improved the capabilities of the organization – to the point where the

systematic study of knowledge management can now be considered to be a valid

academic endeavor. Knowledge management itself has many definitions, depending on

the definer’s point of view:

• “Knowledge management is a systematic process of connecting people to people and people to the knowledge they need to effectively act and create new knowledge.” (American Productivity and Quality Center [APQC], KM research organization) [3]

• “Knowledge Management is the explicit and systematic management of vital knowledge - and its associated processes of creation, organization, diffusion, use and exploitation” (David Skyrme, KM consultant) [115]

• “Because knowledge is personalized, in order for one person’s knowledge to be useful to another individual, it must be communicated in such a manner as to be interpretable and accessible to the other individual…Knowledge management, then, refers to a systemic and organizationally specified process for acquiring, organizing and communicating both tacit and explicit knowledge of employees so that other employees may make use of it to be more effective and productive in their work.” (Maryam Alavi and Dorothy Leidner, KM academic researchers) [1]

• “…not a single undertaking; rather, it is a cultural and strategic change, powered by Information Technology and process innovation. As you implement Knowledge Management projects in your organization, regardless of the projects selected, the people, process, and technology issues need to be addressed simultaneously or disappointing results are apt to follow.” (Caspar W. Weinberger, former senior manager of an organization with KM) [20]

1.2 Statement of the Problem

Much attention has been paid to the theory and justification of knowledge

management as an effective business and organizational practice [128][124]. However,

much less attention has been paid to the more tactical issues of effective implementation of

knowledge management. A KM system, like any other system, requires a thorough

engineering process to ensure that it meets its intended requirements within its given

constraints. Knowledge management engineering is the “systematic design, development,

and integration of an enterprise’s intellectual assets to maximize its efficiency,

effectiveness, and innovation” [123]. Use of a formal and comprehensive systems

engineering process – “knowledge management engineering” – should lead to more

successful implementations of knowledge management systems.

1.3 Purpose of the Study

Systems engineering (SE) approaches are often used to build KM systems (indeed,

many organizations claim to use formal SE processes to build all their systems).

However, many KM system deployments are not considered successful. Sometimes the

initial definition of “system” (or of SE) is so formal and restrictive as to doom the

knowledge project to failure. Sometimes not all of the elements of the SE approach are used. In other cases, organizations believed that the rigor and standardization of a

typical formal process could disrupt the free flow of knowledge and so deliberately

ignored any sort of formal engineering process. Stankosky [122] has posited a KM-

specific variant of systems engineering known as “Enterprise Management Engineering”

(EME). This approach adapts essential systems engineering concepts to the more

informal, subjective, and flexible requirements of an effective knowledge management

system.

There may be a link between “shorting” the SE approach and building unsuccessful

systems. A Teltech survey of 83 organizations (with 93 KM applications) in 1998 found

that 76 percent of high-impact KM applications had “invested in putting together an

advance-planning strategy”; however, only 13 percent of low-impact projects had done

so. The organizations who implemented the projects informally, with little planning or

process, had a much lower reported success rate [55].

This study attempted to validate the EME model by correlating the “success” of

organizational KM systems with the use of aspects of the EME model. Subjects for the

effort were large nonfinancial corporations with publicly-available financial performance

figures.

1.4 Statement of the Hypotheses

Existing literature supports the use of a formal process to design, develop, and

operate a successful knowledge management system. It also supports the claim that that

process should be “integrative” – it should incorporate aspects of systems engineering and

effective project management [39]. This is particularly important given the seemingly-

contradictory requirements for advance planning and rapid flexibility when developing

KM systems [121]. However, it had not yet been validated that the EME process, as

defined by Stankosky in his initial iteration, results in a more successful KMS. Part of

the issue is the question of “successful”; in many similar past efforts, success has been

self-reported. This study used external and objective financial measures – knowledge

capital and knowledge capital earnings -- to determine whether knowledge management

is successful in an organization.
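In outline, these measures relate as follows. The block below is a schematic reconstruction using the variables defined in the List of Acronyms; the earnings normalization, the expected-return rates, and the orientation of the CVR ratio shown here are illustrative assumptions, and the definitions actually used in this study are given in Chapter 4 and Appendix B.

\[
\begin{aligned}
\text{KCE} &= E_{\text{norm}} - \left( r_{FC}\,FC + r_{PC}\,PC \right) \\
\text{KC} &= \frac{\text{KCE}}{\text{KCDR}} \\
\text{CV} &= BV + \text{KC}, \qquad \text{CVR} = \frac{\text{CV}}{MV}
\end{aligned}
\]

where E_norm is normalized earnings, FC and PC are a company's financial and physical capital, r_FC and r_PC are the expected returns on those asset classes, KCDR is the knowledge capital discount rate, BV is book value, and MV is market value. KC provides the absolute measure of knowledge "success" and CVR the relative one.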

Therefore, the first hypothesis set for this study is:

H1: Does use of EME affect KM success?

• H10: Organizations that use any of the EME approach show no difference in KM return when compared to organizations that use no EME.

The alternative hypothesis to this is:

• H1a: Organizations that use any of the EME approach show a more positive correlation with KM return when compared to organizations that use no EME.

If the null hypothesis can be disproved, the conclusion would be that the existence of

an integrative systems engineering approach in an organization correlates positively with

a high knowledge-related financial return and thus would indicate success for a KMS

within that organization.

However, this conclusion merely establishes a potential relationship between the

overall EME process and the successful KMS. In order to try to validate the specific

EME model, a correlation should be made between how much of the EME process was

used and how successful the KMS implementation was.

This brings up a second hypothesis set:

H2: Does More EME Correlate to More KM Success?

• H20: Organizations with a higher ranking on EME show no difference in KM return compared to those with lower rankings.

The alternative hypothesis to this is:

• H2a: Organizations with a higher ranking on EME show a positive correlation with KM return.

Disproving this null hypothesis would support the contention that more usage of EME

directly correlates to more KM return and thus more successful system implementation.

This still leaves one hypothesis to address – which parts of the EME process correlate

with the highest KM return?

H3: Do Different Parts of EME Correlate More Strongly to KM Success?

• H30: Organizations that use specific parts of the EME process show no difference in KM returns from organizations that use other parts of the EME approach.

The alternative hypothesis to this is:

• H3a: Organizations that use specific parts of the EME process show a positive correlation with KM returns.

If this null hypothesis can be disproved, the conclusion would be that specific EME

steps correlate more highly with more successful KMS implementations.

If all three null hypotheses can be rejected, EME can be correlated at a high level with

successful knowledge management, and the EME model therefore should be extended to

the KMS engineering realm for the next phase of validation.
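In practical terms, testing H2 and H3 reduces to a rank correlation between each organization's EME usage score and its KM return, since the available sample is small and both measures are ultimately compared as rankings (the rs statistics over n = 8 ranked pairs reported in Chapter 5). The fragment below is a minimal sketch of that kind of test, not the study's actual computation; it assumes Python with SciPy is available, and the eight scores shown are hypothetical.

# Illustrative only: Spearman rank correlation between overall EME usage
# scores and a KM "success" measure such as KC or CVR. The values below
# are hypothetical; the study's real data appear in Appendices C through E.
from scipy import stats

eme_scores = [3.2, 4.1, 2.8, 3.9, 3.5, 4.4, 2.5, 3.0]  # overall EME usage, one score per company
kc_values = [210, 480, 150, 390, 530, 260, 120, 180]   # Knowledge Capital ($M) for the same companies

rs, p_two_sided = stats.spearmanr(eme_scores, kc_values)

# The alternative hypotheses predict a positive association, so a one-sided
# test is appropriate: reject the null hypothesis of no association when
# rs > 0 and the one-sided p-value falls below 0.05.
p_one_sided = p_two_sided / 2 if rs > 0 else 1 - p_two_sided / 2
print(f"rs = {rs:.3f}, one-sided p = {p_one_sided:.3f}")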

1.5 Overview

This paper begins with a review of the relevant literature on the evolution of

integrative systems engineering, in which “traditional” systems engineering became

combined with project management concepts. It then reviews the need for knowledge

management engineering and summarizes the EME construct [122] and the Four Pillars

of KM. The last phase of the literature review lists several approaches for defining

knowledge management success metrics and ends with the measure used for defining KM

system success for this study – the Knowledge Capital (KC) valuation approach,

developed by Lev of NYU [80][92].

The next two chapters of the study address the relevance of the study topics and

outline the three hypothesis sets in additional detail. This is followed by a description of

the scope and methodology for the study itself, outlining the criteria for choosing subject

organizations, the nature of the financial measures used to define success, and the

structure and description of the survey instrument used to measure EME among the

subject organizations.

The last section of this paper summarizes the data, results, and conclusions. Eight

companies ultimately returned surveys within the deadline period. The first hypothesis

set could not be addressed because every participating organization reported at least some use of the EME

construct. There is no statistically-significant correlation between the absolute measure

of KC and EME (although the relationship is strongly suggestive). However, based upon the results

of the latter two hypothesis comparisons, it can be stated that there is indeed a statistical

correlation between the relative measure of KC, the Comprehensive Value Ratio (CVR),

and the use of EME. There are also several statistically-significant correlations for

specific EME areas and both measures of KC. The paper then identifies areas of the

EME construct that should be added or reexamined and highlights potential additional

areas for research. It concludes by highlighting apparent disconnects between the

corporate financial and KM communities and the “academic” and corporate definitions of

both knowledge and system, which should be addressed if KM systems are to become

widely accepted and acknowledged as such.

CHAPTER 2 REVIEW OF RELATED LITERATURE

One day Alice came to a fork in the road and saw a Cheshire cat in a tree.

“Which road do I take?” she asked.

“Where do you want to go?” was his response.

“I don't know,” Alice answered.

“Then,” said the cat, “it doesn't matter.”

“---so long as I get somewhere,” Alice added as an explanation.

Lewis Carroll

2.1 Integrated Approaches to Systems Engineering

Systems engineering, as a recognized discipline, essentially evolved from a means of optimizing large-scale industrial production in the mid-1950s into a way of handling the multiple dimensions and analyses demanded by the increasingly complex defense and space systems of the mid-1960s [5][73]. The “systems approach” allowed engineers and scientists to view

the entire problem set and increased their ability to model potential solutions to complex

problems; the development of “systems engineering” allowed them to design, develop,

implement, and operate those potential solutions. Soon the approach became mandatory

in the defense and aerospace fields and spread from there to other areas dealing with

technically- or operationally-complex issues, particularly into the area of computer and

software systems development [13][39][73].

One of the issues is the circular problem that it is extremely difficult to define “systems

engineering” without the use of the word “system”. Eisner [39] defines the systems

approach as “a recognition that all elements of a system interoperate harmoniously,

which, in turn, requires a systematic and repeatable process for designing, developing,

and operating the system.” The International Council on Systems Engineering (INCOSE)

defines systems engineering as “the application of an interdisciplinary approach and

means to enable the realization of successful systems” [61].

In fact, the U. S. Department of Defense even issued a draft military standard (MIL-

STD-499B) specifically calling out processes and procedures required for systems

engineering on military contracts. This standard was an update of a widely-used standard

originally issued in the early 1970s and defined systems engineering as “an

interdisciplinary approach to evolve and verify an integrated and life-cycle-balanced set

of system product and process solutions that satisfy customer needs” [33]. Although this

document was never formally issued (due to a DoD drive in the 1990s toward commercial

specifications), a “demilitarized” version of this standard in fact became the core for a

new generation of industry standards [34].

2.1.1 Before Integration – The Waterfall

Winston Royce first put forth the waterfall model for information systems design in

1970 as part of mainframe computer engineering development tasks [75][107]. The

approach, borrowed from traditional construction projects, assumes that each step will be completed before work begins on the next step. In cases where the variables and

technical problems are well-defined and understood, it allows relatively low-risk progress

through each of the stages [39]. It provides relatively tight control and allows easy

monitoring of cost and schedule variables, thus making the project easier to manage than

some other development methods [11].

Figure 2-1 -- Waterfall Model (Royce) [107]

Although many software practitioners still use this model (Neill and Laplante believe

that the oft-trumpeted recent “death” of the waterfall model is another urban myth [98]),

there are some serious issues restricting its utility for less-well-defined or more time-

constrained projects. This approach was more than adequate for large-scale, slow-

evolution development tasks; it began to show problems with the move away from

hardware-based systems to more adaptive (and rapidly-evolving) software projects [98].

The dependence on the completion of each step prior to initiation of the next one is a

critical tenet of the waterfall approach [39]; although this requirement encourages

stability and thorough design, it also restricts development flexibility and response times

[75]. In a perfect waterfall implementation, each step concludes with a thorough review

of all the progress accomplished up until that point, and no step begins until the results of

the previous one have been accepted [73]. In order to keep the project on schedule, large

amounts of time-consuming documentation are required so that later steps can begin

their planning. All requirements must be documented, evaluated, and prioritized [107].

Documentation in turn can become its own goal, keeping project staff from working to

increase the system’s capabilities.

For a knowledge management system, where the “information” within the system is

more perishable and harder to code as data, the waterfall model shows cracks in the initial

stage of system requirements definition. While system requirements should be stated

upfront in any design project regardless of the methodology [31], the rigid formality of

the waterfall requirements process often removes end users from the process, producing a

system designed by its builders to meet the builders’ concept of user requirements. The

same issue recurs in the software requirements and analysis phases – since each data item

must be completely resolved and thoroughly analyzed prior to its incorporation into the

system design, the resulting constructs are effectively bulletproof [107]. However, the

defined system requirements can also be obsolete or irrelevant to the needs and desires of

the end user population.

The earliest “KM” systems, rule-based expert systems, were often built using this

approach [94]; however, those systems were generally processing structured data, not

knowledge. Addressing design and engineering as isolated steps in an absolutely

sequential process can stifle the ability of a KM system designer to ensure user

involvement, knowledge accuracy, and currency, three traits that improve user acceptance

(and thus success) of a KM system [22][71][72].

2.1.2 Integration of Control and Oversight – the Vee

Shortly after the waterfall model became widely accepted, a variation on this model

began to emerge, again from the aerospace industry. One of the issues with the waterfall,

even in hardware development environments, was its absolute rigidity of process. As

development cycles began to shorten, the requirements definition process became a

chokepoint in systems development. It became harder to completely define all possible

requirements at the start of the process; it also became harder to test and evaluate whether

those requirements had actually been accomplished. The Vee model, popularized by Forsberg, Mooz, and Cotterman [45], formalized the concept of functional decomposition and (re)integration.

Essentially, the orderly progression of the waterfall process continues as before. The

Vee (Figure 2-2 below) allows the beginning of the process to address the requirements at

a broader overall level. Operational concepts determine functional requirements that

determine system specifications that lead to part-level drawings. Each step in the process

drives out additional detail until, at the bottom of the Vee, the systems engineer is dealing

with the smallest details and actually building system components [45]. As the engineer

moves back up the Vee, smaller parts become assemblies, which become subsystems,

which become systems. Each phase of increasing integration on the right side tracks

directly across to the decomposition on the left side, so as each level is assembled, it is

tested against the design documentation appropriate for that level. This makes testing and

verification much easier; it also allows project managers to continue to have insight and

control over the development process [45].

Figure 2-2 -- Vee Model (Forsberg Mooz and Cotterman) [45]

While this process is more responsive to incorporating users and managers into the

review and testing process and allows developers and assemblers to move through the

cycle more rapidly and efficiently, it is still essentially a process that assumes that the

system being built consists primarily of hardware and software to execute defined tasks,

usually based upon either data or information [1]. The second generation of KM systems,

often object-oriented, was built using this approach. Systems that used chaining and

inference to work through the processes of transmitting and storing information (as

opposed to data) required this type of design and test relationship to ensure that the

completed integrated product worked as intended, passing internal information states

between components [94]. However, knowledge differs from both data and information

in that it both requires and generates “context”. While it does allow and support

gradually-increasing levels of detail and refinement, the Vee process does not easily

accommodate the varying levels of detail and flexibility required for effective knowledge

codification.

2.1.3 Integration of Support and Management – SIMILAR

In 1998, Bahill and Gissing [9] proposed a model that integrated a re-evaluation

function throughout the entire systems engineering process and removed the inherent

single lifecycle approach of previous models. The process was made repetitive, non-

sequential, and more generic. Requirements definition, always the bane of KM systems

designers, became the more generic “State the Problem”. System analysis became

“Alternative Investigations” and “Modeling”. Systems were no longer built; they were

“Integrated” and “Launched”. At the end of the entire process, the system’s performance

was consciously assessed, and at each of the phases in the project, the progress and status

were reevaluated to ensure that the project continued to meet customer needs [113].

This model is also called by its acronym, SIMILAR: State, Investigate, Model,

Integrate, Launch, Assess and Re-evaluate (Figure 2-3 below). Although the steps are

shown in a left-to-right sequence, the intent of the SIMILAR model is that any or all of

these steps could be going on at a particular point in time [9]. As with the Vee model,

additional levels of detail are driven out as the system continues its development. As

more detail becomes known, each of the six areas of development become richer and

better defined.

Figure 2-3 -- SIMILAR Model (Bahill and Gissing) [9]

INCOSE currently (2006) has this model posted on their website as their preferred

definition of systems engineering [62]. It provides much more flexibility than the Vee or

the waterfall, allowing system designers and engineers to revise their plans and products

as more information becomes known [7]. SIMILAR provides an excellent framework for

building any information technology system, allowing user input, management oversight,

decomposition and integration, and continual feedback and revision throughout the

process. The system building aspects of systems engineering have finally all been

incorporated into a single model.

2.1.4 Integration of Process -- Software Engineering Institute/Capability Maturity

Model (CMM)

However, other definitions began to move from the “interoperational” approach

toward the “integrative” approach. The Electronic Industries Association (EIA) attempted to consolidate some of the multiple systems engineering standards and definitions into a single document, EIA/IS 731 [40], in 1998. Known as the Systems

Engineering Capability Model (SECM), the document attempted to consolidate and relate

the Systems Engineering Capability Maturity Model (SE CMM), the INCOSE Systems

Engineering Capability Assessment Model (SECAM) and the EIA-632 Standard,

Processes for Engineering a System. The SECM defined systems engineering as “an

inter-disciplinary approach and means to enable the realization of successful systems”

[40]. It was intended to serve as an interim standard and a draft for the future Capability

Maturity Model (CMM) and included a list of systems engineering and project

management tasks which could be assessed against the CMM levels. These Focus Areas

(FAs) are shown in Table 2-1 below:

Table 2-1 -- EIA Systems Engineering Focus Areas [40]

1.0 Systems Engineering Technical Category


FA 1.1 Define Stakeholder and System Level Requirements
FA 1.2 Define Technical Problem
FA 1.3 Define Solution
FA 1.4 Assess and Select
FA 1.5 Integrate System
FA 1.6 Verify System
FA 1.7 Validate System
2.0 Systems Engineering Management Category
FA 2.1 Plan and Organize
FA 2.2 Monitor and Control
FA 2.3 Integrate Disciplines
FA 2.4 Coordinate with Suppliers
FA 2.5 Manage Risk
FA 2.6 Manage Data
FA 2.7 Manage Configurations
FA 2.8 Ensure Quality
3.0 Systems Engineering Environment Category
FA 3.1 Define and Improve the Systems Engineering Process
FA 3.2 Manage Competency
FA 3.3 Manage Technology
FA 3.4 Manage Systems Engineering Support Environment

The Defense Systems Management College (DSMC) in 2001 stated that “systems

engineering is an interdisciplinary engineering management process that evolves and

verifies an integrated, life-cycle balanced (emphases added) set of system solutions that

satisfy customer needs.” [31]

The Software Engineering Institute (SEI) at Carnegie-Mellon University has been

working on the question of integrated systems engineering and objective capability

assessments for several years now [25]. Their Capability Maturity Model (CMM) defines

specific quantitative process and performance metrics that must be met in order for an

organization to claim that it has a particular level of maturity. There are five levels in the

process, shown in Table 2-2 below:

Table 2-2 -- Capability Maturity Model Levels

Level 1 -- Initial: Processes are usually ad hoc and chaotic. The organization usually does not provide a stable environment. Products and services may work but frequently exceed the budget and schedule of their projects.
Level 2 -- Managed: Projects of the organization have ensured that requirements are managed and that processes are planned, performed, measured, and controlled.
Level 3 -- Defined: Processes are well characterized and understood, and are described in standards, procedures, tools, and methods.
Level 4 -- Quantitatively Managed: Subprocesses are selected that significantly contribute to overall process performance. These selected subprocesses are controlled using statistical and other quantitative techniques.
Level 5 -- Optimizing: Processes are continually improved based on a quantitative understanding of the common causes of variation inherent in processes. Maturity level 5 focuses on continually improving process performance through both incremental and innovative technological improvements.

As organizations move through the process and achieve each successive level, each

level builds on the one before it. The requirements measured by the CMM assessments

become an integral part of the everyday business process. The first CMM assessments

were used to document and assess software development processes but later expanded

into the areas of personal software development practices, integrated product

development, and systems engineering practices.

2.1.5 Integration of Stakeholder – the CMMI

In late 2001, the Software Engineering Institute at Carnegie-Mellon University (SEI)

introduced the Capability Maturity Model (CMM®) Integration℠ (CMMI). The CMMI

was formed to address the problem of having to use multiple Capability Maturity Models.

CMMI 1.1 combines four CMMs into a single source and addresses software engineering,

systems engineering, integrated product development, and supplier sourcing. CMMI 1.2

will be released in 2006 and will further integrate systems engineering, acquisition, and

supplier sourcing in an integrated architecture [26]. The user selects which of the areas

they wish to address and can use the CMMI standards to conduct a detailed assessment of

that area. The process can be implemented as a continuous representation, where the

implementation of the CMMI process itself is tailored and flexible. This can involve

addressing as many as 25 process areas simultaneously, as shown in Figure 2-4 below:

Figure 2-4 -- Continuous Representation, CMMI 1.1 [25]

It can also be implemented in the more widely-known staged representation, where an

organization moves up the levels as a whole in a predefined sequence. This allows the

use of a single standard and fits more easily into firms more familiar with software

development. Figure 2-5 below shows the staged representation, which is more narrowly

focused and allows an organization to concentrate their resources but still requires intense

effort and resources for implementation.

Figure 2-5 -- Staged Representation, CMMI 1.1 [25]

Both versions allow for objective external process measurement and evaluation through the Standard CMMI Appraisal Method for Process Improvement (SCAMPI℠)

process [27]. The CMMI can be implemented in one, two, three, or all of its four parts,

culminating in the 724-page “CMMI-SE/SW/IPPD/SS, V1.1, Continuous” standard [25].

The CMMI standard for systems engineering defines the process as

“the interdisciplinary approach governing the total technical and managerial

effort required to transform a set of customer needs, expectations, and constraints

into a product solution and support that solution throughout the product’s life.

This includes the definition of technical performance measures, the integration of

engineering specialties towards the establishment of a product architecture, and

the definition of supporting life-cycle processes that balance cost, performance,

and schedule objectives” [24].

The current CMMI standard for the software engineering and systems engineering

models, v1.1 [24], addresses only software and systems engineering. It is over 600 pages

long and provides extremely detailed information as to how to establish, measure, and

maintain each component of an organization’s software and systems engineering

processes. While the CMMI framework demonstrates one direction in which the study and standardization of systems engineering is headed, and while it is certainly a high-level, comprehensive, and integrative concept, formal CMMI implementation is extremely complicated and time-consuming, and many organizations may not see the need for that level of rigor in an internal corporate knowledge management system.

2.1.6 Integration of Human Components -- The IEEE 2005 SE Standard

In 2005, the Institute of Electrical and Electronics Engineers (IEEE) Computer

Society issued a revised standard (IEEE 1220-2005) for the “Application and

Management of the Systems Engineering Process” [58]. This version was drafted to

begin to bring the IEEE standards into line with the international ISO/IEC 15288 standard

and is representative of the current state of the art in “integrative” systems engineering

[35]. It is intended to ensure that all aspects of the system, including the inputs of the end

users and the system stakeholders, are incorporated into the design and build process.

The systems engineering cycle and the project management process have merged to

become a continuous integrative management approach.

Figure 2-6 -- IEEE Systems Engineering Process (2005) [58]

The concurrency of these efforts is a major difference from the “cycles” of the

previous models. No longer does a project start and stop – it now continues indefinitely,

having constant inputs and course corrections to ensure that user requirements are current

and the technology is up to date [58]. This approach is a good starting point for building a KMS, for it ensures continuous updating and user feedback, but it still lacks emphasis on the “softer” aspects of KM. A KMS approach needs to consider more than just the technology pillar [121].

2.2 Knowledge Management Systems

Knowledge management systems can be simplistically defined as those policies,

processes, and tools that define and enable the instantiation and continuation of

knowledge management within an organization. While knowledge management itself can

be divided into four “pillars”– leadership, organization, technology, and learning [123] –

the actual system required to implement effective KM often does not receive the same

degree of planning, design, and implementation as the knowledge content itself. Often an organization will concentrate on only one of the three components of a knowledge management system (KMS) – policies, processes, and tools – usually the tools, as those are often the easiest for an organization to “sell”. This ignores the fact that each of the three components is as critical in its own way as the underlying knowledge content. As shown in Figure 2-7 below, all three elements should combine to form an equilateral triangle.

Figure 2-7 -- The Three Components of a Knowledge Management System


Policies come into play as the organization begins to examine how and why a KMS

would be installed and used. No knowledge management system can possibly hope to

succeed without the continual backing of senior management and the inculcation of KM

as an essential means of doing business [3][125][126][128]. Changes in organizations

and objectives are often required for a KM project to succeed. However, the policy

changes are often considered among the hardest to make, and so they are frequently delayed or ignored outright.

Processes define whether the system will be used in such a way that it supports the

objectives of the organization without hindering its regular operations. Automating an

inefficient process merely increases the speed at which a user can become frustrated

[103]. The process of using the system – and the focus of the system and the knowledge

within it – must remain upon the organization’s key objective, whatever that may be

[126]. Without an effective process in place to implement the chosen policies using the tools provided, the system will not be used, and so it cannot be successful. As Cho, Jarrell, and Landay pointed out in 2000, “unless you have a Workforce willing and able to use the tools, you cannot have an effective KM system” [31].

Tools are needed to provide, store, and process the knowledge in the system. Often a

project manager believes that simply purchasing and installing Lotus Notes/intranets/

extranets/collaborative tools/data mining/fill-in-the-blank software packages will

automatically and painlessly cause all of the knowledge to flow from those who have it to

those who need it. However, knowledge is not the same as information or data. It adds

context, and context adds complexity [1]. Once the subject matter begins to get complex,

and the system changes from calculating and evaluating objective data to assessing and

categorizing subjective information, the effective use of the systems decreases sharply as the resource and schedule costs of coding and maintenance increase dramatically [56]. Much of this problem stems from the fact that, once the

systems begin to support and display complex behavior, they become incredibly

maintenance-intensive. Upkeep of sophisticated knowledge management systems

requires the constant involvement of both subject matter experts (to validate the content)

and information systems professionals (to maintain the system) [35].

These systems require planning, design, development, and maintenance procedures

just as would any other complex systems project. All three dimensions should be

addressed, and they should be addressed from a systems life-cycle perspective while still

retaining the flexibility needed to elicit, store, and use knowledge in a timely, efficient,

and effective manner. This brings us to the need for a knowledge systems engineering

construct.

2.3 Knowledge Systems Engineering

Stankosky and Baldanza in 2001 [123] cited the following stages in their proposed

Knowledge Systems Engineering (KSE) process:

1. Define the environment

2. Define the objectives and the metrics used to assess them

3. Design the “four spheres” of KM

4. Assess and define the “system-level” relevant requirements in each of the four

areas

5. Define a functional architecture that meets the objectives of Step Two and the

requirements of Step Three

6. Define subordinate architecture views

7. Operational, systems, technical -- and information

8. Build the system (preferably with a prototype)

These stages track closely with the successive stages of systems engineering defined

in the various sources in Section 2.1 above: requirements definition/analysis/allocation

(steps 1, 2, 3, 4), architecture design/decomposition (steps 5/6), system and subsystem

design (step 7), and prototype (8). The steps missing (but assumed) in the above

definition are those for support and feedback (operations and maintenance) of the system,

and those steps should probably be added to the proposed KSE cycle. However, there is

no reason to treat a knowledge management systems engineering problem differently

from any other type of complex system. The key is to remember not only to use a

systems engineering view but also to ensure that the entire knowledge management

system is addressed in the cycle. If the definition of a KM “system” is expanded beyond

technology to include the relevant leadership, organizational, and learning policies,

processes, and tools, the traditional integrative systems engineering approach should

produce a successful KMS.

The original KMS systems engineering model is shown in Figure 2-8 below [121]:

Figure 2-8 -- Stankosky’s Original Integrative KMS Engineering Model [121]

The model shown above has since been expanded to Enterprise Management

Engineering (EME), the current instantiation of Stankosky’s model. EME (Figure 2-9

below) illustrates the concepts represented in the above figure, but refines the model,

imposes it over the traditional systems development life cycle, and pulls together all the

requisite concepts for “integrative management” into a single entity operative throughout

the system.

Figure 2-9 -- Stankosky's Enterprise Management Engineering Model

The eighteen system design and development activities needed to implement the EME model can be summarized as follows [122]:

x Milestone 1 – Project Inception

o Define/illustrate the enterprise

o State the value proposition of the organization.

o List the critical intellectual assets needed to make strategic decisions.

o Identify the sources of your intellectual assets for strategic decision-making.

o List the strategic objectives in measurable terms (include success factors).

o Identify critical environmental changes that would impact strategic objectives

o Audit the four KM pillars (Leadership, Organization, Technology, Learning)

x Milestone 2 – Project Development

o List the functions needed to accomplish the strategic objectives.

o Diagram the operational processes to accomplish these functions.

o List the intellectual assets required to accomplish both functions and

processes.

o List the sources of these intellectual assets.

o Identify codification and/or personalization strategies needed to leverage these

intellectual assets.

o Diagram the formal and informal organization structures.

o List the KM technologies needed to support the KM strategies

x Milestone 3 – Project Deployment and Support

o Develop an integrative management plan for the KMS that defines

x deliverables (include metrics for control/success and expected benefits,

such as efficiency, effectiveness, and innovation)

x resources required

x persons responsible

x timetable.

o Integrate the KMS with legacy components (such as current functions,

processes, formal and informal organization structures, and IT systems)

o Highlight risks and risk management/mitigation plans

o Discuss change implementation and management

If the above steps are followed, and the system is appropriately defined, it should be possible to show that the EME construct for the development and implementation of a knowledge management system is successful.

2.4 Measuring the “Success” of Knowledge Management

One of the issues in explaining and justifying knowledge management systems is that

knowledge itself is not an easy thing to measure [8][15][83]. In principle, knowledge is a

good thing, so institutionalizing it and making it available to everyone should produce

higher productivity, motivate the workforce, and improve the organization. In practice,

however, KM systems have historically suffered not only from being IT projects but also

from trying to automate something not easily measured or encoded [72]. Data are easier

to quantify than information; knowledge, on the other hand, is almost impossible to

capture with numbers [90].

Nevertheless, several authors have tried to do so. Corporate knowledge (often

referred to as intellectual capital) is traditionally viewed as an intangible asset, not easily

severable from the “value” of the workforce itself (and the opportunity cost of having to

replace the workforce en masse). As knowledge-based organizations take on more of a

role in the world economy [6][7], corporate financial positions need to begin to find an

accurate measure for representing the “value” of their knowledge to the outside world

[36][117].

Intangible assets are defined by the Federal Accounting Standards Board as all non-

financial assets that lack physical substance [42]. Intangible assets can be further divided

into five areas [7]:

x Brands

x Intellectual Property

x Process and Systems

x Sustainability and Environment (i.e., environmentally-conscious activities)

x Goodwill

All of these combine to define the scope of an organization’s corporate “knowledge”.

As can be seen from the next section, several models exist for defining and evaluating

KM (or progress in KM), and most of them tend to be divided into information and

knowledge of employees (intellectual property), internal processes and assets (process

and systems) and external processes and assets (brands, sustainability, and goodwill).

Consequently, for purposes of this study, “knowledge” is defined as all five of the above

categories. The value of corporate knowledge management should therefore be defined

as some objective calculation involving intangible assets.

The American Productivity and Quality Center (APQC) produces studies and analyses

on knowledge management. APQC offers both a “Roadmap” for KM system

implementation and a series of targeted reports on various aspects of KM. One particular

study of APQC sponsor and partner organizations [3] produced four guidelines to

remember when evaluating any purported knowledge management measurement or

metrics approach:

x KM measures must be appropriate for the knowledge management approach,

objectives, and maturity – mature decision support systems for manufacturing

companies should not be compared to early-stage lessons learned discussions

from a small advertising agency

x A framework should link KM measures to inputs, process changes, and desired

outcomes – context is critical to ensuring where the KM efforts should be

producing a change (and where they should not be doing so)

x The measurement system actually works – processes and accountability for

making the measurements and meeting the objectives are essential to monitoring

and assessing a KM system, and

x Obvious and compelling successes – the organization must constantly remind

itself, its employees, and its management of the benefits of successful KM.

While these guidelines are not in themselves useful for the KM success screening

required for this study, they do provide a useful context for evaluating KM measurement

models. Several approaches have been proposed; these are described in detail in the

following sections.

2.4.1 Skandia Navigator

Edvinsson and Malone [36] describe an Intellectual Capital Report (ICR), consisting

of 111 indices that, taken as a whole, purport to represent the knowledge contained within

the workforce and the organization as well as the financial status that that intellectual

capital produces. Organizations tailor and select these indices to meet their reporting and

management requirements. They then report these figures, and management and analysts can theoretically review them to identify trends, areas needing improvement, or areas of superior performance when assessing the knowledge usage of that organization.

Edvinsson and Malone based most of their work on the example of the Skandia

Corporation, a Swedish financial management and insurance firm which is one of the

well-known exemplars of corporate knowledge management and which first completed

an Intellectual Capital Report in 1995 (Edvinsson is the “Intellectual Capital Director” at

Skandia). Edvinsson and Skandia later promulgated their “Navigator” approach, which

attempts to streamline the original list of indices down to about 30 indicators, measured

annually. These indicators are still divided into the same five categories, but the

categories are linked together in a series of dependent relationships as shown below:

Figure 2-10 -- Skandia Navigator (Edvinsson) [36]

(Figure structure: Financial Focus represents History; Customer Focus, Human Focus, and Process Focus represent Today; Renewal and Development Focus represents Tomorrow.)

The Navigator indices are quite specific and quantitative; they fall into five areas.

A selection of these indices is shown below, along with their units of measure:

x Financial Focus

o Fund assets/employee ($)

o Income/employee ($)

o Market value/employee ($)

o Return on net asset value (NAV) (%)

x Customer Focus

o Days spent visiting customers (#)

o Market share (%)

o Number of contracts (#)

o Vacancy Rate (%)

x Process Focus

o Administrative expense/employee (%)

o Administrative expense/total revenues (%)

o IT capacity (CPU and DASD) (#)

o IT expense/employee ($)

x Renewal and Development Focus

o R&D expense/administrative expense (%)

o Marketing expense/customer ($)

o Training expense/employee ($)

o Share of development hours (%)

x Human Focus

o Employee Turnover (%)

o Number of employees (#)

o Average years of service (#)

o Time in training (Days/year) (#)

These quantitative measures can be used for assessing an organization’s progress

along the KM path. However, the Navigator is not an appropriate measure for use in this

study since the measures could (and indeed are designed to) vary from company to

company. Furthermore, they are far too detailed for the initial comparison needed to

validate the EME construct.

2.4.2 Intangible Asset Management

Karl-Erik Sveiby uses a slightly different approach. His “Invisible Balance Sheet”,

first proposed in 1988, attempts to measure what he calls “invisible equity”. This was

developed to try to explain the difference between the physical valuation of a company

and the value that could actually be received if the company were sold (in the past this

has often been accounted for as “goodwill”). The Invisible Balance Sheet has evolved

into a concept known as Intangible Asset Management, or IAM [128]. IAM divides an

organization’s value into tangible and intangible indicators and then allocates the

intangibles to external structure, internal structure, or individual competence. Each area

is then assessed against four possible criteria: growth, renewal, efficiency, and stability.

Organizations select indicators for each of these criteria that can show progress in each

area.

Figure 2-11 -- Intangible Asset Monitor Model (Sveiby) [128]

As with the Navigator model, indicators can (and should) be tailored for a particular

company or unit. This approach attempts to concentrate more on trend measurement, or

“flows” as Sveiby terms it. Specific measurements in each category are similar to the

Navigator, but they are grouped to indicate trends and to measure one of the four criteria.

Knowledge in this view is a fluid process more than an endstate, and therefore it should

be measured by measuring its movement. Indicators could include:

x External Structure Indicators
  o Growth and Renewal: profitability per customer; organic growth (increase in income less the portion of that increase due to acquisitions)
  o Efficiency: win/loss index; sales per customer
  o Stability: customer longevity; customer repeat orders
x Internal Structure Indicators
  o Growth and Renewal: investment in the internal structure; investment in information processing systems
  o Efficiency: proportion of support staff; sales per support person
  o Stability: age of the organization; “rookie” ratio (ratio of newcomers to veterans)
x Individual Competence Indicators
  o Growth and Renewal: number of years in the profession; number of years of education and/or training
  o Efficiency: proportion of professionals in the company; value added per professional
  o Stability: average age; professional turnover rate

As with the Navigator, though, this approach would not work well for this study. The

indicators vary from organization to organization, depending on what management selects

to track and what the unit is capable of recording. Furthermore, this measurement again

is more for tracking a single organization’s internal progress along a KM path and does

not translate well to high-level “snapshot” comparisons across multiple organizations.

2.4.3 Balanced Scorecard

The Balanced Scorecard (BSC) is an attempt to formalize the feedback process that is

only implicit in the previous two models. It was developed independently from the

Navigator and the IAM by Robert Kaplan and David Norton in 1992 [68]. It has received

a lot of attention in recent years since the U.S. Government has chosen to use the concept

for measuring and reporting on the performance of federal agencies and departments.

The BSC looks at four major areas: financial, internal, customer (or external processes),

and learning and growth. Each of these four areas is constantly interacting with the others

in order to drive and shape organizational vision and strategy as shown in Figure 2-12

below:

Figure 2-12 -- Balanced Scorecard (Kaplan and Norton) [68]

Specific metrics in each area are developed by defining

x objectives which will meet the strategic goals;

x measures which will objectively assess progress toward the objective;

x targets (desired end states for each measure); and

x initiatives to be undertaken in order to reach the desired targets

Some metrics that can be used as part of the BSC approach to measuring KM include:

x Customer satisfaction and dissatisfaction

x Customer retention and behavior

x Financial returns to shareholders

x Stakeholder satisfaction and dissatisfaction

x Stakeholder retention and behavior

x Market potential

x Market growth rate

x Market share

x Customer acquisition

x Customer profitability

x Product/service profitability

x External factors that affect customers

x Strategic goals and the objectives necessary to achieve them.

x Product and service quality

x Process quality and capability

x Productivity

x Waste

x Product and service costs

x Stakeholder resource contribution

x Stakeholder contribution quality

x Organizational capabilities

x Infrastructure capabilities

x Stakeholder capabilities

This is essentially an extension of the Total Quality Management (TQM) model.

TQM attempted to improve quality (and effectiveness) by measuring certain key state

indicators that show outputs. The BSC approach takes this one step further and provides

feedback not only on the state indicators but also on the processes used to get there – BSC

measures outcomes. Taken together, the BSC uses a “double-loop feedback approach”

that helps to monitor intermediate progress as well as final accomplishment.

This measurement goes beyond the indicator reporting of the Navigator and IAM

approaches to add progress reporting. This “partial credit” approach can be useful

(particularly for large organizations that are slow to change) but again, unless all of the

participating organizations share the same goals and strategy, using the BSC to compare

and contrast multiple organizations would not be appropriate. The BSC sets targets for

organizations, and the organizations are then measured against their own targets. This

highly customizable approach can be used to show progress toward organizational goals

and strategies, something all KM systems should do. However, the strength of the BSC

(its ability to show intermediate progress if tailored correctly) is also the reason why it

would not be an appropriate metric for this study (the fact that it can be tailored means

that it should not be used for objective comparisons).

2.4.4 Human Resource Costing and Accounting (HRCA)

Human Resource Costing and Accounting (HRCA) was first proposed in 1968 by R.

Lee Brummet, Eric Flamholtz, and William Pyle [16][46]. This approach attempts to

quantify the actual costs of hiring, replacing, retaining, and motivating experienced staff –

the past costs associated with obtaining the staff holding and generating the corporate

knowledge. Initially, the concept merely attempted to quantify the actual costs incurred

through those activities. As time went on, the concept was extended to also include

replacement and separation costs [44][19].

Essentially, this approach (Figure 2-13 below) takes into account company

investments in human resources as both a loss and a contributor to profit and posits that

most organizations severely undervalue benefits and costs associated with human

resources. The HRCA model quantifies financial benefits and costs associated with staff

motivation, productivity, employee benefits, organization learning, recruitment, staffing

vacancies and opportunity costs, and turnover. These figures can be added to standard

corporate balance sheets or reported separately. This approach is used to annotate

corporate profit and loss statements in several European companies [66] but has not

gained much traction elsewhere.

Figure 2-13 -- Human Resource Cost Accounting (HRCA) (Brummet, Flamholtz, Pyle)

This is an interesting addition to the literature on costing intangible assets, but it is not

clear that this approach necessarily fits the definition of knowledge as used in this study.

Unlike many of the other models presented, this one could be used to compare some

organizations against others, but the emphasis is overwhelmingly on the costs (and cost

avoidance) of human resources management actions, not necessarily the benefits of the

institutionalization of corporate and staff knowledge. Consequently, an organization with

satisfied staff (and correspondingly low turnover and HR costs) would score quite highly

on this model, even if no corporate or employee knowledge were produced, stored, or

reused.

2.4.5 MERITUM Project

The MERITUM Project ("Measuring Intangibles to Understand and Improve

Innovation Management") resulted from a consortium of researchers from Denmark,

France, Finland, Norway, Spain and Sweden [65]. The intent was to observe and

categorize several European firms in an attempt to model intellectual capital management

and reporting as part of the EU efforts to produce standard guidelines for corporate

reporting. The final report of the project was issued in 2000. The business model is

composed of three parts as shown in the figure below: identification, measurement and

action. The reporting process requires organizations to classify different intangible

resources, activities, and indicators into one of the following three categories:

Human capital: Knowledge, experience, skills, and capabilities of people.

Structural capital: Internal corporate processes, guidance, procedures, systems,

cultures and information.

Relational capital: Everything linked to external relationships. This includes both

human and structural capital that affects or interacts with outside stakeholders.

Figure 2-14 -- Intellectual Capital Management Model (MERITUM) [65]

MERITUM is a more systematic way of validating and institutionalizing the

Intellectual Capital Report as first defined by the Skandia Navigator. However, since the

specific indicators used to measure intellectual capital are designed and intended to vary

from organization to organization and situation to situation, their contextual nature makes

them impractical for the widespread common denominator needed for this study.

2.4.6 Danish Agency for Trade and Industry (DATI)

The DATI Guidelines were created for use by the Danish Agency for Trade and

Industry (DATI) in 2000 [28]. The DATI Guidelines explain the use of evaluation criteria

(66 are provided as examples of metrics that can be used, depending on the organization

and the market segment). Seventeen Danish companies volunteered to support the effort

and worked with DATI to attempt to represent their own position in their own

“Intellectual Capital Reports”. The Guidelines stress more the need to track and report on

a selected KM model but do not themselves recommend a particular model or approach.

Figure 2-15 -- Intellectual Capital Statement (DATI) [28]

As shown above, the DATI approach itself concentrates more on measuring and

representing knowledge management progress within an organization. DATI does define

intellectual capital as the entire difference between “market value” and “book value”,

where book value is the value of the assets in the annual report. DATI also views

intellectual capital as being synonymous with knowledge capital. In this respect, the

principles beneath the DATI guidelines are similar to those in Lev’s Knowledge Capital

(KC) computation. There can be other explanations for differences in market to book

value – market value, after all, is a reflection of what the stock market thinks of a

company, which may or may not consider its intellectual capital. This approach may

work well for regulated organizations where stock prices reflect fairly certain predictions

of future performance. However, this approach still needs the ability to (1) account for

financial assets, and (2) accommodate industry- or market-specific historical rates of

return.

2.4.7 Value Based Management/Value Creation Index (VCI)

Christopher Ittner and David Larcker from the Wharton School of Business have been

developing and refining approaches to “value-based management” for several years

[63][64]. Understanding and measuring value drivers, as shown in Figure 2-16 below, and ensuring that there are

well-known and measurable causal links between objectives, strategies, drivers,

measures, and performance evaluation, are key to increasing value. This holds as true for

non-financial aspects of an organization as for the financial ones.

Figure 2-16 -- Value-Based Accounting (Ittner and Larcker) [64]

In fact, Ittner and Larcker feel that non-financial data can and should be used to

supplement “traditional” accounting data; they better reflect the strategic direction of a

company [63]. Several such criteria actually account for a considerable share of a company’s value even after traditional accounting measures are taken into account. These non-financial or

intangible criteria that affect the market value of a firm are those that could be

knowledge-based.

2.4.8 Value Creation Index (VCI)

This approach was tested in a joint effort by Forbes magazine, the Wharton Business

School, and the Cap Gemini Ernst & Young Center for Business Innovation. One of the

major intentions of this effort was to analyze intangible factors affecting corporate value

both against actual performance (which factors had the most effect on corporate value)

and against perceived value (which factors did corporate executives feel were more

important for corporate value). The results are shown in the table below:

Table 2-3 -- Perceived vs. Actual Intangible Value Factors (VCI)

Rank  Perceived Corporate Value Factor                     Actual Corporate Value Factor
1     Customer Satisfaction                                Innovation
2     Ability To Attract Talented Employees                Ability To Attract Talented Employees
3     Innovation                                           Alliances
4     Brand Investment                                     Quality Of Major Processes, Products, Or Services
5     Technology                                           Environmental Performance
6     Alliances                                            Brand Investment
7     Quality Of Major Processes, Products, Or Services    Technology
8     Environmental Performance                            Customer Satisfaction

What conclusions can be drawn from this analysis? The factors offered by the executives echo many of the concepts posited in the other models and approaches discussed in the previous sections: models for knowledge management are being received and remembered by those in management. However, they do not necessarily translate into improved market performance. In fact, the bottom two factors within the “actual” category are tied for last place – neither technology nor customer satisfaction showed any statistical correlation to market value.

This study analyzed companies in “durable manufacturing”. Most companies that

publicly claim KM and which show measurable benefits from KM are more information-

oriented [93]; they claim that valuation results and models based on “old economy”

models do not and should not apply [102]. Yet many of the older companies are

leveraging and managing their knowledge just as well as the newer ones [10].

Furthermore, these results held valid with a 0.9 level correlation for e-commerce

companies in 2000 [102].

This version of the Value Creation Index is essentially a rank ordering of non-

financial factors that affect corporate value. This concept is especially interesting in that

it runs counter to many of the other KM approaches which are based on either technology

or customer satisfaction. However, these non-financial measures are also generally non-

quantifiable and must be assessed on a relative basis. That approach will not work for

this study.

2.4.9 Value Creation Index (A.T. Kearney version)

In 1997, A.T. Kearney separately developed a concept they call the Value Creation

Index (VCI) [70]. Their Value Creation Indices are completely quantitative and are

composed of economic return (ER), the NPV of marketed products divided by economic capital (EC), and the NPV of pipeline products divided by economic capital. One distinction is that

economic return and economic capital incorporate the research and development costs

and equipment as assets (since it is assumed that they will eventually pay off) rather than as expenses. The “NPV of marketed products/economic capital” figure accounts for current

performance; the “NPV of pipeline products/economic capital” figure takes into account

expected future performance. Therefore, this VCI model supports long-term R&D and

innovation – key components of knowledge management -- and is particularly appropriate

for industries with lengthy R&D cycles. The VCI formula is shown in Equation 2-1

below:

\begin{equation}
\mathrm{VCI} = \mathrm{ER} + \frac{0.01\left(\mathrm{NPV}_{\mathrm{market}} + \mathrm{NPV}_{\mathrm{pipeline}}\right)}{\mathrm{EC}}
\tag{2-1}
\end{equation}

Figure 2-17 below shows how this figure is affected by various factors and the percentage of influence that each has on the result (this model is

for the pharmaceutical industry); companies then have something more concrete to target

for their performance metrics and objectives.

Figure 2-17 -- Value Creation Index (VCI) Value Tree (Kearney) [70]

This approach has promise, but it is really only appropriate for firms which (1) have a

long product development cycle, and (2) have a reasonable expectation that their

development (as opposed to production) products will be successful. In pharmaceuticals,

the development cycle is not necessarily longer because of the creation effort itself -- new drugs, new chemicals, or new technologies can each take long periods of time or can emerge quickly. In this industry, the product deployment cycle is artificially long due to the need

for testing, review, and approval. Most knowledge firms operate with a faster turnaround

time for both their products and their processes; using this model as a KM “screener” to

compare firms (as this study requires) would restrict the study unnecessarily.

2.4.10 Framework of Intangible Valuation Areas (FIVA)

Dr. Annie Green of GWU proposed the Framework of Intangible Valuation Areas

(FIVA) as part of her dissertation in 2004 [49]. It reconciles existing anecdotal and

subjective intangible asset models with quantitative, process-oriented value-creation or value-chain models such as those in the previous sections. If value can be added to tangible

products through these processes, it stands to reason that value can also be added to

intangible ones. The issue in the past has been the inability of the marketplace to

objectively organize and assess those assets. FIVA takes the stepwise approach of value

creation but applies that concept not to the physical products but to the intangible ones.

The intangible value drivers are shown in the bottom row of the figure below as

Alternatives. The middle level, Criteria, are actually knowledge management objectives,

akin to Stankosky’s Four KM Pillars of Leadership, Organization, Technology, and

Learning. However, these criteria were derived by Green by analysis of the importance of

the value drivers to strategic goal accomplishment.

Figure 2-18 -- Framework of Intangible Assets (Green) [49]

While FIVA shows great promise as a global means of binning, assessing, and

integrating all intangibles in support of an overall strategic goal, Green herself admits that

intangibles include more than just “knowledge”; they also include such assets as the

social interactions and contexts [49]. Consequently, her framework was deemed too

broad for use in this study effort.

2.4.11 Value Chain Scoreboard (VCS)

The Value Chain Scoreboard (VCS) was developed by Baruch Lev from NYU Stern Business School [79]. As with the other models previously described, it is also a matrix of

non-financial indicators. However, it is intended to address the disconnect that Lev sees

between “traditional” accounting and “value addition”. As companies have become more

knowledge intensive, actions that produce (or reduce) corporate knowledge occur further

and further in time from the events that actually are recorded on balance sheets. A

discovery or invention – using knowledge – takes longer to get to market and sell –

producing income. Unlike the customary industrial model, knowledge is not always

immediately apparent and does not always provide a visible short-term return. To reflect

this emphasis on the development cycle, the VCS classifies metrics into the cycle of

product development and commercialization:

x Discovery/Learning -- includes measures on internal renewal, acquired

capabilities and networking.

x Implementation -- comprises metrics of intellectual property, technological

feasibility and the Internet.

x Commercialization -- includes measures of customers, performance and growth

prospects.

Lev took the management metrics proposed in several of the earlier models and

placed them in the product life-cycle context. His definition of a “value chain” equates to

stock market analysts’ definition of a “business model” [79]. That information has

always been difficult both to quantify and to objectively represent in a universally-

understandable manner.

Figure 2-19 -- Value Chain Scorecard (Lev) [79]

Not all of the bullets within each of the above boxes would necessarily apply to each

company or industry/market sector; ten to twelve of these for each company should be

able to represent the answers to questions most frequently asked by analysts when

evaluating a company’s financial prospects [79][126]. These should show a

comprehensive view of the company and its ability to create value based upon its inherent

knowledge and its utilization of it. In addition, these indicators share three requisite

criteria: quantifiability, “standardizability”, and a scientifically-based linkage to value.

Lev’s proposed report is intended to complement, not replace, existing rigid profit and

loss reporting requirements. The idea is to place the objective and familiar numbers in a

context that shows the ability of a company to leverage its intellectual capital. This is a

complex concept, as one form of information display should be able to support the other.

For purposes of this study, the VCS is considerably more complex than the simple “does

KM work in this company” measurement required for this analysis; instead, it will serve

as the basis for the chosen KM screening metric – Knowledge Capital (KC).

2.5 Knowledge Scorecard/Knowledge Capital

Knowledge Capital (KC), also created by Lev [92], is a more streamlined version of

the VCS that can be used as a gross measurement of the effectiveness and value of an

organization’s intangible assets. This metric, instead of being used to measure the

progress of an organization, allows comparison between organizations [126]. One of the

problems with knowledge management and traditional accounting methods is that assets

are ultimately only worth what the market will bear; unless knowledge can be

enumerated, it is unlikely that it will be appropriately appreciated [79][81].

Essentially, knowledge capital can be calculated through the process below:

1. Take the asset value of a company (total, physical, and financial)

2. Determine predicted "normalized earnings". Normalized earnings are calculated

by averaging the past three years' actual figures and the next three years'

predictions. However, since knowledge increases in value over time, and since

this is an attempt to value the future, the future predictions are weighted twice as

heavily as past performance. The sum is then divided by nine. This produces a

figure for normalized earnings per share (EPSn).

3. Multiply EPSn by the number of Common Shares Outstanding to equal the total

“normalized” earnings

4. Using a constant for expected after-tax return on financial assets (about 4.5%

from empirical evidence), multiply by financial assets (Investments and Advances

– Equity and Cost) to determine financial earnings

5. Using a constant for expected after-tax return on physical assets (about 7% from

empirical evidence), multiply by physical assets (Property, Plant, Equipment –

Net) to determine physical earnings

6. Subtract financial and physical earnings from total "normalized" earnings. The

remaining figure is Knowledge Capital Earnings (KCE)

7. Divide KCE by expected "Knowledge Capital Discount Rate" (KCDR) (10.5% --

average after-tax rate of return for knowledge-intensive industries). The

remaining number is Knowledge Capital (KC)
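To make the arithmetic concrete, the minimal sketch below implements the seven steps in Python. The company figures in the example are hypothetical; the 4.5%, 7%, and 10.5% rates are the empirical constants described above, and the function and variable names are illustrative only.

# Minimal sketch of Lev's Knowledge Capital calculation (hypothetical inputs).

def knowledge_capital(past_eps, future_eps, shares_outstanding,
                      financial_assets, physical_assets,
                      r_financial=0.045,   # expected after-tax return on financial assets
                      r_physical=0.07,     # expected after-tax return on physical assets
                      kcdr=0.105):         # knowledge capital discount rate
    """Return (KCE, KC) following the seven steps described above."""
    # Step 2: normalized EPS -- past three years weighted once,
    # the next three years' predictions weighted twice, divided by nine.
    eps_n = (sum(past_eps) + 2 * sum(future_eps)) / 9.0

    # Step 3: total normalized earnings.
    normalized_earnings = eps_n * shares_outstanding

    # Steps 4-5: earnings attributable to financial and physical assets.
    financial_earnings = r_financial * financial_assets
    physical_earnings = r_physical * physical_assets

    # Step 6: Knowledge Capital Earnings.
    kce = normalized_earnings - financial_earnings - physical_earnings

    # Step 7: capitalize KCE at the knowledge capital discount rate.
    kc = kce / kcdr
    return kce, kc


# Hypothetical example: three years of actual EPS, three years of forecast EPS,
# 1,000M shares outstanding, $5,000M financial assets, $20,000M physical assets.
kce, kc = knowledge_capital(past_eps=[2.10, 2.30, 2.50],
                            future_eps=[2.70, 2.90, 3.10],
                            shares_outstanding=1_000,   # millions of shares
                            financial_assets=5_000,     # $M
                            physical_assets=20_000)     # $M
print(f"KCE = ${kce:,.0f}M, KC = ${kc:,.0f}M")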

The KCE represents the amount of a company’s earnings that is generated from the

use of its knowledge assets [93]. To determine the value for KC, Lev suggests using a

“knowledge capital discount rate” (KCDR) as well. This is analogous to the 7% average

return on investment for a physical asset or the going 4.5% return for a financial one.

Dividing the KCE by the KCDR produces the asset value for the “knowledge capital”.

Lev calculated the KCDR based upon the 10.5% average after-tax profits for the

software and biotechnology sectors over the past several years [50][93][94]. These two

sectors are generally considered more knowledge-intensive (and less of anything else)

than other market segments.

The market validates these figures. The correlation between a calculated KC for a

company and the actual measured total return from a stock is 0.53 [92]. The comparable

correlation between traditional “earnings” and stock value is only 0.29 [50]. Clearly the

KC computation is a valid, if rough, indicator of the implicit value of a corporation’s

knowledge and thus of its KM process (implicit or explicit).

KC shows the absolute amount of the knowledge assets within a company – large

firms will generally have larger numbers regardless of the industry or of their specific use

of KM. It can be used to show what percentage of a company’s overall financial numbers

can be attributed to knowledge, and for this reason, it is a useful measure within

companies for measuring the value of their knowledge efforts as shown in their bottom

line [93]. However, this figure alone can be misleading, as larger companies will almost

always generate higher earnings overall and so any large company will have higher

KC/KCE. Comparing KC between companies will not always be accurate, as many

knowledge-intensive sectors are comprised of smaller companies with smaller total assets

from which to begin the KC calculation [50].

To address this issue, there is a companion measurement for KC, and that is the

“Comprehensive Value” [93]. The market value of a company is the number of shares of

common stock multiplied by the current market price for that stock. The book value of a

company is the value of all assets of an organization less all liabilities, debts, and

intangible assets – as shown in official financial records [14]. Financial markets compare

the two numbers (through either the “market-to-book ratio” also referred to as “price to

book ratio” or its inverse, the “book-to-market ratio”) to see whether a company has

overvalued or undervalued stock. If the market value is higher than the book value, then

the market believes that a company is worth more than it is stating [14].

Lev defines comprehensive value as book value plus knowledge capital [93]. Using

the ratio approach of market-to-comprehensive value eliminates the KC bias toward

larger companies. This in turn can show not only whether the market believes that a

stock is over or undervalued but also can also show what percentage of its stock price can

be derived from its KC. Two companies with similar KC values can have wildly

dissimilar comprehensive values if the market values one company’s knowledge at a

higher level (or believes the other company is being less than forthcoming in determining

their book value calculations) [93].
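Stated compactly, using MV for market value, BV for book value, and KC for knowledge capital as defined above:

\begin{align*}
\mathrm{MV} &= \text{common shares outstanding} \times \text{current share price} \\
\mathrm{CV} &= \mathrm{BV} + \mathrm{KC} \\
\mathrm{MV/CV} &= \frac{\mathrm{MV}}{\mathrm{BV} + \mathrm{KC}}
\end{align*}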

In 1999, 2000, and 2001 Lev and Marc Bothwell (Credit-Suisse) applied this KC

concept to companies in specific industry sectors for CFO magazine in an analysis called

the Knowledge Scorecard [92][93]. In 2001, Lev completed studies of non-financial

firms in 22 industry sectors as shown in Table 2-4 below [102]. He was able to show

through this measurement that the sectors thought to be higher in knowledge usage (biotech,

computer software, and semiconductors) scored highest in the market

value/comprehensive value calculation (MV/CV).

Table 2-4 -- Knowledge Capital Across Non-Financial Industry Sectors, 1999 [102]

Industry   Knowledge Capital ($M)   Knowledge Earnings ($M)   Market Value/Book Value (MV/BV)   Market Value/Comprehensive Value (MV/CV)   KC Rank   MV/CV Rank

Aerospace & Defense 23,447 1,417 1.80 0.50 8 21


Airlines 7,949 399 1.00 0.55 19 20
Biotech 4,393 171 16.30 3.07 22 1
Chemicals 9,948 632 2.20 0.75 17 17
Computer Hardware 49,857 2,490 17.50 1.85 3 4
Computer Software 38,908 1,782 15.20 2.40 5 2
Electric Utilities 10,351 691 2.10 0.99 15 14
Electrical 7,690 450 3.60 0.75 20 18
Food / Beverages 18,565 1,306 9.10 1.08 11 12
Forest Products 8,884 854 1.58 0.81 18 16
Home Products 19,296 1,097 6.60 1.11 10 11
Industrial 23,132 1,166 3.30 0.81 9 15
Media 16,759 646 2.70 1.40 12 8
Motor Vehicles 13,413 962 1.90 0.46 14 22
Newspapers 5,619 336 3.20 0.67 21 19
Oil 24,559 2,210 3.40 1.27 7 10
Pharmaceuticals 75,224 4,295 12.20 1.34 2 9
Retail 15,406 885 3.80 1.52 13 7
Semiconductors 42,029 1,859 12.60 2.08 4 3
Specialty Retail 10,320 512 8.00 1.77 16 6
Telecom 81,221 4,851 3.50 1.00 1 13
Telecom Equip. 26,947 1,684 7.70 1.78 6 5
Lev then moved from industry sector studies to analyzing large (non-financial)

companies across industry sectors. For the 2001 listing of the Fortune 500, Lev took the

top 50 publicly-held non-financial companies that showed a profit that year (in order to

ensure objective reporting and meaningful estimates). In order to get 50 qualifying

companies, the lowest ranking one on the list came in at 99 overall [126]. Again it can be

seen that the contrast between a company’s revenue (which determines its Fortune 500 ranking) [126] and its knowledge values supports the general assessments of some of these companies. GM, AT&T, and WorldCom, while all large companies, did

not rank nearly as high in knowledge assets as GE, Pfizer, and Microsoft, nor were they

as knowledge intensive as Intel, SBC, or Lockheed Martin.

Table 2-5 -- Lev's Top 50 Knowledge Companies – Knowledge Capital (2001)

KC Rank (2001)   MV/CV Rank (2001)   Overall 500 Rank (2001)   Company Name   Knowledge Capital ($M)   Knowledge Earnings ($M)   MV/BV   MV/CV
1 43 5 GE 254,381 12,435 8.10 1.34
2 33 53 Pfizer 219,202 7,686 15.10 1.03
3 40 79 Microsoft 204,515 8,735 6.90 1.16
4 4 11 Philip Morris 188,538 10,226 7.00 0.52
5 41 1 Exxon Mobil 176,409 10,050 4.10 1.17
6 29 41 Intel 173,964 7,339 5.10 0.91
7 20 14 SBC 155,402 7,897 4.80 0.78
8 32 8 IBM 148,679 7,534 8.10 0.99
9 18 10 Verizon 141,471 7,494 3.80 0.74
10 38 30 Merck 139,494 7,497 11.50 1.11
11 48 2 Wal-Mart 99,973 5,018 6.90 1.63
12 39 58 Johnson & Johnson 94,769 4,976 6.90 1.14
13 42 88 Bristol-Myers Squibb 85,837 4,709 12.50 1.21
14 45 93 Coca-Cola 73,976 3,906 12.70 1.42
15 21 48 Dell 72,985 2,499 11.40 0.81
16 24 66 BellSouth 71,269 4,004 4.50 0.86
17 35 31 Procter & Gamble 66,609 3,931 6.90 1.07
18 14 4 Ford 59,311 4,310 2.90 0.70
19 5 71 Honeywell 52,798 2,533 3.30 0.52
20 19 15 Boeing 51,179 2,427 4.20 0.75
21 36 94 PepsiCo 50,614 2,607 8.80 1.10
22 37 52 UPS 48,508 2,470 6.60 1.10

Table 2-5 (cont) -- Lev's Top 50 Knowledge Companies – Knowledge Capital (2001)

KC Rank (2001)   MV/CV Rank (2001)   Overall 500 Rank (2001)   Company Name   Knowledge Capital ($M)   Knowledge Earnings ($M)   MV/BV   MV/CV
23 47 23 Home Depot 48,389 1,952 6.70 1.58
24 30 19 Hewlett-Packard 48,226 2,500 4.30 0.97
25 22 67 Walt Disney 46,960 2,307 2.40 0.82
26 25 20 Chevron 46,606 2,585 2.90 0.86
27 7 27 Compaq 38,569 1,671 2.60 0.62
28 9 77 Alcoa 37,158 1,702 2.70 0.64
29 23 64 United Technologies 34,866 1,832 4.60 0.83
30 49 9 AT&T 34,411 1,443 2.50 1.64
31 15 78 Dow Chemical 31,085 1,751 3.10 0.70
32 16 46 Safeway 30,754 1,408 4.80 0.72
33 1 92 Loews 30,675 1,639 1.00 0.26
34 34 56 DuPont 29,939 1,946 3.40 1.04
35 10 34 Motorola 29,002 1,125 1.70 0.66
36 3 69 Lockheed Martin 25,428 1,433 2.30 0.50
37 31 16 Texaco 24,110 1,286 2.70 0.97
38 12 18 Kroger 24,103 1,150 6.00 0.68
39 27 3 GM 24,032 1,564 1.60 0.89
40 2 29 Sears 23,685 1,463 1.80 0.41
41 46 7 Enron 22,297 1,022 4.30 1.47
42 28 17 Duke Energy 21,797 1,248 2.80 0.89
43 13 44 Conoco 21,153 1,196 3.20 0.68
44 26 96 Sara Lee 19,398 1,249 14.50 0.86
45 6 89 Phillips Petroleum 18,886 1,011 2.30 0.56
46 11 99 Caterpillar 17,443 1,083 2.80 0.67
47 44 37 Target 16,597 853 4.90 1.38
48 17 32 Worldcom 14,411 (124) 0.90 0.72
49 50 90 Walgreen 14,243 661 10.10 2.30
50 8 68 Conagra Foods 13,765 853 3.50 0.62

While this is a simplistic way of assessing the dollar value of knowledge, it can be

useful in determining whether companies (and their investors) value knowledge. If

knowledge is valued, it is probably managed. Whether it is managed in accordance with

the current precepts of Enterprise Management Engineering becomes the next step in this

study. EME may not be the only way in which to introduce successful knowledge

management, but the anecdotal evidence to date supports the idea that it is a logical

approach. The next step is to determine whether this is indeed so.

CHAPTER 3 RESEARCH PROBLEM

“If knowledge can create problems, it is not through ignorance that we can solve

them.”

Isaac Asimov

3.1 Problem

Much research has been done on various systems engineering paradigms, and recently

the field has shifted toward a more integrative approach, combining multiple disciplines in a more holistic manner. Knowledge management itself has become so

accepted that it has moved from its origins in the design of rigid rule-based artificial

intelligence systems to a more flexible and inclusive concept incorporating training,

learning, strategic planning, and user/customer interaction throughout the process.

The major problem now has become determining how to define knowledge

management systems engineering success. Systems engineering is difficult enough to

quantify when it is measuring and developing an objective, repeatable, and measurable

product; when combined with the amorphous concepts many use for valuing

“knowledge”, the combination can mean that formal systems engineering discipline falls

by the wayside.

This need not happen. Formal system engineering has a place in the design and

implementation of effective knowledge management systems. As with other systems, the

SE discipline needs to adapt to the specific domain for which it is being used, but

Stankosky’s EME construct provides a reasonable approach to doing just that.

At the same time, it is difficult to postulate the success or failure of the EME

approach (or any other one) unless and until there exists an objective and accepted

method of quantifying knowledge management success. One of the best ways to prove

the efficacy of such a system to a corporate decisionmaker is to assert that knowledge

does have a value; as shown by Lev’s research, that value can be recognized and

rewarded by the stock market.

The experiment for this study was designed to link the two concepts of Enterprise

Management Engineering and Knowledge Capital to produce an objective and

measurable concept for Knowledge Management Systems Engineering.

3.2 Hypotheses

In order to begin to prove a linkage between the two concepts, a relationship should

be defined between them. Since it is always easier to disprove a hypothesis than to prove

one, this study used three null hypotheses, each of which postulated that there would be

no connection at all between results measured from a survey of EME practices and results

measured from a series of financial reports.

3.2.1 H1: Does EME Correlate to KM Success?

x H10: Organizations that use any of the EME approach show no difference in KM

return when compared to organizations that use no EME.

The alternative hypothesis to this is:

x H1a: Organizations that use any of the EME approach show a more positive

correlation with KM return when compared to organizations that use no EME.

The null hypothesis for this set says that no degree of integrative systems engineering

affects the odds for success for a KMS within that organization. However, this merely

establishes a potential positive correlation between the concept of a comprehensive

systems engineering process and higher measured knowledge capital and earnings. In

order to try to validate this specific EME model, portions of Stankosky’s EME construct

need to have actually been used to build the KMS, and a correlation should be made

between how much of the EME process was used and how “successful” the KMS

implementation was.

3.2.2 H2: Does More EME Correlate to More KM Success?

x H20: Organizations with a higher ranking on EME use show no difference in KM return from those with lower rankings.

The alternative hypothesis to this is:

x H2a: Organizations with a higher ranking on EME show a positive correlation

with KM return.

If this hypothesis can be disproved with a one-tailed test, the conclusion would be that

more EME processes positively correlate to more successful KMS implementations.

Still, one last series of analyses is needed. Some portions of the EME process are more

widely used than others; it would seem that some EME steps might have higher rates of

knowledge “payback”.

3.2.3 H3: Do Different Parts of EME Correlate More Strongly to KM Success?

x H30: Organizations that use specific parts of the EME process show no difference

in KM returns than organizations that use other parts of the EME approach.

The alternative hypothesis to this is:

x H3a: Organizations that use specific parts of the EME process show a positive

correlation with KM returns.

The last area of analysis addresses which EME steps are more widely used and which

specific ones correlate to a higher KC, a higher comprehensive value ratio (CVR), or both. If this hypothesis can be

disproved, these steps identified, and additional correlations made, the EME development

process can be refined to emphasize and further expand those steps with a higher

KC/CVR correlation.

If all three hypotheses can be disproven, there is a high probability that the specific

EME process does have some direct correlation with KC and measurable knowledge

management. EME can be extended to the KMS engineering realm and the model would

have successfully passed its first significant validation effort.

3.3 Approach

As mentioned in Chapter Two, there are eighteen steps in the EME construct. The

study used a survey approach to ask certain targeted corporations whether they had used

any or all of the eighteen steps and to what extent (there were 22 major question areas,

addressing the eighteen steps, each of the four KM pillars, and distinguishing between

KM approaches and effectiveness and formal and informal organization). The study also

asked additional detail about each of the eighteen areas in an effort to further expand the

detail of the model. The result was a survey with 118 questions addressing the eighteen

areas (not all questions were always asked – several of them were branching questions,

dependent upon the answers to previous ones). The detailed survey is included as

Appendix A; the mapping between the EME tenets and the questions is shown in Table

3-1 below:

Table 3-1 -- Mapping of EME and Survey Instrument Questions

EME Step | Survey Questions

Milestone 1 – Project Inception
Define/illustrate the enterprise | Q1a, 1b, 1c, 1d, 1e
State the value proposition of the organization. | Q2a, 2b, 2c
List the critical intellectual assets needed to make strategic decisions. | Q3a, 3b, 3c, 3d, 3e, 3f
Identify the sources of your intellectual assets for strategic decision-making. | Q4a, 4b, 4c, 4d, 4e, 4f
List the strategic objectives in measurable terms (include success factors). | Q5a, 5b, 5c, 5d, 5e, 5f
Identify critical environmental changes that would impact strategic objectives | Q6a, 6b, 6c, 6d, 6e, 6f, 6g, 6h, 6i
Audit the four KM pillars | Leadership: 7a, 7b, 7c; Organization: 8a, 8b, 8c; Technology: 9a, 9b, 9c; Learning: 10a, 10b, 10c, 10d, 10e, 10f

Milestone 2 – Project Development
List the functions needed to accomplish the strategic objectives. | 11a, 11b, 11c, 11d, 11e
Diagram the operational processes to accomplish these functions. | 12a, 12b, 12c, 12d, 12e
List the intellectual assets required to accomplish both functions and processes. | 13a, 13b, 13c, 13d, 13e
List the sources of these intellectual assets. | 14a, 14b, 14c, 14d, 14e
Identify codification and/or personalization strategies needed to leverage these intellectual assets. | 15a, 15b, 15c
Diagram the formal and informal organization structures. | 17a, 17b, 17c, 17d, 17e, 17f, 18a, 18b, 18c, 18d, 18e, 18f, 18g

Milestone 3 – Project Deployment and Support
Develop an integrative management plan for the KMS that defines | 20a, 20b, 20c, 20e
    x deliverables (include metrics for control/success and expected benefits, such as efficiency, effectiveness, and innovation) | 20d(i)
    x resources required | 20d(ii)
    x persons responsible | 20d(iii)
    x timetable. | 20d(iv)
Integrate the KMS with legacy components (such as current functions, processes, formal and informal organization structures, and IT systems) | 21a, 21b, 21c, 21d
Highlight risks and risk management/mitigation plans | 21c(vi), 21c(vii)
Discuss change implementation and management | 22a, 22b, 22c, 22d, 22e

74
3.4 Next Steps

These questions provided a comprehensive evaluation of an organization’s use of the

entire EME construct. Once organizations were assessed in each of these areas, they

were ranked both overall and by each EME area. Organizations were also ranked using

the financial data needed to determine KC and CVR scores for each of the subject

companies. These ranks would then be analyzed to determine whether a statistical

correlation did indeed exist. In addition, answers for each of the areas were also tabulated

and analyzed to determine the use and effectiveness of each area with respect to the

others. The next chapter describes the survey methodology and process.

CHAPTER 4 RESEARCH METHODOLOGY

“One accurate measurement is worth more than a thousand expert opinions.”

RADM Grace Hopper

4.1 Subjects

The initial goal of this effort was to attempt to analyze 10-20 organizational subjects.

A minimum sample size of 10 was desired in order to have higher nonparametric

statistical validity for any claims of correlation, and a higher sample size would lend

additional strength to the statistical outcome. Consequently, the initial sample population

was intended to use the same 50 organizations as Lev used in his 2001 study, the top fifty

of the then-Fortune 500 who were publicly-traded, non-financial companies.

Since the KC metric requires the public availability of certain corporate financial

statements (specifically, actual and predicted corporate earnings and the value of

corporate tangible – physical and financial – assets), organizations in this study were

restricted to publicly-traded companies (to ensure publicly reported available figures in

accordance with Generally Accepted Accounting Principles, or GAAP). This list of

subjects was also restricted to those firms that have actually generated earnings per share

for the past three years, as at this point Lev’s calculations do not easily apply to

financial, unprofitable, non-profit, academic, or governmental institutions (this is a

promising question for additional research) [79].

The original list of companies cited by Lev in 2001 [126] and based on figures from

2000 has changed since then due to corporate reorganizations. Ten firms which had

reorganized since then, one which had gone bankrupt, one which showed no profit in

2004, and one which had no data in CompuStat for 2004 were all dropped from the

potential survey list, leaving an initial survey group of 37 companies. In order to try to

replicate Lev’s results as closely as possible, the study proceeded to request participation

from the 37 remaining companies. This list of initial candidate organizations was ranked

and analyzed using both the Knowledge Capital (KC) and Market Value/Comprehensive

Value Ratio (CVR) measurements as defined by Lev in order to try to objectively

determine the “success” of the KMS implementation effort.

4.2 Instrument Development

The subjects were each contacted and sent a survey based upon Stankosky’s EME

construct. The survey itself was organized to reflect the different concepts that have

greater weight at different points in the KMS engineering life-cycle. It was designed to

assess organizational compliance with Stankosky’s EME approach as objectively as

possible, and answers were usually requested on a Likert scale from 1 (positive, well or

strong) to 5 (negative, weak, or nonexistent). Although respondents may have different

subjective scales, a Likert scale helps to minimize this issue by making all answers

relative only to others provided by the same respondents. Participants also had the option

to answer N/A or “Don’t Know” to most questions, and there were some optional open-

ended ones to help identify potential areas for additional research.

The survey asked 117 questions on the 18 EME topics, divided into three areas

corresponding to the life-cycle phases of a KMS project. The questions are summarized

below in Table 4-1:

Table 4-1 -- EME KMS Survey Areas

KMS Project Inception Phase – At project start:
• How well was the enterprise defined?
• How well was the value of the KMS project defined?
• How well were critical intellectual assets documented?
• How well were the sources of those critical intellectual assets defined?
• How well did the organization define strategic objectives?
• How well did the organization define potential environmental factors that could affect success?
• How well did senior leadership support the project?
• How well did the organization support the project?
• Did the organization provide appropriate technology(ies) for the KMS system? Did that technology help or hurt?
• Is there a process for organizational learning and an environment that encourages it?

KMS Project Development Phase – While the project is underway:
• How well were functional objectives defined?
• How well were operational processes defined?
• How well were critical intellectual assets for both functions and processes defined?
• How well were the sources of those critical intellectual assets for both functions and processes defined?
• Did the KMS use a codification or personalization strategy or both?
• Which strategies were used for:
  o Knowledge Assurance?
  o Knowledge Generation?
  o Knowledge Codification?
  o Knowledge Transfer?
• Was a diagram or representation of the formal organization used in the KMS process?
• Was a diagram or representation of the informal organization used in the KMS process?
• Was there a list of the technology needed to support the planned KMS?

KMS Fielding and Support Phase – When the project is deployed and in the field:
• How comprehensive and integrative was the KMS management plan?
• Was the KMS planned to be integrated into legacy components?
• Does the organization have a plan for managing and implementing change?

The survey itself is attached as Appendix A. The online version was worded

identically to the softcopy version attached in Appendix A; it displayed differently due to

screen display conventions and the introduction of branching logic but the content was

identical.

4.3 Instrument Development and Piloting

Since no survey existed that measured the specific systems engineering constructs

required for validation of the EME model, the one that was developed required

assessment and piloting (due to the lack of existing validated test data, this instrument

was not formally validated). The instrument was originally created as a list of binary

(yes/no) survey questions. After presentation of the dissertation proposal, at the

recommendation of the research committee, the survey was revised to incorporate a more

variable Likert scale as it was felt that this would generate more answers. Since this

survey was also an initial attempt to gather general information about real-world

experience with the EME construct, several free text optional response questions were

also added.

The revised survey was then reviewed by several associates of the researcher,

including three systems engineers and one retired forms designer. Most of their

suggestions were incorporated into a redesign of the survey form. There was a general

feeling that the survey was too long; the author chose to keep the length in order to

address each of the 18 EME areas but revised the survey to combine questions where

possible, to make answers optional, and to add the “Not Applicable” and “Don’t Know”

options to most of the questions.

The survey was then posted in an online version and the website link sent to the

reviewers. Comments addressing clarity and logic were incorporated where appropriate

(given the relatively small sample size).

The survey itself was submitted to the George Washington University Office of

Health Research (OHR) Institutional Review Board (IRB) in November of 2004. After

removing several initial requests for subject identification data from the survey, the

package received approval U120419ER on 22 February 2005 and a renewal on 31

January 2006 in order to complete followup with the survey respondents.

4.4 Design and Procedure

Since the data for this study came from two sources, the design and procedure for

obtaining the data proceeded in parallel.

4.4.1 Knowledge Capital Data

The original data in the 2001 comparison needed updating in order to bring them into

line with the timeframe of the EME assessment (early 2006). The earning results and

predictions used by Lev were from I/B/E/S International; the financial data used for this

study used the same sources as closely as possible. The needed data were obtained from

the online databases for Institutional Brokers Estimate System (I/B/E/S) (for estimated

and actual earnings data) [59] and Standard and Poor’s CompuStat Industrial Annual

Reports (for asset value and market data) [118]. This both ensured that the corporate

figures were as objective as possible and will also allow access by others to the same

datasets should that be desired. Online access to these datasets for purposes of this study

was provided through GWU’s subscription to the Wharton Research Data Services at the

University of Pennsylvania [131].

4.4.2 EME survey

Each organization on the subject list was contacted and given the option of

completing the survey in an online format, as an attachment to an email message, or in a

hard copy version with a stamped return envelope. Each survey was identified through a

dropbox listing with one of the 37 firms selected for the study. Points of contact within

each organization were identified and a copy of the survey sent to at least one respondent

(by name) within each organization that agreed to participate. Results were requested

within a three-week deadline; the author followed up with non-respondents by telephone

and/or email in order to get at least the minimum number of respondents. Eight surveys

in all were received that could be used (more were received but some companies

withdrew their submissions because of worries about corporate identification).

The instrument itself had 117 questions addressing the 18 steps of EME. The

questions themselves were divided into the following five types as shown in Table 4-2

below:

Table 4-2 -- Survey Question Types

Question Type Quantity

Menu pulldown options (company identification data) 2

Blank entry (optional) for email address for followup/results 1

Five-point Likert Scale, 1 = positive, with N/A and Don’t Know options 70

Binary, Yes/No, with N/A and Don’t Know options 21

Free text entry fields 23

As noted above, 70 of these questions measured aspects of those steps using

continuous ordinal Likert scales. Although a qualitative Likert scale like this one should

not be used for parametric statistics, it is still possible to examine Yes and No answers to

develop some indications of relevant items or events within the EME cycle. It is also

possible to see if there are any steps that have an unusually high number of “N/A” or

“Don’t Know” responses – while these could indicate confusion with the survey question,

they could also indicate a proposed concept within EME that should be reexamined.

Four major types of analysis were performed. The first, an attempt to disprove H10

by calculating a significant positive correlation between the overall concept of EME and

KC/CVR figures, used basic parametric correlation analysis with Pearson’s r since it was

judged that the binary answers and the financial figures were objective and measurable

quantities.

The second analysis used nonparametric analysis (more appropriate given the

relatively subjective nature of the EME scoring and ranking process) to attempt to

disprove H20. This analysis sought to compare and correlate financial rankings based on

KM and CVR with the ranks derived from summary EME survey assessments through

use of the Spearman r coefficient; this can be used to infer whether more EME and high

KC/CVR correlate at a statistically significant level.

The third analysis focused within the EME survey itself, to determine which of the 18

steps were implemented more or less frequently within these organizations’ KMS

development efforts. This was a simple parametric calculation of frequency; mean,

median, and standard deviation were calculated for each of the 18 areas of the EME

construct, the four pillars, and formal and informal organization (for a total of 22 assessed

areas) to see which of the steps were more widely used.

The last stage was to attempt to disprove H30 by correlating KC and CVR

performance with each of the 18 EME construct areas to determine which ones had a

higher correlation (and thus more success). This was actually another nonparametric

analysis or correlation similar to the attempt to disprove H20, except this time it was

performed with ranks based upon the EME means from specific areas within the survey

(as opposed to ranks based on the overall survey scores).

Results from all four of these analyses are summarized in Chapters 5 and 6.

4.5 Key Risks, Assumptions and Limitations

There were some limitations and assumptions made as part of this survey that may

end up affecting the results.

4.5.1 Sarbanes-Oxley Act

The passage of the Sarbanes-Oxley bill in 2002 caused changes in GAAP. The Act

generally forced more accuracy and clarity in financial reporting, causing many

companies to change the way in which they valued and reported intangible assets [105].

4.5.1.1 Risk

In order to accurately replicate and update Lev’s KC approach, the original companies

asked to participate in the survey in 2001 (with 13 exceptions) were the same companies

analyzed in 2005, using the same financial data sources. This allowed replication of the

financial analysis and results but comparing figures before and after the Act introduced

the possibility of the subjects having changed their financial reporting to comply.

4.5.1.2 Mitigation/Assumption

Most companies are still wrestling with the requirements of the new legislation and

have not yet fully implemented the new reporting systems required by the law. According

to an IDG survey in 2005, companies have indeed been revising their reporting

procedures and standards, but that is more due to ongoing business process changes than

to the legislation [105]. The historical figures used for the calculations do not show any

major changes between 2003 and 2004, and the other figures are all as of December 2004

or later.

4.5.2 Age of KMS Projects

The survey asked questions about every phase of KM system development.

Unfortunately, in several cases, the systems themselves had been developed several years

earlier, and no one could be found at the end of the project who had been there at the beginning.

4.5.2.1 Risk

Respondents could try to answer questions for which they have no data and could

supply false or inaccurate answers.

4.5.2.2 Mitigation/Assumption

Most companies with KM practice a tradition of handing down information and

knowledge about how to do business, and that includes information on how the KM

system was developed. It was assumed that companies would have access to this

historical knowledge if needed to answer the survey; it was also assumed much more

likely that any company with a good KM system already had that history close to hand at

all times.

4.5.3 Informality of KMS Projects

In many cases, the KMS process itself within an organization is continual, and there is

no start or completion. Certain steps of the EME process may not apply to the

respondent’s way of doing business.

4.5.3.1 Risk

This could make it difficult to accurately answer questions about activities during the

start or end phase of a particular project.

4.5.3.2 Mitigation/Assumption

Part of the purpose of this study is to validate EME against real-life KM systems

development situations. If there are EME steps that no one follows, that is a useful and

relevant data point. The life-cycle questions were all designed to include the “Not

Applicable” response. The intention is that respondents who had no project inception or

project completion would simply select that answer. As a fallback position, none of the

life-cycle questions on the survey is mandatory and they can be left blank. While this

may lower the number of life-cycle responses received, at least the ones received will be

accurate.

4.5.4 Different Approaches to KM and KMS Projects

In several organizations, KM systems developed as an offshoot of their training,

business development, or strategic planning organizations. These groups often took a

completely different view of systems success and the development process from the IT-

centric one in EME. In other organizations, there was no “corporate standard” approach

to implementing KM systems and the number of potential subject systems was quite

large. Still others had broader definitions of KM and KMS.

4.5.4.1 Risk

The survey was sent to companies generally through their information technology

organization. It is possible that this organization has a different view of the success or

failure of a KM project than corporate management or the regular employee users of the

system. It is also possible that the system used as a basis for completing the survey does

not accurately represent the corporate KM system “culture”. Lastly, a broader definition

of KM/KMS could affect the answers by including data and information management

systems.

4.5.4.2 Mitigation/Assumption

Given the generally-high level of the questions asked in the survey, and the fact that

the survey asks few questions about technology, the IT department would probably not be

able to skew the responses beyond those of other KMS stakeholders.

A mitigation strategy could not be found for those cases in which the responses for a

single system did not necessarily extend across the enterprise. Any results for an

organization were extrapolated to represent the organization as a whole. The global

assumption is that they are representative projects – additional research will be required

to determine whether the different approaches within an organization produce different

results.

The measurement of KM “success” used for this study – the measurement of all assets

and market value not directly attributable to financial or physical assets – allows the use

of a broader definition of KM. The corporate approach to KM system development is

still assumed to reflect the corporate approach to systems engineering and so the

responses are still valid for purposes of this study.

4.5.5 Complexity/Comprehension of the EME Survey

The survey instrument is lengthy and addresses all eighteen areas in some detail. It

also uses the terms as defined in the EME construct that might not be familiar to all

respondents.

4.5.5.1 Risk

There were concerns that respondents might misunderstand the terms or might choose

not to complete the survey (average completion time by the validators on the final version

was about twenty minutes).

4.5.5.2 Mitigation/Assumption

Each survey was preceded by a lengthy description of its intent and purpose as well as

email and telephone interviews and question/answer sessions. This was done both to

explain the purpose of the study project and to identify the desired individual

respondent(s). As part of this introduction the respondents were informed that the survey

could be lengthy but were asked to fill out as much as possible. They were also informed

that there were no “right answers” and that negative responses were perfectly acceptable.

This upfront acknowledgement seemed to minimize potential non-answers or

noncompletions; of ten respondents who began the survey process, six of them finished

within the original three-week period and eight finished overall.

CHAPTER 5 RESEARCH FINDINGS

“It is of the highest importance in the art of detection to be able to recognise out of a

number of facts which are incidental and which are vital . . . .”

Arthur Conan Doyle

5.1 Data Collection

The KC and EME data were initially collected separately from each other (but in

parallel). Since the initial data could have proven useful even if the hypotheses could not

be disproven, both sets of data were analyzed to produce a set of intermediate findings for

both the KC measures and the EME survey prior to the requisite attempt to determine

correlation between the two sets of figures.

5.1.1 Knowledge Capital Data

The original data in the 2001 comparison needed updating in order to bring them into

line with the timeframe of the EME assessment (early 2006) and so were obtained from

the online databases for I/B/E/S and CompuStat.

Earnings per share (EPS) data for all companies were based on actual data for 2003,

2004, and 2005 and on estimates for 2006, 2007, and 2008 (all estimates were as of the

month of December). Balance sheet data, including stock prices, were for year-end 2004

as 2005 reports had not yet been posted. Specific data fields requested and their

definitions are shown in Appendix B; data received and used for knowledge capital

calculations are shown in Appendix C. Data were initially requested, received, and

analyzed for all 37 subject companies; in order to minimize confidentiality issues, data

included in this study are only for those companies that participated in the EME survey.

The databases were queried for several asset value variables needed to generate the

asset values and earnings estimate information necessary for the correlation. “Gross

Asset Value” includes all assets, liabilities, stockholder equity, and

depreciation/depletion/amortization (DDA) [119]. “Physical Assets” includes all net

(with depreciation) values reported for property, plant, and equipment [119]. “Financial

Assets” include all long-term financial investments and advances [119]. Specific results

are shown in tables in Appendix C.

It should be noted that the timeframe of December 2004 was chosen as it was the

most recent for which data had been posted. However, it also has the benefit of having

been a relatively quiet period for the markets. The Enron bankruptcy fallout from December 2001 had worked its way through the markets by then; the oil price shocks of

2005 had not yet occurred. The major event of the month was the tsunami in Southeast

Asia; however, since that occurred at the end of the month during the holiday period and

none of the surveyed firms were dramatically affected by either the event itself or the

recovery efforts, the impact on the surveyed firms by 31 December 2004 was negligible.

The major financial market event was the issuance by the U.S. Financial Accounting Standards Board (FASB) of the new rules requiring that employee stock options be

recorded at fair market value vice the “intrinsic” bases used previously; however, those

rules did not take effect until 15 June 2005 and so did not yet affect 2004 reported data.

5.1.2 Enterprise Management Engineering Assessment Data

Data used for the EME assessment can be divided into two categories: respondents

and survey answers.

5.1.2.1 Survey Contacts

Thirty-seven companies were identified for this study. Given the small sample size,

the lack of need for randomization among the study respondents, and the need to ensure

representation from as many of the subject companies as possible, a respondent

identification and targeting approach was used. Corporate points of contact were

identified, and those in turn provided specific names to whom the survey would be sent.

Extensive literature, personal contacts, and online searches identified several possible

candidates within each of the 37 companies. This first phase identified 37 potential

points of contact within 26 of the subject companies. When no specific name or names

could be identified in association with corporate knowledge management efforts, the

position of the Chief Information Officer, Chief Technology Officer, or Chief eBusiness

Officer was selected and more online searches were performed. This provided an

additional 9 names for 7 more companies.

For these 46 names, corporate website and online searches provided only 7 corporate

email addresses and telephone numbers, several of which were for main switchboards (it

was decided not to use personal addresses or phone numbers in order to reinforce to the

respondents that this was a legitimate survey and research effort).

The remaining 39 names were then searched against business contact data from the

Jigsaw Business Contacts website (www.jigsaw.com) in January 2006 to produce 32

valid email addresses and telephone numbers. In addition, Jigsaw offered the option to

search their database against management level and department criteria, so CIOs, CTOs,

and similar titles could be identified. Again, for each identified contact, a name, email

address, and telephone number were obtained. At the end of the initial respondent data

acquisition period, the contact listing covered all 37 companies with at least one contact

for a total of 49 contact names with email addresses.

As the initial survey alert message was sent out (to determine who in each company

should actually complete the survey), all 49 addresses were used. Eight messages were returned from eight companies as undeliverable. After additional research with Jigsaw, more contacts were added to the list; this process was repeated four more times. Eventually a valid

(non-bouncing) email address was obtained for each of the 37 companies except for one

company with spam-blocking software that allows messages only by exception, thus eliminating

unsolicited email messages such as the one used for this study. A call to the corporate

switchboard (recommended by their website personnel) produced another address that in

turn produced a valid “message sent”, thus accomplishing the initial goal of one valid

address at each subject company.

Several organizations forwarded emails to other company personnel, who then

responded requesting additional information, copies of the survey, or suggesting other

potential points of contact. As each replied they were in turn added to the contact listing.

The final contact listing had 98 valid names and email addresses; 39 of those responded

offering assistance or requesting additional information.

5.1.2.2 Survey Respondents

The survey was open for five weeks in late January and early February 2006. To

allow for travel and internal routing of the initial message, firms were given one week to

reply to the contact initiation email, and then a second email was sent. If there was still

no response after a second week, a third email was sent. After an additional 72 hours,

telephone contact was made. For subsequent emails and telephone calls, if no response

was received within 72 hours, a second email was sent. After 24 hours, a telephone call

was made. If at any point someone expressed a desire not to participate, he or she was immediately removed from the list. This resulted in an overall participation rate of

21.6%. Table 5-1 below shows how many of the 37 firms replied to the initial contact,

how many requested the survey, how many agreed to participate in the survey effort, and

how many completed it within the allotted timeframe.

Table 5-1 -- Survey Respondents

Contact Initiated | Contact Replied | Survey Sent | Willing to Participate | Survey Received
37 | 22 | 20 | 13 | 8

Most of the participating firms requested anonymity as a condition for their

participation in this effort. Consequently, company names were removed from the

surveys and an identifying letter assigned instead. General industry sectors for the eight

respondents are shown in Table 5-2 below; although the final sample size was small, it

was diverse enough to offer a relatively broad view of Fortune 500 firms.

Table 5-2 – Characterization of Survey Respondents

Company Code Industry Sector

A Automotive
B Industrial
C Home Products
D Industrial
E Automotive
F Telecommunications
G Semiconductors
H Electric Utility

5.1.2.3 Survey Distribution

Once the survey was ready for distribution, it was posted in an online version hosted by QuestionPro (www.questionpro.com). One of the firms contacted preferred to

complete the survey through a phone interview; this survey was subsequently entered into

the online collection after its review and acceptance by the respondent. All other

respondents were sent a Word or PDF version of the survey along with the online link, as

it was easier to review and route the document in hard copy.

5.1.2.4 Survey Collection

Surveys were returned within five weeks after the initial contacts were made. Six of

them were entered directly into the QuestionPro website by the respondents. Two of

them were returned via email. The cutoff date for data collection for this effort was

initially three weeks after the first round of contacts began; collection was held open an

additional twelve days after the original deadline in order to add additional inputs,

increase the sample size, and receive inputs from specific organizations. Data collection

was officially terminated on 21 February 2006.

5.2 Data Analysis

5.2.1 Knowledge Capital Data

5.2.1.1 Equations

In order to measure the knowledge management success of an organization in

accordance with the hypotheses described in Chapter Four, the knowledge capital and

comprehensive value ratio were calculated for each of the responding organizations.

Lev’s financial relationships can be described as equations.

Normalized Earnings Per Share:
    NEPS = [(FY02 + FY03 + FY04) + 2 × (FY05 + FY06 + FY07)] / 9          (5.1)

Normalized Earnings:    NE = NEPS × OutstandingShares          (5.2)

PC Earnings:    PCE = 0.07 × PC          (5.3)

FC Earnings:    FCE = 0.045 × FC          (5.4)

KC Earnings:    KCE = NE − PCE − FCE          (5.5)

Knowledge Capital:    KC = KCE / 0.105          (5.6)

Market Value:    MV = Price × OutstandingShares          (5.7)

Book Value:    BV = BPS × OutstandingShares          (5.8)

Comprehensive Value:    CV = BV + KC          (5.9)

Comprehensive Value Ratio:    CVR = MV / CV          (5.10)

5.2.1.2 Financial Constants

As put forward by Lev, based on empirical evidence with knowledge-intensive

sectors, the average rate of return for physical and financial assets across industrial

sectors is 7.0% for physical assets and 4.5% for financial ones [92]. Lev has separately

determined that KC has a higher rate of return in the early years (before competition

erases the competitive advantage of knowledge equity) but it still continues to earn at a

premium over other assets for as long as ten years [92][93]. When this “knowledge

premium” is discounted back to present-day values, the effective discount rate is about

10.5%; this calculation has also been supported by historical analysis [50].
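
To make the arithmetic concrete, the short Python sketch below chains equations 5.1 through 5.10 for a single company. It is illustrative only: the function name and every input figure are hypothetical placeholders, not data from the surveyed firms.

# Illustrative sketch of Lev's Knowledge Capital / CVR calculation (Equations 5.1-5.10).
# All figures below are hypothetical placeholders, not study data.
def knowledge_capital_metrics(actual_eps, forecast_eps, shares, price, bps,
                              physical_assets, financial_assets):
    # Eq. 5.1: forecast years are weighted twice as heavily as actual years (3 + 2*3 = 9)
    neps = (sum(actual_eps) + 2 * sum(forecast_eps)) / 9.0
    ne = neps * shares                  # Eq. 5.2: normalized earnings
    pce = 0.07 * physical_assets        # Eq. 5.3: expected earnings from physical assets
    fce = 0.045 * financial_assets      # Eq. 5.4: expected earnings from financial assets
    kce = ne - pce - fce                # Eq. 5.5: earnings attributable to knowledge capital
    kc = kce / 0.105                    # Eq. 5.6: capitalized at the 10.5% knowledge discount rate
    mv = price * shares                 # Eq. 5.7: market value
    bv = bps * shares                   # Eq. 5.8: book value
    cv = bv + kc                        # Eq. 5.9: comprehensive value
    return kc, mv / cv                  # Eq. 5.10: comprehensive value ratio (CVR)

kc, cvr = knowledge_capital_metrics(
    actual_eps=[2.10, 2.25, 2.40],      # three years of actual EPS (hypothetical)
    forecast_eps=[2.55, 2.70, 2.85],    # three years of consensus EPS estimates (hypothetical)
    shares=1000,                        # shares outstanding (millions)
    price=45.00, bps=12.00,             # year-end share price and book value per share
    physical_assets=8000,               # net property, plant, and equipment ($M)
    financial_assets=2000)              # long-term investments and advances ($M)
print(f"KC = ${kc:,.0f}M, CVR = {cvr:.2f}")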

5.2.1.3 Calculations

The tables below show only those calculations for those companies who actually

participated in the study. Values for Gross Asset Value, Physical Assets (PA), Financial

Assets (FA), and Normalized Earnings Per Share (NEPS) were calculated earlier in the

study process as described above in Section 5.1.1. Appendix C shows the intermediate

financial data definitions and results and calculations for the companies who chose to

participate.

Table 5-3 -- Knowledge Capital Calculations ($M)

Rank | Name | Total Normalized Earnings | Physical Asset Earnings (at 7% return) | Financial Asset Earnings (at 4.5% return) | Knowledge Capital Earnings | Knowledge Capital

7 Company A 3,361 76 3,218 66 633


5 Company B 1,162 325 12 825 7,854
2 Company C 4,375 1,059 0 3,317 31,593
6 Company D 1,274 576 19 678 6,459
8 Company E 1,502 5,631 271 -4,400 -41,908
4 Company F 3,595 1,653 922 1,020 9,714
1 Company G 32,946 1,183 0 31,794 302,512
3 Company H 4,252 2,513 52 1,688 16,073

Table 5-4 – Comprehensive Value Ratio Calculations (CVR)

Rank | Name | Market Value (MV) | Book Value (BV) | Comprehensive Value (CV) | MV/CV

7 Company A 19,165 6,137 6,770 2.83


2 Company B 42,102 31,723 39,577 1.06
5 Company C 48,772 6,980 38,574 1.26
8 Company D 87,644 17,965 24,424 3.59
3 Company E 223,640 232,955 191,047 1.17
6 Company F 70,986 27,520 37,234 1.91
1 Company G 27,366 5,095 307,607 0.09
3 Company H 24,331 4,595 20,667 1.18

5.2.1.4 Results

The results show that Lev’s rankings from 2001 still generally hold true in 2006. In

2006 the distinction between KC rankings and CVR rankings is not so pronounced; the

companies tracked together more closely than they did in 2001. This is probably due to

the changes in the financial markets since 2001; however, this in turn may be partly

attributable to the increased pervasiveness of knowledge management across corporations

in the intervening five years.

5.2.2 Enterprise Management Engineering Survey Data

5.2.2.1 Calculations

In Table 5-5 below are the summary means for each of the participating organizations

in each of the assessed EME areas (specific answers for each of the 117 questions are

shown in Appendix D):

Table 5-5 – EME Score Calculations (Mean Score per Area)

Columns, left to right: Rank; Company; then the mean score for each assessed EME area in the order 1 – Enterprise Definition, 2 – Value Proposition, 3 – Critical Asset Identification, 4 – Asset Source Identification, 5 – Strategic Objectives, 6 – Environmental Changes, 7a – Leadership Support, 7b – Organizational Support, 7c – Technology Support, 7d – Learning Support, 8 – Strategic Functions, 9 – Functional Processes, 10 – Required Assets, 11 – Asset Sources, 12a – KM Strategy Identification, 12b – KM Strategy Effectiveness, 13a – Formal Organizational Structure, 13b – Informal Organizational Structure, 14 – KM Technology Listing, 15 – Integrative Management Plan, 16 – Legacy System Integration, 17 – Risk Management/Mitigation.

6 A 3.00 3.67 2.00 2.75 2.43 3.50 1.50 2.58 2.67 2.40 2.67 2.00 2.67 2.00 1.00 3.00 2.50 2.00 4.00 1.78 5.00 N/A
5 B 1.80 3.00 3.00 2.25 1.00 N/A 2.36 3.42 1.71 2.60 2.33 2.67 3.67 2.67 2.00 2.20 4.00 N/A 2.00 1.13 4.00 N/A
2 C 2.20 2.00 2.50 2.25 2.14 3.00 1.57 1.92 2.13 1.75 1.67 1.67 2.00 2.33 2.00 1.80 N/A 2.50 1.00 1.44 2.25 2.50
8 D 3.00 1.00 5.00 5.00 2.67 5.00 2.14 1.50 1.78 1.00 5.00 5.00 5.00 5.00 3.00 1.60 N/A N/A 4.00 N/A 3.00 N/A
4 E 2.25 2.00 1.25 1.75 1.43 1.50 3.00 2.00 2.33 2.20 2.67 2.67 2.67 2.33 3.00 2.60 4.00 N/A 3.00 1.38 2.67 2.00
7 F 2.80 3.33 3.00 3.00 3.00 4.00 2.29 2.75 2.50 3.00 3.67 1.00 3.00 2.67 2.00 2.60 4.00 1.00 1.00 2.44 5.00 N/A
3 G 3.20 2.33 2.25 2.25 2.43 N/A 2.00 2.33 2.06 1.75 2.00 2.00 2.00 2.00 2.00 1.80 2.00 N/A 2.00 1.56 3.67 3.00
1 H 1.00 1.33 1.50 2.00 2.29 1.00 1.14 1.50 1.81 1.20 1.33 1.33 1.00 1.00 1.00 1.00 1.00 1.00 1.00 1.33 2.50 1.00
5.2.2.2 Mean Scores and Deviations for EME Areas

Below are the summary means for each of the assessed EME areas (specific means for

each of the 117 questions are shown in Appendix E). For purposes of computing the

mean score μ, only the actual Likert scale scores were used (1 through 5); blank answers,

N/A answers, and Don’t Know answers were all ignored in the mean calculations.
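
A minimal Python sketch of this scoring rule, using hypothetical responses, shows how only the numeric Likert answers contribute to an area's mean:

# Only numeric Likert responses (1-5) count toward an area's mean; blank, "N/A",
# and "Don't Know" answers are dropped. The responses below are hypothetical.
def area_mean(responses):
    numeric = [r for r in responses if isinstance(r, (int, float))]
    return sum(numeric) / len(numeric) if numeric else None   # None if no usable answers

print(area_mean([2, 3, "N/A", 1, "Don't Know", 2, None, 4]))  # 2.4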

For this step, mean scores for each of the areas were computed and standard

deviations were calculated. The EME areas were then ranked according to their average

mean as shown:

Table 5-6 -- EME Effectiveness Ratings by Area

EME Rank | Relevant EME Area | Average Score | Std Dev

1 18. Change Management 1.40 0.5477


2 15. Integrative Management Plan 1.58 0.4311
3 13b. Informal Organizational Structure 1.63 0.7500
4 7d. Learning Support 1.99 0.6891
5 12a. KM Strategy Identification 2.00 0.7559
6 7a. Leadership Support 2.00 0.5853
7 12b. KM Strategy Effectiveness 2.08 0.6497
8 17. Risk Management/Mitigation 2.13 0.8539
9 7c. Technology Support 2.13 0.3522
10 5. Strategic Objectives 2.17 0.6549
11 14. KM Technology Listing 2.25 1.2817
11 7b. Organizational Support 2.25 0.6577
13 9. Functional Processes 2.29 1.2400


14 2. Value Proposition 2.33 0.9428


15 1. Enterprise Definition 2.41 0.7466
16 11. Asset Sources 2.50 1.1409
16 3. Critical Asset Identification 2.56 1.1707
18 4. Asset Source Identification 2.66 1.0259
19 8. Strategic Functions 2.67 1.1819
20 10. Required Assets 2.75 1.2051
21 13a. Formal Organizational Structure 2.92 1.2813
22 6. Environmental Changes 3.00 1.5166
23 16. Legacy System Integration 3.51 1.0878

Figure 5-1 below shows the above information in a graphical format.

Figure 5-1 -- Mean Scores By EME Area
[Bar chart of the average EME scores from Table 5-6; x-axis: EME areas 1–23, y-axis: mean EME score (0.00–4.00).]

Figure 5-2 shows how each company scored in its own self-assessment of EME

across the board. Remembering that low scores equate to good EME performance, it is

evident that Company D, an older industrial concern, is having problems with

implementing formal engineering and system development processes (they also tend to

view KM as a training concept). Company H, on the other hand, is doing quite well at

implementing this concept (not surprising given that they are a computer hardware manufacturer and view a KM system as just another, messier piece of hardware).

Figure 5-2 -- Company Overall EME Scores
[Bar chart of each company's overall mean EME score; x-axis: Companies A–H, y-axis: score (0.00–3.50).]

5.2.3 Results

Based upon the survey, the most self-rated effective EME areas are (18) Change

Management, (15) Integrative Management Plan, and (13b) Informal Organizational

Structure. The least effective ones are (in descending order) (13a) Formal Organizational

Structure, (6) Environmental Changes, and (16) Legacy System Integration. It should

also be noted that six EME items had at least one respondent answer “N/A” or leave a

major EME area (not just individual questions) completely blank, thus indicating either

no use or no understanding of the area:

Table 5-7 -- Unused or Unknown EME Areas

EME Area Number of N/A

13b. Informal Organizational Structure 4

17. Risk Management/Mitigation 4

18. Change Management 3

6. Environmental Changes 2

13a. Formal Organizational Structure 2

15. Integrative Management Plan 1

This could easily be a reflection of the small sample size, but it is also possible that

additional work needs to be done in either refining the survey instrument or defining the

terms of the EME model.

EME scores by company and by area are shown below in the next two charts. The

first shows all of a company’s scores on each of the 18 areas. In this chart, the X axis is

the list of respondents. This chart allows some generalizations about scoring biases or

tendencies within an individual organization (for example, Companies D and H, who define KM as a training issue, seem to feel that they have EME working well throughout their organizations, while Companies A, B, and G (Automotive, Industrial, and Semiconductors, respectively) are not as optimistic about their performance). It also supports questions of deviation or norming (where do most of an organization’s scores fall?).

Figure 5-3 -- EME Area Scores Per Company
[Chart titled "Scores By Company": each company's scores across all assessed EME areas, one data series per area; y-axis: score (0.00–6.00).]

The next chart, Figure 5-4, tracks each of the eight companies throughout the EME

assessment process. In this chart the X-axis is the series of EME areas. This both

highlights individual organizational biases but also allows a quick but meaningful

comparison across organizations.

Figure 5-4 -- Company Scores By EME Area
[Chart titled "Scores By EME Area": one line per company (A–H) tracking its score across EME areas 1 through 18 (including 7a–7d, 12a/12b, and 13a/13b); y-axis: score (0.00–5.00).]

Other data, including the raw data, free text responses, and additional graphs, may be

found in Appendices D, E, and F.

5.3 Correlation and Hypothesis Tests

Although the financial and EME data gathered in the previous two sections are

interesting in and of themselves, in order to attempt to disprove the hypotheses postulated

for this study, they must be correlated to determine if the results of these calculations are

independent from each other. The three hypotheses tested for rejection were:

• H10: Organizations that use any of the EME approach show no difference in KM return when compared to organizations that use no EME.

• H20: Organizations that use more EME steps show no difference in KM return than those that use fewer steps.

• H30: Organizations that use specific parts of the EME process show no difference in KM returns than organizations that use other parts of the EME approach.

5.3.1 H1: Does EME Correlate to KM Success?

For the first hypothesis set, the study set out to disprove the statement:

• H10: Organizations that use any of the EME approach show no difference in KM return when compared to organizations that use no EME.

This specifically raised the following four questions:

o Does EME Present correlate at all to KC?

o Does EME Present correlate at all to CVR?

o Does EME Present correlate positively to KC (EME Present = higher

KC)?

o Does EME Present correlate positively to CVR (EME Present = higher

CVR)?

In the previous data collection phase, eight companies were surveyed; all of them

reported some use of the EME construct (EME Present = 1). Consequently, no

comparison could be made and this hypothesis test was terminated. The study moved on

to the second hypothesis set.

5.3.2 H2: Does More EME Correlate to More KM Success?

While the first hypothesis set attempted to analyze whether the existence of EME and

KM success were correlated, the second hypothesis set extended the concept by testing

whether more usage of EME results in more successful KM systems:

H20: Organizations with a higher ranking on EME show no difference in KM return

than those with lower rankings.

This hypothesis set specifically raised the following four questions:

o Does a higher EME mean (across all 18 areas) correlate at all to KC?

o Does a higher EME mean (across all 18 areas) correlate at all to CVR?

o Does a higher EME mean (across all 18 areas) correlate positively to KC

(higher EME = higher KC)?

o Does a higher EME mean (across all 18 areas) correlate positively to CVR

(higher EME = higher CVR)?

For H2, the variables of interest are the level of overall observance of the EME

principles (EME mean, calculated across all 18 areas and 70 Likert-scale questions) and

the overall level of KC or CVR within a company. Since the Likert means are means of

ordinal variables, they are still ordinal variables. Since the financial figures would be

compared to ordinal rankings, they were converted in an intermediate step to rankings, so

that EME and KC or CVR could be correlated using the non-parametric Spearman’s rho

measurement for paired ordinal rankings.
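
As an illustration of that intermediate conversion, the sketch below uses SciPy's rankdata helper (one convenient implementation, not necessarily the tool used in the study) to turn the Knowledge Capital figures from Table 5-3 into the KC ranks shown in Table 5-8:

from scipy.stats import rankdata

# Knowledge Capital ($M) for Companies A-H, taken from Table 5-3
kc = [633, 7854, 31593, 6459, -41908, 9714, 302512, 16073]

# The study assigns rank 1 to the highest KC, so the ascending ranks are reversed
kc_rank = (len(kc) + 1 - rankdata(kc)).astype(int)
print(kc_rank.tolist())   # [7, 5, 2, 6, 8, 4, 1, 3], matching Table 5-8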

There are additional tests for nonparametric rankings, primarily the Friedman test and

the “coefficient of concordance” (a normalized version of the Friedman test also known

as the Kendall’s w test). These both test more than two sets of ranks, such as the EME

rankings, the KC rankings, and the CVR rankings, in a single calculation to see if there is

correlation. However, since CVR and KC are already mathematically related (as shown

above, CVR is actually derived from KC), there is no need to test the correlation of all

three sets of rankings and the Spearman’s rho test was sufficient for this study.

Again, there were eight successful samples (companies that both completed the EME

survey and had knowledge capital data computed). Each company was ranked 1 to 8 in

each of the three critical measurements: EME mean, level of knowledge capital (KC), and

the market value/comprehensive value ratio (CVR). Ranking results are shown in the

table below:

Table 5-8 – Data Used for H2 Statistical Comparison

Company Name EME Rank KC Rank CV Rank

Company A 6 7 7

Company B 5 5 2

Company C 2 2 5

Company D 8 6 8

Company E 4 8 3

Company F 7 4 6

Company G 3 1 1

Company H 1 3 3

Since the rankings are based on ordinal relative assessment data, traditional

parametric statistics are not appropriate for this analysis. Although the financial data are

quantitative and objective, the EME ratings were primarily scored on a five-point Likert

scale. Although the scales were consistent in direction (1 was always considered

“compliant” or “good”), they still could not be used for a true objective statistical

assessment. Consequently, this analysis used nonparametric statistics of correlation.

Recalling that Lev’s analysis produces two measures of knowledge management

success, it can be seen in Table 5-8 above that the measures generally produce different

rankings. For this analysis, both measures were compared to the level of EME

observance to determine if one had a higher level of correlation than the other did.

For this analysis, the Spearman’s rho test (also known as the rank correlation

coefficient) was used:

    r_s = SS_xy / sqrt(SS_x × SS_y)          (5.11)

Since there were no ties in the rankings, this can be simplified to:

    r_s = 1 − (6 × Σ d_i²) / (n × (n² − 1)),  summed over i = 1 to n          (5.12)
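
The hand calculations in the next two subsections can also be checked with a few lines of Python. The minimal sketch below applies the tie-free shortcut formula (5.12) to the EME and KC ranks from Table 5-8, using the published one-tailed critical value for n = 8 and α = 0.05 cited in the text [87].

# Spearman rank correlation for H2 (Equation 5.12), applied to the EME and KC
# ranks from Table 5-8 (Companies A through H).
eme_rank = [6, 5, 2, 8, 4, 7, 3, 1]
kc_rank  = [7, 5, 2, 6, 8, 4, 1, 3]

n = len(eme_rank)
d_squared = sum((e - k) ** 2 for e, k in zip(eme_rank, kc_rank))   # sum of d_i^2 = 38
r_s = 1 - (6 * d_squared) / (n * (n ** 2 - 1))                     # 0.5476

r_critical = 0.6429   # one-tailed critical value for n = 8, alpha = 0.05 [87]
print(f"r_s = {r_s:.4f}, significant = {r_s > r_critical}")        # not significant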

5.3.2.1 Higher EME vs KC

The first test compared EME usage and KC ranking:

Table 5-9 – Data Used for H2 rs – EME vs KC

Company Name EME Rank KC Rank d d²

Company A 6 7 -1 1
Company B 5 5 0 0
Company C 2 2 0 0
Company D 8 6 2 4
Company E 4 8 -4 16
Company F 7 4 3 9
Company G 3 1 2 4

Company H 1 3 -2 4
N = 8, Σd² = 38

The measured value for rs (rm) for n=8 ranked pairs is 0.5476. For n=8 pairs, and a

level of significance (α) of 0.05 (95% level of confidence), the critical value for rs (rc) is

0.6429 [87]. Since rm < rc, no significance can be ascribed to the correlation, and the H20

hypothesis cannot be disproven for KC – there is no statistically significant correlation

between the level of EME usage and the level of knowledge capital within a corporation.

5.3.2.2 Higher EME vs CVR

The second test compared EME usage and CVR:

Table 5-10 – Data Used for H2 rs – EME vs CVR

Company Name EME Rank CVR Rank d d²

Company A 6 7 1 1
Company B 5 2 -3 9
Company C 2 5 3 9
Company D 8 8 0 0
Company E 4 3 -1 1
Company F 7 6 -1 1
Company G 3 1 -2 4
Company H 1 3 2 4
N = 8, Σd² = 29

The measured value for rs (rm) for n=8 ranked pairs is 0.6602. For n=8 pairs, and a

level of significance (α) of 0.05 (95% level of confidence), the critical value for rs (rc) is

again 0.6429 [87]. Since rm > rc, statistical significance can be ascribed to the correlation,

and the H20 null hypothesis can be disproven for CVR – there is a statistically-significant

correlation between the level of EME usage and the market’s recognition and valuation of

corporate knowledge.

5.3.2.3 Alternative Hypothesis

Since the EME-CVR null hypothesis was disproven, the null hypothesis H20 was

rejected and the alternative H2a was accepted:

• H2a: Organizations with a higher ranking on EME show a positive correlation with KM returns.

It appears that CVR is a better method for comparison when used with EME. The

EME-CVR correlation at 0.6602 was higher than the KC correlation (at 0.5476), and so

for this area of analysis, EME-CVR would be the stronger measure of EME-KM success.

5.3.3 H3 -- Do Different Parts of EME Correlate More Strongly to KM Success?

While the first hypothesis set analyzed whether the existence of EME and KM

success were correlated, and the second hypothesis set tested whether more usage of EME

correlates with more successful KM systems, the third attempted to determine which of

the EME areas correlate with higher KM success:

• H30: Organizations that use specific parts of the EME process show no difference in KM returns than organizations that use other parts of the EME approach.

Since the calculations for resolving H1 and H2 above concluded that there is some

correlation between EME and KM success, the study now turned toward identifying

specifics through the following four questions:

o Do some EME areas correlate more than others to KC?

o Do some EME areas correlate at all to CVR?

o Which EME areas correlate positively to KC and how do they rank (higher

EME = higher KC)?

o Which EME areas correlate positively to CVR and how do they rank

(higher EME = higher CVR)?

For H3, the variables of interest are the level of overall observance of the EME

principles (EME mean within each EME area based on the 70 Likert-scale questions) and

the overall level of KC or CVR within a company. Since this analysis is also comparing

ranks, the non-parametric Spearman’s rho was used for determining correlation.
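
One iteration of this per-area test can be sketched as follows. The area means below are hypothetical placeholders, and SciPy's spearmanr function is used here as a convenient stand-in for equation 5.11.

from scipy.stats import rankdata, spearmanr

# Hypothetical mean Likert scores for one EME area, Companies A-H (low score = good)
area_means = [2.4, 1.8, 2.1, 3.6, 2.9, 2.7, 3.1, 1.2]
kc_rank    = [7, 5, 2, 6, 8, 4, 1, 3]     # KC ranks from Table 5-8
cvr_rank   = [7, 2, 5, 8, 3, 6, 1, 3]     # CVR ranks from Table 5-8
r_critical = 0.6429                        # n = 8, alpha = 0.05, one-tailed [87]

area_rank = rankdata(area_means)           # low (good) score receives rank 1
for label, other in (("KC", kc_rank), ("CVR", cvr_rank)):
    rho, _ = spearmanr(area_rank, other)
    print(f"{label}: rho = {rho:.4f}, significant = {rho > r_critical}")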

As with the other analyses, there were eight successful samples (companies that both

completed the EME survey and had knowledge capital data computed). As above, each

company was ranked 1 to 8 in each of the three critical measurements: EME observance,

level of knowledge capital, and the market value/comprehensive value ratio. However,

this time the EME rankings were for mean Likert scores on each of the areas of the EME

construct. As an example, ranking results for the first of the measured areas are shown in

the table below (the other 21 tables are in Appendix E):

Table 5-11 – Data for H3 Statistical Comparison, “EME1 – Enterprise Definition”

Company Name EME1 Rank KC Rank CVR Rank

Company A 6 7 7
Company B 2 5 2
Company C 3 2 5
Company D 6 6 8
Company E 4 8 3
Company F 5 4 6
Company G 8 1 1
Company H 1 3 3

Once these data were calculated for each of the EME areas, they were tabulated and

summary results placed in Table 5-12 below (note: areas with insufficient data generated

an rm greater than one; those values were ignored when assigning statistical significance):

Table 5-12 -- EME and KM Correlations By EME Phase (α = 0.05, n = 8 ranked pairs)

EME Element | Measured rm (EME-KC) | Measured rm (EME-CVR) | Critical rc | EME-KC Rank | EME-CVR Rank | Significant EME-KC? | Significant EME-CVR?
1. Enterprise Definition | 0.6786 | 0.8810 | 0.6429 | 5 | 2 | YES | YES
2. Value Proposition | 0.3690 | 0.1667 | 0.6429 | 14 | 20 | |
3. Critical Asset Identification | 0.0595 | 0.5714 | 0.6429 | 18 | 8 | |
4. Asset Source Identification | 0.2976 | 0.8810 | 0.6429 | 16 | 2 | | YES
5. Strategic Objectives | 0.3095 | 0.8690 | 0.6429 | 15 | 5 | | YES
6. Environmental Changes | -0.3143 | 0.4857 | 0.6429 | 21 | 12 | |
7a. Leadership Support | 0.6310 | -0.0476 | 0.6429 | 6 | 22 | |
7b. Organizational Support | 0.3810 | -0.0833 | 0.6429 | 13 | 23 | |
7c. Technology Support | 0.4524 | 0.4405 | 0.6429 | 11 | 15 | |
7d. Learning Support | 0.5238 | 0.0119 | 0.6429 | 9 | 21 | |
8. Strategic Functions | 0.7857 | 0.7857 | 0.6429 | 2 | 6 | YES | YES
9. Functional Processes | 0.7738 | 0.5476 | 0.6429 | 3 | 9 | YES |
10. Required Assets | 0.6071 | 0.4762 | 0.6429 | 8 | 14 | |
11. Asset Sources | 0.4048 | 0.4881 | 0.6429 | 12 | 11 | |
12a. KM Strategy Identification | 0.4881 | 0.3095 | 0.6429 | 10 | 18 | |
12b. KM Strategy Effectiveness | 0.7381 | 0.2976 | 0.6429 | 4 | 19 | YES |
13a. Formal Organizational Structure | 0.0571 | 0.4000 | 0.6429 | 19 | 17 | |
13b. Informal Organizational Structure | -0.0286 | 0.4857 | 0.6429 | 20 | 12 | |
14. KM Technology Listing | 0.9167 | 0.6429 | 0.6429 | 1 | 7 | YES |
15. Integrative Management Plan | 0.1786 | 0.9464 | 0.6429 | 17 | 1 | | YES
16. Legacy System Integration | 0.6310 | 0.4286 | 0.6429 | 6 | 16 | |
17. Risk Management/Mitigation | -2.7000 | 0.5000 | 0.6429 | 23 | 10 | |
18. Change Management | -0.5000 | 0.8810 | 0.6429 | 22 | 2 | | YES

5.3.3.1 Alternative Hypothesis

Since both the EME-KC and EME-CVR null hypotheses were disproven for several

EME areas, and since both of the analyses were positively correlated at the α = 0.05 level

for a one-tailed test, the null hypothesis H30 was rejected and the alternative H3a was

accepted:

• H3a: Organizations that use specific parts of the EME process show a positive correlation with KM returns.

Figure 5-5 below shows how the KC and CVR comparisons correlate for each of the

EME areas measured:

Figure 5-5 -- EME-CVR and EME-KC Correlation
[Bar chart of the measured EME-KC and EME-CVR correlation coefficients from Table 5-12 for each EME area, with the critical value rc = 0.6429 marked; y-axis: correlation coefficient (-1.0 to 1.0).]
The five EME areas that showed statistically-significant correlation with KC success

were:

• (14) KM Technology Listing (rm = 0.9167)
• (8) Strategic Functions (rm = 0.7857)
• (9) Functional Processes (rm = 0.7738)
• (12b) KM Strategy Effectiveness (rm = 0.7381)
• (1) Enterprise Definition (rm = 0.6786)

The areas that showed the lowest correlation with KC success were Environmental

Change, Change Management, and Risk Management/Mitigation (none were statistically-

significant and Risk Management, with a high number of N/A responses, is suspect).

For CVR success, there were six EME areas that were statistically-significant:

• (15) Integrative Management Plan (rm = 0.9464)
• (4) Asset Source Identification (rm = 0.8810)
• (1) Enterprise Definition (rm = 0.8810)
• (18) Change Management (rm = 0.8810)
• (5) Strategic Objectives (rm = 0.8690)
• (8) Strategic Functions (rm = 0.7857)

The areas that showed the lowest correlation with CVR success were (7d) Learning

Support (rm = 0.0119), (7a) Leadership Support (rm = -0.0476), and (7b) Organizational

Support (rm= -0.0833). This does not necessarily mean that these were negative

influences but they had no apparent correlation with a successful outcome. This lends

some support to the anecdotal finding that KM efforts often succeeded for reasons

unrelated to management support (or in some cases “despite senior management”).

5.3.3.2 Related Findings

Those organizations that reported corporate-wide efforts chartered and led by senior

management generally reported lower levels of perceived success on the EME survey,

and this lack of correlation shows that it does not necessarily affect financial success

either. The organizational section of the survey asked specifically about the use of

several current management concepts (such as Business Process Reengineering,

Management by Objective, Performance Metrics, and Total Quality

Management/Leadership (TQM/TQL)). Again, those organizations that reported more use

of those concepts generally self-reported lower levels of EME success, and the financial

figures show no effect. This leads to the possibility that the most effective KM

implementation occurs without specific formal organizational or leadership support;

while leadership in all cases supported the KM efforts, their level of involvement did not

seem to affect the financial outcome. This is an area which deserves additional analysis.

5.4 Additional Data

The data from the survey and results of the exercises were summarized in this

Chapter. Specific raw survey scores are in Appendix D. Intermediate and final

calculations based on those data are in Appendix E. Additional graphs and charts are in

Appendix F.

CHAPTER 6 CONCLUSIONS, DISCUSSION, AND SUGGESTIONS FOR

FURTHER RESEARCH

“People do not like to think. If one thinks, one must reach conclusions. Conclusions

are not always pleasant.”

Helen Keller

6.1 Conclusions

The results of this study show that two of the three null hypotheses can be rejected. Since all of the respondents indicated at least some usage of the basic tenets of EME, it could not be tested whether there is any correlation between the mere presence of EME and successful KM. However, there is a statistically valid correlation between Enterprise Management Engineering and both the Comprehensive Value Ratio and Knowledge Capital methods of knowledge valuation measurement. The correlation between EME and the relative CVR measure is the stronger one, but both measures do have

some validity. The next logical step would be to determine whether EME could be

proven to cause an increase in KC valuation measures. This would require additional

longitudinal study within subject organizations.

The other relevant conclusions can be divided into three groups: conclusions based on

the EME/KC correlation, conclusions based upon the KC data alone, and conclusions

based upon the EME survey data alone.

6.1.1 EME/KC Correlation

In addition to the major conclusion cited above, that EME and KC/CVR do positively

correlate, there are three other lesser conclusions that can be drawn from this effort, each

of which is worthy of additional study and research on its own.

6.1.1.1 Corporate Level Only

The EME/KC Valuation Approach only works at the corporate level. The reason for this is that the KC data are only objectively available at the corporate level – there is no standard repository today, nor any requirement, for corporations to report this type of annual-report data for subsidiaries or major operating divisions. In order to keep the

accounting data objective, the assumption is that they should be publicly available from

an objective body. One organization with division-level KM efforts highlighted that its

division-level approaches were so different (based on differing missions for the division

units) that an aggregate answer would have cancelled itself out. This is a flaw in the

study methodology, which currently requires that all assessments be made at the overall

corporate level – it does not extend well if there is no corporate approach to KM.

However, if a study is restricted to a single organization, it may be possible to use

internal data generated at lower levels. In the above example, this would allow

correlations to be made at the division level instead of the corporate one. These data may

not be objective but they should at least be internally consistent.

6.1.1.2 Common Definition of “Knowledge”

There is no common “corporate” definition or interpretation of knowledge,

knowledge management, or knowledge management system. Even though the academic

community is slowly coming to grips with it, the contacted corporations used for this

study (those who completed the study and those who did not) all varied dramatically on

their own ideas of what constituted knowledge. For purposes of the study the definition

of knowledge, KM, and KMS was deliberately kept as broad as possible, both to

encourage more reporting and response and to allow a more accurate use of the KC

valuation metric. This metric assigns all corporate earnings not attributable to long-term

financial assets or physical plant/property/equipment to knowledge. This includes not

only the “traditional” knowledge definition of “corporate information in context” but also

items such as training systems and course material, data and distribution rights, patents

and copyrights, and mineral exploitation rights (depending on the accounting standards).

In discussions, the survey respondents indicated that from their perspectives those items did count as knowledge. This definition and Lev’s definition align closely, but both are broader than the definitions used in many of the papers and documents published in this field and reviewed for this study in Chapter Two.
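To make the breadth of this metric concrete, the following minimal sketch outlines the general shape of a Lev-style knowledge-capital estimate as characterized above: earnings not explained by normal returns on physical and financial assets are attributed to knowledge. The function name, the assumed rates of return, and the knowledge-capital discount rate are illustrative assumptions only, not the parameters actually used in this study.

# Minimal sketch of a Lev-style Knowledge Capital (KC) estimate. All rates
# below are illustrative assumptions, not the study's actual parameters.
def knowledge_capital(normalized_earnings, physical_assets, financial_assets,
                      physical_return=0.07, financial_return=0.045,
                      kc_discount_rate=0.105):
    """Attribute to 'knowledge' the earnings left over after charging the
    physical and financial assets a normal rate of return, then capitalize
    that residual stream at an assumed knowledge-capital discount rate."""
    kc_earnings = (normalized_earnings
                   - physical_return * physical_assets
                   - financial_return * financial_assets)
    return kc_earnings / kc_discount_rate

# Example using Company B's asset figures from Appendix C ($MM) and a
# hypothetical total normalized-earnings figure:
kc = knowledge_capital(normalized_earnings=1200.0,
                       physical_assets=4331.0, financial_assets=1954.0)
print(f"Estimated knowledge capital: ${kc:,.0f}MM")

The relative CVR measure used for the correlations in this study builds on knowledge-capital figures of this kind.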

6.1.1.3 Few Organizations Have Corporate KM “Systems”

KM systems imposed from the top have a low record of success by any metric. Those

corporations that stated that they have KM systems generally identified them as being at

lower organizational levels (divisions, subsidiaries, departments, functions). Those

organizations that stated that they have corporate-level KM did not like calling it a

“system” in the terms used by traditional systems engineering (as shown in the literature

review in Chapter Two). The correlation results support the idea that the more leadership

and business management concepts (as one respondent put it, “trends”) get involved, the

less successful the effort is. As expected, respondents who worked in the IT organization

tended to discount the importance of non-IT factors in the KM system process, and those

who were not from the IT department tended to view the concept of system as too rigid

and formal.

6.1.2 KC Valuation Method Conclusions

There were some conclusions that could be drawn from the analysis of the data used

to update Lev’s KC valuation calculations as well. Generally speaking, his calculations

produce rankings (both for absolute KC values and for relative CVR measures) that both

repeat his original conclusions and reflect common market wisdom for the subject

companies.

6.1.2.1 Accounting and Organizational Changes Can Cause Invalid Figures

Companies that restructure, or that have an unusual or a bad year, suffer poor KC valuations when those valuations are based on a one-time snapshot of balance sheet figures. This study attempted to compensate for more permanent changes by excluding several of the restructured or financially-struggling companies, but the blanket exclusion criteria were not always applicable – nor should they have been (useful data were still gathered from one company even though the snapshot alone would suggest that its KM is not being recognized).

While the financial industry takes this into account (as an example, for this study, for

one company, the earnings per share were forecast to have one negative year out of six

but overall it received a positive earnings prediction), this is because the figures are

viewed over time and anomalies are essentially discounted. If the KC valuation metric

is to be used for measuring the efficacy of KM, KM practitioners and corporate

decisionmakers will need to come to consensus on the meaning of this figure over time.

6.1.2.2 Changes in Accounting Methods Affect Analyses Over Time

The four years between financial snapshots for this study produced some major

changes in how companies chose to classify and report certain types of assets or earnings.

In addition, the changes imposed by the U.S. Government push toward more rigid accounting standards and the overall push toward a common global method of accounting are forcing many companies to change their annual reporting methodology. This push toward more and different disclosure may eventually result in a more transparent means of valuating knowledge resources, but in the interim it will make comparisons across time periods far more difficult to do with meaningful precision.

6.1.3 EME Survey Conclusions

In addition to validating the original EME construct against the KC valuation model,

the study also came to several conclusions about the EME model separate and apart from

its correlation with KC/CVR.

6.1.3.1 Current EME Model Is A Cycle Not A Process

The EME model has 18 steps, 15 of which are concentrated in the design and build

phase. Respondents repeatedly cited that one of the keys to effective KM systems is the

constant update, renewal, and retirement of outdated, revised, or inaccurate data. Based

upon the respondents’ experiences and their need to accommodate constantly changing

inputs, the KM system engineering process as shown in the EME construct should be

presented more like a continuous process cycle and less like a sequential set of events

resulting in a single (final) deliverable. This in turn requires that as much attention be

paid to the system post-deployment (especially in terms of feedback, modifications, and

rebaselining as necessary) as in the initial system design phases.

6.1.3.2 EME Terms of Reference Are Too “Traditional”

This conclusion came about from repeated explanations and discussions with

corporate respondents on the terms used in the survey and on the overall terminology for

EME. KM is a broader system domain than information or data; the engineering

terminology should be broader as well. Respondents who worked in industries more

traditionally associated with “classical” system engineering (aerospace, utilities, IT) had

no issue with the terminology used to describe the model; however, respondents from

industries with little or no exposure to defense or aerospace clients or staff found the

terminology formal and restrictive. The model might benefit from broader definitions of

terms and specific nontraditional examples.

6.1.3.3 Revisit the EME Model in Certain Areas

One intent of the EME survey was to identify areas that are more and less widely used

and to correlate those areas with the KM valuation process. A related intent was to

identify areas that were universally used (or not used) and to solicit independent feedback

from respondents on the EME construct. One interesting observation is that the EME

areas that the respondents rated the three highest on their surveys (the “perceived”

successes) were not necessarily the areas that showed the highest actual KM success rankings:

Table 6-1 -- Perceived vs Actual EME "Success" Areas

EME Area                                    EME Ranking   EME-KC Ranking   EME-CVR Ranking
18. Change Management                            1              22                2
15. Integrative Management Plan                  2              17                1
13b. Informal Organizational Structure           3              20               12
14. KM Technology Listing                       11               1                7
9. Functional Processes                         19               2                6
8. Strategic Functions                          13               3                9
4. Asset Source Identification                  18              16                2
1. Enterprise Definition                        15               5                2

Based on those responses, the following changes should be considered for the EME

model:

• Revise and expand the section on organizational structures (particularly informal ones) and their use.

• Revise and expand the section on change management and its use.

• Revise and expand the section on asset source identification, its purpose, and its justification.

• Revise and expand the section on enterprise definition, its purpose, and its justification.

• Concentrate on identifying appropriate measures and levels of leadership, organizational, and learning support.

• Move risk management and mitigation into the upfront strategic planning process rather than the integrative plan; however, its generally reported nonuse means that regardless of where it goes, it needs to be explained more explicitly.

• Expand the section on environmental changes, their anticipation, and their place in the planning process. Few respondents acknowledged any use of this segment of the model.

• Align EME more closely with the CVR measure of knowledge success. This means that more emphasis should be placed on those areas that correlated more highly with the CVR measurement, in order to ensure a common understanding and provide additional implementation examples.

6.2 Areas for Future Research

As stated throughout this study, this effort is an initial survey and a broad-brush validation. There are several areas for potential follow-up, identified both through the literature review and through the results of this study effort. These areas include:

• Readdressing the first hypothesis set to correlate the presence of EME with the success of KM implementation (this will require a much broader and potentially more representative sample set than the top-of-the-Fortune-500 set used for this study);

• Identifying and proving causation (not just correlation) between EME and its individual tenets and KMS success;

• Developing and assessing concrete and objective definitions for the stages of EME, along with metrics of progress and accomplishment for each (there is an inherent bias in subjective self-reporting);

• Extending the KC valuation concept to assess the success of KM in non-profit and/or governmental organizations;

• Identifying and confirming additional concepts that may be common among successful KM implementations and thus should be added to the EME construct;

• Further examining and decomposing the utility of KC valuation as a measure of KM success in light of changing accounting and reporting standards; and

• Developing a suggested set of EME and CVR reporting standards that would feed into the objective EME metrics developed above.

Lastly, another area of research is identified in an adaptation of some of the earlier

charts from this study such as Figure 5-2 (which charted average overall EME scores by

company). Given that the eight respondents came from several industry sectors, replacing

the company codes (which are meaningless anyway) with the applicable sector names

provides a different perspective that provokes additional questions. As seen in Figure

6-1, representatives of different industries have differing scores; it would be interesting to

pursue these divergences (especially the counterintuitive ones, such as home products

outscoring semiconductors) to try to determine whether they are real and, if so, why.

Figure 6-1 -- Industrial Sector EME Scores

[Bar chart titled “Company Overall EME Scores”: each respondent's overall EME score (vertical axis, “Score”, 0.00 to 3.50), with the horizontal axis (“Company”) labeled by industry sector rather than by company code.]
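As a starting point for that follow-on work, the sketch below shows one way the per-company overall EME scores behind charts like Figure 5-2 and Figure 6-1 might be recomputed from the Appendix D survey data and regrouped by sector. The overall score is taken here as the unweighted mean of the per-area averages, and the sector labels are placeholders; both are assumptions rather than the study's exact procedure.

# Sketch: recompute overall EME scores per respondent and regroup by sector.
from statistics import mean
from collections import defaultdict

# Per-area average scores for companies A-H, taken from Appendix D, Table D-1;
# only two areas are shown here, the rest would be added the same way.
area_scores = {
    "1. Enterprise Definition": [3.00, 1.80, 2.20, 3.00, 2.25, 2.80, 3.20, 1.00],
    "2. Value Proposition":     [3.67, 3.00, 2.00, 1.00, 2.00, 3.33, 2.33, 1.33],
}

companies = list("ABCDEFGH")

# Placeholder sector labels; the real company-to-sector mapping is confidential.
sector_of = {c: f"Sector {i % 4 + 1}" for i, c in enumerate(companies)}

# Overall score taken as the unweighted mean of the per-area averages (an assumption).
overall = {c: mean(scores[i] for scores in area_scores.values())
           for i, c in enumerate(companies)}

by_sector = defaultdict(list)
for c, score in overall.items():
    by_sector[sector_of[c]].append(score)

for sector, scores in by_sector.items():
    print(sector, round(mean(scores), 2))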

Research in any of these areas would go a long way toward providing some additional

quantitative proof of the actual processes, costs, and benefits of successful knowledge

management systems engineering.

6.3 Discussion

It should be remembered that this survey was a self-assessment of projects that were

generally at least the partial responsibility of the respondents (or in some cases their

predecessors). The subjective bias was deemed preferable to not being able to get

accurate information about the actual implementation process for a project; however, as

with all project lessons learned, the self-assessments will be relative and may not indicate

totally objective results. Objective assessments of EME application, akin to the CMMI

SCAMPI survey and accreditation process, would improve the validity of this research

and allow more specific analyses. Although the increasing acceptance of the CMMI and

the INCOSE/SEI system engineering standards is beginning to move beyond the narrow

field of aerospace and defense contracting, development of a truly objective assessment

standard applicable across multiple industries will require additional research in order to develop widely-applicable metrics.

Since this is the first survey to attempt to validate the specific EME model, the

analysis was relatively high-level and still includes a degree of uncertainty. The findings

from this effort are more similar to indicators of trends or possible concepts than

definitive statements of causation and relevance. However, the findings from the

statistical analysis portion of this study are supported both by the original literature

review and by the free text and anecdotal responses of the survey respondents. Although

the specific financial or performance numbers or correlation coefficients may change

from organization to organization and from researcher to researcher, the overall trends as

indicated by the findings of this study are relatively robust. KM is reaching a crossroads;

a common taxonomy and objective performance metrics are critical to the acceptance of

this concept as a useful management and organizational tool.

In order for KM systems developers and practitioners to truly be able to make the

business case for their future systems, they will need to work with the financial managers

to develop and implement accurate reporting and measurements of the true cost and

impacts of their systems. The broad valuation method used in this study produced an

initial assessment but is by no means a complete inventory of the current state of EME implementation or knowledge valuation, even across the entire Fortune 500, let alone the

global business environment. Furthermore, this study was confined only to large

profitable public and non-financial companies – hardly a result that can be easily

extrapolated across the entire organizational environment. Accounting standards are

entering a period of change and redefinition; there is a window of opportunity to revise

and refine the new standards so that more accurate intangible measurements can be put

into the regular reporting processes.

If academic research in KM is to affect corporate action, there should be a move

toward common terminology and taxonomy. KM is in danger of becoming yet another

failed buzzword because too much has been promised with no proof of delivery. Without

accurate and externally-reported measurements of knowledge progress, performance,

costs, and benefits, KM as a separate concept is in danger of marginalization. For KM to

become more than just another failed business or IT buzzword, it will need to combine

the pragmatism of the management perspective with the discipline of the IT systems

view. Hopefully this effort is the first of many to begin to forge that synthesis and bring

KM from the trend list into the permanent organizational management lexicon.

REFERENCES

[1] Alavi, M. and Leidner, D., 1999, “Knowledge Management Systems: Issues,

Challenges, And Benefits”, Communications of the AIS, 1(2es), article 7.

[2] Alvesson, M. & Karreman, D., 2001, “Odd Couple: Making Sense Of The

Curious Concept Of Knowledge Management”, Journal of Management Studies,

38(7), pp. 995-1018.

[3] American Productivity and Quality Center, 2003, “Measuring the Impact of

Knowledge Management,” APQC Consortium Learning Forum – Best Practice

Report, Houston: APQC Press, http://www.apqc.org/portal/apqc/ksn/01%20B-to-

B%20Exec%20Summ.pdf?paf_gear_id=contentgearhome&paf_dm=full&pagesel

ect=contentitem&docid=113078, site viewed 30 March 2004.

[4] Andrade, J., Ares, J., Garcia, R., Rodriguez, S., and Suarez, S., 2003, “Lessons

Learned for the Knowledge Management Systems Development,” Proceedings of

the 2003 IEEE International Conference on Information Reuse and Integration

(IRI – 2003), Las Vegas, NV, pp. 471-476.

[5] Aris, J., 2000, “Inventing Systems Engineering”, IEEE Annals of the History of

Computing, 22(3), pp. 4-15.

[6] Aston, A., 2002, “Brainpower on the Balance Sheet,” Business Week, 26 August

2002,

http://www.businessweek.com:/print/magazine/content/02_34/b3796624.htm?mz,

site visited 23 March 2004.

[7] Aston, A., and Lev, B., 2002, “Recalculating the Balance Sheet”, Business Week,

26 August 2002,

http://www.businessweek.com:/magazine/content/02_34/b3796625.htm, site

visited 23 March 2004.

[8] Babcock, P., 2004, “Shedding Light On Knowledge Management,” HRMagazine,

49(5), pp. 46-50.

[9] Bahill, A. T. and Gissing, B., 1998, “Re-Evaluating Systems Engineering

Concepts Using Systems Thinking”, IEEE Transaction on Systems, Man and

Cybernetics, Part C: Applications and Reviews, 28(4), pp. 516-527.

[10] Baum, G., 2000, “Introducing the New Value Creation Index,” Forbes ASAP, 01

April 2000, http://www.forbes.com/asap/2000/0403/140_print.html, site visited

24 March 2004.

[11] Blanchard, B. S., 2003, Systems Engineering Management, Third Edition, Wiley-

Interscience, New York, Chapters 1, 2, and 6.

[12] Blumentritt, R. and Johnston, R., 1999, “Towards A Strategy For Knowledge

Management,” Technology Analysis & Strategic Management, 11(3), pp. 287-300.

[13] Bourdreau, A. and Couillard, G., 1999, “Systems Integration And Knowledge

Management,” Information Systems Management, 16(4), pp. 24-32.

[14] Brigham, E. F., and Gapenski, L. C., 1988, Financial Management: Theory and

Practice, Fifth Edition, Dryden Press, Orlando FL, Chapter 21.

[15] Brooking, A., 1999, Corporate Memory: Strategies for Knowledge Management,

International Thompson Business Press, London, Chapters 1, 2, 8, and 9.

[16] Brummet, R. L., Flamholtz, E., and Pyle, W. C., 1968, “Human Resource

Management – A Challenge for Accountants,” Accounting Review, 43, pp. 217-

224.

[17] Bukh, P. N. and Johanson, U., 2003, “Research And Knowledge Interaction:

Guidelines For Intellectual Capital Reporting,” Journal of Intellectual Capital,

4(4), pp. 576-587.

[18] Cantor, M., 2002, Software Leadership: A Guide to Successful Software

Development, Addison-Wesley, NY, Appendix A.2, p. 153.

[19] Carper, W. B., 2002, “The Early Development Of Human Resource Accounting

Including The Impact Of Evolving Asset Valuation Theory,” paper submitted for

the Ninth World Congress of Accounting Historians, Melbourne, Australia,

August 2002, http://www.deakin.edu.au/wcah/papers/Carper.pdf, site visited 30

March 2004.

[20] Cho, G., Jerrell, H., and Landay, W., 1999, “Program Management 2000: Know

The Way: How Knowledge Management Can Improve DoD Acquisition,” Report

Of the DSMC 1998–1999 Military Research Fellows, Ft. Belvoir VA: Defense

Systems Management College Press,

http://www.dau.mil/pubs/mfrpts/mrfr_1999.asp, site accessed 15 January 2006,

Chapters 1, 2, 3 and 20.

[21] Chourides, P., Longbottom, D., and Murphy, W., 2003, “Excellence In

Knowledge Management: An Empirical Study To Identify Critical Factors And

Performance Measures,” Measuring Business Excellence, 7(2), pp. 29-45.

[22] Civi, E., 2000, “Knowledge Management As A Competitive Asset: A Review,”

Marketing Intelligence & Planning, 18(4), pp. 166-174.

[23] Clark, J. A., 1999, “A Graphical Method For Assessing Knowledge-Based

Systems Investments,” Logistics Information Management, 12(1/2), pp. 63-77.

[24] CMMI Product Team, 2001, Capability Maturity Model Integration (CMMI) for

Systems Engineering and Software Engineering (CMMI-SE/SW, V1.1)

(Continuous Representation), Pittsburgh PA: Carnegie-Mellon Software

Engineering Institute, Chapters 2, 3, 4, and 7.

[25] CMMI Product Team, 2002, Capability Maturity Model Integration (CMMI) for

Systems Engineering, Software Engineering, Integrated Product and Process

Development, and Supplier Sourcing (CMMI-SE/SW/IPPD/SS, V1.1) (Continuous

Representation), Pittsburgh PA: Carnegie-Mellon Software Engineering Institute,

Chapters 2, 4, and 7.

[26] CMMI Product Team, 2005, Capability Maturity Model Integration (CMMI)

Overview, http://www.sei.cmu.edu/cmmi/adoption/pdf/cmmi-overview05.pdf ,

site visited 05 October 2005.

[27] CMMI Product Team, 2005, CMMI Appraisals,

http://www.sei.cmu.edu/cmmi/appraisals/appraisals.html, site visited 05 October

2005.

[28] Danish Agency for Trade and Industry, 2000, A Guideline For Intellectual Capital

Statements – A key to Knowledge Management, Version 1.0,

http://www.efs.dk/publikationer/rapporter/guidelineICS/download.htm, site

visited 23 March 2004.

[29] Daum, J., 2001, “How Accounting Gets More Radical In Measuring What Really

Matters To Investors,” New Economy Analyst Report, 26 July 2001,

http://www.juergendaum.com/news/07_26_2001.htm, site visited 01 April 2003.

[30] Davenport, T., and Prusak, L., 1998, Working Knowledge: How Organizations

Manage What They Know, Harvard Business School Press, Boston, Chapters 7, 8,

and 9.

[31] Defense Acquisition University, 2001, Systems Engineering Fundamentals,

http://www.dau.mil/pubs/gdbks/sys_eng_fund.asp, site visited 13 January 2006.

[32] Deng, Z., Lev, B., and Narin, F., 2003, “Science and Technology As Predictors of

Stock Performance,” Intangible Assets: Values, Measures, and Risks, J. Hand and

B. Lev, eds., Oxford University Press, London, pp. 207-227.

[33] Department of Defense, HQ/AFSC/EN, 1992, “MIL-STD-499B, Draft Military

Standard: Systems Engineering”, 06 May 1992 draft,

http://www.software.org/quagmire/descriptions/mil-std-499b.asp, site visited 28

December 2005.

[34] Doran, T., 2000, “Compliance Frameworks Systems Engineering Standards,”

paper presented at National Defense Industries Association (NDIA) Systems

Engineering and Supportability Conference, 25 October 2000,

http://www.dtic.mil/ndia/systems/Doran.pdf, site visited 28 December 2005.

[35] Doran, T., 2004, “IEEE Std 1220-1998 Revised,” paper presented at USAF

Software Technology Conference, 20 April 2004, www.sstc-

online.org/Proceedings/2004/PDFFiles/TD572.pdf, site visited 10 January 2006.

[36] Edvinsson, L. and Malone, M., 1997, Intellectual Capital: Navigating in the New

Business Landscape, Harper Collins, New York, Chapters 2-5, 11.

[37] Edvinsson, L., 2002, Corporate Longitude, Financial Times Prentice-Hall,

London, Chapter 8.

[38] Edwards, J. S., Shaw, D., and Collier, P. M., 2005, “Knowledge Management

Systems: Finding A Way With Technology,” Journal of Knowledge Management,

9(1), pp. 113-125.

[39] Eisner, H., 2002, Essentials of Project and Systems Engineering Management,

Second Edition, John Wiley and Sons, New York, Chapters 3, 7, 8, and 13.

[40] Electronic Industries Alliance, 1998, EIA/IS-731 -- Systems Engineering

Capability Model (SECM), EIA, Arlington VA, Chapters 1, 2, and 4.

[41] European Forum for Quality Management, 2003, “2003 EFQM Excellence

Model”, http://www.efqm.org/model_awards/model/excellence_model.htm, site

visited 19 March 2004.

[42] Federal Accounting Standards Board, 2001, “FASB Statement 142: Goodwill and

Other Intangible Assets,” Norwalk CT: Financial Accounting Standards Board,

http://www.fasb.org/pdf/fas142.pdf, site visited 23 March 2004.

[43] Felton, S. M. and Finnie, W. C., 2003, “Knowledge Is Today's Capital: Strategy &

Leadership Interviews Thomas A Stewart,” Strategy & Leadership, 31(2), pp. 48-

55.

[44] Flamholtz, E. G., 1985, Human Resource Accounting, Advances in Concepts,

Methods, and Applications, Jossey-Bass Publishers, San Francisco, Chapters 2, 3,

6, and 14.

[45] Forsberg, K., Mooz, H., and Cotterham, H., 2000, Visualizing Project

Management, Second Edition, John Wiley and Sons, Indianapolis, Chapters 3 and

7.

[46] Gebauer, M., 2002, “Human Resource Accounting: Measuring the Value of

Human Assets and the Need for Information Management,” Badovinac, B. et al.

(Ed.), Human Beings and Information Specialists Proceedings,

Ljubliana/Stuttgart 2002, pp. 80-89,

http://notesweb.uniwh.de/wg/wiwi/wgwiwi.nsf/1084c7f3cb4b7f4ac1256b6f00585

1c5/3d0ea9dde1e99e6ac1256c140044dbbe/$FILE/Gebauer%20(2002)%20HRA.p

df, site visited 29 March 2004.

[47] Gold, A. H., Malhotra, A., and Segars, A. H., 2001, “Knowledge Management:

An Organizational Capabilities Perspective,” Journal of Management Information

Systems, 18(1), pp. 185-214.

[48] Gordon, J., 2004, “Making Knowledge Management Work,” Training, 42(8), pp.

16-21.

[49] Green, A., and Ryan, J. J. C. H., 2005, “Framework of Intangible Valuation Areas

(FIVA)”, Journal of Intellectual Capital, 6(1), pp. 43-52.

[50] Gu, Feng, and Lev, Baruch, 2001, “Intangible Assets -- Measurement, Drivers,

Usefulness,” online paper: http://pages.stern.nyu.edu/~blev/intangible-assets.doc,

site visited 14 March 2004.

[51] Hahn, J. and Subramani, M. R., 2000, “Framework Of Knowledge Management

Systems: Issues And Challenges For Theory And Practice,” Proceedings Of The

Twenty First ACM International Conference On Information Systems, Brisbane,

Queensland, Australia, pp. 302 – 312.

[52] Harrison, S., Sullivan, P. H., and Castagna, M. J., 2001, “Intellectual Capital

Management Best Practices,” Appendix B to Intangibles – Management,

Measurement, and Reporting, B. Lev, The Brookings Institution, Washington DC,

pp. 155-165.

[53] Harrison, S., Sullivan, P. H., and Castagna, M. J., 2001, “Intellectual Capital

Management Best Practices,” Intangibles – Management, Measurement, and

Reporting, B. Lev, ed., The Brookings Institution, Washington DC, Chapter Five.

[54] Hendricks, K., and Singhal, V., 2000, “The Impact of Total Quality Management

(TQM) on Financial Performance: Evidence from Quality Award Winners”,

European Foundation for Quality Management (EFQM) Paper,

http://www.efqm.org/model_awards/downloads/keyabstract.pdf, site visited 23

March 2004.

[55] Hildebrand, C., 1999, “Making KM Pay Off,” CIO Enterprise Magazine, 12(9),

pp. 64-66.

[56] Huber, G. P., 1990, "A Theory of the Effects of Advanced Information

Technologies on Organizational Design, Intelligence, and Decision Making",

Academy of Management Review, 15(1), pp. 47-71.

[57] Hussi, T., 2004, “Reconfiguring Knowledge Management - Combining

Intellectual Capital, Intangible Assets And Knowledge Creation,” Journal of

Knowledge Management, 8(2), pp. 36-52.

[58] IEEE Computer Society, Software Engineering Standards Committee, 2005, IEEE

Standard 1220-2005 for Application and Management of the Systems Engineering

Process, Institute of Electrical and Electronics Engineers, New York, Chapters 1

and 4, Annexes A and B.

[59] Institutional Brokers Estimate System, 2006, I/B/E/S Database, website

http://wrds.wharton.upenn.edu/ds/ibes/statsum/, site accessed 19 January 2006.

[60] Institutional Brokers Estimate System, 2000, The I/B/E/S Glossary – A Guide to

Understanding I/B/E/S Terms and Conventions, New York, I/B/E/S, pp. 10-48,

http://wrds.wharton.upenn.edu/support/docs/ibes/glossary2001.pdf, site accessed

19 January 2006.

[61] International Council on Systems Engineering, 2004, “What is System

Engineering?,” website http://www.incose.org/practice/whatissystemseng.aspx,

site visited 26 January 2006.

[62] International Council on Systems Engineering, 2006, “What is System

Engineering? -- A Consensus of the INCOSE Fellows,” website

http://www.incose.org/practice/fellowsconsensus.aspx, site visited 26 January

2006.

[63] Ittner, C. D. and Larcker, D.F., 1998, “Are Nonfinancial Measures Leading

Indicators of Financial Performance? An Analysis of Customer Satisfaction,”

Journal of Accounting Research, 36(Supplement), pp. 1-35.

[64] Ittner, C. D. and Larcker, D.F., 2001, “Assessing Empirical Research In

Managerial Accounting: A Value-Based Management Perspective,” Journal of

Accounting and Economics, 32(3), pp. 349-410.

[65] Jensen, H., 2003, “Overview of Tools and Frameworks for Managing

Knowledge,” How to Develop and Monitor Your Company's Intellectual Capital -

- Tools and Actions For The Competency-Based Organization, Oslo: Nordic

Industrial Fund FRAME Project Publication (11-16),

http://www.nordicinnovation.net/_img/IC_rapport_engelsk.pdf, site visited

15 January 2005.

[66] Johanson, U., 1996, presentation at Workshop 6: Changing Workplace Strategies:

Achieving Better Outcomes For Enterprises, Workers And Society, Chateau

Laurier, Ottawa, site visited 24 March 2004.

[67] Kankanalli, A. and Tan, B. C. Y., 2004, “A Review of Metrics for Knowledge

Management Systems and Knowledge Management Initiatives,” Proceedings of

the 37th IEEE Hawaii International Conference on System Sciences, Honolulu HI,

pp. 238-245.

[68] Kaplan, R., and Norton, D., 1996, The Balanced Scorecard: Translating Strategy

Into Action, Harvard Business School Press, Boston, Chapters 2, 7, and 8.

[69] Kay, R., 2002, “QuickStudy: System Development Life Cycle,” ComputerWorld,

14 May 2002. Online version at

http://www.computerworld.com/printthis/2002/0,4814,71151,00.html, site visited

28 December 2005.

[70] Kearney, A.T., 2002, “The 2002 Value Creation Index – A Prescription for

Health,” AT Kearney Report,

http://www.atkearney.com/shared_res/pdf/2002_Value_Creation_Index_S.pdf,

site visited 17 February 2004.

[71] Kemp, L.L., Nidiffer, K.E., Rose, L.C., Small, R., and Stankosky, M., 2001,

“Knowledge Management: Insights From the Trenches,” IEEE Software, 18(6),

pp. 66-68.

[72] King, W. R., Marks, P.V., and McCoy, S., 2002, “The Most Important Issues In

Knowledge Management,” Communications of the ACM, 45(9), pp. 93-97.

[73] Kossiakoff, A. and Sweet, W.N., 2002, Systems Engineering Principles and

Practice, Wiley-Interscience, New York, Chapters 1 and 3.

[74] Koulopoulos, T., and Frappaolo, C., 1999, “Why Do A Knowledge Audit?” J. W.

Cortada and J.A. Woods (Eds.), The Knowledge Management Yearbook, 2000-

2001, Boston MA: Butterworth-Heinemann, pp. 418-425.

[75] Laplante, P. A. and Neill, C. J., 2004, “’The Demise of the Waterfall Model Is

Imminent’ and Other Urban Myths,” ACM Queue, 1(10),

http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=110

, site visited 03 January 2006.

[76] Leonard-Barton, D., 1995, “Implementing and Integrating New Technical

Processes and Tools,” Knowledge Management Tools, R. Ruggles, ed.,

Butterworth-Heinemann, Boston, pp. 211-231.

[77] Lev, B. and Daum, J.H., 2004, “The Dominance Of Intangible Assets:

Consequences For Enterprise Management And Corporate Reporting,” Measuring

Business Excellence – The Journal of Business Performance Management, 8(1),

pp. 6-17.

[78] Lev, B. and Sougiannis, T., 2003, “The Capitalization, Amortization, and Value-

Relevance of R&D,” Intangible Assets: Values, Measures, and Risks, J. Hand and

B. Lev, eds., Oxford University Press, London, pp. 123-152.

[79] Lev, B. and Zarowin, P., 1999, “The Boundaries of Financial Reporting and How

to Extend Them,” Journal of Accounting Research, 37(2), pp. 353-385.

[80] Lev, B., 2000, “New Accounting for the New Economy,” online paper,

http://pages.stern.nyu.edu/~blev/newaccounting.doc, site visited 24 March 2002.

[81] Lev, B., 2002, “Communicating Knowledge Capabilities,” online paper:

http://pages.stern.nyu.edu/~blev/communicating.doc, site visited 24 March 2004.

[82] Lev, B., 2003, “What Then Must We Do?” Intangible Assets: Values, Measures,

and Risks, J. Hand and B. Lev, eds., Oxford University Press, London, pp. 511-

524.

[83] Lev. B., 2005, “Intangible Assets: Concept and Measurements”, Encyclopedia of

Social Measurement, K. Kempf-Leonard, ed., London, Academic Press, Volume

2, pp. 299-305.

[84] Lev, B., 2005, “The Valuation of Organizational Capital,” online paper:

http://pages.stern.nyu.edu/~blev/docs/The%20Valuation%20of%20Organization

%20Capital.pdf, site visited 15 January 2006.

[85] Liebowitz, J. and Suen, C. Y., 2000, “Developing Knowledge Management

Metrics For Measuring Intellectual Capital,” Journal of Intellectual Capital, 1(1),

pp. 54-67.

[86] Liebowitz, J., and Wright, K., 1999, “A Look Toward Valuating Human Capital,”

Knowledge Management Handbook, J. Liebowitz, ed., CRC Press, Boca Raton

FL, Chapter Five.

[87] Lindley, D.V. and Scott, W.F, 1984, New Cambridge Statistical Tables, Second

Edition Cambridge, Cambridge University Press, pp. 57-73.

[88] Marr, B., Schiuma, G., and Neely, A., 2004, “Intellectual Capital - Defining Key

Performance Indicators For Organizational Knowledge Assets,” Business Process

Management Journal, 10(5), pp. 551-569.

[89] Martin, W. J., 2004, “Demonstrating Knowledge Value: A Broader Perspective

On Metrics,” Journal of Intellectual Capital, 5(1), pp. 77-91.

[90] McAdam, R. and McCreedy, S., 1999, “A Critical Review Of Knowledge

Management Models,” Learning Organization, 6(3), pp. 91-100.

[91] Meso, P. and Smith, R., 2000, “A Resource-Based View Of Organizational

Knowledge Management Systems,” Journal of Knowledge Management, 4(3), pp.

224-234.

[92] Mintz, S.L., 1999, “Seeing Is Believing: A Better Approach To Estimating

Knowledge Capital,” CFO Magazine, 15(2),

http://pages.stern.nyu.edu/html/99FEcfo.html, site visited 01 April 2003.

[93] Mintz, S.L., 2000, “CFO's Second Annual Knowledge Capital Scorecard: A

Knowing Glance,” CFO Magazine, 16(2),

http://www.cfo.com/article.cfm/2988588?f=related, site visited 01 April 2003.

[94] Mockler, R. J. and Dologite, D.G., 1992, An Introduction to Expert Systems,

Macmillan, New York, pp. 419-516.

[95] Moore, C. R., 1999, “Performance Measures for Knowledge Management,”

Knowledge Management Handbook, J. Liebowitz, ed., CRC Press, Boca Raton

FL, Chapter Six.

[96] Mouritsen, J., Larsen, H. T., and Bukh, P. N., 2005, “Dealing With The

Knowledge Economy: Intellectual Capital Versus Balanced Scorecard,” Journal

of Intellectual Capital, 6(1), pp. 8-27.

[97] Mouritsen, J., Larsen, H. T., Bukh, P. N., and Johansen, M. R., 2001, “Reading

An Intellectual Capital Statement: Describing And Prescribing Knowledge

Management Strategies,” Journal of Intellectual Capital, 2(4), pp. 359-383

[98] Ndofor, H. A., and Levitas, E., 2004, “Signaling the Strategic Value of

Knowledge,” Journal of Management, 30(5), pp. 685-702.

[99] Neill, C. J. and Laplante, P.A., 2003, “Requirements Engineering: The State Of

The Practice,” IEEE Software, 20(6), pp. 40-45.

[100] Norton, D. P., 2002, “Measuring Value Creation With The Balanced Scorecard,”

Balanced Scorecard Report, 2(3), pp. 1-5.

[101] O’Sullivan, K. and Stankosky, M., 2004, “The Impact of Knowledge Management

Technology on Intellectual Capital,” Journal of Information & Knowledge

Management, 03(04), pp. 331-346.

[102] Osterlund, A., 2001, “Grey Matters: CFO's Third Annual Knowledge Capital

Scorecard,” CFO Magazine, 17(4),

http://www.cfo.com/article.cfm/2992913/c_3046504?f=magazine_featured, site

visited 01 April 2003.

[103] Pearson, T. A. 1999, “Measurements And The Knowledge Revolution,” The

Knowledge Management Yearbook, 2000-2001, J. W. Cortada and J.A. Woods,

eds., Butterworth-Heinemann, Boston, pp. 408-418.

[104] Pressman, R. S., 2001, Software Engineering: A Practitioners’ Approach, Fifth

Edition, McGraw-Hill, New York, Chapter Six.

[105] Revenue Recognition staff, 2006, “Sarbox Has Widespread Impact On Revenue

Recognition Policies,” Revenue Recognition.com/CFO.com, website

http://www.revenuerecognition.com/article.cfm/5077434, site accessed 31

January 2006.

[106] Rodgers, W., 2003, “Measurement And Reporting Of Knowledge-Based Assets,”

Journal of Intellectual Capital, 4(2), pp. 181-190.

[107] Royce, W. W., 1987, “Managing the Development of Large Software Systems:

Concepts and Techniques,” Proceedings of the 9th International IEEE-

CS/SIGSOFT Conference on Software Engineering, Monterey CA, pp. 328-338

[108] Sage, A. P. and Armstrong J. E., 2000, Introduction to Systems Engineering,

Wiley-Interscience, New York, Chapters 1 and 2.

[109] Salisbury, M. W., 2003. “Putting Theory Into Practice To Build Knowledge

Management Systems,” Journal of Knowledge Management, 7(2), pp. 128-141.

[110] Schultze, U. and Boland, R.J., 2000, “Knowledge Management Technology And

The Reproduction Of Knowledge Work Practices,” Journal of Strategic

Information Systems, 9(2-3), pp. 193-212 .

[111] Shapiro, C., and Varian, H., 1999, Information Rules: A Strategic Guide to the

Network Economy, Harvard Business School Press, Boston, Chapter One.

[112] Shapiro, C., and Varian, H., 2003, “The Information Economy,” Intangible

Assets: Values, Measures, and Risks, J. Hand and B. Lev, eds., Oxford University

Press, London, pp. 48-62.

[113] Sheard, S. and Lake, J. G., 1998, “Systems Engineering Standards and Models

Compared”, http://www.software.org/pub/externalpapers/9804-2.html, site visited

28 December 2005.

[114] Singer, C. A., 2003, “Context-Specific Intellectual Capital—The Next Link In

The Knowledge Chain,” IBM Systems Journal, 42(3), pp. 446-461.

[115] Skyrme, D. J., 1998, “Knowledge Management Solutions – The IT Contribution”,

ACM SIGGROUP Bulletin, 19(1), pp. 34-48.

[116] Smith, G. V., and Parr, R. L., 2005, Intellectual Property: Valuation,

Exploitation, and Infringement Damages, John Wiley and Sons, New York,

Chapters 4, 15, and 20.

[117] Smits, M. and de Moor, A., 2004, “Measuring Knowledge Management

Effectiveness in Communities of Practice,” Proceedings of the 37th IEEE Hawaii

International Conference on System Sciences, Honolulu HI, pp. 236-244.

[118] Standard and Poor’s, 2006, CompuStat® Industrial Annual Reports,

http://wrds.wharton.upenn.edu/ds/comp/inda/, site accessed 19 January 2006.

[119] Standard and Poor’s, 2003, CompuStat® Users Guide, Standard and Poor’s,

Centennial CO, Chapters Three and Five,

http://wrds.wharton.upenn.edu/support/docs/compustat/user_guide/user_all.pdf,

site accessed 19 January 2006.

[120] Standfield, K., 2002, Intangible Management: Tools for Solving the Accounting

and Management Crisis, The Academic Press, San Diego, Chapters 12, 13, 14,

and 15.

[121] Stankosky, M., 2001, “A Systems Approach to Engineering and Managing a

Knowledge Management System,” unpublished presentation, the George

Washington University, Washington DC.

[122] Stankosky, M., 2001, “Enterprise Management Engineering,” unpublished

document, the George Washington University, Washington DC.

[123] Stankosky, M., and Baldanza, C., 2001, “A Systems Approach to Engineering a

Knowledge Management System,” Knowledge Management: A Catalyst for

Electronic Government, R.C. Barquin, A. Bennet, and S.G. Remez, eds.,

Management Concepts, Vienna VA, Chapter 10.

[124] Stephenson, M., and Davies, T., 1999, “Technology Support for Sustainable

Innovation,” The Knowledge Management Yearbook, 2000-2001, J. W. Cortada

and J.A. Woods, eds., Butterworth-Heinemann, Boston, pp. 329-326.

[125] Stewart, T. A., 1997, Intellectual Capital, Currency Doubleday, New York,

Chapters 7, 8, 10, and Appendix.

[126] Stewart, T. A., 2001, “The Fortune 500/Accounting Gets Radical, ” Fortune,

143(8), pp. 184-191.

[127] Sunassee, N. and Sewry, D., 2002, “A Theoretical Framework for Knowledge

Management Implementation”, Proceedings Of The 2002 ACM Annual Research

Conference Of The South African Institute Of Computer Scientists And

Information Technologists On Enablement Through Technology, Port Elizabeth,

South Africa, pp. 235 – 245.

[128] Sveiby, K. E., 1997, The New Organizational Wealth, Berrett-Koehler Publishers,

Inc., San Francisco, Chapters 1, 7, 11, 12, and 13.

[129] Tiwana, A. and Ramesh, B., 2001, “Integrating Knowledge on the Web,” IEEE

Internet Computing, 5(3), pp. 32-40.

[130] Tiwana, A., 2000, The Knowledge Management Toolkit, Prentice Hall Publishing,

Upper Saddle River NJ, Chapters 2, 7, 10, 11, and 14.

[131] University of Pennsylvania, 2006, Wharton Research Data Services,

http://wrds.wharton.upenn.edu/home/index.shtml, site accessed 19 January 2006.

[132] Upton, W., 2003, “Challenges From the New Economy for Business and Financial

Reporting,” Intangible Assets: Values, Measures, and Risks, J. Hand and B. Lev,

eds., Oxford University Press, London, pp. 469-486.

[133] Weitzel, J. R. and Kerschberg, L., 1989, “Developing Knowledge-Based Systems:

Reorganizing the System Development Life Cycle”, Communications of the ACM,

32(4), pp. 482-488.

[134] Wiig, K. M., 1999, “Introducing Knowledge Management Into the Enterprise,”

Knowledge Management Handbook, J. Liebowitz, ed., CRC Press, Boca Raton,

FL, Chapter Three.

[135] Yu, L., 2005, “Does Knowledge Sharing Pay Off?” MIT Sloan Management

Review, 46(3), 5.

[136] Zack, M. H., 1999, “Managing Organizational Ignorance”, The Knowledge

Management Yearbook, 2000-2001, J. W. Cortada and J.A. Woods, eds.,

Butterworth-Heinemann, Boston, pp. 353-374.

[137] Zhou, A. Z. and Fink, D., 2003, “The Intellectual Capital Web: A Systematic

Linking Of Intellectual Capital And Knowledge Management,” Journal of

Intellectual Capital, 4(1), pp. 34-48.

APPENDIX A -- SURVEY INSTRUMENT

APPENDIX B – FINANCIAL DATA VARIABLES AND DEFINITIONS

Each entry below gives the data source, the required data item, the database field name(s), and the definition of that item.

I/B/E/S [59], Actual Earnings Per Share (EPS) for Dec2002 (field: EPS). The reported annual earnings per share for a company for the year indicated. Earnings per share is a corporation’s net income from continuing operations divided by the weighted average number of shares outstanding for the year [60].

I/B/E/S [59], Actual Earnings Per Share (EPS) for Dec2003 (field: EPS). The reported annual earnings per share for a company for the year indicated. Earnings per share is a corporation’s net income from continuing operations divided by the weighted average number of shares outstanding for the year [60].

I/B/E/S [59], Actual Earnings Per Share (EPS) for Dec2004 (field: EPS). The reported annual earnings per share for a company for the year indicated. Earnings per share is a corporation’s net income from continuing operations divided by the weighted average number of shares outstanding for the year [60].

I/B/E/S [59], Mean Predicted Earnings Per Share (EPS) for Dec2005 (field: EPS). The arithmetic average of EPS estimates for the fiscal period indicated. Earnings per share is a corporation’s net income from continuing operations divided by the weighted average number of shares outstanding for the year [60].

I/B/E/S [59], Mean Predicted Earnings Per Share (EPS) for Dec2006 (field: EPS). The arithmetic average of EPS estimates for the fiscal period indicated. Earnings per share is a corporation’s net income from continuing operations divided by the weighted average number of shares outstanding for the year [60].

I/B/E/S [59], Mean Predicted Earnings Per Share (EPS) for Dec2007 (field: EPS). The arithmetic average of EPS estimates for the fiscal period indicated. Earnings per share is a corporation’s net income from continuing operations divided by the weighted average number of shares outstanding for the year [60].

I/B/E/S [59], Book Value Per Share, Dec2004 (field: BPS). The arithmetic average of BPS estimates for the fiscal period indicated [60].

CompuStat [118], Assets – Total/Liabilities and Stockholders’ Equity – Total (field: DATA6). This item represents current assets plus net property, plant, and equipment plus other noncurrent assets (including intangible assets, deferred charges, and investments and advances). Total liabilities and stockholders’ equity represents current liabilities plus long-term debt plus other long-term liabilities plus stockholders’ equity [119].

CompuStat [118], Property, Plant, and Equipment – Total (Net) (field: DATA8). This item represents the cost of tangible fixed property used in the production of revenue, less accumulated depreciation [119].

CompuStat [118], Price CY04 (field: DATA24). Annual prices are reported on a calendar year basis, regardless of the company’s fiscal yearend. Prices are adjusted for all stock splits and stock dividends that occurred in the calendar year [119].

CompuStat [118], Common Shares Outstanding (MM) (field: DATA25). This item represents the net number of all common shares outstanding at year-end for the annual file, excluding treasury shares [119].

CompuStat [118], Investments and Advances – Equity Method ($MM) (field: DATA31). This item represents long-term investments and advances to unconsolidated subsidiaries, affiliates, and joint ventures in which the parent company has significant control, as stated in the consolidated financial statements [119].

CompuStat [118], Investments and Advances – Other ($MM) (field: DATA32). This item represents long-term receivables and other investments and advances, including investments in affiliated companies, unconsolidated subsidiaries, and joint ventures in which no equity in earnings has yet been incurred [119].

CompuStat [118], EPS Basic (field: DATA53). This item represents basic earnings per share, including all extraordinary items and discontinued operations, adjusted for preferred dividends as reported by the company [119].

CompuStat [118], Shares for EPS (MM) (field: DATA54). Basic Shares represents the weighted average number of common shares outstanding and any securities that have been determined to be common stock equivalents that are used to calculate Earnings per Share (Basic), as reported by the company [119].

CompuStat [118], Common Equity – Total ($MM) (field: DATA60). This item represents the common shareholders’ interest in the company [119].

CompuStat [118], Depreciation, Depletion, and Amortization (field: DATA196). This item represents the total portion of asset cost written off by periodic depreciation charges since the assets were acquired [119].

Calculated, Gross Assets (DATA6 + DATA196). Assets – Total/Liabilities and Stockholders’ Equity – Total plus Depreciation, Depletion, and Amortization.

Calculated, Total Earnings (DATA53 * DATA54). EPS Basic multiplied by Shares for EPS.

Calculated, Book Value (field expression: DATA60*BPS). BPS multiplied by Common Shares Outstanding.

Calculated, Market Value (DATA24 * DATA25). Price CY04 multiplied by Common Shares Outstanding.
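The four calculated items above can be reproduced directly from the listed fields; the short sketch below mirrors those expressions using hypothetical input values (the record dictionary and its numbers are illustrative only).

# Sketch of the calculated items defined above, using the field names listed
# in this appendix. All input values are hypothetical.
record = {
    "DATA6": 37111.0,    # Assets - Total / Liabilities and Stockholders' Equity - Total ($MM)
    "DATA196": 5000.0,   # Depreciation, Depletion, and Amortization ($MM, hypothetical)
    "DATA53": 2.00,      # EPS Basic
    "DATA54": 900.0,     # Shares for EPS (MM)
    "DATA24": 35.0,      # Price CY04
    "DATA25": 950.0,     # Common Shares Outstanding (MM)
    "BPS": 13.0,         # Book Value Per Share (I/B/E/S)
}

gross_assets = record["DATA6"] + record["DATA196"]      # Gross Assets
total_earnings = record["DATA53"] * record["DATA54"]    # Total Earnings
# The appendix lists the Book Value field expression as DATA60*BPS but defines
# the item as BPS times Common Shares Outstanding; the textual definition is
# followed here.
book_value = record["BPS"] * record["DATA25"]
market_value = record["DATA24"] * record["DATA25"]      # Market Value

print(gross_assets, total_earnings, book_value, market_value)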

Additional Information:

Depreciation, Depletion, and Amortization (Accumulated) (Balance Sheet)

This item represents the total portion of asset cost written off by periodic

depreciation charges since the assets were acquired. Not all types of property, plant &

equipment included in Net Property, Plant and Equipment on the Balance Sheet, will

have a corresponding depreciation that can be collected in this data item. These types of

property, plant and equipment include funds for construction and property plant &

equipment whose depreciation is charged directly to the asset account.

This item includes:

1. Amortization of tangible assets

2. Amortization of tools and dies

3. Depletion

4. Depreciation

5. Reserve for possible future loss on disposals (when included in depreciation and

amortization)

Investments and Advances – Equity Method

This item represents long-term investments and advances to unconsolidated

subsidiaries, affiliates and joint ventures in which the parent company has significant

control, as stated in the consolidated financial statements.

This item includes:

1. All investments carried at equity (from 1972 forward)

2. Investments of 20 percent to 50 percent when the “cost” or “equity” method is not

mentioned

3. Receivables from investments carried at equity

This item excludes:

1. All investments carried at cost (included in Investments and Advances – Other)

2. Investments of more than one percent to 19 percent when the “cost” or “equity”

method is not mentioned (included in Investments and Advances – Other)

3. Joint ventures not yet operating (included in Investments and Advances – Other)

4. Joint ventures when there is no indication of the equity method (included in

Investments and Advances – Other)

Investments and Advances – Other

This item represents long-term receivables and other investments, and advances

including investments in affiliated companies, unconsolidated subsidiaries, and joint

ventures in which no equity in earnings has yet been incurred.

This item includes:

1. All investments carried at cost

2. 1 – 19% owned investments when the “cost” or “equity” method is not mentioned

3. Banks and savings and loans’ investment securities (available for sale and held for maturity)

4. Direct financing leases (when the company is the lessor)

5. Brokerage firms’

o Investments in securities and mortgage loans

o Seat on, or membership in, a securities exchange

6. Extractive industries’ oil and gas royalties

7. Finance companies’ assets held strictly for investment purposes

8. Investments and advances to former subsidiaries

9. Investments and advances to subsidiaries to be sold

10. Joint ventures not yet operating

11. Land or property held for resale (for companies whose primary business is not land development)

12. Leveraged leases (when the company is the lessor)

13. Long-term receivables (including receivables from parent)

14. Marketable securities (unless restricted or held for collateral)

15. Partnerships in which there is no significant control

16. Real estate investment trust companies’

o Equity investments in real estate

o Mortgage loans on real estate

o Property acquired through foreclosure

17. Royalty interests

18. Sales-type leases (when the company is the lessor)

19. Subleases (when the company is the lessor)

20. Sundry investments

21. Tax benefit leases

This item excludes:

1. Advances to sales staff (included in Assets – Other)

2. 20 – 50% owned investments when “cost” or “equity” method is not mentioned

3. Equity in consolidated joint ventures when held for loan collateral (included in

Assets – Other)

4. Film production companies’ film costs (included in Property, Plant, and

Equipment – Total [Net])

5. Investments carried at equity (included in Investments and Advances – Equity

Method)

6. Investments in a company’s own securities (included in Assets – Other)

7. Land development companies’ land held for development and sale (included in

Property, Plant, and Equipment – Total [Net])

8. Publishing companies’ royalty advances to authors (included in Deferred Charges)

9. Receivables from officers and directors, employees and all holders of equity

securities (included in Assets – Other)

APPENDIX C – FINANCIAL DATA RECEIVED

Table C-1 -- Total Physical and Financial Asset Values

(Company names have been deleted at their request)

Company¹      Gross Asset Value   Physical Asset Value   Financial Asset Value   Remaining Asset Value
Company A           309,647              42,906                  3,123                 263,618
Company B            37,111               4,331                  1,954                  30,826
Company C²           70,802              14,108                    -                    56,694
Company D            50,872               7,682                  9,856                  33,334
Company E           500,078              75,084                 30,614                 394,380
Company F            67,591              22,039                 23,048                  22,504
Company G²           61,416              15,768                    -                    45,648
Company H            60,531              33,506                  1,292                  25,733

¹ Company names have been removed for confidentiality purposes
² These companies may have used a different standard for reporting their financial assets
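The Remaining Asset Value column can be checked directly from the other three columns; a minimal sketch using Company B's row:

# Check: Remaining Asset Value = Gross Asset Value - Physical Asset Value - Financial Asset Value.
# Figures are Company B's row from Table C-1 ($MM).
gross, physical, financial = 37111, 4331, 1954
remaining = gross - physical - financial
print(remaining)   # 30826, matching the table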

Table C-2 -- Earnings Estimates and "Normalized" Earnings

(Company names have been deleted at their request)

Company    EPS (FY02)  EPS (FY03)  EPS (FY04)  EPS (FY05)  EPS (FY06)  EPS (FY07)  "Normalized" EPS

Company A 0.47 1.14 2.13 1.02 0.70 0.76 0.97

Company B 2.00 1.56 1.68 2.12 2.41 2.80 2.21

Company C 2.04 2.32 2.66 2.59 2.99 3.48 2.79

Company D 1.15 1.56 2.84 3.94 4.77 5.23 3.71

Company E 6.64 5.62 6.40 (4.11) 1.22 3.31 2.17

Company F 2.09 2.07 1.83 1.76 1.96 2.12 1.96

Company G 0.51 0.85 1.16 1.43 1.63 1.83 1.37

Company H 1.88 1.28 1.35 1.64 1.76 1.92 1.68
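The "Normalized" column in Table C-2 is consistent with a weighted average that counts the mean of the three forecast years twice as heavily as the mean of the three reported years; this weighting is inferred from the published figures rather than taken from the study's stated procedure. A short check against Company B's row:

# Check of the "Normalized" Earnings per Share column in Table C-2.
# Assumed weighting (inferred from the figures, not stated in the table):
# one-third weight on the mean of FY02-FY04 actuals, two-thirds on the
# mean of FY05-FY07 forecasts.
actual = [2.00, 1.56, 1.68]     # Company B, FY02-FY04
forecast = [2.12, 2.41, 2.80]   # Company B, FY05-FY07

normalized = (sum(actual) / 3) * (1 / 3) + (sum(forecast) / 3) * (2 / 3)
print(round(normalized, 2))     # 2.21, matching the table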

APPENDIX D – RAW SURVEY DATA

Table D-1 -- Raw Survey Data

A B C D E F G H

KMS Project Inception Phase

1. Enterprise Definition

1a. At the start of this effort, how well did the organization 3 2 2 5 3 3 4 1

define and illustrate the enterprise to be included in the

proposed KMS?

1b. How well did this definition identify key relationships? 3 2 2 3 2 4 4 1

1c. How well did this definition identify key stakeholders? 3 1 1 2 2 2 2 1

1d. How well was this definition documented? 3 1 3 4 2 2 4 1

1e. If this definition changed during the course of the project, 3 3 3 1 N/A 3 2 1

how well was this change reflected in the project

requirements (if no change, please select N/A)?

1. Enterprise Definition 3.00 1.80 2.20 3.00 2.25 2.80 3.20 1.00

2. KMS Project Value

2a. How well did the organization state the value 4 3 1 1 2 2 2 2

proposition for the KMS implementation?

2b. How well was this definition documented? 4 3 2 DK 2 4 2 1

2c. If this definition changed during the course of the project, 3 3 3 1 N/A 4 3 1

how well was this change in definition reflected in the

project requirements (if no change, please select N/A)?


A B C D E F G H

2. Value Proposition 3.67 3.00 2.00 1.00 2.00 3.33 2.33 1.33

3. Definition of Critical Intellectual Assets

3a. Before starting system development, how well were the 2 3 2 5 1 3 2 1

critical intellectual assets needed to make strategic

decisions documented?

3b. How well was this list used in the project inception 2 3 2 N/A 1 4 2 1

phase?

3c. How well was this list used in the system development 2 3 2 N/A 1 3 3 2

phase?

3d. How well was this list used in the deployment and 2 3 4 N/A 2 2 2 2

maintenance phase?

3e. Was this list changed at any point in the process? Y N/A DK N/A N Y Y Y

3f. Why and when? See Table N/A N/A N/A N/A See Table See Table See Table

F-1 F-1 F-1 F-1

3. Critical Asset Identification 2.00 3.00 2.50 5.00 1.25 3.00 2.25 1.50

4. Sources of Critical Intellectual Assets

4a. Before starting system development, how well were the 4 2 2 5 2 3 3 2

sources of the intellectual assets needed for strategic

decision-making documented?
A B C D E F G H

4b. How well was this list used in the project inception 3 2 2 N/A 2 3 2 2

phase?

4c. How well was this list used in the system development 2 2 2 N/A 1 2 2 2

phase?

4d. How well was this list used in the deployment and 2 3 3 N/A 2 4 2 2

maintenance phase?

4e. Was this list changed at any point in the process? Y N/A N/A N/A N Y DK Y

4f. Why and when? See Table N/A N/A N/A N/A See Table N/A See Table

F-1 F-1 F-1

4. Asset Source Identification 2.75 2.25 2.25 5.00 1.75 3.00 2.25 2.00

5. Strategic Objectives

5a. How well did the organization define (or have defined for 2 1 1 2 1 4 1 1

it) one or more strategic objectives?

5b. How well were these defined in measurable terms? 3 1 1 4 2 2 2 1

5c. Please list some of the metrics used to measure See Table N/A N/A N/A See Table See Table See Table See Table

progress toward these objectives F-1 F-1 F-1 F-1 F-1

5d. How well were these objectives defined prior to the 2 1 2 2 1 3 4 4

KMS project inception phase?

5e. How well were any of these objectives actually 4 1 4 N/A 2 2 4 4

measured prior to KMS project inception?


A B C D E F G H

5f. How well were any of these objectives actually 2 1 3 N/A 1 2 2 4

measured during KMS project development?

5g. How well were any of these objectives actually 2 1 2 N/A 2 4 2 1

measured after initial KMS deployment?

5h. (if applicable) How well were any of these objectives 2 1 2 N/A 1 4 2 1

actually measured during ongoing/steady-state

operations?

5. Strategic Objectives 2.43 1.00 2.14 2.67 1.43 3.00 2.43 2.29

6. Environmental Changes and Sensors

6a. How well did the organization identify and/or define 3 DK 3 5 1 4 DK 1

potential environmental changes (other than the potential

KMS project) that could impact achievement of the strategic

objectives??

6b. How many potential changes were identified (enter N/A or DK if needed)? N/A N/A N/A N/A 20 See Table F-1 N/A 10-20?

6c. How well did the organization define sensors or methods to monitor those changes? 4 N/A N/A N/A N/A 3 N/A 1

6d. What were those sensors? See Table N/A N/A N/A N/A See Table N/A See Table

F-1 F-1 F-1


A B C D E F G H

6e. How often were they monitored or assessed? yearly N/A N/A N/A N/A weekly N/A See Table

F-1

6f. Did any of the potential environmental changes actually Y N/A DK N/A Y N N/A Y

occur?

6g. How much did these changes affect accomplishment of the strategic objectives? N/A N/A DK N/A 2 5 N/A 1

6h. Did the sensors or methods work as expected? DK N/A N/A N/A DK Y N/A Y

6i. If they did not work as expected, please describe the reason(s) why they did not meet expectations (answer N/A or DK as needed) OK…stay tuned N/A N/A N/A N/A N/A N/A See Table F-1

6. Environmental Changes 3.50 N/A 3.00 5.00 1.50 4.00 N/A 1.00

7. Senior Leadership Commitment

7a. At KMS project inception, did the organization have the Y Y Y Y Y Y Y Y

commitment of senior leadership to the need for a KM

project?

7b. How strong were each of the following leadership factors

within the organization?

(i) Business Culture 1 2 1 1 2 2 2 2

(ii) Strategic Planning 1 1 1 3 4 3 2 1

(iii) Organizational Vision and Goals 1 1 1 2 2 2 2 1


A B C D E F G H

(iv) Organizational Climate 1 3 2 2 4 3 2 1

(v) Growth 1 1 2 3 4 3 2 1

(vi) Organizational Segmentation or Restructuring 2 3 2 3 3 2 2 2

(vii) Communications 2 4 2 2 2 1 2 1

7c. How well did each of the following leadership factors

within the organization support the KMS project?

(i) Business Culture 1 2 2 1 4 3 2 1

(ii) Strategic Planning 1 3 1 2 4 2 2 1

(iii) Organizational Vision and Goals 2 3 1 1 2 2 2 1

(iv) Organizational Climate 2 2 2 3 2 3 2 1

(v) Growth 2 2 1 3 4 3 2 1

(vi) Organizational Segmentation or Restructuring 2 3 2 3 3 2 2 1

(vii) Communications 2 3 2 1 2 1 2 1

7a. Leadership Support 1.50 2.36 1.57 2.14 3.00 2.29 2.00 1.14

8. Organizational Commitment

8a. At KMS project inception, did the organization support Y N N N Y Y Y Y

the implementation of a KM project?

8b. How strongly were one or more of the following

organizational paradigms implemented within your

organization?
A B C D E F G H

(i) Business Process Reengineering 2 2 1 2 2 3 2 1

(ii) Performance Metrics 2 1 1 N/A 1 2 2 1

(iii) Management by Objective 1 2 1 N/A 1 2 2 2

(iv) Total Quality Management/Leadership (TQM/TQL) 2 1 1 N/A 2 3 2 3

(v) Automated Workflow Management 4 5 3 N/A 4 4 3 1

(vi) Communications methods 4 4 2 1 1 2 3 1

8c. How well did one or more of the following organizational

paradigms support the KMS project?

(i) Business Process Reengineering 2 5 2 2 2 3 2 1

(ii) Performance Metrics 3 1 2 N/A 1 3 2 1

(iii) Management by Objective 2 5 2 N/A 1 3 2 2

(iv) Total Quality Management/Leadership (TQM/TQL) 1 5 3 N/A 2 2 2 3

(v) Automated Workflow Management 4 5 3 N/A 5 3 3 1

(vi) Communications methods 4 5 2 1 2 3 3 1

7b. Organizational Support 2.58 3.42 1.92 1.50 2.00 2.75 2.33 1.50

9. KMS Technologies

9a. Did the organization provide appropriate technology for a Y Y Y Y Y Y Y Y

KM system?

9b. Which of the following technologies were involved in the

KMS and if so, when were they introduced?


A B C D E F G H

(i) E-mail 2 2 2 2 N/A 2 2 2

(ii) On-Line Analytic Processing (OLAP) 3 N/A 2 N/A N/A N/A 2 2

(iii) Data Warehousing/Data Mart/Data Mining Tools 3 N/A 2 N/A 3 N/A 2 2

(iv) Search Engines N/A 2 2 2 3 3 3 2

(v) Other Decision Support Systems 4 N/A 2 N/A N/A N/A 3 2

(vi) Business Process Modeling 3 N/A 2 2 N/A N/A 3 2

(vii) Other Knowledge Management Tools 2 2 4 2 3 4 2 2

(viii) Other Communication Systems or Processes 3 N/A 2 2 N/A 3 3 2

9c. Did they help or hurt?

(i) E-mail 2 1 4 1 N/A 1 1 1

(ii) On-Line Analytic Processing (OLAP) 2 N/A 2 N/A N/A N/A 2 1

(iii) Data Warehousing/Data Mart/Data Mining Tools 2 N/A 1 N/A 2 N/A 1 1

(iv) Search Engines 3 1 1 2 1 4 2 2

(v) Other Decision Support Systems 3 N/A 1 N/A N/A N/A 1 2

(vi) Business Process Modeling 3 N/A 2 DK N/A N/A 2 2

(vii) Other Knowledge Management Tools 3 1 DK 2 2 1 2 2

(viii) Other Communication Systems or Processes 2 3 3 1 N/A 2 2 2

7c. Technology Support 2.67 1.71 2.13 1.78 2.33 2.50 2.06 1.81

10. Organizational Learning

10a. Is there a process for organizational learning? Y Y N Y Y N/A DK Y


A B C D E F G H

10b. How formal is the process? 2 2 N/A 1 2 4 N/A 1

10c. Is there an environment which encourages it? Y Y Y Y Y N Y Y

10d. How collaborative is the environment? 2 2 1 1 2 3 1 1

10e. How virtual is the environment? 2 2 2 1 1 2 1 2

10f. How much is knowledge exchange officially 3 5 2 1 4 3 3 1

rewarded/penalized?

10g. How much is knowledge exchange unofficially 3 2 2 1 2 3 2 1

rewarded/penalized?

7d. Learning Support 2.40 2.60 1.75 1.00 2.20 3.00 1.75 1.20

KMS Project Development Phase

11. Functional Definition

11a. During the system development stage, how well did the 2 2 1 5 2 3 2 2

organization list the functions needed to accomplish the

strategic objectives?

11b. How much was this list used in the system 3 2 1 N/A 2 4 2 1

development phase?

11c. How much was this list used in the deployment and 3 3 3 N/A 4 4 2 1

maintenance phase?

11d. Was this list changed at any point in the process? N DK DK N/A N Y DK N
A B C D E F G H

11e. Why and when? N/A N/A N/A N/A N/A See Table N/A N/A

F-1

8. Strategic Functions 2.67 2.33 1.67 5.00 2.67 3.67 2.00 1.33

12. Operational Process Definition

12a. During the system development stage, how well did the 2 2 1 5 2 1 2 2

organization list the operational processes required to

accomplish these functions?

12b. How much was this list used in the system 2 3 1 N/A 2 1 2 1

development phase?

12c. How much was this list used in the deployment and 2 3 3 N/A 4 1 2 1

maintenance phase?

12d. Was this list changed at any point in the process? Y DK Y N/A N Y N N

12e. Why and when? See Table N/A See Table N/A N/A See Table N/A N/A

F-1 F-1 F-1

9. Functional Processes 2.00 2.67 1.67 5.00 2.67 1.00 2.00 1.33

13. Intellectual Asset Listing

13a. During the system development stage, how well did the 2 2 2 5 2 3 2 1

organization list the intellectual assets required to

accomplish both functions and processes?


A B C D E F G H

13b. How much was this list used in the system 3 4 2 N/A 2 3 2 1

development phase?

13c. How much was this list used in the deployment and 3 5 2 N/A 4 3 2 1

maintenance phase?

13d. Was this list changed at any point in the process? N DK N N/A N N N Y

13e. Why and when? N/A N/A N/A N/A N/A N/A N/A See Table

F-1

10. Required Assets 2.67 3.67 2.00 5.00 2.67 3.00 2.00 1.00

14. Intellectual Asset Sourcing

14a. During the system development stage, how well did the 2 2 2 5 1 2 2 1

organization list the sources of these intellectual assets?

14b. How much was this list used in the system 2 2 2 N/A 2 3 2 1

development phase?

14c. How much was this list used in the deployment and 2 4 3 N/A 4 3 2 1

maintenance phase?

14d. Was this list changed at any point in the process? N DK N N/A N N N Y

14e. Why and when? N/A N/A N/A N/A N/A N/A N/A See Table

F-1

11. Asset Sources 2.00 2.67 2.33 5.00 2.33 2.67 2.00 1.00

15. KMS Implementation Strategies


A B C D E F G H

There are two types of strategies for implementing

knowledge management systems within an organization.

Both can use information technology to improve their

efficiency, and an organization can successfully use one or

both strategies.

• The first, codification, relies more on automated

knowledge elicitation, representation, and search functions

which are then stored in databases -- physical networking.

This is usually more appropriate for problems which can be

defined and which have repeatable answers or processes.

• The second, personalization, relies more on informal

dialogue, human interactions, and expertise identification --

social networking. This is usually more appropriate for

“fuzzy” problems where solutions may need to be extremely

customized.

15a. Did this KMS use a knowledge codification strategy? Y Y Y Y Y Y Y Y

15b. Did this KMS use a knowledge personalization Y Y Y Y Y Y Y Y

strategy?

15c. If both strategies were used for this KMS, how often 1 2 2 3 3 2 2 1

were they both used?


A B C D E F G H

12a. KM Strategy Identification 1.00 2.00 2.00 3.00 3.00 2.00 2.00 1.00

16. KMS Implementation Effectiveness

How well did the selected strategy/strategies address the

following areas:

16a. Assurance (is the knowledge correct?) 1 3 2 1 4 4 2 1

16b. Generation (does the knowledge exist?) 4 2 2 3 2 4 2 1

16c. Codification (how is the knowledge stored and 2 2 2 2 2 2 1 1

displayed?)

16d. Transfer (how can the knowledge get to the person 4 1 2 1 2 2 2 1

seeking it?)

16e. Use (how was the knowledge utilized?) 4 3 1 1 3 1 2 1

12b. KM Strategy Effectiveness 3.00 2.20 1.80 1.60 2.60 2.60 1.80 1.00

17. Formal Organizational Structure

17a. Did an accurate diagram or representation of the Y N N DK Y Y Y Y

formal organizational structure exist at any point in the KMS

implementation process?

17b. If not, why not? N/A N/A N/A DK N/A N/A N/A N/A

17c. How much was this used in the system development 2 4 DK DK 4 4 2 1

phase?
A B C D E F G H

17d. How much was this used in the deployment and 3 4 DK DK 4 4 2 1

maintenance phase?

17e. Was this changed at any point in the process? Y DK DK DK Y Y N N

17f. Why and when? See Table N/A N/A DK See Table See Table N/A N/A

F-1 F-1 F-1

13a. Formal Organizational Structure 2.50 4.00 N/A N/A 4.00 4.00 2.00 1.00

18. Informal Organizational Structure

18a. Was there a diagram or representation of the informal organizational structure? Y N Y DK N N N 1

18b. Did it show where it supported the formal one? Y N/A N DK N/A N/A N/A 1

18c. If not, why not? N/A N/A DK N/A N/A N/A N/A

18d. How much was this used in the system development phase? 2 N/A 2 DK N/A 1 DK 1

18e. How much was this used in the deployment and maintenance phase? 2 N/A 3 DK N/A 1 DK 1

18f. Was this changed at any point in the process? Y N/A N DK N/A N DK Y

18g. Why and when? See Table N/A N/A DK N/A N/A N/A See Table

F-1 F-1

13b. Informal Organizational Structure 2.00 N/A 2.50 N/A N/A 1.00 N/A 1.00

19. Technology Definition


A B C D E F G H

19a. Was there a list of the technology needed to support Y Y Y Y Y Y Y Y

the planned KM strategies?

19b. Was it used in support of the KMS implementation Y Y Y Y Y Y Y Y

effort?

19c. If not, why not? N/A N/A N/A N/A N/A N/A N/A N/A

19d. How much was this list used in the system 4 2 1 4 3 1 2 1

development phase?

19e. How much was this list used in the deployment and 4 2 1 4 3 1 2 1

maintenance phase?

19f. Was this changed at any point in the process? Y N/A N DK Y N Y Y

19g. Why and when? See Table N/A N/A N/A See Table N/A N/A See Table

F-1 F-1 F-1

14. KM Technology Listing 4.00 2.00 1.00 4.00 3.00 1.00 2.00 1.00

KMS Fielding and Support Phase

20. Integrative Management Plan

20a. How comprehensive was the management plan that 1 2 2 DK 2 3 2 1

was developed and documented for the KMS?

20b. How integrative was the management plan that was 1 DK 2 DK DK 2 2 1

developed and documented for the KMS?

20c. If there was no plan, why not? N/A N/A N/A N/A N/A N/A N/A N/A
A B C D E F G H

20d. If there was a plan, how well did the plan include any or

all of the following:

(i) Deliverables 1 1 1 N/A 2 2 1 1

(ii) Metrics for project control 1 1 1 N/A 1 2 1 2

(iii) Metrics for defining project success 1 1 1 N/A 1 3 2 2

(iv) Expected benefits 1 1 2 N/A 2 3 2 2

(v) Required resources (time, money, personnel) 4 1 2 N/A 1 3 2 1

(vi) Responsible persons 4 1 1 N/A 1 3 1 1

(vii) Schedule 2 1 1 N/A 1 1 1 1

20e. What other major topics were covered in the project or See Table N/A N/A N/A N/A See Table N/A See Table

system management plan? F-1 F-1 F-1

15. Integrative Management Plan 1.78 1.13 1.44 N/A 1.38 2.44 1.56 1.33

21. Legacy Integration Planning

21a. Was there a lot of planning to integrate the KMS with 5 4 3 3 3 5 3 3

legacy components?

21b. If there was no plan, why not? N/A N/A N/A N/A N/A See Table N/A N/A

F-1

21c. If there was legacy integration planning, how well did

the planning address any or all of the following:

(i) Current functions N/A N/A N/A N/A 2 N/A 4 1


A B C D E F G H

(ii) Current processes N/A N/A 2 N/A 2 N/A 4 1

(iii) Formal organizational structure N/A N/A N/A N/A 4 N/A 4 2

(iv) Informal organizational structure(s) N/A N/A 2 N/A 4 N/A 4 4

(v) IT systems N/A N/A 2 N/A 1 N/A 3 4

16. Legacy System Integration 5.00 4.00 2.25 3.00 2.67 5.00 3.67 2.50

(vi) Risk identification N/A N/A 3 N/A 2 N/A 3 1

(vii) Risk management and/or mitigation N/A N/A 2 N/A 2 N/A 3 1

17. Risk Management/Mitigation N/A N/A 2.50 N/A 2.00 N/A 3.00 1.00

21d. What other major topics were covered in the legacy N/A N/A N/A See Table N/A N/A N/A N/A

system integration plan? F-1

22. Change Management Planning

22a. Did the organization develop a plan for evaluating, N Y Y Y Y Y DK Y

implementing, and managing change?

22b. If not, why not? See Table N/A N/A N/A N/A N/A N/A N/A

F-1

22c. Has this plan been used? N/A Y Y Y N/A Y N/A Y

22d. If so, how successful would you say that the plan has N/A 2 2 1 N/A 1 N/A 1

been?
A B C D E F G H

22e. Please list reason(s) why or why not? N/A N/A N/A See Table N/A See Table N/A See Table

F-1 F-1 F-1

18. Change Management N/A 2.00 2.00 1.00 N/A 1.00 N/A 1.00

Other comments See Table See Table

F-1 F-1
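The per-section summary rows in Table D-1 (for example, “1. Enterprise Definition 3.00 1.80 2.20 3.00 2.25 2.80 3.20 1.00”) are consistent with a simple average of that section’s numeric Likert responses, with “N/A” and “DK” entries excluded. The following is a minimal illustrative sketch of that calculation (not code used in the study); the sample values are Company E’s responses to questions 1a-1e:

    # Illustrative only: average the numeric Likert responses in a survey section,
    # skipping non-numeric entries such as "N/A" and "DK".
    def area_score(responses):
        numeric = [r for r in responses if isinstance(r, (int, float))]
        return round(sum(numeric) / len(numeric), 2) if numeric else None

    # Company E, questions 1a-1e (Enterprise Definition): 3, 2, 2, 2, N/A
    print(area_score([3, 2, 2, 2, "N/A"]))   # 2.25, matching the "1. Enterprise Definition" row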
APPENDIX E – INTERMEDIATE STATISTICAL CALCULATIONS AND CONCLUSIONS

Table E- 1 -- Summary of All Question Responses

Survey Question   Relevant EME Area   Question Type   #Yes   #No   Average Score   Standard Deviation   Variance   #N/A   Don’t Know
(#Yes and #No apply to binary questions; Average Score, Standard Deviation, and Variance apply to Likert-scale questions)
KMS Project Inception Phase
1. Enterprise Definition
1a. At the start of this effort,
how well did the organization
Enterprise
define and illustrate the Likert - - 2.88 1.2464 1.55 - -
Definition
enterprise to be included in the
proposed KMS?
1b. How well did this definition Enterprise
Likert - - 2.63 1.0607 1.13 - -
identify key relationships? Definition
1c. How well did this definition Enterprise
Likert - - 1.75 0.7071 0.50 - -
identify key stakeholders? Definition
1d. How well was this definition Enterprise
Likert - - 2.50 1.1952 1.43 - -
documented? Definition
1e. If this definition changed
during the course of the project,
how well was this change Enterprise
Likert - - 2.29 0.9512 0.90 1 -
reflected in the project Definition
requirements (if no change,
please select N/A)?
2. KMS Project Value - -
2a. How well did the
Value
organization state the value Likert - - 2.13 0.9910 0.98 - -
Proposition
proposition for the KMS
implementation?
2b. How well was this definition Value
Likert - - 2.57 1.1339 1.29 - -
documented? Proposition
2c. If this definition changed
during the course of the project,
how well was this change in Value
Likert - - 2.57 1.1339 1.29 - 1
definition reflected in the project Proposition
requirements (if no change,
please select N/A)?
3. Definition of Critical
1 -
Intellectual Assets
3a. Before starting system
development, how well were the Identify Critical
critical intellectual assets Intellectual Likert - - 2.38 1.3025 1.70 - -
needed to make strategic Assets
decisions documented?
Identify Critical
3b. How well was this list used
Intellectual Likert - - 2.14 1.0690 1.14 - -
in the project inception phase?
Assets
3c. How well was this list used Identify Critical
in the system development Intellectual Likert - - 2.29 0.7559 0.57 - -
phase? Assets
3d. How well was this list used Identify Critical
Likert - - 2.43 0.7868 0.62 1 -
in the deployment and Intellectual
maintenance phase? Assets
Identify Critical
3e. Was this list changed at any
Intellectual Binary 4 1 - - - 1 -
point in the process?
Assets
Identify Critical
3f. Why and when? Intellectual Free Text - - - - - 1 -
Assets
4. Sources of Critical Intellectual
2 1
Assets
4a. Before starting system
development, how well were the Identify
sources of the intellectual Intellectual Likert - - 2.88 1.1260 1.27 4 -
assets needed for strategic Asset Sources
decision-making documented?
Identify
4b. How well was this list used
Intellectual Likert - - 2.29 0.4880 0.24 - -
in the project inception phase?
Asset Sources
4c. How well was this list used Identify
in the system development Intellectual Likert - - 1.86 0.3780 0.14 - -
phase? Asset Sources
4d. How well was this list used Identify
in the deployment and Intellectual Likert - - 2.57 0.7868 0.62 - -
maintenance phase? Asset Sources
4e. Was this list changed at any Identify Binary 3 1 - - - 1 -
point in the process? Intellectual
Asset Sources
Identify
4f. Why and when? Intellectual Free Text - - - - - 1 -
Asset Sources
5. Strategic Objectives 1 -
5a. How well did the
organization define (or have Strategic
Likert - - 1.63 1.0607 1.13 3 1
defined for it) one or more Objectives
strategic objectives?
5b. How well were these defined Strategic
Likert - - 2.00 1.0690 1.14 5 -
in measurable terms? Objectives
5c. Please list some of the
metrics used to measure Strategic
Free Text - - - - - - -
progress toward these Objectives
objectives
5d. How well were these
Strategic
objectives defined prior to the Likert - - 2.38 1.1877 1.41 - -
Objectives
KMS project inception phase?
5e. How well were any of these
Strategic
objectives actually measured Likert - - 3.00 1.2910 1.67 - -
Objectives
prior to KMS project inception?
5f. How well were any of these Strategic
Likert - - 2.14 1.0690 1.14 - -
objectives actually measured Objectives
during KMS project
development?
5g. How well were any of these
Strategic
objectives actually measured Likert - - 2.00 1.0000 1.00 3 -
Objectives
after initial KMS deployment?
5h. (if applicable) How well were
any of these objectives actually
Strategic
measured during Likert - - 1.86 1.0690 1.14 - -
Objectives
ongoing/steady-state
operations?
6. Environmental Changes and
1 -
Sensors
6a. How well did the
organization identify and/or
define potential environmental Identify
changes (other than the Environmental Likert - - 2.83 1.6021 2.57 1 -
potential KMS project) that Changes
could impact achievement of the
strategic objectives?
6b. How many potential Identify
changes were identified (enter Environmental Free Text - - - - - 1 -
N/A or DK if needed)? Changes
6c. How well did the Identify
Likert - - 2.67 1.5275 2.33 1 -
organization define sensors or Environmental
methods to monitor those Changes
changes?
Identify
6d. What were those sensors? Environmental Free Text - - - - - - -
Changes
Identify
6e. How often were they
Environmental Free Text - - - - - - -
monitored or assessed?
Changes
6f. Did any of the potential Identify
environmental changes actually Environmental Binary 3 1 - - - - 2
occur? Changes
6g. How much did these Identify
changes affect accomplishment Environmental Likert - - - - - 5 -
of the strategic objectives? Changes
Identify
6h. Did the sensors or methods
Environmental Binary - - - - - 5 -
work as expected?
Changes
6i. If they did not work as
expected, please describe the Identify
reason(s) why they did not meet Environmental Free Text - - - - - 5 -
expectations (answer N/A or DK Changes
as needed)
7. Senior Leadership
5 -
Commitment
7a. At KMS project inception,
did the organization have the
Leadership
commitment of senior Likert 8 - - - - 3 1
Pillar
leadership to the need for a KM
project?
7b. How strong were each of the
following leadership factors 4 1
within the organization?
Leadership
(i) Business Culture Likert - - 1.63 0.5175 0.27 4 2
Pillar
Leadership
(ii) Strategic Planning Likert - - 2.00 1.1952 1.43 6 -
Pillar
(iii) Organizational Vision and Leadership
Likert - - 1.50 0.5345 0.29 2 -
Goals Pillar
Leadership
(iv) Organizational Climate Likert - - 2.25 1.0351 1.07 - -
Pillar
Leadership
(v) Growth Likert - - 2.13 1.1260 1.27 - -
Pillar
(vi) Organizational Leadership
Likert - - 2.38 0.5175 0.27 - -
Segmentation or Restructuring Pillar
Leadership
(vii) Communications Likert - - 2.00 0.9258 0.86 - -
Pillar
7c. How well did each of the
- -
following leadership factors
within the organization support
the KMS project?
Leadership
(i) Business Culture Likert - - 2.00 1.0690 1.14 - -
Pillar
Leadership
(ii) Strategic Planning Likert - - 2.00 1.0690 1.14 - -
Pillar
(iii) Organizational Vision and Leadership
Likert - - 1.75 0.7071 0.50 - -
Goals Pillar
Leadership
(iv) Organizational Climate Likert - - 2.13 0.6409 0.41 - -
Pillar
Leadership
(v) Growth Likert - - 2.25 1.0351 1.07 - -
Pillar
(vi) Organizational Leadership
Likert - - 2.25 0.7071 0.50 - -
Segmentation or Restructuring Pillar
Leadership
(vii) Communications Likert - - 1.75 0.7071 0.50 - -
Pillar
8. Organizational Commitment - -
8a. At KMS project inception,
did the organization support the Organization
Likert 5 3 - - - - -
implementation of a KM Pillar
project?
8b. How strongly were one or
more of the following - -
organizational paradigms
implemented within your
organization?
(i) Business Process Organization
Likert - - 1.88 0.6409 0.41 - -
Reengineering Pillar
Organization
(ii) Performance Metrics Likert - - 1.43 0.5345 0.29 - -
Pillar
Organization
(iii) Management by Objective Likert - - 1.57 0.5345 0.29 - -
Pillar
(iv) Total Quality
Organization
Management/Leadership Likert - - 2.00 0.8165 0.67 - -
Pillar
(TQM/TQL)
(v) Automated Workflow Organization
Likert - - 3.43 1.2724 1.62 - -
Management Pillar
Organization
(vi) Communications methods Likert - - 2.25 1.2817 1.64 - -
Pillar
8c. How well did one or more of
the following organizational
- -
paradigms support the KMS
project?
(i) Business Process Organization
Likert - - 2.38 1.1877 1.41 - -
Reengineering Pillar
Organization
(ii) Performance Metrics Likert - - 1.86 0.8997 0.81 1 -
Pillar
(iii) Management by Objective Organization Likert - - 2.43 1.2724 1.62 1 -
Pillar
(iv) Total Quality
Organization
Management/Leadership Likert - - 2.57 1.2724 1.62 1 -
Pillar
(TQM/TQL)
(v) Automated Workflow Organization
Likert - - 3.43 1.3973 1.95 1 -
Management Pillar
Organization
(vi) Communications methods Likert - - 2.63 1.4079 1.98 - -
Pillar
9. KMS Technologies - -
9a. Did the organization provide
Technology
appropriate technology for a KM Likert 8 - - - - - -
Pillar
system?
9b. Which of the following
technologies were involved in
1 -
the KMS and if so, when were
they introduced?
Technology
(i) E-mail Likert - - 2.00 - - 1 -
Pillar
(ii) On-Line Analytic Processing Technology
Likert - - 2.25 0.5000 0.25 1 -
(OLAP) Pillar
(iii) Data Warehousing/Data Technology
Likert - - 2.40 0.5477 0.30 1 -
Mart/Data Mining Tools Pillar
Technology
(iv) Search Engines Likert - - 2.43 0.5345 0.29 - -
Pillar
(v) Other Decision Support Technology
Likert - - 2.75 0.9574 0.92 - -
Systems Pillar
Technology
(vi) Business Process Modeling Likert - - 2.40 0.5477 0.30 - -
Pillar
(vii) Other Knowledge Technology
Likert - - 2.63 0.9161 0.84 - -
Management Tools Pillar
(viii) Other Communication Technology
Likert - - 2.50 0.5477 0.30 - -
Systems or Processes Pillar
9c. Did they help or hurt? 1 -
Technology
(i) E-mail Likert - - 1.57 1.1339 1.29 4 -
Pillar
(ii) On-Line Analytic Processing Technology
Likert - - 1.75 0.5000 0.25 3 -
(OLAP) Pillar
(iii) Data Warehousing/Data Technology
Likert - - 1.40 0.5477 0.30 1 -
Mart/Data Mining Tools Pillar
Technology
(iv) Search Engines Likert - - 2.00 1.0690 1.14 4 -
Pillar
(v) Other Decision Support Technology
Likert - - 1.75 0.9574 0.92 3 -
Systems Pillar
Technology
(vi) Business Process Modeling Likert - - 2.25 0.5000 0.25 - -
Pillar
(vii) Other Knowledge Technology
Likert - - 1.86 0.6901 0.48 2 -
Management Tools Pillar
(viii) Other Communication Technology Likert - - 2.14 0.6901 0.48 - -
Systems or Processes Pillar
10. Organizational Learning 1 -
10a. Is there a process for
Learning Pillar Binary 5 1 - - - 4 -
organizational learning?
10b. How formal is the
Learning Pillar Likert - - 2.00 1.0954 1.20 3 -
process?
10c. Is there an environment
Learning Pillar Binary 7 1 - - - - -
which encourages it?
10d. How collaborative is the
Learning Pillar Likert - - 1.63 0.7440 0.55 4 -
environment?
10e. How virtual is the
Learning Pillar Likert - - 1.63 0.5175 0.27 3 1
environment?
10f. How much is knowledge
exchange officially Learning Pillar Likert - - 2.75 1.3887 1.93 - 1
rewarded/penalized?
10g. How much is knowledge
exchange unofficially Learning Pillar Likert - - 2.00 0.7559 0.57 1 -
rewarded/penalized?
KMS Project Development
- -
Phase
11. Functional Definition - -
11a. During the system Functions for
development stage, how well did Strategic Likert - - 2.38 1.1877 1.41 1 1
the organization list the Objectives
functions needed to accomplish
the strategic objectives?
11b. How much was this list Functions for
used in the system development Strategic Likert - - 2.14 1.0690 1.14 2 -
phase? Objectives
11c. How much was this list Functions for
used in the deployment and Strategic Likert - - 2.86 1.0690 1.14 - -
maintenance phase? Objectives
Functions for
11d. Was this list changed at
Strategic Binary 1 3 - - - - -
any point in the process?
Objectives
Functions for
11e. Why and when? Strategic Free Text - - - - - - -
Objectives
12. Operational Process
- -
Definition
12a. During the system
development stage, how well did
Functional
the organization list the Likert - - 2.13 1.2464 1.55 - -
Processes
operational processes required
to accomplish these functions?
12b. How much was this list
Functional
used in the system development Likert - - 1.71 0.7559 0.57 - -
Processes
phase?
12c. How much was this list
Functional
used in the deployment and Likert - - 2.29 1.1127 1.24 - -
Processes
maintenance phase?
12d. Was this list changed at Functional
Binary 3 2 - - - - -
any point in the process? Processes
Functional
12e. Why and when? Free Text - - - - - - -
Processes
13. Intellectual Asset Listing 1 -
13a. During the system
development stage, how well did
the organization list the
Asset Listing Likert - - 2.38 1.1877 1.41 1 -
intellectual assets required to
accomplish both functions and
processes?
13b. How much was this list
used in the system development Asset Listing Likert - - 2.43 0.9759 0.95 1 3
phase?
13c. How much was this list
used in the deployment and Asset Listing Likert - - 2.86 1.3452 1.81 7 -
maintenance phase?
13d. Was this list changed at
Asset Listing Binary 1 5 - - - - -
any point in the process?
13e. Why and when? Asset Listing Free Text - - - - - - -
14. Intellectual Asset Sourcing - -
14a. During the system
development stage, how well did
Asset Sourcing Likert - - 2.13 1.2464 1.55 1 -
the organization list the sources
of these intellectual assets?
14b. How much was this list
used in the system development Asset Sourcing Likert - - 2.00 0.5774 0.33 1 -
phase?
14c. How much was this list
used in the deployment and Asset Sourcing Likert - - 2.71 1.1127 1.24 1 1
maintenance phase?
14d. Was this list changed at
Asset Sourcing Binary 1 5 - - - 5 -
any point in the process?
14e. Why and when? Asset Sourcing Free Text - - - - - - -
15. KMS Implementation
- -
Strategies
15a. Did this KMS use a
KM Strategy
knowledge codification Likert 8 - - - - - -
Identification
strategy?
15b. Did this KMS use a
KM Strategy
knowledge personalization Likert 8 - - - - 1 -
Identification
strategy?
15c. If both strategies were
KM Strategy
used for this KMS, how often Likert - - 2.00 0.7559 0.57 1 -
Identification
were they both used?
16. KMS Implementation
1 1
Effectiveness
How well did the selected
strategy/strategies address the 7 -
following areas:
16a. Assurance (is the KM Strategy
Likert - - 2.25 1.2817 1.64 - -
knowledge correct?) Identification
16b. Generation (does the KM Strategy
Likert - - 2.50 1.0690 1.14 - -
knowledge exist?) Identification
16c. Codification (how is the
KM Strategy
knowledge stored and Likert - - 1.75 0.4629 0.21 - -
Identification
displayed?)
16d. Transfer (how can the
KM Strategy
knowledge get to the person Likert - - 1.88 0.9910 0.98 1 -
Identification
seeking it?)
16e. Use (how was the KM Strategy
Likert - - 2.00 1.1952 1.43 1 -
knowledge utilized?) Identification
17. Formal Organizational
1 1
Structure
17a. Did an accurate diagram or
representation of the formal
Organizational
organizational structure exist at Binary 5 2 - - - 7 -
Structures
any point in the KMS
implementation process?
Organizational
17b. If not, why not? Free Text - - - - - - -
Structures
17c. How much was this used in
Organizational
the system development Likert - - 2.83 1.3292 1.77 - -
Structures
phase?
17d. How much was this used in
Organizational
the deployment and Likert - - 3.00 1.2649 1.60 - -
Structures
maintenance phase?
17e. Was this changed at any Organizational
Binary 3 2 - - - - -
point in the process? Structures
Organizational
17f. Why and when? Free Text - - - - - - -
Structures
18. Informal Organizational
- -
Structure
18a. Was there a diagram or
Organizational
representation of the informal Binary 2 4 - - - - -
Structures
organizational structure?
18b. Did it show where it Organizational
Binary 1 1 - - - - -
supported the formal one? Structures
Organizational
18c. If not, why not? Free Text - - - - - - -
Structures
18d. How much was this used in
Organizational
the system development Likert - - 1.50 0.5774 0.33 - -
Structures
phase?
18e. How much was this used in
Organizational
the deployment and Likert - - 1.75 0.9574 0.92 - -
Structures
maintenance phase?
18f. Was this changed at any Organizational
Binary 2 2 - - - - -
point in the process? Structures
Organizational
18g. Why and when? Free Text - - - - - - -
Structures
19. Technology Definition - -
19a. Was there a list of the
KM
technology needed to support Binary 8 - - - - - -
Technologies
the planned KM strategies?
19b. Was it used in support of KM
Binary 8 - - - - - -
the KMS implementation effort? Technologies
KM
19c. If not, why not? Free Text - - - - - - -
Technologies
19d. How much was this list
KM
used in the system development Likert - - 2.25 1.2817 1.64 - -
Technologies
phase?
19e. How much was this list
KM
used in the deployment and Likert - - 2.25 1.2817 1.64 - 1
Technologies
maintenance phase?
19f. Was this changed at any KM
Binary 4 2 - - - 7 1
point in the process? Technologies
19g. Why and when? KM Free Text - - - - - - 2
Technologies
KMS Fielding and Support
- 2
Phase
20. Integrative Management
- 3
Plan
20a. How comprehensive was
Integrative
the management plan that was
Management Likert - - 1.86 0.6901 0.48 4 1
developed and documented for
Plan
the KMS?
20b. How integrative was the
Integrative
management plan that was
Management Likert - - 1.60 0.5477 0.30 2 -
developed and documented for
Plan
the KMS?
Integrative
20c. If there was no plan, why
Management Free Text - - - - - - -
not?
Plan
20d. If there was a plan, how
well did the plan include any or - 1
all of the following:
Integrative
(i) Deliverables Management Likert - - 1.29 0.4880 0.24 4 1
Plan
Integrative
(ii) Metrics for project control Likert - - 1.29 0.4880 0.24 6 1
Management
Plan
Integrative
(iii) Metrics for defining project
Management Likert - - 1.57 0.7868 0.62 2 2
success
Plan
Integrative
(iv) Expected benefits Management Likert - - 1.86 0.6901 0.48 2 2
Plan
Integrative
(v) Required resources (time,
Management Likert - - 2.00 1.1547 1.33 2 2
money, personnel)
Plan
Integrative
(vi) Responsible persons Management Likert - - 1.71 1.2536 1.57 5 1
Plan
Integrative
(vii) Schedule Management Likert - - 1.14 0.3780 0.14 4 -
Plan
20e. What other major topics Integrative
were covered in the project or Management Free Text - - - - - - -
system management plan? Plan
21. Legacy Integration Planning - -
21a. Was there a lot of planning
Legacy
to integrate the KMS with legacy Likert - - 3.63 0.9161 0.84 - -
Integration
components?
21b. If there was no plan, why Legacy Free Text - - - - - 8 -
not? Integration
21c. If there was legacy
integration planning, how well
- -
did the planning address any or
all of the following:
Legacy
(i) Current functions Likert - - 2.33 1.5275 2.33 - -
Integration
Legacy
(ii) Current processes Likert - - 2.25 1.2583 1.58 1 1
Integration
(iii) Formal organizational Legacy
Likert - - 3.33 1.1547 1.33 5 -
structure Integration
(iv) Informal organizational Legacy
Likert - - 3.50 1.0000 1.00 - -
structure(s) Integration
Legacy
(v) IT systems Likert - - 2.50 1.2910 1.67 - -
Integration
Legacy
(vi) Risk identification Likert - - 2.25 0.9574 0.92 - -
Integration
(vii) Risk management and/or Legacy
Likert - - 2.00 0.8165 0.67 - 1
mitigation Integration
21d. What other major topics
Legacy
were covered in the legacy Free Text - - - - - - 2
Integration
system integration plan?
22. Change Management
8 -
Planning
22a. Did the organization
develop a plan for evaluating, Change
Binary 6 1 - - - - -
implementing, and managing Management
change?
Change
22b. If not, why not? Free Text - - - - - 1 -
Management
Change
22c. Has this plan been used? Binary - - - - - 1 -
Management
22d. If so, how successful would
Change
you say that the plan has Likert - - 1.40 0.5477 0.30 1 -
Management
been?
22e. Please list reason(s) why Change
Free Text - - - - - 1 -
or why not? Management
Table E- 2 -- Company-level EME Survey Area Responses

EME Area A B C D E F G H Avg StdDev Rank N/A

1. Enterprise Definition 3.00 1.80 2.20 3.00 2.25 2.80 3.20 1.00 2.61 0.5232 15 0

2. Value Proposition 3.67 3.00 2.00 1.00 2.00 3.33 2.33 1.33 2.48 0.9201 13 0

3. Critical Asset Identification 2.00 3.00 2.50 5.00 1.25 3.00 2.25 1.50 2.71 1.1764 16 0

4. Asset Source Identification 2.75 2.25 2.25 5.00 1.75 3.00 2.25 2.00 2.75 1.0704 18 0

5. Strategic Objectives 2.43 1.00 2.14 2.67 1.43 3.00 2.43 2.29 2.16 0.7056 7 0

6. Environmental Changes 3.50 N/A 3.00 5.00 1.50 4.00 N/A 1.00 3.40 1.2942 22 2

7a. Leadership Support 1.50 2.36 1.57 2.14 3.00 2.29 2.00 1.14 2.12 0.5096 5 0

7b. Organizational Support 2.58 3.42 1.92 1.50 2.00 2.75 2.33 1.50 2.36 0.6305 10 0

7c. Technology Support 2.67 1.71 2.13 1.78 2.33 2.50 2.06 1.81 2.17 0.3551 8 0

7d. Learning Support 2.40 2.60 1.75 1.00 2.20 3.00 1.75 1.20 2.10 0.6602 4 0

8. Strategic Functions 2.67 2.33 1.67 5.00 2.67 3.67 2.00 1.33 2.86 1.1362 19 0

9. Functional Processes 2.00 2.67 1.67 5.00 2.67 1.00 2.00 1.33 2.43 1.2724 11 0

10. Required Assets 2.67 3.67 2.00 5.00 2.67 3.00 2.00 1.00 3.00 1.0541 20 0

11. Asset Sources 2.00 2.67 2.33 5.00 2.33 2.67 2.00 1.00 2.71 1.0440 16 0

12a. KM Strategy Identification 1.00 2.00 2.00 3.00 3.00 2.00 2.00 1.00 2.14 0.6901 6 0

12b. KM Strategy Effectiveness 3.00 2.20 1.80 1.60 2.60 2.60 1.80 1.00 2.23 0.5219 9 0

13a. Formal Organizational Structure 2.50 4.00 N/A N/A 4.00 4.00 2.00 1.00 3.30 0.9747 21 2

13b. Informal Organizational Structure 2.00 N/A 2.50 N/A N/A 1.00 N/A 1.00 1.83 0.7638 3 4

14. KM Technology Listing 4.00 2.00 1.00 4.00 3.00 1.00 2.00 1.00 2.43 1.2724 11 0

15. Integrative Management Plan 1.78 1.13 1.44 N/A 1.38 2.44 1.56 1.33 1.62 0.4571 2 1

16. Legacy System Integration 5.00 4.00 2.25 3.00 2.67 5.00 3.67 2.50 3.65 1.0891 23 0

17. Risk Management/Mitigation N/A N/A 2.5 N/A 2.00 N/A 3.00 1.00 2.50 0.5000 14 4

18. Change Management N/A 2.00 2.00 1.00 N/A 1.00 N/A 1.00 1.50 0.5774 1 3

EME Score 2.62 2.49 2.03 3.19 2.32 2.68 2.23 1.32

EME Rank 5 4 1 7 3 6 2 1
Table E-3 -- Company Ranks by EME Criteria

(Companies that answered “N/A” or “DK” for an entire section are shown as “-”; these ranks also appear in the “EME Rank” columns of Tables E-4 through E-26.)

EME Criterion                                A   B   C   D   E   F   G   H

1 – Enterprise Definition                    6   2   3   6   4   5   8   1
2 – Value Proposition                        8   6   3   1   3   7   5   2
3 – Critical Asset Identification            3   6   5   8   1   6   4   2
4 – Asset Source Identification              6   3   3   8   1   7   3   2
5 – Strategic Objectives                     5   1   3   7   2   8   5   4
6 – Environmental Changes                    4   -   3   6   2   5   -   1
7a – Leadership Support                      2   7   3   5   8   6   4   1
7b – Organizational Support                  6   8   3   1   4   7   5   1
7c – Technology Support                      8   1   5   2   6   7   4   3
7d – Learning Support                        6   7   3   1   5   8   3   2
8 – Strategic Functions                      5   4   2   8   5   7   3   1
9 – Functional Processes                     4   6   3   8   6   1   4   2
10 – Required Assets                         4   7   2   8   4   6   2   1
11 – Asset Sources                           2   6   4   8   4   6   2   1
12a – KM Strategy Identification             1   3   3   7   7   3   3   1
12b – KM Strategy Effectiveness              8   5   3   2   6   6   3   1
13a – Formal Organizational Structure        3   4   -   -   4   4   2   1
13b – Informal Organizational Structure      3   -   4   -   -   1   -   1
14 – KM Technology Listing                   7   4   1   7   6   1   4   1
15 – Integrative Management Plan             6   1   4   -   3   7   5   2
16 – Legacy System Integration               7   6   1   4   3   7   5   2
17 – Risk Management/Mitigation              -   -   3   -   2   -   4   1
18 – Change Management                       -   4   4   1   -   1   -   1
Table E- 4 -- H3 Statistical Comparison, “EME1 - Enterprise Definition”

Company Name EME Rank KC Rank CV Rank

Company A 6 7 7

Company B 2 5 2

Company C 3 2 5

Company D 6 6 8

Company E 4 8 3

Company F 5 4 6

Company G 8 1 1

Company H 1 3 3

Table E- 5 -- H3 Statistical Comparison, “EME2 - Value Proposition”

Company Name EME Rank KC Rank CV Rank

Company A 8 7 7

Company B 6 5 2

Company C 3 2 5

Company D 1 6 8

Company E 3 8 3

Company F 7 4 6

Company G 5 1 1

Company H 2 3 3

Table E- 6 -- H3 Statistical Comparison, “EME3 – Critical Asset Identification”

Company Name EME Rank KC Rank CV Rank

Company A 3 7 7

Company B 6 5 2

Company C 5 2 5

Company D 8 6 8

Company E 1 8 3

Company F 6 4 6

Company G 4 1 1

Company H 2 3 3

Table E- 7 -- H3 Statistical Comparison, “EME4 – Asset Source Identification”

Company Name EME Rank KC Rank CV Rank

Company A 6 7 7

Company B 3 5 2

Company C 3 2 5

Company D 8 6 8

Company E 1 8 3

Company F 7 4 6

Company G 3 1 1

Company H 2 3 3

Table E- 8 -- H3 Statistical Comparison, “EME5 – Strategic Objectives”

Company Name EME Rank KC Rank CV Rank

Company A 5 7 7

Company B 1 5 2

Company C 3 2 5

Company D 7 6 8

Company E 2 8 3

Company F 8 4 6

Company G 5 1 1

Company H 4 3 3

Table E- 9 -- H3 Statistical Comparison, “EME6 – Environmental Changes”

Companies that answered “N/A” or “DK” for an entire section are left blank

Company Name EME Rank KC Rank CV Rank

Company A 4 7 7

Company B 5 2

Company C 3 2 5

Company D 6 6 8

Company E 2 8 3

Company F 5 4 6

Company G 1 1

Company H 1 3 3

Table E- 10 -- H3 Statistical Comparison, “EME7a – Leadership Support”

Company Name EME Rank KC Rank CV Rank

Company A 2 7 7

Company B 7 5 2

Company C 3 2 5

Company D 5 6 8

Company E 8 8 3

Company F 6 4 6

Company G 4 1 1

Company H 1 3 3

Table E- 11 -- H3 Statistical Comparison, “EME7b – Organizational Support”

Company Name EME Rank KC Rank CV Rank

Company A 6 7 7

Company B 8 5 2

Company C 3 2 5

Company D 1 6 8

Company E 4 8 3

Company F 7 4 6

Company G 5 1 1

Company H 1 3 3

Table E- 12 -- H3 Statistical Comparison, “EME7c – Technology Support”

Company Name EME Rank KC Rank CV Rank

Company A 8 7 7

Company B 1 5 2

Company C 5 2 5

Company D 2 6 8

Company E 6 8 3

Company F 7 4 6

Company G 4 1 1

Company H 3 3 3

Table E- 13 -- H3 Statistical Comparison, “EME 7d – Learning Support”

Company Name EME Rank KC Rank CV Rank

Company A 6 7 7

Company B 7 5 2

Company C 3 2 5

Company D 1 6 8

Company E 5 8 3

Company F 8 4 6

Company G 3 1 1

Company H 2 3 3

Table E- 14 -- H3 Statistical Comparison, “EME8 – Strategic Functions”

Company Name EME Rank KC Rank CV Rank

Company A 5 7 7

Company B 4 5 2

Company C 2 2 5

Company D 8 6 8

Company E 5 8 3

Company F 7 4 6

Company G 3 1 1

Company H 1 3 3

Table E- 15 -- H3 Statistical Comparison, “EME9 – Functional Processes”

Company Name EME Rank KC Rank CV Rank

Company A 4 7 7

Company B 6 5 2

Company C 3 2 5

Company D 8 6 8

Company E 6 8 3

Company F 1 4 6

Company G 4 1 1

Company H 2 3 3

Table E- 16 -- H3 Statistical Comparison, “EME10 – Required Assets”

Company Name EME Rank KC Rank CV Rank

Company A 4 7 7

Company B 7 5 2

Company C 2 2 5

Company D 8 6 8

Company E 4 8 3

Company F 6 4 6

Company G 2 1 1

Company H 1 3 3

Table E- 17 -- H3 Statistical Comparison, “EME11 – Asset Sources”

Company Name EME Rank KC Rank CV Rank

Company A 2 7 7

Company B 6 5 2

Company C 4 2 5

Company D 8 6 8

Company E 4 8 3

Company F 6 4 6

Company G 2 1 1

Company H 1 3 3

Table E- 18 -- H3 Statistical Comparison, “EME12a – KM Strategy Identification”

Company Name EME Rank KC Rank CV Rank

Company A 1 7 7

Company B 3 5 2

Company C 3 2 5

Company D 7 6 8

Company E 7 8 3

Company F 3 4 6

Company G 3 1 1

Company H 1 3 3

Table E- 19 -- H3 Statistical Comparison, “EME12b – KM Strategy Effectiveness”

Company Name EME Rank KC Rank CV Rank

Company A 8 7 7

Company B 5 5 2

Company C 3 2 5

Company D 2 6 8

Company E 6 8 3

Company F 6 4 6

Company G 3 1 1

Company H 1 3 3

Table E- 20 -- H3 Statistical Comparison, “EME 13a – Formal Organizational Structure”

Companies that answered “N/A” or “DK” for an entire section are left blank

Company Name EME Rank KC Rank CV Rank

Company A 3 7 7

Company B 4 5 2

Company C 2 5

Company D 6 8

Company E 4 8 3

Company F 4 4 6

Company G 2 1 1

Company H 1 3 3

Table E- 21 -- H3 Statistical Comparison, “EME 13b – Informal Organizational Structure”

Companies that answered “N/A” or “DK” for an entire section are left blank

Company Name EME Rank KC Rank CV Rank

Company A 3 7 7

Company B 5 2

Company C 4 2 5

Company D 6 8

Company E 8 3

Company F 1 4 6

Company G 1 1

Company H 1 3 3

Table E- 22 -- H3 Statistical Comparison, “EME 14 – KM Technology Listing”

Company Name EME Rank KC Rank CV Rank

Company A 7 7 7

Company B 4 5 2

Company C 1 2 5

Company D 7 6 8

Company E 6 8 3

Company F 1 4 6

Company G 4 1 1

Company H 1 3 3

Table E- 23 -- H3 Statistical Comparison, “EME 15 – Integrative Management Plan”

Companies that answered “N/A” or “DK” for an entire section are left blank

Company Name EME Rank KC Rank CV Rank

Company A 6 7 7

Company B 1 5 2

Company C 4 2 5

Company D 6 8

Company E 3 8 3

Company F 7 4 6

Company G 5 1 1

Company H 2 3 3

Table E- 24 -- H3 Statistical Comparison, “EME 16 – Legacy System Integration”

Company Name EME Rank KC Rank CV Rank

Company A 7 7 7

Company B 6 5 2

Company C 1 2 5

Company D 4 6 8

Company E 3 8 3

Company F 7 4 6

Company G 5 1 1

Company H 2 3 3

Table E- 25 -- H3 Statistical Comparison, “EME 17 – Risk Management/Mitigation”

Companies that answered “N/A” or “DK” for an entire section are left blank

Company Name EME Rank KC Rank CV Rank

Company A 7 7

Company B 5 2

Company C 3 2 5

Company D 6 8

Company E 2 8 3

Company F 4 6

Company G 4 1 1

Company H 1 3 3

Table E- 26 -- H3 Statistical Comparison, “EME 18 – Change Management”

Companies that answered “N/A” or “DK” for an entire section are left blank

Company Name EME Rank KC Rank CV Rank

Company A 7 7

Company B 4 5 2

Company C 4 2 5

Company D 1 6 8

Company E 8 3

Company F 1 4 6

Company G 1 1

Company H 1 3 3
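
Tables E-4 through E-26 pair each criterion’s EME ranks with the KC and CV ranks of the same eight companies for the H3 comparison. One common way to quantify agreement between two such rankings is Spearman’s rank correlation; the sketch below is an assumption about the general approach rather than the exact statistic used in the dissertation, applied to the “EME1 - Enterprise Definition” ranks of Table E-4:

    # Illustrative only: Spearman rank correlation between two company rankings.
    from scipy.stats import spearmanr

    eme_rank = [6, 2, 3, 6, 4, 5, 8, 1]   # Companies A-H, EME rank (Table E-4)
    kc_rank = [7, 5, 2, 6, 8, 4, 1, 3]    # Companies A-H, KC rank (Table E-4)

    rho, p_value = spearmanr(eme_rank, kc_rank)
    print(rho, p_value)   # rho near +1 indicates strong agreement; near 0, little agreement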

APPENDIX F -- ADDITIONAL GRAPHS AND CHARTS

Figure F-1 -- EME/KC/CVR Rank Correlations

[Chart: “Rank Correlations” -- KC Rank, CVR Rank, and EME Rank plotted by Company; y-axis: Rank]

Figure F-2 -- Trendlines -- EME and CVR

[Chart: CVR Score and Overall EME Score (raw scores) plotted by Company, with linear trendlines for each series; y-axis: Raw Scores]
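The “Linear (…)” series in Figure F-2 appear to be least-squares trendlines fitted to each score series. A minimal illustrative sketch of such a fit, applied to the overall EME scores reported in Table E-2 (the CVR series would be fitted the same way):

    # Illustrative only: fit a linear (least-squares) trendline to the overall EME scores.
    import numpy as np

    eme_scores = [2.62, 2.49, 2.03, 3.19, 2.32, 2.68, 2.23, 1.32]   # Companies A-H (Table E-2)
    x = np.arange(1, len(eme_scores) + 1)

    slope, intercept = np.polyfit(x, eme_scores, 1)   # degree-1 polynomial = straight line
    print(slope, intercept)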

Figure F-3 -- Correlation Between KC and CVR Rankings

[Chart: “Correlations Between KC and CVR Rankings” -- KC Rank and CVR Rank plotted by Company; y-axis: Rank]

Table F- 1 -- Free Text Comments From EME Survey

Question Answers

3. Definition of Critical Intellectual Assets


(Before starting system development, how well x Evolution of the definition by learning thorough examination and adding
were the critical intellectual assets needed to stakeholders
make strategic decisions documented? Was x As with any KM or Collaborative project, the Asset Collection will change
this list changed at any point in the process?) due to the enormous collection of hidden information. As users
3f. Why and when? understand the power, they are more adept at adding content they would
otherwise not do without the tools.
x As KM concepts grew (became known to other groups/departments)
there was a need to establish an Enterprisewide KM department.
x Our 'system list' (from a Training perspective, our job and task analysis,
and task-to-training matrix) is modified anytime a process,
organizational, or behavioral change is required. The need may be
identified through discovery of a knowledge and/or skill deficiency,
assessment of process effectiveness, or job and/or training observations.
Changes to the 'system list' may also include revisions of training
materials, policy manuals, and procedures.
4. Sources of Critical Intellectual Assets
(Before starting system development, how well x Again evolution from learning and adding stakeholders to increase
were the sources of the intellectual assets understanding of KM condition
needed for strategic decision-making x This particular project (Collaborative KM) was part of an overall
documented? Was this list changed at any point Information Worker project. Ongoing changes is part of the new value
in the process?) 4f. Why and when? creation for KM type projects. Most of the changes occurred during the

deployment vs. the integration points


x Refer to the explanation for item 3.e.
5. Strategic Objectives
(How well did the organization define (or have x cost, content management
defined for it) one or more strategic objectives? ) x not able to provide to outside organizations, sorry. In general, they dealt
5c. Please list some of the metrics used to with contributions to the KM environment and measuring the use/reuse
measure progress toward these objectives of that knowledge.
x Content Based Metrics
Usage Based Metrics
Standard ROI Measures
Standard PM Measures
x Cost savings; cost avoidance; efficiency gains; new capabilities…
x To provide a more rigorous assessment of training performance,
measures have been designed to assess performance in accreditation
criteria for each accreditation program and Training. Measures were
created for each accredited program as appropriate to that discipline.
Accreditation criteria are based on six 'objectives' against which a plant's
Training and line organizations' use of Training as a performance
improvement tool is measured. The objectives are: (1) Training for
Performance Improvement - Training is used as a strategic tool to
provide highly skilled and knowledgeable personnel for safe, reliable
operations and to support performance improvement; (2) Management
of Training Processes and Resources - Management is committed to

and accountable for developing and sustaining training programs that


meet station needs. Resources and an infrastructure of training
processes are applied consistent with these needs to support training
program sustainability; (3)Initial Training and Qualification - The initial
training program uses a systematic approach to training to provide
personnel with the necessary knowledge and skills to independently
perform their job assignments; (4)Continuing Training -Continuing
training uses a systematic approach to training to refresh and improve
the application of job-related knowledge and skills and to meet
management expectations for personnel and plant performance; (5)
Conduct of Training and Trainee Evaluation - Training is conducted using
methods and settings that support trainee attainment of job-related
knowledge and skills. Achievement of learning is confirmed with reliable
and valid evaluation methods; and (6) Training Effectiveness Evaluation -
Evaluation methods are used systematically to assess training
effectiveness and modify training to improve personnel and plant
performance. Each objective has several metrics against which our
performance is measured. Less than satisfactory performance for any
metric requires the derivation of corrective actions to resolve the issue
and prevent recurrence. The following are the metrics used for the
assessment of Objective 1 (Training for Performance Improvement):
(1)Are there currently any open Findings (from assessments) or training
related Areas for Improvement (from Plant Evaluations)? (2)Was there at

least one specific example identified where training had a positive impact
on plant or human performance? (3)Was all needed training completed
during the measured quarter to meet plant performance needs? (4)Were
all training-related PIP corrective actions due in the measured quarter
completed or statused in a timely manner? (5)Were there any Category 1
or Category 2 PIPs assigned to the organization related to plant
performance with Training identified as the Culpable Organization in the
Problem Evaluation section? (6)Was training expertise utilized during
root cause investigation(s) of human performance problems to help
identify and differentiate between training and non-training solutions in
the measured quarter? (7)Were all training-related Action Tracking
Registers (specific discipline Training and specific discipline TPRC) due
in the measured quarter completed or statused in a timely manner? (8)
Are identified negative human performance trends, which result from a
knowledge or skill deficiency being addressed by training? (9)Are
methods to determine effectiveness of training considered prior to
development of training for performance improvement? Please note
these metrics are used solely for Objective 1 (i.e. there are five other
Objectives and supporting metrics.) This measurement is performed
quarterly.
6. Environmental Changes and Sensors
(How well did the organization identify and/or x 20
define potential environmental changes (other x 3-4 Risks were identified before and during the project

than the potential KMS project) that could impact x 10-20?


achievement of the strategic objectives?) 6b.
How many potential changes were identified
(enter N/A or DK if needed)?
6d. What were those sensors? x not very well done; too many internal conflicts and concerns to gain
momentum
x Project Tracking and PM Methods, Pilot Projects, User Interviews, and
Director Reviews
x Observations of training; observations of tasks being performed on the
job; assessments; root cause investigations; apparent cause problem
evaluations; and corrective action program.
6e. How often were they monitored or x Yearly
assessed? x Weekly
x Approximately 7000-8000 'condition reports' are entered into the
corrective action program per year. Of this number, approximately 50 are
sufficiently significant to warrant a root cause investigation. Another
2000-2500 are investigated at the apparent cause level (less costly
method of problem investigation where one person conducts a high level
assessment of the problem and writes the evaluation). Targeted internal
assessments are periodically conducted for processes and organizations
to determine effectiveness. Regulatory agencies also conduct frequent,
scheduled assessments at the technological, programmatic, and
organizational effectiveness level. (The bottom line: a huge amount of

effort, time, and money is invested into maintaining a healthy corrective


action and assessments program with a sufficiently low 'sensitivity
threshold'.
6i. If they did not work as expected, please describe the reason(s) why they did not meet expectations (answer N/A or DK as needed)
x OK…stay tuned
x The recent event in another plant resulted in a report which had a major, positive impact on adjustments to station-specific and company-specific corrective action and assessment programs. The industry had focused more on technology-related issues (to ensure continued safe and efficient operation of the individual plants) rather than on programmatic and organizational (i.e. cultural) issues and their potential negative impacts on vertical and lateral organizational communications. Cultural-level assessments conducted at all plants resulted from the programmatic and organizational weaknesses identified at the recent event. The goal was to more accurately adjust our own 'sensitivity threshold' toward O&P issues and refine our use and application of our observation, corrective action and assessments programs.
11. Functional Definition
(During the system development stage, how well did the organization list the functions needed to accomplish the strategic objectives? Was this list changed at any point in the process?) 11e. Why and when?
x While the architects try to think of every possibility of use for KM tools, even the best will miss something. The collaborative solution product could create the ability for business users to build Intranet sites, and modifications of the strategy were required to allow and enable this new requirement.
12. Operational Process Definition
(During the system development stage, how well did the organization list the operational processes required to accomplish these functions? Was this list changed at any point in the process?) 12e. Why and when?
x As we discovered gaps and non-value-added processes, we evolved our understanding
x Adjustments were made in iterative cycles
x Similar to the prior item, new functionality creates new functionality and operations. Client-Support was not an initial operation required, but the overwhelming need of the business for How-To, FAQ, Getting Started, etc. required a new operational element
13. Intellectual Asset Listing
(During the system development stage, how well did the organization list the intellectual assets required to accomplish these functions? Was this list changed at any point in the process?) 13e. Why and when?
x Early in the program's implementation, knowledge and/or skill deficiencies were identified mainly through the identification of operational problems and/or commitment of human error. This was especially true with early plant designs. Training organizations had to adapt by learning how to provide just-in-time training to compensate for operator knowledge and skill deficiencies. Training's improved performance over the past 20 years in this area has reduced the probability of similar issues becoming problems.
14. Intellectual Asset Sourcing
(During the system development stage, how well did the organization list the sources of these intellectual assets? Was this list changed at any point in the process?) 14e. Why and when?
x Infrequent source-issues have occurred over the years. One that comes to mind was the need to have an efficient, non-costly way to permit personnel to practice the skill of 'self-verification'. (Self-verification is a 'tool' used by individuals to help them reduce the rate at which human error is committed.) Simulator personnel developed a small (3 feet wide by 5 feet tall) simulator on wheels with labels, switches, and procedures filled with human interface problems. The idea was to make the process difficult to complete without committing human error. The software could be driven by a simple 386 computer (old model). Plans and software for the 'self-verification' simulator were given free of charge to all plants in the U.S. This is a great example because of the ingenuity and simplicity of the resolution to the problem, the teamwork of a Training organization with line organizations, and the free sharing of a great idea with other organizations. The implementation of this idea has led to further investigation and research into this and other human performance related themes and behaviors.
17. Formal Organizational Structure
(Did an accurate diagram or representation of the formal organizational structure exist at any point in the KMS implementation process? Was this changed at any point in the process?) 17f. Why and when?
x to reflect organizational changes
x Often, as people joined and left the organization or as organizational change occurred as a result of business changes
x Additional skills and knowledge were needed on the customer service side of the implementation.
18. Informal Organizational Structure
(Did an accurate diagram or representation of the informal organizational structure exist at any point in the KMS implementation process? Was this changed at any point in the process?) 18g. Why and when?
x As the org changed and processes changed, the informal structure needed to be modified
x Creation and maintenance of training performance criteria and assessment methodology for non-accredited training programs support was recognized as a way to reduce safety and human performance issues, refine assessment and observation programs, and assist in the identification of knowledge management needs across site organizations (who don't have training programs within accreditation 'space').
19. Technology Definition
(Was there a list of the technology needed to support the planned KM strategies? Was this changed at any point in the process?) 19g. Why and when?
x Technology was used as an enabler and not a means to an end
x Due to changes in our thinking relative to what technologies we would use, as well as due to cost considerations, this list was modified over time.
x Technology improvements are constantly pursued by an individual within the Training organization. Recent technology-application improvements have included the use of a program called 'Beyond Question'. This program allows ongoing assessment of student knowledge and comprehension while in a classroom learning environment. Other technologies are demonstrated as part of an ongoing instructor continuing training program.
20. Integrative Management Plan
20e. What other major topics were covered in the project or system management plan?
x funding models; access models
x Client Support; Usability and Design; Templates; Architecture Designs; Dependencies and Constraints
x Evaluation of system effectiveness (e.g. how often, methodology used, level of assessment required, etc.)
21. Legacy Integration Planning
(Was there a lot of planning to integrate the KMS with legacy components?) 21b. If there was no plan, why not?
x New technology and no real collaborative solution in place. The only post-production planning was to replace current web sites with the collaborative solution.
21d. What other major topics were covered in the legacy system integration plan?
x Need for collaboration; information validation/revalidation; system usability
22. Change Management Planning
(Did the organization develop a plan for evaluating, implementing, and managing change?) 22b. If not, why not?
x not currently a priority
(If this plan was used, how successful would you say that the plan has been?) 22e. Please list reason(s) why or why not?
x Flexibility allows responsiveness -- designed into the system
x Customer Support, Vendor and Partner Buy-in
x I refer back to my use of the term 'sensitivity threshold'. Our oversight of its 'position' permits us to maintain a corrective action and assessments program that is effectively used to identify and resolve problems. The strategy involves identifying problems while they are still of low consequence, performing a trend analysis, and identifying similar problems to include in the resolution.
Overall Comments
Other comments
x Grass-roots support and good usability design (and redesign) were critical to system success. No formally-imposed system would have been accepted; now that usability is right, users are flocking to the system (which is primarily web-based collaboration). Off-the-shelf packages cannot respond to the required need for flexibility. The system allows knowledge to be shared up and down the value chain. Emphasis is on “connecting people” instead of “collecting content”
x The real value of the knowledge management system comes after the implementation. No matter how much you plan, the ability to rapidly change or alter the direction once live will make or break the project.
