
What is the Balanced Scorecard?

©Paul Arveson, 1998

A new approach to strategic management was developed in the early 1990s by Drs. Robert Kaplan (Harvard Business
School) and David Norton, who named their system the 'balanced scorecard'. Recognizing the weaknesses and
vagueness of previous management approaches, the balanced scorecard approach provides a clear prescription as to
what companies should measure in order to 'balance' the financial perspective.

The balanced scorecard is a management system (not only a measurement system) that enables organizations to clarify
their vision and strategy and translate them into action. It provides feedback around both the internal business processes
and external outcomes in order to continuously improve strategic performance and results. When fully deployed, the
balanced scorecard transforms strategic planning from an academic exercise into the nerve center of an enterprise.

Kaplan and Norton describe the innovation of the balanced scorecard as follows:

"The balanced scorecard retains traditional financial measures. But financial measures tell the story of
past events, an adequate story for industrial age companies for which investments in long-term
capabilities and customer relationships were not critical for success. These financial measures are
inadequate, however, for guiding and evaluating the journey that information age companies must make
to create future value through investment in customers, suppliers, employees, processes, technology, and
innovation."

The balanced scorecard suggests that we view the organization from four perspectives, and that we develop metrics,
collect data, and analyze it relative to each of these perspectives:

• The Learning and Growth Perspective
• The Business Process Perspective
• The Customer Perspective
• The Financial Perspective

The Balanced Scorecard and Measurement-Based Management

The balanced scorecard methodology builds on some key concepts of previous management ideas such as Total Quality
Management (TQM), including customer-defined quality, continuous improvement, employee empowerment, and --
primarily -- measurement-based management and feedback.

Double-Loop Feedback

In traditional industrial activity, "quality control" and "zero defects" were the watchwords. In order to shield the
customer from receiving poor quality products, aggressive efforts were focused on inspection and testing at the end of
the production line. The problem with this approach -- as pointed out by Deming -- is that the true causes of defects
could never be identified, and there would always be inefficiencies due to the rejection of defects. What Deming saw
was that variation is created at every step in a production process, and the causes of variation need to be identified
and fixed. If this can be done, then there is a way to reduce defects and improve product quality indefinitely. To
establish such a process, Deming emphasized that all business processes should be part of a system with feedback
loops. Managers should examine the feedback data to determine the causes of variation and identify the processes
with significant problems, and then focus attention on fixing that subset of processes.

The balanced scorecard incorporates feedback around internal business process outputs, as in TQM, but also adds a
feedback loop around the outcomes of business strategies. This creates a "double-loop feedback" process in the
balanced scorecard.

Outcome Metrics

You can't improve what you can't measure. So metrics must be developed based on the priorities of the strategic plan,
which provides the key business drivers and criteria for metrics that managers most desire to watch. Processes are then
designed to collect information relevant to these metrics and reduce it to numerical form for storage, display, and
analysis. Decision makers examine the outcomes of various measured processes and strategies and track the results to
guide the company and provide feedback.

So the value of metrics is in their ability to provide a factual basis for defining:

• Strategic feedback to show the present status of the organization from many perspectives for decision makers
• Diagnostic feedback into various processes to guide improvements on a continuous basis
• Trends in performance over time as the metrics are tracked
• Feedback around the measurement methods themselves, and which metrics should be tracked
• Quantitative inputs to forecasting methods and models for decision support systems

Management by Fact

The goal of making measurements is to permit managers to see their company more clearly -- from many perspectives
-- and hence to make wiser long-term decisions. The Baldrige Criteria (1997) booklet reiterates this concept of fact-
based management:

"Modern businesses depend upon measurement and analysis of performance. Measurements must derive
from the company's strategy and provide critical data and information about key processes, outputs and
results. Data and information needed for performance measurement and improvement are of many types,
including: customer, product and service performance, operations, market, competitive comparisons,
supplier, employee-related, and cost and financial. Analysis entails using data to determine trends,
projections, and cause and effect -- that might not be evident without analysis. Data and analysis support
a variety of company purposes, such as planning, reviewing company performance, improving
operations, and comparing company performance with competitors' or with 'best practices' benchmarks."

"A major consideration in performance improvement involves the creation and use of performance
measures or indicators. Performance measures or indicators are measurable characteristics of products,
services, processes, and operations the company uses to track and improve performance. The measures
or indicators should be selected to best represent the factors that lead to improved customer, operational,
and financial performance. A comprehensive set of measures or indicators tied to customer and/or
company performance requirements represents a clear basis for aligning all activities with the company's
goals. Through the analysis of data from the tracking processes, the measures or indicators themselves
may be evaluated and changed to better support such goals."


Balanced Scorecard Institute


1025 Connecticut Ave. NW, Suite 1000, Washington, DC 20036, (202) 857-9719
975 Walnut St., Suite 355, Cary, NC 27511, (919) 460-8180
www.balancedscorecard.org

1. The Learning and Growth Perspective

This perspective includes employee training and corporate cultural attitudes related to both individual and corporate
self-improvement. In a knowledge-worker organization, people -- the only repository of knowledge -- are the main
resource. In the current climate of rapid technological change, it is becoming necessary for knowledge workers to be in
a continuous learning mode. Government agencies often find themselves unable to hire new technical workers, and at
the same time there is a decline in training of existing employees. This is a leading indicator of 'brain drain' that must
be reversed. Metrics can be put into place to guide managers in focusing training funds where they can help the most.
In any case, learning and growth constitute the essential foundation for success of any knowledge-worker organization.

Kaplan and Norton emphasize that 'learning' is more than 'training'; it also includes things like mentors and tutors
within the organization, as well as the ease of communication among workers that allows them to readily get help on a
problem when it is needed. It also includes technological tools, such as what the Baldrige criteria call "high
performance work systems." One of these, the Intranet, will be examined in detail later in this document.

2. The Business Process Perspective

This perspective refers to internal business processes. Metrics based on this perspective allow managers to know
how well their business is running, and whether its products and services conform to customer requirements (the
mission). These metrics have to be carefully designed by those who know the processes most intimately; given an
organization's unique mission, they are not something that can be developed by outside consultants.

In addition to the strategic management process, two kinds of business processes may be identified: a) mission-oriented
processes, and b) support processes. Mission-oriented processes are the special functions of government offices, and
many unique problems are encountered in these processes. The support processes are more repetitive in nature, and
hence easier to measure and benchmark using generic metrics.


3. The Customer Perspective

Recent management philosophy has shown an increasing realization of the importance of customer focus and customer
satisfaction in any business. These are leading indicators: if customers are not satisfied, they will eventually find other
suppliers that will meet their needs. Poor performance from this perspective is thus a leading indicator of future decline,
even though the current financial picture may look good.

In developing metrics for satisfaction, customers should be analyzed in terms of customer groups and the kinds of
processes for which we are providing a product or service to those groups.


4. The Financial Perspective

Kaplan and Norton do not disregard the traditional need for financial data. Timely and accurate funding data will
always be a priority, and managers will do whatever necessary to provide it. In fact, often there is more than enough
handling and processing of financial data. With the implementation of a corporate database, it is hoped that more of the
processing can be centralized and automated. But the point is that the current emphasis on financials leads to the
"unbalanced" situation with regard to other perspectives.

There is perhaps a need to include additional financial-related data, such as risk assessment and cost-benefit data, in
this category.


Basic Concepts
There are numerous approaches to organizational management, and since the balanced scorecard approach is relatively
new, it may be unfamiliar to managers who have experience primarily in program management. These essays will serve
to orient your thinking to the balanced scorecard approach, and to evaluate it critically.

• What is the Balanced Scorecard?
• Definitions of Management Terms
• The Balanced Scorecard -- Not Just Another Project
• Three Approaches to Management
• Selecting a Management Approach
• Measurement-Based Management and Its Excesses
• Objections to Measurement-Based Management, with Responses
• A Sample of Balanced Scorecard Adopters

Definitions of Terms
Some of these definitions were obtained from government agencies such as the US Office of Management and
Budget (OMB) or the Government Accountability Office (GAO); some were obtained from other authorities. Links
are provided to sites that add more details.

Activity-Based Costing: A business practice in which costs are tagged and accounted in detailed activity categories, so
that return on investment and improvement effectiveness can be evaluated. Implementing ABC requires proper data
structures, and an adequate data reporting and collection system involving all employees in the activity.
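
The tagging-and-accounting idea can be illustrated in a few lines of Python; the activity categories and dollar figures below are invented for illustration, not from the text:

```python
from collections import defaultdict

# Invented cost records, each tagged with an activity category as ABC requires.
records = [
    ("design", 1200.0),
    ("testing", 800.0),
    ("design", 300.0),
    ("rework", 450.0),
]

# Aggregate costs per activity so improvement effectiveness can be evaluated
# category by category.
costs_by_activity: dict[str, float] = defaultdict(float)
for activity, cost in records:
    costs_by_activity[activity] += cost
```

In a real implementation the records would come from a data collection system involving all employees in the activity, as the definition notes.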

Activity-Based Management: The use of ABC data to ascertain the efficiency or profitability of business units, and
the use of strategic initiatives and operational changes in an effort to optimize financial performance.

Agency: In most US Federal Government legislation, an organization with a budget of at least $20 million per year.

Applied Information Economics (AIE): AIE is a practical application of scientific and mathematical methods to the
Information Technology investment process. AIE uses statistical methods to maintain consistency in risk analysis and
decision making with a specified level of uncertainty.

Architecture: Design; the way components fit together to form a unified system. The term may be applied to any
complex system, as in "software architecture" or "network architecture" [Free On-line Dictionary of Computing]. An IT
architecture is a design for the arrangement and interoperation of technical components that together provide an
organization its information and communication infrastructure. [ICH]

Balanced Scorecard: A measurement-based strategic management system, originated by Robert Kaplan and David
Norton, which provides a method of aligning business activities to the strategy, and monitoring performance of
strategic goals over time.

Baldrige Award: A prestigious award, established in 1987 and named for US Commerce Secretary Malcolm Baldrige,
that offers an incentive to companies scoring highest on a detailed set of management quality assessment criteria. The
criteria include leadership, use of information and analysis, strategic planning, human resources, business process
management, financial results, and customer focus and satisfaction. The award is currently administered by the
National Institute of Standards and Technology.

Baseline: Data on the current process that provides the metrics against which to compare improvements and to use in
benchmarking. [GAO]

Benchmarking: The process of comparing one set of measurements of a process, product or service to those of another
organization. The objective of benchmarking is to set appropriate reliability and quality metrics for your company
based on metrics for similar processes in other companies.

Business case: A structured proposal for business improvement that functions as a decision package for organizational
decision-makers. A business case includes an analysis of business process performance and associated needs or
problems, proposed alternative solutions, assumptions, constraints, and a risk-adjusted cost-benefit analysis. [GAO]

Business Process Improvement: A methodology for focused change in a business process achieved by analyzing the
AS-IS process using flowcharts and other tools, then developing a streamlined TO-BE process in which automation
may be added to result in a process that is better, faster, and cheaper. BPI aims at cost reductions of 10-40%, with
moderate risk.

Business Process Reengineering: A methodology (developed by Michael Hammer) for radical, rapid change in
business processes achieved by redesigning the process from scratch and then adding automation. Aimed at cost
reductions of 70% or more when starting with antiquated processes, but with a significant risk of lower results.

Capability Maturity Model (CMM): A scale for assessing the degree of built-in documentation and discipline in a
process, in which the scale goes from Level 1, with no formal process, to Level 5, with a continuous, rigorous and self-
improving process. Developed by the Software Engineering Institute of Carnegie Mellon University, and now being
extended to a broader range of applications in management.

Core competency: A distinctive area of expertise of an organization that is critical to its long term success. These are
built up over time and cannot be imitated easily. The concept was developed by C.K. Prahalad and G. Hamel in a series
of articles in Harvard Business Review around 1990. Sometimes called core capability.

Cost-benefit analysis: A technique used to compare the various costs associated with an investment with the benefits
that it proposes to return. Both tangible and intangible factors should be addressed and accounted for. [GAO]

Customer: In the private sector, those who pay, or exchange value, for products or services. In government, customers
consist of (a) the taxpayers; (b) taxpayer representatives in Congress; (c) the sponsors of the agency; (d) the managers
of an agency program; (e) the recipients of the agency's products and services. There may be several more categories of
'customers'; they should be carefully segmented for maximum strategic benefit. Compare with primary customer and
stakeholder.

Critical success factor: See key success factor.

Discount factor: The factor that translates expected financial benefits or costs in any given future year into present
value terms. The discount factor is equal to 1/(1 + i)^t, where i is the interest rate and t is the number of years from the
date of initiation for the program or policy until the given future year. [GAO] Discount rate is the interest rate used in
calculating the present value of expected yearly benefits and costs. [GAO]
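
The formula translates directly into code; the 7% rate and 5-year horizon below are invented figures for illustration:

```python
def discount_factor(i: float, t: int) -> float:
    """1 / (1 + i)^t: translates an amount t years out into present-value terms."""
    return 1.0 / (1.0 + i) ** t

# Invented example: a $1,000 benefit expected 5 years out, at an assumed 7% rate.
present_value = 1_000 * discount_factor(0.07, 5)
```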

Economic Value Added (EVA): Net operating profit after taxes minus (capital x cost of capital). EVA is a measure of
the economic value of an investment or project.
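
A minimal sketch of the EVA formula above; the NOPAT, capital, and cost-of-capital figures are invented:

```python
def economic_value_added(nopat: float, capital: float, cost_of_capital: float) -> float:
    """EVA = net operating profit after taxes minus (capital x cost of capital)."""
    return nopat - capital * cost_of_capital

# Invented figures: $2M NOPAT on $15M of capital at a 10% cost of capital.
# A positive EVA indicates the investment created economic value.
eva = economic_value_added(2_000_000.0, 15_000_000.0, 0.10)
```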

Earned Value Management: Earned value is a project management technique that relates resource planning to
schedules and to technical cost and schedule requirements. All work is planned, budgeted, and scheduled in time-
phased "planned value" increments constituting a cost and schedule measurement baseline. There are two major
objectives of an earned value system: to encourage contractors to use effective internal cost and schedule management
control systems; and to permit the customer to rely on timely data produced by those systems for determining
product-oriented contract status. (http://www.acq.osd.mil/pm/evbasics.htm)
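
The definition above does not spell out the standard earned-value comparisons, but they are conventionally computed from planned value (PV), earned value (EV), and actual cost (AC); the dollar figures below are invented:

```python
def evm_status(pv: float, ev: float, ac: float) -> dict:
    """Standard earned-value comparisons from planned value (PV),
    earned value (EV), and actual cost (AC)."""
    return {
        "cost_variance": ev - ac,      # negative -> over budget
        "schedule_variance": ev - pv,  # negative -> behind schedule
        "cpi": ev / ac,                # cost performance index
        "spi": ev / pv,                # schedule performance index
    }

# Invented figures: $100k of work planned, $80k earned, $90k actually spent.
status = evm_status(pv=100_000, ev=80_000, ac=90_000)
```
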

Effectiveness: (a) Degree to which an activity or initiative is successful in achieving a specified goal; (b) degree to
which activities of a unit achieve the unit's mission or goal.

Efficiency: (a) Degree of capability or productivity of a process, such as the number of cases closed per year; (b) tasks
accomplished per unit cost.

EFQM: The European Foundation for Quality Management's Model of Excellence, which provides benchmarking and
self-assessment in a framework similar to that of the Malcolm Baldrige criteria.

Enterprise: A system of business endeavor within a particular business environment. An enterprise architecture is a
design for the arrangement and interoperation of business components (e.g., policies, operations, infrastructure,
information) that together make up the enterprise's means of operation. [ICH]

Executive Information System: Generic term for a software application that provides high-level information to
decision makers, usually to support resource allocation, strategy or priority decisions. This could include a balanced
scorecard system, Enterprise Resource Planning (ERP) system, Decision Support System (DSS), etc. Technologies
include databases, a data warehouse, and analytic applications such as OLAP (On-Line Analytical Processing), and
many mission-specific data reporting systems.

Federal Enterprise Architecture Framework (FEAF): An organizing mechanism for managing development,
maintenance, and facilitated decision making of a Federal EA. The Framework provides a structure for organizing
Federal resources and for describing and managing Federal EA activities.

Feedback: Information obtained from the results of a process that is used in guiding the way that process is done.
There should be feedback loops around all important activities. Strategic feedback (for each strategic activity) validates
effectiveness of the strategy by measuring outcomes (long-term). Diagnostic feedback tracks efficiency of internal
business processes (usually generic across all mission activities). Metrics feedback allows for refining the selection of
metrics to be measured. Measurement feedback allows for the improvement of measurement techniques and frequency.

Five Forces Model: A tool developed by Michael Porter that analyzes an industry in terms of five competitive forces:
bargaining power of suppliers, bargaining power of buyers, threat of new entrants, threat of substitute products, and
rivalry between existing competitors.

Framework: A logical structure for classifying and organizing complex information. [Federal Enterprise Architecture
Framework] See also Zachman framework.

Functional Economic Analysis (FEA): An analytical technique for assessing the value added at various stages or
functions in a process. Most relevant in manufacturing industries, where such increments in value can be readily
measured.

Gap Analysis: Gap analysis naturally flows from benchmarking or other assessments. Once we understand the general
expectation of performance in an industry, we can compare current capabilities against it; the difference is the gap.
Such analysis can be performed at the strategic or operational level of an organization.

Goal: A specific intended result of a strategy; used interchangeably with objective. See also Outcome Goal, Output
Goal, Performance Goal, Strategic Goal. [Note: the term "goal" is used in a wide variety of ways in planning; e.g. as a
strategic result or outcome; an objective, a measure, a target, etc.]

Governance: The processes and systems by which an organization or society operate.

Impact: Changes in outcomes that can be attributed to a particular project, program or policy, in a situation where
there may be many other influences on outcomes. Impact evaluation attempts to answer the question, "What would the
situation have been if the intervention had not taken place?" [World Bank].
Improvement: An activity undertaken based on strategic objectives such as reduced cycle time, reduced cost, and
customer satisfaction. This includes improvements directly in mission activities (production, design, testing etc.) and/or
in support activities for the mission.

Indicator: A simple metric that is intended to be easy to measure, used to obtain general information about
performance trends by means of surveys, telephone interviews, and the like.

Input: Resources (funds, labor, time, equipment, space, technology etc.) used to produce outputs and outcomes.

Information technology (IT): Includes all matters concerned with the furtherance of computer science and technology
and with the design, development, installation, and implementation of information systems and applications [San Diego
State University]. An information technology architecture is an integrated framework for acquiring and evolving IT to
achieve strategic goals. It has both logical and technical components. Logical components include mission, functional
and information requirements, system configurations, and information flows. Technical components include IT
standards and rules that will be used to implement the logical architecture.

Intermediate Outcome: An outcome from a business activity that can be identified and measured in the near term, and
is an indicator of longer-term outcomes. This is practical when long-term outcomes are diffuse, delayed or
otherwise difficult to measure. Example: service response time, which is of concern to the customer making a call or
requesting a service but does not indicate anything directly about the success of the call or request.

ISO 9000: ISO, the International Organization for Standardization, has established a series of performance and quality
management system standards for industrial organizations. Organizations may receive certification from the ISO
Certification body if they are in compliance with the relevant international standards.

IT investment management approach: An analytical framework for linking IT investment decisions to an
organization's strategic objectives and business plans. The investment management approach consists of three
phases -- select, control, and evaluate. Among other things, this management approach requires discipline, executive
management involvement, accountability, and a focus on risks and returns using quantifiable measures. [GAO]

Key Performance Indicators (KPI): A short list of metrics that a company's managers have identified as the most
important variables reflecting mission success or organizational performance.

Key Success Factors (KSF): The three to five broad areas on which an organization must focus in order to achieve its
vision. They may be major weaknesses that must be fixed before other goals can be achieved. They are not as specific
as strategies. Sometimes called strategic themes or critical success factors (CSF). (Mark Graham Brown, Winning
Score).

Knowledge Management: "Knowledge Management caters to the critical issues of organizational adaptation, survival
and competence in face of increasingly discontinuous environmental change. Essentially, it embodies organizational
processes that seek synergistic combination of data and information processing capacity of information technologies,
and the creative and innovative capacity of human beings." (http://www.brint.com/interview/maeil.htm)

Logic Model: A generic model of any business process, which breaks it down into inputs, activities (or processes),
outputs, and outcomes (or results). Sometimes intermediate outcomes are also included.

McKinsey / General Electric Matrix: A portfolio planning tool that uses a 3 x 3 matrix. One scale is market
attractiveness, the other is competitive strength. The Strategic Business Units of a large company can be compared
within this matrix.

Measurement: An observation that reduces the amount of uncertainty about the value of a quantity. In the balanced
scorecard, measurements are collected for feedback. The measurement system gathers information about all the
significant activities of a company. Measurements are the data resulting from the measurement effort. Measurement
also implies a methodology, analysis, and other activities involved with how particular measurements are collected and
managed. There may be many ways of measuring the same thing.
Metrics: Often used interchangeably with measurements. However, it is helpful to separate these definitions. Metrics
are the various parameters or ways of looking at a process that is to be measured. Metrics define what is to be
measured. Some metrics are specialized, so they can't be directly benchmarked or interpreted outside a mission-
specific business unit. Other measures will be generic, and they can be aggregated across business units, e.g. cycle
time, customer satisfaction, and financial results.

Mission activities: Things that an agency does for its customers. For private companies, profit or value creation is an
overarching mission. For nonprofit organizations, the mission itself takes priority, although cost reduction is still
usually a high priority activity.

Mission effectiveness: Degree to which mission activities achieve mission outcomes or results.

Mission value: (1) Mission outcome benefits per unit cost; a key metric for nonprofit and governmental organizations.
(2) For a collection of missions within an organization, the relative value contributed by each mission. (3) The
combination of strategic significance and results produced by a mission.

Mixed system: An information system that supports both financial and non-financial functions. [GAO]

Model: A representation of a set of components of a process, system, or subject area, generally developed for
understanding, analysis, improvement, and/or replacement of the process [GAO]. A representation of information,
activities, relationships, and constraints [Treasury Enterprise Architecture Framework].

Net present value (NPV): The future stream of benefits and costs converted into equivalent values today. This is done
by assigning monetary values to benefits and costs, discounting future benefits and costs using an appropriate discount
rate, and subtracting the sum total of discounted costs from the sum total of discounted benefits. [GAO]
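
The definition above translates into a few lines of code; the cash flows and 8% discount rate are invented for illustration:

```python
def npv(rate: float, cash_flows: list[float]) -> float:
    """Discount each year's net benefit (benefits minus costs) to present
    value and sum; cash_flows[0] is year 0 (undiscounted)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Invented example: $10k invested today, $4k of net benefits in each of the
# next 3 years, discounted at an assumed 8% rate. A positive NPV means the
# discounted benefits exceed the discounted costs.
value = npv(0.08, [-10_000.0, 4_000.0, 4_000.0, 4_000.0])
```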

Non-value-added work: Work activities that add no value to the mission of the organization. Such activities may or
may not be necessary; necessary ones may include utilities, supplies, travel and maintenance; unnecessary ones may
include searching for information, duplicating work, rework, time not working, etc.

Objective: An aim or intended result of a strategy. See goal.

Organization: The command, control and feedback relationships among a group of people and information systems.
Examples: a private company, a government agency.

Outcome: A description of the intended result, effect, or consequence that will occur from carrying out a program or
activity. (OMB). The end result that is sought (examples: in the private sector, financial profitability; in the public
sector, cleaner air or reduced incidence of disease).

Outcome measure: A long-term, ultimate measure of success or strategic effectiveness. An event, occurrence, or
condition that is outside the activity or program itself and is of direct importance to customers or the public. We also
include indicators of service quality, those of importance to customers, under this category. (Dr. Harry Hatry)

Output: Products and services delivered. Outputs are the immediate products of internal activity: the amount of work
done within the organization or by its contractors (such as miles of road repaired or number of calls answered).

Performance-based budgeting: A management process in which the performance of various activities in an
organization is measured, and budgets for further work on these activities are adjusted based on their performance.
(Note: this does not necessarily imply that budgets for poorly-performing activities will be reduced; see the discussion
here.)

Performance goal: A target level of performance expressed as a tangible, measurable objective, against which actual
achievement can be compared, including a goal expressed as a quantitative standard, value, or rate. (OMB).

Performance indicator: A particular value or characteristic used to measure output or outcome.

Performance measurement: The process of developing measurable indicators that can be systematically tracked to
assess progress made in achieving predetermined goals and using such indicators to assess progress in achieving these
goals [GAO]. A performance gap is the gap between what customers and stakeholders expect and what each process
and related subprocesses produces in terms of quality, quantity, time, and cost of services and products [GAO].

Performance metric: see Metrics.

PEST analysis: A planning tool for identifying the external Political/Legal, Economic, Social, and Technological
issues that could affect the strategic planning of an organization.

Plan: A prescribed, written sequence of actions to achieve a goal, usually ordered in phases or steps with a schedule
and measurable targets; defines who is responsible for achievement, who will do the work, and links to other related
plans and goals. By law, agencies must have strategic plans, business plans, and performance plans. They may also
have implementation plans, program plans, project plans, management plans, office plans, personnel plans, operational
plans, etc.

Primary customer: The customer group that must be satisfied if the overall mission of the organization is to succeed.
Generally this is the end user or direct recipient of an organization's products or services, and has the capability
to report satisfaction and give feedback. In a commercial organization, it is the group that is the main source of
income.

Profit: Financial gain, or revenues minus expenses. Profit is the overarching mission of private-sector companies.
Nonprofit or governmental organizations either operate at a loss or attempt to achieve a zero profit; for them the
overarching mission is a charter for a service, or a goal to be achieved. Therefore, there is a basic distinction in
measures of strategic success between profit and nonprofit or governmental organizations.

Project management: A set of well-defined methods and techniques for managing a team of people to accomplish a
series of work tasks within a well-defined schedule and budget. The techniques may include work breakdown structure,
workflow, earned value management (EVM), total quality management (TQM), statistical process control (SPC),
quality function deployment (QFD), design of experiments, concurrent engineering, Six Sigma etc. Tools include
flowcharts, PERT charts, Gantt charts (e.g. Microsoft Project), control charts, cause-and-effect (tree or fishbone)
diagrams, Pareto diagrams, etc. (Note that the balanced scorecard is a strategic management, not a project management
technique).

Return on Investment (ROI): In the private sector, the annual financial benefit after an investment minus the cost of
the investment. In the public sector, cost reduction or cost avoidance obtained after an improvement in processes or
systems, minus the cost of the improvement.
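The arithmetic can be sketched in a few lines of Python; the dollar figures are illustrative, not taken from the text:

```python
def roi(benefit, cost):
    # Net return as defined above: benefit obtained minus the cost
    # of the investment (or, in the public sector, the improvement).
    return benefit - cost

def roi_ratio(benefit, cost):
    # The ratio form commonly reported in the private sector,
    # expressed as a fraction of the investment cost.
    return (benefit - cost) / cost

# Example: a $40,000 process improvement that avoids $100,000 in costs
print(roi(100_000, 40_000))        # 60000
print(roi_ratio(100_000, 40_000))  # 1.5, i.e. a 150% return
```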

Risk analysis: A technique to identify and assess factors that may jeopardize the success of a project or achieving a
goal. This technique also helps define preventive measures to reduce the probability that these factors will occur and
identify countermeasures to successfully deal with these constraints when they develop. [GAO]

Sensitivity analysis: Analysis of how sensitive outcomes are to changes in the assumptions. The assumptions that
deserve the most attention should depend largely on the dominant benefit and cost elements and the areas of greatest
uncertainty of the program or process being analyzed. [GAO]
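As a sketch of one common form of this analysis, the following Python varies each assumption one at a time and reports the effect on the output. The model and the numbers are invented for the example:

```python
def sensitivity(model, baseline, deltas):
    # One-at-a-time sensitivity: perturb each assumption by its
    # delta and report the resulting change in the model's output.
    base = model(**baseline)
    return {name: model(**dict(baseline, **{name: baseline[name] + delta})) - base
            for name, delta in deltas.items()}

# Illustrative net-benefit model: benefit minus cost
net_benefit = lambda benefit, cost: benefit - cost
print(sensitivity(net_benefit,
                  {"benefit": 100.0, "cost": 40.0},
                  {"benefit": 10.0, "cost": 10.0}))
# {'benefit': 10.0, 'cost': -10.0}
```

The assumption whose perturbation moves the output most is the one that deserves the most attention, as the definition above suggests.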

Six Sigma: Literally, the reduction of errors to six standard deviations between the process mean and the nearest
specification limit. With the conventional 1.5-sigma allowance for process drift, this corresponds to about 3.4 errors
per million opportunities, or roughly 1 in 300,000. In modern practice, the term refers to a quality improvement
methodology widely applied in industry.
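The tail arithmetic behind these figures can be checked with the Python standard library. The 1.5-sigma drift allowance used here is the conventional Six Sigma assumption, not something stated in the text:

```python
from statistics import NormalDist

def defects_per_million(sigma_level, shift=1.5):
    # One-sided normal tail beyond (sigma_level - shift) standard
    # deviations, scaled to defects per million opportunities. The
    # default shift is the conventional 1.5-sigma drift allowance.
    tail = 1 - NormalDist().cdf(sigma_level - shift)
    return tail * 1_000_000

print(round(defects_per_million(6), 1))  # 3.4, i.e. about 1 in 300,000
```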

Stakeholder: An individual or group with an interest in the success of an organization in delivering intended results
and maintaining the viability of the organization's products and services. Stakeholders influence programs, products,
and services. Examples include congressional members and staff of relevant appropriations, authorizing, and oversight
committees; representatives of central management and oversight entities such as OMB and GAO; and representatives
of key interest groups, including those groups that represent the organization's customers and interested members of the
public. [GAO]

Standard: A set of criteria (some of which may be mandatory), voluntary guidelines, and best practices. Examples
include application development, project management, vendor management, production operation, user support, asset
management, technology evaluation, architecture governance, configuration management, problem resolution. [Federal
Enterprise Architecture Framework]

Statistical Process Control (SPC): A mathematical procedure for measuring and tracking the variability in a
manufacturing process; developed by Shewhart in the 1930's and applied by Deming in TQM.
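A minimal sketch of the control-chart arithmetic follows; the data are illustrative, and production SPC charts usually estimate sigma from subgroup ranges rather than the overall standard deviation:

```python
from statistics import mean, stdev

def control_limits(samples):
    # Shewhart-style limits: center line at the mean, control limits
    # at three standard deviations either side. Points outside the
    # limits signal 'special cause' variation worth investigating.
    m, s = mean(samples), stdev(samples)
    return m - 3 * s, m, m + 3 * s

# Illustrative daily process measurements
lcl, center, ucl = control_limits([10, 12, 11, 9, 10, 11, 10, 12, 9, 11])
print(center)  # 10.5
```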

Strategic Business Unit (SBU): In a commercial company, an SBU is a unit of the company that has a separate
mission and objectives, and that can be planned and evaluated independently from the other parts of the company. An
SBU may be a division, a product line or an individual brand; the collection of SBUs is a portfolio.

Strategic goal or general goal: An elaboration of the mission statement, developing with greater specificity how an
agency will carry out its mission. The goal may be of a programmatic, policy, or management nature, and is expressed
in a manner which allows a future assessment to be made of whether the goal was or is being achieved. (OMB). The
quantifiable aims of strategic activities, including outcome goals and output goals.

Strategic objective or general objective: Often synonymous with a general goal. In a strategic plan, an objective may
complement a general goal whose achievement cannot be directly measured. The assessment is made on the objective
rather than the general goal. Objectives may also be characterized as being particularly focused on the conduct of basic
agency functions and operations that support the conduct of programs and activities. (OMB)

Strategic activities: activities or initiatives that a company or agency does for itself, to achieve its overall strategic
goals.

Strategic business unit: A portion of an organization aligned to a particular strategy.

Strategic elements: Mission, vision, values, assessment data, strategic plans and other information that serves to
support strategic planning.

Strategic imperatives: Company values.

Strategic initiatives: Specific activities or actions undertaken to achieve a strategic goal, including the plans and
milestones.

Strategic measures or metrics: Quantifiable indicators of status of a strategic activity.

Strategic plan: A document used by an organization to align its organization and budget structure with organizational
priorities, missions, and objectives. According to requirements of Government Performance and Results Act (1993), a
strategic plan should include a mission statement, a description of the agency's long-term goals and objectives, and
strategies or means the agency plans to use to achieve these general goals and objectives. The strategic plan may also
identify external factors that could affect achievement of long-term goals. [GAO] Strategic planning is a systematic
method used by an organization to anticipate and adapt to expected changes. The IRM portion of strategic planning sets
broad direction and goals for managing information and supporting delivery of services to customers and the public and
identifies the major IRM activities to be undertaken to accomplish the desired agency mission and goals. [GAO]

Strategic targets: Numbers to achieve on each strategic metric by a specified time.

Strategic themes: The general strategy broken down into a few distinct focus areas that may lead to desired results,
such as customer intimacy, operational excellence, business growth, etc. Usually general, expressed as brief titles, and
not quantified.

Strategy: (1) Hypotheses or educated guesses that propose the direction an organization should go to fulfil its vision
and maximize the possibility of its future success. (2) Unique and sustainable ways by which organizations create
value. (from Kaplan & Norton). Answers the question, "Are we doing the right things?"

The Balanced Scorecard -- Not Just Another Project


©Paul Arveson, 1998

Managers in many government agencies have been reared on project management. It is the way they think about
achieving their mission. In the Defense Department, project or program management has been the framework for
development of every system costing from ten thousand dollars to ten billion dollars. There is a long-established
tradition of on-the-job training and experience for young people to learn and be mentored by experienced project
managers. Many guidebooks, manuals, software programs, and other means have been devised to aid the project
manager.

Project management has been in the management culture for decades, and the federal government has thousands of
project managers who are routinely capable of amazingly complex achievements. In fact, many project managers may
have never seen or considered any other way to get things done.

Although it is not necessary here to describe project management in detail, a simple diagram will help to show its
general features.

The figure represents a time line or Gantt chart. All projects (or programs) have a definite start time (green) and a
definite stop time (red) when the final deliverables (products, services, documents, decisions, etc.) are delivered to the
customer. The goal is to meet customer requirements. The initial stage requires establishment of a precise budget and a
plan of action and milestones (POA&M). The work is focused on the actual mission of production undertaken for the
customer. It may be broken down into a hierarchy of subtasks, called an Engineering Schedule Work Breakdown
Structure (ESWBS). Status and review meetings are scheduled at regular intervals throughout the project. Usually some
kind of final report is written as one of the deliverables. The goal is to reach the end point on time and within budget,
since there are usually other projects that are depending on input from the deliverables of this project. So project
management is the effort to manage work within a finite, clearly scoped, hierarchically-structured, linear development
process with a definite beginning and end.

The balanced scorecard management system is not just another project. It is fundamentally different from project
management in several respects. To illustrate the radical nature of this difference, a diagram is shown of the BSC
performance measurement process, as it would run when installed in an organization.
The first thing to notice is the topology: the balanced scorecard management process, derived from Deming's Total
Quality Management, is a continuous, cyclical process. It has neither beginning nor end. It is not directly
concerned with the mission of the organization, but rather with internal processes (diagnostic measures) and external
outcomes (strategic measures). The system's control is based on performance metrics or "metadata" that are tracked
continuously over time to look for trends, best and worst practices, and areas for improvement. It delivers information to
managers for guiding their decisions, but these are self-assessments, not customer requirements or compliance data.
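As an illustration of what tracking metrics continuously for trends can mean in practice, a simple moving average smooths a monthly metric so the trend stands out from the noise. The data and window size here are invented for the example:

```python
def moving_average(series, window=3):
    # Average each value with its (window - 1) predecessors,
    # smoothing month-to-month noise so trends become visible.
    return [sum(series[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(series))]

# Illustrative monthly process cycle times, in days
cycle_times = [30, 28, 31, 27, 26, 24, 25, 22]
print(moving_average(cycle_times))  # steadily falling: an improvement trend
```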

People trained only in project management may have difficulty in figuring out how to accomplish the BSC, simply
because it is such a different kind of management paradigm. One of the key practical difficulties is to figure out how to
get the process started in the first place. If this is not a project, where does one begin? What kind of plan is appropriate
for deployment of the balanced scorecard system?

If we want to ride a rotating merry-go-round, we had better not attempt to just hop on. We will probably get hurt -- and
won't get on. The situation is similar with the balanced scorecard. To get on the merry-go-round, we have to accelerate
in the same direction for a while, then hop on when our speed equals that of the circular floor. In other words, there
needs to be a ramp-up phase, where everyone "comes up to speed." This includes training or retraining of project
managers, and probably focused deployment of pilot efforts before attempting to cover an entire large agency.
Sustained, patient leadership will be needed before the payoff is attained.


Balanced Scorecard Institute


1025 Connecticut Ave. NW 975 Walnut St.
Suite 1000 Suite 360
Washington, DC 20036 Cary, NC 27511
(202) 857-9719 (919) 460-8180
www.balancedscorecard.org
Contact us

Management Approaches
© Paul Arveson 1998

Government agencies cannot live by project management alone. Congress, in the GPRA, the Executive Branch in the
Reinventing Government initiative, and DoD Secretary Cohen in the Defense Reform Initiative, are asking us to find
ways to increase productivity and efficiency, while maintaining mission effectiveness. That is where the new
management approaches come in -- they are more applicable than project management to the kinds of internal
improvements that are needed.
The table below summarizes comparisons of three different management approaches or methodologies. The
comparisons are shown for several different features. It is evident from this comparison that BPI and the Balanced
Scorecard are quite different in most respects from project management. They have different purposes and meet
different needs.

Feature                 Project Management             Business Process Improvement    Balanced Scorecard

Age of Approach         Decades                        Began in DoD 1992               Began in 1990

Prime Customer          External Sponsor               Internal Director               External IG, Internal Director

Goal Definition         Project Requirements,          Cost, cycle time reductions     Strategic management system
                        Mission Needs Statement

Focus                   Technical Mission              Business Processes              Multiple perspectives

Scope                   Specialized unit               Unit to enterprise              Dept. to enterprise

Plans                   Plan of Action & Milestones    Process Improvement Plan        Strategic Plan, Performance Plan

Schedule & teaming      Work Breakdown Schedule,       Cross-functional teams,         Team directed, 1-2 yr.
                        Action Items                   focus groups                    implementation

Management Activities   Team building, Budgeting,      Baseline process analysis,      Define metrics, collect data,
                        Task Tracking, Reviews         to-be process design,           analyze data, decide on changes
                                                       automation

Tools (see links)       Microsoft Project, Primavera   TurboBPR, IDEF0                 Data collection system, scorecards

Measures of success     Deliverables on time,          Cost reductions minus cost      Learning what strategies work;
                        on budget                      of BPI effort                   improved results on many metrics

In attempting to implement the newer management methodologies in a traditional project management organization,
there are two possible options:

1. train the managers in the new approaches and techniques;


2. translate the new approaches into familiar project form, and treat them as conventional projects.

Option 1 is always recommended. The problem is that we do not have the time or money to spend on extensive
training in new techniques.

Option 2 is something that hasn't been suggested before, to my knowledge. I don't know if it is feasible, or even if it
makes sense. But if it could be done, it would save a lot of time in deploying the new initiatives.

Option 2 was actually suggested by the DoD's 1998 Performance Plan, in which one of the top level mission goals was
'Cost Reduction'. In other words, the DoD management recognizes that this is in itself worthy of being a strategic goal
on the level of its other missions, not just an internal efficiency need.


Selecting a Management Approach


©Paul Arveson, 1998

One of the reasons why managers are having such difficulty in applying management methods to government problems
is this: there are many different schools of thought on management approaches, and each of these schools has its own
proponents. Generally, an original proponent makes his or her name in that particular concept, and becomes an 'expert'
and a 'guru' of it. There is little incentive to integrate this one approach with others.

That job is left to the poor managers who have to figure out which theory to apply to their business problems. They
have heard something about MBO, TQM, BPR, ISO-9000, CMM, ABC, BSC, and all the other buzz words and
acronyms of management -- but they have received precious little guidance as to what to select that best fits their
business needs, and the top-level requirements such as the GPRA. Usually, however, managers will tend to use the
approach with which they are most familiar, which is probably project management or program management.

At any point in time, management culture tends to be dominated by one school of thought. The emerging idea at
present is the 'balanced scorecard'; the book on this theory by Kaplan and Norton is one of the top 10 best sellers in
the field. Management consultants and writers tend to adopt the theory that is currently in vogue, and its popularity thus
tends to grow rapidly to a peak, until it is superseded by the next new idea. The schools come and go approximately
every 10 years. A similar phenomenon seems to take place in other social sciences, such as psychology, sociology, and
education.

Thomas Kuhn's book The Structure of Scientific Revolutions analyzed this phenomenon. Although its conclusions may
be taken too far, the general description of the process seems true enough:

1. A revolutionary new idea comes out of the blue, and champions and followers arise to promote it.
2. A school of thought and literature arises around the subject.
3. The idea becomes so popular that it becomes part of the 'establishment'. Its view is unquestioned and it
dominates the scene for a while.
4. Anomalies, counterexamples and new ideas emerge that cause the original idea to be deeply questioned.
5. A period of conflict between proponents and opponents prevails.
6. One of the new ideas takes over the field, except for a few die-hards who have little but historical influence.
7. The old idea may not be forgotten, but is absorbed into the new idea as a 'special case' or a 'useful fiction' that
may be helpful in certain situations. (This appears to be the current status of Freudian theory within
psychotherapy, for example).

Management Flexibility

A manager who only has experience in one approach, such as project management, may have difficulty in adapting to
changing demands. A manager can be much more effective if he or she is able to select a management approach that is
most appropriate to the desired need or goal. This adaptability or 'eclectic' flexibility may prove very useful in the
changing government management environment.

There is no good reason why managers must follow the latest school of management thought. On the other hand, just
because an idea is new does not mean that it should be dismissed. There are reasons why one particular approach is
better than another depending on the strategic goal or need. The balanced scorecard, for instance, appears to be a very
appropriate technique for meeting the urgent management needs of many Government agencies, such as their need to
comply with the requirements of the GPRA. However, this need should not blind managers to other, perhaps even more
pressing goals of their organization that may require a different approach.

The following table was developed to aid in selection of a management approach, depending on the conditions and
need of the organization (strategic goal). The conditions will partly determine the best option. (The terms are defined
here.)

Strategic Goal                   Time Horizon   Change      Technical   Risk        Recommended Option
                                 (years)        Readiness   Level       Tolerance

GPRA Compliance                  2-3            Moderate    High        Moderate    Balanced Scorecard
>30% cost reductions, survival   3-6            High        High        High        BPR
20% cost reductions              1-3            Moderate    Moderate    Moderate    BPI
Continuous improvement           Long term      Moderate    Moderate    Low         TQM
Baldrige score elevation         2-3            Moderate    High        Low         Balanced Scorecard + ABC
Strategic alignment              2-5            Low         Moderate    Moderate    Balanced Scorecard
Marketing credibility            2-5            Low         Low         Moderate    ISO-9000, incremental
Increased capabilities           2-5            Moderate    High        Low         CMM

Note that not all possible combinations of conditions are included in the table. If your conditions are beyond the levels
indicated here (e.g. low risk tolerance when 30% cost reductions are needed), then it is likely that a 'best option' does
not exist for your situation. You may need to gain additional senior management support and tolerance for risk before
conducting strategic activities.
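For readers who prefer it in executable form, the table above can be encoded as a simple lookup. The row and column values are transcribed from the table; the function itself is only an illustration:

```python
# Columns: time horizon (years), change readiness, technical level,
# risk tolerance, recommended option -- transcribed from the table above.
APPROACHES = {
    "GPRA Compliance": ("2-3", "Moderate", "High", "Moderate", "Balanced Scorecard"),
    ">30% cost reductions, survival": ("3-6", "High", "High", "High", "BPR"),
    "20% cost reductions": ("1-3", "Moderate", "Moderate", "Moderate", "BPI"),
    "Continuous improvement": ("Long term", "Moderate", "Moderate", "Low", "TQM"),
    "Baldrige score elevation": ("2-3", "Moderate", "High", "Low", "Balanced Scorecard + ABC"),
    "Strategic alignment": ("2-5", "Low", "Moderate", "Moderate", "Balanced Scorecard"),
    "Marketing credibility": ("2-5", "Low", "Low", "Moderate", "ISO-9000, incremental"),
    "Increased capabilities": ("2-5", "Moderate", "High", "Low", "CMM"),
}

def recommended_option(goal):
    # Return the recommended option for a strategic goal, or None if
    # the table has no matching row (no 'best option' exists).
    row = APPROACHES.get(goal)
    return row[4] if row else None

print(recommended_option("GPRA Compliance"))  # Balanced Scorecard
```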

Details about each of these approaches may be found in books and at web sites listed in the Links.


Measurement-Based Management -- And Its Excesses

© Paul Arveson 1998


Henry Mintzberg is a widely-respected professor of management at McGill University in Montreal. He wrote an article
called "Managing Government, Governing Management" for Harvard Business Review in May-June, 1996. In it, he
said:

"Next, consider the myth of measurement, an ideology embraced with almost religious fervor by the
Management movement. What is its effect in government? Things have to be measured, to be sure,
especially costs. But how many of the real benefits of government activities lend themselves to such
measurement? Some rather simple and directly delivered ones do -- especially at the municipal level --
such as garbage collection. But what about the rest? Robert McNamara's famous planning,
programming, and budgeting systems in the U.S. federal government failed for this reason:
Measurement often missed the point, sometimes causing awful distortions. (Remember the body counts
of Vietnam?) How many times do we have to come back to this one until we finally give up? Many
activities are in the public sector precisely because of measurement problems: if everything was so
crystal clear and every benefit so easily attributable, those activities would have been in the private
sector long ago."

As we begin to embark on a comprehensive measurement-based management program, what are we to make of this
critique? Certainly the excesses of the "Whiz Kids" of the 1960's present a notorious example of the failings and
unintended consequences of previous measurement systems. This critique has to be confronted directly lest we be
doomed to repeat history.

In the context above, it is clear that Mintzberg is talking about outcome as opposed to output measurements. This is an
important distinction. Briefly, outputs are productivity or efficiency metrics such as the number of reports written per
month, the number of widgets manufactured per day, etc. They also might include service metrics such as the number
of customers visited per month, the number of help desk calls resolved, etc.

Outcomes, on the other hand, deal with long-term effectiveness or results of a mission or function. One generic
example of an outcome metric is "customer satisfaction". In private companies, this is a leading indicator of future
profitability, growth and increased market share: all outcomes indicating success. However, in the case of government,
it is difficult to define outcomes adequately. Since government is a not-for-profit organization, financial results are not
relevant as outcome metrics. Growth and market share are also not applicable. "Customer satisfaction" is somewhat
relevant, but who is the customer? The government user of a process? The civilian user? The taxpayer? The sponsor or
administrator? Congress? The OMB? It could be all the above.

Probably the only generally applicable outcome metric for government is "mission effectiveness". The missions of
government are undertaken for the public good, not for gain. But that 'good' is defined differently depending on the
agency's mission. Sometimes it is feasible to measure. For example, a police department's outcome would be the
overall crime rate in its district. But how would you define the outcome for a military base? Number of wars won?
What about a basic research lab, whose inventions might revolutionize technology 20 years from now? It becomes
infeasible to measure outcomes in many such cases, and attempts to do so may turn missions astray, as Mintzberg says
they did in Vietnam.

So I will grant that in the context in which he writes, Mintzberg is correct. We should be very careful about establishing
metrics for government mission outcomes, and in some cases not even try.

But for output measurements, it seems that the situation is quite the opposite. Government offices, warehouses, bases
and facilities have much the same support services and user needs as any private company's. So we should expect to
find degrees of efficiency of operating and support processes that can be measured, benchmarked and improved. This is
the appropriate function of a measurement program, and its goal is simply to improve productivity and efficiency, i.e.
reduce the cost and cycle time of internal processes. In all conceivable cases, such reductions will please customers,
whoever they are, and probably result in improved outcomes.

Therefore, I believe there is an appropriate place for a measurement system in government agencies, although its
emphasis should be on measuring outputs (such as process cycle times) and only the outcomes that are short-term and
relatively easy to measure, such as self-reported customer satisfaction from several classes of customers. Such a system
will, I believe, meet the letter and the spirit of the Government Performance and Results Act while avoiding excessive
agonizing over metrics definitions.


Objections to a Performance Management System, and Responses


© Paul Arveson 1998

1. The costs outweigh the benefits. What will we find that we didn't already know?

What is the cost of not proving your value? Our competitors will prove theirs.

Today, our customers expect us to show evidence of progress. They have been through the performance management
training too -- and the law (GPRA) requires it.

Web sites can enable and automate much of the work. This is a new method that is easy and inexpensive to implement,
relative to the tools we had a few years ago.

Performance measurement has been demonstrated to be a 'best business practice' in terms of improving the bottom line
in all kinds of organizations (e.g. Motorola, Mobil, Cigna, the Coast Guard, the Minnesota Dept. of Revenue, the City
of Charlotte, NC, the Veterans Administration, Rockwater Energy, FMC, a bank, and an insurance company).

2. But some tasks will be labor intensive: metrics definition, software development, data collection.

It's true that defining metrics is time consuming and has to be done by managers in their respective mission units. But
once they are defined, they won't change very often. And some of the metrics are generic across all units, such as cycle
time, customer satisfaction, employee attitudes, etc. Also, software tools are available to assist in this task.

Software development efforts should be kept to a minimum by using COTS products to the maximum. There are now
companies that specialize in this kind of product, but most agencies can probably just leverage the existing financial
data warehouse to support this system.

Data collection will be supported in many cases by using web-based forms. Manual work such as collecting customer
data, telephone interviews, etc. can be supported by broadening the work of the existing 'financial assistants' to non-
financial metrics.

3. We have only limited control over results. Why should we be held accountable for things we can't control?

With our strategic initiative of customer service, we must take responsibility for our mission effectiveness; we have to
improve customer relationships; there is no alternative. Our customers will understand our limitations if they are in a
close partnership with us.

4. The results will be used against us.

The results can also be used for us. What has been hurting us more is not having any results to show.
What better way to gain resources from the sponsor than to clearly show them the consequences of the present
situation.

We can't see our own blind spots. We need someone else to point them out to us. The measurements add visibility, even
if it is painful.

And if we excel? It is NOT generally true that successful organizations lose budget as a result.

5. Management will misuse or misinterpret the results. The process will be gamed.

That's why we need a balanced mix of several measures. Inspections such as Baldrige assessments are done by a variety
of dedicated people across the organization. They want to do a good, honest job.

The measurements will be validated by an independent IG team or third party.

It should be emphasized to everyone that the main purpose of the balanced scorecard is not individual performance, but
collective organizational performance. (Another separate system should be used for individual performance
evaluations.) Therefore, results at the lower levels can be aggregated in such a way that individual employee
performance is not reported out. This will eliminate a source of fear that will lead to gaming or failure to produce the
data.

6. They will score us by inappropriate or unfair standards.

We get to define our own metrics, at least the ones that are pertinent to each mission. That's the only way to define
OUR mission effectiveness. And the other metrics, like cycle time and customer satisfaction, are generic across all
organizations.

7. Too much complexity: There are numerous systems and assessment criteria; how will we combine them all?
(ISO 9001, ISO 14000, Baldrige, ABC, EVA, CMM, Balanced Scorecard, strategic initiatives).

I don't think it is necessary to make such a complex system. What is needed is a minimum basic set of measurements
across various business perspectives and aligned to our strategic plan, as the balanced scorecard prescribes. We do need
to develop ISO certifications when appropriate, but the metrics for them could be simple, like 'percentage coverage' and
'cycle time'.

After all, the purpose of the system is to clarify our situation for senior managers, not to make it more complex.

8. It's too big and ambitious and expensive to deploy a performance measurement system in this entire
organization. We can't afford such large-scale efforts.

Agreed. It should not be deployed across the organization all at once. Rather, it should start small, in a business unit,
and be allowed to develop incrementally. Experience will be gained before company-wide deployment is considered.
This reduces cost, risk, and disruption.


The Balanced Scorecard - Who's Doing It?

Increasingly, as balanced scorecard (BSC) concepts become more refined, we have had more inquiries asking for
examples of organizations that have implemented the BSC, how the BSC applies to a particular business sector, what
metrics are appropriate for that sector, etc. This section provides a database of working balanced scorecard examples that our
research has located.

Although by the end of 2001 about 36% of global companies were working with the balanced scorecard (according to
Bain), much of the information in the commercial sector is proprietary, because it relates to the strategies of specific
companies. Public-sector (government) organizations are usually not concerned with proprietary information, but also
they do not usually have a mandate (or much funding) to post their management information on web sites.

The following link will take you to our compilation of data on organizations that have reported at least a partial
adoption of the balanced scorecard:

Adopters of the balanced scorecard (in alphabetical order of organization name)

It is necessary to do extensive research in order to locate this information. If you would like to report other balanced
scorecard examples, please let us know!


Balanced Scorecard Adopters


Sample List

This table lists a sampling of documented balanced scorecard adopters.

Organization Sector City State Country


Allfirst Bank Banking USA
Ann Taylor Stores Retail USA
AT&T Canada Long Distance Telecommunications Canada
Bank of Tokyo-Mitsubishi Banking Japan
Blue Cross Blue Shield of Minnesota Health Care Insurance MN USA
BMW Financial Services Financial Services Germany
Bonneville Power Administration Utilities WA USA
Boston Lyric Opera Entertainment Boston MA USA
British Telecommunications Worldwide Telecommunications UK
California Poly. State University, San Luis Obispo Higher Education San Luis Obispo CA USA
California State University system Higher Education USA
California State University, Pomona Higher Education CA USA
Carleton University Higher Education Ottawa ON Canada
Caterpillar, Inc. Manufacturing USA
Charleston Southern University Higher Education Charleston SC USA
Chemical Bank Banking NY USA
Cigna Property & Casualty Insurance USA
Citizen Schools Middle Schools Boston MA USA
City of Charlotte Local Government Charlotte NC USA
Cornell University Higher Education Cornell NY USA
Crown Castle International Corp. Telecommunications USA
DaimlerChrysler Manufacturing Germany
Datex-Ohmeda Health Care Supplies USA
DDB Worldwide Marketing
Deakin University Higher Education Victoria Australia
Defense Logistics Agency Government USA
Devereux Foundation Mental Health Care USA
Duke University Hospital Health Care USA
DuPont Manufacturing DE USA
Entergy Energy New Orleans LA USA
Equifax, Inc. Financial Services Atlanta GA USA
ExxonMobil Corp. Energy USA
Fannie Mae Banking Washington DC USA
Finnforest, UK Natural Resources UK
First Energy Corp. Energy USA
Ford Motor Company Manufacturing USA
Fort Hays State University Higher Education Hays KS USA
Foster Farms Agriculture CA USA
General Electric Company Manufacturing USA
High Performance Systems, Inc. Information Technology USA
Hilton Hotels Corp. Hospitality USA
Homestead Technologies Internet Communications USA
Honeywell Manufacturing USA
IBM Information Technology USA
Illinois Benedictine College Higher Education IL USA
Indiana University Higher Education IN USA
Ingersoll-Rand Manufacturing USA
International Data Corp. Information Technolo USA
KeyCorp Financial Services USA
Lawrence Hospital Health Care Westchester NY USA
Link List Multiple
Lloyds TSB Bank Banking UK
May Institute Health Care Norwood MA USA
McCord Travel Management (now WorldTravel BTI) Leisure & Travel USA
MDS Health & Life Sciences USA
Media General Media USA
Mercury Computer Systems, Inc. Information Technology USA
Mobil North American Marketing & Refining Energy USA
Montefiore Medical Center Health Care USA
National City Bank Banking Cleveland OH USA
National Reconnaissance Office Government Dulles VA USA
NCR Corp. Information Technology USA
Northern States Power Company Energy USA
Northwestern Mutual Insurance USA
Nova Scotia Power, Inc. Utilities NS Canada
Ohio State University Higher Education Columbus OH USA
Ontario Hospitals Health Care Canada
Owens & Minor Health Care Supply USA
Pennsylvania State University Higher Education State College PA USA
Pfizer Inc. Pharmaceuticals USA
Philips Electronics Manufacturing Netherlands
Prison Fellowship Ministries Humanitarian USA
Reuters America, Inc. Financial Services USA
Ricoh Corp. Manufacturing Japan
Royal Canadian Mounted Police Government Canada
Saatchi & Saatchi Worldwide Marketing NY USA
Saint Leo University Higher Education Saint Leo FL USA
Scudder Kemper Investments Inc. Financial Services USA
Sears Roebuck & Company Retail USA
Siemens AG Manufacturing Germany
Southern Gardens Citrus Processing Corp. Food Processing FL USA
St. Mary's/Duluth Clinic Health System Health Care MN USA
St. Michael's Hospital Health Care Toronto ON Canada
T. Rowe Price Investment Technologies, Inc. Financial Services USA
Texas Education Agency Education TX USA
The Handleman Company Wholesale distribution USA
The Store 24 Companies, Inc. Retail USA
The Thompson Corp. Information Systems USA
UK Ministry of Defence Government UK
Unicco Service Co. Industrial Services USA
United Way of Southeastern New England Humanitarian MA USA
Univ. of California Higher Education CA USA
Univ. of California San Diego Higher Education San Diego CA USA
University of Akron, Ohio Higher Education Akron OH USA
University of Alaska Higher Education AK USA
University of Arizona Higher Education Tucson AZ USA
University of California, Berkeley Higher Education CA USA
University of California, Los Angeles Higher Education Los Angeles CA USA
University of Denver Higher Education Denver CO USA
University of Florida Higher Education Gainesville FL USA
University of Iowa Higher Education Iowa City IA USA
University of Louisville, KY Higher Education Louisville KY USA
University of Missouri Higher Education Kansas City MO USA
University of North Carolina at Wilmington Higher Education Wilmington NC USA
University of Northern Colorado Higher Education CO USA
University of St. Thomas Higher Education Minneapolis MN USA
University of Vermont Higher Education Burlington VT USA
University of Virginia Library Higher Education VA USA
University of Washington Higher Education Seattle WA USA
UPS Shipping Atlanta GA USA
US Army Medical Command Health Care VA USA
US West Telecommunications USA
Vanderbilt University Medical Center Health Care Nashville TN USA
Verizon Communications Inc. Telecommunications USA
Volvofinans Financial Services Sweden
Walt Disney World Company Entertainment USA
Wayne State University Higher Education Detroit MI USA
Wells Fargo Bank Banking CA USA

Balanced Scorecard Examples


OK, show me some examples! Who is doing balanced scorecards, and what are their results? (Aren't results what this
is all about?)

Below we offer links to some files and publications that will show you what the documents and results of balanced
scorecards look like. Although these all differ in format and details, they serve to illustrate the visual effectiveness of
the balanced scorecard approach to strategic management. (Note: these documents are the products of their respective
organizations, not the Balanced Scorecard Institute).

Non-profit Organizations:

Oak Knoll Academy - A primer on development of a management strategy for a fictitious private school, by Balanced
Scorecard Institute Associate Dr. Lawrence Grayson. A strategy map for the school is also available.

Vinfen Corporation - A private, non-profit human services organization based in Cambridge, MA. They recently
published a scorecard and a newsletter that provides details about their strategic plan and performance measures.

Government Organizations:

Defense Financial Accounting Service (DFAS) - Example of a balanced scorecard-based strategic plan for this world-
class financial organization, and some additional information about how it was developed (Nov. 2001).

Federal Aviation Administration Logistics Center - A highly customer-focused organization with a balanced
scorecard-based strategic plan. Their original plan is a rather large (37 MB) file, so we have removed the graphics and
provide the text content only here, in order to reduce the file size.

Department of Energy Federal Procurement System - One of the early Federal Government adopters of the balanced
scorecard. It continues to lead by example with this FY2003 Performance Assessment.

Department of Energy Federal Personal Property Management Program - Example of a balanced scorecard for a
major government program.

Commercial Organizations:

Regional Airline - A strategy map, with objectives, performance measures and initiatives in the balanced scorecard
framework.

Credit Card Company - A generic example of a possible strategy map for an innovative credit card company.

Database:
Over 130 Balanced Scorecard Adopters

More coming! If you would like to share your balanced scorecard plans and/or results with the world, please contact
us.

Return to home page


Performance Measures or Metrics


"You get what you measure" -- whether it is really what you want or not. Therefore, it is extremely important to
conduct adequate planning to define appropriate goals, metrics, targets, schedules, data collection processes, and
analysis procedures. The goal of all this work is to raise the visibility of the important things that are happening in
the organization -- now and in the future. The following articles discuss aspects of metrics or measures in the context
of public-sector and nonprofit organizations.

 Desired Features of Agency Metrics


 What Should Your Company Measure Besides Financial Results?
 The Ethics Perspective
 Questions for Planners to Ask about Strategic Initiatives
 How do we Translate Private-Sector Performance Metrics to Public Sector?
 What are Appropriate Metrics for Government Agencies?
 Basic Concepts in Conducting Business Surveys

Return


What Should Your Company Measure Besides Financial Results?


by Will Kaydos *

Most people don’t recognize that performance measurement lies at the heart of the improvements we humans
have made in our standard of living in the past few centuries. That’s because almost all of the gains can be linked
to using the Scientific Method to determine cause-effect relationships - and that requires measurement.

For example, bloodletting to cure illness was a common practice in many cultures for over 2000 years until Pierre
Louis used measurement to show the practice did not increase recovery rates (circa 1850). Although his
discoveries were not quickly adopted, they eventually led to discontinuing this worthless, and potentially harmful,
procedure.

Until Louis came along, everyone "knew" bloodletting worked because some people did recover after being
drained of several ounces of blood. Sure, some died, but when that happened, the rationale was that the person
was just too sick to be cured in the first place. It’s hard to argue with that kind of thinking unless, like Louis, you
have the data to prove otherwise.
But are things so much different today? They certainly are in the scientific community, where rigorous
substantiation of theories by sound data is always required. The business community, however, is nowhere near
as disciplined. Some companies have excellent measurement systems, but many are still in the Dark Ages when
it comes to their executives and managers having the measures necessary to understand how their business
works, how it is performing, and why it is performing as it is. Like tenth-century physicians, managers in these
companies are making decisions based on subjective information, anecdotal evidence, and beliefs about what
drives results that are simply not true.

As Pierre Louis demonstrated, when it comes to identifying cause-effect relationships, nothing beats
measurement - and the same is true about identifying business problems and opportunities. So what should a
well-managed company measure besides the usual financial results?

Measurement frameworks

The Balanced Scorecard is currently a very trendy (and often misunderstood) topic in business circles, but there
are other measurement frameworks such as the Performance Prism, the Quantum Performance Management
Model and the Tableau de Bord. All are useful, but none of them is the answer to everything despite what their
advocates may say.

Combining elements of various measurement frameworks yields the measurement model below. It works as
follows:

1. The needs and expectations of customers and stakeholders are the primary drivers of strategies.
Stakeholders include shareholders and employees, but suppliers, the community, government entities and
other organizations could also be important stakeholders.

2. Strategy consists of defining your intended customers and how you are going to compete for them. A
company’s strategy is made up of individual strategies, which are the key actions a company must take to
achieve its vision and goals. When developing strategies, all other elements of the model must be considered.

3. Operations include all direct and support business activities that execute strategies and produce products and
services for customers and stakeholders.

4. The capabilities of a company’s organization and infrastructure enable its operations to efficiently satisfy
customer and stakeholder requirements. Stakeholder capabilities may also be important to a company’s
operations. In the short-term, capabilities can limit what strategies are feasible; in the long-term they may
need to be developed to implement certain strategies.

5. Stakeholder contributions include products or services that are essential to operations. For example, suppliers
may provide critical technical support for designing products.
6. Products and services provided to customers create financial returns (7) for shareholders and perhaps other
stakeholders as well.

Note: For public sector organizations, the model is similar, but customer and stakeholder satisfaction become the
primary desired outcome, not financial returns.

Critical questions executives must answer

Using this model, we can develop the critical questions executives must be able to answer about their company
and the performance measures that apply to each question.

CUSTOMERS

1. Are we satisfying our customers?

· Customer satisfaction and dissatisfaction

· Customer retention and behavior

STAKEHOLDERS

2. Are we satisfying our shareholders?

· Financial returns to shareholders

3. Are we satisfying our other stakeholders?

· Stakeholder satisfaction and dissatisfaction

· Stakeholder retention and behavior

STRATEGIES

4. What is happening to our customer base?

· Market potential

· Market growth rate

5. Is our company strategy working?

· Market share

· Customer acquisition

· Customer profitability

· Product/service profitability

· External factors that affect customers

6. Are our individual strategies being properly executed?

· Strategic goals and the objectives necessary to achieve them.


OPERATIONS

7. Are we serving our customers and stakeholders effectively?

· Product and service quality

8. Are we operating efficiently?

· Process quality and capability

· Productivity

· Waste

· Product and service costs

STAKEHOLDER CONTRIBUTIONS

9. Are stakeholders contributing what they should?

· Stakeholder resource contribution

· Stakeholder contribution quality

CAPABILITIES

10. Are we developing the abilities we need to execute our strategies?

· Organizational capabilities

· Infrastructure capabilities

· Stakeholder capabilities
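
These question-to-measure mappings can be kept as a simple data structure and checked for coverage. A minimal Python sketch, showing only three of the ten questions (the structure and the function name are illustrative conveniences, not part of any published framework):

```python
# Hypothetical sketch: map each executive question to the measures that
# answer it, then verify every question has at least one measure assigned.
EXECUTIVE_QUESTIONS = {
    "Are we satisfying our customers?": [
        "Customer satisfaction and dissatisfaction",
        "Customer retention and behavior",
    ],
    "Are we satisfying our shareholders?": [
        "Financial returns to shareholders",
    ],
    "Are we developing the abilities we need to execute our strategies?": [
        "Organizational capabilities",
        "Infrastructure capabilities",
        "Stakeholder capabilities",
    ],
}

def uncovered_questions(question_map):
    """Return the questions that have no measures assigned to them."""
    return [q for q, measures in question_map.items() if not measures]
```

A gap in this map (an empty measure list) flags a question that executives cannot currently answer with objective numbers.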

This is a simplified picture of what should be measured at the executive level of any company. There are some
overlaps among the questions and measures as well as many finer points that won’t be discussed here, but this
overview is consistent with what leading companies are measuring.

The importance of market share

Many would say market share is the most important measure, because if you’re losing market share, your
competitors have an advantage and they will soon eat your lunch. Market share won’t tell you how to correct the
problem, but it will tell you when you are getting into trouble.

For example, Kmart was losing market share to Wal-Mart way back in the ‘70’s, but Kmart’s management failed
to heed the warning - apparently because the company was still growing and profitable. Although Kmart
eventually made some changes to its strategy, it was too little, too late. Now it’s a Chapter 11 "blue-light special"
and the chances of it regaining its market position appear to be somewhere between zero and none.

Measurement and business success


All of the listed variables can be measured to a useful degree of accuracy and some companies are doing it.
Companies that have won the Baldrige Award or similar state award have extensive measurement systems that
include all of the above measures.

In reviewing numerous Baldrige-based quality award applications, I have found that a good estimate of a
company’s final score can be made by just examining the measures being used. Why? Because the depth,
breadth and underlying logic of a company’s measures reflect management’s understanding of the business and
how well it is being managed.

Not surprisingly, over a five-year period ending in 1998, the winners of Baldrige and similar awards did two to
three times better than comparable companies in terms of their growth in sales and operating income. That is a
huge difference!

Determining what to measure

So how can you determine what your company should measure? As mentioned before, there are several
frameworks that can be used. Although they all have merit, some have advantages in terms of their state of
development, ease of use, and direct relationship to common business practices.

I believe the best approach for developing company or business unit strategy and related measures is to use the
Balanced Scorecard methodology in conjunction with the robust perspectives of the Performance Prism. Balanced
Scorecard performance systems have an established record of success, but one needs a disciplined way of
building and implementing the system to ensure that business strategies get executed and that the necessary
organization culture change gets implemented. One framework that is becoming an international "best practice"
is the Balanced Scorecard Institute's Nine-Step Methodology for developing strategic themes, business strategies,
strategic goals, strategy maps, performance measures, targets, and new initiatives. The result is a strategic
management system that is comprehensive, logically sound, and supported by the whole organization.

This does not assure the strategies will work, but the measures will provide timely feedback about how well they
are working so timely corrective action can be taken regarding the strategies or their execution. Without the
measures, a company’s strategy and finances could get substantially off-track before any problems are even
recognized.

But having good strategies is not enough to be successful. Operational excellence is also needed to execute
them. To achieve and maintain high levels of productivity, quality, and customer service, comprehensive
operations or process measurement systems are needed to manage processes, departments and work units.
These systems would include the measures that are strategically important, but those measures alone are
insufficient for effectively managing operations.

For developing operational measures, I recommend the approach and model given in my book Operational
Performance Measurement: Increasing Total Productivity. No doubt I am biased, but the book’s process
measurement model is the only one I’ve seen that meets three critical criteria: it is logically sound, it readily
relates to real world processes, and it has a record of successful application. The model is also consistent with
TQM and Six Sigma methodologies that contain many specialized techniques for measuring and managing
processes.
Becoming familiar with the Baldrige Criteria for Performance Excellence is also recommended. Since it outlines
general management best practices, it provides a very helpful perspective on what a well-managed company
should be measuring, as well as what it should be doing.

Cascading measures

Corporate level measures are very important, but they aren’t going to have much impact unless they are
cascaded all the way down to front-line employees. The case for cascading is simple: Do you want 10% of your
employees working toward company objectives or 100%?

With some exceptions, such as market share, what you measure at the top is what must be measured at all
levels. However, the specific measures will change with every function and organizational level because managers
doing different jobs need different information to make different decisions.

The same methodologies used to develop measures at the corporate level can be used to cascade the measures
down to front-line managers, supervisors, and employees. However, as you go down the organization chart, the
focus is on operations or processes. Strategy is incorporated into operational measures by giving more weight to
the measures that are strategically important. This communicates strategy to all employees by translating it into
operational terms - a primary objective of the Balanced Scorecard.
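
The weighting idea above can be made concrete with a short sketch. Assuming each operational measure has been normalized to a 0-100 score, strategy is expressed by giving the strategically important measures larger weights in a unit's composite score. All measure names, scores, and weights below are invented for illustration:

```python
def composite_score(scores, weights):
    """Weighted average of normalized measure scores (0-100).

    scores:  dict of measure name -> normalized score
    weights: dict of measure name -> strategic weight
    Measures that matter more to strategy get larger weights, so
    strategy is communicated through the arithmetic itself.
    """
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Hypothetical front-line measures for one work unit; on-time delivery
# is assumed to be the strategically important measure here.
scores = {"on-time delivery": 90, "defect rate": 70, "unit cost": 80}
weights = {"on-time delivery": 3, "defect rate": 2, "unit cost": 1}
unit_score = composite_score(scores, weights)
```

Because delivery carries the largest weight, an improvement there moves the unit's score more than an equal improvement in unit cost, which is exactly how strategic priorities reach the front line.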

Implementing performance measures

Determining what to measure can take considerable effort, but it will probably be less than one-third of the total
effort required to implement an efficient and effective measurement system. Data collection and processing
systems will have to be implemented to produce the measures; everyone will have to be trained in using the
systems and measures; and as the measures are used, some problems are sure to be identified that will require
changes to the system.

Perhaps the greatest challenge faced when implementing performance measurement systems is changing an
organization’s culture. Using performance measures requires managers and employees to change the way they
think and act. For most people, this is relatively easy, but for some, changing old beliefs and habits is very
difficult.

Overcoming such problems requires strong leadership to provide appropriate direction and support. The best
measurement system in the world will yield few benefits if the right knowledge, skills, abilities, and values are not
developed in a company. An organization doesn't just interface with a measurement system; it is part of the
system.

Measurement System Self-Assessment: We have prepared a checklist you can use to conduct a self-assessment of
your company's performance measurement system. Please contact Will Kaydos if you would like a copy.

Developing and implementing effective measurement systems requires leadership, commitment, and hard work.
Some investment is required, but it is small relative to the key benefits of a well-designed and implemented
measurement system:
 The ability to determine if sales and profit problems are caused by strategies, operations, or both;
 Early identification of problems and opportunities;
 Increased productivity, quality, and customer service;
 A clear understanding of what drives financial and operational performance so resources can be
allocated to the areas of greatest return; and
 A cohesive organization working toward common goals.

No matter what approach you use to develop performance measures, bear in mind that the objective is not to
have a Balanced Scorecard, Performance Prism or some other type of system, but to have the measures in place
that will enable managers at all levels to answer the ten key questions given earlier.

If all of your managers can readily answer those questions about their areas of responsibility and support their
answers with objective numbers, your company has the performance measures it needs. If they can’t, some of
the "good" decisions they are making are undoubtedly not very effective - and they may even be harmful.

Sounds a lot like practicing bloodletting, doesn't it?

The Ethics Perspective


© Paul Arveson, 2002

In the fall of 2001, Americans experienced a new depth of evil. Following the attacks on two cities, the mail system
was attacked with anthrax, and electronic systems were attacked with Code Red and other viruses. We have a new
awareness of forces that seek to do unlimited harm to society. We sense that we are quite vulnerable to random attacks
in a variety of ways, and it is unlikely that the government can stop all of them.

But our society has also been subjected to a threat which appears to have had a more serious and lasting effect on our
economy than the terrorist attacks. Some corporations have defrauded the public on the unprecedented scale of many
billions of dollars. We have witnessed the consequences of conflicts of interest by auditors and stockbrokers. Piracy of
software and music is widespread across the Internet, and counterfeiting of prescription drugs is leading to many
deaths. Bogus "alternative" medicines and phony diet treatments are everywhere. Our regulatory institutions, such as
the Securities and Exchange Commission (SEC), the Federal Trade Commission (FTC) and the Food and Drug
Administration (FDA), seem unable to respond adequately to these new forces of fraud.

The Mercatus Report on Government Accountability for 1999 concluded:

"... Government has the same fiduciary responsibility to taxpayers that companies have to their shareholders. Agency reports
should mirror standards required in the reports of Fortune 500 companies, which suffer severe penalties if they fail to report
accurately and ethically to their shareholders."

How ironic it is, in these post-Enron times, that the regulatory status of the private sector would be held up as a
standard for government accountability! Criminal and civil litigation against Enron, Andersen, and other corporations is
likely to continue for many years, with billions of dollars at stake.
In a first effort to restore trust in corporations, the Sarbanes-Oxley Act now requires the CEOs and CFOs of large
corporations to sign an affidavit certifying the accuracy of their financial reports. They must certify that to the best of
their knowledge, "No covered report contained an untrue statement of material fact ..."

But how can executives know for sure that everything in these reports is true? Of course, assuring this is the reason for
the enormous effort and cost that goes into the maintenance of accurate accounting data in any organization, along with
independent audits and inspections. However, as we have seen, financial audits can be misleading or untrustworthy.
Loss of credibility has led to the collapse of at least one major accounting firm, and the breakup of others. In the wake
of these accounting scandals, it is evident that fraud or at least misleading statements are widely distributed in
companies. The suspicion of this -- whether justified or not -- is causing a retreat of investors from the entire stock
market.

We have learned once again that the crucial factor for economic health is trust. Without trust, business could not
function, and the economy would degenerate back to barter. It is this assault on our sense of trust, more than the overt
acts of terrorism, that has created a lasting downward pressure on markets.

"Trust but Verify"

Corporate governance has become an important new concern in our society. But signed affidavits do not by themselves
justify trust. Legislation alone, although beneficial and well-intended, cannot guarantee trust. The size and complexity
of modern corporations and governmental organizations is such that no one can know everything that is going on.
Executives have to trust their subordinates and vice versa. Inspectors and auditors have to invest a significant amount
of trust in those who supply them with documents. And investors have to trust the information provided to them by
company managers. Even a police state like the Soviet Union could not know enough about its people, and it collapsed
under its own weight. A society, to be a society, must depend on the internalized "good faith" and trust of a significant
majority of its members.

Trust is also extremely important in the technologies we use to share information. We can use "secure" means of
communication, such as encryption and digital certificates, but how can we verify that our networks are truly secure?
Often in recent times we have discovered, too late, holes and security lapses that we had assumed were not there. In the
wake of heavy criticism, even Microsoft's Bill Gates has distributed his thoughts about the need for "trustworthy
computing". But he only refers to the technical aspects of security. This alone is an insufficient guarantee. As long as
human beings are involved in a process, there will be the possibility of deceit.

The original balanced scorecard emphasized the importance of employee learning and growth as the foundation of
success, and the source of innovation that leads to target-breaking performance. This is certainly valid, but recall that
the managers of Enron were all highly educated professionals. Yet their education served only to make them more
clever and evasive in building a fraudulent corporation. Education is not enough.

Sound business ethics must be practiced in order to rebuild trust in a company, and in our entire economic system.
Underlying all of the other features of a healthy organization, there must be an abundance of good will, transparency
and ethical behavior. Because of the pervasive importance of customer and investor trust, I suggest that ethical business
behavior should be included in the "learning and growth" perspective of the balanced scorecard, or that an ethical
business culture should be added as a fifth perspective where appropriate.

Measures of Corporate Ethics

But how can you measure ethical performance? Balanced scorecard designers, of course, routinely encounter aspects of
performance that are qualitative and hard to measure. Perhaps this is the hardest - but it is not impossible. Based on the
principles that "he who is faithful in the least will be faithful also in much" and "a tree is known by its fruit", here are
some possible metrics to consider (metrics with asterisks (*) are also relevant in public-sector organizations):

Level of business ethics training*: Modern business is complex; we must not assume that the rules of ethical behavior are known without training and evaluation of employees. This could also include training on the company's own business principles.
Morale of employees*: If a company doesn't treat its own employees fairly, how fair can it be to outsiders?
Openness, transparency*: Do executives hide behind spin and obfuscation? Is there a lot of secrecy?
Candor*: Do leaders report bad news as well as good news? Are they self-critical? Do they take responsibility?
Turnover rate*: If high, may indicate employee dissatisfaction.
Union relations*: If constantly strained, may indicate perceptions of distrust by employees.
Hotline*: Do employees have access to an anonymous communication channel to managers or inspectors?
Nepotism*: It may be more likely for family members to share dishonest practices, to exclude more-qualified managers, or to be distracted by infighting.
Community involvement*: Do the managers care about their neighbors?
Criminal records*: Public records may be available for some executives.
Driving records*: Bad driving may be a symptom of other problems.
Extravagance*: Managers' use of corporate resources, and the opulence of buildings, homes, etc., may be excessive.
Environmental awareness*: Do they leave all the lights on? Do they recycle? Do they pollute?
Employee policies*: Are employee policies in line with industry norms, or excessively stringent?
Resignations*: Executive resignations, other than for age, may indicate conflicts.
Employee diversity*: Is it representative of the general population?
Whistleblowers*: If any, what are they saying? How are they treated?
Inspectors, regulators*: For US government agencies, see Inspector General reports. Stock investors should check the EDGAR database. For occupational safety, check for OSHA violations. For foods and drugs, check the FDA. For consumer products, check the CPSC. For vehicles, check the NHTSA.
Transparency*: Financial and related company data should be reported frequently and openly on web sites.
Charitable giving; foundations: Altruistic behavior is difficult for crooks.
Work hours: Long hours mean stressed employees.
Lobbying expenses: Excessive, multi-party lobbying efforts were an Enron trademark.
Legal expenses: Too many lawsuits and lawyers on the payroll may indicate a defensive or aggressive culture.
Insider trading: New rules require prompt reporting within days.
Taxes paid: No taxes? Maybe the company is being too clever.
Cash flow: A better indicator of health than stock price.
Dividends: A company cannot cheat with payouts of dividends.
401(k) plans: May employees invest in other than company stock?
Stock options: Are they expensed?
Bond ratings: Indicate analysts' estimates of the company's ability to pay its debts.
Social responsibility: Is company stock included in a socially responsible mutual fund? (See lists.)

A collection of such metrics may show trends that are not visible in corporate annual reports or on the resumes of
distinguished executives. But these traditional sources of data have recently been found to be lacking and misleading.
Behavior-based metrics such as the above may provide a better overall view of the level of corporate ethical culture.
(Of course, there are no guarantees that these metrics will be sufficient indicators of trustworthy or ethical behavior).
As much as we know that clever people can deceive, we also know we must depend on clever people to expose deceit.
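One simple way to watch for the trends mentioned above is to roll the individual metrics up into a composite index. The sketch below is purely illustrative: the metric names, weights, and 0-10 scores are invented, and any real scorecard would need its own normalization and weighting scheme.

```python
# Hypothetical roll-up of behavior-based ethics metrics into one index.
# Assumption: each metric has already been scored on a common 0-10 scale.

def ethics_index(scores, weights):
    """Weighted average of normalized metric scores (0-10 scale)."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Invented quarterly scores for a few of the metrics listed above.
weights = {"ethics_training": 3, "turnover_rate": 2, "transparency": 3, "legal_expenses": 1}
q1 = {"ethics_training": 6.0, "turnover_rate": 7.5, "transparency": 5.0, "legal_expenses": 8.0}
q2 = {"ethics_training": 7.0, "turnover_rate": 7.0, "transparency": 6.5, "legal_expenses": 8.0}

print(round(ethics_index(q1, weights), 2))
print(round(ethics_index(q2, weights), 2))
```

Tracking such an index quarter over quarter is what exposes a trend; the absolute value matters less than the direction.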


Balanced Scorecard Institute


1025 Connecticut Ave. NW 975 Walnut St.
Suite 1000 Suite 360
Washington, DC 20036 Cary, NC 27511
(202) 857-9719 (919) 460-8180
www.balancedscorecard.org
Contact us


Questions for Strategic Planners to Ask


© Paul Arveson 2002

Aligning strategic initiatives to strategy is done through measurements of the performance of those initiatives. There is
no other way. Therefore strategic planning and the measurement of strategy should go hand in hand.

It is widely understood that the balanced scorecard incorporates feedback loops in order to not only measure, but also to
adjust, if necessary, the activities and their inputs (budgets) in order to improve mission success and mission value.
Actually, there are as many as FOUR different types of feedback loops in operation when strategic management is
fully operational. These are:

1. Feedback around the performance of strategic initiatives (strategic feedback)
2. Feedback around the internal operational performance (diagnostic or operational feedback)
3. Feedback around the definitions of metrics (metrics feedback)
4. Feedback around the design of measurement methods (methodology feedback)

This list is not intended to intimidate, but simply to identify the parameters that can be varied if necessary in the
evolution of strategic management within an organization. In practice, one starts implementation simply with a fixed
set of strategies, metrics and measurement methods that are based on "initial guesses" as to what will succeed. These
guesses may be based on business cases, but those should be seen as predictions, not hard-and-fast proof; otherwise
we would not need to measure anything or make any hard strategic decisions. There are too many uncontrolled
variables for us to see the future with any "business case" clarity.

Initial questions to ask

To verify the design of the initial BSC, we could ask the following questions of every strategic initiative:

Alignment questions:
What is the strategic goal that is being addressed by this activity?

What organizational mission does it relate to?

Do we have a hypothesis as to how this initiative will eventually improve results (i.e. a strategy map)?

Baseline questions:

What is the existing level of performance? Do we know?

Are we collecting this data and storing it somewhere?

What are the statistical parameters of this data, e.g. how much random variation does it contain?
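As a minimal sketch of answering the baseline questions, the snippet below characterizes an existing performance level with a mean, a standard deviation, and a rough two-sigma band of normal variation. The monthly values are invented for illustration.

```python
# Minimal baseline characterization for one metric, assuming monthly
# observations have been collected (values here are invented).
import statistics

baseline = [102, 98, 105, 97, 101, 100, 103, 99]  # e.g. cycle time in hours

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)   # sample standard deviation

# A common rule of thumb: changes within ~2 standard deviations of the
# baseline mean are hard to distinguish from random variation.
lower, upper = mean - 2 * stdev, mean + 2 * stdev
print(f"baseline {mean:.1f} +/- {stdev:.1f}; control band [{lower:.1f}, {upper:.1f}]")
```

Without such a baseline, any later "improvement" claim cannot be separated from noise.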

Cost and risk questions:

What is the existing cost of operation?

How much will that increase when we do the initiative?

What is the risk that this cost will be exceeded?

Is the money being spent on this initiative the best use of the funds, or is there a better usage?

What is the risk that the initiative will fail? Has this assessment been included in the planning?

Customer and Stakeholder questions:

Have you listed all the communities of interest that have a stake in this initiative?

Who are the kinds of customers/stakeholders who will benefit directly from this initiative?

Who will benefit indirectly?

Is the specified initiative the best way to increase satisfaction for all kinds of customers, or is there a better way?

How will we know that the initiative benefits these customers?

Metrics questions:

What metrics will be used to define the benefit?

Are these the best metrics? How do we know that?

How many metrics need to be tracked? If this is a large number (it usually is), what kind of system are you planning to
use to track them?

Are the metrics standardized, so they can be benchmarked against performance in other organizations?

Measurement Methodology questions:

How will the metrics be measured? What methods will be used, and how frequently will data be collected?

Is this the best way to do the measurements? How do we know that?


Results questions:

How can we demonstrate that this strategic initiative, and not something else, contributed to a change in results?

How much of the change was probably random?
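The last question, how much of a change was probably random, can be given a first-cut answer with a standard two-sample test. The sketch below computes Welch's t statistic for invented before/after samples; it is a screening device, not proof that the initiative caused the change.

```python
# Rough check of whether a post-initiative change exceeds random variation,
# assuming independent before/after samples of one metric (values invented).
import statistics, math

before = [64, 61, 66, 63, 62, 65]
after  = [70, 68, 72, 69, 71, 67]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(b) - statistics.mean(a)) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(before, after)
# |t| well above ~2 suggests the shift is unlikely to be pure noise;
# attributing it to the initiative still requires ruling out other changes.
print(f"t = {t:.2f}")
```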

Lessons for Planners

Some of these questions should be documented in a good business case. Other questions should be addressed in a
performance plan. However, unfortunately, plans are often written that address few, if any, of these questions. Why?
Perhaps because of the dominance of program management in government. But strategic management and program
management require different approaches. (See the essay on this topic.)

In most cases, managers are eager to get started -- they form teams to gather ideas for things to do, projects and
programs to initiate -- and may not get around to placing the initiatives into a framework of measurement-based
strategic management. They get busy with new activities, but never learn how the ultimate mission or strategy is being
affected; which initiatives are working and which are not.

This is the reason why (in the US Government) OMB and GAO keep asking agencies for performance plans, not just
program plans. It takes discipline and patience to establish a true performance-based strategic management system.


Translating Performance Metrics from the Private to the Public Sector


© Paul Arveson 1999

It is a truism that we live in an era of accelerated change, and no organization can survive without increasing its own
pace of decision-making. Hence an increased emphasis on realistic planning is being recognized, and in fact the US
Congress has mandated such an emphasis in legislation such as the Government Performance and Results Act (GPRA)
of 1993. It requires governmental agencies to develop strategic plans and performance plans that evaluate the success
of the strategic plan. The intent is to make governmental agencies more accountable for results to their ultimate
customer -- the taxpayers.

But there is a significant amount of 'translation' required to convert the language of the private sector into terms that are
appropriate for nonprofit and governmental organizations. The amount of translation has usually been underestimated
in the sincere attempt to make governmental agencies emulate private-sector business practices. However, since the
goal of planners is to define appropriate metrics for performance, it is important to clarify these distinctions.

All governmental agencies exist not for profit but to fulfil their charter or mission, which is an "inherently
governmental function". Government agencies have authority to conduct their mission that is delegated either by
congressional statute or by Executive Order. Moreover, in US law, government agencies are prohibited from direct
competition with the private sector in providing products and services. Hence, unlike private-sector businesses that can
change in any way they please, government agencies are constrained to work within their authorized mission. On the
other hand, private corporations are prohibited from engaging in some activities that are authorized for the government
only; these exclusions are described in the Constitution. Hence, the Department of Defense has the authority to develop
weapon systems and hire personnel to operate them. The General Services Administration has the authority to develop
and implement policies for management and office processes throughout the government. The State Department has the
authority to develop and implement foreign policy.

The key metric for government (or nonprofit) performance, therefore, is not financial in nature, but rather mission
effectiveness. But mission effectiveness is not a definite and static thing. Usually, an agency has a rather broad general
mission, which incorporates many specific sub-missions or departmental missions within it. At any given time, some
departmental missions may be more important than others for the needs of the country. The selection of the
departmental mission priorities is an ongoing strategic planning responsibility.

To summarize the similarities and differences of strategy between public and private-sector organizations, the
following table was prepared:

Comparing Strategy in Private and Public-Sector Organizations

Strategic Feature | Private Sector | Public Sector
General Strategic Goal | competitiveness | mission effectiveness
General Financial Goals | profit; growth; market share | cost reduction; efficiency
Values | innovation; creativity; good will; recognition | accountability to public; integrity; fairness
Desired Outcome | customer satisfaction | customer satisfaction
Stakeholders | stockholders; owners; market | taxpayers; inspectors; legislators
Budget Priorities Defined by | customer demand | leadership; legislators; planners
Justification for Secrecy | protection of intellectual capital; proprietary knowledge | national security
Key Success Factors | growth rate; earnings; market share; uniqueness; advanced technology | best management practices; sameness; economies of scale; standardized technology

Examination of this table will illustrate the often significant differences between public and private-sector
organizations. The only clear similarity between the two is in the desire for "customer satisfaction", but even here there
is a difference, because the definition of "customer" is different in the two cases. This table illustrates the necessity for
significant revision or 'translation' of much of the private-sector focused guidance commonly available for
implementing the balanced scorecard and other strategic planning efforts.

Government agencies need to exist; why bother to measure performance?

It is sometimes stated that as a consequence of the GPRA, budgets will be determined based on performance as
evaluated by the Office of Management and Budget (OMB). However, this is oversimplified, private-sector language.
Actually, budgets for many government agencies will always be provided to a large extent based on the necessity of the
agencies' missions, irrespective of their performance. For example, if the police force in a city has poor performance,
will the budget for that force be cut? That is unlikely; probably the opposite will occur as more funds are made
available for hiring, training, etc. The government agency's right to exist remains intact because of its validity on a
political level. This is totally unlike the situation for a private sector company.

Of course some agencies do not have charters as permanent as that of police forces. There are many offices, bureaus,
commissions and administrations that were established by legislators for a temporary task, and sometimes a 'sunset
clause' is included in the charter to ensure that the organization doesn't continue after its task is done. And even within
a large permanent agency, such as the Department of Defense, there are many smaller agencies and offices that are
temporary in nature. For such agencies, there may need to be a periodic review of their strategic need in light of new
conditions.

From this discussion, it sounds as if the whole intent of the GPRA, to improve governmental performance and results
for the taxpayers, is being undermined. If agencies have a political right to exist, regardless of their level of
performance, what is the need for performance assessments?

Indeed there is an important role for the GPRA, but it is not based on strategic needs. Its value lies elsewhere -- in
something that should be pervasive throughout the government: the GPRA's focus is on the effectiveness and efficiency
of the agency's authorized work.

Since each agency has its assigned mission, the metric for success of that mission will be unique to the agency. Success
is thus defined specifically to the agency's charter. "Performance" in this context means, "How well is the agency doing
its mission?" Metrics of performance answer the question "How do you know how well the agency is doing?" The
answer may take the form of a balanced scorecard on the mission-oriented workforce.

But in addition to mission work, every agency also contains a support workforce that does the same kinds of tasks:
business systems such as payroll and human resources, financial data accounting, etc., utilities, facilities, maintenance,
file management, forms processing, and other kinds of office work that are "generic" -- essentially the same in all
agencies. These support functions are necessary but they do not relate directly to the mission of the agency -- although
they do play a role in its effectiveness and the viability of the organization. The important difference between generic
and specific (mission-related) metrics is that the generic metrics can be benchmarked across other organizations. This
provides a way for agencies to compare their processes with the best practices in the private sector, and to identify
processes with exceptionally high or low efficiency. Typically the Baldrige criteria are used for such assessments,
although other methods can yield the same kinds of generic performance metrics.

Criteria or metrics for government or nonprofit agency performance can thus be broken down into the following three
categories:

1. Strategic need (current and in the foreseeable future)
2. Mission-specific effectiveness metrics (uniqueness and viability)
3. Generic efficiency metrics

In another essay, details regarding the definition of metrics in each category will be outlined based on the concept of
triage.


Designing Metrics for Government Agency Performance

© Paul Arveson 1999

"I am the true vine, and my Father is the gardener. He cuts off every branch in me that bears no fruit, while every
branch that does bear fruit he trims clean so that it will be even more fruitful." -- John 15:1-2

In a well-maintained garden, the gardener doesn't have much work to do at any particular time; the work is at a vigilant
and consistent steady pace. In a well-managed company, the situation is similar; major management handling is
unnecessary, crises are minimized, and the mission of the organization continues to be viable. However, in the current
environment of rapid political, technological and economic change, few organizations can remain calm and steady.
There are too many forces for change that threaten the viability of the organization -- even if it is an established
governmental agency.

Decisions regarding budget priorities are the central decisions of any organization. They are analogous to the decisions
made by a gardener. The gardener may either prune and fertilize a plant, cut it down and discard it, or leave it alone.
The same approach applies to management decision-making. Exactly what kinds of decisions are usually meant? What
things do senior managers decide? We find that the decisions often take the form of triage: they may either increase an
activity's budget, reduce it, or maintain it at a consistent level.

In a previous essay, it was argued that metrics for performance evaluations of nonprofit or governmental agencies fall
into three broad categories:

1. Projected future needs
2. Mission effectiveness
3. Support work efficiency.

Below we will examine how triage can be applied to each of these categories, which will help us to determine the most
relevant metrics needed to guide senior management decision-making.

Why metrics?

Before we get into the specifics, a little needs to be said about the role of metrics (as opposed e.g. to reports, stories,
judgements or guesswork) in the decision-making process. Consider these four questions:

1. What are you doing?

2. How well are you doing it?

3. How do you know how well you are doing it?

4. How can you demonstrate to others how well you are doing it?

The answer to the first question should be the agency's or department's mission statement, PLUS all the other activities
done to support the mission. There are a lot of these, and many may be 'hidden', so they need to be identified.

Answering the other questions requires managers to define metrics, collect data (such as surveys) and analyze the data.
The better the documentation, the more visible and convincing the decision-making process becomes.

Stage 1: Strategic Needs Triage


Strategic needs issues address the question: To what extent is the agency's mission needed, now and in the future? Or
within an agency: To what extent are the departmental missions needed, now and in the future? These issues are
resolved primarily by looking outside the agency to its customer's needs now and related needs anticipated in the
future. Other considerations are the replacement cost and time it would take to reconstruct the capability if it were
abandoned or outsourced and then required inside the government once again. Often demographics play a role in future
projections on both these questions.

"Departmental" missions are here defined as more specific missions (or "sub-missions") that are contained within
smaller departments or other units of an agency.

"Future" is the time horizon of the strategic plan, which should be clearly stated in it. For information technology
capabilities, a 3-year time horizon is 'long-term', for urban infrastructure or Navy ships, a 10- to 20-year time horizon is
'long-term'.

At the strategic stage, the mission capabilities are defined rather broadly, because decisions about future needs cannot
be too precise.

Strategic needs decisions aim to sort out the following three categories of mission capabilities:

1. Mission capabilities that are going to be needed increasingly in the future, that are not duplicated in the private
sector, that have inherently governmental requirements, and that have a high replacement cost and long time to
build. EXPAND these capabilities by aggressive hiring, training, construction and other such efforts.
2. Mission capabilities that are currently viable and strong within the government, and that meet the other
characteristics of #1. Indicators include viability (age of employees, size, skills, etc.), customer satisfaction
(sponsors, users, taxpayers), output quality, and outcome success. SUSTAIN these capabilities at current funding levels.
3. Mission capabilities that are currently weak, and that will probably have a declining need in the future.
REDUCE these capabilities. (Preferably this may be done by employee retraining, attrition and consolidation as
opposed to outright reductions in force (RIFs). Also, care should be taken to document and preserve essential
knowledge and capital assets of these capabilities in case our assumptions about future needs were incorrect.)
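The three-way sort above can be encoded as a simple decision rule. The sketch below is a hypothetical illustration: the 1-5 scoring scales, thresholds, and capability names are invented, not part of the triage method itself.

```python
# Hypothetical encoding of the strategic-needs triage: each mission
# capability is scored for future need and current strength (invented
# 1-5 scales), then sorted into EXPAND / SUSTAIN / REDUCE buckets.

def triage(future_need, current_strength):
    if future_need >= 4 and current_strength <= 3:
        return "EXPAND"    # growing need, capability not yet strong enough
    if future_need >= 3:
        return "SUSTAIN"   # ongoing need, capability viable
    return "REDUCE"        # declining need

capabilities = {
    "cyber defense": (5, 2),
    "fleet maintenance": (4, 4),
    "paper records management": (1, 3),
}

for name, (need, strength) in capabilities.items():
    print(f"{name}: {triage(need, strength)}")
```

The point of making the rule explicit is that the scores, not the rule, become the subject of debate, which is where senior managers' judgment belongs.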

Strategic Needs Metrics

Obviously it is difficult to predict what will be needed in the future. But a decision to remain the same is also a
prediction that future needs will be what they are now. Is this the best option? There is no shortcut to this aspect of
strategic planning; it requires a lot of reading and study and wise judgement on the part of senior managers.

Mission uniqueness plays a role in the strategic triage decision. This is analogous to assessments of competitive
advantage in the private sector. In the current political and economic environment, an agency needs to be aware of any
aspects of its work that overlap work in the private sector. If agencies wish to sustain their previous dominance of
activities that have moved to the private sector, or expand business in such activities, then they will face political
hurdles as well as the economic challenge of competition with profit-oriented companies. A careful determination of
what activities are "inherently governmental", including security considerations, should be undertaken to define agency
policy in this area. Such a planning effort goes beyond the scope of this paper.

Stage 2: Mission-specific Effectiveness Triage

In stage 2, decision-making focuses on those capabilities that will continue to be needed in the future, as determined in
stage 1. At this stage, mission capabilities are broken down into more specific sub-missions so that their effectiveness
can be evaluated individually. The meaning of "effectiveness" is defined differently for each sub-mission, and it takes
experts in each unit to evaluate this effectiveness. These capabilities are sorted into the following three categories:
1. Weak but unique capabilities that are currently struggling, e.g. with a poor prognosis due to demographics, but
are unique within the government and are unlikely or undesirable to be supported by the private sector.
ENHANCE these capabilities by focused training, hiring and promotions.
2. Viable capabilities that are currently strong and sustainable, with good demographics, that are either unique and
inherently governmental or competitive in quality and performance with the private sector. SUPPORT these
capabilities by appropriate incentives and budgets.
3. Weak and not unique capabilities that are currently in decline, and are duplicated or exceeded by the private
sector or are not inherently governmental. REDUCE these capabilities by outsourcing or other methods.

Mission Effectiveness Metrics

Metrics for departmental mission effectiveness will be primarily focused on outcomes: the long-term success of the
mission's work. However, this is often difficult to evaluate because of its long time scale. Shorter-term metrics can
consider viability as a leading indicator of effectiveness: the level of learning and skills in the department,
demographics (ratio of younger to older employees with the same skills), employee satisfaction, and the quality of
internal teaming, communications, and management. Evaluations based on the Capability Maturity Models appear to
focus on these issues of organizational viability and are recommended for this purpose.

Another difficulty with mission effectiveness metrics is that they are defined differently for each departmental mission:
they are unique. Hence they cannot be directly compared or benchmarked against each other, as the more generic
efficiency metrics can. However, each of them can be compared with themselves -- by plotting their trends across time.
This requires an initial baseline measurement to be made, followed by periodic measurements using the same metric. In
this way, improvements in any effectiveness metric, no matter how specialized, can be evaluated.
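The compare-against-itself idea can be sketched numerically: fit a least-squares slope to periodic measurements of one metric. The quarterly scores below are invented for illustration.

```python
# Trend check for a mission-specific effectiveness metric: since such
# metrics cannot be benchmarked across units, compare each one against
# its own baseline by fitting a least-squares slope (data invented).

def slope(ys):
    """Least-squares slope of ys against observation index 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

quarterly_scores = [61, 63, 62, 66, 68, 70]   # same metric, same unit
print(f"trend: {slope(quarterly_scores):+.2f} points per quarter")
```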

Customer satisfaction -- where the customer is the specific sponsor of each departmental program -- is an exception to
the above difficulty. Customer satisfaction can be standardized and scored across departments, as well as across time,
irrespective of the specific mission of the department. So customer (sponsor) satisfaction measurements should be a
requirement for monitoring the health of any agency or department.

Stage 3: Operational Efficiency Triage

The third stage of triage is to determine the general efficiency and productivity of routine office processes and business
practices. The scope of this evaluation varies with the organization; usually each physical campus or plant has
established networks, procedures, standards and cultures for administration that are similar within that physical
area, so this is probably the most appropriate scope. The determination of process efficiency can then be compared with
that of other companies or agencies using similar metrics -- that is why it is important to develop standardized
operational metrics.

Triage decisions for efficiency take the following form:

1. Organizations with "dysfunctional" work processes, e.g. inappropriate or antiquated business processes,
excessive paperwork, lack of standards, etc. IMPROVE business processes, reengineer and then add modern
automated technology, EDI, etc.
2. Organizations with appropriate modern processes and standards. MAINTAIN processes by vigilant
management.
3. Organizations with antiquated business processes, excessive paperwork, and without sufficient trained
leadership to manage significant change. GET OUTSIDE HELP to plan and manage the organization, make
outsourcing decisions, etc. This probably involves private consultants, but also increased dependence on the
parent agency's support and guidance.

Efficiency Metrics

Metrics for generic efficiency include: Office work processing times, electronic/paper ratios, staff/employee ratio,
standardization, total cost of ownership, overhead rate, results of customer satisfaction surveys (of sponsors, internal
users, or taxpayers), output quality, and office productivity. The precise data collection and sampling procedures,
scales, intervals and calculations of these metrics should be standardized at as high a level as possible in the
organization in order that individual departments can be compared or benchmarked easily and meaningfully. This
evaluation will allow managers to identify best practices to expand, areas of poor efficiency to improve, and how the
agency compares with the best-of-breed in the private sector.
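As a sketch of such benchmarking, assuming one standardized metric (an overhead rate, with invented values) collected identically across departments, z-scores against the group make exceptionally high or low performers stand out:

```python
# Benchmarking a standardized efficiency metric across departments:
# convert each department's value to a z-score against the group so
# outliers (best and worst practices) stand out. Data invented.
import statistics

overhead_rate = {"dept A": 0.12, "dept B": 0.31, "dept C": 0.18, "dept D": 0.16}

mean = statistics.mean(overhead_rate.values())
sd = statistics.stdev(overhead_rate.values())

for dept, rate in overhead_rate.items():
    z = (rate - mean) / sd
    flag = " <-- review" if abs(z) > 1.4 else ""
    print(f"{dept}: rate={rate:.2f}, z={z:+.2f}{flag}")
```

This only works because the metric's definition, scale and collection procedure are the same everywhere, which is exactly why standardization is stressed above.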

Conclusions

Triage decision-making is a key responsibility of senior managers in government agencies. These triage decisions focus
on three kinds of questions:

1. To what extent is the agency's mission needed, now and in the future?
2. How effective are we at doing our overall mission, and each of the sub-missions or core capabilities?
3. How efficient are our generic business processes in comparison with the best-of-breed?

Each of these questions results in a triage that determines areas where improvements should be focused, areas that
should be left alone, and areas that should be downsized or outsourced. Once these areas have been sorted out, it is vital
to link them to budget allocations. Improving or enhancing a capability means increasing the funds used for things like
business process reengineering, upgrading information resources, recruiting, hiring, training and pay incentives.

All three kinds of questions need to be continually asked and evaluated. The easiest way to do this is by implementing a
measurement system (such as a balanced scorecard) that can provide more or less continuous monitoring of the need,
effectiveness and efficiency of each capability. If triage decisions are based on these measurements, and if these
decisions are linked to budgets, then the agency has established a process for continuous strategic alignment and
continuous improvement of internal operations.

But the linkage to budgets is vital: if senior decision makers are not willing to allocate budgets according to
measurement-based triage, then what is their function? They have become not managers, but mere caretakers of
processes (see Jac Fitz-enz, Benchmarking Staff Performance).

This article has aimed at working backwards from the kinds of decisions that senior managers make to determine the
kinds of metrics that need to be defined in order to help them make these decisions reliably. This process of design-by-
working-backwards has resulted in a rational method of determining what metrics are really needed -- a question that
often arises in developing a Balanced Scorecard. Most of the "textbook examples" of scorecards apply to commercial
firms that may have little relation to governmental agencies. If these models are taken at face value, it may lead to
developing scorecards and data collection processes that are excessive or not clearly focused on the key needs of
decision-makers.

We find that the metrics really needed by government agency decision-makers fall into three general categories:
1. Strategic needs metrics: ways to assess the future needs related to the agency's general mission and sub-
missions based on an analysis of the external situation. (SWOT analysis and gap analysis are often the
approaches used here).
2. Mission effectiveness metrics: ways to assess the health and viability of those missions that are going to be
needed in the future. Detailed definitions of these metrics are required by each unit's capability experts, as well
as collection of appropriate data on outputs and outcomes.
3. Operational efficiency metrics: ways to assess the quality of support functions in enabling the needed missions
to be accomplished for the minimum cost and time. This requires standardized metrics, customer surveys and
benchmarking to identify best and worst practices.


Business Survey Methods


Much of the balanced scorecard depends on data collected by surveys. Below are excerpts from authorities on survey
design and survey methods, providing some of the basic principles and considerations that should be understood by
anyone who needs to develop a survey for supporting the balanced scorecard, for example for customer satisfaction or
employee satisfaction data.

Data Collection and Response Quality

"Collecting data on business organizations is very different from collecting data on individuals. Examples of the
questions that survey designers must answer include the following:

 What level of the business organization is best able to answer survey questions -- the establishment, the
enterprise, or something in between?
 In terms of job title or position, who is the person within the business organization most likely to know or be
able to find the answers to survey questions?
 Will permission have to be obtained from the owner or chief executive officer prior to completing the
questionnaire?
 What records are available to the firm for use in answering survey questions?
 Can survey questions be structured to conform to the business' record-keeping practices, including its fiscal
year?
 Are there particular times of the year when data are more readily available?
 What is the best way to collect data that may be viewed as confidential business information?

It is time for businesses to get up to date, and use the more accurate modern survey techniques."

Cox, B.G. and Chinnappa, B. N. (1995). "Unique Features of Business Surveys." Ch. 1 in Business Survey Methods.
Wiley. NY. p. 9ff.

"The fundamental principle of sample design is to maximize precision within the fixed budget, or to minimize
cost for a specified level of precision."

Colledge, M.J. (1995). "Frames and Business Registers: An Overview", Ch. 2 in Business Survey Methods, Wiley, NY.
p. 21.
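
To make this precision/cost trade-off concrete, the standard sample-size calculation for estimating a proportion shows how a tighter margin of error drives up the number of respondents required. The sketch below is illustrative only and not drawn from the cited text; the population of 2,000 customers, the 95% confidence level, and the margins are assumed figures.

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Respondents needed to estimate a proportion to within +/- margin,
    at the confidence level implied by z (1.96 ~ 95%), using the
    conservative p = 0.5 and a finite population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

# A customer-satisfaction survey of a hypothetical 2,000-customer base:
print(sample_size(2000))               # 323 respondents for +/- 5%
print(sample_size(2000, margin=0.03))  # 697 respondents for +/- 3%
```

Note how tightening the margin from 5% to 3% more than doubles the required sample: precision is expensive, which is exactly why the budget constraint in the quoted principle bites so quickly.
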
Definitions of Survey Terms

Sampling Survey

"In the broadest sense the purpose of a sample survey is the collection of information to satisfy a definite need."

There may be "a variety of purposes for which information is collected. Most frequently, however, interest has centered
on four characteristics of the universe or population under study. These are:

 population total (e.g. the total number unemployed);
 population mean (the average number of persons engaged by an industrial establishment);
 population proportion (proportion of cultivated area devoted to cotton);
 population ratio (the ratio of expenditure on foods to that on rent). The populations considered are finite in
the sense that the number of objects contained in them (such as persons, farms, firms, stores) is limited."
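
Each of these four characteristics has a standard estimator under simple random sampling. The sketch below uses an invented frame of 500 firms (all values are hypothetical and randomly generated) purely to show the arithmetic:

```python
import random

random.seed(7)
# Hypothetical frame: 500 firms, each a tuple of
# (employees, food expenditure, rent expenditure)
frame = [(random.randint(5, 200), random.uniform(10, 50), random.uniform(5, 40))
         for _ in range(500)]

sample = random.sample(frame, 50)                  # simple random sample, n = 50
N, n = len(frame), len(sample)

mean_emp = sum(f[0] for f in sample) / n           # population mean (estimated)
total_emp = N * mean_emp                           # population total (estimated)
prop_small = sum(f[0] < 50 for f in sample) / n    # population proportion (estimated)
food_to_rent = (sum(f[1] for f in sample) /
                sum(f[2] for f in sample))         # population ratio (estimated)
```

The total is simply the sample mean scaled by the frame size N, which is why an accurate, up-to-date frame matters as much as the sample itself.
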

"Broadly speaking, information on a population may be collected in two ways. Either every unit in the population is
enumerated (called complete enumeration, or census) or enumeration is limited to only a part or a sample selected from
the population.... A sample survey will usually be less costly than a complete census.... Also, it will take less time to
collect and process data from a sample than from a census. But economy is not the only consideration; the most
important point is whether the accuracy of the results would be adequate for the end in view. It is a curious fact that the
results from a carefully planned and well-executed sample survey are expected to be more accurate (nearer to the aim
of the study) than those from a complete census that can be taken. A complete census ordinarily requires a huge and
unwieldy organization and therefore many types of errors creep in which cannot be controlled adequately. In a sample
survey the volume of work is reduced considerably."
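
The accuracy claim in this passage can be quantified: for a simple random sample the sampling error shrinks predictably as the sample grows, whereas the nonsampling errors of a complete census are hard to control. A rough sketch (the population size and 95% confidence level are illustrative assumptions):

```python
import math

def margin_of_error(n, population, p=0.5, z=1.96):
    """Approximate margin of error for a proportion estimated from a simple
    random sample of size n, with a finite population correction."""
    fpc = math.sqrt((population - n) / (population - 1))
    return z * math.sqrt(p * (1 - p) / n) * fpc

# Quadrupling the sample size roughly halves the sampling error:
for n in (100, 400, 1600):
    print(n, round(margin_of_error(n, population=100_000), 3))
# 100 0.098
# 400 0.049
# 1600 0.024
```
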

Frame

"In order to cover the population decided upon, there should be some list, map or other acceptable material (called the
frame) which serves as a guide to the universe to be covered. The list or map must be examined to be sure it is
reasonably free from defects. If it is out of date, consideration should be given to making it up to date. It would be
important to know how the list or the map had been made."

List of 13 survey planning considerations [and one added]:

1. Define objectives of the survey
2. Define population to be covered
3. Define the frame for the data
4. Divide up the population into sampling units
5. Determine the sampling parameters: size, manner of selecting, estimation of population characteristics, margin
of uncertainty allowed, estimated cost
6. Define information to be collected: relevance, completeness
7. Define method of collecting data [and calculating its actual cost]
8. Specify survey time scale, completion dates of steps
9. Construct questionnaire or schedule
10. Train interviewers and establish supervision
11. Design procedure for inspecting raw results and editing
12. Define how to handle nonrespondents
13. Analyze the data
14. [Present results to decision makers]

Raj, D. (1968). Sampling Theory. McGraw-Hill. NY.

For an excellent white paper on survey design, along with many related resources, see the Balanced Scorecard
Institute website (www.balancedscorecard.org).