
Reviewer 1

Management Advisory Services

THE CPA LICENSURE EXAMINATION SYLLABUS


MANAGEMENT ADVISORY SERVICES

Effective May 2016 Examination

This subject covers the candidates’ knowledge of the concepts, techniques and methodology
applicable to management accounting, financial management and management consultancy.
Candidates should know and understand the role of information in accounting, finance and
economics in management consultancy and in management processes of planning, controlling
and decision-making.

The candidates must have the working knowledge needed to carry out various management
accounting and consultancy engagements.

The candidates must also be able to communicate effectively matters pertaining to the
management accounting and consultancy work that will be handled.

The knowledge of the candidates in the competencies cited above is that of an entry-level
accountant who can address the fundamental requirements of the various parties with whom
the candidates will interact professionally in the future.

The examination shall have seventy (70) multiple choice questions.

The syllabus for the subject is presented subsequently.



I. MANAGEMENT ACCOUNTING

One simple definition of management accounting is the provision of financial and non-
financial decision-making information to managers.
According to the Institute of Management Accountants (IMA): "Management accounting is
a profession that involves partnering in management decision making, devising planning and
performance management systems, and providing expertise in financial reporting and control
to assist management in the formulation and implementation of an organization's strategy".

Management accounting is the process of analysis, interpretation and presentation of
accounting information collected with the help of financial accounting and cost accounting, in
order to assist management in decision making, the creation of policy and the day-to-day
operation of an organization. Thus, it is clear from the above that management accounting is
based on financial accounting and cost accounting.

A. Objectives, Role And Scope Of Management Accounting (describe and
differentiate it from financial accounting)

OBJECTIVES OF MANAGEMENT ACCOUNTING

The basic objective of management accounting is to assist the management in performing
its functions effectively. The functions of the management are planning, organizing,
directing and controlling. More specific objectives include (1) performance measurement,
(2) risk assessment, (3) resource allocation, and (4) financial statement presentation.

Management accounting helps in the performance of each of these functions in the
following ways:

 Provides data: Management accounting serves as an important source of data
for management planning. The accounts and documents are a store-house of a
vast quantity of data about the past progress of the enterprise, facilitating
forecasts for the future.

 Modifies data: The accounting data required for managerial decisions is properly
collected and classified. For example, purchase figures for different months may
be classified to show total purchases made during each period product-wise,
supplier-wise and territory-wise.

 Analyses and interprets data: The accounting data is probed meaningfully for
effective planning and decision-making. For this purpose the data is presented in
a comparative form. Ratios are calculated and likely trends are projected.

 Serves as a means of communication: Management accounting provides a
means of communicating management plans upward, downward and outward
through the organization. Initially, it means identifying the feasibility and
consistency of the various segments of the plan. At later stages it keeps all
parties informed about the plans that have been agreed upon and their roles in
these plans.

 Facilitates control: Management accounting helps in translating given objectives
and strategy into specified goals for attainment by a specified time and secures
effective accomplishment of these goals in an efficient manner. All this is made
possible through budgetary control and standard costing, which are integral parts
of management accounting.

 Uses qualitative information: Management accounting does not restrict itself to
financial data for helping the management in decision making but also uses
information which may not be capable of being measured in monetary terms.
Such information may be collected from special surveys, statistical compilations,
engineering records, etc.
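The "analyses and interprets data" role above can be sketched in a few lines of Python. All figures are hypothetical, invented purely for illustration; the net profit ratio and the naive growth projection are standard textbook computations, not data from any real firm.

```python
# Hypothetical two-year comparative figures (illustrative only).
years = [2015, 2016]
sales = [500_000, 560_000]
net_income = [40_000, 50_400]

# Present the data in comparative form: net profit ratio (%) for each year.
net_profit_ratio = [round(ni / s * 100, 1) for ni, s in zip(net_income, sales)]

# Project the likely trend: apply the latest year-on-year growth rate once more.
growth = sales[1] / sales[0]                 # 1.12, i.e. 12% growth
projected_sales = round(sales[1] * growth)   # naive single-factor projection

print(net_profit_ratio)   # [8.0, 9.0]
print(projected_sales)    # 627200
```

A real analysis would use many more periods and ratios; the point is only that comparative figures, ratios and projected trends are simple arithmetic once the accounting data has been collected and classified.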

Objective # 1. Assistance in Planning and Formulation of Future Policies:

Management accounting assists management in planning the activities of the business.
Planning is deciding in advance what is to be done, when it is to be done, how it is to be
done and by whom it is to be done. It involves forecasting on the basis of available
information, setting goals, framing policies, determining the alternative courses of actions
and deciding on the programme of activities to be undertaken.

Thus, planning is intelligent forecasting. This forecasting is based on facts. Facts
are provided by past accounts, on which forecasts of future transactions are made.
Management accounting helps management in its function of planning through the
process of budgetary control.

Objective # 2. Helps in the Interpretation of Financial Information:

Accounting is a technical subject and may not be easily understood until the user has a
good knowledge of the subject. Management may not be able to use accounting
information in its raw form due to a lack of knowledge of accounting techniques.
The management accountant presents the information in an intelligible and non-technical
manner. This helps management in interpreting the financial data, evaluating
alternative courses of action available, and guiding management toward decisions
that achieve the desired financial results.

Objective # 3. Helps in Controlling Performance:

Management accounting is a useful device of managerial control. The whole organisation
is divided into responsibility centres and each centre is put under the charge of one
responsible person. That person is associated with the planning and framing of the
budgets and is required to execute the plans; deviations from standards are analysed
in order to pinpoint responsibility.

Thus, the management accountant helps in controlling the performance of the different
responsibility centres and in taking suitable actions to correct adverse deviations,
by revising the budgets if need be.

Management accounting assists management in locating weak spots and in taking
corrective actions where performance is not in conformity with the budgeted
performance. Thus, management accounting helps management in discharging its control
function successfully through budgetary control and standard costing.
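The budgetary control and standard costing mentioned above boil down to computing variances between standards and actuals. A minimal sketch with hypothetical figures for one responsibility centre; the material price and usage variance formulas are the usual cost accounting ones:

```python
# Hypothetical standards and actuals for one responsibility centre.
standard_price, standard_qty = 5.00, 2_000   # per kg; kgs allowed for actual output
actual_price, actual_qty = 5.50, 1_900       # per kg; kgs actually used

# Material price variance: (standard price - actual price) x actual quantity.
price_variance = (standard_price - actual_price) * actual_qty      # -950.0 (adverse)

# Material usage variance: (standard qty - actual qty) x standard price.
usage_variance = (standard_qty - actual_qty) * standard_price      # 500.0 (favourable)

total_variance = price_variance + usage_variance
print(price_variance, usage_variance, total_variance)  # -950.0 500.0 -450.0
```

A negative variance here signals an adverse deviation to be investigated; the adverse price variance pinpoints responsibility to purchasing, while the favourable usage variance credits the production centre.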

Objective # 4. Helps in Organizing:

The management accountant recommends the use of budgeting, responsibility
accounting, cost control techniques and internal financial control. All of these require
an intensive study of the organisation structure, which in turn helps to rationalise the
organisation structure.

Objective # 5. Helps in the Solution of Strategic Business Problems:

Whenever there is a question of starting a new business, or expanding or diversifying the
existing business, a strategic business problem has to be faced and solved.
Similarly, when in a particular situation there are different alternatives, such as whether
labour should be replaced by machinery or not, whether the selling price should be
reduced or not, or whether to export the item or not, the management accountant helps
in solving such problems and in decision-making.

He provides accounting data to management with his recommendation as to which
alternative will be the best. For such decisions, the management accountant may take the
help of marginal costing, cost-volume-profit analysis, standard costing, capital budgeting,
etc.

Management accounting provides feedback to the management on matters such as what
business to engage in or diversify into, and how to run that business efficiently. This is
among the most important contributions the management accountant has made.
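A decision such as "should labour be replaced by machinery?" can be sketched with differential (relevant) costing: only the costs that differ between the alternatives matter. All figures below are hypothetical, invented for illustration.

```python
# Hypothetical annual costs for a "replace labour with machinery?" decision.
# Only costs that differ between the two alternatives are relevant.
keep_labour = {"direct labour": 120_000, "maintenance": 2_000}
buy_machine = {"depreciation": 30_000, "power": 8_000, "operator": 40_000,
               "maintenance": 6_000}

cost_if_keep = sum(keep_labour.values())     # 122000
cost_if_buy = sum(buy_machine.values())      # 84000
advantage = cost_if_keep - cost_if_buy       # 38000 in favour of the machine

recommendation = "mechanise" if advantage > 0 else "retain labour"
print(advantage, recommendation)  # 38000 mechanise
```

In practice qualitative factors (workforce morale, flexibility) would temper such a recommendation; the differential-cost comparison is only the quantitative starting point.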

Objective # 6. Helps in Coordinating Operations:

Management accounting helps the management in coordinating the activities of the
concern, first by getting functional budgets prepared and then by integrating all the
functional budgets into one, known as the master budget. Thus, management accounting
is a useful tool in coordinating the various operations of the business.
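The integration of functional budgets into a master budget can be sketched as follows; the budget categories and amounts are invented purely for illustration.

```python
# Hypothetical functional budgets, integrated into a summary master budget.
functional_budgets = {
    "sales":          {"revenue": 800_000},
    "production":     {"materials": 200_000, "labour": 150_000, "overhead": 100_000},
    "administration": {"salaries": 90_000, "office": 30_000},
    "selling":        {"advertising": 40_000, "distribution": 25_000},
}

revenue = functional_budgets["sales"]["revenue"]
# Integrate every cost budget into one total, as the master budget does.
total_costs = sum(amount
                  for name, budget in functional_budgets.items() if name != "sales"
                  for amount in budget.values())
budgeted_profit = revenue - total_costs

print(total_costs, budgeted_profit)  # 635000 165000
```

The coordination benefit comes from the integration step: any change to a functional budget (say, a higher materials estimate) flows through to the budgeted profit, forcing the departments to reconcile their plans.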

Objective # 7. Helps in Motivating Employees:

The management accountant, by setting goals, planning the best and most economical
course of action, and then measuring performance, tries to increase the effectiveness
of the organisation and thereby motivate its members.

Objective # 8. Communicating Up-to-date Information:

Management accounting assists management in communicating the financial facts about
the enterprise to the persons who are interested in these facts so that they may be guided
to a line of action to be pursued. Management needs information for taking decisions and
for evaluating performance of the business.

The required information can be made available to the management by means of reports
which are an integral part of the management accounting. Reports are means of
communication of facts which should be brought to the notice of various levels of
management so that they may be guided for taking suitable action for the purposes of
control.

Objective # 9. Helps in Evaluating the Efficiency and Effectiveness of Policies:

Management accounting also lays emphasis on management audit, which means
evaluating the efficiency and effectiveness of management policies. Management policies
are reviewed from time to time to make an improvement in them so that maximum
efficiency may be achieved.

ROLE OF MANAGEMENT ACCOUNTING

The purpose of management accounting in the organization is to support competitive
decision making by collecting, processing, and communicating information that helps
management plan, control, and evaluate business processes and company strategy.

Management accounting in simple terms is accounting for management, which deals with
internal users of accounting information. Considered in a broader sense, it is of great
use in different managerial functions. It can help a business attain the expected results
on the basis of timely information and reports relating to internal operations. Since
management accounting is primarily concerned with management needs, it plays an
important role in the management process, assisting managers to lead the business
efficiently while executing the basic functions of management: planning, organizing,
coordinating, motivating, controlling and communicating.

Planning: The planning process may be short term or long term. Management is extremely
concerned with the planning process, as effective planning leads a business to the desired
results. By means of planning, a business's management can identify where it stands and
what it takes to grow and develop in a predetermined way. Management accounting makes
a valuable contribution to the planning process by providing significant and relevant data,
such as budgetary controls, cash and profit and loss forecasts, pricing, and the evaluation
of proposals for capital budgeting. Management accounting also aids greatly in
consolidating the various plans into one plan to facilitate effective decision making.

Organizing: Management accounting plays a vital role in this function. It involves
identifying the elements of the organizational structure so that the functions of
management accounting may be executed in the desired way. The process involves
splitting the entire organization into various profit and cost centers in order to determine
their results, which in turn facilitates control measures for the various centers.

Coordinating: Management accounting seeks to achieve effective coordination between
various departments. Focusing on the need for departmental budgets and reports, it
provides necessary information that facilitates management in serving various purposes.

Motivating: The process of motivation reinforces an action. Management accounting
seeks to motivate the employees of various departments toward the accomplishment of
desired objectives, while allowing each and every employee to contribute to the common
goals of the business. It involves preparing periodic profit and loss accounts, budgets and
other necessary reports. It may be said that the budgets and the targets to be achieved
are a good means of motivating employees to perform better.

Controlling: Just as budgets are useful in facilitating the planning process of a business,
budgetary control is a good means of control as well. Other techniques, such as standard
costing and departmental operating statements, are of great help in implementing control
measures. They are also helpful in taking remedial measures in case of deviations in the
performance of the business entity.

Communicating: It is through this process that results are communicated to owners,
superiors and subordinates. It includes transmitting data highlighting necessary
information, such as the progress of the business and its financial position, to the required
users. This enables managers to highlight the issues that matter and need proper
analysis, so that the intended results may be attained.

The interesting thing about management accounting is that it is rare to find an individual
within a company with the title of “management accountant.” Often many individuals
function as accountants within the organization, but these individuals typically operate as
financial accountants, cost accountants, tax accountants, or internal auditors. However,
the ability to develop and use good management accounting (which covers a lot more
ground than the product costing done by cost accountants) is actually an important ability
for many individuals, including finance professionals, operational and marketing
managers, top-level executives, and information technologists.

Generally, in a very large company, each division has a top accountant called the
controller, and much of the management accounting done in these divisions comes
under the leadership of the controller. The controller usually reports to the vice president
of finance for the division who, in turn, reports to the division’s president and/or the
overall chief financial officer (CFO). All of these individuals are responsible for the
flow of good accounting information that supports the planning, control, and evaluation
work that takes place within the organization.

As should be clear by now, the process of management accounting is the process of
creating and using cost, quality, and time-based information to make effective decisions
within the organization. Many people in the organization play a role in this process. The
internal audit department has the responsibility of ensuring that controls are followed and
operations are efficient. Financial accounting, while providing information to outsiders
(such as creditors, investors, and government agencies), must also provide relevant
financial reports to decision makers within the organization. Systems professionals have
the responsibility to process information so that it is available to management in formats
useful for decision making. Tax department experts make sure that the organization
complies with the tax laws and pays no more than its legally obligated tax liability, but
these people also participate in good planning, control, and evaluation of processes and
decisions that will affect future tax expense exposure. Finally, cost accounting obviously
plays a key role in tracking and reporting relevant product and service costs. Overall, the
controller works to bring together all this information as an integral part of the planning,
controlling, evaluating, and decision-making activities that take place throughout the
organization.

FYI

Individuals interested in developing and demonstrating a professional competency in
management accounting can obtain a professional certificate that is much like the CPA
certification. The Certificate in Management Accounting (CMA) is sponsored by the
Institute of Management Accountants (IMA), a national organization of professional
management accountants. Five areas of study are emphasized on the CMA exam: (1)
economics and finance, (2) organizational behavior, (3) public reporting, (4) periodic
reporting for internal and external purposes, and (5) decision analysis, including modeling
and information systems.

Technology and the Management Accountant

As you have read this introductory chapter to management accounting, you have likely noticed
that the goals of management accounting information provided to the management and
executive teams inside the organization are quite different from those of the financial
accounting information provided to groups outside the organization, such as investors,
creditors, and regulators. You may even ask how information and performance measures regarding
quality and time can be provided by a typical general ledger system that is limited to
debits and credits of dollar amounts. This is a good question! For most of the twentieth
century, management accountants were able to produce management accounting
information successfully using the general ledger system of financial accounting. This
marriage of management accounting and financial accounting information systems
worked as long as the goal of management accounting was strictly to track cost
information. Now, however, the emergence of JIT, coupled with increased competition in a
worldwide market, has forced most organizations to compete on issues of quality and
timeliness, as well as cost. The problem is that it is very difficult to use a debit/credit
system to track organizational performance regarding quality and time. Thankfully,
computerized information systems, specifically database systems, have progressed to a
point where it is economically feasible for organizations to track just about any kind of
information. Now the real challenge for current and future management accountants is to
organize the immense amount of data that can be provided to support decision making
without creating information overload in managers and executives. In this process,
management accountants should understand how to use the most current technology.
Typically, developing knowledge and skills in computer technologies will require additional
courses of study for the future business professional. The goal of the remainder of this
book is to provide you with a framework for developing cost, quality, and time-based
information that supports the management process. This framework must then be used
with top-notch technology in order to provide information that truly adds competitive value
to organizations!

Looking Forward in the Management Accounting Profession

Business professionals involved in management accounting have come a long way since the
early days of management accounting in the 1800s. Today, management accounting
professionals play a key role in many organizations. The nature of their work continues to
expand as new industries develop and computer technology grows in importance in the
gathering and use of information by decision makers. For example, you’ve spent the bulk of this
chapter being introduced to management accounting in the context of DuPont, a manufacturing
business. However, businesses focused on service rather than manufacturing (e.g., law firms,
banks, hospitals, transportation, hotels) are far and away the dominant industries in the U.S.
economy. Further, merchandising companies (retailers and wholesalers) combine to be as
strong an economic force as the manufacturing industry. And as you’re certainly aware, the
explosion of the Internet has established a new aspect in our economy—e-commerce. At this
point, e-commerce is generally a growing delivery platform for many service and merchandising
companies, rather than a separate industry. You need to be aware of these trends as you work
through this textbook. We will spend a lot of time applying concepts and tools of management
accounting to nonmanufacturing settings. As we close this chapter, we want to leave you with
two lingering, but important, questions. First, can a service or merchandising company
effectively perform C-V-P analysis, product costing, and segment analysis? Or are these
techniques useful only for manufacturing companies? Second, does the arrival of e-commerce
in service, merchandising, or manufacturing organizations change your response to the first
question? That is, as companies shift more and more of their operations (such as sales of
software, financial services, and groceries) into the “virtual environment” of the Internet, does e-
commerce affect the use of any management accounting techniques that you are studying in
this textbook? Think about these questions. We plan to spend a lot of time in the next several
chapters exploring some possible answers with you.
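On the first of those questions, C-V-P analysis transfers directly to a service setting: substitute billable hours for units. A minimal break-even sketch with assumed figures for a hypothetical consulting firm (price, variable cost and fixed costs are all invented for illustration):

```python
# Hypothetical CVP figures for a service firm selling consulting hours.
price_per_hour = 150.0      # revenue per billable hour
variable_cost = 60.0        # cost per billable hour (staff wages, supplies)
fixed_costs = 180_000.0     # annual office, salaried staff, systems

contribution_per_hour = price_per_hour - variable_cost         # 90.0
break_even_hours = fixed_costs / contribution_per_hour         # 2000.0

# Billable hours needed to earn a 45,000 target profit:
target_hours = (fixed_costs + 45_000) / contribution_per_hour  # 2500.0
print(break_even_hours, target_hours)  # 2000.0 2500.0
```

Nothing in the arithmetic depends on a physical product, which suggests why the technique carries over to service, merchandising, and e-commerce settings alike.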

FYI

By 2004, e-commerce activities across the world will be enormous, amounting to $6.8
trillion, or 8.6% of the global sales of all goods and services. Interestingly, while the
United States accounted for 75% of worldwide e-commerce sales in 2000, that share is
expected to drop to a little less than 50% by 2004.

Source: “Global eCommerce Approaches Hypergrowth,” Forrester Research, Inc., April
18, 2000.

TO SUMMARIZE

Management accounting plays a key role in organizations today. The top accountant in
most organizations is the controller. All accounting functions report to this individual,
including the cost accountants, the financial and tax accountants, the internal auditors,
and systems support personnel. Though much management accounting originates within
these positions, all decision makers in the organization must understand how to create
and use good management accounting information. Management accounting is also
being significantly affected by dramatic improvements in computer technology. Today’s
technology allows management to track performance information that goes beyond the
cost-based information of historic general ledger systems. Good management accounting
involves a responsibility to manage a wide variety of critical information. Hence, those
involved need to anticipate and be prepared to deal with various ethical dilemmas. And
finally, though we’ve used DuPont as the example company in this chapter, you need to
understand that management accounting is not just for manufacturing companies.
Service and merchandising industries represent a much larger portion of the U.S.
economy than does the manufacturing industry. Further, the advent of the Internet and e-
commerce is bringing dramatic changes to many companies and industries. This textbook
will explore management accounting in all types of business. As you work through the
remainder of this textbook, you should consider how each new concept you learn could be
applied in multiple types of business settings.

SCOPE OF MANAGEMENT ACCOUNTING

The scope or field of management accounting is very wide and broad based, and it
includes a variety of aspects of business operations. The main aim of management
accounting is to help management in its functions of planning, directing and controlling;
many areas of specialization fall within its umbrella. The scope of management
accounting can be studied under the following heads:

 Financial Accounting
 Cost Accounting
 Budgeting and Forecasting
 Inventory Control
 Statistical Method
 Interpretation of Data
 Reporting to Management
 Internal Audit and Tax Accounting
 Methods and Procedures

Financial Accounting

Financial accounting forms the basis for analysis and interpretation for furnishing
meaningful data to the management. The control aspect is based on financial data and
performance evaluation, on recorded facts and figures. So, management accounting is
closely related to financial accounting in many respects.

Cost Accounting

Cost accounting is the process and technique of ascertaining cost.

Planning, decision making and control are the basic managerial functions. The cost
accounting system provides the necessary tools for carrying out such functions efficiently.
These tools include standard costing, inventory management, variable costing, etc.

Budgeting and Forecasting

Budgeting means expressing the plans, policies and goals of the firm for a definite
future period. Forecasting, on the other hand, is a prediction of what will happen as a
result of a given set of circumstances. A forecast is a judgement, whereas a budget is an
organizational target. Both are useful to management accounting in planning.
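The distinction can be made concrete in a few lines: the forecast extrapolates from past data, while the budget is the plan management adopts, here the forecast plus a stretch target. The sales figures and the 5% stretch are assumptions invented for illustration.

```python
# Hypothetical monthly sales; the forecast is a judgement, the budget a plan.
past_sales = [100, 104, 108, 112]          # units sold in the last four months

# Forecast: extrapolate the average month-on-month increase.
avg_increase = (past_sales[-1] - past_sales[0]) / (len(past_sales) - 1)  # 4.0
forecast_next = past_sales[-1] + avg_increase                            # 116.0

# Budget: management adopts the forecast plus a planned 5% stretch target.
budget_next = round(forecast_next * 1.05, 1)                             # 121.8
print(forecast_next, budget_next)  # 116.0 121.8
```

The forecast would change if circumstances changed; the budget, once adopted, stands as the yardstick against which actual performance is controlled.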

Inventory Control

Inventory must be controlled from the time it is acquired until its final disposal, as it
involves large sums. For controlling inventory, management should determine the
different levels of stock. Inventory control techniques are helpful in taking
managerial decisions.
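The "different levels of stock" referred to above are usually the reorder, minimum and maximum levels, computed with the standard cost accounting formulas. The usage, lead-time and order-quantity figures below are hypothetical.

```python
# Hypothetical consumption and lead-time data for one stock item.
max_usage, min_usage = 600, 200   # units consumed per week
max_lead, min_lead = 6, 4         # delivery lead time in weeks
reorder_qty = 3_000               # units ordered each time (given)

# Reorder level = maximum usage x maximum lead time.
reorder_level = max_usage * max_lead                                  # 3600

# Minimum level = reorder level - (average usage x average lead time).
minimum_level = reorder_level - (max_usage + min_usage) / 2 * (max_lead + min_lead) / 2  # 1600.0

# Maximum level = reorder level + reorder quantity - (min usage x min lead time).
maximum_level = reorder_level + reorder_qty - min_usage * min_lead    # 5800

print(reorder_level, minimum_level, maximum_level)  # 3600 1600.0 5800
```

Stock falling below the minimum or above the maximum level is the signal for managerial action, which is how these levels support control.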

Statistical Method

Statistical tools not only make the information more impressive, comprehensive and
intelligible but also are highly useful for planning and forecasting.

Interpretation of Data

Analysis and interpretation of financial statements are an important part of management
accounting. After the financial statements are analyzed, the interpretation is made and the
reports drawn from this analysis are presented to the management. Interpreting the
accounting data for the authorities in the management is a principal task of management
accounting.

Reporting To Management

The interpreted information must be communicated to those who are interested in it.
The report may cover Profit and Loss Account, Cash Flow and Funds Flow statements
etc.

Internal Audit and Tax Accounting

Management accounting studies all tax matters to assist the management in
investment decisions and in tax planning as a means of obtaining available tax relief.

An internal audit system is necessary to judge the performance of every department.
Through internal audit, management is able to know deviations in performance. It also
helps management in fixing the responsibility of different individuals.

Methods and Procedures

This includes maintenance of proper data processing and other office management
services. It may have to deal with filing, copying, duplicating and communicating, and
with the management information system, and may also have to report on the utility of
different office machines.

The main concern of management accounting is to provide necessary quantitative and
qualitative information to the management for planning and control. For this purpose it
draws out information from accounting as well as non-accounting sources.

Hence, its scope is quite vast and it includes within its fold almost all aspects of business
operations. However, the following areas may rightly be pointed out as lying within the
scope of management accounting.

Financial Accounting:

The major function of management accounting is the rearrangement or modification of
data. Financial accounting provides the very basis for such a function. Hence,
management accounting cannot obtain full control and coordination of operations without
a well-designed financial accounting system.

Cost Accounting:

Planning, decision-making and control are the basic managerial functions. The cost
accounting system provides necessary tools such as standard costing, budgetary control,
inventory control, marginal costing, and differential costing etc., for carrying out such
functions efficiently. Hence, cost accounting is considered a necessary adjunct of
management accounting.

Revaluation Accounting:

Revaluation or replacement value accounting is mainly concerned with ensuring that
capital is maintained in real terms and profit is calculated on this basis.

Statistical Methods:

Statistical tools such as graphs, charts, diagrams and index numbers make the
information more impressive and comprehensive. Other tools such as time series,
regression analysis and sampling techniques are highly useful for planning and
forecasting.
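As an example of regression used for forecasting, a least-squares trend line y = a + bx can be fitted in a few lines; the annual sales figures are invented purely for illustration.

```python
# Least-squares trend line y = a + b*x fitted to hypothetical annual sales.
xs = [1, 2, 3, 4, 5]        # year index
ys = [50, 55, 63, 68, 74]   # sales in thousands (hypothetical)

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

# Slope b and intercept a from the normal equations.
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
den = sum((x - mean_x) ** 2 for x in xs)
b = num / den
a = mean_y - b * mean_x

forecast_year6 = a + b * 6   # extrapolate the trend one year ahead
print(round(b, 2), round(forecast_year6, 1))  # 6.1 80.3
```

The same slope and intercept could be obtained from any statistics package; the point is that a planning forecast is a small, mechanical computation once the historical data is assembled.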

Operations Research:

Modern managements are faced with highly complicated business problems in their
decision-making processes. Operations research (OR) techniques like linear programming,
queuing theory, decision theory, etc., enable management to find scientific solutions to
business problems.
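A two-variable product-mix problem illustrates linear programming. The contribution margins and capacity constraints below are hypothetical; for a problem this small, brute-force enumeration of integer production plans stands in for the simplex method.

```python
# Product-mix LP (hypothetical data):
# maximise 8x + 6y  subject to  2x + y <= 100 (machine hrs), x + y <= 80 (labour hrs)
from itertools import product


def feasible(x, y):
    """True if the plan (x units of A, y units of B) fits both capacity limits."""
    return x >= 0 and y >= 0 and 2 * x + y <= 100 and x + y <= 80


# Enumerate all integer plans on a small grid; adequate for a two-variable sketch.
best = max((p for p in product(range(101), repeat=2) if feasible(*p)),
           key=lambda p: 8 * p[0] + 6 * p[1])
print(best, 8 * best[0] + 6 * best[1])  # (20, 60) 520
```

The optimum sits where the two constraints intersect (2x + y = 100 and x + y = 80 give x = 20, y = 60), which is exactly what the corner-point theorem of linear programming predicts.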

Taxation:

This includes computation of income tax as per tax laws and regulations, filing of returns
and making tax payments. In recent times, it also includes tax planning.

Organization and Methods [O&M]:

O&M deals with organizing work to reduce costs and to improve the efficiency of
accounting, as well as of office systems, procedures and operations.

Office Services:
This includes maintenance of proper data processing and other office management
services, communication, and the best use of the latest mechanical devices.

Law:

Most management decisions have to be taken in a legal environment where the
requirements of a number of statutory provisions or regulations must be fulfilled.
Some of the Acts which have an influence on management decisions are as follows:

The Companies Act, MRTP Act, FEMA, SEBI Regulations, etc.

Internal Audit:

This includes the development of a suitable system of internal audit for internal control.

Internal Reporting:

This includes the preparation of quarterly, half-yearly and other interim reports and
income statements, cash flow and funds flow statements, scrap reports, etc.

The scope of management accounting is very wide and broad based. It includes all
information which is provided to the management for financial analysis and interpretation
of the business operations. The following fields of activity are included in the scope of this
subject:

Financial Accounting: Though financial accounting provides historical information, it is
very useful for planning and financial forecasting, and it is an essential prerequisite to any
discussion of management accounting. Financial statements contain much information
that is used by management for decision making. Management accounting supplies only
tools and techniques; it gets the data for interpretation and analysis mainly from financial
accounting. Thus, without an efficient financial accounting system, management
accounting cannot be operative.

Cost Accounting: Cost accounting provides various techniques for determining the
cost of manufacturing products or of providing services. It uses financial data to find
the cost of various jobs, products, or processes. Business executives depend heavily
on accounting information in general, and on cost information in particular, because
any activity of an organization can be described by its cost. They use various cost
data in managing the organization effectively. Cost accounting is considered the
backbone of management accounting because it provides analytical tools such as
budgetary control, standard costing, marginal costing, inventory costing, and
operating costing, which management uses to discharge its responsibilities effectively.
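The standard-costing tool mentioned above turns on a pair of simple formulas. The sketch below is illustrative only (the function names and peso figures are hypothetical, not from this reviewer): the direct-material price variance is (actual price − standard price) × actual quantity, and the quantity variance is (actual quantity − standard quantity) × standard price.

```python
# Hypothetical sketch of the two classic direct-material variances
# used in standard costing. Positive results are unfavourable.

def material_price_variance(actual_qty, actual_price, standard_price):
    """(AP - SP) x AQ: cost of paying more (or less) than standard."""
    return (actual_price - standard_price) * actual_qty

def material_quantity_variance(actual_qty, standard_qty, standard_price):
    """(AQ - SQ) x SP: cost of using more (or less) than standard."""
    return (actual_qty - standard_qty) * standard_price

# 1,000 kg bought at P5.50 against a P5.00 standard;
# 1,000 kg used where the standard allows only 950 kg.
mpv = material_price_variance(1_000, 5.50, 5.00)    # 500.0 unfavourable
mqv = material_quantity_variance(1_000, 950, 5.00)  # 250.0 unfavourable
print(mpv, mqv)
```

Separating the two variances lets management assign responsibility: the price variance to purchasing, the quantity variance to production.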

Financial Management: Financial management is concerned with planning and
controlling the financial resources of the firm. It deals with raising funds and with their
effective utilization; its main aim is to use funds in such a way that the earnings of the
firm are maximized. Today, finance has become the lifeblood of any business
concern. Although financial management has emerged as a separate subject,
management accounting includes and extends to its operation.

Financial Statement Analysis: The various parties concerned with the financial
statements may need information that can be obtained through financial statement
analysis and by developing trends and ratios. A person can gain meaningful insights
into, and draw conclusions about, a firm with the help of analysis and interpretation of
the information contained in financial statements. Different techniques have been
developed for the proper interpretation and analysis of financial statements.
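As a concrete illustration of the ratio techniques this paragraph refers to, the sketch below computes three commonly used ratios. The function names and the sample figures are hypothetical, chosen only to show the arithmetic.

```python
# Illustrative ratio analysis with made-up balance-sheet figures.

def current_ratio(current_assets, current_liabilities):
    """Liquidity: ability to settle short-term obligations."""
    return current_assets / current_liabilities

def debt_to_equity(total_liabilities, total_equity):
    """Leverage: reliance on creditor vs. owner financing."""
    return total_liabilities / total_equity

def return_on_equity(net_income, average_equity):
    """Profitability: return earned on owners' investment."""
    return net_income / average_equity

print(current_ratio(300_000, 150_000))    # 2.0
print(debt_to_equity(400_000, 600_000))   # about 0.67
print(return_on_equity(90_000, 600_000))  # 0.15
```

Comparing such ratios across periods, or against similar firms, is exactly the trend analysis the text describes.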

Interpretation of Data: The work of interpreting financial data is done by the
management accountant, who interprets the various financial statements for
management. These statements may be studied in comparison with statements of
earlier periods, or in comparison with the statements of similar concerns. The
significance of these reports is explained to management in simple language. If the
statements are not properly interpreted, wrong conclusions may be drawn; so
interpretation is as important as the compilation of the financial statements.

Management Reporting: Clear, informative, timely reports are essential management
tools in reaching decisions that make the best use of the firm's resources. Thus, one
of the basic responsibilities of management accounting is to keep management well
informed about the operations of the business. Reports are presented in the form of
graphs, diagrams, index numbers, or other statistical techniques so as to make them
easily understandable. The management accountant also sends interim reports,
which may be monthly, quarterly, or half-yearly, covering profits, orders in hand, etc.
These reports provide a constant review of the workings of the business.

Quantitative Techniques: Modern managers believe that the financial and economic
data available for managerial decisions can be more useful when analyzed with more
sophisticated techniques. Techniques such as time series analysis, regression
analysis, and sampling are commonly used for this purpose. Further, managers also
use techniques such as linear programming, game theory, and queuing theory in
their decision-making process.
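One of the techniques named above, regression analysis, can be illustrated with a least-squares trend line fitted to sales data. The quarterly figures below are assumed for the sake of the example, not taken from the reviewer.

```python
# Least-squares trend line y = a + b*x fitted to hypothetical
# quarterly sales, then used to forecast the next quarter.

def fit_trend(ys):
    """Fit y = a + b*x for x = 0, 1, 2, ... by ordinary least squares."""
    n = len(ys)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

sales = [100, 110, 118, 131]   # four quarters of sales (hypothetical)
a, b = fit_trend(sales)
forecast = a + b * len(sales)  # projection for the fifth quarter
print(round(forecast, 1))      # 140.0
```

The same fitted slope is what a time-series "trend component" captures; more elaborate models add seasonal and cyclical components on top of it.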

Inflation Accounting: Inflation accounting attempts to identify characteristics of
conventional accounting that tend to distort the reporting of financial results during
periods of rapidly changing prices. It devises and implements appropriate methods to
analyze and interpret the impact of inflation on financial information.
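The core mechanic of one inflation-accounting method, general-price-level restatement, is to scale a historical cost by the ratio of the current price index to the index at acquisition. The sketch below is a simplified illustration with assumed figures.

```python
# General-price-level restatement: historical cost scaled by the
# movement in a general price index. All figures are hypothetical.

def restate(historical_cost, index_at_acquisition, current_index):
    """Restate a historical cost into current purchasing-power units."""
    return historical_cost * current_index / index_at_acquisition

# Equipment bought for 500,000 when the index stood at 100;
# the index now stands at 125.
print(restate(500_000, 100, 125))  # 625000.0
```

The restated amount expresses the original outlay in units of today's purchasing power; it is not a market or replacement value.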

DISTINCTIONS AMONG MANAGEMENT ACCOUNTING, COST ACCOUNTING, AND
FINANCIAL ACCOUNTING

The accounting system is part of the organization’s management information system
(MIS).

The cost accounting system, which accumulates data about the costs of producing goods
and services, is part of the organization’s overall accounting system. It accumulates cost
information for both management accounting and financial accounting.

MANAGEMENT ACCOUNTING VERSUS FINANCIAL ACCOUNTING

Users of reports:
 Management accounting – internal users: officers and managers.
 Financial accounting – external users: stockholders, creditors, and government
agencies.

Purpose:
 Management accounting – to provide internal users with information that may be
used by managers in carrying out the functions of planning, controlling,
decision-making, and performance evaluation.
 Financial accounting – to provide external users with information about the
organization’s financial position and results of operations.

Types of reports:
 Management accounting – different types of reports, such as budgets, financial
projections, cost analyses, etc., depending on the specific needs of management.
 Financial accounting – primarily financial statements and the accompanying notes
to such statements.

Basis of reports:
 Management accounting – reports are based on a combination of historical,
estimated, and projected data.
 Financial accounting – reports are based almost exclusively on historical data.

Standards of presentation:
 Management accounting – in preparing reports, the management of a company can
set rules to produce the information most relevant to its specific needs.
 Financial accounting – reports are prepared in accordance with generally accepted
accounting principles and other pronouncements of authoritative accounting bodies.

Reporting entity:
 Management accounting – the focus of reports is on parts of the company’s value
chain, such as a business segment, product line, supplier, or customer.
 Financial accounting – financial reports relate to the business as a whole.

Period covered:
 Management accounting – reports may cover any time period (year, quarter, month,
week, day, etc.) and may be prepared as frequently as needed.
 Financial accounting – reports usually cover a year, quarter, or month.

1. Basic Management Functions And Concepts

Functions of Management

Management has been described as a social process involving responsibility for the
economical and effective planning and regulation of the operations of an enterprise
in fulfillment of given purposes. It is a dynamic process consisting of various
elements and activities. These activities are different from operative functions like
marketing, finance, and purchasing; rather, they are common to each and every
manager irrespective of his level or status.

Different experts have classified the functions of management. According to George
& Jerry, “There are four fundamental functions of management, i.e. planning,
organizing, actuating and controlling”.

According to Henri Fayol, “To manage is to forecast and plan, to organize, to
command, and to control”. Luther Gulick gave the keyword ‘POSDCORB’, where P
stands for Planning, O for Organizing, S for Staffing, D for Directing, Co for
Co-ordination, R for Reporting and B for Budgeting. But the most widely accepted
functions of management are those given by Koontz and O’Donnell, i.e. Planning,
Organizing, Staffing, Directing and Controlling.

For theoretical purposes it may be convenient to separate the functions of
management, but in practice these functions are overlapping in nature, i.e. they are
highly inseparable. Each function blends into the others, and each affects the
performance of the others.

 Planning

It is the basic function of management. It deals with chalking out a future course
of action and deciding in advance the most appropriate course of action for the
achievement of pre-determined goals. According to Koontz, “Planning is deciding
in advance what to do, when to do it, and how to do it. It bridges the gap from
where we are to where we want to be”. A plan is a future course of action: an
exercise in problem solving and decision making. Planning is the determination of
courses of action to achieve desired goals; thus, planning is systematic thinking
about the ways and means of accomplishing pre-determined goals. Planning is
necessary to ensure the proper utilization of human and non-human resources. It
is all-pervasive, it is an intellectual activity, and it also helps in avoiding confusion,
uncertainty, risk, wastage, etc.

 Organizing

It is the process of bringing together physical, financial and human resources and
developing productive relationships amongst them for the achievement of
organizational goals. According to Henri Fayol, “To organize a business is to
provide it with everything useful to its functioning, i.e. raw material, tools, capital
and personnel”. Organizing a business involves determining and providing
human and non-human resources for the organizational structure. Organizing as
a process involves:

 Identification of activities.
 Classification and grouping of activities.
 Assignment of duties.
 Delegation of authority and creation of responsibility.
 Coordination of authority and responsibility relationships.

 Staffing

It is the function of manning the organization structure and keeping it manned.
Staffing has assumed greater importance in recent years due to the
advancement of technology, the increase in the size of businesses, the
complexity of human behavior, etc. The main purpose of staffing is to put the right
man in the right job, i.e. square pegs in square holes and round pegs in round
holes. According to Koontz & O’Donnell, “The managerial function of staffing
involves manning the organization structure through proper and effective
selection, appraisal and development of personnel to fill the roles designed in the
structure”. Staffing involves:

 Manpower planning (estimating the required manpower, then searching for,
choosing, and placing the right people).
 Recruitment, selection and placement.
 Training and development.
 Remuneration.
 Performance appraisal.
 Promotions and transfers.

 Directing

It is that part of the managerial function which actuates the organizational
methods to work efficiently for the achievement of organizational purposes. It is
considered the life-spark of the enterprise, setting in motion the actions of people,
because planning, organizing and staffing are mere preparations for doing the
work. Direction is that inter-personal aspect of management which deals directly
with influencing, guiding, supervising and motivating subordinates for the
achievement of organizational goals. Direction has the following elements:

 Supervision – implies overseeing the work of subordinates by their
superiors. It is the act of watching and directing work and workers.
 Motivation – means inspiring, stimulating or encouraging subordinates
with zeal to work. Positive, negative, monetary and non-monetary
incentives may be used for this purpose.
 Leadership – may be defined as a process by which a manager guides and
influences the work of subordinates in a desired direction.
 Communication – is the process of passing information, experience, opinion,
etc. from one person to another. It is a bridge of understanding.

 Controlling

It implies measurement of accomplishment against the standards and correction
of deviations, if any, to ensure achievement of organizational goals. The purpose
of controlling is to ensure that everything occurs in conformity with the standards.
An efficient system of control helps to predict deviations before they actually
occur. According to Theo Haimann, “Controlling is the process of checking
whether or not proper progress is being made towards the objectives and goals
and acting, if necessary, to correct any deviation”. According to Koontz &
O’Donnell, “Controlling is the measurement and correction of performance
activities of subordinates in order to make sure that the enterprise objectives and
the plans devised to attain them are being accomplished”. Therefore controlling
has the following steps:

 Establishment of standard performance.
 Measurement of actual performance.
 Comparison of actual performance with the standards and finding out
deviations, if any.
 Corrective action.
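The four steps above reduce to a compare-and-act loop. As a rough sketch (the cost categories, figures, and the 5% tolerance are all hypothetical), a control check can flag only those items whose deviation from standard is large enough to warrant corrective action:

```python
# Compare actual performance with standards and flag material deviations.
# Data and tolerance are illustrative, not from the reviewer.

def control_check(standards, actuals, tolerance=0.05):
    """Return items whose relative deviation exceeds the tolerance."""
    flagged = {}
    for item, standard in standards.items():
        deviation = (actuals[item] - standard) / standard
        if abs(deviation) > tolerance:
            flagged[item] = round(deviation, 3)
    return flagged

standards = {"materials": 10_000, "labor": 8_000, "overhead": 5_000}
actuals   = {"materials": 10_300, "labor": 9_200, "overhead": 5_100}
print(control_check(standards, actuals))  # {'labor': 0.15}
```

Only labor exceeds the tolerance here, so corrective action can be targeted there rather than spread across every line item.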

Concepts, principles and functions of management

Terms and definitions:
 Managers – have an assigned position within the formal organization and are
expected to carry out specific functions, duties and responsibilities.
 Leader – somebody who leads or who goes first; a person in charge of a group.
 Leadership – the process of influencing people to accomplish goals.
 Administrator – a person who dispenses or administers something.
 Effectiveness – accomplishment of objectives.
 Efficiency – accomplishment of objectives with minimum use of resources.
Concepts of leader and manager. The leader/manager:
 is visionary in identifying needed change;
 is a role model;
 is sensitive to the timing of initiatives;
 is creative in identifying solutions;
 assesses the driving and restraining forces;
 identifies and implements strategies;
 seeks subordinates’ input;
 supports and rewards individual efforts;
 understands future directions.
Comparison between leadership and management:
 Management is responsible for functions such as planning, organizing, leading
and controlling, which relate to the total organization.
 Management is concerned with promoting the welfare of the entire organization
without giving scope to vested interests.
 Leadership is the ability to influence the group in achieving the goals set by
management.
 Leadership influences individuals, which contributes to the attainment of group
goals.
 Leadership uses informal power to influence the group.
 Leadership is necessary to create change.
Comparison between administration and management:
 Administration – determination of objectives; thinking and determinative
functions; takes major decisions about the overall enterprise; involves the
planning and organizing functions; coordinates finance, production and
distribution.
 Management – plans and actions; doing and executive functions; takes decisions
within the framework set by the administration; also involves the motivating and
controlling functions; uses the organization to achieve the targets fixed by the
administration.
Administration versus management:
 Administration – the process and agency responsible for determining the aims for
which an organization and its management are to strive, establishing the broad
policies under which they are to operate, and giving general oversight to the
continuing effectiveness of the total operation in reaching the objectives sought.
 Management – the process and agency that directs and guides the operations of
an organization in realizing the established aims.
 The higher functions of management include administration.
 There are two types of management: administrative and operative.
Figure: management and administration by level. Top-level management (the
hospital board and executive director) constitutes administration; middle
management (the hospital matron and assistant matrons) and lower management
(nursing supervisors and ward sisters) constitute management.
Definition and etymology:
 The word “management” derives from maneggiare (Italian, “to handle”), from
manus (Latin, “hand”), and from the French mesnagement, later menagement,
used during the 17th and 18th centuries.
 The term “management” is used at times to indicate the process or the functions:
planning, organizing, staffing, directing and controlling.
 The term is also used for a discipline, i.e. a body of knowledge and practice.

Management can be defined under five orientations:
1. Productivity orientation
2. Human relations orientation
3. Process orientation
4. Decision-making orientation
5. Systems approach
Productivity orientation:
 “Management is the art of knowing what you want to do … in the best and
cheapest way.” – Frederick W. Taylor (1914), propounder of this approach.
 Management is to conduct the affairs of a business, moving towards its objectives
through a continuous improvement and optimization of resources via the
essential management functions. – Henri Fayol (1917)
 Critics: the definition ignores the human side, which is the most important
element of management, and is also silent about the process of management.

Human relations orientation:
 Management is the art of getting things done through and with informally
organized groups, and the art of creating an environment in which people can
perform and individuals can cooperate towards attaining group goals.
 Critics: these management thinkers put the primary focus on people and their
feelings, not on productivity or functions. The chief concerns are individuals,
group processes, interpersonal relations, leadership, and communication.
Process orientation:
 Management is the process by which managers create, direct, maintain and
operate purposive organizations through systematic, coordinated, cooperative
human effort. – Dalton McFarland (1976)
 The distinct process consisting of planning, organizing, actuating and controlling
to determine and accomplish the stated objectives by the use of human and other
resources. – Terry & Franklin (1988)
 Critics: this approach embraces the human element (the most important aspect
of management), clarifies what a manager has to do and why, and clearly
indicates how it is done. These management thinkers believe that management
does not do; it gets others to do.

Decision-making orientation:
 Management is simply the process of decision making and control over the
actions of human beings for the express purpose of attaining predetermined
goals. The quality of management is judged by the quality of decisions in diverse
situations on the economic front and amid risks and uncertainties. – Banerjee
(1996)
 This orientation focuses on managerial decisions, views management as a series
of decisions, and concentrates on a rational approach to decision making.
 Critics: this orientation is silent about the process part, as it provides no clues at
all as to what the manager needs to know and do. – Ernest Dale (1973)
Systems approach:
 Management is defined as the process of planning, organizing, directing and
controlling to accomplish predetermined objectives effectively through the
coordinated use of human and material resources.
 Nursing management is the process of working through nursing and other
supporting staff to provide care to patients or clients as needed by them.
 This approach views an enterprise as a system composed of a set of interrelated
but separate elements or subsystems working towards the achievement of a
common goal. The system’s operations are viewed as procuring inputs and
processing those inputs into outputs.
 Physical facilities, human resources, and money are parts of the system;
materials, energy flows, and information are the inputs; and management
processes these inputs into outputs in the form of services, products, group
satisfaction, and others.
Terminology related to the principles of management:
 Division of work: specialization so that all kinds of work develop.
 Authority and responsibility: related; authority flows from responsibility.
 Discipline: implies obedience to, and respect for, authority.
 Unity of command: one employee, one boss.
 Unity of direction: one plan and one head for a group of activities having one
objective.
 Subordination of individual interest to general interest: the interest of the
organization should be above the interest of the individual.
 Remuneration: a fair and equitable pay to employees.
 Centralization: a highly centralized power structure.
 Scalar chain: all employees are linked with each other in a hierarchy of
superior-subordinate relationships.
 Order: a place for everything and everything in its proper place.
 Equity: a sense of kindness and justice throughout all levels of personnel.
 Stability: tenure of personnel; job security to avoid employee turnover.
 Initiative: encourage subordinates to take the initiative.
 Esprit de corps: union is strength; there should be cohesiveness and team spirit.
The 14 principles of management given by Henri Fayol (administrative
management theory) are:
 Division of work: an employee is assigned to only one type of work to increase
output, which leads to specialization. The division of work should be based on
the efficiency of subordinates.
 Authority and responsibility: authority means the right to give orders and the
power to exact obedience; responsibility refers to the obligation to perform in the
manner desired and directed by superior authority.
 Discipline: workers should be obedient and respectful of the organization; this is
absolutely essential.
 Unity of command: one employee should have only one boss and receive orders
from him or her, following one plan.
 Unity of direction: one head and one plan for a group of activities having the
same objective.
 Subordination of individual interest to general interest: organizational goals have
supremacy over the interests of any individual or group of individuals, including
the manager.
 Remuneration of personnel: remuneration should be fair and satisfactory to the
employees and the employer, including the managers, justifying the workload, job
hazards, efficiency and quality of performance.
 Centralization: decisions are made from the top (managers); subordinates should
still be given enough authority to do their jobs properly.
 Scalar chain (hierarchy): the line of authority from top management to the lowest
ranks represents the scalar chain. Communications should follow this chain.
 Order: it implies the order of things and people. All required things and materials
should be placed in their prescribed (right) place, the workplace should be clean,
tidy and safe for employees, and the right people should be engaged in the right
places.
 Equity: the combination of kindness and justice. Employees expect equity from
management and should be treated fairly, justly and kindly, in return for their
devotion and loyalty.
 Stability of tenure of personnel: maximum productivity through efficient workers
requires a stable workforce with stable tenure.
 Initiative: passion, energy and initiative from employees at all levels, through the
freedom to think out a plan and execute it. It motivates people and increases
productivity.
 Esprit de corps: team or organizational spirit, i.e. cohesion among personnel, is a
great source of strength in the organization. Managers should strive to promote
team spirit, unity and organizational communication.
The functions of management are:
1. Planning
2. Organizing
3. Leading/directing
4. Controlling

Planning:
 Planning is a basic managerial function: setting goals and deciding in advance
how best to achieve them. Planning predetermines the future, selecting
appropriate goals and the actions to achieve them. It is the process by which
management sets objectives, assesses the future, and develops courses of
action to accomplish those objectives.
 Planning requires decision making by managers at all levels. It decides in
advance what to do, how to do it, when to do it, and who is to do it. Good
planning is also required for the proper utilization of human and non-human
resources to accomplish predetermined goals.
 Planning is the core of all the functions of management: the foundation upon
which the other three should be built. The planning process is ongoing.
 There are uncontrollable external factors that constantly affect an organization,
both positively and negatively. Depending on the circumstances, these external
factors may cause an organization to adjust its course of action in accomplishing
certain goals; this is referred to as strategic planning.
 During strategic planning, management analyzes the internal and external
factors that do or may affect the organization, as well as its objectives and goals,
and from there determines the organization’s strengths, weaknesses,
opportunities and threats. For management to do this effectively, planning has to
be realistic and comprehensive.
Organizing:
 An important function of management, and also a prerequisite for performing the
staffing, directing and controlling functions. It is the ongoing process of arranging
people and physical resources to carry out plans and accomplish the
organizational goals.
 Organizing involves:
 defining the tasks required for achieving goals (what tasks are to be done?);
 grouping the activities in a logical pattern;
 determining manpower requirements;
 establishing authority and responsibility for each position (who reports to
whom?);
 assigning the activities to specific positions and people;
 coordinating activities and authority relations;
 improving efficiency and reducing operating cost by avoiding repetition and
duplication of activities.
Leading:
 Leading is a continuous process of setting objectives and trying to achieve them
through the efforts of other people. Leadership, an important function of
management, is guiding and influencing people to achieve goals willingly and
enthusiastically in a given situation.
 Leading consists of leadership, motivation and communication. Leadership is the
ability to influence a group toward the achievement of goals; motivation is the act
of stimulating people to contribute at a higher rate; and communication conveys
information from top to bottom, bottom to top, and horizontally and laterally.

Controlling:
 Controlling consists of the actions and decisions managers undertake to ensure
that actual results match planned results. It ensures that the right thing is done in
the right manner and at the right time. The steps of controlling are: establishing
standards, measuring actual performance, finding and analyzing deviations, and
taking corrective action.
Principles of management that apply in different situations:
 Management by objectives
 Learning from experience
 Division of labor
 Substitution of resources
 Coordination of work activities
 Functions determine structure
 Delegation of authority
 Management by exception

Management by objectives:
 Deciding and stating what is to be accomplished is setting an objective (a goal, a
purpose, an end, a target); there are many kinds of objective. The management
principle that underlies the comparison of objectives with their achievement, in
order to judge effectiveness, is known as learning from experience.
 When there is a gap between objectives and results (or achievements),
management analyzes why only the observed results were achieved and why
they fell short of the set objectives. Some causes can be easily remedied, and
action is taken accordingly; others cannot be removed in the short term and are
then called constraints. Management learns from this process and uses what it
has learned in its further decisions for achieving its objectives. This process is
sometimes called feedback.
Division of labor:
 When work is divided, or distributed, among the members of a group, and the
work is directed and coordinated, the group becomes a team. In a team there is
generally specialization and division of labor, with each category of staff
exercising its own skills towards achieving the objectives; management consists
in assigning a balanced proportion of each kind of staff to the work to be done.
 The team approach is the way in which management attempts to bring about
balance among the different members of the team and the work they do.

Coordination of activities (convergence of work):
 Convergence of work means that the activities of the various people who do the
work come together in the achievement of objectives. The activities should be
designed, assigned and directed in such a way that they support each other in
moving towards a common goal.
 It also implies that working relations (the ways in which the members of a team
interact with one another) should contribute to the success of each activity, and
thus to general effectiveness.

Substitution of resources:
 Substitution means replacement. One particular type of substitution of resources
is labor substitution, e.g. using trained auxiliary nurse-midwives (ANMs) or
volunteers for tasks formerly undertaken by professionals.

Functions determine structure:
 When work is clearly defined, i.e. the functions and duties of the individual
members of the team are clearly defined and known to all, the working relations
(the structure) follow.

Delegation of authority:
 Delegation takes place when someone with authority “lends” that authority to
another person, conditionally or not, so as to enable that person to take
responsibility when the need arises. It must also be ensured that a decision, once
taken, is made known to all concerned; this is communication. Decisions should
be communicated among those who make them, those who implement them,
and the people affected by them.
 The “shortest decision-path” deals with the question of who should make which
decision, and often when and where as well; delegation of authority is the
answer. In this way, decisions are made as close as possible in time and place to
the object of the decision and to those affected by it. This saves time and work
(e.g. in transmitting information) and also ensures that decisions can take full
account of the circumstances that make the decisions necessary and in which
they are put into effect.
Management by exception:
 Management by exception means two things. First, be selective: do not become
overloaded with routine and unnecessary information; keep your mind available
for the critical information on which the manager will be required to act. Second,
make the big decisions first: being overloaded with petty decisions may result in
more important ones being neglected, or in what has been called “postponing
decisions until they become unnecessary”.
 In short, management by exception means selectivity in information and priority
in decisions.
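The "be selective" half of management by exception can be sketched as a simple report filter. The report items, variance percentages, and the 10% threshold below are hypothetical, chosen only to show the idea of suppressing routine items so that only significant ones reach the manager.

```python
# Management by exception as a filter: only variances above a
# threshold are escalated. All data here is illustrative.

reports = [
    {"item": "postage",   "variance_pct": 0.01},
    {"item": "materials", "variance_pct": 0.12},
    {"item": "utilities", "variance_pct": 0.03},
    {"item": "overtime",  "variance_pct": 0.25},
]

THRESHOLD = 0.10  # only deviations above 10% are escalated

exceptions = [r["item"] for r in reports if r["variance_pct"] > THRESHOLD]
print(exceptions)  # ['materials', 'overtime']
```

Routine items (postage, utilities) are handled at lower levels; the manager's attention is reserved for the two exceptions.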
Importance of management:
1. Optimum utilization of resources
2. Competitive strength
3. Cordial organizational relations
4. Motivation of employees
5. Introduction of new techniques
6. Effective management: society gets the benefits
7. Expansion of business
8. Brings stability and prosperity
9. Develops team spirit
10. Ensures effective use of managers
11. Ensures smooth functioning (raises efficiency, productivity and profitability)
12. Reduces turnover and absenteeism
13. Creates a sound organization

Evolution of management thought. This evolution can be studied in the following
broad stages:
1. The classical theory of management (classical approach, 1900-1930), which
includes three streams of thought: (i) bureaucracy, (ii) scientific management,
and (iii) administrative management.
2. The neo-classical theory of management (1930-1960), which includes two
streams: (i) the human relations approach and (ii) the behavioral sciences
approach.
3. The modern theory of management (1960 onwards), which includes three
streams of thought: (i) the quantitative approach to management (operations
research), (ii) the systems approach to management, and (iii) the contingency
approach to management.

2. Distinction Among Management Accounting, Cost Accounting And Financial
Accounting

Introduction

Starting a career as a staff accountant with the goal of becoming partner in a public
accounting firm is the dream of many accounting majors. However, a career goal of
becoming a chief financial officer or controller is equally viable, and the end result can
be equally rewarding.

This text presents tools and techniques used by cost and management accountants,
and also provides problem-solving methods that are useful in achieving corporate
goals. Such knowledge is important to a student who wants to become a Certified
Public Accountant (CPA) and/or a Certified Management Accountant (CMA). The first
part of this text presents the traditional methods of cost and management accounting,
which are the building blocks for generating information used to satisfy internal and
external user needs. The second part of the text presents innovative cost and
management accounting topics and methods.

Comparison of Financial, Management, and Cost Accounting

Accounting is called the language of business. As such, accounting can be viewed as
having different “dialects.” The financial accounting “dialect” is often characterized as
the primary focus of accounting. Financial accounting concentrates on the
preparation and provision of financial statements: the balance sheet, income
statement, cash flow statement, and statement of changes in stockholders’ equity.
The second “dialect” of accounting is that of management and cost accounting.
Management accounting is concerned with providing information to parties inside an
organization so that they can plan, control operations, make decisions, and evaluate
performance.

Financial Accounting

The objective of financial accounting is to provide useful information to external
parties, including investors and creditors. Financial accounting requires compliance
with generally accepted accounting principles (GAAP), which are primarily issued by
the Financial Accounting Standards Board (FASB), the International Accounting
Standards Board (IASB), and the Securities and Exchange Commission (SEC).
Financial accounting information is typically historical, quantitative, monetary, and
verifiable. Such information usually reflects activities of the whole organization.
Publicly held companies are required to have their financial statements audited by an
independent auditing firm. Oversight of auditing standards for public companies is the
responsibility of the Public Company Accounting Oversight Board (PCAOB). The
PCAOB was created by the Sarbanes-Oxley Act of 2002 (SOX), legislation that was
passed because of perceived abuses in financial reporting by corporate managers.

In the early 1900s, financial accounting was the primary source of information for
evaluating business operations. Companies often used return on investment (ROI) to
allocate resources and evaluate divisional performance. ROI is calculated as income
divided by total assets. Using a single measure such as ROI for decision making was
considered reasonable when companies engaged in one type of activity, operated
only domestically, were primarily labor intensive, and were managed and owned by a
small number of people who were very familiar with the operating processes.
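
The ROI measure described above can be expressed as a short function; the
divisional figures below are hypothetical:

```python
def return_on_investment(income: float, total_assets: float) -> float:
    """ROI as defined above: income divided by total assets."""
    if total_assets == 0:
        raise ValueError("total assets must be nonzero")
    return income / total_assets

# Hypothetical divisional figures:
roi = return_on_investment(income=150_000, total_assets=1_000_000)
print(f"{roi:.0%}")  # 15%
```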

As the securities market grew, so did the demand for audited financial statements.
Preparing financial reports was costly, and information technology was limited.
Developing a management accounting system separate from the financial accounting
system would have been cost prohibitive, particularly given the limited benefits that
would have accrued to managers and owners who were intimately familiar with their
company’s narrowly focused operating activity. Collecting information and providing
reports to management on a real-time basis would have been impossible in that era.

Management Accounting

Management accounting is used to gather the financial and nonfinancial information
needed by internal users. Managers are concerned with fulfilling corporate goals,
communicating and implementing strategy, and coordinating product design,
production, and marketing while simultaneously operating distinct business
segments. Management accounting information commonly addresses individual or
divisional concerns rather than those of the firm as a whole. Management accounting
is not required to adhere to GAAP but provides both historical and forward-looking
information for managers.

By the mid-1900s, managers were often no longer owners but, instead, individuals
who had been selected for their positions because of their skills in accounting,
finance, or law. These managers frequently lacked in-depth knowledge of a
company’s underlying operations and processes. Additionally, companies began
operating in multiple states and countries and began manufacturing many products in
a non-labor-intensive environment. Trying to manage by using only financial reporting
information sometimes created dysfunctional behavior. Managers needed an
accounting system that could help implement and monitor a company’s goals in a
globally competitive, multiple-product environment. Introduction of affordable
information technology allowed management accounting to develop into a discipline
separate from financial accounting. Under these new circumstances, management
accounting evolved to be independent of financial accounting.

The primary differences between financial and management accounting are shown in
Exhibit 1–1.

Exhibit 1–1 Financial and Management Accounting Differences

                                Financial Accounting       Management Accounting
Primary users:                  External                   Internal
Primary organizational focus:   Whole (aggregated)         Parts (segmented)
Information characteristics:    Must be                    May be
                                • Historical               • Current or forecasted
                                • Quantitative             • Quantitative or qualitative
                                • Monetary                 • Monetary or nonmonetary
                                • Verifiable               • Timely and, at a minimum,
                                                             reasonably estimated
Overriding criteria:            Generally accepted         Situational relevance
                                accounting principles      (usefulness)
                                Consistency                Benefits in excess of costs
                                Verifiability              Flexibility
Recordkeeping:                  Formal                     Combination of formal and
                                                           informal

As companies grew and were organized across multiple locations, financial
accounting became less appropriate for satisfying management’s information needs.
To prepare plans, evaluate performance, and make more complex decisions,
management needed forward-looking information rather than only the historical data
provided by financial accounting. The upstream costs (research, development,
product design, and supply chain) and downstream costs (marketing, distribution, and
customer service) that companies incurred were becoming a larger percentage of
total costs. When making pricing decisions, managers needed to add these upstream
and downstream costs to the GAAP-determined product cost. The various types of
costs associated with products are shown in Exhibit 1–2.

Cost Accounting

Cost accounting can be viewed as the intersection between financial and
management accounting (see Exhibit 1–3). Cost accounting addresses the
informational demands of both financial and management accounting by providing
product cost information to

• External parties (stockholders, creditors, and various regulatory bodies) for
  investment and credit decisions and for reporting purposes, and
• Internal managers for planning, controlling, decision making, and evaluating
  performance.

Exhibit 1–2 Organizational Costs

Upstream costs: research and development, product design
Downstream costs: marketing, distribution, and customer service

Product cost is developed in compliance with GAAP for financial reporting purposes,
and, for a manufacturing company, consists of the sum of all factory costs incurred to
make one unit of product. But product cost information can also be developed outside
of the constraints of GAAP to assist management in its needs for planning and
controlling operations.

As companies expand operations, managers recognize that a single cost can no
longer be computed for a specific product. For example, a company’s Asian
operations could be highly labor intensive, whereas North American operations could
be highly capital intensive.

Product costs cannot be easily compared between the two locations because their
production processes are not similar. Such complications have resulted in the
evolution of the cost accounting database, which includes more than simply financial
accounting measures.

Accounting is a versatile profession and is continually changing and adapting to the
requirements of different industries over the years. There are different branches of
accounting, namely financial, management, cost, government, forensic, and others
that can be named according to the needs of the organization. So there arises a
necessity to know the difference between the various types of accounting.

In this article, you will understand the difference between financial accounting, cost
accounting and management accounting. Before going into the study of the
differences among them, let us first define financial accounting. Financial accounting
is defined as the representation of the company’s or firm’s activities during the
period and its financial position at the end of the accounting period. It deals with the
preparation and presentation of financial statements, namely the Balance Sheet,
Profit and Loss Account, Cash Flow Statement and Statement of Changes in Equity.

Understanding financial accounting basics is important to prepare and interpret
financial statements. One of the basic financial accounting equations is “Assets equal
Owner’s Capital (Equity) plus Liabilities.” Assets are the company’s property that
helps in running the business smoothly and earning from its operations, while
liabilities are the money which the company owes to creditors, banks, etc. Normally,
too much liability is not considered good for the company: in case of liquidation or
termination of the company, the creditors have to be paid first, and only if there is
any residue can the owners or shareholders take it home. Capital is the contribution
made by the owner, or the assets owned, in setting up the business and bringing it to
life.
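
The accounting equation can be checked with a few made-up balances:

```python
# Hypothetical balances illustrating "Assets = Owner's Capital (Equity) + Liabilities":
assets = 500_000       # property that helps run the business
liabilities = 180_000  # amounts owed to creditors, banks, etc.
owners_capital = assets - liabilities  # equity is the residual claim of the owners

assert assets == owners_capital + liabilities
print(owners_capital)  # 320000
```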

Financial Accounting vs. Cost Accounting vs. Management Accounting

Financial accounting gives out information about the enterprise’s financial activities
and situation. It makes use of past or historical data, and all transactions and
statements are recorded and presented mostly in terms of money. The persons who
make use of these financial statements are outsiders such as banks, shareholders,
creditors, government authorities, etc. Financial statements are usually presented
once a year, and there is a certain format for their presentation. It is mandatory for
companies to follow the rules and policies framed under GAAP (Generally Accepted
Accounting Principles). Financial accounting indicates whether the company is
running at a loss or a profit.

Cost accounting helps in determining the cost of the product, how to control it, and
how to make decisions. It makes use of both past and present data for the
ascertainment of product cost. There is no specific format for the preparation of cost
accounting statements. It is used by the internal management of the company and
usually the cost accountant prepares this to ascertain the cost of a particular product
taking into account the cost of materials, labor and different overheads. No certain
periodicity is needed for the preparation of these statements and they are needed as
and when required by the management. This makes use of certain rules and
regulations while computing the cost of different products in different industries.

Unlike the above two branches, management accounting deals with both
quantitative and qualitative aspects. This involves the preparation of budgets,
forecasts to make viable and valuable future decisions by the management. Many
decisions are taken based on the projected figures of the future. There is no question
of rules and regulations to be followed while preparing these statements but the
management can set their own principles. Like cost accounting, in management
accounting also there is no specific time span for its statement and report
preparation.  It makes use of both cost and financial statements as well to analyze the
data.

                        FINANCIAL ACCOUNTING              MANAGEMENT ACCOUNTING
PRIMARY USERS           External (investors,              Internal (managers of the
                        government authorities,           business, employees)
                        creditors)
PURPOSE OF              Help investors, creditors,        Help managers plan and
INFORMATION             and others make investment,       control business operations
                        credit, and other decisions
TIMELINESS              Delayed or historical             Current and future oriented
RESTRICTIONS            GAAP (FASB and SEC)               GAAP does not apply, but
                                                          information should be
                                                          restricted to strategic and
                                                          operational needs
NATURE OF               Objective, auditable,             More subjective and
INFORMATION             reliable, consistent and          judgmental, valid, relevant
                        precise                           and accurate
SCOPE                   Highly aggregated                 Disaggregated information
                        information about the             to support local decisions
                        overall organization
BEHAVIORAL              Concern about adequacy            Concern about how reports
IMPLICATIONS            of disclosure                     will affect employees’
                                                          behavior
FEATURES                Must be accurate and              Usually approximate but
                        timely; compulsory under          relevant and flexible; not
                        company law; an end in            mandatory except for a few
                        itself                            companies; a means to the
                                                          end
SEGMENTS OF             Primarily concerned with          Segment reporting is the
ORGANISATION            reporting for the company         primary emphasis
                        as a whole

                        FINANCIAL ACCOUNTING              COST ACCOUNTING
OBJECTIVE               Provides information about        Provides information for the
                        the financial performance         ascertainment of costs, to
                        and financial position of         control costs and for
                        the business                      decision making about costs
NATURE                  Classifies, records,              Classifies, records, presents
                        presents and interprets           and interprets, in a
                        transactions in terms of          significant manner, materials,
                        money                             labor and overhead costs
RECORDING OF DATA       Records historical data           Records and presents
                                                          estimated, budgeted data;
                                                          makes use of both historical
                                                          and predetermined costs
USERS OF                External users such as            Internal management at
INFORMATION             shareholders, creditors,          different levels
                        financial analysts,
                        government and its
                        agencies, etc.
ANALYSIS OF COSTS       Shows the profit/loss of          Provides details of the costs
AND PROFITS             the organization                  and profit of each product,
                                                          process, job, etc.
TIME PERIOD             Statements are prepared           Statements are prepared as
                        for a definite period,            and when required
                        usually a year
PRESENTATION OF         A set format is used for          There are no set formats for
INFORMATION             presenting financial              presenting cost information
                        information

3. Role And Activities Of Controller And Treasurer

Contrasting the Roles of Treasurers and Controllers

Treasurers and controllers are both financial managers, but they have different
roles. Controllers usually concentrate on what has already happened inside a
company: they prepare financial statements and other reports based on past activity.
Treasurers focus outward and interact with the bankers, shareholders and potential
investors who provide capital. In some small businesses, the owner, a controller and
an outside accountant might share the financial duties.

Qualifications

Treasurers and controllers should be college graduates, with several years of
experience working in accounting or finance positions, who have completed courses
in accounting, finance or economics. Both positions call for candidates who pay
attention to detail, are analytically minded and possess organizational skills. They
should be comfortable working with Microsoft Excel and financial analysis software.
Master’s degrees are preferred, especially by large corporations. Employers often
require that controllers be certified public accountants.

Where They Work

There are positions for treasurers and controllers in companies of all sizes
except the smallest, where owners and outside accountants often perform the
necessary financial functions. Treasurers and controllers work in nonprofit
organizations and government agencies as well as private sector businesses,
especially banks and other financial businesses. Their day-to-day functions include
accounting oversight (mainly the concern of controllers), analysis and reporting.
Treasurers tend to specialize in cash management and risk management.

Focus

Controllers focus on the internal workings of organizations. They prepare
budgets and supervise accounting and auditing work. They also generate the tax
returns and financial statements required by regulators. Controllers monitor whether
operational units are meeting deadlines and complying with regulations. Treasurers
obtain loans and other credit from outside sources, maintain relationships with banks,
raise equity capital, invest company funds and communicate with shareholders. In
general, they manage the company’s cash and ensure that the company meets the
financial goals expressed in the budget.

CONTROLLER: The Chief Management Accountant

CONTROLLER – the chief management accounting executive of an organization who
is mainly responsible for the accounting aspects of management planning and
control.

FUNCTIONS OF THE CONTROLLER

 PLANNING FOR CONTROL – to establish, coordinate, and administer, as an
  integral part of management, an adequate plan for the control of operations.

 REPORTING AND INTERPRETING – to compare performance with operating
  plans and standards and to report and interpret the results of operations to the
  concerned users of such reports.

 EVALUATING AND CONSULTING – to consult with all levels of management
  responsible for policy or action concerning any phase of the operation of the
  business as it relates to the attainment of objectives and the effectiveness of
  policies, organizational structures, and procedures.

 TAX ADMINISTRATION – to establish and administer tax policies and
  procedures.

 GOVERNMENT REPORTING – to supervise or coordinate the preparation of
  reports to government agencies.

 PROTECTION OF ASSETS – to assure protection of the assets of the business
  through internal control, internal auditing, and assuring proper insurance
  coverage.

 ECONOMIC APPRAISAL – to continuously appraise economic and social forces
  and government influences and to interpret their effect upon the business.

DISTINCTIONS BETWEEN CONTROLLERSHIP AND TREASURERSHIP

CONTROLLERSHIP                  TREASURERSHIP
Planning and control            Provision of capital
Reporting and interpreting      Investor relations
Evaluating and consulting       Short-term financing
Tax administration              Banking and custody
Government reporting            Credit and collections
Protection of assets            Investments
Economic appraisal              Insurance

4. International Certifications In Management Accounting

IMA’s CMA® (Certified Management Accountant) certification is a professional
credential that can be earned in the advanced management accounting and financial
management fields. The certification signifies that the holder possesses knowledge
in the areas of financial planning, analysis, control, decision support, and
professional ethics, the skills most in demand on finance teams around the world.
The CMA is a U.S.-based, globally recognized certification offered by the Institute of
Management Accountants.

CMA-certified professionals work inside organizations of all sizes, industries, and
types, including manufacturing and services, public and private enterprises, not-for-
profit organizations, academic institutions, government entities, and multinational
corporations. To date, more than 50,000 CMAs have been certified in more than 100
countries. To obtain certification, candidates must pass a rigorous exam, meet an
educational requirement and an experience requirement, and demonstrate a
commitment to continuous learning through Continuing Professional Education
(CPE).

SUMMARY

1. Certified Public Accountant (CPA)
2. Certified Management Accountant (CMA)*
3. Certified Financial Manager (CFM)*

*These are not “licenses,” per se, but they do represent significant competency in
managerial accounting and financial management skills. These certifications are
sponsored by the Institute of Management Accountants.

CERTIFICATION AVAILABLE TO MANAGEMENT ACCOUNTANTS

The CMA Program or Certificate in Management Accounting

The CMA Program, or Certificate in Management Accounting, is a program for
management accountants designed to recognize their unique qualifications, high
standards, and professional expertise in the field of management accounting.

Qualified management accountants earn the designation Certified Management
Accountant (CMA), the management accountant’s counterpart to the Certified Public
Accountant (CPA).

The Organizations Involved

In the United States, the CMA Program is conducted by the Institute of Management
Accountants (IMA), the largest U.S. professional organization of accountants.

In the Philippines, the Philippine Association of Management Accountants (PAMA)
conducts the Certificate in Management Accounting (CMA) program through its
continuing education arm, the Philippine Institute of Management Accountants
(PIMA).

The PAMA is affiliated with the Institute of Management Accountants (IMA).


The PAMA was founded primarily to provide its members with professional and
educational activities that enhance their knowledge of management accounting
principles and methods.

Objectives of the Program

The CMA program has four objectives, consistent with the mission of the Philippine
Association of Management Accountants (PAMA) to “promote management
accounting, enhance the capability of its members and foster high standards of
professionalism”:

 To establish Management Accounting as a recognized profession in the field


of business

 To encourage stricter and high quality educational standards in


Management Accounting

 To provide objective means for measuring the Management Accountant’s


knowledge and competence

 To encourage continued professional growth


Requirements to Become a CMA

Qualifying Experience Consists of the Following:

Non-qualifying Experience Consists of the Following:

B. Management Accounting Concepts & Techniques For Planning & Control

1. Cost Terms, Concepts And Behavior
   a. Nature And Classifications Of Cost and Cost Accumulation Methods

DIFFERENT TYPES OF COSTS

Identification, Differentiation, Characteristics, and Behavior

DIRECT AND INDIRECT

Identification

Direct costs are costs that are related to a particular cost object and can
economically and effectively be traced to that cost object.

Indirect costs are costs that are related to a cost object but cannot practically,
economically, and effectively be traced to that cost object. Cost assignment is
done by allocating the indirect costs to the related cost objects.

Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting

FIXED AND VARIABLE

Identification

Variable costs are costs for which, within the relevant range and time period under
consideration, the total amount varies directly with the change in activity level or
cost driver, while the per unit amount remains constant.

Fixed costs are costs for which, within the relevant range and time period under
consideration, the total amount remains unchanged, while the per unit amount varies
inversely with the change in the cost driver. Fixed costs may be committed or
discretionary (managed).
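
A brief sketch, using invented figures, of how the totals and per-unit amounts
behave as the cost driver changes within the relevant range:

```python
FIXED_COST = 100_000        # total fixed cost (hypothetical)
VARIABLE_COST_PER_UNIT = 5  # variable cost per unit (hypothetical)

for units in (10_000, 20_000, 40_000):
    total_variable = VARIABLE_COST_PER_UNIT * units  # total varies directly with units
    fixed_per_unit = FIXED_COST / units              # per-unit amount varies inversely
    print(units, total_variable, fixed_per_unit)
# 10000 50000 10.0
# 20000 100000 5.0
# 40000 200000 2.5
```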

Committed fixed costs are long term in nature and cannot be eliminated even for
short periods of time without affecting the profitability or long-term goals of the
firm. (Example: depreciation of buildings and equipment)

Discretionary or managed fixed costs usually arise from periodic (e.g., annual)
decisions by management to spend in certain fixed cost areas such as research,
advertising, and maintenance contracts. Discretionary fixed costs may be changed
by management from period to period, or even during (within) the period if
circumstances demand such a change. (Examples: research and development
costs, advertising expense, maintenance costs provided by service contractors)

Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting

INVENTORIABLE AND PERIOD

Identification

Inventoriable (product) costs are costs incurred to manufacture a product.

 Product costs of the units sold during the period are recognized as
  expenses (cost of goods sold) in the income statement.
 Product costs of the unsold units become the cost of inventory and are
  treated as an asset in the balance sheet.

Period costs are the non-manufacturing costs that include selling, administrative,
and research and development costs. These costs are expensed in the period of
incurrence and do not become part of the cost of inventory.

Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting

OPPORTUNITY AND SUNK

Identification

Opportunity costs are income or benefits given up when one alternative is selected
over another.

Sunk (past or historical) costs are costs that have already been incurred and cannot
be changed by any decision made now or to be made in the future.

Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting

DIFFERENT TYPES OF COST ACCUMULATION METHODS

JOB ORDER COSTING

Identification

Job order costing method is the accumulation of costs by specific jobs (i.e.,
physical units, distinct batches, or job lots). This costing method is appropriate if
a product can be produced separately, distinct from other jobs that require
different amounts of materials, labor, and overhead.

Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting

PROCESS COSTING

Process costing accumulates all the costs of operating a process for a period of
time and then divides the cost by the number of units of product that passed
through that process during the period; the result is a unit cost. If the product of
one process becomes the material of the next, a unit cost is computed for each
process.

Identification

Process costing accumulates costs by production process or by department on a
period-to-period basis. It is also applicable when all the units are worked within a
department or when there is no need to distinguish among units.
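
The computation above can be sketched as follows; the process names and
amounts are invented for illustration:

```python
def unit_cost(process_costs: float, units: float) -> float:
    """Cost per unit = total costs of operating the process for the period,
    divided by the units that passed through that process."""
    return process_costs / units

# Two sequential processes: the output of Mixing becomes the material of Bottling.
mixing = unit_cost(process_costs=240_000, units=60_000)   # 4.0 per unit
bottling = unit_cost(process_costs=90_000, units=60_000)  # 1.5 per unit

print(mixing + bottling)  # 5.5 cumulative cost per finished unit
```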

Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting

ABC COSTING

Activity-based costing (ABC) has been popularized because of the rapid increase
in the automation of manufacturing processes, which has led to a significant
increase in the incurrence of indirect costs and a consequent need for more
accurate cost allocation.

Under activity-based costing, as the name implies, costs are accumulated by
activity rather than by department or function for purposes of product costing.

Identification

ABC costing is one means of refining a cost system to avoid what has been
called peanut-butter costing. Inaccurately averaging or spreading costs like
peanut butter over products that use different amounts of resources results in
product-cost cross-subsidization.

Product-cost cross-subsidization describes the condition in which the miscosting
of one product causes the miscosting of other products.

In the Accounting Glossary of the Statements on Management Accounting, ABC
was defined as a system that:

 Identifies the causal relationship between the incurrence of cost and activities
 Determines the underlying drivers of the activities
 Establishes cost pools related to individual drivers
 Develops costing rates
 Applies costs to products on the basis of resources consumed (drivers)
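
The five steps listed above can be sketched with invented activity pools, drivers,
and driver volumes:

```python
# Steps 1-2: activities and their cost drivers have been identified; each pool
# below holds the cost traced to one activity and its budgeted driver volume (step 3).
cost_pools = {
    "machine setups": {"pool_cost": 60_000, "driver_volume": 300},    # per setup
    "inspections":    {"pool_cost": 45_000, "driver_volume": 1_500},  # per inspection
}

# Step 4: develop a costing rate per driver unit for each activity.
rates = {name: p["pool_cost"] / p["driver_volume"] for name, p in cost_pools.items()}

# Step 5: apply costs to a product based on the driver units it consumed.
consumed = {"machine setups": 12, "inspections": 40}
applied = sum(rates[name] * units for name, units in consumed.items())
print(applied)  # 12 x 200 + 40 x 30 = 3600.0
```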

Differentiation
Characteristics
Behavior
Usefulness in Cost Planning
Usefulness in Financial and Management Reporting

b. Analysis Of Cost Behavior (Variable, Fixed, Semi-Variable/Mixed, Step-Cost)

RELEVANT RANGE

A range of activity that reflects the company’s normal operating range. Within this
relevant range, the cost behavior discussed below is valid.

Variable

The total amount varies directly with the cost driver, and the amount per unit of
cost driver remains constant.

Fixed

The total amount remains constant, and the amount per unit of cost driver varies
inversely with the cost driver.

Semi-Variable / Mixed

Mixed costs or Total Costs have variable and fixed costs components.

TC = FC + VC

Where: TC = total cost, FC = total fixed cost, VC = total variable cost

Total variable cost varies directly with the activity level or cost driver.

VC = variable cost per cost driver x cost driver or VC = bx

Where: VC = total variable cost, b = variable cost per cost driver, x = cost driver

Example: If the cost driver is number of units and variable cost per unit is P5,
then VC = 5x

The total or mixed cost function may be expressed as:

TC = FC + bx

LINEARITY ASSUMPTION – within the relevant range, there is a strict linear
relationship between the cost and the cost driver. Costs may therefore be shown
graphically as straight lines. In the illustration, the price is the cost while the units
are the cost driver.
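
The cost function TC = FC + bx translates directly into code; the fixed cost and
variable rate below are hypothetical:

```python
def total_cost(fc: float, b: float, x: float) -> float:
    """Mixed (total) cost: TC = FC + bx, valid only within the relevant range."""
    return fc + b * x

# With total fixed costs of P20,000 and a variable cost of P5 per unit:
print(total_cost(fc=20_000, b=5, x=3_000))  # 35000
```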

Step Cost


When activity changes, a step cost shifts upward or downward by a certain
interval or step. The graphs for both types of step costs follow:

Step variable costs have small steps, while step fixed costs have large steps.

A step cost is a cost that does not change steadily with changes in activity
volume, but rather at discrete points. The concept is used when making
investment decisions and deciding whether to accept additional customer orders.

A step cost is a fixed cost within certain boundaries, outside of which it will
change. When stated on a graph, step costs appear to be incurred in a stair step
pattern, with no change over a certain volume range, then a sudden increase,
then no change over the next (and higher) volume range, then another sudden
increase, and so on. The same pattern applies in reverse when the volume of
activity declines.

For example, a facility cost will remain steady until additional floor space is
constructed, at which point the cost will increase to a new and higher level as the
entity incurs new costs to maintain the additional floor space, to heat and air
condition it, insure it, and so forth.

As another example, a company can produce 10,000 widgets during one eight-hour
shift. If the company receives additional customer orders for more widgets,
then it must add another shift, which requires the services of an additional shift
supervisor. Thus, the cost of the shift supervisor is a step cost that occurs when
the company reaches a production requirement of 10,001 widgets. This new
level of step cost will continue until yet another shift must be added, at which
point the company will incur another step cost for the shift supervisor for the night
shift.
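
The shift-supervisor example can be sketched as a step function. The 10,000-unit
shift capacity comes from the example; the supervisor salary is an assumed figure:

```python
import math

UNITS_PER_SHIFT = 10_000    # capacity of one eight-hour shift (from the example)
SUPERVISOR_SALARY = 40_000  # assumed cost of one shift supervisor

def supervision_cost(units: int) -> int:
    """Step cost: one supervisor per shift, whether the shift is full or partial."""
    shifts = math.ceil(units / UNITS_PER_SHIFT)
    return shifts * SUPERVISOR_SALARY

print(supervision_cost(10_000))  # 40000 (one shift suffices)
print(supervision_cost(10_001))  # 80000 (the step: a second shift is required)
```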

It is extremely important to be aware of step costs when a company is about to
reach a new and higher activity level where it must incur a large incremental step
cost. In some cases, incurring the extra amount of a step cost may eliminate
profits that management had been expecting with an increase in volume. If the
increase in volume is relatively minor, but still calls for incurring a step cost, it is
possible that profits will actually decline; a close examination of this issue may
result in a business turning away sales in order to maintain its profitability.

Conversely, a company should be aware of step costs when its activity level
declines, so that it can reduce costs in an appropriate manner to maintain
profitability. This may require an examination of the costs of terminating staff,
selling off equipment, or tearing down structures.

The point at which a step cost will be incurred can be delayed by implementing
production efficiencies, which increase the number of units that can be produced
with the existing production configuration. Another option is to offer overtime to
employees, so that the company can produce more units without hiring additional
full-time staff.

Similar Terms

A step cost is also known as a stepped cost or a step-variable cost.

Fixed, Variable, Mixed, and Step Cost

I currently work as an insurance representative. My company would more likely
benefit from using a customer cost hierarchy for determining cost drivers. The
overall objective of any industry is to improve the costing of services provided.
With a customer cost hierarchy, an insurance company can accurately trace
activity costs to its various products and have the data readily available.

In the insurance industry the data are easily attainable and include the direct
costs of the products and services offered. Customer output-unit-level costs are
the costs of the activities to sell each product to a policyholder. Customer batch-
level costs can be identified as any cost related to each product sold, such as
the cost of insuring a policyholder. Customer-sustaining costs include any
activity to maintain the policyholder, which may include marketing costs as well
as meeting expenses. Distribution-channel costs involve the distribution of
information and the sale of each product; these may include the salaries of
agents or any licensed staff selling insurance products. Finally, corporate-
sustaining costs are the costs of activities that cannot be traced to policyholders
or to personnel, such as management and administration salaries.

Fixed cost examples at my current work environment include salaries, rent,
insurance premiums, and marketing, among others. Our workers have a base
salary that won't change regardless of the hours worked. Rent and insurance
premiums are set in a one-year contract at a fixed rate. Marketing and
advertising are handled directly through the regional office, which gives an agent
a fixed amount to spend on advertising that must meet marketing regulations.

Variable cost examples include shipping materials and commissions. Our
shipping materials usually depend on the marketing plan, but the amount
constantly varies from month to month. Commissions are based on our office's
production, which varies from time to time.

Mixed cost examples include our utilities. For instance, our phone line and
Internet service has a fixed monthly rate; however, when our communication
needs increase we may exceed our data coverage, which leads to overage
charges. The same applies to our electricity and gas service, which also has a
steady base rate but may increase or decrease during different seasons. In both
instances the cost includes a fixed and a variable element.

Step cost examples include our marketing events. Every month our agency is
involved in community events where we pay a fixed rate for rent, but the
supplies taken for marketing vary depending on the statistics gathered by the
marketing department. For instance, one event may only require 1,000 supplies,
but at another event our statistics may suggest increasing our supply to 2,500,
which increases our cost for supplies.

c. Splitting Mixed Cost (High-Low, Scatter Graph, Least-Squares Regressions)

Separating Mixed Costs

A mixed cost contains both a variable and a fixed component. For example, a
cell phone plan that has a flat charge for basic service (the fixed component) plus
a stated rate for each minute of use (the variable component) creates a mixed
cost. A mixed cost does not remain constant with changes in activity, nor does it
fluctuate on a per-unit basis in direct proportion to changes in activity. To simplify
estimation of costs, accountants typically assume that costs are linear rather
than curvilinear. Because of this assumption, the general formula for a straight
line can be used to describe any type of cost within a relevant range of activity.
The straight-line formula is

y = a + bX

Where: y = total cost (dependent variable)
       a = fixed portion of total cost
       b = unit change in variable cost relative to unit changes in activity
       X = activity base to which y is being related (the predictor, cost
           driver, or independent variable)

If a cost is entirely variable, the a value in the formula is zero. If the cost is
entirely fixed, the b value in the formula is zero. If a cost is mixed, it is necessary
to determine formula values for both a and b. Three methods of determining
these values, and thereby separating a mixed cost into its variable and fixed
components, are the high–low method, the scatter graph, and regression
analysis.
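As a minimal sketch, the straight-line formula can be evaluated directly. The phone-plan figures below are hypothetical and stated in integer cents to avoid rounding noise:

```python
def total_cost(a, b, x):
    """y = a + bX: fixed portion plus variable rate times activity level."""
    return a + b * x

# Assumed plan: 4,000 cents flat charge plus 10 cents per minute of use
assert total_cost(4_000, 10, 0) == 4_000     # no usage: only the fixed component
assert total_cost(4_000, 10, 300) == 7_000   # 300 minutes of use
```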

High-Low

In this method, the fixed and variable elements of the mixed costs are computed
from two data points (periods) – the high and low periods as to activity level or
cost driver.

DEFINITION of 'High-Low Method'

In cost accounting, the high-low method is a way of attempting to separate fixed
and variable costs given a limited amount of data. It involves taking the highest
level of activity and the lowest level of activity and comparing the total costs at
each level. If the variable cost per unit and the fixed costs remain constant, it is
possible to determine the fixed and variable costs by solving the resulting
system of equations.

BREAKING DOWN 'High-Low Method'

The high-low method is not preferred because it can yield an incorrect
understanding of the data if variable or fixed cost rates change over time, or if a
tiered pricing system is employed. In most real-world cases it should be possible
to obtain more information so that the variable and fixed costs can be
determined directly. Thus, the high-low method should be used only when it is
not possible to obtain actual billing data.

High–Low Method

The high–low method analyzes a mixed cost by first selecting the highest and
lowest levels of activity in a data set if these two points are within the relevant
range. Activity levels are used because activities cause costs to change, not vice
versa. Occasionally, operations occur at a level outside the relevant range (e.g.,
a special rush order could require excess labor or machine time), or cost
distortions occur within the relevant range (a leak in a water pipe goes unnoticed
for a period of time). Such non-representative or abnormal observations are
called outliers and should be disregarded when analyzing a mixed cost.

Next, changes in activity and cost are determined by subtracting low values from
high values. These changes are used to calculate the b (variable unit cost) value
in the y = a + bX formula as follows:
b = (Cost at High Activity Level – Cost at Low Activity Level) ÷ (High Activity
Level – Low Activity Level)

b = Change in Total Cost ÷ Change in Activity Level

The b value is the unit variable cost per measure of activity. This value is
multiplied by the activity level to determine the amount of total variable cost
contained in the total cost at either the high or the low level of activity. The fixed
portion of a mixed cost is found by subtracting total variable cost from total cost.

As the activity level changes, the change in total mixed cost equals the change in
activity multiplied by the unit variable cost. By definition, the fixed cost element
does not fluctuate with changes in activity.

The problem below illustrates the high–low method using machine hours and
utility cost information for Mizzou Mechanical. In November 2017, the company
wanted to calculate its predetermined OH rate to use in calendar year 2018.
Mizzou Mechanical gathered information for the prior 10 months’ machine hours
and utility costs. During 2017, the company’s normal operating range of activity
was between 3,500 and 9,000 machine hours per month. Because it is
substantially in excess of normal activity levels, the May observation is viewed as
an outlier and should not be used in the analysis of utility cost.

ILLUSTRATIVE PROBLEM Analysis of Mixed Cost for Mizzou Mechanical

The following machine hours and utility cost information is available:

Month Machine Hours Utility Cost


January 7,260 $2,960
February 8,850 3,410
March 4,800 1,920
April 9,000 3,500
May 11,000 3,900 Outlier
June 4,900 1,860
July 4,600 2,180
August 8,900 3,470
September 5,900 2,480
October 5,500 2,310

STEP 1: Select the highest and lowest levels of activity within the relevant range
and obtain the costs associated with those levels. These levels and costs are
9,000 and 4,600 hours, and $3,500 and $2,180, respectively.

STEP 2: Calculate the change in cost compared to the change in activity.

Machine Hours Associated Total Cost


High activity 9,000 $3,500
Low activity 4,600 2,180
Changes 4,400 $1,320

STEP 3: Determine the relationship of cost change to activity change to find the
variable cost element.

b = $1,320 ÷ 4,400 MH = $0.30 per machine hour

STEP 4: Compute total variable cost (TVC) at either level of activity.

High level of activity: TVC = $0.30 × 9,000 = $2,700
Low level of activity: TVC = $0.30 × 4,600 = $1,380

STEP 5: Subtract total variable cost from total cost at the associated level of
activity to determine fixed cost.

High level of activity: a = $3,500 - $2,700 = $800
Low level of activity: a = $2,180 - $1,380 = $800

STEP 6: Substitute the fixed and variable cost values in the straight-line formula
to get an equation that can be used to estimate total cost at any level of activity
within the relevant range.

y = $800 + $0.30X

Where X = machine hours
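The six steps above can be reproduced in a few lines of Python using Mizzou Mechanical's high and low observations:

```python
def high_low(high_activity, high_cost, low_activity, low_cost):
    """Split a mixed cost into its unit variable cost (b) and fixed cost (a)."""
    b = (high_cost - low_cost) / (high_activity - low_activity)  # Steps 2-3
    a = high_cost - b * high_activity                            # Steps 4-5
    return a, b

# High point: 9,000 MH at $3,500; low point: 4,600 MH at $2,180
a, b = high_low(9_000, 3_500, 4_600, 2_180)
print(f"y = {a:.0f} + {b:.2f}X")   # prints: y = 800 + 0.30X

# Step 6: estimate total utility cost at, say, 6,000 machine hours
estimate = a + b * 6_000           # $800 + $0.30(6,000) = $2,600
```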

One potential weakness of the high–low method is that outliers can inadvertently
be used in the calculation. Estimates of future costs calculated from a line drawn
using such points will not indicate actual costs and probably are not good
predictions. A second weakness of this method is that it considers only two data
points. A more precise method of analyzing mixed costs is least squares
regression analysis.

Scatter Graph

Various costs (the dependent variable) are plotted on a vertical line (y-axis) and
measurement figures (cost drivers or activity levels) are plotted on a horizontal
line (x-axis). A straight line is drawn through the points and, using this line, the
rate of variability and the fixed cost are computed.

Scatter Graph Method

Scatter graph or “visual fit analysis” plots the observation on a graph and draws
conclusion on the relationships depicted by such observations. This method uses
the principles found in a regression line. A regression line is a straight line that
depicts the relationship of two variables – one is independent and the other is
dependent. A regression line is normally expressed by the equation y = a + bX,
which mirrors the total cost formula TC = FC + VC.

The scatter graph method derived its name from its process where observations
are scattered in a graph depicting the relationship of x and y variables where,
normally, “x” represents the horizontal line or the units of measure and “y”
represents the vertical line or the amount. In using this model in segregating
fixed and variable elements of costs, the following steps are followed:

 Draw the x (horizontal) and y (vertical) axes in the graph. Scale the axes.

 Plot the observed data on the graph.

 Determine the behavior of the plotted observations on the graph.

 Draw a straight line through the middle of the plotted observations following
the depicted relationship between “x” and “y”, so that the distances of the points
above the line offset the distances of the points below the line.

 The y-intercept of the line (the point where it crosses the y-axis) is the value
of “a”.

 Compute “b” by choosing two “y” values as Y1 and Y2. Get the corresponding
values of X1 and X2.

 The value of “b” equals the difference in the values of “ y” divided by the
difference in the values of “x”.

 Assign the computed values of “a” and “b” in the regression line equation.
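Steps 5 through 7 above amount to a two-point slope calculation. The line itself must be fitted by eye, so the coordinates below are hypothetical readings taken from such a drawn line:

```python
def fit_from_two_points(x1, y1, x2, y2):
    """Derive b (slope) and a (intercept) from two points read off the
    visually fitted line."""
    b = (y2 - y1) / (x2 - x1)   # Step 7: change in y over change in x
    a = y1 - b * x1             # equivalent to reading the y-intercept (Step 5)
    return a, b

# Hypothetical points read from the drawn line
a, b = fit_from_two_points(4_000, 2_000, 8_000, 3_200)
# b = 1,200 / 4,000 = 0.30 per unit; a = 2,000 - 0.30(4,000) = 800
```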

To illustrate this process, let us consider the next sample problem.



Least-Squares Regressions

This method mathematically determines a line of best fit, or a linear regression
line, through a set of plotted points so that the sum of the squared deviations of
each actual plotted point from the point directly above or below it on the
regression line is at a minimum.

Least Squares Regression Analysis

Least squares regression analysis is a statistical technique that analyzes the
relationship between independent (causal) and dependent (effect) variables. The
least squares method is used to develop an equation that predicts an unknown
value of a dependent variable (cost) from the known values of one or more
independent variables (activities that create costs). When multiple independent
variables exist, least squares regression also helps to select the independent
variable that is the best predictor of the dependent variable. For example,
managers can use least squares to decide whether machine hours, direct labor
hours, or pounds of material moved best explain and predict changes in a
specific overhead cost [Further discussion of finding independent variable(s) that
best predict the value of the dependent variable can be found in most textbooks
on statistical methods treating regression analysis under the headings of
dispersion, coefficient of correlation, coefficient of determination, or standard
error of the estimate.].

Simple regression analysis uses one independent variable to predict the
dependent variable based on the y = a + bX formula for a straight line. In multiple
regression, two or more independent variables are used to predict the dependent
variable. All text examples use simple regression and assume that a linear
relationship exists between variables so that each one-unit change in the
independent variable produces a constant unit change in the dependent variable.
[Curvilinear relationships between variables also exist. For example, quality
defects (dependent variable) tend to increase at an increasing rate in relationship
to machinery age (independent variable).].

A regression line is any line that goes through the means (or averages) of the
independent and dependent variables in a set of observations. As shown in
Exhibit 3–7, numerous straight lines can be drawn through any set of data
observations, but most of these lines would provide a poor fit to the data.

Actual observation values are designated as y values; these points do not
generally fall directly on a regression line. The least squares method
mathematically fits the best possible regression line to observed data points. The
method fits this line by minimizing the sum of the squares of the vertical
deviations between the actual observation points and the regression line. The
regression line represents computed values for all activity levels, and the points
on the regression line are designated as yc values.
The regression line of best fit is found by computing the a and b values in the
straight-line formula from the actual activity (X) and cost (y) observations. The
equations necessary to compute the b and a values using the method of least
squares are as follows:

b = [nΣxy – (Σx)(Σy)] ÷ [nΣx² – (Σx)²]

a = (Σy ÷ n) – b(Σx ÷ n)

where n is the number of observations.

Using the machine hour and utility cost data for Mizzou Mechanical (excluding
the May outlier), the following calculations can be made:

The b (variable cost) and a (fixed cost) values for the company’s utility costs are
$0.35 and $354.62, respectively. These values are close to, but not exactly the
same as, the values computed using the high–low method.
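As a sketch, the standard least-squares formulas can be applied to the ten-month data set (May excluded as an outlier). The computed intercept differs slightly from the text's $354.62, presumably because of rounding in the text's intermediate figures:

```python
# Machine hours (X) and utility cost (y) for Mizzou Mechanical, May excluded
data = [(7_260, 2_960), (8_850, 3_410), (4_800, 1_920), (9_000, 3_500),
        (4_900, 1_860), (4_600, 2_180), (8_900, 3_470), (5_900, 2_480),
        (5_500, 2_310)]

n = len(data)
sum_x = sum(x for x, _ in data)
sum_y = sum(y for _, y in data)
sum_xy = sum(x * y for x, y in data)
sum_xx = sum(x * x for x, _ in data)

# Least-squares estimates for the line y = a + bX
b = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
a = sum_y / n - b * (sum_x / n)

print(f"b = {b:.2f} per machine hour")  # b = 0.35, matching the text
print(f"a = {a:.2f} fixed cost")        # close to the text's $354.62
```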

By using these values, predicted costs (yc values) can be computed for each
actual activity level. The line drawn through all of the yc values will be the line of
best fit for the data. Because actual costs do not generally fall directly on the
regression line and predicted costs naturally do, these two costs differ at their
related activity levels. It is acceptable for the regression line not to pass through
any of the actual observation points because the line has been determined to
mathematically “fit” the data. Like all mathematical models, regression analysis is
based on certain assumptions that produce limitations on the model’s use. Three
of these assumptions follow; others are beyond the scope of the text. First, for
regression analysis to be useful, the independent variable must be a valid
predictor of the dependent variable; the relationship can be tested by determining
the coefficient of correlation. Second, like the high–low method, regression
analysis should be used only within a relevant range of activity. Third, the
regression model is useful only as long as the circumstances existing at the time
of its development remain constant; consequently, if significant additions are
made to capacity or if there is a major change in technology usage, the
regression line will no longer be valid.

Once a method has been selected and mixed overhead costs have been
separated into fixed and variable components, a flexible budget can be
developed to indicate the estimated amount of overhead at various levels of the
denominator activity.

Deficiencies of the Visual Fit and High Low Methods



The visual-fit method suffers from a lack of objectivity. Given that the cost line is
created by visual approximation or “eyeballing,” different cost analysts will likely
produce different lines. The high-low method, on the other hand, is objective.
However, it uses only two data points and ignores the rest, thus generalizing
about cost behavior by relying on only a very small percentage of possible data
observations.

Least Squares Regression and Multiple Regression

In the least-squares regression (LSR) method, the cost line is positioned to
minimize the sum of the squared deviations between the cost line and the data
points. The cost line fit to the data using LSR is called a regression line. The
statistical equation for this line is represented by the formula: Y = a + bX, with X
denoting activity level (independent variable) and Y denoting the total cost
(dependent variable).

The multiple-regression line has all the same properties as the simple LSR line,
but more than one independent variable is taken into consideration. Using more
independent variables can better explain accompanying changes in cost.

2. Cost-Volume-Profit (CVP) Analysis


a. Uses, Assumptions And Limitations Of CVP Analysis

USES OF COST-VOLUME-PROFIT (CVP) ANALYSIS

 It will provide management with cost and profit data for profit planning, policy
formulation, and decision making.

 It will provide data in determining the optimal level and mix of output to be
produced with available resources.

 It will help management to pre-determine the required volume of production
and sales to achieve a desired profit.

ASSUMPTIONS OF COST-VOLUME-PROFIT (CVP) ANALYSIS

The variables of profit are the unit sales price, unit variable costs, total fixed
costs, sales volume, and sales mix. Sales mix is considered when a business
sells two or more products. The assumptions about these variables as they
relate to profit are as follows:

                                 Assumptions
Variables of Profit        Basic          Sensitivity
Sales Volume               Changes        Changes
Unit Sales Price           Constant*      Changes
Unit Variable Costs        Constant       Changes
Total Fixed Costs          Constant       Changes
Sales Mix                  Constant       Changes
*Constant means linear

Basic Assumptions (only quantity sold changes)

The unit sales price, once established, is considered constant for planning
purposes. In reality, the sales price is affected by competition, variability in
supply and demand, laws, technology, distribution channels, emerging practices,
prices of production inputs, taxes and subsidies, seasonality, and other
determinants.

The unit variable costs, once established, are considered constant for planning
purposes, although they are affected by changes in the prices of supplies, labor,
rentals, telecommunications, fuel, warehousing, distribution, taxes and licenses,
agency costs, and other determinants. The total fixed costs and expenses, once
established, are likewise considered constant for planning purposes.

In the short run, the sales price is largely outside managerial control or
influence; management can control only costs. The process of managing costs
and sales volume as they impact profit is known as cost-volume-profit analysis.

The basic CVP analysis is based on the following assumptions:

Linearity and Behavior – The behavior of sales and costs is linear within the
relevant range. Total fixed costs remain constant, but unit fixed cost changes
(i.e., unit fixed cost decreases as production increases). Total variable costs
change, but unit variable cost is constant.

Unit Sales Price – Unit sales price is constant.

Product – There is only one product or, in the case of multi-product operations,
the sales mix is constant.

Work in Process Inventory – There is no work in process inventory.

Production Equals Sales – There is no change in the finished goods inventory,
which means that production equals sales.

All of the above assumptions are anchored on the general assumption that costs
and expenses are separable into their fixed and variable components. Cost-
volume-profit analysis also assumes that labor productivity, production
technology, and market conditions will not change. Or if they change, their
impact would be included in the sensitivity analysis. Also, it is assumed that there
is no inflation, or if it can be forecasted, it is already included in the CVP analysis
data.

CVP Sensitivity Assumptions (all profit variables change)

The assumptions that sales price, unit variable costs, and total fixed costs are
invariable are made to establish ballpark figures. These figures serve as initial
points of understanding the results of business operations. The assumptions

used in the basic CVP analysis are rigid and unrealistic, and do not reflect
practical business decisions. In the real world, changes abound and their
impacts are sometimes profound.

Sales prices change. Unit variable costs and total fixed costs also change. Sales
mix changes as well. The process of considering the impact on profit of changes
in these variables is called CVP sensitivity analysis.

LIMITATIONS OF COST-VOLUME-PROFIT (CVP) ANALYSIS

When one attempts to apply breakeven analysis in practice, a number of issues
immediately arise. For instance:

 The total revenue function is based on the assumption that the price per unit
is constant regardless of the volume of sales and production, which is normally
not realistic. If demand declines, the company would likely lower the selling price
in an effort to boost sales; on the other hand, if demand is high, the firm has the
chance to increase the price and improve its profit margin.

 Is it realistic to expect variable cost per unit to be constant at all output
levels? At very low outputs, the cost per unit might be high because the labor
force has not yet produced enough units to learn to produce them efficiently,
and because demand is low the company produces at low volume and does not
buy raw materials in bulk, so it cannot take advantage of quantity discounts.
Similarly, at high volumes, the firm might have to employ labor on an overtime
basis, take on rush jobs, or utilize less efficient equipment, all of which lead to
higher variable unit costs.

In line with these issues, breakeven analysis could be expanded so that the cost
curve changes from linear to non-linear. In that situation the firm may have a
loss at low sales volumes, earn a profit over some range of sales volumes, and
then incur a net loss at very high sales volumes.

The firm might also want to consider changing its level of fixed costs. Higher
fixed costs are not good, other things held constant; however, higher fixed costs
are associated with more mechanized or automated processes, which reduce
variable cost per unit. Profit under different production setups and price-cost
situations is best presented and analyzed using cost structure and operating
leverage.

b. Factors Affecting Profit

ELEMENTS OF CVP ANALYSIS

1. Sales

a. Selling price

b. Units or volume
2. Total fixed costs

3. Variable costs per unit

4. Sales mix

THE CONTRIBUTION MARGIN INCOME STATEMENT

The costs and expenses in the Contribution Margin Income Statement are
classified as to behavior (variable and fixed). The amount of contribution margin,
which is the difference between sales and variable costs, is shown. The format is
as follows:

CONTRIBUTION MARGIN INCOME STATEMENT

Sales (units x selling price) xx
Less variable costs (units x variable cost per unit) xx
Contribution margin xx
Less total fixed costs xx
Income before tax xx
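The format above can be sketched as a small function; all of the figures in the usage example are hypothetical:

```python
def contribution_margin_statement(units, price, vc_per_unit, fixed_costs):
    """Build the contribution margin income statement line items."""
    sales = units * price
    variable_costs = units * vc_per_unit
    cm = sales - variable_costs
    return {
        "Sales": sales,
        "Less variable costs": variable_costs,
        "Contribution margin": cm,
        "Less total fixed costs": fixed_costs,
        "Income before tax": cm - fixed_costs,
    }

# Hypothetical: 1,000 units at P50, variable cost P30 per unit, fixed costs P15,000
stmt = contribution_margin_statement(1_000, 50, 30, 15_000)
# Contribution margin = P20,000; income before tax = P5,000
```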

c. Breakeven Point In Unit Sales And Peso Sales

BREAK-EVEN POINT – the sales volume level (in pesos or in units) at which total
revenues equal total costs, that is, there is neither profit nor loss.

Methods of Determining the Break-even Point

 GRAPHICAL METHOD

 CONTRIBUTION MARGIN METHOD (FORMULA APPROACH)

a. Single-Product Break-even Calculations

(1) Break-even Point in Pesos:

BEPp = FC ÷ CMR

Where: BEPp = break-even point in pesos
       FC = total fixed costs
       CMR = contribution margin ratio

(2) Break-even Point in Units:

BEPu = FC ÷ CM/u

Where: BEPu = break-even point in units
       FC = total fixed costs
       CM/u = contribution margin per unit
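Both single-product formulas can be sketched with hypothetical figures. Note that the two answers always agree, since the break-even point in pesos equals the break-even point in units times the selling price:

```python
def breakeven_units(fixed_costs, cm_per_unit):
    """BEPu = FC / CM per unit"""
    return fixed_costs / cm_per_unit

def breakeven_pesos(fixed_costs, cm_ratio):
    """BEPp = FC / CMR"""
    return fixed_costs / cm_ratio

# Hypothetical: FC = P60,000; selling price P50 per unit; variable cost P30 per unit
price, vc, fc = 50, 30, 60_000
cm_unit = price - vc                  # P20 contribution margin per unit
cmr = cm_unit / price                 # 0.40 contribution margin ratio
units = breakeven_units(fc, cm_unit)  # 3,000 units
pesos = breakeven_pesos(fc, cmr)      # P150,000 (= 3,000 units x P50)
```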

b. Multiple-Product/Service Break-even Calculations

BEPp = FC ÷ WaCMR

BEPu = FC ÷ WaCM/u

Where: WaCMR = weighted average contribution margin ratio
       WaCM/u = weighted average contribution margin per unit
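A weighted-average sketch for the multiple-product formulas, using a hypothetical two-product line sold in a constant 1 : 3 unit mix:

```python
# Hypothetical mix: 1 unit of product A (CM P8) per 3 units of product B (CM P4)
mix = [1, 3]               # units of each product per bundle
cm_per_unit = [8, 4]       # contribution margin per unit of each product
prices = [20, 10]          # selling price per unit of each product
fixed_costs = 100_000

bundle_units = sum(mix)                                     # 4 units per bundle
bundle_cm = sum(m * cm for m, cm in zip(mix, cm_per_unit))  # P20 CM per bundle
bundle_sales = sum(m * p for m, p in zip(mix, prices))      # P50 sales per bundle

wacm_u = bundle_cm / bundle_units   # P5.00 weighted average CM per unit
wacmr = bundle_cm / bundle_sales    # 0.40 weighted average CM ratio

bep_units = fixed_costs / wacm_u    # 20,000 units (5,000 of A and 15,000 of B)
bep_pesos = fixed_costs / wacmr     # P250,000
```

At that volume, total contribution margin is 5,000 × P8 + 15,000 × P4 = P100,000, exactly covering fixed costs.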

d. Required Selling Price, Unit Sales And Peso Sales To Achieve A Target
Profit

(The original table in this section is incomplete; the standard formulas are
restated below, with RSu = required sales in units and RSp = required sales in
pesos.)

SINGLE PRODUCT

(1) To earn a desired amount of profit before tax:
    RSu = (FC + Desired profit before tax) ÷ CM/u
    RSp = (FC + Desired profit before tax) ÷ CMR

(2) To earn a desired amount of profit after tax:
    RSu = [FC + Desired profit after tax ÷ (1 – Tax rate)] ÷ CM/u
    RSp = [FC + Desired profit after tax ÷ (1 – Tax rate)] ÷ CMR

(3) To earn a desired profit ratio (profit as a percentage of the required sales):
    RSp = FC ÷ (CMR – Profit ratio)
    RSu = RSp ÷ Selling price per unit

MULTIPLE PRODUCTS

(1) To earn a desired amount of profit before tax:
    RSu = (FC + Desired profit before tax) ÷ WaCM/u
    RSp = (FC + Desired profit before tax) ÷ WaCMR

(2) To earn a desired amount of profit after tax:
    RSu = [FC + Desired profit after tax ÷ (1 – Tax rate)] ÷ WaCM/u
    RSp = [FC + Desired profit after tax ÷ (1 – Tax rate)] ÷ WaCMR
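Every target-profit variant follows one pattern: add the desired profit, grossed up for tax where necessary, to fixed costs before dividing by the (weighted average) contribution margin. A sketch with hypothetical figures and an assumed 25% tax rate:

```python
def required_units(fixed_costs, profit_before_tax, cm_per_unit):
    """RSu = (FC + desired profit before tax) / CM per unit"""
    return (fixed_costs + profit_before_tax) / cm_per_unit

def required_units_after_tax(fixed_costs, profit_after_tax, tax_rate, cm_per_unit):
    """Gross the after-tax target up to its pre-tax equivalent first."""
    profit_before_tax = profit_after_tax / (1 - tax_rate)
    return (fixed_costs + profit_before_tax) / cm_per_unit

# Hypothetical: FC = P60,000, CM = P20 per unit, 25% tax rate
units_bt = required_units(60_000, 40_000, 20)                  # 5,000 units
units_at = required_units_after_tax(60_000, 30_000, 0.25, 20)  # also 5,000 units
# P30,000 after tax at a 25% rate is P40,000 before tax, hence the same answer
```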

e. Sensitivity Analysis (Including Indifference Point In Unit Sales And Peso
Sales)

CVP Sensitivity Analysis

The assumptions underlying sensitivity analysis, in which all of the profit
variables are allowed to change, were outlined in the preceding section.

Indifference Point: Formula and Calculation

Another important tool that managers use to help them choose between
alternative cost structures is the indifference point. The indifference point is the
level of volume at which total costs, and hence profits, are the same under both
cost structures. If the company operated at that level of volume, the alternative
used would not matter because income would be the same either way. At the
cost indifference point, total costs (fixed cost and variable cost) associated with
the two alternatives are equal.

There may be two methods or two alternatives of doing a thing, say two methods
of production. It is also possible that, at a particular level of activity, one
production method is superior to the other, and vice versa. There is a need to
know at which level of production it becomes desirable to shift from one
production method to the other. This level or point is known as the cost
indifference point, and at this point the total cost of the two production methods
is the same.

Cost indifference point can be calculated as follows:

Cost Indifference Point = Differential fixed cost/Differential variable cost per unit

Alternatively, we may calculate the indifference point by setting up an equation
where each side represents total cost under one of the alternatives. (Because
selling price is the same under both of these alternatives, profits will be the same
when total costs are the same.) At unit volumes below the indifference point, the
alternative with the lower fixed cost gives higher profits; at volumes above the
indifference point, the alternative with the higher fixed cost is more profitable.

For example, assume the following details about two methods of production, A
and B, for a new product:

Production Method A: Fixed cost Rs 40,000; Variable cost per unit Rs 7
Production Method B: Fixed cost Rs 95,000; Variable cost per unit Rs 4
Selling price under both methods: Rs 10 per unit

The indifference point will be 18,333 units, calculated as follows, where Q
indicates unit volume.

Total Cost for Production A = Total Cost for Production B
Fixed cost + variable cost = Fixed cost + variable cost
Rs 40,000 + Rs 7Q = Rs 95,000 + Rs 4Q
Rs 3Q = Rs 55,000
Q = 18,333 units (rounded)

At volumes below 18,333 units, production A gives lower total costs (and higher
profits); above 18,333 units, production B gives higher profits.

The line Rs 3Q = Rs 55,000 gives a clue to the trade-off between the
alternatives. The company gains Rs 3 per unit in reduced variable costs by
increasing fixed costs by Rs 55,000. The indifference point shows that the
company needs 18,333 units to make the trade-off desirable.

It may be noticed that the break-even points for the two methods are:
Production method A:
Rs 40,000/Rs 3 = 13,333 units

Production method B:
Rs 95,000/Rs 6 = 15,833 units
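The indifference-point and break-even arithmetic above can be reproduced directly:

```python
def indifference_point(fixed_a, var_a, fixed_b, var_b):
    """Volume Q at which the two cost structures give equal total cost:
    fixed_a + var_a * Q = fixed_b + var_b * Q."""
    return (fixed_b - fixed_a) / (var_a - var_b)

def breakeven(fixed, price, var):
    """Break-even volume: fixed cost over contribution margin per unit."""
    return fixed / (price - var)

# Method A: fixed Rs 40,000, variable Rs 7/unit
# Method B: fixed Rs 95,000, variable Rs 4/unit; price Rs 10 under either method
q = indifference_point(40_000, 7, 95_000, 4)
print(round(q))                  # 18333 units, as in the example
print(breakeven(40_000, 10, 7))  # Method A breaks even near 13,333 units
print(breakeven(95_000, 10, 4))  # Method B breaks even near 15,833 units
```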

Managers may have no single correct answer in their choice of cost structure.
Analytical tools such as the indifference point, margin of safety, and CVP graph
help them evaluate alternatives, but the decision depends on their attitudes
about risk and return. If they want to avoid risk, they will choose production A,
forgoing the potential for higher profits from production B. If they are
venturesome, they probably will be willing to take some risk for the potentially
higher returns and choose production B.

The cost indifference point is useful in many decision situations, such as quality
improvement programmes, different marketing plans, and production plans or
methods.

The cost indifference point should be distinguished from the break-even point.
The break-even point compares total sales and total cost of a product; at the
break-even point, the total cost line intersects the total sales line. As stated
above, cost indifference signifies the equality of the total costs of two
alternatives; at the cost indifference point, the total cost lines of the two
alternatives intersect each other.

f. Use Of Sales Mix In Multi-Product Companies



Sales Mix

In addition to the assumptions introduced earlier for basic cost-volume-profit
(CVP) analysis, one additional assumption must be specified: the sales mix is
expected to remain steady. Sales mix refers to the relative proportions in
which a company’s products are sold. For example, suppose a deli sells 2
sandwiches for every bag of chips sold for every 3 soft drinks sold. The sales mix
in units for the deli is 2 to 1 to 3. The sales mix is expressed in standard form as
2 : 1 : 3. In other words, out of every 6 items sold, the company typically sells 2
sandwiches, 1 bag of chips, and 3 soft drinks. This group of 6 items is often
known as a bundle. It is important to note that it may take multiple customers to
sell all the items in a bundle; on average, however, a company can rely on its
product mix in the short run. Understanding a company's sales mix is helpful for
budgeting, for managing inventory levels, and for determining breakeven and
target profit levels.

Sales mix can be stated two different ways--in terms of units and in terms of
sales dollars. To illustrate, suppose Jama Giants produces two products: cakes
and pies. Sales mix in units differs from sales mix in revenue dollars because
both the selling price of cakes and pies and the number of pies and cakes sold
differ. The company has provided the following expected sales information for
its products for the month of May:

                               Cakes       Pies       Total
Budgeted units to be sold      2,000      6,000       8,000
Sales revenue                $24,000    $36,000     $60,000

Sales Mix in Units

The unit sales mix is 2,000 cakes to 6,000 pies. However, sales mix is always
stated in lowest terms, a concept you learned in middle school math classes.
'Lowest terms' is always expressed in whole numbers. Fractions and decimals
are unacceptable because partial units cannot be sold. Reducing to lowest
terms, the sales mix in units is:
 
2000 : 6000  ==> 2 : 6 ==> 1 : 3

The unit sales mix tells us that Jama Giants sells one cake for every three pies
sold. 

Sales Mix in Sales Dollars (Revenue)

The company's sales mix based on sales dollars is determined in much the same
manner by comparing revenues of each product and then reducing to lowest
terms:
$24,000 : $36,000 ==> 2 : 3
The revenue sales mix tells us that Jama Giants sells $2 of cakes for every $3
of pies.

Using the Profit Equation with Multiple Products

In order to consider the sales mix when calculating the breakeven point in units
for multiple products, you must determine a weighted average contribution
margin amount, which considers the differing selling prices, variable costs per
unit, and number of units for each product.
 
When calculating the breakeven point or target profit in units, use the weighted
average contribution margin (WACM) per unit. When calculating the breakeven
point in sales dollars, use the weighted average contribution
margin ratio (WACMR). The table below summarizes which contribution margin
amount to use when calculating the breakeven point or target profit for single and
multiple products.
 
Which Contribution Amount to Use to Calculate the Breakeven Point or Target Profit
 
Number of          When Calculating the Breakeven       When Calculating the Breakeven Point
Products           Point or Target Profit in Units      or Target Profit in Sales Dollars
Single product     Contribution margin per unit         Contribution margin ratio
Multiple           Weighted average contribution        Weighted average contribution
products           margin per unit (unit sales mix)     margin ratio (revenue sales mix)
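The bundle approach above can be sketched with the Jama Giants data. The selling prices follow from the budget ($24,000 / 2,000 cakes and $36,000 / 6,000 pies), but the unit variable costs and the fixed costs below are assumed purely for illustration:

```python
# Multi-product breakeven using the weighted average contribution margin
# (WACM). Prices come from the Jama Giants data; variable and fixed costs
# are assumed for illustration.

mix        = {"cake": 1, "pie": 3}       # unit sales mix 1 : 3
price      = {"cake": 12.0, "pie": 6.0}  # $24,000/2,000 and $36,000/6,000
var_cost   = {"cake": 7.0, "pie": 4.0}   # assumed
fixed_cost = 22_000.0                    # assumed

# Contribution margin of one bundle (1 cake + 3 pies)
cm_per_bundle = sum(mix[p] * (price[p] - var_cost[p]) for p in mix)
units_per_bundle = sum(mix.values())
wacm_per_unit = cm_per_bundle / units_per_bundle

breakeven_bundles = fixed_cost / cm_per_bundle
breakeven_units = {p: mix[p] * breakeven_bundles for p in mix}

print(cm_per_bundle)     # 11.0  (1 x $5 + 3 x $2)
print(wacm_per_unit)     # 2.75
print(breakeven_units)   # {'cake': 2000.0, 'pie': 6000.0}
```

With these assumed costs, the company breaks even by selling 2,000 bundles, that is, 2,000 cakes and 6,000 pies.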

g. Concepts Of Margin Of Safety And Degree Of Operating Leverage

MARGIN OF SAFETY

The amount of peso sales or the number of units by which actual or budgeted
sales may be decreased without resulting in a loss.

MSp = Sp – BEPp or MSu x SP

MSu = Su – BEPu or MSp / SP

MSR = MSp / Sp or MSu / Su

Where: MSp = Margin of safety in pesos


MSu = Margin of safety in units
MSR = Margin of safety ratio
Sp = Sales in pesos
Su = Sales in units
BEPp = Break-even point in pesos
BEPu = Break-even point in units
SP = Selling price
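The margin of safety formulas above can be sketched as follows; the sales level, break-even point, and selling price are assumed figures for illustration only:

```python
# Margin of safety, following the formulas above. All figures are assumed.

sales_pesos   = 500_000.0   # Sp
bep_pesos     = 400_000.0   # BEPp
selling_price = 50.0        # SP

ms_pesos = sales_pesos - bep_pesos    # MSp = Sp - BEPp
ms_units = ms_pesos / selling_price   # MSu = MSp / SP
ms_ratio = ms_pesos / sales_pesos     # MSR = MSp / Sp

print(ms_pesos)   # 100000.0
print(ms_units)   # 2000.0
print(ms_ratio)   # 0.2
```

Here sales could drop by P100,000 (2,000 units, or 20% of sales) before the company incurs a loss.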

DEGREE OF OPERATING LEVERAGE / OPERATING LEVERAGE FACTOR

DOL or OLF = Total CM / Profit before tax
           = %∆ in profit before tax / %∆ in sales
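A quick sketch of the operating leverage factor, using assumed contribution margin and fixed cost figures, also demonstrates the second form of the formula (percentage change in profit equals DOL times the percentage change in sales):

```python
# Degree of operating leverage (DOL): total contribution margin divided by
# profit before tax. Figures are assumed for illustration.

total_cm    = 200_000.0
fixed_costs = 150_000.0
profit = total_cm - fixed_costs   # 50,000 profit before tax

dol = total_cm / profit           # 4.0

# Interpretation: % change in profit = DOL x % change in sales.
sales_increase = 0.10
new_profit = total_cm * (1 + sales_increase) - fixed_costs
pct_change_profit = (new_profit - profit) / profit

print(dol)                            # 4.0
print(round(pct_change_profit, 4))    # 0.4  (= 4.0 x 10%)
```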

3. Standard Costing And Variance Analysis


a. Direct Material Variance (Quantity, Price Usage, Purchase Price, Mix And
Yield)

QUANTITY / USAGE

Direct material quantity variance (also called the direct material usage/efficiency
variance) is the product of standard price of a unit of direct material and the
difference between standard quantity of direct material allowed and actual
quantity of direct material used. The formula to calculate direct material quantity
variance is:
DM Quantity Variance = ( SQ − AQ ) × SP

Where,
   SQ is the standard quantity allowed
   AQ is the actual quantity of direct material used
   SP is the standard price per unit of direct material
Standard quantity allowed (SQ) is calculated as the product of standard quantity
of direct material per unit and actual units produced
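The formula above can be sketched in code. The quantities follow the WF-05 example later in this section (400 bikes, one unit of WF-05 allowed per bike, 413 units used, $20 standard price); note that with this formula, a negative result means more material was used than the standard allowed, i.e., an unfavorable variance:

```python
# Direct material quantity variance, per the formula above:
#   DM Quantity Variance = (SQ - AQ) x SP
# With this formula a negative result is unfavorable (AQ exceeded SQ).

def dm_quantity_variance(std_qty_per_unit, units_produced, actual_qty, std_price):
    sq = std_qty_per_unit * units_produced   # standard quantity allowed
    return (sq - actual_qty) * std_price

# 400 bikes, 1 unit of WF-05 allowed per bike, 413 units used, $20 standard price
v = dm_quantity_variance(1, 400, 413, 20.0)
print(v)                          # -260.0
print("U" if v < 0 else "F")      # U (unfavorable)
```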

PURCHASE PRICE / PRICE USAGE



MIX AND YIELD



During January 2010, Sanjay Corporation produced 400 mountain bikes (the
actual quantity made by Sanjay Corporation in January 2010). The top half of
Exhibit 7–4 shows the standard quantities and costs for that production, while the
bottom half of the exhibit shows actual quantities and costs. This information is
used to compute the January 2010 variances.

Material Variances

The general variance analysis model is used to compute price and quantity
variances for each type of direct material. To illustrate the calculations, direct
material item WF-05 is used.

The material price variance (MPV) indicates whether the amount paid for
material was less or more than standard price. For item WF-05, the price paid
was $19 rather than the standard price of $20 per unit. This variance is favorable
because the actual price is less than the standard. A favorable variance reduces
the cost of production and, thus, a negative sign indicates a favorable variance.
The MPV can also be calculated as follows:

The purchasing manager should be able to explain why the price paid for item
WF-05 was less than standard.

The material quantity variance (MQV) indicates whether the actual quantity used
was less or more than the standard quantity allowed for the actual output. This
difference is multiplied by the standard price per unit of material because
quantities cannot be entered into the accounting records. Production used 13
more units of WF-05 than the standard allowed, resulting in a $260 unfavorable
material quantity variance. The MQV can be calculated as follows:

The production manager should be able to explain why the additional WF-05
components were used in January.

The total material variance (TMV) is the summation of the individual variances or
can also be calculated by subtracting the total standard cost for component WF-
05 from the total actual cost of WF-05:
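The WF-05 computations described above can be sketched as follows. Here variances are computed as actual minus standard, so a negative amount is favorable, matching the sign convention in the text:

```python
# Material price, quantity, and total variances for item WF-05, using the
# figures in the text: actual price $19, standard price $20, 413 units
# used, 400 units allowed. Negative = favorable, positive = unfavorable.

AP, SP = 19.0, 20.0   # actual and standard price per unit
AQ, SQ = 413, 400     # actual quantity used, standard quantity allowed

mpv = (AP - SP) * AQ      # material price variance
mqv = (AQ - SQ) * SP      # material quantity variance
tmv = AQ * AP - SQ * SP   # total material variance (actual - standard cost)

print(mpv)   # -413.0  ($413 F)
print(mqv)   # 260.0   ($260 U)
print(tmv)   # -153.0  ($153 F), which equals mpv + mqv
```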

Price and quantity variance computations must be made for each direct material
component, and these component variances are summed to obtain the total price
and quantity variances. Such a summation, however, does not provide useful
information for cost control.

Point-of-Purchase Material Variance Model

A total variance for a cost component generally equals the sum of the price and
usage variances.

An exception to this rule occurs when the quantity of material purchased is not
the same as the quantity of material placed into production. Because the material
price variance relates to the purchasing (rather than the production) function, the
point-of-purchase model calculates the material price variance using the quantity
of materials purchased (Qp) rather than the quantity of materials used (Qu).
The general variance analysis model is altered slightly to isolate the variance as
early as possible to provide more rapid information for management control
purposes.

Assume that Sanjay Corporation purchased 450 WF-05s at $19 per unit during
January, but only used 413 for the 400 bikes produced that month. Using the
point-of-purchase variance model, the computation for the material price
variance is adjusted, but the computation for the material quantity variance
remains the same as previously shown. The point-of-purchase material variance
model is a “staggered” one as follows:

The material quantity variance is still computed on the actual quantity used and,
thus, remains at $260 U. However, because the price and quantity variances
have been computed using different bases, they should not be summed. Thus,
no total material variance can be meaningfully determined when the quantity of
material purchased differs from the quantity of material used.
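The point-of-purchase computation described above can be sketched with the same figures (450 units purchased at $19, 413 units used, 400 allowed at a $20 standard price):

```python
# Point-of-purchase model: the price variance uses the quantity PURCHASED,
# while the quantity variance still uses the quantity placed into
# production. Negative = favorable, positive = unfavorable.

AP, SP = 19.0, 20.0
qty_purchased = 450
qty_used, sq_allowed = 413, 400

mpv = (AP - SP) * qty_purchased       # price variance on units purchased
mqv = (qty_used - sq_allowed) * SP    # quantity variance on units used

print(mpv)   # -450.0  ($450 F)
print(mqv)   # 260.0   ($260 U)
# Because the two variances use different quantity bases, they are NOT
# summed into a total material variance.
```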

Mix and Yield Variances for Materials

The above discussion focused on a single material and one labor category in the
production of the product. Most companies, however, use a combination of many
materials and various classifications of direct labor to produce the goods.

When the company’s product uses more than one material, the goal is to
combine those materials in a way that produces the desired product
quality in the most cost-beneficial manner. These mix and yield variances rest
on the assumption that the materials are substitutes for one another without
affecting product quality. If this assumption does not hold, changing the mix
cannot improve the yield and may even prove to be wasteful.

Mix – is the possible combination of materials or labor.

Yield – is the result derived from the quantity of output resulting from a specified
input. Yield ratio is the expected or actual relationship between input and output.

PRICE VARIANCE – the difference between the actual price at actual mix and
actual quantity and the standard price at actual mix and actual quantity; it
measures the effect of paying an actual price that differs from the budgeted
(standard) price.

MIX VARIANCE – difference of standard materials costs at actual mix and actual
quantity and the standard price of materials at standard mix and actual quantity,
which measures the effect of substituting a nonstandard mix of materials during
the production process.

YIELD VARIANCE – the difference between the actual total quantity of input
and the standard total quantity allowed based on output, priced at the standard
mix and standard price.
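The mix and yield definitions above can be sketched for a hypothetical two-material product. Assume a standard input of 500 kg per batch of output, mixed 60% material X (at a $2 standard price) and 40% material Y (at $5); the actual usage below is also assumed:

```python
# Material mix and yield variances for a hypothetical two-material product.
# Standard: 500 kg of input per batch, mixed 3 parts X ($2/kg) : 2 parts Y
# ($5/kg). Negative = favorable, positive = unfavorable.

std_price = {"X": 2.0, "Y": 5.0}
std_mix_parts = {"X": 3, "Y": 2}          # standard mix 60% X, 40% Y
total_parts = sum(std_mix_parts.values())

std_total_allowed = 500.0                 # kg of input allowed for the output
actual_used = {"X": 340.0, "Y": 180.0}    # 520 kg total, a nonstandard mix
total_actual = sum(actual_used.values())

# Three cost measures, all priced at STANDARD prices:
aq_am = sum(actual_used[m] * std_price[m] for m in std_price)
aq_sm = sum(total_actual * std_mix_parts[m] / total_parts * std_price[m]
            for m in std_price)
sq_sm = sum(std_total_allowed * std_mix_parts[m] / total_parts * std_price[m]
            for m in std_price)

mix_variance   = aq_am - aq_sm   # cheaper-than-standard mix used
yield_variance = aq_sm - sq_sm   # more total input than allowed

print(mix_variance, yield_variance)    # -84.0 ($84 F) and 64.0 ($64 U)
print(mix_variance + yield_variance)   # -20.0, the total quantity variance
```

Using more of the cheap material produced a favorable mix variance, but the extra 20 kg of total input produced an unfavorable yield variance; together they explain the quantity variance.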

b. Direct Labor Variance (Efficiency, Rate, Mix And Yield)

Direct Labor Rate Variance                        xx
Direct Labor Efficiency Variance:
   Direct Labor Mix Variance           xx
   Direct Labor Yield Variance         xx         xx
Total Direct Labor Variance                       xx

EFFICIENCY

RATE

MIX

YIELD

Direct Labor Yield Variance:

= (Actual Yield – Standard Yield) x Standard Labor Cost Per Unit

Labor Variances

The labor variances for mountain bicycle production in January 2010 would be
computed on a departmental basis and then summed across departments. To
illustrate the computations, the Painting Department data are used. Each
mountain bike requires 3 hours in the Painting Department; thus, the standard
labor time allowed for 400 bikes is (400 x 3) or 1,200 hours. The actual labor time
used in the Painting Department is shown on Exhibit 7–4 as 1,100 hours.
Calculations of the labor variances are as follows:

The labor rate variance (LRV) is the difference between the actual wages paid to
labor for the period and the standard cost of actual hours worked. In January,
there was no difference between the actual and the standard wage rates per
hour. The labor efficiency variance (LEV) indicates whether the amount of time
worked was less or more than the standard quantity allowed for the actual
output. This difference is multiplied by the standard rate per hour of labor time. In
January, the Painting Department worked 100 hours less than the standard
allowed to produce 400 mountain bikes. The LRV and LEV can also be
computed as follows:

The total labor variance for the Painting Department can be calculated as
$1,200 F by either

1. Subtracting the total standard labor cost ($14,400) from the total actual labor
cost ($13,200) or

2. Summing the individual labor variances ($0 + $1,200 F).
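The Painting Department computations above can be sketched as follows; the $12 hourly rate is implied by the $14,400 total standard cost for 1,200 standard hours, and variances are computed as actual minus standard (negative = favorable):

```python
# Labor rate and efficiency variances for the Painting Department:
# 400 bikes x 3 hours = 1,200 standard hours allowed; 1,100 actual hours;
# the $12 rate is implied by $14,400 / 1,200 standard hours.

std_hours = 400 * 3    # 1,200 hours allowed
actual_hours = 1_100
std_rate = 12.0
actual_rate = 12.0     # no rate difference in January

lrv = (actual_rate - std_rate) * actual_hours   # labor rate variance
lev = (actual_hours - std_hours) * std_rate     # labor efficiency variance
total = actual_hours * actual_rate - std_hours * std_rate

print(lrv)     # 0.0
print(lev)     # -1200.0  ($1,200 F)
print(total)   # -1200.0, which equals lrv + lev
```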

Mix and Yield Variances for Labor

LABOR RATE VARIANCE – the difference between the actual rate at actual mix and
actual total hours and the standard rate at actual mix and actual hours; it
measures the cost of paying workers at other than standard rates.

LABOR MIX VARIANCE – difference of standard rate at actual mix and actual
total hours and the standard rate at standard mix and actual hours. It is the
financial effect associated with changing the proportionate amount of higher or
lower paid workers in production.

LABOR YIELD VARIANCE – the difference between the total labor cost at
standard rate and standard mix at actual total hours and the standard rate and
standard mix at standard total hours; it reflects the monetary impact of using
more or fewer total hours than the standard allowed. The sum of the labor mix
and yield variances equals the labor efficiency variance.
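The labor mix and yield definitions can be sketched for a hypothetical crew of two labor classes. Assume skilled labor at a $20 standard rate and unskilled at $10, a standard mix of 2 skilled hours per 1 unskilled hour, and 900 total standard hours allowed for the period's output; the actual hours are also assumed:

```python
# Labor mix and yield variances for a hypothetical two-class crew.
# Standard mix: 2 parts skilled ($20/hr) : 1 part unskilled ($10/hr);
# 900 total standard hours allowed. Negative = favorable.

std_rate = {"skilled": 20.0, "unskilled": 10.0}
std_mix_parts = {"skilled": 2, "unskilled": 1}
total_parts = sum(std_mix_parts.values())

actual_hours = {"skilled": 560.0, "unskilled": 370.0}   # 930 hours total
std_total_hours = 900.0
actual_total = sum(actual_hours.values())

# Three cost measures, all at STANDARD rates:
am_cost = sum(actual_hours[c] * std_rate[c] for c in std_rate)
sm_actual = sum(actual_total * std_mix_parts[c] / total_parts * std_rate[c]
                for c in std_rate)
sm_std = sum(std_total_hours * std_mix_parts[c] / total_parts * std_rate[c]
             for c in std_rate)

mix_variance   = am_cost - sm_actual    # cheaper-than-standard crew mix
yield_variance = sm_actual - sm_std     # more total hours than allowed

print(mix_variance, yield_variance)    # -600.0 ($600 F) and 500.0 ($500 U)
print(mix_variance + yield_variance)   # -100.0, the labor efficiency variance
```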

c. Factory Overhead Variance – Two-Way Method (Controllable And


Volume); Three-Way Method (Spending, Variable Efficiency And
Volume); Four-Way Method (Variable Spending, Fixed Spending,
Variable Efficiency And Volume)

TOTAL OVERHEAD VARIANCE

Controllable Budget Variance
   Spending Variance
      Fixed Spending Variance
      Variable Spending Variance
   Variable Efficiency Variance
Non-Controllable Volume Variance
   Fixed Volume Variance

Controllable Budget Variance:
   Spending Variance:
      Fixed Spending Variance          xx
      Variable Spending Variance       xx     xx
   Variable Efficiency Variance               xx     xx
Non-Controllable Fixed Volume Variance               xx
Total Overhead Variance                              xx

                     ACTUAL         BAAH                 BASH                 APPLIED
Fixed Overhead       Actual FOH     Budgeted FOH         Budgeted FOH         FOH Rate x SH
Variable Overhead    Actual VOH     S. VOH Rate x AH     S. VOH Rate x SH     S. VOH Rate x SH

TWO-WAY METHOD

Controllable

ACTUAL vs. BASH

Volume

BASH vs. APPLIED

THREE-WAY METHOD

Spending

ACTUAL vs. BAAH

Variable Efficiency

BAAH vs. BASH

Volume

BASH vs. APPLIED

FOUR-WAY METHOD

Variable Spending

ACTUAL VOH vs. STANDARD VOH @ ACTUAL HOURS

Fixed Spending

ACTUAL FOH vs. BUDGETED FOH

Variable Efficiency

BAAH vs. BASH

Volume

BASH vs. APPLIED

Overhead Variances

Because total variable overhead changes in direct relationship with changes in
activity and fixed overhead per unit changes inversely with changes in activity, a
specific capacity level must be selected to compute budgeted overhead costs
and to develop a predetermined overhead (OH) rate. To compute the
predetermined OH rates, managers at Sanjay Corporation used a capacity level
of 5,000 mountain bikes or 48,750 direct labor hours (5,000 bikes x 9.75 hours
each). At that level of direct labor hours (DLHs), budgeted variable overhead
costs were calculated as $682,500 and budgeted annual fixed overhead costs
were $487,500. Company accountants decided to set the variable overhead
(VOH) rate using direct labor hours and the fixed overhead (FOH) rate using
number of mountain bikes as follows:

Because Sanjay Corporation uses separate variable and fixed overhead
application rates, separate price and usage components can be calculated for
each type of overhead. This four-variance approach provides managers with the
greatest detail and, thus, the greatest flexibility for control and performance
evaluation.

Variable Overhead

The computations for VOH variances are as follows:

As discussed in Chapter 3, actual VOH cost is debited to the Variable
Manufacturing Overhead Control account with appropriate credits to various
accounts. Applied VOH is debited to Work in Process Inventory and credited to
Variable Manufacturing Overhead Control. The applied VOH reflects the
standard predetermined OH rate multiplied by the standard quantity of activity for
the period’s actual output. The total VOH variance is the balance in Variable
Manufacturing Overhead Control at the end of the period and is equal to the
amount of underapplied or overapplied VOH. Using the actual January 2010
VOH cost information in Exhibit 7–4, the VOH variances for mountain bike
production are calculated as follows:

The difference between actual VOH and budgeted VOH based on actual hours is
the variable overhead spending variance. VOH spending variances are caused
by both component price and volume differences. For example, an unfavorable
variable overhead spending variance could be caused by either paying a higher
price or using more indirect material than the standard allows. Variable overhead
spending variances associated with price differences can occur because, over
time, changes in VOH prices have not been included in the standard rate. For
example, average indirect labor wage rates or utility rates could have changed
since the predetermined VOH rate was computed. Managers usually have little
control over prices charged by external parties and should not be held
accountable for variances arising because of such price changes. In these
instances, the standard rates should be adjusted.

Variable overhead spending variances associated with quantity differences can
be caused by waste or shrinkage of production inputs (such as indirect material).
For example, deterioration of material during storage or from lack of proper
handling can be recognized only after the material is placed into production.
Such occurrences usually have little relationship to the input activity basis used,
but they do affect the VOH spending variance. If waste or spoilage is the cause
of the VOH spending variance, managers should be held accountable and
encouraged to implement more effective controls.

The difference between budgeted VOH for actual hours and applied VOH is the
variable overhead efficiency variance.

This variance quantifies the effect of using more or less of the activity or resource
that is the base for VOH application. For example, Sanjay Corporation applies
VOH to mountain bikes using direct labor hours. If Sanjay uses direct labor time
inefficiently, higher variable overhead costs will occur. When actual input
exceeds standard input allowed, production operations are considered to be
inefficient. Excess input also indicates that an increased VOH budget is needed
to support the additional activity base being used.

Nurseries have to be careful about the storage of seeds, which can rapidly
deteriorate with high temperature or humidity. Spoiled seeds will create a higher
variable overhead spending variance for future greenhouse operations.

Fixed Overhead

The total fixed overhead (FOH) variance is divided into price and volume
components by inserting budgeted FOH in the middle column of the general
variance analysis model as follows:

The left column is the total actual fixed overhead incurred. As discussed in
Chapter 3, actual FOH cost is debited to Fixed Manufacturing Overhead Control
and credited to various accounts. Budgeted FOH is a constant amount
throughout the relevant range of activity and was the amount used to develop the
predetermined FOH rate; thus, the middle column is a constant figure regardless
of the actual quantity of input or the standard quantity of input allowed.

Applied FOH is debited to Work in Process Inventory and credited to Fixed
Manufacturing Overhead Control. The applied FOH reflects the standard
predetermined FOH rate multiplied by the standard quantity of activity for the
period’s actual output. The total FOH variance is the balance in Fixed
Manufacturing Overhead Control at the end of the period and is equal to the
amount of underapplied or overapplied FOH.

Total budgeted FOH for Sanjay Corporation for 2010 is given in Exhibit 7–3 as
$487,500. Assuming that FOH is incurred steadily throughout the year, the
monthly budgeted FOH is $40,625. Using the information in Exhibit 7–4, the FOH
variances for mountain bike production are calculated as follows:

The difference between actual and budgeted FOH is the fixed overhead
spending variance. This amount normally represents the differences between
budgeted and actual costs for the numerous FOH components, although it can
also reflect resource mismanagement. Individual FOH components would be
shown in the company’s flexible overhead budget, and individual spending
variances should be calculated for each component.

As with variable overhead, applied FOH is related to the predetermined rate and
the standard quantity for the actual production level achieved. Relative to FOH,
the standard input allowed for the achieved production level measures capacity
utilization for the period. The fixed overhead volume variance is the difference
between budgeted and applied FOH. This variance is caused solely by producing
at a level that differs from the level that was used to compute the predetermined
FOH rate. In the case of Sanjay Corporation, the $10 predetermined FOH rate
was computed by dividing $487,500 of budgeted FOH cost by a capacity level of
48,750 DLHs for 5,000 bikes. Had any other capacity level been chosen, the
predetermined FOH rate would have been a different amount, even though the
$487,500 budgeted fixed overhead would have remained the same. For
example, assume the company chose 4,800 bikes as the expected capacity for
2010:

If actual capacity usage differs from that used in determining the predetermined
FOH rate, a volume variance will arise because, by using a predetermined rate
per unit of activity, fixed overhead is treated as if it were a variable cost even
though it is not.
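The sensitivity of the predetermined FOH rate to the chosen capacity level can be sketched with the Sanjay Corporation figures ($487,500 budgeted FOH, 9.75 DLH per bike):

```python
# The predetermined FOH rate depends on the capacity level chosen, even
# though budgeted fixed overhead stays at $487,500.

budgeted_foh = 487_500.0
hours_per_bike = 9.75

for bikes in (5_000, 4_800):
    dlh = bikes * hours_per_bike
    rate = budgeted_foh / dlh
    print(bikes, dlh, round(rate, 4))
# At 5,000 bikes (48,750 DLH) the rate is $10.00 per DLH;
# at 4,800 bikes (46,800 DLH) it rises to about $10.4167 per DLH.
```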

Although capacity utilization is controllable to some degree, the volume variance
is the variance over which production managers have the least influence and
control, especially in the short run. Thus, a volume variance is also called a
noncontrollable variance. Although managers cannot control the capacity level
chosen to compute the predetermined FOH rate, they do have the ability to
control capacity utilization. Capacity utilization should be viewed in relation to
inventory level and sales demand. Underutilization of capacity is not always
undesirable; it is more appropriate to properly regulate production than to
produce inventory that ends up in stockpiles. Producing unneeded inventory
generates substantial costs for material, labor, and overhead as well as storage
and handling costs. The positive impact that such unneeded production will have
on the volume variance is insignificant because this variance is of little or no
value for managerial control purposes.

Management is usually aware, as production occurs, of capacity utilization even
if a volume variance is not reported. The volume variance merely translates
under- or overutilization into a dollar amount. An unfavorable volume variance
indicates less-than-expected utilization of capacity. If available capacity is
commonly being used at a level higher (or lower) than that which was anticipated
or is available, managers should recognize that condition, investigate the
reasons for it, and (if possible and desirable) initiate appropriate action.
Managers can influence capacity utilization by

 Modifying work schedules,

 Taking measures to relieve any obstructions to or congestion of production activities,

 Carefully monitoring the movement of resources through the production process, and

 Acquiring needed, or disposing of unneeded, space and equipment.

Preferably, such actions should be taken before production rather than after it.
Efforts made after production is completed might improve next period’s
operations but will have no impact on past production.

Alternative Overhead Variance Approaches

If the accounting system does not separate variable and fixed overhead costs,
insufficient data will be available to compute four overhead variances. Use of a
combined (variable and fixed) predetermined OH rate requires alternative
overhead variance computations. One approach is to calculate only the total
overhead variance, which is the difference between total actual overhead and
total overhead applied to production. The amount of applied overhead is found
by multiplying the combined rate by the standard quantity allowed for the actual
production. The one-variance approach is as follows:

Like other total variances, the total overhead variance provides limited
information to managers. For Sanjay Corporation, the total overhead variance is
calculated as follows:

Note that this amount is the same as the summation of the $3,816 F total VOH
variance and the $500 F total FOH variance computed under the four-variance
approach.

A two-variance analysis is performed by inserting a middle column in the one-variance
model:

The middle column is the expected total overhead cost for the period’s actual
output. This amount represents total budgeted VOH at the standard quantity
measure allowed plus the budgeted FOH, which is constant at all activity levels
in the relevant range.

The budget variance equals total actual overhead minus budgeted overhead for
the period’s actual output. This variance is also referred to as the controllable
variance because managers are able to exert influence on this amount during the
short run. The difference between total applied overhead and budgeted overhead
for the period’s actual output is the volume variance; this variance is the same as
would be computed under the four-variance approach.

For Sanjay Corporation, the two-variance computations are as follows:



Note that the budget variance amount is the same as the summation of the $736
F VOH spending variance, the $3,080 F VOH efficiency variance, and the $2,125
F FOH spending variance computed under the four-variance approach. The
$1,625 U volume variance is the same as the volume variance computed under
the four-variance approach.

Inserting another column between the left and middle columns of the two-
variance model provides a three-variance analysis by separating the budget
variance into spending and efficiency variances. The new column represents the
flexible budget based on the actual input measure(s). The three-variance model
is as follows:

The total overhead spending variance is computed as total actual overhead
minus total budgeted overhead at the actual input activity level; this amount
equals the sum of the VOH and FOH spending variances of the four-variance
approach. The overhead efficiency variance is related solely to variable overhead
and is the difference between total budgeted overhead at the actual input activity
level and total budgeted overhead at the standard activity level. This variance
measures, at standard cost, the effect on VOH from using more or fewer inputs
than standard for the actual production. The sum of the overhead spending and
overhead efficiency variances of the three-variance analysis equals the budget
variance of the two-variance analysis. The volume variance is the same amount
as that calculated using the two-variance or the four-variance approach.

For Sanjay Corporation, the three-variance computations are as follows:



Note that the OH spending variance amount is the same as the summation of the
$736 F VOH spending variance and the $2,125 F FOH spending variance
computed under the four-variance approach. The $3,080 F OH efficiency variance
is the same as the VOH efficiency variance computed under the four-variance
approach, and the $1,625 U volume variance is the same as the volume variance
computed under the four-variance approach.
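The interrelationships among the one-, two-, three-, and four-variance approaches can be verified numerically. Exhibit 7-4 is not reproduced in this reviewer, so the actual overhead costs and actual hours below are inferred from the variance amounts quoted in the text; the rates follow from the budget data ($682,500 / 48,750 DLH = $14 VOH rate, $487,500 / 48,750 DLH = $10 FOH rate, $487,500 / 12 = $40,625 monthly budgeted FOH):

```python
# One-, two-, three-, and four-variance overhead analysis for Sanjay
# Corporation's January production (400 bikes x 9.75 DLH). Actual costs
# and hours are inferred from the variance amounts quoted in the text.
# Negative = favorable, positive = unfavorable.

voh_rate, foh_rate = 14.0, 10.0   # per DLH
budgeted_foh = 40_625.0           # monthly ($487,500 / 12)
std_hours = 400 * 9.75            # 3,900 DLH allowed
actual_hours = 3_680.0            # inferred
actual_voh, actual_foh = 50_784.0, 38_500.0   # inferred

baah = voh_rate * actual_hours + budgeted_foh   # budget at actual hours
bash = voh_rate * std_hours + budgeted_foh      # budget at standard hours
applied = (voh_rate + foh_rate) * std_hours
actual_total = actual_voh + actual_foh

# Four-way
voh_spending   = actual_voh - voh_rate * actual_hours   # -736   ($736 F)
foh_spending   = actual_foh - budgeted_foh              # -2,125 ($2,125 F)
voh_efficiency = voh_rate * (actual_hours - std_hours)  # -3,080 ($3,080 F)
volume         = budgeted_foh - foh_rate * std_hours    # +1,625 ($1,625 U)

# Three-way: spending combines the VOH and FOH spending variances
spending   = actual_total - baah
efficiency = baah - bash

# Two-way: controllable budget variance plus the same volume variance
budget_variance = actual_total - bash

# One-way: total overhead variance
total_variance = actual_total - applied

print(voh_spending, foh_spending, voh_efficiency, volume)
print(spending, efficiency, budget_variance, total_variance)
```

The printed amounts reconcile exactly as the text describes: spending ($2,861 F) is the sum of the two spending variances, the budget variance ($5,941 F) is spending plus efficiency, and the total variance ($4,316 F) is the budget variance net of the $1,625 U volume variance.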

If VOH and FOH are applied using a combined rate, the one-, two-, and three-
variance approaches will have the interrelationships shown in Exhibit 7–5. The
amounts in the exhibit represent the data provided earlier for Sanjay Corporation.
Managers should select the method that provides the most useful information
and that conforms to the company’s accounting system. As more companies
begin to recognize the existence of multiple cost drivers for overhead and to use
multiple bases for applying overhead to production, computation of the one-,
two-, and three-variance approaches will diminish.

4. Variable Costing And Absorption Costing


a. Nature And Treatment Of Fixed Factory Overhead Costs

ABSORPTION (FULL) COSTING

 A product costing method that includes all the manufacturing costs (direct
materials, direct labor, and both the variable and fixed factory overhead) in
the cost of a unit of product.

 Under the absorption costing method, fixed factory overhead is treated as a product cost.

VARIABLE COSTING

 A product costing method that includes only the variable manufacturing costs (direct materials, direct labor, and variable overhead) in the cost of a unit of product.

 Under the variable costing method, fixed factory overhead is treated as a period cost.

PRODUCT COST COMPONENTS

COSTING METHOD
Absorption Variable
Direct materials Direct materials
Direct labor Direct labor
Variable FOH Variable FOH
Fixed FOH -
Product Cost Product Cost

b. Distinction Between Product Cost And Period Cost

COST

Product cost:
 Included in the computation of product cost; apportioned between the sold
and unsold units.
 An inventoriable cost: the portion allocated to unsold units becomes part of
the cost of inventory.
 Reduces current income by the portion allocated to the sold units; the
portion allocated to unsold units is treated as an asset, being part of the cost
of inventory.

Period cost:
 Charged against current revenue during a time period regardless of the
difference between production and sales volume.
 Does not form part of the cost of inventory.
 Reduces income for the current period by its full amount.

c. Inventory Costs Between Variable Costing And Absorption Costing

Cost segregation
   Absorption: Seldom segregates costs into variable and fixed costs.
   Variable:   Costs are segregated into variable and fixed.

Cost of inventory
   Absorption: Cost of inventory includes all the manufacturing costs:
               materials, labor, variable factory overhead and fixed
               factory overhead.
   Variable:   Cost of inventory includes only the variable manufacturing
               costs: materials, labor, and variable factory overhead.

Treatment of fixed factory overhead
   Absorption: Fixed factory overhead is treated as product cost.
   Variable:   Fixed factory overhead is treated as period cost.

Income statement
   Absorption: Distinguishes between production and other costs.
               Sales - Cost of goods sold (production cost) = Gross profit;
               Gross profit - Selling & administrative costs = Profit.
   Variable:   Distinguishes between variable and fixed costs.
               Sales - Variable costs = Contribution margin;
               Contribution margin - Fixed costs = Profit.

Net income
   Net income between the two methods may differ because of the difference
   in the amount of fixed overhead costs recognized as expense during an
   accounting period. This is due to variations between sales and production.
   In the long run, however, both methods give substantially the same results,
   since sales cannot continuously exceed production, nor can production
   continually exceed sales.

d. Reconciliation Of Operating Income Under Variable Costing And


Absorption Costing

RECONCILIATION OF ABSORPTION AND VARIABLE COSTING INCOME


FIGURES

Absorption costing income xx


Add : Fixed overhead in the beginning inventory xx
Total xx
Less : Fixed overhead in the ending inventory xx
Variable costing income xx

Variable costing income xx


Add : Fixed overhead in the ending inventory xx
Total xx
Less : Fixed overhead in the beginning inventory xx
Absorption costing income xx

ACCOUNTING FOR DIFFERENCE IN INCOME

Change in inventory (production less sales) xx


Multiply by Fixed FOH cost per unit xx
Difference in income xx
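The reconciliation schedule above can be sketched in code; all figures (fixed FOH rate, inventory levels, variable costing income) are assumed for illustration:

```python
# Reconciliation of absorption- and variable-costing income, following the
# schedule above. All figures are assumed for illustration.

fixed_foh_per_unit = 5.0
beginning_inventory_units = 1_000
ending_inventory_units = 1_500          # production exceeded sales by 500 units
variable_costing_income = 100_000.0

foh_in_beginning = fixed_foh_per_unit * beginning_inventory_units
foh_in_ending = fixed_foh_per_unit * ending_inventory_units

absorption_income = variable_costing_income + foh_in_ending - foh_in_beginning
print(absorption_income)   # 102500.0

# Shortcut: difference = change in inventory x fixed FOH cost per unit
change_in_units = ending_inventory_units - beginning_inventory_units
print(change_in_units * fixed_foh_per_unit)   # 2500.0
```

Because production exceeded sales, absorption income is higher than variable costing income by the $2,500 of fixed overhead deferred in ending inventory, consistent with the P > S row of the comparison below.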

DIFFERENCE IN INCOME UNDER ABSORPTION AND VARIABLE COSTING

Variable and absorption costing methods of accounting for fixed manufacturing
overhead result in different levels of net income in most cases. The differences
are timing differences, i.e., when to recognize fixed manufacturing overhead as
an expense. In variable costing, it is expensed during the period when the fixed
overhead is incurred, while in absorption costing, it is expensed in the period
when the units to which such fixed overhead has been related are sold.

COMPARISON OF ABSORPTION COSTING (AC) AND VARIABLE COSTING (VC)

Production and Sales      Net Income      Fixed FOH Expensed
P = S                     AC = VC         AC = VC
P > S                     AC > VC         AC < VC
P < S                     AC < VC         AC > VC

PRODUCTION EQUALS SALES

When production is equal to sales, there is no change in inventory. Fixed overhead expensed under absorption costing equals fixed overhead expensed under variable costing. Therefore, absorption costing income equals variable costing income.

PRODUCTION IS GREATER THAN SALES

When production is greater than sales, there is an increase in inventory. Fixed overhead expensed under absorption costing is less than fixed overhead expensed under variable costing. Therefore, absorption costing income is greater than variable costing income.

PRODUCTION IS LESS THAN SALES

When production is less than sales, there is a decrease in inventory. Fixed overhead expensed under absorption costing is greater than fixed overhead expensed under variable costing. Therefore, absorption costing income is less than variable costing income.
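The relationships above can be checked with a small numeric sketch. All figures below are hypothetical, chosen so that production (1,000 units) exceeds sales (800 units):

```python
# A minimal sketch (hypothetical figures) comparing absorption and variable
# costing income when production exceeds sales (P > S).
price = 50            # selling price per unit
var_mfg = 20          # variable manufacturing cost per unit
fixed_foh = 10_000    # total fixed factory overhead for the period
fixed_sa = 5_000      # fixed selling & administrative costs (period cost)
produced, sold = 1_000, 800

fixed_foh_per_unit = fixed_foh / produced            # 10 per unit

# Absorption costing: fixed FOH is a product cost (inventoried with units).
abs_cgs = sold * (var_mfg + fixed_foh_per_unit)
abs_income = sold * price - abs_cgs - fixed_sa       # 11,000

# Variable costing: fixed FOH is a period cost (expensed in full).
var_income = sold * price - sold * var_mfg - fixed_foh - fixed_sa  # 9,000

# Difference in income = change in inventory x fixed FOH cost per unit
difference = (produced - sold) * fixed_foh_per_unit  # 200 x 10 = 2,000
assert abs_income - var_income == difference         # P > S, so AC > VC
```

The 2,000 difference is exactly the fixed overhead deferred in the 200-unit ending inventory under absorption costing.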

5. Financial Planning And Budgets


a. Definition And Coverage Of The Budgeting Process

BUDGET

 Is a detailed plan, expressed in quantitative terms, about business operations for a specific period; a budget is a useful tool for planning and controlling company expenses, cash flows, and earnings. The term budgeting is used to denote the process of coming up with budgets.

 A realistic plan, expressed in quantitative terms, for a certain future period of time.

ADVANTAGES OF BUDGETING

1. Budgets can be used by top management to communicate its plans and goals throughout the organization.
2. Budgets force management to think about and plan for the future.
3. Through budgeting, resources are more appropriately allocated.
4. Through budgeting, potential bottlenecks can be discovered before they occur.
5. Budgeting promotes coordination of activities of the entire organization.
6. The goals and objectives identified in the budgeting process can serve as benchmarks or standards for evaluating performance.

ADVANTAGES & LIMITATIONS OF BUDGETING

Uses / Advantages:
1. It forces managers to plan for the future.
2. It provides a means of communicating management’s plans throughout the entity.
3. It directs the activities toward the achievement of organizational goals.
4. It coordinates the activities of the entire entity by integrating plans of various parts.
5. It provides a means of allocating resources to segments efficiently and effectively.
6. It defines goals that serve as benchmarks for evaluating subsequent performance.

Limitations:
1. Considerable time and costs are required.
2. Budgets are merely estimates that require judgment and might be modified or revised if necessary.
3. A successful budgetary system requires cooperation of all members of the organization.
4. Budgets sometimes restrict the flexibility of the decision-making process.
5. The budget program is merely a guide, not a substitute for good management ability.

BUDGET COMMITTEE – composed of key management persons who are responsible for overall policy matters relating to the budget program and for coordinating the preparation of the budget itself.

BUDGET MANUAL – describes how a budget is to be prepared. It usually includes:

 BUDGET PLANNING CALENDAR – the schedule of activities for the development and adoption of the budget. It includes a list of dates indicating when specific information is to be provided by / to those who are involved in the budgeting process.

 DISTRIBUTION INSTRUCTIONS – for all budget schedules, so that those segments involved in the budget preparation would know to whom / from whom a completed budget schedule is to be given / acquired.

BUDGET REPORT – shows a comparison of the actual and budgeted performance. The budget variances, which are properly described as either favorable or unfavorable, are also shown on the report.

b. Master Budget And Its Components (Operating And Financial Budgets)

THE MASTER BUDGET

The master budget is a comprehensive budget that consolidates the overall plan of the organization for a specified period. The master budget is mainly composed of: (1) operating budgets and (2) financial budgets. The master budget, in some organizations, is also referred to as the pro forma budget, forecast budget, or master profit plan.

It encompasses the organization’s operating and financial plans for a certain future period of time (the budget period). It is composed of the operating budget and the financial budget.

MASTER BUDGET

Operating budget:
 Sales budget
 Production budget
   - Direct materials budget
   - Direct labor budget
   - Factory overhead budget
 Budgeted cost of goods sold
 Budgeted operating expenses
 Budgeted income statement (budgeted net income)

Financial budget:
 Cash budget
 Budgeted balance sheet
 Budgeted cash flow statement
 Capital expenditure budget
 Working capital budget

Preparing a Master Budget by Analyzing the Behavior of Revenues and Costs



Using a Master Budget to Prepare Different Types of Supporting Budgets for Planning and Control Purposes

Production Budget

Budgeted production is based on budgeted sales and inventory policies. An inventory policy is normally based on the number of units to be sold in the following period. The formula for budgeted production could be derived from the traditional method of determining the number of units sold, which states that finished goods inventory, beginning, plus production, less finished goods inventory, ending, equals budgeted sales. The computation for budgeted production is presented in the format below.

Budgeted sales                                  xx
Add:  Finished goods inventory, end             xx
Total goods available for sale                  xx
Less: Finished goods inventory, beginning       xx
BUDGETED PRODUCTION                             xx
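The computation above can be sketched as a small function; the figures in the example are hypothetical:

```python
# A minimal sketch of the budgeted production formula:
# budgeted sales + ending FG inventory - beginning FG inventory.
def budgeted_production(budgeted_sales, fg_ending, fg_beginning):
    """Units to produce so that both sales demand and the desired
    ending finished goods inventory are covered."""
    return budgeted_sales + fg_ending - fg_beginning

# Example: sell 5,000 units, keep 800 units on hand at period end,
# and start the period with 500 units already in stock.
units = budgeted_production(5_000, fg_ending=800, fg_beginning=500)
assert units == 5_300
```

Note that beginning inventory reduces required production, since those units are already available for sale.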

Inventory Levels Budget


Operating Expenses Budget
Cash Budget

c. Types Of Budgets (Static, Flexible, Zero-Based, Continuous)



BUDGETING MODELS

There are several budgeting models used by organizations. Some examples are
flexible budgeting, fixed (or static) budgeting, continuous budgeting, zero-based
budgeting, life-cycle budgeting, activity-based budgeting, kaizen budgeting, and
governmental budgeting.

Flexible budgeting

 Separates costs as to either variable or fixed. In this model, budgeted costs can be determined at any level of business activity. Flexible budgeting uses standard costs to prepare budgets for multiple activity levels. Total fixed costs remain constant while total variable costs increase as the production level increases. The budgeted costs based on the actual level of production become the standard costs and are compared with the actual costs to obtain and analyze cost variances.

 A series of budgets prepared for many levels of activity. It makes possible the adjustment of the budget to the actual level of activity before comparing the budget figures with the actual results.

Fixed or static budgeting

 Does not segregate costs into fixed and variable components. Costs are estimated only at a single level of activity. Actual costs are compared with the budgeted costs regardless of the actual level of production, and cost variances are obtained and analyzed accordingly.

 A budget based on only one level of activity (sales or production volume).

Continuous or rolling budgeting

 Maintains a particular time frame (or period) covered in budgeting (say, 12 months). When a time segment (e.g., a month) has passed, it is dropped from the budget frame and a new month is added to maintain the given time covered by the budget.

 A budget that is revised on a regular (continuous) basis. For example, a budget for 12 months is extended for another month, in accordance with new data, as the current month ends.

Zero-based budgeting

 Does not consider past performance in anticipating the future. Incoming costs should be classified and packaged based on activities, which must be prioritized and justified as to their incurrence. The objective is to encourage objective examination of all costs in the hope that costs could be better controlled. ZBB starts from the lowest budgetary units of the organization. It needs determination of objectives, operations, and costs for each activity and the alternative means of carrying out that activity. Different levels of service or work effort are evaluated for each activity, measurements and performance standards are established, and activities are ranked according to their importance to the organization. A decision package is prepared that describes various levels of service that may be provided, including at least one level lower than the current one. Each expenditure is justified for each budget period, and costs are reviewed from a cost-benefit perspective.

 A budget is prepared every period from a base of zero. All expenditures must be justified regardless of variances from previous periods.

Incremental budgeting

 A budgeting process wherein the current period’s budget is simply adjusted to allow for changes planned for the coming period.

Life-cycle budgeting

 Intends to account for all costs incurred in the stages of the “value chain”,
from research and development to design, production, marketing,
distribution, up to customer service. Costing in this model is important for
pricing decisions. Revenues generated from the product should cover not
only the costs of production but the entire business costs incurred. It is also
analyzed in line with the product life-cycle concept where products have four
life stages such as infancy (or start-up stage), growth stage, expansion
stage, and maturity (or decline) stage. It is estimated that about 80% of all
costs are already committed (may not yet be incurred) before the business
begins. Life-cycle budgeting emphasizes the potential for locking in
(designing in) future costs since the opportunity of reducing costs is great
before production begins. In a whole-life costs concept, the budget includes
the “after-purchase costs” closely associated with the life-cycle costs. After-
purchase costs include the costs of operating, support, repair, and disposal
incurred by customers. Whole-life cost equals the life-cycle costs plus the
after-purchase costs. Life-cycle costing is related to target costing and target
pricing. A target price is determined in a given market condition and costs
and profit margin are adjusted accordingly.

 A product’s revenues and expenses are estimated over its entire life cycle
(from research and development to withdrawal of customer support). This
concept is helpful in target costing and target pricing. It accounts for, and
emphasizes the relationships among the costs at all stages of the value
chain, such as research and development, design, production, marketing,
distribution, and customer service.

Activity-based budgeting

 Is applied when the activity-based costing system is used. It breaks down processes into activities and permits the identification of value-adding activities and their cost drivers. Activities are grouped according to their homogeneity, and cost drivers are established per homogeneous pool. It tracks cost incurrence based on the behavior of its cost driver, such as number of set-ups, downtime, number of units produced, machine hours, number of employees, square footage, number of kilowatts used, number of customer complaints, and many more.

 Unlike in the traditional emphasis on functions or spending categories, activity-based budgeting applies the ABC principles and procedures to budgeting.

 The activities are identified, a cost pool is established for each activity, a cost driver is identified for each pool, and the budgeted cost for each pool is determined by multiplying the budgeted demand for the activity by the estimated cost per unit of such activity.

Kaizen (continuous improvements) budgeting

 Assumes the continuous improvement of products or processes by way of small innovations rather than major changes. Budgets are normally not reached unless innovative improvements occur. Kaizen budgeting is based on learning curve theory, where cost decreases as time passes by and experiences are gained. Kaizen is also related to product life-cycle costing.

 Kaizen is a Japanese term that means continuous improvement. Thus, Kaizen budgeting assumes the continuous improvement of products and processes; the effects of improvement and the costs of their implementation are estimated.

 Kaizen budgeting is based not on the existing system but on changes that are to be made.

Governmental budgeting

 Is not only a financial plan but is also an expression of public policy and a
form of control having the force of law. A governmental budget is a legal
document which must be complied with by a government agency head.
Since government budgeting is not profit-centered, the use of budgets in the
appropriation process is of major importance. One budgeting concept in
government budgeting is “line budgeting” where the emphasis is more on
the control of expenditures. Each line expense should be disbursed
according to the limits of the approved appropriations.

 Unlike in a private-sector budget, a governmental budget is not only a financial plan and a basis for performance evaluation but also an expression of public policy and a form of control having the force of law.

d. Budget Variance Analysis (Static And Flexible)

STATIC BUDGET VARIANCE ANALYSIS



                                                  ACTUAL    BUDGET    VARIANCE
Level of activity (sales or production volume)       500       200         300
Cost 1                                             2,000     3,000     (1,000) F
Cost 2                                             5,500     5,000        500  U
Total                                              7,500     8,000       (500) F

FLEXIBLE BUDGET VARIANCE ANALYSIS

                                                  ACTUAL    BUDGET    VARIANCE
Level of activity (sales or production volume)       500       500           -
Variable costs                                     3,500     2,500      1,000  U
Fixed costs                                        5,000     5,000           -
Total                                              8,500     7,500      1,000  U
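The flexible-budget figures above can be reproduced with a short sketch. The $5-per-unit variable cost is inferred from the budget columns (2,500 budgeted variable cost at 500 units) and is an assumed cost behavior:

```python
# A minimal sketch of a flexible-budget variance. The static budget was
# built for 200 units; actual activity was 500 units, so the budget is
# first "flexed" to the actual activity level before comparing.
var_cost_per_unit = 5       # assumed variable cost per unit
fixed_budget = 5_000        # budgeted fixed costs (unchanged by volume)
actual_units = 500

flexed_variable = var_cost_per_unit * actual_units   # 2,500
flexed_total = flexed_variable + fixed_budget        # 7,500

actual_total = 8_500
variance = actual_total - flexed_total               # 1,000 unfavorable
assert variance == 1_000   # actual spending exceeded the flexed budget: U
```

Flexing first prevents a volume change from being mistaken for a spending variance, which is exactly the weakness of the static comparison shown earlier.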

6. Activity-Based Costing (ABC) And Activity-Based Management (ABM)

ACTIVITY BASED COSTING (ABC) SYSTEM allocates overhead to multiple activity cost pools and assigns the activity cost pools to products by means of cost drivers.

a. Activity Levels (Unit-Level, Batch-Level, Product-Level And Facility-Level), Cost Pools And Activity Drivers

ACTIVITY LEVELS

An activity is any event, action, transaction, or work sequence that incurs costs
when producing a product or providing a service.

While using cost drivers to assign overhead costs to individual units works well
for some activities, for some activities such as setup costs, the costs are not
incurred to produce an individual unit but rather to produce a batch of the same
units. For other costs, the costs incurred might be based on the number of
product lines or simply because there is a manufacturing facility. To assign
overhead costs more accurately, activity‐based costing assigns activities to one
of four categories.

Unit-Level

Occur every time a service is performed or a product is made. The costs of direct materials, direct labor, and machine maintenance are examples of unit-level activities.

An activity that must be done for each unit of production.

It is an activity performed on each individual product or service. At this level, the cost drivers will be volume-based, since the amount of activity will depend proportionally on the number of units produced.

A unit-level activity is an action that occurs whenever a unit is manufactured. This activity is a volume-based cost driver, since the amount that occurs will vary in direct proportion to the number of units produced. In the cost hierarchy within an activity-based costing system, a unit-level activity is the lowest level. The cost hierarchy is:

 Unit-level activities
 Batch-level activities
 Product-level activities
 Customer-level activities
 Organization-sustaining activities

The unit-level activities are performed each time a unit of a product is produced. The number of times unit-level activities (such as drilling holes and inspecting every part) are performed varies according to the number of units produced.

Batch-Level

These are costs incurred every time a group (batch) of units is produced or a series of steps is performed. Purchase orders, machine setup, and quality tests are examples of batch-level activities.

The batch-level activities are performed each time a batch of goods is produced. The number of times batch-level activities (such as setting up a machine) are performed varies according to the number of batches made. The costs of these activities can be assigned to individual batches, but they are fixed regardless of the number of units in the batch.

Performed for each batch of product produced, rather than each unit. Examples: setup, receiving and inspection, material handling, packaging, shipping, and quality assurance.

Performed each time a batch of goods is handled or processed.

In managerial accounting, production costs that are incurred only when a new batch is processed. These costs might include things like set-up time, moving materials, and loading machines. For these costs, it does not matter how many units are produced in the batch.

It is important to understand in which manner costs are incurred, for two primary reasons. First, when financial managers understand that certain costs are incurred by batch, they may choose to run larger batches in order to minimize cost. Second, understanding the batch cost allows managerial accountants to more accurately assign production costs to end products. This makes a product profitability analysis more accurate.

Product-Level

Performed to support production (sales) of a specific product type.

Also known as the product-sustaining level, these are activities that are needed to support the entire product line regardless of the number of units and batches produced. Examples: engineering costs, product development costs.

These are activities that support an entire product line but not necessarily each individual unit. Examples of product-line activities are engineering changes made in the assembly line, product design changes, and warehousing and storage costs for each product line.

Product-sustaining activities are performed as needed to support the production of each different type of product. Examples of product-sustaining activities are maintaining product specifications, performing engineering change notices, and developing special testing routines. These costs can be assigned to individual products but are not proportional to the number of units or batches produced.

Also called product-sustaining activity or service-sustaining activity: an activity performed to support production of a specific product or service regardless of how many batches are run or how many items are produced.

Facility-Level

Facility-sustaining activities support a facility's general manufacturing process. Examples of facility-sustaining activities are lighting and cleaning the facility, facility security, and managing the facility. The costs of the unit-level, batch-level, and product-sustaining level activities are attributed to products based on each product's consumption of those activities. The costs of facility-sustaining activities are allocated to products arbitrarily or treated as period costs.

These are necessary for development and production to take place. These costs are administrative in nature and include building depreciation, property taxes, plant security, insurance, accounting, outside landscape and maintenance, and plant management's and support staff's salaries. The costs of unit-level, batch-level, and product-line activities are easily allocated to a specific product, either directly as a unit-level activity or through allocation of a pooled cost for batch-level and product-line activities. In contrast, the facility-level costs are kept separate from product costs and are not allocated to individual units, because the allocation would have to be made on an arbitrary basis such as square feet, number of divisions or products, and so on.

Also called business (organization) sustaining activity: an activity that supports business operations in general and cannot be traced to individual units, batches, or products.

Performed to sustain a facility’s manufacturing process.

Also called the general operations level: performed in order for the entire production process to occur. Examples: plant maintenance, plant management, property taxes, and insurance.

COST POOLS

A “bucket” in which costs are accumulated that relate to a single activity measure in the ABC system.

It is a group of costs usually associated with a common cost driver.

"Cost pool" is an accounting term that refers to groups of accounts serving to express the cost of goods and services allocable within a business or manufacturing organization. The principle behind the pool is to correlate direct and indirect costs with a specified cost driver, so as to find out the total sum of expenses related to the manufacture of a product.

While the exact construction of cost pools differs, most companies choose to form number-based sequences that can then be allocated to the desired project. Most frequently, a single cost pool will have up to ten digits in the sequence, with certain groups of those digits used to relate back to the project. For example, the first three digits of the cost pool may categorize a particular department, and the next three assign the project itself. The remaining four digits assign a specific sub-group of expenses of the project, such as clerical costs.

ACTIVITY DRIVERS

A factor that causes a change in the cost pool for a particular activity. It is
used as a basis for cost allocation; any factor or activity that has a direct
cause-effect relationship.

Cost drivers are the actual activities that cause the total cost in an activity
cost pool to increase. The number of times materials are ordered, the
number of production lines in a factory, and the number of shipments made
to customers are all examples of activities that impact the costs a company
incurs. When using ABC, the total cost of each activity pool is divided by the
total number of units of the activity to determine the cost per unit.

A cost driver is the particular activity that causes the incurrence of certain
costs.

ACTIVITY COST DRIVER

An activity cost driver is a factor that influences or contributes to the expense of certain business operations. In activity-based costing (ABC), an activity cost driver drives the costs of labor, maintenance, or other variable expenses. Cost drivers are essential in ABC, a branch of managerial accounting that allows managers to determine the costs to perform an activity at various activity levels.

A cost driver is an activity that is the root cause of why a cost occurs. It must be applicable and relevant to the event that is incurring a cost. There may be multiple cost drivers responsible for the occurrence of a single expense. A cost driver assists with allocating expenses in a systematic manner that theoretically results in more accurate calculations of the true costs of producing specific products.

Examples of Cost Drivers

The most common cost driver has historically been direct labor hours. Expenses
incurred relating to the layout or structure of a building or warehouse may utilize
a cost driver of square footage to allocate expenses. More technical cost drivers
include machine hours, the number of change orders, the number of customer
contacts, the number of product returns, the machine setups required for
production or the number of inspections.

Example of Cost Allocation

A factory has a machine that requires periodic maintenance. This maintenance incurs costs to be allocated to the products produced by the machinery. Therefore, the cost driver is identified and used as a base to distribute the costs. In this example, the cost driver selected is machine hours. It is determined that after every 1,000 machine hours, maintenance costing $500 is performed. Therefore, every machine hour results in an eventual 50 cents in maintenance costs that can be allocated to the product being manufactured, based on the cost driver of machine hours.
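The maintenance example above reduces to a driver rate; the 120-hour job is a hypothetical figure added for illustration:

```python
# A minimal sketch of the maintenance example: $500 of maintenance per
# 1,000 machine hours gives a driver rate of $0.50 per machine hour.
maintenance_cost = 500
hours_per_cycle = 1_000
rate = maintenance_cost / hours_per_cycle    # 0.50 per machine hour

# Allocate maintenance to a job that used 120 machine hours (assumed figure).
job_hours = 120
allocated = rate * job_hours
assert allocated == 60.0
```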

Distribution of Overhead Costs

A cost driver exists to allocate manufacturing overhead. The correct allocation of manufacturing overhead is important for determining the true cost of a product. Internal management utilizes the cost of a product in the determination of the product's price. For this reason, the selection of accurate cost drivers has a direct impact on the profitability and operations of an entity.

Subjectivity of Cost Drivers

Management selects cost drivers as the allocation base for distributing manufacturing overhead. There are no industry standards or regulations mandating cost driver selection. A cost driver is selected at management's discretion based on the associated variables relating to the expense being incurred.

b. Determination Of Cost Pool Rates And Application Of Overhead Costs

DETERMINATION OF COST POOL RATES

Calculate the rate for each activity, using the estimated cost of each activity cost pool and the estimated quantity of each allocation base. At this point, this should start to look familiar, because we did this using plant-wide rates and departmental rates. To calculate the ABC rate:

Total estimated activity cost pool / Total estimated activity allocation base = ABC rate

(NOTE: Estimated figures are used because actual figures are not yet known at the start of the period.)

This is the exact same formula we used for plant-wide rates and departmental rates: total cost divided by total activity equals rate. The only thing that is different about ABC rates is that you will have more of them. With plant-wide rates, we had one rate for the entire company. For departmental rates, we had one for each department. For ABC, we will have one rate for each activity that has been identified.

It is extremely important to label each of your rates. If you are calculating the rate for machine setups, label your rate “$/setup”. This makes it much easier when you are applying your rates. Don’t skip this step. When students make mistakes, the mistakes are usually made in the application of the rates, because they use the wrong driver to apply them. When you label your rates, it is much easier to apply them because you don’t need to think about which rate to use for each activity. If the problem states that there are 15 setups, look for the rate that is marked “$/setup” and use that one.
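The rate formula can be sketched with two hypothetical activity pools (the pool costs and driver quantities below are assumed figures, not from the text):

```python
# A minimal sketch of ABC rate determination:
# rate = total estimated activity cost pool / total estimated allocation base.
pools = {
    "machine setups": {"cost": 30_000, "driver_qty": 600},    # rate in $/setup
    "inspections":    {"cost": 12_000, "driver_qty": 1_500},  # rate in $/inspection
}

# Label each rate by its activity, as the text recommends.
rates = {name: p["cost"] / p["driver_qty"] for name, p in pools.items()}
assert rates["machine setups"] == 50.0    # $50 per setup
assert rates["inspections"] == 8.0        # $8 per inspection

# Applying the labeled rates: a job requiring 15 setups and 40 inspections.
applied = 15 * rates["machine setups"] + 40 * rates["inspections"]
assert applied == 1_070.0
```

Keeping the rates keyed by activity name makes it hard to multiply a rate by the wrong driver, which is the mistake the text warns about.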

APPLICATION OF OVERHEAD COSTS

To apply the rates, multiply the actual amount of activity by the rate for that activity. Again, that is very similar to what we did for plant-wide rates and departmental rates. Just like departmental rates, once you get the amount for each activity, you will need to add up the applied cost for each activity to get the total overhead applied to your cost object.

Applied overhead is the amount of overhead cost that has been applied to a cost object. Overhead application is required to meet certain accounting requirements, but is not needed for most decision-making activities.

Applied overhead costs include any cost that cannot be directly assigned to a cost object, such as rent, administrative staff compensation, and insurance. A cost object is an item for which a cost is compiled, such as a product, product line, distribution channel, subsidiary, process, geographic region, or customer.

Overhead is usually applied to cost objects based on a standard methodology that is employed consistently from period to period. For example:

 Apply factory overhead to products based on their use of machine processing time
 Apply corporate overhead to subsidiaries based on the revenue, profit, or asset levels of the subsidiaries

For example, a business applies overhead to its products based on a standard overhead application rate of $25 per hour of machine time used. Since the total amount of machine hours used in the accounting period was 5,000 hours, the company applied $125,000 of overhead to the units produced in that period.

As another example, a conglomerate has $10,000,000 of corporate overhead. One of its subsidiaries generates 35% of total corporate revenue, so $3,500,000 of the corporate overhead is charged to that subsidiary.
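Both application examples are one-line computations:

```python
# A minimal sketch of the two overhead application examples above.

# Factory overhead applied on machine time: $25/hour x 5,000 hours.
rate_per_machine_hour = 25
machine_hours = 5_000
factory_applied = rate_per_machine_hour * machine_hours
assert factory_applied == 125_000

# Corporate overhead charged to a subsidiary on its share of revenue:
# 35% of $10,000,000.
corporate_overhead = 10_000_000
revenue_share = 0.35
subsidiary_charge = corporate_overhead * revenue_share
assert subsidiary_charge == 3_500_000
```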

The amount of overhead applied is usually based on a standard application rate that is only changed at fairly long intervals. Consequently, the amount of applied overhead may differ from the actual amount of overhead incurred by a business in any individual accounting period. The variance between the two figures is assumed to average out to zero over multiple periods; if not, the overhead application rate is altered to bring it more closely into alignment with actual overhead.

Once assigned to a cost object, applied overhead is then considered part of the full cost of that cost object. Recording the full cost of a cost object is considered appropriate under the major accounting frameworks, such as Generally Accepted Accounting Principles and International Financial Reporting Standards. Under these frameworks, applied overhead is included in the financial statements of a business.

Applied overhead is not considered appropriate in many decision-making situations. For example, the amount of corporate overhead applied to a subsidiary reduces its profits, even though the activities of the corporate headquarters staff do not assist the subsidiary in earning a higher profit. Similarly, the application of factory overhead to a product may obscure its actual cost for the purposes of establishing a short-term price for a specific customer order. Consequently, applied overhead may be stripped away from a cost object for the purposes of some types of decision making.

c. Traditional Costing Versus Activity-Based Costing

Costing systems helps companies determine the cost of a product related to the
revenue it generates. Two common costing systems used in business are
traditional costing and activity-based costing. Traditional costing assigns
Reviewer 105
Management Advisory Services

manufacturing overhead based on the volume of a cost driver, such as the


amount of direct labor hours needed to produce an item. A cost driver is a factor
that causes cost to incur, such as machine hours, direct labor hours and direct
material hours. Activity-based costing allocates the costs of manufacturing a
product according to the activities needed to produce the item. Managers should
understand the advantages and disadvantages of both systems to meet the
needs of their business.
Understanding Traditional Costing
Many manufacturing companies use the traditional costing system to assign
manufacturing overhead to units produced. Users of the traditional costing
method make the assumption that the volume metric is the underlying driver of
manufacturing overhead cost. Under traditional costing, accountants assign
manufacturing costs only to products. Traditional accounting fails to allocate
nonmanufacturing costs that also are associated with the production of an item,
such as administrative expenses. Companies commonly use traditional
accounting in external financial reports because it provides a value for the cost of
goods sold.
Pros and Cons of Traditional Costing
An advantage of using traditional-based costing is that it aligns with Generally
Accepted Accounting Principles, or GAAP. Easy implementation for companies
that provide one product also is a plus. However, traditional costing is an
outdated costing system in many companies because those manufacturing
companies now use machines and computers for much of their production.
Computers and machines make the system outdated because it often uses direct
labor hours to calculate cost. Cost is not appropriately assigned because direct
labor hours are not the best cost driver to use. Traditional costing ignores other
cost drivers that may contribute to the cost of an item. Another disadvantage of
solely using the traditional costing system is that it can lead to bad management
decisions because it excludes certain nonmanufacturing costs.
Understanding Activity-Based Costing
Activity-based costing provides a more accurate view of product cost, but
companies typically use it as a supplemental costing system. The allocation
bases used in activity-based costing differ from those used in traditional costing.
Activity-based costing determines every activity associated with producing an
item and allocates a cost to the activity. The cost assigned to the activity is then
assigned to products that require the activity for production.
Pros and Cons of Activity-Based Costing
Greater costing accuracy is the primary benefit of activity-based costing.
Companies assign cost only to the products that require the activity for
production. This method eliminates allocating irrelevant costs to a product. Other
advantages of activity-based costing include an easy interpretation of cost for
internal management, the ability to enable benchmarking and a greater
understanding of overhead costs. Implementing an activity-based costing system
within a company requires substantial resources. This can prove a disadvantage
for companies with limited funds. Another disadvantage of using activity-based
costing is that it is easily misinterpreted by some users.

In the field of accounting, activity-based costing and traditional costing are two
different methods for allocating indirect (overhead) costs to products.
Both methods estimate overhead costs related to production and then assign
these costs to products based on a cost-driver rate. The differences are in the
accuracy and complexity of the two methods. Traditional costing is more
simplistic and less accurate than ABC, and typically assigns overhead costs to
products based on an arbitrary average rate. ABC is more complex and more
accurate than traditional costing. This method first assigns indirect costs
to activities and then assigns the costs to products based on the products’ usage
of the activities.

Traditional Costing Method


Traditional costing systems apply indirect costs to products based on a
predetermined overhead rate. Unlike ABC, traditional costing systems
treat overhead costs as a single pool of indirect costs. Traditional costing
is optimal when indirect costs are low compared to direct costs. There are
several steps in the traditional costing process.

 Identify indirect costs.
 Estimate indirect costs for the appropriate period (month, quarter, and year).
 Choose a cost-driver with a causal link to the cost (labor hours, machine hours).
 Estimate an amount for the cost-driver for the appropriate period (labor hours per quarter, etc.).
 Compute the predetermined overhead rate (see below).
 Apply overhead to products using the predetermined overhead rate.

Predetermined Overhead Rate Calculation

Predetermined Overhead Rate = Estimated Overhead Costs / Estimated Cost-Driver Amount

For example:

$30/labor hr. = $360,000 indirect costs / 12,000 hours of direct labor
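The calculation above can be sketched in a few lines of Python (an illustrative sketch only; the variable names, and the 250-hour job used to show the application step, are assumptions, while the dollar figures are the ones in the example):

```python
# Predetermined overhead rate = estimated overhead / estimated cost-driver amount.
estimated_overhead = 360_000     # estimated indirect costs for the period
estimated_labor_hours = 12_000   # estimated direct labor hours (the cost driver)

pohr = estimated_overhead / estimated_labor_hours
print(pohr)  # 30.0 -> $30 of overhead applied per direct labor hour

# Applying overhead to a hypothetical job that uses 250 direct labor hours:
applied_overhead = pohr * 250
print(applied_overhead)  # 7500.0
```

The same rate is then applied to every product or job, which is exactly why a single plantwide rate can misstate costs when products consume overhead differently.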

Activity-Based Costing Benefits


Activity based costing systems are more accurate than traditional
costing systems because they provide a more precise breakdown of indirect
costs. However, ABC systems are more complex and more costly to implement.
The leap from traditional costing to activity based costing is difficult.
Reviewer 107
Management Advisory Services

Traditional Costing Advantages and Disadvantages


Traditional costing systems are simpler and easier to implement than ABC
systems. However, traditional costing systems are not as accurate as ABC
systems. Traditional costing systems can also result in significant under-costing
and over-costing.

Traditional Volume-Based Costing


 Overhead is determined on the basis of a single cost driver.

                                    Total   Product X   Product Y   Product Z
Overhead Costs (500% of labor):
Labor Cost                         60,000       8,000      20,000      32,000
Multiply by 500%                      x 5         x 5         x 5         x 5
Applied Overhead                  300,000      40,000     100,000     160,000

Activity-Based Costing
 Group the resources used, pool them into activity centers, and identify a
cost driver for each activity center.

Activity Center Using Resources

Activity Centers     Cost Drivers            Grinding   Packaging
Occupancy            Square Feet Used         160,000      80,000
Record Keeping       Transactions Recorded    240,000     360,000
Human Resource       Payroll Costs             20,000      40,000
Materials Handling   Materials Cost            36,000      28,000

 Compute the cost function for each activity center using the resources
[support cost (OH) ÷ resources used by the activity centers].

Activity Centers     Cost Calculations               Cost Functions
Occupancy            120,000 ÷ (160,000 + 80,000)    0.50 per square foot
Record Keeping        60,000 ÷ (240,000 + 360,000)   0.10 per transaction
Human Resource        18,000 ÷ (20,000 + 40,000)     30% of payroll cost
Materials Handling    32,000 ÷ (36,000 + 28,000)     50% of materials cost

 Using the above cost functions, the total costs will be assigned to the two
producing centers.

                                Grinding                        Packaging
                     Rate (A)  Usage (B)  C = A x B   Rate (A)  Usage (B)  C = A x B
Occupancy                0.50    160,000     80,000       0.50     80,000     40,000
Record Keeping           0.10    240,000     24,000       0.10    360,000     36,000
Human Resource            30%     20,000      6,000        30%     40,000     12,000
Materials Handling        50%     36,000     18,000        50%     28,000     14,000
TOTAL COST                                  128,000                          102,000
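This first-stage allocation can be sketched in Python (illustrative only: the data layout and names are assumptions; the support costs and driver volumes are those of the example):

```python
# First-stage ABC allocation: each support-cost pool is divided by the total
# cost-driver volume to get its cost function, which is then charged to the
# producing centers (Grinding, Packaging) according to their driver usage.
pools = {
    # pool:               (support cost, {center: driver units used})
    "Occupancy":          (120_000, {"Grinding": 160_000, "Packaging": 80_000}),
    "Record Keeping":     (60_000,  {"Grinding": 240_000, "Packaging": 360_000}),
    "Human Resource":     (18_000,  {"Grinding": 20_000,  "Packaging": 40_000}),
    "Materials Handling": (32_000,  {"Grinding": 36_000,  "Packaging": 28_000}),
}

allocated = {"Grinding": 0.0, "Packaging": 0.0}
for pool, (cost, usage) in pools.items():
    rate = cost / sum(usage.values())   # e.g. 120,000 / 240,000 = 0.50 per sq. ft.
    for center, units in usage.items():
        allocated[center] += rate * units

print({c: round(v) for c, v in allocated.items()})
# {'Grinding': 128000, 'Packaging': 102000}
```

Each pool gets its own rate, so centers are charged in proportion to the resources they actually consume rather than by a single plantwide percentage.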

 Link cost drivers to each product and determine its cost function.

Cost Drivers:
Cost Driver Activity Linked to Each Product
Activity Center   Cost Driver       Product X   Product Y   Product Z    Total
Grinding          Grinding Hours        4,000       6,000      10,000   20,000
Packaging         Machine Hours         5,000       3,000       2,000   10,000

Cost Functions:
                                                     Grinding   Packaging
Separate Support Costs                                 40,000      30,000
Share from support cost allocation (per above)        128,000     102,000
Total Cost To Be Allocated                            168,000     132,000
Divided by the total cost-driver volume              ÷ 20,000    ÷ 10,000
Cost Function - Per Item of Cost Driver (in hours)       8.40       13.20

 Using this cost function, the total overhead costs to be allocated to each
product are:

Product X
                Rate (A)   Hours (B)   Cost (C = A x B)
Grinding            8.40       4,000             33,600
Packaging          13.20       5,000             66,000
TOTAL COST                                       99,600

Product Y
                Rate (A)   Hours (B)   Cost (C = A x B)
Grinding            8.40       6,000             50,400
Packaging          13.20       3,000             39,600
TOTAL COST                                       90,000

Product Z
                Rate (A)   Hours (B)   Cost (C = A x B)
Grinding            8.40      10,000             84,000
Packaging          13.20       2,000             26,400
TOTAL COST                                      110,400
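The second-stage allocation can be sketched the same way (again an illustrative sketch; the 168,000 and 132,000 totals come from the cost-function computation above):

```python
# Second-stage ABC allocation: each producing center's total overhead is divided
# by its total cost-driver volume, and the resulting rate is charged to products
# by the hours each product consumes.
centers = {
    # center:    (total OH, {product: cost-driver hours})
    "Grinding":  (168_000, {"X": 4_000, "Y": 6_000, "Z": 10_000}),  # rate 8.40
    "Packaging": (132_000, {"X": 5_000, "Y": 3_000, "Z": 2_000}),   # rate 13.20
}

product_oh = {"X": 0.0, "Y": 0.0, "Z": 0.0}
for center, (cost, usage) in centers.items():
    rate = cost / sum(usage.values())
    for product, hours in usage.items():
        product_oh[product] += rate * hours

print({p: round(v) for p, v in product_oh.items()})
# {'X': 99600, 'Y': 90000, 'Z': 110400}
```

Note that the 300,000 of total overhead is unchanged; ABC only redistributes it according to activity consumption.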

1. Determination Of Total Product Costs: Traditional Costing Versus ABC Costing


Traditional Volume-Based Costing
 Follow steps in cost allocation.

 Materials, labor and other traceable costs, if any, are then added to the total
overhead cost allocated to determine the total cost of each product.

                                      Total   Product X   Product Y   Product Z
Materials Cost:
  Grinding                           36,000       2,000      12,000      22,000
  Packaging                          28,000       8,000       4,000      16,000
                                     64,000      10,000      16,000      38,000
Labor Cost:
  Grinding                           20,000       4,000       4,000      12,000
  Packaging                          40,000       4,000      16,000      20,000
                                     60,000       8,000      20,000      32,000
Overhead Costs (500% of labor)      300,000      40,000     100,000     160,000
TOTAL MANUFACTURING COST            424,000      58,000     136,000     230,000
Divided by number of units produced            ÷ 44,000    ÷ 30,000    ÷ 24,000
TOTAL MANUFACTURING COST PER UNIT                  1.32        4.53        9.58

Activity-Based Costing
 Follow steps in cost allocation.
 Materials, labor and other traceable costs, if any, are then added to the total
overhead cost allocated to determine the total cost of each product.

                                      Total   Product X   Product Y   Product Z
Materials Cost                       64,000      10,000      16,000      38,000
Labor Cost                           60,000       8,000      20,000      32,000
Overhead Costs                      300,000      99,600      90,000     110,400
TOTAL MANUFACTURING COST            424,000     117,600     126,000     180,400
Divided by number of units produced            ÷ 44,000    ÷ 30,000    ÷ 24,000
TOTAL MANUFACTURING COST PER UNIT                  2.67        4.20        7.52
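The whole comparison can be reproduced in a short Python sketch (the variable names are assumptions; every figure is taken from the example above):

```python
# Per-unit product cost: traditional plantwide overhead (500% of direct labor
# cost) versus overhead traced through the ABC cost functions.
materials = {"X": 10_000, "Y": 16_000, "Z": 38_000}
labor     = {"X": 8_000,  "Y": 20_000, "Z": 32_000}
units     = {"X": 44_000, "Y": 30_000, "Z": 24_000}
abc_oh    = {"X": 99_600, "Y": 90_000, "Z": 110_400}  # from the ABC allocation

trad_unit_cost, abc_unit_cost = {}, {}
for p in ("X", "Y", "Z"):
    trad_unit_cost[p] = round((materials[p] + labor[p] + labor[p] * 5) / units[p], 2)
    abc_unit_cost[p]  = round((materials[p] + labor[p] + abc_oh[p]) / units[p], 2)

print(trad_unit_cost)  # {'X': 1.32, 'Y': 4.53, 'Z': 9.58}
print(abc_unit_cost)   # {'X': 2.67, 'Y': 4.2, 'Z': 7.52}
```

Relative to ABC, the single labor-based rate under-costs Product X and over-costs Product Z, which is the distortion ABC is designed to expose.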

d. Process Value Analysis (Value-Added Activities And Non-Value-Added Activities)

PROCESS VALUE ANALYSIS

DEFINITION of 'Process Value Analysis - PVA'


A strategy that businesses use to determine whether all of their
operational expenses are necessary and if they could be operating more
efficiently. Process value analysis looks at what the customer wants and then
asks if each aspect of operations is necessary to achieve that result. The goal of
process value analysis is to eliminate unnecessary expenses incurred in the
process of creating a good or service without sacrificing customer satisfaction.

BREAKING DOWN 'Process Value Analysis - PVA'

In conducting PVA, managers must look at whether any activities could be
eliminated or streamlined to decrease operational costs while still delivering what
customers want without sacrificing quality. Managers will consider whether any
new technologies could be profitably implemented, whether errors are being
made that could be avoided, whether there are extra steps in the process that
are unnecessary and so on. Any steps in the value chain process that are
identified as not adding economic value may be changed or thrown out.

Process Value Analysis: Why Each Step Counts

Monitoring, analyzing, and planning ahead are key principles of Business
Process Management (BPM), and logically so. Businesses are responsible for
their success, and every step of every process they conduct should ideally
provide value to both the business and its customers. To assess this, a business
must be concerned with Process Value Analysis—a qualitative analysis
procedure allowing a business to apply questions to specific process steps to
measure their success.
It is best to consider the term “value” in this context as referring to the value a
customer expects and is willing to pay for. That value originates from the steps
and processes a business performs to create the value—what some call a value
chain, meaning every step within a process adds some amount of value to the
final product or service.
Again, each step of a process should ideally provide value to both the business
and its customers—this is what Process Value Analysis is meant to measure.
Obviously this is not always clear-cut, since some steps don’t directly add value
to a service but rather facilitate the adding of value. Those steps, though, are
considered value-enabling steps—and still, though indirectly, give value to the
final product. Non-value-adding steps are steps that have been incorporated into
a process for some reason or another, but no longer add any value to the final
product by any means. It is these non-value-adding (and money-eating) steps
that should be eliminated.
Process Value Analysis is all about asking questions—after all, tough questions
typically reveal the most accurate answers. These three categories and
accompanying questions are useful for describing the types of value a specific
process step may have:

Value-Added Activities

Value added to customers: steps that directly impact customer satisfaction

 Do customers recognize the value of the process step?


 Does the step specifically impact the service requirements of its customers?
 Is the step necessary to meet the timelines and expectations of those
served?
 Are customers willing to pay for this step?

Value added to operations: steps that support the ability to deliver services to the
people served

 Does the step meet legal, health, safety, or environmental regulatory
criteria?
 Is this process step being performed efficiently, or can it be refined?
 Could this process step be eliminated if a preceding step were performed
differently?
 Could a technology application eliminate or automate this step?
 Does this process step fulfill an external regulatory requirement?
 Most importantly, would eliminating this step impact the quality of the service
positively or negatively?

Value Adding Activities are any activities that add value to the customer and
meet the three criteria for a Value Adding Activity.

The three criteria for a Value Adding Activity are:

1. The step transforms the item toward completion


2. The step is done right the first time (not a rework step)
3. The customer cares (or would pay) for the step to be done

Value-added activities are necessary activities that incur costs but increase
the perceived value of a particular product to the customer. Example:
engineering design modifications.

Activities that are necessary (non-eliminable) to produce the products.

Non-Value-Added Activities

Non-value-added: steps that could be eliminated or changed without harming
service levels or the organization
 What specific direct or indirect value does this step have for customers or
operations?

These are operations that are either (1) unnecessary or dispensable, or (2)
necessary, but inefficient and improvable. Example: rework of defective
units.

Activities that do not make the product or service more valuable to the
customer.

Activities that can be eliminated without deterioration of product quality and
value; eliminating them reduces total production time and, thus, increases
profitability. For example, just-in-time production systems eliminate activities
related to storing and handling inventories, which does not affect the quality
of the product or service.

e. Activity Based Management (ABM)

Integrates ABC with other concepts such as Total Quality Management (TQM),
process value analysis and target costing to produce a management system that
strives for excellence through cost reduction and continuous process
improvement. An important goal of ABM is to reduce or eliminate non-value
added activities and costs.

ABC management rests on the philosophy that activities identified for ABC can also
be used for cost management and performance evaluation purposes. It
eliminates activities that are non-value added costs. Non-value-added activities
simply add cost to, or increase the time spent on, a product or service without
increasing its market value. Awareness of these classifications encourages
managers to reduce or eliminate the time spent on the non-value added
activities.

7. Strategic Cost Management (utilize the concept for planning and control
purposes)
a. Total Quality Management

WHAT IS TOTAL QUALITY MANAGEMENT (TQM)?

A core definition of total quality management (TQM) describes a management
approach to long–term success through customer satisfaction. In a TQM effort,
all members of an organization participate in improving processes, products,
services, and the culture in which they work.


Total Quality Management Principles: The 8 Primary Elements of TQM

Total quality management can be summarized as a management system for a
customer-focused organization that involves all employees in continual
improvement. It uses strategy, data, and effective communications to integrate
the quality discipline into the culture and activities of the organization. Many of
these concepts are present in modern Quality Management Systems, the
successor to TQM. Here are the 8 principles of total quality management:
1. Customer-focused

The customer ultimately determines the level of quality. No matter what an
organization does to foster quality improvement—training employees, integrating
quality into the design process, upgrading computers or software, or buying new
measuring tools—the customer determines whether the efforts were worthwhile.
2. Total employee involvement

All employees participate in working toward common goals. Total employee
commitment can only be obtained after fear has been driven from the workplace,
when empowerment has occurred, and management has provided the proper
environment. High-performance work systems integrate continuous improvement
efforts with normal business operations. Self-managed work teams are one form
of empowerment.
3. Process-centered

A fundamental part of TQM is a focus on process thinking. A process is a series
of steps that take inputs from suppliers (internal or external) and transforms them
into outputs that are delivered to customers (again, either internal or external).
The steps required to carry out the process are defined, and performance
measures are continuously monitored in order to detect unexpected variation.
4. Integrated system

Although an organization may consist of many different functional specialties
often organized into vertically structured departments, it is the horizontal
processes interconnecting these functions that are the focus of TQM.

 Micro-processes add up to larger processes, and all processes aggregate into
the business processes required for defining and implementing strategy.
Everyone must understand the vision, mission, and guiding principles as well as
the quality policies, objectives, and critical processes of the organization.
Business performance must be monitored and communicated continuously.
 An integrated business system may be modeled after the Baldrige National
Quality Program criteria and/or incorporate the ISO 9000 standards. Every
organization has a unique work culture, and it is virtually impossible to achieve
excellence in its products and services unless a good quality culture has been
fostered. Thus, an integrated system connects business improvement elements
in an attempt to continually improve and exceed the expectations of customers,
employees, and other stakeholders.
5. Strategic and systematic approach

A critical part of the management of quality is the strategic and systematic
approach to achieving an organization’s vision, mission, and goals. This process,
called strategic planning or strategic management, includes the formulation of a
strategic plan that integrates quality as a core component.
6. Continual improvement

A major thrust of TQM is continual process improvement. Continual improvement
drives an organization to be both analytical and creative in finding ways to
become more competitive and more effective at meeting stakeholder
expectations.
7. Fact-based decision making

In order to know how well an organization is performing, data on performance
measures are necessary. TQM requires that an organization continually collect
and analyze data in order to improve decision making accuracy, achieve
consensus, and allow prediction based on past history.
8. Communications

During times of organizational change, as well as part of day-to-day operation,
effective communications plays a large part in maintaining morale and in
motivating employees at all levels. Communications involve strategies, methods,
and timeliness.

These elements are considered so essential to TQM that many organizations
define them, in some format, as a set of core values and principles on which the
organization is to operate. The methods for implementing this approach come
from the teachings of such quality leaders as Philip B. Crosby, W. Edwards
Deming, Armand V. Feigenbaum, Kaoru Ishikawa, and Joseph M. Juran.

TIME     Early 1900s   1940s                  1960s                          1980s and Beyond
FOCUS    Inspection    Statistical Sampling   Organizational Quality Focus   Customer-driven Quality

Old Concept of Quality: inspect for quality after production (REACTIVE)
New Concept of Quality: build quality into the process; identify and correct
causes of problems (PROACTIVE)

b. Just-In-Time Production System

What does 'Just In Time - JIT' mean?

Just-in-time (JIT) is an inventory strategy companies employ to increase
efficiency and decrease waste by receiving goods only as they are needed in the
production process, thereby reducing inventory costs. This method requires
producers to forecast demand accurately.

This inventory supply system represents a shift away from the older just-in-case
strategy, in which producers carried large inventories in case higher demand had
to be met.

BREAKING DOWN 'Just In Time - JIT'

A good example would be a car manufacturer that operates with very low
inventory levels, relying on its supply chain to deliver the parts it needs to build
cars. The parts needed to manufacture the cars do not arrive before or after they
are needed; instead, they arrive just as they are needed.
Advantages

Just-in-time inventory control has several advantages over traditional models.
Production runs remain short, which means manufacturers can move from one
type of product to another very easily. This method reduces costs by eliminating
warehouse storage needs. Companies also spend less money on raw
materials because they buy just enough to make the products and no more.

Disadvantages

The disadvantages of just-in-time inventories involve disruptions in the supply
chain. If a supplier of raw materials has a breakdown and cannot deliver the
goods on time, one supplier can shut down the entire production process. A
sudden order for goods that surpasses expectations may delay delivery of
finished products to clients.

Case Study

Toyota uses just-in-time inventory controls as part of its business model. Toyota
sends off orders for parts only when it receives new orders from customers. The
company started this method in the 1970s, and it took more than 15 years to
perfect. Several elements of just-in-time manufacturing need to occur for Toyota
to succeed. The company must have steady production, high-quality
workmanship, no machine breakdowns at the plant, reliable suppliers and quick
ways to assemble machines that put together vehicles.

Toyota's just-in-time concept almost came to a crashing halt in February 1997. A
fire at a brake parts plant owned by Aisin decimated its capacity to produce a P-
valve for Toyota vehicles. The company was the sole supplier of the part, and the
fact that the plant was shut down for weeks could have devastated Toyota's
supply line. The auto manufacturer ran out of P-valve parts after just one day.
Production lines shut down for just two days until a supplier of Aisin was able to
start manufacturing the necessary valves. Other suppliers for Toyota also had to
shut down because the auto manufacturer didn't need other parts to complete
any cars on the assembly line. The fire cost Toyota nearly $15 billion in revenue
and 70,000 cars due to its two-day shutdown, but it could have been much
worse.

The just-in-time (JIT) philosophy in the simplest form means getting the right
quantity of goods at the right place and at the right time.

c. Continuous Improvement

CONTINUOUS IMPROVEMENT

Quality Glossary Definition: Continuous Improvement

Continuous improvement, sometimes called continual improvement, is the
ongoing improvement of products, services or processes through incremental
and breakthrough improvements.

Continuous improvement is an ongoing effort to improve products, services or
processes. These efforts can seek “incremental” improvement over time or
“breakthrough” improvement all at once.

Among the most widely used tools for continuous improvement is a four-step
quality model—the plan-do-check-act (PDCA) cycle, also known as Deming
Cycle or Shewhart Cycle:

 Plan: Identify an opportunity and plan for change.
 Do: Implement the change on a small scale.
 Check: Use data to analyze the results of the change and determine whether it made a difference.
 Act: If the change was successful, implement it on a wider scale and continuously assess your results. If the change did not work, begin the cycle again.

Other widely used methods of continuous improvement — such as Six
Sigma, Lean, and Total Quality Management — emphasize employee
involvement and teamwork; measuring and systematizing processes; and
reducing variation, defects and cycle times.

Continuous or Continual?

The terms continuous improvement and continual improvement are frequently
used interchangeably. But some quality practitioners make the following
distinction:

  Continual improvement:  a broader term preferred by W. Edwards Deming to
refer to general processes of improvement and encompassing “discontinuous”
improvements—that is, many different approaches, covering different areas.

  Continuous improvement: a subset of continual improvement, with a more
specific focus on linear, incremental improvement within an existing process.
Some practitioners also associate continuous improvement more closely with
techniques of statistical process control.

The main idea of continuous improvement is a philosophy of never-ending
improvement.

d. Business Process Reengineering



Business Process Reengineering

Business Process Reengineering involves the radical redesign of core business
processes to achieve dramatic improvements in productivity, cycle times and
quality. In Business Process Reengineering, companies start with a blank sheet
of paper and rethink existing processes to deliver more value to the customer.
They typically adopt a new value system that places increased emphasis on
customer needs. Companies reduce organizational layers and eliminate
unproductive activities in two key areas. First, they redesign functional
organizations into cross-functional teams. Second, they use technology to
improve data dissemination and decision making.

How Business Process Reengineering works:

Business Process Reengineering is a dramatic change initiative that contains five
major steps. Managers should:

 Refocus company values on customer needs
 Redesign core processes, often using information technology to enable
improvements
 Reorganize a business into cross-functional teams with end-to-end
responsibility for a process
 Rethink basic organizational and people issues
 Improve business processes across the organization


Companies use Business Process Reengineering to improve performance
substantially on key processes that impact customers. Business Process
Reengineering can:

 Reduce costs and cycle time. Business Process Reengineering reduces costs
and cycle times by eliminating unproductive activities and the employees who
perform them. Reorganization by teams decreases the need for management
layers, accelerates information flows, and eliminates the errors and rework
caused by multiple handoffs.

 Improve quality. Business Process Reengineering improves quality by reducing
the fragmentation of work and establishing clear ownership of processes.
Workers gain responsibility for their output and can measure their performance
based on prompt feedback.

Business process reengineering



Business process reengineering cycle


Business process re-engineering (BPR) is a business management strategy,
originally pioneered in the early 1990s, focusing on the analysis and design
of workflows and business processes within an organization. BPR aimed to
help organizations fundamentally rethink how they do their work in order to
dramatically improve customer service, cut operational costs, and become world-
class competitors.[1] In the mid-1990s, as many as 60% of the Fortune
500 companies claimed to either have initiated reengineering efforts, or to have
plans to do so.[2]
BPR seeks to help companies radically restructure their organizations by
focusing on the ground-up design of their business processes. According to
Davenport (1990) a business process is a set of logically related tasks performed
to achieve a defined business outcome. Re-engineering emphasized
a holistic focus on business objectives and how processes related to them,
encouraging full-scale recreation of processes rather than iterative optimization
of sub-processes.[1]
Business process reengineering is also known as business process
redesign, business transformation, or business process change management.

Overview

Reengineering guidance and relationship of mission and work processes to
information technology.
Business process reengineering (BPR) is the practice of rethinking and
redesigning the way work is done to better support an organization's mission and
reduce costs. Reengineering starts with a high-level assessment of the
organization's mission, strategic goals, and customer needs. Basic questions are
asked, such as "Does our mission need to be redefined? Are our strategic goals
aligned with our mission? Who are our customers?" An organization may find
that it is operating on questionable assumptions, particularly in terms of the
wants and needs of its customers. Only after the organization rethinks what it
should be doing, does it go on to decide how best to do it.[1]
Within the framework of this basic assessment of mission and goals, re-
engineering focuses on the organization's business processes—the steps and
procedures that govern how resources are used to
create products and services that meet the needs of
particular customers or markets. As a structured ordering of work steps across
time and place, a business process can be decomposed into specific activities,
measured, modeled, and improved. It can also be completely redesigned or
eliminated altogether. Re-engineering identifies, analyzes, and re-designs an
organization's core business processes with the aim of achieving dramatic
improvements in critical performance measures, such as cost, quality, service,
and speed.[1]

Re-engineering recognizes that an organization's business processes are usually
fragmented into sub-processes and tasks that are carried out by several
specialized functional areas within the organization. Often, no one is responsible
for the overall performance of the entire process. Reengineering maintains that
optimizing the performance of sub-processes can result in some benefits, but
cannot yield dramatic improvements if the process itself is fundamentally
inefficient and outmoded. For that reason, re-engineering focuses on re-
designing the process as a whole in order to achieve the greatest possible
benefits to the organization and their customers. This drive for realizing dramatic
improvements by fundamentally re-thinking how the organization's work should
be done distinguishes the re-engineering from process improvement efforts that
focus on functional or incremental improvement.[1]

History
Business Process Reengineering (BPR) began as a private sector technique to
help organizations fundamentally rethink how they do their work in order to
dramatically improve customer service, cut operational costs, and become world-
class competitors. A key stimulus for re-engineering has been the continuing
development and deployment of sophisticated information
systems and networks. Leading organizations are becoming bolder in using this
technology to support innovative business processes, rather than refining current
ways of doing work.[1]
Reengineering Work: Don't Automate, Obliterate, 1990
In 1990, Michael Hammer, a former professor of computer science at
the Massachusetts Institute of Technology (MIT), published the article
"Reengineering Work: Don't Automate, Obliterate" in the Harvard Business
Review, in which he claimed that the major challenge for managers is to
obliterate forms of work that do not add value, rather than using technology for
automating it.[3] This statement implicitly accused managers of having focused on
the wrong issues, namely that technology in general, and more specifically
information technology, has been used primarily for automating existing
processes rather than using it as an enabler for making non-value adding work
obsolete.
Hammer's claim was simple: Most of the work being done does not add any
value for customers, and this work should be removed, not accelerated through
automation. Instead, companies should reconsider their inability to satisfy
customer needs and their insufficient cost structure. Even well-established
management thinkers, such as Peter Drucker and Tom Peters, were
accepting and advocating BPR as a new tool for (re-)achieving success in a
dynamic world.[4] During the following years, a fast-growing number of
publications, books as well as journal articles, were dedicated to BPR, and many
consulting firms embarked on this trend and developed BPR methods. However,
the critics were fast to claim that BPR was a way to dehumanize the work place,
increase managerial control, and to justify downsizing, i.e. major reductions of
the work force,[5] and a rebirth of Taylorism under a different label.
Despite this critique, reengineering was adopted at an accelerating pace and by
1993, as many as 60% of the Fortune 500 companies claimed to either have
initiated reengineering efforts, or to have plans to do so.[2] This trend was fueled
by the fast adoption of BPR by the consulting industry, but also by the
study Made in America,[6] conducted by MIT, that showed how companies in
many US industries had lagged behind their foreign counterparts in terms of
competitiveness, time-to-market and productivity.
Development after 1995
With the publication of critiques in 1995 and 1996 by some of the early BPR
proponents, coupled with abuses and misuses of the concept by
others, the reengineering fervor in the U.S. began to wane. Since then,
considering business processes as a starting point for business analysis and
redesign has become a widely accepted approach and is a standard part of the
change methodology portfolio, but is typically performed in a less radical way
than originally proposed.
More recently, the concept of Business Process Management (BPM) has gained
major attention in the corporate world and can be considered a successor to the
BPR wave of the 1990s, as it is equally driven by a striving for process efficiency
supported by information technology. Echoing the critique brought forward
against BPR, BPM is now accused of focusing on technology and
disregarding the people aspects of change.

Topics
The most notable definitions of reengineering are:

 "... the fundamental rethinking and radical redesign of business processes


to achieve dramatic improvements in critical contemporary modern
measures of performance, such as cost, quality, service, and speed."[7]
 "encompasses the envisioning of new work strategies, the actual process
design activity, and the implementation of the change in all its complex
technological, human, and organizational dimensions."[8]
BPR differs from other approaches to organization development (OD), especially
the continuous improvement or TQM movement, in its aim for fundamental and
radical change rather than iterative improvement.[9] To achieve the major
improvements BPR seeks, changing structural organizational variables and
other ways of managing and performing work alone is often considered
insufficient. To reap the achievable benefits fully, the use of information
technology (IT) is conceived as a major contributing factor. While IT has
traditionally been used to support existing business functions, i.e. to
increase organizational efficiency, it now plays a
role as an enabler of new organizational forms and patterns of collaboration
within and between organizations.
BPR derives its existence from different disciplines, and four major areas can be
identified as being subjected to change in BPR – organization, technology,
strategy, and people – where a process view is used as a common framework for
considering these dimensions.
Business strategy is the primary driver of BPR initiatives and the other
dimensions are governed by strategy's encompassing role. The organization
dimension reflects the structural elements of the company, such as hierarchical
levels, the composition of organizational units, and the distribution of work
between them. Technology is concerned with the use of computer
systems and other forms of communication technology in the business. In BPR,
information technology is generally considered to act as enabler of new forms of
organizing and collaborating, rather than supporting existing business functions.
The people / human resources dimension deals with aspects such as education,
training, motivation and reward systems. The concept of business processes –
interrelated activities aiming at creating a value added output to a customer – is
the basic underlying idea of BPR. These processes are characterized by a
number of attributes: Process ownership, customer focus, value adding, and
cross-functionality.
The role of information technology
Information technology (IT) has historically played an important role in the
reengineering concept.[10] It is regarded by some as a major enabler for new
forms of working and collaborating within an organization and across
organizational borders.
BPR literature[11] identified several so-called disruptive technologies that were
supposed to challenge traditional wisdom about how work should be performed.

 Shared databases, making information available at many places
 Expert systems, allowing generalists to perform specialist tasks
 Telecommunication networks, allowing organizations to be centralized and
decentralized at the same time
 Decision-support tools, allowing decision-making to be a part of everybody's
job
 Wireless data communication and portable computers, allowing field
personnel to work independently of an office
 Interactive videodisk, to get in immediate contact with potential buyers
 Automatic identification and tracking, allowing things to tell where they are,
instead of having to be found
 High performance computing, allowing on-the-fly planning and revisioning
In the mid-1990s, workflow management systems in particular were considered a
significant contributor to improved process efficiency. Also, ERP (enterprise
resource planning) vendors, such as SAP, JD Edwards, Oracle, PeopleSoft,
positioned their solutions as vehicles for business process redesign and
improvement.
Research and methodology

Model based on PRLC approach


Although the labels and steps differ slightly, the early methodologies that were
rooted in IT-centric BPR solutions share many of the same basic principles and
elements. The following outline is one such model, based on the PRLC (Process
Reengineering Life Cycle) approach developed by Guha.[12] A simplified
schematic outline of using a business process approach, exemplified for
pharmaceutical R&D:

1. Structural organization with functional units
2. Introduction of New Product Development as cross-functional process
3. Re-structuring and streamlining activities, removal of non-value adding
tasks
Benefiting from lessons learned from the early adopters, some BPR practitioners
advocated a change in emphasis to a customer-centric, as opposed to an IT-
centric, methodology. One such methodology, that also incorporated a Risk and
Impact Assessment to account for the effect that BPR can have on jobs and
operations, was described by Lon Roberts (1994).[13] Roberts also stressed the
use of change management tools to proactively address resistance to change—a
factor linked to the demise of many reengineering initiatives that looked good on
the drawing board.
Some items to use on a process analysis checklist are: reduce handoffs,
centralize data, reduce delays, free resources faster, and combine similar
activities. Also within the management consulting industry, a significant number
of methodological approaches have been developed.[14]
Framework
An easy-to-follow seven-step INSPIRE framework was developed by Bhudeb
Chakravarti, which any process analyst can follow to perform BPR. The
seven steps of the framework are: Initiate a new process reengineering project
and prepare a business case for it; Negotiate with senior management to
get approval to start the project; Select the key processes
that need to be reengineered; Plan the process reengineering
activities; Investigate the processes to analyze the problem areas; Redesign the
selected processes to improve performance; and Ensure the successful
implementation of the redesigned processes through proper monitoring and
evaluation.

Factors for success and failure

Factors that are important to BPR success include:

1. BPR team composition.
2. Business needs analysis.
3. Adequate IT infrastructure.
4. Effective change management.
5. Ongoing continuous improvement.
The aspects of a BPR effort that are modified include organizational structures,
management systems, employee responsibilities and performance
measurements, incentive systems, skills development, and the use of IT. BPR
can potentially affect every aspect of how business is conducted today.
Wholesale changes can cause results ranging from enviable success to
complete failure.
If successful, a BPR initiative can result in improved quality, customer service,
and competitiveness, as well as reductions in cost or cycle time. However, 50-
70% of reengineering projects are either failures or do not achieve significant
benefit.[15]
There are many reasons for sub-optimal business processes which include:

1. One department may be optimized at the expense of another
2. Lack of time to focus on improving business process
3. Lack of recognition of the extent of the problem
4. Lack of training
5. People involved use the best tool they have at their disposal, which is
usually Excel, to fix problems
6. Inadequate infrastructure
7. Overly bureaucratic processes
8. Lack of motivation
Many unsuccessful BPR attempts may have been due to the confusion
surrounding BPR, and how it should be performed. Organizations were well
aware that changes needed to be made, but did not know which areas to change
or how to change them. As a result, process reengineering is a management
concept that has been formed by trial and error or, in other words, practical
experience. As more and more businesses reengineer their processes,
knowledge of what caused the successes or failures is becoming apparent.[16] To
reap lasting benefits, companies must be willing to examine how strategy and
reengineering complement each other by learning to quantify strategy in terms of
cost, milestones, and timetables, by accepting ownership of the strategy
throughout the organization, by assessing the organization’s current capabilities
and processes realistically, and by linking strategy to the budgeting process.
Otherwise, BPR is only a short-term efficiency exercise.[17]
Organization-wide commitment
Major changes to business processes have a direct effect on processes,
technology, job roles, and workplace culture. Significant changes to even one of
those areas require resources, money, and leadership. Changing them
simultaneously is an extraordinary task.[16] Like any large and complex
undertaking, implementing reengineering requires the talents and energies of a
broad spectrum of experts. Since BPR can involve multiple areas within the
organization, it is important to get support from all affected departments. Through
the involvement of selected department members, the organization can gain
valuable input before a process is implemented; a step which promotes both the
cooperation and the vital acceptance of the reengineered process by all
segments of the organization.
Getting enterprise wide commitment involves the following: top management
sponsorship, bottom-up buy-in from process users, dedicated BPR team, and
budget allocation for the total solution with measures to demonstrate value.
Before any BPR project can be implemented successfully, there must be a
commitment to the project by the management of the organization, and strong
leadership must be provided.[18] Reengineering efforts can by no means be
exercised without a company-wide commitment to the goals. Above all, top
management commitment is imperative for success.[19][20] Top management must
recognize the need for change, develop a complete understanding of what BPR
is, and plan how to achieve it.[21]
Leadership has to be effective, strong, visible, and creative in thinking and
understanding in order to provide a clear vision.[22] Convincing every affected
group within the organization of the need for BPR is a key step in successfully
implementing a process. By informing all affected groups at every stage, and
emphasizing the positive end results of the reengineering process, it is possible
to minimize resistance to change and increase the odds for success. The
ultimate success of BPR depends on the strong, consistent, and continuous
involvement of all departmental levels within the organization.[23]
Team composition
Once organization-wide commitment has been secured from all departments
involved in the reengineering effort and at different levels, the critical step of
selecting a BPR team must be taken. This team will form the nucleus of the BPR
effort, make key decisions and recommendations, and help communicate the
details and benefits of the BPR program to the entire organization. The
determinants of an effective BPR team may be summarized as follows:

 competency of the members of the team, their motivation,[24]
 their credibility within the organization and their creativity,[25]
 team empowerment, training of members in process mapping and
brainstorming techniques,[26]
 effective team leadership,[27]
 proper organization of the team,[28]
 complementary skills among team members, adequate size,
interchangeable accountability, clarity of work approach, and
 specificity of goals.[29]
The most effective BPR teams include active representatives from the following
work groups: top management, business area responsible for the process being
addressed, technology groups, finance, and members of all ultimate process
users’ groups. Team members who are selected from each work group within the
organization will affect the outcome of the reengineered process according to
their desired requirements. The BPR team should be mixed in depth and
knowledge. For example, it may include members with the following
characteristics:

 Members who do not know the process at all.
 Members who know the process inside-out.
 Customers, if possible.
 Members representing affected departments.
 One or two of the best, brightest, most passionate, and most committed
technology experts.
 Members from outside of the organization[19]
Moreover, Covert (1997) recommends that in order to be effective, a BPR
team must be kept to under ten members. If the organization fails to keep the team
at a manageable size, the entire process will be much more difficult to execute
efficiently and effectively. The efforts of the team must be focused on identifying
breakthrough opportunities and designing new work steps or processes that will
create quantum gains and competitive advantage.[21]
Business needs analysis
Another important factor in the success of any BPR effort is performing a
thorough business needs analysis. Too often, BPR teams jump directly into the
technology without first assessing the current processes of the organization and
determining what exactly needs reengineering. In this analysis phase, a series of
sessions should be held with process owners and stakeholders, regarding the
need and strategy for BPR. These sessions build a consensus as to the vision of
the ideal business process. They help identify essential goals for BPR within
each department and then collectively define objectives for how the project will
affect each work group or department on an individual basis and the business
organization as a whole. The idea of these sessions is to conceptualize the ideal
business process for the organization and build a business process model.
Those items that seem unnecessary or unrealistic may be eliminated or modified
later on in the diagnosing stage of the BPR project. It is important to
acknowledge and evaluate all ideas in order to make all participants feel that
they are a part of this important and crucial process. Results of these meetings
will help formulate the basic plan for the project.
This plan includes the following:

 identifying specific problem areas,
 solidifying particular goals, and
 defining business objectives.
The business needs analysis contributes tremendously to the re-engineering
effort by helping the BPR team to prioritize and determine where it should focus
its improvement efforts.[19]
The business needs analysis also helps in relating the BPR project goals back to
key business objectives and the overall strategic direction for the organization.
This linkage should show the thread from the top to the bottom of the
organization, so each person can easily connect the overall business direction
with the re-engineering effort. This alignment must be demonstrated from the
perspective of financial performance, customer service, associate value, and the
vision for the organization.[16] Developing a business vision and process
objectives relies, on the one hand, on a clear understanding of organizational
strengths, weaknesses, and market structure, and on the other, on awareness
and knowledge about innovative activities undertaken by competitors and other
organizations.[30]
BPR projects that are not in alignment with the organization’s strategic direction
can be counterproductive. There is always a possibility that an organization may
make significant investments in an area that is not a core competency for the
company and later outsource this capability. Such reengineering initiatives are
wasteful and steal resources from other strategic projects. Moreover, without
strategic alignment, the organization’s key stakeholders and sponsors may find
themselves unable to provide the level of support the organization needs in
terms of resources, especially if there are other projects that are more critical to
the future of the business and more aligned with the strategic direction.[16]
Adequate IT infrastructure
Researchers consider adequate IT infrastructure reassessment and composition
as a vital factor in successful BPR implementation.[22] Hammer (1990) prescribes
the use of IT to challenge the assumptions inherent in the work process that
have existed since long before the advent of modern computer and
communications technology.[31] Factors related to IT infrastructure have been
increasingly considered by many researchers and practitioners as a vital
component of successful BPR efforts.[32]

 Effective alignment of IT infrastructure and BPR strategy,
 building an effective IT infrastructure,
 adequate IT infrastructure investment decision,
 adequate measurement of IT infrastructure effectiveness,
 proper information systems (IS) integration,
 effective reengineering of legacy IS,
 increasing IT function competency, and
 effective use of software tools are the most important factors that contribute
to the success of BPR projects.
These are vital factors that contribute to building an effective IT infrastructure for
business processes.[22] BPR must be accompanied by strategic planning which
addresses leveraging IT as a competitive tool.[33] An IT infrastructure is made up
of physical assets, intellectual assets, shared services,[34] and their linkages.[35]
The way in which the IT infrastructure components are composed and their
linkages determines the extent to which information resources can be delivered.
An effective IT infrastructure composition process follows a top-down approach,
beginning with business strategy and IS strategy and passing through designs of
data, systems, and computer architecture.[36]
Linkages between the IT infrastructure components, as well as descriptions of
their contexts of interaction, are important for ensuring integrity and consistency
among the IT infrastructure components.[32] Furthermore, IT standards have a
major role in reconciling various infrastructure components to provide shared IT
services that are of a certain degree of effectiveness to support business process
applications, as well as to guide the process of acquiring, managing, and utilizing
IT assets.[35] The IT infrastructure shared services and the human IT
infrastructure components, in terms of their responsibilities and their needed
expertise, are both vital to the process of the IT infrastructure composition. IT
strategic alignment is approached through the process of integration between
business and IT strategies, as well as between IT and organizational
infrastructures.[22]
Most analysts view BPR and IT as irrevocably linked. Walmart, for example,
would not have been able to reengineer the processes used to procure and
distribute mass-market retail goods without IT. Ford was able to decrease its
headcount in the procurement department by 75 percent by using IT in
conjunction with BPR, in another well-known example.[33] The IT infrastructure
and BPR are interdependent in the sense that deciding the information
requirements for the new business processes determines the IT infrastructure
constituents, and a recognition of IT capabilities provides alternatives for BPR.[32]
Building a responsive IT infrastructure is highly dependent on an appropriate
determination of business process information needs. This, in turn, is determined
by the types of activities embedded in a business process, and their sequencing
and reliance on other organizational processes.[37]
Effective change management
Al-Mashari and Zairi (2000) suggest that BPR involves changes in people
behavior and culture, processes, and technology. As a result, there are many
factors that prevent the effective implementation of BPR and hence restrict
innovation and continuous improvement. Change management, which involves
all human and social related changes and cultural adjustment techniques needed
by management to facilitate the insertion of newly designed processes and
structures into working practice and to deal effectively with resistance, is
considered by many researchers to be a crucial component of any BPR
effort. One of the most overlooked obstacles to successful BPR project
implementation is resistance from those whom implementers believe will benefit
the most. Most projects underestimate the cultural effect of major process and
structural change and as a result, do not achieve the full potential of their change
effort. Many people fail to understand that change is not an event, but rather a
management technique.
Change management is the discipline of managing change as a process, with
due consideration that employees are people, not programmable machines.[16]
Change is implicitly driven by motivation which is fueled by the recognition of
the need for change. An important step towards any successful reengineering
effort is to convey an understanding of the necessity for change.[19] It is a well-
known fact that organizations do not change unless people change; the better
change is managed, the less painful the transition is.
Organizational culture is a determining factor in successful BPR
implementation.[38] Organizational culture influences the organization’s ability to
adapt to change.
Culture in an organization is a self-reinforcing set of beliefs, attitudes, and
behavior. Culture is one of the most resistant elements of organizational behavior
and is extremely difficult to change. BPR must consider current culture in order to
change these beliefs, attitudes, and behaviors effectively. Messages conveyed
from management in an organization continually enforce current culture.
The first step towards any successful transformation effort is to convey an
understanding of the necessity for change.[19] The management reward system,
stories of company origin and early successes of founders, physical symbols,
and company icons constantly enforce the message of the current culture.
Implementing BPR successfully is dependent on how thoroughly management
conveys the new cultural messages to the organization.[18] These messages
provide people in the organization with a guideline to predict the outcome of
acceptable behavior patterns. People should be the focus for any successful
business change.
BPR is not a recipe for successful business transformation if it focuses on only
computer technology and process redesign. In fact, many BPR projects have
failed because they did not recognize the importance of the human element in
implementing BPR. Understanding the people in organizations, the current
company culture, motivation, leadership, and past performance is essential to
recognize, understand, and integrate into the vision and implementation of BPR.
If the human element is given equal or greater emphasis in BPR, the odds of
successful business transformation increase substantially.[18]
Ongoing continuous improvement
Many organizational change theorists hold a common view of organizations
adjusting gradually and incrementally and responding locally to individual crises
as they arise.[19] Common elements are:

 BPR is a successive and ongoing process and should be regarded as an
improvement strategy that enables an organization to make the move from
traditional functional orientation to one that aligns with strategic business
processes.[30]
 Continuous improvement is defined as the propensity of the organization to
pursue incremental and innovative improvements in its processes, products,
and services.[19] The incremental change is governed by the knowledge
gained from each previous change cycle.
 It is essential that the automation infrastructure of the BPR activity provides
for performance measurements in order to support continuous
improvements. It will need to efficiently capture appropriate data and allow
access to appropriate individuals.
 To ensure that the process generates the desired benefits, it must be tested
before it is deployed to the end users. If it does not perform satisfactorily,
more time should be taken to modify the process until it does.
 A fundamental concept for quality practitioners is the use of feedback loops
at every step of the process and an environment that encourages constant
evaluation of results and individual efforts to improve.[39]
 At the end user’s level, there must be a proactive feedback mechanism that
provides for and facilitates resolutions of problems and issues. This will also
contribute to a continuous risk assessment and evaluation which are
needed throughout the implementation process to deal with any risks at
their initial state and to ensure the success of the reengineering efforts.
 Anticipating and planning for risk handling is important for dealing effectively
with any risk when it first occurs and as early as possible in the BPR
process.[40] It is interesting that many of the successful applications of
reengineering described by its proponents are in organizations practicing
continuous improvement programs.
 Hammer and Champy (1993) use the IBM Credit Corporation as well as
Ford and Kodak, as examples of companies that carried out BPR
successfully due to the fact that they had long-running continuous
improvement programs.[39]
In conclusion, successful BPR can potentially create substantial improvements in
the way organizations do business and can actually produce fundamental
improvements for business operations. However, in order to achieve that, there
are some key success factors that must be taken into consideration when
performing BPR.
BPR success factors are a collection of lessons learned from reengineering
projects and from these lessons common themes have emerged. In addition, the
ultimate success of BPR depends on the people who do it and on how well they
can be committed and motivated to be creative and to apply their detailed
knowledge to the reengineering initiative. Organizations planning to undertake
BPR must take into consideration the success factors of BPR in order to ensure
that their reengineering-related change efforts are comprehensive,
well-implemented, and have a minimum chance of failure.

Critique
Many companies used reengineering as a pretext for downsizing, though this was
not the intent of reengineering's proponents; consequently, reengineering earned
a reputation for being synonymous with downsizing and layoffs.[41]
Reengineering has not always lived up to expectations. Some prominent
reasons include:

 Reengineering assumes that the factor that limits an organization's
performance is the ineffectiveness of its processes (which may or may not
be true) and offers no means of validating that assumption.
 Reengineering assumes the need to start the process of performance
improvement with a "clean slate," i.e. totally disregard the status quo.
 According to Eliyahu M. Goldratt (and his Theory of Constraints)
reengineering does not provide an effective way to focus improvement
efforts on the organization's constraint.
Others have claimed that reengineering was a recycled buzzword for commonly
held ideas. Abrahamson (1996) argued that fashionable management terms tend
to follow a lifecycle, which for reengineering peaked between 1993 and 1996
(Ponzi and Koenig 2002). Critics argue that reengineering was in fact nothing
new (as e.g. when Henry Ford implemented the assembly line in 1908, he was in
fact reengineering, radically changing the way of thinking in an organization).
The most frequent critique against BPR concerns the strict focus on efficiency
and technology and the disregard of people in the organization that is subjected
to a reengineering initiative. Very often, the label BPR was used for major
workforce reductions. Thomas Davenport, an early BPR proponent, stated that:
"When I wrote about "business process redesign" in 1990, I explicitly said that
using it for cost reduction alone was not a sensible goal. And consultants Michael
Hammer and James Champy, the two names most closely associated with
reengineering, have insisted all along that layoffs shouldn't be the point. But the
fact is, once out of the bottle, the reengineering genie quickly turned ugly."[42]
Hammer similarly admitted that:
"I wasn't smart enough about that. I was reflecting my engineering background
and was insufficiently appreciative of the human dimension. I've learned that's
critical."[43]

e. Kaizen Costing

Kaizen costing
Kaizen costing is a cost reduction system. Yasuhiro Monden defines kaizen
costing as "the maintenance of present cost levels for products currently being
manufactured via systematic efforts to achieve the desired cost level." The
word kaizen is a Japanese word meaning continuous improvement.
Monden has described two types of kaizen costing:

 Asset- and organisation-specific kaizen costing activities planned according
to the exigencies of each deal
 Product-model-specific costing activities carried out in special projects with
added emphasis on value analysis
Kaizen costing is applied to products that are already in the production phase.
Before that, while the products are still in the development phase, target
costing is applied.
After targets have been set, they are continuously updated to reflect past
improvements and projected (expected) improvements.
Adopting kaizen costing requires a change in the method of setting standards.
Kaizen costing focuses on "cost reduction" rather than "cost control".

Types of costs under consideration


Kaizen costing takes into consideration costs related to the manufacturing
stage, which include:

 Costs of supply chain
 Legal costs
 Manufacturing costs
 Waste
 Recruitment costs
 Marketing, sales and distribution
 Product disposal

Kaizen costing is thus a process wherein a product undergoes cost reduction
even when it is already in the production stage. The cost minimization can
include strategies in effective waste management, continuous product
improvement, or better deals in the acquisition of raw materials.

The following points, adapted from a presentation by Gursharan Singh Saini
(Khalsa College, Patiala), summarize kaizen costing:

Introduction. Kaizen costing is known as "genkakaizen" in Japanese companies.
It is used in the manufacturing stage of existing products as a cost reduction
process, and was pioneered by Japanese automobile companies.

Concept. KAI (change) + ZEN (good) = change for the good.

Definition. Yasuhiro Monden defines kaizen costing as "the maintenance of
present cost levels for products currently being manufactured via systematic
efforts to achieve the desired cost level."

Application. Kaizen costing is applied to products already in production. Cost
can be reduced by eliminating seven types of waste: overproduction, inventory,
waiting, defects, motion, transportation, and overprocessing.

Notions of kaizen costing. Kaizen is continuous, incremental in nature, and
participative.

Implementing kaizen - a few rules:
 List your own problems.
 Grade problems as minor, difficult, and major.
 Start with the smallest minor problem.
 Move on to the next graded problem, and so on.
 Remember improvement is part of the daily routine.
 Never accept the status quo.
 Never reject any idea before trying it.
 Eliminate tried-but-failed experiments.
 Highlight problems rather than hiding them.

Procedure for implementation:
1. Form small groups of 6-10 persons.
2. Give the groups numbers: Kaizen-1, Kaizen-2, and so on.
3. Appoint an evaluator for each group.
4. Arrange weekly meetings of the group (6-12 months).
5. Submit progress of improvement in writing.
6. Allow each member to express himself.
7. No disturbance when others are speaking.
8. However, clarifications can be sought instantly.

Evaluation:
0 marks - no improvement made
0 to 30 marks - improvement tried but failed
30 to 50 marks - small to moderate improvement
50 to 75 marks - good improvement
Over 75 marks - extraordinary improvement

Kaizen philosophy - traditional organization versus kaizen environment:

Dimension               Traditional Organization   Kaizen Environment
1 Attitude              Let it go                  Continuous improvement
2 Employees             Cost                       Assets
3 Information           Restricted                 Shared
4 Interpersonal         Commercial                 Human relationship
5 Managerial belief     Routine                    Change
6 Management culture    Bureaucratic               Participative
7 Management function   Control                    Supportive
8 Management stress     Functional                 Cross-functional

Advantages of kaizen costing: customer satisfaction; process-centered; creates
work teams; cross-functional; increases employee morale; reduces errors;
promotes openness; acknowledges problems openly.

Disadvantages: requires a permanent change of the management system; may not
produce the required results; difficult to convince people.

Lessons from failures: lack of interest and support from management; lack of
training in listening, presentation, and communication skills; criticism of
failure by fellow members; ignoring the basic concept (improvement is part of
the daily routine); work pressure sidelining the kaizen effort.

Difference between target and kaizen costing. Kaizen costing is typically based
on the following: employees are the source of solutions; cost reduction is
achieved by continuous improvement; cost reduction targets are set every month.
Target costing, by contrast, works backward from price: target cost = estimated
selling price - desired level of profit. It is an integral part of a strategic
profit management system.

Strategy-based cost challenges: determinants of environmental uncertainty,
volatility, complexity, increasing global presence, information overload, and
ambiguity.

f. Product Life Cycle Costing

A procurement and production costing technique that considers all life-cycle
costs. In procurement, it aims to determine the lowest cost of ownership of a
fixed asset (purchase price, installation, operation, maintenance and
upgrading, disposal, and other costs) over the asset's economic life. In
manufacturing (as an integral part of terotechnology), it aims to estimate not
only the production costs but also how much revenue a product will generate
and what expenses will be incurred at each stage of the value chain over the
product's estimated life cycle.
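The lowest-cost-of-ownership idea above can be sketched in a few lines; all
figures below are hypothetical, and discounting is omitted for simplicity:

```python
# Hypothetical life-cycle costing sketch: every figure is an assumption
# for illustration, not taken from the reviewer text.

def life_cycle_cost(purchase, installation, annual_operating,
                    annual_maintenance, disposal, years):
    """Total cost of ownership over an asset's economic life
    (undiscounted, for simplicity)."""
    return (purchase + installation + disposal
            + years * (annual_operating + annual_maintenance))

# Comparing two machines on purchase price alone can mislead:
machine_a = life_cycle_cost(100_000, 5_000, 12_000, 3_000, 2_000, 10)  # 257,000
machine_b = life_cycle_cost(120_000, 5_000, 8_000, 2_000, 1_000, 10)   # 226,000
print(machine_a, machine_b)
```

Machine B costs more to buy but less to own: the cheaper purchase price of
Machine A is outweighed by its higher operating and maintenance costs over the
ten-year life.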

Product Life Cycle

A series of stages that products pass through in their lifetime, characterized
by changing product demands over time:

 Early stages: Introduction, Growth
 Later stages: Maturity, Decline

g. Target Costing

Target costing
Target costing is an approach to determining a product's life-cycle cost which
should be sufficient to develop specified functionality and quality while
ensuring the desired profit. It involves setting a target cost by subtracting a
desired profit margin from a competitive market price.[1] A target cost is the
maximum amount of cost that can be incurred on a product such that the firm can
still earn the required profit margin at a particular selling price. Target
costing decomposes the target cost from the product level to the component
level. Through this decomposition, target costing spreads the competitive
pressure faced by the company to the product's designers and suppliers. Target
costing consists of cost planning in the design phase of production as well as
cost control throughout the resulting product life cycle. The cardinal rule of
target costing is to never exceed the target cost. However, the focus of target
costing is not to minimize costs but to achieve a desired level of cost
reduction determined by the target costing process.

Definition
Target costing is defined as "a disciplined process for determining and achieving
a full-stream cost at which a proposed product with specified functionality,
performance, and quality must be produced in order to generate the desired
profitability at the product’s anticipated selling price over a specified period of
time in the future." [2] This definition encompasses the principal concepts:
products should be based on an accurate assessment of the wants and needs of
customers in different market segments, and cost targets should be what result
after a sustainable profit margin is subtracted from what customers are willing to
pay at the time of product introduction and afterwards.
The fundamental objective of target costing is to manage the business to be
profitable in a highly competitive marketplace. In effect, target costing is a
proactive cost planning, cost management, and cost reduction practice whereby
costs are planned and managed out of a product and business early in the
design and development cycle, rather than during the later stages of product
development and production.[3]

History
Target costing was developed independently in the USA and Japan in different
time periods.[4] Target costing was adopted early by American companies to
reduce cost and improve productivity, such as Ford Motor from the 1900s and
American Motors in the 1950s-1960s. Although the ideas of target costing were
also applied by a number of other American companies, including Boeing,
Caterpillar, and Northern Telecom, few of them applied target costing as
comprehensively and intensively as top Japanese companies such as Nissan,
Toyota, and Nippondenso.[5] Target costing emerged in Japan from the 1960s to
the early 1970s through the particular effort of the Japanese automobile
industry, including Toyota and Nissan. It did not receive global attention
until the late 1980s to 1990s, when authors such as Monden (1992),[6] Sakurai
(1989),[7] Tanaka (1993),[8] and Cooper (1992)[9] described the way Japanese
companies applied target costing to thrive in their business (IMA 1994). With
superior implementation systems, Japanese manufacturers have been more
successful than the American companies in developing target costing.[4] The
traditional cost-plus pricing strategy had been impeding productivity and
profitability for a long time.[10][11] As a new strategy, target costing
replaces traditional cost-plus pricing by maximizing customer satisfaction at
an accepted level of quality and functionality while minimizing costs.

Process of target costing

The process of target costing can be divided into three sections: the first
involves market-driven target costing, which focuses on studying market
conditions to identify a product's allowable cost in order to meet the
company's long-term profit at the expected selling price; the second involves
performing cost-reduction strategies with the product designer's effort and
creativity to identify the product-level target cost; the third is
component-level target costing, which decomposes the production cost to
functional and component levels to transmit cost responsibility to
suppliers.[1]

Target costing process

Market-driven target costing

Market-driven target costing is the first section of the target costing
process, which focuses on studying market conditions and determining the
company's profit margin in order to identify the allowable cost of a product.
Market-driven costing can go
through five steps: establish the company's long-term sales and profit
objective; develop the mix of products; identify target selling price for each
product; identify profit margin for each product; and calculate allowable cost of
each product.[1]
The company's long-term sales and profit objectives are developed from
extensive analysis of relevant information relating to customers, the market,
and products. Only a realistic plan is accepted before proceeding to the next
step. The product mix is designed carefully to ensure that it satisfies many
customers but does not contain so many products that customers are confused.
The company may use simulation to explore the impact of the overall profit
objective on different product mixes and determine the most feasible mix. A
target selling price, target profit margin, and allowable cost are identified
for each product. The target selling price needs to reflect the expected
market conditions at the time the product is launched. Internal factors such
as the product's functionality and profit objective, and external factors such
as the company's image or the expected prices of competitive products,
influence the target selling price. The company's long-term profit plan and
life-cycle cost are considered when determining the target profit margin.
Firms might set the target profit margin based on either the actual profit
margin of previous products or the target profit margin of the product line.
Simulation of overall group profitability can help ensure the group target is
achieved. Subtracting the target profit margin from the target selling price
yields the allowable cost for each product. The allowable cost is the amount
that can be spent on the product while still meeting the profit target if it
sells at the target price; it signals the magnitude of cost savings the team
needs to achieve.[1][5]
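The final step, calculating the allowable cost, can be illustrated with a
short sketch; the price and margin figures are assumptions for illustration:

```python
# Market-driven target costing: allowable cost = target selling price
# minus target profit margin. All figures are illustrative assumptions.

target_selling_price = 500.0   # expected competitive price at launch
target_profit_margin = 0.20    # 20% of selling price, from the long-term profit plan

allowable_cost = target_selling_price * (1 - target_profit_margin)
print(allowable_cost)  # 400.0

# If the current estimated cost is higher, the gap is the cost-reduction
# objective the design team must close.
estimated_cost = 450.0
cost_reduction_objective = estimated_cost - allowable_cost
print(cost_reduction_objective)  # 50.0
```

Here the team must engineer 50 of cost out of the product before launch to
earn the planned margin at the market-set price.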

Product-level target costing

Following the completion of market-driven costing, the next task of the target
costing process is product-level target costing. Product-level target costing
concentrates on designing products that satisfy the company's customers at the
allowable cost. To achieve this goal, product-level target costing is typically
divided into three steps as shown below.[1]

Product-level target costing



The first step is to set a product-level target cost. Since the allowable cost
is obtained purely from external conditions, without considering the design
capabilities of the company or the realistic cost of manufacturing, it may not
always be achievable in practice. Thus, it is necessary to adjust the
unachievable allowable cost to an achievable target cost, closing the gap
through determined cost-reduction effort. The second step is to discipline
this target cost process, which includes monitoring the relationship between
the target cost and the estimated product cost at any point during the design
process, applying the cardinal rule so that the total target costs at the
component level do not exceed the target cost of the product, and allowing
exceptions for products violating the cardinal rule. For a product excepted
from the cardinal rule, two analyses are often performed after the launch of
the product. One involves reviewing the design process to find out why the
target cost was not achieved. The other is an immediate effort to reduce the
excessive cost and ensure that the period of violation is as short as
possible. Once the target cost-reduction objective is identified,
product-level target costing comes to the final step: finding ways to achieve
it. Engineering methods such as value engineering (VE), design for manufacture
and assembly (DFMA), and quality function deployment (QFD) are commonly
adopted in this step.[1]

Target costing and value engineering

Value engineering (VE), also known as value analysis (VA),[12] plays a crucial
role in the target costing process, particularly at the product level and the
component level. Among the three aforementioned methods of achieving the
target cost, VE is the most critical because it not only attempts to reduce
costs but also aims to improve the functionality and quality of products.
There are a variety of practical VE strategies, including zero-look,
first-look, and second-look VE approaches, as well as teardown approaches.[1]
Given the complexity of real-world problems, implementing the target costing
process often relies on computer simulation to reproduce stochastic
elements.[13] For example, many firms use simulation to study the complex
relationship between selling prices and profit margins, the impact of individual
product decisions on overall group profitability, the right mix of products to
enhance overall profit, or other economic modeling to overcome organizational
inertia by getting the most productive reasoning. In addition, simulation helps
estimate results rapidly for dynamic process changes.

Factors affecting target costing

The factors influencing the target costing process are broadly categorized
based on how a company's strategy for a product's quality, functionality, and
price changes over time. However, some factors play a specific role depending
on what drives a company's approach to target costing.
Factors influencing market-driven costing
Intensity of competition and the nature of the customer affect market-driven
costing.[14] Competitors introducing similar products have been shown to drive
rival companies to expend energy on implementing target costing systems, as in
the case of Toyota and Nissan or Apple and Google. The costing process is also
affected by the level of customer sophistication, changing requirements, and the
degree to which their future requirements are known.


The automotive and camera industries are prime examples of how customers
affect target costing based on their exact requirements.
Factors influencing product-level costing
Product strategy and product characteristics affect product-level target
costing.[1] Characteristics of product strategy such as the number of products
in a line, the rate of redesign, and the level of innovation have been shown
to have an effect. A higher number of products correlates directly with the
benefits of target costing. Frequent redesigns lead to the introduction of new
products, which creates greater benefits from target costing. Note that the
value of historical information declines with greater innovation, thereby
reducing the benefits of product-level target costing.
The degree of complexity of the product, level of investments required and the
duration of product development process make up the factors that affect the
target costing process based on product characteristics. Product viability is
determined by the aforementioned factors. In turn, the target costing process is
also modified to suit the different degrees of complexity required.[1]
Factors influencing component-level costing
Supplier-base strategy is the main factor that determines component-level
target costing because it plays a key role in the detail a firm has about its
supplier capabilities.[1] Three characteristics make up the supplier-base
strategy: the degree of horizontal integration, power over suppliers, and the
nature of supplier relations. Horizontal integration captures the
fraction of product costs sourced externally. Cost pressures on suppliers can
drive target costing if the buying power of firms is high enough. In turn, this may
lead to better benefits. More cooperative supplier relations have been shown to
increase mutual benefits in terms of target costs particularly at a component
level.

Applications
Aside from its application in the field of manufacturing, target costing is
also widely used in the following areas.
Energy
An Energy Retrofit Loan Analysis Model has been developed using a Monte Carlo
(MC) method for target costing in energy-efficient buildings and construction.
The MC method has been shown to be effective in determining the impact of
financial uncertainties on project performance.[15]
The Target Value Design Decision Making Process (TVD-DMP) groups a set of
energy-efficiency methods at different optimization levels to evaluate the
costs and uncertainties involved in the energy-efficiency process. Some major
design parameters are specified using these methods, including facility
operation schedule, orientation, plug load, HVAC, and lighting systems.
The entire process consists of three phases: initiation, definition, and
alignment. The initiation stage involves developing a business case for energy
efficiency using target value design (TVD) training, organization, and
compensation. The definition process involves defining and validating the case
with tools such as value analysis and benchmarking processes to determine the
allowable costs. By setting targets and aligning the design process with those
targets, TVD-DMP has been shown to achieve the high level of collaboration
needed for energy-efficiency investments. This is done using risk analysis
tools, pull planning, and rapid estimating processes.
Healthcare
Target costing and target value design have applications in building
healthcare facilities, including critical components such as Neonatal
Intensive Care Units (NICUs). The process is influenced by unit locations,
degree of comfort, number of patients per room, type of supply location, and
access to nature.[16] According to National Vital Statistics Reports, 12.18%
of 2009 births were premature, and the cost per infant was $51,600. This
created opportunities for NICUs to use target value design in deciding whether
to build single-family rooms or more open-bay NICUs. This was achieved using
set-based design analysis, which challenges the designer to generate multiple
alternatives for the same functionality. Designs are evaluated keeping in mind
the requirements of the various stakeholders in the NICU, including nurses,
doctors, family members, and administrators. Unlike linear point-based design,
set-based design narrows options to the optimal one by simultaneously
eliminating alternatives defined by user constraints.
Construction
About 15% of construction projects in Japan have adopted target costing for
cost planning and management, as recognized by Jacomit (2008).[17] In the
U.S., target costing research has been carried out within the framework of
lean construction as the target value design (TVD) method,[18] which has been
disseminated widely across the construction industry in recent years. Research
has shown that, applied systematically, TVD can deliver a significant
improvement in project performance, with an average reduction of 15% compared
with market cost.[19] TVD in a construction project treats the final cost of
the project as a design parameter, similar to the capacity and aesthetic
requirements of the project. TVD requires the project team to develop a target
cost from the beginning. The project team is expected not to design beyond the
target cost without the owner's approval, and must use a range of skills to
maintain the target cost. In some cases the cost can increase, but the project
team must commit to reducing it and do its best to do so without impairing
other functions of the project.[20]

C. Management Accounting Concepts & Techniques For Performance Measurement
1. Responsibility Accounting And Transfer Pricing
a. Type Of Responsibility Centers (Cost, Revenue, Profit And Investment
Centers)

COST CENTER

A cost center is a responsibility center in which the manager has the authority
only to incur costs and is specifically evaluated on the basis of how well costs are
controlled and utilized. The unit manager is responsible for minimizing costs
subject to some output constraints. Examples are: maintenance department of a
manufacturing company; library section of a school; and an accounting
department of a trading concern. Performance of a cost center is evaluated
through variance analysis reports based on standard costs and flexible budgets.

REVENUE CENTER

A revenue center is an organizational unit for which a manager is accountable
only for the generation of revenues and has no control over setting selling
prices or budgeting costs.

PROFIT CENTER

A profit center is a responsibility center in which the manager is responsible
for generating revenues and for planning and controlling the expenses of his
center. In practice, profit centers are more common than purely revenue
centers. The major goal of the profit center manager is to maximize the
segment's net income. Examples are: the loans and discounts department of a
commercial bank; the college department of a university; the sales department
of a trading firm. Performance of a profit center is measured using the
contribution margin approach, which determines the profit center's
contribution to the recovery of the indirect costs of the company.

INVESTMENT CENTER

An investment center is an organizational unit in which the manager is
responsible for generating revenues and planning and controlling expenses, and
has the authority to acquire, utilize, and dispose of assets in a manner that
seeks to earn the highest feasible rate of return on the center's investment
cost. Most investment centers are independent or autonomous divisions or
subsidiaries, allowing center managers the opportunity to make decisions in
all matters affecting their units and to be evaluated on the outcomes of those
decisions. Managers are encouraged to operate such centers as separate
economic entities that exist for the same basic organizational goals. Examples
include a corporate headquarters or a division of a large decentralized
organization, such as the Magnolia Products Division of San Miguel
Corporation, and the branch offices of commercial banks. In addition to
performance reports, the performance of an investment center is also measured
through its Return on Investment (ROI) and Residual Income (RI).

b. Concepts Of Decentralization And Segment Reporting

Decentralization

A form of management style in which the firm is divided into smaller units.
These units are called by various names, such as divisions, segments, business
units, centers, and departments. Sometimes a unit is further divided into
sub-units. Each unit and sub-unit is assigned a responsible officer who
performs managerial functions. Under this style, top management delegates to
subordinate managers a significant degree
of autonomy and independence in operations and decision-making for their
respective segments or units, which is covered by their area of responsibility.

Segment Reporting

Segment reporting is the reporting of the operating segments of a company in
the disclosures accompanying its financial statements. Segment reporting is
required for publicly held entities and is not required for privately held
ones. Segment reporting is intended to give information to investors and
creditors regarding the financial results and position of the most important
operating units of a company, which they can use as the basis for decisions
related to the company.

Under Generally Accepted Accounting Principles (GAAP), an operating segment
engages in business activities from which it may earn revenue and incur
expenses, has discrete financial information available, and has results that
are regularly reviewed by the entity's chief operating decision maker for
performance assessment and resource allocation decisions. Follow these rules
to determine which segments need to be reported:

 Aggregate the results of two or more segments if they have similar products,
services, processes, customers, distribution methods, and regulatory
environments.
 Report a segment if it has at least 10% of the revenues, 10% of the profit or
loss, or 10% of the combined assets of the entity.
 If the total revenue of the segments you have selected under the preceding
criteria comprise less than 75% of the entity's total revenue, then add more
segments until you reach that threshold.
 You can add more segments beyond the minimum just noted, but consider a
reduction if the total exceeds ten segments.
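The quantitative tests above can be sketched in Python. The segment data are
made up, and the profit test is simplified (GAAP actually compares against the
greater of the combined profits of profitable segments or the combined losses
of loss-making ones):

```python
# Sketch of the 10% reportability tests and the 75% revenue-coverage test.
# All segment data are illustrative assumptions.

segments = {          # name: (revenue, profit, assets)
    "Alpha": (600, 90, 500),
    "Beta":  (250, 10, 120),
    "Gamma": (100,  5,  60),
    "Delta": ( 50,  2,  20),
}
total_rev   = sum(r for r, _, _ in segments.values())
total_prof  = sum(p for _, p, _ in segments.values())
total_asset = sum(a for _, _, a in segments.values())

# 10% tests: a segment qualifies on revenue, profit/loss, or assets.
reportable = [n for n, (r, p, a) in segments.items()
              if r >= 0.10 * total_rev
              or abs(p) >= 0.10 * abs(total_prof)
              or a >= 0.10 * total_asset]

# 75% test: add more segments (largest revenue first) until the
# reportable set covers at least 75% of total revenue.
covered = sum(segments[n][0] for n in reportable)
for n in sorted(set(segments) - set(reportable),
                key=lambda n: -segments[n][0]):
    if covered >= 0.75 * total_rev:
        break
    reportable.append(n)
    covered += segments[n][0]

print(reportable)  # ['Alpha', 'Beta', 'Gamma']
```

Delta fails all three 10% tests and stays unreported, since the other three
segments already cover 95% of revenue, well past the 75% threshold.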

The information you should include in segment reporting includes:

 The factors used to identify reportable segments
 The types of products and services sold by each segment
 The basis of organization (such as being organized around a geographic
region, product line, and so forth)
 Revenues
 Interest expense
 Depreciation and amortization
 Material expense items
 Equity method interests in other entities
 Income tax expense or income
 Other material non-cash items
 Profit or loss

The segment reporting requirements under International Financial Reporting
Standards are essentially identical to the requirements just noted under GAAP.

c. Controllable And Non-Controllable Costs, Direct And Common Costs

CONTROLLABLE AND NON-CONTROLLABLE COSTS

Controllable Costs

In the realm of budgets and costs, the budget should carefully designate which
departments have authority over and are responsible for which costs. If a
department has authority and responsibility for certain costs, those costs are
called controllable costs.

Non-controllable Costs

The non-controllable costs are those costs that a department doesn't have
authority over and can't change.

Because authority and accountability go together, you can only hold individuals
and units in an organization accountable for those things that they can control. If
you don’t give subordinates authority to do something, how can you hold them
accountable for doing it?
Suppose Eve asked Alfred to walk her dog for a week. However, she refused to
give Alfred the keys to her apartment, so he had no access to the dog. Because
Eve didn’t give Alfred the authority to do his job, Eve can’t possibly hold him
accountable for not walking the dog (or for the resulting mess in her apartment).

Given the organization’s goals and strategies, every required task and decision
should be under someone’s watch. Responsibility accounting allows you to hold
subordinates responsible for all tasks over which they have control. Overhead
allocations are usually inconsistent with the idea of controllable costs. Overhead
allocations use allocation rates to assign overhead costs based on number of
units, direct labor hours, or other cost drivers to individual departments. Each
department must then include a portion of this overhead as a cost in its own
budget, even though these departments usually have little or no say over how
money is spent for this overhead.

Even when one of these departments closes completely, its overhead costs often
remain and get assigned to other departments. In this way, arbitrary overhead
allocations often result, forcing departments to accept responsibility for overhead
costs that they have little or no control over — non-controllable costs.

DIRECT AND COMMON COSTS

Direct Costs

Direct fixed costs are fixed costs that can be directly traced to the segment. Just
because a fixed cost is direct does not mean that it is avoidable. There may be
depreciation, contractual obligations, and other costs that the company will not
be able to cut even if the segment is discontinued. If the fixed costs cannot be
avoided, losses will increase if the segment is discontinued because the segment
will no longer be contributing to the total contribution margin.
Common Costs

Common fixed costs are organization sustaining fixed costs that are allocated to
the segment. These fixed costs will continue even if the segment has been
eliminated; they will just be allocated to the remaining segments.

d. Performance Margin (Manager Versus Segment Performance)

RESPONSIBILITY CENTER MANAGER   EVALUATION TECHNIQUES
Cost center manager             Cost variance analysis
Revenue center manager          Revenue variance analysis
Profit center manager           Segment margin analysis
Investment center manager       Return on Investment (ROI), Residual Income
                                Model, Economic Value Added (EVA), etc.

e. Preparation Of ‘Segmented’ Income Statement

Sales xx
Variable Costs (xx)
Manufacturing Margin xx
Variable Expenses (xx)
Contribution Margin xx
Controllable Direct Fixed Costs and Expenses (xx)
Controllable Margin xx
Non-controllable Direct Fixed Costs and Expenses (xx)
Segment (Direct) Margin xx
Indirect (Allocated) Fixed Costs and Expenses (xx)
Operating Income xx
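The statement format above can be traced with a short calculation (a minimal sketch; the peso amounts are hypothetical, chosen only to illustrate how each margin is derived):

```python
# Minimal sketch of a segmented income statement (hypothetical figures).
def segmented_income_statement(sales, variable_costs, variable_expenses,
                               controllable_fixed, noncontrollable_fixed,
                               allocated_fixed):
    manufacturing_margin = sales - variable_costs          # after variable mfg. costs
    contribution_margin = manufacturing_margin - variable_expenses
    controllable_margin = contribution_margin - controllable_fixed
    segment_margin = controllable_margin - noncontrollable_fixed
    operating_income = segment_margin - allocated_fixed    # after common allocations
    return {
        "Manufacturing Margin": manufacturing_margin,
        "Contribution Margin": contribution_margin,
        "Controllable Margin": controllable_margin,
        "Segment (Direct) Margin": segment_margin,
        "Operating Income": operating_income,
    }

result = segmented_income_statement(
    sales=500_000, variable_costs=200_000, variable_expenses=50_000,
    controllable_fixed=100_000, noncontrollable_fixed=60_000,
    allocated_fixed=40_000)
print(result["Segment (Direct) Margin"])  # 90000
```

Note how the manager's responsibility stops at the controllable margin, while the segment (direct) margin measures the segment itself before any arbitrary allocation of common costs.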

f. Return On Investment (ROI), Residual Income And Economic Value Added (EVA)

Return on Investment (ROI)

ROI = Segment Profit / Segment Investment


ROI = Profit / Net Sales x Net Sales / Investment
ROI = Return on Sales x Assets Turnover
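The two-factor (DuPont) form of the formula can be verified numerically (a sketch with hypothetical figures):

```python
# ROI via the DuPont decomposition (hypothetical figures):
# ROI = (Profit / Net Sales) x (Net Sales / Investment)
profit = 120_000
net_sales = 1_000_000
investment = 600_000

return_on_sales = profit / net_sales      # profitability component
asset_turnover = net_sales / investment   # efficiency component
roi = return_on_sales * asset_turnover

# Net Sales cancels, so the product collapses back to Profit / Investment.
assert abs(roi - profit / investment) < 1e-12
print(round(roi, 4))  # 0.2
```

Splitting ROI this way shows whether a segment's return is driven by margins (return on sales) or by how hard it works its asset base (asset turnover).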

Residual Income (RI)

RI = Segment Income – Minimum Income (MI)


MI = Investment x Implied Interest Rate

Economic Value Added (EVA)

EVA = Operating Profit after Tax (OPAT) – Minimum Income


OPAT = PBIT x after-tax rate
MI = Investment x weighted average cost of capital
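Both measures can be computed side by side on the same segment (a sketch; the income, rate, and tax figures are hypothetical):

```python
# Residual Income and EVA on hypothetical segment data.
segment_income = 150_000   # pre-tax segment income
investment = 800_000
implied_rate = 0.12        # minimum required return for RI
wacc = 0.10                # weighted average cost of capital for EVA
tax_rate = 0.30
pbit = 150_000             # profit before interest and taxes

# Residual Income: income above the minimum required on the investment.
minimum_income = investment * implied_rate          # 96,000
residual_income = segment_income - minimum_income   # 54,000

# EVA: after-tax operating profit less a capital charge at the WACC.
opat = pbit * (1 - tax_rate)                        # 105,000
eva = opat - investment * wacc                      # ~ 25,000
print(round(residual_income), round(eva))  # 54000 25000
```

Unlike ROI, both measures are stated in absolute pesos, so a large segment is not penalized for accepting projects that earn more than the required rate but less than its current ROI.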

g. Rationale And Need For Transfer Price

What is a 'Transfer Price'

A transfer price is the price at which divisions of a company transact with each
other, such as the trade of supplies or labor between departments. Transfer
prices are used when individual entities of a larger multi-entity firm are treated
and measured as separately run entities. A transfer price can also be known as a
transfer cost.

BREAKING DOWN 'Transfer Price'


In managerial accounting, when different divisions of a multi-entity company are
in charge of their own profits, they are also responsible for their own return on
invested capital (ROIC). Therefore, when divisions are required to transact with
each other, a transfer price is used to determine costs. Transfer prices tend not
to differ much from the prevailing market price, because otherwise one of the
entities in the transaction loses out: it starts either buying for more than the
market price or selling below it, and this hurts its performance.

Regulations on transfer pricing ensure the fairness and accuracy of transfer
pricing among related entities. Regulations enforce an arm’s-length rule, which
states that companies must establish pricing based on similar transactions done
between parties that are not related but are dealing at arm’s length.

Documentation Required for Transfer Pricing

Transfer pricing is closely monitored within a company’s financial reporting and
requires strict documentation that is included in financial reporting documents for
auditors and regulators. This documentation is closely scrutinized; if
inappropriately documented, it can lead to added expenses for the firm in the
form of added taxation or restatement fees. These prices are closely checked for
accuracy to ensure that profits are booked appropriately within arm’s-length
pricing methods and associated taxes are paid accordingly.

Transfer prices are often used when companies sell goods within the company
but to parts of the company in other international jurisdictions. This type of
transfer pricing is common. Approximately 60% of the goods and services sold
internationally are done within companies as opposed to between unrelated
companies.

Transfer pricing multinationally has tax advantages, but regulatory authorities
frown upon using transfer pricing for tax avoidance. When transfer pricing
occurs, companies can book profits of goods and services in a different country
that may have a lower tax rate. In some cases, the transfer of goods and
services from one country to another within an interrelated company transaction
can allow a company to avoid tariffs on goods and services exchanged
internationally. The international tax laws are regulated by the Organization for
Economic Cooperation and Development (OECD), and auditing firms within each
international location audit financial statements accordingly.

h. Transfer Pricing Schemes (Minimum Transfer Price, Market-Based Transfer
Price, Cost-Based Transfer Price And Negotiated Price)

Minimum Transfer Price

A company that transfers goods between multiple divisions needs to establish a
transfer price so that each division can track its own efficiency. Since there isn't
a real market between a company's divisions, there is no way of knowing the
actual correct price to charge. There are different ways to find the minimum
acceptable transfer price. Some companies simply set the minimum as equal
to variable costs. Others add variable costs with a calculated opportunity cost.
The general economic transfer price rule is that the minimum must be greater
than or equal to the marginal cost of the selling division.

Marginal Cost to Selling Division

In economics and business management, a marginal cost is equal to the total new
expense incurred from the creation of one additional unit.

For example, suppose a hammer manufacturing company has two divisions: a
handle division and a hammer head division. The hammer head division only
begins work after receiving handles from the handle division; this means the
handle division is the selling division and the hammer head division is the buyer.

If it costs the handle division $7 to fashion its next handle (its marginal cost of
production) and ship it off, it doesn't make sense for the transfer price to be $5
(or any other number less than $7) – otherwise, the division would lose money at
the expense of money gained by the hammer head division.

Calculating Opportunity Costs

Suppose that the hammer company also sells replacement handles for its
products. In this scenario, it sells some handles through retail rather than
sending them to the hammer head division. Suppose again that the handle
division can realize a $3 profit margin on its sold handles.

Now the cost of sending a handle isn't just the $7 marginal cost of production,
but also the $3 in lost profit (opportunity cost) from not selling the handle directly
to consumers. This means the new minimum transfer price must be $10 ($3 plus
$7).
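The two scenarios reduce to a single rule: minimum transfer price = marginal cost + opportunity cost. A small sketch using the figures from the hammer example:

```python
# Minimum transfer price = marginal (variable) cost + opportunity cost,
# using the hammer-handle figures from the text.
marginal_cost = 7.0         # cost to produce and ship one more handle
lost_margin_per_unit = 3.0  # profit forgone on a retail sale (opportunity cost)

def minimum_transfer_price(marginal_cost, opportunity_cost):
    return marginal_cost + opportunity_cost

# Idle capacity: no outside sale is displaced, so opportunity cost is zero.
print(minimum_transfer_price(marginal_cost, 0.0))                   # 7.0
# Full capacity: each internal transfer displaces a retail sale.
print(minimum_transfer_price(marginal_cost, lost_margin_per_unit))  # 10.0
```

The selling division should accept any transfer price at or above this floor; anything below it makes the division worse off than its next-best alternative.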

Market-Based Transfer Price

The best transfer price is the market price. Because individual business units or
segments have to compete with the rest of the world, they have to meet the
prevailing market price to stay competitive, following the pricing discipline of a
free-enterprise system.

Cost-Based Transfer Price

A cost-based transfer price equals cost plus a lump sum or a markup
percentage. Cost may be either standard or actual. Standard cost has the
advantage of isolating variances. Actual cost gives the selling division little
incentive to control costs, so actual-cost-based transfer pricing does not promote
long-term manufacturing efficiency. In addition, cost-based transfer pricing gives
little motivation to the buying division, since the costs incurred by the selling
division may not reflect the best possible performance available in the market,
and any inefficiency is simply passed on to the buying division.
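The cost-plus scheme can be sketched as follows (the cost, markup, and lump-sum figures are hypothetical, and `cost_plus` is an illustrative helper, not a standard function):

```python
# Cost-based transfer price: cost plus a markup percentage or a lump sum
# (hypothetical figures; cost may be standard or actual).
standard_cost = 50.0

def cost_plus(cost, markup_pct=0.0, lump_sum=0.0):
    return cost * (1 + markup_pct) + lump_sum

print(cost_plus(standard_cost, markup_pct=0.20))  # 60.0
print(cost_plus(standard_cost, lump_sum=8.0))     # 58.0
```

Using standard rather than actual cost as the base keeps the selling division's variances out of the price charged to the buying division.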

Negotiated Price

Negotiated transfer price may occur when segments are free to determine the
prices at which they buy and sell internally. It is especially appropriate when
market prices are subject to rapid fluctuations. It reflects the best bargain price
acceptable to the selling and buying divisions without adversely sacrificing their
respective interests.

2. Balanced Scorecard
a. Nature And Perspectives Of Balanced Scorecard

NATURE OF BALANCED SCORECARD

Introduction

Performance measures should be established for all critical resources used by
operations, and performance measurement should lead to insights about how to
improve resource use and how to achieve the organizational changes that allow
firms to remain competitive. The traditional area of performance measurement
deals with effectiveness and efficiency in the use of capital resources, which is
within the domain of financial accounting or financial reports. In other cases,
performance evaluation was focused on budget-versus-actual comparisons. The
balanced scorecard focuses on the need for non-financial measures and on how
to establish the basis for these performance evaluations.

Many companies use both financial and non-financial measures to evaluate
performance. This approach is known as the balanced scorecard. The balanced
scorecard is an approach to performance measurement that weighs performance
measures from four perspectives.

Objectives of the Balanced Scorecard

The primary purpose of the Balanced Scorecard is to translate an organization’s
vision, mission, and strategy into a set of performance measures that put that
strategy into action with clearly stated objectives, measures, targets, and
initiatives. Involvement of the higher levels of management is the key ingredient
in successful Balanced Scorecard implementation. Only higher or senior
management understands the strategy of the whole organization and is
empowered to make the necessary decisions. Their involvement also builds
emotional commitment that is as important as their knowledge and authority.

PERSPECTIVES OF BALANCED SCORECARD

Payongayong

The four most commonly employed perspectives are as follows:

Financial Perspective – employs financial measures of performance used by
most firms (e.g., ROI).

Customer Perspective – evaluates how well the company is performing from
the viewpoint of those people who buy and use its products.

Internal Process Perspective – evaluates all critical aspects of the value
chain (product development, production, delivery and after-sale service) to
ensure that the company is operating effectively and efficiently.

Learning and Growth Perspective – evaluates how well the company
develops and retains its employees by evaluating employee skills and
satisfaction, training programs, and information dissemination.

The different perspectives are linked together so that a company can better
understand how to achieve its goals and what measures to use in evaluating
performances. Likewise, within each perspective, the balanced scorecard
identifies objectives that will contribute to attainment of strategic goals. It creates
linkages so that high-level corporate goals can be communicated down to the
lowest level of employee.

Balanced Scorecard Institute

The BSC suggests that we view the organization from four perspectives, and to
develop objectives, measures (KPIs), targets, and initiatives (actions) relative to
each of these points of view: 
 Financial: often renamed Stewardship or other more appropriate name in
the public sector, this perspective views organizational financial
performance and the use of financial resources

 Customer/Stakeholder: this perspective views organizational performance
from the point of view of the customer or other key stakeholders that the
organization is designed to serve

 Internal Process: views organizational performance through the lenses of
the quality and efficiency related to our products or services and other key
business processes

 Organizational Capacity (originally called Learning and Growth): views
organizational performance through the lenses of human capital,
infrastructure, technology, culture and other capacities that are key to
breakthrough performance

Wikipedia

First Generation
The first generation of balanced scorecard designs used a "4 perspective"
approach to identify what measures to use to track the implementation of
strategy. The original four "perspectives" proposed[6] were:

 Financial: encourages the identification of a few relevant high-level financial
measures. In particular, designers were encouraged to choose measures
that helped inform the answer to the question "How do we look to
shareholders?" Examples: cash flow, sales growth, operating income, return
on equity.[29]

 Customer: encourages the identification of measures that answer the
question "What is important to our customers and stakeholders?" Examples:
percent of sales from new products, on time delivery, share of important
customers’ purchases, ranking by important customers.
 Internal business processes: encourages the identification of measures that
answer the question "What must we excel at?" Examples: cycle time, unit
cost, yield, new product introductions.
 Learning and growth: encourages the identification of measures that answer
the question "How can we continue to improve, create value and innovate?".
Examples: time to develop new generation of products, life cycle to product
maturity, time to market versus competition.
The idea was that managers used these perspective headings to prompt the
selection of a small number of measures that informed on that aspect of the
organisation's strategic performance.[6] The perspective headings show that
Kaplan and Norton were thinking about the needs of non-divisional commercial
organisations in their initial design. These categories were not so relevant to
public sector or non-profit organisations[21], or units within complex organizations
(which might have high degrees of internal specialization), and much of the early
literature on balanced scorecard focused on suggestions of alternative
'perspectives' that might have more relevance to these groups (e.g. Butler et al.
(1997),[22] Ahn (2001),[23] Elefalke (2001),[24] Brignall (2002),[25] Irwin (2002),[26]
Radnor et al. (2003)[27]).
These suggestions were notably triggered by a recognition that different but
equivalent headings would yield alternative sets of measures, and this represents
the major design challenge faced with this type of balanced scorecard design:
justifying the choice of measures made. "Of all the measures you could have
chosen, why did you choose these?" These issues contribute to dissatisfaction
with early Balanced Scorecard designs, since if users are not confident that the
measures within the Balanced Scorecard are well chosen, they will have less
confidence in the information it provides.[30]
Although less common, these early-style balanced scorecards are still designed
and used today.[1]
In short, first generation balanced scorecards are hard to design in a way that
builds confidence that they are well designed. Because of this, many are
abandoned soon after completion.[9]

b. Financial And Non-Financial Performance Measures

FINANCIAL PERFORMANCE MEASURES

A Financial Perspective of the Balanced Scorecard

There are normally no problems with defining objectives for the financial
perspective of the Balanced Scorecard for profit-oriented organizations. Any
business has financial goals, and is accustomed to using financial metrics. For
most businesses the challenge is to shift focus from the financial perspective
alone to the Customer, Internal, and Learning & Growth perspectives.

Financial perspective for non-profit

The “financial” word in the name of the perspective might sound confusing for
non-profit organizations. They are not targeting financial outcomes, but some
social, cultural, political… goals. Still, non-profit organizations have stakeholders
that might be the members of communities that founded the organization, and in
this case financial perspective is actually a “Stakeholder Interests” perspective or
“Success” perspective.

The financial perspective is at the top of the Balanced Scorecard strategy map,
which is appropriate for for-profit organizations. Non-profits tend to put it below
other perspectives or in a separate resource part. That may be the right choice,
but consider a simple example: a “funds raised” metric is a financial metric, yet it
is not a resource metric; it is an outcome. So we still need a
“Success / Stakeholders Interests” perspective at the top, which will reflect the
desired outcomes (not necessarily financial ones).
 To define objectives for “Stakeholders Interest,” it is a good idea to
formulate the question: “How does your department define its success?”
Framework for Finance Perspective
Let’s have a look at the 3 generic strategies:
 Product Leadership Strategy.
 Customer Value Strategy.
 Operational Excellence Strategy.

One of the interpretations is to project these strategies on Revenue
Growth and Productivity objectives:
 Revenue Growth objective can be achieved by:
 Developing new revenue sources (creating new products and services). This
is primarily a projection of Product Leadership strategy.
 Improving current profitability (working on customer value proposition). This
is primarily a projection of Customer Value strategy.
 Productivity objective incorporates the projection of Operation Excellence
strategy:
 Decreasing costs.
 Resource optimization.

Depending on the scale of the business, these objectives can be formulated in
various ways:

 Small businesses might want to find and employ a new technology that
would allow them to decrease costs;
 Large companies might achieve resource optimization by sharing resources
and technologies between departments, or achieve economies of scale in
production;
Balance inside the Financial perspective

As mentioned before, a typical balanced scorecard problem is that it is not
balanced: too much attention is paid to the financial perspective. At the same
time, another imbalance is often seen inside the financial perspective itself, where
managers tend to focus on what can give faster results (Productivity objectives)
and tend to ignore the long-term opportunities provided by Revenue Growth
objectives. Make sure that this is not the case in your business scorecard.

Cascading exercise

Let’s perform a cascading exercise for some objective from the finance
perspective. For this example let’s take the “Operational Excellence” generic
strategy and its projection on the Finance perspective, particularly a “Decreasing
costs” objective.

Executive level

Objective: Decrease Costs

Lagging measure: Achieved costs reduction, %

Leading measure: Time invested in the analysis of the problem.

Initiative: Build development and marketing costs map

R&D department level



Objective: Decrease production costs

Lagging measure: Achieved production costs reduction, %

Leading measure: The number of experts interviewed about the problem.

Initiative: Build production costs map; Determine possible improvement
opportunities;

Senior engineer level

Objective: Decrease product testing costs

Lagging measure: Achieved testing costs reduction, %

Leading measure: The number of tested solutions.

Initiative: Find and implement test automation tool.

As you can see, the leading metrics are generic ones.
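The cascade above can also be laid out as a simple data structure, which makes the parallel shape of the three levels explicit (a sketch; the level and field names mirror the example above and are illustrative only):

```python
# The three-level cascade of the "Decrease Costs" objective as nested data.
# Each level carries the same four fields: objective, lagging measure,
# leading measure, and initiative.
cascade = {
    "Executive": {
        "objective": "Decrease Costs",
        "lagging": "Achieved costs reduction, %",
        "leading": "Time invested in the analysis of the problem",
        "initiative": "Build development and marketing costs map",
    },
    "R&D department": {
        "objective": "Decrease production costs",
        "lagging": "Achieved production costs reduction, %",
        "leading": "The number of experts interviewed about the problem",
        "initiative": "Build production costs map",
    },
    "Senior engineer": {
        "objective": "Decrease product testing costs",
        "lagging": "Achieved testing costs reduction, %",
        "leading": "The number of tested solutions",
        "initiative": "Find and implement test automation tool",
    },
}

for level, card in cascade.items():
    print(f"{level}: {card['objective']}")
```

Reading down any one field (say, the objectives) shows how each level narrows its parent's goal into something the level can actually act on.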

Focus on objectives, not metrics

Sometimes strategists try to be more specific with leading and lagging metrics,
or even take them from third-party lists. I would recommend being really careful
about this. In the early stages, in most cases, it is impossible to come up with
indicators (especially leading ones) that reflect the strategy properly. Such
indicators give only a mock sense of control over performance.

My recommendation is to start with some generic indicators or metrics according
to expert opinion. You are not trying to build a KPI scorecard, but a Strategy
scorecard. It is more important to have correct objectives than metrics.


Lessons learned

Here is a short summary of what we have discussed so far about the Financial
Perspective.

 One needs to formulate “success” goals for this perspective.
 For-profit companies define success in terms of shareholders’ interest;
 Non-profits define it in terms of stakeholders’ interest;
 Generic strategies can be projected on the finance perspective as:
 a Revenue Growth goal with two sub-goals;
 a Productivity goal with two sub-goals;
 Avoid focusing too much on the financial perspective and financial metrics.

Financial Perspective: How Do We Look to Shareholders?

Financial performance measures indicate whether the company’s strategy,
implementation, and execution are contributing to bottom-line improvement.
Typical financial goals have to do with profitability, growth, and shareholder
value. ECI stated its financial goals simply: to survive, to succeed, and to
prosper. Survival was measured by cash flow, success by quarterly sales growth
and operating income by division, and prosperity by increased market share by
segment and return on equity.

But given today’s business environment, should senior managers even look at
the business from a financial perspective? Should they pay attention to short-
term financial measures like quarterly sales and operating income? Many have
criticized financial measures because of their well-documented inadequacies,
their backward-looking focus, and their inability to reflect contemporary value-
creating actions. Shareholder value analysis (SVA), which forecasts future cash
flows and discounts them back to a rough estimate of current value, is an attempt
to make financial analysis more forward looking. But SVA still is based on cash
flow rather than on the activities and processes that drive cash flow.

Some critics go much further in their indictment of financial measures. They
argue that the terms of competition have changed and that traditional financial
measures do not improve customer satisfaction, quality, cycle time, and
employee motivation. In their view, financial performance is the result of
operational actions, and financial success should be the logical consequence of
doing the fundamentals well. In other words, companies should stop navigating
by financial measures. By making fundamental improvements in their operations,
the financial numbers will take care of themselves, the argument goes.

Assertions that financial measures are unnecessary are incorrect for at least two
reasons. A well-designed financial control system can actually enhance rather
than inhibit an organization’s total quality management program. (See the insert,
“How One Company Used a Daily Financial Report to Improve Quality.”) More
important, however, the alleged linkage between improved operating
performance and financial success is actually quite tenuous and uncertain. Let
us demonstrate rather than argue this point.

How One Company Used a Daily Financial Report to Improve Quality*

Over the three-year period between 1987 and 1990, a NYSE electronics
company made an order-of-magnitude improvement in quality and on-time
delivery performance. Outgoing defect rate dropped from 500 parts per million to
50, on-time delivery improved from 70% to 96% and yield jumped from 26% to
51%. Did these breakthrough improvements in quality, productivity, and
customer service provide substantial benefits to the company? Unfortunately not.
During the same three-year period, the company’s financial results showed little
improvement, and its stock price plummeted to one-third of its July 1987 value.
The considerable improvements in manufacturing capabilities had not been
translated into increased profitability. Slow releases of new products and a failure
to expand marketing to new and perhaps more demanding customers prevented
the company from realizing the benefits of its manufacturing achievements. The
operational achievements were real, but the company had failed to capitalize on
them.

The disparity between improved operational performance and disappointing
financial measures creates frustration for senior executives. This frustration is
often vented at nameless Wall Street analysts who allegedly cannot see past
quarterly blips in financial performance to the underlying long-term values these
executives sincerely believe they are creating in their organizations. But the hard
truth is that if improved performance fails to be reflected in the bottom line,
executives should reexamine the basic assumptions of their strategy and
mission. Not all long-term strategies are profitable strategies.

Measures of customer satisfaction, internal business performance, and
innovation and improvement are derived from the company’s particular view of
the world and its perspective on key success factors. But that view is not
necessarily correct. Even an excellent set of balanced scorecard measures does
not guarantee a winning strategy. The balanced scorecard can only translate a
company’s strategy into specific measurable objectives. A failure to convert
improved operational performance, as measured in the scorecard, into improved
financial performance should send executives back to their drawing boards to
rethink the company’s strategy or its implementation plans.

As one example, disappointing financial measures sometimes occur because
companies don’t follow up their operational improvements with another round of
actions. Quality and cycle-time improvements can create excess capacity.
Managers should be prepared to either put the excess capacity to work or else
get rid of it. The excess capacity must be either used by boosting revenues or
eliminated by reducing expenses if operational improvements are to be brought
down to the bottom line.

As companies improve their quality and response time, they eliminate the need
to build, inspect, and rework out-of-conformance products or to reschedule and
expedite delayed orders. Eliminating these tasks means that some of the people
who perform them are no longer needed. Companies are understandably
reluctant to lay off employees, especially since the employees may have been
the source of the ideas that produced the higher quality and reduced cycle time.
Layoffs are a poor reward for past improvement and can damage the morale of
remaining workers, curtailing further improvement. But companies will not realize
all the financial benefits of their improvements until their employees and facilities
are working to capacity—or the companies confront the pain of downsizing to
eliminate the expenses of the newly created excess capacity.

If executives fully understood the consequences of their quality and cycle-time
improvement programs, they might be more aggressive about using the newly
created capacity. To capitalize on this self-created new capacity, however,
companies must expand sales to existing customers, market existing products to
entirely new customers (who are now accessible because of the improved quality
and delivery performance), and increase the flow of new products to the market.
These actions can generate added revenues with only modest increases in
operating expenses. If marketing and sales and R&D do not generate the
increased volume, the operating improvements will stand as excess capacity,
redundancy, and untapped capabilities. Periodic financial statements remind
executives that improved quality, response time, productivity, or new products
benefit the company only when they are translated into improved sales and
market share, reduced operating expenses, or higher asset turnover.

Ideally, companies should specify how improvements in quality, cycle time,
quoted lead times, delivery, and new product introduction will lead to higher
market share, operating margins, and asset turnover or to reduced operating
expenses. The challenge is to learn how to make such explicit linkage between
operations and finance. Exploring the complex dynamics will likely require
simulation and cost modeling.

NON-FINANCIAL PERFORMANCE MEASURES

The Balanced Scorecard—Measures that Drive Performance


 Robert S. Kaplan
 David P. Norton

What you measure is what you get. Senior executives understand that their
organization’s measurement system strongly affects the behavior of managers
and employees. Executives also understand that traditional financial accounting
measures like return-on-investment and earnings-per-share can give misleading
signals for continuous improvement and innovation—activities today’s
competitive environment demands. The traditional financial performance
measures worked well for the industrial era, but they are out of step with the
skills and competencies companies are trying to master today.

As managers and academic researchers have tried to remedy the inadequacies
of current performance measurement systems, some have focused on making
financial measures more relevant. Others have said, “Forget the financial
measures. Improve operational measures like cycle time and defect rates; the
financial results will follow.” But managers should not have to choose between
financial and operational measures. In observing and working with many
companies, we have found that senior executives do not rely on one set of
measures to the exclusion of the other. They realize that no single measure can
provide a clear performance target or focus attention on the critical areas of the
business. Managers want a balanced presentation of both financial and
operational measures.

During a year-long research project with 12 companies at the leading edge of
performance measurement, we devised a “balanced scorecard”—a set of
measures that gives top managers a fast but comprehensive view of the
business. The balanced scorecard includes financial measures that tell the
results of actions already taken. And it complements the financial measures with
operational measures on customer satisfaction, internal processes, and the
organization’s innovation and improvement activities—operational measures that
are the drivers of future financial performance.

Think of the balanced scorecard as the dials and indicators in an airplane
cockpit. For the complex task of navigating and flying an airplane, pilots need
detailed information about many aspects of the flight. They need information on
fuel, air speed, altitude, bearing, destination, and other indicators that summarize
the current and predicted environment. Reliance on one instrument can be fatal.
Similarly, the complexity of managing an organization today requires that
managers be able to view performance in several areas simultaneously.

The balanced scorecard allows managers to look at the business from four
important perspectives. (See the exhibit “The Balanced Scorecard Links
Performance Measures.”) It provides answers to four basic questions:

The Balanced Scorecard Links Performance Measures

 How do customers see us? (customer perspective)
 What must we excel at? (internal perspective)
 Can we continue to improve and create value? (innovation and learning
perspective)
 How do we look to shareholders? (financial perspective)

While giving senior managers information from four different perspectives, the
balanced scorecard minimizes information overload by limiting the number of
measures used. Companies rarely suffer from having too few measures. More
commonly, they keep adding new measures whenever an employee or a
consultant makes a worthwhile suggestion. One manager described the
proliferation of new measures at his company as its “kill another tree program.”
The balanced scorecard forces managers to focus on the handful of measures
that are most critical.

Several companies have already adopted the balanced scorecard. Their early
experiences using the scorecard have demonstrated that it meets several
managerial needs. First, the scorecard brings together, in a single management
report, many of the seemingly disparate elements of a company’s competitive
agenda: becoming customer oriented, shortening response time, improving
quality, emphasizing teamwork, reducing new product launch times, and
managing for the long term.

Second, the scorecard guards against suboptimization. By forcing senior
managers to consider all the important operational measures together, the
balanced scorecard lets them see whether improvement in one area may have
been achieved at the expense of another. Even the best objective can be
achieved badly. Companies can reduce time to market, for example, in two very
different ways: by improving the management of new product introductions or by
releasing only products that are incrementally different from existing products.
Spending on setups can be cut either by reducing setup times or by increasing
batch sizes. Similarly, production output and first-pass yields can rise, but the
increases may be due to a shift in the product mix to more standard, easy-to-
produce but lower-margin products.

We will illustrate how companies can create their own balanced scorecard with
the experiences of one semiconductor company—let’s call it Electronic Circuits
Inc. ECI saw the scorecard as a way to clarify, simplify, and then operationalize
the vision at the top of the organization. The ECI scorecard was designed to
focus the attention of its top executives on a short list of critical indicators of
current and future performance.

Customer Perspective: How Do Customers See Us?

Many companies today have a corporate mission that focuses on the customer.
“To be number one in delivering value to customers” is a typical mission
statement. How a company is performing from its customers’ perspective has
become, therefore, a priority for top management. The balanced scorecard
demands that managers translate their general mission statement on customer
service into specific measures that reflect the factors that really matter to
customers.

Customers’ concerns tend to fall into four categories: time, quality, performance
and service, and cost. Lead time measures the time required for the company to
meet its customers’ needs. For existing products, lead time can be measured
from the time the company receives an order to the time it actually delivers the
product or service to the customer. For new products, lead time represents the
time to market, or how long it takes to bring a new product from the product
definition stage to the start of shipments. Quality measures the defect level of
incoming products as perceived and measured by the customer. Quality could
also measure on-time delivery, the accuracy of the company’s delivery forecasts.
The combination of performance and service measures how the company’s
products or services contribute to creating value for its customers.

To put the balanced scorecard to work, companies should articulate goals for
time, quality, and performance and service and then translate these goals into
specific measures. Senior managers at ECI, for example, established general
goals for customer performance: get standard products to market sooner,
improve customers’ time to market, become customers’ supplier of choice
through partnerships with them, and develop innovative products tailored to
customer needs. The managers translated these general goals into four specific
goals and identified an appropriate measure for each. (See the exhibit “ECI’s
Balanced Scorecard.”)

ECI’s Balanced Business Scorecard

To track the specific goal of providing a continuous stream of attractive solutions,
ECI measured the percent of sales from new products and the percent of sales
from proprietary products. That information was available internally. But certain
other measures forced the company to get data from outside. To assess whether
the company was achieving its goal of providing reliable, responsive supply, ECI
turned to its customers. When it found that each customer defined “reliable,
responsive supply” differently, ECI created a database of the factors as defined
by each of its major customers. The shift to external measures of performance
with customers led ECI to redefine “on time” so it matched customers’
expectations. Some customers defined “on-time” as any shipment that arrived
within five days of scheduled delivery; others used a nine-day window. ECI itself
had been using a seven-day window, which meant that the company was not
satisfying some of its customers and overachieving at others. ECI also asked its
top ten customers to rank the company as a supplier overall.
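The window arithmetic behind ECI's finding can be sketched in a few lines of Python. The days-late figures below are hypothetical, invented for illustration; the point is that the same shipments score very differently depending on whether a five-, seven-, or nine-day window defines "on time":

```python
# Hypothetical days-late figures for eight shipments (illustrative only).
deviations_days = [0, 1, 3, 4, 6, 6, 8, 10]

def on_time_rate(deviations, window_days):
    """Fraction of shipments arriving within `window_days` of schedule."""
    return sum(1 for d in deviations if d <= window_days) / len(deviations)

# Five-day window (strict customer), seven-day (ECI's old standard), nine-day.
for window in (5, 7, 9):
    print(f"{window}-day window: {on_time_rate(deviations_days, window):.0%}")
```

Under these assumed figures the same deliveries are 50% on time for a five-day customer but 87.5% on time for a nine-day customer, which is exactly why ECI had to record each major customer's own definition.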

Depending on customers’ evaluations to define some of a company’s
performance measures forces that company to view its performance through
customers’ eyes. Some companies hire third parties to perform anonymous
customer surveys, resulting in a customer-driven report card. The J.D. Power
quality survey, for example, has become the standard of performance for the
automobile industry, while the Department of Transportation’s measurement of
on-time arrivals and lost baggage provides external standards for airlines.
Benchmarking procedures are yet another technique companies use to compare
their performance against competitors’ best practice. Many companies have
introduced “best of breed” comparison programs: the company looks to one
industry to find, say, the best distribution system, to another industry for the
lowest cost payroll process, and then forms a composite of those best practices
to set objectives for its own performance.

In addition to measures of time, quality, and performance and service,
companies must remain sensitive to the cost of their products. But customers
see price as only one component of the cost they incur when dealing with their
suppliers. Other supplier-driven costs range from ordering, scheduling delivery,
and paying for the materials; to receiving, inspecting, handling, and storing the
materials; to the scrap, rework, and obsolescence caused by the materials; and to
schedule disruptions (expediting and value of lost output) from incorrect
deliveries. An excellent supplier may charge a higher unit price for products than
other vendors but nonetheless be a lower cost supplier because it can deliver
defect-free products in exactly the right quantities at exactly the right time directly
to the production process and can minimize, through electronic data interchange,
the administrative hassles of ordering, invoicing, and paying for materials.

Internal Business Perspective: What Must We Excel at?

Customer-based measures are important, but they must be translated into
measures of what the company must do internally to meet its customers’
expectations. After all, excellent customer performance derives from processes,
decisions, and actions occurring throughout an organization. Managers need to
focus on those critical internal operations that enable them to satisfy customer
needs. The second part of the balanced scorecard gives managers that internal
perspective.

The internal measures for the balanced scorecard should stem from the business
processes that have the greatest impact on customer satisfaction—factors that
affect cycle time, quality, employee skills, and productivity, for example.
Companies should also attempt to identify and measure their company’s core
competencies, the critical technologies needed to ensure continued market
leadership. Companies should decide what processes and competencies they
must excel at and specify measures for each.

Managers at ECI determined that submicron technology capability was critical to
its market position. They also decided that they had to focus on manufacturing
excellence, design productivity, and new product introduction. The company
developed operational measures for each of these four internal business goals.

To achieve goals on cycle time, quality, productivity, and cost, managers must
devise measures that are influenced by employees’ actions. Since much of the
action takes place at the department and workstation levels, managers need to
decompose overall cycle time, quality, product, and cost measures to local
levels. That way, the measures link top management’s judgment about key
internal processes and competencies to the actions taken by individuals that
affect overall corporate objectives. This linkage ensures that employees at lower
levels in the organization have clear targets for actions, decisions, and
improvement activities that will contribute to the company’s overall mission.

Information systems play an invaluable role in helping managers disaggregate
the summary measures. When an unexpected signal appears on the balanced
scorecard, executives can query their information system to find the source of
the trouble. If the aggregate measure for on-time delivery is poor, for example,
executives with a good information system can quickly look behind the aggregate
measure until they can identify late deliveries, day by day, by a particular plant to
an individual customer.

If the information system is unresponsive, however, it can be the Achilles’ heel of
performance measurement. Managers at ECI are currently limited by the
absence of such an operational information system. Their greatest concern is
that the scorecard information is not timely; reports are generally a week behind
the company’s routine management meetings, and the measures have yet to be
linked to measures for managers and employees at lower levels of the
organization. The company is in the process of developing a more responsive
information system to eliminate this constraint.

Innovation and Learning Perspective: Can We Continue to Improve and Create Value?

The customer-based and internal business process measures on the balanced
scorecard identify the parameters that the company considers most important for
competitive success. But the targets for success keep changing. Intense global
competition requires that companies make continual improvements to
their existing products and processes and have the ability to introduce entirely
new products with expanded capabilities.

A company’s ability to innovate, improve, and learn ties directly to the company’s
value. That is, only through the ability to launch new products, create more value
for customers, and improve operating efficiencies continually can a company
penetrate new markets and increase revenues and margins—in short, grow and
thereby increase shareholder value.

ECI’s innovation measures focus on the company’s ability to develop and
introduce standard products rapidly, products that the company expects will form
the bulk of its future sales. Its manufacturing improvement measure focuses on
new products; the goal is to achieve stability in the manufacturing of new
products rather than to improve manufacturing of existing products. Like many
other companies, ECI uses the percent of sales from new products as one of its
innovation and improvement measures. If sales from new products are trending
downward, managers can explore whether problems have arisen in new product
design or new product introduction.

In addition to measures on product and process innovation, some companies
overlay specific improvement goals for their existing processes. For example,
Analog Devices, a Massachusetts-based manufacturer of specialized
semiconductors, expects managers to improve their customer and internal
business process performance continuously. The company estimates specific
rates of improvement for on-time delivery, cycle time, defect rate, and yield.

Other companies, like Milliken & Co., require that managers make improvements
within a specific time period. Milliken did not want its “associates” (Milliken’s word
for employees) to rest on their laurels after winning the Baldrige Award.
Chairman and CEO Roger Milliken asked each plant to implement a “ten-four”
improvement program: measures of process defects, missed deliveries, and
scrap were to be reduced by a factor of ten over the next four years. These
targets emphasize the role for continuous improvement in customer satisfaction
and internal business processes.
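The arithmetic behind a "ten-four" target can be checked with a short sketch (the starting defect level below is hypothetical): reducing a measure by a factor of ten over four years means dividing it by 10 ** (1/4), roughly 1.78, every year.

```python
# Hypothetical starting level: 1,000 process defects per million units.
start_level = 1000.0
annual_factor = 10 ** (1 / 4)  # ~1.78x improvement required each year

level = start_level
for year in range(1, 5):
    level /= annual_factor
    print(f"year {year}: {level:.1f} defects per million")

# After four equal annual steps the level is one-tenth of where it began.
```

Put differently, the program demands that each year's defect level be only about 56% of the prior year's, a much steeper pace than typical incremental improvement.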

In an article on Oct. 16, 2000, in the Financial Times’ Mastering Management
series, Wharton accounting professors Christopher Ittner and David
Larcker suggest that financial data have limitations as a measure of company
performance. The two note that other measures, such as quality, may be better
at forecasting, but can be difficult to implement. Below is the text of their article.

Choosing performance measures is a challenge. Performance measurement
systems play a key role in developing strategy, evaluating the achievement of
organizational objectives and compensating managers. Yet many managers feel
traditional financially oriented systems no longer work adequately. A recent
survey of U.S. financial services companies found most were not satisfied with
their measurement systems. They believed there was too much emphasis on
financial measures such as earnings and accounting returns and little emphasis
on drivers of value such as customer and employee satisfaction, innovation and
quality.

In response, companies are implementing new performance measurement
systems. A third of financial services companies, for example, made a major
change in their performance measurement system during the past two years and
39% plan a major change within two years.

Inadequacies in financial performance measures have led to innovations ranging
from non-financial indicators of “intangible assets” and “intellectual capital” to
“balanced scorecards” of integrated financial and non-financial measures. This
article discusses the advantages and disadvantages of non-financial
performance measures and offers suggestions for implementation.

Advantages

Non-financial measures offer four clear advantages over measurement systems
based on financial data. First of these is a closer link to long-term organizational
strategies. Financial evaluation systems generally focus on annual or short-term
performance against accounting yardsticks. They do not deal with progress
relative to customer requirements or competitors, nor other non-financial
objectives that may be important in achieving profitability, competitive strength
and longer-term strategic goals. For example, new product development or
expanding organizational capabilities may be important strategic goals, but may
hinder short-term accounting performance.

By supplementing accounting measures with non-financial data about strategic
performance and implementation of strategic plans, companies can
communicate objectives and provide incentives for managers to address long-
term strategy.

Second, critics of traditional measures argue that drivers of success in many
industries are “intangible assets” such as intellectual capital and customer
loyalty, rather than the “hard assets” allowed on to balance sheets. Although it is
difficult to quantify intangible assets in financial terms, non-financial data can
provide indirect, quantitative indicators of a firm’s intangible assets.

One study examined the ability of non-financial indicators of “intangible assets” to
explain differences in US companies’ stock market values. It found that
measures related to innovation, management capability, employee relations,
quality and brand value explained a significant proportion of a company’s value,
even allowing for accounting assets and liabilities. By excluding these intangible
assets, financially oriented measurement can encourage managers to make
poor, even harmful, decisions.

Third, non-financial measures can be better indicators of future financial
performance. Even when the ultimate goal is maximizing financial performance,
current financial measures may not capture long-term benefits from decisions
made now. Consider, for example, investments in research and development or
customer satisfaction programs. Under U.S. accounting rules, research and
development expenditures and marketing costs must be charged for in the period
they are incurred, so reducing profits. But successful research improves future
profits if it can be brought to market.

Similarly, investments in customer satisfaction can improve subsequent
economic performance by increasing revenues and loyalty of existing customers,
attracting new customers and reducing transaction costs. Non-financial data can
provide the missing link between these beneficial activities and financial results
by providing forward-looking information on accounting or stock performance.
For example, interim research results or customer indices may offer an indication
of future cash flows that would not be captured otherwise.

Finally, the choice of measures should be based on providing information about
managerial actions and the level of “noise” in the measures. Noise refers to
changes in the performance measure that are beyond the control of the manager
or organization, ranging from changes in the economy to luck (good or bad).
Managers must be aware of how much success is due to their actions or they will
not have the signals they need to maximize their effect on performance. Because
many non-financial measures are less susceptible to external noise than
accounting measures, their use may improve managers’ performance by
providing more precise evaluation of their actions. This also lowers the risk
imposed on managers when determining pay.

Disadvantages

Although there are many advantages to non-financial performance measures,
they are not without drawbacks. Research has identified five primary limitations.
Time and cost have been a problem for some companies. They have found the
costs of a system that tracks a large number of financial and non-financial
measures can be greater than its benefits. Development can consume
considerable time and expense, not least of which is selling the system to
skeptical employees who have learned to operate under existing rules. A greater
number of diverse performance measures frequently requires significant
investment in information systems to draw information from multiple (and often
incompatible) databases.

Evaluating performance using multiple measures that can conflict in the short
term can also be time-consuming. One bank that adopted a performance
evaluation system using multiple accounting and non-financial measures saw the
time required for area directors to evaluate branch managers increase from less
than one day per quarter to six days.

Bureaucracies can cause the measurement process to degenerate into
mechanistic exercises that add little to reaching strategic goals. For example,
shortly after becoming the first US company to win Japan’s prestigious Deming
Prize for quality improvement, Florida Power and Light found that employees
believed the company’s quality improvement process placed too much emphasis
on reporting, presenting and discussing a myriad of quality indicators. They felt
this deprived them of time that could be better spent serving customers. The
company responded by eliminating most quality reviews, reducing the number of
indicators tracked and minimizing reports and meetings.

The second drawback is that, unlike accounting measures, non-financial data
are measured in many ways; there is no common denominator. Evaluating
performance or making trade-offs between attributes is difficult when some are
denominated in time, some in quantities or percentages, and some in arbitrary
ways.

Many companies attempt to overcome this by rating each performance measure
in terms of its strategic importance (from, say, not important to extremely
important) and then evaluating overall performance based on a weighted
average of the measures. Others assign arbitrary weightings to the various
goals. One major car manufacturer, for example, structures executive bonuses
as follows: 40% based on warranty repairs per 100 vehicles sold; 20% on customer
satisfaction surveys; 20% on market share; and 20% on accounting performance
(pre-tax earnings). However, like all subjective assessments, these methods can
lead to considerable error.
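The bonus scheme above reduces to a simple weighted average. A minimal sketch, where the weights are the ones quoted for the car manufacturer but the individual measure scores are hypothetical:

```python
# Weights from the quoted bonus scheme; scores (0-100 scale) are invented.
weights = {"warranty_repairs": 0.40, "customer_satisfaction": 0.20,
           "market_share": 0.20, "pretax_earnings": 0.20}
scores = {"warranty_repairs": 80, "customer_satisfaction": 70,
          "market_share": 60, "pretax_earnings": 90}

# Overall performance is the weight-times-score sum across measures.
overall = sum(weights[m] * scores[m] for m in weights)
print(f"weighted performance score: {overall:.1f}")
```

The subjectivity the authors warn about lives in the weights themselves: changing the 40/20/20/20 split reorders managers' rankings without any change in underlying performance.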

Lack of causal links is a third issue. Many companies adopt non-financial
measures without articulating the relations between the measures or verifying
that they have a bearing on accounting and stock price performance. Unknown
or unverified causal links create two problems when evaluating performance:
incorrect measures focus attention on the wrong objectives and improvements
cannot be linked to later outcomes. Xerox, for example, spent millions of dollars
on customer surveys, under the assumption that improvements in satisfaction
translated into better financial performance. Later analysis found no such
association. As a result, Xerox shifted to a customer loyalty measure that was
found to be a leading indicator of financial performance.

The lack of an explicit causal model of the relations between measures also
contributes to difficulties in evaluating their relative importance. Without knowing
the size and timing of associations among measures, companies find it difficult to
make decisions or measure success based on them.

Fourth on the list of problems with non-financial measures is lack of statistical
reliability – whether a measure actually represents what it purports to represent,
rather than random “measurement error”. Many non-financial data such as
satisfaction measures are based on surveys with few respondents and few
questions. These measures generally exhibit poor statistical reliability, reducing
their ability to discriminate superior performance or predict future financial
results.

Finally, although financial measures are unlikely to capture fully the many
dimensions of organizational performance, implementing an evaluation system
with too many measures can lead to “measurement disintegration”. This occurs
when an overabundance of measures dilutes the effect of the measurement
process. Managers chase a variety of measures simultaneously, while achieving
little gain in the main drivers of success.

Once managers have determined that the expected benefits from non-financial
data outweigh the costs, three steps can be used to select and implement
appropriate measures.

Understand Value Drivers
The starting point is understanding a company’s value drivers, the factors that
create stakeholder value. Once known, these factors determine which measures
contribute to long-term success and so how to translate corporate objectives into
measures that guide managers’ actions.

While this seems intuitive, experience indicates that companies do a poor job
determining and articulating these drivers. Managers tend to use one of three
methods to identify value drivers, the most common being intuition. However,
executives’ rankings of value drivers may not reflect their true importance. For
example, many executives rate environmental performance and quality as
relatively unimportant drivers of long-term financial performance. In contrast,
statistical analyses indicate these dimensions are strongly associated with a
company’s market value.

A second method is to use standard classifications such as financial, internal
business process, customer, learning and growth categories. While these may
be appropriate, other non-financial dimensions may be more important,
depending on the organization’s strategy, competitive environment and
objectives. Moreover, these categories do little to help determine weightings for
each dimension.

Perhaps the most sophisticated method of determining value drivers is statistical
analysis of the leading and lagging indicators of financial performance. The
resulting “causal business model” can help determine which measures predict
future financial performance and can assist in assigning weightings to measures
based on the strength of the statistical relation. Unfortunately, relatively few
companies develop such causal business models when selecting their
performance measures.

Review Consistencies
Most companies track hundreds, if not thousands, of non-financial measures in
their day-to-day operations. To avoid “reinventing the wheel”, an inventory of
current measures should be made. Once measures have been documented,
their value for performance measurement can be assessed. The issue at this
stage is the extent to which current measures are aligned with the company’s
strategies and value drivers. One method for assessing this alignment is “gap
analysis”. Gap analysis requires managers to rank performance measures on at
least two dimensions: their importance to strategic objectives and the importance
currently placed on them.
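The gap-analysis step can be sketched in a few lines. The measure names and the 1-to-5 ratings below are hypothetical; each measure is rated on both dimensions, and the difference flags value drivers that are important but under-measured:

```python
# Hypothetical ratings: (strategic importance, current measurement emphasis),
# each on a 1 (low) to 5 (high) scale.
ratings = {
    "customer satisfaction": (5, 2),
    "short-term earnings":   (3, 5),
    "defect rate":           (4, 3),
}

# Positive gap = important but under-measured; negative = over-measured.
gaps = {name: importance - emphasis
        for name, (importance, emphasis) in ratings.items()}

for name, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:22s} gap = {gap:+d}")
```

In this invented example, customer satisfaction shows the largest positive gap, mirroring the survey finding that customer-related measurement lags well behind its rated importance.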

Our survey of 148 US financial services companies — a joint research project
sponsored by the Cap Gemini Ernst & Young Center for Business Innovation and
the Wharton Research Program on Value Creation in Organizations – found
significant “measurement gaps” for many non-financial measures. For example,
72% of companies said customer-related performance was an extremely
important driver of long-term success, against 31% who chose short-term
financial performance. However, the quality of short-term financial measurement
is considerably better than measurement of customer satisfaction. Similar
disparities exist for non-financial measures related to employee performance,
operational results, quality, alliances, supplier relations, innovation, community
and the environment. More important, stock market and long-term accounting
performance are both higher when these measurement gaps are smaller.

Integrate Measures
Finally, after measures are chosen, they must become an integral part of
reporting and performance evaluation if they are to affect employee behavior and
organizational performance. This is not easy. Since the choice of performance
measures has a substantial impact on employees’ careers and pay, controversy
is bound to emerge no matter how appropriate the measures. Many companies
have failed to benefit from non-financial performance measures through being
reluctant to take this step.

Conclusion
Although non-financial measures are increasingly important in decision-making
and performance evaluation, companies should not simply copy measures used
by others. The choice of measures must be linked to factors such as corporate
strategy, value drivers, organizational objectives and the competitive
environment. In addition, companies should remember that performance
measurement choice is a dynamic process – measures may be appropriate
today, but the system needs to be continually reassessed as strategies and
competitive environments evolve.

D. Management Accounting Concepts & Techniques For Decision Making


1. Quantitative Techniques
a. Regression And Correlation Analysis

Regression analysis involves identifying the relationship between a dependent
variable and one or more independent variables. A model of the relationship is
hypothesized, and estimates of the parameter values are used to develop an
estimated regression equation. Various tests are then employed to determine if
the model is satisfactory. If the model is deemed satisfactory, the estimated
regression equation can be used to predict the value of the dependent variable
given values for the independent variables.

Regression model.
In simple linear regression, the model used to describe the relationship between
a single dependent variable y and a single independent variable x is
y = a0 + a1x + ε. a0 and a1 are referred to as the model parameters, and ε is a
probabilistic error term that accounts for the variability in y that cannot be
explained by the linear relationship with x. If the error term were not present,
the model would be deterministic; in that case, knowledge of the value of x
would be sufficient to determine the value of y.

Least squares method.

Either a simple or multiple regression model is initially posed as a hypothesis
concerning the relationship among the dependent and independent variables.
The least squares method is the most widely used procedure for developing
estimates of the model parameters.
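In the notation used below (b0 and b1 as the estimates of the parameters a0 and a1), the least squares method picks the estimates that minimize the sum of squared residuals, which yields the familiar closed-form solution:

```latex
\min_{b_0,\,b_1}\ \sum_{i=1}^{n}\bigl(y_i - b_0 - b_1 x_i\bigr)^2
\qquad\Longrightarrow\qquad
b_1 = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n}(x_i-\bar{x})^2},
\qquad
b_0 = \bar{y} - b_1\bar{x}
```

where x̄ and ȳ are the sample means of the independent and dependent variables.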

As an illustration of regression analysis and the least squares method, suppose a
university medical center is investigating the relationship between stress and
blood pressure. Assume that both a stress test score and a blood pressure
reading have been recorded for a sample of 20 patients. The data can be shown
graphically in a scatter diagram. Values of the
independent variable, stress test score, are given on the horizontal axis, and
values of the dependent variable, blood pressure, are shown on the vertical axis.
The line passing through the data points is the graph of the estimated regression
equation: y = 42.3 + 0.49x. The parameter estimates, b0 = 42.3 and b1 = 0.49,
were obtained using the least squares method.
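The calculation can be reproduced with a short, dependency-free sketch. The five (x, y) pairs below are hypothetical, since the article's 20-patient data set is not given, so the resulting estimates differ from b0 = 42.3 and b1 = 0.49:

```python
# Hypothetical stress test scores (x) and blood pressure readings (y).
xs = [50, 60, 70, 80, 90]
ys = [65, 75, 78, 85, 90]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Least squares estimates: b1 = Sxy / Sxx, b0 = y_bar - b1 * x_bar.
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
sxx = sum((x - x_bar) ** 2 for x in xs)
b1 = sxy / sxx
b0 = y_bar - b1 * x_bar

print(f"estimated regression equation: y = {b0:.2f} + {b1:.2f}x")
```

With the fitted equation in hand, predicting blood pressure for a given stress score is just b0 + b1 * x, exactly as described above.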

Correlation.

Correlation and regression analysis are related in the sense that both deal with
relationships among variables. The correlation coefficient is a measure of linear
association between two variables. Values of the correlation coefficient are
always between -1 and +1. A correlation coefficient of +1 indicates that two
variables are perfectly related in a positive linear sense, a correlation coefficient
of -1 indicates that two variables are perfectly related in a negative linear sense,
and a correlation coefficient of 0 indicates that there is no linear relationship
between the two variables. For simple linear regression, the sample correlation
coefficient is the square root of the coefficient of determination, with the sign of
the correlation coefficient being the same as the sign of b1, the coefficient of x
in the estimated regression equation.
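These properties can be verified numerically. Using hypothetical paired data, this sketch computes r from first principles and checks that r² equals the coefficient of determination and that r carries the same sign as the fitted slope:

```python
import math

# Hypothetical paired observations.
xs = [50, 60, 70, 80, 90]
ys = [65, 75, 78, 85, 90]

n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
sxx = sum((x - x_bar) ** 2 for x in xs)
syy = sum((y - y_bar) ** 2 for y in ys)

# Pearson product-moment correlation coefficient.
r = sxy / math.sqrt(sxx * syy)

# Coefficient of determination from the fitted line y = b0 + b1*x.
b1 = sxy / sxx
b0 = y_bar - b1 * x_bar
sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
r_squared = 1 - sse / syy

print(f"r = {r:.4f}, r^2 = {r_squared:.4f}")
```

For these assumed data r is close to +1, indicating a strong positive linear association; an r near 0 would indicate no linear relationship even if some non-linear one existed.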

Neither regression nor correlation analyses can be interpreted as establishing
cause-and-effect relationships. They can indicate only how or to what extent
variables are associated with each other. The correlation coefficient measures
only the degree of linear association between two variables. Any conclusions
about a cause-and-effect relationship must be based on the judgment of the
analyst.

Introduction to Correlation and Regression Analysis

In this section we will first discuss correlation analysis, which is used to quantify
the association between two continuous variables (e.g., between an independent
and a dependent variable or between two independent variables). Regression
analysis is a related technique to assess the relationship between an outcome
variable and one or more risk factors or confounding variables. The outcome
variable is also called the response or dependent variable and the risk factors
and confounders are called the predictors, or explanatory or independent
variables. In regression analysis, the dependent variable is denoted "y" and
independent variables are denoted by "x".

[NOTE: The term "predictor" can be misleading if it is interpreted as the
ability to predict even beyond the limits of the data. Also, the term
"explanatory variable" might give an impression of a causal effect in a
situation in which inferences should be limited to identifying associations.
The terms "independent" and "dependent" variable are less subject to
these interpretations as they do not strongly imply cause and effect.]

Correlation Analysis

In correlation analysis, we estimate a sample correlation coefficient, more
specifically the Pearson Product Moment correlation coefficient. The sample
correlation coefficient, denoted r, ranges between -1 and +1 and quantifies the
direction and strength of the linear association between the two variables. The
correlation between two variables can be positive (i.e., higher levels of one
variable are associated with higher levels of the other) or negative (i.e., higher
levels of one variable are associated with lower levels of the other).

The sign of the correlation coefficient indicates the direction of the
association. The magnitude of the correlation coefficient indicates the strength of
the association.

For example, a correlation of r = 0.9 suggests a strong, positive association
between two variables, whereas a correlation of r = -0.2 suggests a weak,
negative association. A correlation close to zero suggests no linear association
between two continuous variables.


It is important to note that there may be a non-linear association between two
continuous variables, but computation of a correlation coefficient does not detect
this. Therefore, it is always important to evaluate the data carefully before
computing a correlation coefficient. Graphical displays are particularly useful to
explore associations between variables.

The figure below shows four hypothetical scenarios in which one continuous
variable is plotted along the X-axis and the other along the Y-axis.

 
 Scenario 1 depicts a strong positive association (r=0.9), similar to what
we might see for the correlation between infant birth weight and birth
length.
 Scenario 2 depicts a weaker association (r=0.2) that we might expect to
see between age and body mass index (which tends to increase with
age).
 Scenario 3 might depict the lack of association (r approximately 0)
between the extent of media exposure in adolescence and age at which
adolescents initiate sexual activity.
 Scenario 4 might depict the strong negative association (r= -0.9)
generally observed between the number of hours of aerobic exercise
per week and percent body fat.

Example - Correlation of Gestational Age and Birth Weight

A small study is conducted involving 17 infants to investigate the association
between gestational age at birth, measured in weeks, and birth weight,
measured in grams.

We wish to estimate the association between gestational age and infant birth
weight. In this example, birth weight is the dependent variable and gestational
age is the independent variable. Thus y=birth weight and x=gestational age. The
data are displayed in a scatter diagram in the figure below.

Each point represents an (x,y) pair (in this case the gestational age, measured in
weeks, and the birth weight, measured in grams). Note that the independent
variable is on the horizontal axis (or X-axis), and the dependent variable is on the
vertical axis (or Y-axis). The scatter plot shows a positive or direct association
between gestational age and birth weight. Infants with shorter gestational ages
are more likely to be born with lower weights and infants with longer gestational
ages are more likely to be born with higher weights.

The formula for the sample correlation coefficient is

r = Cov(x,y) ÷ √(sx² × sy²)

where Cov(x,y) is the covariance of x and y, defined as

Cov(x,y) = Σ(x − x̄)(y − ȳ) ÷ (n − 1)

and sx² and sy² are the sample variances of x and y, defined as

sx² = Σ(x − x̄)² ÷ (n − 1) and sy² = Σ(y − ȳ)² ÷ (n − 1)

The variances of x and y measure the variability of the x scores and y scores
around their respective sample means (x̄ and ȳ, considered separately). The
covariance measures the variability of the (x,y) pairs around the mean of x and
mean of y, considered simultaneously.

To compute the sample correlation coefficient, we need to compute the variance
of gestational age, the variance of birth weight and also the covariance of
gestational age and birth weight.

We first summarize the gestational age data. The mean gestational age is:

To compute the variance of gestational age, we need to sum the squared
deviations (or differences) between each observed gestational age and the mean
gestational age. The computations are summarized below.

The variance of gestational age is:



Next, we summarize the birth weight data. The mean birth weight is:

The variance of birth weight is computed just as we did for gestational age as
shown in the table below.

The variance of birth weight is:

Next we compute the covariance,



To compute the covariance of gestational age and birth weight, we need to
multiply the deviation from the mean gestational age by the deviation from the
mean birth weight for each participant.

The computations are summarized below. Notice that we simply copy the
deviations from the mean gestational age and birth weight from the two tables
above into the table below and multiply.

The covariance of gestational age and birth weight is:

We now compute the sample correlation coefficient:

Not surprisingly, the sample correlation coefficient indicates a strong positive
correlation.
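The step-by-step computation above can be sketched in Python. The (x, y) pairs below are illustrative values for gestational age (in weeks) and birth weight (in grams), not the data from the 17-infant study, so the resulting r is for demonstration only.

```python
from math import sqrt

# Illustrative (not the study's) gestational ages (weeks) and birth weights (g)
x = [34, 36, 38, 40, 32, 39, 35, 37]
y = [2400, 2700, 3100, 3400, 2100, 3250, 2550, 2950]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Covariance and sample variances, each with an n - 1 denominator
cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (n - 1)
var_x = sum((xi - mean_x) ** 2 for xi in x) / (n - 1)
var_y = sum((yi - mean_y) ** 2 for yi in y) / (n - 1)

# Sample correlation coefficient: r = Cov(x,y) / sqrt(sx^2 * sy^2)
r = cov / sqrt(var_x * var_y)  # close to +1: a strong positive association
```

Because the illustrative weights rise almost linearly with gestational age, r comes out close to +1, mirroring the scatter-plot pattern described above.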

As we noted, sample correlation coefficients range from -1 to +1. In practice,
meaningful correlations (i.e., correlations that are clinically or practically
important) can be as small as 0.4 (or -0.4) for positive (or negative) associations.
There are also statistical tests to determine whether an observed correlation is
statistically significant or not (i.e., statistically significantly different from zero).
Procedures to test whether an observed sample correlation is suggestive of a
statistically significant correlation are described in detail in Kleinbaum, Kupper
and Muller.1

b. Gantt Chart

A Gantt chart showing three kinds of schedule dependencies (in red) and percent
complete indications.
A Gantt chart is a type of bar chart, devised by Henry Gantt in the 1910s, that
illustrates a project schedule. Gantt charts illustrate the start and finish dates of
the terminal elements and summary elements of a project. Terminal elements
and summary elements comprise the work breakdown structure of the project.
Modern Gantt charts also show the dependency (i.e., precedence network)
relationships between activities. Gantt charts can be used to show current
schedule status using percent-complete shadings and a vertical "TODAY" line as
shown here.
Although now regarded as a common charting technique, Gantt charts were
considered revolutionary when first introduced.[1] This chart is also used
in information technology to represent data that has been collected.

Historical development
The first known tool of this type was developed in 1896 by Karol Adamiecki, who
called it a harmonogram.[2] Adamiecki did not publish his chart until 1931,
however, and only in Polish, which limited both its adoption and recognition of his
authorship. The chart is named after Henry Gantt (1861–1919), who designed
his chart around the years 1910–1915.[3][4]
One of the first major applications of Gantt charts was by the United States
during World War I, at the instigation of General William Crozier.[5]
In the 1980s, personal computers allowed widespread creation of complex and
elaborate Gantt charts. The first desktop applications were intended mainly for
project managers and project schedulers. With the advent of the Internet and
increased collaboration over networks at the end of the 1990s, Gantt charts
became a common feature of web-based applications, including
collaborative groupware.

Example
In the following table there are seven tasks, labeled a through g. Some tasks can
be done concurrently (a and b) while others cannot be done until their
predecessor task is complete (c and d cannot begin until a is complete).
Additionally, each task has three time estimates: the optimistic time estimate (O),
the most likely or normal time estimate (M), and the pessimistic time estimate
(P). The expected time (TE) is estimated using the beta probability distribution for
the time estimates, using the formula (O + 4M + P) ÷ 6.

Activity  Predecessor  Opt. (O)  Normal (M)  Pess. (P)  Expected time (TE)
a         —            2         4           6          4.00
b         —            3         5           9          5.33
c         a            4         5           7          5.17
d         a            4         6           10         6.33
e         b, c         4         5           7          5.17
f         d            3         4           8          4.50
g         e            3         5           8          5.17
Once this step is complete, one can draw a Gantt chart or a network diagram.
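The expected-time column can be checked with a one-line computation of the formula (O + 4M + P) ÷ 6, applied to each activity's three estimates from the table above:

```python
# Expected time TE = (O + 4M + P) / 6 for each activity in the table above.
estimates = {  # activity: (optimistic O, most likely M, pessimistic P)
    "a": (2, 4, 6), "b": (3, 5, 9), "c": (4, 5, 7), "d": (4, 6, 10),
    "e": (4, 5, 7), "f": (3, 4, 8), "g": (3, 5, 8),
}
te = {k: round((o + 4 * m + p) / 6, 2) for k, (o, m, p) in estimates.items()}
# te == {'a': 4.0, 'b': 5.33, 'c': 5.17, 'd': 6.33, 'e': 5.17, 'f': 4.5, 'g': 5.17}
```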

A Gantt chart created using Microsoft Project (MSP). Note (1)
the critical path is in red, (2) the slack is the black lines connected to
non-critical activities, (3) since Saturday and Sunday are not work days
and are thus excluded from the schedule, some bars on the Gantt chart
are longer if they cut through a weekend.

Gantt chart baseline

A baseline in a Gantt chart provides a clear picture of what was planned,
compared with the current state of a project. It lets a manager, or anyone
who manages a project, see whether the schedule deviates from the initial
plan. A project is successfully accomplished when everything goes
according to the baseline.
A baseline gives a manager the ability to understand and track project
progress and forecast project results. Generally, baselines are a combination
of project scope, cost and schedule (time), which are called the triple
constraints of a project.
Thanks to baselines, a project manager knows exactly what is going wrong
and how much it costs. They help to identify problematic points and
minimize them.

Further applications
Gantt charts can be used for scheduling generic resources as well as
project management. They can also be used for scheduling production
processes and employee rostering.[6] In the latter context, they may also be
known as timebar schedules. Gantt charts can be used to track shifts or
tasks and also vacations or other types of out-of-office time.[7]
Specialized employee scheduling software may output schedules as a
Gantt chart, or they may be created through popular desktop publishing
software.

c. Program Evaluation Review Technique (PERT) – Critical Path Method (CPM)

The program evaluation review technique (PERT) and critical path method
(CPM) are tools useful in planning, scheduling, and managing complex projects.
PERT/CPM (sometimes referred to as network analysis) provides a focus around
which managers and project planners can brainstorm. It is useful for evaluating
the performance of individuals and teams. The key concept in CPM/PERT is that
a small set of activities, which make up the longest path through the activity
network, control the entire project. If these critical activities can be identified and
assigned to the responsible persons, management resources can be optimally
used by concentrating on the few activities that determine the fate of the entire
project. Noncritical activities can be replanned or rescheduled, and resources for
them can be reallocated flexibly, without affecting the whole project.

There are many variations of CPM/PERT which have been useful in planning
costs and scheduling manpower and machine time. CPM/PERT can answer the
following important questions: 1) How long will the entire project take? What are
the risks involved? 2) Which are the critical activities or tasks in the project which
could delay everything if they are not completed on time? 3) Is the project on
schedule, behind schedule, or ahead of schedule? 4) If the project must be
finished earlier than planned, what is the best way to do this at the least cost?

PERT/CPM can be used manually, but it is much easier to use project
management software (e.g., RFFlow). Operational research and quantitative
management books usually provide detailed descriptions of how to use these
tools.

Program evaluation and review technique

PERT network chart for a seven-month project with five milestones (10 through
50) and six activities (A through F).

The program (or project) evaluation and review technique, commonly
abbreviated PERT, is a statistical tool, used in project management, which was
designed to analyze and represent the tasks involved in completing a
given project.
First developed by the United States Navy in the 1950s, it is commonly used in
conjunction with the critical path method(CPM).

Overview
PERT is a method of analyzing the tasks involved in completing a given project,
especially the time needed to complete each task, and to identify the minimum
time needed to complete the total project. It incorporates uncertainty by making it
possible to schedule a project while not knowing precisely the details
and durations of all the activities. It is more of an event-oriented technique rather
than start- and completion-oriented, and is used more in projects where time is
the major factor rather than cost. It is applied to very large-scale, one-time,
complex, non-routine infrastructure and Research and Development projects.
Program Evaluation Review Technique (PERT) offers a management tool, which
relies "on arrow and node diagrams of activities and events: arrows represent
the activities or work necessary to reach the events or nodes that indicate each
completed phase of the total project." [1]
PERT and CPM are complementary tools, because "CPM employs one time
estimate and one cost estimate for each activity; PERT may utilize three time
estimates (optimistic, expected, and pessimistic) and no costs for each activity.
Although these are distinct differences, the term PERT is applied increasingly to
all critical path scheduling."[1]

History
PERT was developed primarily to simplify the planning and scheduling of large
and complex projects. It was developed for the U.S. Navy Special Projects
Office in 1957 to support the U.S. Navy's Polaris nuclear submarine project. [2] It
found applications all over industry. An early example is the 1968 Winter
Olympics in Grenoble, which applied PERT from 1965 until the
opening of the 1968 Games.[3] This project model was the first of its kind, a
revival for scientific management, founded by Frederick Taylor (Taylorism) and
later refined by Henry Ford (Fordism). DuPont's critical path method was
invented at roughly the same time as PERT.

PERT Summary Report Phase 2, 1958


Initially PERT stood for Program Evaluation Research Task, but by 1959 was
already renamed.[2] It had been made public in 1958 in two publications of the
U.S. Department of the Navy, entitled Program Evaluation Research Task,
Summary Report, Phase 1.[4] and Phase 2.[5] In a 1959 article in The American
Statistician, Willard Fazar, Head of the Program Evaluation Branch,
Special Projects Office, U.S. Navy, gave a detailed description of the main
concepts of PERT. He explained:
Through an electronic computer, the PERT technique processes data
representing the major, finite accomplishments (events) essential to achieve end-
objectives; the inter-dependence of those events; and estimates of time and
range of time necessary to complete each activity between two successive
events. Such time expectations include estimates of "most likely time", "optimistic
time", and "pessimistic time" for each activity. The technique is a management
control tool that sizes up the outlook for meeting objectives on time; highlights
danger signals requiring management decisions; reveals and defines both
methodicalness and slack in the flow plan or the network of sequential activities
that must be performed to meet objectives; compares current expectations
with scheduled completion dates and computes the probability for meeting
scheduled dates; and simulates the effects of options for decision — before
decision.

The concept of PERT was developed by an operations research team staffed
with representatives from the Operations Research Department of Booz, Allen
and Hamilton; the Evaluation Office of the Lockheed Missile Systems Division;
and the Program Evaluation Branch, Special Projects Office, of the Department
of the Navy.[6]

PERT Guide for management use, June 1963


Ten years after the introduction of PERT in 1958 the American librarian Maribeth
Brennan published a selected bibliography with about 150 publications on PERT
and CPM, which had been published between 1958 and 1968. The origin and
development was summarized as follows:
PERT originated in 1958 with the... Polaris missile design and construction
scheduling. Since that time, it has been used extensively not only by
the aerospace industry but also in many situations where management desires to
achieve an objective or complete a task within a scheduled time and cost
expenditure; it came into popularity when the algorithm for calculating a
maximum value path was conceived. PERT and CPM may be calculated
manually or with a computer, but usually they require major computer support for
detailed projects. A number of colleges and universities now offer instructional
courses in both.[1]
For the subdivision of work units in PERT[7] another tool was developed:
the Work Breakdown Structure. The Work Breakdown Structure provides "a
framework for complete networking, the Work Breakdown Structure was formally
introduced as the first item of analysis in carrying out basic PERT/COST."[8]

Terminology
Events and activities
In a PERT diagram, the event is the main building block, with known predecessor
events and successor events:

 PERT event: a point that marks the start or completion of one or more
activities. It consumes no time and uses no resources. When it marks the
completion of one or more activities, it is not "reached" (does not occur)
until all of the activities leading to that event have been completed.
 predecessor event: an event that immediately precedes some other event
without any other events intervening. An event can have multiple
predecessor events and can be the predecessor of multiple events.
 successor event: an event that immediately follows some other event
without any other intervening events. An event can have multiple successor
events and can be the successor of multiple events.
Beside events PERT also knows activities and sub-activities:

 PERT activity: the actual performance of a task which consumes time and
requires resources (such as labor, materials, space, machinery). It can be
understood as representing the time, effort, and resources required to move
from one event to another. A PERT activity cannot be performed until the
predecessor event has occurred.
 PERT sub-activity: a PERT activity can be further decomposed into a set of
sub-activities. For example, activity A1 can be decomposed into A1.1, A1.2
and A1.3. Sub-activities have all the properties of activities; in particular, a
sub-activity has predecessor or successor events just like an activity. A sub-
activity can be decomposed again into finer-grained sub-activities.
Time
PERT has defined four types of time required to accomplish an activity:

 optimistic time: the minimum possible time required to accomplish an activity
(o) or a path (O), assuming everything proceeds better than is normally
expected
 pessimistic time: the maximum possible time required to accomplish an
activity (p) or a path (P), assuming everything goes wrong (but excluding
major catastrophes).
 most likely time: the best estimate of the time required to accomplish an
activity (m) or a path (M), assuming everything proceeds as normal.
 expected time: the best estimate of the time required to accomplish an
activity (te) or a path (TE), accounting for the fact that things don't always
proceed as normal (the implication being that the expected time is the
average time the task would require if the task were repeated on a number
of occasions over an extended period of time).
te = (o + 4m + p) ÷ 6

 standard deviation of time: the variability of the time for accomplishing
an activity (σte) or a path (σTE)
σte = (p - o) ÷ 6
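The two formulas can be illustrated for a single activity; as an assumed example, the estimates o=3, m=5, p=9 (activity b's values from the example below) give:

```python
# Expected time and standard deviation for one activity, using the
# illustrative estimates o=3, m=5, p=9 (activity b in the example).
o, m, p = 3, 5, 9
te = (o + 4 * m + p) / 6        # expected time: 32/6 ≈ 5.33 work days
sigma_te = (p - o) / 6          # standard deviation: 6/6 = 1.0 work day
```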
Management tools
PERT supplies a number of tools for management with determination
of concepts, such as:

 float or slack is a measure of the excess time and resources
available to complete a task. It is the amount of time that a project
task can be delayed without causing a delay in any subsequent
tasks (free float) or the whole project (total float). Positive slack
would indicate ahead of schedule; negative slack would
indicate behind schedule; and zero slack would indicate on
schedule.
 critical path: the longest possible continuous pathway taken from
the initial event to the terminal event. It determines the total
calendar time required for the project; and, therefore, any time
delays along the critical path will delay the reaching of the terminal
event by at least the same amount.
 critical activity: An activity that has total float equal to zero. An
activity with zero float is not necessarily on the critical path since
its path may not be the longest.
 Lead time: the time by which a predecessor event must be
completed in order to allow sufficient time for the activities that
must elapse before a specific PERT event reaches completion.
 lag time: the earliest time by which a successor event can follow a
specific PERT event.
 fast tracking: performing more critical activities in parallel
 crashing critical path: Shortening duration of critical activities

Implementation
The first step to scheduling the project is to determine the tasks that the
project requires and the order in which they must be completed. The
order may be easy to record for some tasks (e.g., when building a
house, the land must be graded before the foundation can be laid)
while difficult for others (there are two areas that need to be graded,
but there are only enough bulldozers to do one). Additionally, the time
estimates usually reflect the normal, non-rushed time. Many times, the
time required to execute the task can be reduced for an additional cost
or a reduction in the quality.
Example
In the following example there are seven tasks, labeled A through G.
Some tasks can be done concurrently (A and B) while others cannot be
done until their predecessor task is complete (C cannot begin until A is
complete). Additionally, each task has three time estimates: the
optimistic time estimate (o), the most likely or normal time estimate (m),
and the pessimistic time estimate (p). The expected time (te) is
computed using the formula (o + 4m + p) ÷ 6.

Activity  Predecessor  Opt. (o)  Normal (m)  Pess. (p)  Expected time (te)
A         —            2         4           6          4.00
B         —            3         5           9          5.33
C         A            4         5           7          5.17
D         A            4         6           10         6.33
E         B, C         4         5           7          5.17
F         D            3         4           8          4.50
G         E            3         5           8          5.17

Once this step is complete, one can draw a Gantt chart or a network
diagram.

A Gantt chart created using Microsoft Project (MSP). Note (1)
the critical path is in red, (2) the slack is the black lines connected to
non-critical activities, (3) since Saturday and Sunday are not work days
and are thus excluded from the schedule, some bars on the Gantt chart
are longer if they cut through a weekend.

A Gantt chart created using OmniPlan. Note (1) the critical path is
highlighted, (2) the slack is not specifically indicated on task 5 (d),
though it can be observed on tasks 3 and 7 (b and f), (3) since
weekends are indicated by a thin vertical line, and take up no additional
space on the work calendar, bars on the Gantt chart are not longer or
shorter when they do or don't carry over a weekend.
Next step, creating a network diagram by hand or by using diagram software
A network diagram can be created by hand or by using
diagram software. There are two types of network diagrams,
activity on arrow (AOA) and activity on node (AON). Activity
on node diagrams are generally easier to create and interpret.
To create an AON diagram, it is recommended (but not
required) to start with a node named start. This "activity" has
a duration of zero (0). Then you draw each activity that does
not have a predecessor activity (a and b in this example) and
connect them with an arrow from start to each node. Next,
since both c and d list a as a predecessor activity, their nodes
are drawn with arrows coming from a. Activity e is listed
with b and c as predecessor activities, so node e is drawn
with arrows coming from both b and c, signifying that e cannot
begin until both b and c have been completed.
Activity f has d as a predecessor activity, so an arrow is
drawn connecting the activities. Likewise, an arrow is drawn
from e to g. Since there are no activities that come after f or g,
it is recommended (but again not required) to connect them to
a node labeled finish.

A network diagram created using Microsoft Project (MSP). Note
the critical path is in red.

A node like this one (from Microsoft Visio) can be used to display the
activity name, duration, ES, EF, LS, LF, and slack.
By itself, the network diagram pictured above does
not give much more information than a Gantt chart;
however, it can be expanded to display more
information. The most common information shown
is:

1. The activity name
2. The normal duration time
3. The early start time (ES)
4. The early finish time (EF)
5. The late start time (LS)
6. The late finish time (LF)
7. The slack
In order to determine this information it is assumed
that the activities and normal duration times are
given. The first step is to determine the ES and EF.
The ES is defined as the maximum EF of all
predecessor activities, unless the activity in
question is the first activity, for which the ES is zero
(0). The EF is the ES plus the task duration (EF =
ES + duration).

 The ES for start is zero since it is the first
activity. Since the duration is zero, the EF is
also zero. This EF is used as the ES
for a and b.
 The ES for a is zero. The duration (4 work
days) is added to the ES to get an EF of four.
This EF is used as the ES for c and d.
 The ES for b is zero. The duration (5.33 work
days) is added to the ES to get an EF of 5.33.
 The ES for c is four. The duration (5.17 work
days) is added to the ES to get an EF of 9.17.
 The ES for d is four. The duration (6.33 work
days) is added to the ES to get an EF of 10.33.
This EF is used as the ES for f.
 The ES for e is the greatest EF of its
predecessor activities (b and c). Since b has an
EF of 5.33 and c has an EF of 9.17, the ES
of e is 9.17. The duration (5.17 work days) is
added to the ES to get an EF of 14.34. This EF
is used as the ES for g.
 The ES for f is 10.33. The duration (4.5 work
days) is added to the ES to get an EF of 14.83.
 The ES for g is 14.34. The duration (5.17 work
days) is added to the ES to get an EF of 19.51.
 The ES for finish is the greatest EF of its
predecessor activities (f and g). Since f has an
EF of 14.83 and g has an EF of 19.51, the ES
of finish is 19.51. Finish is a milestone (and
therefore has a duration of zero), so the EF is
also 19.51.
Barring any unforeseen events, the project should
take 19.51 work days to complete. The next step is
to determine the late start (LS) and late finish (LF)
of each activity. This will eventually show if there
are activities that have slack. The LF is defined as
the minimum LS of all successor activities, unless
the activity is the last activity, for which the LF
equals the EF. The LS is the LF minus the task
duration (LS = LF − duration).

 The LF for finish is equal to the EF (19.51 work
days) since it is the last activity in the project.
Since the duration is zero, the LS is also 19.51
work days. This will be used as the LF
for f and g.
 The LF for g is 19.51 work days. The duration
(5.17 work days) is subtracted from the LF to
get an LS of 14.34 work days. This will be used
as the LF for e.

 The LF for f is 19.51 work days. The duration
(4.5 work days) is subtracted from the LF to get
an LS of 15.01 work days. This will be used as
the LF for d.
 The LF for e is 14.34 work days. The duration
(5.17 work days) is subtracted from the LF to
get an LS of 9.17 work days. This will be used
as the LF for b and c.
 The LF for d is 15.01 work days. The duration
(6.33 work days) is subtracted from the LF to
get an LS of 8.68 work days.
 The LF for c is 9.17 work days. The duration
(5.17 work days) is subtracted from the LF to
get an LS of 4 work days.
 The LF for b is 9.17 work days. The duration
(5.33 work days) is subtracted from the LF to
get an LS of 3.84 work days.
 The LF for a is the minimum LS of its
successor activities. Since c has an LS of 4
work days and d has an LS of 8.68 work days,
the LF for a is 4 work days. The duration (4
work days) is subtracted from the LF to get an
LS of 0 work days.
 The LF for start is the minimum LS of its
successor activities. Since a has an LS of 0
work days and b has an LS of 3.84 work days,
the LS is 0 work days.
Next step, determining the critical path and possible slack
The next step is to determine the critical path and if
any activities have slack. The critical path is the
path that takes the longest to complete. To
determine the path times, add the task durations for
all available paths. Activities that have slack can be
delayed without changing the overall time of the
project. Slack is computed in one of two ways, slack
= LF − EF or slack = LS − ES. Activities that are on
the critical path have a slack of zero (0).

 The duration of path adf is 14.83 work days.
 The duration of path aceg is 19.51 work days.
 The duration of path beg is 15.67 work days.
The critical path is aceg and the critical time is
19.51 work days. It is important to note that there
can be more than one critical path (in a project more
complex than this example) or that the critical path
can change. For example, let's say that
activities d and f take their pessimistic (p) times to
complete instead of their expected (te) times. The
critical path is now adf and the critical time is 22
work days. On the other hand, if activity c can be
reduced to one work day, the path time for aceg is
reduced to 15.34 work days, which is slightly less
than the time of the new critical path, beg (15.67
work days).
Assuming these scenarios do not happen, the slack
for each activity can now be determined.

 Start and finish are milestones and by definition
have no duration, therefore they can have no
slack (0 work days).
 The activities on the critical path by definition
have a slack of zero; however, it is always a
good idea to check the math anyway when
drawing by hand.
 LFa – EFa = 4 − 4 = 0
 LFc – EFc = 9.17 − 9.17 = 0
 LFe – EFe = 14.34 − 14.34 = 0
 LFg – EFg = 19.51 − 19.51 = 0
 Activity b has an LF of 9.17 and an EF of 5.33,
so the slack is 3.84 work days.
 Activity d has an LF of 15.01 and an EF of
10.33, so the slack is 4.68 work days.
 Activity f has an LF of 19.51 and an EF of
14.83, so the slack is 4.68 work days.
Therefore, activity b can be delayed almost 4 work
days without delaying the project. Likewise,
activity d or activity f can be delayed 4.68 work days
without delaying the project
(alternatively, d and f can be delayed 2.34 work
days each).
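The forward and backward passes used above can be sketched in a few lines of code. The durations (a = 4, b = 5.33, c = 5.17, d = 6.33, e = 5.17, f = 4.5, g = 5.17 work days) and the network shape are inferred from the ES/EF/LS/LF figures quoted in this example, so treat them as assumptions rather than the original problem statement:

```python
# Forward/backward pass over the example network (start -> a,b; a -> c,d;
# b,c -> e; d -> f; e -> g). Durations are in work days.
durations = {"a": 4.0, "b": 5.33, "c": 5.17, "d": 6.33,
             "e": 5.17, "f": 4.5, "g": 5.17}
successors = {"a": ["c", "d"], "b": ["e"], "c": ["e"],
              "d": ["f"], "e": ["g"], "f": [], "g": []}
predecessors = {n: [m for m in durations if n in successors[m]]
                for n in durations}
order = ["a", "b", "c", "d", "e", "f", "g"]  # a topological order

# Forward pass: ES = max EF of predecessors (0 for start), EF = ES + duration.
ES, EF = {}, {}
for n in order:
    ES[n] = max((EF[p] for p in predecessors[n]), default=0.0)
    EF[n] = ES[n] + durations[n]
finish = max(EF.values())  # project duration

# Backward pass: LF = min LS of successors (project end at the end), LS = LF - duration.
LS, LF = {}, {}
for n in reversed(order):
    LF[n] = min((LS[s] for s in successors[n]), default=finish)
    LS[n] = LF[n] - durations[n]

# Slack = LF - EF (equivalently LS - ES); zero slack marks the critical path.
slack = {n: round(LF[n] - EF[n], 2) for n in order}
critical = [n for n in order if slack[n] == 0]

print(round(finish, 2))  # 19.51
print(critical)          # ['a', 'c', 'e', 'g']
print(slack["b"], slack["d"], slack["f"])  # 3.84 4.68 4.68
```

The zero-slack chain a-c-e-g matches the critical path found above, and the slack values for b, d and f match the hand calculation.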
A completed network diagram created using Microsoft Visio. Note the critical path is in red.

PERT as project scheduling tool

Advantages

 PERT chart explicitly defines and makes visible dependencies (precedence relationships) between the work breakdown structure (commonly WBS) elements.
 PERT facilitates identification of the critical
path and makes this visible.
 PERT facilitates identification of early
start, late start, and slack for each activity.
 PERT provides for potentially reduced project duration due to better understanding of dependencies, leading to improved overlapping of activities and tasks where feasible.
 The large amount of project data can be organized and presented in a diagram for use in decision making.
 PERT can provide a probability of
completing before a given time.
Disadvantages

 There can be potentially hundreds or thousands of activities and individual dependency relationships.
 PERT is not easily scalable for smaller
projects.
 The network charts tend to be large and unwieldy, requiring several pages to print and specially sized paper.
 The lack of a timeframe on most
PERT/CPM charts makes it harder to
show status although colours can help
(e.g., specific colour for completed nodes).
Uncertainty in project scheduling
During project execution, however, a real-life
project will never execute exactly as it was
planned due to uncertainty. This can be due to
ambiguity resulting from subjective estimates
that are prone to human errors or can be the
result of variability arising from unexpected
events or risks. The main reason that PERT
may provide inaccurate information about the
project completion time is due to this schedule
uncertainty. This inaccuracy may be large
enough to render such estimates as not
helpful.
One possible method to maximize solution
robustness is to include safety in the baseline
schedule in order to absorb the anticipated
disruptions. This is called proactive scheduling.
Pure proactive scheduling is utopian; incorporating enough safety in a baseline schedule to absorb every possible disruption would lead to a baseline schedule with a very large make-span. A second approach,
termed reactive scheduling, consists of defining
a procedure to react to disruptions that cannot
be absorbed by the baseline schedule.

d. Probability Analysis (Expected Value Concept)

Probability analysis

A technique used by risk managers for forecasting future events, such as
accidental and business losses. This process involves a review of historical loss
data to calculate a probability distribution that can be used to predict future
losses. The probability analyst views past losses as a range of outcomes of what
might be expected for the future and assumes that the environment will remain
fairly stable. This technique is particularly effective for companies that have a
large amount of data on past losses and that have experienced stable
operations. This type of analysis is contrasted to trend analysis.
What is the 'Expected Value'

The expected value (EV) is an anticipated value for a given investment. In
statistics and probability analysis, the EV is calculated by multiplying each of the
possible outcomes by the likelihood each outcome will occur, and summing all of
those values. By calculating expected values, investors can choose the scenario
most likely to give them their desired outcome.

BREAKING DOWN 'Expected Value'

Scenario analysis is one technique for calculating the EV of
an investment opportunity. It uses estimated probabilities with multivariate
models, to examine possible outcomes for a proposed investment. Scenario
analysis also helps investors determine whether they are taking on an
appropriate level of risk, given the likely outcome of the investment.

The EV of a random variable gives a measure of the center of the distribution of
the variable. Essentially, the EV is the long-term average value of the variable.
Because of the law of large numbers, the average value of the variable
converges to the EV as the number of repetitions approaches infinity. The EV is
also known as expectation, the mean or the first moment. EV can be calculated
for single discrete variables, single continuous variables, multiple discrete
variables and multiple continuous variables. For continuous variable situations,
integrals must be used.

Basic Expected Value Example

To calculate the EV for a single discreet random variable, you must multiply the
value of the variable by the probability of that value occurring. Take, for example,
a normal six-sided die. Once you roll the die, it has an equal one-sixth chance of
landing on one, two, three, four, five or six. Given this information, the calculation
is straightforward:
(1/6 * 1) + (1/6 * 2) + (1/6 * 3) + (1/6 * 4) + (1/6 * 5) + (1/6 * 6) = 3.5
If you were to roll a six-sided die an infinite number of times, you would see that the average value approaches 3.5.

A More Complicated Expected Value Example

The logic of EV can be used to find solutions to more complicated problems.
Assume the following situation: you have a six-sided die and want to roll the
highest number possible. You can roll the die once and if you dislike the result,
roll the die one more time. But if you roll the die a second time, you must accept
the value of the second roll.

Half of the time, the value of the first roll will be below the EV of 3.5, or a one,
two or three, and half the time, it will be above 3.5, or a four, five or six. When the
first roll is below 3.5, you should roll again, otherwise you should stick with the
first roll.

Thus, half the time you keep a four, five or six, the first roll, and half the time you
have an EV of 3.5, the second roll. The expected value of this scenario is:

(50% * ((4 + 5 + 6) / 3)) + (50% * 3.5) = 2.5 + 1.75 = 4.25
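Both die examples can be checked with a short script (exact fractions avoid rounding noise):

```python
# EV of one fair die roll, and of the "re-roll if the first roll is below
# average" strategy described above.
from fractions import Fraction

p = Fraction(1, 6)
ev_single = sum(p * face for face in range(1, 7))  # 7/2

# Keep a first roll of 4, 5 or 6 (average 5); otherwise accept the second
# roll, whose expected value is ev_single.
ev_strategy = Fraction(1, 2) * Fraction(4 + 5 + 6, 3) + Fraction(1, 2) * ev_single

print(float(ev_single))    # 3.5
print(float(ev_strategy))  # 4.25
```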

e. Decision Tree Diagram

What is a Decision Tree Diagram

Need to break down a complex decision? Try using a decision tree. Read on to
find out all about decision trees, including what they are, how they’re used, and
how to make one.

What is a decision tree?

A decision tree is a map of the possible outcomes of a series of related choices.
It allows an individual or organization to weigh possible actions against one
another based on their costs, probabilities, and benefits. They can be used either
to drive informal discussion or to map out an algorithm that predicts the best
choice mathematically.

A decision tree typically starts with a single node, which branches into possible
outcomes. Each of those outcomes leads to additional nodes, which branch off
into other possibilities. This gives it a treelike shape.

There are three different types of nodes: chance nodes, decision nodes, and end
nodes. A chance node, represented by a circle, shows the probabilities of certain
results. A decision node, represented by a square, shows a decision to be made,
and an end node shows the final outcome of a decision path.
Decision trees can also be drawn with flowchart symbols, which some people
find easier to read and understand.

Decision tree symbols

 Decision node (square): indicates a decision to be made
 Chance node (circle): shows multiple uncertain outcomes
 Alternative branches: each branch indicates a possible outcome or action
 Rejected alternative: shows a choice that was not selected
 Endpoint node (triangle): indicates a final outcome

How to draw a decision tree

To draw a decision tree, first pick a medium. You can draw it by hand on paper
or a whiteboard, or you can use special decision tree software. In either case,
here are the steps to follow:

1. Start with the main decision. Draw a small box to represent this point, then
draw a line from the box to the right for each possible solution or action. Label
them accordingly.
2. Add chance and decision nodes to expand the tree as follows:

 If another decision is necessary, draw another box.
 If the outcome is uncertain, draw a circle (circles represent chance
nodes).
 If the problem is solved, leave it blank (for now).
From each decision node, draw possible solutions. From each chance node,
draw lines representing possible outcomes. If you intend to analyze your options
numerically, include the probability of each outcome and the cost of each action.

3. Continue to expand until every line reaches an endpoint, meaning that there
are no more choices to be made or chance outcomes to consider. Then, assign a
value to each possible outcome. It could be an abstract score or a financial
value. Add triangles to signify endpoints.

With a complete decision tree, you’re now ready to begin analyzing the decision
you face.

Decision tree analysis example

By calculating the expected utility or value of each choice in the tree, you can
minimize risk and maximize the likelihood of reaching a desirable outcome.

To calculate the expected utility of a choice, just subtract the cost of that decision
from the expected benefits. The expected benefits are equal to the total value of
all the outcomes that could result from that choice, with each value multiplied by
the likelihood that it’ll occur. Here’s how we’d calculate these values for the
example we made above:
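The worked figure for that example is not reproduced in this text, so the sketch below evaluates a small hypothetical two-choice tree the same way; every probability, payoff and cost here is invented for illustration:

```python
# Expected utility of each choice = sum(probability x payoff over its chance
# outcomes) - cost of the choice. All figures below are hypothetical.
choices = {
    "launch product": {"cost": 50_000,
                       "outcomes": [(0.6, 200_000),  # (probability, payoff)
                                    (0.4, 20_000)]},
    "do nothing":     {"cost": 0,
                       "outcomes": [(1.0, 0)]},
}

def expected_utility(choice):
    expected_benefit = sum(p * payoff for p, payoff in choice["outcomes"])
    return expected_benefit - choice["cost"]

utilities = {name: expected_utility(c) for name, c in choices.items()}
best = max(utilities, key=utilities.get)
print(utilities)  # {'launch product': 78000.0, 'do nothing': 0.0}
print(best)       # launch product
```

Here launching is preferred because its expected benefit (0.6 x 200,000 + 0.4 x 20,000 = 128,000) exceeds its 50,000 cost by 78,000.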
When identifying which outcome is the most desirable, it’s important to take the
decision maker’s utility preferences into account. For instance, some may prefer
low-risk options while others are willing to take risks for a larger benefit.

When you use your decision tree with an accompanying probability model, you
can use it to calculate the conditional probability of an event, or the likelihood that
it’ll happen, given that another event happens. To do so, simply start with the
initial event, then follow the path from that event to the target event, multiplying
the probability of each of those events together.

In this way, a decision tree can be used like a traditional tree diagram,
which maps out the probabilities of certain events, such as flipping a coin twice.

Advantages and disadvantages

Decision trees remain popular for reasons like these:

 They are easy to understand
 They can be useful with or without hard data, and any data requires minimal preparation
 New options can be added to existing trees
 They help in picking out the best of several options
 They combine easily with other decision making tools

However, decision trees can become excessively complex. In such cases, a
more compact influence diagram can be a good alternative. Influence diagrams
narrow the focus to critical decisions, inputs, and objectives.

Decision trees in machine learning and data mining

A decision tree can also be used to help build automated predictive models,
which have applications in machine learning, data mining, and statistics. Known
as decision tree learning, this method takes into account observations about an
item to predict that item’s value.

In these decision trees, nodes represent data rather than decisions. This type of
tree is also known as a classification tree. Each branch contains a set of
attributes, or classification rules, that are associated with a particular class label,
which is found at the end of the branch.

These rules, also known as decision rules, can be expressed in an if-then clause,
with each decision or data value forming a clause, such that, for instance, “if
conditions 1, 2 and 3 are fulfilled, then outcome x will be the result with y
certainty.”

Each additional piece of data helps the model more accurately predict which of a
finite set of values the subject in question belongs to. That information can then
be used as an input in a larger decision making model.
Sometimes the predicted variable will be a real number, such as a price.
Decision trees with continuous, infinite possible outcomes are called regression
trees.

For increased accuracy, sometimes multiple trees are used together in ensemble
methods:

 Bagging creates multiple trees by resampling the source data, then has those trees vote to reach a consensus.
 A Random Forest classifier consists of multiple trees designed to increase the classification rate.
 Boosted trees can be used for regression and classification.
 The trees in a Rotation Forest are all trained by using PCA (principal component analysis) on a random portion of the data.

A decision tree is considered optimal when it represents the most data with the
fewest number of levels or questions. Algorithms designed to create optimized
decision trees include CART, ASSISTANT, CLS and ID3/4/5. A decision tree can
also be created by building association rules, placing the target variable on the
right.

Each method has to determine which is the best way to split the data at each
level. Common methods for doing so include measuring the Gini impurity,
information gain, and variance reduction.
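For instance, Gini impurity for a node is 1 minus the sum of squared class proportions, and a candidate split is scored by the size-weighted impurity of its branches. A minimal sketch:

```python
# Gini impurity: 0 for a pure node, up to 0.5 for a 50/50 two-class mix.
from collections import Counter

def gini(labels):
    total = len(labels)
    return 1.0 - sum((count / total) ** 2
                     for count in Counter(labels).values())

def split_gini(branches):
    # Impurity of a candidate split: branch impurities weighted by branch size.
    n = sum(len(branch) for branch in branches)
    return sum(len(branch) / n * gini(branch) for branch in branches)

print(gini(["yes", "yes", "yes"]))                 # 0.0 (pure)
print(gini(["yes", "no"]))                         # 0.5 (maximally mixed)
print(split_gini([["yes", "yes"], ["no", "no"]]))  # 0.0 (a perfect split)
```

A splitting algorithm would pick the attribute whose split minimizes this weighted impurity (equivalently, maximizes the impurity reduction).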

Using decision trees in machine learning has several advantages:

 The cost of using the tree to predict data decreases with each additional
data point
 Works for either categorical or numerical data
 Can model problems with multiple outputs
 Uses a white box model (making results easy to explain)
 A tree’s reliability can be tested and quantified
 Tends to be accurate regardless of whether it violates the assumptions
of source data

But they also have a few disadvantages:

 When dealing with categorical data with multiple levels, the information
gain is biased in favor of the attributes with the most levels.
 Calculations can become complex when dealing with uncertainty and
lots of linked outcomes.
 Conjunctions between nodes are limited to AND, whereas decision
graphs allow for nodes linked by OR.

f. Learning Curve
Learning curve
A learning curve is a graphical representation of the increase of learning (vertical
axis) with experience (horizontal axis).

Fig 1: Learning curve for a single subject, showing how learning improves with
experience

Fig 2: A learning curve averaged over many trials is smooth, and can be
expressed as a mathematical function
The term learning curve is used in two main ways: where the same task is
repeated in a series of trials, or where a body of knowledge is learned over time.
The first person to describe the learning curve was Hermann Ebbinghaus in
1885, in the field of the psychology of learning, although the name wasn't used
until 1909.[1][2] In 1936, Theodore Paul Wright described the effect of learning
on production costs in the aircraft industry.[3] This form, in which unit cost is
plotted against total production, is sometimes called an experience curve.
The familiar expression "a steep learning curve" is intended to mean that the
activity is difficult to learn, although a learning curve with a steep start actually
represents rapid progress.[4][5]

In psychology
The first person to describe the learning curve was Hermann Ebbinghaus in
1885. His tests involved memorizing series of nonsense syllables, and recording
the success over a number of trials. The translation does not use the
term learning curve—but he presents diagrams of learning against trial number.
He also notes that the score can decrease, or even oscillate.[5][6][7]
The first known use of the term learning curve is from 1909: "Bryan and Harter
(6) found in their study of the acquisition of the telegraphic language a learning
curve which had the rapid rise at the beginning followed by a period of
retardation, and was thus convex to the vertical axis."[2][5]
Psychologist Arthur Bills gave a more detailed description of learning curves in
1934. He also discussed the properties of different types of learning curves, such
as negative acceleration, positive acceleration, plateaus, and ogive curves. (Fig 1)[8]

In economics
In 1936, Theodore Paul Wright described the effect of learning on production
costs in the aircraft industry and proposed a mathematical model of the learning
curve.[3]
In 1968 Bruce Henderson of the Boston Consulting Group (BCG) generalized the
Unit Cost model pioneered by Wright, and specifically used a Power Law, which
is sometimes called Henderson's Law. He named this particular version
the experience curve.[9][10] Research by BCG in the 1970s observed experience
curve effects for various industries that ranged from 10 to 25 percent.[11]
The economic learning of productivity and efficiency generally follows the same
kinds of experience curves and has interesting secondary effects. Efficiency
and productivity improvement can be considered as whole organization or
industry or economy learning processes, as well as for individuals. The general
pattern is of first speeding up and then slowing down, as the practically
achievable level of methodology improvement is reached. The effect of reducing
local effort and resource use by learning improved methods paradoxically often
has the opposite latent effect on the next larger scale system, by facilitating its
expansion, or economic growth, as discussed in the Jevons paradox in the
1880s and updated in the Khazzoom-Brookes Postulate in the 1980s.

Examples and mathematical modeling
A learning curve is a plot of proxy measures for implied learning (proficiency or
progression toward a limit) with experience.

 The Horizontal Axis represents experience either directly as time (clock
time, or the time spent on the activity), or can be related to time (a number
of trials, or the total number of units produced).
 The Vertical Axis is a measure representing learning or proficiency or other
proxy for "efficiency" or "productivity". It can either be increasing (for
example, the score in a test), or decreasing (the time to complete a test).
(Fig 5)
For the performance of one person in a series of trials the curve can be erratic,
with proficiency increasing, decreasing or leveling out in a plateau. (Fig 1)
When the results of a large number of individual trials are averaged then a
smooth curve results, which can often be described with a mathematical function.
(Fig 2)
Fig 3: S-Curve or Sigmoid Function

Fig 4: Exponential growth

Fig 5: Exponential rise or fall to a limit
Fig 6: Power Law
Several main functions have been used:[12][13][14]

 The S-Curve or Sigmoid function is the idealized general form of all learning
curves, with slowly accumulating small steps at first followed by larger steps
and then successively smaller ones later, as the learning activity reaches its
limit. This idealizes the normal progression from discovering something to
learn about, through to the limit of learning about it. The other shapes
of learning curves (4, 5 & 6) show segments of S-curves without their full
extents.
In this case the improvement of proficiency starts slowly, then increases
rapidly, and finally levels off. (Fig 3)

 Exponential growth
The proficiency can increase without limit, as in Exponential growth (Fig
4)

 Exponential rise or fall to a Limit
Proficiency can exponentially approach a limit in a manner similar to
that in which a capacitor charges or discharges (Exponential decay)
through a resistor. (Fig 5)
The increase in skill or retention of information may increase rapidly to
its maximum rate during the initial attempts, and then gradually levels
out, meaning that the subject's skill does not improve much with each
later repetition, with less new knowledge gained over time.

 Power law
This is similar in appearance to an Exponential decay function, and is
almost always used for a decreasing performance metric, such as cost.
(Fig 6) It also has the property that if you plot the logarithm of
proficiency against the logarithm of experience the result is a straight
line, and it is often presented that way.

The specific case of a plot of Unit Cost versus Total Production with a
Power Law was named the Experience Curve: the mathematical
function is sometimes called Henderson's Law.
This form of learning curve is used extensively in industry for cost projections.[15]
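Wright's unit-cost form of this power law can be written as cost(n) = cost(1) x n^(log2 r), where r is the learning rate: each doubling of cumulative output multiplies unit cost by r. A sketch with an assumed 80% curve:

```python
# Experience (Wright) curve: unit cost falls by a fixed fraction per doubling
# of cumulative production. The 80% learning rate is an illustrative assumption.
import math

def unit_cost(n, first_unit_cost, learning_rate=0.8):
    return first_unit_cost * n ** math.log2(learning_rate)

c1 = 100.0  # assumed cost of the first unit
print(round(unit_cost(2, c1), 4))  # 80.0  (one doubling)
print(round(unit_cost(4, c1), 4))  # 64.0  (two doublings)
print(round(unit_cost(8, c1), 4))  # 51.2  (three doublings)
```

On log-log axes this plots as a straight line with slope log2(0.8), which is why experience-curve data is usually presented that way.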

The page on "Experience curve effects" offers more discussion of the mathematical theory of representing them as deterministic processes, and provides a good group of empirical examples of how that technique has been applied.

In machine learning
Plots relating performance to experience are
widely used in machine learning. Performance
is the error rate or accuracy of
the learning system, while experience may be
the number of training examples used for
learning or the number of iterations used
in optimizing the system model parameters.[16]
The machine learning curve is useful for
many purposes including comparing different
algorithms,[17] choosing model parameters
during design,[18] adjusting optimization to
improve convergence, and determining the
amount of data used for training.[19]

Broader interpretations
Initially introduced
in educational and behavioral psychology, the
term has acquired a broader interpretation over
time, and expressions such as "experience
curve", "improvement curve", "cost
improvement curve", "progress curve",
"progress function", "startup curve", and
"efficiency curve" are often used
interchangeably. In economics the subject is
rates of "development", as development refers
to a whole system learning process with
varying rates of progression. Generally
speaking all learning displays incremental
change over time, but describes an "S"
curve which has different appearances
depending on the time scale of observation. It
has now also become associated with the
evolutionary theory of punctuated
equilibrium and other kinds of revolutionary
change in complex systems generally, relating
to innovation, organizational behavior and
the management of group learning, among
other fields.[20] These processes of rapidly
emerging new form appear to take place by
complex learning within the systems
themselves, which when observable, display
curves of changing rates that accelerate and
decelerate.

General learning limits
Learning curves, also called experience
curves, relate to the much broader subject of
natural limits for resources and technologies in
general. Such limits generally present
themselves as increasing complications that
slow the learning of how to do things more
efficiently, like the well-known limits of
perfecting any process or product or to
perfecting measurements.[21] These practical
experiences match the predictions of
the second law of thermodynamics for the
limits of waste reduction generally.
Approaching limits of perfecting things to
eliminate waste meets geometrically increasing
effort to make progress, and provides an
environmental measure of all factors seen and
unseen changing the learning experience.
Perfecting things becomes ever more difficult despite increasing effort, even as results continue to be positive, if ever diminishing. The same
kind of slowing progress due to complications
in learning also appears in the limits of useful
technologies and of profitable markets applying
to product life cycle management and software
development cycles). Remaining market
segments or remaining potential efficiencies or
efficiencies are found in successively less
convenient forms.
Efficiency and development curves typically
follow a two-phase process of first bigger steps
corresponding to finding things easier, followed
by smaller steps of finding things more difficult.
It reflects bursts of learning following
breakthroughs that make learning easier
followed by meeting constraints that make
learning ever harder, perhaps toward a point of
cessation.

 Natural Limits: One of the key studies in
the area concerns diminishing returns on
investments generally, either physical or
financial, pointing to whole system limits
for resource development or other efforts.
The most studied of these may be Energy
Return on Energy Invested or EROEI,
discussed at length in an Encyclopedia of
the Earth article and in an OilDrum
article and series, also referred to as Hubbert curves. The energy needed to
produce energy is a measure of our
difficulty in learning how to make
remaining energy resources useful in
relation to the effort expended. Energy
returns on energy invested have been in
continual decline for some time, caused
by natural resource limits and increasing
investment. Energy is both nature's and
our own principal resource for making
things happen. The point of diminishing
returns is when increasing investment
makes the resource more expensive. As
natural limits are approached, easily used
sources are exhausted and ones with
more complications need to be used
instead. As an environmental signal
persistently diminishing EROI indicates an
approach of whole system limits in our
ability to make things happen.
 Useful Natural Limits: EROEI measures
the return on invested effort as a ratio of
R/I or learning progress. The inverse I/R
measures learning difficulty. The simple
difference is that if R approaches zero R/I
will too, but I/R will approach infinity.
When complications emerge to limit
learning progress the limit of useful
returns, uR, is approached and R-uR
approaches zero. The difficulty of useful
learning I/(R-uR) approaches infinity as
increasingly difficult tasks make the effort
unproductive. That point is approached as
a vertical asymptote, at a particular point
in time, that can be delayed only by
unsustainable effort. It defines a point at
which enough investment has been made
and the task is done, usually planned to
be the same as when the task is
complete. For unplanned tasks it may be
either foreseen or discovered by surprise.
The usefulness measure, uR, is affected
by the complexity of environmental
responses that can only be measured
when they occur unless they are foreseen.

In culture
"Steep learning curve"
The expression steep learning curve is used
with opposite meanings. Most sources,
including the Oxford English Dictionary,
the American Heritage Dictionary of the
English Language, and Merriam-Webster’s
Collegiate Dictionary, define a learning curve
as the rate at which skill is acquired, so a steep
increase would mean a quick increment of skill.[4][22]
However, the term is often used in common
English with the meaning of a difficult initial
learning process.[5][22] L. Ron Hubbard's Study
Tech uses "study gradient" in the same sense,
where "steep" means difficult.
Arguably, the common English use is due to
metaphorical interpretation of the curve as a hill
to climb. (A steeper hill is initially hard, while a
gentle slope is less of a strain, though sometimes
rather tedious. Accordingly, the shape of the
curve (hill) may not indicate the total amount
of work required. Instead, it can be understood
as a matter of preference related to ambition,
personality and learning style.)

Fig 9: Short and long learning curves
Fig 10: Product A has lower functionality and a short learning curve. Product B has greater functionality but takes longer to learn.
The term learning curve with meanings
of easy and difficult can be described with
adjectives like short and long rather
than steep and shallow.[4] If two products have
similar functionality then the one with a "steep"
curve is probably better, because it can be
learned in a shorter time. (Fig 9) On the other
hand, if two products have different
functionality, then one with a short curve (a
short time to learn) and limited functionality
may not be as good as one with a long curve (a
long time to learn) and greater functionality.
(Fig 10)
For example, the Windows program Notepad is
extremely simple to learn, but offers little after
this. At the other extreme is the UNIX terminal
editor vi or Vim, which is difficult to learn, but
offers a wide array of features after the user
has learned how to use it.[23]
"ON a steep learning curve"
Ben Zimmer discusses the use of the term "ON
a steep learning curve" in an article "A Steep
Learning Curve" for Downton Abbey,
concentrating mainly on whether it is
an anachronism. "Matthew Crawley, the
presumptive heir of Downton Abbey and now
the co-owner of the estate, says, 'I've been on
a steep learning curve since arriving at
Downton.' By this he means that he's had a
difficult time learning the ways of Downton.
Unfortunately, people didn't start talking that
way until the 1970s."[5][24][25]
Zimmer also comments that the popular use of steep as difficult is a reversal of the technical meaning. He identifies the first use of steep learning curve as 1973, and the arduous interpretation as 1978.

g. Inventory Models (Carrying and Ordering Costs, EOQ Model, Safety Stock, Reorder Point)

Inventory Management

 Definition
 Purposes of inventory
 Inventory costs
 Inventory models
o Economic Order Quantity
o Quantity Discount
Definition

Inventory -- stored resource (raw material, work-in-process, finished goods) that
is used to satisfy present or future demand

Inventory management -- determine how much to order? When to order?

ABC Analysis -- classify inventory into 3 groups according to its annual dollar
volume/usage

Annual dollar volume = annual demand x cost

An example:
A: Top 80% of total dollar volume
B: Next 15%
C: Next 5%

Item# Annual Demand Cost Demand x Cost % of total cost Class

234 50 200 10000 10% B

170 10 200 2000 2% C

222 100 800 80000 80% A

410 50 100 5000 5% B

160 15 200 3000 3% C

Total 100000
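The classification in this table can be reproduced by sorting items by annual dollar volume and assigning classes by cumulative share of the total, using the 80/15/5 cutoffs:

```python
# ABC analysis of the items above: annual dollar volume = demand x cost,
# classes assigned by cumulative share of total volume (A: first 80%,
# B: next 15%, C: last 5%).
items = {234: (50, 200), 170: (10, 200), 222: (100, 800),
         410: (50, 100), 160: (15, 200)}   # item#: (annual demand, unit cost)

volume = {k: demand * cost for k, (demand, cost) in items.items()}
total = sum(volume.values())

classes, cum = {}, 0
for k in sorted(volume, key=volume.get, reverse=True):
    cum += volume[k]
    if cum <= 0.80 * total:
        classes[k] = "A"
    elif cum <= 0.95 * total:
        classes[k] = "B"
    else:
        classes[k] = "C"

print(total)    # 100000
print(classes)  # {222: 'A', 234: 'B', 410: 'B', 160: 'C', 170: 'C'}
```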
Exercise

Pg.541 Problem 13, 27

Purposes of inventory

1.  Smooth-out variations in operation performances

2.  Avoid stock out or shortage

3.  Safeguard against price changes and inflation

4.  Take advantage of quantity discounts 
 

Inventory costs

1.  Holding or carrying costs: storage, insurance, investment, pilferage, etc. 
Annual holding cost = average inventory level x holding cost per unit per year
= order quantity/2 x holding cost per unit per year

2.  Setup or ordering costs: cost involved in placing an order or setting up the
equipment to make the product 
 
Annual ordering cost = no. of orders placed in a year x cost per order
= annual demand/order quantity x cost per order

EOQ (Economic Order Quantity) Model

Assumptions

1.  Order arrives instantly

2.  No stockout

3.  Constant rate of demand

What is the order quantity such that the total cost is minimized?

1.  Total cost = holding cost + ordering cost


= (order quantity/2) x holding cost per unit per year + (annual demand/order
quantity) x cost per order
2.  Optimal order quantity (Q*) is found when annual holding cost = annual ordering cost, which gives Q* = sqrt(2 x annual demand x cost per order / holding cost per unit per year)

3.  Number of orders = Annual Demand/Q*

4.  Time between orders = No. of working days per year / number of orders

5.  Reorder point = daily demand x lead time + safety stock

Example:

Given:
Annual Demand = 60,000 
Ordering cost = $25 per order 
Holding cost = $3 per item per year 
No. of working days per year = 240
Then, it can be computed: 
Q* = 1000

Total cost = $3000

Number of orders = 60000/1000 = 60

Time between orders = 240/60 = 4 days

Daily demand = 60000/240 = 250

If lead time = 3 days (lead time < time between orders)


Reorder point = (60000/240)x3=750 
Reorder when inventory on hand = 750

If lead time = 5 days (lead time > time between orders)


Reorder point = 250x5 = 1250 
Reorder when inventory on hand = 1250-Q*=1250-1000=250
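The worked example above can be reproduced with a short Python sketch (the variable names are mine; the formulas are the ones just listed):

```python
import math

# EOQ sketch using the example figures above.
D = 60000      # annual demand
O = 25         # ordering cost per order
H = 3          # holding cost per unit per year
days = 240     # working days per year

Q = math.sqrt(2 * D * O / H)   # optimal order quantity
holding = Q / 2 * H            # annual holding cost
ordering = D / Q * O           # annual ordering cost
orders = D / Q                 # number of orders per year
cycle = days / orders          # days between orders
rop = (D / days) * 3           # reorder point for a 3-day lead time, no safety stock

print(Q, holding + ordering, orders, cycle, rop)  # 1000.0 3000.0 60.0 4.0 750.0
```

Note that at Q*, the holding and ordering costs are equal ($1,500 each), as the model predicts.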

In class exercise

Pg.540, Problems 10 


Annual demand = 2000 
Ordering cost = $10 
Holding cost = $5

EOQ = sqrt(2*2000*10/5) = 89.44, or about 89 units

Annual ordering cost = 2000/89.44 x $10 = $223.6
Annual holding cost = 89.44/2 x $5 = $223.6

Exercise

Pg. 539, Problem 1, 7a 


 

Quantity Discount Model

1.  Total cost = holding + ordering + purchasing

2.  Holding cost is a % of the purchasing cost 


 

Case 1 
Annual Demand =100 per year

Ordering cost = 45 per order

Holding cost = 20% of cost of item 


 
Order quantity Cost per item

50 or less $18

51 to 59 $16

60 or more $12

=> Order 62 units (the EOQ at the $12 price is sqrt(2x100x45/(0.2x12)) = 62,
which is feasible because 62 falls in the "60 or more" range)

Case 2

Same as case 1 except: 


 

Order quantity   Cost per item   EOQ   Remark
50 or less       $18             50    Feasible
51 to 99         $16             54    Feasible
100 or more      $12             62    Infeasible

Need to compare:

Total cost (Q=54) and Total cost (Q=100)

Total cost (Q=54) = (100/54)x45 + (54/2)x(0.2x16) + 16x100 = 1,769.73

Total cost (Q=100) = (100/100)x45 + (100/2)x(0.2x12) + 12x100 = 1,365

=> Order 100 units

Case 3

Same as case 1 except: 


 
Order quantity   Cost per item   EOQ   Remark
55 or less       $18             50    Feasible
56 to 99         $16             54    Infeasible
100 or more      $12             62    Infeasible

Need to compare:

Total cost (Q=50), Total cost (Q=56) and Total cost (Q=100)

Total cost (Q=50) = (100/50)x45 + (50/2)x(0.2x18) + 18x100 = 1980

Total cost (Q=56) = (100/56)x45 + (56/2)x(0.2x16) + 16x100 = 1,769.96

Total cost (Q=100) = 1,365

=> Order 100 units
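All three cases follow the same recipe: compute the EOQ at each tier's price, adjust it to the nearest feasible quantity for that tier, then compare total costs. A Python sketch for Case 3 (the tier bounds, prices, and 20% holding rate come from the table above; the helper names are mine):

```python
import math

# Quantity-discount sketch for Case 3 above. Demand = 100/yr, ordering cost = $45,
# holding cost = 20% of the item price at the tier being evaluated.
D, O, RATE = 100, 45, 0.20
tiers = [(1, 55, 18), (56, 99, 16), (100, None, 12)]  # (min qty, max qty, unit price)

def total_cost(q, price):
    # ordering + holding + purchasing, as in the formulas above
    return D / q * O + q / 2 * RATE * price + price * D

best = None
for lo, hi, price in tiers:
    q = math.sqrt(2 * D * O / (RATE * price))  # EOQ at this tier's price
    # If the EOQ falls outside the tier, move to the tier's nearest feasible quantity
    q = max(lo, min(q, hi)) if hi is not None else max(lo, q)
    cost = total_cost(q, price)
    if best is None or cost < best[1]:
        best = (round(q), cost, price)

print(best)  # (100, 1365.0, 12) -> order 100 units at the $12 price
```

The same loop reproduces Cases 1 and 2 if the tier table is changed accordingly.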

Pg.540, problem 7b

Exercise
Pg. 540, problems 12, 26 

Carrying and Ordering Costs



COMPONENTS OF INVENTORY COSTS

The total inventory costs are comprised of:

CARRYING COSTS: This cost increases with order size or quantity of
inventory on hand. Examples: storage costs, insurance on inventory, normal
spoilage, record keeping, security. (DIRECT)

ORDERING COSTS: This cost decreases with order size or quantity of
inventory on hand. Examples: delivery costs, inspection, handling,
purchasing, receiving, quantity discounts lost. (INVERSE)

EOQ Model

EOQ = sqrt(2DO / C)

Where:

O = costs of placing one order;


D = annual demand or usage in units;
C = cost of carrying one unit for one year

Safety Stock

Safety stock = Average inventory - (EOQ / 2)

where Average inventory = (Beginning inventory + Ending inventory) / 2

Reorder Point

Without safety stock: normal lead time usage

With safety stock: normal lead time usage + safety stock = maximum lead
time x average usage
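The two reorder-point formulas can be illustrated numerically (all figures below are assumed for illustration only):

```python
# Reorder-point sketch; the usage and lead-time figures are assumed.
avg_daily_usage = 250    # units per day
normal_lead_time = 3     # days (normal)
max_lead_time = 5        # days (worst case)

rop_no_ss = avg_daily_usage * normal_lead_time   # without safety stock
rop_with_ss = avg_daily_usage * max_lead_time    # = normal usage + safety stock
safety_stock = rop_with_ss - rop_no_ss           # implied safety stock

print(rop_no_ss, rop_with_ss, safety_stock)  # 750 1250 500
```

Here the safety stock covers the extra two days of usage that a worst-case lead time would consume.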
  
h. Linear Programming (Graphic Method; Algebraic Method)

 

Linear Programming: The Graphical and Simplex Methods

INTRODUCTION

Linear programming (LP) is an application of matrix algebra used to solve a


broad class of problems that can be represented by a system of linear equations.
A linear equation is an algebraic equation whose variable quantity or quantities
are in the first power only and whose graph is a straight line. LP problems are
characterized by an objective function that is to be maximized or minimized,
subject to a number of constraints. Both the objective function and the
constraints must be formulated in terms of a linear equality or inequality.
Typically, the objective function will be to maximize profits (e.g., contribution
margin) or to minimize costs (e.g., variable costs). The following assumptions
must be satisfied to justify the use of linear programming:
- Linearity. All functions, such as costs, prices, and technological
requirements, must be linear in nature.
- Certainty. All parameters are assumed to be known with certainty.
- Nonnegativity. Negative values of decision variables are
unacceptable.

Two approaches are commonly used to solve LP problems:

- Graphical method
- Simplex method

Spreadsheet tools such as the MS Excel Solver now make LP problems much
easier to solve in practice.

The graphical method is limited to LP problems involving two decision variables


and a limited number of constraints due to the difficulty of graphing and
evaluating more than two decision variables. This restriction severely limits the
use of the graphical method for real-world problems. The graphical method is
presented first here, however, because it is simple and easy to understand and it
is a very good learning tool.

The computer-based simplex method is much more powerful than the graphical


method and provides the optimal solution to LP problems containing thousands
of decision variables and constraints. It uses an iterative algorithm to solve for
the optimal solution. Moreover, the simplex method provides information on slack
variables (unused resources) and shadow prices (opportunity costs) that is useful
in performing sensitivity analysis. Excel uses a special version of the simplex
method, as will be discussed later.

CONSTRUCTING LINEAR PROGRAMMING PROBLEMS AND SOLVING THEM GRAPHICALLY

We will use the following Bridgeway Company case to introduce the graphical
method and illustrate how it solves LP maximization problems. Bridgeway
Company manufactures a printer and keyboard. The contribution margins of the
printer and keyboard are $30 and $20, respectively. Two types of skilled labor
are required to manufacture these products: soldering and assembling. A printer
requires 2 hours of soldering and 1 hour of assembling. A keyboard requires 1
hour of soldering and 1 hour of assembling. Bridgeway has 1,000 soldering
hours and 800 assembling hours available per week. There are no constraints on

the supply of raw materials. Demand for keyboards is unlimited, but at most 350
printers are sold each week. Bridgeway wishes to maximize its weekly total
contribution margin.

Constructing the Linear Programming Problem for Maximization of the Objective Function

Constructing the LP problem requires four steps:


- Step 1. Define the decision variables.
- Step 2. Define the objective function.
- Step 3. Determine the constraints.
- Step 4. Declare sign restrictions.

STEP 1: DEFINE THE DECISION VARIABLES. In any LP problem, the decision


variables should completely describe the decisions to be made. Bridgeway must
decide how many printers and keyboards should be manufactured each week.
With this in mind, the decision variables are defined as follows:
- X = Number of printers to produce weekly
- Y = Number of keyboards to produce weekly

STEP 2: DEFINE THE OBJECTIVE FUNCTION. The objective function


represents the goal that management is trying to achieve. The goal in
Bridgeway's case is to maximize (max) total contribution margin. For each printer
that is sold, $30 in contribution margin will be realized. For each keyboard that is
sold, $20 in contribution margin will be realized. Thus, the total contribution
margin for Bridgeway can be expressed by the following objective function
equation:

max Z = $30X + $20Y

where the variable Z denotes the objective function value of any LP problem. In
the Bridgeway case, Z equals the total contribution margin that will be realized
when an optimal mix of products X (printer) and Y (keyboard) is manufactured
and sold.

STEP 3: DETERMINE THE CONSTRAINTS. A constraint is simply some


limitation under which the enterprise must operate, such as limited production
time or raw materials. In Bridgeway's case, the objective function grows larger as
X and Y increase. In other words, if Bridgeway were free to choose any values
for X and Y, the company could make an arbitrarily large contribution margin by
choosing X and Y to be very large. The values of X and Y, however, are
restricted by the following three constraints:

Constraint 1. Each week, no more than 1,000 hours of soldering time may be
used. Thus, constraint l may be expressed by:

2X + Y ≤ 1,000

because it takes 2 hours of soldering to produce one printer and 1 hour of


soldering to produce one keyboard. The inequality sign means that the total
soldering time for both products X and Y cannot exceed the 1,000 soldering
hours available, but could be less than the available hours.

Constraint 2. Each week, no more than 800 hours of assembling time may be
used. Thus, constraint 2 may be expressed by:

X + Y ≤ 800

Constraint 3. Because of limited demand, at most 350 printers should be


produced each week. This constraint can be expressed as follows:

X ≤ 350

STEP 4: DECLARE SIGN RESTRICTIONS. To complete the formulation of an


LP problem, the following question must be answered for each decision variable:
Can the decision variable assume only nonnegative values, or is it allowed to
assume both positive and negative values? In most LP problems, positive values
are assumed. For example, in the Bridgeway case, production cannot be less
than zero units. Therefore, the sign restrictions are:

X >= 0

Y >= 0

These four steps and the formulation of the LP problem for Bridgeway are
summarized in Exhibit 16-1. This LP problem provides the necessary data to
develop a graphical solution.

Summary of the Linear Programming Problem for Bridgeway Company

Choose production levels for printers (X) and keyboards (Y) that:
max Z = $30X + $20Y (objective function)
and satisfy the following:
2X + Y <= 1,000 (soldering time constraint)
X + Y <= 800 (assembling time constraint)
X <= 350 (demand constraint for printers)
X >= 0 (sign restriction)
Y >= 0 (sign restriction)

Graphical Solution to the Maximization Linear Programming Problem

The following are two of the most basic concepts associated with LP:
- Feasible region
- Optimal solution

The graphical solution involves two steps:

Step 1. Graphically determine the feasible region.
Step 2. Search for the optimal solution.

STEP 1: GRAPHICALLY DETERMINE THE FEASIBLE REGION. The feasible
region represents the set of all feasible solutions to an LP problem. In the

Bridgeway case, the feasible region is the set of all points (X, Y) satisfying the
constraints in Exhibit 16-1.

For a point (X, Y) to be in the feasible region, (X, Y) must satisfy all the above
inequalities. A graph containing these constraint equations is shown in Exhibit
16-2. Note that the only points satisfying the nonnegativity constraints are the
points in the first quadrant of the X, Y plane. This is indicated by the arrows
pointing to the right from the y-axis and upward from the x-axis. Thus, any point
that is outside the first quadrant cannot be in the feasible region.

Feasible Region for the Bridgeway Problem

In plotting equation 2X + Y <= 1,000 on the graph, the following questions are
asked: How much of product X could be produced if all resources were allocated
to it? In this equation, a total of 1,000 hours of soldering time is available. If all
1,000 hours are allocated to product X, 500 printers can be produced each week.
On the other hand, how much of product Y could be produced if all resources
were allocated to it? If all 1,000 soldering hours are allocated to produce Y, then
1,000 keyboards can be produced each week. Thus, the line on the graph
expressing the soldering time constraint equation extends from the 500-unit point
A on the x-axis to the 1,000-unit point B on the y-axis.

The equation associated with the assembling capacity constraint has been
plotted on the graph in a similar manner. If 800 assembling hours are allocated to
product X, then 800 printers can be produced. If, on the other hand, 800
assembling hours are allocated to product Y, then 800 keyboards can be
produced. This analysis results in line CD.

Since equation X <= 350 concerns only product X, the line expressing the
equation on the graph does not touch the y-axis at all. It extends from the 350-
unit point E on the x-axis and runs parallel to the y-axis, thereby signifying that
regardless of the number of units of X produced, no more than 350 units of X can
ever be sold.

Exhibit 16-2 shows that the set of points in the quadrant that satisfies all
constraints is bounded by the five-sided polygon HDGFE. Any point on this
polygon or in its interior is in the feasible region. Any other point fails to satisfy at
least one of the inequalities and thus falls outside the feasible region.

STEP 2: SEARCH FOR THE OPTIMAL SOLUTION. Having identified the


feasible region for the Bridgeway case, we now search for the optimal solution,
which will be the point in the feasible region that maximizes the objective
function. In Bridgeway's case, this is:

max Z = $30X + $20Y

To find the optimal solution, we graph lines so that all points on a particular line
have the same Z-value. In a maximization problem, such lines are called isoprofit
lines; in a minimization problem, they are called isocost lines. The parallel lines
are created by assigning various values to Z in the objective function to provide
either higher profits or lower costs.

A graph showing the isoprofit lines for Bridgeway Company appears in Exhibit
16-3. The isoprofit lines are broken to differentiate them from the lines that form
the feasible region. To draw an isoprofit line, any Z-value is chosen, then the x-
and y-intercepts are calculated. For example, a contribution margin value of
$6,000 gives a line with intercepts at 200 printers and 300 keyboards:

Graph Showing the Optimal Solution of the Bridgeway Problem

$6,000 = $30X + $20(0)  ->  X = 200
$6,000 = $30(0) + $20Y  ->  Y = 300

Since all isoprofit lines are of the form $30X + $20Y = contribution margin, they
all have the same slope. Consequently, once an isoprofit line is drawn, all other
isoprofit lines can be found by moving parallel to the initial line. Another isoprofit

line is found by selecting a contribution margin of $9,000, which gives a line


having intercepts at 300 printers and 450 keyboards:

$9,000 = $30X + $20(0)  ->  X = 300
$9,000 = $30(0) + $20Y  ->  Y = 450

Isoprofit lines move in a northeast direction; that is, upward and to the right. After
a while, the isoprofit lines will no longer intersect the feasible region. The isoprofit
line intersecting the last vertex of the feasible region defines the largest Z-value
of any point in the feasible region and indicates the optimal solution to the LP
problem. In Exhibit 16-3, the isoprofit line passing through point G is the last
isoprofit line to intersect the feasible region. Thus, point G is the point in the
feasible region with the largest Z-value and is therefore the optimal solution to
the Bridgeway problem. Note that point G is located at the intersection of lines
2X + Y = 1,000 and X + Y = 800. Solving these two equations simultaneously
results in:

X = 200

Y = 600

The optimal value of Z (i.e., the total contribution margin) may be found by
substituting these values of X and Y into the objective function. Thus, the optimal
value of Z is:

max Z = $30 (200) + $20 (600) = $18,000

The five corners of the feasible region, designated by HDGFE, will yield different
product mixes between X and Y. The calculations are presented in Exhibit 16-4,
starting at the origin and going clockwise around the feasible region. These
calculations also show that the optimal production mix is 200 printers and 600
keyboards. Any other production mix will result in a lower total contribution
margin.
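Because an optimal solution always occurs at a corner of the feasible region, the corner calculations can be double-checked by brute force in Python (a sketch; the corner coordinates and constraints are transcribed from the discussion above):

```python
# Corner-point check for the Bridgeway problem: max Z = 30X + 20Y, subject to
# 2X + Y <= 1000, X + Y <= 800, X <= 350, X >= 0, Y >= 0.
corners = {"H": (0, 0), "D": (0, 800), "G": (200, 600), "F": (350, 300), "E": (350, 0)}

# Every listed corner should satisfy all of the constraints
assert all(2 * x + y <= 1000 and x + y <= 800 and 0 <= x <= 350 and y >= 0
           for x, y in corners.values())

Z = {name: 30 * x + 20 * y for name, (x, y) in corners.items()}
best = max(Z, key=Z.get)
print(best, Z[best])  # G 18000 -> 200 printers and 600 keyboards
```

Enumerating corners like this is exactly what the isoprofit-line search does geometrically.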

In some cases, an objective function may be parallel to one of the feasibility


region boundaries. In such a case, the optimal solution will include any solutions
that lie on the border. For example, in Exhibit 16-5 the solution is a line along the
boundary of the feasible region. Therefore, any mix of E and C along this line will
be optimal.

z-Value for Each Corner in the Feasible Region

Corner of the Feasible Region   Units Produced (X, Y)
H                               (0, 0)
D                               (0, 800)
G                               (200, 600)
F                               (350, 300)
E                               (350, 0)

Corner   Total Contribution Margin
H        $30 (0) + $20 (0) = $0
D        $30 (0) + $20 (800) = $16,000
G        $30 (200) + $20 (600) = $18,000 (optimal solution)
F        $30 (350) + $20 (300) = $16,500
E        $30 (350) + $20 (0) = $10,500

Constructing the Linear Programming Problem for Minimization of the Objective Function

We will use the following K9 Kondo Company case to demonstrate how the
graphical method solves LP minimization problems. The K9 Kondo Company
manufactures climate-controlled doghouses. The company believes that its high-
volume customers are high-income male and female dog owners who want to
pamper their pets. To reach these groups, the marketing manager at K9 Kondo
is considering placing one-minute commercials on the following national TV
shows: “New York Dog Show” and “Man's Best Friend.”

A one-minute commercial on “New York Dog Show” costs $200,000, and a one-
minute commercial on “Man's Best Friend” costs $50,000. The marketing
manager would like the commercials to be seen by at least 60 million high-
income women and at least 36 million high-income men. Marketing studies show
the following:
- Each one-minute commercial on “New York Dog Show” is seen by six
million high-income women and two million high-income men.
- Each one-minute commercial on “Man's Best Friend” is seen by three
million high-income women and three million high-income men.

Constructing the LP problem for minimization of the objective function follows the
same steps used in constructing the LP problem for maximization of the objective
function:

Multiple Optimal Solutions Graphic Example

- Step 1. Define the decision variables.
- Step 2. Define the objective function.
- Step 3. Determine the constraints.
- Step 4. Declare sign restrictions.

STEP 1: DEFINE THE DECISION VARIABLES. The marketing manager at K9


Kondo must decide how many “New York Dog Show” and “Man's Best Friend”
one-minute commercials to purchase. Therefore, the decision variables are:

X = Number of one-minute “New York Dog Show” commercials purchased

Y = Number of one-minute “Man's Best Friend” commercials purchased

STEP 2: DEFINE THE OBJECTIVE FUNCTION. The marketing manager is


trying to minimize total advertising cost. Thus, the objective function (in
thousands of dollars) is:

min Z = $200X + $50Y

STEP 3: DETERMINE THE CONSTRAINTS. The values of X and Y are


restricted by the following constraints:

Constraint 1. The commercials must reach at least 60 million high-income


women. Thus, constraint 1 may be expressed by:

6X + 3Y >= 60

Constraint 2. The commercials must reach at least 36 million high-income men.


Thus, constraint 2 may be expressed by:

2X + 3Y >= 36

STEP 4: DECLARE SIGN RESTRICTIONS. The sign restrictions are expressed


by:
- X >= 0
- Y >= 0

These four steps and the formulation of the LP problem for K9 Kondo are
summarized in Exhibit 16-6. This LP problem provides the necessary data to
develop a graphical solution.

Summary of the Linear Programming Problem for K9 Kondo Company

Choose the number of commercials on “New York Dog Show” (X) and “Man’s Best Friend” (Y) that:
min Z = $200X + $50Y (objective function)
and satisfy the following:
6X + 3Y >= 60 (high-income women constraint)
2X + 3Y >= 36 (high-income men constraint)
X >= 0 (sign restriction)
Y >= 0 (sign restriction)

Graphical Solution to the Minimization Linear Programming Problem

Like the Bridgeway problem, the K9 Kondo problem has a feasible region, but K9
Kondo's feasible region, unlike Bridgeway's, contains points for which the value
of at least one variable can assume arbitrarily large values. Such a feasible
region is sometimes called an unbounded feasible region, but it is referred to
here as simply the feasible region.

The graphical solution includes two steps:


- Step 1. Graphically determine the feasible region.
- Step 2. Search for the optimal solution.

STEP 1: GRAPHICALLY DETERMINE THE FEASIBLE REGION. The feasible
region for K9 Kondo's advertising campaign is shown in Exhibit 16-7. Note that
6X + 3Y >= 60 is satisfied by points on or above the line AB and that 2X + 3Y >=
36 is satisfied by the points on or above the line CD. The only points satisfying all
of the constraints are in the shaded region bounded by the x-axis, CEB, and the
y-axis. This is the feasible region.

Feasible Region for the K9 Kondo Problem

Line AB, which represents the plot of constraint 6X + 3Y >= 60, is determined by
first plotting the end points of the line 6X + 3Y = 60. Setting first Y and then X
equal to 0, we have:

6X = 60  ->  X = 10
3Y = 60  ->  Y = 20

Therefore, end point A is X = 10 and Y = 0, and end point B is X = 0 and Y = 20.

Next, the constraint 2X + 3Y >= 36 is plotted by first plotting the end points of the
line 2X + 3Y = 36. Again, setting first Y and then X equal to 0, we have:

2X = 36  ->  X = 18
3Y = 36  ->  Y = 12

Therefore, end point C is X = 18 and Y = 0, and end point D is X = 0 and Y = 12.

STEP 2: SEARCH FOR THE OPTIMAL SOLUTION. Note that instead of isoprofit
lines these are isocost lines. The objective function is

min Z = $200X + $50Y

and the marketing manager's goal is to minimize total advertising costs.
Consequently, feasible values for X and Y that minimize Z must be chosen. Thus, the

optimal solution to the K9 Kondo LP problem is the point in the feasible region
with the smallest Z-value.

Consider an arbitrary cost of $1,800,000. That is, Z = $1,800 and the isocost line
is

$1,800 = $200X + $50Y

as shown in Exhibit 16-8. Another parallel isocost line, $1,400 = $200X + $50Y, is
also shown in Exhibit 16-8. Thus, the direction of minimum cost (i.e., decreasing
Z) is toward the southwest; that is, downward and to the left. At a cost of
$800,000 (Z = $800), the isocost line is beyond the feasible region and therefore
does not represent a feasible solution. The optimum isocost line is the one that
intersects point B, because this is the farthest southwest point in the feasible
region. Thus, point B is the optimal solution to the K9 Kondo problem. Or stating
it another way, point B has the smallest Z-value of any point in the feasible
region.

Graph Showing The Optimal Solution for the K9 Kondo Problem

Notice that the set of feasible solutions has three corner points: B, E, and C.
- B = (0, 20)
- E = (6, 8), computed below
- C = (18, 0)

Notice that point E is at the intersection of lines 6X + 3Y = 60 and 2X + 3Y = 36.


Point E is obtained by finding the simultaneous solution to these two equations:

 6X + 3Y =  60
-2X - 3Y = -36
      4X =  24
       X =   6

Substituting X = 6 into 2X + 3Y = 36 yields:


 2(6) + 3Y = 36
 3Y= 24
 Y= 8

Thus, point E = (X = 6, Y = 8).

Now, the corner points of BEC are tested. The three corners will yield different
mixes of one-minute commercials on “New York Dog Show,” represented by X,
and on “Man's Best Friend,” represented by Y. The calculations are presented in
Exhibit 16-9. The optimal advertising plan is to purchase 20 one-minute
commercials on “Man's Best Friend” and zero one-minute commercials on “New
York Dog Show.” The total optimal advertising cost is $1,000,000.
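As in the maximization case, the corner computations can be verified in a few lines of Python (a sketch; the corner coordinates come from the discussion above, and Z is in thousands of dollars):

```python
# Corner-point check for the K9 Kondo problem: min Z = 200X + 50Y (thousands),
# subject to 6X + 3Y >= 60, 2X + 3Y >= 36, X >= 0, Y >= 0.
corners = {"B": (0, 20), "E": (6, 8), "C": (18, 0)}

# All three corners should satisfy both audience-coverage constraints
assert all(6 * x + 3 * y >= 60 and 2 * x + 3 * y >= 36 for x, y in corners.values())

cost = {name: 200 * x + 50 * y for name, (x, y) in corners.items()}
best = min(cost, key=cost.get)
print(best, cost[best] * 1000)  # B 1000000
```

The only change from the maximization sketch is that the best corner is found with min rather than max.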

Z-Value for Each Corner in the Feasible Region

One-Minute Commercials Purchased
Corner of the Feasible Region   X    Y
B                               0    20
E                               6    8
C                               18   0

Corner   Total Advertising Cost
B        $200 (0) + $50 (20) = $1,000,000 (optimal solution)
E        $200 (6) + $50 (8) = $1,600,000
C        $200 (18) + $50 (0) = $3,600,000

USING THE SIMPLEX METHOD TO SOLVE MAXIMIZATION PROBLEMS

Solving LP problems graphically is only practical when there are two decision
variables. Moreover, the graphical method becomes cumbersome when there
are many constraints. Real-world LP problems typically have thousands of corner
points.

The reigning champ for handling such problems is the simplex method, devised
in 1947 by George B. Dantzig of Stanford University. The simplex method
provides an iterative algorithm that systematically locates feasible corner points
that will improve the objective function value until the optimal solution is reached.
Regardless of the number of decision variables and constraints, the simplex
algorithm applies the key characteristic of any LP problem: An optimal solution
always occurs at a corner point of the feasible region. The simplex algorithm
finds corner-point solutions, tests them for optimality, and stops once an optimal
solution is found.

Before discussing the simplex steps required to maximize an objective function,


a few concepts from linear algebra must be introduced. These concepts are
relatively basic and can be explored further in any introductory linear algebra
text.

Scalars, Matrices, and Vectors

The numbers used in daily life are called scalars. Scalars are simply single
numbers, or variables used to identify single numbers. A number such as 8 is a
scalar.

People who have used a spreadsheet such as Excel or who have done any
computer programming already have a good understanding of the concept of
a matrix. A matrix is a rectangular array of numbers having m rows and n
columns; it is typically contained in brackets. For instance, one can refer to the
2 x 4 matrix [A] and identify the individual numbers with a subscripted lower-case
a, such as a_ij. In the following matrix, the subscripts i and j identify the row and
column, respectively, of each matrix entry:

[matrix [A] not shown]

A vector is a type of matrix having either an m dimension of 1 (row vector) or an


n dimension of 1 (column vector). Here, a column vector [b] and a row vector [c]
are shown below:

[column vector [b] and row vector [c] not shown]

Arithmetic Operations with Matrices and Vectors

Solving an LP problem by the simplex method requires three linear algebra


operations: multiplying a vector by a matrix (which results in a vector), multiplying
a vector by a vector (which results in a scalar), and subtracting a vector from a
vector (which results in a vector).

To multiply a row vector by a matrix, the vector must have the same number of
columns as the matrix has rows; otherwise, the operation is impossible. The
following illustration shows how this multiplication is performed:

[vector-matrix multiplication example not shown]

= [(c1a11 + c2a21 + c3a31) (c1a12 + c2a22 + c3a32) (c1a13 + c2a23 + c3a33) (c1a14 + c2a24 + c3a34)]

The entries in each column of the matrix are multiplied by the entries in the
vector, then summed to produce the entries in the resulting row vector.

The next illustration demonstrates how a row vector may be multiplied by a


column vector. The result is a scalar. This operation is just a special case of the
preceding operation, as vectors are just special types of matrices. Again, the
number of columns in the row vector is equal to the number of rows in the
column vector:

[row and column vectors not shown]

= c1b1 + c2b2 + c3b3

Finally, to subtract a row vector from a row vector, the first entry of the second
vector is subtracted from the first entry of the first vector, which yields the first
entry of the resulting row vector. The second entries of the vectors are then
subtracted, yielding the second entry of the result, and so on until all entries have
been subtracted:

[c] - [d] = [c1 c2 c3] - [d1 d2 d3]

= [(c1 - d1) (c2 - d2) (c3 - d3)]

Additionally, matrices are added and subtracted in a similar manner.
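The three operations are straightforward to sketch in plain Python (the helper names are mine; the small example values are arbitrary):

```python
# Sketch of the three vector/matrix operations the simplex method relies on.
def vec_times_matrix(c, A):
    # row vector (length m) x matrix (m rows, n columns) -> row vector (length n)
    return [sum(c[i] * A[i][j] for i in range(len(c))) for j in range(len(A[0]))]

def vec_times_vec(c, b):
    # row vector x column vector -> scalar
    return sum(ci * bi for ci, bi in zip(c, b))

def vec_minus_vec(c, d):
    # entry-by-entry subtraction -> row vector
    return [ci - di for ci, di in zip(c, d)]

A = [[1, 2], [3, 4]]
print(vec_times_matrix([1, 1], A))    # [4, 6]
print(vec_times_vec([1, 2], [3, 4]))  # 11
print(vec_minus_vec([5, 6], [1, 2]))  # [4, 4]
```

Each helper mirrors one of the definitions above: columns of the matrix paired with entries of the vector, then summed.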

Row-Reduced Matrix Form

The number of nonzero rows in a matrix is known as its rank. A matrix column
containing a single one (1) in any position, with the remaining column entries
being zeros, is known as an elementary column. A matrix is said to be in row-
reduced form if the number of elementary columns is equal to the rank of the
matrix. The following matrices [B], [C], and [D] serve as examples:

[matrices [B], [C], and [D] not shown]

Which of these are row-reduced matrices? [B] is a row-reduced matrix, because


its rank is 2 (only the top two rows are nonzero) and it has two elementary
columns (3 and 4). [D] is also row-reduced, since its rank is 3 and columns 2, 3,
and 5 are elementary. However, [C] is not row-reduced, since its rank is 3 but
only columns 2 and 4 are elementary. One more elementary column is needed.

How might matrix [C] be put into row-reduced form? That is, how can one more
elementary column be created? Since each row of a matrix represents the
coefficients of an equation, all the numbers in any row of a matrix can be
multiplied by an appropriate constant without disrupting the meaning of the
matrix. Thus, a particular value can be changed to a 1. Also, since the matrix
represents a system of simultaneous linear equations, rows can legally be added
or subtracted from each other. Consequently, the other values can be changed

to zeros in the column where the 1 is. Multiplying the first row of matrix [C] by 0.5
gives:

[resulting matrix not shown]

which produces a 1 in the first position of row 1. Next, row 1 is subtracted from
row 2 to obtain:

[resulting matrix not shown]

Finally, row 1 is subtracted from row 3 three times to get:

[resulting matrix not shown]

which produces the needed zeros in the first column. Matrix [C] is now in row-
reduced form.

Pivoting a Matrix

The key to iteration in the simplex algorithm is a matrix procedure called pivoting.


This is merely the row-reduction technique just presented; the goal is to obtain a
one (1) in a particular position in the matrix, with all other column entries
becoming zero. Once the pivot entry is chosen (which is a location in the matrix),
the row and column containing the pivot entry are called the pivot row and pivot
column, respectively. All the values in the pivot row are then divided by the value
in the pivot entry position to obtain a one (1) in the pivot entry position. Then, the
pivot row is used to obtain zeros in the rest of the pivot column by subtracting the
pivot row the required number of times from the other rows. The matrix has now
been “pivoted about” the pivot entry.
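A minimal Python sketch of the pivot procedure just described (the function name and example matrix are my own; a real simplex implementation would also choose the pivot entry via the usual ratio test):

```python
# Make the entry at (row r, column c) equal to 1 and zero out the rest of its
# column, exactly as described above.
def pivot(M, r, c):
    M = [row[:] for row in M]            # work on a copy
    p = M[r][c]
    M[r] = [x / p for x in M[r]]         # scale the pivot row so the pivot entry is 1
    for i in range(len(M)):
        if i != r:
            f = M[i][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]  # zero the pivot column
    return M

M = [[2, 4, 6],
     [1, 3, 5]]
print(pivot(M, 0, 0))  # [[1.0, 2.0, 3.0], [0.0, 1.0, 2.0]] -- column 0 is now elementary
```

After the pivot, the pivot column is an elementary column, which is what each simplex iteration produces in the tableau.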

A Simplex Maximization Example

The simplex algorithm can be used to solve LP problems in which the goal is to
maximize the objective function. The following example is necessarily simple to
illustrate the mechanics of the algorithm; it could easily be solved graphically.
The method is the same for more complex problems. The simplex solution of a
minimization LP problem is described in a later section.

The AeroTech machine shop has time available on three machines, and the
shop's owner wishes to schedule production of two types of fastening pins. The
owner's objective is to maximize the profit resulting from the proposed production
run.

Lathe A is used for rough turn of the pin stock and has 50 hours of time
available. Lathe B is used to finish turn the fastening pins and has 36 hours

available. The third machine, grinder G, is used to finish grind each pin, thereby
completing the production process. The grinder has 81 hours available.
Manufacturing times for pin lots, in hours, are summarized as follows:
Machine Lot Times (hours)
             A     B     G
Pin Type 1   10    6     4.5
Pin Type 2   5     6     18.0

AeroTech’s profit on these pins is $9 per lot for Type 1 and $7 per lot for Type 2.

Before the simplex algorithm can be applied, the LP problem must be set up
using the four steps introduced in the graphical method section.

STEP 1: DEFINE THE DECISION VARIABLES. In this example, the production
mix of Types 1 and 2 must be “programmed” for maximum profitability.
Hence, the unknown number of lots of each pin type can be represented as
follows:
- X1 = Number of lots of pin Type 1
- X2 = Number of lots of pin Type 2

STEP 2: DEFINE THE OBJECTIVE FUNCTION. The machine shop owner's goal
can be expressed by the following objective function equation:

max Z = $9X1 + $7X2

This equation has one term for the profit generated by producing pin Type 1 and
another term for the profit generated by producing pin Type 2. Together, they
equal AeroTech's profit, Z, which is to be maximized.

STEP 3: DETERMINE THE CONSTRAINTS. This simplified example is limited
only by the machine times available for the production of fastening pins. Using
these times, along with the lot manufacturing times for each pin type, the
following constraints can be formulated:
 10X1 + 5X2 <= 50 (lathe A)
 6X1 + 6X2 <= 36 (lathe B)
 4.5X1 + 18X2 <= 81 (grinder G)

STEP 4: DECLARE SIGN RESTRICTIONS. Of course, the machine shop cannot
produce a negative number of fastening pins of either type. Therefore:
 X1 >= 0
 X2 >= 0

Exhibit 16-10 summarizes the complete LP problem for AeroTech machine shop.
So far, the procedure has been the same as for the graphical method described
at the beginning of this chapter. Now, six additional steps, known as the simplex
algorithm, are performed to arrive at an optimal solution.

Summary of the Linear Programming Problem for AeroTech Machine Shop

Choose production levels for Type 1 pins (X1) and Type 2 pins (X2) to maximize
the objective function:

  max Z = $9X1 + $7X2

and satisfy the following:

  10X1 + 5X2   <= 50 (lathe A time constraint)
  6X1 + 6X2    <= 36 (lathe B time constraint)
  4.5X1 + 18X2 <= 81 (grinder G time constraint)
  X1 >= 0 (sign restriction)
  X2 >= 0 (sign restriction)
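Since this problem has only two decision variables, the optimum the simplex algorithm will reach can be cross-checked by brute force. The sketch below (numpy assumed; all array names are illustrative) enumerates the corner points of the feasible region, which in two dimensions lie at intersections of pairs of constraint boundaries, and keeps the most profitable feasible one:

```python
from itertools import combinations
import numpy as np

# Constraints in the form a.x <= b: lathe A, lathe B, grinder G,
# plus the sign restrictions rewritten as -X1 <= 0 and -X2 <= 0.
A = np.array([[10, 5], [6, 6], [4.5, 18], [-1, 0], [0, -1]], dtype=float)
b = np.array([50, 36, 81, 0, 0], dtype=float)
profit = np.array([9, 7], dtype=float)

best_z, best_x = None, None
# Test the intersection of every pair of constraint boundaries.
for i, j in combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                          # parallel boundaries never meet
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x <= b + 1e-9):         # keep feasible corners only
        z = profit @ x
        if best_z is None or z > best_z:
            best_z, best_x = z, x
```

The search returns X1 = 4 lots, X2 = 2 lots, and Z = $50, the same corner the simplex iterations that follow will arrive at.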

STEP 5: CONVERT THE LP TO STANDARD MATRIX FORM. Before solving a
problem using the simplex algorithm, the objective function and constraints must
be placed in standard matrix notation, and inequalities must be removed through
the use of slack variables. To eliminate the inequalities present in the three time
constraints of Exhibit 16-10, the slack variables X3, X4, and X5 are introduced.
They replace each inequality with an equals sign:

  10X1 + 5X2 + X3             = 50 (lathe A)
  6X1 + 6X2        + X4       = 36 (lathe B)
  4.5X1 + 18X2          + X5  = 81 (grinder G)

The physical meaning of the slack variables in the AeroTech problem is the
remaining spare machine time given a particular solution for X1 and X2. Ideally,
given an optimum solution, X3, X4, and X5 would all be zero, but this is seldom
the case. For our purposes here, however, the slack variables merely provide a
convenient way of converting inequalities to equalities.

This standard matrix notation for an LP objective function is:

max Z = [c][x]

where [c] contains the coefficients in the objective function (all slack variable
coefficients are zero). The objective function must satisfy:

[A][x] = [b]

[x] >= 0

[b] >= 0

where [x] is the variable vector, [A] is the matrix of constraint coefficients for the
variables, and [b] is the right-hand side vector from the constraint equation.

To complete step 5, the AeroTech LP problem is presented in its standard matrix
form:

max Z = [c][x]

and satisfy:

[A][x] = [b]

where:

[c] = [9 7 0 0 0]     [x] = [X1 X2 X3 X4 X5]'

      | 10    5   1   0   0 |          | 50 |
[A] = |  6    6   0   1   0 |    [b] = | 36 |
      | 4.5  18   0   0   1 |          | 81 |

One should make certain that the original objective function and constraints can
be recreated from these matrices. Also, notice that matrix [A] is row-reduced, a
necessary condition of the standard form for the simplex method.

STEP 6: PREPARE THE FIRST TABLEAU. The following matrix, called a
tableau, is associated with the solution:

        [A]         |   [b]
  ------------------+---------
   [c*][A] - [c]    | [c*][b]   <- value of the objective function
  (simplex indicators)

Simplex Indicators

The matrix [A] and the vectors [b] and [c] were defined in step 5. The vector [c*]
is a subset of [c] containing the coefficients of the variables that are currently
defined by elementary columns in matrix [A]. For the initial tableau, the initial
slack variables' coefficients are in [c*]. This will become clearer as the simplex
tableau of Exhibit 16-11 is filled in.

Two columns have been added to the left-hand side of the tableau in the exhibit.
The leftmost column simply indicates which decision variables are currently
elementary. Across the table from each of these variables, one finds the value of
1 in an elementary column in [A]. The same variable appears at the top of that
elementary column.

The next column from the left contains the values of [c*]. In this first tableau, the
variables in the leftmost column are just the slack variables. In [c], the slack
variables all have coefficients of zero, so in this first tableau, the subset vector
[c*] is:

[c*] = [0 0 0]

The operations [c*] [A] - [c] and [c*] [b] can now be carried out, as shown in the
exhibit, and the rest of the tableau completed. The portion of the tableau
corresponding to [c*] [b] contains the value of the objective function (the profit, or
Z, in this case) for the current solution.

The portion of the tableau corresponding to [c*] [A] - [c] contains the simplex
indicators for the current solution. In each column, the simplex indicator shows
how much Z will decrease per unit increase of that column's variable (so a
negative indicator signals a variable that will increase Z). These indicator
numbers are vital for the next four steps of the simplex algorithm.

STEP 7: CHECK TO SEE IF THE CURRENT SOLUTION IS MAXIMAL. If no


simplex indicator is negative, the solution is maximal, and the algorithm
terminates. The values of the decision variables and the objective function can
be read directly from the table. The optimization is complete.

Preparation of the First Simplex Tableau for the AeroTech Problem

      [c*]   X1     X2    X3   X4   X5   [b]
  X3   0     10      5     1    0    0    50
  X4   0      6      6     0    1    0    36
  X5   0     4.5    18     0    0    1    81

[c] = [9 7 0 0 0]   [c*] = [0 0 0]

[c*][A] - [c] = [0 0 0][A] - [9 7 0 0 0]
              = [0 0 0 0 0] - [9 7 0 0 0]
              = [-9 -7 0 0 0]

[c*][b] = [0 0 0][50 36 81]' = 0

      [c*]   X1     X2    X3   X4   X5   [b]   Quotients
  X3   0    (10)     5     1    0    0    50   50/10 = 5     <- minimum quotient (pivot row)
  X4   0      6      6     0    1    0    36   36/6  = 6
  X5   0     4.5    18     0    0    1    81   81/4.5 = 18
            -9      -7     0    0    0     0
             ^
       pivot column                       (10) = pivot entry
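The indicator computation at the bottom of the exhibit is a pair of matrix products that can be checked directly (numpy assumed; variable names are illustrative):

```python
import numpy as np

# Data of the first AeroTech tableau: [A], [b], [c], and [c*]
# (the objective coefficients of the current basic slack variables).
A = np.array([[10.0,  5.0, 1.0, 0.0, 0.0],
              [ 6.0,  6.0, 0.0, 1.0, 0.0],
              [ 4.5, 18.0, 0.0, 0.0, 1.0]])
b = np.array([50.0, 36.0, 81.0])
c = np.array([9.0, 7.0, 0.0, 0.0, 0.0])
c_star = np.array([0.0, 0.0, 0.0])

indicators = c_star @ A - c   # simplex indicators: [-9, -7, 0, 0, 0]
z = c_star @ b                # objective value of this basic solution: 0
```

Because the basic variables are all slacks with zero objective coefficients, the indicators reduce to -[c] and the starting profit is zero, exactly as in the exhibit.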

STEP 8: CHECK TO SEE IF NO MAXIMAL SOLUTION EXISTS. If some simplex
indicators are negative, and no entry in those columns is positive, there is no
maximal solution, and the algorithm terminates. Either there is an inconsistency
in the constraints of the LP problem, or the feasible region is unbounded. In
either case, the LP problem cannot be optimized as formulated.

STEP 9: CREATE A NEW TABLEAU TO FIND A BETTER SOLUTION. Another
tableau will be created that proceeds from the basic solution of the first tableau to
another solution that might satisfy step 7 or step 8. This iteration is performed by
pivoting matrices in the current tableau and reevaluating the simplex indicators.

STEP 10: REPEAT STEP 9 UNTIL EITHER STEP 7 OR STEP 8 IS SATISFIED.


The simplex algorithm is iterative. Recalling the graphical approach to optimizing
an LP problem, the simplex algorithm simply proceeds around the perimeter of
the feasible region, stopping at corner points along the way to test for optimality.
A new tableau is associated with each corner point.

To continue with the AeroTech problem, the bottom of Exhibit 16-11 should be
revisited. Two of the simplex indicators are negative; therefore, step 7 is not
satisfied. There is at least one positive value in the columns above these
negative indicators, so step 8 is not satisfied. Therefore, we must proceed to step
9 and generate a new tableau.

To create the new tableau, an entry in the current tableau is selected about
which to pivot. First, the most negative simplex indicator having a positive value
above it in [A] is selected (-9 in the exhibit). In essence, the greatest negative
value indicates which variable will increase Z at the greatest rate. The column
above this indicator is the pivot column.

Second, for each positive entry in the pivot column in [A], the entry in [b] on that
row is divided by the entry in the pivot column of [A], and the resulting quotient is
noted (three such quotients are shown in the exhibit, one for each row of [A] ).
Each of these quotients is the largest value the pivot column variable can be
without exceeding the constraint of that row.

Finally, the smallest of these quotients is determined. It indicates the largest


value of the pivot column variable that is assured of not violating any constraints.
The row corresponding to this quotient is the pivot row. The pivot entry is at the
intersection of the pivot row and the pivot column (the circled 10 is the pivot entry
in the exhibit).

Exhibit 16-12 shows the second tableau for AeroTech. Since [c*] is not needed
beyond the first tableau, the leftmost columns are omitted. To create this second
tableau, the entire first tableau is transformed by the pivoting process described
earlier. This pivoting also affects the entire last row of the tableau and the right-
hand column. All values in the pivot row are divided by the pivot entry, which
leaves a 1 in the pivot position. Then, multiples of the resulting pivot row are
subtracted from the tableau's other rows to leave zeros in the pivot column.

Once again, the second tableau does not satisfy steps 7 and 8, so a third tableau
must be generated, starting with the selection of a pivot entry. The only negative
simplex indicator in the exhibit is -2.5, so that column becomes the pivot column.
Recomputing the three quotients and finding their minimum results in the second
row being chosen as the pivot row. The pivot entry (3) is circled.

Exhibit 16-13 shows the third tableau produced by pivoting the second tableau
around its pivot entry. Note that in the third tableau, none of the simplex
indicators are negative, satisfying step 7 of the algorithm. Therefore, this tableau
represents an optimal solution to the AeroTech problem, and no further iterations
are necessary.
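The two pivots that carry the first tableau to the optimal third tableau can be reproduced numerically. In this sketch (numpy assumed; the pivot helper is illustrative, not from the text), the simplex-indicator row is appended beneath [A | b] and the tableau is pivoted about the same entries chosen in the exhibits:

```python
import numpy as np

def pivot(T, r, c):
    """Standard tableau pivot about entry (r, c)."""
    T = T.astype(float).copy()
    T[r] /= T[r, c]
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, c] * T[r]
    return T

# First tableau: the rows of [A | b] with the simplex-indicator row
# [c*][A] - [c] | [c*][b] appended at the bottom.
T = np.array([[10.0,  5.0, 1.0, 0.0, 0.0, 50.0],
              [ 6.0,  6.0, 0.0, 1.0, 0.0, 36.0],
              [ 4.5, 18.0, 0.0, 0.0, 1.0, 81.0],
              [-9.0, -7.0, 0.0, 0.0, 0.0,  0.0]])

T = pivot(T, 0, 0)   # second tableau: pivot about the 10
T = pivot(T, 1, 1)   # third tableau: pivot about the 3

solution = T[:3, -1]   # [4, 2, 27]: X1 lots, X2 lots, slack X5 hours
max_profit = T[-1, -1] # 50: the maximal value of Z
```

After the second pivot, no indicator in the bottom row is negative, so the run stops with the same [b] column and profit as the final exhibit.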

The solution values in the tableau in Exhibit 16-13 are read as follows. First, the
objective function (Z) value from the lower right corner of the tableau is read, in
this case, 50. AeroTech can expect to make a profit of $50 from the optimal
production run. But what is this optimal production run? How many fastening pins
of Types 1 and 2 should be produced? This question is answered in the
rightmost [b] column of the tableau.

The Second Simplex Tableau for the AeroTech Problem

   X1    X2      X3      X4   X5   [b]
    1    0.5     0.1      0    0     5
    0    3      -0.6      1    0     6
    0   15.75   -0.45     0    1    58.5
    0   -2.5     0.9      0    0    45

   X1    X2      X3      X4   X5   [b]    Quotients
    1    0.5     0.1      0    0     5    5/0.5 = 10
    0   (3)     -0.6      1    0     6    6/3 = 2            <- minimum quotient (pivot row)
    0   15.75   -0.45     0    1    58.5  58.5/15.75 = 3.7
    0   -2.5     0.9      0    0    45
         ^
   pivot column                           (3) = pivot entry

First, find the elementary columns in [A] of the third tableau. Then, note the
decision variables to which these columns correspond and the row on which
each column's 1 is located. For instance, the first column of the tableau is
elementary and corresponds to the variable X1. This column has a 1 in the first
row. It is now possible to read across this row to the far right. The value of 4
appears in the first row of [b], meaning that four lots of pin Type 1 should be
produced.

Similarly, the value of 2 appears in the second row of [b], meaning that two lots
of pin Type 2 should be produced. The value of 27 in row three of [b] means that
27 hours of unused (slack) time remain on grinder G because the third
elementary column corresponds to X5. Having completed the LP problem,
AeroTech's owner can now run the machines to this optimal schedule.

Sensitivity Analysis

When an optimal solution is reached, management would often like to know how
the optimal values would react to a change in the initial formulation of the LP
problem, but it is not practical to rework the entire problem for each possible
change. Fortunately, the information can be obtained directly through an
analytical approach called sensitivity analysis (also referred to as postoptimality
analysis). Sensitivity analysis basically looks at the question of “what if” a
variable is different from that originally estimated. The widespread use of
computers has made sensitivity analysis a common extension of linear
programming. Most linear programming computer packages include the results
of sensitivity analysis as a part of the normal printout.

The Third and Final Simplex Tableau for the AeroTech Problem

   X1   X2    X3     X4      X5   [b]
    1    0    0.2   -0.167    0     4
    0    1   -0.2    0.333    0     2
    0    0    2.7   -5.25     1    27
    0    0    0.4    0.833    0    50

No simplex indicator is negative. Therefore, the solution is maximal.

   X1   X2    X3     X4      X5   [b]
    1    0    0.2   -0.167    0     4   <- solution for X1 (4 lots)
    0    1   -0.2    0.333    0     2   <- solution for X2 (2 lots)
    0    0    2.7   -5.25     1    27   <- solution for X5 (27 hours)
    0    0    0.4    0.833    0    50   <- profit ($50)

Optimal solution

Shadow Prices

A shadow price represents the change in the objective function that would result
from the addition or reduction of one unit of a resource, such as machine time or
labor time. Shadow pricing, a form of sensitivity analysis, shows how sensitive
the optimal value of the objective function would be to adding or reducing
resources. For example, is it worthwhile to pay workers an overtime rate? If the
increase in overtime pay is $1,000 and results in an increase of $800 in the
optimal objective function, the addition of overtime work is not worthwhile.

The shadow price “value” of adding one additional unit of a resource can be
readily determined by examining the last row of the final tableau. Each value is
the shadow price for that variable. For example, as shown in Exhibit 16-13, the
shadow price of slack variable X3 is 0.4. Since slack variable X3 is directly
associated with constraint 1, this means that a one-hour increase in lathe A's
time would result in an increase in Z of $0.40.
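The $0.40 shadow price can be confirmed by re-solving the LP with one extra hour on lathe A. This sketch (numpy assumed; the max_profit helper name is illustrative) reuses a brute-force corner-point search over the two-variable feasible region:

```python
from itertools import combinations
import numpy as np

def max_profit(lathe_a_hours):
    """Corner-point search for the AeroTech LP with an adjustable
    lathe A capacity (illustrative helper, not from the text)."""
    A = np.array([[10, 5], [6, 6], [4.5, 18], [-1, 0], [0, -1]], float)
    b = np.array([lathe_a_hours, 36, 81, 0, 0], float)
    profit = np.array([9, 7], float)
    best = -np.inf
    for i, j in combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-9:
            continue                      # parallel boundaries
        x = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ x <= b + 1e-9):     # feasible corner points only
            best = max(best, profit @ x)
    return best

# One extra lathe A hour raises the optimal profit by the shadow price.
increase = max_profit(51) - max_profit(50)
```

The increase works out to $0.40, matching the 0.4 read from the X3 position of the final tableau's last row.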

USING THE SIMPLEX METHOD TO SOLVE MINIMIZATION PROBLEMS

In the preceding discussion of maximization problems using the simplex


algorithm, the constraints were of the following form:

[E][x] <= [b]

[x] >= 0

[b] >= 0

where [E] contains the constraint coefficients for the variables, not including slack
variables.

The first simplex tableau could be formed around the subset vector [c*] = [0 0 ... 0],
which relates to starting at the origin in a graphical plot of the feasible region.
Most maximization problems will fit this form nicely, but the majority of
minimization problems will not contain the origin as a feasible point. They often
have constraints of the following form:

[E][x] >= [b]

A quick review of Exhibit 16-2 (a maximization problem) shows that the origin (0,
0) is a corner point of the feasible region, but in Exhibit 16-7 (a minimization
problem) the origin is outside the feasible region. Thus, such a simplex
minimization problem cannot be started from the origin because it is not a
feasible point.

To overcome this difficulty, a modification of simplex called the two-phase
procedure is employed. The first phase of this procedure uses subtracted slack
variables plus additional artificial variables to find a corner point of the feasible
region and to produce a tableau based on that point. The second phase then
drops the artificial variables and uses the feasible point to optimize the LP
problem similar to the way described for the LP maximization problem.

An additional concern with all minimization problems is the evaluation of the


simplex indicators. Whereas negative indicators were sought in the maximization
problem for AeroTech machine shop, positive indicators are looked for in a
minimization problem. The maximum simplex indicator in a minimization problem
indicates the variable that will reduce Z at the greatest rate. The pivot column will
contain a positive indicator, but the pivot row will be determined exactly as
before.

A Simplex Minimization Example

Consider the following LP problem, which was solved graphically as the K9


Kondo Company problem earlier in this chapter:

min Z = $200X + $50Y



subject to the following:


 6X+ 3Y >= 60
 2X+ 3Y >= 36
 X >= 0
 Y >= 0

Phase One of the Two-Phase Procedure

First, the problem is converted to standard matrix form by subtracting the slack
variables J and K from the constraints. Then, the artificial variables M and N are
added to produce the following equality constraints:

  6X + 3Y - J     + M     = 60
  2X + 3Y     - K     + N = 36

For phase one, a “rigged” objective function is used, which contains only the
artificial variables. The use of these artificial variables and the rigged objective
function may seem strange and, for the purposes here, will have to be taken on
faith. In matrix form, the rigged objective function is:

min Z = [a][x]

      = [0 0 0 0 1 1][X Y J K M N]'

The first tableau is at the top of Exhibit 16-14. The vector [a*] contains the
coefficients of the artificial variables from the rigged objective function since the
artificial variables are defined by the elementary columns in matrix [A]. The
simplex indicators are then computed, and the value of the objective function is
determined as shown.

Because this is a minimization problem, the highest-value positive simplex
indicator is selected to determine the pivot column; in this case, 8 is chosen. As
before, the quotients for this tableau are computed, and the minimum quotient is
chosen as the pivot row. Again, the minimum quotient value indicates the largest
value the pivot column variable can be without violating any of the constraints.
The pivot entry, in this case 6, is circled.

Pivoting twice yields the tableau at the bottom of Exhibit 16-14. Because there
are now no positive simplex indicators, phase one is completed. The value of the
objective function in the lower right corner of the tableau gives valuable
information. In the tableau, this value is zero, indicating that there is a solution to
the problem if one wishes to proceed with phase two. If the value were nonzero,
no solution to the LP problem would exist, and the process would not be
continued.
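Phase one can be verified numerically by pivoting the rigged tableau twice, as in the exhibit (numpy assumed; the pivot helper is illustrative, not from the text):

```python
import numpy as np

def pivot(T, r, c):
    """Standard tableau pivot about entry (r, c)."""
    T = T.astype(float).copy()
    T[r] /= T[r, c]
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, c] * T[r]
    return T

# Phase one tableau for the K9 Kondo problem: [A | b] with the
# rigged indicator row [a*][A] - [a] | [a*][b] appended.
T = np.array([[6.0, 3.0, -1.0,  0.0, 1.0, 0.0, 60.0],
              [2.0, 3.0,  0.0, -1.0, 0.0, 1.0, 36.0],
              [8.0, 6.0, -1.0, -1.0, 0.0, 0.0, 96.0]])

T = pivot(T, 0, 0)   # pivot about the 6 (column X)
T = pivot(T, 1, 1)   # pivot about the 2 (column Y)

rigged_z = T[-1, -1]   # 0: a feasible corner exists, so phase two can begin
```

The rigged objective value drops from 96 to 0, and the [b] column reads [6, 8], reproducing the final phase-one tableau of the exhibit.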

Phase Two of the Two-Phase Procedure

Phase one found a corner point in the feasible region that permits the simplex
method to optimize the minimization LP problem. Exhibit 16-15 shows the first
tableau of phase two. The columns of [A] corresponding to the artificial variables
M and N are simply deleted, and the rows are reordered, if necessary, so the 1's
of the elementary columns are in the same order as the objective function
coefficients. The original objective function is, of course, restored to read in
matrix form.

Tableau Associated with Phase One of the Two-Phase Simplex Minimization
Problem

(J and K are slack variables; M and N are artificial variables.)

      [a*]   X    Y    J    K    M    N   [b]
  M    1     6    3   -1    0    1    0    60
  N    1     2    3    0   -1    0    1    36

[a] = [0 0 0 0 1 1]   [a*] = [1 1]

[a*][A] - [a] = [1 1][A] - [0 0 0 0 1 1]
              = [8 6 -1 -1 1 1] - [0 0 0 0 1 1]
              = [8 6 -1 -1 0 0]

[a*][b] = [1 1][60 36]' = 96

      [a*]   X     Y    J    K    M    N   [b]   Quotients
  M    1    (6)    3   -1    0    1    0    60   60/6 = 10    <- minimum quotient (pivot row)
  N    1     2     3    0   -1    0    1    36   36/2 = 18
             8     6   -1   -1    0    0    96
             ^
       pivot column                         (6) = pivot entry

   X     Y      J        K      M        N     [b]   Quotients
   1     0.5   -0.167    0      0.167    0      10    10/0.5 = 20
   0    (2)     0.333   -1     -0.333    1      16    16/2 = 8    <- minimum quotient (pivot row)
   0     2      0.333   -1     -1.333    0      16
         ^
   pivot column                                 (2) = pivot entry

 

   X     Y      J       K      M       N     [b]
   1     0     -0.25    0.25   0.25   -0.25     6
   0     1      0.167  -0.5   -0.167   0.5      8
   0     0      0       0     -1      -1        0   <- The value is zero; therefore, a solution to the LP exists.

Creation of the First Tableau in Phase Two of the Simplex Minimization Problem

      [c*]   X    Y    J       K      M       N     [b]
  X   200    1    0   -0.25    0.25   0.25   -0.25    6
  Y    50    0    1    0.167  -0.5   -0.167   0.5     8

[c] = [200 50 0 0]   [c*] = [200 50]

[c*][A] - [c] = [200 50][A] - [200 50 0 0]
              = [200 50 -41.67 25] - [200 50 0 0]
              = [0 0 -41.67 25]

[c*][b] = [200 50][6 8]' = 1,600

      [c*]   X    Y    J        K       [b]
  X   200    1    0   -0.25    (0.25)     6
  Y    50    0    1    0.167   -0.5       8
             0    0   -41.67    25      1,600
                                ^
                         pivot column    (0.25) = pivot entry

min Z = [c][x]

      = [200 50 0 0][X Y J K]'

Noting that the elementary columns in [A] correspond to the variables X and Y,
the subset vector [c*] can be determined and the simplex indicators recalculated
as shown in the exhibit. This leaves the complete first tableau of phase two
shown at the bottom. Note that the M and N columns have been eliminated from
the tableau, along with the [a*] values. Also, note that no row reordering was
necessary since the 1's of the elementary columns were in the same order as the
objective function coefficients.

There is one positive simplex indicator in the tableau, which is 25. In the [A]
column above this indicator, there is only one positive value, 0.25, which
automatically becomes the pivot value. In this case, it is not necessary to
compute quotients to determine the pivot row because there is only one positive
pivot column value to choose from. Exhibit 16-16 shows the second tableau
obtained by pivoting the first.

The Second and Final Tableau in Phase Two of the Simplex Minimization
Problem

 
   X      Y     J        K    [b]
   4      0    -1        1      24   <- solution for K
   2      1    -0.333    0      20   <- solution for Y
-100      0   -16.67     0    1,000  <- Z

No simplex indicator is positive. Therefore, the solution is minimal.

Use the second constraint:

2X + 3Y + 0J - 1K = 36

to obtain a solution for X:

2X + 3(20) + 0 - 1(24) = 36
X=0

All simplex indicators are nonpositive so the solution is now minimized and the
simplex algorithm terminates. The location of the elementary columns in this
tableau should be studied. The number 24 in [b] is the optimal value of the slack
variable K. The number 20 in [b] is the optimal value of the decision variable Y.
The tableau provides no value for the decision variable X, so its optimal value is
zero. Finally, the value of the objective function is 1,000 (or $1,000,000). Note
that the simplex solution is the same as that found graphically earlier in the
chapter for the K9 Kondo Company problem.
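As a cross-check on the two-phase result, the same corner-point enumeration used for the maximization example works here with the inequalities reversed and the cheapest feasible corner kept (numpy assumed; array names are illustrative):

```python
from itertools import combinations
import numpy as np

# K9 Kondo constraints as a.x >= b, including the sign restrictions.
A = np.array([[6, 3], [2, 3], [1, 0], [0, 1]], dtype=float)
b = np.array([60, 36, 0, 0], dtype=float)
cost = np.array([200, 50], dtype=float)

best_z, best_x = None, None
for i, j in combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                          # parallel boundaries
    x = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ x >= b - 1e-9):         # feasible corner of the region
        z = cost @ x
        if best_z is None or z < best_z:
            best_z, best_x = z, x
```

The cheapest corner is X = 0, Y = 20 with Z = 1,000, agreeing with both the final tableau and the graphical solution.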

THE LINDO COMPUTER PACKAGE

If the use of linear programming in industry has been restricted, it has been
because of two difficulties: (1) the cost of collecting the necessary input data and
(2) the cost of solving large LP problems. The first of these roadblocks is being
removed as many firms develop integrated information and database systems.
Since the solution of LP problems is purely mechanical, these problems are best
assigned to the computer. Therefore, rapid reductions in the cost of computer
hardware are removing the second roadblock.

Of course, computers require software to solve problems. Probably the most
common specialized linear programming software package used in the early
days was LINDO, an acronym for Linear INteractive Discrete Optimizer. LINDO
is most at home running on a mainframe or minicomputer. Ultimately, however,
LINDO was superseded by Excel and Lotus 1-2-3 when they adopted the
Karmarkar method in about 1998. The Solver in Excel uses the Karmarkar
method even though the Excel documentation does not refer to the Solver as an
LP tool.

SOLVING LINEAR PROGRAMMING PROBLEMS WITH KARMARKAR'S
METHOD

Prior to 1984, most scientists and mathematicians thought that the simplex
method was as far as they could go. A number of relatively easy-to-use decision
support packages based on the simplex method have been available for years.
Even when LP problems are not terribly complex, however, solving them can
chew up so much computer time that the answer is useless before it is found.

In November 1984, a 28-year-old mathematician named Narendra Karmarkar


announced he had discovered a quick way to solve problems so hideously
complicated that they often defied even the most powerful supercomputers.
Fittingly, Karmarkar worked at Bell Laboratories, the home of the first transistors,
the devices that made computerization possible in the first place. Karmarkar's
discovery is loaded with significance.

Testing has shown that Karmarkar's method is many times faster than the classic
simplex method. An implementation of Karmarkar's method outperformed one
implementation of the simplex method by a factor of over 50 on medium-scale
problems of 5,000 variables. AT&T (Bell Labs' parent) sold the first software
product based on Karmarkar's method to the US Air Force's Military Airlift
Command (MAC).

On a typical day, thousands of Air Force planes ferry cargo and passengers
among airfields scattered around the world. Determining how to fly various
routes, deciding which aircraft should be used, and scheduling pilots and ground
personnel are the primary functions of the MAC. Getting all the pieces to play
together is a classic challenge in linear programming. In fact, the MAC's LP
problem contains upward of 150,000 variables and 12,000 constraints. If a
computer could wring out just a couple of percentage points of added efficiency,
it would be worth millions of dollars. Karmarkar's method has enabled the MAC
to do just that. In fact, the most common software for scheduling classes at
colleges and universities uses this method to develop schedules which match
rooms, teachers and students.

Improving on the Simplex Method

How does Karmarkar's method work? A common analogy compares an LP


problem to a geodesic dome. In the graphical problems of this chapter, two-
dimensional feasible regions were plotted on the x and y axes. The simplex
examples had two variables and therefore had two-dimensional feasible regions.
Since the addition of each new variable adds another dimension to the feasible
region, LP problems with three variables are difficult, and problems with four or
more variables are impossible to visualize graphically. Therefore, it helps to think

of the feasible region of an LP problem as a geodesic dome in multiple


dimensions.

Each of the dome's corners is a possible solution. The task is to find which one
holds the best solution. With the simplex method, the program “lands” on one
corner and inspects it. Then it scouts the adjacent corners to see if there is a
better answer; if so, it heads off in that direction. The procedure is repeated at
every corner until the program finds itself boxed in by worse solutions.

Karmarkar's method employs a radically different tactic. It starts from a point


within the multidimensional feasible region and finds the optimal solution by
taking a shortcut that avoids the tedious surface route. From this interior point,
the region is “projected” to reconfigure its shape. Then, the method determines in
which direction the solution lies. Finally, the problem structure is allowed to return
to its original shape, and the program jumps toward the solution, pausing at
intervals to repeat the exercise and home in on the answer.

New Solutions to Old Problems

Karmarkar's method applies to a variety of commercial problems. An airline


trying to coordinate scheduling efficiently is one example. An energy producer
managing the movement of oil among ships, storage tanks, and refineries is
another. Yet another example is the classic LP problem of locating distribution
centers geographically for a manufacturing firm. In truth, answering most of what
are termed “what-if” questions involves simulation, backed up by linear
programming.

Perhaps the greatest benefits will be in the use of simulation for everyday
operations. Problems that used to take hours of expensive time on the corporate
mainframe can now be solved in minutes, possibly even on a desktop
microcomputer. These benefits can only be a boon to productivity and cost
management.

SUMMARY OF LEARNING OBJECTIVES

The major goals of this chapter were to enable you to achieve four learning
objectives:

Learning objective 1. Explain linear programming and its components and


assumptions.

Linear programming consists of a sequence of steps that lead to an optimal


solution to a class of problems dealing with the allocation of scarce resources.
There are three popular linear programming methods:
 Graphical method
 Simplex method
 Karmarkar's method

All LP problems are composed of four components:


 Objective function

 Decision variables
 Constraints
 Feasible region

The objective function deals with two types of objectives:


 Maximization of such things as profits, revenue, or productivity
 Minimization of such things as cost, time, or scrap

Decision variables are simply choices available to management in terms of the


amount of input or output. Constraints are limitations that restrict alternatives
available to management. Together, the constraints define the set of all feasible
combinations of decision variables, which is called the feasible region. LP
methods (graphical, simplex, or Karmarkar) systematically search the feasible
region for the combination of decision variables that will yield an optimal solution
in terms of the objective function.

To use linear programming effectively, certain assumptions must be satisfied:


 Linearity. All functions, such as costs, prices, and technological
requirements, must be linear in nature.
 Certainty. All parameters are assumed to be known with certainty.
 Nonnegativity. Negative values of decision variables are
unacceptable.

Learning objective 2. Describe the graphical method, and apply it in solving both
maximization and minimization linear programming problems.

The graphical method is used to find optimal solutions for two-variable LP


problems. This method involves plotting the constraints on a graph and
identifying the feasible region.

The optimal solution for a maximization problem is found by moving the objective
function, which is an isoprofit line, away from the origin until it intersects the
extreme corner point of the feasible region, which is a polygon. The optimal
solution occurs where the isoprofit line intersects the extreme corner point, or
where the isoprofit line overlays one of the boundaries.

Graphical minimization problems are similar to maximization problems except for


two differences. One is that the constraints are usually of the “greater than or
equal to” kind rather than “less than or equal to.” This difference causes the
feasible region to be outside the polygon instead of inside. The other difference
is that the objective is to minimize something, such as cost. The optimum corner
point is found by moving the objective function, which is an isocost line, toward
the origin rather than away from it.

Learning objective 3. Describe the simplex method, and use it in solving both
maximization and minimization linear programming problems.

Most real-world LP problems have more than two variables and are therefore too
complex for the graphical method. The simplex method is a general-purpose
algorithm that is widely used to solve multivariable and multiconstraint LP
problems.

With the simplex method, a series of iterations is conducted until an optimal


solution is found. Often, the iterations and computations can become
overwhelming and thus call for the use of computer software packages, such as
LINDO. Still, some familiarity with manual calculations is useful in understanding
how the simplex algorithm works.

Learning objective 4. Discuss Karmarkar's method of solving linear programming


problems.

The most significant change in linear programming solution methods is


Karmarkar's method. This relatively new method often takes considerably less
computer time to solve very large LP problems.

The simplex method's algorithm discovers a solution by moving from one
adjacent corner point to the next, following the outside edges of the feasible
region. Alternatively, Karmarkar's method takes a shortcut by following a path of
points on the inside of the feasible region. Although the simplex method will
likely continue to be used for many LP problems, software that supports
Karmarkar's method is already being used by a number of companies as well as
federal government agencies.

IMPORTANT TERMS
 Constraint A limit to the degree to which an objective can be pursued.
 Decision variables Represent choices available to decision makers in
terms of amounts of either inputs or outputs.
 Elementary column A matrix column containing a single one (1) in any
position, with the remaining column entries being zeros.
 Feasible region A feasible solution space that contains the set of all
possible combinations of decision variables.
 Graphical method An approach to optimally solving LP problems
involving two decision variables and a limited number of constraints.
 Isocost lines A set of parallel lines that represent the objective function
of an LP problem. They indicate constant amounts of cost at various
solution values. They are used to solve an LP minimization problem
graphically.
 Isoprofit lines A set of parallel lines that represent the objective
function of an LP problem. They indicate constant amounts of profit at
various solution values. They are used to solve an LP maximization
problem graphically.
 Karmarkar's method An approach to optimally solving large-scale LP
problems efficiently. It starts from a point within the multidimensional
feasible region and finds the optimal solution by taking a shortcut that
avoids the tedious surface route of the simplex method.
 Linear equation An algebraic equation whose variable quantity or
quantities are in the first power only and whose graph is a straight
line.
 Linear programming (LP) An application of matrix algebra used to
solve a broad class of problems that can be represented by a system
of linear equations. It is used to determine the best allocation of
multiple scarce resources to achieve an optimal solution.
 Matrix A rectangular array of numbers having m rows and n columns;
it is typically contained in brackets.
 Objective function The linear mathematical equation that states the
objective of an LP problem. The major objective of a typical enterprise
is to maximize profits or minimize costs.
 Optimal solution The solution to an LP problem that provides the best
answer to the objective function.
 Pivoting The key iterating process of the simplex algorithm.
 Rank The number of nonzero rows in a matrix.
 Row-reduced form A matrix in which the number of elementary
columns is equal to the rank of the matrix.
 Scalars Single numbers, or variables that stand for single numbers.
A scalar is a quantity that has magnitude but no direction in space.
 Sensitivity analysis (postoptimality analysis) An analysis that projects
how much a solution might change, given changes in the variables.
 Shadow price The value of one additional unit of a resource in the
form of one more hour of machine time or labor time or other scarce
resource in linear programming.
 Simplex method An approach to optimally solving multivariable,
multiconstraint LP problems. The simplex method applies an algorithm
iteratively to locate feasible corner points in a systematic fashion until
it arrives at the best solution (i.e., the highest profit or lowest cost).
 Slack variables Represent the amount of each resource that will not
be used if the solution is implemented. Under the simplex method,
constraints (inequalities) are converted to equations (equalities) by
adding slack variables. Slack variables are always nonnegative.
 Tableau A table or matrix of the coefficients used in the problem
equations. It represents a solution of the simplex algorithm in tabular
form. By inspecting the bottom row of each tableau in a series of
tableaus, one can immediately tell if it represents the optimal solution.
Each tableau corresponds to a corner point of the feasible region. The
initial tableau corresponds to the origin. Subsequent tableaus are
developed by shifting to an adjacent corner point in the direction that
yields an optimal solution.
 Vector A type of matrix having either an m dimension of 1 (row vector)
or an n dimension of 1 (column vector).

DEMONSTRATION PROBLEMS

DEMONSTRATION PROBLEM 1 Developing a linear programming problem.


The Marlowe Company manufactures and sells two products, A and B. Demand
for the two products has grown to such a level that Marlowe can no longer meet
the demand with its present resources. The company can work a total of 800,000
direct labor hours (DLhr) annually using three shifts. A total of 250,000 hours of
machine time is available annually. The unit sales price for product A is $49.90.
The unit sales price for product B is $84.50. The company plans to use linear
programming to determine a master production schedule that maximizes its
contribution margin. Overhead is assigned on a machine hour (Mhr) basis. The
unit production requirements and unit cost data follow:

                      Product A                  Product B
Raw materials                        $ 4                        $ 8
Direct labor          1 DLhr @ $6      6        2 DLhr @ $6      12
Variable overhead     0.5 Mhr @ $16    8        2 Mhr @ $8       16
Fixed overhead        1.5 Mhr @ $10   15        3 Mhr @ $10      30

Required:
 a. Develop the objective function that will maximize Marlowe's
contribution margin (CM).
 b. Develop the constraint function for the direct labor.
 c. Develop the constraint function for the machine capacity.

SOLUTION TO DEMONSTRATION PROBLEM 1

a. The objective function that will maximize Marlowe's total contribution margin
(CM):

max CM = 31.90A + 48.50B

The total variable unit cost of product A is $18 ($4 raw materials + $6 direct labor
+ $8 variable overhead). The CM is $31.90 ($49.90 unit sales price - $18
variable cost). Similarly, the total variable unit cost of product B is $36 ($8 raw
materials + $12 direct labor + $16 variable overhead), and the CM is $48.50
($84.50 unit sales price - $36 variable cost). Thus, the objective function should
maximize the total CM from both products.

b. The constraint function for the direct labor is:

A + 2B <= 800,000

Because 800,000 direct labor hours are available, the function must be equal to
or less than 800,000. Every unit of product A requires 1 hour of direct labor, and
every unit of product B requires 2 direct labor hours.

c. The constraint function for the machine capacity is:

0.5A + 2B <= 250,000

Because 250,000 hours of machine time are available, the function must be
equal to or less than 250,000 machine hours. Every unit of product A requires
0.5 hours, and every unit of product B requires 2 hours.
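A formulation like this can also be checked numerically with an off-the-shelf LP solver. The sketch below uses Python's scipy.optimize.linprog (an assumed tool, not part of the reviewer); since linprog minimizes, the CM coefficients are negated.

```python
# Sketch: solving Marlowe's formulation with scipy.optimize.linprog
# (scipy is an assumed dependency; linprog minimizes, so the CMs are negated).
from scipy.optimize import linprog

res = linprog(
    c=[-31.90, -48.50],          # max CM = 31.90A + 48.50B
    A_ub=[[1.0, 2.0],            # direct labor: A + 2B <= 800,000 DLhr
          [0.5, 2.0]],           # machine time: 0.5A + 2B <= 250,000 Mhr
    b_ub=[800_000, 250_000],     # variables are nonnegative by default
)
print(res.x, -res.fun)           # optimal production mix and total CM
```

With these data the machine-hour constraint is the binding one, and product A's CM per machine hour ($63.80) far exceeds product B's ($24.25), so the solver devotes all machine time to product A.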

DEMONSTRATION PROBLEM 2 Using the graphical method to solve a


maximization LP problem.

Office Designs manufactures and sells two kinds of desktop pen and pencil sets.
The Executive (E) is a high-quality set, while the Clerical (C) is of somewhat
lower quality. The contribution margin (CM) is $8 for each Executive set sold
and $2 for each Clerical set sold. Each Executive set requires twice as much
manufacturing time as is required for a Clerical set. If only Clerical sets are
made, the company has the capacity to
manufacture 1,200 sets daily. Enough pen and pencil components are available

to make 800 sets daily of Executive and Clerical combined. Executive requires a
special marble pedestal, of which only 500 per day are available. Clerical
requires a metal pedestal, of which 700 per day are available. The company can
sell all the Executive and Clerical sets that it produces.

Required:
 a. Formulate the problem.
 b. Use the graphical method to find the optimal solution.
 c. Management wants to know what the optimal solution will be if the
number of available marble pedestals is reduced to 400. Prepare a
graph showing this postoptimal solution.

SOLUTION TO DEMONSTRATION PROBLEM 2

a. The objective function maximizes the total contribution margin (CM), where the
CM is $8 for each Executive (E) set and $2 for each Clerical (C) set. Therefore,
the objective function for Office Designs is:

max CM = 8E + 2C

Certain constraints exist. First, only enough manufacturing time is available to
produce 1,200 sets of C, if only C is made. It takes twice as much manufacturing
time to produce E. The company does not have to use all its manufacturing
capacity. The first constraint is stated as:

2E + C <= 1,200

Second, enough pen and pencil components are available to produce 800
desktop sets of any combination daily. This constraint is stated as:

E + C <= 800

Third, the pedestals for each set are also limited in supply. Only 500 marble
pedestals are available daily to make E, and only 700 metal pedestals are
available daily for C. Mathematically, these two constraints become:
 E <= 500
 C <= 700

Finally, although it's common sense, for mathematical completeness, the
following nonnegativity constraints are necessary:
 E >= 0
 C >= 0

Putting all the preceding material together, the LP problem is formulated as:

max CM = 8E + 2C where
 2E + C <= 1,200
 E + C <= 800
 E <= 500
 C <= 700
 E >= 0

 C >= 0

b. Because there are only two variables, this LP problem lends itself to the
graphical method. The constraints are graphed first because they will determine
what solutions to the problem are possible. The feasible solution boundaries are
obtained by graphing the inequalities as if they were equalities and then noting
where the solution must lie relative to the equation. For example, the first
constraint, 2E + C <= 1,200, is graphed as 2E + C = 1,200. If C = 0, 2E = 1,200, and
E = 600. If E = 0, C = 1,200. Thus, the end points that are used to draw the 2E +
C <= 1,200 constraint line are E = 600 and C = 1,200. This constraint line as well
as all the others are shown in the following graph. When all constraints are
simultaneously enforced, the shaded area results. This area is the feasible
region. It represents the set of all feasible solutions to Office Designs' LP
problem.

The CM equation can now be graphed. To do this, various levels of CM may be
assumed; for example, $2,400 and $3,200. As the CM level is increased, the
isoprofit lines (the dashed lines in the graph) will shift away from the origin. As
the graph makes clear, the maximum CM level will occur at a corner point of the
feasible region. This will always be the case. It is possible, however, that the
maximum isoprofit line could be parallel to a constraint line. In such a case, both
corners of the constraint as well as all the points on the constraint line would
represent optimum solutions. Normally, a unique optimum occurs at a single
point (vertex), as is the case with Office Designs.

Graphical Solution to Office Designs' LP Problem:

[Graph omitted]

To confirm the preceding discussion, the optimal solution is determined by


evaluating all corner points of the feasible region, as numbered in the graph. This
evaluation is presented as follows:
CORNER POINT   E, C COORDINATES   CM = 8E + 2C
1              0, 0               $8(0) + $2(0) = $0
2              500, 0             $8(500) + $2(0) = $4,000
3              500, 200           $8(500) + $2(200) = $4,400
4              400, 400           $8(400) + $2(400) = $4,000
5              100, 700           $8(100) + $2(700) = $2,200
6              0, 700             $8(0) + $2(700) = $1,400

The total CM is maximized at $4,400, with 500 Executive and 200 Clerical
desktop pen and pencil sets being produced. This maximum profit is represented
by the isoprofit line that intersects corner point 3.
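The corner-point evaluation above is easy to automate. The snippet below (plain Python, offered only as an illustration) reproduces the table and picks the best corner.

```python
# Sketch: evaluating the corner points of Office Designs' feasible region.
corners = [(0, 0), (500, 0), (500, 200), (400, 400), (100, 700), (0, 700)]

def cm(e, c):
    """Objective function: max CM = 8E + 2C."""
    return 8 * e + 2 * c

best = max(corners, key=lambda point: cm(*point))
print(best, cm(*best))   # (500, 200) 4400
```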

c. Once the optimal solution is reached, it is important to know how the solution
will change based on a change in the initial formulation. This is achieved by
employing sensitivity or postoptimality analysis. In the case of Office Designs,
management wants to know what the optimal solution will be if the number of
marble pedestals is reduced to 400. Such a change causes the original
constraint line E <= 500 to shift leftward, reducing the feasible region, as shown in
the following graph. The revised optimal solution occurs at corner point 4, where
400 sets of E and 400 sets of C are the optimum number of sets to produce. The
total CM at this corner point is $8(400) + $2(400) = $4,000. The total CM is less
than could be made before, but it is still the best possible solution when the
supply of marble pedestals is reduced to 400.

DEMONSTRATION PROBLEM 3 Using the graphical method to solve a
minimization LP problem.

Moran Chemicals produces two types of chemicals:
 Insecticide A
 Herbicide B

Chemical A costs Moran $3,000 per ton; B costs $3,500 per ton. Moran's
production superintendent has specified that at least 30 tons of A and at least 20
tons of B must be produced during the next month. Moreover, the superintendent
observes that an existing inventory of a highly perishable raw material needed in
both chemicals must be used within 30 days. In order to prevent the loss of this
expensive raw material, Moran must produce a total of at least 70 tons of
chemicals next month.

Required:
 a. Formulate the LP problem
 b. Solve the LP problem graphically.

SOLUTION TO DEMONSTRATION PROBLEM 3

a. Total cost (TC) = 3,000A + 3,500B

where
 A >= 30
 B >= 20
 A+B >= 70
 A >= 0
 B >= 0

b. To solve Moran's LP problem graphically, the following feasible region is
constructed.

Graph Showing Moran Chemicals' Feasible Region

[Graph omitted]

The minimization LP problem is unbounded on the right side and on the top. As
long as it is bounded inward, corner points can be determined. The optimal
solution will always occur at one of the corner points, or along one of the
boundary lines. In the case of Moran's LP problem, there are only two corner
points, 1 and 2. At point 1, A = 50 and B = 20. At point 2, A = 30 and B = 40. The
optimal solution is found at the point yielding the lowest total cost:

TC at point 1 = 3,000A + 3,500B

= 3,000(50) + 3,500(20) = 150,000 + 70,000 = $220,000

TC at point 2 = 3,000A + 3,500B

= 3,000(30) + 3,500(40) = 90,000 + 140,000 = $230,000

The lowest cost to Moran is at point 1. Thus, Moran should produce 50 tons of A
and 20 tons of B.
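The graphical answer can be cross-checked with a solver. The sketch below uses scipy.optimize.linprog (an assumed dependency); the "greater than or equal to" rows are negated into linprog's "less than or equal to" form.

```python
# Sketch: checking Moran's graphical answer with scipy.optimize.linprog
# (an assumed dependency; ">=" constraints negated into "<=" form).
from scipy.optimize import linprog

res = linprog(
    c=[3000, 3500],        # min TC = 3,000A + 3,500B
    A_ub=[[-1, 0],         # A >= 30       ->  -A <= -30
          [0, -1],         # B >= 20       ->  -B <= -20
          [-1, -1]],       # A + B >= 70   ->  -A - B <= -70
    b_ub=[-30, -20, -70],
)
print(res.x, res.fun)      # expect 50 tons of A, 20 tons of B, TC = $220,000
```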

DEMONSTRATION PROBLEM 4 Using the simplex method to solve a
maximization LP problem.

Rock Fellow Oil Refinery refines crude oil into gasoline and diesel. Crude oil
inputs to the refinery can be a maximum of 100 million barrels per quarter, and
the maximum energy usage per quarter is 42 million BTUs. Historical statistics
show that the maximum uptime for the refinery is 75 days per quarter. The diesel
process uses 2 million BTUs, which is twice as much energy as the gasoline
process, and a diesel batch process takes only 3 days compared to 4 days for a
gasoline batch. Each diesel and gasoline batch uses 4 million barrels and 10
million barrels, respectively. Each gasoline batch nets $50,000 and each diesel
batch nets $60,000.

Required:
 a. Formulate this maximization problem.
 b. Use the simplex method to find the optimal solution.

SOLUTION TO DEMONSTRATION PROBLEM 4

a. The objective function maximizes the total net contribution (NC), where the net
is $60,000 for each diesel (D) batch and $50,000 for each gasoline (G) batch.
Therefore, the objective function for Rock Fellow is:

max NC = 60D + 50G

Three stated constraints exist. First, only 100 million barrels of crude oil can be
input to the refinery. Since D uses 4 million barrels per batch and G uses 10
million barrels per batch, the first constraint can be stated as:

4D + 10G <= 100

Second, energy consumption is limited to 42 million BTUs. It takes twice as much
energy to produce D as G. Thus, the constraint is:

2D + G <= 42

Third, crude oil can be refined only 75 days per quarter. Each D batch takes 3
days, and each G batch takes 4 days. Mathematically, this constraint is:

3D + 4G <= 75

Putting all this together, the LP problem is formulated as:

max NC = 60D + 50G

where
 4D + 10G <= 100
 2D + G <= 42
 3D + 4G <= 75

b. Setting up the first tableau, with the formulated problem from above, we have:
D     G     X1    X2    X3    [b]
4     10    1     0     0     100
2     1     0     1     0     42
3     4     0     0     1     75
-60   -50   0     0     0     0

where the bottom row is obtained by:

[c*] [A] - [c]

or

[0 0 0] [A] - [60 50 0 0 0]

The bottom right corner is calculated as:

[c*] [b]

and since [c*] is all zeros, the resulting scalar is 0.

Now, since the bottom row has negative values, we need to find the pivot entry.
Choosing the first column (D) as the pivot column since its simplex indicator is
the most negative, we calculate the quotients (b/D) to determine the pivot row, as
follows:



D     G     X1    X2    X3    [b]    b/D
4     10    1     0     0     100    100/4 = 25
2     1     0     1     0     42     42/2 = 21 (pivot row)
3     4     0     0     1     75     75/3 = 25
-60   -50   0     0     0     0
(The D column is the pivot column.)

Thus, the pivot value is the 2.

Pivoting around the 2, we generate the second tableau:

D     G     X1    X2    X3    [b]
0     8     1     -2    0     16
1     0.5   0     0.5   0     21
0     2.5   0     -1.5  1     12
0     -20   0     30    0     1,260

Note that as a result of the pivot, all values in the pivot column (D) are now 0,
except for the 1 where the pivot value was.
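The pivot step just illustrated is mechanical enough to code. The sketch below (numpy is an assumed dependency; this is an illustration, not part of the reviewer) reproduces the move from the first Rock Fellow tableau to the second.

```python
import numpy as np

def pivot(tableau, row, col):
    """Return a new tableau after pivoting on entry (row, col)."""
    t = tableau.astype(float).copy()
    t[row] /= t[row, col]              # scale the pivot row so the pivot is 1
    for i in range(t.shape[0]):
        if i != row:                   # zero out the rest of the pivot column
            t[i] -= t[i, col] * t[row]
    return t

# First tableau for Rock Fellow (columns D, G, X1, X2, X3, [b])
t1 = np.array([[  4,  10, 1, 0, 0, 100],
               [  2,   1, 0, 1, 0,  42],
               [  3,   4, 0, 0, 1,  75],
               [-60, -50, 0, 0, 0,   0]])

t2 = pivot(t1, row=1, col=0)           # pivot on the 2 in the D column
print(t2)                              # matches the second tableau
```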

Since the last row still has negative values, another pivot value must be
determined, as shown:
D     G     X1    X2    X3    [b]     b/G
0     8     1     -2    0     16      16/8 = 2 (pivot row)
1     0.5   0     0.5   0     21      21/0.5 = 42
0     2.5   0     -1.5  1     12      12/2.5 = 4.8
0     -20   0     30    0     1,260
(The G column is the pivot column.)

Thus, the pivot value is the 8.

Pivoting around the 8, we generate the third tableau:


D     G     X1       X2       X3    [b]
0     1     0.125    -0.25    0     2
1     0     -0.0625  0.625    0     20
0     0     -0.3125  -0.875   1     7
0     0     2.5      25       0     1,300

Note that as a result of the pivot, all values in the pivot column (G) are now 0,
except for the 1 where the pivot value was. Also, note that the first pivot column
(D) did not change.

Since there are no negative values in the last row, we have converged to a
solution. The value in the bottom right corner is the maximum. In this case, the
maximum net is $1,300,000. (Recall that the objective function used thousands
of dollars.)

What is the product mix? We look to the D column for the lone value of 1, then
look at the value in the rightmost column of that row; in this case, 20. This means
that Rock Fellow should make 20 batches of diesel (D). Doing the same for G,
we see a value of 2; thus, Rock Fellow should make only 2 batches of gasoline
(G).

In summary, the optimal solution is:
 $1,300,000 net per quarter
 20 diesel batches per quarter
 2 gasoline batches per quarter
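The same result can be cross-checked with a library solver. The sketch below uses scipy.optimize.linprog (an assumed dependency, not part of the reviewer); amounts are in thousands of dollars, as in the objective function.

```python
# Sketch: cross-checking the simplex result with scipy.optimize.linprog
# (an assumed dependency; NC is in thousands of dollars).
from scipy.optimize import linprog

res = linprog(
    c=[-60, -50],          # max NC = 60D + 50G (negated for minimization)
    A_ub=[[4, 10],         # crude oil: 4D + 10G <= 100 (million barrels)
          [2, 1],          # energy:    2D +   G <= 42  (million BTUs)
          [3, 4]],         # uptime:    3D +  4G <= 75  (days)
    b_ub=[100, 42, 75],
)
print(res.x, -res.fun)     # expect D = 20, G = 2, NC = 1,300 ($1,300,000)
```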

DEMONSTRATION PROBLEM 5 Using the simplex method to solve a
minimization LP problem.

Given the following minimization problem:

min Z=21X+18Y

subject to:
 5X + 10Y >= 100
 2X+Y >= 20

Required:

Use the simplex method to find the optimal solution.

SOLUTION TO DEMONSTRATION PROBLEM 5 PHASE 1:

First, the tableau is generated by subtracting the slack variables and then adding
artificial variables:
X     Y     S1    S2    A1    A2    [b]
5     10    -1    0     1     0     100
2     1     0     -1    0     1     20
7     11    -1    -1    0     0     120

The bottom row is calculated as follows:

[a*] [A] - [a]

where [a] is a vector of length [A] with 1s only in the artificial columns and [a*]
has 1s only for the artificial columns of [A]. Thus, we have:

[1 1] [A] - [0 0 0 0 1 1]

which is essentially a summation of all the columns except the artificial columns.
For the bottom right, the following is used:

[a*] [b]

which results in 120, again just the summation of the values of [b].

Now we pivot until all positive values are eliminated along the bottom row (except
for the far right “optimization” value). If the far right value is not zero, a solution
does not exist, and we would stop. The pivoting process for our problem is as
follows.

After the first pivot:
X     Y     S1     S2    A1     A2    [b]
0.5   1     -0.1   0     0.1    0     10
1.5   0     0.1    -1    -0.1   1     10
1.5   0     0.1    -1    -1.1   0     10

After the second pivot:


X     Y     S1       S2       A1       A2      [b]
0     1     -0.133   0.333    0.133    -0.33   6.67
1     0     0.067    -0.667   -0.067   0.67    6.67
0     0     0        0        -1       -1      0

Now that the bottom row is nonpositive and the bottom right value is zero, a
solution is assured, and we can proceed to phase 2.

PHASE 2:

Now a new first tableau is constructed, using the nonartificial values in the last
phase 1 tableau and sorting the rows. The bottom row is calculated from scratch
with now familiar maximization equations:

[c*] [A] - [c]

or

[21 18] [A] - [21 18 0 0]

The bottom right corner is calculated as:

[c*] [b]

or

[21 18] [b] = 21(6.67) + 18(6.67) = 260

The resulting first tableau is thus:

X     Y     S1       S2       [b]
1     0     0.067    -0.667   6.67
0     1     -0.133   0.33     6.67
0     0     -1.0     -8       260

If the bottom row contained any positive values, it would be necessary to pivot
until all the values in the bottom row (excluding the bottom right value) were
nonpositive, but that is not the case here.

Since all the bottom row values are nonpositive, the values are optimized. This
tableau is read in the same way as in the maximization case: the bottom right
value is the minimized Z-value; the X column reveals a single value of 1 in the
first row; the Y column has a single value of 1 in the second row.

In summary, the optimum solution is:

Z = 260

when
 X = 6.67
 Y = 6.67
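The two-phase answer can be verified with a library solver. The sketch below uses scipy.optimize.linprog (an assumed dependency); the "greater than or equal to" constraints are negated into "less than or equal to" form.

```python
# Sketch: verifying the two-phase simplex answer with scipy.optimize.linprog
# (an assumed dependency; ">=" constraints negated into "<=" form).
from scipy.optimize import linprog

res = linprog(
    c=[21, 18],            # min Z = 21X + 18Y
    A_ub=[[-5, -10],       # 5X + 10Y >= 100
          [-2, -1]],       # 2X +   Y >= 20
    b_ub=[-100, -20],
)
print(res.x, res.fun)      # expect X = Y = 6.67 (exactly 20/3), Z = 260
```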

DEMONSTRATION PROBLEM 6 Using linear programming for sensitivity
analysis.

Fugi Disk Company cannot meet the demand for its preformatted 3.5-inch floppy
disks. Fugi markets two types of disks: HD (high density) for newer computers
and DD (double density) for older ones. Fugi can realize a profit of $0.32 for each
DD disk and $0.30 for each HD disk. Unfortunately, Fugi is limited to 2,500
plastic DD disk cases and 400 disk boxes each day due to manufacturing
limitations. Each box holds either 10 HD or 10 DD disks. The supply of box and
disk labels is unlimited. The bulk formatting machine operates 420 minutes per
day and can format two disks at a time; it has auto-feed and auto-eject
mechanisms and can format an HD disk in 18 seconds and a DD disk in 15
seconds, including the loading and ejection times.

Required:

Use sensitivity analysis to find the optimal solution.

SOLUTION TO DEMONSTRATION PROBLEM 6

Formulating the linear programming problem, the objective function is:

max profit = 0.30H + 0.32D

The constraints are:
 1.00D <= 2,500 (disk cases)
 0.30H + 0.25D <= 840 (machine minutes)
 0.10H + 0.10D <= 400 (boxes)

Using Excel to calculate the solution, we input the objective function and
constraints (click Tools > Options > Formulas to show the formulas).
[Spreadsheet screenshot omitted]

When the formulas are not shown, the worksheet displays the computed values
instead. [Screenshot omitted]

After running Solver, the output screens show the following (with formulas
shown). [Screenshots omitted]

Evaluating these output screens, Fugi management found that the optimal
solution would be 2,500 DD disks and 716 HD disks (rounded down to whole
disks). The shadow price (Lagrange multiplier) of 1 indicates that adding 1
minute would add $1 to profit. Therefore, if the floor supervisor can add blocks of
minutes for less than $1/minute, profit will increase. After examining the shadow
prices (Lagrange multipliers), the floor supervisor suggested adding 2.25 hours
to the operation of the formatting machine by shuffling schedules and assigning
some overtime. The extra cost would be $200 per day.

When the total format machine minutes were changed from 840 to 1,110 and the
computer program was rerun, the output showed a new optimal mix.
[Screenshots omitted]

Since the new profit of $1,250 per day is greater than the old profit plus the extra
cost ($200), it would be advantageous for Fugi to implement the increase in
formatting machine hours.
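The two Solver runs can also be reproduced outside Excel. The sketch below uses scipy.optimize.linprog (an assumed substitute for Solver, not part of the reviewer) to solve both the base case and the 1,110-minute scenario.

```python
# Sketch: reproducing Fugi's two Solver runs with scipy.optimize.linprog
# (an assumed substitute for Excel Solver).
from scipy.optimize import linprog

c = [-0.30, -0.32]             # max profit = 0.30H + 0.32D (negated for linprog)
A_ub = [[0.00, 1.00],          # disk cases: D <= 2,500
        [0.30, 0.25],          # formatting minutes
        [0.10, 0.10]]          # boxes

base = linprog(c, A_ub=A_ub, b_ub=[2500, 840, 400])     # original run
more = linprog(c, A_ub=A_ub, b_ub=[2500, 1110, 400])    # 840 -> 1,110 minutes

print(round(-base.fun, 2))     # daily profit before the schedule change
print(round(-more.fun, 2))     # expect 1250.0 after adding machine time
```

The second run reproduces the $1,250 daily profit from the rerun above; the gain over the base case exceeds the $200 overtime cost, confirming the text's conclusion.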

REVIEW QUESTIONS
 16.1 What does the feasible region represent?
 16.2 Give two reasons why the graphical method is only practical for
small LP prob-lems.
 16.3 What does moving an objective function line toward the origin
represent? Mov-ing the line away from the origin?
 16.4 Which values are used to construct the objective function and the
constraints?
a. LINDO parameters.
b. Decision variables.
c. Pricing policy.
d. Sensitivity constraints.
e. Shadow prices.
 16.5 Which of the following do almost all practical applications of
linear programming require?
a. Graphs.
b. Matrices.
c. Objective functions.
d. Computers.
e. Market surveys.
 16.6 What are the four steps for constructing an LP problem?
 16.7 Which of the following procedures is employed to solve simplex
linear programming problems?
a. Shadow prices.
b. Graphs.
c. Integral calculus.
d. Expected value.
e. Matrix algebra.
 16.8 In linear programming, shadow prices measure the:

a. Cost of the optimum solution.


b. Contribution margins hidden from production.
c. Addition or reduction of one unit of each of the resources.
d. Contribution of a product.
e. Volume price discounts.
 16.9 In linear programming, the increase in profit when one more unit
of a limited resource is made available is indicated by:
a. The feasible region.
b. The objective function.
c. Nonnegativity constraints.
d. Shadow prices.
e. Incremental decision variables.
 16.10 Which points do the simplex method “land on”?
 16.11 What steps are required in solving a maximization problem
using the simplex method?
 16.12 What is the purpose of the first phase in using the simplex
method for a minimization problem?
 16.13 What is the primary difference between the simplex method and
Karmarkar's method?

CHAPTER-SPECIFIC PROBLEMS

These problems require responses based directly on concepts and techniques
presented in the text.

16.14 Graphical solution to LP minimization problem. Given the following LP
problem:

min Z = 33X + 10Y

subject to:

X + Y >= 15

X >= 2

3X + Y >= 33

X + 2Y >= 18

Required:

Use the graphical method to find the optimal solution.

16.15 Graphical solution to LP maximization problem. Given the following LP
problem:

max Z = 3X + 2Y

subject to:

5X + Y <= 100

X + 2Y <= 50

Y <= 15

Required:

Use the graphical method to find the optimal solution.



16.16 Determining the objective function and constraint. The Teaque Company
makes three products: A, B, and C. Management wants to maximize profits on
these products. The contribution margin for each product follows:
PRODUCT CONTRIBUTION MARGIN

A $3
B 8
C 6

The production requirements and departmental capacities, by departments, are


as follows:
PRODUCTION REQUIREMENTS BY PRODUCT (HOURS)
DEPARTMENT    A    B    C
Assembling    2    4    2
Painting      1    2    3
Finishing     2    3    1

DEPARTMENT    DEPARTMENTAL CAPACITY (TOTAL HOURS)
Assembling    40,000
Painting      27,000
Finishing     42,000

Required:
 a. Determine the objective function formula.
 b. Specify the constraint for the Finishing Department.

16.17 Developing the objective function, constraint function, and nonnegativity
constraints. The Galloway Company manufactures and sells shirts and dresses
in its two-department factory. Galloway uses linear programming to determine its
optimum product mix. Data related to the two products follow:
                               SHIRTS    DRESSES
Selling price per unit         $25       $40
Cost data per unit:
  Variable manufacturing cost  8         10
  Variable selling cost        3         5
  Fixed manufacturing cost     4         8
  Fixed selling cost           1         2

                   MACHINE HOUR DATA
                   CUTTING        FINISHING
Shirts             10 minutes     15 minutes
Dresses            6 minutes      30 minutes
Monthly capacity   1,000 hours    2,000 hours

Required:
 a. Develop the objective function that will maximize the contribution
margin.
 b. Develop the monthly machine hour constraints.
 c. Develop the monthly nonnegativity constraints.

16.18 Graphical solution to LP problem. Neil and Neil book publishers are getting
ready to print covers for their latest mass-market novel. (The current trend is to
produce different covers for the same book, ideally to generate interest in the
book and increase sales.) For this book, the publisher has decided to use
predominantly green and predominantly blue covers. Based on a marketing
department request, at least 144 covers need to be produced each day, 32 of
which should be blue.

Due to idiosyncrasies of the color printing process, blue covers take 2 minutes
and green covers take 4.5 minutes each to print. Daily deliveries of raw blue
pigment are at least 90 ounces due to vendor contracts. All other raw pigments
(e.g., yellow pigment) are not a constraint. Blue covers use one ounce each
whereas green covers use half an ounce each of the raw blue pigment. Raw
materials costs are predicted to be $0.10 for each green cover and $0.17 for
each blue cover. Efficiency goals require the printing press to be run at least 420
minutes each day.

Required:

Use the graphical method to find the optimal solution.

16.19 Graphical solution to LP problem. SpatulaCity Industries has only two
products, Thick and Thin spatulas. The company nets $200 on a batch of Thin
spatulas and $300 on each Thick spatula batch. SpatulaCity has only 100 tons of
plastic available each day, with Thick spatula batches using 4 tons each and
Thin spatula batches using 2 tons each. The finishing process takes 2 hours and
3 hours for each Thick and Thin batch, respectively, with a finishing process daily
capacity of 90 hours. A contract with a large department store requires
SpatulaCity to make at least 3 Thick batches per day.

Required:
 a. Use the graphical method to find the feasible region.
 b. Can 25 batches of Thin spatulas be optimally made each day?

16.20 Developing the objective function and constraints. The Harrington
Corporation manufactures and sells three products: anchor bolts (A), bearings
(B), and casters (C). There are 150 direct labor hours available. Machine hour
capacity allows 100 anchor bolts only; 50 bearings only; 40 casters only; or any
combination of the three that does not exceed the capacity. Data associated with
the products follow:
Product Selling price Variable Cost per Unit Fixed Cost per Unit Direct Labor Hours Per Unit
A $4.00 $1.00 $2.00 2
B 3.50 0.50 2.00 2
C 6.00 2.00 3.00 3

Required:

 a. Develop the objective function to maximize the total contribution
margin from Harrington's three products.
 b. Develop the direct labor hour constraint.
 c. Develop the machine hour constraint.

[CMA adapted]

16.21 Simplex solution to LP maximization problem. Given the following LP
problem:

max Z = 15X + 13Y

subject to:

X + Y <= 9

X <= 3

Y <= 8

Required:

Use the simplex method to find the optimal solution.

16.22 Simplex solution to LP minimization problem. Given the following LP
problem:

min Z = 12X + 5Y

subject to:

X + Y >= 25

X + 3.5Y >= 30

6X + 5Y >= 50

Required:

Use the simplex method to find the optimal solution.

16.23 Simplex solution to LP problem. Bear Chemical Company is planning to
introduce two types of pain reliever products. The first, an aspirin and
decongestant combination, will be marketed as a cold pill. The second will be just
a plain aspirin pill. Due to spoilage of material, only 100 kilograms of raw
decongestant can be used each week. Every 100 batches of cold pills require a
half kilogram of decongestant and one-quarter kilogram of raw aspirin
compound. Every 100 batches of plain pills use only one-third kilogram of raw
aspirin compound. Bear can only use 540 kilograms of raw aspirin compound
each week due to purchase contract requirements. Marketing has determined
that no more than 300 batches of plain pills need to be manufactured each week.
The pill press machine takes 10 minutes to make each cold pill batch and 9
minutes to make each plain pill batch.

Required:

Use the simplex method to find the optimal solution to maximize the use of the
pill press machine.

16.24 Simplex solution to LP problem. A large food manufacturer is attempting
to minimize the cost of the raisins that are included in one of its cereal products.

It has a choice between two types of raisins: Best Quality and Good Quality. Best Quality raisins cost $0.023 each and Good Quality raisins cost $0.010 each.
Advertising claims of “2 scoops” in each box create the necessity to have at least
500 raisins in each box. Customer taste tests have proven that at least 250 of the
raisins in each box have to be Best Quality to meet quality requirements. Due to
damage during the mixing and packaging processes and the fact that the Best
Quality raisins are more fragile, tests have shown that the Good Quality raisins
are more visually appealing in the box. Therefore, marketing requires a minimum
of 100 Good Quality raisins in each box.

Required: Use the simplex method to find the optimal solution.

THINK-TANK PROBLEMS

Although these problems are based on chapter material, reading extra material,
reviewing previous chapters, and using creativity may be required to develop
workable solutions.

16.25 Comprehensive linear programming problem using the graphical method. [CMA adapted] Home Cooking Company offers monthly service plans providing
prepared meals that are delivered to the customers' homes and that need only
be heated in a micro-wave or conventional oven. The target market for these
meal plans includes double-income families with no children and retired couples
in the upper-income brackets.

Home Cooking offers two monthly plans: Premier Cuisine and Haute Cuisine. The Premier Cuisine plan provides frozen meals that are delivered twice each month;
this plan generates a profit of $120 for each monthly plan sold. The Haute
Cuisine plan provides freshly prepared meals delivered on a daily basis and
generates a profit of $90 for each monthly plan sold. Home Cooking's reputation
provides the company with a market that will purchase all the meals that can be
prepared.

All meals go through food preparation and cooking steps in the company's
kitchens. After these steps, the Premier Cuisine meals are flash frozen. The time
requirements per monthly meal plan and hours available per month are as
follows:
                        PREPARATION   COOKING   FREEZING
Hours required:
  Premier Cuisine             2           2         1
  Haute Cuisine               1           3         0
Hours available              60         120        45

For planning purposes, Home Cooking uses linear programming to determine the
most profitable number of Premier Cuisine and Haute Cuisine monthly meal
plans to produce.

Required:

a. Using the notations P = Premier Cuisine and H = Haute Cuisine, state the
objective function and the constraints that Home Cooking should use to
maximize profits generated by the monthly meal plans.

b. Graph the constraints on Home Cooking's meal preparation process. Be sure to clearly label your graph.

c. By using the graph prepared in Requirement (b) or by making the necessary calculations, determine the optimal solution to Home Cooking's objective function
in terms of the number of:
 1. Premier Cuisine meal plans to produce.
 2. Haute Cuisine meal plans to produce.

d. Calculate the optimal value of Home Cooking's objective function.

e. If the constraint on preparation time could be eliminated, determine the revised optimal solution in terms of the:
 1. Number of Premier Cuisine meal plans to produce.
 2. Number of Haute Cuisine meal plans to produce.
 3. Resulting profit.

16.26 Determining the objective function, constraints, and assumptions of an LP problem. [CMA adapted] The Tripro Company produces and sells three products,
hereafter referred to as products A, B, and C. The company is currently changing
its short-range planning approach in an attempt to incorporate some of the newer
planning techniques. The controller and some of his staff have been conferring
with a consultant on the feasibility of using a linear programming model for
determining the optimum product mix.

Information for short-range planning has been developed in the same format as
in prior years. This information includes expected sales prices and expected
direct labor and material costs for each product. In addition, variable and fixed
overhead costs were assumed to be the same for each product because
approximately equal quantities of the products were produced and sold.

PRICE AND COST INFORMATION (PER UNIT)


                        A         B         C
Selling price        $25.00    $30.00    $40.00
Direct labor           7.50     10.00     12.50
Direct materials       9.00      6.00     10.50
Variable overhead      6.00      6.00      6.00
Fixed overhead         6.00      6.00      6.00

All three products use the same type of direct material, which costs $1.50 per
pound of material. Direct labor is paid at the rate of $5.00 per direct labor hour.
There are 2,000 direct labor hours and 20,000 pounds of direct materials
available each month.

Required:
 a. Formulate and label the linear programming objective function and
constraint functions necessary to maximize Tripro's contribution
margin. Use QA, QB, and QC to represent units of the three products.

 b. What underlying assumptions must be satisfied to justify the use of linear programming?
 c. The controller, upon reviewing the data presented and the linear
programming functions developed, performed further analysis of
overhead costs. He used a multiple linear regression model to analyze
the overhead cost behavior. The regression model incorporated
observations from the past 48 months of total overhead cost and the
direct labor hours for each product. The following equation was the
result:

Y = $5,000 + 2XA + 4XB + 3XC

where:
 Y = Monthly total overhead in dollars
 XA = Monthly direct labor hours for product A
 XB = Monthly direct labor hours for product B
 XC = Monthly direct labor hours for product C

The total regression has been determined to be statistically significant, as has each of the individual regression coefficients. Reformulate the objective function
for Tripro Company using the results of this analysis.

16.27 Determining product mix and shadow price. [CMA adapted] The Frey Company manufactures and sells two products: a toddler bike and a toy
highchair. Linear programming is employed to determine the best production and
sales mix of bikes and chairs. This approach also allows Frey to speculate on
economic changes. For example, management is often interested in knowing
how variations in selling prices, resource costs, resource availabilities, and
marketing strategies would affect the company's performance.

The demand for bikes and chairs is relatively constant throughout the year. The
following economic data pertain to the two products:
  BIKE (B) CHAIR (C)
Selling price per unit $12 $10
Variable cost per unit 8 7
Contribution margin per unit $ 4 $3
Raw materials required:    
Wood 1 board foot 2 board feet
Plastic 2 pounds 1 pound
Direct labor required 2 hours 2 hours

Estimates of the resource quantities available in a nonvacation month during the year are as follows:
Wood 10,000 board feet
Plastic 10,000 pounds
Direct labor 12,000 hours

The graphic formulation of the constraints of the linear programming model that
Frey Company has developed for nonvacation months accompanies the
problem. The algebraic formulation of the model for the nonvacation months is
as follows:

Objective function: max Z = 4B + 3C

The constraints are:


 B + 2C <= 10,000 board feet
 2B + C <= 10,000 pounds
 2B + 2C <= 12,000 direct labor hours
 B, C >= 0
 

The results from the linear programming model indicate that Frey Company can
maximize its contribution margin (and thus profits) for a nonvacation month by
producing and selling 4,000 toddler bikes and 2,000 toy highchairs. This sales
mix will yield a total contribution margin of $22,000 for a nonvacation month.

Required:

a. During the months of June, July, and August, the total direct labor hours
available are reduced from 12,000 to 10,000 hours per month due to vacations.

1. What would be the best product mix and maximum total contribution margin
when only 10,000 direct labor hours are available during a month?

2. The “shadow price” of a resource is defined as the marginal contribution of a resource, or the rate at which profit would increase (decrease) if the amount of
the resource were increased (decreased). Based on your solution to
Requirement (a)1, what is the shadow price on direct labor hours in the original
model for a vacation month?

b. Competition in the toy market is very strong. Consequently, the prices of the
two products tend to fluctuate. Can analysis of data from the linear programming
model provide information to management that will indicate when price changes

to meet market conditions will alter the optimum product mix? Explain your
answer.

16.28 Identifying and discussing the application of linear programming. [CMA adapted] The firm of Miller, Lombardi, and York was recently formed by the
merger of two companies providing accounting services. York's business was
providing personal financial planning, while Miller and Lombardi conducted audits
of small governmental units and provided tax planning and preparation for
several commercial firms. The combined firm has leased new offices and
acquired several microcomputers that are used by the professional staff in each
area of service. In the short run, however, the firm does not have the financial
resources to acquire computers for all of the professional staff.

The expertise of the professional staff can be divided into three distinct areas
that match the services provided by the firm, i.e., tax preparation and planning,
insurance and investments, and auditing. Since the merger, however, the new
firm has had to turn away business in all three areas of service. One of the
problems is that although the total number of staff seems adequate, the staff
members are not completely interchangeable. Limited financial resources do not
permit hiring any new staff in the near future, and, therefore, the supply of staff is
restricted in each area.

Rich Oliva has been assigned the responsibility of allocating staff and computers
to the various engagements. The management has given Oliva the objective of
maximizing revenues in a manner consistent with maintaining a high level of
professional service in each of the areas of service. Management's time is billed
at $100 per hour, and the staff's time is billed at $70 per hour for those with
experience, and $50 per hour for the inexperienced staff. Pam Wren, a member
of the staff, recently completed a course in quantitative methods at the local
university. She suggested to Oliva that he use linear programming to assign the
appropriate staff and computers to the various engagements.

Required:
 a. Identify and discuss the assumptions underlying the linear
programming model.
 b. Explain the reasons why linear programming would be appropriate
for Miller, Lombardi, and York in making staff assignments.
 c. Identify and discuss the data that would be needed to develop a
linear programming model for Miller, Lombardi, and York.
 d. Discuss objectives, other than revenue maximization, that Rich
Oliva should consider before making staff allocations.

16.29 Graphical solution to LP minimization problem. Modern Air is planning to add jet service to Chattanooga. Before purchasing the plane, Modern needs to determine the seat split between first and coach class. Modern's marketing
department has already stated that the new plane should have at least 8 first-
class seats. Modern's flight attendants have stated that to maintain adequate
customer service, there should be, on average, one attendant for each 20 first-
class seats or 50 coach seats. Modern's cabin designers have found that the
average first-class seat takes the equivalent cabin space of one and one-half
coach seats. Modern's marketing department also found that the average first-

class passenger is on a short business trip with a combined person and luggage
weight of 210 pounds. On the other hand, the average coach-class passenger is
on an extended trip with a combined person and luggage weight of 230 pounds.

Required:

a. Modern is considering purchasing a model G-535, which is designed to lift 50,000 pounds of passenger (people + baggage) weight, can accommodate four
attendants, and has the equivalent cabin room for 210 coach-class seats. With
this plane, Modern's cost accountants have calculated a contribution of $110 for
each first-class passenger and, due to severe competition from other airlines,
only $50 for each coach passenger. What should be the cabin seat split?

b. Modern is also considering the purchase of a model G-535e, the efficiency version of the model G-535. With the G-535e, due to lower jet fuel costs, the
contribution for each passenger would increase by 10%. Part of the decrease in
operating costs is due to the passenger weight capacity being reduced by 10,000
pounds. What should the cabin seat split be for the model G-535e?

c. Based on the results from Requirements (a) and (b), briefly comment on the
following:
 1. What is the importance of each constraint?
 2. Should Modern Air consider a smaller cabin size (and thus less
expensive) jet?

16.30 Simplex solution to LP maximization problem. Matador Equipment Company sells lawn mowers in the western part of the country. They offer four models:


2. Relevant Costing And Differential Analysis


a. Definition And Identification Of Relevant Costs

RELEVANT COSTS AND REVENUES



Expected future costs and revenues that differ among alternative courses of
action.

The following are relevant:

 Differential costs – costs that are present in one alternative in a decision-making case, but are absent in whole or in part in another alternative.

 Avoidable costs – costs that can be eliminated, in whole or in part, when one
alternative is chosen over another in a decision-making case.

 Opportunity costs – the contribution to income that is forgone (or lost) when one action is taken over the next best alternative course of action.

The following are irrelevant:

 Sunk (past / historical) costs – costs that have already been incurred and therefore cannot be avoided regardless of the alternative taken by the decision maker.

ALTHOUGH PAST (SUNK, HISTORICAL) COSTS ARE ALWAYS IRRELEVANT IN DECISION MAKING, THEY MAY SERVE AS A BASIS FOR MAKING PREDICTIONS.

 Future costs that do not differ between or among the alternatives under
consideration

b. Concept Of Opportunity Costs

What is an 'Opportunity Cost'

Opportunity cost refers to a benefit that a person could have received, but gave
up, to take another course of action. Stated differently, an opportunity cost
represents an alternative given up when a decision is made. This cost is,
therefore, most relevant for two mutually exclusive events. In investing, it is the
difference in return between a chosen investment and one that is necessarily
passed up. 

BREAKING DOWN 'Opportunity Cost'


What is the Formula for Calculating Opportunity Cost?

When assessing the potential profitability of various investments, businesses look for the option that is likely to yield the greatest return. Often, this can be
determined by looking at the expected rate of return for a given investment
vehicle. However, businesses must also consider the opportunity cost of each
option. Assume that, given a set amount of money for investment, a business
must choose between investing funds in securities or using it to purchase new
equipment. No matter which option is chosen, the potential profit that is forfeited

by not investing in the other option is called the opportunity cost. This is often
expressed as the difference between the expected returns of each option:

Opportunity Cost = Return of Most Lucrative Option - Return of Chosen Option

Option A in the above example is to invest in the stock market in hopes of generating returns. Option B is to reinvest the money back into the business with
the expectation that newer equipment will increase production efficiency, leading
to lower operational expenses and a higher profit margin. Assume the
expected return on investment in the stock market is 12%, and the equipment
update is expected to generate a 10% return. The opportunity cost of choosing
the equipment over the stock market is 12% - 10%, or 2%.
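The formula reduces to simple arithmetic; a short sketch using the 12% and 10% figures from the example above:

```python
def opportunity_cost(return_best_option, return_chosen_option):
    """Return of most lucrative option minus return of chosen option."""
    return return_best_option - return_chosen_option

# Figures from the example: stocks expected to return 12%, equipment 10%
oc = opportunity_cost(0.12, 0.10)
print(f"{oc:.0%}")   # 2%
```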

Opportunity cost analysis also plays a crucial role in determining a business's capital structure. While both debt and equity require some degree of
expense to compensate lenders and shareholders for the risk of investment,
each also carries an opportunity cost. Funds that are used to make payments on
loans, for example, are therefore not being invested in stocks or bonds which
offer the potential for investment income. The company must decide if the
expansion made possible by the leveraging power of debt will generate greater
profits than could be made through investments.

Because opportunity cost is a forward-looking calculation, the actual rate of return for both options is unknown. Assume the company in the above example
decides to forgo new equipment and invests in the stock market instead. If the
selected securities decrease in value, the company could end up losing money
rather than enjoying the anticipated 12% return. For the sake of simplicity,
assume the investment simply yields a return of 0%, meaning the company gets
out exactly what it put in. The actual opportunity cost of choosing this option is
10% - 0%, or 10%. It is equally possible that, had the company chosen new
equipment, there would be no effect on production efficiency and profits would
remain stable. The opportunity cost of choosing this option is then 12% rather
than the anticipated 2%.

It is important to compare investment options that have a similar degree of risk. Comparing a Treasury bill (T-bill), which is virtually risk-free, to investment in a
highly volatile stock can result in a misleading calculation. Both options may have
anticipated returns of 5%, but the rate of return of the T-bill is backed by the U.S.
government while there is no such guarantee in the stock market. While the
opportunity cost of either option is 0%, the T-bill is clearly the safer bet when the
relative risk of each investment is considered.

Using Opportunity Costs in Our Daily Lives

When making big decisions like buying a home or starting a business, you will likely scrupulously research the pros and cons of your financial decision, but
most of our day-to-day choices aren't made with a full understanding of the
potential opportunity costs. If they're cautious about a purchase, most people just
look at their savings account and check their balance before spending money.

For the most part, we don't think about the things that we must give up when we
make those decisions.

However, that kind of thinking could be dangerous. The problem arises when you never look at what else you could do with your money, or buy things blindly without considering the lost opportunities. Buying takeout for lunch occasionally
can be a wise decision, especially if it gets you out of the office when your boss
is throwing a fit. However, buying one cheeseburger every day for the next 25
years could lead to several missed opportunities. Aside from the potentially
harmful health effects of high cholesterol, investing that $4.50 on a burger could
add up to just over $52,000 in that time frame, assuming a very doable rate of
return of 5%.

This is just one simple example, but the core message holds true for a variety of
situations. From choosing whether to invest in "safe" treasury bonds or deciding
to attend a public college over a private one in order to get a degree, there are
plenty of things to consider when making a decision in your personal finance life.

While it may sound like overkill to have to think about opportunity costs every
time you want to buy a candy bar or go on vacation, it's an important tool to use
to make the best use of your money. 

What is the Difference Between a Sunk Cost and an Opportunity Cost?

The difference between a sunk cost and an opportunity cost is the difference between money already spent and potential returns not earned on an investment
because capital was invested elsewhere. Buying 1,000 shares of company A at
$10 a share, for instance, represents a sunk cost of $10,000. This is the amount
of money paid out to make an investment, and getting that money back requires
liquidating stock at or above the purchase price.

Opportunity cost describes the returns that could have been earned if the money
was invested in another instrument. Thus, while 1,000 shares in company A
might eventually sell for $12 each, netting a profit of $2 a share, or $2,000,
during the same period, company B rose in value from $10 a share to $15. In this
scenario, investing $10,000 in company A netted a yield of $2,000, while the
same amount invested in company B would have netted $5,000. The difference,
$3,000, is the opportunity cost of having chosen company A over company B.
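The company A / company B figures above can be laid out in a few lines to keep the two concepts separate:

```python
shares, purchase_price = 1_000, 10
sunk_cost = shares * purchase_price        # $10,000 already paid out

gain_a = shares * (12 - 10)                # company A sold at $12: $2,000
gain_b = shares * (15 - 10)                # company B rose to $15: $5,000
opportunity_cost = gain_b - gain_a         # forgone by choosing A: $3,000
print(sunk_cost, gain_a, gain_b, opportunity_cost)
```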

The easiest way to remember the difference is to imagine "sinking" money into
an investment, which ties up the capital and deprives an investor of the
"opportunity" to make more money elsewhere. Investors must take both concepts
into account when deciding whether to hold or sell current investments. Money
has already been sunk into investments, but if another investment promises
greater returns, the opportunity cost of holding the underperforming asset may
rise to the point where the rational investment option is to sell and invest in a
more promising investment elsewhere.

What is the Difference Between Risk and Opportunity Cost?

In economics, risk describes the possibility that an investment's actual and projected returns are different and that some or all of the principal is lost as a
result. Opportunity cost concerns the possibility that the returns of a chosen
investment are lower than the returns of a necessarily forgone investment. The
key difference is that risk compares the actual performance of an investment
against the projected performance of the same investment, while opportunity
cost compares the actual performance of an investment against the actual
performance of a different investment.

c. Approaches In Analyzing Alternatives In Non-Routine Decisions (Total And Differential)

TOTAL COST APPROACH

Approach in analyzing alternatives in non-routine decisions wherein the total costs of one alternative are compared against those of another.

DIFFERENTIAL COST APPROACH

Approach in analyzing alternatives in non-routine decisions wherein the differential (incremental) costs of one alternative are compared against those of another.

d. Types Of Decisions (Make Or Buy, Accept Or Reject Special Order, Continue Or Drop/Shutdown, Sell Or Process Further, Best Product Combination, Pricing Decisions)

MAKE OR BUY

Relevant Cost Approach

                                               RELEVANT COSTS TO
KINDS OF COSTS                                   Make       Buy
Cost of ingredients and other variable costs      xx
Purchase price                                               xx
Fixed costs avoided if bought                     xx
Total cost per unit                               xx         xx
Level of activity                                 xx         xx
Total relevant costs                              xx         xx

Total Cost Approach

                                               RELEVANT COSTS TO
KINDS OF COSTS                                   Make       Buy
Cost of ingredients and other variable costs      xx
Purchase price                                               xx
Fixed costs avoided if bought                     xx
Fixed costs that cannot be avoided if bought      xx         xx
Total cost per unit                               xx         xx
Level of activity                                 xx         xx
Total relevant costs                              xx         xx
Indifference Point:

TOTAL COST TO MAKE = TOTAL COST TO BUY

VCmake(x) + TFCmake = VCbuy(x) + TFCbuy

where x is the volume at which the total cost to make equals the total cost to buy.
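Solving the indifference equation for x gives the volume at which making and buying cost the same. A sketch with assumed figures (P8 variable cost to make, P60,000 avoidable fixed cost, and a P12 purchase price; all figures are illustrative, not from the text):

```python
def indifference_volume(vc_make, avoidable_fixed, buy_price):
    """Volume x at which vc_make*x + avoidable_fixed = buy_price*x.
    Below x, buying is cheaper; above x, making is cheaper."""
    return avoidable_fixed / (buy_price - vc_make)

# Assumed illustration: make at P8/unit plus P60,000 avoidable fixed
# costs, or buy at P12/unit
x = indifference_volume(vc_make=8, avoidable_fixed=60_000, buy_price=12)
print(x)   # 15000.0 units
```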

ACCEPT OR REJECT SPECIAL ORDER

There is excess capacity

Special price of offer                              xx
Relevant cost of offer:
  Variable cost                          xx
  Cost savings                          (xx)       (xx)
Contribution margin from offer                      xx

(+) = ACCEPT
(-) = REJECT

There is no excess capacity

Contribution margin from offer                      xx
Contribution margin lost if accepted               (xx)
Net advantage (disadvantage) of accepting           xx

NET ADVANTAGE = ACCEPT
NET DISADVANTAGE = REJECT
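The two schedules above collapse into one calculation, where the contribution margin lost is zero when there is excess capacity. A sketch with assumed figures:

```python
def special_order_advantage(offer_price, variable_cost, units,
                            cost_savings=0, cm_lost=0):
    """Net advantage of accepting a special order.
    cm_lost is regular contribution margin given up when capacity is full."""
    cm_from_offer = (offer_price - variable_cost + cost_savings) * units
    return cm_from_offer - cm_lost

# Assumed: 1,000 units offered at P50; variable cost P42
print(special_order_advantage(50, 42, 1_000))                   # 8000 -> accept
# Same offer with no excess capacity, displacing P10,000 of regular CM
print(special_order_advantage(50, 42, 1_000, cm_lost=10_000))   # -2000 -> reject
```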

CONTINUE OR DROP/SHUTDOWN

Total normal fixed costs if to operate                       xx
Total shutdown costs:
  Reduced fixed costs during the shutdown period
    (unavoidable fixed costs)                       xx
  Any additional costs incurred if will shut down   xx
  Any estimated costs to restart operation          xx      (xx)
Total shutdown cost (savings)                                xx

SAVINGS = SHUT DOWN
COST = CONTINUE

or

Total fixed costs avoided                                    xx
Less: Additional costs incurred if will shut down   xx
      Estimated costs to restart operation          xx      (xx)
Total shutdown cost (savings)                                xx

Shut down point (SDP) = shut down savings / new CM per unit during shut down
period

 Demand > SDP → continue
 Demand < SDP → shut down
 Demand = SDP → either (considering qualitative aspects, it is better to continue so that employees, for example, won’t be terminated)
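The shutdown-point rule can be sketched directly; the P90,000 of shutdown savings and P6 contribution margin per unit below are assumed for illustration:

```python
def shutdown_point(shutdown_savings, cm_per_unit):
    """SDP = shutdown savings / CM per unit during the shutdown period."""
    return shutdown_savings / cm_per_unit

# Assumed figures
sdp = shutdown_point(90_000, 6)          # 15,000 units
demand = 18_000
decision = ("continue" if demand > sdp
            else "shut down" if demand < sdp else "either")
print(sdp, decision)   # 15000.0 continue
```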

SELL OR PROCESS FURTHER

Increase in selling price due to further processing          xx
Increase in cost due to further processing                  (xx)
Net advantage (disadvantage) to process further              xx

 ISP > FPC → process further
 ISP < FPC → sell as is
 ISP = FPC → either (to conserve resources, however, sell as is)

(ISP = increase in selling price due to further processing; FPC = further processing cost)
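The sell-or-process-further test reduces to one subtraction; the figures below are assumed for illustration:

```python
def process_further_advantage(incremental_revenue, further_processing_cost):
    """Positive result -> process further; negative -> sell as is."""
    return incremental_revenue - further_processing_cost

# Assumed: further processing raises revenue by P40,000 at a cost of P25,000
net = process_further_advantage(40_000, 25_000)
print(net)   # 15000 -> process further
```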

BEST PRODUCT COMBINATION

Single Scarce Resource

1. Determine the contribution margin per unit of each product line.

2. Determine the quantity of the scarce resource needed to produce and sell one unit of each product.

3. Determine the contribution margin per unit of scarce resource (CM per scarce resource = CM per unit / required scarce resource per unit).

4. Rank the products using the CM per constrained resource. The highest is the most profitable.

5. Maximize production of the most profitable product considering the demand constraints for the product. (Produce units only up to its market limit and use the remaining resources for the product with the next highest CM per scarce unit; produce only up to the market limit even if there is an excess of resources.)
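The steps above can be sketched directly; the product data (CM per unit, machine hours per unit, market limits, hours available) are assumed for illustration:

```python
# Assumed data: CM per unit, machine hours per unit, market limit (units)
products = {
    "A": {"cm": 24, "hours": 3, "limit": 1_500},
    "B": {"cm": 15, "hours": 1, "limit": 2_000},
    "C": {"cm": 20, "hours": 2, "limit": 1_000},
}
hours_available = 8_000

# Rank by CM per scarce resource (machine hour), highest first
ranked = sorted(products.items(),
                key=lambda kv: kv[1]["cm"] / kv[1]["hours"], reverse=True)

# Allocate hours to the best-ranked product first, up to its market limit
plan, total_cm = {}, 0
for name, p in ranked:
    units = min(p["limit"], hours_available // p["hours"])
    plan[name] = units
    hours_available -= units * p["hours"]
    total_cm += units * p["cm"]
print(plan, total_cm)
```

Here B ranks first (15 per hour), then C (10), then A (8), so B and C are produced to their market limits and A absorbs the leftover hours.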

Multiple Scarce Resources

Use linear programming.

PRICING DECISIONS

Pricing Objectives

1. To maximize profit or target margin.

2. To meet the demand sales volume or market share.

3. To maintain a stable relationship between the company’s and the industry leaders’ prices.

4. To enhance the image that the company wants to project in the market.

Factors That Influence Product Pricing

1. Internal Factors

o All the relevant costs in the value chain (from research and
development to customer service).

o The company’s marketing objectives, as well as its marketing mix strategy.

o The company’s capacity.

 Peak-load pricing – prices vary directly with capacity usage: the company’s products are sold at higher prices when demand presses against the limits of the company’s capacity.

2. External Factors

o The type of market where the products / services are sold.

 Perfect Competition – in this type of market, a firm can sell as much of a product as it can produce, all at a single market price. Every
firm in this market will charge the market price – if a product is
priced higher than the market price, nobody will buy; if priced lower,
the company would sacrifice profit.

 Imperfect Competition – the type of market wherein a firm’s price will influence the quantity it sells. For example, a company would
have to reduce its prices to generate additional sales.

 Monopolistic Market – a monopolist is usually able to charge a higher price because it has no competitors.

o Demand and supply

o Customer’s perception of value and price

o Price elasticity of demand – the effect of price changes on sales volume

 Highly Elastic Demand – small price increases cause large volume declines

 Highly Inelastic Demand – prices have little or no effect on volume

o Legal requirements

SOME “ILLEGAL” PRICING SCHEMES:

 Predatory Pricing – establishing prices so low as to drive competitors out of the market, so that once the predatory pricer no longer has significant competition, it can dramatically raise prices.

 Discriminatory Pricing – charging different prices to different customers for the same product or service.

 Collusive Pricing – companies conspire to restrict output and set artificially high prices.

o Competitors’ Actions

Pricing Methods

1. Cost-Based Pricing – it starts with the determination of the cost, then a price
is set so that such price will recover all the costs in the value chain and
provide a desired return on investment.

COST-PLUS PRICE → Price = Cost + Markup

o Based on Total Costs

→ Price = Total cost + (Total cost x MU %)

o Based on Absorption Product Cost

→ Price = Absorption product cost + (Absorption product cost x MU %)

o Based on Variable Manufacturing Cost

→ Price = Variable manufacturing cost + (Variable manufacturing cost x MU %)

o Based on Total Variable Cost

→ Price = Total variable cost + (Total variable cost x MU %)

Where MU % = mark-up percentage
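A sketch of the cost-plus formula; the cost bases and markup percentages below are assumed, and they illustrate that a smaller cost base needs a larger markup to reach the same target price:

```python
def cost_plus_price(cost_base, markup_pct):
    """Price = cost base + (cost base x markup %)."""
    return cost_base * (1 + markup_pct)

# Assumed: total cost P50 per unit, of which absorption product cost is P40
print(round(cost_plus_price(50, 0.20), 2))   # based on total cost: 60.0
print(round(cost_plus_price(40, 0.50), 2))   # based on absorption cost: 60.0
```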

2. Market-Based Pricing (or Buyer-Based Pricing) – prices are based on the products’ perceived value and competitors’ actions, rather than on the
products / services’ costs.

Example: A glass of orange juice may have a higher price in a classy restaurant than at the school canteen.

o Target Price – the expected market price for a product / service, considering the consumers’ perceptions of value and competitors’
reactions

Target Price – Target Profit = Target Cost

 TARGET COSTING – a company first determines the price (the target price or market price) at which it can sell its product or service, and then designs the product or service that can be produced at the target cost to provide the target profit.

 VALUE ENGINEERING – a means of reaching the target cost. It involves a systematic assessment of all the aspects of the value chain
costs of a product / service – from research and development, design of
the product, process design, production, marketing, distribution, and
customer service. The objective is to minimize cost without sacrificing
customer satisfaction.

 LIFE-CYCLE COSTING – involves the determination of a product’s estimated revenues and expenses over its expected life-cycle.

Life-Cycle:

(1) Research and development stage
(2) Introduction stage
(3) Growth stage
(4) Mature stage
(5) Harvest or decline stage and final provision of customer support

 WHOLE-LIFE COSTS – composed of:

(1) The life-cycle costs, and
(2) After-purchase costs incurred by customers

Reduction of whole-life cost provides benefits, both to the buyer and the seller. Customers may pay a premium for a product with low after-purchase costs.

3. Competition-Based Pricing – price is based largely on competitors’ prices.

4. New Product Pricing (Introductory Price Setting)

 Price Skimming – the introductory price is set at a very high level. The objective is to sell to the customers who are not concerned about price, so that the firm may recover its research and development costs.

 Penetration Pricing – the introductory price is set at a very low level. The objective is to gain deep market penetration quickly.

e. Apply the discounted cash flow method and the IRR method in
determining cash flows and in making business decisions concerning
capital expenditures

Structure of the chapter

Capital budgeting is very obviously a vital activity in business. Vast sums of money can be easily wasted if an investment turns out to be wrong or uneconomic. The subject matter is difficult to grasp, both by the nature of the topics covered and because of the mathematical content involved. The chapter builds on the concept of the future value of money which may be spent now. It does this by examining the techniques of net present value, internal rate of return and annuities. The timing of cash flows is important in new investment decisions, and so the chapter also looks at the "payback" concept. One problem which plagues developing countries is inflation, with rates that can, in some cases, exceed 100% per annum. The chapter ends by showing how marketers can take this into account.

Capital budgeting versus current expenditures

A capital investment project can be distinguished from current expenditures by two features:

1. Such projects are relatively large.

2. A significant period of time (more than one year) elapses between the investment outlay and the receipt of the benefits.

As a result, most medium-sized and large organizations have developed special


procedures and methods for dealing with these decisions. A systematic approach
to capital budgeting implies:

1. The formulation of long-term goals

2. The creative search for and identification of new investment opportunities

3. Classification of projects and recognition of economically and/or statistically dependent proposals

4. The estimation and forecasting of current and future cash flows

5. A suitable administrative framework capable of transferring the required information to the decision level

6. The controlling of expenditures and careful monitoring of crucial aspects of project execution

7. A set of decision rules which can differentiate acceptable from unacceptable alternatives.

The last point (7) is crucial and this is the subject of later sections of the chapter.

The classification of investment projects

a) By project size

Small projects may be approved by departmental managers. More careful analysis and Board of Directors' approval is needed for large projects of, say, half a million dollars or more.

b) By type of benefit to the firm

 An increase in cash flow

 A decrease in risk

 An indirect benefit (showers for workers, etc.)

c) By degree of dependence

 Mutually exclusive projects (can execute project A or B, but not both)

 Complementary projects: taking project A increases the cash flow of project B.

 Substitute projects: taking project A decreases the cash flow of project B.

d) By degree of statistical dependence

 Positive dependence

 Negative dependence

 Statistical independence.

e) By type of cash flow

 Conventional cash flow: only one change in the cash flow sign (e.g. -/++++ or +/----, etc.)

 Non-conventional cash flows: more than one change in the cash flow
sign (e.g. +/-/+++ or -/+/-/++++, etc.)

The economic evaluation of investment proposals

The analysis stipulates a decision rule for accepting or rejecting investment projects.

The time value of money

Recall that the interaction of lenders with borrowers sets an equilibrium rate of
interest. Borrowing is only worthwhile if the return on the loan exceeds the cost of
the borrowed funds. Lending is only worthwhile if the return is at least equal to
that which can be obtained from alternative opportunities in the same risk class.

The interest rate received by the lender is made up of:

1. The time value of money: the receipt of money is preferred sooner rather
than later. Money can be used to earn more money. The earlier the money
is received, the greater the potential for increasing wealth. Thus, to forego
the use of money, you must get some compensation.

2. The risk of the capital sum not being repaid. This uncertainty requires a
premium as a hedge against the risk, hence the return must be
commensurate with the risk being undertaken.

3. Inflation: money may lose its purchasing power over time. The lender must
be compensated for the declining spending/purchasing power of money. If
the lender receives no compensation, he/she will be worse off when the loan
is repaid than at the time of lending the money.

a) Future values/compound interest

Future value (FV) is the value in dollars at some point in the future of one or
more investments.

FV consists of:

1. The original sum of money invested, and

2. The return in the form of interest.

The general formula for computing Future Value is as follows:

FVn = Vo (1 + r)^n

where
Vo is the initial sum invested
r is the interest rate
n is the number of periods for which the investment is to receive interest.

Thus we can compute the future value of what Vo will accumulate to in n years when it is compounded annually at the rate r by using the above formula.

Now attempt exercise 6.1.

Exercise 6.1 Future values/compound interest

i) What is the future value of $10 invested at 10% at the end of 1 year?
ii) What is the future value of $10 invested at 10% at the end of 5 years?
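The compound-interest formula translates directly into code. A minimal sketch in Python (the function name is ours, not from the text), checked against Exercise 6.1:

```python
def future_value(v0, r, n):
    """FVn = Vo * (1 + r) ** n -- compound interest on a single sum."""
    return v0 * (1 + r) ** n

# Exercise 6.1: $10 invested at 10%
fv_1 = future_value(10, 0.10, 1)   # $11.00 after one year
fv_5 = future_value(10, 0.10, 5)   # about $16.11 after five years
```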

We can derive the Present Value (PV) by using the formula:

FVn = Vo (1 + r)^n

By denoting Vo by PV we obtain:

FVn = PV (1 + r)^n

By dividing both sides of the formula by (1 + r)^n we derive:

PV = FVn / (1 + r)^n

Rationale for the formula:



As you will see from the following exercise, given the alternative of earning 10%
on his money, an individual (or firm) should never offer (invest) more than $10.00
to obtain $11.00 with certainty at the end of the year.

Now attempt exercise 6.2

Exercise 6.2 Present value


i) What is the present value of $11.00 at the end of one year?
ii) What is the PV of $16.10 at the end of 5 years?
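Discounting inverts the compounding formula. A short sketch (our own helper name), which reproduces Exercise 6.2's answers of roughly $10.00 in both cases:

```python
def present_value(fv, r, n):
    """PV = FVn / (1 + r) ** n -- discounting a single future sum."""
    return fv / (1 + r) ** n

pv_1 = present_value(11.00, 0.10, 1)  # $10.00
pv_5 = present_value(16.10, 0.10, 5)  # roughly $10.00
```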

b) Net present value (NPV)

The NPV method is used for evaluating the desirability of investments or projects.

NPV = sum of [ Ct / (1 + r)^t ] for t = 1 to n, minus Io

where:
Ct = the net cash receipt at the end of year t
Io = the initial investment outlay
r = the discount rate / the required minimum rate of return on investment
n = the project/investment's duration in years.

The discount factor for year t is 1 / (1 + r)^t.

N.B. At this point the tutor should introduce the net present value tables from any
recognised published source. Do that now.

Decision rule:
If NPV is positive (+): accept the project
If NPV is negative(-): reject the project

Now attempt exercise 6.3.

Exercise 6.3 Net present value

A firm intends to invest $1,000 in a project that generates net receipts of $800, $900 and $600 in the first, second and third years respectively. Should the firm go ahead with the project?

Attempt the calculation without reference to net present value tables first.
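The NPV calculation can also be sketched in a few lines of Python. Exercise 6.3 does not state a discount rate, so the 10% used below is our assumption for illustration:

```python
def npv(rate, cash_flows):
    """cash_flows[0] is the time-0 outlay (negative); later entries are year-end receipts."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

project = [-1000, 800, 900, 600]
npv_10 = npv(0.10, project)  # positive at a 10% required rate, so accept
```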

c) Annuities
N.B. Introduce students to annuity tables from any recognised published source.

A set of cash flows that are equal in each and every period is called an annuity.

Example:
Year Cash Flow ($)
0 -800
1 400
2 400
3 400
PV = $400(0.9091) + $400(0.8264) + $400(0.7513)

= $363.64 + $330.56 + $300.52

= $994.72

NPV = $994.72 - $800.00

= $194.72

Alternatively,
PV of an annuity = $400 x PVFA(n = 3, i = 10%)

= $400 (0.9091 + 0.8264 + 0.7513)

= $400 x 2.4868

= $994.72

NPV = $994.72 - $800.00

= $194.72
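Instead of summing table factors, the annuity factor can be computed directly with the closed-form formula. A sketch (the small differences from the $994.72 above come from the 4-digit rounding in published tables):

```python
def annuity_pv(payment, rate, periods):
    """PV of an ordinary annuity: payment * [1 - (1 + r) ** -n] / r."""
    return payment * (1 - (1 + rate) ** -periods) / rate

pv = annuity_pv(400, 0.10, 3)  # about $994.74
project_npv = pv - 800         # about $194.74
```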

d) Perpetuities

A perpetuity is an annuity with an infinite life. It is an equal sum of money to be paid in each period forever.

PV of a perpetuity = C / r

where:
C is the sum to be received per period
r is the discount rate or interest rate

Example:

You are promised a perpetuity of $700 per year at a rate of interest of 15% per annum. What price (PV) should you be willing to pay for this income?

PV = $700 / 0.15 = $4,666.67

A perpetuity with growth:

Suppose that the $700 annual income most recently received is expected to
grow by a rate G of 5% per year (compounded) forever. How much would this
income be worth when discounted at 15%?

Solution:

Subtract the growth rate from the discount rate and treat the next period's cash flow as a perpetuity:

PV = ($700 x 1.05) / (0.15 - 0.05)

= $735 / 0.10

= $7,350
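Both perpetuity valuations can be sketched as one-line functions (names are ours, for illustration):

```python
def perpetuity_pv(c, r):
    """PV of a level perpetuity: C / r."""
    return c / r

def growing_perpetuity_pv(c_next, r, g):
    """PV when the cash flow grows at rate g forever (requires r > g)."""
    return c_next / (r - g)

level = perpetuity_pv(700, 0.15)                         # $4,666.67
growing = growing_perpetuity_pv(700 * 1.05, 0.15, 0.05)  # $735 / 0.10 = $7,350
```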

e) The internal rate of return (IRR)

Refer students to the tables in any recognised published source.


· The IRR is the discount rate at which the NPV for a project equals zero. This
rate means that the present value of the cash inflows for the project would equal
the present value of its outflows.

· The IRR is the break-even discount rate.



· The IRR is found by trial and error.

Sum of [ Ct / (1 + r)^t ] - Io = 0, where r = IRR

IRR of an annuity:

Q(n, r) = Io / C

where:
Q(n, r) is the discount factor (the PV factor for an annuity of n periods at rate r)
Io is the initial outlay
C is the uniform annual receipt (C1 = C2 = .... = Cn).

Example:

What is the IRR of an equal annual income of $20 per annum which accrues for 7 years and costs $120?

Q(7, r) = $120 / $20 = 6

From the tables, r is approximately 4%.

Economic rationale for IRR:

If the IRR exceeds the cost of capital, the project is worthwhile, i.e. it is profitable to undertake.

Now attempt exercise 6.4.

Exercise 6.4 Internal rate of return

Find the IRR of this project for a firm with a 20% cost of capital:
YEAR CASH FLOW
$
0 -10,000
1 8,000
2 6,000
a) Try 20%
b) Try 27%
c) Try 29%
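The trial-and-error search for the IRR can be automated with a simple bisection, a minimal sketch under the assumption of conventional cash flows (NPV falls as the rate rises):

```python
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0):
    """Bisection search for the rate at which NPV = 0 (conventional flows only)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid   # NPV still positive: the IRR is higher
        else:
            hi = mid   # NPV negative: the IRR is lower
    return (lo + hi) / 2

rate = irr([-10000, 8000, 6000])  # a little over 27%, so accept at a 20% cost of capital
```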
Net present value vs internal rate of return

Independent vs dependent projects

NPV and IRR methods are closely related because:

i) both are time-adjusted measures of profitability, and
ii) their mathematical formulas are almost identical.

So, which method leads to an optimal decision: IRR or NPV?

a) NPV vs IRR: Independent projects

Independent project: Selecting one project does not preclude the choosing of the
other.

With conventional cash flows (-|+|+) no conflict in decision arises; in this case
both NPV and IRR lead to the same accept/reject decisions.

Figure 6.1 NPV vs IRR Independent projects

If cash flows are discounted at k1, NPV is positive and IRR > k1: accept the project.

If cash flows are discounted at k2, NPV is negative and IRR < k2: reject the project.

Mathematical proof: for a project to be acceptable, the NPV must be positive, i.e.

Sum of [ Ct / (1 + k)^t ] - Io > 0

Similarly, for the same project to be acceptable:

Sum of [ Ct / (1 + R)^t ] - Io = 0

where R is the IRR.

Since the numerators Ct are identical and positive in both instances:

· implicitly/intuitively R must be greater than k (R > k);
· if NPV = 0 then R = k: the company is indifferent to such a project;
· hence, IRR and NPV lead to the same decision in this case.

b) NPV vs IRR: Dependent projects

NPV clashes with IRR where mutually exclusive projects exist.

Example:

Agritex is considering building either a one-storey (Project A) or five-storey (Project B) block of offices on a prime site. The following information is available:

Initial Investment Outlay Net Inflow at the Year End
Project A -9,500 11,500
Project B -15,000 18,000

Assume k = 10%, which project should Agritex undertake?

NPVA = $11,500 / 1.10 - $9,500 = $954.55

NPVB = $18,000 / 1.10 - $15,000 = $1,363.64

Both projects are of one-year duration:

IRRA:

$11,500 = $9,500 (1 + RA)

1 + RA = $11,500 / $9,500 = 1.21, so RA = 1.21 - 1

therefore IRRA = 21%

IRRB:

$18,000 = $15,000 (1 + RB)

1 + RB = $18,000 / $15,000 = 1.20, so RB = 1.20 - 1

therefore IRRB = 20%

Decision:

Assuming that k = 10%, both projects are acceptable because:

NPVA and NPVB are both positive
IRRA > k AND IRRB > k

Which project is a "better option" for Agritex?

If we use the NPV method:

NPVB ($1,363.64) > NPVA ($954.55): Agritex should choose Project B.

If we use the IRR method:

IRRA (21%) > IRRB (20%): Agritex should choose Project A. See figure 6.2.

Figure 6.2 NPV vs IRR: Dependent projects

Up to a discount rate of ko: project B is superior to project A, therefore project B is preferred to project A.

Beyond the point ko: project A is superior to project B, therefore project A is preferred to project B.

The two methods do not rank the projects the same.
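The ranking conflict just described can be reproduced numerically. A short sketch using the Agritex figures (for these one-year projects the IRR follows directly from the ratio of inflow to outlay):

```python
def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

npv_a = npv(0.10, [-9500, 11500])   # about $954.55
npv_b = npv(0.10, [-15000, 18000])  # about $1,363.64
irr_a = 11500 / 9500 - 1            # about 21%
irr_b = 18000 / 15000 - 1           # 20%

npv_prefers_b = npv_b > npv_a  # True: NPV ranks B first
irr_prefers_a = irr_a > irr_b  # True: IRR ranks A first
```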

Differences in the scale of investment



NPV and IRR may give conflicting decisions where projects differ in their scale of
investment. Example:
Years 0 1 2 3
Project A -2,500 1,500 1,500 1,500
Project B -14,000 7,000 7,000 7,000

Assume k= 10%.
NPVA = $1,500 x PVFA at 10% for 3 years
= $1,500 x 2.487
= $3,730.50 - $2,500.00
= $1,230.50.

NPVB = $7,000 x PVFA at 10% for 3 years
= $7,000 x 2.487
= $17,409 - $14,000
= $3,409.00.

IRRA: Q(3, r) = $2,500 / $1,500 = 1.67

Therefore IRRA = 36% (from the tables)

IRRB: Q(3, r) = $14,000 / $7,000 = 2.0

Therefore IRRB = 21% (from the tables)

Decision:

Conflicting, as:
· NPV prefers B to A
· IRR prefers A to B

NPV IRR
Project A $1,230.50 36%
Project B $3,409.00 21%

See figure 6.3.

Figure 6.3 Scale of investments

To show why:

i) the NPV prefers B, the larger project, for a discount rate below 20%
ii) the NPV is superior to the IRR

a) Use the incremental cash flow approach, the "B minus A" approach
b) Choosing project B is tantamount to choosing a hypothetical project "B minus A".
0 1 2 3
Project B - 14,000 7,000 7,000 7,000
Project A - 2,500 1,500 1,500 1,500
"B minus A" - 11,500 5,500 5,500 5,500

IRR of "B minus A": Q(3, r) = $11,500 / $5,500 = 2.09

From the tables, IRR is approximately 20%

c) Choosing B is equivalent to: A + (B - A) = B

d) Choosing the bigger project B means choosing the smaller project A plus an
additional outlay of $11,500 of which $5,500 will be realised each year for the
next 3 years.

e) The IRR"B minus A" on the incremental cash flow is 20%.

f) Given k of 10%, this is a profitable opportunity, therefore must be accepted.



g) But, if k were greater than the IRR (20%) on the incremental CF, then reject the incremental investment, i.e. choose the smaller project A.

h) At the point of intersection, NPVA = NPVB or NPVA - NPVB = 0, i.e. indifferent to projects A and B.

i) If k is greater than 20% (the IRR of "B minus A"), the company should accept project A.

· This justifies the use of the NPV criterion.
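The incremental ("B minus A") approach is easy to check in code; a brief sketch with the cash flows from the scale-of-investment example:

```python
def npv(rate, flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

b = [-14000, 7000, 7000, 7000]
a = [-2500, 1500, 1500, 1500]
b_minus_a = [cb - ca for cb, ca in zip(b, a)]  # [-11500, 5500, 5500, 5500]

# Positive at k = 10%, so the extra outlay in B pays off and B should be chosen
incremental_npv = npv(0.10, b_minus_a)
```

By linearity of discounting, the NPV of "B minus A" equals NPV(B) minus NPV(A), which is why accepting the incremental project is the same decision as preferring B.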

Advantage of NPV:
· It ensures that the firm reaches an optimal scale of investment.

Disadvantage of IRR:
· It expresses the return in a percentage form rather than in terms of absolute
dollar returns, e.g. the IRR will prefer 500% of $1 to 20% return on $100.
However, most companies set their goals in absolute terms and not in % terms,
e.g. target sales figure of $2.5 million.

The timing of the cash flow

The IRR may give conflicting decisions where the timing of cash flows varies
between the 2 projects.

Note that initial outlay Io is the same.

Years 0 1 2
Project A - 100 20 120.00
Project B - 100 100 31.25
"A minus B" 0 - 80 88.75

Assume k = 10%
NPV IRR
Project A 17.3 20.0%
Project B 16.7 25.0%
"A minus B" 0.6 10.9%

IRR prefers B to A even though both projects have identical initial outlays, while NPV prefers A. The decision is to accept A, that is B + (A - B) = A. See figure 6.4.

Figure 6.4 Timing of the cash flow



The horizon problem

NPV and IRR rankings are contradictory. Project A earns $120 at the end of the
first year while project B earns $174 at the end of the fourth year.
Years 0 1 2 3 4
Project A -100 120 - - -
Project B -100 - - - 174

Assume k = 10%
NPV IRR
Project A 9 20%
Project B 19 15%

Decision:
NPV prefers B to A
IRR prefers A to B.

The profitability index - PI

This is a variant of the NPV method.

PI = PV of cash inflows / Io

Decision rule:
PI > 1; accept the project
PI < 1; reject the project

If NPV = 0, we have:
NPV = PV - Io = 0
PV = Io

Dividing both sides by Io we get:

PV / Io = PI = 1

A PI of 1.2 means that the project's profitability is 20%. Example:


PV of CF Io PI
Project A 100 50 2.0
Project B 1,500 1,000 1.5

Decision:

Choose project B: although A has the higher PI, B maximises the firm's wealth in absolute terms, adding an NPV of $500 against only $50 for A.

Disadvantage of PI:

Like IRR it is a percentage and therefore ignores the scale of investment.
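The PI computation and its scale problem can be shown in a few lines (the helper name is ours):

```python
def profitability_index(pv_inflows, outlay):
    """PI = PV of cash inflows / initial outlay."""
    return pv_inflows / outlay

pi_a = profitability_index(100, 50)     # 2.0
pi_b = profitability_index(1500, 1000)  # 1.5
npv_a = 100 - 50                        # $50
npv_b = 1500 - 1000                     # $500: B adds far more absolute value
```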

The payback period (PP)

The CIMA defines payback as 'the time it takes the cash inflows from a capital
investment project to equal the cash outflows, usually expressed in years'. When
deciding between two or more competing projects, the usual decision is to accept
the one with the shortest payback.

Payback is often used as a "first screening method". By this, we mean that when
a capital investment project is being considered, the first question to ask is: 'How
long will it take to pay back its cost?' The company might have a target payback,
and so it would reject a capital project unless its payback period were less than a
certain number of years.

Example 1:
Years 0 1 2 3 4 5
Project A -1,000,000 250,000 250,000 250,000 250,000 250,000

For a project with equal annual receipts:

Payback period = $1,000,000 / $250,000 = 4 years

Example 2:
Years 0 1 2 3 4
Project B - 10,000 5,000 2,500 4,000 1,000

The payback period lies between year 2 and year 3. Sum of money recovered by the end of the second year = $7,500, i.e. ($5,000 + $2,500)

Sum of money to be recovered during the 3rd year = $10,000 - $7,500 = $2,500

Payback period = 2 + ($2,500 / $4,000) = 2.625 years
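The interpolation within the recovery year can be sketched as a small function (our own helper, shown against both examples above):

```python
def payback_period(flows):
    """flows[0] is the (negative) outlay; interpolate within the recovery year."""
    outstanding = -flows[0]
    for year, cf in enumerate(flows[1:], start=1):
        if cf >= outstanding:
            return (year - 1) + outstanding / cf
        outstanding -= cf
    return None  # cost never recovered

pp_b = payback_period([-10000, 5000, 2500, 4000, 1000])  # 2.625 years (Example 2)
pp_a = payback_period([-1000000] + [250000] * 5)         # 4.0 years (Example 1)
```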

Disadvantages of the payback method:


· It ignores the timing of cash flows within the payback period, the cash flows
after the end of payback period and therefore the total project return.

· It ignores the time value of money. This means that it does not take into
account the fact that $1 today is worth more than $1 in one year's time. An
investor who has $1 today can either consume it immediately or alternatively can
invest it at the prevailing interest rate, say 30%, to get a return of $1.30 in a
year's time.

· It is unable to distinguish between projects with the same payback period.

· It may lead to excessive investment in short-term projects.

Advantages of the payback method:


· Payback can be important: long payback means capital tied up and high
investment risk. The method also has the advantage that it involves a quick,
simple calculation and an easily understood concept.

The accounting rate of return - (ARR)

The ARR method (also called the return on capital employed (ROCE) or the
return on investment (ROI) method) of appraising a capital project is to estimate
the accounting rate of return that the project should yield. If it exceeds a target
rate of return, the project will be undertaken.

Note that net annual profit is struck after deducting depreciation.



Example:

A project has an initial outlay of $1 million and generates net receipts of $250,000 for 10 years.

Assuming straight-line depreciation of $100,000 per year, net annual profit = $250,000 - $100,000 = $150,000:

ARR on initial investment = $150,000 / $1,000,000 = 15%

ARR on average investment = $150,000 / $500,000 = 30%
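The two ARR bases differ only in the denominator; a minimal sketch:

```python
outlay = 1_000_000
annual_receipts = 250_000
annual_depreciation = 100_000                          # straight-line
annual_profit = annual_receipts - annual_depreciation  # $150,000

arr_initial = annual_profit / outlay         # 15% on initial capital
arr_average = annual_profit / (outlay / 2)   # 30% on average capital employed
```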

Disadvantages:
· It does not take account of the timing of the profits from an investment.

· It implicitly assumes stable cash receipts over time.

· It is based on accounting profits and not cash flows. Accounting profits are
subject to a number of different accounting treatments.

· It is a relative measure rather than an absolute measure and hence takes no
account of the size of the investment.

· It takes no account of the length of the project.

· It ignores the time value of money.

The payback and ARR methods in practice

Despite the limitations of the payback method, it is the method most widely used
in practice. There are a number of reasons for this:
· It is a particularly useful approach for ranking projects where a firm faces
liquidity constraints and requires fast repayment of investments.

· It is appropriate in situations where risky investments are made in uncertain markets that are subject to fast design and product changes or where future cash flows are particularly difficult to predict.

· The method is often used in conjunction with NPV or IRR method and acts as a
first screening device to identify projects which are worthy of further investigation.

· It is easily understood by all levels of management.

· It provides an important summary method: how quickly will the initial investment
be recouped?

Now attempt exercise 6.5.

Exercise 6.5 Payback and ARR

Delta Corporation is considering two capital expenditure proposals. Both proposals are for similar products and both are expected to operate for four years. Only one proposal can be accepted.

The following information is available:


Profit/(loss)
Proposal A Proposal B
$ $
Initial investment 46,000 46,000
Year 1 6,500 4,500
Year 2 3,500 2,500
Year 3 13,500 4,500
Year 4 Loss 1,500 Profit 14,500
Estimated scrap value at the end of Year 4 4,000 4,000

Depreciation is charged on the straight-line basis.

Problem:

a) Calculate the following for both proposals:
i) the payback period to one decimal place
ii) the average rate of return on initial investment, to one decimal place.

Allowing for inflation

So far, the effect of inflation has not been considered in the appraisal of capital investment proposals. Inflation is particularly important in developing countries, as the rate of inflation tends to be rather high. As the inflation rate increases, so will the minimum return required by an investor. For example, one might be happy with a return of 10% with zero inflation, but if inflation were 20%, one would expect a much greater return.

Example:

Keymer Farm is considering investing in a project with the following cash flows:

ACTUAL CASH FLOWS
TIME Z$
0 (100,000)
1 90,000
2 80,000
3 70,000
Keymer Farm requires a minimum return of 40% under the present conditions.
Inflation is currently running at 30% a year, and this is expected to continue
indefinitely. Should Keymer Farm go ahead with the project?

Let us take a look at Keymer Farm's required rate of return. If it invested $10,000
for one year on 1 January, then on 31 December it would require a minimum
return of $4,000. With the initial investment of $10,000, the total value of the
investment by 31 December must increase to $14,000. During the year, the
purchasing value of the dollar would fall due to inflation. We can restate the
amount received on 31 December in terms of the purchasing power of the dollar
at 1 January as follows:

Amount received on 31 December in terms of the value of the dollar at 1 January:

$14,000 / 1.30 = $10,769

In terms of the value of the dollar at 1 January, Keymer Farm would make a profit
of $769 which represents a rate of return of 7.69% in "today's money" terms. This
is known as the real rate of return. The required rate of 40% is a money rate of
return (sometimes known as a nominal rate of return). The money rate measures
the return in terms of the dollar, which is falling in value. The real rate measures
the return in constant price level terms.

The two rates of return and the inflation rate are linked by the equation:
(1 + money rate) = (1 + real rate) x (1 + inflation rate)

where all the rates are expressed as proportions.

In the example,
(1 + 0.40) = (1 + 0.0769) x (1 + 0.3)

= 1.40
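The Fisher-style relation between the money rate, the real rate and inflation is easy to verify in code; a small sketch (function names are ours):

```python
def real_rate(money_rate, inflation):
    """(1 + money) = (1 + real) * (1 + inflation), solved for the real rate."""
    return (1 + money_rate) / (1 + inflation) - 1

def money_rate(real, inflation):
    return (1 + real) * (1 + inflation) - 1

r = real_rate(0.40, 0.30)     # about 7.69% -- Keymer Farm's real return
m = money_rate(0.0769, 0.30)  # about 40%
```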

So, which rate is used in discounting? As a rule of thumb:


a) If the cash flows are expressed in terms of actual dollars that will be received
or paid in the future, the money rate for discounting should be used.

b) If the cash flows are expressed in terms of the value of the dollar at time 0 (i.e.
in constant price level terms), the real rate of discounting should be used.

In Keymer Farm's case, the cash flows are expressed in terms of the actual
dollars that will be received or paid at the relevant dates. Therefore, we should
discount them using the money rate of return.
TIME CASH FLOW DISCOUNT FACTOR PV
$ 40% $
0 (100,000) 1.000 (100,000)
1 90,000 0.714 64,260
2 80,000 0.510 40,800
3 70,000 0.364 25,480
NPV 30,540

The project has a positive net present value of $30,540, so Keymer Farm should
go ahead with the project.

The future cash flows can be re-expressed in terms of the value of the dollar at
time 0 as follows, given inflation at 30% a year:
TIME ACTUAL CASH FLOW CASH FLOW AT TIME 0 PRICE LEVEL
$ $
0 (100,000) (100,000)
1 90,000 69,231
2 80,000 47,337
3 70,000 31,862

The cash flows expressed in terms of the value of the dollar at time 0 can now be discounted using the real rate of 7.69%.
TIME CASH FLOW DISCOUNT FACTOR PV
$ 7.69% $
0 (100,000) 1.000 (100,000)
1 69,231 0.928 64,246
2 47,337 0.862 40,804
3 31,862 0.800 25,490
NPV 30,540

The NPV is the same as before.

Expectations of inflation and the effects of inflation



When a manager evaluates a project, or when a shareholder evaluates his/her investments, he/she can only guess what the rate of inflation will be. These guesses will probably be wrong, at least to some extent, as it is extremely difficult to forecast the rate of inflation accurately. The only way in which uncertainty about inflation can be allowed for in project evaluation is by risk and uncertainty analysis.

Inflation may be general, that is, affecting prices of all kinds, or specific to particular prices. Generalised inflation has the following effects:
a) Inflation will mean higher costs and higher selling prices. It is difficult to predict
the effect of higher selling prices on demand. A company that raises its prices by
30%, because the general rate of inflation is 30%, might suffer a serious fall in
demand.

b) Inflation, as it affects financing needs, is also going to affect gearing, and so the cost of capital.

c) Since fixed assets and stocks will increase in money value, the same
quantities of assets must be financed by increasing amounts of capital. If the
future rate of inflation can be predicted with some degree of accuracy,
management can work out how much extra finance the company will need and
take steps to obtain it, e.g. by increasing retention of earnings, or borrowing.

However, if the future rate of inflation cannot be predicted with a certain amount
of accuracy, then management should estimate what it will be and make plans to
obtain the extra finance accordingly. Provisions should also be made to have
access to 'contingency funds' should the rate of inflation exceed expectations,
e.g. a higher bank overdraft facility might be arranged should the need arise.

Many different proposals have been made for accounting for inflation. Two
systems known as "Current purchasing power" (CPP) and "Current cost
accounting" (CCA) have been suggested.

CPP is a system of accounting which makes adjustments to income and capital values to allow for the general rate of price inflation.

CCA is a system which takes account of specific price inflation (i.e. changes in
the prices of specific assets or groups of assets), but not of general price
inflation. It involves adjusting accounts to reflect the current values of assets
owned and used.

At present, there is very little agreement as to the best approach to the problem of 'accounting for inflation'. Both these approaches are still being debated by the accountancy bodies.

Now attempt exercise 6.6.

Exercise 6.6 Inflation



TA Holdings is considering whether to invest in a new product with a product life of four years. The cost of the fixed asset investment would be $3,000,000 in total, with $1,500,000 payable at once and the rest after one year. A further investment of $600,000 in working capital would be required.

The management of TA Holdings expect all their investments to justify themselves financially within four years, after which the fixed asset is expected to be sold for $600,000.

The new venture will incur fixed costs of $1,040,000 in the first year, including
depreciation of $400,000. These costs, excluding depreciation, are expected to
rise by 10% each year because of inflation. The unit selling price and unit
variable cost are $24 and $12 respectively in the first year and expected yearly
increases because of inflation are 8% and 14% respectively. Annual sales are
estimated to be 175,000 units.

TA Holdings' money cost of capital is 28%.

Is the product worth investing in?

II. FINANCIAL MANAGEMENT


A. Objectives And Scope Of Financial Management
1. Nature, Purpose And Scope Of Financial Management

NATURE

Financial Management is a decision-making process concerned with planning, acquiring, and utilizing funds in a manner that achieves the firm's desired goals.

It is the process of planning decisions in order to maximize wealth.

The nature of financial management can be spotlighted with reference to the following aspects of this discipline:

(i) Financial management is a specialized branch of general management in present-day times. Long back, in traditional times, the finance function was coupled either with production or with marketing, without being assigned a separate status.

(ii) Financial management is growing as a profession. Young educated persons aspiring for a career in management undergo specialized courses in financial management, offered by universities, management institutes etc., and take up the profession of financial management.

(iii) Despite its separate status, financial management is intermingled with other aspects of management. To some extent, financial management is the responsibility of every functional manager. For example, the production manager proposing the installation of a new plant to be operated with modern technology is also involved in a financial decision.

Likewise, the Advertising Manager thinking in terms of launching an aggressive advertising programme is, too, considering a financial decision; and so on for other functional managers. This intermingling nature of financial management calls for efforts in producing a coordinated financial system for the whole enterprise.

(iv) Financial management is multi-disciplinary in approach. It depends on other disciplines, like Economics and Accounting, for better procurement and utilisation of finances.

For example, macro-economics guides financial management as to banking and financial institutions, capital markets, and monetary and fiscal policies, to enable the finance manager to decide on the best sources of finance under the economic conditions the economy is passing through.

Micro-economics points out to the finance manager techniques for profit maximisation with the limited finances at the disposal of the enterprise. Accounting, in turn, provides data to the finance manager for better and improved financial decision-making in future.

(v) The finance manager is often called the Controller; and the financial management
function is given name of controllership function; in as much as the basic guideline for
the formulation and implementation of plans-throughout the enterprise-come from this
quarter.

The finance manager, very often, is a highly responsible member of the Top Management Team. He performs a trinity of roles: that of a line officer over the Finance Department; a functional expert commanding subordinates throughout the enterprise in matters requiring financial discipline; and a staff adviser, suggesting the best financial plans, policies and procedures to the Top Management.

In any case, however, the scope of authority of the finance manager is defined by the
Top Management; in view of the role desired of him- depending on his financial
expertise and the system of organizational functioning.

(vi) Despite a hue and cry about decentralisation of authority, finance is a matter still found centralised, even in enterprises which are otherwise highly decentralised. The reason for authority being centralised in financial matters is simple: not every Tom, Dick and Harry manager can be allowed to play with finances the way he/she likes. Finance is both a crucial and a limited asset of any enterprise.
(vii) Financial management is not simply a basic business function along with production and marketing; more significantly, it is the backbone of commerce and industry. It turns the sand of dreams into the gold of reality.

No production, purchases or marketing is possible without being duly supported by requisite finances. Hence, financial management commands a higher status vis-a-vis all other functional areas of general management.

Financial management is a long-term decision-making process which involves a
great deal of planning, allocation of funds, discipline and much more. Let us
understand the nature of financial management with reference to the following points.

1. Financial management is an important field of education which has gained
recognition worldwide. Nowadays people undergo various specialisation courses in
financial management, and many have chosen financial management as their
profession.

2. Financial management is never a separate entity. Even an operational manager or
functional manager has to take responsibility for financial management.

3. Finance is the foundation of economic activity. The person who manages finance
is called the financial manager, whose important role is to control finance and
implement the plans. The financial manager plays a crucial role in any company;
many times a lack of skill or wrong decisions can lead to heavy losses for an
organization.

4. Financial management is multi-disciplinary in nature. It depends upon various
other fields such as accounting, banking, inflation and the economy for the better
utilization of finances.

5. The approach of financial management is not limited to business functions; it is
the backbone of commerce, the economy and industry.

Financial management is broadly concerned with the mobilization and deployment of
funds by a business organization. For the efficient operation of a business it is
necessary to obtain and utilize funds effectively; this is the job of financial
management.

Financial management therefore centers on raising funds for the business in the
most economical way and investing those funds in the optimum way so that
maximum returns can be obtained for the shareholders. Practically all business
decisions have financial implications; hence, financial management is interlinked
with all other functions of the business.

PURPOSE

Financial management refers to the efficient and effective management of money
(funds) in such a manner as to accomplish the objectives of the organization. It is a
specialized function directly associated with top management. The significance of
this function is seen not only in the 'line' but also in the capacity of 'staff' in the
overall management of a company. It has been defined differently by different
experts in the field.

The term typically applies to an organization's or company's financial strategy,
while personal finance or financial life management refers to an individual's
management strategy. It includes how to raise capital and how to allocate it, i.e.,
capital budgeting; not only long-term budgeting, but also the allocation of short-term
resources such as current assets and liabilities. It also deals with dividend policy
towards the shareholders.

Taking a commercial business as the most common organisational structure, the key
objectives of financial management would be to:

 Create wealth for the business
 Generate cash, and
 Provide an adequate return on investment, bearing in mind the risks the
business is taking and the resources invested

Objectives of Financial Management

Financial management is generally concerned with the procurement, allocation and
control of the financial resources of a concern. The objectives can be:

1. To ensure a regular and adequate supply of funds to the concern.
2. To ensure adequate returns to the shareholders, which will depend upon the
earning capacity, the market price of the share, and the expectations of the
shareholders.
3. To ensure optimum funds utilization. Once the funds are procured, they should
be utilized to the maximum possible extent at the least cost.
4. To ensure safety of investment, i.e., funds should be invested in safe ventures so
that an adequate rate of return can be achieved.
5. To plan a sound capital structure. There should be a sound and fair composition
of capital so that a balance is maintained between debt and equity capital.

Functions of Financial Management

1. Estimation of capital requirements: A finance manager has to make estimations
with regard to the capital requirements of the company. These will depend upon
expected costs and profits and the future programmes and policies of the
concern. Estimations have to be made in an adequate manner that increases the
earning capacity of the enterprise.
2. Determination of capital composition: Once the estimations have been made, the
capital structure has to be decided. This involves short-term and long-term
debt-equity analysis, and will depend upon the proportion of equity capital the
company possesses and the additional funds which have to be raised from
outside parties.
3. Choice of sources of funds: For additional funds to be procured, a company has
many choices, such as:
a. Issue of shares and debentures
b. Loans from banks and financial institutions
c. Public deposits, e.g., in the form of bonds.
The choice will depend on the relative merits and demerits of each
source and the period of financing.
4. Investment of funds: The finance manager has to decide how to allocate funds to
profitable ventures so that there is safety of investment and regular returns are
possible.
5. Disposal of surplus: Decisions on the net profits have to be made by the finance
manager. This can be done in two ways:
a. Dividend declaration, which includes identifying the rate of dividends and
other benefits like bonus.
b. Retained profits, the volume of which has to be decided depending upon
the expansion, innovation and diversification plans of the company.
6. Management of cash: The finance manager has to make decisions with regard to
cash management. Cash is required for many purposes, such as payment of
wages and salaries, payment of electricity and water bills, payment to creditors,
meeting current liabilities, maintenance of enough stock, and purchase of raw
materials.
7. Financial controls: The finance manager has not only to plan, procure and utilize
the funds but also to exercise control over finances. This can be done through
many techniques, such as ratio analysis, financial forecasting, and cost and profit
control.
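The ratio analysis mentioned under financial controls above reduces to simple arithmetic. A minimal sketch follows; the peso amounts and ratio choices are illustrative only, not drawn from the text.

```python
# Illustrative ratio analysis for financial control (hypothetical figures).

def debt_to_equity(total_debt, total_equity):
    """How the capital structure balances borrowed funds against owners' funds."""
    return total_debt / total_equity

def return_on_equity(net_income, total_equity):
    """Return earned on the shareholders' funds."""
    return net_income / total_equity

total_debt, total_equity, net_income = 400_000, 800_000, 120_000

print(f"Debt-to-equity: {debt_to_equity(total_debt, total_equity):.2f}")      # 0.50
print(f"Return on equity: {return_on_equity(net_income, total_equity):.1%}")  # 15.0%
```

Tracking such ratios period over period is what lets the finance manager spot an imbalance between debt and equity, or a declining return, early enough to act.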

SCOPE

The main objective of financial management is to arrange sufficient finance for
meeting short-term and long-term needs. With these things in mind, a financial
manager will have to concentrate on the following areas of the finance function.

1. Estimating Financial Requirements: The first task of a financial manager is to
estimate the short-term and long-term financial requirements of his business. For
this purpose, he will prepare a financial plan for the present as well as for the future.
The amount required for purchasing fixed assets, as well as the funds needed for
working capital, will have to be ascertained. The estimations should be based on
sound financial principles so that the concern has neither inadequate nor excess
funds. Inadequate funds will adversely affect the day-to-day working of the concern,
whereas excess funds may tempt management to indulge in extravagant spending
or speculative activities.

2. Deciding Capital Structure: The capital structure refers to the kind and proportion
of different securities for raising funds. After deciding the quantum of funds required,
it should be decided which types of securities should be raised. It may be wise to
finance fixed assets through long-term debt; where the gestation period is longer,
share capital may be more suitable. Long-term funds should also be employed to
finance working capital, if not wholly then partially. Depending entirely on overdrafts
and cash credit for meeting working capital needs may not be suitable. A decision
about the various sources of funds should be linked to the cost of raising them; if the
cost of raising funds is very high, such sources may not be useful for long. The kind
of securities to be employed and the proportion in which they should be used is an
important decision which influences the short-term and long-term financial planning
of an enterprise.

3. Selecting a Source of Finance: After preparing a capital structure, an appropriate
source of finance is selected. The various sources from which finance may be raised
include share capital, debentures, financial institutions, commercial banks, public
deposits, etc. If finances are needed for short periods, then banks, public deposits
and financial institutions may be appropriate; on the other hand, if long-term
finances are required, then share capital and debentures may be useful. If the
concern does not want to tie down assets as security, then public deposits may be a
suitable source. If management does not want to dilute ownership, then debentures
should be issued in preference to shares. The need, purpose, object and cost
involved may be the factors influencing the selection of a suitable source of
financing.

4. Selecting a Pattern of Investment: When funds have been procured, a decision
about the investment pattern is to be taken. The selection of an investment pattern is
related to the use of funds: a decision will have to be taken as to which assets are to
be purchased. The funds will have to be spent first on fixed assets, and then an
appropriate portion will be retained for working capital. Even within the various
categories of assets, a decision about the type of fixed or other assets will be
essential; while selecting plant and machinery, for instance, different categories of
them may be available. Decision-making techniques such as capital budgeting and
opportunity cost analysis may be applied in making decisions about capital
expenditures. While spending on various assets, the principle of weighing risk
against return applies: one may not like to invest in a project which is risky even
though it may promise more profits.

5. Proper Cash Management: Cash management is also an important task of the
finance manager. He has to assess the various cash needs at different times and
then make arrangements for cash. Cash may be required to (a) purchase raw
materials, (b) make payments to creditors, (c) meet wage bills, and (d) meet
day-to-day expenses. The usual sources of cash may be (a) cash sales, (b)
collection of debts, and (c) short-term arrangements with banks. Cash management
should be such that there is neither a shortage of cash nor idle cash. A shortage of
cash will damage the creditworthiness of the enterprise, while idle cash means that
cash is not being properly used. It will be better if a Cash Flow Statement is regularly
prepared so that one is able to identify the various sources and applications of cash.
If cash is spent on avoidable expenses, such spending may be curtailed. A proper
idea of the sources of cash inflow will also enable management to assess the utility
of the various sources; some sources may not provide as much cash as was
thought. All this information will help in the efficient management of cash.
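The cash planning described above can be sketched as a simple rolling cash budget. The monthly receipts, payments, and the shortage threshold below are hypothetical figures, not taken from the text.

```python
# A minimal monthly cash budget (hypothetical figures):
# closing cash = opening cash + receipts - payments, carried forward each month.

months = ["Jan", "Feb", "Mar"]
receipts = {"Jan": 50_000, "Feb": 65_000, "Mar": 40_000}  # cash sales, collections
payments = {"Jan": 45_000, "Feb": 70_000, "Mar": 38_000}  # wages, creditors, bills

cash = 10_000  # opening balance
for m in months:
    cash += receipts[m] - payments[m]
    # Flag months where the closing balance falls below a safety level.
    flag = "  <-- shortage risk" if cash < 5_000 else ""
    print(f"{m}: closing cash = {cash:,}{flag}")
```

Even this toy schedule shows the point of the paragraph: the budget reveals in advance both shortages (which damage creditworthiness) and persistent surpluses (idle cash).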

6. Implementing Financial Controls: An efficient system of financial management
necessitates the use of various control devices. The financial control devices
generally used are (a) return on investment, (b) budgetary control, (c) break-even
analysis, (d) cost control, (e) ratio analysis, and (f) cost and internal audit. Return on
investment is the best control device for evaluating the performance of various
financial policies; the higher this percentage, the better the financial performance.
The use of various control techniques will help the finance manager evaluate
performance in the various areas and take corrective measures whenever needed.
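Two of the control devices listed above, return on investment and break-even analysis, are straightforward formulas. The income, investment, price, and cost figures in this sketch are illustrative assumptions.

```python
# Return on investment and break-even point (illustrative figures).

def roi(net_income, invested_capital):
    """ROI: net income as a fraction of capital invested."""
    return net_income / invested_capital

def break_even_units(fixed_costs, price, variable_cost):
    """Units at which total contribution margin just covers fixed costs."""
    return fixed_costs / (price - variable_cost)

print(f"ROI: {roi(150_000, 1_000_000):.0%}")                          # 15%
print(f"Break-even: {break_even_units(200_000, 50, 30):.0f} units")   # 10000 units
```

Comparing actual results against such benchmarks is what turns these formulas into control devices rather than one-off computations.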

7. Proper Use of Surpluses: The utilization of profits or surpluses is also an important
factor in financial management. A judicious use of surpluses is essential for
expansion and diversification plans and also for protecting the interests of
shareholders. The ploughing back of profits is the best policy of further financing, but
it can clash with the interests of shareholders. A balance should be struck between
using funds to pay dividends and retaining earnings to finance expansion plans, etc.
The market value of shares will also be influenced by the declaration of dividends
and expected future profitability. A finance manager should consider the influence of
various factors, such as (a) the earning trends of the enterprise, (b) expected future
earnings, (c) the market value of shares, and (d) the need for funds to finance
expansion. A judicious policy for distributing surpluses will be essential for
maintaining the proper growth of the unit.

The major areas within the scope of financial management are as follows:
1. Investment Decision, 2. Financing Decision, 3. Dividend Decision, and 4. Working
Capital Decision.

1. Investment Decision:

The investment decision involves the evaluation of risk, the measurement of the cost
of capital, and the estimation of expected benefits from a project. Capital budgeting
and liquidity are the two major components of the investment decision. Capital
budgeting is concerned with the allocation of capital and the commitment of funds to
permanent assets which will yield earnings in the future.

Capital budgeting also involves decisions with respect to replacement and renovation
of old assets. The finance manager must maintain an appropriate balance between
fixed and current assets in order to maximise profitability and to maintain desired
liquidity in the firm.

Capital budgeting is a very important decision as it affects the long-term success and
growth of a firm. At the same time it is a very difficult decision because it involves the
estimation of costs and benefits which are uncertain and unknown.
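Because capital budgeting rests on estimates of uncertain future costs and benefits, proposals are commonly screened by discounting expected cash flows. A minimal net present value sketch follows; the outlay, cash flows, and 10% discount rate are assumed figures.

```python
# Net present value of a project (assumed cash flows and discount rate).

def npv(rate, cash_flows):
    """Discount each year's cash flow to the present; year 0 is the initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Assumed project: 100,000 outlay today, then 45,000 per year for three years.
project = [-100_000, 45_000, 45_000, 45_000]
result = npv(0.10, project)  # positive, so the project clears the 10% cut-off rate
print(f"NPV at 10%: {result:,.2f}")
```

In practice the cash flow estimates themselves carry the risk the text describes, so firms often rerun the same calculation under pessimistic and optimistic scenarios.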

2. Financing Decision:

While the investment decision involves decisions with respect to the composition or
mix of assets, the financing decision is concerned with the financing mix or financial
structure of the firm. The raising of funds requires decisions regarding the methods
and sources of finance, the relative proportion and choice between alternative
sources, the timing of the floatation of securities, etc. In order to meet its investment
needs, a firm can raise funds from various sources.

The finance manager must develop the best financing mix or optimum capital
structure for the enterprise so as to maximise the long-term market price of the
company's shares. A proper balance between debt and equity is required so that the
return to equity shareholders is high and their risk is low.

The use of debt or financial leverage affects both the return and the risk of the equity
shareholders. The market value per share is maximised when risk and return are
properly matched. The finance department also has to decide the appropriate time
to raise funds and the method of issuing securities.
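The effect of leverage on the equity holders' return can be seen in a small comparison. The capital amounts, operating income, and 8% interest rate below are assumed, and taxes are ignored for simplicity.

```python
# How debt financing changes return on equity (all figures assumed).

def roe(ebit, debt, equity, interest_rate, tax_rate=0.0):
    """Return on equity after interest (taxes ignored by default for simplicity)."""
    net_income = (ebit - debt * interest_rate) * (1 - tax_rate)
    return net_income / equity

ebit = 100_000  # operating income on 500,000 of total capital
print(f"All equity: {roe(ebit, 0, 500_000, 0.08):.1%}")        # 20.0%
print(f"40% debt  : {roe(ebit, 200_000, 300_000, 0.08):.1%}")  # 28.0%
```

Leverage lifts the return here because the 8% cost of debt is below the 20% return on total capital; if operating income fell below the interest burden, the same leverage would magnify the loss, which is the risk side the text refers to.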

3. Dividend Decision:

In order to achieve the wealth maximisation objective, an appropriate dividend policy
must be developed. One aspect of dividend policy is deciding whether to distribute
all the profits in the form of dividends or to distribute a part of the profits and retain
the balance. While deciding the optimum dividend payout ratio (the proportion of net
profits to be paid out to shareholders), the finance manager should consider the
investment opportunities available to the firm, plans for expansion and growth, etc.
Decisions must also be made with respect to dividend stability and the form of
dividends, i.e., cash dividends or stock dividends.
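The payout ratio just defined can be computed directly, together with its complement, the retention ratio. The profit and dividend figures below are hypothetical.

```python
# Dividend payout ratio and retention ratio (hypothetical figures).

net_profit = 500_000
dividends_declared = 200_000

payout_ratio = dividends_declared / net_profit  # share paid to shareholders
retention_ratio = 1 - payout_ratio              # share ploughed back into the firm

print(f"Payout ratio:    {payout_ratio:.0%}")    # 40%
print(f"Retention ratio: {retention_ratio:.0%}") # 60%
```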

4. Working Capital Decision:

The working capital decision is related to investment in current assets and current
liabilities. Current assets include cash, receivables, inventory, short-term securities,
etc.; current liabilities consist of creditors, bills payable, outstanding expenses, bank
overdrafts, etc. Current assets are those assets which are convertible into cash
within a year; similarly, current liabilities are those liabilities which are likely to
mature for payment within an accounting year.
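A firm's working capital position is conventionally summarized by net working capital and the current ratio. The account balances below are illustrative assumptions.

```python
# Net working capital and current ratio (illustrative balances).

current_assets = {"cash": 50_000, "receivables": 120_000, "inventory": 180_000}
current_liabilities = {"creditors": 90_000, "bills_payable": 60_000, "overdraft": 50_000}

ca = sum(current_assets.values())       # 350,000
cl = sum(current_liabilities.values())  # 200,000

print(f"Net working capital: {ca - cl:,}")  # 150,000
print(f"Current ratio: {ca / cl:.2f}")      # 1.75
```

A current ratio comfortably above 1 indicates that the current assets maturing within the year can cover the liabilities falling due in the same period.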

The scope of financial management includes three groups: first, matters relating to
finance and cash; second, the raising of funds and their administration; and third, the
view that the activities of raising and using funds are part and parcel of total
management. Ezra Solomon felt that, in view of funds utilisation, the third group has
the wider scope.

It can be said that all activities carried out by a finance officer fall under the purview
of financial management. But since the activities of these officers change from firm
to firm, it becomes difficult to delimit the scope of finance. Financial management
plays two main roles: one, participating in funds utilisation and controlling
productivity; two, identifying the requirements for funds and selecting the sources of
those funds. Liquidity, profitability and management are the functions of financial
management. Let us look at them briefly.
1. Liquidity:

Liquidity can be ascertained through three important considerations.

i) Forecasting of cash flows:

Cash inflows and outflows should be matched for the purpose of liquidity.

ii) Raising of funds:

The finance manager should try to identify the requirement for funds and raise them
accordingly.

iii) Managing the flow of internal funds:

A high degree of liquidity can be maintained by keeping accounts in many banks;
then there will be no need to depend on external loans.

2. Profitability:

While ascertaining profitability, the following aspects should be taken into
consideration:

i) Cost control:

For the purpose of controlling costs, the various activities of the firm should be
analyzed through a proper cost accounting system.

ii) Pricing:

Pricing policy has great importance in deciding the sales level in the company's
marketing. Pricing policy should be evolved in such a way that the image of the firm
is not affected.

iii) Forecasting of future profits:

Estimated profits should be ascertained and assessed regularly to strengthen the
firm and to ascertain profit levels.

iv) Measuring the cost of capital:

Each source of funds has a different cost of capital. As the profit of the firm is directly
related to the cost of capital, each cost of capital should be measured.
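Once each source's cost is measured, the usual next step is to combine them into a weighted average cost of capital. The source weights and after-tax costs below are assumed figures for illustration.

```python
# Weighted average cost of capital (assumed source costs and proportions).

sources = [
    # (name, market-value weight, after-tax cost)
    ("equity",     0.60, 0.14),
    ("debt",       0.30, 0.06),
    ("preference", 0.10, 0.09),
]

wacc = sum(weight * cost for _, weight, cost in sources)
print(f"WACC: {wacc:.2%}")  # 11.10%
```

This blended rate is what a firm typically uses as the cut-off rate when screening investment proposals, tying the profitability function back to the investment decision.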

3. Management:

It is the duty of the financial manager to maintain the sources of the firm's assets in
running the business. Asset management plays an important role in financial
management. Besides, the financial manager should see that the required resources
are available for the smooth running of the firm without interruption.

A business may fail even without financial failure, but financial failures also lead to
business failure. Because of this peculiar condition, the responsibility of financial
management has increased. It can be divided into the management of long-run
funds and short-run funds.

Long-run management of funds relates to development and expansion plans.
Short-run management of funds relates to the activities of the total business cycle. It
is also the responsibility of financial management to coordinate the different
activities in the business. Thus, for the success of any firm or organization, financial
management is a must.

2. Role of Financial Managers in Investment, Operating and Financing Decisions

INVESTMENT / CAPITAL BUDGETING DECISIONS

One of the most important finance functions is to intelligently allocate capital to
long-term assets. This activity is also known as capital budgeting. It is important to
allocate capital to those long-term assets so as to get the maximum yield in the
future. The two aspects of the investment decision are:

a. Evaluation of new investment in terms of profitability
b. Comparison of the cut-off rate against new and prevailing investments

Since the future is uncertain, there are difficulties in the calculation of expected
return. Along with uncertainty comes the risk factor, which has to be taken into
consideration. This risk factor plays a very significant role in calculating the expected
return of the prospective investment. Therefore, while considering an investment
proposal, it is important to take into consideration both the expected return and the
risk involved.

The investment decision not only involves allocating capital to long-term assets but
also involves decisions about using the funds obtained by selling those assets which
have become less profitable and less productive. It is wise to dispose of depreciated
assets which are no longer adding value and to utilize those funds in securing other
beneficial assets. The opportunity cost of capital needs to be calculated when
disposing of such assets; the correct cut-off rate is this opportunity cost, i.e., the
required rate of return (RRR).

At present, efficient use and allocation of capital are the most important functions of
financial management. Practically, this function involves the decision of the firm to
commit its funds in long-term assets together with other profitable activities.

However, the decisions of the firm to invest funds in long-term assets need
considerable attention, as they tend to influence the firm's wealth, size and growth,
and also affect its business risk. No doubt, the primary consideration in all types of
investment decisions is the earning capacity, i.e., the rate of return.

But there are other considerations as well, e.g., the risk factor. In short, the risk
factor also plays a significant role in investment decisions.

Generally, investment decisions fall under two broad categories:

(i) Investment in one's own business; and
(ii) Investment in outside business, i.e., in securities and other companies.

We all know that the primary sources of capital are:

(i) Owners, and
(ii) Lenders / Outsiders.

It is also known to us that there is a cost of capital in all types of capital investment in
the business. Therefore, investment in one's own business is justified only when the
return from it will be at least equal to the relevant cost of capital.

In other words, investment in one's own business is desirable provided the return
from the enterprise is higher than the relevant cost of capital.

The primary purpose of investing funds in business assets is, of course, to produce
future economic benefits in such a manner that they will cover not only the cost of
capital and operating expenses but will also leave a sufficient margin to cover the
risk involved.

Investment Involves Risk:

The value of invested capital can, no doubt, be affected by the following factors:

(i) Advancement in technology leads to improved and efficient machines, which may
render existing machinery worthless; or
(ii) A change in pattern or design may involve the scrapping of parts, materials or
tools lying in stock which can no longer be used; or
(iii) If consumers' tastes and preferences change, it is nothing but a loss of value to
the company; or
(iv) Investments made in receivables may prove bad and irrecoverable; and so on.

Therefore, adequate consideration relating to the investment of capital should
always be made, since investment involves risk.

Need for Funds:


We all know that funds are required by a firm for different purposes. Naturally, how
much is required depends on the nature and type of the business enterprise.

Generally, two well-known classifications may be mentioned:

(i) Investment in Fixed Assets; and
(ii) Investment in Current Assets (Working Capital).

Fixed assets (e.g., land and buildings, plant and machinery, furniture and fixtures,
etc.) are acquired not for sale and are usually owned. They help to continue the
production of goods and services in order to earn revenues. Investment in fixed
assets must be made in such a way that they are properly utilized, i.e., not left idle.

So, investment in fixed assets needs the following further considerations:



(i) Provision is to be made for adequate planned capital expenditure;
(ii) Proper evaluation of the project is to be made before actual execution; and
(iii) Estimates and schedules are made for approved capital projects; and so on.

Similarly, current assets (e.g., inventories, debtors, bills, cash and bank balances,
etc.) are required for working capital purposes. The funds invested in working capital
must also be properly utilized, since idle working capital will increase cost.

Since financial resources are always limited, the proper allocation and use of funds
are necessary. Besides, limited financial resources lead a firm to consider alternative
courses of action, viz.:

i. Rental, as an alternative to ownership;
ii. Buying, as an alternative to manufacturing.

An investment decision revolves around spending capital on the assets that will yield
the highest return for the company over a desired time period. In other words, the
decision is about what to buy so that the company will gain the most value.

To do so, the company needs to find a balance between its short-term and long-term
goals. In the very short term, a company needs money to pay its bills, but keeping all
of its cash means that it isn't investing in things that will help it grow in the future. At
the other end of the spectrum is a purely long-term view. A company that invests all
of its money will maximize its long-term growth prospects, but if it doesn't hold
enough cash, it can't pay its bills and will soon go out of business. Companies thus
need to find the right mix between long-term and short-term investment.

The investment decision also concerns which specific investments to make. Since
there is no guarantee of a return for most investments, the finance department must
determine an expected return. This return is not guaranteed, but is the average
return on an investment if it were to be made many times.
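An expected return, as described, is an average over the possible outcomes weighted by their likelihood. The scenario probabilities and returns below are made-up figures for illustration.

```python
# Expected return as a probability-weighted average (made-up scenarios).

scenarios = [
    # (probability, return)
    (0.30, -0.05),  # downturn
    (0.50,  0.10),  # normal
    (0.20,  0.25),  # boom
]

assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9  # probabilities must sum to 1
expected_return = sum(p * r for p, r in scenarios)
print(f"Expected return: {expected_return:.1%}")  # 8.5%
```

Note that 8.5% is not what any single scenario pays; it is the long-run average if the investment could be repeated many times, which is exactly the sense of "expected" used above.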

An investment must meet three main criteria:

1. It must maximize the value of the firm, after considering the amount of risk the
company is comfortable with (risk aversion).
2. It must be financed appropriately (we will talk more about this shortly).
3. If there is no investment opportunity that meets (1) and (2), the cash must be
returned to shareholders in order to maximize shareholder value.

This decision relates to the careful selection of assets in which funds will be invested
by the firm. A firm has many options for investing its funds, but it has to select the
investments that will bring the maximum benefit; deciding on or selecting the most
appropriate proposal is the investment decision.

The firm invests its funds in acquiring fixed assets as well as current assets. When a
decision regarding fixed assets is taken, it is also called a capital budgeting decision.

OPERATING DECISIONS

Operating decisions are more routine or scheduled forms of decision. Examples are
the determination of the amounts of inventories, cash and accounts receivable to
hold within a certain period.

Asset management is one of the main aspects of a company adequately meeting its
obligations and, in turn, positioning itself to meet the objectives or growth targets that
have been laid out. In other words, the financial manager must stipulate and ensure
that the existing assets are managed in the most efficient way possible. Generally,
this manager must prioritize current asset management before fixed asset
management. Current assets are those that will be realized in the near future, such
as accounts receivable or inventories. By contrast, fixed assets lack liquidity, since
they are needed for permanent operations; these include offices, warehouses,
machinery, vehicles, etc.

A financial manager at times may be faced with difficult choices because the
company does not have sufficient cash available to pay important expenses. He may
have to choose, for example, between making a tax payment on time and making a
loan payment on time. Missing the tax payment can result in the company being
charged penalties and interest. Missing the loan payment could jeopardize the
company’s relationship with a lender that the business owner hoped to obtain
additional financing from in the future.

Short-Run Vision

A financial manager's natural conservatism, wanting to make sure the company
always has sufficient cash, can cause him to recommend against expenditures that
would allow the company to take advantage of opportunities for growth. He may
urge the business owner not to proceed with an acquisition opportunity that has
been presented to the company because he believes, from a financial standpoint,
that the company cannot afford the cost of the acquisition.

FINANCING DECISIONS

All functions of a company need to be paid for one way or another. It is up to the
finance department to figure out how to pay for them through the process of
financing.
There are two ways to finance an investment: using a company's own money or by
raising money from external funders. Each has its advantages and disadvantages.
There are two ways to raise money from external funders: by taking on debt or by
selling equity. Taking on debt is the same as taking out a loan; the loan has to be
paid back with interest, which is the cost of borrowing. Selling equity is essentially
selling part of your company. When a company goes public, for example, it decides
to sell part of itself to the public instead of to private investors. Going public entails
selling stock, which represents ownership of a small part of the company; the
company is selling itself to the public in return for money.

Every investment can be financed through company money or from external funders.
It is the financing decision process that determines the optimal way to finance the
investment.

The financing decision is yet another important function which a financial manager
must perform. It is important to make wise decisions about when, where and how a
business should acquire funds. Funds can be acquired through many ways and
channels.

Broadly speaking, a correct ratio of equity and debt has to be maintained. This mix of
equity capital and debt is known as a firm's capital structure.

A firm tends to benefit most when the market value of the company's shares is
maximized; this is not only a sign of growth for the firm but also maximizes
shareholders' wealth. On the other hand, the use of debt affects the risk and return
of the shareholder: it is more risky, though it may increase the return on equity
funds.

A sound financial structure is said to be one which aims at maximizing shareholders'
return with minimum risk. In such a scenario the market value of the firm will be
maximized, and hence an optimum capital structure will have been achieved. Other
than equity and debt, there are several other tools which are used in deciding a
firm's capital structure.

A company can raise finance from various sources, such as by the issue of shares
or debentures or by taking loans and advances. Deciding how much to raise from
which source is the concern of the financing decision. Sources of finance can mainly
be divided into two categories:

1. Owners' funds.
2. Borrowed funds.

Share capital and retained earnings constitute owners' funds, while debentures,
loans, bonds, etc. constitute borrowed funds.

The main concern of the finance manager is to decide how much to raise from
owners' funds and how much to raise from borrowed funds. While taking this
decision, the finance manager compares the advantages and disadvantages of
different sources of finance. Borrowed funds have to be paid back and involve some
degree of risk, whereas with owners' funds there is no fixed commitment of
repayment and no risk involved. But the finance manager prefers a mix of both
types. Under the financing decision, the finance manager fixes the ratio of owners'
funds and borrowed funds in the capital structure of the company.

Factors Affecting Financing Decisions:

While taking financing decisions the finance manager keeps in mind the following
factors:

1. Cost:

The cost of raising finance from various sources differs, and finance managers
prefer the source with the minimum cost.

2. Risk:

More risk is associated with borrowed fund than with owner’s fund securities. The
finance manager compares the risk with the cost involved and prefers securities
with a moderate risk factor.

3. Cash Flow Position:

The cash flow position of the company also helps in selecting the securities. With
smooth and steady cash flows, a company can easily afford borrowed fund securities,
but when a company has a shortage of cash flow it should go for owner’s fund
securities only.

4. Control Considerations:

If existing shareholders want to retain complete control of the business, they
prefer borrowed fund securities to raise further funds. On the other hand, if they
do not mind losing control, they may go for owner’s fund securities.

5. Floatation Cost:
It refers to cost involved in issue of securities such as broker’s commission,
underwriters fees, expenses on prospectus, etc. Firm prefers securities which involve
least floatation cost.

6. Fixed Operating Cost:

If a company has high fixed operating costs, it should prefer owner’s fund,
because with high fixed operating costs the company may not be able to pay
interest on debt securities, which can cause serious trouble for the company.

7. State of Capital Market:

Conditions in the capital market also help in deciding the type of securities to be
raised. During a boom period it is easy to sell equity shares, as people are ready
to take risk, whereas during a depression there is more demand for debt securities
in the capital market.

B. Financial Management Concepts & Techniques For Planning, Control & Decision
Making
1. Financial Statement Analysis
a. Vertical Analysis (Common-Size Financial Statements)

Vertical analysis (also known as common-size analysis) is a popular method of
financial statement analysis that shows each item on a statement as a percentage
of a base figure within the statement.
To conduct a vertical analysis of a balance sheet, the total of assets and the
total of liabilities and stockholders’ equity are generally used as base figures.
All individual assets (or groups of assets if a condensed balance sheet is used)
are shown as a percentage of total assets. The current liabilities, long-term
debts and equities are shown as a percentage of total liabilities and
stockholders’ equity.

To conduct a vertical analysis of an income statement, the sales figure is
generally used as the base, and all other components of the income statement, such
as cost of sales, gross profit, operating expenses, income tax, and net income,
are shown as a percentage of sales.
In a vertical analysis the percentage is computed by using the following formula:

Percentage of base = (amount of individual item ÷ amount of base figure) × 100

A basic vertical analysis needs an individual statement for a reporting period but
comparative statements may be prepared to increase the usefulness of the
analysis.

Example:

An example of the vertical analysis of a balance sheet and an income statement is
given below:

Comparative balance sheet with vertical analysis:



Current assets:

2008: (550,000 / 1,139,500) × 100 = 48.3%


2007: (530,000 / 1,230,500) × 100 = 43.3%

Comparative income statement with vertical analysis:



Cost of goods sold:

2008: (1,043,000 / 1,498,000) × 100 = 69.6%

2007: (820,000 / 1,200,000) × 100 = 68.3%

Vertical analysis states financial statements in a comparable common-size format
(percentage form). One of the advantages of common-size analysis is that it can be
used for inter-company comparison of enterprises of different sizes, because all
items are expressed as a percentage of some common number. For example, suppose
company A and company B belong to the same industry. A is a small company and B is
a large company. Company A’s sales and gross profit are $100,000 and $30,000
respectively, whereas company B’s sales and gross profit are $1,000,000 and
$300,000 respectively. If vertical analysis is conducted and the sales figure is
used as the base, it would show a gross profit percentage of 30% for both
companies, as shown below:
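The computation just described can be sketched in a few lines of Python (a minimal illustration using the hypothetical company A and company B figures from this example):

```python
def common_size(item, base):
    """Express a financial statement item as a percentage of a base figure."""
    return round(item / base * 100, 1)

# Company A (small): sales $100,000, gross profit $30,000
# Company B (large): sales $1,000,000, gross profit $300,000
a_gp_pct = common_size(30_000, 100_000)
b_gp_pct = common_size(300_000, 1_000_000)
# Both show a 30.0% gross profit despite the size difference,
# which is what makes common-size statements comparable.
```

The same helper reproduces the balance sheet figures above: common_size(550_000, 1_139_500) gives 48.3.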

A common-size financial statement displays line items as a percentage of one
selected, or common, figure. Creating common-size financial statements makes it
easier to analyze a company over time and compare it with its peers. Using
common-size financial statements helps investors spot trends that a raw financial
statement may not uncover.

All three of the primary financial statements can be put into a common-size
format. Financial statements in dollar amounts can easily be converted to
common-size statements using a spreadsheet, or they can be obtained from
online resources like Mergent Online. Below is an overview of each statement
and a more detailed summary of the benefits, as well as drawbacks, that such an
analysis can provide investors.

Balance Sheet Analysis

The common figure for a common-size balance sheet analysis is total assets.
Based on the accounting equation, this also equals total liabilities and
shareholders’ equity, making either term interchangeable in the analysis. It is
also possible to use total liabilities to indicate where a company’s obligations
lie and whether it is being conservative or risky in managing its debts.

The common-size strategy from a balance sheet perspective lends insight into a
firm’s capital structure and how it compares to rivals. An investor can also look to
determine an optimal capital structure for an industry and compare it to the firm
being analyzed. Then he or she can conclude whether debt is too high, excess
cash is being retained on the balance sheet, or inventories are growing too high.

The goodwill level on a balance sheet also helps indicate the extent to which a
company has relied on acquisitions for growth.

Below is an example of a common-size balance sheet for technology giant
International Business Machines, IBM (NYSE: IBM). Running through some of the
examples touched on above, we can see that long-term debt averages around 20% of
total assets over the three-year period, which is a reasonable level. It is even
more reasonable when observing that cash represents around 10% of total assets,
and short-term debt accounts for 6% to 7% of total assets over the past three
years.

It is important to add short-term and long-term debt together and compare this
amount to total cash on hand in the current assets section. It lets the investor
know how much of a cash cushion is available or if a firm is dependent on the
markets to refinance debt when it comes due.

Analyzing the Income Statement



The common figure for an income statement is total top-line sales. This is
actually the same analysis as calculating a company's margins. For instance, a
net profit margin is simply net income divided by sales, which also happens to be
a common-size analysis. The same goes for calculating gross and operating
margins. The common-size method is appealing for research-intensive
companies, for example, because they tend to focus on research and
development (R&D) and what it represents as a percent of total sales.

Below is a common-size income statement for IBM. We will cover it in more
detail below, but notice the R&D expense that averages close to 6% of revenues.

Looking at the peer group and companies overall, according to a Booz & Co.
analysis, this puts IBM in the top five among tech giants and the top 20 firms in
the world (2013) in terms of total R&D spending as a percent of total sales.

Common Size and Cash Flow

In similar fashion to an income statement analysis, many items in the cash flow
statement can be stated as a percent of total sales. This can give insight on a
number of cash flow items, including capital expenditures (capex) as a percent of
revenue. Share repurchase activity can also be put into context as a percent of
the total top line. Debt issuance is another important figure in proportion to the
amount of annual sales it helps generate. Because these items are calculated as
a percent of sales, they help indicate the extent to which they are being utilized
to generate overall revenue.

Below is IBM’s cash flow statement in terms of total sales. It generated an
impressive level of operating cash flow that averaged 19% of sales over the
three-year period from 2010 to 2012. Share repurchase activity was also
impressive at more than 11% of total sales in each of the three years. You may
also notice the first row, which is net income as a percent of total sales; it
matches exactly with the common-size analysis from an income statement
perspective. This represents the net profit margin.

How is This Different from the Regular Financial Statements?

The key benefit of a common-size analysis is that it allows for a vertical
analysis by line item over a single time period, such as a quarterly or annual
period, and also from a horizontal perspective over a time period such as the
three years we analyzed for IBM above.

Just looking at a raw financial statement makes this more difficult. But looking up
and down a financial statement, using a vertical analysis allows an investor to
catch significant changes at a company on his or her own. A common-size
analysis helps put an analysis in context (on a percentage basis). It is the same
as a ratio analysis when looking at the profit and loss statement.

What the Common-Size Reveals

The biggest benefit of a common-size analysis is that it can let an investor
identify large or drastic changes in a firm’s financials. Rapid increases or
decreases will be readily observable, such as a rapid drop in reported profits
during one quarter or year.

In IBM's case, its results overall have been relatively steady. One item of note is
the Treasury stock in the balance sheet, which has grown to more than a
negative 100% of total assets. But rather than alarm investors, it indicates the
company has been hugely successful in generating cash to buy back shares,
which far exceeds what it has retained on its balance sheet.

A common-size analysis can also give insight into the different strategies that
companies pursue. For instance, one company may be willing to sacrifice
margins for market share, which would tend to make overall sales larger at the
expense of gross, operating or net profit margins. Ideally the company that
pursues lower margins will grow faster. While we looked at IBM on a stand-alone
basis, like the R&D analysis, IBM should also be analyzed by comparing it to key
rivals.

The Bottom Line

As the above scenario highlights, a common-size analysis on its own is unlikely
to provide a comprehensive and clear conclusion on a company. It must be done in
the context of an overall financial statement analysis, as detailed above.

Investors also need to be aware of temporary versus permanent differences. A
short-term drop in profitability could indicate only a short-term blip rather
than a permanent loss in profit margins.

b. Horizontal Analysis (Trend Percentages And Index Analysis)

TREND PERCENTAGES

Horizontal analysis (also known as trend analysis) is a financial statement
analysis technique that shows changes in the amounts of corresponding financial
statement items over a period of time. It is a useful tool for evaluating trends.

The statements for two or more periods are used in horizontal analysis. The
earliest period is usually used as the base period and the items on the
statements for all later periods are compared with items on the statements of the
base period. The changes are generally shown both in dollars and percentage.
Dollar and percentage changes are computed by using the following formulas:

Dollar change = amount in comparison year − amount in base year
Percentage change = (dollar change ÷ amount in base year) × 100

Horizontal analysis may be conducted for the balance sheet, income statement,
schedules of current and fixed assets, and statement of retained earnings.

Example:

An example of the horizontal analysis of a balance sheet, schedule of current
assets, income statement and statement of retained earnings is given below:

Comparative balance sheet with horizontal analysis:



Comparative schedule of current assets:

Comparative income statement with horizontal analysis:



Comparative retained earnings statement with horizontal analysis:

In the above analysis, 2007 is the base year and 2008 is the comparison year. All
items on the balance sheet and income statement for the year 2008 have been
compared with the items of the balance sheet and income statement for the year
2007.

The actual changes in items are compared with the expected changes. For
example, if management expects a 30% increase in sales revenue but actual
increase is only 10%, it needs to be investigated.

Trend analysis calculates the percentage change for one account over a period
of time of two years or more.

Percentage change

To calculate the percentage change between two periods:

1. Calculate the amount of the increase/(decrease) for the period by subtracting
   the earlier year from the later year. If the difference is negative, the
   change is a decrease; if the difference is positive, it is an increase.

2. Divide the change by the earlier year's balance. The result is the percentage
   change.

Calculation notes:

1. 20X0 is the earlier year, so the amount in the 20X0 column is subtracted
   from the amount in the 20X1 column.
2. The percent change is the increase or decrease divided by the earlier
   amount (20X0 in this example) times 100. Written as a formula, the percent
   change is:

   Percent change = [(later amount − earlier amount) ÷ earlier amount] × 100

3. If the earlier year is zero or negative, the percent calculated will not be
   meaningful. N/M is used in the above table for "not meaningful."
4. Most percents are rounded to one decimal place unless more are
meaningful.

5. A small absolute dollar item may have a large percentage change and be
considered misleading.
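The two-step computation above can be expressed as a small Python function (a sketch; the sample figures are the current-asset totals from the balance sheet example earlier in this section):

```python
def percent_change(earlier, later):
    """Dollar and percent change between two periods.

    Returns (dollar_change, percent_change); percent is "N/M"
    (not meaningful) when the earlier amount is zero or negative.
    """
    change = later - earlier
    if earlier <= 0:
        return change, "N/M"
    return change, round(change / earlier * 100, 1)

# Current assets: $530,000 in 2007 vs $550,000 in 2008
percent_change(530_000, 550_000)   # -> (20000, 3.8)
```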

Trend percentages

To calculate the change over a longer period of time (for example, to develop a
sales trend), follow the steps below:

1. Select the base year.

2. For each line item, divide the amount in each nonbase year by the amount in
   the base year and multiply by 100.
3. In the following example, 20W7 is the base year, so its percentages (see
   bottom half of the following table) are all 100.0. The percentages in the
   other years were calculated by dividing each amount in a particular year by
   the corresponding amount in the base year and multiplying by 100.

Calculation notes:

1. The base year trend percentage is always 100.0%. A trend percentage of less
   than 100.0% means the balance has decreased below the base year level in that
   particular year. A trend percentage greater than 100.0% means the balance in
   that year has increased over the base year. A negative trend percentage
   represents a negative number.

2. If the base year is zero or negative, the trend percentage calculated will not
be meaningful.

In this example, the sales have increased 59.3% over the five‐year period while
the cost of goods sold has increased only 55.9% and the operating expenses
have increased only 57.5%. The trends look different if evaluated after four
years. At the end of 20X0, the sales had increased almost 20%, but the cost of
goods sold had increased 31%, and the operating expenses had increased
almost 41%. These 20X0 trend percentages reflect an unfavorable impact on net
income because costs increased at a faster rate than sales. The trend
percentages for net income appear to be higher because the base year amount
is much smaller than the other balances.
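The trend-percentage steps can be sketched as follows (the sales series here is hypothetical, chosen only so the final year matches the 59.3% five-year increase discussed above):

```python
def trend_percentages(amounts):
    """Trend percentages for a series of yearly amounts.

    The first element is the base year (always 100.0%); returns
    "N/M" for every year when the base is zero or negative.
    """
    base = amounts[0]
    if base <= 0:
        return ["N/M"] * len(amounts)
    return [round(a / base * 100, 1) for a in amounts]

# Hypothetical sales (in thousands), base year first
trend_percentages([1_000, 1_050, 1_100, 1_198, 1_593])
# -> [100.0, 105.0, 110.0, 119.8, 159.3]
```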

INDEX ANALYSIS

This analysis considers changes in items of the financial statements from a base
year to the following years to show the direction of change; it is also called
horizontal analysis. In this approach, the figures of various years are placed
side by side in adjacent columns in the form of comparative financial statements.

It is an analysis of percentage financial statements where all balance sheet or
income statement figures for a base year equal 100.0 (percent) and subsequent
financial statement items are expressed as percentages of their values in the
base year.

c. Cash Flow Analysis (Interpretation Of Cash Flows Including Free Cash Flow Concept)

What is 'Free Cash Flow - FCF'

Free cash flow (FCF) is a measure of a company's financial performance,
calculated as operating cash flow minus capital expenditures. FCF represents
the cash that a company is able to generate after spending the money required
to maintain or expand its asset base. FCF is important because it allows a
company to pursue opportunities that enhance shareholder value.

BREAKING DOWN 'Free Cash Flow - FCF'

FCF is an assessment of the amount of cash a company generates after
accounting for all capital expenditures, such as buildings or property, plant and
equipment. The excess cash is used to expand production, develop new products,
make acquisitions, pay dividends and reduce debt. Specifically, FCF is
calculated as:

FCF = EBIT × (1 − tax rate) + depreciation + amortization − change in net
working capital − capital expenditure
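The formula translates directly into code. Here is a sketch with purely hypothetical figures (not data for any real company):

```python
def free_cash_flow(ebit, tax_rate, depreciation, amortization,
                   change_in_nwc, capex):
    """FCF = EBIT(1 - tax rate) + depreciation + amortization
             - change in net working capital - capital expenditure."""
    return (ebit * (1 - tax_rate) + depreciation + amortization
            - change_in_nwc - capex)

# Hypothetical firm (amounts in millions):
# after-tax EBIT 700, non-cash charges 150, investments 380
free_cash_flow(ebit=1_000, tax_rate=0.30, depreciation=100,
               amortization=50, change_in_nwc=80, capex=300)
```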

FCF in Company Analysis

Some believe that Wall Street focuses only on earnings while ignoring the real
cash that a firm generates. Earnings can often be adjusted by various accounting
practices, but it's tougher to fake cash flow. For this reason, some investors
believe that FCF gives a much clearer view of a company's ability to generate
cash and profits.

However, it is important to note that negative free cash flow is not bad in
itself. If free cash flow is negative, it could be a sign that a company is
making large investments. If these investments earn a high return, the strategy
has the potential to pay off in the long run. Some investors also regard FCF as
a better indicator than the P/E ratio.
An Example of FCF

FCF is a good indicator of the performance of a public company. Many investors
base their investment decisions on the free cash generated by a company or its
equity price to FCF ratio. For example, Southwest Airlines, a leading provider of
domestic flights in the United States, is expected to realize large increases in
its FCF, thus making it an attractive investment.

The company has been able to generate increased revenues and profits in 2014
and 2015, reaching a record $2.4 billion in profits for the fiscal year 2015.
Additionally, its operating margin increased in 2015 to 20.1%, and is expected to
produce even higher margins throughout 2016. Further, capital expenditures
reached $2 billion in 2015 and are expected to cap out at $2.2 billion in 2017.
This means that its FCF, which is a function of revenue growth and expenditures,
is expected to double by the end of 2017.

d. Gross Profit Variance Analysis (Price, Cost And Volume Factors)



Gross Profit Variance Analysis

Price Factor: Sales Price Variance
Cost Factor: Cost Price Variance
Volume Factor: Mix Variance and Yield Variance

e. Financial Ratios (Liquidity, Solvency, Activity, Profitability, Growth And Other Ratios; Du Pont Model)

LIQUIDITY

LIQUIDITY RATIOS
WORKING CAPITAL = current assets - current liabilities
CURRENT RATIO = current assets ÷ current liabilities
QUICK (ACID TEST) RATIO = quick assets ÷ current liabilities
QUICK ASSETS = cash + A/R + marketable securities - uncollectible A/R (+ inventory, if goods are exclusively sold on a cash basis)
INVENTORY TO WORKING CAPITAL = inventory ÷ working capital
CASH RATIO = (cash + marketable securities) ÷ current liabilities
CURRENT CASH DEBT COVERAGE RATIO = cash provided by operations ÷ current liabilities
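A few of these liquidity formulas, sketched in Python with made-up balance sheet figures:

```python
def working_capital(current_assets, current_liabilities):
    """Working capital = current assets - current liabilities."""
    return current_assets - current_liabilities

def current_ratio(current_assets, current_liabilities):
    """Current ratio = current assets / current liabilities."""
    return current_assets / current_liabilities

def quick_ratio(cash, receivables, marketable_securities, current_liabilities):
    """Quick (acid test) ratio; quick assets exclude inventory."""
    quick_assets = cash + receivables + marketable_securities
    return quick_assets / current_liabilities

# Hypothetical figures
ca, cl = 550_000, 220_000
working_capital(ca, cl)                              # 330000
round(current_ratio(ca, cl), 2)                      # 2.5
round(quick_ratio(80_000, 150_000, 40_000, cl), 2)   # 1.23
```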

SOLVENCY / DEBT MANAGEMENT / DEBT SERVICE / LEVERAGE / STABILITY

SOLVENCY RATIOS
DEBT TO ASSETS RATIO = total debt ÷ total assets
DEBT TO EQUITY RATIO = total debt ÷ total stockholders' equity
LONG-TERM DEBT TO EQUITY RATIO = long-term debt ÷ total stockholders' equity
TIMES INTEREST EARNED RATIO = profits before interest and taxes ÷ total interest charges
FIXED-CHARGE COVERAGE RATIO = (profits before taxes and interest + fixed charges) ÷ (total interest charges + fixed charges)
CASH FLOW COVERAGE RATIO = (EBIT + fixed charges + depreciation) ÷ {interest charges + fixed charges + [(preferred stock dividends + debt repayments) ÷ (1 - tax rate)]}

ACTIVITY / ASSET UTILIZATION / TURNOVER

ACTIVITY RATIOS
ACCOUNTS RECEIVABLE TURNOVER = net credit sales ÷ average accounts receivable
# OF DAYS IN ACCOUNTS RECEIVABLE = 365 days (in a year) ÷ accounts receivable turnover
AVERAGE COLLECTION PERIOD = average accounts receivable ÷ average daily sales
INVENTORY TURNOVER = cost of sales ÷ average inventory
FINISHED GOODS TURNOVER = cost of sales ÷ average finished goods inventory
WORK IN PROCESS TURNOVER = cost of goods manufactured ÷ average work in process inventory
RAW MATERIALS TURNOVER = raw materials used ÷ average raw materials inventory
# OF DAYS IN INVENTORY = 365 days (in a year) ÷ inventory turnover
FIXED ASSETS TURNOVER (FIXED ASSETS UTILIZATION RATIO) = net sales ÷ fixed assets, net
TOTAL ASSETS TURNOVER = net sales ÷ total assets

PROFITABILITY

PROFITABILITY RATIOS
PROFIT MARGIN ON SALES = net income available to common ÷ net sales
RETURN ON SALES = net income ÷ net sales
RETURN ON TOTAL ASSETS (option 1) = net income available to common ÷ average total assets
RETURN ON TOTAL ASSETS (option 2) = {net income + [interest charges x (1 - tax rate)]} ÷ average total assets
RETURN ON COMMON EQUITY = net income available to common ÷ average common stock equity
BASIC EARNING POWER = EBIT ÷ average total assets
EARNINGS PER SHARE = net income available to common ÷ weighted average # of common stocks outstanding
DIVIDENDS PER SHARE = dividends ÷ outstanding shares

GROWTH / MARKET VALUE

Growth ratios, or growth rates, tell the analyst just how fast a company is
growing.  The most important of these ratios include:

 Sales (%):  normally stated in terms of a percentage growth from the prior
year.  Sales is the term used for operating revenues, so it's important to see
the sales growth rate as high as possible.

 Net Income (%):  growth in net income is even more important than sales
because net income tells the investor how much money is left over after all
of the operating costs are subtracted from sales.

 Dividends (%):  a good indicator of the financial health of a company.  Some
companies do not pay stock dividends; rather, they use these excess profits to
reinvest money back into the company to accelerate growth.  The change in
dividends (%) should never be negative.  That is, once a dividend rate is
established, a company needs to have a very good reason to decrease the payout.

GROWTH / MARKET VALUE RATIOS
PRICE-EARNINGS RATIO = market price per share ÷ earnings per share
MARKET-BOOK RATIO = market price per share ÷ book value per share
DIVIDEND YIELD RATIO = dividends per share ÷ market price per share
BOOK VALUE PER SHARE = shareholders' equity ÷ average shares outstanding
DIVIDEND PAYOUT RATIO = dividends per share ÷ earnings per share

DU PONT MODEL

Definition

The DuPont formula (also known as the DuPont analysis, DuPont model, DuPont
equation or the DuPont method) is a method for assessing a company's return
on equity (ROE) by breaking it into three parts. The name comes from the DuPont
Corporation, which started using this formula in the 1920s.

Calculation (formula)

ROE (DuPont formula) = (Net profit ÷ Revenue) × (Revenue ÷ Total assets) ×
(Total assets ÷ Equity) = Net profit margin × Asset turnover × Financial leverage

The DuPont model tells us that ROE is affected by three things:

 Operating efficiency, which is measured by net profit margin;
 Asset use efficiency, which is measured by total asset turnover;
 Financial leverage, which is measured by the equity multiplier.

If ROE is unsatisfactory, the DuPont analysis helps locate the part of the
business that is underperforming.
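The three-factor decomposition can be checked numerically with a short sketch (hypothetical figures; the three factors always multiply back to plain net profit ÷ equity):

```python
def dupont_roe(net_profit, revenue, total_assets, equity):
    """ROE broken into net profit margin x asset turnover x leverage."""
    margin = net_profit / revenue        # operating efficiency
    turnover = revenue / total_assets    # asset use efficiency
    leverage = total_assets / equity     # equity multiplier
    return margin * turnover * leverage, (margin, turnover, leverage)

roe, factors = dupont_roe(net_profit=120, revenue=1_000,
                          total_assets=800, equity=400)
# factors are (0.12, 1.25, 2.0); roe is 0.30, the same as 120 / 400
```

If ROE falls, comparing the three factors period over period shows whether margin, turnover, or leverage drove the change.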

The same decomposition can be pictured as a tree for return on investment:

Return on Investment (ROI) = Net Profit Margin × Total Assets Turnover

Net Profit Margin = Net Income ÷ Sales
  Net Income = Sales - Total Cost
  Total Cost = Cost of Goods Sold + Selling Expenses + Administrative Expenses

Total Assets Turnover = Sales ÷ Total Assets
  Total Assets = Current Assets + Plant Assets
  Current Assets: Cash, Accounts Receivable, Inventory, Marketable Securities, Others
  Plant Assets: Land, Building, Machinery and Equipment

f. Financial Forecasting Using Additional Funds Needed (AFN)

AFN is "additional funds needed," and refers to the additional resources that will
be needed for a company to expand its operations.

LEARNING OBJECTIVE

 Calculate the additional funds needed equation

KEY POINTS

 AFN is a way of calculating how much new funding will be required, so that
the firm can realistically look at whether or not it will be able to generate
the additional funding and therefore be able to achieve the higher sales level.

 The simplified formula is: AFN = projected increase in assets – spontaneous
increase in liabilities – any increase in retained earnings. If this value is
negative, the action or project being undertaken will generate extra income
for the company, which can be invested elsewhere.
 The mathematical formulas used to determine AFN are based on
showing how liabilities will grow relative to new assets and sales when a
project is undertaken and can be used as tools to determine whether a
project or operational expansion is worthwhile.

TERMS
 liabilities
An amount of money in a company that is owed to someone and has to be paid
in the future, such as tax, debt, interest, and mortgage payments.
 asset
Something or someone of any value; any portion of one's property or effects so
considered.
 sales
Revenues

Additional funds needed (AFN) is the amount of money a company must raise from
external sources to finance the increase in assets required to support an
increased level of sales. Additional funds needed (AFN) is also called external
financing needed.

The additional funds needed method of financial planning assumes that the
company's financial ratios do not change. In response to an increase in sales, a
company must increase its assets, such as property, plant and equipment,
inventories, accounts receivable, etc. Part of this increase is offset by a
spontaneous increase in liabilities such as accounts payable, taxes, etc., and
part is offset by an increase in retained earnings.

Formula and Calculation

Additional funds needed (AFN) is calculated as the excess of the required
increase in assets over the increase in liabilities and the increase in retained
earnings.

Example

TransWorld Inc. runs a shipping business and has forecasted a 10% increase in
sales over 20Y3. Its assets and liabilities at the end of 20Y2 amounted to $25
billion and $17 billion respectively. Sales for the period were $30 billion and it
earned a 4% profit margin. It reinvests 40% of its net income and pays out the
rest to its shareholders. Calculate additional funds needed.

Solution

Additional funds needed = increase in assets − increase in liabilities −
increase in retained earnings

Increase in assets = 20Y2 assets × sales growth rate = $25 billion × 10% = $2.5
billion

Spontaneous increase in liabilities = 20Y2 liabilities × sales growth rate = $17
billion × 10% = $1.7 billion

Increase in retained earnings = 20Y3 sales × profit margin × retention rate =
20Y2 sales × (1 + sales growth rate) × profit margin × retention rate = $30
billion × (1 + 10%) × 4% × 40% = $0.528 billion

Plugging in all the figures:

Additional funds needed = $2.5 billion − $1.7 billion − $0.528 billion = $0.272
billion

TransWorld must raise $272 million to finance the increased level of sales.
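The TransWorld computation above can be reproduced with a short function (amounts in billions of dollars):

```python
def additional_funds_needed(assets, liabilities, sales,
                            growth, profit_margin, retention):
    """AFN = increase in assets - spontaneous increase in liabilities
             - increase in retained earnings."""
    increase_in_assets = assets * growth
    increase_in_liabilities = liabilities * growth
    increase_in_retained = sales * (1 + growth) * profit_margin * retention
    return increase_in_assets - increase_in_liabilities - increase_in_retained

# TransWorld Inc., amounts in billions of dollars
round(additional_funds_needed(assets=25, liabilities=17, sales=30,
                              growth=0.10, profit_margin=0.04,
                              retention=0.40), 3)
# -> 0.272, i.e. $272 million
```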

2. Working Capital Finance



Working capital management refers to a company's managerial accounting strategy
designed to monitor and utilize the two components of working capital, current
assets and current liabilities, to ensure the most financially efficient
operation of the company. The primary purpose of working capital management is to
make sure the company always maintains sufficient cash flow to meet its
short-term operating costs and short-term debt obligations.

BREAKING DOWN 'Working Capital Management'

Working capital management commonly involves monitoring cash flow, assets and
liabilities through ratio analysis of key elements of operating expenses, including the
working capital ratio, collection ratio and the inventory turnover ratio. Efficient working
capital management helps with a company's smooth financial operation, and can also
help to improve the company's earnings and profitability. Management of working
capital includes inventory management and management of accounts receivables
and accounts payables.

Elements of Working Capital Management

The working capital ratio, calculated as current assets divided by current liabilities, is
considered a key indicator of a company's fundamental financial health since it
indicates the company's ability to successfully meet all of its short-term financial
obligations. Although numbers vary by industry, a working capital ratio below 1.0 is
generally indicative of a company having trouble meeting short-term obligations,
usually due to insufficient cash flow. Working capital ratios of 1.2 to 2.0 are
considered desirable, but a ratio higher than 2.0 may indicate a company is not
making the most effective use of its assets to increase revenues.

What is 'Working Capital'

Working capital is a measure of both a company's efficiency and its short-term
financial health. Working capital is calculated as:

Working Capital = Current Assets - Current Liabilities

The working capital ratio (current assets ÷ current liabilities) indicates
whether a company has enough short-term assets to cover its short-term debt.
Anything below 1 indicates negative working capital (W/C), while anything over 2
means that the company is not investing excess assets. Most believe that a ratio
between 1.2 and 2.0 is sufficient. Working capital is also known as "net working
capital".
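The rule-of-thumb bands described in this passage can be sketched as follows (the thresholds come from this text, not from a universal standard):

```python
def working_capital_ratio(current_assets, current_liabilities):
    """Working capital ratio = current assets / current liabilities."""
    return current_assets / current_liabilities

def interpret_wc_ratio(ratio):
    """Rough interpretation bands per the discussion above."""
    if ratio < 1.0:
        return "negative working capital"
    if ratio < 1.2:
        return "thin cushion"
    if ratio <= 2.0:
        return "generally sufficient"
    return "possible under-investment of excess assets"

interpret_wc_ratio(working_capital_ratio(300, 200))
# ratio 1.5 -> "generally sufficient"
```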

BREAKING DOWN 'Working Capital'



If a company's current assets do not exceed its current liabilities, then it may
run into trouble paying back creditors in the short term. The worst-case scenario
is bankruptcy. A declining working capital ratio over a longer time period could
also be a red flag that warrants further analysis. For example, it could be that
the company's sales volumes are decreasing and, as a result, its accounts
receivable number continues to get smaller and smaller. Working capital also
gives investors an idea of the company's underlying operational efficiency.

Money that is tied up in inventory or money that customers still owe to the company
cannot be used to pay off any of the company's obligations. So, if a company is not
operating in the most efficient manner (slow collection), it will show up as an increase
in the working capital. This can be seen by comparing the working capital from one
period to another; slow collection may signal an underlying problem in the company's
operations.

Things to Remember

 If the ratio is less than one, the company has negative working capital.
 A high working capital ratio isn't always a good thing: it could indicate that
the company has too much inventory or is not investing its excess cash.

Gross working capital is the sum of all of a company's current assets (assets that are
convertible to cash within a year or less). Gross working capital includes assets such
as cash, checking and savings account balances, accounts receivable, short-term
investments, inventory and marketable securities. From gross working capital,
subtract the sum of all of a company's current liabilities to get net working capital.

BREAKING DOWN 'Gross Working Capital'

A company needs just the right amount of working capital to function optimally. With
too much working capital, some current assets would be better put to other uses.
With too little working capital, a company may not be able to meet its day-to-day cash
requirements. The correct balance is obtained through working capital management.

a. Concepts and Significance Of Working Capital Management

In an ordinary sense, working capital denotes the amount of funds needed to
meet the day-to-day operations of a concern.
It relates to short-term assets and short-term sources of financing, and hence
deals with both assets and liabilities – in the sense of managing working
capital, it is the excess of current assets over current liabilities. This
section discusses the various aspects of working capital.
Concept of Working Capital:
The funds invested in current assets are termed as working capital. It is the fund
that is needed to run the day-to-day operations. It circulates in the business like
the blood circulates in a living body. Generally, working capital refers to the
current assets of a company that are changed from one form to another in the
ordinary course of business, i.e. from cash to inventory, inventory to work in
progress (WIP), WIP to finished goods, finished goods to receivables and from
receivables to cash.
There are two concepts in respect of working capital:
(i) Gross working capital and

(ii) Net working capital.


Gross Working Capital:
The sum total of all current assets of a business concern is termed as gross
working capital. So,

Gross working capital = Stock + Debtors + Receivables + Cash.


Net Working Capital:
The difference between current assets and current liabilities of a business
concern is termed as the Net working capital.
Hence,

Net Working Capital = Stock + Debtors + Receivables + Cash – Creditors – Payables.
Nature of Working Capital:
The nature of working capital is as discussed below:
i. It is used for purchase of raw materials, payment of wages and expenses.
ii. It changes form constantly to keep the wheels of business moving.
iii. Working capital enhances liquidity, solvency, creditworthiness and reputation
of the enterprise.
iv. It generates the elements of cost namely: Materials, wages and expenses.
v. It enables the enterprise to avail the cash discount facilities offered by its
suppliers.
vi. It helps improve the morale and efficiency of business executives.
vii. It facilitates expansion programmes of the enterprise and helps in maintaining
operational efficiency of fixed assets.
Need for Working Capital:
Working capital plays a vital role in business. This capital remains blocked in raw
materials, work in progress, finished products and with customers.
The needs for working capital are as given below:
i. Adequate working capital is needed to maintain a regular supply of raw
materials, which in turn facilitates smoother running of production process.
ii. Working capital ensures the regular and timely payment of wages and salaries,
thereby improving the morale and efficiency of employees.
iii. Working capital is needed for the efficient use of fixed assets.
iv. In order to enhance goodwill a healthy level of working capital is needed. It is
necessary to build a good reputation and to make payments to creditors in time.
v. Working capital helps avoid the possibility of under-capitalization.
vi. It is needed to stock up on raw materials even during an economic
depression.
vii. Working capital is needed in order to pay a fair rate of dividend and
interest on time, which increases the confidence of investors in the firm.
Importance of Working Capital:
It is said that working capital is the lifeblood of a business. Every business needs
funds in order to run its day-to-day activities.
The importance of working capital can be better understood by the following:
i. It helps measure profitability of an enterprise. In its absence, there would be
neither production nor profit.
ii. Without adequate working capital an entity cannot meet its short-term liabilities
in time.
iii. A firm having a healthy working capital position can get loans easily from the
market due to its high reputation or goodwill.
iv. Sufficient working capital helps maintain an uninterrupted flow of production
by supplying raw materials and payment of wages.
v. Sound working capital helps maintain optimum level of investment in current
assets.
vi. It enhances liquidity, solvency, credit worthiness and reputation of enterprise.
vii. It provides necessary funds to meet unforeseen contingencies and thus helps
the enterprise run successfully during periods of crisis.
Classification of Working Capital:
Working capital may be of different types as follows:
(a) Gross Working Capital:
Gross working capital refers to the amount of funds invested in various
components of current assets. It consists of raw materials, work in progress,
debtors, finished goods, etc.
(b) Net Working Capital:
The excess of current assets over current liabilities is known as Net working
capital. The principal objective here is to learn the composition and magnitude of
current assets required to meet current liabilities.
(c) Positive Working Capital:
This refers to the surplus of current assets over current liabilities.
(d) Negative Working Capital:
Negative working capital refers to the excess of current liabilities over current
assets.
(e) Permanent Working Capital:
The minimum amount of working capital that is required even during the dullest
season of the year is known as Permanent working capital.
(f) Temporary or Variable Working Capital:
It represents the additional current assets required at different times during the
operating year to meet additional inventory, extra cash, etc.
It can be said that Permanent working capital represents minimum amount of the
current assets required throughout the year for normal production whereas
Temporary working capital is the additional capital required at different time of
the year to finance the fluctuations in production due to seasonal change. A firm
having constant annual production will also have constant Permanent working
capital and only Variable working capital changes due to change in production
caused by seasonal changes. (See Figure 7.1.)

Similarly, a growth firm is a firm that has unutilized capacity but whose
production and operations continue to grow. As its volume of production rises
with the passage of time, so does the quantum of its Permanent working capital.
(See Figure 7.2.)

Components of Working Capital:


Working capital is composed of various current assets and current liabilities,
which are as follows:
(A) Current Assets:
These assets are generally realized within a short period of time, i.e. within one
year.
Current assets include:
(a) Inventories or Stocks
(i) Raw materials
(ii) Work in progress
(iii) Consumable Stores
(iv) Finished goods
(b) Sundry Debtors
(c) Bills Receivable
(d) Pre-payments
(e) Short-term Investments
(f) Accrued Income and
(g) Cash and Bank Balances
(B) Current Liabilities:
Current liabilities are those which are generally paid in the ordinary course of
business within a short period of time, i.e. one year.
Current liabilities include:
(a) Sundry Creditors
(b) Bills Payable
(c) Accrued Expenses
(d) Bank Overdrafts
(e) Bank Loans (short-term)
(f) Proposed Dividends
(g) Short-term Loans
(h) Tax Payments Due

b. Working Capital Investment And Financing Policies (Conservative Versus Aggressive)

A small business’s working capital represents its current assets minus current
liabilities. Current assets are cash or items that can convert to cash in less than a
year, such as accounts receivable, negotiable securities and inventory. Current
liabilities include the short-term payables: accounts, payroll, taxes and interest,
as well as any debt coming due within a year. Aggressive and conservative
levels of working capital sit at the opposite ends of a spectrum -- the optimal
amount of working capital lies somewhere in between.

Aggressive Working Capital

An aggressive working capital policy is one in which you try to squeeze by with a
minimal investment in current assets coupled with an extensive use of short-term
credit. Your goal is to put as much money to work as possible to decrease the
time needed to produce products, turn over inventory or deliver services.
Speeding up your business cycle grows your sales and revenues. You keep little
money on hand, cut slow-moving inventory and unnecessary supplies to the
bone and stretch out your bill payments for as long as possible. The one
payment you cannot delay is interest -- your creditors can sue you, force you into
bankruptcy and liquidate your assets. You would also want to avoid missing tax
payments.

Conservative Working Capital

Companies in volatile or seasonal industries such as tourism, farming or
construction might adopt conservative working capital policies to buffer against
risk. If you employ a conservative working capital policy, there’s plenty of cash in
the bank, your warehouses are full of inventory and your payables are all up to
date. Employees need not turn in their old pencils before they are allowed to
have new ones. If you compute the working capital ratio -- current assets divided
by current liabilities -- a conservative policy might yield a ratio above 2.0. That is,
you have more than $2 in current assets for every dollar of short-term liabilities.
Conservatively managed working capital will help lower your risks of short-term
cash shortages but might hurt your long-term profitability, because excess cash
doesn’t earn much of a return.

Risk

Your risk of default and bankruptcy increases as you adopt more aggressive
working capital policies. For example, a sudden emergency can leave you
unable to make a bond interest payment. Tight inventories can lead to shortages
and lost sales. Vendors might balk at extending you further credit if you stretch
out payments beyond 90 days. Investors might be less willing to buy your bonds
and may force you to offer higher interest rates on newly issued long-term debt.
The major risk of a conservative working capital policy is the opportunity costs of
“lazy” assets that you could put to work. A conservative policy lowers your sales
efficiency -- sales revenue divided by working capital -- which can dissuade
potential investors.

Return

An aggressive working capital policy can produce a higher return on assets, as
measured by indicators such as gross income divided by working capital.
However, while your indicators might rise, your absolute amount of gross income
might fall. For example, as you tighten inventory, your sales and accounts
receivable might swoon because you could run short of product. Inventory
shortages might result in lower revenue and collections as competitors with well-
stocked inventories steal your customers. A conservative policy might mean that
some of your working capital is not working. This is like leaving money on the
table -- you might have used the excess assets more productively to increase
your return on assets. The optimal policy is one in which you allocate only the
amount of working capital necessary to simultaneously maximize your revenues
and minimize your risks.

3 Strategies of Working Capital Financing

There are three strategies or approaches to working capital financing – Maturity
Matching (Hedging), Conservative, and Aggressive. The hedging approach is an
ideal method of financing, with moderate risk and profitability.

The other two are extreme strategies. The conservative approach is highly
conservative, with very low risk and therefore low profitability. The aggressive
approach is highly aggressive, with high risk and high profitability.

We will compare these three approaches on several parameters: liquidity,
profitability, risk, asset utilization, and working capital.

c. Cash And Marketable Securities Management (Cash Conversion Cycle, Optimal Cash Balance, Collection And Disbursement Float, Cash Management System)
CASH MANAGEMENT – involves the maintenance of the appropriate level of cash to
meet the firm's cash requirements and to maximize income on idle funds.

MARKETABLE SECURITIES MANAGEMENT – involves the process of planning and
controlling investment in marketable securities to meet the firm's cash
requirements and to maximize income on idle funds.

OBJECTIVE: To minimize the amount of cash on hand while retaining sufficient
liquidity to satisfy business requirements (e.g., take advantage of cash
discounts, maintain credit rating, meet unexpected needs).

REASONS FOR HOLDING CASH – "Why would a firm hold cash when, being idle, it is a
non-earning asset?"

TRANSACTION (liquidity) motive – cash is held to facilitate normal transactions
of the business.

PRECAUTIONARY (contingent) motive – cash is held beyond the normal operating
requirement level to provide a buffer against contingencies, such as a slowdown
in accounts receivable collection and the possibility of strikes.

SPECULATIVE motive – cash is held to avail of business incentives (e.g.,
discounts) and investment opportunities.

CONTRACTUAL motive (compensating balance requirements) – cash is held as
required by provisions of a contract (e.g., a company is sometimes required to
maintain a minimum balance in its bank account as a condition of a loan granted
by the bank).

Optimal Cash Balance

OCB = √[ (2 x Annual Cash Requirement x Cost per Transaction) / Opportunity Cost of Holding Cash ]

Total costs of cash balance = holding costs + transaction costs
Holding costs = average cash balance* x opportunity cost rate
Transaction costs = number of transactions** x cost per transaction
*Average cash balance = OCB / 2
**Number of transactions per year = annual cash requirement / OCB
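
This is the Baumol cash-management formula, and it can be sketched in Python. The inputs below (annual cash requirement of 3,600,000, cost of 20 per transaction, 16% opportunity rate) are assumed purely for illustration:

```python
import math

def optimal_cash_balance(annual_cash_req, cost_per_txn, opportunity_rate):
    """Baumol model: OCB = sqrt(2 * T * F / k)."""
    return math.sqrt(2 * annual_cash_req * cost_per_txn / opportunity_rate)

def total_cost(ocb, annual_cash_req, cost_per_txn, opportunity_rate):
    holding = (ocb / 2) * opportunity_rate                # average balance x rate
    transactions = (annual_cash_req / ocb) * cost_per_txn # number of txns x cost each
    return holding + transactions

ocb = optimal_cash_balance(3_600_000, 20, 0.16)
print(round(ocb))                                   # 30000
print(round(total_cost(ocb, 3_600_000, 20, 0.16)))  # 4800
```

At the optimum, holding costs and transaction costs are equal (2,400 each here), which is why the square-root formula minimizes total cost.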

Cash Conversion Cycle

CCC = Inventory Conversion Period (ICP) + Receivable Collection Period (RCP)
– Payable Deferral Period (PDP)

ICP = Inventory / daily CGS
RCP = Receivables / daily sales
PDP = Payables / daily purchases
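
A minimal sketch of the CCC computation; all balances and daily figures below are assumed for illustration:

```python
def cash_conversion_cycle(inventory, daily_cgs,
                          receivables, daily_sales,
                          payables, daily_purchases):
    """CCC = ICP + RCP - PDP, each period expressed in days."""
    icp = inventory / daily_cgs        # inventory conversion period
    rcp = receivables / daily_sales    # receivable collection period
    pdp = payables / daily_purchases   # payable deferral period
    return icp + rcp - pdp

# ICP = 90 days, RCP = 60 days, PDP = 50 days
print(cash_conversion_cycle(90_000, 1_000, 120_000, 2_000, 45_000, 900))  # 100.0
```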

Collection and Disbursement Float

FLOAT arises at three stages: mail float, processing float, and clearing float.
Collection float (customer payments received but not yet usable) is negative
float; disbursement float (checks issued but not yet cleared) is positive float.

d. Receivables Management (Average Balance Of And Investment In Accounts Receivable, Incremental Analysis And Evaluation Of Discount, Collection And Credit Policies)

AVERAGE BALANCE OF ACCOUNTS RECEIVABLE (ABAR)

ABAR = Average daily sales (ADS) x average collection period (ACP)
ADS = Annual credit sales / 360
ACP = (discount period x % of customers taking the discount) + (credit period x
% of customers not taking the discount)

INVESTMENT IN ACCOUNTS RECEIVABLE (IAR)

IAR = Cost of average accounts receivable = ABAR x cost ratio
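
The ABAR and IAR formulas can be sketched as follows. The credit terms (2/10, n/30), the 50% discount-taking rate, and the 75% cost ratio are assumptions for illustration:

```python
def average_collection_period(discount_period, pct_taking_discount, credit_period):
    # Weighted average of the discount period and the full credit period.
    return (discount_period * pct_taking_discount
            + credit_period * (1 - pct_taking_discount))

def avg_receivable_balance(annual_credit_sales, acp, days=360):
    ads = annual_credit_sales / days   # average daily sales
    return ads * acp                   # ABAR = ADS x ACP

acp = average_collection_period(10, 0.50, 30)  # half the customers pay on day 10
abar = avg_receivable_balance(3_600_000, acp)
iar = abar * 0.75                              # investment in AR at a 75% cost ratio
print(acp, abar, iar)   # 20.0 200000.0 150000.0
```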

INCREMENTAL ANALYSIS AND EVALUATION OF DISCOUNT, COLLECTION, AND CREDIT POLICIES

Accelerating Collection

Cost (benefit) of acceleration = Average daily credit sales x (new credit period
– old credit period)
Incremental (decremental) financing charges = Cost (benefit) of acceleration x
interest rate

Discount Policy

Cost (benefit) of change in discount policy = Average daily credit sales x (new
discount period – old discount period)
Net disadvantage (advantage) = [cost (benefit) of change in discount policy x
rate of return] – discounts taken

Credit Policy - Relaxation

Compare cost vs. benefit:

Benefit = incremental contribution margin on the additional sales
Cost = incremental collection costs + incremental bad debts + opportunity cost
Bad debts = increase in sales x % uncollectible
Opportunity cost = incremental investment in receivables x required rate of return
Incremental investment in receivables = [(average daily credit sales new x
collection period new) – (average daily credit sales old x collection period
old)] x cost ratio
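
A sketch of the cost-benefit comparison for relaxing a credit policy. It uses the common textbook form (opportunity cost = incremental receivable investment x required return), which differs slightly from the reviewer's own wording, and every figure below is an assumption:

```python
def net_advantage_of_relaxation(sales_increase, cm_ratio, bad_debt_rate,
                                incremental_ar_investment, required_return):
    """Net advantage = incremental CM - bad debts - opportunity cost."""
    benefit = sales_increase * cm_ratio                 # incremental contribution margin
    bad_debts = sales_increase * bad_debt_rate          # uncollectible portion of new sales
    opportunity_cost = incremental_ar_investment * required_return
    return benefit - bad_debts - opportunity_cost

# Assumed: +500,000 sales at a 25% CM ratio, 5% uncollectible,
# +100,000 tied up in receivables, 20% required return.
net = net_advantage_of_relaxation(500_000, 0.25, 0.05, 100_000, 0.20)
print(round(net))   # 80000 -> relaxation is worthwhile since the net advantage is positive
```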

e. Inventory Management (Carrying, Ordering And Stock-Out Costs, Inventory Control System Including EOQ Model, Safety Stock, Reorder Point)

INVENTORY MANAGEMENT – refers to the process of formulating and implementing
plans and policies to efficiently meet production and merchandising
requirements while minimizing costs relative to inventories.

OBJECTIVE: To maintain an inventory level that balances sales demand, the cost
of carrying additional inventory, and the efficiency of inventory control.

INVENTORY MANAGEMENT TECHNIQUES

INVENTORY PLANNING – involves determination of the appropriate quantity and
quality, as well as the right time of ordering, in order to minimize costs
while meeting the demand for sales. Examples: Economic Order Quantity; Reorder
Point; Just in Time (JIT) System

INVENTORY CONTROL – involves regulation of inventory within predetermined
levels; adequate stocks should be able to meet business requirements, but the
investment in inventory should be at the minimum.

CARRYING COSTS

Carrying cost, carrying cost of inventory, or holding cost refers to the total
cost of holding inventory. This includes warehousing costs such as rent,
utilities and salaries; financial costs such as opportunity cost; and inventory
costs related to perishability, shrinkage (theft) and insurance. Holding cost
also includes the opportunity cost of reduced responsiveness to customers'
changing requirements, slowed introduction of improved items, and the
inventory's value and direct expenses, since that money could be used for other
purposes.

When there are no transaction costs for shipment, carrying costs are minimized
when no excess inventory is held at all, as in a just-in-time production
system. Excess inventory can be held for one of three reasons. Cycle stock is
held based on the re-order point, and defines the inventory that must be held
for production, sale or consumption during the time between re-order and
delivery. Safety stock is held to account for variability, either upstream in
supplier lead time or downstream in customer demand. Physical stock is held by
consumer retailers to provide consumers with a perception of plenty.
Definitions
Carrying cost consists of four different factors:
1. The expense of keeping the inventory in storage
2. Salaries and wages of workers
3. Long-term maintenance
4. All utilities used in maintaining the storage
Carrying cost is usually expressed as a percentage. It gives an idea of how
long the inventory can be held before the company makes a loss, which also
tells the manager how much to order.
Why do companies hold inventory
Inventory is property of a company that is ready for sale. There are five basic
reasons why a company needs inventory.
1. Safety inventory
This acts as a buffer to ensure that the company has excess product to sell if
consumer demand exceeds expectations.
2. Cyclical and seasonal demand
This kind of inventory is used for predictable events that cause a change in
demand. For example, candy companies can start producing extra long-lasting
sweets well before Halloween, building up seasonal inventory gradually to match
the sharp increase in demand.
3. Cycle inventory
This follows from the idea of the economic order quantity (EOQ). EOQ attempts
to balance inventory holding or carrying costs with the costs incurred from
ordering or setting up machinery; total cost is minimized when ordering cost
and carrying cost are equal. When customers order significant quantities of
product, cycle inventory saves cost and acts as a buffer for the company to
purchase more supplies.
4. In-transit inventory
This kind of inventory saves the company transportation cost and makes the
transition process less time-consuming. For example, if the company requests a
particular raw material from an overseas market, purchasing in bulk saves a
great deal in overseas shipment fees.
5. Dead inventory
Dead inventory, or dead stock, consists of products that are outdated or that
only a few consumers request, so managers pull them from store shelves. To
reduce the cost of holding such products, the company can hold discount events
or reduce prices to attract consumers' attention.
Ways to reduce carrying cost
Most firms see profit maximization as a primary objective. To reach higher
profit, here are some methods of reducing carrying cost:
1. Base stock levels on economic conditions: The number of units stocked should
change with consumer demand, the situation of the industry, and the exchange
rate of the currency. When the economy is in recession or the currency
depreciates, residents' purchasing power decreases.
2. Improve the layout of the warehouse: Instead of renting a new space, the
manager might consider rearranging the layout of the existing warehouse. An
inefficient layout increases the risk of shipping the wrong products to
consumers, which both increases transportation cost and wastes time. To improve
the layout, the company can enlarge the receiving area or apply segmentation.
This reduces cost and increases labor productivity.
3. Build long-term agreements with suppliers: Signing a long-term contract may
increase the supplier's financial security, and the company may receive a lower
price – a win-win situation. The supplier might also be willing to deliver more
frequently, for example weekly instead of monthly, so the company can switch to
a smaller warehouse since it no longer needs to stock as much product at a
time. This also reduces the risk of loss and depreciation of the products.
4. Create an effective database: The database should record retailer, date,
quantity, quality, degree of advertising, and the time taken until sold out, so
future employees can learn from past experience when making decisions. For
example, if a manager wants to hold a big discount event to clear products that
have been in stock for a long time, he can review past data for similar events
and their results, then forecast the budget and make improvements based on
those records.

ORDERING COSTS

Ordering costs are the expenses incurred to create and process an order to a
supplier. These costs are included in the determination of the economic order
quantity for an inventory item.

Examples of ordering costs are:

 Cost to prepare a purchase requisition
 Cost to prepare a purchase order
 Cost of the labor required to inspect goods when they are received
 Cost to put away goods once they have been received
 Cost to process the supplier invoice related to an order
 Cost to prepare and issue a payment to the supplier

There will be an ordering cost of some size, no matter how small an order may
be. The total amount of ordering costs that a business incurs will increase with
the number of orders placed. This aggregate order cost can be mitigated by
placing large blanket orders that cover long periods of time, and then issuing
order releases against the blanket orders.

An entity may be willing to tolerate a high aggregate ordering cost if the result is
a reduction in its total inventory carrying cost. This relationship occurs when a
business orders raw materials and merchandise only as needed, so that more
orders are placed but there is little inventory kept on hand. A firm must monitor
its ordering costs and inventory carrying costs in order to properly balance order
sizes and thereby minimize overall costs.

STOCK-OUT COSTS

Stock-out cost is the cost associated with the lost opportunity caused by the
exhaustion of inventory. Exhaustion of inventory can result from various
factors, most notably defective shelf-replenishment practices. Stock-outs can
prove very costly for companies: the milder consumer response is postponement
of the purchase; the more disastrous ones are frustrated consumers switching
stores or purchasing substitute items (brands). Many retailers hold "safety
stock" to avoid stock-outs, which can occur at any point in the supply chain.

Effective inventory management is the way to avoid stock-outs. Regular audits
of the inventory are carried out to check the frequency of stock-outs of
different items, and advanced modelling tools and frameworks are used to
determine the economic order quantity that minimizes both stock-outs and
inventory carrying cost.
For example, the newsvendor problem, which combines statistics and operations
research, is one of the advanced tools companies use to avoid stock-out costs.

INVENTORY CONTROL SYSTEM

Just in time (JIT) production system
Fixed order quantity system
Periodic review / replacement system
Optional replenishment system
Materials requirement planning (MRP)
Manufacturing resource planning (MRP II)
Enterprise resource planning (ERP)
ABC classification of items

EOQ Model

In inventory management, economic order quantity (EOQ) is the order quantity
that minimizes the total holding costs and ordering costs. It is one of the
oldest classical production scheduling models. The model was developed by Ford
W. Harris in 1913, but R. H. Wilson, a consultant who applied it extensively,
and K. Andler are given credit for their in-depth analysis.
Overview
EOQ applies only when demand for a product is constant over the year and each
new order is delivered in full when inventory reaches zero. There is a fixed
cost for each order placed, regardless of the number of units ordered. There is
also a cost for each unit held in storage, commonly known as holding cost,
sometimes expressed as a percentage of the purchase cost of the item.

We want to determine the optimal number of units to order so that we minimize
the total cost associated with the purchase, delivery and storage of the
product. The required parameters are the total demand for the year, the
purchase cost of each item, the fixed cost to place an order, and the storage
cost per item per year. Note that the number of times an order is placed also
affects the total cost, though this number can be determined from the other
parameters.
Variables
P = purchase unit price, unit production cost
Q = order quantity
Q* = optimal order quantity
D = annual demand quantity
K = fixed cost per order, setup cost (not per unit; typically the cost of
ordering, shipping and handling – not the cost of goods)
h = annual holding cost per unit, also known as carrying cost or storage cost
(capital cost, warehouse space, refrigeration, insurance, etc., usually not
related to the unit production cost)
The Total Cost function and derivation of the EOQ formula
The single-item EOQ formula finds the minimum point of the following cost
function:
Total Cost TC = purchase cost + ordering cost + holding cost
Where:
 Purchase cost: this is the variable cost of goods: purchase unit price x
annual demand quantity. This is P x D
 Ordering cost: this is the cost of placing orders: each order has a fixed
cost K, and we need to order D/Q times per year. This is K x D/Q
 Holding cost: the average quantity in stock (between fully replenished and
empty) is Q/2, so this cost is h x Q/2

TC = PD + KD/Q + hQ/2

To determine the minimum point of the total cost curve, calculate the
derivative of the total cost with respect to Q (assume all other variables are
constant) and set it equal to 0:

dTC/dQ = –KD/Q² + h/2 = 0

Solving for Q gives Q* (the optimal order quantity):

Q* = √(2DK/h)

Q* is independent of P; it is a function of only K, D and h. The same optimum
may also be found algebraically by completing the square, which yields the
same Q*.
Example

 annual demand quantity (D) = 10000 units
 cost per order (K) = 40
 cost per unit (P) = 50
 yearly carrying cost percentage (h/P) = 10%
 yearly carrying cost per unit (h) = 50 x 10% = 5

Economic order quantity Q* = √(2 x 10000 x 40 / 5) = 400 units
Number of orders per year (based on EOQ) = 10000 / 400 = 25
Total cost = PD + KD/Q* + hQ*/2
Total cost = 50 x 10000 + 40 x 10000/400 + 5 x 400/2 = 500000 + 1000 + 1000 = 502000
If we check the total cost for any order quantity other than 400 (= EOQ), we
will see that the cost is higher. For instance, at 500 units per order,
Total cost = 500000 + 40 x 10000/500 + 5 x 500/2 = 500000 + 800 + 1250 = 502050
Similarly, if we choose 300 for the order quantity, then
Total cost = 500000 + 40 x 10000/300 + 5 x 300/2 = 500000 + 1333.33 + 750 = 502083.33
This illustrates that ordering at the economic order quantity is in the best
interest of the firm.
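
The worked example above can be reproduced with a short sketch using Q* = sqrt(2DK/h):

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Economic order quantity: Q* = sqrt(2DK/h)."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

def total_cost(q, demand, unit_price, order_cost, holding_cost):
    """TC = PD + KD/Q + hQ/2."""
    return (demand * unit_price
            + (demand / q) * order_cost
            + (q / 2) * holding_cost)

D, P, K, h = 10_000, 50, 40, 5      # figures from the example above
print(eoq(D, K, h))                 # 400.0
print(total_cost(400, D, P, K, h))  # 502000.0
print(total_cost(500, D, P, K, h))  # 502050.0 -- higher than at the EOQ
```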
Extensions of the EOQ model
Quantity discounts
An important extension to the EOQ model is to accommodate quantity discounts.
There are two main types of quantity discounts: (1) all-units and (2)
incremental. Here is a numerical example:
 Incremental unit discount: units 1–100 cost $30 each; units 101–199 cost $28
each; units 200 and up cost $26 each. So when 150 units are ordered, the total
cost is $30*100 + $28*50.
 All-units discount: an order of 1–1000 units costs $50 each; an order of
1001–5000 units costs $45 each; an order of more than 5000 units costs $40
each. So when 1500 units are ordered, the total cost is $45*1500.
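
The two discount schedules in the numerical example can be sketched as cost functions; the breakpoints are taken from the example itself:

```python
def all_units_cost(q):
    """All-units discount: one price applies to every unit in the order."""
    if q <= 1000:
        price = 50
    elif q <= 5000:
        price = 45
    else:
        price = 40
    return q * price

def incremental_cost(q):
    """Incremental discount: $30 for units 1-100, $28 for 101-199, $26 for 200+."""
    cost = min(q, 100) * 30
    if q > 100:
        cost += (min(q, 199) - 100) * 28
    if q > 199:
        cost += (q - 199) * 26
    return cost

print(all_units_cost(1500))   # 67500  (= 45 * 1500)
print(incremental_cost(150))  # 4400   (= 30*100 + 28*50)
```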
Design of optimal quantity discount schedules[edit]
In the presence of a strategic customer, who responds optimally to the discount
schedule, the design of an optimal quantity discount scheme by the supplier is
complex and must be done carefully. This is particularly so when the demand at
the customer is itself uncertain. An interesting effect called the "reverse bullwhip"
takes place, where an increase in consumer demand uncertainty actually reduces
order quantity uncertainty at the supplier.[6]
Backordering costs and multiple items
Several extensions can be made to the EOQ model, including backordering
costs and multiple items. Additionally, the economic order interval can be
determined from the EOQ, and the economic production quantity model (which
determines the optimal production quantity) can be derived in a similar fashion.
A version of the model, the Baumol-Tobin model, has also been used to
determine the money demand function, where a person's holdings of money
balances can be seen in a way parallel to a firm's holdings of inventory.[7]
Malakooti (2013)[8] has introduced multi-criteria EOQ models, where the
criteria could be minimizing the total cost, order quantity (inventory), and
shortages.
A version taking the time-value of money into account was developed by Trippi
and Lewin.[9]
For improving fuel economy of internal combustion engines
Recently, an interesting similarity between the EOQ of melon picking and fuel
injection in gasoline direct injection engines has been proposed.[10]

Safety Stock

Safety stock is a term used by logisticians to describe a level of extra stock that
is maintained to mitigate the risk of stockouts (shortfalls in raw material or
packaging) due to uncertainties in supply and demand. Adequate safety stock
levels permit business operations to proceed according to their plans.[1] Safety
stock is held when there is uncertainty in demand, supply, or manufacturing
yield; it serves as insurance against stockouts.
Safety stock is an additional quantity of an item held in inventory to reduce the
risk that the item will be out of stock. Safety stock acts as a buffer in case sales
are greater than planned and/or the supplier is unable to deliver the additional
units at the expected time.
With a new product, safety stock can be utilized as a strategic tool until the
company can judge how accurate their forecast is after the first few years,
especially when used with a material requirements planning worksheet. The less
accurate the forecast, the more safety stock is required to ensure a given level of
service. With a material requirements planning (MRP) worksheet, a company can
judge how much they will need to produce to meet their forecasted sales demand
without relying on safety stock. However, a common strategy is to try and reduce
the level of safety stock to help keep inventory costs low once the product
demand becomes more predictable. This can be extremely important for
companies with a smaller financial cushion or those trying to run on lean
manufacturing, which is aimed towards eliminating waste throughout the
production process.
The amount of safety stock an organization chooses to keep on hand can
dramatically affect their business. Too much safety stock can result in high
holding costs of inventory. In addition, products which are stored for too long a
time can spoil, expire, or break during the warehousing process. Too little safety
stock can result in lost sales and, thus, a higher rate of customer turnover. As a
result, finding the right balance between too much and too little safety stock is
essential.

Reorder Point

The reorder point (ROP) is the level of inventory which triggers an action to
replenish that particular inventory stock. It is a minimum amount of an item which
a firm holds in stock such that, when stock falls to this amount, the item must be
reordered. It is normally calculated as the forecast usage during the
replenishment lead time plus safety stock. In the basic EOQ (Economic Order
Quantity) model, it was assumed that there is no time lag between ordering and
procuring materials; therefore, the reorder point for replenishing the stocks
occurs when the inventory level drops to zero, and because delivery by suppliers
is instantaneous, the stock level bounces back immediately.

Reorder point is a technique to determine when to order; it does not address how


much to order when an order is made.
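Assuming constant daily usage, the reorder point computation described above can be sketched as follows (the figures are hypothetical):

```python
def reorder_point(daily_usage, lead_time_days, safety_stock=0):
    """ROP = forecast usage during the replenishment lead time + safety stock."""
    return daily_usage * lead_time_days + safety_stock

# Hypothetical: 50 units used per day, 6-day replenishment lead time,
# 100 units of safety stock
print(reorder_point(50, 6, 100))  # 400 -> reorder when stock falls to 400 units
print(reorder_point(50, 6))       # 300 -> ROP with no safety stock
```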

f. Sources Of Short-Term Funds (Trade Credit, Bank Loans, Commercial
Papers, Receivable Factoring)

SOURCES OF SHORT-TERM FUNDS

 Unsecured credits: Accruals, Trade Credit, Commercial Papers
 Banking: Loan, Line of Credit, Revolving Credit Agreement
 Secured loans (current asset financing):
 o Receivable financing: Pledging, Discounting, Assignment, Factoring
 o Inventory financing: Blanket Lien, Trust Receipts, Warehouse Receipts

TRADE CREDIT

BANK LOANS

COMMERCIAL PAPERS
Effective cost of commercial paper =
(Interest + Issue Costs) / (Face Value − Interest − Issue Costs) × (360 days / Term)
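The commercial paper cost formula can be applied with a short Python sketch (the amounts are hypothetical):

```python
def commercial_paper_cost(face_value, interest, issue_costs, term_days):
    """Effective cost = (interest + issue costs) / net proceeds,
    annualized on a 360-day year."""
    net_proceeds = face_value - interest - issue_costs
    return (interest + issue_costs) / net_proceeds * 360 / term_days

# Hypothetical 90-day paper: 1,000,000 face, 20,000 interest, 5,000 issue costs
rate = commercial_paper_cost(1_000_000, 20_000, 5_000, 90)
print(round(rate, 4))  # 0.1026 -> about 10.26% per year
```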

RECEIVABLE FACTORING

FACTORS TO CONSIDER IN SELECTING SOURCES OF SHORT-TERM FUNDS

 Cost
 Availability
 Influence
 Requirements

g. Estimating Cost Of Short-Term Funds (Annual Cost Of Trade Credit,
Effective And Nominal Rate Of Short-Term Funds)

ANNUAL COST OF TRADE CREDIT

Annual Cost of Trade Credit =
Discount Rate / (100% − Discount Rate) × 360 days / (Credit Period − Discount Period)
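For example, under terms of "2/10, net 30" (a 2% discount if paid within 10 days, the full amount due in 30 days), the annual cost of forgoing the discount works out as follows (a sketch; the function name is illustrative):

```python
def trade_credit_cost(discount_rate, credit_period, discount_period):
    """Annual cost of trade credit:
    discount / (100% - discount) x 360 days / (credit period - discount period)."""
    return discount_rate / (1 - discount_rate) * 360 / (credit_period - discount_period)

rate = trade_credit_cost(0.02, 30, 10)
print(round(rate, 4))  # 0.3673 -> about 36.73% per year
```

The high implied rate is why forgoing cash discounts is usually an expensive source of short-term funds.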

RATE OF SHORT-TERM FUNDS


COST OF BANK LOANS (Effective Annual Rate)

With compensating balance (CB):
 Discounted: EAR = Interest / (Face Value − Interest − CB)
   = Nominal % / (100% − Nominal % − CB %)
 Un-discounted: EAR = Interest / (Face Value − CB)
   = Nominal % / (100% − CB %)

Without compensating balance:
 Discounted: EAR = Interest / (Face Value − Interest); cash proceeds = face
   value net of interest, which is deducted in advance
 Un-discounted: EAR = Interest / Amount Received (Face); cash proceeds =
   face value
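The four cases can be compared with one helper function (a sketch for a one-year loan, stated as fractions of the face value; the parameter names are ours):

```python
def ear_simple(nominal_rate, cb_pct=0.0, discounted=False):
    """Effective annual rate of a one-year bank loan.

    nominal_rate: stated interest as a fraction of face value
    cb_pct:       compensating balance as a fraction of face value
    discounted:   True if interest is deducted in advance
    """
    usable_funds = 1.0 - cb_pct       # fraction of face the borrower can use
    if discounted:
        usable_funds -= nominal_rate  # interest taken out up front
    return nominal_rate / usable_funds

# A 10% nominal loan under each arrangement
print(round(ear_simple(0.10), 4))                                # 0.1
print(round(ear_simple(0.10, discounted=True), 4))               # 0.1111
print(round(ear_simple(0.10, cb_pct=0.15), 4))                   # 0.1176
print(round(ear_simple(0.10, cb_pct=0.15, discounted=True), 4))  # 0.1333
```

The effective rate rises whenever the borrower has use of less than the full face value, whether because interest is discounted in advance or because a compensating balance must be left idle.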

3. Capital Budgeting
a. Capital Investment Decision Factors (Net Investment For Decision
Making, Cost Of Capital, Cash And Accrual Net Returns)

Elements of a capital investment decision include the estimated cost of the
project, phasing of expenditure, duration (life) of the investment, risks involved,
expected returns and their timing, interest rates, taxation rates, capital
allowances, and the competitive, economic, and regulatory environment.

DEFINITION of 'Capital Investment Factors'

Factors affecting the decisions surrounding capital investment projects. Capital


investment factors are elements of a project decision, such as cost of capital or
duration of investment, which must be weighed in order to determine whether an
investment should be made, and if so, in what manner the investment is best
made in order to maximize utility for the investor.

BREAKING DOWN 'Capital Investment Factors'

Capital investment factors can relate to almost any aspect of an investment


decision, such as regulatory environment, risks associated with the investment,
macro-economic outlook, competitive landscape, time to complete a project,
concerns of shareholders, governance, probability of success/failure and
opportunity costs, to name a few. All factors should be examined before coming
to a final decision on capital investment projects.

How and Why Capital Investment Decisions are Made


Capital investment decisions are made for a number of reasons. Capital
equipment suffers wear and tear and must be replaced, or new technology must
be introduced. If the business is expanding, new buildings and equipment will be
needed. Make sure you consider all of the factors in this decision.

 Why Capital Investment is Needed

 Capital investment is required at regular intervals during the life of a business


enterprise. As older machinery and equipment wears out, it must be replaced by
newer models that incorporate the latest technology to enable the business to
keep up with its competitors. The enterprise may be the first in its industry to
introduce certain cutting edge technology and forge ahead of its competitors by
making a capital investment decision at the right time.

Capital investment may be aimed at increasing earnings by producing higher


quality and technically advanced goods. Other types of investment may aim to
reduce the costs of production by manufacturing more quickly and efficiently.
Pressure from competitors means that both types of capital investment will be
needed regularly. For some enterprises, commencing a new project will involve
capital investment that needs to be assessed in terms of the cash outlay needed
and the expected future benefits. The factors affecting capital investment
decisions include the need for new capital investment, the number of possible
alternatives and the computation of expected return on each investment.

 Cash Flow Forecasts

 The future cash flow resulting from the capital investment is one of the major
factors affecting capital investment decisions. For each proposed capital
investment, the enterprise must put together a cash flow forecast that is as
accurate as possible. Often, the capital investment will involve a major cash
outflow at the start, followed at a later date by a series of cash inflows as the
benefits of the capital investment are realized. These benefits may result from
increased earnings into the future, or they may be in the form of cost savings.
The cash flow forecast must be as accurate as possible to enable the results of
the capital investment to be assessed correctly. The same criteria must be
applied to cash flow forecasting for each capital investment under consideration,
to ensure that the projects may be meaningfully compared. This enables
management to proceed with a realistic assessment of the capital investments
available and reach a correct decision on the investment to be made.
 
 Methods Used to Assess Capital Investments

Various methods are employed to assess potential capital investments and to


compare competing investment projects. Some management may use
the payback period as a rule of thumb. This looks at the length of time that will
pass before the earnings from the capital investment equal the initial outlay. This
is a very rough way of assessing a project that does not take into account the
time value of money and does not therefore apply any discount rate to the future
cash flows.
 Another method for assessing projects is to look at the internal rate of return of
the project, which is the discount rate that when applied to the cash flows from
the capital investment arrives at a net present value of zero. This discount rate is
then compared to a benchmark rate such as the company's cost of capital to
arrive at a decision as to whether the capital investment would be worthwhile.
This method can give management some confidence that the capital investment
will benefit the business but is unreliable for comparing capital investments when
the period during which cash flows will continue varies, and where cash flows are
irregular.

A more straightforward method for assessing and comparing capital investments


is to consider the net present value of the project. This is determined by
discounting the cash flows from the investment at a predetermined discount rate
used by the enterprise. This discount rate may be the same as the company's
cost of capital. If the net present value of a capital investment is more than zero,
the investment is worthwhile for the enterprise. Different proposed capital
investments may be compared to one another by comparing the net present
value of each project. If the information is presented to management in this way,
comparison of the investment projects is easier and an informed management
decision may be made.

 Taking the Final Investment Decision

Apart from the numerical analysis of the benefit of capital investments,


management must consider the strategy of the company and the needs of the
shareholders. A proposed capital investment with a high net present value may
relate to a product line that is outside the core business of the enterprise, or to a
type of product that is subject to fluctuating demand owing to changes in public
taste and fashion. Management may therefore prefer an alternative capital
investment even though the cash forecast shows a lower net present value.

The needs of the shareholders may require the pursuit of capital investments that
will pursue growth and increase the value of the enterprise, rather than
necessarily increasing cash flow in the short term. The shareholders are the
owners of the company and management is responsible to them. If management
makes capital investments that are against the wishes of the shareholders they
will be held accountable. The shareholders are one of the most important factors
affecting capital investment decisions. The final decision must therefore always
take shareholder demands into account.

NET INVESTMENT FOR DECISION MAKING

Net investment is the measure of a company's investment in capital assets, such


as the property, plants, software and equipment that it uses for operations.

HOW IT WORKS (EXAMPLE):


The formula for net investment is:
Net Investment = Capital Expenditures – Depreciation (non-cash)


In order to calculate the net investment of a company, you must first know the
amount of capital expenditures and non-cash depreciation they have.

Capital expenditures include the calculated worth of all assets (i.e. property,
software, equipment, etc.) and the amount of additional expenses being invested
into those assets (i.e. maintenance, repair, upkeep, installation, etc.).

Capital assets lose value over their useful life. Asset depreciation can be


calculated using two contrasting methods: the straight-line method or declining
method. The straight-line method assumes an asset depreciates by an equal
amount of its original value for each year it is used. The declining method
assumes the asset depreciates more in the earlier years of its use.

At the end of the asset's useful life, the amount the asset is sold for represents
its salvage value. Non-cash depreciation of an asset is represented as its
salvage value minus any taxes the company paid on the asset throughout its
useful life.

Let's assume that Company XYZ buys a new widget machine for $500,000 and
pays someone $10,000 to install the machine in the factory. The company also
expects to receive $75,000 from the sale of its old widget machine. Company
XYZ is taxed at a rate of 30%.

Using the formula above, Company XYZ's net investment is:

Net Investment = ($500,000 + $10,000) – [$75,000 - (.30)*($75,000)] = $457,500


The concept of net investment is similar to net book value, which is the cost of
the asset minus accumulated depreciation.
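Following the formula literally in Python (the after-tax salvage proceeds are $75,000 − 0.30 × $75,000 = $52,500):

```python
def net_investment(purchase_price, installation, salvage_of_old, tax_rate):
    """Net investment = capital expenditure minus the after-tax proceeds
    from selling the old asset."""
    capital_expenditure = purchase_price + installation
    after_tax_salvage = salvage_of_old - tax_rate * salvage_of_old
    return capital_expenditure - after_tax_salvage

# Company XYZ: $500,000 machine, $10,000 installation,
# $75,000 old-machine sale, 30% tax rate
print(net_investment(500_000, 10_000, 75_000, 0.30))  # 457500.0
```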

WHY IT MATTERS:

Because it is necessary to invest in capital assets that depreciate over time,


companies may use the net investment formula to keep track of the assets that
need to be replaced.

Comparing the net investment of companies to revenue will differ between


businesses and industries depending on how capital intensive a company or
industry is. Capital-intensive companies will typically have higher net investments
than companies using fewer assets.
Comparisons of net investments are generally most meaningful among


companies within the same industry. The definition of a "high" or "low" net
investment should be made within this context.

COST OF CAPITAL

Cost of capital refers to the opportunity cost of making a specific investment. It is
the rate of return that could have been earned by putting the same money into a
different investment with equal risk. Thus, the cost of capital is the rate of return
required to persuade the investor to make a given investment.

HOW IT WORKS (EXAMPLE):

Cost of capital is determined by the market and represents the degree of
perceived risk by investors. When given the choice between two investments of
equal risk, investors will generally choose the one providing the higher return.

Let's assume Company XYZ is considering whether to renovate its warehouse


systems. The renovation will cost $50 million and is expected to save $10 million
per year over the next 5 years. There is some risk that the renovation will not
save Company XYZ a full $10 million per year. Alternatively, Company XYZ
could use the $50 million to buy equally risky 5-year bonds in ABC Co., which
return 12% per year.

Because the renovation is expected to return 20% per year ($10,000,000 /


$50,000,000), the renovation is a good use of capital, because the 20% return
exceeds the 12% required return XYZ could have gotten by taking the same risk
elsewhere.

The return an investor receives on a company security is the cost of that security
to the company that issued it. A company's overall cost of capital is a mixture of
returns needed to compensate all creditors and stockholders. This is often called
the weighted average cost of capital and refers to the weighted average costs of
the company's debt and equity.

WHY IT MATTERS:

Cost of capital is an important component of business valuation work. Because


an investor expects his or her investment to grow by at least the cost of capital,
cost of capital can be used as a discount rate to calculate the fair value of
an investment's cash flows.

Investors frequently borrow money to make investments, and analysts commonly


make the mistake of equating cost of capital with the interest rate on that money.
It is important to remember that cost of capital is not dependent upon how and


where the capital was raised. Put another way, cost of capital is dependent on
the use of funds, not the source of funds.

What is 'Cost Of Capital'

The cost of funds used for financing a business. Cost of capital depends on the


mode of financing used – it refers to the cost of equity if the business is financed
solely through equity, or to the cost of debt if it is financed solely through debt.
Many companies use a combination of debt and equity to finance their
businesses, and for such companies, their overall cost of capital is derived from
a weighted average of all capital sources, widely known as the weighted average
cost of capital (WACC). Since the cost of capital represents a hurdle rate that a
company must overcome before it can generate value, it is extensively used in
the capital budgeting process to determine whether the company should proceed
with a project.

BREAKING DOWN 'Cost Of Capital'

The cost of various capital sources varies from company to company, and
depends on factors such as its operating history, profitability, credit worthiness,
etc. In general, newer enterprises with limited operating histories will have higher
costs of capital than established companies with a solid track record, since
lenders and investors will demand a higher risk premium for the former.

Every company has to chart out its game plan for financing the business at an
early stage. The cost of capital thus becomes a critical factor in deciding which
financing track to follow – debt, equity or a combination of the two. Early-stage
companies seldom have sizable assets to pledge as collateral for debt financing,
so equity financing becomes the default mode of funding for most of them.

The cost of debt is merely the interest rate paid by the company on such debt.
However, since interest expense is tax-deductible, the after-tax cost of debt is
calculated as: Yield to maturity of debt x (1 - T) where T is the
company’s marginal tax rate.

The cost of equity is more complicated, since the rate of return demanded by


equity investors is not as clearly defined as it is by lenders. Theoretically, the
cost of equity is approximated by the Capital Asset Pricing Model (CAPM) =
Risk-free rate + (Company’s Beta x Risk Premium).
The firm’s overall cost of capital is based on the weighted average of these
costs. For example, consider an enterprise with a capital structure consisting of
70% equity and 30% debt; its cost of equity is 10% and after-tax cost of debt is
7%. Therefore, its WACC would be (0.7 x 10%) + (0.3 x 7%) = 9.1%. This is the
cost of capital that would be used to discount future cash flows from potential


projects and other opportunities to estimate their Net Present Value (NPV) and
ability to generate value.
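The 70/30 example above, together with the after-tax cost of debt formula, can be sketched as:

```python
def after_tax_cost_of_debt(yield_to_maturity, tax_rate):
    """After-tax cost of debt = YTM x (1 - marginal tax rate)."""
    return yield_to_maturity * (1 - tax_rate)

def wacc(components):
    """Weighted average cost of capital from (weight, after-tax cost) pairs."""
    return sum(weight * cost for weight, cost in components)

# 70% equity at 10%, 30% debt at 7% after tax (the example above)
print(round(wacc([(0.7, 0.10), (0.3, 0.07)]), 4))      # 0.091 -> 9.1%
print(round(after_tax_cost_of_debt(0.10, 0.30), 4))    # 0.07  -> 10% YTM, 30% tax
```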

Companies strive to attain the optimal financing mix, based on the cost of capital
for various funding sources. Debt financing has the advantage of being more tax-
efficient than equity financing, since interest expenses are tax-deductible
and dividends on common shares have to be paid with after-tax dollars.
However, too much debt can result in dangerously high leverage, resulting in
higher interest rates sought by lenders to offset the higher default risk.

NET RETURNS

Investment performance is the return on an investment portfolio. The investment


portfolio can contain a single asset or multiple assets. The investment
performance is measured over a specific period of time and in a specific
currency. Investors often distinguish different types of return. One is the
distinction between the total return and the price return, where the former takes
into account income (interest and dividends), whereas the latter only takes into
account capital appreciation.

Another distinction is between net and gross return. The 'pure' net return to the
investor is the return net of all fees, expenses, and taxes, whereas the 'pure'
gross return is the return before all fees, expenses, and taxes. Various variations
between these two extremes exist. Which return one looks at depends on what
one is trying to measure. For example, if one wishes to measure the ability of an
investment manager to add value, then the return net of transaction expenses,
but gross of all other fees, expenses, and taxes is an appropriate measure to
look at since fees, expenses, and taxes other than transaction expenses are
often outside the control of the investment manager.

Another important distinction is between the money-weighted return and


the time-weighted return. The former is appropriate if the manager determines
the timing of inflows in or outflows from the portfolio. The latter is appropriate
when the manager is not responsible for the timing of cash inflows into and cash
outflows from the portfolio.

Net income from an investment after deducting all expenses from the gross
income generated by the investment. Depending on the analysis required, the
deductions may or may not include income tax and/or capital gains tax.

Investors use net returns to calculate the return on their investments after all
expenses and profits have been included. For example, stocks may have brokers
fees associated with their purchase and sale as well as extra income such as
dividends. The net return is measured as a percentage of the cost paid to obtain
the asset. To calculate the net return, you need to know how much the asset
cost, how much it was sold for and any other costs or income associated with the
asset.

Higher net returns indicate better-performing investments.

Step 1

Calculate the total cost of your investment by adding what you paid for it to any
fees you paid to acquire it. For example, if you paid $1,500 for a stock and paid a
$10 broker's fee, your total cost would be $1,510.

Step 2

Calculate the total return on your investment by adding the amount the asset was
sold for and any payments, such as dividends, made to you while you owned it
and subtracting the costs associated with the sale. For example, if you sold the
stock for $1,700, received $50 in dividends while you owned it and paid a $10
broker's fee to sell it, you would add $1,700 to $50 and subtract $10 to get
$1,740.

Step 3

Divide the total return by the total cost. In this example, you would divide $1,740
by $1,510 to get about 1.152.

Step 4

Subtract 1 from the result from step 3 to find the net return expressed as a
decimal. In this example, you would subtract 1 from 1.152 to get 0.152.

Step 5

Multiply the result from step 4 by 100 to convert from a decimal to a percentage.
Finishing the example, you would multiply 0.152 by 100 to find your net return to
be about 15.2 percent.
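Steps 1 through 5 can be condensed into one small function (a sketch; the parameter names are ours):

```python
def net_return(purchase_price, buy_fees, sale_price, sell_fees, income=0.0):
    """Net return as a fraction: total return / total cost - 1."""
    total_cost = purchase_price + buy_fees          # Step 1
    total_return = sale_price + income - sell_fees  # Step 2
    return total_return / total_cost - 1            # Steps 3 and 4

# $1,500 stock + $10 buy fee; sold for $1,700 less a $10 fee; $50 of dividends
r = net_return(1_500, 10, 1_700, 10, income=50)
print(round(r * 100, 1))  # 15.2 -> about 15.2 percent (Step 5)
```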

The returns paid to investors minus fees to advisers or managers.

Cash

In investing, the cash-on-cash return is the ratio of annual before-tax cash flow to


the total amount of cash invested, expressed as a percentage.

It is often used to evaluate the cash flow from income-producing assets.


Generally considered a quick napkin test to determine if the asset qualifies for
further review and analysis. Cash on Cash analyses are generally used by
investors looking for properties where cash flow is paramount; however, some
use it to determine whether a property is undervalued, indicating instant equity
in a property.

Example

Suppose an investor purchases a $1,200,000 apartment complex with a


$300,000 down payment. Each month, the cash flow from rentals, less
expenses, is $5,000. Over the course of a year, the before-tax income would be
$5,000 × 12 = $60,000, so the NOI (Net Operating Income)-on-cash return would
be $60,000 / $300,000 = 20%.
However, because the investor used debt to finance a portion of the asset, they
are required to make debt service and principal repayments in this scenario
(i.e., mortgage payments). Because of this, the Cash-on-Cash return is a lower
figure, determined by dividing the NOI, after all mortgage payment expenses are
deducted from it, by the total cash invested. For example: if the investor made
total mortgage payments (principal + interest) of $2,000 a month in this scenario,
the Cash-on-Cash return would be as follows: $2,000 × 12 = $24,000;
$60,000 − $24,000 = $36,000; $36,000 / $300,000 = 12%.

Limitations

 Because the calculation is based solely on before-tax cash flow relative to


the amount of cash invested, it cannot take into account an individual
investor's tax situation, the particulars of which may influence the desirability
of the investment. However the investor can usually deduct enough Capital
Cost Allowance to defer the taxes for a long time.

 The formula does not take into account any appreciation or depreciation.


When some cash is a return of capital (ROC) it will falsely indicate a higher
return, because ROC is not income.

 It does not account for other risks associated with the underlying property.

 It is essentially a simple interest calculation, and ignores the effect of


compounding interest. The implication for investors is that an investment
with a lower nominal rate of compound interest may be superior, in the long
run, to an investment with a higher cash-on-cash return.
It is possible to perform an after-tax Cash on Cash calculation, but accurate


depictions of your adjusted taxable income are needed to correctly address how
much tax payment is being saved through depreciation and other losses.

Accrual

Accrual basis net income is compared against the investment cost to get the net
return.

b. Non-Discounted Capital Budgeting Techniques (Payback Period,
Accounting Rate Of Return On Original And Average Investment,
Bail-Out Payback And Payback Reciprocal)

A non-discount method of capital budgeting does not explicitly consider the time


value of money. In other words, each dollar earned in the future is assumed to
have the same value as each dollar that was invested many years earlier. 

PAYBACK PERIOD

What is the 'Payback Period'

The payback period is the length of time required to recover the cost of an
investment. The payback period of a given investment or project is an important
determinant of whether to undertake the position or project, as longer payback
periods are typically not desirable for investment positions. The payback period
ignores the time value of money, unlike other methods of capital budgeting, such
as net present value, internal rate of return or discounted cash flow.

BREAKING DOWN 'Payback Period'

Much of corporate finance is about capital budgeting. One of the most important
concepts that every corporate financial analyst must learn is how to value
different investments or operational projects. The analyst must figure out a
reliable way to determine the most profitable project or investment to undertake.
One way corporate financial analysts do this is with the payback period.

Capital Budgeting and The Payback Period

Most capital budgeting formulas take the time value of money into consideration.
The time value of money (TVM) is the idea that cash in hand today is worth more
than it is in the future because it can be invested and make money from that
investment. Therefore, if you pay an investor tomorrow, it must include an
opportunity cost. The time value of money is a concept that assigns a value to
this opportunity cost.
The payback period does not concern itself with the time value of money. In fact,
the time value of money is completely disregarded in the payback method, which
is calculated by counting the number of years it takes to recover the cash
invested. If it takes five years for the investment to earn back the costs, the
payback period is five years. Some analysts like the payback method for its
simplicity. Others like to use it as an additional point of reference in a capital
budgeting decision framework.

Payback Period Example

Assume company A invests $1 million in a project that will save the company
$250,000 every year. The payback period is calculated by dividing $1 million by
$250,000, which is four. In other words, it will take four years to pay back the
investment. Another project that costs $200,000 won't save the company money,
but it will make the company an incremental $100,000 every year for the next 20
years, which is $2 million. Clearly, the second project can make the company
twice as much money, but how long will it take to pay the investment back? The
answer is $200,000 divided by $100,000, or 2 years. Not only does the second
project take less time to pay back, but it makes the company more money.
Based solely on the payback method, the second project is better.
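With even annual cash inflows, the payback computation in the example reduces to a single division:

```python
def payback_period(initial_outlay, annual_cash_inflow):
    """Years of even annual inflows needed to recover the initial outlay."""
    return initial_outlay / annual_cash_inflow

print(payback_period(1_000_000, 250_000))  # 4.0 -> first project pays back in 4 years
print(payback_period(200_000, 100_000))    # 2.0 -> second project pays back in 2 years
```

For uneven cash flows, the payback period is instead found by accumulating the yearly inflows until the initial outlay is recovered.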

ACCOUNTING RATE OF RETURN

Accounting rate of return, also known as the Average rate of return, or ARR is


a financial ratio used in capital budgeting.[1] The ratio does not take into account
the concept of time value of money. ARR calculates the return, generated
from net income of the proposed capital investment. The ARR is a percentage
return. Say, if ARR = 7%, then it means that the project is expected to earn
seven cents out of each dollar invested (yearly). If the ARR is equal to or greater
than the required rate of return, the project is acceptable. If it is less than the
desired rate, it should be rejected. When comparing investments, the higher the
ARR, the more attractive the investment. More than half of large firms calculate
ARR when appraising projects.[2]

The key advantage of ARR is that it is easy to compute and understand. The
main disadvantage of ARR is that it disregards the time factor in terms of time
value of money or risks for long-term investments. The ARR is built on an
evaluation of profits, so it can be easily manipulated through changes in
depreciation methods. The ARR can also give misleading information when
evaluating investments of different sizes.[3]

Basic Formulas

ARR (on original investment) = Average Annual Profit / Initial Investment
ARR (on average investment) = Average Annual Profit / Average Investment
Pitfalls

1. This technique is based on profits rather than cash flow. It ignores cash flow
from investment. Therefore, it can be affected by non-cash items such
as bad debts and depreciation when calculating profits. The change of
methods for depreciation can be manipulated and lead to higher profits.

2. This technique does not adjust for the risk to long term forecasts.

3. ARR doesn't take into account the time value of money.

What is the 'Accounting Rate of Return - ARR'

The accounting rate of return (ARR) is the amount of profit, or return, an individual can expect based on an investment made. Accounting rate of
return divides the average profit by the initial investment to get the ratio or return
that can be expected. ARR does not consider the time value of money, which
means that returns taken in during later years may be worth less than those
taken in now, and does not consider cash flows, which can be an integral part of
maintaining a business.

BREAKING DOWN 'Accounting Rate of Return - ARR'

Accounting rate of return is also called the simple rate of return and is a metric
useful in the quick calculation of a company’s profitability. ARR is used mainly as
a general comparison between multiple projects as it is a very basic look at how
a project is doing.

Calculation of Accounting Rate of Return

The accounting rate of return is calculated by dividing the average annual accounting profit by the initial investment of the project. The profit is calculated
using the appropriate accounting framework including generally accepted
accounting principles (GAAP) or international financial reporting standards
(IFRS). The profit calculation includes depreciation and amortization of project
assets. The initial investment is the fixed asset investment plus any changes to
working capital due to the asset. If the project spans multiple years, an average
of total revenue per year or investment per year is used.

Accounting Rate of Return Example

The total profit from a project over the past five years is $50,000. During this
span, a total investment of $250,000 has been made. The average annual profit
is $10,000 ($50,000/5 years) and the average annual investment is $50,000
($250,000/5 years). Therefore, the accounting rate of return is 20%
($10,000/$50,000).
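The example's arithmetic can be checked with a short sketch (the `arr` helper name is ours; the figures come from the example above):

```python
def arr(total_profit, total_investment, years):
    """ARR = average annual profit / average annual investment."""
    average_profit = total_profit / years          # $10,000 in the example
    average_investment = total_investment / years  # $50,000 in the example
    return average_profit / average_investment

# Figures from the example: $50,000 profit on $250,000 invested over five years.
print(f"{arr(50_000, 250_000, 5):.0%}")  # 20%
```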

Accounting Rate of Return Drawbacks

In addition to the lack of consideration given to the time value of money as well
as cash flow timing, accounting rate of return does not provide any insight as to
constraints, bottleneck ramifications or impacts on company throughput.
Accounting rate of return isolates individual projects and may not capture the
systematic impact a project may have on the entire entity – both positively and
negatively. Accounting rate of return is not ideal to use for comparative purposes
because financial measurements may not be consistent between projects and
other non-financial factors need consideration. Finally, accounting rate of return
does not consider the increased risk of long-term projects and the increased
variability associated with long periods of time.

FFM study guide reference E3b) requires candidates to not only be able to
calculate the accounting rate of return, but also to be able to discuss the
usefulness of the accounting rate of return as a method of investment appraisal.
Recent FFM exam sittings have shown that candidates are struggling with the
concept of the accounting rate of return and this article aims to help candidates
with this topic.

Candidates should note that accounting rate of return can be examined not only
within the FFM syllabus, but also within the F9 syllabus.

DEFINITION

The accounting rate of return, also known as the return on investment, gives the
annual accounting profits arising from an investment as a percentage of the
investment made.

As we can see from this, the accounting rate of return, unlike investment
appraisal methods such as net present value, considers profits, not cash flows.
This is a vital point that many candidates forget in the exam.
Calculation

The formula for the accounting rate of return is (average annual accounting
profits/investment) x 100%

Let us look at an example:

A company is considering investing in a project which requires an initial investment in a machine of $40,000. Net cash inflows of $15,000 will be
generated for each of the first two years, $5,000 in each of years three and four
and $35,000 in year five, after which time the machine will be sold for $5,000.

Calculating the numerator

We need the average annual accounting profit. To find this, the profit for the
whole project needs to be calculated, which is then divided by the number of
years for which the project is running (in this case five years).

Considering the profit for the project, let us draw up a simple profit and loss
statement for the whole project:

Net cash inflows ($15,000 + $15,000 + $5,000 + $5,000 + $35,000)    $75,000
Less: depreciation ($40,000 cost - $5,000 scrap value)             ($35,000)
Profit for the whole project                                        $40,000

Next we need to convert this profit for the whole project into an average figure,
so dividing by five years gives us $8,000 ($40,000/5).

Calculating the denominator

Now that we have the numerator, we need to consider the denominator, i.e. the
investment figure.
The investment figure can either be

 the initial investment, or


 the average investment, where the average investment is calculated:
(the initial investment + scrap value)/2

So in this case:

 the initial investment is $40,000


 the average investment is ($40,000 + $5,000)/2 = $22,500

Calculating the accounting rate of return

The accounting rate of return can now be calculated as either:

 ($8,000/$40,000) x 100% = 20% or


 ($8,000/$22,500) x 100% = 36%

This approach should be used for any accounting rate of return calculation, no
matter how easy or difficult:

Calculate the numerator:

1. Calculate the profit for the whole project. Include not only cash revenue and
cash costs, but also other costs such as depreciation, amortisation etc.
2. Calculate the average annual profit, by dividing the profit over the whole
project by the life of the project.

Calculate the denominator

Look in the question to see which definition of investment is to be used. If the question does not give the information, then use the average investment method,
and state this in your answer.

Calculate the accounting rate of return.



Show your answer as a percentage.

Usefulness

Having calculated the percentage answer, how can this be used for project
appraisal?
The accounting rate of return percentage needs to be compared to a target set
by the organisation. If the accounting rate of return is greater than the target,
accept the project; if it is less, reject the project.

This leads to a couple of problems:

 How is the target set? Should it be 25%, or 30%? The target set could be
arbitrary
 Which calculation method should be used? If in the above example, the
target was 25%, the project would be rejected under one calculation method
but accepted under the other, so changing the calculation method can
change the decision as to whether the project should be accepted or
rejected.

Other problems with the accounting rate of return:

 The timing of the cash flows is not considered. In our example, the biggest
cash flow arises in year five, but by then, the organisation may have ceased
trading due to liquidity issues in years three and four when only $5,000 cash
is being received in each year.

 It is a relative measure rather than an absolute measure – it takes no account of the size of the investment.

 The time value of money is ignored.

There are, however, some positive aspects to the accounting rate of return:

 It is simple to calculate from readily available accounting data – no complicated discount factors to calculate!
 The concept of profit is easily understood by managers, and the answer is
easily interpreted – does the project give the necessary accounting return or
not?
 The method looks at the whole life of the project, unlike, for example, the
payback method which may not.

CONCLUSION

Candidates need to be able to calculate the accounting rate of return, and assess its usefulness as an investment appraisal method. It is hoped that this
article will help candidates with both of these.

BAILOUT PAYBACK

In accounting, the bailout payback period shows the length of time required to repay
the total initial investment through investment cash flows combined with salvage
value. The shorter the bailout payback period, the more attractive the investment is.

Bailout Payback Calculation

Example: a company invested $20,000 for a project and expected $5,000 cash
flow annually.

1. Payback period = 20,000 / 5,000 = 4


2. Bailout payback

At the end of year   Cumulative cash flow   Salvage value   Cumulative payback
        1                   5,000               12,000           17,000
        2                  10,000               10,000           20,000
        3                  15,000                8,000           23,000
        4                  20,000                6,000           26,000

Bailout payback = 2; at the end of year 2, the cumulative payback of $20,000 is equal to the initial investment of $20,000.
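The table's logic can be sketched as a loop (the `bailout_payback` helper name is ours; figures are from the example above):

```python
def bailout_payback(investment, annual_cash_flows, salvage_values):
    """First year in which cumulative cash flow plus that year's salvage
    value covers the initial investment (None if it never does)."""
    cumulative = 0
    for year, (cash, salvage) in enumerate(
            zip(annual_cash_flows, salvage_values), start=1):
        cumulative += cash
        if cumulative + salvage >= investment:
            return year
    return None

# Figures from the example: $20,000 invested, $5,000 cash flow per year,
# salvage value falling from $12,000 to $6,000.
print(bailout_payback(20_000, [5_000] * 4, [12_000, 10_000, 8_000, 6_000]))  # 2
```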

Bailout Payback vs Payback Period

The bailout payback method is similar to the payback period method. The difference between the two is that the bailout payback model incorporates the salvage value
of the asset into the calculation and measures the length of the payback period
when the periodic cash inflows are combined with the salvage value.

PAYBACK RECIPROCAL

The payback reciprocal is a crude estimate of the rate of return for a project or
investment. The payback reciprocal is computed by dividing the digit "1" by a
project's payback period expressed in years. For example, if a project's payback
period is 4 years, the payback reciprocal is 1 divided by 4 = 0.25 = 25%.

The payback reciprocal overstates the true rate of return because it assumes
that the annual cash flows will continue forever. It also assumes that the annual
cash flows are identical in amount. Since these two conditions are unrealistic you
should avoid the use of the payback reciprocal. Instead, you should compute
the internal rate of return or the net present value because they will discount
each of the actual cash amounts to reflect the time value of money.

The payback reciprocal is 1 divided by the payback period for an investment. This reciprocal yields an approximation of the rate of return on an investment,
though only under the following circumstances:

 Annual cash flows are uniformly even over the lifetime of the investment
 The cash flows from the project will continue forever
For example, a financial analyst is reviewing a possible investment of $50,000,
which will generate positive cash flows of $10,000 per year. The payback period
is 5 years, since cash flows of $50,000 will accumulate over the next five years.
The payback reciprocal is 1 / 5 years, or 20%. The actual internal rate of
return is 15% if the assumed cash flow period is 10 years,
and reaches 20% only when the assumed cash flows cover a period of 30 years.
Since it is quite unlikely that cash flows will continue uninterrupted for a long
way into the future, it is more realistic to instead evaluate a project based on the
net present value method or the internal rate of return.
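As a quick sketch of the reciprocal itself (helper name ours; figures from the example above):

```python
def payback_reciprocal(investment, annual_cash_flow):
    """1 / payback period -- a rough IRR estimate that assumes identical
    annual cash flows continuing forever."""
    return annual_cash_flow / investment

# Figures from the example: $50,000 investment, $10,000 cash flow per year.
print(f"{payback_reciprocal(50_000, 10_000):.0%}")  # 20%
```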

c. Discounted Capital Budgeting Techniques (Net Present Value, Internal Rate Of Return, Profitability Index, Equivalent Annual Annuity, Fisher
Rate/NPV Point Of Indifference)

Discounted capital budgeting techniques consider the time value of money.

NET PRESENT VALUE

What is 'Net Present Value - NPV'


Net Present Value (NPV) is the difference between the present value of cash
inflows and the present value of cash outflows. NPV is used in capital
budgeting to analyze the profitability of a projected investment or project. 
The following is the formula for calculating NPV: 

NPV = Σ [Ct / (1 + r)^t] - Co

where
Ct = net cash inflow during the period t
Co = total initial investment costs
r = discount rate, and
t = number of time periods 

A positive net present value indicates that the projected earnings generated by a project or investment (in present dollars) exceed the anticipated costs (also in present
present dollars). Generally, an investment with a positive NPV will be a profitable
one and one with a negative NPV will result in a net loss. This concept is the
basis for the Net Present Value Rule, which dictates that the only investments
that should be made are those with positive NPV values.
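The NPV formula and decision rule can be sketched directly in Python. The figures below are hypothetical, chosen only to illustrate the sign convention; the `npv` helper name is ours:

```python
def npv(rate, initial_investment, cash_flows):
    """NPV = sum of Ct / (1 + r)^t over the periods t, minus the initial outlay."""
    discounted = sum(cf / (1 + rate) ** t
                     for t, cf in enumerate(cash_flows, start=1))
    return discounted - initial_investment

# Hypothetical project: $1,000 outlay, $400 per year for three years, 10% rate.
print(round(npv(0.10, 1_000, [400, 400, 400]), 2))  # -5.26 -> reject under the NPV rule
```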

When the investment in question is an acquisition or a merger, one might also use the Discounted Cash Flow (DCF) metric.

Apart from the formula itself, net present value can often be calculated using
tables or spreadsheets such as Microsoft Excel.

BREAKING DOWN 'Net Present Value - NPV'

Determining the value of a project is challenging because there are different ways to measure the value of future cash flows. Because of the time value of
money (TVM), money in the present is worth more than the same amount in the
future. This is both because of earnings that could potentially be made using the
money during the intervening time and because of inflation. In other words, a
dollar earned in the future won’t be worth as much as one earned in the present.
The discount rate element of the NPV formula is a way to account for this.
Companies may often have different ways of identifying the discount rate.
Common methods for determining the discount rate include using the expected
return of other investment choices with a similar level of risk (rates of
return investors will expect), or the costs associated with borrowing
money needed to finance the project.

For example, if a retail clothing business wants to purchase an existing store, it would first estimate the future cash flows that store would generate, and then
discount those cash flows (r) into one lump-sum present value amount of, say
$500,000. If the owner of the store were willing to sell his or her business for less
than $500,000, the purchasing company would likely accept the offer as it
presents a positive NPV investment. If the owner agreed to sell the store for
$300,000, then the investment represents a $200,000 net gain ($500,000 -
$300,000) during the calculated investment period. This $200,000, or the
net gain of an investment, is called the investment’s intrinsic value. Conversely, if
the owner would not sell for less than $500,000, the purchaser would not buy the
store, as the acquisition would present a negative NPV at that time and would,
therefore, reduce the overall value of the larger clothing company.

Let's look at how this example fits into the formula above. The lump-sum present
value of $500,000 represents the part of the formula between the equal sign and
the minus sign. The amount the retail clothing business pays for the store
represents Co. Subtract Co from $500,000 to get the NPV: if Co is less than
$500,000, the resulting NPV is positive; if Co is more than $500,000, the NPV is
negative and the investment is not profitable.

Drawbacks and Alternatives



One primary issue with gauging an investment’s profitability with NPV is that
NPV relies heavily upon multiple assumptions and estimates, so there can be
substantial room for error. Estimated factors include investment costs, discount
rate and projected returns. A project may often require unforeseen expenditures
to get off the ground or may require additional expenditure at the project’s end.

Additionally, discount rates and cash inflow estimates may not inherently account
for risk associated with the project and may assume the maximum possible cash
inflows over an investment period. This may occur as a means of artificially
increasing investor confidence. As such, these factors may need to be adjusted
to account for unexpected costs or losses or for overly optimistic cash inflow
projections.

Net present value (NPV) is a core component of corporate budgeting. It is a comprehensive way to calculate whether a proposed project will be value-added
or not. The calculation of NPV encompasses many financial topics in one
formula: cash flows, the time value of money, the discount rate over the duration
of the project (usually WACC), terminal value and salvage value.

INTERNAL RATE OF RETURN

Internal rate of return (IRR) is the interest rate at which the net present value of
all the cash flows (both positive and negative) from a project or investment equal
zero.

Internal rate of return is used to evaluate the attractiveness of a project or investment. If the IRR of a new project exceeds a company’s required rate of
return, that project is desirable. If IRR falls below the required rate of return, the
project should be rejected.

HOW IT WORKS (EXAMPLE):

The formula for IRR is:

0 = P0 + P1/(1+IRR) + P2/(1+IRR)^2 + P3/(1+IRR)^3 + . . . + Pn/(1+IRR)^n

where P0, P1, . . . Pn equal the cash flows in periods 0, 1, . . . n, respectively;
and
IRR equals the project's internal rate of return.

Let's look at an example to illustrate how to use IRR.

Assume Company XYZ must decide whether to purchase a piece of factory equipment for $300,000. The equipment would only last three years, but it is
expected to generate $150,000 of additional annual profit during those years.
Company XYZ also thinks it can sell the equipment for scrap afterward for about
$10,000. Using IRR, Company XYZ can determine whether the equipment
purchase is a better use of its cash than its other investment options, which should return about 10%.

Here is how the IRR equation looks in this scenario:

0 = -$300,000 + $150,000/(1+.2431) + $150,000/(1+.2431)^2 + $150,000/(1+.2431)^3 + $10,000/(1+.2431)^4

The investment's IRR is 24.31%, which is the rate that makes the net present value of the investment's cash flows equal to zero. From a purely financial
standpoint, Company XYZ should purchase the equipment since this generates a
24.31% return for the Company --much higher than the 10% return available
from other investments.

In general, the IRR value cannot be derived analytically.
Instead, IRR must be found by using mathematical trial-and-error to derive the
appropriate rate. However, most business calculators and spreadsheet programs
will automatically perform this function.
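The trial-and-error search can be sketched as a bisection loop; `irr` here is our own helper, not a library function, and it assumes the conventional pattern of one outflow followed by inflows:

```python
def irr(cash_flows, low=-0.99, high=10.0, tol=1e-9):
    """Trial-and-error (bisection) search for the rate that zeroes the NPV.
    Assumes one initial outflow then inflows, so NPV falls as the rate rises."""
    def npv(rate):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
    while high - low > tol:
        mid = (low + high) / 2
        if npv(mid) > 0:
            low = mid   # NPV still positive: the true rate is higher
        else:
            high = mid
    return (low + high) / 2

# Company XYZ's cash flows: -$300,000, then $150,000 for three years
# and $10,000 scrap in year four.
rate = irr([-300_000, 150_000, 150_000, 150_000, 10_000])
print(f"{rate:.2%}")  # 24.31%
```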

IRR can also be used to calculate expected returns on stocks or investments, including the yield to maturity on bonds. IRR calculates the yield on an
investment and is thus different from the net present value (NPV) of an
investment.

WHY IT MATTERS:

IRR allows managers to rank projects by their overall rates of return rather than their net present values, and the investment with the highest IRR is usually
preferred. Ease of comparison makes IRR attractive, but there are limits to its
usefulness. For example, IRR works only for investments that have an
initial cash outflow (the purchase of the investment) followed by one or more
cash inflows.

Also, IRR does not measure the absolute size of the investment or the return.
This means that IRR can favor investments with high rates of return even if the
dollar amount of the return is very small. For example, a $1 investment returning
$3 will have a higher IRR than a $1 million investment returning $2 million.
Another shortcoming is that IRR can’t be used if the investment generates
interim cash flows. Finally, IRR does not consider cost of capital and can’t
compare projects with different durations.

IRR is best suited for analyzing venture capital and private equity investments, which typically entail multiple cash investments over the life of the business, and
a single cash inflow at the end via IPO or sale.

What is 'Internal Rate Of Return - IRR'

Internal rate of return (IRR) is a metric used in capital budgeting to measure the profitability of potential investments. Internal rate of return is a discount rate that
makes the net present value (NPV) of all cash flows from a
particular project equal to zero. IRR calculations rely on the same formula as
NPV does.

The following is the formula for calculating NPV: 

NPV = Σ [Ct / (1 + r)^t] - Co

where:
Ct = net cash inflow during the period t
Co= total initial investment costs
r = discount rate, and
t = number of time periods 

To calculate IRR using the formula, one would set NPV equal to zero and solve
for the discount rate r, which is here the IRR. Because of the nature of the
formula, however, IRR cannot be calculated analytically, and must instead be
calculated either through trial-and-error or using software programmed to
calculate IRR.

Generally speaking, the higher a project's internal rate of return, the more
desirable it is to undertake the project. IRR is uniform for investments of varying
types and, as such, IRR can be used to rank multiple prospective projects
a firm is considering on a relatively even basis. Assuming the costs of investment
are equal among the various projects, the project with the highest IRR would
probably be considered the best and undertaken first.
IRR is sometimes referred to as the "economic rate of return" (ERR).

BREAKING DOWN 'Internal Rate Of Return - IRR'

You can think of IRR as the rate of growth a project is expected to generate.


While the actual rate of return that a given project ends up generating will often
differ from its estimated IRR rate, a project with a substantially higher IRR value
than other available options would still provide a much better chance of strong
growth. One popular use of IRR is in comparing the profitability of establishing
new operations with that of expanding old ones. For example,
an energy company may use IRR in deciding whether to open a new power plant
or to renovate and expand a previously existing one. While both projects are
likely to add value to the company, it is likely that one will be the more logical
decision as prescribed by IRR.

In theory, any project with an IRR greater than its cost of capital is a profitable
one, and thus it is in a company’s interest to undertake such projects. In planning
investment projects, firms will often establish a required rate of return (RRR) to
determine the minimum acceptable return percentage that the investment in
question must earn in order to be worthwhile. Any project with an IRR that
exceeds the RRR will likely be deemed a profitable one, although companies will
not necessarily pursue a project on this basis alone. Rather, they will likely
pursue projects with the highest difference between IRR and RRR, as chances
are these will be the most profitable.

IRRs can also be compared against prevailing rates of return in the securities market. If a firm can't find any projects with IRRs greater than the
returns that can be generated in the financial markets, it may simply choose to
invest its retained earnings into the market.

Although IRR is an appealing metric to many, it should always be used in conjunction with NPV for a clearer picture of the value represented by a potential
project a firm may undertake.

Issues with 'Internal Rate of Return (IRR)'

While IRR is a very popular metric in estimating a project’s profitability, it can be misleading if used alone. Depending on the initial investment costs, a project
may have a low IRR but a high NPV, meaning that while the pace at which the
company sees returns on that project may be slow, the project may also be
adding a great deal of overall value to the company.

A similar issue arises when using IRR to compare projects of different lengths.
For example, a project of a short duration may have a high IRR, making it appear
to be an excellent investment, but may also have a low NPV. Conversely, a
longer project may have a low IRR, earning returns slowly and steadily, but may
add a large amount of value to the company over time.

Another issue with IRR is not one strictly inherent to the metric itself, but rather to
a common misuse of IRR. People may assume that, when positive cash
flows are generated during the course of a project (not at the end), the money
will be reinvested at the project’s rate of return. This is rarely the case.
Rather, when positive cash flows are reinvested, it will be at a rate that more
closely resembles the cost of capital. Miscalculating using IRR in this way may
lead to the belief that a project is more profitable than it actually is. This, along
with the fact that long projects with fluctuating cash flows may have multiple
distinct IRR values, has prompted the use of another metric called modified
internal rate of return (MIRR). MIRR adjusts the IRR to correct these issues,
incorporating cost of capital as the rate at which cash flows are reinvested, and
existing as a single value. Because of MIRR’s correction of the former issue of
IRR, a project’s MIRR will often be significantly lower than the same project’s
IRR.

PROFITABILITY INDEX

Profitability index (PI), also known as profit investment ratio (PIR) and value investment ratio (VIR), is the ratio of payoff to investment of a proposed project.
It is a useful tool for ranking projects because it allows you to quantify the
amount of value created per unit of investment.
The ratio is calculated as follows:

PI = present value of future cash flows / initial investment

Assuming that the cash flow calculated does not include the investment made in
the project, a profitability index of 1 indicates breakeven. Any value lower than
one would indicate that the project's present value (PV) is less than the initial
investment. As the value of the profitability index increases, so does the financial
attractiveness of the proposed project.

Rules for selection or rejection of a project:

 If PI > 1 then accept the project


 If PI < 1 then reject the project

For example:

 Investment = $40,000
 Life of the Machine = 5 Years
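The PI ratio and the accept/reject rule can be sketched as follows. The cash flow figures and the 10% rate here are our own hypothetical assumptions, not the textbook example's:

```python
def profitability_index(rate, initial_investment, cash_flows):
    """PI = present value of future cash flows / initial investment."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv / initial_investment

# Hypothetical figures: $40,000 investment, $12,000 per year for five
# years, discounted at 10%.
pi = profitability_index(0.10, 40_000, [12_000] * 5)
print(round(pi, 2))  # 1.14 -> PI > 1, accept the project
```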

EQUIVALENT ANNUAL ANNUITY

What is the 'Equivalent Annual Annuity Approach - EAA'

The equivalent annual annuity approach (EAA) is one of two methods used
in capital budgeting to compare mutually exclusive projects with unequal lives.
The equivalent annual annuity (EAA) approach calculates the constant
annual cash flow generated by a project over its lifespan if it was an annuity.
When used to compare projects with unequal lives, the one with the higher EAA
should be selected.

BREAKING DOWN 'Equivalent Annual Annuity Approach - EAA'

The EAA approach uses a three-step process to compare projects. The present value of the constant annual cash flows is exactly equal to the project's net
present value (NPV). The first thing an analyst does is calculate each project's
NPV over its lifetime. After that, he computes each project's EAA so that the
present value of the annuities is exactly equal to the project's NPV. Lastly, the
analyst compares each project's EAA and selects the one with the highest EAA.

For example, assume that a company with a weighted average cost of capital (WACC) of 10% is comparing two projects, A and B. Project A has a NPV
of $3 million and an estimated life of five years, while Project B has a NPV of $2
million and an estimated life of three years. Using a financial calculator, Project A
has an EAA of $791,392.44, and Project B has an EAA of $804,229.61. Under
the EAA approach, Project B would be selected, since it has the higher
equivalent annual annuity value.

Formula for Equivalent Annual Annuity Approach

Often, an analyst uses a financial calculator, using the typical present value (PV)
and future value (FV) functions to find the EAA. An analyst can use the following
formula in a spreadsheet or with a normal non-financial calculator with exactly
the same results.

C = (r x NPV) / (1 - (1 + r)^-n)

Where:

C = equivalent annuity cash flow
NPV = net present value
r = interest rate per period
n = number of periods

For example, consider two projects. One has a seven-year term and an NPV of
$100,000. The other has a nine-year term and an NPV of $120,000. Both
projects are discounted at a 6% rate. The EAA of each project is:

EAA Project one = (0.06 x $100,000) / (1 - (1 + 0.06)^-7) = $17,914

EAA Project two = (0.06 x $120,000) / (1 - (1 + 0.06)^-9) = $17,643

Project one should be chosen.
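The formula translates directly into a sketch (the helper name is ours; the inputs reproduce the two 6% examples above):

```python
def equivalent_annual_annuity(npv, rate, periods):
    """C = (r x NPV) / (1 - (1 + r)^-n)."""
    return (rate * npv) / (1 - (1 + rate) ** -periods)

# The two projects from the example above, both discounted at 6%:
project_one = equivalent_annual_annuity(100_000, 0.06, 7)
project_two = equivalent_annual_annuity(120_000, 0.06, 9)
print(round(project_one), round(project_two))  # approximately 17914 and 17643
```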

FISHER RATE

SAN JOSÉ STATE UNIVERSITY


ECONOMICS DEPARTMENT
Thayer Watkins

Irving Fisher's Theory of Interest Rates

With and Without Adjustment for Tax Rates and Risk Premiums

The Original Fisher Model



Irving Fisher's theory of interest rates relates the nominal interest rate i to the
rate of inflation π and the "real" interest rate r. The real interest rate r is the
interest rate after adjustment for inflation. It is the interest rate that lenders have
to have to be willing to loan out their funds. The relation Fisher postulated
between these three rates is:

(1+i) = (1+r) (1+π) = 1 + r + π + r π

This is equivalent to:

i = r + π(1 + r)

Thus, according to this equation, if π increases by 1 percent the nominal interest rate increases by more than 1 percent.

This means that if r and π are known then i can be determined. On the other
hand, if i and π are known then r can be determined and the relationship is:

1+r = (1+i)/ (1+π)


or
r = (i - π)/ (1+π)

When π is small then r is approximately equal to i - π, but in situations involving a high rate of inflation the more accurate relationship must be taken into account.
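The two directions of the Fisher relation can be sketched numerically. The 3% real rate and 4% inflation below are hypothetical inputs; the helper names are ours:

```python
def nominal_rate(real, inflation):
    """(1 + i) = (1 + r)(1 + pi), so i = r + pi + r*pi."""
    return (1 + real) * (1 + inflation) - 1

def real_rate(nominal, inflation):
    """r = (i - pi) / (1 + pi)."""
    return (nominal - inflation) / (1 + inflation)

# Hypothetical rates: a 3% real rate and 4% expected inflation.
i = nominal_rate(0.03, 0.04)
print(f"{i:.4f}")                   # 0.0712 -- more than 0.03 + 0.04
print(f"{real_rate(i, 0.04):.2f}")  # 0.03, recovering the real rate
```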

Adjustment for Variation in Tax Rates

The next step in the analysis is to take into account the effect of taxes on the real
rate of return. Let iC be the nominal risk-free interest rate in the country with
currency C and rC and πC be the corresponding real interest rate and expected
rate of inflation, respectively. Let tC be the corresponding tax rate on interest
income and r*C be the after-tax real rate of return. The rate of return after-taxes is
iC(1-tC). Then
r*C = [iC(1-tC)- πC] /(1+πC).
If we know r*C, tC and πC and want to determine iC the formula is:
iC = [r*C(1+πC) + πC]/(1-tC)
= r*C/(1-tC) + (1 + r*C)πC/(1-tC).

This means that when the rate of inflation increases, the nominal interest rate
increases by some multiple of the increase in the rate of inflation; i.e.,
∂iC/∂πC = (1+r*C)/(1-tC).
William Crowder and Dennis Hoffman in their article, "The Long-Run
Relationship between Nominal Interest Rates and Inflation: the Fisher Effect
Revisited," Journal of Money, Credit and Banking (Feb. 1996) report that a 1.0
percent increase in the inflation rate yields a 1.34 percent increase in the nominal
interest rate. This is consistent with a marginal tax rate of about 25 percent.

Adjustment for Variation in Risk

The preceding analysis presumes that the level of risk is the same in all
countries. If countries differ in risk, lenders and investors will need a risk
premium, an increment in the interest rate, to compensate them for accepting
higher levels of risk. Let sC be the risk premium required for country C. If the
international capital market is in equilibrium the real, after-tax rates of return in
the different countries must be equal. Then rC-sC=r* for all countries and hence

(iC(1-tC) - πC)/(1+πC) = r* + sC.

Thus,
iC = [(r* + sC)(1+πC) + πC]/(1-tC)

Suppose tC = 0.4 so 1-tC = 0.6 and r* + sC = 0.05. Then

iC = [0.05(1+πC) + πC]/0.6 = 0.0833 + 1.75πC

so that each 1 percent increase in the expected rate of inflation gets translated
into a 1.75 percent increase in the nominal interest rate.

An alternate approach to incorporating country risk premiums into the analysis is to reformulate Fisher's original equation to include a factor of (1+ρ) where ρ is
the risk premium for the country. This means that the nominal interest rate is
given by:

(1+i) = (1+r)(1+ρ) (1+π)

Thus when inflation increases by 1 percent the nominal rate will increase by
(1+r)(1+ρ) percent, which could be significantly greater than 1.0.

To take into account the tax rate on interest, the term on the left should be 1 plus
the after-tax nominal interest rate; i.e.,

(1+i(1-t)) = (1+r)(1+ρ) (1+π)

Thus the before-tax nominal interest rate is given by:


i = [(1+r)(1+ρ) (1+π) - 1]/(1-t)
and hence
∂i/∂π = (1+r*)(1+ρ)/(1-t).
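As a quick check of the algebra, a small Python sketch (the function name and the values of r, ρ and t are illustrative, not from the text) computes the before-tax nominal rate and its sensitivity to inflation:

```python
def nominal_rate(r, rho, pi, t):
    """Before-tax nominal rate from Fisher's equation extended with a
    country risk premium rho and a tax rate t on interest income:
    i = [(1+r)(1+rho)(1+pi) - 1]/(1-t)."""
    return ((1 + r) * (1 + rho) * (1 + pi) - 1) / (1 - t)

# Sensitivity: each 1-point rise in expected inflation raises i by
# (1 + r)(1 + rho)/(1 - t) points.
r, rho, t = 0.03, 0.02, 0.40          # illustrative values only
di_dpi = (1 + r) * (1 + rho) / (1 - t)
print(round(di_dpi, 4))               # ≈ 1.751 with these inputs
print(round(nominal_rate(r, rho, 0.06, t) - nominal_rate(r, rho, 0.05, t), 4))
```

So with these inputs a 1-point rise in expected inflation lifts the nominal rate by about 1.75 points, mirroring the tax-adjusted example above.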

Finding the Expected Rate of Inflation


from the Nominal Interest Rate

In order to use the PPP principle for forecasting future exchange rates, we need the expected rate of inflation. For a country it is determined as follows:

(1+π) = (1+i(1-t))/[(1+r*)(1+ρ)]

For two countries in financial equilibrium the values of r* would be the same.
Thus the factor required for forecasting exchange rates by the PPP principle is
given by:

(1+πF)/(1+π$) = [(1+iF(1-tF))/(1+i$(1-t$))] / [(1+ρF)/(1+ρ$)]

Estimates of country risk premiums


Suppose the nominal risk-free interest rates in the U.S. and France are 8% and
11%, respectively and the tax rates are 0.3 and 0.4, also respectively.
Furthermore, suppose the country risk premiums for the U.S. and France are 0%
and 0.5 of 1%, respectively. Then the after-tax nominal rates are 5.6% and 6.6%.
The ratio of 1 plus the expected rates of inflation are given by:

(1+πF)/(1+π$) = (1.066/1.056)/[1.005/1.0] = 1.00445.


Thus the French franc should depreciate 0.445 of 1% per year with respect to the
dollar.
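The arithmetic above can be verified with a short Python sketch; the variable names and the arbitrary r* (which cancels in the ratio) are illustrative only:

```python
def inflation_factor(i, t, rho, r_star):
    """(1 + expected inflation) implied by nominal rate i, tax rate t,
    country risk premium rho and world real rate r_star."""
    return (1 + i * (1 - t)) / ((1 + r_star) * (1 + rho))

i_us, t_us, rho_us = 0.08, 0.3, 0.000
i_fr, t_fr, rho_fr = 0.11, 0.4, 0.005
r_star = 0.03   # arbitrary: both countries share it, so it cancels below

ratio = inflation_factor(i_fr, t_fr, rho_fr, r_star) / \
        inflation_factor(i_us, t_us, rho_us, r_star)
print(round(ratio, 5))   # 1.00445 → the franc depreciates ≈ 0.445% a year
```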

NPV POINT OF INDIFFERENCE

Finding the crossover rate (the discount rate at which two projects' NPV profiles intersect) is straightforward.

In the same problem, you are given the cash flows of the two projects. Take the difference of each pair of cash flows and input the differences as if they were a new project. Calculate its IRR: that is your crossover rate.

This is how you solve the problem:


-100 - (-100) = 0
36 - 0 = 36
36 - 0 = 36
36 - 0 = 36
36 - 175 = -139

CF0 = 0
CF1 = 36
CF2 = 36
CF3 = 36
CF4 = -139

Solve for the IRR. It returns 13.16%, which is the rate given by the key answer.
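A minimal Python sketch of the procedure, using plain bisection in place of a financial calculator's IRR key:

```python
def npv(rate, flows):
    """NPV of flows, where flows[t] occurs at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0001, hi=1.0, tol=1e-7):
    """Bisection search for the rate where NPV = 0
    (assumes exactly one sign change over [lo, hi])."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, flows) * npv(mid, flows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Incremental cash flows from the example above (project A minus project B)
crossover = irr([0, 36, 36, 36, -139])
print(round(crossover * 100, 2))   # 13.16
```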

The NPV profile is a graph that illustrates a project's NPV against various
discount rates, with the NPV on the y-axis and the cost of capital on the x-axis.
To begin, simply calculate a project's NPV using different cost-of-capital
assumptions. Once these are calculated, plot the values on the graph.

Figure 11.5

Since the IRR is the discount rate where the NPV of a project equals zero, the
point where the NPV crosses the x-axis is also the project’s IRR.

d. Project Screening, Project Ranking and Capital Rationing (Independent and Mutually Exclusive Capital Investment Projects)

Capital budgeting is vital in marketing decisions. Decisions on investment, which take time to mature, have to be based on the returns which that investment will make. Unless the project is for social reasons only, if the investment is unprofitable in the long run, it is unwise to invest in it now.
Often, it would be good to know what the present value of the future investment
is, or how long it will take to mature (give returns). It could be much more
profitable putting the planned investment money in the bank and earning interest,
or investing in an alternative project.
Typical investment decisions include the decision to build another grain silo,
cotton gin or cold store or invest in a new distribution depot. At a lower level,
marketers may wish to evaluate whether to spend more on advertising or
increase the sales force, although it is difficult to measure the sales to advertising
ratio.

Chapter objectives
This chapter is intended to provide:
· An understanding of the importance of capital budgeting in marketing decision
making
· An explanation of the different types of investment project
· An introduction to the economic evaluation of investment proposals
· The importance of the concept and calculation of net present value and internal
rate of return in decision making
· The advantages and disadvantages of the payback method as a technique for
initial screening of two or more competing projects.
Structure of the chapter
Capital budgeting is very obviously a vital activity in business. Vast sums of
money can be easily wasted if the investment turns out to be wrong or
uneconomic. The subject matter is difficult to grasp by nature of the topic
covered and also because of the mathematical content involved. However, it
seeks to build on the concept of the future value of money which may be spent
now. It does this by examining the techniques of net present value, internal rate
of return and annuities. The timing of cash flows are important in new investment
decisions and so the chapter looks at this "payback" concept. One problem
which plagues developing countries is "inflation rates" which can, in some cases,
exceed 100% per annum. The chapter ends by showing how marketers can take
this into account.
Capital budgeting versus current expenditures
A capital investment project can be distinguished from current expenditures by
two features:
a) such projects are relatively large
b) a significant period of time (more than one year) elapses between the
investment outlay and the receipt of the benefits.
As a result, most medium-sized and large organisations have developed special
procedures and methods for dealing with these decisions. A systematic approach
to capital budgeting implies:
a) the formulation of long-term goals
b) the creative search for and identification of new investment opportunities
c) classification of projects and recognition of economically and/or statistically
dependent proposals
d) the estimation and forecasting of current and future cash flows
e) a suitable administrative framework capable of transferring the required
information to the decision level
f) the controlling of expenditures and careful monitoring of crucial aspects of
project execution
g) a set of decision rules which can differentiate acceptable from unacceptable
alternatives is required.
The last point (g) is crucial and this is the subject of later sections of the chapter.
The classification of investment projects

a) By project size
Small projects may be approved by departmental managers. More careful
analysis and Board of Directors' approval is needed for large projects of, say,
half a million dollars or more.
b) By type of benefit to the firm
· an increase in cash flow
· a decrease in risk
· an indirect benefit (showers for workers, etc).
c) By degree of dependence
· mutually exclusive projects (can execute project A or B, but not both)
· complementary projects: taking project A increases the cash flow of project B.
· substitute projects: taking project A decreases the cash flow of project B.
d) By degree of statistical dependence
· Positive dependence
· Negative dependence
· Statistical independence.
e) By type of cash flow
· Conventional cash flow: only one change in the cash flow sign
e.g. -/++++ or +/----, etc
· Non-conventional cash flows: more than one change in the cash flow sign,
e.g. +/-/+++ or -/+/-/++++, etc.
The economic evaluation of investment proposals
The analysis stipulates a decision rule for:
I) accepting or
II) rejecting
investment projects.
The time value of money
Recall that the interaction of lenders with borrowers sets an equilibrium rate of
interest. Borrowing is only worthwhile if the return on the loan exceeds the cost of
the borrowed funds. Lending is only worthwhile if the return is at least equal to
that which can be obtained from alternative opportunities in the same risk class.
The interest rate received by the lender is made up of:
i) The time value of money: the receipt of money is preferred sooner rather than
later. Money can be used to earn more money. The earlier the money is
received, the greater the potential for increasing wealth. Thus, to forego the use
of money, you must get some compensation.
ii) The risk of the capital sum not being repaid. This uncertainty requires a
premium as a hedge against the risk, hence the return must be commensurate
with the risk being undertaken.
iii) Inflation: money may lose its purchasing power over time. The lender must be
compensated for the declining spending/purchasing power of money. If the
lender receives no compensation, he/she will be worse off when the loan is
repaid than at the time of lending the money.
a) Future values/compound interest

Future value (FV) is the value in dollars at some point in the future of one or
more investments.
FV consists of:
i) the original sum of money invested, and
ii) the return in the form of interest.
The general formula for computing Future Value is as follows:
FVn = Vo(1 + r)^n
where
Vo is the initial sum invested
r is the interest rate
n is the number of periods for which the investment is to receive interest.
Thus we can compute the future value of what Vo will accumulate to in n years
when it is compounded annually at the same rate of r by using the above
formula.
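The formula translates directly into a one-line Python helper (the function name is mine, not the text's); the inputs below are those of Exercise 6.1:

```python
def future_value(v0, r, n):
    """FV of v0 invested for n periods at rate r, compounded each period:
    FV = v0 * (1 + r)^n."""
    return v0 * (1 + r) ** n

print(round(future_value(10, 0.10, 1), 2))   # 11.0
print(round(future_value(10, 0.10, 5), 2))   # 16.11
```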
Now attempt exercise 6.1.
Exercise 6.1 Future values/compound interest
i) What is the future value of $10 invested at 10% at the end of 1 year?
ii) What is the future value of $10 invested at 10% at the end of 5 years?
We can derive the Present Value (PV) by using the formula:
FVn = Vo(1 + r)^n
By denoting Vo by PV we obtain:
FVn = PV(1 + r)^n
By dividing both sides of the formula by (1 + r)^n we derive:
PV = FVn/(1 + r)^n
Rationale for the formula:
As you will see from the following exercise, given the alternative of earning 10% on his money, an individual (or firm) should never offer (invest) more than $10.00 to obtain $11.00 with certainty at the end of the year.
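The mirror-image helper discounts a single future amount back to today; the inputs below are those of Exercise 6.2:

```python
def present_value(fv, r, n):
    """PV of a single amount fv received n periods from now, discounted at r:
    PV = fv / (1 + r)^n."""
    return fv / (1 + r) ** n

print(round(present_value(11.00, 0.10, 1), 2))   # 10.0 — never pay more than $10 today
print(round(present_value(16.10, 0.10, 5), 2))   # 10.0
```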
Now attempt exercise 6.2
Exercise 6.2 Present value
i) What is the present value of $11.00 at the end of one year?
ii) What is the PV of $16.10 at the end of 5 years?
b) Net present value (NPV)
The NPV method is used for evaluating the desirability of investments or projects.

NPV = Σ [t = 1 to n] Ct/(1 + r)^t - Io

where:
Ct = the net cash receipt at the end of year t
Io = the initial investment outlay

r = the discount rate/the required minimum rate of return on investment


n = the project/investment's duration in years.
The discount factor for year t at rate r can be calculated using:
1/(1 + r)^t
Examples:

N.B. At this point the tutor should introduce the net present value tables from any
recognised published source. Do that now.
Decision rule:
If NPV is positive (+): accept the project
If NPV is negative(-): reject the project
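The decision rule can be sketched in Python; the 10% discount rate applied below to the Exercise 6.3 data is assumed purely for illustration, since the exercise leaves the rate open:

```python
def npv(rate, initial_outlay, receipts):
    """NPV = sum of discounted net receipts minus the initial outlay.
    receipts[0] is the net cash receipt at the end of year 1."""
    return sum(c / (1 + rate) ** (t + 1)
               for t, c in enumerate(receipts)) - initial_outlay

# Exercise 6.3 data at an assumed 10% required rate of return
result = npv(0.10, 1_000, [800, 900, 600])
print(round(result, 2))   # 921.86 → positive, so accept at a 10% rate
```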
Now attempt exercise 6.3.
Exercise 6.3 Net present value
A firm intends to invest $1,000 in a project that will generate net receipts of $800, $900 and $600 in the first, second and third years respectively. Should the firm go ahead with the project?
Attempt the calculation without reference to net present value tables first.
c) Annuities
N.B. Introduce students to annuity tables from any recognised published source.
A set of cash flows that are equal in each and every period is called an annuity.
Example:
Year Cash Flow ($)
0 -800
1 400
2 400
3 400
PV = $400(0.9091) + $400(0.8264) + $400(0.7513)
= $363.64 + $330.56 + $300.52
= $994.72
NPV = $994.72 - $800.00
= $194.72
Alternatively,
PV of an annuity = $400 x PVFA(3 years, 10%)
= $400 x (0.9091 + 0.8264 + 0.7513)
= $400 x 2.4868
= $994.72
NPV = $994.72 - $800.00
= $194.72
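The same annuity can be valued with the closed-form factor (1 - (1+r)^-n)/r; exact arithmetic gives $994.74, a couple of cents off the four-decimal table figure used above:

```python
def annuity_pv(c, r, n):
    """PV of an n-year ordinary annuity of c per year at rate r,
    using the closed-form factor (1 - (1 + r)**-n) / r."""
    return c * (1 - (1 + r) ** -n) / r

pv = annuity_pv(400, 0.10, 3)
print(round(pv, 2))         # 994.74 (tables rounded to 994.72)
print(round(pv - 800, 2))   # NPV of the example
```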
d) Perpetuities
A perpetuity is an annuity with an infinite life. It is an equal sum of money to be paid in each period forever.

PV = C/r

where:
C is the sum to be received per period
r is the discount rate or interest rate
Example:
You are promised a perpetuity of $700 per year at a rate of interest of 15% per annum. What price (PV) should you be willing to pay for this income?
PV = $700/0.15
= $4,666.67
A perpetuity with growth:
Suppose that the $700 annual income most recently received is expected to
grow by a rate G of 5% per year (compounded) forever. How much would this
income be worth when discounted at 15%?
Solution:
Subtract the growth rate from the discount rate and treat the next period's cash flow as a perpetuity:
PV = C(1 + G)/(r - G)
= $700(1.05)/(0.15 - 0.05)
= $735/0.10
= $7,350
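Both perpetuity formulas translate directly into code; a minimal sketch reproducing the two examples:

```python
def perpetuity_pv(c, r):
    """PV of a level perpetuity of c per period at rate r: C/r."""
    return c / r

def growing_perpetuity_pv(c, r, g):
    """PV of a perpetuity whose most recent payment c grows at g forever
    (requires g < r): C(1+g)/(r-g)."""
    return c * (1 + g) / (r - g)

print(round(perpetuity_pv(700, 0.15), 2))                 # 4666.67
print(round(growing_perpetuity_pv(700, 0.15, 0.05), 2))   # 7350.0
```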
e) The internal rate of return (IRR)
Refer students to the tables in any recognised published source.
· The IRR is the discount rate at which the NPV for a project equals zero. This
rate means that the present value of the cash inflows for the project would equal
the present value of its outflows.
· The IRR is the break-even discount rate.
· The IRR is found by trial and error.

Σ [t = 1 to n] Ct/(1 + r)^t - Io = 0, where r = IRR
IRR of an annuity:

Q(n, r) = Io/C

where:
Q(n, r) is the discount factor (the n-year annuity factor at rate r)
Io is the initial outlay
C is the uniform annual receipt (C1 = C2 = .... = Cn).
Example:
What is the IRR of an equal annual income of $20 per annum which accrues for 7 years and costs $120?

Q(7, r) = $120/$20 = 6
From the tables, the 7-year annuity factor is 6 at approximately r = 4%, so the IRR ≈ 4%.
Economic rationale for IRR:
If IRR exceeds cost of capital, project is worthwhile, i.e. it is profitable to
undertake. Now attempt exercise 6.4
Exercise 6.4 Internal rate of return
Find the IRR of this project for a firm with a 20% cost of capital:
YEAR CASH FLOW
$
0 -10,000
1 8,000
2 6,000
a) Try 20%
b) Try 27%
c) Try 29%
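The trial-and-error search of Exercise 6.4 can be sketched as a short loop over the three suggested rates:

```python
def npv(rate, flows):
    """flows[t] is the net cash flow at the end of year t (flows[0] = outlay)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

flows = [-10_000, 8_000, 6_000]
for guess in (0.20, 0.27, 0.29):
    print(f"{guess:.0%}: NPV = {npv(guess, flows):,.2f}")
# The NPV changes sign between 27% and 29%, so the IRR lies in that
# interval, comfortably above the 20% cost of capital: accept.
```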
Net present value vs internal rate of return
Independent vs dependent projects
NPV and IRR methods are closely related because:
i) both are time-adjusted measures of profitability, and
ii) their mathematical formulas are almost identical.
So, which method leads to an optimal decision: IRR or NPV?
a) NPV vs IRR: Independent projects
Independent project: Selecting one project does not preclude the choosing of the
other.
With conventional cash flows (-|+|+) no conflict in decision arises; in this case
both NPV and IRR lead to the same accept/reject decisions.
Figure 6.1 NPV vs IRR Independent projects

If cash flows are discounted at k1, NPV is positive and IRR > k1: accept the project.
If cash flows are discounted at k2, NPV is negative and IRR < k2: reject the project.
Mathematical proof: for a project to be acceptable, the NPV must be positive, i.e.

Σ [t = 1 to n] Ct/(1 + k)^t - Io > 0

Similarly, for the same project to be acceptable:

Σ [t = 1 to n] Ct/(1 + R)^t - Io = 0

where R is the IRR.

Since the numerators Ct are identical and positive in both instances:
· implicitly/intuitively R must be greater than k (R > k);
· if NPV = 0 then R = k: the company is indifferent to such a project;
· hence, IRR and NPV lead to the same decision in this case.
b) NPV vs IRR: Dependent projects
NPV clashes with IRR where mutually exclusive projects exist.
Example:
Agritex is considering building either a one-storey (Project A) or five-storey
(Project B) block of offices on a prime site. The following information is available:
            Initial Investment Outlay   Net Inflow at the Year End
Project A            -9,500                      11,500
Project B           -15,000                      18,000

Assume k = 10%. Which project should Agritex undertake?

NPVA = $11,500/1.10 - $9,500 = $954.55
NPVB = $18,000/1.10 - $15,000 = $1,363.64
Both projects are of one-year duration:

IRRA: $11,500 = $9,500(1 + RA)
1 + RA = 1.21
therefore IRRA = 21%

IRRB: $18,000 = $15,000(1 + RB)
1 + RB = 1.20
therefore IRRB = 20%
Decision:
Assuming that k = 10%, both projects are acceptable because:
NPVA and NPVB are both positive
IRRA > k AND IRRB > k
Which project is a "better option" for Agritex?
If we use the NPV method:
NPVB ($1,363.64) > NPVA ($954.55): Agritex should choose Project B.
If we use the IRR method:
IRRA (21%) > IRRB (20%): Agritex should choose Project A. See figure 6.2.
Figure 6.2 NPV vs IRR: Dependent projects

Up to a discount rate of ko: project B is superior to project A, therefore project B is preferred to project A.
Beyond the point ko: project A is superior to project B, therefore project A is preferred to project B.
The two methods do not rank the projects the same.
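The Agritex figures can be reproduced in a few lines; the single-period IRRs are computed directly rather than read from tables:

```python
def npv(rate, flows):
    """flows[t] is the net cash flow at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

k = 0.10
a = [-9_500, 11_500]    # Project A: one-storey block
b = [-15_000, 18_000]   # Project B: five-storey block

npv_a, npv_b = npv(k, a), npv(k, b)
irr_a = 11_500 / 9_500 - 1    # one-year project, so IRR is direct
irr_b = 18_000 / 15_000 - 1
print(round(npv_a, 2), round(npv_b, 2))   # 954.55  1363.64
print(round(irr_a, 4), round(irr_b, 4))   # 0.2105  0.2
# NPV ranks B first, IRR ranks A first: the two criteria conflict.
```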
Differences in the scale of investment
NPV and IRR may give conflicting decisions where projects differ in their scale of
investment. Example:
Years        0        1      2      3
Project A   -2,500   1,500  1,500  1,500
Project B  -14,000   7,000  7,000  7,000
Assume k= 10%.
NPVA = $1,500 x PVFA(10%, 3 years) - $2,500
= $1,500 x 2.487 - $2,500
= $3,730.50 - $2,500.00
= $1,230.50
NPVB = $7,000 x PVFA(10%, 3 years) - $14,000
= $7,000 x 2.487 - $14,000
= $17,409.00 - $14,000.00
= $3,409.00
IRRA: Q(3, r) = $2,500/$1,500 = 1.67
Therefore IRRA = 36% (from the tables)

IRRB: Q(3, r) = $14,000/$7,000 = 2.0
Therefore IRRB = 21% (from the tables)
Decision:
Conflicting, as:
· NPV prefers B to A
· IRR prefers A to B
             NPV          IRR
Project A    $1,230.50    36%
Project B    $3,409.00    21%
See figure 6.3.
Figure 6.3 Scale of investments

To show why:
i) the NPV prefers B, the larger project, for a discount rate below 20%
ii) the NPV is superior to the IRR
a) Use the incremental cash flow approach, "B minus A" approach
b) Choosing project B is tantamount to choosing a hypothetical project "B minus
A".
Years          0        1      2      3
Project B    -14,000   7,000  7,000  7,000
Project A     -2,500   1,500  1,500  1,500
"B minus A"  -11,500   5,500  5,500  5,500

IRR of "B minus A": Q(3, r) = $11,500/$5,500 = 2.09
From the tables, IRR ≈ 20%
c) Choosing B is equivalent to: A + (B - A) = B
d) Choosing the bigger project B means choosing the smaller project A plus an
additional outlay of $11,500 of which $5,500 will be realised each year for the
next 3 years.
e) The IRR"B minus A" on the incremental cash flow is 20%.
f) Given k of 10%, this is a profitable opportunity, therefore must be accepted.
g) But, if k were greater than the IRR (20%) on the incremental CF, then reject
project.
h) At the point of intersection,
NPVA = NPVB or NPVA - NPVB = 0, i.e. indifferent to projects A and B.
i) If k > 20% (the IRR of "B - A") the company should accept project A.
· This justifies the use of NPV criterion.
Advantage of NPV:
· It ensures that the firm reaches an optimal scale of investment.
Disadvantage of IRR:
· It expresses the return in a percentage form rather than in terms of absolute
dollar returns, e.g. the IRR will prefer 500% of $1 to 20% return on $100.
However, most companies set their goals in absolute terms and not in % terms,
e.g. target sales figure of $2.5 million.
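The incremental "B minus A" computation above can be sketched with a bisection IRR; note that exact arithmetic puts the rate nearer 20.5% than the table-rounded 20%:

```python
def npv(rate, flows):
    """flows[t] is the net cash flow at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=0.0001, hi=1.0, tol=1e-7):
    """Bisection, assuming exactly one sign change in NPV over [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, flows) * npv(mid, flows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

b_minus_a = [-11_500, 5_500, 5_500, 5_500]
inc_irr = irr(b_minus_a)
print(f"{inc_irr:.1%}")   # ≈ 20.5%, which three-figure tables round to ~20%
# Since k = 10% is below the incremental IRR, the increment is profitable,
# so the larger project B (= A plus "B minus A") should be chosen,
# agreeing with the NPV ranking.
```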
The timing of the cash flow
The IRR may give conflicting decisions where the timing of cash flows varies
between the 2 projects.
Note that the initial outlay Io is the same.

Years          0      1     2
Project A    -100     20    120.00
Project B    -100    100     31.25
"A minus B"     0    -80     88.75

Assume k = 10%

              NPV    IRR
Project A     17.3   20.0%
Project B     16.7   25.0%
"A minus B"    0.6   10.9%
IRR prefers B to A even though both projects have identical initial outlays, while NPV prefers A. The incremental project "A minus B" has a positive NPV at k = 10% (its IRR of 10.9% exceeds k), so the decision is to accept A, that is B + (A - B) = A. See figure 6.4.
Figure 6.4 Timing of the cash flow

The horizon problem


NPV and IRR rankings are contradictory. Project A earns $120 at the end of the
first year while project B earns $174 at the end of the fourth year.
Years       0     1    2    3    4
Project A  -100   120   -    -    -
Project B  -100    -    -    -   174
Assume k = 10%
NPV IRR
Project A 9 20%
Project B 19 15%
Decision:
NPV prefers B to A
IRR prefers A to B.
The profitability index - PI
This is a variant of the NPV method.

PI = PV of future net cash flows / Io

Decision rule:
PI > 1: accept the project
PI < 1: reject the project
If NPV = 0, we have:
NPV = PV - Io = 0
PV = Io
Dividing both sides by Io we get:
PV/Io = 1, i.e. PI = 1
PI of 1.2 means that the project's profitability is 20%. Example:

            PV of CF      Io     PI
Project A      100         50    2.0
Project B    1,500      1,000    1.5
Decision:
Choose option B because it maximises the firm's wealth: its NPV of $500 exceeds project A's NPV of $50, even though A has the higher PI.
Disadvantage of PI:
Like IRR, it is a relative measure and therefore ignores the scale of investment.
The payback period (PP)
The CIMA defines payback as 'the time it takes the cash inflows from a capital
investment project to equal the cash outflows, usually expressed in years'. When
deciding between two or more competing projects, the usual decision is to accept
the one with the shortest payback.
Payback is often used as a "first screening method". By this, we mean that when
a capital investment project is being considered, the first question to ask is: 'How
long will it take to pay back its cost?' The company might have a target payback,
and so it would reject a capital project unless its payback period were less than a
certain number of years.
Example 1:
Years      0            1        2        3        4        5
Project A  (1,000,000)  250,000  250,000  250,000  250,000  250,000
For a project with equal annual receipts:
Payback period = $1,000,000/$250,000 = 4 years
Example 2:
Years 0 1 2 3 4
Project B - 10,000 5,000 2,500 4,000 1,000
Payback period lies between year 2 and year 3.
Sum of money recovered by the end of the second year
= $7,500, i.e. ($5,000 + $2,500)
Sum of money to be recovered during the 3rd year
= $10,000 - $7,500
= $2,500

Payback period = 2 + ($2,500/$4,000)
= 2.625 years
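Both payback examples can be reproduced with a small helper; the even-cash-arrival assumption behind the fractional year is the same one the text uses:

```python
def payback_period(outlay, inflows):
    """Years needed to recover the outlay, with a fractional final year
    (assumes cash arrives evenly within each year)."""
    recovered = 0.0
    for year, cf in enumerate(inflows, start=1):
        if recovered + cf >= outlay:
            return year - 1 + (outlay - recovered) / cf
        recovered += cf
    return None   # never paid back within the horizon

print(payback_period(1_000_000, [250_000] * 5))              # 4.0
print(payback_period(10_000, [5_000, 2_500, 4_000, 1_000]))  # 2.625
```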
Disadvantages of the payback method:
· It ignores the timing of cash flows within the payback period, the cash flows
after the end of payback period and therefore the total project return.
· It ignores the time value of money. This means that it does not take into
account the fact that $1 today is worth more than $1 in one year's time. An
investor who has $1 today can either consume it immediately or alternatively can
invest it at the prevailing interest rate, say 30%, to get a return of $1.30 in a
year's time.
· It is unable to distinguish between projects with the same payback period.
· It may lead to excessive investment in short-term projects.
Advantages of the payback method:
· Payback can be important: long payback means capital tied up and high
investment risk. The method also has the advantage that it involves a quick,
simple calculation and an easily understood concept.
The accounting rate of return - (ARR)
The ARR method (also called the return on capital employed (ROCE) or the
return on investment (ROI) method) of appraising a capital project is to estimate
the accounting rate of return that the project should yield. If it exceeds a target
rate of return, the project will be undertaken.

ARR = (average annual profit / capital invested) x 100%

Note that net annual profit is the profit after deducting depreciation.

Example:
A project has an initial outlay of $1 million and generates net receipts of
$250,000 for 10 years.
Assuming straight-line depreciation of $100,000 per year, average annual profit = $250,000 - $100,000 = $150,000.

ARR on initial investment = $150,000/$1,000,000 = 15%
ARR on average investment = $150,000/$500,000 = 30%
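The ARR example can be sketched as follows (the function and variable names are mine, not the text's):

```python
def arr(avg_annual_profit, capital):
    """Accounting rate of return = average annual profit / capital employed."""
    return avg_annual_profit / capital

receipts, outlay, years = 250_000, 1_000_000, 10
depreciation = outlay / years        # straight line, no scrap value assumed
profit = receipts - depreciation     # 150,000 per year after depreciation
print(arr(profit, outlay))           # 0.15 on initial investment
print(arr(profit, outlay / 2))       # 0.3 on average investment
```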
Disadvantages:
· It does not take account of the timing of the profits from an investment.
· It implicitly assumes stable cash receipts over time.

· It is based on accounting profits and not cash flows. Accounting profits are
subject to a number of different accounting treatments.
· It is a relative measure rather than an absolute measure and hence takes no
account of the size of the investment.
· It takes no account of the length of the project.
· It ignores the time value of money.
The payback and ARR methods in practice
Despite the limitations of the payback method, it is the method most widely used
in practice. There are a number of reasons for this:
· It is a particularly useful approach for ranking projects where a firm faces
liquidity constraints and requires fast repayment of investments.
· It is appropriate in situations where risky investments are made in uncertain
markets that are subject to fast design and product changes or where future cash
flows are particularly difficult to predict.
· The method is often used in conjunction with NPV or IRR method and acts as a
first screening device to identify projects which are worthy of further investigation.
· It is easily understood by all levels of management.
· It provides an important summary method: how quickly will the initial investment
be recouped?
Now attempt exercise 6.5.
Exercise 6.5 Payback and ARR
Delta Corporation is considering two capital expenditure proposals. Both
proposals are for similar products and both are expected to operate for four
years. Only one proposal can be accepted.
The following information is available:
Profit/(loss)

                                     Proposal A     Proposal B
                                          $              $
Initial investment                     46,000         46,000
Year 1                                  6,500          4,500
Year 2                                  3,500          2,500
Year 3                                 13,500          4,500
Year 4                           loss  (1,500)  profit 14,500
Estimated scrap value
at the end of Year 4                    4,000          4,000

Depreciation is charged on the straight line basis.

Problem:
a) Calculate the following for both proposals:

i) the payback period, to one decimal place
ii) the average rate of return on initial investment, to one decimal place.
Allowing for inflation
So far, the effect of inflation has not been considered on the appraisal of capital
investment proposals. Inflation is particularly important in developing countries as
the rate of inflation tends to be rather high. As the inflation rate increases, so will the
minimum return required by an investor. For example, one might be happy with a
return of 10% with zero inflation, but if inflation was 20%, one would expect a
much greater return.
Example:
Keymer Farm is considering investing in a project with the following cash flows:
ACTUAL CASH FLOWS
TIME Z$
0 (100,000)
1 90,000
2 80,000
3 70,000
Keymer Farm requires a minimum return of 40% under the present conditions.
Inflation is currently running at 30% a year, and this is expected to continue
indefinitely. Should Keymer Farm go ahead with the project?
Let us take a look at Keymer Farm's required rate of return. If it invested $10,000
for one year on 1 January, then on 31 December it would require a minimum
return of $4,000. With the initial investment of $10,000, the total value of the
investment by 31 December must increase to $14,000. During the year, the
purchasing value of the dollar would fall due to inflation. We can restate the
amount received on 31 December in terms of the purchasing power of the dollar
at 1 January as follows:
Amount received on 31 December in terms of the value of the dollar at 1 January:
$14,000/1.30 = $10,769
In terms of the value of the dollar at 1 January, Keymer Farm would make a profit
of $769 which represents a rate of return of 7.69% in "today's money" terms. This
is known as the real rate of return. The required rate of 40% is a money rate of
return (sometimes known as a nominal rate of return). The money rate measures
the return in terms of the dollar, which is falling in value. The real rate measures
the return in constant price level terms.
The two rates of return and the inflation rate are linked by the equation:
(1 + money rate) = (1 + real rate) x (1 + inflation rate)
where all the rates are expressed as proportions.
In the example,
(1 + 0.40) = (1 + 0.0769) x (1 + 0.3)
= 1.40
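The money/real/inflation relationship is easy to verify in code:

```python
def real_rate(money_rate, inflation):
    """(1 + money) = (1 + real)(1 + inflation), solved for the real rate."""
    return (1 + money_rate) / (1 + inflation) - 1

# Keymer Farm: 40% money rate, 30% inflation
print(round(real_rate(0.40, 0.30), 4))   # 0.0769
```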

So, which rate is used in discounting? As a rule of thumb:


a) If the cash flows are expressed in terms of actual dollars that will be received
or paid in the future, the money rate for discounting should be used.
b) If the cash flows are expressed in terms of the value of the dollar at time 0 (i.e.
in constant price level terms), the real rate of discounting should be used.
In Keymer Farm's case, the cash flows are expressed in terms of the actual
dollars that will be received or paid at the relevant dates. Therefore, we should
discount them using the money rate of return.
TIME   CASH FLOW   DISCOUNT FACTOR      PV
           $            40%              $
0      (100,000)       1.000         (100,000)
1        90,000        0.714           64,260
2        80,000        0.510           40,800
3        70,000        0.364           25,480
                               NPV =   30,540

The project has a positive net present value of $30,540, so Keymer Farm should go ahead with the project.
The future cash flows can be re-expressed in terms of the value of the dollar at
time 0 as follows, given inflation at 30% a year:
TIME   ACTUAL CASH FLOW   CASH FLOW AT TIME 0 PRICE LEVEL
              $                         $
0         (100,000)                 (100,000)
1           90,000                    69,231
2           80,000                    47,337
3           70,000                    31,862
The cash flows expressed in terms of the value of the dollar at time 0 can now be discounted using the real rate of 7.69%.
TIME   CASH FLOW   DISCOUNT FACTOR      PV
           $           7.69%             $
0      (100,000)       1.000         (100,000)
1        69,231        0.928           64,246
2        47,337        0.862           40,804
3        31,862        0.800           25,490
                               NPV =   30,540
The NPV is the same as before.
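The equivalence of the two discounting routes can be demonstrated directly; exact arithmetic gives an NPV of about $30,612 either way, versus the $30,540 obtained above from three-figure tables:

```python
money_rate, inflation = 0.40, 0.30
real_rate = (1 + money_rate) / (1 + inflation) - 1   # ≈ 7.69%

actual = [-100_000, 90_000, 80_000, 70_000]          # money cash flows

# Route 1: discount money flows at the money rate.
npv_money = sum(cf / (1 + money_rate) ** t for t, cf in enumerate(actual))

# Route 2: deflate to time-0 dollars, then discount at the real rate.
deflated = [cf / (1 + inflation) ** t for t, cf in enumerate(actual)]
npv_real = sum(cf / (1 + real_rate) ** t for t, cf in enumerate(deflated))

print(round(npv_money, 2), round(npv_real, 2))   # both ≈ 30,612
```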
Expectations of inflation and the effects of inflation
When a manager evaluates a project, or when a shareholder evaluates his/her
investments, he/she can only guess what the rate of inflation will be. These
guesses will probably be wrong, at least to some extent, as it is extremely difficult
to forecast the rate of inflation accurately. The only way in which uncertainty
about inflation can be allowed for in project evaluation is by risk and uncertainty
analysis.
Inflation may be general, that is, affecting prices of all kinds, or specific to
particular prices. Generalised inflation has the following effects:
a) Inflation will mean higher costs and higher selling prices. It is difficult to predict
the effect of higher selling prices on demand. A company that raises its prices by
30%, because the general rate of inflation is 30%, might suffer a serious fall in
demand.
b) Inflation, as it affects financing needs, is also going to affect gearing, and so
the cost of capital.

c) Since fixed assets and stocks will increase in money value, the same
quantities of assets must be financed by increasing amounts of capital. If the
future rate of inflation can be predicted with some degree of accuracy,
management can work out how much extra finance the company will need and
take steps to obtain it, e.g. by increasing retention of earnings, or borrowing.
However, if the future rate of inflation cannot be predicted with a certain amount
of accuracy, then management should estimate what it will be and make plans to
obtain the extra finance accordingly. Provisions should also be made to have
access to 'contingency funds' should the rate of inflation exceed expectations,
e.g. a higher bank overdraft facility might be arranged should the need arise.
Many different proposals have been made for accounting for inflation. Two
systems known as "Current purchasing power" (CPP) and "Current cost
accounting" (CCA) have been suggested.
CPP is a system of accounting which makes adjustments to income and capital
values to allow for the general rate of price inflation.
CCA is a system which takes account of specific price inflation (i.e. changes in
the prices of specific assets or groups of assets), but not of general price
inflation. It involves adjusting accounts to reflect the current values of assets
owned and used.
At present, there is very little measure of agreement as to the best approach to
the problem of 'accounting for inflation'. Both these approaches are still being
debated by the accountancy bodies.
Now attempt exercise 6.6.
Exercise 6.6 Inflation
TA Holdings is considering whether to invest in a new product with a product life
of four years. The cost of the fixed asset investment would be $3,000,000 in
total, with $1,500,000 payable at once and the rest after one year. A further
investment of $600,000 in working capital would be required.
The management of TA Holdings expect all their investments to justify
themselves financially within four years, after which the fixed asset is expected to
be sold for $600,000.
The new venture will incur fixed costs of $1,040,000 in the first year, including
depreciation of $400,000. These costs, excluding depreciation, are expected to
rise by 10% each year because of inflation. The unit selling price and unit
variable cost are $24 and $12 respectively in the first year and expected yearly
increases because of inflation are 8% and 14% respectively. Annual sales are
estimated to be 175,000 units.
TA Holdings' money cost of capital is 28%.
Is the product worth investing in?

INDEPENDENT PROJECTS

A project whose acceptance or rejection is independent of the acceptance or rejection of other projects.

Project Screening

Project Ranking

Capital Rationing

MUTUALLY EXCLUSIVE PROJECTS

Project Screening

Project Ranking

Capital Rationing

Capital Rationing is the process of selecting that mix of acceptable projects that
provides the highest overall net present value (NPV). The profitability index is
widely used in ranking projects competing for limited funds.

CAPITAL BUDGETING        INVESTMENT PROJECT
PROCEDURES               INDEPENDENT                      MUTUALLY EXCLUSIVE
Project Screening        Payback Period, ARR, NPV, IRR    ?
Project Ranking          Profitability Index, NPV         Profitability Index, NPV
Capital Rationing        ?                                ?
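The profitability-index ranking used under capital rationing can be sketched as a greedy selection; the project figures below are hypothetical.

```python
# Hypothetical projects: name -> (initial outlay, PV of future inflows)
projects = {"A": (500, 650), "B": (300, 420), "C": (400, 480), "D": (200, 300)}
budget = 800

# Profitability index = PV of inflows / initial outlay
ranked = sorted(projects, key=lambda p: projects[p][1] / projects[p][0],
                reverse=True)

chosen, spent = [], 0
for name in ranked:
    outlay = projects[name][0]
    if spent + outlay <= budget:   # take the next-best project that still fits
        chosen.append(name)
        spent += outlay

print(chosen, spent)
```

Note that ranking by profitability index is a heuristic: with indivisible ("lumpy") projects it can leave budget unused, so checking feasible combinations for the highest total NPV may do better.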

e. Sensitivity Analysis (Effects Of Changes In Project Cash Flow, Tax Rates
And Other Assumptions)

Used to determine the effect of changes in certain variables on NPV.

Sensitivity analysis can be incorporated into DCF analysis by examining how the
DCF of each project changes with changes in the inputs used. These could
include changes in revenue assumptions, cost assumptions, tax rate
assumptions, and discount rates.
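For example, re-computing a project's NPV over a range of discount rates shows how sensitive the accept/reject decision is to that one input; the cash-flow figures below are hypothetical.

```python
def npv(rate, cash_flows):
    # cash_flows[0] is the time-0 outlay (negative); later entries are inflows
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-1_000, 400, 400, 400, 400]   # hypothetical project, in $000s
for r in (0.08, 0.10, 0.12):
    print(f"at {r:.0%}: NPV = {npv(r, flows):,.1f}")
```

The same loop can be repeated over revenue, cost or tax-rate assumptions to see which input moves NPV the most.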

Sensitivity analysis enables management to see those assumptions for which
input variations have sizable impact on NPV. Extra resources could be devoted
to getting more informed estimates of those inputs with the greatest impact on
NPV.

Sensitivity analysis also enables management to have contingency plans in
place if assumptions are not met. For example, if a 20% reduction in selling price
is viewed as occurring with a reasonable probability, management may wish to
line up bank loan facilities.

What is a 'Sensitivity Analysis'

A sensitivity analysis is a technique used to determine how different values of an
independent variable impact a particular dependent variable under a given set of
assumptions. This technique is used within specific boundaries that depend on
one or more input variables, such as the effect that changes in interest rates
have on bond prices.

BREAKING DOWN 'Sensitivity Analysis'

Sensitivity analysis, also referred to as what-if or simulation analysis, is a way to
predict the outcome of a decision given a certain range of variables. By creating
a given set of variables, the analyst can determine how changes in one variable
impact the outcome.

Sensitivity Analysis Example

Assume Sue, a sales manager, wants to understand the impact of customer
traffic on total sales. She determines that sales are a function of price and
transaction volume. The price of a widget is $1,000 and Sue sold 100 last year
for total sales of $100,000. Sue also determines that a 10% increase in customer
traffic increases transaction volume by 5%, which allows her to build a financial
model and sensitivity analysis around this equation based on what-if statements.
It can tell her what happens to sales if customer traffic increases by 10%, 50% or
100%. Based on 100 transactions today, a 10%, 50% or 100% increase in
customer traffic equates to an increase in transactions by 5, 25 or 50. The
sensitivity analysis demonstrates that sales are highly sensitive to changes in
customer traffic.
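Sue's what-if model is simple enough to sketch directly; the 10%-traffic-to-5%-volume relationship is her estimate from the example above.

```python
price = 1_000        # widget price
base_volume = 100    # last year's transactions

def total_sales(traffic_increase):
    # Sue's estimate: a 10% rise in traffic lifts transaction volume by 5%,
    # i.e. volume grows at half the rate that traffic does
    volume = base_volume * (1 + 0.5 * traffic_increase)
    return price * volume

for t in (0.10, 0.50, 1.00):
    print(f"traffic +{t:.0%} -> sales ${total_sales(t):,.0f}")
```

This reproduces the figures in the example: 5, 25 or 50 extra transactions, i.e. sales of $105,000, $125,000 or $150,000.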

Sensitivity vs. Scenario Analysis

In finance, a sensitivity analysis is created to understand the impact a range of
variables has on a given outcome. It is important to note that a sensitivity
analysis is not the same as a scenario analysis. As an example, assume
an equity analyst wants to do a sensitivity analysis and a scenario analysis
around the impact of earnings per share (EPS) on the company's relative
valuation by using the price-to-earnings (P/E) multiple.

The sensitivity analysis is based on the variables impacting valuation, which a
financial model can depict using the variables' price and EPS. The sensitivity
analysis isolates these variables and then records the range of possible
outcomes. A scenario analysis, on the other hand, is based on a scenario. The
analyst determines a certain scenario such as a market crash or change in
industry regulation. He then changes the variables within the model to align with
that scenario. Put together, the analyst has a comprehensive picture. He knows
the full range of outcomes, given all extremes, and has an understanding for
what the outcomes would be given a specific set of variables defined by real-life
scenarios.
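The contrast can be sketched with the P/E example: sensitivity varies one input at a time, while a scenario (say, a market crash) moves several inputs together. All figures below are hypothetical.

```python
def valuation(eps, pe):
    # Relative valuation: price = EPS x P/E multiple
    return eps * pe

base = {"eps": 5.0, "pe": 12.0}

# Sensitivity: vary EPS alone, holding the multiple fixed
eps_sensitivity = [valuation(e, base["pe"]) for e in (4.0, 5.0, 6.0)]

# Scenario: a market crash depresses both EPS and the multiple together
crash = {"eps": 3.5, "pe": 8.0}

print(eps_sensitivity, valuation(**crash))
```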

4. Risks And Rates Of Returns

What is the 'Risk-Free Rate Of Return'

The risk-free rate of return is the theoretical rate of return of an investment with zero
risk. The risk-free rate represents the interest an investor would expect from an
absolutely risk-free investment over a specified period of time.

In theory, the risk-free rate is the minimum return an investor expects for any
investment because he will not accept additional risk unless the potential rate of
return is greater than the risk-free rate.

In practice, however, the risk-free rate does not exist because even the safest
investments carry a very small amount of risk. Thus, the interest rate on a three-
month U.S. Treasury bill is often used as the risk-free rate for U.S.-based investors.
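One place the risk-free rate appears directly is the capital asset pricing model (CAPM), where it anchors the minimum return before any risk premium is added; the inputs below are illustrative.

```python
def capm_required_return(risk_free, beta, market_return):
    # CAPM: E(R) = Rf + beta * (E(Rm) - Rf)
    return risk_free + beta * (market_return - risk_free)

# Illustrative inputs: 3% T-bill yield, beta of 1.2, 8% expected market return
print(f"{capm_required_return(0.03, 1.2, 0.08):.2%}")
```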

BREAKING DOWN 'Risk-Free Rate Of Return'

Determination of a proxy for the risk-free rate of return for a given situation must
consider the investor's home market, while negative interest rates can complicate the
issue.

Currency Risk

The three-month U.S. Treasury bill is a useful proxy because the market considers
there to be virtually no chance of the government defaulting on its obligations. The
large size and deep liquidity of the market contribute to the perception of safety.
However, a foreign investor whose assets are not denominated in dollars
incurs currency risk when investing in U.S. Treasury bills. The risk can be hedged
via currency forwards and/or options but impacts the rate of return.

The short-term government bills of other highly rated countries, such as Germany and
Switzerland, offer a risk-free rate proxy for investors with assets in euros or Swiss
francs. Investors based in less highly rated countries that are within the eurozone,
such as Portugal and Greece, are able to invest in German bonds without incurring
currency risk. By contrast, an investor with assets in Russian rubles cannot invest in a
highly rated government bond without incurring currency risk.
Negative Interest Rates

Flight to quality and away from high-yield instruments amid the long-running
European debt crisis has pushed interest rates into negative territory in the countries
considered safest, such as Germany and Switzerland. In the United States, partisan
battles in Congress over the need to raise the debt ceiling have sometimes sharply
limited bill issuance, with the lack of supply driving prices sharply lower. The lowest
permitted yield at a Treasury auction is zero, but bills sometimes trade with negative
yields in the secondary market. And in Japan, stubborn deflation has led the Bank of
Japan to pursue a policy of ultra-low, and sometimes negative, interest rates to
stimulate the economy. Negative interest rates essentially push the concept of risk-
free return to the extreme; investors are willing to pay to place their money in an
asset they consider safe.

a. Types Of Risks (Business/Operating, Financing)

Businesses face all kinds of risks, some of which can cause serious loss of
profits or even bankruptcy. But while all large companies have extensive "risk
management" departments, smaller businesses tend not to look at the issue in
such a systematic way.

So in this four-part series of tutorials, you’ll learn the basics of risk management
and how you can apply them in your business.
In this first tutorial, we’ll look at the main types of risk your business may face.

You’ll get a rundown of strategic risk, compliance risk, operational risk, financial
risk, and reputational risk, so that you understand what they mean, and how they
could affect your business. Then we’ll get into the specifics of identifying and
dealing with these risks in later tutorials in the series.

1. Strategic Risk

Everyone knows that a successful business needs a comprehensive, well-thought-out
business plan. But it's also a fact of life that things change, and your
best-laid plans can sometimes come to look very outdated, very quickly.

This is strategic risk. It’s the risk that your company’s strategy becomes less
effective and your company struggles to reach its goals as a result. It could be
due to technological changes, a powerful new competitor entering the market,
shifts in customer demand, spikes in the costs of raw materials, or any number of
other large-scale changes.

History is littered with examples of companies that faced strategic risk. Some
managed to adapt successfully; others didn’t.
A classic example is Kodak, which had such a dominant position in the film
photography market that when one of its own engineers invented a digital
camera in 1975, it saw the innovation as a threat to its core business model, and
failed to develop it.

It’s easy to say with hindsight, of course, but if Kodak had analyzed the strategic
risk more carefully, it would have concluded that someone else would start
producing digital cameras eventually, so it was better for Kodak to cannibalize its
own business than for another company to do it.

Failure to adapt to a strategic risk led to bankruptcy for Kodak. It’s now emerged
from bankruptcy as a much smaller company focusing on corporate imaging
solutions, but if it had made that shift sooner, it could have preserved its
dominance.

Facing a strategic risk doesn't have to be disastrous, however. Think of Xerox,
which became synonymous with a single, hugely successful product, the Xerox
photocopier. The development of laser printing was a strategic risk to Xerox’s
position, but unlike Kodak, it was able to adapt to the new technology and
change its business model. Laser printing became a multi-billion-dollar business
line for Xerox, and the company survived the strategic risk.

2. Compliance Risk

Are you complying with all the necessary laws and regulations that apply to your
business?

Of course you are (I hope!). But laws change all the time, and there’s always a
risk that you’ll face additional regulations in the future. And as your own business
expands, you might find yourself needing to comply with new rules that didn’t
apply to you before.

For example, let’s say you run an organic farm in California, and sell your
products in grocery stores across the U.S. Things are going so well that you
decide to expand to Europe and begin selling there.

That’s great, but you’re also incurring significant compliance risk. European
countries have their own food safety rules, labeling rules, and a whole lot more.
And if you set up a European subsidiary to handle it all, you’ll need to comply
with local accounting and tax rules. Meeting all those extra regulatory
requirements could end up being a significant cost for your business.

Even if your business doesn’t expand geographically, you can still incur new
compliance risk just by expanding your product line. Let’s say your California
farm starts producing wine in addition to food. Selling alcohol opens you up to a
whole raft of new, potentially costly regulations.

And finally, even if your business remains unchanged, you could get hit with new
rules at any time. Perhaps a new data protection rule requires you to beef up
your website’s security, for example. Or employee safety regulations mean you
need to invest in new, safer equipment in your factory. Or perhaps you’ve
unwittingly been breaking a rule, and have to pay a fine. All of these things
involve costs, and present a compliance risk to your business.

In extreme cases, a compliance risk can also affect your business’s future,
becoming a strategic risk too. Think of tobacco companies facing new advertising
restrictions, for example, or the late-1990s online music-sharing services that

were sued for copyright infringement and were unable to stay in business. We’re
breaking these risks into different categories, but they often overlap.

3. Operational Risk

So far, we’ve been looking at risks stemming from external events. But your own
company is also a source of risk.

Operational risk refers to an unexpected failure in your company's day-to-day
operations. It could be a technical failure, like a server outage, or it could be
caused by your people or processes.

In some cases, operational risk has more than one cause. For example, consider
the risk that one of your employees writes the wrong amount on a check, paying
out $100,000 instead of $10,000 from your account.

That’s a “people” failure, but also a “process” failure. It could have been
prevented by having a more secure payment process, for example having a
second member of staff authorize every major payment, or using an electronic
system that would flag unusual amounts for review.

In some cases, operational risk can also stem from events outside your control,
such as a natural disaster, or a power cut, or a problem with your website host.
Anything that interrupts your company’s core operations comes under the
category of operational risk.

While the events themselves can seem quite small compared with the large
strategic risks we talked about earlier, operational risks can still have a big
impact on your company. Not only is there the cost of fixing the problem, but
operational issues can also prevent customer orders from being delivered or
make it impossible to contact you, resulting in a loss of revenue and damage to
your reputation.

4. Financial Risk

Most categories of risk have a financial impact, in terms of extra costs or lost
revenue. But the category of financial risk refers specifically to the money flowing
in and out of your business, and the possibility of a sudden financial loss.

For example, let’s say that a large proportion of your revenue comes from a
single large client, and you extend 60 days credit to that client (for more on
extending credit and dealing with cash flow, see our earlier cash flow tutorial).

In that case, you have a significant financial risk. If that customer is unable to
pay, or delays payment for whatever reason, then your business is in big trouble.

Having a lot of debt also increases your financial risk, particularly if a lot of it is
short-term debt that’s due in the near future. And what if interest rates suddenly
go up, and instead of paying 8% on the loan, you’re now paying 15%? That’s a
big extra cost for your business, and so it’s counted as a financial risk.

Financial risk is increased when you do business internationally. Let's go back to
that example of the California farm selling its products in Europe. When it makes
sales in France or Germany, its revenue comes in euros, and its UK sales come
in pounds. The exchange rates are always fluctuating, meaning that the amount
the company receives in dollars will change. The company could make more
sales next month, for example, but receive less money in dollars. That’s a big
financial risk to take into account.
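The farm's exchange-rate exposure can be sketched in one line: the same euro revenue converts to a different dollar amount as the rate moves. The figures below are hypothetical.

```python
def dollar_revenue(eur_amount, usd_per_eur):
    # The same euro sales are worth fewer dollars if the euro weakens
    return eur_amount * usd_per_eur

for rate in (1.10, 1.05, 1.00):     # assumed EUR/USD scenarios
    print(f"EUR/USD {rate:.2f} -> ${dollar_revenue(100_000, rate):,.0f}")
```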

5. Reputational Risk

There are many different kinds of business, but they all have one thing in
common: no matter which industry you’re in, your reputation is everything.

If your reputation is damaged, you'll see an immediate loss of revenue, as
customers become wary of doing business with you. But there are other effects,
too. Your employees may get demoralized and even decide to leave. You may
find it hard to hire good replacements, as potential candidates have heard about
your bad reputation and don’t want to join your firm. Suppliers may start to offer
you less favorable terms. Advertisers, sponsors or other partners may decide
that they no longer want to be associated with you.

Reputational risk can take the form of a major lawsuit, an embarrassing product
recall, negative publicity about you or your staff, or high-profile criticism of your
products or services. And these days, it doesn’t even take a major event to
cause reputational damage; it could be a slow death by a thousand negative
tweets and online product reviews.

Next Steps

So now you know about the main risks your business could face. We’ve covered
five types of business risk, and given examples of how they can affect your
business.

This is the foundation of a risk management strategy for your business, but of
course there’s much more work to be done. The next step is to look more deeply
at each type of risk, and identify specific things that could go wrong, and the
impact they could have.

It’s not much use, for example, to say, “Our business is subject to operational
risk.” You need to get very granular, and go through every aspect of your

operations to come up with specific things that could go wrong. Then you can
come up with a strategy for dealing with those risks.

BUSINESS / OPERATING

What is 'Operational Risk'

Operational risk summarizes the risks a company undertakes when it attempts to
operate within a given field or industry. Operational risk is the risk not inherent in
financial, systematic or market-wide risk. It is the risk remaining after
determining financing and systematic risk, and includes risks resulting
from breakdowns in internal procedures, people and systems.

BREAKING DOWN 'Operational Risk'

Operational risk can be summarized as human risk; it is the risk of business
operations failing due to human error. It changes from industry to industry, and is
an important consideration to make when looking at potential investment
decisions. Industries with lower human interaction are likely to have lower
operational risk.

Focus of Operational Risk

Operational risk focuses on how things are accomplished within an organization
and not necessarily what is produced or inherent within an industry. These risks
are often associated with active decisions relating to how the organization
functions and what it prioritizes. While the risks are not guaranteed to result in
failure, lower production or higher overall costs, they are seen as higher or lower
depending on various internal management decisions.

Examples of Operational Risk

One area that may involve operational risk is the maintenance of necessary
systems and equipment. If two maintenance activities are required, but it is
determined only one can be afforded at the time, making the choice to perform
one over the other alters the operational risk depending on which system is left in
disrepair. If a system fails, the negative impact is associated directly with the
operational risk.

Other areas that qualify as operational risk tend to involve the human element
within the organization. If a sales-oriented business chooses to maintain a
subpar sales staff, due to its lower salary costs or any other factor, this is
considered an operational risk. The same can be said for failing to properly staff
to avoid certain risks. In manufacturing, choosing not to have a qualified

mechanic on staff, and having to rely on third parties for that work, can be
classified as an operational risk. Not only does this impact a system's operation,
it also involves additional time delays as it relates to the third party.

Willingly participating in fraudulent activity may also be seen as operational risk. In
this case, the risk involves the possibility of repercussions if the activity is
uncovered. Since the decision is active, it is considered a risk relating to how the
business operates.

Operational Risk vs. Business Risk


By Fraser Sherman

When you own or manage a business, there's always a risk of loss or failure.
Your decisions can affect how much risk your company faces, whether it's a
financial risk, the risk of adopting a bad business strategy or the risk of your
employees making mistakes. Business analysts have divided the risks
companies face into subcategories, two of which are operational risk and
business risk.

Business Risk

Business risk is the risk that results from your decisions about the products and
services you offer. When you decide to develop and market a particular product,
there's a risk that the product won't work as well as you hoped or that your
marketing campaign will fail. Other business risks include changes in the cost of
raw materials or shipping and managing technological developments that affect
sales or manufacturing.

Operational Risk
Operational risks exist in the way your company tries to carry out your decisions.
Even if you decide on the right product to manufacture, weaknesses in your
supply chain, outdated manufacturing equipment or a poor sales force can make
it impossible to generate the profits you anticipate. A risk-management strategy
that focuses on management decisions and ignores how the staff operates can
leave you with a dangerously high risk level. If your IT department doesn't
maintain Internet security, for example, one hacking incident could cost you vital
corporate information or customers' credit card numbers.

Managing Business Risk

There's rarely a 100 percent safe path in the business world. Developing a new
product or moving into a new market carries a risk of losing money, but not
expanding or growing can be just as risky, allowing more daring competitors to
gain market share. When weighing alternatives, look at the probability of
business risk from each choice and the consequences if the worst happens.
Then you have to balance the chance of success against the loss to your
company if you fail.

Managing Operational Risk



Strategic business decisions may seem full of risk, but lower-level operational
risks can be a bigger challenge, as there are so many points where your
operations can go off the rails. What you can do is make sure there are control
systems in place to keep your staff following the right procedures. Other
protective steps include insurance and having a contingency plan in place. If your
equipment breaks down, for instance, having a plan to keep operating until
insurance covers the losses could be vital.

What is business/operational risk?

‘Business/operational risk relates to activities carried out within an entity, arising
from structure, systems, people, products or processes.’ (CIMA Official
Terminology, 2005)

Operational risk has also been defined as: ‘The risk of loss resulting from
inadequate or failed internal processes, people and systems, or from external
events.’ (Basel Committee on Banking Supervision, 2004)

Risk management is: ‘A process of understanding and managing the risks that
the entity is inevitably subject to in attempting to achieve its corporate objectives.
For management purposes, risks are usually divided into categories such as
operational, financial, legal compliance, information and personnel. One example
of an integrated solution to risk management is enterprise risk management.’
(CIMA Official Terminology, 2005)

Overview

There is a huge variety of specific operational risks. By their nature, they are
often less visible than other risks and are often difficult to pin down precisely.
Operational risks range from the very small, for example, the risk of loss due to
minor human mistakes, to the very large, such as the risk of bankruptcy due to
serious fraud. Operational risk can occur at every level in an organisation.

The types of risks associated with business and operational risk relate to:
• business interruption
• errors or omissions by employees
• product failure
• health and safety
• failure of IT systems
• fraud
• loss of key people
• litigation
• loss of suppliers.

Operational risks are generally within the control of the organisation through risk
assessment and risk management practices, including internal control and
insurance.

Operational risk Topic Gateway Series

Application

Risk categorisation

Risks can be categorised in a number of ways. A popular way is to use one of
four main categories, namely operational risk, financial risk, environmental risk
and reputational risk. It is important that risks are categorised in a way that is
relevant to the needs of the organisation. Some of the benefits of categorisation
include:
• providing a framework that can be used to define who is responsible, to design
appropriate internal controls and to assist in simplified risk reporting
• assisting managers to identify how they can use their past experience to
categorise risk
• helping organisations to identify related risks in the same category
• giving assistance in recognising which risks are inter-related.

Operational risk identification

Operational risk sources may be internal or external to the business and are
usually generated by people, processes and technology. Identification is one of
the most important areas of managing risk. Failure to identify risk will certainly
mean that no action is taken to

manage that risk.

There are a number of different techniques that can be used to identify risk. A
common method used in risk identification is the use of workshops to
‘brainstorm’. This can be used at different levels of the organisation and can
identify a large number of risks in a short time. To keep ideas flowing, it is
important to keep identification sessions focused on identifying risks and not to
move on to evaluate the risks.

Operational risks are largely based on procedures and processes, so this lends
itself to the use of audit for risk identification purposes. Risk based audit can be
used as a tool to identify risks, as well as a method of reporting to the board on
the effectiveness of the organisation’s risk management framework. Risk based
audit can use the following methods to assess risks:
• intuitive or judgemental assessment
• risk assessment matrix
• risk ranking.

Another approach to identifying operational risk is to look for critical
dependencies in people, processes, systems and external structures. Once
identified, the dependencies can be managed or engineered by adding fail-safes
and system redundancies. Other approaches include physical inspection and
incident investigation. Once risks have been identified based on a suitable way
of categorising them, it becomes possible to think of tools that may be used to
measure and manage them.

Risk assessment and measuring

Various methods may be used to assess the severity of each risk once it has
been identified. One of the reasons for measuring risk is that it allows the most
significant risks to be prioritised. The result or impact of a risk occurring may be
financial loss, damage to reputation, process change or a combination of these.
One of the simplest ways to measure risks is to apply an impact and likelihood
matrix which provides an overall risk rating. (Adapted from: Emergency
Preparedness, guidance on part 1 of the Civil Contingencies Act 2004.)

One of the issues with measuring risk is that there are objective or subjective
risks. Many risks are subjective and qualitative, rather than objectively
identifiable and measurable. For example, the risks of litigation, economic
downturn, loss of key employees, natural disasters and loss of reputation are all
subjective judgements. There is an important distinction between objective,
measurable risks and subjective, perceived risks. Some of the factors that
influence this distinction are:
• how recently the risk has occurred
• how visible the risk is
• how management perceives the risk
• how the organisation establishes formal or informal ways of dealing with the
risk.

The analysis can be either quantitative or qualitative, but it should allow for
comparison and trend analysis. One of the issues with risk assessment is that
traditional risk assessment techniques often focus on those elements that can be
quantified easily. Such techniques fail to address all critical drivers of successful
risk management.

Impact

When considering the impact of operational risk there are three primary areas
that affect the business activity.

Property exposures – these relate to the physical assets belonging to or
entrusted to the business.

Personnel exposures – these relate to the risks faced by all those who work for
and with the business, including customers, suppliers and contractors.

Financial exposures – these relate to all aspects of the company’s ability to
trade, whether profitably or not, and cover internal and external exposures of all
types. Financial exposures also include intellectual property, goodwill and
patents.

Managing operational risks

Risk evaluation is used to make decisions about the significance of the risks to
the organisation and whether each specific risk should be accepted or treated.
When looking at operational risk

management, it is important to align it with the organisation’s risk appetite. The
risk appetite will be influenced by the size and type of organisation, its capacity
for risk and its ability to exploit opportunities and withstand setbacks. Once the
severity of the risk has been established, one or more of the following methods
of controlling risk can be applied:
• accepting the risk
• sharing or transferring the risk
• risk reduction
• risk avoidance.

Insurance is a long established control method for transferring risk. This applies
to a number of types of operational risk, for example, damage to buildings.
However, more recently there has been an increase in the use of insurance
combined with other methods such as business continuity management.

One issue with measuring and managing subjective operational risks is that
unless the risk occurs, it is not possible to be certain of the impact of the risk.
The severity of the risk may be underestimated. One of the issues with
operational risk is the continuously changing business environment. This is
stressed in Internal control: guidance for directors on the Combined Code, also
known as the Turnbull Report (1999), which states: ‘A company’s objectives, its
internal organisation and the environment in which it operates, are continually
evolving and, as a result, the risks it faces are continually changing. A sound
system of internal control therefore depends on a thorough and regular
evaluation of the risks to which it is exposed.’

Once a decision has been made about how to manage or control the risk, it is
important to have a process in place to monitor actively and to review and report
regularly on the risk management framework.

Critical success factors in risk management are:
• clearly identified senior management to support, own and lead on risk
management
• existence and adoption of a framework for risk management that is transparent
and repeatable
• risk is actively monitored and regularly reviewed
• management of risk is fully embedded in the management process and
consistently applied
• clear communication with all staff
• management of risks is closely linked to the achievement of objectives.

Case studies

Case: Managing business interruption – Lehman Brothers

This case study looks at the lessons learned from 11 September 2001 in relation
to business continuity management. Available from: http://digbig.com/4xewr
[Accessed 17 July 2008]

One of the key operational risks to any organisation is business interruption. To
manage this risk, organisations must have a robust business continuity plan.
There is a close link between business continuity management (BCM) and
operational risk. There have been significant developments in the area of BCM.
Earlier disaster recovery plans anticipated a failure and subsequent recovery
from it, while many business operations now are so time critical that no outage
whatsoever can be tolerated. BCM now embraces both the creation of a
‘non-stop’ infrastructure and operational capability, as well as recovery from
operational failure.

Five key steps in business continuity management:
1. Assessing and objective setting.
2. Critical process identification.
3. Business impact analysis.
4. Business continuity planning (BCP).
5. Monitoring, testing and improving.

Top 10 operational risks for 2017


Risk.net presents the top 10 operational risks of 2017, as chosen by risk
practitioners
Financial institutions face a range of operational challenges in 2017


Risk.net staff (@riskdotnet), 23 Jan 2017
In a series of interviews that took place in November and December
2016, Risk.net spoke to chief risk officers, heads of operational risk and other op
risk practitioners at financial services firms, including banks, insurers and asset
managers. Based on the op risk concerns most frequently selected by those
practitioners, we present our ranking of the top 10 operational risks for 2017.
#1 Cyber risk and data security | #2 Regulation | #3 Outsourcing | #4 Geopolitical
risk | #5 Conduct risk | #6 Organisational change | #7 IT failure | #8 AML, CTF
and sanctions compliance | #9 Fraud | #10 Physical attack
 

#1: Cyber risk and data security


An overwhelming number of risk managers ranked the threat from cyber attacks
as their top operational risk for 2017 – the second year in a row it has topped the
rankings, this year by an even larger margin.
And this is no surprise as the threat from cyber attacks is not only growing, but
also mutating into new and insidious forms, say risk practitioners.
From the Bangladesh Bank heist back in February – which saw hackers exploit
vulnerabilities in the Swift financial communications network to steal $81 million
from accounts belonging to the central bank – to November's theft of £2.5 million
($3.1 million) from 9,000 Tesco Bank customers' accounts following a data
breach, the threat from cyber attacks was an ever-present over the past year.
As if the reputational damage alone weren't enough to spur banks into action, the
threat of penalties from regulators for firms whose cyber resilience isn't up to
scratch probably will be. In September 2016, the UK Financial Conduct Authority
revealed that the number of reported incidents of cyber crimes at firms under its
jurisdiction had jumped to 75 for the year to date, from just five in 2014. That
followed comments from the regulator at June's Cyber Risk Europe conference
that it would be challenging firms more regularly on cyber security going forward.
Under the European Union's forthcoming General Data Protection Regulation
(GDPR), which comes into force in May 2018, financial organisations face eye-
watering fines of up to 4% of their global annual turnover for data privacy
breaches. If GDPR were in force now, Tesco Bank's fine for its data breach could
have been as high as £1.9 billion, according to some estimates.
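To put the 4% cap in context, here is a minimal sketch of the GDPR fine ceiling; the regulation caps fines for serious breaches at the greater of EUR 20 million or 4% of global annual turnover, and the turnover figures below are hypothetical.

```python
# Illustrative sketch of the GDPR fine ceiling for serious breaches:
# the greater of EUR 20 million or 4% of global annual turnover.
# The turnover figures used below are hypothetical.

def max_gdpr_fine(global_annual_turnover: float) -> float:
    """Upper bound (in EUR) of a GDPR fine for a serious data breach."""
    return max(20_000_000.0, 0.04 * global_annual_turnover)

# A firm with EUR 50 billion in annual turnover faces a cap of EUR 2 billion:
print(max_gdpr_fine(50e9))   # 2000000000.0
# A small firm is still exposed to the EUR 20 million floor:
print(max_gdpr_fine(100e6))  # 20000000.0
```

The point of the sketch is that the cap scales with turnover, which is why the estimated exposure for a large bank runs into the billions.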
The source of potential cyber threats is hard to pin down, say banks, making
building appropriate controls a serious challenge, and attacks nearly impossible
to avoid.
According to the head of operational risk at one large European bank: "There are
three categories of people that carry out cyber attacks. There's the guy that's
sitting alone in his bedroom doing it; there are organised groups doing it; and
there are governments doing it."
Cyber criminals do not discriminate between organisations based on their size
and location, but the financial sector enjoys the dubious privilege of being one of
the most targeted industries, alongside healthcare. Organisations would do well
to spend more time defining their risk appetite instead of trying to ensure their
systems are impenetrable, practitioners counsel.
Industry view
Rajat Baijal, head of enterprise risk at BGC and Cantor Fitzgerald:
"Cyber risk will stay pertinent for a while. What I find quite fascinating about
cyber risk is the sheer pace of change: recent events suggest that the hackers
are one step ahead of the banks in this rapidly evolving space. Given the
uncertainties, firms may choose to strike a balance between actively managing
the risk by investing in suitable resource and infrastructure, and accepting or
transferring the risk by buying a suitable insurance policy for example. This
balance between managing and accepting and transferring the risk will vary
across firms, and should be a key part of defining the firm's risk appetite."
Stephanie Snyder, senior vice president, Aon professional risk solutions:
"We talk about the evolving nature of cyber risk, which is only going to increase
with the Internet of Things and additional automation. I believe that, as we move
into 2017, we're going to start seeing more cyber-related business interruption
losses; you're not going to read about them in the press, but every organisation
that runs off of a technology infrastructure – which is, really, every organisation –
is going to be impacted."
Jonathan Wyatt, global lead of IT governance and risk management, Protiviti:
"What a cyber strategy should really be doing is not trying to prevent the attack –
because that is very difficult – but trying to manage the outcome. The problem
we have with cyber is most people in financial services are not doing it this way.
They're not stepping back and thinking about outcomes, risk appetite and what
they do; they're throwing money at it, trying to make the door more secure – but
there are still plenty of people who know how to open the door. When you get
techies talking to board executives about threats, vulnerabilities, weaknesses,
the dialogue breaks down."
 

#2: Regulation
To many op risk practitioners, the landmark regulations of the post-crisis era –
the overhaul of the capital adequacy framework, widespread market structure
reforms, far-reaching changes to accounting practices – represent a laundry list
of potential operational risks for their institution.
Fines and penalties for noncompliance, the restructuring of desks and operations
and the shuttering of businesses all present complex and hard-to-model threats.
In the US, the Dodd-Frank Act alone – irrespective of President Trump's promise
to expunge it – has produced thousands of pages of rulemakings from prudential
and markets regulators, covering everything from stress testing to clearing, trade
execution to hedge fund reporting.
Closer to home for op risk professionals, the Basel Committee on Banking
Supervision's proposal to replace the advanced measurement approach (AMA)
for modelling operational risk is already presenting all manner of issues.
By requiring firms to hold the same amounts of operational risk capital against all
forms of business, regulators are encouraging firms to enter businesses that
exclusively expose themselves to operational risks to maximise their return on
equity, argue op risk practitioners.
"Operational risk seems to be the one that's causing regulators the most
concern; they struggle with it," says the head of operational risk at an
international bank in London. "There is a danger they will push something
through in order to get [the Basel IV agenda] out at the same time. As the SMA
[standardised measurement approach] proposal stands now, it will have a huge
impact on operational risk capital, and
group heads are committed to not having an increase in capital overall – so it will
be interesting to see where that all comes out."
Industry view
Fenton Aylmer, operational risk management lead for business practices and
conduct, Citi:
"All the rules and regulations since the financial crisis makes us need to be very
quick in our adoption and interpretation. It doesn't give us a lot of time to react.
Because there's so many people that need to be informed, appropriate and
relevant awareness and education programmes are critical. We need to make
sure that each of our employees is fully aware of their roles and responsibilities,
as well as the ethical repercussions that are associated with these rules. That
creates a challenge to ensure that we have proper business practices around
each product that we launch so we fully address the client's needs and don't end
up on the wrong side of regulatory surveillance."
Senior op risk manager at a London-based bank:
"Regulatory change has been a constant for a number of years, and it should be
the number one risk in any organisation. With change comes elevated
operational risk that needs to be appropriately managed. The challenges faced
by banks, especially internationally active ones, is keeping up with the global
change agenda and understanding the interlinkage of regulatory changes across
jurisdiction."
Industry consultant and former head of op risk:
"Given the backdrop of a series of financial scandals, global regulators have
used the stick of fines and sanctions to bring more order. There is a danger that
these will become more and more punitive, such that it will be difficult for firms to
recover."
Zahra Al Halwachi, operational risk manager, Mashreq Bank:
"Regulations are changing frequently, which for banks with international
branches may result in fines and penalties if not implemented [properly]. And
they are becoming more complex as well."
 
#3: Outsourcing
Outsourcing makes it into our top three operational risks this year, spurred by a
clear message from regulators that firms must improve oversight of third-party
risk management, or else face punitive sanctions.
Aviva provided one of the highest-profile examples last year. In October 2016,
the firm was hit with an £8.2 million fine from the UK Financial Conduct Authority
for failure to ensure adequate controls and oversight of outsourced client money
handling arrangements.
The size of the penalty, combined with the undesirable publicity the case
attracted, caused alarm for many op risk practitioners, and emphasised that
regulators are actively hunting for breaches.
Under the EU's forthcoming GDPR legislation (see Cyber segment), financial
organisations must review their existing outsourcing arrangements to ensure
they don't face eye-watering fines – even if the failures are those of third-party
service providers.
GDPR compliance will represent a significant burden, managers say. Banks will
need to know exactly where their customer data is held at all times, and be able
to present this data on demand in a portable format. That will require a thorough
understanding of a complex web of relationships with various outsourcers,
practitioners say.
Industry view
Steve Holt, financial services partner, EY:
"Many companies are only worried about the top 10% of outsourced
arrangements – the ones that they spend most money on. That's not necessarily
reflective of their risk profile; you may be spending millions with a global
outsourcer, but it may be a small outsourcer with not-very-mature controls that's
holding some key customer personal data where you suffer a loss... In many
cases, outsourcing providers actually outsource to other organisations, so it
becomes a massively complex ecosystem. [But] financial services firms still have
overall responsibility for ensuring that the data is controlled and secure. This is a
key requirement of the GDPR."

Simon Ashby, associate professor of financial services, Plymouth Business School:
"In general, outsourcing is not necessarily cheaper – plus there are downsides.
Reputational risk is definitely one of the key risks; service delivery, quality,
continuity of service are others. Another key risk is, if there is a big disruption to
services – say your outsourcing company goes bankrupt or there's another major
business continuity effect – can you bring that activity back in house and can you
do it quickly?"
 
#4: Geopolitical risk


The election of Donald Trump as US president and the UK's shock vote to
withdraw from the European Union have combined to push geopolitical risk into
the top 10 this year, rocketing it all the way to number four.
The prospect of a so-called hard Brexit, including a departure from the European
single market, as outlined in UK prime minister Theresa May's January 17
speech, will have serious implications for the financial services industry, with
London home to the European headquarters of most of the world's top banking,
insurance and asset management companies.
Banks are expected to start moving staff out of London in 2017. Those plans are
unlikely to be reversed even if the UK secures favourable access to the
European single market, say op risk practitioners. The consequences could be
as painful as they are idiosyncratic; witness fears of a politically motivated
attempt by European legislators to forcibly relocate euro clearing to the
eurozone, the cost of which could be as high as $100 billion in additional margin
requirements for banks and their clients.
Banks with relatively small operations inside the eurozone, such as the Japanese
banks, are likely to bear the heaviest fallout from Brexit. But even banks with
large eurozone operations will be exposed to increased local market regulator
risks, such as not being allowed to ramp up derivatives trading within a given
jurisdiction.
In addition to its direct costs, Brexit – because it will occur against a backdrop of
significant economic, regulatory and business change – could indirectly
exacerbate other operational risks such as outsourcing (#3), organisational and
business change (#6), regulation (#2), and conduct risk (#5). For example, the
need rapidly to form new supplier relationships opens banks up to heightened
outsourcing risk, say practitioners.
In the US meanwhile, the Trump administration's likely rollback of financial
legislation could create its own risks, risk managers warn. There is also
widespread speculation that supranational regulatory commitments, in particular
the package of prudential reforms collectively dubbed Basel IV, could now be
revisited, creating further uncertainty for banks.
Regulatory capital requirements for political risk differ across jurisdictions:
European banks that rely on Basel III's advanced approaches for calculating risk-
based capital typically set aside capital against political risk.
Industry view
Senior bank op risk manager:
"Excluding the biggest overall risk for banks – the changing environment in the
financial industry itself – as a strategic risk, the biggest remaining risk results
from our rapidly changing world order and its implications for the financial sector.
No banking group can be sure that an investment or market entry into foreign
countries that makes sense at the moment will not backfire in a couple of years.
To ignore this reality and not think about possible scenarios might prove very
costly for international banks in the upcoming years."
Ariane Chapelle, director at Chapelle Consulting:
"Brexit will likely be an important cause of uncertainty, loss of business, third-
party risk, relocation risk and project management risk, caused by uncertainty
and unfamiliarity with new processes."
 

#5: Conduct risk


At first glance, 2016 was fairly unremarkable from the point of view of conduct
risk, with a lack of newly uncovered high-profile instances of wrongdoing perhaps
serving to push it further down practitioners' list of worries, from #2 last year to
#5 this year.
But an absence of recent incidents doesn't indicate that the risk to an
organisation from misconduct has decreased, say managers; quite the contrary.
In the UK, the Senior Managers Regime (SMR), which came into force in March,
seeks to codify a culture of personal responsibility for risk managers, with
individuals who fulfil certain designated control functions now personally liable for
various forms of misconduct.
Under the US Dodd-Frank Act, individuals whose input helps the Securities and
Exchange Commission (SEC) take successful enforcement action against
wrongdoers are entitled to a reward of up to 30% of the fine imposed on an
organisation. Since the legislation came into force, the SEC has levied more than
$500 million in misconduct-related fines.
Industry view
Nick Leeson, speaking at the Risk South Africa conference in March:
"Risk managers have to take more on. If a risk manager doesn't understand the
trade a star trader is trying to put on, there has to be a way of stopping them.
Someone on the risk committee has to say they fully understand it, and that
they're going to take responsibility for it. To this day, a lot of traders are still able
to railroad certain trades through. Until that changes, there will always be a
problem."
Paul Fisher, Bank of England:
"[The SMR's] purpose is to make it clear who is accountable for what within a
firm. The foremost objective of that is not so we know who to punish when things
go wrong. It is to make sure someone is taking full responsibility for the right
outcomes so misbehaviour becomes very much rarer."
 
#6: Organisational change


Organisational change comes in many forms. But whether prompted by
regulation, technological change or a corporate restructuring, the result is always
upheaval, and enforced changes to op risk frameworks to cope with new and
often idiosyncratic sources of risk.
The convoluted changes to desk structures and internal risk transfer processes
banks will be forced to enact under the Basel Committee on Banking
Supervision's revised market risk capital framework are one of the highest-profile
instances of forced organisational change impacting banks' front-office
businesses at the moment.
The fear of not being able to adapt a business model to technological change
haunts many companies. From Kodak and Blockbuster to Blackberry, many
once-prosperous firms have been sidelined by more tech-savvy and customer-
focused competitors.
The past year in finance has seen technological innovations that present big
opportunities as well as threats to many of the existing financial organisations. A
2016 report from Capgemini showed that, although 96% of banking executives
agree that the industry is moving towards a digital banking ecosystem, only 13%
have the systems in place to keep up with it.
Industry view
Jodi Richard, chief operational risk officer at US Bank:
"The evolution we're seeing in a lot of new systems and technologies being
implemented mean it's difficult to stay on top of innovation and fintech, as well as
just general technologies advancing. So changing that technology demands
change management, and redesigning processes and controls in other spaces.
That's the core of operational risk there: it's process and systems, and staying on
top of the changes in that space."
Head of operational risk at a European bank:
"Digitisation, fintech, blockchain – all these developments are really threatening
banks' business models. But whether you see them as an operational risk is
moot; I would see them as a strategic development that banks need to adapt to.
But you cannot leave it out of an op risk framework."
 

#7: IT failure
Unlike cyber crime, IT failure involves fewer unknown variables. For that reason,
it is perhaps perceived as more manageable by op risk practitioners; but its
impact can be just as debilitating.
Cloud computing was flagged by many respondents to this year's survey as one
of the most important technological trends in 2017. But as well as its advantages
in terms of flexibility and cost-effectiveness, it is prone to outages, with
undesirable consequences potentially including financial losses and damaged
relationships with clients.
Amazon Web Services – now used by many banks for additional processing
capacity, as well as for data storage – experienced a disruption in services in
Sydney in June 2016, causing multiple websites and online services reliant on
the platform to shut down, affecting everything from banking services to pizza
deliveries.
At the beginning of 2016, HSBC suffered a two-day service outage during which
millions of retail customers were unable to access their accounts. That wasn't the
only IT failure to hit the bank in the last couple of years: in 2015 its electronic
payment system experienced disruptions affecting thousands of clients just
before a UK bank holiday weekend.
Industry view
Head of operational risk at a European bank:
"[The impact of IT failure] can be big, not just in terms of direct losses but also
indirect losses, like losing a lot of customers. Many banks, not in Europe but in
Asia, are already talking about cloud solution storing. I can't assess right now
how [disruptions] might affect the business, but I think in terms of mobility of
clients, this could be severe."
 

#8: AML, CTF and sanctions compliance


Tighter anti-money laundering (AML) and counter-terrorist financing (CTF)
controls, along with efforts to prevent transactions with internationally
sanctioned entities, have been a priority of regulators around the world in recent
years, nowhere more so than in the US.
In guidance issued in October 2016, the US Office of the Comptroller of the
Currency said banks should have processes for periodic risk re-evaluations and
account decisions which address a bank's risk appetite for the level of Bank
Secrecy Act (BSA) and AML compliance risk it is willing to accept and can
effectively manage. Banks should provide for an assessment of the implications
of account closure on managing overall exposure to BSA/AML compliance risk
that is consistent with the bank's articulated risk appetite.
For lenders that provide banking services across multiple jurisdictions, that's
easier said than done, say practitioners.
"Increasing global cross-border banking activities, real-time speed of financial
transactions, and sophistication of technology provide alternative means and
opportunity for various manifestations of financial crimes, including AML," says
the head of op risk at a US financial institution.
Industry view
Bradley Bennett, Financial Industry Regulatory Authority, speaking in April 2016
at an industry AML conference:
"You need to know your customers. You need to conduct due diligence on the
securities you're selling. You need to tailor your programme to the risks inherent
in your business model. You need to test your programme, and make updates as
your business changes or expands. You need to be sure your employees are
trained, especially when you have new business lines. You need to make sure
you have good supervisory systems when you do high-risk business like micro-
caps."
Maria Vullo, New York State Department of Financial Services' superintendent,
welcomes the state's new anti-terrorism transaction monitoring and filtering
programme regulation:
"This regulation represents an important milestone in DFS's long-standing
mission to improve and strengthen BSA and AML compliance among New York's
financial institutions and make certain that banks are not being used to help
finance terrorism and other illegal activities. DFS will continue its mission to
protect the integrity of New York's financial system and will continue to take
necessary enforcement action to protect against illicit activities."
 

#9: Fraud
The threat from internal fraud can be as pernicious as that from external actors,
as Wells Fargo found out the hard way last year. Though the $187.5 million in
penalties and restitution the bank incurred for fabricating customer approval to
open checking and credit card accounts in order to meet sales targets might
barely dent its bottom line, the blow to its reputation was far more serious.
The US Office of the Comptroller of the Currency (OCC) has identified internal
control weaknesses, such as the lack of an effective audit programme, as
common deficiencies in many banks. Even though reliance on strong internal
controls has never been more critical, its supervisory examinations indicate
weakness in audit coverage and other internal controls in some banks.
"Internal and external fraud, which the OCC views as increasing, generally
results in operational losses," says Beth Dugan, deputy comptroller for
operational risk at the OCC in Washington, DC. "A strong internal control system
can help a bank avoid fraud and unintentional errors. Industry trends show that
internal control weakness can lead to increased levels of fraud related losses
and longer times for fraud identification."
Pressure to achieve sales targets or investor expectations can cause otherwise
conscientious employees to act in a way that is ethically or morally wrong, say
practitioners. The chief executive of peer-to-peer lending company Lending Club,
for example, was forced out in May amid allegations the company had altered
the dates on some of its loans to satisfy criteria that allowed it to securitise them.
The threat from external actors – some sophisticated, some dull but malignant –
is growing too, say risk managers.
"We continue to see bad actors developing new schemes and fraudulent
techniques," says the head of operational risk at a US bank. "We've seen
widespread fraud targeting credit card accounts; now we're seeing the same
thing happen in payments. It's a matter of trying to remain a step ahead of bad
actors. When the fraud event happens at another entity, like a store or a hotel
chain, it's a fraud event at our bank, because now the criminals have access to
credit card data and account numbers."
Industry view
Rajat Baijal, head of enterprise risk, BGC and Cantor Fitzgerald:
"Banks are having to make strategic changes as a result of falling volumes,
which puts additional pressure on the front office. This could further aggravate
the risk of market manipulation, fraud and collusion with external third parties, as
traders strive to meet aggressive targets."
Zahra Al Halwachi, operational risk manager, Mashreq Bank:
"Frauds internally and externally are critical risks to any organisation. Controls
and measures need to be put in place to overcome these types of risk."
 

#10: Physical attack


Physical attack, often in the form of terrorism, has fallen one place in our annual
survey, from #9 to #10, possibly reflecting a modest reduction in the global
incidence of terrorist activity since 2015, according to research. Despite this, the
risk to financial services companies of terrorist attack is an ongoing concern for
op risk professionals, making protection of employees, customers and buildings
a high priority.
As the incidents in the European cities of Nice and Berlin last year demonstrate,
the threat from attacks carried out by a few individuals and requiring little
planning can be as devastating as well-financed, state-sponsored acts of
terrorism.
Lenders are taking action: US Bank plans to introduce a new mobile app to aid
crisis communication, and more frequent compulsory staff training programmes.
As well as terrorism, the effort will help it prepare for other violent disruptions –
for instance, the possibility of sabotage by disgruntled employees, or widespread
civil disobedience.
"We are assessing physical security of our people and our buildings in response
to domestic and international terrorist attacks. The risk of increasing terrorist
attacks impacts our physical security preparedness as well as our business
continuity preparedness," says Jodi Richard, head of op risk at US Bank in
Minneapolis.
A recent study from the Institute for Economics and Peace put the cost of
terrorism to the global economy at $89.6 billion in 2015 – the second-highest
level since 2000. Over the last 15 years, the economic and
opportunity costs arising from terrorism have increased roughly eleven-fold, it
estimates.
Industry view
Industry consultant and former op risk manager:
"A physical terrorist attack is feasible as many capital cities remain on high alert.
Should such an attack include the use of biological or chemical components,
whole areas or cities could become 'no-go' areas, leaving companies at the
mercy of their distributed business continuity plans, which in turn might be
rendered obsolete if the city's infrastructure is affected also."

FINANCING

Financial risk is any of various types of risk associated with financing, including
financial transactions that involve company loans at risk of default.[1][2] Often it
is understood to include only downside risk, meaning the potential for financial
loss and uncertainty about its extent.[3][4]
A science has evolved around managing market and financial risk under the
general title of modern portfolio theory, initiated by Dr. Harry Markowitz in 1952
with his article, "Portfolio Selection".[5] In modern portfolio theory,
the variance (or standard deviation) of a portfolio is used as the definition of risk.
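As an illustrative sketch (not from the source), the variance-based definition of risk can be shown with the classic two-asset case, where portfolio risk depends on the weights, the individual standard deviations and the correlation between the assets. All numbers below are hypothetical.

```python
import math

def two_asset_portfolio_std(w1: float, s1: float, s2: float, rho: float) -> float:
    """Standard deviation of a two-asset portfolio (Markowitz framework).

    w1     -- weight of asset 1 (asset 2 receives 1 - w1)
    s1, s2 -- standard deviations of the two assets' returns
    rho    -- correlation between the two assets' returns
    """
    w2 = 1.0 - w1
    variance = (w1 ** 2) * (s1 ** 2) + (w2 ** 2) * (s2 ** 2) \
        + 2.0 * w1 * w2 * s1 * s2 * rho
    return math.sqrt(variance)

# Perfect correlation: portfolio risk is just the weighted average (~0.15)...
print(two_asset_portfolio_std(0.5, 0.10, 0.20, 1.0))
# ...while imperfect correlation yields a diversification benefit (< 0.15):
print(two_asset_portfolio_std(0.5, 0.10, 0.20, 0.3))
```

The second call shows the core result of the theory: any correlation below 1 makes the portfolio's standard deviation smaller than the weighted average of the individual risks.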

What is 'Financial Risk'

Financial risk is the possibility that shareholders will lose money when they invest
in a company that has debt, if the company's cash flow proves inadequate to
meet its financial obligations. When a company uses debt financing,
its creditors are repaid before its shareholders if the company becomes
insolvent. Financial risk also refers to the possibility of a corporation or
government defaulting on its bonds, which would cause those bondholders to
lose money.

BREAKING DOWN 'Financial Risk'

Financial risk is the general term for many different types of risks related to the
finance industry. These include risks involving financial transactions, such as
company loans and their exposure to loan default. The term is typically used to
reflect an investor's uncertainty of collecting returns and the potential for
monetary loss.

Investors can use a number of financial risk ratios to assess an investment's
prospects. For example, the debt-to-capital ratio measures the proportion of debt
used, given the total capital structure of the company. A high proportion of debt
indicates a risky investment. Another ratio, the capital expenditure ratio, divides
cash flow from operations by capital expenditures to see how much money a
company will have left to keep the business running after it services its debt.
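The two ratios just described can be sketched directly; the balance-sheet figures below are hypothetical and serve only to show the arithmetic.

```python
def debt_to_capital(total_debt: float, total_equity: float) -> float:
    """Proportion of debt in the total capital structure."""
    return total_debt / (total_debt + total_equity)

def capital_expenditure_ratio(cash_flow_from_operations: float,
                              capital_expenditures: float) -> float:
    """Operating cash flow generated per unit of capital expenditure."""
    return cash_flow_from_operations / capital_expenditures

# Hypothetical firm: 40m debt, 60m equity, 25m operating cash flow, 10m capex
print(debt_to_capital(40e6, 60e6))             # 0.4
print(capital_expenditure_ratio(25e6, 10e6))   # 2.5
```

In this hypothetical case, 40% of the capital structure is debt, and the firm generates 2.5 units of operating cash flow for every unit spent on capital expenditure.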

Types of Financial Risks


There are many types of financial risks. The most common ones include credit
risk, liquidity risk, asset backed risk, foreign investment risk, equity risk and
currency risk.

Credit risk, also referred to as default risk, arises when borrowers are unable to
repay the money they have borrowed and so go into default. Investors affected
by credit risk suffer from decreased income, lost principal and interest, or a rise
in collection costs.

Liquidity risk involves securities and assets that cannot be purchased or sold fast
enough to cut losses in a volatile market. Asset-backed risk is the risk that asset-
backed securities may become volatile if the underlying securities also change in
value. The risks under asset-backed risk include prepayment risk and interest
rate risk.

Changes in prices because of market differences, political changes, natural
calamities, diplomatic changes or economic conflicts may cause volatile foreign
investment conditions that may expose businesses and individuals to foreign
investment risk. Equity risk covers the risk involved in the volatile price changes
of shares of stock.

Investors holding foreign currencies are exposed to currency risk because
different factors, such as interest rate changes and monetary policy changes,
can alter the value of the asset that investors are holding.

DEFINITION of 'Risk Financing'

Risk financing is the determination of how an organization will pay for loss
events in the most effective and least costly way possible. It involves
identifying risks, determining how to finance them, and monitoring the
effectiveness of the financing technique that is chosen.

BREAKING DOWN 'Risk Financing'

Risk financing is designed to help a business align its desire to take on new
risks in order to grow with its ability to pay for those risks. Businesses must
weigh the potential costs of their actions against whether those actions will
help them reach their objectives. A business will examine its priorities in
order to determine whether it is taking on the appropriate amount of risk to
reach its objectives, whether it is taking the right types of risks, and
whether the costs of these risks are being accounted for financially.

Companies have a variety of options when it comes to protecting themselves
from risk. Commercial insurance policies, captive insurance, self-insurance, and
other alternative risk transfer schemes are available, though the effectiveness of
each depends on the size of the organization, the organization’s financial
situation, the risks that the organization faces, and the organization’s overall
objectives. Risk financing seeks to choose the option that is the least costly, but
that also ensures that the organization has the financial resources available to
continue its objectives after a loss event occurs.

Companies typically forecast the losses that they expect to experience over a
period of time, and then determine the net present value of the costs associated
with the different risk financing alternatives available to them. Each option is
likely to have different costs depending on the risks that need coverage, the loss
development index that is most applicable to the company, the cost of
maintaining a staff to monitor the program, and any consulting, legal, or external
experts that are needed.

b. Measures Of Risks (Coefficient Of Variation And Standard Deviation)

Single Asset Risk: Standard Deviation and Coefficient of Variation

The return of any investment has an average, which is also the expected return,
but most returns will be different from the average: some will be more, others will
be less. The more individual returns deviate from the expected return, the greater
the risk and the greater the potential reward. The degree to which all returns for a
particular investment or asset deviate from the expected return of the investment
is a measure of its risk.

Standard Deviation: Measure of Absolute Risk

If you recorded the returns of a sample population of investors who invested in
5-year Treasury notes (T-notes), you would note that everyone received a constant
rate of return that didn't deviate, since, once bought, T-notes pay a constant rate
of interest with no credit risk. On the other hand, if you had recorded the returns
of a sample of investors who had invested in small stocks at the same time, you
would see a much wider variation in their returns—some would have done much
better than the T-note investors, while others would have done worse, and each
of their returns would vary over time. This variability can be measured with
statistical methods, because investment returns generally follow a normal
distribution, which shows the probability of each deviation from the mean, which
is the average return, or the expected return, for a particular asset.

The sum of the deviations, both positive and negative, forms a normal
distribution about the mean. The normal distribution describes the variation of
many natural quantities, such as height and weight. It also describes the
distribution of investment returns. The normal distribution has the property that
small deviations from the mean are more probable than larger deviations. When
graphed, it forms a bell-shaped curve.

To calculate the variance, the mean is subtracted from each return to obtain the
deviations, each deviation is squared to ensure that all values are positive, and
the squared deviations are summed and divided by the number of returns minus 1,
which is the degrees of freedom for a small sample.

The square root of the variance is the standard deviation, which can be
interpreted as a typical deviation from the expected return. Standard deviations
can measure the probability that a value will fall within a certain range. For
normal distributions, 68% of all values will fall within 1 standard deviation of
the mean, 95% of all values will fall within 2 standard deviations, and 99.7% of
all values will fall within 3 standard deviations.

A normal distribution can be completely described by its mean and standard
deviation. The extent of the deviation of investment returns is referred to as
the volatility, which is, thus, measured by the standard deviation of the
investment returns for a particular asset. Volatility differs according to the type of
asset, such as stocks and bonds. Individual assets also differ in volatility, such as
the stocks of different companies and bonds by different issuers. Volatility is
commensurate with the investment's risk, and this risk can be quantified by
calculating the standard deviation for particular investments, which is done by
measuring the historical variation in the investment returns of particular assets or
classes of assets. The greater the standard deviation, the greater the volatility,
and, therefore, the greater the risk. More volatile assets have a wider bell-shaped
curve, reflecting a greater dispersion in their returns. Likewise, 1 standard
deviation will cover a wider dispersion of investment returns for a volatile asset
than for a nonvolatile asset. Hence, more volatile assets are more likely to
outperform or underperform less volatile assets.

Standard Deviation Formula for Investment Returns

    s = √[ Σ (rk − rexpected)² / (n − 1) ]

where:
s = standard deviation
rk = specific return
rexpected = expected return
n = number of returns (sample size)
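As a quick check of this formula, a minimal Python sketch using the Sample 1 returns from the comparison table that follows (6, 4, 6, 4):

```python
import math

returns = [6, 4, 6, 4]       # Sample 1 returns from the table
n = len(returns)
expected = sum(returns) / n  # expected (mean) return

# s = sqrt( sum((rk - rexpected)^2) / (n - 1) )
variance = sum((r - expected) ** 2 for r in returns) / (n - 1)
s = math.sqrt(variance)

print(round(expected, 4))  # 5.0
print(round(s, 9))         # 1.154700538
```

The result matches the standard deviation shown in the table for both samples.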

Coefficient of Variation: Measure of Relative Risk


The greater the standard deviation, the greater the risk of an investment.
However, the standard deviation cannot be used to compare investments unless
they have the same expected return. For instance, consider the following table.

                     Sample 1      Sample 2

Return 1             6             9
Return 2             4             11
Return 3             6             9
Return 4             4             11
Expected Return      5             10
Standard Deviation   1.154700538   1.154700538
Coefficient of
Variation            0.230940108   0.115470054

On the left hand side, you have an investment with an expected return of $5
where each specific return deviates by $1 from the expected return. On the right
hand side, the specific returns also deviate by $1, but the expected return is $10.
Because the difference between the expected returns and the specific returns for
each sample is 1, the standard deviation is the same, but, nonetheless, the risk
is not the same, because $1 is only 10% of $10, but 20% of $5.

The coefficient of variation is a better measure of risk, quantifying the dispersion
of an asset's returns in relation to the expected return, and, thus, the relative risk
of the investment. Hence, the coefficient of variation allows the comparison of
different investments.

Coefficient of Variation = Standard Deviation / Average Return

In the above case, both samples have the same standard deviation, but have a
significant difference in the coefficient of variation. It is obvious that the
investment with the smaller return has the greater risk in this case.

So while the standard deviation measures the dispersion of returns, the
coefficient of variation measures their relative dispersion.
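This contrast can be reproduced with Python's standard statistics module (a small sketch using the two samples from the table above):

```python
import statistics

samples = {
    "Sample 1": [6, 4, 6, 4],    # expected return 5
    "Sample 2": [9, 11, 9, 11],  # expected return 10
}

results = {}
for name, returns in samples.items():
    sd = statistics.stdev(returns)      # sample standard deviation
    cv = sd / statistics.mean(returns)  # coefficient of variation
    results[name] = (sd, cv)
    print(f"{name}: SD = {sd:.9f}, CV = {cv:.9f}")
```

Both samples share the same standard deviation (about 1.1547), but Sample 1's CV (about 0.2309) is twice Sample 2's (about 0.1155), reflecting its greater relative risk.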

Example — Calculating the Standard Deviation and Coefficient of Variation


Using the data from Sample 1 in the above table, where the average or expected
return = 5, and the formulas for the standard deviation and coefficient of
variation, and remembering that x^(1/2) = √x, we find that:

    Standard Deviation       = [((6 − 5)² + (4 − 5)² + (6 − 5)² + (4 − 5)²) / (4 − 1)]^(1/2)
                             = (4/3)^(1/2) = 1.154700538
    Using Microsoft Excel    = STDEV(6,4,6,4) = 1.154700538
    Coefficient of Variation = 1.154700538 / 5 = 0.230940108

Microsoft Excel also has a function to calculate the standard deviation, STDEV,
using the format STDEV(number 1, number 2, ...), an example calculation is also
shown in the above table for Sample 1. You can also select the numbers in the
table as the input to the STDEV function. There is no Excel function for the
coefficient of variation, but it is simple enough to calculate, knowing the standard
deviation.

COEFFICIENT OF VARIATION

What is the Coefficient of Variation?

The coefficient of variation (CV) is a measure of relative variability. It is the
ratio of the standard deviation to the mean (average). For example, the expression
“The standard deviation is 15% of the mean” is a CV.

The CV is particularly useful when you want to compare results from two
different surveys or tests that have different measures or values, such as two
tests that have different scoring mechanisms. If sample A has a CV of 12% and
sample B has a CV of 25%, you would say that sample B has more variation,
relative to its mean.
Formula

The formula for the coefficient of variation is:

Coefficient of Variation = (Standard Deviation / Mean) * 100.

In symbols: CV = (SD / x̄) * 100, where x̄ is the mean.

Multiplying the coefficient by 100 is an optional step to get a percentage, as
opposed to a decimal.

Coefficient of Variation Example

A researcher is comparing two multiple-choice tests with different conditions. In
the first test, a typical multiple-choice test is administered. In the second test,
alternative choices (i.e. incorrect answers) are randomly assigned to test takers.
The results from the two tests are:

Regular Test Randomized Answers

Mean 59.9 44.8

SD 10.2 12.7

Trying to compare the two test results is challenging. Comparing standard
deviations doesn’t really work, because the means are also different. Calculation
using the formula CV=(SD/Mean)*100 helps to make sense of the data:

Regular Test Randomized Answers

Mean 59.9 44.8

SD 10.2 12.7

CV 17.03 28.35

Looking at the standard deviations of 10.2 and 12.7, you might think that the
tests have similar results. However, when you adjust for the difference in the
means, the results have more significance:
Regular test: CV = 17.03
Randomized answers: CV = 28.35

The coefficient of variation can also be used to compare variability between
different measures. For example, you can compare IQ scores to scores on
the Woodcock-Johnson III Tests of Cognitive Abilities.

Note: The Coefficient of Variation should only be used to compare positive data
on a ratio scale. The CV has little or no meaning for measurements on
an interval scale. Examples of interval scales include temperatures in Celsius
or Fahrenheit, while the Kelvin scale is a ratio scale that starts at zero and
cannot, by definition, take on a negative value (0 kelvin is the absence of
heat).
How to Find a Coefficient of Variation: Overview

Use the following formula to calculate the CV by hand for a population or
a sample:

    CV = (σ / μ) * 100

σ is the standard deviation for a population, which is the same as “s” for the
sample.

μ is the mean for the population, which is the same as XBar in the sample.

In other words, to find the coefficient of variation, divide the standard deviation by
the mean and multiply by 100.

How to find a coefficient of variation in Excel.

You can calculate the coefficient of variation in Excel using the formulas for
standard deviation and mean. For a given column of data (i.e. A1:A10), you
could enter =STDEV(A1:A10)/AVERAGE(A1:A10), then multiply by 100.

How to Find a Coefficient of Variation by hand: Steps.


Sample question: Two versions of a test are given to students. One test has pre-
set answers and a second test has randomized answers. Find the coefficient of
variation.

Regular Test Randomized Answers

Mean 50.1 45.8

SD 11.2 12.9

Step 1: Divide the standard deviation by the mean for the first sample:
11.2 / 50.1 = 0.22355

Step 2: Multiply Step 1 by 100:


0.22355 * 100 = 22.355%

Step 3: Divide the standard deviation by the mean for the second sample:
12.9 / 45.8 = 0.28166

Step 4: Multiply Step 3 by 100:

0.28166 * 100 = 28.166%

That’s it! Now you can compare the two results directly.
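The four steps above can be collapsed into a short Python sketch using the figures from the sample question:

```python
def coefficient_of_variation(mean, sd):
    """CV as a percentage: (SD / mean) * 100."""
    return sd / mean * 100

# Figures from the sample question above
regular = coefficient_of_variation(mean=50.1, sd=11.2)
randomized = coefficient_of_variation(mean=45.8, sd=12.9)

print(f"Regular test:       CV = {regular:.3f}%")     # 22.355%
print(f"Randomized answers: CV = {randomized:.3f}%")  # 28.166%
```

As in the hand calculation, the randomized-answer test shows more variation relative to its mean.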

In probability theory and statistics, the coefficient of variation (CV), also
known as relative standard deviation (RSD), is a standardized measure of
dispersion of a probability distribution or frequency distribution. It is often
expressed as a percentage, and is defined as the ratio of the standard
deviation σ to the mean μ (or its absolute value, |μ|). The CV or RSD is widely
used in analytical chemistry to express the precision and repeatability of an
assay. It is also commonly used in fields such as engineering or physics when
doing quality assurance studies and ANOVA gauge R&R. In addition, CV is
utilized by economists and investors in economic models and in determining the
volatility of a security.

STANDARD DEVIATION


[Figure: a plot of the normal distribution (bell-shaped curve) where each band
has a width of 1 standard deviation; see also the 68–95–99.7 rule. A companion
figure shows the cumulative probability of a normal distribution with expected
value 0 and standard deviation 1.]
In statistics, the standard deviation (SD, also represented by the Greek letter
sigma σ or the Latin letter s) is a measure that is used to quantify the amount
of variation or dispersion of a set of data values.[1] A low standard deviation
indicates that the data points tend to be close to the mean (also called the
expected value) of the set, while a high standard deviation indicates that the
data points are spread out over a wider range of values.
The standard deviation of a random variable, statistical population, data set,
or probability distribution is the square root of its variance. It
is algebraically simpler, though in practice less robust, than the average absolute
deviation.[2][3] A useful property of the standard deviation is that, unlike the
variance, it is expressed in the same units as the data. There are also other
measures of deviation from the norm, including average absolute deviation,
which provide different mathematical properties from standard deviation.[4]
In addition to expressing the variability of a population, the standard deviation is
commonly used to measure confidence in statistical conclusions. For example,
the margin of error in polling data is determined by calculating the expected
standard deviation in the results if the same poll were to be conducted multiple
times. This derivation of a standard deviation is often called the "standard error"
of the estimate or "standard error of the mean" when referring to a mean. It is
computed as the standard deviation of all the means that would be computed
from that population if an infinite number of samples were drawn and a mean for
each sample were computed. It is very important to note that the standard
deviation of a population and the standard error of a statistic derived from that
population (such as the mean) are quite different but related (related by the
inverse of the square root of the number of observations). The reported margin of
error of a poll is computed from the standard error of the mean (or alternatively
from the product of the standard deviation of the population and the inverse of
the square root of the sample size, which is the same thing) and is typically about
twice the standard deviation—the half-width of a 95 percent confidence interval.
In science, researchers commonly report the standard deviation of
experimental data, and only effects that fall much farther than two standard
deviations away from what would have been expected are
considered statistically significant—normal random error or variation in the
measurements is in this way distinguished from likely genuine effects or
associations. The standard deviation is also important in finance, where the
standard deviation on the rate of return on an investment is a measure of
the volatility of the investment.
When only a sample of data from a population is available, the term standard
deviation of the sample or sample standard deviation can refer to either the
above-mentioned quantity as applied to those data or to a modified quantity that
is an unbiased estimate of the population standard deviation (the standard
deviation of the entire population).
Basic examples

Sample standard deviation of metabolic rate of Northern Fulmars

Logan[5] gives the following example. Furness and Bryant[6] measured the
resting metabolic rate for 8 male and 6 female breeding Northern fulmars. The
table shows the Furness data set.

The graph shows the metabolic rate for males and females. By visual inspection,
it appears that the variability of the metabolic rate is greater for males than for
females.

The sample standard deviation of the metabolic rate for the female fulmars is
calculated as follows. The formula for the sample standard deviation is

    s = √[ Σ (xi − x̄)² / (N − 1) ]

where xi are the observed values of the sample items, x̄ is the mean value of
these observations, and N is the number of observations in the sample.
In the sample standard deviation formula, for this example, the numerator is the
sum of the squared deviation of each individual animal's metabolic rate from the
mean metabolic rate. The table below shows the calculation of this sum of
squared deviations for the female fulmars. For females, the sum of squared
deviations is 886047.09, as shown in the table.

The denominator in the sample standard deviation formula is N − 1, where N is
the number of animals. In this example, there are N = 6 females, so the
denominator is 6 − 1 = 5. The sample standard deviation for the female fulmars
is therefore

    s = √(886047.09 / 5) = √177209.42 ≈ 420.96

For the male fulmars, a similar calculation gives a sample standard deviation of
894.37, approximately twice as large as the standard deviation for the females.
The graph shows the metabolic rate data, the means (red dots), and the
standard deviations (red lines) for females and males.

Use of the sample standard deviation implies that these 14 fulmars are a sample
from a larger population of fulmars. If these 14 fulmars comprised the entire
population (perhaps the last 14 surviving fulmars), then instead of the sample
standard deviation, the calculation would use the population standard deviation.
In the population standard deviation formula, the denominator is N instead of
N − 1. It is rare that measurements can be taken for an entire population, so, by
default, statistical software packages calculate the sample standard deviation.
Similarly, journal articles report the sample standard deviation unless otherwise
specified.
Population standard deviation of grades of eight students

Suppose that the entire population of interest was eight students in a particular
class. For a finite set of numbers, the population standard deviation is found by
taking the square root of the average of the squared deviations of the values
from their average value. The marks of a class of eight students (that is, a
statistical population) are the following eight values:

    2, 4, 4, 4, 5, 5, 7, 9

These eight data points have the mean (average) of 5:

    (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 40 / 8 = 5

First, calculate the deviations of each data point from the mean, and square the
result of each:

    (2 − 5)² = 9    (5 − 5)² = 0
    (4 − 5)² = 1    (5 − 5)² = 0
    (4 − 5)² = 1    (7 − 5)² = 4
    (4 − 5)² = 1    (9 − 5)² = 16

The variance is the mean of these values:

    (9 + 1 + 1 + 1 + 0 + 0 + 4 + 16) / 8 = 32 / 8 = 4

and the population standard deviation is equal to the square root of the variance:

    σ = √4 = 2
This formula is valid only if the eight values with which we began form the
complete population. If the values instead were a random sample drawn from
some large parent population (for example, they were 8 marks randomly and
independently chosen from a class of 2 million), then one often divides

by 7 (which is n − 1) instead of 8 (which is n) in the denominator of the last
formula. In that case the result of the original formula would be called
the sample standard deviation. Dividing by n − 1 rather than by n gives an
the sample standard deviation. Dividing by n − 1 rather than by n gives an
unbiased estimate of the variance of the larger parent population. This is known
as Bessel's correction.[7]
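Python's statistics module exposes both conventions, which makes Bessel's correction easy to see (a small sketch on illustrative made-up data):

```python
import statistics

data = [1, 2, 3, 4, 5]  # illustrative data, mean = 3

# Population SD: divide the sum of squared deviations by n
pop_sd = statistics.pstdev(data)

# Sample SD: divide by n - 1 (Bessel's correction)
sample_sd = statistics.stdev(data)

print(round(pop_sd, 4))     # 1.4142  (sqrt of 10/5)
print(round(sample_sd, 4))  # 1.5811  (sqrt of 10/4)
```

The sample version is always slightly larger, and the gap shrinks as n grows.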
Standard deviation of average height for adult men
If the population of interest is approximately normally distributed, the standard
deviation provides information on the proportion of observations above or below
certain values. For example, the average height for adult men in the United
States is about 70 inches (177.8 cm), with a standard deviation of around
3 inches (7.62 cm). This means that most men (about 68%, assuming a normal
distribution) have a height within 3 inches (7.62 cm) of the mean (67–73 inches
(170.18–185.42 cm)) – one standard deviation – and almost all men (about 95%)
have a height within 6 inches (15.24 cm) of the mean (64–76 inches (162.56–
193.04 cm)) – two standard deviations. If the standard deviation were zero, then
all men would be exactly 70 inches (177.8 cm) tall. If the standard deviation were
20 inches (50.8 cm), then men would have much more variable heights, with a
typical range of about 50–90 inches (127–228.6 cm). Three standard deviations
account for 99.7% of the sample population being studied, assuming the
distribution is normal (bell-shaped). (See the 68-95-99.7 rule, or the empirical
rule, for more information.)
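The proportions quoted above can be checked with Python's statistics.NormalDist, using the stated mean of 70 inches and standard deviation of 3 inches:

```python
from statistics import NormalDist

# Heights of adult US men, per the example above: mean 70 in, SD 3 in
heights = NormalDist(mu=70, sigma=3)

within_1_sd = heights.cdf(73) - heights.cdf(67)  # 67-73 inches
within_2_sd = heights.cdf(76) - heights.cdf(64)  # 64-76 inches

print(f"Within 1 SD: {within_1_sd:.1%}")  # ~68.3%
print(f"Within 2 SD: {within_2_sd:.1%}")  # ~95.4%
```

The exact normal-distribution figures (68.3% and 95.4%) are what the rounded 68% and 95% in the text refer to.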
Definition of population values

Let X be a random variable with mean value μ:

    E[X] = μ

Here the operator E denotes the average or expected value of X. Then
the standard deviation of X is the quantity

    σ = √( E[(X − μ)²] ) = √( E[X²] − (E[X])² )

(derived using the properties of expected value).
In other words, the standard deviation σ (sigma) is the square root of
the variance of X; i.e., it is the square root of the average value of (X − μ)².
The standard deviation of a (univariate) probability distribution is the same as
that of a random variable having that distribution. Not all random variables have
a standard deviation, since these expected values need not exist. For example,
the standard deviation of a random variable that follows a Cauchy distribution is
undefined because its expected value μ is undefined.
Discrete random variable

In the case where X takes random values from a finite data set x1, x2, ..., xN, with
each value having the same probability, the standard deviation is

    σ = √[ (1/N) ((x1 − μ)² + (x2 − μ)² + ... + (xN − μ)²) ]

or, using summation notation,

    σ = √[ (1/N) Σ (xi − μ)² ]

If, instead of having equal probabilities, the values have different probabilities,
let x1 have probability p1, x2 have probability p2, ..., xN have probability pN. In this
case, the standard deviation will be

    σ = √[ Σ pi (xi − μ)² ]
Continuous random variable

The standard deviation of a continuous real-valued random variable X with
probability density function p(x) is

    σ = √( ∫ (x − μ)² p(x) dx ),  where μ = ∫ x p(x) dx,

and where the integrals are definite integrals taken for x ranging over the set of
possible values of the random variable X.

In the case of a parametric family of distributions, the standard deviation can be
expressed in terms of the parameters. For example, in the case of the log-normal
distribution with parameters μ and σ², the standard deviation is

    [(exp(σ²) − 1) exp(2μ + σ²)]^(1/2)

Estimation
See also: Sample variance
Main article: Unbiased estimation of standard deviation
One can find the standard deviation of an entire population in cases (such
as standardized testing) where every member of a population is sampled. In
cases where that cannot be done, the standard deviation σ is estimated by
examining a random sample taken from the population and computing
a statistic of the sample, which is used as an estimate of the population standard
deviation. Such a statistic is called an estimator, and the estimator (or the value
of the estimator, namely the estimate) is called a sample standard deviation, and
is denoted by s (possibly with modifiers). However, unlike in the case of
estimating the population mean, for which the sample mean is a simple estimator
with many desirable properties (unbiased, efficient, maximum likelihood), there is
no single estimator for the standard deviation with all these properties,
and unbiased estimation of standard deviation is a very technically involved
problem. Most often, the standard deviation is estimated using the corrected
sample standard deviation (using N − 1), defined below, and this is often referred
to as the "sample standard deviation", without qualifiers. However, other
estimators are better in other respects: the uncorrected estimator (using N) yields
lower mean squared error, while using N − 1.5 (for the normal distribution)
almost completely eliminates bias.
Uncorrected sample standard deviation
The formula for the population standard deviation (of a finite population) can be
applied to the sample, using the size of the sample as the size of the population
(though the actual population size from which the sample is drawn may be much
larger). This estimator, denoted by sN, is known as the uncorrected sample
standard deviation, or sometimes the standard deviation of the
sample (considered as the entire population), and is defined as follows:

    sN = √[ (1/N) Σ (xi − x̄)² ]

where xi are the observed values of the sample items and x̄ is the mean value of
these observations, while the denominator N stands for the size of the sample:
this is the square root of the sample variance, which is the average of
the squared deviations about the sample mean.
This is a consistent estimator (it converges in probability to the population value
as the number of samples goes to infinity), and is the maximum-likelihood
estimate when the population is normally distributed. However, this is
a biased estimator, as the estimates are generally too low. The bias decreases
as sample size grows, dropping off as 1/N, and thus is most significant for small
or moderate sample sizes; for N > 75 the bias is below 1%. Thus for very large sample

sizes, the uncorrected sample standard deviation is generally acceptable. This
estimator also has a uniformly smaller mean squared error than the corrected
sample standard deviation.
Corrected sample standard deviation

If the biased sample variance (the second central moment of the sample, which
is a downward-biased estimate of the population variance) is used to compute an
estimate of the population's standard deviation, the result is

    sN = √[ (1/N) Σ (xi − x̄)² ]

Here taking the square root introduces further downward bias, by Jensen's
inequality, due to the square root being a concave function. The bias in the
variance is easily corrected, but the bias from the square root is more difficult to
correct, and depends on the distribution in question.

An unbiased estimator for the variance is given by applying Bessel's correction,
using N − 1 instead of N to yield the unbiased sample variance, denoted s²:

    s² = (1/(N − 1)) Σ (xi − x̄)²
This estimator is unbiased if the variance exists and the sample values are
drawn independently with replacement. N − 1 corresponds to the number
of degrees of freedom in the vector of deviations from the mean,
(x1 − x̄, ..., xN − x̄).

Taking square roots reintroduces bias (because the square root is a nonlinear
function, which does not commute with the expectation), yielding the corrected
sample standard deviation, denoted by s:

    s = √[ (1/(N − 1)) Σ (xi − x̄)² ]
As explained above, while s2 is an unbiased estimator for the population
variance, s is still a biased estimator for the population standard deviation,
though markedly less biased than the uncorrected sample standard deviation.
The bias is still significant for small samples (N less than 10), and also drops off
as 1/N as sample size increases. This estimator is commonly used and generally
known simply as the "sample standard deviation".
Unbiased sample standard deviation

For unbiased estimation of standard deviation, there is no formula that works
across all distributions, unlike for mean and variance. Instead, s is used as a
basis, and is scaled by a correction factor to produce an unbiased estimate. For
the normal distribution, an unbiased estimator is given by s/c4, where the
correction factor (which depends on N) is given in terms of the Gamma function,
and equals:

    c4(N) = √(2/(N − 1)) · Γ(N/2) / Γ((N − 1)/2)
This arises because the sampling distribution of the sample standard deviation
follows a (scaled) chi distribution, and the correction factor is the mean of the chi
distribution.
An approximation can be given by replacing N − 1 with N − 1.5, yielding:

    σ̂ = √[ (1/(N − 1.5)) Σ (xi − x̄)² ]
The error in this approximation decays quadratically (as 1/N²), and it is suited
for all but the smallest samples or highest precision: for n = 3 the bias is equal
to 1.3%, and for n = 9 the bias is already less than 0.1%.
For other distributions, the correct formula depends on the distribution, but a rule
of thumb is to use the further refinement of the approximation:

    σ̂ = √[ (1/(N − 1.5 − γ2/4)) Σ (xi − x̄)² ]

where γ2 denotes the population excess kurtosis. The excess kurtosis may be
either known beforehand for certain distributions, or estimated from the data.
Reviewer 460
Management Advisory Services

Confidence interval of a sampled standard deviation

See also: Margin of error, Variance § Distribution of the sample variance,
and Student's t-distribution § Robust parametric modeling
The standard deviation we obtain by sampling a distribution is itself not
absolutely accurate, both for mathematical reasons (explained here by the
confidence interval) and for practical reasons of measurement (measurement
error). The mathematical effect can be described by the confidence interval or CI.
To show how a larger sample will make the confidence interval narrower,
consider the following examples: A small population of N = 2 has only 1 degree
of freedom for estimating the standard deviation. The result is that a 95% CI of
the SD runs from 0.45 × SD to 31.9 × SD; the factors here are as follows:

    Pr(q0.025 < k s²/σ² < q0.975) = 0.95

where q_p is the p-th quantile of the chi-square distribution with k degrees of
freedom, and 0.95 is the confidence level. This is equivalent to the following:

    Pr(k s²/q0.975 < σ² < k s²/q0.025) = 0.95

With k = 1, q0.025 = 0.000982 and q0.975 = 5.024. The reciprocals of the square
roots of these two numbers give us the factors 0.45 and 31.9 given above.
A larger population of N = 10 has 9 degrees of freedom for estimating the
standard deviation. The same computations as above give us in this case a 95%
CI running from 0.69*SD to 1.83*SD. So even with a sample population of 10,
the actual SD can still be almost a factor 2 higher than the sampled SD. For a
sample population N=100, this is down to 0.88*SD to 1.16*SD. To be more
certain that the sampled SD is close to the actual SD we need to sample a large
number of points.
These same formulae can be used to obtain confidence intervals on the variance
of residuals from a least squares fit under standard normal theory, where k is
now the number of degrees of freedom for error.
Identities and mathematical properties
The standard deviation is invariant under changes in location, and scales directly
with the scale of the random variable. Thus, for a constant c and random
variables X and Y:

  σ(c) = 0
  σ(X + c) = σ(X)
  σ(cX) = |c| σ(X)

The standard deviation of the sum of two random variables can be related to
their individual standard deviations and the covariance between them:

  σ(X + Y) = sqrt( var(X) + var(Y) + 2 cov(X, Y) )

where var and cov stand for variance and covariance, respectively.
The calculation of the sum of squared deviations can be related
to moments calculated directly from the data. In the following formula, the letter E
is interpreted to mean expected value, i.e., mean:

  σ(X) = sqrt( E[X²] − (E[X])² )

The sample standard deviation can be computed as:

  s(X) = sqrt( N / (N − 1) ) × sqrt( E[(X − E[X])²] )

For a finite population with equal probabilities at all points, we have

  σ = sqrt( (1/N) Σ x_i² − ( (1/N) Σ x_i )² )

This means that the standard deviation is equal to the square root of the
difference between the average of the squares of the values and the square of
the average value. See computational formula for the variance for proof, and for
an analogous result for the sample standard deviation.
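The "average of the squares minus the square of the average" identity can be checked numerically; this sketch (with an assumed toy data set) compares it against Python's statistics.pstdev:

```python
import math
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]      # assumed example values
n = len(data)

avg_of_squares = sum(x * x for x in data) / n   # E[X^2]
square_of_avg = (sum(data) / n) ** 2            # (E[X])^2
sigma = math.sqrt(avg_of_squares - square_of_avg)

print(sigma, statistics.pstdev(data))  # 2.0 2.0
```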
Interpretation and application
Further information: Prediction interval and Confidence interval
Example of samples from two populations with the same mean but different
standard deviations. Red population has mean 100 and SD 10; blue population
has mean 100 and SD 50.
A large standard deviation indicates that the data points can spread far from the
mean and a small standard deviation indicates that they are clustered closely
around the mean.
For example, each of the three populations {0, 0, 14, 14}, {0, 6, 8, 14} and {6, 6,
8, 8} has a mean of 7. Their standard deviations are 7, 5, and 1, respectively.
The third population has a much smaller standard deviation than the other two
because its values are all close to 7. The standard deviation has the same units as
the data points themselves. If, for instance, the data set {0, 6, 8, 14} represents the ages
of a population of four siblings in years, the standard deviation is 5 years. As
another example, the population {1000, 1006, 1008, 1014} may represent the
distances traveled by four athletes, measured in meters. It has a mean of 1007
meters, and a standard deviation of 5 meters.
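These three populations can be checked directly with Python's statistics module (a quick illustration, not part of the original text):

```python
import statistics

populations = [[0, 0, 14, 14], [0, 6, 8, 14], [6, 6, 8, 8]]
for pop in populations:
    # pstdev computes the population standard deviation
    print(statistics.mean(pop), statistics.pstdev(pop))
# each mean is 7; the standard deviations are 7.0, 5.0 and 1.0
```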
Standard deviation may serve as a measure of uncertainty. In physical science,
for example, the reported standard deviation of a group of
repeated measurements gives the precision of those measurements. When
deciding whether measurements agree with a theoretical prediction, the standard
deviation of those measurements is of crucial importance: if the mean of the
measurements is too far away from the prediction (with the distance measured in
standard deviations), then the theory being tested probably needs to be revised.
This makes sense since they fall outside the range of values that could
reasonably be expected to occur, if the prediction were correct and the standard
deviation appropriately quantified. See prediction interval.
While the standard deviation does measure how far typical values tend to be
from the mean, other measures are available. An example is the mean absolute
deviation, which might be considered a more direct measure of average
distance, compared to the root mean square distance inherent in the standard
deviation.
Application examples
The practical value of understanding the standard deviation of a set of values is
in appreciating how much variation there is from the average (mean).
Experiment, industrial and hypothesis testing
Standard deviation is often used to compare real-world data against a model to
test the model. For example, in industrial applications the weight of products
coming off a production line may need to comply with a legally required value. By
weighing some fraction of the products an average weight can be found, which
will always be slightly different from the long-term average. By using standard
deviations, a minimum and maximum value can be calculated that the averaged
weight will be within some very high percentage of the time (99.9% or more). If it
falls outside the range then the production process may need to be corrected.
Statistical tests such as these are particularly important when the testing is
relatively expensive, for example if the product needs to be opened, drained
and weighed, or if the product is otherwise used up by the test.
In experimental science, a theoretical model of reality is used. Particle
physics conventionally uses a standard of "5 sigma" for the declaration of a
discovery.[8] A five-sigma level translates to one chance in 3.5 million
that a random fluctuation would yield the result. This level of certainty was
required in order to assert that a particle consistent with the Higgs boson had
been discovered in two independent experiments at CERN,[9] and this was also
the significance level leading to the declaration of the first detection of
gravitational waves.[10]
Weather
As a simple example, consider the average daily maximum temperatures for two
cities, one inland and one on the coast. It is helpful to understand that the range
of daily maximum temperatures for cities near the coast is smaller than for cities
inland. Thus, while these two cities may each have the same average maximum
temperature, the standard deviation of the daily maximum temperature for the
coastal city will be less than that of the inland city as, on any particular day, the
actual maximum temperature is more likely to be farther from the average
maximum temperature for the inland city than for the coastal one.
Finance
In finance, standard deviation is often used as a measure of the risk associated
with price fluctuations of a given asset (stocks, bonds, property, etc.), or the risk
of a portfolio of assets[11] (actively managed mutual funds, index mutual funds, or
of a portfolio of assets[11] (actively managed mutual funds, index mutual funds, or
ETFs). Risk is an important factor in determining how to efficiently manage a
portfolio of investments because it determines the variation in returns on the
asset and/or portfolio and gives investors a mathematical basis for investment
decisions (known as mean-variance optimization). The fundamental concept of
risk is that as it increases, the expected return on an investment should increase
as well, an increase known as the risk premium. In other words, investors should
expect a higher return on an investment when that investment carries a higher
level of risk or uncertainty. When evaluating investments, investors should
estimate both the expected return and the uncertainty of future returns. Standard
deviation provides a quantified estimate of the uncertainty of future returns.
For example, assume an investor had to choose between two stocks. Stock A
over the past 20 years had an average return of 10 percent, with a standard
deviation of 20 percentage points (pp) and Stock B, over the same period, had
average returns of 12 percent but a higher standard deviation of 30 pp. On the
basis of risk and return, an investor may decide that Stock A is the safer choice,
because Stock B's additional two percentage points of return are not worth the
additional 10 pp standard deviation (greater risk or uncertainty of the expected
return). Stock B is likely to fall short of the initial investment (but also to exceed
the initial investment) more often than Stock A under the same circumstances,
and is estimated to return only two percent more on average. In this example,
Stock A is expected to earn about 10 percent, plus or minus 20 pp (a range of 30
percent to −10 percent), in about two-thirds of future year returns. When
considering more extreme possible returns or outcomes in future, an investor
should expect results of as much as 10 percent plus or minus 60 pp, or a range
from 70 percent to −50 percent, which includes outcomes for three standard
deviations from the average return (about 99.7 percent of probable returns).
Calculating the average (or arithmetic mean) of the return of a security over a
given period will generate the expected return of the asset. For each period,
subtracting the expected return from the actual return results in the difference
from the mean. Squaring the difference in each period and taking the average
gives the overall variance of the return of the asset. The larger the variance, the
greater risk the security carries. Finding the square root of this variance will give
the standard deviation of the investment tool in question.
Population standard deviation is used to set the width of Bollinger Bands, a
widely adopted technical analysis tool. For example, the upper Bollinger Band is
given as x̄ + n·σ_x. The most commonly used value for n is 2; there is about a five
percent chance of going outside, assuming a normal distribution of returns.
Financial time series are known to be non-stationary series, whereas the
statistical calculations above, such as standard deviation, apply only to stationary
series. To apply the above statistical tools to non-stationary series, the series
first must be transformed to a stationary series, enabling use of statistical tools
that now have a valid basis from which to work.
Geometric interpretation
To gain some geometric insights and clarification, we will start with a population
of three values, x1, x2, x3. This defines a point P = (x1, x2, x3) in R3. Consider the
line L = {(r, r, r) : r ∈ R}. This is the "main diagonal" going through the origin. If
our three given values were all equal, then the standard deviation would be zero
and P would lie on L. So it is not unreasonable to assume that the standard
deviation is related to the distance of P to L. That is indeed the case. To move
orthogonally from L to the point P, one begins at the point M = (x̄, x̄, x̄),
whose coordinates are the mean of the values we started out with.
A little algebra shows that the distance between P and M (which is the same as
the orthogonal distance between P and the line L) is equal to the standard
deviation of the vector (x1, x2, x3), multiplied by the square root of the number of
dimensions of the vector (3 in this case).
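This relationship is easy to verify numerically; the point (1, 4, 7) below is an arbitrary assumed example:

```python
import math
import statistics

x = [1, 4, 7]                        # arbitrary point P in R^3
m = statistics.mean(x)               # M = (m, m, m) lies on the line L
dist_pm = math.sqrt(sum((xi - m) ** 2 for xi in x))

# the distance from P to M equals the population SD times sqrt(3)
print(math.isclose(dist_pm, statistics.pstdev(x) * math.sqrt(3)))  # True
```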
Chebyshev's inequality
Main article: Chebyshev's inequality
An observation is rarely more than a few standard deviations away from the
mean. Chebyshev's inequality ensures that, for all distributions for which the
standard deviation is defined, the amount of data within a number of standard
deviations of the mean is at least as much as given in the following table.

Distance from mean    Minimum population
√2 σ                  50%
2 σ                   75%
3 σ                   89%
4 σ                   94%
5 σ                   96%
k σ                   1 − 1/k²

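A quick empirical check of the 1 − 1/k² bound on an assumed, deliberately skewed data set:

```python
import statistics

def frac_within(data, k):
    """Fraction of the data lying within k standard deviations of the mean."""
    mu, sd = statistics.mean(data), statistics.pstdev(data)
    return sum(abs(x - mu) <= k * sd for x in data) / len(data)

data = [1, 2, 2, 3, 3, 3, 4, 4, 5, 30]   # assumed sample with an outlier
for k in (2, 3, 4):
    assert frac_within(data, k) >= 1 - 1 / k**2   # Chebyshev's bound holds
print("bound satisfied for k = 2, 3, 4")
```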
Rules for normally distributed data

Dark blue is one standard deviation on either side of the mean. For the normal
distribution, this accounts for 68.27 percent of the set; while two standard
deviations from the mean (medium and dark blue) account for 95.45 percent;
three standard deviations (light, medium, and dark blue) account for 99.73
percent; and four standard deviations account for 99.994 percent. The two points
of the curve that are one standard deviation from the mean are also the inflection
points.
The central limit theorem says that the distribution of an average of many
independent, identically distributed random variables tends toward the famous
bell-shaped normal distribution with a probability density function of

  f(x) = (1 / (σ sqrt(2π))) × exp( −(x − μ)² / (2σ²) )

where μ is the expected value of the random variables, σ equals their
distribution's standard deviation divided by n^(1/2), and n is the number of random
variables. The standard deviation therefore is simply a scaling variable that
adjusts how broad the curve will be, though it also appears in the normalizing
constant.
If a data distribution is approximately normal, then the proportion of data values
within z standard deviations of the mean is given by erf(z / √2), where erf is the
error function. The proportion that is less than or equal to a number x is given by
the cumulative distribution function:

  Φ(x) = (1/2) × [ 1 + erf( (x − μ) / (σ √2) ) ].[13]
If a data distribution is approximately normal then about 68 percent of the data
values are within one standard deviation of the mean (mathematically, μ ± σ,
where μ is the arithmetic mean), about 95 percent are within two standard
deviations (μ ± 2σ), and about 99.7 percent lie within three standard deviations
(μ ± 3σ). This is known as the 68-95-99.7 rule, or the empirical rule.
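The empirical rule follows directly from the error-function expression above; a minimal check with Python's math.erf:

```python
import math

def proportion_within(z):
    """Proportion of a normal distribution within z SDs of the mean."""
    return math.erf(z / math.sqrt(2))

for z in (1, 2, 3):
    print(z, round(proportion_within(z), 4))
# 1 0.6827, 2 0.9545, 3 0.9973 -- the 68-95-99.7 rule
```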
For various values of z, the percentage of values expected to lie inside the
symmetric interval CI = (−zσ, zσ) is as follows:

z        Percentage within CI
1        ≈ 68.27%
1.645    ≈ 90%
1.96     ≈ 95%
2        ≈ 95.45%
2.576    ≈ 99%
3        ≈ 99.73%
Relationship between standard deviation and mean


The mean and the standard deviation of a set of data are descriptive
statistics usually reported together. In a certain sense, the standard deviation is a
"natural" measure of statistical dispersion if the center of the data is measured
about the mean. This is because the standard deviation from the mean is smaller
than from any other point. The precise statement is the following:
suppose x1, ..., xn are real numbers and define the function:

  σ(r) = sqrt( (1/n) Σ (x_i − r)² )

Using calculus or by completing the square, it is possible to show that σ(r) has a
unique minimum at the mean, r = x̄.
Variability can also be measured by the coefficient of variation, which is the ratio
of the standard deviation to the mean. It is a dimensionless number.
Standard deviation of the mean
Main article: Standard error of the mean
Often, we want some information about the precision of the mean we obtained.
We can obtain this by determining the standard deviation of the sampled mean.
Assuming statistical independence of the values in the sample, the standard
deviation of the mean is related to the standard deviation of the distribution by:

  σ_mean = σ / √N

where N is the number of observations in the sample used to estimate the mean.
This can easily be proven with the basic properties of the variance (statistical
independence is assumed):

  var(X̄) = var( (1/N) Σ X_i ) = (1/N²) Σ var(X_i) = σ² / N

Resulting in:

  σ_mean = sqrt( var(X̄) ) = σ / √N
It should be emphasized that, in order to estimate the standard deviation of the
mean σ_mean, it is necessary to know the standard deviation of the entire
population σ beforehand. However, in most applications this parameter is
unknown. For example, if a series of 10 measurements of a previously unknown
quantity is performed in a laboratory, it is possible to calculate the resulting
sample mean and sample standard deviation, but it is impossible to calculate the
standard deviation of the mean.
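In practice the sample standard deviation is substituted for the unknown population value, giving only an estimate of the standard deviation of the mean. A sketch with an assumed set of ten measurements:

```python
import math
import statistics

measurements = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9]
n = len(measurements)

s = statistics.stdev(measurements)   # sample SD (estimate of sigma)
sem = s / math.sqrt(n)               # estimated SD of the mean

print(round(statistics.mean(measurements), 2), round(sem, 3))  # 10.0 0.058
```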
Rapid calculation methods
See also: Algorithms for calculating variance
The following two formulas can represent a running (repeatedly updated)
standard deviation. A set of two power sums s1 and s2 are computed over a set
of N values of x, denoted as x1, ..., xN:

  s1 = Σ x_k        s2 = Σ x_k²

Given the results of these running summations, the values N, s1, s2 can be used
at any time to compute the current value of the running (population) standard
deviation:

  σ = sqrt( N·s2 − s1² ) / N

where N, as mentioned above, is the size of the set of values (or can also be
regarded as s0). Similarly for sample standard deviation:

  s = sqrt( ( N·s2 − s1² ) / ( N·(N − 1) ) )
In a computer implementation, as the three sj sums become large, we need to
consider round-off error, arithmetic overflow, and arithmetic underflow. The
method below calculates the running sums method with reduced rounding errors.
[14]
 This is a "one pass" algorithm for calculating variance of n samples without
the need to store prior data during the calculation. Applying this method to a time
series will result in successive values of standard deviation corresponding
to n data points as n grows larger with each new sample, rather than a constant-
width sliding window calculation.
For k = 1, ..., n:

  A_1 = x_1        A_k = A_{k−1} + (x_k − A_{k−1}) / k
  Q_1 = 0          Q_k = Q_{k−1} + (x_k − A_{k−1})(x_k − A_k)

where A is the mean value.
Note: Q_1 = 0 since k − 1 = 0 or x_1 = A_1.
Sample variance:  s²_n = Q_n / (n − 1)
Population variance:  σ²_n = Q_n / n
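The power-sum form of the running standard deviation can be sketched as follows (a simple illustration with assumed values; for long series the mean-and-Q update above is numerically safer):

```python
import math
import statistics

def running_pstdev(xs):
    """Yield the population SD after each new value, via power sums."""
    n = s1 = s2 = 0
    for x in xs:
        n, s1, s2 = n + 1, s1 + x, s2 + x * x
        yield math.sqrt(n * s2 - s1 * s1) / n

data = [2, 4, 4, 4, 5, 5, 7, 9]          # assumed example values
sds = list(running_pstdev(data))
print(math.isclose(sds[-1], statistics.pstdev(data)))  # True
```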
Weighted calculation
When the values xi are weighted with unequal weights wi, the power
sums s0, s1, s2 are each computed as:

  s0 = Σ w_i        s1 = Σ w_i x_i        s2 = Σ w_i x_i²

And the standard deviation equations remain unchanged. Note that s0 is now the
sum of the weights and not the number of samples N.
The incremental method with reduced rounding errors can also be applied, with
some additional complexity.
A running sum of weights must be computed for each k from 1 to n:

  W_k = W_{k−1} + w_k

and places where 1/n is used above must be replaced by wi/Wn:

  A_1 = x_1        A_k = A_{k−1} + (w_k / W_k)(x_k − A_{k−1})
  Q_1 = 0          Q_k = Q_{k−1} + w_k (x_k − A_{k−1})(x_k − A_k)

In the final division,

  σ²_n = Q_n / W_n

and

  s²_n = Q_n / (W_n − 1)

or

  s²_n = σ²_n × n′ / (n′ − 1)

where n is the total number of elements, and n′ is the number of elements with
non-zero weights. The above formulas become equal to the simpler formulas
given above if weights are taken as equal to one.
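A sketch of the weighted power-sum calculation (assumed toy values; with unit weights it reduces to the ordinary population SD):

```python
import math

def weighted_pstdev(xs, ws):
    """Population SD with weights, via power sums s0, s1, s2."""
    s0 = sum(ws)                                 # sum of weights, replaces N
    s1 = sum(w * x for x, w in zip(xs, ws))
    s2 = sum(w * x * x for x, w in zip(xs, ws))
    return math.sqrt(s0 * s2 - s1 * s1) / s0

# unit weights reproduce the unweighted result
print(weighted_pstdev([2, 4, 4, 4, 5, 5, 7, 9], [1] * 8))   # 2.0
# weighting is equivalent to repeating values: {2, 4x3, 5x2}
print(weighted_pstdev([2, 4, 5], [1, 3, 2]))                # 1.0
```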
History
The term standard deviation was first used[15] in writing by Karl Pearson[16] in
1894, following his use of it in lectures. This was as a replacement for earlier
alternative names for the same idea: for example, Gauss used mean error.[17]

c. Degree Of Operating, Financial And Total Leverage

A degree-of-leverage measure computes the percentage change in the numerator
for every one-percent change in the denominator.

What is a 'Degree Of Combined Leverage - DCL'

A degree of combined leverage (DCL) is a leverage ratio that summarizes the


combined effect that the degree of operating leverage (DOL) and the degree of
financial leverage have on earnings per share (EPS), given a particular change
in sales. This ratio can be used to help determine the optimal level of
financial and operating leverage to use in any firm.

BREAKING DOWN 'Degree Of Combined Leverage - DCL'

For illustration, the formula is: DCL = DOL × DFL = %Δ EPS / %Δ Sales.

This ratio summarizes the effects of combining financial and operating leverage,
and what effect this combination, or variations of this combination, has on the
corporation's earnings. Not all corporations use both operating and financial
leverage, but this formula can be used if they do. A firm with a relatively high
level of combined leverage is seen as riskier than a firm with less combined
leverage, as the high leverage means more fixed costs to the firm.

Degree of Operating Leverage

The degree of operating leverage measures the effects that operating leverage
has on a company's earnings potential and indicates how earnings are affected
by sales activity. The degree of operating leverage is calculated by dividing the
percentage change of a company's earnings before interest and taxes (EBIT) by
the percentage change of its sales over the same period.

Degree of Financial Leverage

The degree of financial leverage is calculated by dividing the percentage change


in a company's EPS by its percentage change in EBIT. The ratio indicates how a
company's EPS is affected by percentage changes in its EBIT. A higher degree
of financial leverage means that the company has more volatile EPS.

Degree of Combined Leverage Example

As stated previously, the degree of combined leverage may be calculated by


multiplying the degree of operating leverage by the degree of financial leverage.
Assume hypothetical company SpaceRocket had an EBIT of $50 million for the
current fiscal year and an EBIT of $40 million for the previous fiscal year, or a
25% increase year over year (YOY). SpaceRocket reported sales of $80 million
for the current fiscal year and sales of $65 million for the previous fiscal year, a
23.08% increase. Additionally, SpaceRocket reported an EPS of $2.50 for the
current fiscal year and an EPS of $2.00 for the previous fiscal year, a 25%
increase.

Therefore, SpaceRocket had a degree of operating leverage of 1.08 and a


degree of financial leverage of 1. Consequently, SpaceRocket had a degree of
combined leverage of 1.08. For every 1% change in SpaceRocket's sales, its
EPS would change by 1.08%.
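The SpaceRocket figures can be reproduced in a few lines (a sketch of the arithmetic, not an official formula library):

```python
def pct_change(new, old):
    """Percentage change from old to new."""
    return (new - old) / old * 100

ebit_chg = pct_change(50, 40)        # 25.0% (EBIT, $ millions)
sales_chg = pct_change(80, 65)       # ~23.08% (sales)
eps_chg = pct_change(2.50, 2.00)     # 25.0% (EPS)

dol = ebit_chg / sales_chg           # degree of operating leverage
dfl = eps_chg / ebit_chg             # degree of financial leverage
dcl = dol * dfl                      # degree of combined leverage
print(round(dol, 2), round(dfl, 2), round(dcl, 2))  # 1.08 1.0 1.08
```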

DEGREE OF OPERATING LEVERAGE (DOL)

DOL = %Δ EBIT / %Δ Sales = CM / EBIT = (Sales − VC) / (Sales − VC − FC)

The degree of operating leverage (DOL) is a measure used to evaluate how a


company's operating income changes with respect to a percentage change in its
sales. A company's operating leverage involves fixed costs and variable costs. A
company with a high degree of operating leverage has high fixed costs relative to
its variable costs. If the degree of operating leverage is high, the earnings before
interest and taxes (EBIT) experiences volatility with respect to a percentage
change in sales, all else remaining the same, and vice versa. There are a few
formulas you can use to calculate a company's degree of operating leverage.
The main formula used to calculate the degree of operating leverage divides the
percent change in EBIT by the percent change in sales. For example, The Walt
Disney Company (DIS) had its EBIT increase by 8.58% from 2015 to 2016, and
its sales increased by 6.04% during the same period. The degree of operating
leverage is:

%change in EBIT / %change in sales = 8.58% / 6.04% = 1.42. Therefore, if there
is a 15% increase in the company's sales, its EBIT increases by 21.3%.

The degree of operating leverage can also be calculated by subtracting variable
costs from sales and dividing it by sales minus variable costs and fixed costs. For
example, for the fiscal year ended 2016, The Walt Disney Company had sales of
$55.63 billion, fixed costs of $11.28 billion, and variable costs of $30 billion. Time
Warner Inc. (TWX), a competitor of Disney, has sales of $29.32 billion, fixed
costs of $5.47 billion, and variable costs of $16.38 billion.

Disney's degree of operating leverage is ($55.63 billion - $30 billion) / ($55.63
billion - $30 billion - $11.28 billion) = 1.78. Time Warner's degree of operating
leverage is ($29.32 billion - $16.38 billion) / ($29.32 billion - $16.38 billion - $5.47
billion) = 1.73. If both companies experience a 20% increase in sales, Disney's
profits rise by 35.6% and Time Warner's profits rise by 34.6%.
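The same figures in code (note the text rounds the Disney ratio to 1.78 before scaling; the unrounded value is roughly 1.786):

```python
def dol(sales, vc, fc):
    """DOL = contribution margin / EBIT = (S - VC) / (S - VC - FC)."""
    return (sales - vc) / (sales - vc - fc)

disney = dol(55.63, 30.0, 11.28)     # figures in $ billions
warner = dol(29.32, 16.38, 5.47)
print(round(disney, 3), round(warner, 3))  # roughly 1.786 and 1.732

# a 20% sales increase scales operating profit by about DOL x 20%
print(round(20 * disney, 1), round(20 * warner, 1))
```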

DEGREE OF FINANCIAL LEVERAGE (DFL)

DFL = EBIT / EBT

Degree of financial leverage (DFL) is a metric that measures the sensitivity of a
company's earnings per share to fluctuations in its operating income, as a result
of changes in its capital structure.

The DFL calculation focuses on EBIT with and without interest. This formula is:

DFL = EBIT/(EBIT-Interest)
DFL is best used to help a company determine an appropriate amount of debt,


and how that debt will affect its operating income. The higher the DFL, the higher
the financial risk.

ABC Company earned $500,000 in Year 1. It had no debt, so its EBIT and EBIT
– Interest are the same. The DFL ratio is 1. Now assume ABC is considering
expanding its manufacturing facility, at a cost of $1 million. If ABC borrows the
money, it will incur $60,000 in interest expenses. The decision to borrow is based
on the amount ABC’s managers think revenue will increase because of the
expansion.

Assume it is estimated that ABC’s revenue for Year 2 will increase to $600,000
as a result of the expanded business.  Now ABC’s DFL is:

Year 2 DFL = $600,000/($600,000 - $60,000) = 1.11

This means that for every 1% change in EBIT, there is a 1.11% change in EPS.
If this, in fact, does happen, then management's decision to
borrow the money paid off, because the increase in revenue more than covered
the debt incurred to fund the expansion. 
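The ABC example as a sketch (following the text, the Year 2 revenue figure of $600,000 is used as the EBIT input):

```python
def dfl(ebit, interest):
    """DFL = EBIT / (EBIT - interest), i.e. EBIT / EBT."""
    return ebit / (ebit - interest)

print(dfl(500_000, 0))                   # 1.0  (Year 1, no debt)
print(round(dfl(600_000, 60_000), 2))    # 1.11 (Year 2, after borrowing)
```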

DEGREE OF TOTAL LEVERAGE (DTL)

DTL = DFL × DOL = (%Δ EPS / %Δ EBIT) × (%Δ EBIT / %Δ Sales) = %Δ EPS / %Δ Sales

The degree of total leverage equation shows the total leverage of a company.
You can find the DTL either by multiplying the degree of operating leverage and
degree of financial leverage or by dividing the percentage change in earnings per
share by the percentage change in sales; both produce the same result. When
the result is greater than 1, the company has total leverage: changes in sales
are amplified in EPS.
DOL x DFL

The first way to figure the DTL is by multiplying the DOL by the DFL. The DOL
equals the company's percentage change in earnings before interest and taxes
divided by the company's percentage change in sales, while the DFL equals the
percentage change in earnings per share divided by the percentage change in
EBIT. For example, if the company has a 40 percent increase in EBIT, a 30
percent change in sales and a 50 percent increase in earnings per share, divide
40 by 30 to get 1.333 and 50 by 40 to get 1.25. Then, multiply 1.333 by 1.25 to
get a DTL of 1.67.
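Both routes to the DTL in the example above can be checked in a couple of lines:

```python
pct_ebit, pct_sales, pct_eps = 40, 30, 50   # percentage changes from the example

dol = pct_ebit / pct_sales     # 40 / 30 = 1.333...
dfl = pct_eps / pct_ebit       # 50 / 40 = 1.25
dtl = dol * dfl

print(round(dtl, 2))                  # 1.67
print(round(pct_eps / pct_sales, 2))  # 1.67 -- same result computed directly
```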

5. Capital Structure And Long-Term Financing Decision

CAPITAL STRUCTURE

What is a 'Capital Structure'


The capital structure is how a firm finances its overall operations and growth by using
different sources of funds. Debt comes in the form of bond issues or long-term notes
payable, while equity is classified as common stock, preferred stock or retained
earnings. Short-term debt such as working capital requirements is also considered to
be part of the capital structure.

BREAKING DOWN 'Capital Structure'

A firm's capital structure can be a mixture of long-term debt, short-term debt, common
equity and preferred equity. A company's proportion of short- and long-term debt is
considered when analyzing capital structure. When analysts refer to capital structure,
they are most likely referring to a firm's debt-to-equity (D/E) ratio, which provides
insight into how risky a company is. Usually, a company that is heavily financed by
debt has a more aggressive capital structure and therefore poses greater risk to
investors. This risk, however, may be the primary source of the firm's growth.

Debt vs. Equity

Debt is one of the two main ways companies can raise capital in the capital markets.
Companies like to issue debt because of the tax advantages. Interest payments are
tax-deductible. Debt also allows a company or business to retain ownership, unlike
equity. Additionally, in times of low interest rates, debt is abundant and easy to
access.

Equity is more expensive than debt, especially when interest rates are low. However,
unlike debt, equity does not need to be paid back if earnings decline. On the other
hand, equity represents a claim on the future earnings of the company as a part
owner.

Debt-to-Equity Ratio as a Measure of Capital Structure

Both debt and equity can be found on the balance sheet. The assets listed on the
balance sheet are purchased with this debt and equity. Companies that use more
debt than equity to finance assets have a high leverage ratio and an aggressive
capital structure. A company that pays for assets with more equity than debt has a
low leverage ratio and a conservative capital structure. That said, a high leverage
ratio and/or an aggressive capital structure can also lead to higher growth rates,
whereas a conservative capital structure can lead to lower growth rates. It is the goal
of company management to find the optimal mix of debt and equity, also referred to
as the optimal capital structure.

Analysts use the D/E ratio to compare capital structure. It is calculated by dividing
debt by equity. Savvy companies have learned to incorporate both debt and equity
into their corporate strategies. At times, however, companies may rely too heavily on
external funding, and debt in particular. Investors can monitor a firm's capital
structure by tracking the D/E ratio and comparing it against the company's peers.
LONG-TERM FINANCING DECISION

• The goal of the capital structure decision is to determine the financial leverage that maximizes the
value of the company (or minimizes the weighted average cost of capital).
• In the Modigliani and Miller theory developed without taxes, capital structure is irrelevant and has
no effect on company value.
• The deductibility of interest lowers the cost of debt and the cost of capital for the company as a
whole. Adding the tax shield provided by debt to the Modigliani and Miller framework suggests that
the optimal capital structure is all debt.
• In the Modigliani and Miller propositions with and without taxes, increasing a company’s relative
use of debt in the capital structure increases the risk for equity providers and, hence, the cost of
equity capital.
• When there are bankruptcy costs, a high debt ratio increases the risk of bankruptcy.
• Using more debt in a company’s capital structure reduces the net agency costs of equity.
• The costs of asymmetric information increase as more equity is used versus debt, suggesting the
pecking order theory of leverage, in which new equity issuance is the least preferred method of
raising capital.
• According to the static trade-off theory of capital structure, in choosing a capital structure, a
company balances the value of the tax benefit from deductibility of interest with the present value
of the costs of financial distress. At the optimal target capital structure, the incremental tax shield
benefit is exactly offset by the incremental costs of financial distress.
• A company may identify its target capital structure, but its capital structure at any point in time may
not be equal to its target for many reasons.
• Many companies have goals for maintaining a certain credit rating, and these goals are influenced
by the relative costs of debt financing among the different rating classes.
• In evaluating a company’s capital structure, the financial analyst must look at the capital structure
of the company over time, the capital structure of competitors that have similar business risk, and
company-specific factors that may affect agency costs.
• Good corporate governance and accounting transparency should lower the net agency costs of
equity.
• When comparing capital structures of companies in different countries, an analyst must consider a
variety of characteristics that might differ and affect both the typical capital structure and the debt
maturity structure.

a. Basic Concepts And Tools Of Capital Structure Management

1. 1. 1 Chapter 12 Part 2 Determining the Financing Mix Lecture Notes© 1996, Prentice Hall, Inc.
2. 2. 2Learning Objectives  Understand the concept of an optimal capital structure.  Explain the
main underpinnings of capital structure theory.  Distinguish between the independence
hypothesis and dependence hypothesis as these concepts relate to capital structure theory,
and identify the Nobel prize winners in economics who are leading proponents of the
independence hypothesis.  Understand and be able to graph the moderate position on capital
structure importance.  Incorporate the concepts of agency costs and free cash flow into a
discussion on capital structure management.  Use the basic tools of capital structure
management.  Familiarize others with corporate financing policies in practice.
3. 3. 3Planning the Firm’s Financial MixFinancial Structure and Capital Structure Financial structure
is the mix of all sources of financing used by the firm Balance Sheet Assets Liabilities Current
Liabilities Long Term Liabilities Financial Structure Equity Total Assets
4. 4. 4Planning the Firm’s Financial MixFinancial Structure and Capital Structure Financial structure
is the mix of all sources of financing used by the firm Capital structure is the mix of the long term
sources of funds Balance Sheet Assets Liabilities Current Liabilities Long Term Liabilities Capital
Structure Equity Total Assets
5. 5. 5Planning the Firm’s Financial MixFinancial Structure and Capital Structure Financial structure
is the mix of all sources of financing used by the firm Capital structure is the mix of the long term
sources of funds Capital structure is the focus of this chapter, so current liabilities will not be
included. Balance Sheet Assets Liabilities Current Liabilities Long Term Liabilities Capital Structure
Equity Total Assets
Capital Structure Theories

Choose the capital structure that minimizes the cost of capital, which in turn maximizes the stock price. There are three theories on choosing the optimal capital structure: the Independence Theory, the Dependence Theory and the Moderate Theory.

For all three theories, a simple valuation model is used:

P0 = D / kc

where P0 = price of the stock, D = constant dividend, and kc = cost of equity capital. If all earnings are paid out as dividends, there is no growth, so:

P0 = D / kc = EPS / kc

where EPS = earnings per share.
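This zero-growth valuation can be sketched in a few lines of Python; the function name and the EPS and cost-of-equity figures below are made-up illustrations, not values from the text:

```python
def zero_growth_price(eps, kc):
    """Zero-growth share price: P0 = D / kc = EPS / kc,
    assuming all earnings are paid out as dividends."""
    return eps / kc

# Hypothetical figures: EPS of 4.00 and a 10% cost of equity capital
print(round(zero_growth_price(4.00, 0.10), 2))  # 40.0
```

A higher cost of equity lowers the price, which is why minimizing the cost of capital maximizes the stock price in all three theories.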
Moderate Position

Interest is tax deductible, but the use of financial leverage increases the likelihood of bankruptcy. As leverage rises, the costs of equity and debt rise, causing a "saucer-shaped" cost of capital function. Firms should choose the level of financial leverage with the lowest cost of capital.

[Figure: capital costs kc, ko and kd plotted against financial leverage; the composite cost ko traces a saucer shape.]
Agency Costs and Capital Structure

Agency problems arise when management does not work in the best interests of the creditors. Firms incur agency costs, such as paying for outside monitors, to reassure creditors. The higher the leverage, the higher the agency costs.

[Figure: firm value plotted against financial leverage. Starting from the value of the unlevered firm, the value of the levered firm under the independence theory rises by the PV of tax shields; the actual value of the firm lies below that line by the PV of agency and bankruptcy costs.]
Basic Tools of Capital Structure Management

The use of financial leverage increases the variability of EPS (as seen by the DFL in Chapter 13). The use of financial leverage also changes EPS at any given EBIT. EBIT-EPS analysis graphically demonstrates the impact of leverage on EPS at different levels of EBIT: the EPS lines for alternative financing plans (for example, 50% leverage versus 40% leverage) intersect at the indifference point.
EBIT-EPS Analysis

Compute the EBIT at which EPS will be the same regardless of the financing plan by setting the EPS of each plan equal to the other. At the EBIT indifference level:

EPS(50% debt) = EPS(40% debt)
(EBIT - I50%)(1 - t) / S50% = (EBIT - I40%)(1 - t) / S40%

where I = interest cost of the plan and S = number of shares under the plan.

Example: $1 million of financing is currently needed. The firm can raise the money with debt costing 8%, or with stock at $10 per share. The tax rate is 40%.

50% debt plan: I = $500,000 x 8% = $40,000; S = $500,000 / $10 = 50,000 shares
40% debt plan: I = $400,000 x 8% = $32,000; S = $600,000 / $10 = 60,000 shares

(EBIT - $40,000)(1 - .40) / 50,000 = (EBIT - $32,000)(1 - .40) / 60,000

Solving for EBIT: EBIT = $80,000.
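The algebra above can be checked with a short Python sketch. The function name is illustrative; the inputs are the ones from the example. Note that the (1 - t) factor cancels, so the tax rate drops out of the indifference EBIT:

```python
def indifference_ebit(i1, s1, i2, s2):
    """EBIT at which EPS is equal under two financing plans.
    From (EBIT - i1)(1 - t)/s1 = (EBIT - i2)(1 - t)/s2, the (1 - t)
    factor cancels, giving EBIT = (s2*i1 - s1*i2) / (s2 - s1)."""
    return (s2 * i1 - s1 * i2) / (s2 - s1)

# 50%-debt plan vs. 40%-debt plan from the example
ebit = indifference_ebit(40_000, 50_000, 32_000, 60_000)
print(ebit)  # 80000.0

# EPS is identical at that EBIT under both plans
t = 0.40
eps_50 = (ebit - 40_000) * (1 - t) / 50_000
eps_40 = (ebit - 32_000) * (1 - t) / 60_000
print(round(eps_50, 2), round(eps_40, 2))  # 0.48 0.48
```

Above the $80,000 indifference level the more levered plan yields the higher EPS; below it, the less levered plan does.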
Capital Structure in Practice

The majority of financial officers believe there is an optimal capital structure for their company. Managers adapt financial leverage to the business cycle, taking advantage of debt when it is less expensive. The most important factor in determining leverage is a firm's business risk. Managers' first choice for financing new projects is to use retained earnings; only after internal funds are exhausted is managers' choice of leverage consistent with the Moderate Theory of financial leverage.

For stock investors who favor companies with good fundamentals, a strong balance sheet is an important consideration for investing in a company's stock. The strength of a company's balance sheet can be evaluated by three broad categories of investment-quality measurements: working capital adequacy, asset performance and capital structure. In this section, we'll consider the importance of capital structure.

A company's capitalization (not to be confused with market capitalization) describes its composition of permanent or long-term capital, which consists of a combination of debt and equity. A company's reasonable, proportional use of debt and equity to support its assets is a key indicator of balance sheet strength. A healthy capital structure that reflects a low level of debt and a correspondingly high level of equity is a very positive sign of financial fitness.

Clarifying Capital Structure-Related Terminology


The equity part of the debt-equity relationship is the easiest to define. In a
company's capital structure, equity consists of a company's common and
preferred stock plus retained earnings, which are summed up in the
shareholders' equity account on a balance sheet. This invested capital and debt,
generally of the long-term variety, comprises a company's capitalization and acts
as a permanent type of funding to support a company's growth and related
assets.

A discussion of debt is less straightforward. Investment literature often equates a company's debt with its liabilities. Investors should understand that there is a difference between operational and debt liabilities; it is the latter that forms the debt component of a company's capitalization. That's not the end of the debt story, however.

Among financial analysts and investment research services, there is no universal agreement as to what constitutes a debt liability. For many analysts, the debt component in a company's capitalization is simply a balance sheet's long-term debt. However, this definition is too simplistic. Investors should stick to a stricter interpretation of debt, where the debt component of a company's capitalization consists of the following: short-term borrowings (notes payable), the current portion of long-term debt, long-term debt, and two-thirds (a rule of thumb) of the principal amount of operating leases and redeemable preferred stock. Using a comprehensive total debt figure is a prudent analytical tool for stock investors.

Capital Ratios and Indicators


In general, analysts use three different ratios to assess the financial strength of a
company's capitalization structure. The first two, the debt and debt/equity ratios,
are popular measurements; however, it's the capitalization ratio that delivers the
key insights to evaluating a company's capital position.

The debt ratio compares total liabilities to total assets. Obviously, more of the
former means less equity and, therefore, indicates a more leveraged position.
The problem with this measurement is that it is too broad in scope, which, as a
consequence, gives equal weight to operational and debt liabilities. The same
criticism can be applied to the debt/equity ratio, which compares total liabilities to
total shareholders' equity. Current and non-current operational liabilities,
particularly the latter, represent obligations that will be with the company forever.
Also, unlike debt, there are no fixed payments of principal or interest attached to
operational liabilities.

The capitalization ratio (total debt/total capitalization) compares the debt component of a company's capital structure to its total capitalization, i.e., the sum of obligations categorized as debt plus total shareholders' equity. Expressed as a percentage, a low number is indicative of a healthy equity cushion, which is always more desirable than a high percentage of debt.
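The three ratios can be sketched in Python. The function names and the balance-sheet figures are illustrative assumptions, not data from the text:

```python
def debt_ratio(total_liabilities, total_assets):
    """Total liabilities / total assets (broad leverage measure)."""
    return total_liabilities / total_assets

def debt_to_equity(total_liabilities, total_equity):
    """Total liabilities / total shareholders' equity."""
    return total_liabilities / total_equity

def capitalization_ratio(total_debt, total_equity):
    """Total debt / total capitalization, where total capitalization
    is the debt component plus total shareholders' equity."""
    return total_debt / (total_debt + total_equity)

# Hypothetical figures (in millions): liabilities 60, assets 150,
# equity 90, of which 40 of the liabilities qualify as debt
print(round(debt_ratio(60, 150), 2))           # 0.4
print(round(debt_to_equity(60, 90), 2))        # 0.67
print(round(capitalization_ratio(40, 90), 2))  # 0.31
```

Note how the capitalization ratio (0.31) is lower than the broad debt ratio measures because it excludes operational liabilities from the numerator.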

Additional Evaluative Debt-Equity Considerations


Funded debt is the technical term applied to the portion of a company's long-term debt that is made up of bonds and other similar long-term, fixed-maturity types of borrowings. No matter how problematic a company's financial condition may be, the holders of these obligations cannot demand immediate and full repayment as long as the company pays the interest on its funded debt. In contrast, bank debt is usually subject to acceleration clauses and/or covenants that allow the lender to call its loan. From the investor's perspective, the greater the percentage of funded debt to total debt, the better: funded debt gives a company more wiggle room.

Factors That Influence a Company's Capital-Structure Decision

The primary factors that influence a company's capital-structure decision are as follows:

1. Business Risk
Excluding debt, business risk is the basic risk of the company's operations. The greater the business risk, the lower the optimal debt ratio.

As an example, let's compare a utility company with a retail apparel company. A utility company generally has more stability in earnings; it has less risk in its business given its stable revenue stream. A retail apparel company, however, has the potential for much more variability in its earnings. Since the sales of a retail apparel company are driven primarily by trends in the fashion industry, its business risk is much higher. Thus, a retail apparel company would have a lower optimal debt ratio, so that investors feel comfortable with the company's ability to meet the obligations of its capital structure in both good times and bad.

2. Company's Tax Exposure
Debt payments are tax deductible. As such, if a company's tax rate is high, using debt as a means of financing a project is attractive because the tax deductibility of the debt payments protects some income from taxes.

3. Financial Flexibility
Financial flexibility is essentially the firm's ability to raise capital in bad times. It should come as no surprise that companies typically have no problem raising capital when sales are growing and earnings are strong; given strong cash flow in the good times, raising capital is not hard. It is in a downturn that access tightens, so companies should make an effort to be prudent when raising capital in the good times and avoid stretching their capabilities too far. The lower a company's debt level, the more financial flexibility it has.

Let's take the airline industry as an example. In good times, the industry generates significant amounts of sales and thus cash flow. However, in bad times that situation is reversed, and the industry is in a position where it needs to borrow funds. If an airline becomes too debt ridden, it may have a decreased ability to raise debt capital during these bad times, because investors may doubt the airline's ability to service its existing debt when it has new debt loaded on top.

4. Management Style
Management styles range from aggressive to conservative. The more
conservative a management's approach is, the less inclined it is to use debt to
increase profits. An aggressive management may try to grow the firm quickly,
using significant amounts of debt to ramp up the growth of the
company's earnings per share (EPS).

5. Growth Rate
Firms that are in the growth stage of their cycle typically finance that growth
through debt by borrowing money to grow faster. The conflict that arises with this
method is that the revenues of growth firms are typically unstable and unproven.
As such, a high debt load is usually not appropriate.

More stable and mature firms typically need less debt to finance growth as their
revenues are stable and proven. These firms also generate cash flow, which can
be used to finance projects when they arise.

6. Market Conditions
Market conditions can have a significant impact on a company's capital-structure decision. Suppose a firm needs to borrow funds for a new plant. If the market is struggling, meaning that investors are limiting companies' access to capital because of market concerns, the interest rate to borrow may be higher than the company would want to pay. In that situation, it may be prudent for the company to wait until market conditions return to a more normal state before it tries to access funds for the plant.

b. Sources Of Intermediate And Long-Term Financing (Including Hybrid Financing)

HYBRID FINANCING: PREFERENCE SHARES, CONVERTIBLE DEBENTURES, WARRANTS

Hybrid financing is defined as a combined form of equity and debt: the characteristics of both equity and bonds can be found in hybrid financing. There are several types of hybrid financing, such as preference capital, convertible debentures, warrants, innovative hybrids and so on.
PREFERENCE SHARE
Preference capital carries a fixed rate of dividend which is payable at the discretion of the directors when the company has a distributable surplus. Its features include convertibility, redeemability, participation in surplus profits and assets, and voting rights.
CONVERTIBLE DEBENTURE
A convertible debenture is a debenture that is convertible, partially or fully, into equity shares. It is a debenture that can be changed into a specified number of ordinary shares at the option of the owner. The most notable feature of this instrument is that it promises the fixed income associated with a debenture as well as the chance of capital gains associated with an equity share. Because of this combination of fixed income and capital gains, the convertible debenture has been called a hybrid security.

[Figure: specimen of a convertible debenture certificate.]
WARRANT
A warrant entitles the purchaser to buy a fixed number of ordinary shares, at a particular price, during a specified time period; that is, it gives its holder the right to subscribe to the equity shares of a company during a certain period at a specified price. Warrants are issued along with debentures as 'sweeteners'.
FEATURES OF A WARRANT
Exercise price: the price at which the holder can purchase the issuing firm's ordinary shares. Exercise ratio: states the number of ordinary shares that can be purchased, at the given exercise price, per warrant. Expiration date: the date when the option to buy ordinary shares in exchange for the warrant expires. Detachability: if a warrant can be sold separately from the debenture to which it was originally attached, it is called a detachable warrant. Right: warrants entitle the holder to purchase ordinary shares; therefore, the holders of warrants are not shareholders of the company until they exercise their options.
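The exercise price and exercise ratio together determine a warrant's minimum (intrinsic) value, which can be sketched as follows; the function name and figures are illustrative assumptions:

```python
def warrant_intrinsic_value(share_price, exercise_price, exercise_ratio):
    """Minimum value of a warrant: the gain from exercising now,
    floored at zero since exercise is optional."""
    return max(share_price - exercise_price, 0.0) * exercise_ratio

# Hypothetical warrant: exercise price 20.00, exercise ratio 2 shares
print(warrant_intrinsic_value(25.00, 20.00, 2))  # 10.0
print(warrant_intrinsic_value(18.00, 20.00, 2))  # 0.0
```

This floor at zero is why holders exercise only when the market price rises above the exercise price before the expiration date.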
REASONS FOR ISSUING WARRANTS
Generally, three reasons are cited for issuing warrants. Sweetening debt: warrants help to make an issue of equity and debentures attractive. Deferred equity financing: warrants provide a company an opportunity for deferred equity financing, wherein holders exercise at the stated price only if the market price rises in the future. Cash inflow in the future: the company obtains cash when investors exercise their warrants; no such cash inflow takes place when convertible debentures are converted.
Recap of the differences between warrants and convertible debentures: warrants bring in new capital when exercised, while convertibles do not. Most convertibles are callable, while warrants are not. Warrants typically have shorter maturities than convertibles, and expire before the accompanying debt. Warrants usually provide for fewer common shares than do convertibles. Bonds with warrants typically have much higher flotation costs than do convertible issues. Bonds with warrants are often used by small start-up firms.

INTERMEDIATE SOURCES OF CAPITAL

Business finance functions span short-term, intermediate-term and long-term financing.

Short-term financing: short-term capital is funds borrowed for less than one year. It is used when companies have expanded beyond their initial capital, and the need often arises when one has neglected the task of preparing a projected cash flow budget statement. It includes trade credit obtained from creditors.

Intermediate-term financing: early-stage (intermediate) capital is funds to be paid back within a period of five years. The need arises as the need for working capital increases, and it is used for small expansion activities. It includes leasing and medium-term loans.

Long-term financing: covers long-term projects lasting longer than five years. It is used for fixed assets and real estate purchases, expensive machinery and franchise financing, and for major expansions or the acquisition of expensive equipment. It includes long-term loans and share capital.
Intermediate-term financing (intermediate source of capital) refers to borrowings with a repayment schedule of more than one year but less than ten years.
Features of intermediate-term financing:
1. Maturity: the maturity of intermediate-term financing is one to five years; sometimes the financing period can extend up to seven or ten years.
2. Size of loan: the size of the loan is generally small, because the main sources of intermediate-term financing are commercial banks and insurance companies, and commercial banks normally do not finance large business expansion.
3. Users of term financing: generally small, middle and large businesses use this loan; specifically, commercial banks finance businesses that cannot access the capital market.
4. Objective of credit: the objective of this loan is to expand capital machinery or to replace capital machinery.
5. Repayment method: repayment of the loan is made on an instalment basis, which may be the balloon payment method or the capital recovery method.
6. Security provision: although the loan is used to purchase assets, security is required to obtain this type of loan; small and middle businesses especially must provide security. Generally buildings, machinery, plant, etc. are used as security; nowadays shares, debentures, etc. are also used.
7. Cost of financing: the cost of this financing is relatively more than short-term finance and less than long-term finance.
8. Flexibility: the amount of the loan or its repayment is flexible. Suppose you have a five-year loan from a bank against a plant asset whose life is 20 or 25 years; here you can extend your loan or repayment period.
9. Renewable: an intermediate-term financing agreement is renewable; a commercial bank normally offers a renewal option to the business at the end of the loan period.
Advantages of intermediate-term financing:
1. Flexibility: the borrower can get a loan matching his or her need.
2. Low cost: the cost is less than long-term financing.
3. Convenience in repayment: the borrower can repay the loan in instalments or all at once.
4. Renewable: if the borrower fails to repay an instalment, the loan repayment period can be extended.
5. Maintaining secrecy.
6. Goodwill for the borrower.
7. Rapid financing: collecting funds from the capital market by selling shares or debentures is time consuming, so a business can obtain intermediate-term credit in a short time.
8. Control.
9. Often the only source for a small business.
10. Ownership of the asset is obtained without capital.
11. Tax advantage.
Disadvantages of intermediate-term financing:
1. It is comparatively higher cost than short-term financing.
2. Instalment payments are inconvenient if the inflow of cash is decreasing.
3. If the borrower fails to repay an instalment, the lender collects the money by selling the borrower's collateral security.
4. It is not easy for a financially weak, small or new business to obtain a loan, because banks and financial institutions give more weight to the borrower's financial solvency when considering a loan.
5. Sometimes lenders impose restrictions on the borrower which limit the borrower's power.
6. The borrower is required to keep a portion of the loan as a compensating balance.
Sources of intermediate-term financing fall under private financial institutions and the government. Private financial institutions include private commercial banks, private thrift banks, private development banks, rural banks, private savings and mortgage banks, private savings and loan associations, insurance companies, finance companies, factors and pre-need companies. Government conduits include the LBP and the DBP.

Private Financial Institution (Private Commercial Bank): a private commercial bank is an establishment that focuses on dealing with financial transactions, such as investments, loans and deposits.
Term Loan Defined: a term loan is a bank advance for a specific period, repaid with interest, usually by regular periodic payments. There are three types of term loan: a straight term loan is granted to finance fixed assets; revolving credit is a legally assured line of credit, normally extended for a two- or three-year time period; evergreen credit is a revolving credit arrangement without a stated maturity.

Term Loan Agreement: formal loan agreements are required in the granting of term loans. Loan agreement features include repayment schedules, interest rates and maximum commitments.
Common provisions of a loan agreement:
1. The borrower is required to maintain a certain amount of working capital or a given current ratio.
2. The borrower is required to furnish the creditor with audited annual financial statements and quarterly or monthly statements.
3. The borrower is prohibited from disposing of business property, except inventories.
4. The borrower is prohibited from incurring additional long-term debts or additional lease obligations.
5. The borrower is not allowed to repurchase the company's own stock.

Repayment of term loans may follow one of four patterns: (1) equal principal payments, (2) equal amortization, (3) balloon payment, or (4) deferred payment of principal with a grace period.
Equal Principal Payments (original principal PhP 100,000; loan term 10 years; annual interest rate 8%). Under this arrangement, the loan is repaid in equal amounts of principal.

Year   Principal, start of year   Interest (P x 8%)   Principal repaid   Total payment
  1            100,000                 8,000               10,000            18,000
  2             90,000                 7,200               10,000            17,200
  3             80,000                 6,400               10,000            16,400
  4             70,000                 5,600               10,000            15,600
  5             60,000                 4,800               10,000            14,800
  6             50,000                 4,000               10,000            14,000
  7             40,000                 3,200               10,000            13,200
  8             30,000                 2,400               10,000            12,400
  9             20,000                 1,600               10,000            11,600
 10             10,000                   800               10,000            10,800
Equal Amortization (original principal PhP 100,000; loan term 10 years; annual interest rate 8%). Under this arrangement, the loan is repaid in equal instalments.

Year   Principal, start of year   Interest (P x 8%)   Principal repaid   Total payment
  1          100,000.00               8,000.00             6,903.00          14,903
  2           93,097.00               7,447.76             7,455.24          14,903
  3           85,641.76               6,851.34             8,051.66          14,903
  4           77,590.10               6,207.20             8,695.80          14,903
  5           68,894.30               5,511.54             9,391.46          14,903
  6           59,502.84               4,760.22            10,142.78          14,903
  7           49,360.06               3,948.80            10,954.20          14,903
  8           38,405.86               3,072.46            11,830.54          14,903
  9           26,575.32               2,126.02            12,776.98          14,903
 10           13,798.34               1,103.86            13,798.34          14,903
Balloon Payment (original principal PhP 100,000; loan term 10 years; annual interest rate 8%). The loan is repaid in equal instalments for a number of years; then a large, final payment is made at maturity date.

Year   Principal, start of year   Interest (P x 8%)   Principal repaid   Total payment
  1          100,000.00               8,000.00             6,000.00          14,000
  2           94,000.00               7,520.00             6,480.00          14,000
  3           87,520.00               7,001.60             6,998.40          14,000
  4           80,521.60               6,441.72             7,558.28          14,000
  5           72,963.32               5,837.06             8,162.94          14,000
  6           64,800.38               5,184.03             8,815.97          14,000
  7           55,984.41               4,478.75             9,521.25          14,000
  8           46,463.16               3,717.05            10,282.95          14,000
  9           36,180.21               2,894.41            11,105.59          14,000
 10           25,074.62               2,005.96            25,074.62          27,080.58
Deferred Payment of Principal with Grace Period (original principal PhP 100,000; loan term 10 years; annual interest rate 8%). The payment of principal is deferred during the grace period, although payments of interest are made.

Year   Principal, start of year   Interest (P x 8%)   Principal repaid   Total payment
  1          100,000.00               8,000.00                 -             8,000.00
  2          100,000.00               8,000.00                 -             8,000.00
  3          100,000.00               8,000.00                 -             8,000.00
  4          100,000.00               8,000.00            11,207.20         19,207.20
  5           88,792.80               7,103.42            12,103.78         19,207.20
  6           76,689.02               6,135.12            13,072.08         19,207.20
  7           63,616.94               5,089.35            14,117.85         19,207.20
  8           49,499.09               3,959.92            15,247.28         19,207.20
  9           34,251.81               2,740.14            16,467.06         19,207.20
 10           17,784.75               1,422.78            17,784.75         19,207.53
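The repayment schedules above follow mechanical rules, so they can be regenerated with a short Python sketch (the function names are illustrative; figures differ from the tables only by rounding):

```python
def equal_principal_schedule(principal, rate, years):
    """Equal-principal schedule: principal is repaid in equal annual
    amounts; interest accrues on the declining balance."""
    rows, balance = [], principal
    repay = principal / years
    for year in range(1, years + 1):
        interest = balance * rate
        rows.append((year, balance, interest, repay, interest + repay))
        balance -= repay
    return rows

def equal_amortization_schedule(principal, rate, years):
    """Equal-amortization (capital recovery) schedule: a level total
    payment from the annuity formula P * r / (1 - (1 + r) ** -n)."""
    payment = principal * rate / (1 - (1 + rate) ** -years)
    rows, balance = [], principal
    for year in range(1, years + 1):
        interest = balance * rate
        repay = payment - interest
        rows.append((year, balance, interest, repay, payment))
        balance -= repay
    return rows

# PhP 100,000 for 10 years at 8%, as in the tables above
print(equal_principal_schedule(100_000, 0.08, 10)[0])

level = 100_000 * 0.08 / (1 - 1.08 ** -10)
print(round(level, 2))  # 14902.95, the PhP 14,903 level payment above
```

The balloon and grace-period patterns are variations of the same loop: the balloon plan fixes a smaller level payment and settles the remaining balance at maturity, while the grace-period plan pays interest only for the first years before amortizing the balance.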
Term lending by insurance companies: insurance companies are important sources of term loans. The premiums generated constitute advances to the insurance companies for periods varying from six months to five years.

Term lending by finance companies: finance companies have developed special instalment financing plans for firms acquiring machinery and equipment. Funds derived from such borrowings may be used for the following purposes: (1) as additional working capital; (2) for the purchase of machinery; (3) for the construction of additional plant and equipment; (4) for the retirement of maturing securities; (5) for buying out partners or stockholders; and (6) for the purchase of other companies.

The Government: government financial institutions have become regular sources of intermediate loans. Among them are the Land Bank of the Philippines, the Development Bank of the Philippines, the Social Security System, the Government Service Insurance System, and some others.
c. Cost Of Capital (Cost Of Long-Term Debt, Cost Of Preferred Shares, Cost Of Equity, Weighted Average Cost Of Capital, Marginal Cost Of Capital)

COST OF CAPITAL

WHAT IT IS:

Cost of capital refers to the opportunity cost of making a specific investment. It is the rate of return that could have been earned by putting the same money into a different investment with equal risk. Thus, the cost of capital is the rate of return required to persuade the investor to make a given investment.

HOW IT WORKS (EXAMPLE):

Cost of capital is determined by the market and represents the degree of risk perceived by investors. When given the choice between two investments of equal risk, investors will generally choose the one providing the higher return.

Let's assume Company XYZ is considering whether to renovate its warehouse systems. The renovation will cost $50 million and is expected to save $10 million
per year over the next 5 years. There is some risk that the renovation will not
save Company XYZ a full $10 million per year. Alternatively, Company XYZ
could use the $50 million to buy equally risky 5-year bonds in ABC Co., which
return 12% per year.

Because the renovation is expected to return 20% per year ($10,000,000 / $50,000,000), the renovation is a good use of capital, because the 20% return exceeds the 12% required return XYZ could have gotten by taking the same risk elsewhere.
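In code, the comparison in this example is just a ratio against the hurdle rate (a sketch; the figures are the ones given above):

```python
# Simple annual return of the renovation vs. the 12% available elsewhere at equal risk.
cost = 50_000_000             # renovation outlay
annual_savings = 10_000_000   # expected yearly savings
renovation_return = annual_savings / cost    # 0.20, i.e., 20% per year
bond_return = 0.12                           # equally risky ABC Co. bonds
is_good_use_of_capital = renovation_return > bond_return
```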

The return an investor receives on a company security is the cost of that security
to the company that issued it. A company's overall cost of capital is a mixture of
returns needed to compensate all creditors and stockholders. This is often called
the weighted average cost of capital and refers to the weighted average costs of
the company's debt and equity.

WHY IT MATTERS:

Cost of capital is an important component of business valuation work. Because an investor expects his or her investment to grow by at least the cost of capital, cost of capital can be used as a discount rate to calculate the fair value of an investment's cash flows.

Investors frequently borrow money to make investments, and analysts commonly make the mistake of equating cost of capital with the interest rate on that money. It is important to remember that cost of capital is not dependent upon how and where the capital was raised. Put another way, cost of capital is dependent on the use of funds, not the source of funds.

What is 'Cost Of Capital'

Cost of capital is the cost of funds used for financing a business. Cost of capital depends on the mode of financing used: it refers to the cost of equity if the business is financed solely through equity, or to the cost of debt if it is financed solely through debt.

Many companies use a combination of debt and equity to finance their businesses, and for such companies, their overall cost of capital is derived from a weighted average of all capital sources, widely known as the weighted average
cost of capital (WACC). Since the cost of capital represents a hurdle rate that a
company must overcome before it can generate value, it is extensively used in
the capital budgeting process to determine whether the company should proceed
with a project.

BREAKING DOWN 'Cost Of Capital'

The cost of various capital sources varies from company to company, and
depends on factors such as its operating history, profitability, creditworthiness,
etc. In general, newer enterprises with limited operating histories will have higher
costs of capital than established companies with a solid track record, since
lenders and investors will demand a higher risk premium for the former.

Every company has to chart out its game plan for financing the business at an
early stage. The cost of capital thus becomes a critical factor in deciding which
financing track to follow – debt, equity or a combination of the two. Early-stage
companies seldom have sizable assets to pledge as collateral for debt financing,
so equity financing becomes the default mode of funding for most of them.

The cost of debt is merely the interest rate paid by the company on such debt.
However, since interest expense is tax-deductible, the after-tax cost of debt is
calculated as: yield to maturity of debt x (1 - T), where T is the company's marginal tax rate.

The cost of equity is more complicated, since the rate of return demanded by equity investors is not as clearly defined as it is by lenders. Theoretically, the cost of equity is approximated by the Capital Asset Pricing Model (CAPM): Risk-free rate + (Company's Beta x Risk Premium).

The firm’s overall cost of capital is based on the weighted average of these
costs. For example, consider an enterprise with a capital structure consisting of
70% equity and 30% debt; its cost of equity is 10% and after-tax cost of debt is
7%. Therefore, its WACC would be (0.7 x 10%) + (0.3 x 7%) = 9.1%. This is the
cost of capital that would be used to discount future cash flows from potential
projects and other opportunities to estimate their Net Present Value (NPV) and
ability to generate value.
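The 9.1% result above can be reproduced with a short sketch (weights and costs as given in the example):

```python
# WACC = weighted average of each capital source's cost by its share of the structure.
capital_structure = {"equity": (0.70, 0.10),   # (weight, cost of equity)
                     "debt":   (0.30, 0.07)}   # (weight, after-tax cost of debt)
wacc = sum(weight * cost for weight, cost in capital_structure.values())
# 0.7 x 10% + 0.3 x 7% = 9.1%
```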

Companies strive to attain the optimal financing mix, based on the cost of capital
for various funding sources. Debt financing has the advantage of being more tax-
efficient than equity financing, since interest expenses are tax-deductible
and dividends on common shares have to be paid with after-tax dollars.
However, too much debt can result in dangerously high leverage, resulting in
higher interest rates sought by lenders to offset the higher default risk.

Cost of Long-Term Debt

What is 'Long-Term Debt'

Long-term debt consists of loans and financial obligations lasting over one year.
Long-term debt for a company would include any financing or leasing obligations
that are to come due in a greater than 12-month period. Long-term debt also
applies to governments: nations can also have long-term debt.
In the U.K., long-term debts are known as "long-term loans."

BREAKING DOWN 'Long-Term Debt'

Financial and leasing obligations, also called long-term liabilities or fixed liabilities, would include company bond issues or long-term leases that have
been capitalized on a firm's balance sheet. Often, a portion of these long-term
liabilities must be paid within the year; these are categorized as current liabilities,
and are also documented on the balance sheet. The balance sheet can be used
to track the company's debt and profitability.

On a balance sheet, the company's debts are categorized as either financial liabilities or operating liabilities. Financial liabilities refer to debts owed to
investors or stockholders; these include bonds and notes payable. Operating
liabilities refer to the leases or unsettled payments incurred in order to maintain
facilities and services for the company. These include everything from rented
building spaces and equipment to employee pension plans.

Bonds are one of the most common types of long-term debt. Companies may issue bonds to raise funds for a variety of reasons. Bond sales bring in

immediate income, but the company ends up paying for the use of investors'
capital due to interest payments.

Why Incur Long-Term Debt?

A company takes on long-term debt in order to acquire immediate capital. For example, startup ventures require substantial funds to get off the ground and pay for basic expenses, such as research, insurance, license and permit fees, equipment and supplies, and advertising and promotion. All businesses need to generate income, and long-term debt is an effective way to get immediate funds to finance operations.

Aside from need, there are many factors that go into a company's decision to
take on more or less long-term debt. During the Great Recession, many
companies learned the dangers of relying too heavily on long-term debt. In
addition, stricter regulations have been imposed to prevent businesses from
falling victim to economic volatility. This trend affected not only businesses, but
also individuals, such as homeowners.

Long-Term Debt: Helpful or Harmful?

Since debt sums tend to be large, these loans take many years to pay off.
Companies with too much long-term debt will find it hard to pay off these debts
and continue to thrive, as much of their capital is devoted to interest payments
and it can be difficult to allocate money to other areas. A company can determine
whether it has accrued too much long-term debt by examining its debt to equity
ratio.

A high debt to equity ratio means the company is funding most of its ventures
with debt. If this ratio is too high, the company is at risk of bankruptcy if it
becomes unable to finance its debt due to
decreased income or cash flow problems. A high debt to equity ratio also tends
to put a company at a disadvantage against its competitors who may have more
cash. Many industries discourage companies from taking on too much long-term
debt in order to reduce the risks and costs closely associated with unstable forms
of income, and they even pass regulations that restrict the amount of long-term
debt a company can acquire.

For example, since the Great Recession, banks have begun to scrutinize companies' balance sheets more closely, and a high level of debt can now prevent a company from getting further debt financing. Consequently, many companies are adapting to this stricter scrutiny by taking steps to reduce their long-term debt and to rely more heavily on stable sources of income.

A low debt to equity ratio is a sign that the company is growing or thriving, as it is
no longer relying on its debt and is making payments to lower it. It consequently
has more leverage with other companies and a better position in the current
financial environment. However, the company must also compare its ratio to those of its competitors, as this context helps determine economic leverage.

For example, Adobe Systems Inc. (ADBE) reported a higher amount of long-term
debt in Q2 of 2015 than it had in the previous seven years. This debt is still low
compared with many of its competitors, such as Microsoft Corp. (MSFT) and
Apple Inc. (AAPL), so Adobe retains relatively the same place in the market.
However, comparisons fluctuate with competitors such as Symantec
Corp. (SYMC) and Quintiles Transnational (Q), who carry a similar amount of
long-term debt as Adobe.
A company's long-term debt may also put bond investors at risk in an illiquid
bond market. The question of the liquidity of the bond market has become an
issue since the Great Recession, as banks that used to make markets for bond
traders have been constrained by greater regulatory oversight.

Long-term debt is not all bad, though, and in moderation, it is necessary for any
company. Think of it as a credit card for a business: in the short-term, it allows
the company to invest in the tools it needs to advance and thrive while it is still
young, with the goal of paying off the debt when the company is established and
in the financial position to do so. Without incurring long-term debt, most
companies would never get off the ground. Long-term debt is a given variable for
any company, but how much debt is acquired plays a large role in the company's
image and its future.

Bank loans and financing agreements, in addition to bonds and notes that have
maturities greater than one year, would be considered long-term debt. Other
securities such as repos and commercial paper would not be long-term debt,
because their maturities are typically shorter than one year.

What is the 'Cost of Debt'

Cost of debt refers to the effective rate a company pays on its current debt. In
most cases, this phrase refers to after-tax cost of debt, but it also refers to a
company's cost of debt before taking taxes into account. The difference in cost of
debt before and after taxes lies in the fact that interest expenses are deductible.

BREAKING DOWN 'Cost of Debt'

Cost of debt is one part of a company's capital structure, which also includes the cost of equity. A company may use various bonds, loans and other forms of

debt, so this measure is useful for giving an idea as to the overall rate being paid
by the company to use debt financing. The measure can also give investors an
idea of the riskiness of the company compared to others, because riskier
companies generally have a higher cost of debt.

How to Calculate the Cost of Debt

To calculate its cost of debt, a company needs to figure out the total amount of
interest it is paying on each of its debts for the year. Then, it divides this number
by the total of all of its debt. The quotient is its cost of debt.

For example, say a company has a $1 million loan with a 5% interest rate and a
$200,000 loan with a 6% rate. It has also issued bonds worth $2 million at a 7%
rate. The interest on the first two loans is $50,000 and $12,000, respectively, and
the interest on the bonds equates to $140,000. The total interest for the year is
$202,000. As the total debt is $3.2 million, the company's cost of debt is 6.31%.
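The same computation, sketched in code (loan figures from the example):

```python
# Pre-tax cost of debt = total annual interest / total debt outstanding.
debts = [(1_000_000, 0.05),   # loan 1: (principal, rate)
         (200_000, 0.06),     # loan 2
         (2_000_000, 0.07)]   # bonds
total_interest = sum(principal * rate for principal, rate in debts)  # 202,000
total_debt = sum(principal for principal, _ in debts)                # 3,200,000
cost_of_debt = total_interest / total_debt                           # about 6.31%
```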

How to Calculate the Cost of Debt After Taxes

To calculate after-tax cost of debt, subtract a company's effective tax rate from 1,
and multiply the difference by its cost of debt. Do not use the
company's marginal tax rate; rather, add together the company's state and
federal tax rate to ascertain its effective tax rate.

For example, if a company's only debt is a bond it has issued with a 5% rate, its
pre-tax cost of debt is 5%. If its tax rate is 40%, the difference between 100%
and 40% is 60%, and 60% of 5% is 3%. The after-tax cost of debt is 3%.

The rationale behind this calculation is based on the tax savings the company
receives from claiming its interest as a business expense. To continue with the
above example, imagine the company has issued $100,000 in bonds at a 5%
rate. Its annual interest payments are $5,000. It claims this amount as an
expense, and this lowers the company's income on paper by $5,000. As the
company pays a 40% tax rate, it saves $2,000 in taxes by writing off its interest.

As a result, the company only pays $3,000 on its debt. This equates to a 3%
interest rate on its debt.
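A sketch of the after-tax arithmetic just described, both as a rate and via the tax shield:

```python
# After-tax cost of debt = pre-tax rate x (1 - tax rate).
pretax_rate = 0.05
tax_rate = 0.40
after_tax_rate = pretax_rate * (1 - tax_rate)    # 3%

# Equivalent view via the tax shield on $100,000 of 5% bonds:
interest = 100_000 * pretax_rate                 # $5,000 deductible expense
tax_saved = interest * tax_rate                  # $2,000
net_interest_cost = interest - tax_saved         # $3,000, i.e., 3% of $100,000
```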

It is important to note that kd represents the cost to issue new debt, not the firm's existing debt.

Cost of Preferred Shares



Preferred stocks straddle the line between stocks and bonds. Technically, they
are equity securities, but they share many characteristics with debt instruments.

Preferred stocks are issued with a fixed par value and pay dividends based on a
percentage of that par at a fixed rate.

Cost of preferred stock (Rps) can be calculated as follows:

Rps = Dps / Pnet

where:

Dps = preferred dividends
Pnet = net issuing price

Example: Cost of Preferred Stock

Assume Newco's preferred stock pays a dividend of $2 per share and sells for
$100 per share. If the cost to Newco to issue new shares is 4%, what is Newco's
cost of preferred stock?

Answer:
Rps = Dps / Pnet = $2 / [$100 x (1 - 0.04)] = $2 / $96 = 2.1%
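The Newco computation, sketched in Python (figures from the example; flotation reduces the net issuing price):

```python
# Cost of preferred stock = annual dividend / net issuing price.
dividend = 2.0            # annual preferred dividend per share
price = 100.0             # selling price per share
flotation_rate = 0.04     # 4% issuance cost
net_price = price * (1 - flotation_rate)   # $96 net proceeds per share
r_ps = dividend / net_price                # about 0.0208, i.e., roughly 2.1%
```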


Next, we'll take a look at the weighted average cost of capital, a calculation that
will put our formulas for both the cost of equity and the cost of debt to work.

The Cost of Preferred Stock

Preferred stock represents a special type of ownership interest in the firm. It gives preferred stockholders the right to receive their stated dividends before any earnings can be distributed to common stockholders. Because preferred stock is a form of ownership, the proceeds from its sale are expected to be held for an infinite period of time. The one aspect of preferred stock that requires review here is dividends.

Preferred Stock Dividends. Most preferred stock dividends are stated as a dollar amount: "x dollars per year." When dividends are stated this way, the stock is often referred to as "x-dollar preferred stock." Thus a "$4 preferred stock" is expected to pay preferred stockholders $4 in dividends each year on each share of preferred stock owned. Sometimes preferred stock dividends are stated as an annual percentage rate. This rate represents the percentage of the stock's par value, or face value, that equals the annual dividend. For instance, an 8 percent preferred stock with a $50 par value would be expected to pay an annual dividend of $4 a share (0.08 x $50 par = $4). Before the cost of preferred stock is calculated, any dividends stated as percentages should be converted to annual dollar dividends.

Calculating the Cost of Preferred Stock. The cost of preferred stock, kp, is the ratio of the preferred stock dividend to the firm's net proceeds from the sale of the preferred stock. The net proceeds represent the amount of money to be received minus any flotation costs. In terms of the annual dollar dividend, Dp, and the net proceeds from the sale of the stock, Np:

kp = Dp / Np

Because preferred stock dividends are paid out of the firm's after-tax cash flows, a tax adjustment is not required.

Example: Duchess Corporation is contemplating issuance of a 10% preferred stock that is expected to sell for its $87-per-share par value. The cost of issuing and selling the stock is expected to be $5 per share. The first step in finding the cost of the stock is to calculate the dollar amount of the annual preferred dividend, which is $8.70 (0.10 x $87). The net proceeds per share from the proposed sale of stock equal the sale price minus the flotation costs ($87 - $5 = $82). Substituting the annual dividend, Dp, of $8.70 and the net proceeds, Np, of $82 into the equation gives the cost of preferred stock: kp = $8.70 / $82 = 10.6%. The cost of Duchess's preferred stock (10.6%) is much greater than the cost of its long-term debt (5.6%). This difference exists primarily because the cost of long-term debt (the interest) is tax deductible.

Cost of Equity

In finance, the cost of equity is the return (often expressed as a rate of return) a firm theoretically pays to its equity investors, i.e., shareholders, to compensate
for the risk they undertake by investing their capital. Firms need to acquire capital
from others to operate and grow. Individuals and organizations who are willing to
provide their funds to others naturally desire to be rewarded. Just as landlords
seek rents on their property, capital providers seek returns on their funds, which
must be commensurate with the risk undertaken.

Firms obtain capital from two kinds of sources: lenders and equity investors.
From the perspective of capital providers, lenders seek to be rewarded
with interest and equity investors seek dividends and/or appreciation in the value
of their investment (capital gain). A firm must pay for the capital it obtains from others; this is called its cost of capital. Such costs are
separated into a firm's cost of debt and cost of equity and attributed to these two
kinds of capital sources.

While a firm's present cost of debt is relatively easy to determine from observation of interest rates in the capital markets, its current cost of equity is unobservable and must be estimated. Finance theory and practice offer various
unobservable and must be estimated. Finance theory and practice offers various

models for estimating a particular firm's cost of equity such as the capital asset
pricing model, or CAPM. Another method is derived from the Gordon Model,
which is a discounted cash flow model based on dividend returns and eventual
capital return from the sale of the investment. Another simple method is the Bond
Yield Plus Risk Premium (BYPRP), where a subjective risk premium is added to
the firm's long-term debt interest rate. Moreover, a firm's overall cost of capital,
which consists of the two types of capital costs, can be estimated using
the weighted average cost of capital model.

According to finance theory, as a firm's risk increases or decreases, its cost of capital increases or decreases accordingly. This theory is linked to observation of human behavior and logic: capital providers expect reward for offering their funds to
behavior and logic: capital providers expect reward for offering their funds to
others. Such providers are usually rational and prudent preferring safety over
risk. They naturally require an extra reward as an incentive to place their capital
in a riskier investment instead of a safer one. If an investment's risk increases,
capital providers demand higher returns or they will place their capital elsewhere.
Knowing a firm's cost of capital is needed in order to make better decisions.
Managers make capital budgeting decisions while capital providers make
decisions about lending and investment. Such decisions can be made after
quantitative analysis that typically uses a firm's cost of capital as a model input.

The Cost of Common Stock The cost of common stock is the return required on
the stock by investors in the marketplace. There are two forms of common stock
financing: (1) retained earnings and (2) new issues of common stock. As a first
step in finding each of these costs, we must estimate the cost of common stock
equity. Finding the Cost of Common Stock Equity The cost of common stock
equity, ks, is the rate at which investors discount the expected dividends of the
firm to determine its share value. Two techniques are used to measure the cost
of common stock equity. One relies on the constantgrowth valuation model, the
other on the capital asset pricing model (CAPM). Using the Constant-Growth
Valuation (Gordon) Model In Chapter 7 we found the value of a share of stock to
be equal to the present value of all future dividends, which in one model were
assumed to grow at a constant annual rate over an infinite time horizon. This is
the constant-growth valuation model, also known as the Gordon model. The key
expression derived for this model was presented as Equation 7.4 and is restated
here: P0 (10.4) D1 ksg cost of common stock equity, ks The rate at which
investors discount the expected dividends of the firm to determine its share
value. LG3 CHAPTER 10 The Cost of Capital 397 where P0value of common
stock D1per-share dividend expected at the end of year 1 ksrequired return on
common stock gconstant rate of growth in dividends Solving Equation 10.4 for ks
results in the following expression for the cost of common stock equity: ks g
(10.5) Equation 10.5 indicates that the cost of common stock equity can be found
by dividing the dividend expected at the end of year 1 by the current price of the
stock and adding the expected growth rate. Because common stock dividends
are paid from after-tax income, no tax adjustment is required. EXAMPLE
Duchess Corporation wishes to determine its cost of common stock equity, ks.
The market price, P0, of its common stock is $50 per share. The firm expects to
Reviewer 494
Management Advisory Services

pay a dividend, D1, of $4 at the end of the coming year, 2004. The dividends
paid on the outstanding stock over the past 6 years (1998–2003) were as
follows: Using the table for the present value interest factors, PVIF (Table A–2),
or a financial calculator in conjunction with the technique described for finding
growth rates in Chapter 4, we can calculate the annual growth rate of dividends,
g. It turns out to be approximately 5% (more precisely, it is 5.05%). Substituting
D1$4, P0$50, and g5% into Equation 10.5 yields the cost of common stock
equity: ks 0.050.080.050.130, or 13 . 0 % The 13.0% cost of common stock
equity represents the return required by existing shareholders on their
investment. If the actual return is less than that, shareholders are likely to begin
selling their stock. Using the Capital Asset Pricing Model (CAPM) Recall from
Chapter 5 that the capital asset pricing model (CAPM) describes the relationship
between the required return, ks, and the nondiversifiable risk of the firm as
measured by the beta coefficient, b. The basic CAPM is ksRF[b(km RF)] (10.6)
$4 $50 Year Dividend 2003 $3.80 2002 3.62 2001 3.47 2000 3.33 1999 3.12
1998 2.97 D1 P0 capital asset pricing model (CAPM) Describes the relationship
between the required return, ks, and the nondiversifiable risk of the firm as
measured by the beta coefficient, b. 398 PART 4 Long-Term Financial Decisions
where RFrisk-free rate of return km market return; return on the market portfolio
of assets Using CAPM indicates that the cost of common stock equity is the
return required by investors as compensation for the firm’s nondiversifiable risk,
measured by beta. EXAMPLE Duchess Corporation now wishes to calculate its
cost of common stock equity, ks, by using the capital asset pricing model. The
firm’s investment advisers and its own analyses indicate that the risk-free rate,
RF, equals 7%; the firm’s beta, b, equals 1.5; and the market return, km, equals
11%. Substituting these values into Equation 10.6, the company estimates the
cost of common stock equity, ks, to be ks7.0%[1.5(11.0%7.0%)]7.0%6.0%1 3 . 0
% The 13.0% cost of common stock equity represents the required return of
investors in Duchess Corporation common stock. It is the same as that found by
using the constant-growth valuation model. The Cost of Retained Earnings As
you know, dividends are paid out of a firm’s earnings. Their payment, made in
cash to common stockholders, reduces the firm’s retained earnings. Let’s say a
firm needs common stock equity financing of a certain amount; it has two choices
relative to retained earnings: It can issue additional common stock in that amount
and still pay dividends to stockholders out of retained earnings. Or it can
increase common stock equity by retaining the earnings (not paying the cash
dividends) in the needed amount. In a strict accounting sense, the retention of
earnings increases common stock equity in the same way that the sale of
additional shares of common stock does. Thus the cost of retained earnings, kr,
to the firm is the same as the cost of an equivalent fully subscribed issue of
additional common stock. Stockholders find the firm’s retention of earnings
acceptable only if they expect that it will earn at least their required return on the
reinvested funds. Viewing retained earnings as a fully subscribed issue of
additional common stock, we can set the firm’s cost of retained earnings, kr,
equal to the cost of common stock equity as given by Equations 10.5 and 10.6.4
krks (10.7) It is not necessary to adjust the cost of retained earnings for flotation
costs, because by retaining earnings, the firm “raises” equity capital without
incurring these costs. 4. Technically, if a stockholder received dividends and
wished to invest them in additional shares of the firm’s stock, he or she would
first have to pay personal taxes on the dividends and then pay brokerage fees
Reviewer 495
Management Advisory Services

before acquiring additional shares. By using pt as the average stockholder’s


personal tax rate and bf as the average brokerage fees stated as a percentage,
we can specify the cost of retained earnings, kr, as krks(1pt)(1bf). Because of the
difficulty in estimating pt and bf, only the simpler definition of kr given in Equation
10.7 is used here. cost of retained earnings, kr The same as the cost of an
equivalent fully subscribed issue of additional common stock, which is equal to
the cost of common stock equity, ks. CHAPTER 10 The Cost of Capital 399
EXAMPLE The cost of retained earnings for Duchess Corporation was actually
calculated in the preceding examples: It is equal to the cost of common stock
equity. Thus kr equals 13.0%. As we will show in the next section, the cost of
retained earnings is always lower than the cost of a new issue of common stock,
because it entails no flotation costs. The Cost of New Issues of Common Stock
Our purpose in finding the firm’s overall cost of capital is to determine the aftertax
cost of new funds required for financing projects. The cost of a new issue of
common stock, kn, is determined by calculating the cost of common stock, net of
underpricing and associated flotation costs. Normally, for a new issue to sell, it
has to be underpriced—sold at a price below its current market price, P0. Firms
underprice new issues for a variety of reasons. First, when the market is in
equilibrium (that is, the demand for shares equals the supply of shares),
additional demand for shares can be achieved only at a lower price. Second,
when additional shares are issued, each share’s percent of ownership in the firm
is diluted, thereby justifying a lower share value. Finally, many investors view the
issuance of additional shares as a signal that management is using common
stock equity financing because it believes that the shares are currently
overpriced. Recognizing this information, they will buy shares only at a price
below the current market price. Clearly, these and other factors necessitate
underpricing of new offerings of common stock. Flotation costs paid for issuing
and selling the new issue will further reduce proceeds. We can use the constant-
growth valuation model expression for the cost of existing common stock, ks, as
a starting point. If we let Nn represent the net proceeds from the sale of new
common stock after subtracting underpricing and flotation costs, the cost of the
new issue, kn, can be expressed as follows: kn g (10.8) The net proceeds from
sale of new common stock, Nn, will be less than the current market price, P0.
Therefore, the cost of new issues, kn, will always be greater than the cost of
existing issues, ks, which is equal to the cost of retained earnings, kr. The cost of
new common stock is normally greater than any other long-term financing cost.
Because common stock dividends are paid from aftertax cash flows, no tax
adjustment is required. EXAMPLE In the constant-growth valuation example, we
found Duchess Corporation’s cost of common stock equity, ks, to be 13%, using
the following values: an expected dividend, D1, of $4; a current market price, P0,
of $50; and an expected growth rate of dividends, g, of 5%. To determine its cost
of new common stock, kn, Duchess Corporation has estimated that on the
average, new shares can be sold for $47. The $3-per-share underpricing is due
to the competitive nature of the market. A second cost associated with a new
issue is flotation costs of $2.50 per share that would be paid to issue and sell the
new shares. The total underpricing and flotation costs per share are therefore
expected to be $5.50. Subtracting the $5.50-per-share underpricing and flotation
cost from the current $50 share price results in expected net proceeds of $44.50
per share ($50.00 − $5.50). Substituting D1 = $4.00, Nn = $44.50, and g = 5% into
Equation 10.8 gives the cost of new common stock, kn:

kn = $4.00/$44.50 + 0.05 = 0.090 + 0.05 = 0.140, or 14.0%

Duchess Corporation’s cost of new common stock is therefore 14.0%. This is the
value to be used in subsequent calculations of the firm’s overall cost of capital.
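As a quick check of the arithmetic, Equation 10.8 can be sketched in Python (the figures are the Duchess Corporation numbers from the example; the function name is ours):

```python
def cost_of_new_common_stock(d1, p0, underpricing, flotation, g):
    """kn = D1 / Nn + g, where Nn = P0 - underpricing - flotation costs."""
    nn = p0 - underpricing - flotation  # net proceeds per share, Nn
    return d1 / nn + g

# Duchess Corporation: D1 = $4.00, P0 = $50, $3.00 underpricing, $2.50 flotation, g = 5%
kn = cost_of_new_common_stock(4.00, 50.00, 3.00, 2.50, 0.05)
print(round(kn, 3))  # → 0.14, i.e. 14.0%
```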

Weighted Average Cost of Capital

The weighted average cost of capital (WACC) is the rate that a company is


expected to pay on average to all its security holders to finance its assets. The
WACC is commonly referred to as the firm’s cost of capital. Importantly, it is
dictated by the external market and not by management. The WACC represents
the minimum return that a company must earn on an existing asset base to
satisfy its creditors, owners, and other providers of capital, or they will invest
elsewhere.[1]
Companies raise money from a number of sources: common stock, preferred
stock, straight debt, convertible debt, exchangeable
debt, warrants, options, pension liabilities, executive stock options, governmental
subsidies, and so on. Different securities, which represent different sources of
finance, are expected to generate different returns. The WACC is calculated
taking into account the relative weights of each component of the capital
structure. The more complex the company's capital structure, the more laborious
it is to calculate the WACC.
Companies can use WACC to see if the investment projects available to them
are worthwhile to undertake.[2]
Calculation
In general, the WACC can be calculated with the following formula:[3]

WACC = (Σ_{i=1}^{N} r_i · MV_i) / (Σ_{i=1}^{N} MV_i)

where N is the number of sources of capital (securities, types of liabilities); r_i is the
required rate of return for security i; and MV_i is the market value of all outstanding
securities i.
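A minimal Python sketch of this general formula (the numbers are illustrative, not from the text):

```python
def wacc(sources):
    """Market-value-weighted average of required returns.

    `sources` maps each capital source to (required_return, market_value)."""
    total_mv = sum(mv for _, mv in sources.values())
    return sum(r * mv / total_mv for r, mv in sources.values())

# Hypothetical firm: $600,000 of equity at 12%, $400,000 of debt at 6%
k = wacc({"equity": (0.12, 600_000), "debt": (0.06, 400_000)})
print(round(k, 3))  # → 0.096
```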
In the case where the company is financed with only equity and debt, the
average cost of capital is computed as follows:

WACC = (E / (D + E)) · Ke + (D / (D + E)) · Kd

where D is the total debt, E is the total shareholder’s equity, Ke is the cost of
equity, and Kd is the cost of debt. The market values of debt and equity should
be used when computing the weights in the WACC formula.[4]
Tax effects
Tax effects can be incorporated into this formula. For example, the WACC for a
company financed by one type of shares with the total market value of E and cost
of equity Ke, and one type of bonds with the total market value of D and cost of
debt Kd, in a country with corporate tax rate t, is calculated as:

WACC = (E / (D + E)) · Ke + (D / (D + E)) · Kd · (1 − t)
Actually carrying out this calculation has a problem: there are many plausible
proxies for each element. As a result, a fairly wide range of values for the WACC
for a given firm in a given year may appear defensible.
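The equity-and-debt case with the tax shield can be sketched as follows (the numbers are illustrative, not from the text):

```python
def wacc_after_tax(equity, debt, ke, kd, tax_rate):
    """Two-source WACC; the debt leg is reduced by the interest tax shield."""
    total = equity + debt
    return (equity / total) * ke + (debt / total) * kd * (1 - tax_rate)

# Hypothetical firm: 60% equity at 13%, 40% debt at 8%, 30% corporate tax rate
k = wacc_after_tax(600, 400, 0.13, 0.08, 0.30)
print(round(k, 4))  # → 0.1004
```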

The Weighted Average Cost of Capital Now that we have calculated the cost of
specific sources of financing, we can determine the overall cost of capital. As
noted earlier, the weighted average cost of capital (WACC), ka, reflects the
expected average future cost of funds over the long run. It is found by weighting
the cost of each specific type of capital by its proportion in the firm’s capital
structure. Calculating the Weighted Average Cost of Capital (WACC) Calculating
the weighted average cost of capital (WACC) is straightforward: Multiply the
specific cost of each form of financing by its proportion in the firm’s capital
structure and sum the weighted values. As an equation, the weighted average
cost of capital, ka, can be specified as follows:

ka = (wi × ki) + (wp × kp) + (ws × kr or n) (10.9)

where wi = proportion of long-term debt in capital structure
wp = proportion of preferred stock in capital structure
ws = proportion of common stock equity in capital structure
wi + wp + ws = 1.0

Three important points should be noted in
Equation 10.9: 1. For computational convenience, it is best to convert the
weights into decimal form and leave the specific costs in percentage terms.
2. The sum of the
weights must equal 1.0. Simply stated, all capital structure components must be
accounted for. 3. The firm’s common stock equity weight, ws, is multiplied by
either the cost of retained earnings, kr, or the cost of new common stock, kn.
Which cost is used depends on whether the firm’s common stock equity will be
financed using retained earnings, kr, or new common stock, kn. EXAMPLE In
earlier examples, we found the costs of the various types of capital for Duchess
Corporation to be as follows: cost of debt, ki = 5.6%; cost of preferred stock,
kp = 10.6%; cost of retained earnings, kr = 13.0%; and cost of new common stock,
kn = 14.0%. The company uses the following weights in calculating its weighted
average cost of capital: Because the firm expects to have a sizable amount of
retained earnings available ($300,000), it plans to use its cost of retained
earnings, kr, as the cost of common stock equity. Duchess Corporation’s
weighted average cost of capital is calculated in Table 10.1. The resulting
weighted average cost of capital for Duchess is 9.8%. Assuming an unchanged
risk level, the firm should accept all projects that will earn a return greater than
9.8%.

Source of capital       Weight
Long-term debt           40%
Preferred stock          10%
Common stock equity      50%
Total                   100%

TABLE 10.1 Calculation of the Weighted Average Cost of Capital for Duchess Corporation

                        Weight   Cost     Weighted cost [(1) × (2)]
Source of capital        (1)      (2)      (3)
Long-term debt           0.40     5.6%     2.2%
Preferred stock          0.10     10.6%    1.1%
Common stock equity      0.50     13.0%    6.5%
Totals                   1.00              9.8%

Weighted average cost of capital = 9.8%

Weighting Schemes

Weights can be calculated on the basis of
either book value or market value and using either historical or target
proportions. Book Value Versus Market Value Book value weights use
accounting values to measure the proportion of each type of capital in the firm’s
financial structure. Market value weights measure the proportion of each type of
capital at its market value. Market value weights are appealing, because the
market values of securities closely approximate the actual dollars to be received
from their sale. Moreover, because the costs of the various types of capital are
calculated by using prevailing market prices, it seems reasonable to use market
value weights. In addition, the long-term investment cash flows to which the cost
of capital is applied are estimated in terms of current as well as future market
values. Market value weights are clearly preferred over book value weights.
Historical Versus Target Historical weights can be either book or market value
weights based on actual capital structure proportions. For example, past or
current book value proportions would constitute a form of historical weighting, as
would past or current market value proportions. Such a weighting scheme would
therefore be based on real—rather than desired—proportions. Target weights,
which can also be based on either book or market values, reflect the firm’s
desired capital structure proportions. Firms using target weights establish such
proportions on the basis of the “optimal” capital structure they wish to achieve.
(The development of these proportions and the optimal structure are discussed
in detail in Chapter 11.) When one considers the somewhat approximate nature
of the calculation of weighted average cost of capital, the choice of weights may
not be critical. However, from a strictly theoretical point of view, the preferred
weighting scheme is target market value proportions, and these are assumed
throughout this chapter.

Marginal Cost of Capital

The Marginal Cost and Investment Decisions The firm’s weighted average cost
of capital is a key input to the investment decision-making process. As
demonstrated earlier in the chapter, the firm should make only those investments
for which the expected return is greater LG5 LG6 book value weights Weights
that use accounting values to measure the proportion of each type of capital in
the firm’s financial structure. CHAPTER 10 The Cost of Capital 403 break point
The level of total new financing at which the cost of one of the financing
components rises, thereby causing an upward shift in the weighted marginal cost
of capital (WMCC). than the weighted average cost of capital. Of course, at any
given time, the firm’s financing costs and investment returns will be affected by
the volume of financing and investment undertaken. The weighted marginal cost
of capital and the investment opportunities schedule are mechanisms whereby
financing and investment decisions can be made simultaneously. The Weighted
Marginal Cost of Capital (WMCC) The weighted average cost of capital may vary
over time, depending on the volume of financing that the firm plans to raise. As
the volume of financing increases, the costs of the various types of financing will
increase, raising the firm’s weighted average cost of capital. Therefore, it is
useful to calculate the weighted marginal cost of capital (WMCC), which is simply
the firm’s weighted average cost of capital (WACC) associated with its next dollar
of total new financing. This marginal cost is relevant to current decisions. The
costs of the financing components (debt, preferred stock, and common stock)
rise as larger amounts are raised. Suppliers of funds require greater returns in
the form of interest, dividends, or growth as compensation for the increased risk
introduced by larger volumes of new financing. The WMCC is therefore an
increasing function of the level of total new financing. Another factor that causes
the weighted average cost of capital to increase is the use of common stock
equity financing. New financing provided by common stock equity will be taken
from available retained earnings until this supply is exhausted and then will be
obtained through new common stock financing. Because retained earnings are a
less expensive form of common stock equity financing than the sale of new
common stock, the weighted average cost of capital will rise with the addition of
new common stock. Finding Break Points To calculate the WMCC, we must
calculate break points, which reflect the level of total new financing at which the
cost of one of the financing components rises. The following general equation
can be used to find break points:

BPj = AFj / wj (10.10)

where BPj = break point for financing source j
AFj = amount of funds available from financing source j at a given cost
wj = capital structure weight (stated in decimal form) for financing source j

EXAMPLE When Duchess Corporation exhausts its $300,000 of available
retained earnings (at kr = 13.0%), it must use the more expensive new common
stock financing (at kn = 14.0%) to meet its common stock equity needs. In addition,
the firm expects that it can borrow only $400,000 of debt at the 5.6% cost;
additional debt will have an after-tax cost (ki ) of 8.4%. Two break points
therefore exist: (1) when the $300,000 of retained earnings costing 13.0% is
exhausted, and (2) when the $400,000 of long-term debt costing 5.6% is
exhausted. The break points can
be found by substituting these values and the corresponding capital structure
weights given earlier into Equation 10.10. We get the dollar amounts of total new
financing at which the costs of the given financing sources rise:

BPcommon equity = $300,000 / 0.50 = $600,000
BPlong-term debt = $400,000 / 0.40 = $1,000,000

Calculating the WMCC
Once the
break points have been determined, the next step is to calculate the weighted
average cost of capital over the range of total new financing between break
points. First, we find the WACC for a level of total new financing between zero
and the first break point. Next, we find the WACC for a level of total new
financing between the first and second break points, and so on. By definition, for
each of the ranges of total new financing between break points, certain
component capital costs (such as debt or common equity) will increase. This will
cause the weighted average cost of capital to increase to a higher level than that
over the preceding range. Together, these data can be used to prepare a
weighted marginal cost of capital (WMCC) schedule. This is a graph that relates
the firm’s weighted average cost of capital to the level of total new financing.
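These break-point and range calculations can be sketched in Python, using the Duchess Corporation figures from the example. The helper names are ours, and, following the table that comes next in the text (Table 10.2), each component's weighted cost is rounded to one decimal place in percent before summing:

```python
# Break points (Equation 10.10) and the WACC over each financing range.
weights = {"debt": 0.40, "preferred": 0.10, "common": 0.50}

def break_point(funds_available, weight):
    """BPj = AFj / wj: total new financing at which source j's cost rises."""
    return funds_available / weight

print(round(break_point(300_000, weights["common"])))  # retained earnings → 600000
print(round(break_point(400_000, weights["debt"])))    # cheap debt → 1000000

def wacc_percent(costs):
    # Component weighted costs rounded to one decimal, as in Table 10.2.
    return round(sum(round(weights[s] * c * 100, 1) for s, c in costs.items()), 1)

ranges = {
    "$0 to $600,000":         {"debt": 0.056, "preferred": 0.106, "common": 0.130},
    "$600,000 to $1,000,000": {"debt": 0.056, "preferred": 0.106, "common": 0.140},
    "$1,000,000 and above":   {"debt": 0.084, "preferred": 0.106, "common": 0.140},
}
for label, costs in ranges.items():
    print(label, wacc_percent(costs))  # → 9.8, then 10.3, then 11.5
```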
EXAMPLE Table 10.2 summarizes the calculation of the WACC for Duchess
Corporation over the three ranges of total new financing created by the two break
points—$600,000 and $1,000,000.

TABLE 10.2 Weighted Average Cost of Capital for Ranges of Total New Financing for Duchess Corporation

Range of total new financing (1): $0 to $600,000
Source of capital    Weight (2)   Cost (3)   Weighted cost [(2) × (3)] (4)
Debt                 0.40         5.6%       2.2%
Preferred            0.10         10.6%      1.1%
Common               0.50         13.0%      6.5%
Weighted average cost of capital              9.8%

Range of total new financing (1): $600,000 to $1,000,000
Debt                 0.40         5.6%       2.2%
Preferred            0.10         10.6%      1.1%
Common               0.50         14.0%      7.0%
Weighted average cost of capital              10.3%

Range of total new financing (1): $1,000,000 and above
Debt                 0.40         8.4%       3.4%
Preferred            0.10         10.6%      1.1%
Common               0.50         14.0%      7.0%
Weighted average cost of capital              11.5%

FIGURE 10.1 WMCC Schedule: the weighted marginal cost of capital (WMCC)
schedule for Duchess Corporation plots the WACC against total new financing
(9.8% up to $600,000, 10.3% from $600,000 to $1,000,000, and 11.5% beyond
$1,000,000).

5. Because the calculated weighted average cost of capital does not apply to risk-
changing investments, we assume that all opportunities have equal risk similar to
the firm’s risk.

Comparing the costs in column 3 of the table for each of the three
ranges, we can see that the costs in the first range ($0 to $600,000) are those
calculated in earlier examples and used in Table 10.1. The second range
($600,000 to $1,000,000) reflects the increase in the common stock equity cost
to 14.0%. In the final range, the increase in the long-term debt cost to 8.4% is
introduced. The weighted average costs of capital (WACC) for the three ranges
are summarized in the table shown at the bottom of Figure 10.1. These data
describe the weighted marginal cost of capital (WMCC), which increases as
levels of total new financing increase. Figure 10.1 presents the WMCC schedule.
Again, it is clear that the WMCC is an increasing function of the amount of total
new financing raised. The Investment Opportunities Schedule (IOS) At any given
time, a firm has certain investment opportunities available to it. These
opportunities differ with respect to the size of investment, risk, and return.5 The
firm’s investment opportunities schedule (IOS) is a ranking of investment
possibilities from best (highest return) to worst (lowest return). Generally, the first
project selected will have the highest return, the next project the second highest,
and so on. The return on investments will decrease as the firm accepts additional
projects.

EXAMPLE Column 1 of
Table 10.3 shows Duchess Corporation’s current investment opportunities
schedule (IOS) listing the investment possibilities from best (highest return) to
worst (lowest return). Column 2 of the table shows the initial investment required
by each project. Column 3 shows the cumulative total invested funds necessary
to finance all projects better than and including the corresponding investment
opportunity. Plotting the project returns against the cumulative investment
(column 1 against column 3) results in the firm’s investment opportunities
schedule (IOS). A graph of the IOS for Duchess Corporation is given in Figure
10.2. Using the WMCC and IOS to Make Financing/Investment Decisions As
long as a project’s internal rate of return is greater than the weighted marginal
cost of new financing, the firm should accept the project.6 The return will
decrease with the acceptance of more projects, and the weighted marginal cost
of capital will increase because greater amounts of financing will be required.
The decision rule therefore would be: Accept projects up to the point at which the
marginal return on an investment equals its weighted marginal cost of capital.
Beyond that point, its investment return will be less than its capital cost. This
approach is consistent with the maximization of net present value (NPV) for
conventional projects for two reasons: (1) The NPV is positive as long as the IRR
exceeds the weighted average cost of capital, ka. (2) The larger the difference
between the IRR and ka, the larger the resulting NPV. Therefore, the acceptance
of projects beginning with those that have the greatest positive difference
between IRR and ka, down to the point at which IRR just equals ka, should result
in the maximum total NPV for all independent projects accepted. Such an
outcome is completely consistent with the firm’s goal of maximizing owner
wealth.

TABLE 10.3 Investment Opportunities Schedule (IOS) for Duchess Corporation

Investment     Internal rate      Initial        Cumulative
opportunity    of return (IRR)    investment     investment^a
                    (1)              (2)             (3)
A              15.0%              $100,000       $100,000
B              14.5%              200,000        300,000
C              14.0%              400,000        700,000
D              13.0%              100,000        800,000
E              12.0%              300,000        1,100,000
F              11.0%              200,000        1,300,000
G              10.0%              100,000        1,400,000

a. The cumulative investment represents the total amount invested in projects
with higher returns plus the investment required for the corresponding
investment opportunity.

6. Although net present value could be used to make these decisions, the internal
rate of return is used here because of the ease of comparison it offers.

EXAMPLE Figure 10.2 shows Duchess Corporation’s WMCC schedule and IOS
on the same set of axes. By raising $1,100,000 of new financing and investing
these funds in projects A, B, C, D,
and E, the firm should maximize the wealth of its owners, because these projects
result in the maximum total net present value. Note that the 12.0% return on the
last dollar invested (in project E) exceeds its 11.5% weighted average cost.
Investment in project F is not feasible, because its 11.0% return is less than the
11.5% cost of funds available for investment. The firm’s optimal capital budget of
$1,100,000 is marked with an X in Figure 10.2. At that point, the IRR equals the
weighted average cost of capital, and the firm’s size as well as its shareholder
value will be optimized. In a sense, the size of the firm is determined by the
market—the availability of and returns on investment opportunities, and the
availability and cost of financing. In practice, most firms operate under capital
rationing. That is, management imposes constraints that keep the capital
expenditure budget below optimal (where IRR = ka). Because of this, a gap
frequently exists between the theoretically optimal capital budget and the firm’s
actual level of financing/investment.
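The accept/reject rule just described can be sketched in Python, walking down the IOS and comparing each project's IRR with the WMCC at the resulting level of cumulative financing (figures from Table 10.3 and the WMCC schedule of Figure 10.1; the helper names are ours):

```python
# IOS projects, ranked best (highest IRR) to worst: (name, irr, initial_investment)
projects = [
    ("A", 0.150, 100_000), ("B", 0.145, 200_000), ("C", 0.140, 400_000),
    ("D", 0.130, 100_000), ("E", 0.120, 300_000), ("F", 0.110, 200_000),
    ("G", 0.100, 100_000),
]

def wmcc(total_financing):
    """Weighted marginal cost of capital by financing range (Figure 10.1)."""
    if total_financing <= 600_000:
        return 0.098
    if total_financing <= 1_000_000:
        return 0.103
    return 0.115

accepted, cumulative = [], 0
for name, irr, cost in projects:
    if irr > wmcc(cumulative + cost):  # marginal return must beat marginal cost
        accepted.append(name)
        cumulative += cost
print(accepted, cumulative)  # → ['A', 'B', 'C', 'D', 'E'] 1100000
```

As in the text, the optimal capital budget comes out at $1,100,000: project E's 12.0% return still exceeds the 11.5% marginal cost, while project F's 11.0% does not.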

Marginal cost of capital (MCC) schedule is a graph that relates the


firm's weighted average cost of each unit of capital to the total amount of new
capital raised.
The WACC is the minimum rate of return a firm can earn while still meeting its
financial obligations such as debt repayments, interest payments, and dividends.
Therefore, the WACC averages the required returns from all long-term financing
sources (debt and equity).
The WACC is applied to cash flows, which are after-tax; by the same notion, the
WACC should be calculated on an after-tax basis.
WACC components
Debt
Advantages:
 usually cheaper than equity
 no loss of control (voting rights)


 upper limit is placed on share of profits
 flotation costs are typically lower than equity
 interest expense is tax deductible
Disadvantages:
 legally obliged to make payments no matter how tight the funds on hand are
 in the case of bonds, full face value comes due at one time
 taking on more debt = taking on more financial risk (more systematic risk) requiring higher cash
flows

The firm's debt component is stated as kd, and since there is a tax benefit from
interest payments, the after-tax WACC component is kd(1 − T), where T is
the tax rate.
Equity
Advantages:
 no legal obligation to pay (depends on class of shares)
 no maturity
 lower financial risk
 it could be cheaper than debt, with good prospects of profitability
Disadvantages:
 new equity dilutes current ownership share of profits and voting rights (control)
 cost of underwriting equity is much higher than debt
 too much equity = target for a leveraged buy-out by another firm
 no tax shield, dividends are not tax deductible, and may exhibit double-taxation
Three ways of calculating Ke:
1. Capital Asset Pricing Model
2. Dividend Discount Method
3. Bond Yield Plus Risk Premium Approach
Cost of new equity should be the cost adjusted for any underwriting fees, termed
flotation costs (F):

Ke = D1 / [P0(1 − F)] + g

where F = flotation costs, D1 is the expected dividend, P0 is the price of the
stock, and g is the growth rate.
Weighted average cost of capital equation:
WACC= (Wd)[(Kd)(1-t)]+ (Wpf)(Kpf)+ (Wce)(Kce)

The marginal cost of capital (MCC) is the cost of the last dollar of capital raised,
essentially the cost of another unit of capital raised. As more capital is raised, the
marginal cost of capital rises. 
With the weights and costs given in our previous example, we computed
Newco's weighted average cost of capital as follows:
WACC = (wd)(kd)(1-t) + (wps)(kps) + (wce)(kce)


WACC = (0.4)(0.07)(1-0.4) + (0.05)(0.021) + (0.55)(0.12) 
WACC = 0.084, or 8.4%
We originally determined the WACC for Newco to be 8.4%. Newco's cost of
capital will remain unchanged as new debt, preferred stock and retained
earnings are issued until the company's retained earnings are depleted.
Example: Marginal Cost of Capital
Once retained earnings are depleted, Newco decides to access the capital
markets to raise new equity. As in our previous example for Newco, assume the
company's stock is selling for $40, its expected ROE is 10%, next year's dividend
is $2.00 and the company expects to pay out 30% of its earnings. Additionally,
assume the company has a flotation cost of 5%. Newco's cost of new equity (kc)
is thus 12.3%, as calculated below:

kc = $2.00 / [$40(1 − 0.05)] + 0.07 = 0.123, or 12.3%
Using this new cost of equity, we can determine the WACC as follows:
WACC = (wd)(kd)(1-t) + (wps)(kps) + (wce)(kce)
WACC = (0.4)(0.07)(1-0.4) + (0.05)(0.021) + (0.55)(0.123) 
WACC = 0.086, or 8.6%
The WACC has been stepped up from 8.4% to 8.6% given Newco's need to
raise new equity.
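The step-up can be reproduced in Python (Newco weights and costs from the example; the function name is ours):

```python
def wacc(wd, kd, t, wps, kps, wce, kce):
    """Weighted average cost of capital with after-tax debt."""
    return wd * kd * (1 - t) + wps * kps + wce * kce

# Before: common equity funded from retained earnings at 12%
before = wacc(0.4, 0.07, 0.4, 0.05, 0.021, 0.55, 0.12)
# After retained earnings are depleted: new common equity at 12.3%
after = wacc(0.4, 0.07, 0.4, 0.05, 0.021, 0.55, 0.123)
print(before, after)  # roughly 0.084 and 0.086, i.e. 8.4% and 8.6%
```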
Figure 11.1 illustrates that, as the company continues to raise capital, the MCC
can be higher than the WACC.

MCC Vs. WACC


The marginal cost of capital is simply the weighted average cost of the last dollar
of capital raised. As mentioned previously, in making capital decisions, a
company keeps with a target capital structure. There comes a point, however,
when retained earnings have been depleted and new common stock has to be
used. When this occurs, the company's cost of capital increases. This is known
as the "breakpoint" and can be calculated as follows:

Formula 11.9

Breakpoint(retained earnings) = retained earnings / wce

Example:
For Newco, assume we expect it to earn $50 million next year. As mentioned in
our previous examples, Newco's payout ratio is 30%. What is Newco's
breakpoint on the marginal cost curve, if we assume wce = 55%?
Answer:
Newco's breakpoint = [$50 million × (1 − 0.3)] / 0.55 = $63.6 million
Thus, after Newco raises roughly $64 million of total capital, new common equity
will need to be issued and Newco's WACC will increase to 8.6%.
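Formula 11.9 can be sketched in a few lines of Python (the function name is ours; figures from the Newco example):

```python
def retained_earnings_breakpoint(net_income, payout_ratio, w_ce):
    """Total new capital a firm can raise before issuing new common stock."""
    retained = net_income * (1 - payout_ratio)  # earnings kept in the firm
    return retained / w_ce

bp = retained_earnings_breakpoint(50_000_000, 0.30, 0.55)
print(round(bp / 1_000_000, 1))  # → 63.6 (millions of dollars)
```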
Factors that affect the cost of capital can be categorized as those that are
controlled by the company and those that are not.


III. MANAGEMENT CONSULTANCY


A. Management Consultancy Practice by Certified Public Accountants (CPAs)
1. Nature, Objectives, and Scope of Management Consultancy Engagements

NATURE

Management advisory services by independent accounting firms can be described as


the function of providing professional advisory (consulting) services, the primary
purpose of which is to improve the client’s use of its capabilities and resources to
achieve the objectives of the organization.

Management consulting can also be described as an independent and objective


advisory service provided by qualified persons to clients in order to help them identify
and analyze management problems or opportunities. Management consultants also
recommend solutions or suggested actions with respect to these issues and help,
when requested, in their implementation.

In essence, management consultants help to effect constructive change in private or


public sector organization through the sound application of substantive and process
skills.

These activities of management consultants can involve two types of encounters with
clients:

 Consultations; and
 Engagements

A consultation normally consists of providing advice and information during a short


time frame.

An engagement consists of that form of management advisory or consulting service


in which an analytical approach and process is applied in a study or project.

OBJECTIVES

An independent accounting firm’s purpose in engaging in MAS is:

“To utilize the essential qualifications it has available to provide advice and technical
assistance which will enable client management to conduct its affairs effectively.”

These essential qualifications are based in part on attributes acquired in conducting


other aspects of practice and include

1) Technical competence.
2) Familiarity with the client’s finance and control systems and his business
problems.
3) Analytical ability and experience in problem solution.
4) Professional independence, objectivity, and integrity.

Management consultants are generally engaged by key administrators of


organizations although they are not expected to be as familiar with the organization
as are the managers and administrators.

A management consultant is hired for at least four valuable reasons:

1) Independent viewpoint
2) Professional advisor and counselor
3) Temporary professional service
4) Agent of change

SCOPE

A consultant’s dream, or a nightmare for company and consultant alike? And
what is this terrible thing?

Simply, an unscoped, unplanned engagement exercise for that consultant.

A management team will often engage the services of a consultant or contractor in


some shape or form during the business year.

This person is brought in to, perhaps:

 Run a project,
 Bring about organisational change,
 Implement new practices and procedures,
 Deliver specific expertise,


 Put some control on a situation that appears to have gotten out of control.

All very worthy reasons to engage a professional in the necessary space.

However what can happen, particularly in larger organisations, is that what started as
a short term engagement can result in the consultant or contractor being in the
company for months or even years.

This is allowed to happen for a few reasons.

 The consultancy objectives and outcomes were not clearly defined and agreed at
the beginning. The consultant is at fault for not highlighting the absence of a
defined scope, and the company management is at fault for not ensuring that the
scope existed in the first place.
 The consultant becomes involved with areas outside of the specific scope remit
and becomes, in essence, an operational resource i.e. like a standard member of
the team.
 The in-house skills are insufficient to cover once the consultant leaves i.e. the
competence in the organisation is absent.

The outcomes of the above are;


 The initial engagement normally does not end up delivering what is required
of it.
 The budget originally assigned for the activity is well and truly shattered.
 The consultant’s reputation can become tarnished. They will feel they have
failed to deliver, and you will feel you have not seen the benefits of their
engagement.
 The consultant becomes a key dependency within the organisation.

These are just some of the consequences of an uncontrolled engagement and none
of it does either party any good!

To prevent these types of issues occurring, the scope of the engagement needs to


be crystal clear to both parties and both have an obligation to flag if there is any doubt
at all.

Further, if the business ends up using the consultant for “other things” normally
covered by an operational, full-time employee, then the business case for doing so
needs to be addressed and understood. If there is benefit to that activity, great; if
not, it should be stopped straight away.

In reality, if you are using contractors or consultants to fulfil longer-term key roles
within your organisation, then you would do better to bite the bullet and engage a
full-time member of staff, or to examine the existing team’s skill-sets and ensure
that you have the right team on board.

So when you are hiring any sort of external consultant or contract resource, ensure
that the lines are drawn as to what they will/will not be responsible for and what is
expected as the end result.
The Consulting Industry

 Information technology (IT)


 Consulting and system integration
 Corporate strategy
 Operations management
 Human resources management
 Outsourcing

2. Professional Attributes of Management Consultants

In an article, Professor Owen Cherrington of Brigham Young University
enumerated the professional attributes of consultants. According to this renowned
academician, a management consultant must possess the following broad areas of
skills:

 Technical Skills

These include both understanding and experience in a technical discipline – such


as information technology, marketing, engineering, and organizational behavior.

 Interpersonal Skills

These include personal attributes that make an individual amiable among people
and effective in accomplishing desirable objectives through people.

 Consulting Process Skills

These involve the ability to understand and use the following approach in solving
business problems:

 Identify the cause of problems or inefficiencies


 Identify alternative solutions
 Select the most desirable alternative, and
 Implement the chosen solution

3. Areas, Stages And Management Of Management Consultancy Engagements

AREAS

Dimensions

 Nature of the problem
 Service delivery area
 Phase(s) of the analytic process
 Techniques and methodologies applied
 Industry (or nature of organization) to which the client belongs, and
 Geographical area(s) where the engagement takes place

Management consultancy services by CPAs can be categorized as follows:

Traditional Services:

 Managerial Accounting
 Design and appraisal of accounting system
 Financial Management-related services
 Project Feasibility Studies

Emerging Consultancy Services:

 Global Risk Management Solution
 Transaction Services
 Financial Advisory Services
 Project Finance and Privatization
 Valuation Services
 Business Recovery Services
 Dispute Analysis and Investigations
 Computer Risk Management
 Application Software Selection and Implementation

STAGES

Generally, a management consulting engagement involves the following stages:

 Negotiating the engagement
 Engagement planning
 Conducting a consulting assignment
 Problem identification and solution
 Identification of suitable and accurate sources of information
 Data analysis and diagnosis
 Solution development
 Preparation and presentation of the report and recommended solution
 Implementation
 Follow-up evaluation of the implemented solution
 Evaluating the engagement and post-engagement follow-up

Solution development is the third phase of the problem solving process. The steps
involved in this phase are:

 Generation of solution alternatives
 Evaluation of solution alternatives
 Choice of the preferred solution alternative
 Detailed development of the selected solution

MANAGEMENT

Project evaluation and controls provide the means of successfully administering the
work plan, which defines what tasks are to be performed, when the tasks are to be
performed, and who will perform them. Without this information, the consultant and
client management would have no means of knowing how well a project is
progressing.

The specific objectives of project evaluation and controls are:

 To provide assurance that the project is on schedule and within budget.
 To communicate the exact project status to all concerned personnel.
 To ensure that a quality product will be implemented.

The project controls generally include the following:

 Administrative controls
 Time reporting procedures
 Independent quality assurance reviews

B. Project Feasibility Studies


1. Nature, Purpose And Components (Economic/Marketing, Technical And
Financial)

NATURE

A project feasibility study is the systematic investigation which ascertains whether a
business undertaking is viable and, if so, the degree of its profitability.

What is a 'Feasibility Study'

A feasibility study is an analysis of how successfully a project can be completed,
accounting for factors that affect it such as economic, technological, legal and
scheduling factors. Project managers use feasibility studies to determine potential
positive and negative outcomes of a project before investing a considerable amount
of time and money into it.

BREAKING DOWN 'Feasibility Study'

For example, a small school looking to expand its campus might perform a feasibility
study to determine if it should follow through, taking into account material and labor
costs, how disruptive the project would be to the students, the public opinion of the
expansion, and laws that might have an effect on the expansion.

A feasibility study tests the viability of an idea, a project or even a new business. The
goal of a feasibility study is to place emphasis on potential problems that could occur
if a project is pursued and determine if, after all significant factors are considered, the
project should be pursued. Feasibility studies also allow a business to address where
and how it will operate, potential obstacles, competition and the funding needed to
get the business up and running.

It is a systematic gathering and analysis of data and information which aims to find
out the practicability and profitability of a proposed business undertaking.

PURPOSE

The feasibility study provides a base – technical, economic, and commercial – for an
investment decision on an industrial project. A feasibility study is not an end in itself,
but only a means to arrive at an investment decision. It will define and analyze the
critical elements that relate to the production of a given product together with
alternative approaches to such production. It will likewise provide a project of a
defined production capacity at a selected location, using a particular technology in
relation to defined materials and inputs, at identified investment and production costs
and sales revenues yielding a defined return on investment.

One of the most important uses of a project feasibility study is the minimization of the
risk of failure of business ventures, thereby reducing the waste of valuable resources.
It helps detect immediate causes of business failure such as (1) the undetected
presence of a superior competing product; (2) failure to perfect the manufacturing
process; (3) failure to sell the goods at a reasonable price; and (4) failure to raise
adequate working capital, among many others.

COMPONENTS

There are several components of a feasibility study:

Description – a layout of the business, the products and/or services to be offered and
how they will be delivered.

Market feasibility – describes the industry, the current and future market potential,
competition, sales estimations and prospective buyers.

Technical feasibility – lays out details on how a good or service will be delivered,
which includes transportation, business location, technology needed, materials and
labor.

Financial feasibility – a projection of the amount of funding or startup capital needed,
what sources of capital can and will be used, and what kind of return can be expected
on the investment.

Organizational feasibility – a definition of the corporate and legal structure of the
business; this may include information about the founders, their professional
background and the skills they possess necessary to get the company off the ground
and keep it operational.

A Feasibility Study is a formal project document that shows the results of the analysis,
research and evaluation of a proposed project and determines if the project is
technically feasible, cost-effective and profitable. The primary goal of a feasibility
study is to assess and prove the economic and technical viability of the business idea.
A project feasibility study allows exploring and analysing business opportunities and
making a strategic decision on the necessity to initiate the project. For each project
passing through the Initiation Phase, a feasibility study should be developed so that
investors can be assured the project is technically feasible, cost-effective and
profitable. A thorough feasibility study can give you the right answer before you
spend money, time and resources on an idea that is not viable. It must therefore be
conducted with an objective, unbiased approach to provide information upon which
decisions can be based.
If you are planning on conducting a feasibility study, you will need to include the
following important elements:

The project scope: The first step is to clearly define the business problem/opportunity
that has to be addressed. The project scope has to be definitive and to the point.
Rambling narratives serve no purpose and can actually confuse participants. Also
ensure that you define the parts of the business that would be affected either directly
or indirectly. This would include project participants and end-users. A well-defined
project scope can ensure an accurate feasibility study. Starting a project without a
well-defined scope can easily lead to wandering outside budget and time.

The current Market analysis: This step is critical as it examines the business
environment in which the new product or service will be placed.  From this analysis,
you can discover the strengths and weaknesses of the current approach. Reviewing
the strengths, weaknesses, opportunities, and threats faced by a project helps
decision makers focus on the big picture. In some organizations, the executives may
not want to approach a new market unless they know they can dominate it. Other
companies prefer to focus on profits gained instead of market share.  

The requirements: This component covers two groups of requirements:
technical requirements and organizational requirements. If there is a potential market
and demand for the product or service then you need to identify what technical and
resource requirements are needed for the new venture.  You will need to define your
requirements depending on the objective of your project. Project managers that
understate the physical and fiscal resources required for a new product or service
often end up with failed projects or unfulfilled promises.

The approach: You will next have to consider and choose the recommended solution
or course of action to meet your requirements. You can consider various alternatives
and then choose the solution that is most preferable. Before you finalize the
approach, ask yourself the following questions: Does the approach meet my
requirements? Is the approach taken a practical and viable solution?

Evaluation: Examines the cost-effectiveness of the selected approach and the
estimated total cost of the project. Other alternatives will also be estimated for
comparison purposes. After the total cost of the project has been calculated, an
evaluation and cost summary will be prepared to include a return on investment,
cost/benefit analysis etc.

Review: Finally, all the above elements will be assembled into a feasibility study and
a formal review will be conducted. The review will be used to verify the accuracy of
the feasibility study and to make a project decision. At this stage, you can approve or
reject the study, or revise it before making a decision. If the feasibility study is
approved, make sure that all the involved parties sign the document.

Economic / Marketing

Before the project is formulated, the size and composition of the present effective
market demand, by segment, should be determined in order to estimate the possible
degree of market penetration by a particular product. Also, the income from sales
should be projected taking into account technology, plant capacity, production
program, and marketing strategy. The latter has to be set up during the feasibility
study giving due consideration to product pricing, promotional measures, distribution
systems, and costs.

Once the sales projections are available, a detailed production program should be
made showing the various production activities and their timing. The final step at this
stage of a feasibility study is to determine the plant capacity taking into account
alternative levels of production, investment outlay and sales revenues.

Technical

The technical aspect of a project feasibility study will cover the following:

Production Program
 Data and alternatives
 Selection of production program
 Estimated costs of emissions disposal
Plant Capacity
 Data and alternatives
 Determination of feasible normal plant capacity
Materials and Inputs
 Data and alternatives
 Supply program
Location and Site
 Location
o Data and alternatives
 Site
o Data and alternatives
o Site selection
o Cost estimate
Process Engineering
 Project layouts
 Scope of project
 Technology(ies)
 Equipment
 Civil engineering works

Financial

This aspect requires determination / evaluation of the following:

Total Investment Costs
 Initial fixed investment costs
 Pre-operating expenditures
 Minimum net working capital requirement
o Projected operating costs (production, selling & administrative cost)
Project Financing
 Determining funds requirement using required financial statements
o Projected Statement of Comprehensive Income
o Projected Statement of Financial Position
o Projected Cash Flow Statement
 Determining sources of financing
Commercial Profitability Criteria
 Net Present Value
 Internal Rate of Return
 Break-even Time or Discounted Payback Period
 Payback Period
 Simple Rate of Return
 Break-even Analysis
 Sensitivity Analysis
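The commercial profitability criteria listed above can be sketched numerically. The project cash flows below are hypothetical, chosen only to illustrate the mechanics of net present value, internal rate of return and the simple payback period:

```python
# Illustrative sketch of common commercial profitability criteria.
# The cash flows are hypothetical, not taken from the reviewer.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cash_flows):
    """Years until cumulative undiscounted cash flow turns non-negative."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None  # never recovered within the horizon

flows = [-100_000, 40_000, 40_000, 40_000, 40_000]  # hypothetical project
print(round(npv(0.10, flows), 2))  # NPV at an assumed 10% hurdle rate
print(round(irr(flows), 4))        # IRR
print(payback_period(flows))       # simple payback in years
```

Under these assumed flows the NPV is positive at a 10% hurdle rate and the IRR exceeds it, so the project would pass both criteria; the discounted payback and sensitivity checks would follow the same pattern with discounted or perturbed cash flows.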

2. Analysis Of Project Revenue And Costs Under Specific Assumptions

“The will to win means nothing without the will to prepare.”
– Juma Ikangaa, New York City Marathon

The development of realistic financial planning documents for a business is an
important process. The following pages provide you with tips that, if followed, will
result in the completion of financial forecasts worthy of presentation to lenders,
investors, and others. The development of a good financial plan takes a team effort
which involves your internal accounting/bookkeeping team, your external
accountants, your management team, Alberta Agriculture and Rural Development
staff, and you as the owner.

By reading through the following pages you will receive a high level understanding of
the following:
1. The purpose of good financial planning
2. The approach to arriving at realistic Start-up or Expansion Costs
3. The up-front homework and planning process in developing Key Assumptions for
sales, cost of production and general and administration expenses
4. The up-front homework and planning process required in developing Key
Assumptions for cash flow planning
5. An overview and an example of a Balance Sheet and Income Statement
6. The importance of accurate Cash Flow Planning
7. Overview of Key Financial Performance Ratios – purpose and formulas
8. Comments on suggestions for monitoring the financial plan developed
TIP: Remember it takes time, good research and a great team effort to achieve a
realistic financial plan on which good decisions can be made!

Introduction

Entrepreneurs, start-up companies, or existing companies will utilize and require the
development of numerous financial documents during the planning and operational
stages. Each plays an important role in planning and managing your business. Some
may be used in the earliest stages - simply to determine whether or not your
proposed or existing business is feasible or sustainable. Others will be used to
provide information that will enable you to attract partners, investors or financing
capital; while some will monitor and benchmark your business activities on an
ongoing basis.

The structure of your business will determine the variation and format of some of the
financial documents that you will utilize. The typical business structures are: sole
proprietorship, partnerships or corporations. Additional types of business structures
may possibly include new generation co-ops or joint ventures. Your financial and/or
legal professional will assist you in determining the structure best suited to your
business needs. 

Critical business decisions need to be made before you invest significant time and
capital. It is important to adequately complete market research, hold discussions with
possible suppliers and be able to place estimated costs into models that will enable
you to more accurately complete feasibility assessments.

The development of your financial documents is an important step in bringing your
new start-up business, or new product launch, to reality. Once prepared, these
financial documents will assist you in attracting investors, satisfying the needs of your
lenders, and monitoring your business on an ongoing basis.

Building these documents requires utilizing key assumptions. These key assumptions
are the building blocks of information that are collected and used to develop your
financial and business plans - and to help make critical decisions based on solid
information. Key assumptions are critical to all aspects of the financial forecasts –
balance sheets, income statements, cash flow, business plans and so on. They
include detailed forecasted sales volumes, cost of sales, general administration
expenses, and others.

Tip: It is important to understand that all three financial statements (Balance Sheet,
Income Statement, and Cash Flow) are related and connected indicators of the
business's feasibility, risk and profitability.

As you go through the preparation of your financial documents and business plans,
you will need to document and sort the information that is used to create these
documents. A spreadsheet (or combination of several spreadsheets) is one of the
most effective tools for gathering, compiling and managing this information. 

Tip: Linking your spreadsheets to one another and merging the data together will
make it much simpler and faster to update your documents.

It is highly recommended that you discuss your business start-up or expansion idea in
advance with your financial coach so he or she may provide you with guidance in the
key assumptions they suggest or recommend. They may help you develop detailed
spreadsheets, and provide supporting comments. 

Tip: The greater the accuracy of the key assumptions/information that is used in the
initial planning stages of your business - the greater will be your ability to make good
business decisions moving forward. Utilize your suppliers and other business
contacts (as needed) to aid you in gathering up-to-date information.

Not all assumptions require a detailed breakdown. Your financial professional will aid
you in finding the best spreadsheet tools suited to your needs. Every business is
unique and therefore each may require additional or specific information to be
collected. 

Start-up Costs

What will it cost to get your business off the ground or implement expansion plans?
Begin collecting the data. Talk to potential suppliers for initial pricing of supplies and
materials. If you require capital, make some early inquiries to determine anticipated
borrowing expenses and terms. 

As you collect your information, keep a record of the information you gather. Below is
a simple example of a common Start-up/Expansion Capital Worksheet. This example
shows some of the basic information that would commonly be used in a start-up
business. 

Combine and add your own specific information that is right for your business. 

Tip: You should use startup cost planning for a start-up company and also when
expanding your business or launching a new product line. Customize the spreadsheet
for your own purposes.


In addition to tracking the total estimated costs of starting up your business, this
particular spreadsheet example also allows you to assign the source(s) of the capital
required.
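The worksheet idea described above can be sketched as a small script that totals the estimated costs and groups them by funding source. The line items, amounts and sources below are illustrative assumptions, not figures from the text:

```python
# Hypothetical start-up/expansion capital worksheet: each item carries an
# estimated cost and an assumed funding source.

startup_items = [
    # (item, estimated cost, funding source) -- all figures are assumptions
    ("Leasehold improvements",  25_000, "owner equity"),
    ("Equipment",               60_000, "term loan"),
    ("Initial inventory",       15_000, "owner equity"),
    ("Licences and permits",     2_000, "owner equity"),
    ("Working capital reserve", 18_000, "operating line"),
]

total = sum(cost for _, cost, _ in startup_items)

# Group the required capital by its assumed source of funding.
by_source = {}
for _, cost, source in startup_items:
    by_source[source] = by_source.get(source, 0) + cost

print(f"Total start-up cost: {total:,}")
for source, amount in by_source.items():
    print(f"  {source}: {amount:,}")
```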

Figure 1-2

Key Assumptions for Planning Forecasts 

Similar to startup or expansion costs, you need to investigate and give careful
consideration to the development of other key data that would be utilized in the
completion of the opening balance sheet, forecasted profit and loss statements and
the development of cash flows. 

One of the first key assumptions that needs to be addressed in the startup of a new
business venture, and/or expansion, is the source of equity and/or debt. This is the
assumption around the contributions to be made to the business by ownership,
whether sole proprietor, partners, or shareholders. Contributions can take the form of
cash contributions through share purchase, shareholders'/partners' loans, and
contributions of assets in return for equity. You would be advised to develop a
spreadsheet that shows the timing and amount of each contribution and the terms on
which they are being made. The spreadsheet should show contributions both at the
formation of the business and throughout the planning period.

Key Assumptions - Cost of Production and Sales 

Production costs need to be forecasted. The production cost is determined by
researching and accurately determining the cost of all inputs that make up your
manufacturing costs. These costs should include all material, labour, service and
manufacturing overhead requirements needed in the development of your products.

Prior to forecasting your sales projections and revenue, you need to calculate a
realistic cost for your product(s) and break the cost down into a per unit basis. The
cost must include all production inputs: raw materials, utilities (power/water etc),
packaging, handling expenses and any other items involved in production. Labour
costs associated with production should be addressed here as well. Below is an
example of a basic worksheet to calculate product cost.
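A minimal version of such a product-cost worksheet might look like the sketch below. Every batch figure is a placeholder to be replaced with researched costs for your own product:

```python
# Sketch of a per-unit product cost worksheet; all figures are placeholders.

batch_size = 1_000  # units produced per run (assumption)

batch_costs = {
    "raw materials":           4_200.00,
    "packaging":                 600.00,
    "utilities (power/water)":   350.00,
    "direct labour":           2_400.00,
    "handling":                  250.00,
    "manufacturing overhead":    900.00,
}

# Per-unit cost = all production inputs for the batch / units in the batch.
total_batch_cost = sum(batch_costs.values())
cost_per_unit = total_batch_cost / batch_size

print(f"Total batch cost: {total_batch_cost:.2f}")
print(f"Cost per unit:    {cost_per_unit:.2f}")
```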

Tip: If you manufacture a product, it is advisable that you include not only your
material costs in your cost of sales, but all manufacturing costs such as rent (only
equipment rent) utilities and labour - anything that is variable and related to
manufacturing your product. 

Key Assumptions – Pricing your Product or Service

Placing the right selling price on your product or service can be the difference
between financial success and failure. In order to price your product or service
profitably, you need to take into consideration many factors such as cost of
production, your customer, your competitors and how much value the market places
on your product. 

The cost of production includes both variable and fixed costs. This is a very important
step and is the foundation to establishing an accurate price for your product. Do not
guess; know your costs and be sure to include all costs.

Price is not the same as value. Value is a perception in your customer’s mind. If you
have a unique product that the customer needs or wants, they will place a higher
value on it. Your price should reflect how much value your customer places on your
product. If the product you are producing is commonly available and you have
considerable competition, customers will place less value on your product and it may
be very difficult to establish a market share.
Critical Questions to ask yourself are:
1. Do you have a unique product with high consumer value?
2. Can you produce your product better or cheaper than all the other suppliers?
3. Do you have much competition?
4. What is the competition doing to maintain or grow their market share?
5. Will people buy your product over the competition and why?
6. How much would your customers be willing to pay?
7. Is there room for your product in the marketplace?
Answers to these and many other critical questions will require thorough market
research and other investigation efforts. Consider consulting a market analyst if you
are unsure of your product/service potential.

Once you have established that you have a product worthwhile to market, and you
have established a realistic price for your product (a cost price to produce, ship and
market, plus a profit margin) you can then determine if the market will support your
venture. 

Tip: Research into pricing of similar or like products can include the use of your own
inquiries into the marketplace, focus groups, trial markets or enlisting the assistance
of professionals.
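The cost-plus floor described above (cost to produce, ship and market, plus a profit margin) can be sketched in a few lines; the market's perceived value still sets the ceiling. The unit costs and margin below are assumptions for illustration only:

```python
# Cost-plus pricing sketch: unit cost plus a target margin on cost.
# All figures are illustrative assumptions.

def cost_plus_price(unit_cost, margin_pct):
    """Selling price that yields margin_pct profit on cost."""
    return unit_cost * (1 + margin_pct / 100)

unit_cost = 8.70 + 0.80 + 0.50          # produce + ship + market (assumed)
price = cost_plus_price(unit_cost, 30)  # assumed 30% margin on cost
print(f"Floor price at 30% margin: {price:.2f}")
# Perceived value sets the ceiling: if competitors sell a comparable
# product below this floor, the venture may not be viable at this cost.
```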

Key Assumptions – General and Administration Expenses

One of the most significant expenses a business will incur is that of salaries (wages
and benefits). Create an accurate monthly estimate of your labour costs through each
of your planning stages. You will also need to project labour costs in your cash flow
summaries, to ensure your business can manage and meet payroll obligations. Below
is an example of a labour cost spreadsheet that also estimates the company costs of
employee benefits. If you intend to pay bonuses, you would simply add another row
or rows as required. It will be critical to outline your assumptions as to the timing of
these bonuses as your financial advisor will require this information to manage your
cash flow. Bonuses should only be paid out if the company is profitable.

Figure 1-4 - A larger version of the Wages and Labour Worksheet (12K PDF) is
available for your review.
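The monthly wages-and-benefits estimate described above can be sketched as follows. The roles, wages and the 12% benefits loading are placeholder assumptions, not figures from the worksheet:

```python
# Monthly labour cost sketch with an assumed employer benefits loading.

BENEFITS_RATE = 0.12  # employer-paid benefits as a fraction of gross wages (assumed)

monthly_wages = {
    "production staff (x3)":  7_500,
    "sales":                  3_000,
    "administration":         2_800,
    "janitorial/maintenance": 1_200,
}

gross = sum(monthly_wages.values())
benefits = gross * BENEFITS_RATE
total_payroll = gross + benefits  # the figure carried into cash flow planning

print(f"Gross wages:   {gross:,}")
print(f"Benefits cost: {benefits:,.0f}")
print(f"Total payroll: {total_payroll:,.0f}")
```

Splitting a role such as janitorial/maintenance between overhead and cost of sales, as the text suggests, would simply mean allocating its line between two such totals.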

Tip: Using a spreadsheet that allows you to easily make quick adjustments
throughout the forecasted year and handle changes such as wage increases,
personnel changes and so on, will help you manage and prepare for your cash-flow
requirements document.

In this particular spreadsheet example, the jobs have been highlighted in different
colours. This is to help assign their associated cost to either overhead costs (fixed) or
cost of sales.

Often janitorial and maintenance services will be split between fixed costs and cost of
sales.

Tip: You may wish to consider the development of additional spreadsheets to support
other general and administration expenses.

Tip: At times you may have special sales (seasonal highs or lows) that affect your
forecasts. It is very important that you include in your key assumptions how you
arrived at these various forecasted levels. Maintain a record of your specific
assumptions in these areas.

Key Assumptions - Sales Forecast

The preparation of your projected income statement is the profit-planning part of
your financial plan. The example below is for a single product; you would need to
complete this for each additional product and/or source of revenue.

Figure 1-5

Tip: As you are developing your sales forecast, it is critical that you document and
develop a narrative in your business plan that can support your projections including
the best estimate of timing of the conversion of sales to cash. Your financial coach
will require your assumptions on the timing from invoice to conversion to cash. Are
these sales projections reasonable? Can they be supported through signed orders,
contracts or letters of intent from your customers? Do you have a competitive
advantage with your product that fills a consumer need or is at a price better than
anything else currently on the market? Can your operation’s infrastructure support the
volume of sales? Lenders or investors will need evidence that these projections are
realistic. Over-estimating your sales forecasts could result in financial disaster. 

Key Assumptions – Cash Flow Planning

To complete an accurate cash flow forecast it will be critical to make key assumptions
around the following:
1. The amount and timing of cash equity contributions by the owners
2. The amount and timing (advancements) of any loans that will be requested for approval
3. The timing and amount of payment for capital acquisitions (i.e. land, building and development)
4. The terms on which credit will be extended to clients – accounts receivable
5. An understanding of the terms to be provided by suppliers – accounts payable
6. Amortization tables for all loans applied for; these provide the interest and principal split
required for both your income statement and cash flow planning
7. An assumption on how general administration expenses are paid (e.g. in the month they are
incurred)
Tip: In completing cash flow forecasts for existing businesses, to be accurate, the
following additional steps will be required:
1. Obtain the bank reconciliation for the previous month-end
2. Have an aged listing of all outstanding accounts receivable as at the previous month-end (be
prepared to make an assumption on how/if these accounts will be collected; if uncollectable,
they are a bad debt expense)
3. Have an aged listing of accounts payable (and the timing of when they will be paid)
4. If any loan payments are in arrears, a plan to catch up and make them current
One of the first steps in the cash flow planning for the next year of an existing
operation will be to determine when opening accounts receivables will be collected in
the next period and when outstanding accounts payable will be paid in the next
forecasted period.
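The amortization tables mentioned above, which split each level loan payment into its interest and principal portions for income statement and cash flow planning, can be sketched as follows. The loan terms used are hypothetical:

```python
# Loan amortization sketch: splits each level payment into interest and
# principal. Loan terms below are hypothetical.

def amortization_schedule(principal, annual_rate, months):
    r = annual_rate / 12
    # Standard level-payment formula for a fully amortizing loan.
    payment = principal * r / (1 - (1 + r) ** -months)
    balance = principal
    rows = []
    for m in range(1, months + 1):
        interest = balance * r            # income-statement expense portion
        principal_paid = payment - interest  # balance-sheet reduction portion
        balance -= principal_paid
        rows.append((m, payment, interest, principal_paid, max(balance, 0.0)))
    return rows

schedule = amortization_schedule(50_000, 0.06, 24)  # 50k at 6% over 2 years
for month, pay, interest, princ, bal in schedule[:3]:
    print(f"{month:>2}  pay {pay:8.2f}  interest {interest:7.2f}  "
          f"principal {princ:8.2f}  balance {bal:9.2f}")
```

Note how the interest portion shrinks and the principal portion grows each month, which is why the split matters: only interest is an income statement expense, while the whole payment is a cash outflow.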

Tip: Quite often the development of an initial cash flow statement will initiate a revised
cash flow statement that will include the additional financing required to fund the cash
flow deficit.

The Balance Sheet

The Balance Sheet is a summary of the assets, liabilities and equity of a business
at a specific point in time. In addition, it provides a picture of the financial solvency
and risk-bearing ability of the business.

The Balance Sheet will vary slightly depending on the legal structure of your company
whether it is a sole proprietorship, partnership or corporation. This is an example of
what a typical balance sheet may look like for a corporate entity (Limited Company). If
your business is a sole proprietorship, the equity section of the balance sheet will
simply be the difference between the assets and liabilities - there will be no indication
of original share capital reflected. If you choose to operate the business as a
partnership or corporation, the owners' equity section will reflect the equity
breakdown amongst partners depending on their percentage of ownership.
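The accounting identity behind any balance sheet (assets = liabilities + equity) can be illustrated with a short sketch. For a sole proprietorship, as noted above, equity is simply the residual; all figures below are hypothetical:

```python
# Balance sheet identity sketch: assets = liabilities + equity.
# All figures are illustrative assumptions.

assets = {"cash": 12_000, "accounts receivable": 8_000,
          "inventory": 15_000, "equipment (net)": 40_000}
liabilities = {"accounts payable": 9_000, "term loan": 30_000}

total_assets = sum(assets.values())
total_liabilities = sum(liabilities.values())

# For a sole proprietorship, equity is the residual difference;
# a corporation would break this into share capital and retained earnings.
owner_equity = total_assets - total_liabilities

print(f"Assets {total_assets:,} = Liabilities {total_liabilities:,} "
      f"+ Equity {owner_equity:,}")
```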

Tip: As mentioned, balance sheets will look different depending on corporate
structures. A Sole Proprietorship will not be showing any share capital; equity will
simply be the difference between assets and liabilities. For Partnerships the equity
portion will be shown as per the breakdown amongst the partners. In a corporation
(as per the example on the left), equity will be shown as share capital and retained
earnings.

Figure 1.6 - A larger version of the Balance Sheet (13K PDF) is available for your
review.

Income Statement (Profit and Loss Statement)

The Income Statement, commonly referred to as the P&L statement, summarizes the
revenue and expenses for a specific time period (one month, one quarter, one year,
etc.). The Projected Income Statement is a snapshot of your forecasted sales, cost of
sales, and expenses. For existing companies, the projected income statement should
cover the 12-month period from the end of the latest business year-end and be
compared to your previous results. Any large differences in line items should be
explained in detail.

Tip: There will be no forecast in the income statement for the payment of taxes (for a
sole proprietorship). The main difference between a company, a partnership and the
sole proprietorship is the area of taxes payable and remuneration. Your financial
advisor will assist you in how you will reflect this in your forecast(s). For example,
there may be no salary expense in a sole proprietorship or partnership (they may be
shown as withdrawals after profit calculations), whereas active shareholders'
remuneration for wages and bonuses may be shown as a management expense in
the general administration section of the income statement. Depreciation expenses
could also be handled differently in a sole proprietorship if these assets are utilized in
the generation of revenues not associated to this venture. You are encouraged to
engage professional assistance in the creation of these documents. Your advisor will
help you complete these forms in accordance with generally accepted accounting
principles (GAAP).

Figure 1-7 - A larger version of the Income Statement (13K PDF) is available for your
review.

Tip: The above example is for a startup company, which is why no beginning
inventory is shown. Professional accountants may choose to show the cost of goods
sold section in various formats depending on the industry.

Tip: If the whole area of financial documents is new to you, you may wonder about the
difference between the income and cash flow statements. The income statement reports your
revenue and expenses for a period of time. Revenue is recorded at the point it is
earned, not when payment is received, and an expense is recorded at the time it is
incurred, not when it is paid. The cash flow statement forecasts the assumptions as to when
revenues from sales and other incoming funds are going to be received, and the
assumptions on the timing of paying expenses, capital purchases, and any loan
repayments.
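The accrual-versus-cash timing distinction described above can be sketched in a few lines. This is an illustrative sketch only (the sale and expense amounts are invented, not from the reviewer): the same March sale shows up in the income statement when earned, but in the cash flow only when collected.

```python
# A hypothetical transaction pair used only to illustrate timing.
sale = {"earned_month": "March", "amount": 10_000, "collected_month": "April"}
expense = {"incurred_month": "March", "amount": 4_000, "paid_month": "May"}

def income_statement(month):
    """Accrual basis: record revenue when earned, expense when incurred."""
    revenue = sale["amount"] if sale["earned_month"] == month else 0
    cost = expense["amount"] if expense["incurred_month"] == month else 0
    return {"revenue": revenue, "expense": cost, "income": revenue - cost}

def cash_flow(month):
    """Cash basis: record the inflow when collected, the outflow when paid."""
    inflow = sale["amount"] if sale["collected_month"] == month else 0
    outflow = expense["amount"] if expense["paid_month"] == month else 0
    return {"inflow": inflow, "outflow": outflow, "net": inflow - outflow}

print(income_statement("March"))  # income appears in March
print(cash_flow("March"))         # but no cash moves until April/May
```

Run month by month, the income statement shows a profit in March while the cash flow statement shows nothing until the receivable is collected in April and the payable settled in May.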

Cash Flow Projections


Reviewer 528
Management Advisory Services

Once you have made your sales projections based on volume, calculate the cash flow
projections by converting your sales volumes into income. In the example below, accounts
receivable are shown based on cash sales with 30-, 60-, and 90-day receivables. Deduct
outflows from all cash inflows and you will be able to predict your cash flow requirements
for each month. If you find yourself in a negative position, it becomes a critical decision
whether or not to move forward with your business, unless you can make valid
adjustments to either your inflows or outflows through the extension of accounts payable
or approved operating lines of credit. These options should only be considered if in future
months there will be excess cash to pay down operating loans and/or accounts payable.
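Converting sales volumes into cash inflows with lagged receivables can be sketched as follows. The collection pattern here is an assumption for illustration (20% cash, 40% at 30 days, 30% at 60 days, 10% at 90 days), not the reviewer's own figures:

```python
SALES = [100_000, 120_000, 150_000, 130_000]  # projected monthly sales (invented)
PATTERN = [0.20, 0.40, 0.30, 0.10]            # collection shares at 0/30/60/90 days

def cash_receipts(sales, pattern):
    """Spread each month's sales across later months per the collection lags."""
    receipts = [0.0] * len(sales)
    for month, amount in enumerate(sales):
        for lag, share in enumerate(pattern):
            if month + lag < len(receipts):
                receipts[month + lag] += amount * share
    return receipts

print(cash_receipts(SALES, PATTERN))
# The first month collects only the cash portion of its own sales;
# later months layer in collections of prior months' receivables.
```

Deducting projected disbursements from these monthly receipts gives the net cash position the text describes.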

For a new business, the cash flow forecast can be more important than the forecast of the
Income Statement because it details the amount and timing of expected cash inflow and
outflows. Usually the levels of profits, particularly during the startup years of a business,
will not be sufficient to finance operating cash needs. Moreover, cash inflows do not
match the outflows on a short-term basis. The cash flow forecasts will indicate these
conditions and if necessary the aforementioned cash flow management strategies may
have to be implemented. 

Given a level of projected sales, associated expenses and capital expenditure plans over
a specific period, the cash flow statement will highlight the need for and the timing of
additional financing and show your peak requirements for working capital. You must
decide how this additional financing is to be obtained, on what terms and how it is to be
repaid.
Figure 1-8 - A larger version of the Monthly Cash Flow Projection (18K PDF) is
available for your review.

Tip: A good cash flow projection should forecast monthly amounts for month-end
receivables, payables and inventory. This information is often required so that
management can calculate their operating loan margin requirements as stipulated by
their lender. Forecasting these month-end numbers and testing them against margin
conditions in advance eliminates challenges you may experience with your lender if
you're unable to meet your conditions at a later date. Being able to test these numbers
allows you to alter your financial projections and take alternative measures.

Tip: Good upfront homework to arrive at realistic key assumptions will greatly assist
your professional advisor, who may utilize existing automated financial spreadsheet
planning and analysis tools. You should also be prepared to provide identified
“what-if” scenarios (changes to revenues, cost of sales, expenses and assumptions
impacting cash flow) so that alternative projections can be quickly produced to
provide for risk analysis.

Financial Ratios

Ratios are useful when comparing your company with the competition on financial
performance and when benchmarking your company's performance over time. Most ratios
are calculated from information provided by the financial statements. Financial ratios
can be used to analyze trends, to compare your financial status to other similar
companies, and to monitor your company's overall financial status. In the table below,
many of the common ratios are shown along with the formulas that are used to
calculate them.
Figure 1-9 - A larger version of the Ratio Analysis (24K PDF) is available for your
review.

Liquidity ratios provide information about your company’s ability to meet its short term
debt. The Current Ratio and Quick Ratio (also known as the acid test) represent
assets that can quickly be converted to cash to cover creditor demands.

Asset Turnover Ratios indicate how well you are utilizing your company’s assets.
Receivable Turnover, Average Collection Period and Inventory Turnover are the main
tools to monitor your assets.

Financial Leverage Ratios indicate your financial state and the solvency of your
company. They measure your company’s ability to manage and use long term debt.
The Debt Ratio and Debt-to-Equity (Leverage Ratio) Ratio are used in these
calculations.

Profitability Ratios include the Gross Profit Margin, Return on Assets and Return on
Equity ratios. These ratios are primarily used to indicate your company's ability to
generate profits and to provide a return on shareholders' investments.

Your financial advisor will assist you in these ratio calculations and utilize the ones
that best measure your company’s financial well being. 
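The ratio formulas named above can be sketched directly. The statement figures below are invented for illustration; only the formulas themselves come from the text:

```python
# Toy financial-statement figures (assumptions, not real data).
fin = {
    "current_assets": 180_000, "inventory": 60_000, "current_liabilities": 90_000,
    "total_debt": 150_000, "total_assets": 400_000, "equity": 250_000,
    "sales": 500_000, "cost_of_goods_sold": 350_000, "net_income": 45_000,
}

# Liquidity ratios
current_ratio = fin["current_assets"] / fin["current_liabilities"]
quick_ratio = (fin["current_assets"] - fin["inventory"]) / fin["current_liabilities"]

# Financial leverage ratios
debt_ratio = fin["total_debt"] / fin["total_assets"]
debt_to_equity = fin["total_debt"] / fin["equity"]

# Profitability ratios
gross_profit_margin = (fin["sales"] - fin["cost_of_goods_sold"]) / fin["sales"]
return_on_assets = fin["net_income"] / fin["total_assets"]
return_on_equity = fin["net_income"] / fin["equity"]

print(f"Current ratio:    {current_ratio:.2f}")        # 2.00
print(f"Quick ratio:      {quick_ratio:.2f}")          # 1.33
print(f"Debt-to-equity:   {debt_to_equity:.2f}")       # 0.60
print(f"Gross margin:     {gross_profit_margin:.0%}")  # 30%
print(f"Return on equity: {return_on_equity:.1%}")     # 18.0%
```

Turnover ratios (receivable turnover, inventory turnover) follow the same pattern once average balances are available.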

Monitoring Your Financial Plan


If you are new to or uncomfortable working with your financial business plan, work
with a financial advisor who can guide you through the processes involved in
continually monitoring the financial affairs of your business or business venture.

Keep your information current and review the documents on a regular basis (monthly
or more often if needed). Review them with key individuals within your company.

Utilize monthly financial statements as part of your business management process.

By reviewing these documents monthly, you will be prepared to make changes if and
when necessary; always compare your actual performance against your previously
forecasted projection.

Use these documents to make adjustments to your business' financial plan or
strategies. Use them to plan new initiatives or new product launches.

A simple checklist such as the one below may help you in your ongoing management
practices.

Figure 1-10 - A larger version of the Checklist (10K PDF) is available for your review.

Tip: Create and customize your own monthly checklist that helps you to be in control
of the day-to-day operations. Take immediate action if you find areas that need
attention or anything that appears to be questionable.
Review these suggested tasks with your financial advisor to see if he or she has other
recommendations to add. 

Tip: If Key Performance Indicators (KPI) are not being met, an action plan needs to
be implemented.

Conclusion

The information provided here gives you some guidelines and examples from which
to begin the development of your own financial documents and/or business plan.
Every company has a unique set of circumstances and due diligence is required on
your part to seek out professional guidance in preparation of these important
documents. The more you are able to accurately forecast and estimate your
expenses, sales volumes and revenues – the more you will be able to make sound
business decisions to proceed, stop or alter your business plans moving forward.

As you complete your documents, time will pass and some of the key assumptions in
the information will change. Keep this information current; update the most critical
assumptions regularly. Maintaining accurate up-to-date financial documents will
enable you to have accurate information to present to a lender or potential investor.
These documents will provide you with the management tools you need to make
sound business decisions at any time.

Tip: Before a business and financial justification can be made to proceed with a start-
up business and/or expansion, the target market must be sharply defined; the product
concept and positioning strategy must be confirmed; the benefits to be delivered and
the value proposition must be defined and validated, as well as the physical attributes
of the product features, specifications, and performance requirements. All costs of the
proposed plans need to be well investigated and key assumptions documented.

It will be important to review your core competencies and determine the additional
resources and capabilities needed to achieve the financial plan. You need a clear
plan for sourcing additional resources, partnering, or outsourcing. Your
financial plan is a way to clearly demonstrate the financial costs of that execution
strategy. Ensure you have considered everything required to achieve your goals and
have planned for their costs in your plans.

A good financial plan, developed with the assistance of financial professionals, will
be invaluable in ensuring good decisions are made.

3. Preparation Of Projected Financial Statements

Projected Financial Statements are a summary of the various component projections of
revenues and expenses for the budget period. They indicate the expected net income
for the period.

Projected Financial Statements are an important tool in determining the overall
performance of a company. They include the balance sheet, income statement and
cash flow statement to indicate the company's performance.
The Balance Sheet shows your assets, liabilities and equity at a particular point in
time. It is basically a snapshot of your financial position. The basic accounting formula
is assets equal liabilities plus owner's equity. The asset section of the balance sheet
should be presented in order of liquidity, starting with the most liquid assets such as
cash, accounts receivable and inventory. The liabilities section should be presented
in order of maturity, starting with liabilities that are payable over the next year, such
as a demand note payable and accounts payable.
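The accounting formula above can be checked mechanically. This toy balance sheet uses invented amounts purely to illustrate the equation and the ordering conventions just described:

```python
# Assets listed in order of liquidity, most liquid first.
assets = {
    "cash": 25_000,
    "accounts_receivable": 40_000,
    "inventory": 55_000,
    "equipment": 80_000,
}
# Liabilities listed in order of maturity, soonest due first.
liabilities = {
    "demand_note_payable": 15_000,
    "accounts_payable": 30_000,
    "long_term_loan": 60_000,
}

# Owner's equity is the residual: assets minus liabilities.
equity = sum(assets.values()) - sum(liabilities.values())

print(f"Total assets:      {sum(assets.values()):>8,}")
print(f"Total liabilities: {sum(liabilities.values()):>8,}")
print(f"Owner's equity:    {equity:>8,}")

# The basic accounting equation must always balance.
assert sum(assets.values()) == sum(liabilities.values()) + equity
```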

The Income Statement captures profit performance and demonstrates immediate
capability to service debt (for banks) or real potential for growth in returns (for
venture capital). This is often expressed in terms of sales volume, or compared to
industry benchmarks.

The Statement of Cash Flows is the most critical forecast since it reflects viability
rather than profitability. It can also be the most uncertain statement as projections
extend into the future. Therefore, monthly cash flow is a key statement since it
enables calculation of “coverage” at any given point.

Preparing projected financial statements can be very time consuming and it requires
a careful analysis of the company’s past and present financial health. Projected
financial statements project or forecast a company’s performance in the near future.

Preparing Projected Financial Statements:

Preparing projected financial statements requires careful analysis. Prior to preparing
projected financial statements, an analyst studies the financial history of the
company. There may be some drawbacks which the company has encountered over
the years. To eradicate such hurdles and to better the company's financial status, an
analysis is conducted.

Factors Considered while Preparing Projected Financial Statements:

Various factors are considered for analysis of the financial health of the company. An
analyst uses the following points to evaluate the position of the company:

 Whether the company's operational activities are up to the mark
 Whether the company is well equipped financially
 Condition of the market: whether the market is in the process of growth, at
equilibrium or shriveling up
 The status of the company in relation to the other companies in the industry
 Strengths and weaknesses prevailing in the management of the company, the type of
product produced, the economic cycle of the company, and hazards accompanying
the production of goods
 Role of the management's performance in company growth
 Risks associated with operational activities
 Company's past performance records

By carefully studying the various trends in the company's past performances, the
analyst tries to predict the company's performance in the future. Even if the financial
health of a company has remained fairly stable over the years and the projected
financial statements forecast a still better growth trend, any unforeseen event may
change the course of the projected financial statement.

Unforeseen events may occur in any part of the globe, thereby impacting the global
economy in an adverse manner. An analyst keeps provision for such events and
prepares details of a contingency fund, which can be made use of if the above-
mentioned circumstances are encountered by the company.

Understanding Financial Projections And Forecasting

In order to get the attention of serious investors, it is important to have realistic
financial projections incorporated into your business plan. Projections can be a tricky
business as you try to anticipate expenses while trying to predict how quickly your
business will grow. With a quick outline and some forethought, though, you can easily
get a handle on your business' financial projections.

What Is a Financial Projection?

In its simplest form, a financial projection is a forecast of future revenues and
expenses. Typically, the projection will account for internal or historical data and will
include a prediction of external market factors.
In general, you will need to develop both short- and mid-term financial projections. A
short-term projection accounts for the first year of your business, normally outlined
month by month. A mid-term financial projection typically accounts for the coming
three years of business, outlined year by year.

Formatting Your Financial Projection

There are many online templates for financial projections that are a good place to
start when you are preparing to draft your projections. It is also recommended that
you include charts and tables when explaining copious amounts of numerical data;
this is a much cleaner and more engaging presentation than just paragraphs of
numbers and figures.

Key Elements of Your Financial Projection

All financial projections should include three types of financial statements:
Income Statement: An Income Statement shows your revenues, expenses and profit
for a particular period. If you are developing these projections prior to starting your
business, this is where you will want to do the bulk of your forecasting. The key
sections of an income statement are:

 Revenue – This is the money you will earn from whatever goods or services you
provide.
 Expenses – Be sure to account for all of the expenses you will encounter,
including Direct Costs (i.e., materials, equipment rentals, employee wages, your
salary, etc.) and General and Administrative Costs (i.e., accounting and legal fees,
advertising, bank charges, insurance, office rent, telecommunications, etc.).
 Total Income – Your revenue minus your expenses, before income taxes.
 Income Taxes
 Net Income – Your total income minus income taxes.
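The income statement sections above chain together in a fixed order. This is a minimal sketch with invented amounts and a hypothetical flat 25% tax rate (real tax computations are jurisdiction-specific):

```python
revenue = 250_000                 # money earned from goods or services
direct_costs = 110_000            # materials, equipment rentals, wages, salary, etc.
general_admin_costs = 60_000      # accounting fees, advertising, rent, insurance, etc.
TAX_RATE = 0.25                   # hypothetical flat rate for illustration only

expenses = direct_costs + general_admin_costs
total_income = revenue - expenses          # revenue minus expenses, before taxes
income_taxes = total_income * TAX_RATE
net_income = total_income - income_taxes   # total income minus income taxes

print(f"Total income: {total_income:,.0f}")  # 80,000
print(f"Income taxes: {income_taxes:,.0f}")  # 20,000
print(f"Net income:   {net_income:,.0f}")    # 60,000
```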

Cash Flow Projection: A Cash Flow Projection will demonstrate to a loan officer or
investor that you are a good credit risk and can pay back a loan if it’s granted. The
three sections of a Cash Flow Projection are:

 Cash Revenues – This is an overview of your estimated sales for a given time
period. Be sure that you only account for cash sales you will collect and not
credit.
 Cash Disbursements – Look through your ledger and list all of the cash
expenditures that you expect to pay that month.
 Reconciliation of Cash Revenues to Cash Disbursements – This one is pretty
easy: you just take the amount of cash disbursements and subtract it from your
total cash revenue. If you have a balance from the previous month, you’ll want to
carry this amount over and add it to your cash revenue total.
 Note – One of the key pitfalls of working on your cash flow projections is being
overly optimistic about your revenue.
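The reconciliation step described above is simple arithmetic: subtract cash disbursements from cash revenues and carry any balance forward into the next month. A sketch with invented monthly figures:

```python
# Hypothetical monthly cash revenues and disbursements (illustration only).
months = [
    {"name": "Jan", "cash_revenues": 40_000, "cash_disbursements": 35_000},
    {"name": "Feb", "cash_revenues": 42_000, "cash_disbursements": 45_000},
    {"name": "Mar", "cash_revenues": 50_000, "cash_disbursements": 44_000},
]

balance = 10_000  # opening cash balance carried into January
for m in months:
    # Closing balance = carried-over balance + cash revenues - disbursements.
    balance = balance + m["cash_revenues"] - m["cash_disbursements"]
    print(f'{m["name"]}: closing cash balance {balance:,}')
```

Note how February's disbursements exceed its revenues, yet the carried-over balance keeps the month positive; that cushion is exactly what the projection is meant to reveal in advance.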

Balance Sheet: This overview will present a picture of your business’ net worth at a
particular time. It is a summary of all your business’ financial data in three categories:
assets, liabilities and equity.

 Assets – These are the tangible objects of financial value owned by your
company.
 Liabilities – These are any debts your business owes to a creditor.
 Equity – The net difference between your organization's total assets and its total
liabilities.
 Note – You will want to be sure that the information contained in the balance
sheet is a summary of the information you previously presented in the Income
Statement and Cash Flow Projection. This is the place to triple-check your work
– investors and creditors will be looking for any inconsistencies, and that can
greatly impact their willingness to extend your company a line of credit.

To complete your financial projections, you’ll want to provide a quick overview and
analysis of the included information. Think of this overview as an executive summary,
providing a concise overview of the figures you’ve presented.

While preparing your financial projections, it's most important to be as realistic as
possible. You don't want to over- or underestimate the revenue your business will
generate. It's a good idea to have a trusted friend or business partner review your
financial projections. Also, be sure to avail yourself of all the online resources
available – it's best to learn from people who have created projections before.

4. Analysis Of Financial Projections

Financial Projections: Making Sense of the Money

The Burning Questions
 What are your capital needs? (Projections)
 How will you get that capital? (Structure: equity or debt?)
 Ownership structure – up-front or staged?
 What about a return for your investors? How soon? How much? What is the exit strategy?

It's the cash. Many entrepreneurs of profitable and rapidly growing companies are
puzzled by the fact that they never seem to have enough cash.

Financial Forecasting
 Build a set of assumptions
 Estimate your operating cycle
 Forecast sales
 Use the sales to create pro forma balance sheet, income statement and cash flow
statements, and factor risk into the projections
 Sum up your cash needs to get past the burn-out point

Creating the Pro Forma Analysis
 Develop assumptions – pricing; sales level and growth; inventory needs; payables
and wage cycle; fixed costs and tax expectations
 Project cash needs (monthly or quarterly)
 Project an income statement
 Project a balance sheet

Example – Operating and Cash Budget

CASH BUDGET                      APRIL       MAY      JUNE      JULY
Beginning cash                 $23,000   $23,000   $23,000   $23,000
Cash receipts:
  Customer collections         105,800   156,400   156,400   124,200
Total cash before financing    128,800   179,400   179,400   147,200
Cash disbursements:
  Merchandise                   98,210   111,090    93,380    75,670
  Wages and commissions         21,275    28,175    29,900    24,725
  Miscellaneous expenses         5,750     9,200     6,900     5,750
  Rent                           4,600     4,600     4,600     4,600
  Truck purchase                 6,900         0         0         0
Total disbursements            136,735   153,065   134,780   110,745
Minimum cash balance            23,000    23,000    23,000    23,000
Total cash needed              159,735   176,065   157,780   133,745
Excess (deficiency) of cash   -$30,935    $3,335   $21,620   $13,455
Financing:
  New borrowing                $30,935        $0        $0        $0
  Repayments                                2,871    21,199     6,865
  Loan balance                 $30,935   $28,064    $6,865        $0
  Interest                           0       464       421       103
Total effects of financing     $30,935   -$3,335  -$21,620   -$6,968
Cash balance                    23,000    23,000    23,000    29,487

Building Pro Forma Statements
 Income Statement – net income, from historical data or industry ratios; driven by
sales estimates and assumptions; debt determines interest expense
 Cash Flow Statement – net income plus depreciation ≈ operating cash flow, plus net
working capital needs, plus capital investment needs; year-on-year changes
determine cash flow needs
 Balance Sheet – assets needed to support sales (current and permanent); liabilities
(debt)

Critical Determinants of Financial Needs
 Minimum efficient scale
 Profitability
 Sales growth
 Cash flow

Minimum Efficient Scale (MES)
 Estimate how much volume is needed to get to the industry MES: capital-intensive
industries have a high MES; consulting has a low MES
 How to know: look at the existing structure of the industry, and at the fixed and
intangible assets needed to compete

Profitability
 High profit margins mean lower cash needs, and rapid profitability means lower
cash needs
 However, high profitability can lead to rapid growth, and high growth means high
cash needs

Sales Growth
 Key questions: When will the venture begin to generate revenues? Once revenues are
being generated, how rapidly will they grow? What is the best time frame for
forecasting (3 years, 5 years, 10 years)? What is the appropriate forecasting
interval (monthly, quarterly, annually)?
 Identify a yardstick company and check comparability: target audience, distribution
channels, substitutes, manufacturing technologies
 Gather data, or use a supply-side approach (test market, fundamental analysis)
 Growth assumptions drive revenues; collection assumptions drive cash inflows

[Chart: start-up growth rate and revenues by month]

Fundamental Determinants of Sales Revenues
 What geographic market will the venture serve?
 How many potential customers are in the market? What segments will be interested in
this product?
 How rapidly is the market growing?
 How much, in terms of quantity, is the typical customer expected to purchase during
the forecast period? How are purchase amounts likely to change in the future?
 What is the expected average price of the venture's product? How price elastic is
the product?
 How aggressively and effectively, compared to competitors, will the venture be able
to promote its product?
 How are competitors likely to react to the venture?
 Who else is considering entering the market, and how likely are they to do so?

Cash Flow Projections
 You can't pay the bills with profits
 Things that affect cash flow: capitalized assets; terms of trade (to your customers
and from your suppliers); debt servicing

The Cash Flow Cycle
Capital (debt and equity) provides beginning cash, which is converted through
materials and labor into product, then into accounts receivable, and finally into
ending cash – out of which fixed assets, equity returns, debt service and taxes must
be paid.

Factors Impacting a Firm's Cash Needs
 High-MES markets: need for fixed asset investment and high start-up costs
 Tight profit margins
 Expected high rates of growth
 Dependence on depreciable assets
 Must offer attractive terms of trade to attract customers
 Unable to access favorable terms of trade from suppliers

Estimating the Cash Conversion Cycle
Inventory conversion period plus receivables collection period minus payables
conversion period.

Inventory Conversion Period – New Venture Considerations
 From raw material to customer-ready: how long is the product in process, and how
much variability is there in the production cycle?
 How many days of raw material inventory are needed to keep production going?
 Do you need to keep finished inventory on hand?

Pricing and Credit Constraints – New Venture Considerations
 New and small businesses are often price takers
 New products often require price incentives to attract the interest of customers
 Commercial customers may want a trial period with a new product
 If established players offer credit terms, the new venture may need to match or
beat them

Credit and the AR Conversion Cycle – New Venture Considerations
 Typically not negotiable
 New players may have to pay in cash or face very tight terms of trade

How will you get the capital?
 Debt – Advantages: you retain ownership and control; potential profit is yours.
Disadvantages: expensive for start-ups; limited amounts available when you are
unproven; creates greater financial risk for the company (harder to break even)
 Equity – Advantages: no fixed charges to meet; may come with good management
advice. Disadvantages: venture capitalists will want high returns for their
investment, based on valuation (difficult to achieve); diminished ownership and
control

From your financials, you should be able to show: how much you expect to need; when
you will need the cash; for how long you will need cash; when you will break even and
when you will be liquid; and when your investors can expect a return.

5. Apply Effective Communication to Stakeholders

The most important element in stakeholder communications is identifying the target
audience. Be deliberate and seek out input from all known groups to find the
unknown groups. It can be damaging when, too late in the project, a critical person or
group is identified that has not received any of the communication through the course
of the project and has valuable links that need to be addressed. Make sure you avoid
this scenario and take all the steps early to create a document listing all stakeholders
with whom you need to manage communication. Once you have that, the methods below can
help you keep communication active and frequent, with ongoing collaboration, so there
is strong support for your project.

Formal Methods for Communicating – If they don't exist already, create them. Make
occasions when information should be presented.

1. Meetings – One of the most common ways to communicate. They can vary from
only one person to thousands, depending on the message and the appropriate audience.
It is up to you to maximize every minute of the time spent to have dialogue. Make sure
it is a dialogue and not a monologue. It is the best method, as you have the verbal and
non-verbal cues that enhance the communication and avoid misinterpretation.

2. Conference Calls – These days this is the most common method, as it does not
require the time and expense of travel. Dialogue can take place, though it depends on
voice intonation and the clarity of the verbal message. Calls only require the cost of
a phone call, and there are many paid and free services that will facilitate use of a
conference call line for many people to dial into. It is also a common way for classes
to be recorded and replayed when it is convenient for you.
3. Newsletters / Email / Posters – This strategy is one-way communication and utilizes
emailed updates, hard-copy brochures, posters, and newsletters mailed or emailed. One
of the weaknesses is that messages are delivered but you cannot gauge whether they were
read and understood, or simply deleted, as there is often no feedback. Immediate
feedback is valuable for strengthening your message and making sure impacts and
responses are quickly received.

Informal Methods – It is important not only to rely on formal channels but to utilize
informal communication as well. The impromptu channels are often more information-rich
and critical for relationship building.

4. Hallway and Bathroom Conversations – These meetings are great for one-on-one
communication, but be clear and do not establish false expectations with casual
comments dropped in passing.

5. Lunch Meetings, Drinks after Work – These casual environments can be great for
connecting, getting feedback and ideas, and working to build support.

6. Sporting Events – Tennis, golf, etc. are an easy forum to get input on what
support exists, gather feedback on ideas, and brainstorm to strengthen your
communication and build stakeholder support.

7. Voice Mail – This is often underutilized since email is so common, but a voice
mail is still more likely to be listened to than an email is to be read. By using voice
intonation for excitement, urgency, etc., it can be more compelling. This can be a solo
voice mail, a voice mail broadcast to a large team, or you could pursue automated
calling to get the word out, depending on the size of the audience.

Project Communication Plans

It's not enough to just have a plan. It is critical to seek to understand what your
stakeholders desire, both spoken and unspoken. Expectations must be carefully
managed from beginning to end. Every team and project varies in its rate of change,
so pick the most advantageous communication channel and frequency and make sure it is
effective. Just as having the plan is important, monitoring its effectiveness, and
adding or canceling supplemental ways of communicating, will be required.

Communication is a constant; err on the side of over-communicating, as there are
always people who didn't hear, understand or make the connection when they heard it
the first time.

IV. ECONOMIC CONCEPTS ESSENTIAL TO OBTAINING AN UNDERSTANDING OF ENTITY'S
BUSINESS AND INDUSTRY
A. Macroeconomics (National Economic Issues And Measures Of Economic
Performance Such As GDP; Unemployment And Inflation; Fiscal And Monetary
Policies; International Trade And Foreign Exchange Rates)
1. National Economic Issues

Economic issues top Filipinos' concerns — Pulse Asia


By Paolo Taruc, CNN Philippines
Updated 07:26 AM PHT Fri, March 27, 2015

(CNN Philippines) — Filipinos are more concerned with economic issues compared
to national security and socio-political affairs, according to a recent survey by Pulse
Asia.

In a statement released on Tuesday (March 24), the pollster noted that the leading
urgent concerns among Filipinos are inflation control (46%), the increase of workers'
pay (44%), and the fight against government corruption (40%).

Rounding up the upper half are poverty reduction (37%), job creation (34%), and the
fight against crime (22%).

Related: Income increases but so does poverty

On the other hand, Filipinos are least concerned with national territorial integrity (5%),
terrorism (5%), and charter change (4%).

The results are not much different when grouped according to the country's three
major island chains. Mindanaoans (52%) are most concerned with the increasing cost
of goods and services. A majority of Visayans (53%) and residents of Luzon (48%) —
excluding Metro Manila — cite low workers' pay as their top issue.

Those from Metro Manila (49%) rank government corruption as their most pressing
issue. 

Inflation is the top concern of all social classes.

Read: Exclusive growth in the Philippines

In the same survey, Pulse Asia found that the Aquino administration's highest
approval ratings were in calamity response (49%), environmental protection (48%),
and in defending Philippine territorial integrity (43%).

In all issues raised by Pulse Asia among its respondents, the pollster notes that "The
Aquino administration fails to score a majority approval rating..."

The administration's lowest ratings were in economic issues — the very topics that
Filipinos are most concerned about. Only about three out of 10 Filipinos approve its

performance in improving workers' pay (33%), poverty reduction (28%) and inflation
control (29%).

On a similar note, the latest figures from the Philippine Statistics Authority show that
about one in four Filipinos (25.8%) lived in poverty during the first half of 2014.
However, the same agency notes that the country's unemployment fell to 6.6% in
January 2015 from 7.5% the previous year.

"Public assessment of the national administration’s performance remains largely
unchanged between November 2014 and March 2015," Pulse Asia said.

Economic issues, not criminality, top Filipinos’ concerns—poll


By: Aries Joseph Hegina - @inquirerdotnet
INQUIRER.net / 12:39 PM July 20, 2016

President Rodrigo Roa Duterte. INQUIRER FILE PHOTO/JOAN BONDOC


While President Rodrigo Roa Duterte won through an anti-crime and anti-corruption
platform, Filipinos said that they want the new chief executive to address economic
concerns.

According to the results of Pulse Asia's Ulat ng Bayan survey conducted from July 2
to 8, which was released on Wednesday, Filipinos want the new Duterte administration
to prioritize three economic issues: controlling the increase in the prices of goods
(68 percent), creating jobs (56 percent) and implementing pro-poor initiatives (55 percent).
Busting criminality has been cited as the fourth most pressing concern among Filipinos
at 48 percent.

Other concerns that Filipinos want to be addressed by the Duterte administration
include granting of loans to small entrepreneurs and the self-employed (23 percent);
crafting of a program addressing the government’s debt problem (17 percent); and
continuing the peace negotiations with different armed groups (17 percent).
Less than one percent of Filipinos consider forging national unity and amending the
1987 Constitution as immediate national concerns.
Sizable to big majorities in Metro Manila, Luzon, Visayas, and Mindanao share the view
that the Duterte administration should take steps to control inflation, generate job
opportunities and create pro-poor programs.

Across socio-economic classes, those belonging to Class ABC and D believe that the
President should address the economic issues immediately while a majority of Class E
respondents said that they want inflation to be controlled.

The survey has 1,200 respondents and has a ± 3 percent error margin at 95 percent
confidence level. Subnational estimates for Metro Manila, the rest of Luzon, Visayas
and Mindanao have a ± 6 percent error margin.
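The ± 3 and ± 6 percent error margins quoted above follow from the standard margin-of-error formula for a sample proportion at 95 percent confidence. A quick sketch (the per-area sample of roughly 300 is an assumption inferred from splitting the 1,200 respondents four ways):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion, using the
    worst-case assumption p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# Full national sample of 1,200 respondents
print(round(margin_of_error(1200) * 100, 1))  # ~2.8, reported as +/- 3
# A subnational area of roughly 300 respondents
print(round(margin_of_error(300) * 100, 1))   # ~5.7, reported as +/- 6
```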

Pulse: Filipinos most concerned about economic-related issues


By Patricia Lourdes Viray (philstar.com) | Updated September 28, 2015 - 11:30am
 

The Philippine economy slowed to a 5.6 percent growth in the second quarter, falling
below the government’s target.
MANILA, Philippines - Most Filipinos consider two economic-related issues - inflation
and workers' pay - as the most urgent national concerns, according to a Pulse Asia
survey released last week.
The survey showed that 47 percent of Filipinos are most concerned with the country's
inflation while 46 percent are concerned with workers' pay.
Corruption in government (39 percent), employment (36 percent) and poverty (35
percent) are among the second cluster of national issues deemed urgent by Filipinos.
The third group of urgent national issues include peace (21 percent), criminality (20
percent), rule of law (16 percent) and environmental destruction (15 percent).
The survey also showed that Filipinos are least concerned about rapid population
growth (9 percent), national territorial integrity (7 percent), charter change (4 percent)
and terrorism (4 percent).
Workers' pay, employment and inflation are the only issues considered urgent by
majorities across geographic areas and socio-economic classes.

National Concerns                             Overall  NCR  Luzon  Visayas  Mindanao  ABC   D   E
Controlling inflation                              47   42     45       47        53   40  46  52
Improving/increasing the pay of workers            46   53     46       42        45   48  46  46
Fighting graft and corruption in government        39   40     38       33        44   43  42  31
Creating more jobs                                 36   37     34       51        26   32  35  41
Reducing poverty of many Filipinos                 35   31     36       35        36   30  34  38
Increasing peace in the country                    21   15     19       21        29   13  22  21
Fighting criminality                               20   26     18       18        22   29  19  21
Enforcing the law on all, whether
  influential or ordinary people                   16   18     16       12        20   19  16  15
Stopping the destruction and abuse
  of our environment                               15   13     16       18        12   12  18   9
Controlling fast population growth                  9    9     13        8         4   18  10   5
Defending integrity of Philippine
  territory against foreigners                      7   10      9        4         3    8   6   9
Changing the Constitution                           4    2      4        5         3    1   4   5
Preparing to successfully face any
  kind of terrorism                                 4    3      4        6         3    6   3   6

(Location columns: NCR, Luzon, Visayas, Mindanao; Class columns: ABC, D, E.)
 
The survey was conducted from May 30 to June 5 using face-to-face interviews
among 1,200 respondents who are 18 years old and above.
The respondents were asked which issues the current administration should
immediately address. They were allowed multiple responses, up to three answers.

2. Measures of Economic Performance

GROSS DOMESTIC PRODUCT (GDP)

 The total value of goods produced and services provided in a country during one
year.

Household Final Consumption Expenditure          xx
Investment or Capital Formation                  xx
Government Consumption Expenditures              xx
Net Exports (Imports):
  Exports                              xx
  Less: Imports                       (xx)       xx
GROSS DOMESTIC PRODUCT                           xx
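The schedule above is the expenditure approach, GDP = C + I + G + (X - M). A minimal sketch with made-up placeholder figures, not actual national accounts data:

```python
def gdp_expenditure(consumption, investment, government, exports, imports):
    """Expenditure approach: GDP = C + I + G + (X - M)."""
    return consumption + investment + government + (exports - imports)

# Hypothetical figures in billions, purely for illustration
print(gdp_expenditure(consumption=900, investment=300,
                      government=200, exports=400, imports=450))  # 1350
```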

3. Unemployment and Inflation

UNEMPLOYMENT

What is 'Unemployment'

Unemployment is a phenomenon that occurs when a person who is actively
searching for employment is unable to find work. Unemployment is often used as a
measure of the health of the economy. The most frequently used measure of
unemployment is the unemployment rate, which is the number of unemployed people
divided by the number of people in the labor force.
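The rate defined above divides the unemployed by the labor force (the employed plus the unemployed, not the whole population). A minimal sketch with invented figures:

```python
def unemployment_rate(unemployed, employed):
    """Number of unemployed divided by the labor force
    (labor force = employed + unemployed)."""
    return unemployed / (employed + unemployed)

# Hypothetical: 3.1M unemployed, 44.6M employed
print(round(unemployment_rate(3.1e6, 44.6e6) * 100, 1))  # ~6.5%
```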

BREAKING DOWN 'Unemployment'



While the definition of unemployment is clear, economists divide unemployment into
many different categories. The broadest two categories of unemployment are
voluntary and involuntary unemployment. When unemployment is voluntary, it means
that a person has left his job willingly in search of other employment. When it is
involuntary, it means that a person has been fired or laid off and now must look for
another job. Digging deeper, unemployment, both voluntary and involuntary, is
broken down into three types.

Frictional Unemployment

Frictional unemployment arises when a person is in-between jobs. After a person
leaves a company, it naturally takes time to find another job, making this type of
unemployment short-lived. It is also the least problematic from an economic
standpoint. Arizona, for example, has faced rising frictional unemployment in May of
2016, due to the fact that unemployment has been historically low for the state.
Arizona citizens feel confident leaving their jobs with no safety net in search of better
employment.

Cyclical Unemployment

Cyclical unemployment comes around due to the business cycle itself. Cyclical
unemployment rises during recessionary periods and declines during periods of
economic growth. For example, the number of weekly jobless claims in the United
States has slowed in the month of June, as oil prices begin to rise and the economy
starts to stabilize, adding jobs to the market.

Structural Unemployment

Structural unemployment comes about through technological advances, when people
lose their jobs because their skills are outdated. Illinois, for example, after seeing
increased unemployment rates in May of 2016, seeks to implement "structural
reforms" that will give people new skills and therefore more job opportunities.

Differences in Theories of Unemployment

Many variations of the unemployment rate exist with different definitions concerning
who is an "unemployed person" and who is in the "labor force." For example, the U.S.
Bureau of Labor Statistics commonly cites the "U-3" unemployment rate as the
official unemployment rate, but this definition of unemployment does not include
unemployed workers who have become discouraged by a tough labor market and are
no longer looking for work.
Additionally, various schools of economic thought differ on the cause of
unemployment. Keynesian economics, for example, proposes that there is a "natural
rate" of unemployment even under the best economic conditions. Neoclassical
economics, on the other hand, postulates that the labor market is efficient if left alone
but that various interventions, such as minimum wage laws and unionization, put
supply and demand out of balance.

INFLATION

What is 'Inflation'

Inflation is the rate at which the general level of prices for goods and services is rising
and, consequently, the purchasing power of currency is falling. Central banks attempt
to limit inflation, and avoid deflation, in order to keep the economy running smoothly.

BREAKING DOWN 'Inflation'

As a result of inflation, the purchasing power of a unit of currency falls. For example,
if the inflation rate is 2%, then a pack of gum that costs $1 in a given year will cost
$1.02 the next year. As goods and services require more money to purchase, the
implicit value of that money falls.

The Federal Reserve uses core inflation data, which excludes volatile categories such
as food and energy prices. External factors can influence prices on these types of
goods, which does not necessarily reflect the overall rate of inflation. Removing these
industries from inflation data paints a much more accurate picture of the state of
inflation.

The Fed's monetary policy goals include moderate long-term interest rates, price
stability and maximum employment, and each of these goals is intended to promote a
stable financial environment. The Federal Reserve clearly communicates long-term
inflation goals in order to keep a steady long-term rate of inflation, which in turn
maintains price stability. Price stability, or a relatively constant level of inflation, allows
businesses to plan for the future, since they know what to expect. It also allows the
Fed to promote maximum employment, which is determined by nonmonetary factors
that fluctuate over time and are therefore subject to change. For this reason, the Fed
doesn't set a specific goal for maximum employment, and it is largely determined by
members' assessments. Maximum employment does not mean zero unemployment;
at any given time there is a certain level of churn as people vacate and
start new jobs.

Monetarism theorizes that inflation is related to the money supply of an economy. For
example, following the Spanish conquest of the Aztec and Inca empires, massive
amounts of gold and especially silver flowed into the Spanish and other European
economies. Since the money supply had rapidly increased, prices spiked and the
value of money fell, contributing to economic collapse.

Historical Examples of Inflation and Hyperinflation

Today, few currencies are fully backed by gold or silver. Since most world currencies
are fiat money, the money supply could increase rapidly for political reasons, resulting
in inflation. The most famous example is the hyperinflation that struck the German
Weimar Republic in the early 1920s. The nations that had been victorious in World
War I demanded reparations from Germany, which could not be paid in German
paper currency, as this was of suspect value due to government borrowing.
Germany attempted to print paper notes, buy foreign currency with them, and use
that to pay their debts. 

This policy led to the rapid devaluation of the German mark, and with it,
hyperinflation. German consumers exacerbated the cycle by trying to spend their
money as fast as possible, expecting that it would be worth less and less the longer
they waited. More and more money flooded the economy, and its value plummeted to
the point where people would paper their walls with the practically worthless
bills. Similar situations have occurred in Peru in 1990 and Zimbabwe in 2007-2008.

Inflation and the 2008 Global Recession

Central banks have tried to learn from such episodes, using monetary policy tools to
keep inflation in check. Since the 2008 financial crisis, the U.S. Federal Reserve has
kept interest rates near zero and pursued a bond-buying program – now discontinued
– known as quantitative easing. Some critics of the program alleged it would cause a
spike in inflation in the U.S. dollar, but inflation peaked in 2007 and declined steadily
over the next eight years. There are many complex reasons why QE didn't lead to
inflation or hyperinflation, though the simplest explanation is that the recession was a
strong deflationary environment, and quantitative easing ameliorated its effects.

Inflation in Moderation: Harms and Benefits

While excessive inflation and hyperinflation have negative economic
consequences, deflation's negative consequences for the economy can be just as
bad or worse. Consequently, policy makers since the end of the 20th century have
attempted to keep inflation steady at 2% per year. The European Central Bank has
also pursued aggressive quantitative easing to counter deflation in the Eurozone, and
some places have experienced negative interest rates, due to fears that deflation
could take hold in the eurozone and lead to economic stagnation. Moreover,
countries that are experiencing higher rates of growth can absorb higher rates of
inflation. India's target is around 4%, Brazil's 4.5%.

Real World Example of Inflation

Inflation is generally measured in terms of a consumer price index (CPI), which tracks
the prices of a basket of core goods and services over time. Viewed another way, this
tool measures the "real"—that is, adjusted for inflation—value of earnings over time. It
is important to note that the components of the CPI do not change in price at the
same rates or even necessarily move the same direction. For example, the prices of
secondary education and housing have been increasing much more rapidly than the
prices of other goods and services; meanwhile fuel prices have risen, fallen, risen
again and fallen again—each time very sharply—in the past ten years.
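Measuring inflation with a CPI, as described above, amounts to pricing a fixed basket of goods in two periods and taking the percentage change. The basket and prices below are invented purely for illustration:

```python
def basket_cost(prices, quantities):
    """Cost of a fixed basket of goods at the given prices."""
    return sum(p * q for p, q in zip(prices, quantities))

quantities = [2, 1, 5]                              # fixed basket of three goods
base = basket_cost([10.0, 40.0, 4.0], quantities)   # 20 + 40 + 20 = 80.0
later = basket_cost([10.5, 41.0, 4.3], quantities)  # 21 + 41 + 21.5 = 83.5
inflation = (later - base) / base
print(round(inflation * 100, 1))                    # ~4.4% over the period
```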

Inflation is one of the primary reasons that people invest in the first place. Just as the
pack of gum that costs a dollar will cost $1.02 in a year, assuming 2% inflation, a
savings account that was worth $1,000 would be worth $903.92 after 5 years, and
$817.07 after 10 years, assuming that you earn no interest on the deposit. Stuffing
cash into a mattress, or buying a tangible asset like gold, may make sense to people
who live in unstable economies or who lack legal recourse. However, for those who
can trust that their money will be reasonably safe if they make
prudent equity or bond investments, this is arguably the way to go.

There is still risk, of course: bond issuers can default, and companies that issue stock
can go under. For this reason it's important to do solid research and create a diverse
portfolio. But in order to keep inflation from steadily gnawing away at your money, it's
important to invest it in assets that can reasonably be expected to yield at a rate
greater than inflation.
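The savings figures quoted above ($1,000 shrinking to $903.92 and then $817.07) can be reproduced by compounding a 2% annual loss of purchasing power:

```python
def real_value(amount, inflation, years):
    """Purchasing power of idle cash after `years` of annual inflation,
    using the same convention as the figures above: amount * (1 - rate)^years."""
    return amount * (1 - inflation) ** years

print(round(real_value(1000, 0.02, 5), 2))   # 903.92
print(round(real_value(1000, 0.02, 10), 2))  # 817.07
```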

Inflation is defined as a sustained increase in the general level of prices for goods
and services in a country, and is measured as an annual percentage change. Under
conditions of inflation, the prices of things rise over time. Put differently, as inflation
rises, every dollar you own buys a smaller percentage of a good or service. When
prices rise, or equivalently when the value of money falls, you have inflation.

The value of a dollar (or any unit of money) is expressed in terms of its purchasing
power, which is the amount of real, tangible goods or actual services that money can
buy at a moment in time. When inflation goes up, there is a decline in the purchasing
power of money. For example, if the inflation rate is 2% annually, then theoretically a
$1 pack of gum will cost $1.02 in a year. After inflation, your dollar does not go as far
as it did in the past. This is why a pack of gum cost just $0.05 in the 1940s: the price
has risen or, from a different perspective, the value of the dollar has declined. In
recent years, most developed countries have attempted to sustain an inflation rate of
2-3% by using monetary policy tools put to use by central banks. This general form of
monetary policy is known as inflation targeting.

Causes of Inflation

There is no single theory for the cause of inflation that is universally agreed upon by
economists and academics, but there are a few hypotheses that are commonly held.

Demand-Pull Inflation – Inflation is caused by the overall increase in demand for
goods and services, which bids up their prices. This theory can be summarized as
"too much money chasing too few goods". In other words, if demand is growing faster
than supply, prices will increase. This usually occurs in rapidly growing economies.
This theory is often promoted by the Keynesian school of economics.

Cost-Push Inflation – Inflation is caused when companies' costs of production go up.
When this happens, they need to increase prices to maintain their profit margins.
Increased costs can include things such as wages, taxes, or increased costs of
natural resources or imports.

Monetary Inflation – Inflation is caused by an oversupply of money in the economy.
Just like any other commodity, the prices of things are determined by their supply and
demand. If there is too much supply, the price of that thing goes down. If that thing is
money, and too much supply of money makes its value go down, the result is that the
prices of everything else priced in dollars must go up! This theory is often promoted
by the “Monetarist” school of economics.

Costs of Inflation

Inflation affects different people in different ways, with some benefiting from its effects
at the expense of others who lose out. It also depends on whether changes to the rate

of inflation are anticipated or unanticipated. If the inflation rate corresponds to what
the majority of people are expecting (anticipated inflation), then we can compensate
and the impact isn't necessarily as severe. For example, banks can vary their interest
rates and workers can negotiate contracts that include automatic wage hikes as
prices go up.

Here is a brief account of the typical winners and losers from inflation:

 Creditors (lenders) lose and debtors (borrowers) gain under inflation. For
example, suppose a bank issues you a 30-year mortgage to buy a house at
a fixed interest rate of 5% per year, costing $1,000 per month. As inflation
rises, the “cost” of that $1,000 per month decreases, which benefits the
homeowner, especially if the rate of inflation exceeds the interest rate on the
loan.
 Inflation hurts savers since a dollar saved will be worth less in the future.
Unless the money is saved in an account that pays an interest rate at or
above the rate of inflation, the purchasing power of savings will erode. This
phenomenon is sometimes called "cash-drag."
 Workers with fixed salaries or contracts that do not adjust with inflation will
be hurt as the buying power of their incomes stay the same relative to rising
prices.
 Similarly, people living off a fixed-income, such as those below the poverty
line, retirees or annuitants, see a decline in their purchasing power and,
consequently, their standard of living.
 Landlords benefit, if they have a fixed mortgage (or no mortgage) as they
are able to raise the rent more each year.
 Uncertainty about what will happen next makes corporations and consumers
less likely to spend. This hurts economic output in the long run.
 The entire economy must absorb repricing costs (menu costs) as price lists,
labels, menus and more have to be updated.
 If the domestic inflation rate is greater than that of other countries, domestic
products become less competitive.
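The creditor/debtor point in the first bullet can be made concrete: a fixed nominal payment costs the borrower less in real terms every year prices rise. A sketch deflating the $1,000 monthly payment by an assumed steady 5% annual inflation:

```python
def real_payment(nominal, inflation, years):
    """Real (inflation-adjusted) value of a fixed nominal payment
    after `years` of steady inflation."""
    return nominal / (1 + inflation) ** years

# A fixed $1,000 monthly payment under an assumed 5% annual inflation
for year in (0, 10, 20, 30):
    print(year, round(real_payment(1000, 0.05, year), 2))
```

By year 30 the payment costs the borrower less than a quarter of its original real value, which is exactly why debtors gain and creditors lose.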

Variations on the Theme of Inflation

There are several variations on the theme of inflation.

Deflation is when the general level of prices is falling. It is the opposite effect of
inflation. Deflation tends to occur more rarely and for shorter periods of time than
inflation. Deflation occurs typically during times of recession or economic crisis and
can lead to deep economic crises including depression. The reason for this is the
so-called deflationary spiral: when prices are going down, why would you spend your
money today, when each dollar will be more valuable tomorrow? And why spend
tomorrow when each dollar can buy more the day after? The result is that people stop
spending and hoard their money in anticipation of prices falling even further. If money
is being hoarded, it isn’t being spent, so business profits collapse and people are laid
off. Increasing unemployment leaves the economy with even less spending, and the
spiral continues.

Disinflation is a condition where inflation is still positive, but the rate of inflation is
decreasing – for example from +3% to +2%.

Hyperinflation is unusually rapid inflation, typically more than 50% in a single month.
In extreme cases, this inflation gone awry can lead to the breakdown of a nation's
monetary system or even its economy. One of the most notable examples of
hyperinflation occurred in Germany in 1923, when prices rose 2,500% in one month!
Likewise, in Zimbabwe, hyperinflation led to Z$100 trillion bills being printed that were
worth only a few U.S. dollars. Hyperinflations have also famously occurred in
Hungary and Argentina in the 20th century.
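The 50%-per-month threshold cited above compounds to a staggering annual rate:

```python
def annualize(monthly_rate):
    """Annual inflation implied by a constant monthly rate: (1 + m)^12 - 1."""
    return (1 + monthly_rate) ** 12 - 1

print(round(annualize(0.50) * 100))  # ~12875% per year
```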

Stagflation is the rare combination of high unemployment and economic stagnation
along with high rates of inflation. This happened in industrialized countries during the
1970s, when a rocky economy was confronted with OPEC raising oil prices, a supply
shock. This sent the price of oil, and all of the products and services that use oil as
an input, higher, even as the economy slackened.

People often complain when prices go up, but they often ignore the fact that wages
should be rising as well. The question shouldn't be whether inflation is rising, but
whether it's rising at a quicker pace than your wages. A modest inflation is a sign that
an economy is growing. In some situations, little inflation can be just as bad as high
inflation. The lack of inflation may be an indication that the economy is weakening. As
you can see, it's not so easy to label inflation as either good or bad – it depends on
the overall economy as well as your personal situation.

4. Fiscal and Monetary Policies

FISCAL POLICIES

Fiscal policy is the means by which a government adjusts its spending levels and tax
rates to monitor and influence a nation's economy. It is the sister strategy to monetary
policy through which a central bank influences a nation's money supply. These two
policies are used in various combinations to direct a country's economic goals. Here
we look at how fiscal policy works, how it must be monitored and how its
implementation may affect different people in an economy.

Before the Great Depression, which lasted from Sept. 4, 1929, to the late 1930s or
early 1940s, the government's approach to the economy was laissez-faire. Following
World War II, it was determined that the government had to take a proactive role in
the economy to regulate unemployment, business cycles, inflation and the cost of
money. By using a mix of monetary and fiscal policies (depending on the political
orientations and the philosophies of those in power at a particular time, one policy
may dominate over another), governments can control economic phenomena.

How Fiscal Policy Works

Fiscal policy is based on the theories of British economist John Maynard Keynes.
Also known as Keynesian economics, this theory basically states that governments
can influence macroeconomic productivity levels by increasing or decreasing tax
levels and public spending. This influence, in turn, curbs inflation (generally
considered to be healthy when between 2-3%), increases employment and maintains
a healthy value of money. Fiscal policy is very important to the economy. For

example, in 2012 many worried that the fiscal cliff, a simultaneous increase in tax
rates and cuts in government spending set to occur in January 2013, would send the
U.S. economy back to recession. The U.S. Congress avoided this problem by
passing the American Taxpayer Relief Act of 2012 on Jan. 1, 2013.

Balancing Act

The idea, however, is to find a balance between changing tax rates and public
spending. For example, stimulating a stagnant economy by increasing spending or
lowering taxes runs the risk of causing inflation to rise. This is because an increase in
the amount of money in the economy, followed by an increase in consumer demand,
can result in a decrease in the value of money - meaning that it would take more
money to buy something that has not changed in value.

Let's say that an economy has slowed down. Unemployment levels are up, consumer
spending is down, and businesses are not making substantial profits. A government
thus decides to fuel the economy's engine by decreasing taxation, which gives
consumers more spending money, while increasing government spending in the form
of buying services from the market (such as building roads or schools). By paying for
such services, the government creates jobs and wages that are in turn pumped into
the economy. Pumping money into the economy by decreasing taxation and
increasing government spending is also known as "pump priming." In the meantime,
overall unemployment levels will fall.

With more money in the economy and fewer taxes to pay, consumer demand for
goods and services increases. This, in turn, rekindles businesses and turns the cycle
around from stagnant to active.

If, however, there are no reins on this process, the increase in economic productivity
can cross over a very fine line and lead to too much money in the market. This
excess in supply decreases the value of money while pushing up prices (because of
the increase in demand for consumer products). Hence, inflation exceeds the
reasonable level.
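The pump-priming process described above is often summarized in textbooks by the simple Keynesian spending multiplier, 1 / (1 - MPC), where MPC is the marginal propensity to consume. This is a stylized illustration with an assumed MPC, not a figure from the text:

```python
def spending_multiplier(mpc):
    """Simple Keynesian multiplier: each government dollar is re-spent
    at the marginal propensity to consume, a geometric series summing
    to 1 / (1 - MPC)."""
    return 1 / (1 - mpc)

# Assumed MPC of 0.8: households spend 80 cents of each extra dollar,
# so $10B of new government spending raises output by roughly $50B.
print(round(spending_multiplier(0.8) * 10, 1))  # 50.0
```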

For this reason, fine tuning the economy through fiscal policy alone can be a difficult,
if not improbable, means to reach economic goals. If not closely monitored, the line
between a productive economy and one that is infected by inflation can be easily
blurred.

And When the Economy Needs to Be Curbed …

When inflation is too strong, the economy may need a slowdown. In such a situation,
a government can use fiscal policy to increase taxes to suck money out of the
economy. Fiscal policy could also dictate a decrease in government spending and
thereby decrease the money in circulation. Of course, the possible negative effects of
such a policy, in the long run, could be a sluggish economy and high unemployment
levels. Nonetheless, the process continues as the government uses its fiscal policy to
fine-tune spending and taxation levels, with the goal of evening out the business
cycles.

Who Does Fiscal Policy Affect?

Unfortunately, the effects of any fiscal policy are not the same for everyone.
Depending on the political orientations and goals of the policymakers, a tax cut could
affect only the middle class, which is typically the largest economic group. In times of
economic decline and rising taxation, it is this same group that may have to pay more
taxes than the wealthier upper class.

Similarly, when a government decides to adjust its spending, its policy may affect only
a specific group of people. A decision to build a new bridge, for example, will give
work and more income to hundreds of construction workers. A decision to spend
money on building a new space shuttle, on the other hand, benefits only a small,
specialized pool of experts, which would not do much to increase aggregate
employment levels.

That said, the markets also react to fiscal policy. For example, in response to
President Trump's proposed corporate tax reduction plans, the S&P has been trading
higher, according to Barclays.

The Bottom Line

One of the biggest obstacles facing policymakers is deciding how much involvement
the government should have in the economy. Indeed, there have been various
degrees of interference by the government over the years. But for the most part, it is
accepted that a degree of government involvement is necessary to sustain a vibrant
economy, on which the economic well-being of the population depends.

MONETARY POLICIES

What is 'Monetary Policy'

Monetary policy consists of the actions of a central bank, currency board or other
regulatory committee that determine the size and rate of growth of the money supply,
which in turn affects interest rates. Monetary policy is maintained through actions
such as modifying the interest rate, buying or selling government bonds, and
changing the amount of money banks are required to keep in the vault (bank
reserves).

The Federal Reserve is in charge of the United States' monetary policy.

BREAKING DOWN 'Monetary Policy'

Broadly, there are two types of monetary policy, expansionary and contractionary.

Expansionary monetary policy increases the money supply in order to lower
unemployment, boost private-sector borrowing and consumer spending,
and stimulate economic growth. Often referred to as "easy monetary policy," this
description applies to many central banks since the 2008 financial crisis, as interest
rates have been low and in many cases near zero. 

Contractionary monetary policy slows the rate of growth in the money supply or
outright decreases the money supply in order to control inflation; while sometimes
necessary, contractionary monetary policy can slow economic growth, increase
unemployment and depress borrowing and spending by consumers and businesses.
An example would be the Federal Reserve's intervention in the early 1980s: in order
to curb inflation of nearly 15%, the Fed raised its benchmark interest rate to 20%.
This hike resulted in a recession, but did keep spiraling inflation in check.

Central banks use a number of tools to shape monetary policy. Open market
operations directly affect the money supply through buying short-term government
bonds (to expand money supply) or selling them (to contract it). Benchmark interest
rates, such as the LIBOR and the Fed funds rate, affect the demand for money by
raising or lowering the cost to borrow—in essence, money's price. When borrowing is
cheap, firms will take on more debt to invest in hiring and expansion; consumers will
make larger, long-term purchases with cheap credit; and savers will have more
incentive to invest their money in stocks or other assets, rather than earn very little—
and perhaps lose money in real terms—through savings accounts. Policy makers
also manage risk in the banking system by mandating the reserves that banks must
keep on hand. Higher reserve requirements put a damper on lending and rein in
inflation.

In recent years, unconventional monetary policy has become more common. This
category includes quantitative easing, the purchase of varying financial assets from
commercial banks. In the US, the Fed loaded its balance sheet with trillions of dollars
in Treasury notes and mortgage-backed securities between 2008 and 2013. The Bank
of England, the European Central Bank and the Bank of Japan have pursued similar
policies. The effect of quantitative easing is to raise the price of securities, therefore
lowering their yields, as well as to increase total money supply. Credit easing is a
related unconventional monetary policy tool, involving the purchase of private-sector
assets to boost liquidity. Finally, signaling is the use of public communication to ease
markets' worries about policy changes: for example, a promise not to raise interest
rates for a given number of quarters.

Central banks are often, at least in theory, independent from other policy makers.
This is the case with the Federal Reserve and Congress, reflecting the separation of
monetary policy from fiscal policy. The latter refers to taxes and government
borrowing and spending.

The Federal Reserve has what is commonly referred to as a "dual mandate": to
achieve maximum employment (in practice, around 5% unemployment) and stable
prices (2-3% inflation). In addition, it aims to keep long-term interest rates relatively
low, and since 2009 has served as a bank regulator. Its core role is to be the lender
of last resort, providing banks with liquidity in order to prevent the bank failures and
panics that plagued the US economy prior to the Fed's establishment in 1913. In this
role, it lends to eligible banks at the so-called discount rate, which in turn influences
the Federal funds rate (the rate at which banks lend to each other) and interest rates
on everything from savings accounts to student loans, mortgages and corporate
bonds.

Monetary policy is the process by which the monetary authority of a country, like
the central bank or currency board, controls the supply of money, often targeting
an inflation rate or interest rate to ensure price stability and general trust in the
currency.[1][2][3]

Further goals of monetary policy are usually to contribute to economic growth and
stability, to lower unemployment, and to maintain predictable exchange rates with
other currencies.

Monetary economics provides insight into how to craft an optimal monetary policy.


Since the 1970s, monetary policy has generally been formed separately from fiscal
policy, which refers to taxation, government spending, and associated borrowing.[4]

Monetary policy is referred to as either being expansionary or contractionary.


Expansionary policy is when a monetary authority uses its tools to stimulate the
economy. An expansionary policy increases the total supply of money in the economy
more rapidly than usual. It is traditionally used to try to combat unemployment in
a recession by lowering interest rates in the hope that easy credit will entice
businesses into expanding. Also, this increases the aggregate demand (the overall
demand for all goods and services in an economy), which boosts growth as
measured by gross domestic product (GDP). Expansionary monetary policy usually
diminishes the value of the currency, thereby decreasing the exchange rate.[5]

The opposite of expansionary monetary policy is contractionary monetary policy,
which slows the rate of growth in the money supply or even shrinks it. This slows
economic growth to prevent inflation. Contractionary monetary policy can lead to
increased unemployment and depressed borrowing and spending by consumers and
businesses, which can eventually result in an economic recession; it should hence be
well managed and conducted with care.

5. International Trade and Foreign Exchange Rates

INTERNATIONAL TRADE

What Is International Trade?

International trade is the exchange of goods and services between countries. This
type of trade gives rise to a world economy, in which prices, or supply and demand,
affect and are affected by global events. Political change in Asia, for example, could
result in an increase in the cost of labor, thereby increasing the manufacturing costs
for an American sneaker company based in Malaysia, which would then result in an
increase in the price that you have to pay to buy the tennis shoes at your local mall. A
decrease in the cost of labor, on the other hand, would result in you having to pay
less for your new shoes.

Trading globally gives consumers and countries the opportunity to be exposed to
goods and services not available in their own countries. Almost every kind of product
can be found on the international market: food, clothes, spare parts, oil, jewelry, wine,
stocks, currencies, and water. Services are also traded: tourism, banking, consulting
and transportation. A product that is sold to the global market is an export, and a
product that is bought from the global market is an import. Imports and exports are
accounted for in a country's current account in the balance of payments.

Increased Efficiency of Trading Globally

Global trade allows wealthy countries to use their resources—whether labor,
technology or capital—more efficiently. Because countries are endowed with
different assets and natural resources (land, labor, capital and technology), some
countries may produce the same good more efficiently and therefore sell it more
cheaply than other countries. If a country cannot efficiently produce an item, it can
obtain the item by trading with another country that can. This is known
as specialization in international trade.

Let's take a simple example. Country A and Country B both produce cotton sweaters
and wine. Country A produces ten sweaters and six bottles of wine a year while
Country B produces six sweaters and ten bottles of wine a year. Both can produce a
total of 16 units. Country A, however, takes three hours to produce the ten sweaters
and two hours to produce the six bottles of wine (a total of five hours). Country B, on
the other hand, takes one hour to produce ten sweaters and three hours to produce
six bottles of wine (a total of four hours for the same quantities).

But these two countries realize that they could produce more by focusing on those
products with which they have a comparative advantage. Country A then begins to
produce only wine, and Country B produces only cotton sweaters. Each country can
now create a specialized output of 20 units per year and trade equal proportions of
both products. As such, each country now has access to 20 units of both products.

We can see then that for both countries, the opportunity cost of producing both
products is greater than the cost of specializing. More specifically, for each country,
the opportunity cost of producing 16 units of both sweaters and wine is 20 units of
both products (after trading). Specialization reduces their opportunity cost and
therefore maximizes their efficiency in acquiring the goods they need. With the
greater supply, the price of each product would decrease, thus giving an advantage
to the end consumer as well.
Note that, in the example above, Country B could produce both wine and cotton more
efficiently than Country A (less time). This is called an absolute advantage, and
Country B may have it because of a higher level of technology. However, according
to the international trade theory, even if a country has an absolute advantage over
another, it can still benefit from specialization.
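The hours and outputs quoted above are enough to compute each country's opportunity cost and confirm where the comparative advantage lies. The sketch below assumes nothing beyond those figures; the function names are illustrative.

```python
# Opportunity-cost check for the sweater/wine example.
# Hours each country needs to produce the batch sizes given in the text.
hours = {
    "A": {"sweaters": 3, "wine": 2},  # 10 sweaters in 3 h, 6 bottles in 2 h
    "B": {"sweaters": 1, "wine": 3},  # 10 sweaters in 1 h, 6 bottles in 3 h
}
batch = {"sweaters": 10, "wine": 6}   # units produced in the hours above

def rate(country, good):
    """Units of `good` the country produces per hour."""
    return batch[good] / hours[country][good]

def opportunity_cost(country, good, other):
    """Units of `other` given up to make one unit of `good`."""
    return rate(country, other) / rate(country, good)

# A gives up about 1.11 sweaters per bottle of wine; B gives up 5.
# So A holds the comparative advantage in wine, B in sweaters.
print(round(opportunity_cost("A", "wine", "sweaters"), 2))  # 1.11
print(round(opportunity_cost("B", "wine", "sweaters"), 2))  # 5.0
```

Note that Country B's absolute advantage (it is faster at both goods) does not change this result; only the ratios of the production rates matter.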

Other Possible Benefits of Trading Globally

International trade not only results in increased efficiency but also allows countries to
participate in a global economy, encouraging the opportunity of foreign direct
investment (FDI), which is the amount of money that individuals invest into foreign
companies and other assets. In theory, economies can, therefore, grow more
efficiently and can more easily become competitive economic participants.

For the receiving government, FDI is a means by which foreign currency and
expertise can enter the country. These raise employment levels, and, theoretically,
lead to a growth in the gross domestic product. For the investor, FDI offers company
expansion and growth, which means higher revenues.

Free Trade Vs. Protectionism

As with other theories, there are opposing views. International trade has two
contrasting views regarding the level of control placed on trade: free
trade and protectionism. Free trade is the simpler of the two theories: a laissez-
faire approach, with no restrictions on trade. The main idea is that supply and
demand factors, operating on a global scale, will ensure that production happens
efficiently. Therefore, nothing needs to be done to protect or promote trade and
growth, because market forces will do so automatically.

In contrast, protectionism holds that regulation of international trade is important to
ensure that markets function properly. Advocates of this theory believe that market
inefficiencies may hamper the benefits of international trade and they aim to guide the
market accordingly. Protectionism exists in many different forms, but the most
common are tariffs, subsidies, and quotas. These strategies attempt to correct any
inefficiency in the international market.

The Bottom Line

As it opens up the opportunity for specialization and therefore more efficient use of
resources, international trade has the potential to maximize a country's capacity to
produce and acquire goods. Opponents of global free trade have argued, however,
that international trade still allows for inefficiencies that leave developing nations
compromised. What is certain is that the global economy is in a state of continual
change, and, as it develops, so too must all of its participants.

International trade is the exchange of capital, goods, and services across
international borders or territories. It is the exchange of goods and services among
nations of the world.[1] In most countries, such trade represents a
significant share of gross domestic product (GDP). While international trade has
existed throughout history (for example Uttarapatha, Silk Road, Amber
Road, scramble for Africa, Atlantic slave trade, salt roads), its economic, social, and
political importance has been on the rise in recent centuries.

FOREIGN EXCHANGE RATES

In finance, an exchange rate of two currencies is the rate at which one currency will
be exchanged for another. It is also regarded as the value of one country’s currency
in relation to another currency.[1] For example, an interbank exchange rate of
114 Japanese yen to the United States dollar means that ¥114 will be exchanged for
each US$1 or that US$1 will be exchanged for each ¥114. In this case it is said that
the price of a dollar in relation to yen is ¥114, or equivalently that the price of a yen in
relation to dollars is $1/114.
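The two quote directions described above are simply reciprocals of one another, which a few lines of Python make explicit (a minimal sketch using the ¥114-per-dollar figure from the text; the function names are illustrative):

```python
YEN_PER_USD = 114.0  # interbank quote from the example above

def usd_to_yen(usd):
    """Dollars converted at the quoted rate: the price of a dollar is ¥114."""
    return usd * YEN_PER_USD

def yen_to_usd(yen):
    """Yen converted back: the price of a yen is $1/114."""
    return yen / YEN_PER_USD

print(usd_to_yen(1.0))              # 114.0
print(round(yen_to_usd(114.0), 6))  # 1.0
```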

Exchange rates are determined in the foreign exchange market,[2] which is open to a
wide range of different types of buyers and sellers, and where currency trading is
continuous: 24 hours a day except weekends, i.e. trading from 20:15 GMT on Sunday
until 22:00 GMT Friday. The spot exchange rate refers to the current exchange rate.
The forward exchange rate refers to an exchange rate that is quoted and traded
today but for delivery and payment on a specific future date.

In the retail currency exchange market, different buying and selling rates will be
quoted by money dealers. Most trades are to or from the local currency. The buying
rate is the rate at which money dealers will buy foreign currency, and the selling rate
is the rate at which they will sell that currency. The quoted rates will incorporate an
allowance for a dealer's margin (or profit) in trading, or else the margin may be
recovered in the form of a commission or in some other way. Different rates may also
be quoted for cash, a documentary form, or electronically. The higher rate on
documentary transactions has been justified as compensating for the additional time
and cost of clearing the document. On the other hand, cash is available for resale
immediately, but brings security, storage, and transportation costs, and the cost of
tying up capital in a stock of banknotes (bills).

Exchange Rates

Nominal Exchange Rates versus Real Exchange Rates

As we begin discussing exchange rates, we must make the same distinction that we
made when discussing GDP. Namely, how do nominal exchange rates and real
exchange rates differ?

The nominal exchange rate is the rate at which currency can be exchanged. If the
nominal exchange rate between the dollar and the lira is 1600, then one dollar will
purchase 1600 lira. Exchange rates are always represented in terms of the amount of
foreign currency that can be purchased for one unit of domestic currency. Thus, we
determine the nominal exchange rate by identifying the amount of foreign currency
that can be purchased for one unit of domestic currency.

The real exchange rate is a bit more complicated than the nominal exchange rate.
While the nominal exchange rate tells how much foreign currency can be exchanged
for a unit of domestic currency, the real exchange rate tells how much the goods and
services in the domestic country can be exchanged for the goods and services in a
foreign country. The real exchange rate is represented by the following equation: real
exchange rate = (nominal exchange rate X domestic price) / (foreign price).

Let's say that we want to determine the real exchange rate for wine between the US
and Italy. We know that the nominal exchange rate between these countries is 1600
lira per dollar. We also know that the price of wine in Italy is 3000 lira and the price of
wine in the US is $6. Remember that we are attempting to compare equivalent types
of wine in this example. In this case, we begin with the equation for the real exchange
rate of real exchange rate = (nominal exchange rate X domestic price) / (foreign
price). Substituting in the numbers from above gives real exchange rate = (1600 X
$6) / 3000 lira = 3.2 bottles of Italian wine per bottle of American wine.
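The substitution above can be reproduced directly; this sketch uses only the figures from the text.

```python
# Real exchange rate = (nominal exchange rate x domestic price) / foreign price
nominal_rate = 1600     # lira per dollar
domestic_price = 6      # dollars per bottle of American wine
foreign_price = 3000    # lira per bottle of Italian wine

real_rate = nominal_rate * domestic_price / foreign_price
print(real_rate)  # 3.2 bottles of Italian wine per bottle of American wine
```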

By using both the nominal exchange rate and the real exchange rate, we can deduce
important information about the relative cost of living in two countries. While a high
nominal exchange rate may create the false impression that a unit of domestic
currency will be able to purchase many foreign goods, in reality, only a high real
exchange rate justifies this assumption.

Net Exports and the Real Exchange Rate

An important relationship exists between net exports and the real exchange rate
within a country. When the real exchange rate is high, the relative price of goods at
home is higher than the relative price of goods abroad. In this case, importing is likely
because foreign goods are cheaper, in real terms, than domestic goods. Thus, when
the real exchange rate is high, net exports decrease as imports rise. Alternatively,
when the real exchange rate is low, net exports increase as exports rise. This
relationship helps to show the effects of changes in the real exchange rate.

B. Microeconomics (Concept Of And Factors Affecting Supply; Concept Of And
Factors Affecting Demand; Market Equilibrium; Price Elasticity Of Demand;
Market Structure; Production And Cost Functions)
1. Supply

CONCEPT

What is 'Supply'

Supply is a fundamental economic concept that describes the total amount of a
specific good or service that is available to consumers. Supply can relate to the
amount available at a specific price or the amount available across a range of prices
if displayed on a graph. This relates closely to the demand for a good or service at a
specific price; all else being equal, the supply provided by producers will rise if the
price rises because all firms look to maximize profits.
BREAKING DOWN 'Supply'

Supply and demand trends form the basis of the modern economy. Each specific
good or service will have its own supply and demand patterns based on
price, utility and personal preference. If people demand a good and are willing to pay
more for it, producers will add to the supply. As the supply increases, the price will fall
given the same level of demand. Ideally, markets will reach a point
of equilibrium where the supply equals the demand (no excess supply and no
shortages) for a given price point; at this point, consumer utility and producer profits
are maximized.

‘Supply’ Basics

The concept of supply in economics is complex, with many mathematical formulas,
practical applications and contributing factors. While supply can refer to anything in
demand that is sold in a competitive marketplace, supply is most often used to refer to
goods, services or labor. One of the most important factors that affects supply is the
good’s price. Generally, if a good’s price increases so will the supply. The price of
related goods and the price of inputs (energy, raw materials, labor) also affect supply
as they contribute to increasing the overall price of the good sold.

The conditions of production of the item supplied are also significant; for example,
when a technological advancement increases the quality of a good being supplied, or
if there is a disruptive innovation, such as when a technological advancement renders
a good obsolete or less in demand. Government regulations can also affect supply,
such as environmental laws, as well as the number of suppliers (which increases
competition) and market expectations. An example of this is when environmental laws
regarding the extraction of oil affect the supply of such oil.

Supply is represented in microeconomics by a number of mathematical formulas. The
supply function and equation express the relationship between supply and the
affecting factors, such as those mentioned above or even inflation rates and other
market influences. A supply curve always describes the relationship between the
price of the good and the quantity supplied. A wealth of information can be gleaned
from a supply curve, such as movements (caused by a change in price), shifts
(caused by a change that is not related to the price of the good) and price elasticity. 
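A supply curve's price–quantity relationship can be sketched with a simple linear supply function; the intercept and slope below are illustrative assumptions, not values from the text. A change in price moves along the curve, while a change in the intercept (say, from cheaper inputs) shifts the whole curve.

```python
def quantity_supplied(price, intercept=-10.0, slope=2.0):
    """Linear supply Qs = intercept + slope * price, floored at zero.
    Parameters are illustrative; slope > 0 encodes the law of supply."""
    return max(0.0, intercept + slope * price)

print(quantity_supplied(10))                 # 10.0  (a point on the curve)
print(quantity_supplied(20))                 # 30.0  (movement along the curve)
print(quantity_supplied(10, intercept=0.0))  # 20.0  (curve shifted rightward)
```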

History of ‘Supply’

Supply in economics and finance is often, if not always, associated with demand.
The Law of Supply and Demand is a fundamental and foundational principle of
economics. The law of supply and demand is a theory that describes how supply of a
good and the demand for it interact. Generally, if supply is high and demand low, the
corresponding price will also be low. If supply is low and demand is high, the price will
also be high. This theory assumes market competition in a capitalist system. Supply
and demand in modern economics has been historically attributed to John Locke in
an early iteration, as well as definitively used by Adam Smith’s well-known “An
Inquiry into the Nature and Causes of the Wealth of Nations,” published in Britain in
1776.

The graphical representation of supply curve data was first used in the 1870s by
English economic texts, and then popularized in the seminal textbook “Principles of
Economics” by Alfred Marshall in 1890. It has long been debated why Britain was the
first country to embrace, utilize and publish on theories of supply and demand, and
economics in general. The advent of the industrial revolution and the ensuing British
economic powerhouse, which included heavy production, technological innovation
and an enormous amount of labor, has been a well-discussed cause.

Related Terms & Concepts

Related terms and concepts to supply in today’s context include supply chain
finance and money supply. Money supply refers specifically to the entire stock of
currency and liquid assets in a country. Economists will analyze and monitor this
supply, formulating policies and regulations based on its fluctuation through
controlling interest rates and other such measures. Official data on a country’s money
supply must be accurately recorded and made public periodically. The recent and
ongoing European sovereign debt crisis, which began in 2007, is a good example of
the role of a country’s money supply and the global economic impact.

Global supply chain finance is another important concept related to supply in today’s
globalized world. Supply chain finance aims to effectively link all tenets of a
transaction, including the buyer, seller, financing institution—and by proxy the
supplier—to lower overall financing costs and speed up the process of business.
Supply chain finance is often made possible through a technology-based platform,
and is affecting industries such as the automobile and retail sectors.

The total amount of a product (good or service) available for purchase at any
specified price.

Supply is determined by: (1) Price: producers will try to obtain the highest possible
price, whereas buyers will try to pay the lowest possible price, both settling at the
equilibrium price where supply equals demand. (2) Cost of inputs: the lower the input
price, the higher the profit at a given price level, and more of the product will be
offered at that price. (3) Price of other goods: lower prices of competing goods will
reduce the price, and the supplier may switch to more profitable products, thus
reducing the supply.

In economics, supply is the amount of something that firms, consumers, laborers,
providers of financial assets, or other economic agents are willing to provide to
the marketplace. Supply is often plotted graphically with the quantity provided
(the dependent variable) plotted horizontally and the price (the independent variable)
plotted vertically.

In the goods market, supply is the amount of a product per unit of time that producers
are willing to sell at various given prices when all other factors are held constant. In
the labor market, the supply of labor is the amount of time per week, month, or year
that individuals are willing to spend working, as a function of the wage rate. In
the financial markets, the money supply is the amount of highly liquid assets available
in the money market, which is either determined or influenced by a
country's monetary authority.

A schedule showing the amounts of a good or service that sellers (or a seller) will
offer at various prices during some period.

FACTORS AFFECTING SUPPLY

8 Factors that Influence the Supply of a Product

In economics, supply refers to the quantity of a product available in the market for
sale at a specified price at a given point of time.

Unlike demand, supply refers to the willingness of a seller to sell the specified amount
of a product within a particular price and time.

Supply is always defined in relation to price and time. For example, if a seller agrees
to sell 500 kgs of wheat, it cannot be considered as supply of wheat as the price and
time factors are missing.

Similarly, if a seller is ready to sell 500 kgs at a price of Rs. 30 per kg then again it
would not be considered as supply as the time element is missing. Therefore, the
statement “a seller is willing to sell 500 kgs at the price of Rs. 30 per kg in a week” is
ideal to understand the concept of supply as it relates supply with price and time.

Apart from this, the supply also depends on the stock and market price of the product.
Stock of a product refers to quantity of a product available in the market for sale
within a specified point of time.

Both stock and market price of a product affect its supply to a greater extent. If the
market price is more than the cost price, the seller would increase the supply of a
product in the market. However, the decrease in market price as compared to cost
price would reduce the supply of product in the market.

For example, Mr. X has 100 kgs of a product. He expects the minimum price to be Rs.
90 per kg, and the market price is Rs. 95 per kg. Therefore, he would release a certain
amount of the product, say around 50 kgs, in the market, but would not release the
whole amount, as he would wait for better rates for his product. In
such a case, the supply of his product would be 50 kgs at Rs. 95 per kg.

Determinants of Supply:

Supply can be influenced by a number of factors that are termed determinants of
supply. Generally, the supply of a product depends on its price and cost of
production. In simple terms, supply is the function of price and cost of production.

Some of the factors that influence the supply of a product are described as follows:

i. Price:

Refers to the main factor that influences the supply of a product to a greater extent.
Unlike demand, there is a direct relationship between the price of a product and its
supply. If the price of a product increases, then the supply of the product also
increases and vice versa. Change in supply with respect to the change in price is
termed as the variation in supply of a product.

Speculation about future price can also affect the supply of a product. If the price of a
product is about to rise in future, the supply of the product would decrease in the
present market because of the profit expected by a seller in future. However, the fall
in the price of a product in future would increase the supply of product in the present
market.

ii. Cost of Production:


Implies that the supply of a product would decrease with increase in the cost of
production and vice versa. The supply of a product and cost of production are
inversely related to each other. For example, a seller would supply less quantity of a
product in the market, when the cost of production exceeds the market price of the
product.

In such a case the seller would wait for the rise in price in future. The cost of
production rises due to several factors, such as loss of fertility of land, high wage
rates of labor, and increase in the prices of raw material, transport cost, and tax rate.

iii. Natural Conditions:

Implies that climatic conditions directly affect the supply of certain products. For
example, the supply of agricultural products increases when monsoon comes on
time. However, the supply of these products decreases at the time of drought. Some
of the crops are climate specific and their growth purely depends on climatic
conditions. For example, Kharif crops grow well in the summer season, while
Rabi crops grow well in the winter season.

iv. Technology:

Refers to one of the important determinants of supply. A better and advanced
technology increases the production of a product, which results in the increase in the
supply of the product. For example, the production of fertilizers and good quality
seeds increases the production of crops. This further increases the supply of food
grains in the market.

v. Transport Conditions:

Refer to the fact that better transport facilities increase the supply of products.
Transport is always a constraint to the supply of products, as the products are not
available on time due to poor transport facilities. Therefore even if the price of a
product increases, the supply would not increase.

In India, sellers usually use road transport, and poorly maintained roads make it
difficult to reach the destination on time. Products that are manufactured in one part
of the country need to be spread across the whole country through road transport.
This may result in the damage of many of the products during the journey, which can
cause heavy loss for a seller. In addition, the seller can also lose his/her customers
because of the delay in the delivery of products.

vi. Factor Prices and their Availability:

Act as one of the major determinants of supply. The inputs, such as raw materials,
manpower, equipment, and machines, required at the time of production are termed
factors. If the factors are available in sufficient quantity and at a lower price, then
there would be an increase in production.

This would increase the supply of a product in the market. For example, availability of
cheap labor and raw material nearby the manufacturing plant of an organization
would help in reducing the labor and transportation costs. Consequently, the
production and supply of the product would increase.

vii. Government’s Policies:

Implies that the different policies of government, such as fiscal policy and industrial
policy, have a great impact on the supply of a product. For example, an increase in
excise duties would decrease the supply of a product. On the other hand, if the tax
rate is low, then the supply of a product would increase.

viii. Prices of Related Goods:

Refer to the fact that the prices of substitutes and complementary goods also affect
the supply of a product. For example, if the price of wheat increases, then farmers
would tend to grow more wheat than rice. This would decrease the supply of rice in the
market.

An increase in supply occurs when more is supplied at each price. This could
occur for the following reasons:

1. A decrease in costs of production. This means business can supply more at each
price. Lower costs could be due to lower wages, lower raw material costs
2. More firms. An increase in the number of producers will cause an increase in
supply.
3. Investment in capacity. Expansion in capacity of existing firms, e.g. building a
new factory
4. Related supply. An increase in supply of a related good e.g. beef and leather
5. Weather. Climatic conditions are very important for agricultural products
6. Technological improvements. Improvements in technology, e.g. computers,
reducing firms costs
7. Lower taxes. Lower indirect taxes (e.g. tobacco tax, VAT) reduce the cost of goods
8. Government subsidies. Increase in government subsidies will also reduce the
cost of goods, e.g. train subsidies reduce the price of train tickets.

2. Demand

CONCEPT

A schedule showing the amounts of a good or service that buyers (or a buyer) wish to
purchase at various prices during some time period.

What is 'Demand'

Demand is an economic principle that describes a consumer's desire and willingness
to pay a price for a specific good or service. Holding all other factors constant, an
increase in the price of a good or service will decrease demand, and vice versa.

Think of demand as your willingness to go out and buy a certain product. For
example, market demand is the total of what everybody in the market wants.

BREAKING DOWN 'Demand'

Businesses often spend a considerable amount of money to determine the amount of
demand the public has for their products and services. Incorrect estimations either
result in money left on the table if demand is underestimated or losses if demand is
overestimated.

Demand is closely related to supply. While consumers try to pay the lowest prices
they can for goods and services, suppliers try to maximize profits. If suppliers charge
too much, demand drops and suppliers do not sell enough product to earn sufficient
profits. If suppliers charge too little, demand increases but lower prices may not cover
suppliers’ costs or allow for profits. Some factors affecting demand include the appeal
of a good or service, the availability of competing goods, the availability of financing
and the perceived availability of a good or service.

Aggregate Demand vs. Individual Demand

Every consumer faces a different set of circumstances. The factors she faces vary in
type and degree. The extent to which these factors affect market demand overall is
different from the way they affect the demand of a particular individual. Aggregate
demand refers to the overall or average demand of many market participants.
Individual demand refers to the demand of a particular consumer. For example, a
particular consumer’s demand for a product is strongly influenced by her personal
income. However, her personal income does not significantly affect aggregate
demand in a large economy.

Supply and Demand Curves

Supply and demand factors are unique for a given product or service. These factors
are often summed up in demand and supply profiles plotted as slopes on a graph. On
such a graph, the vertical axis denotes the price, while the horizontal axis denotes
the quantity demanded or supplied. A demand profile slopes downward, from left to
right. As prices increase, consumers demand less of a good or service. A supply
curve slopes upward. As prices increase, suppliers provide more of a good or service.

Market Equilibrium

The point where supply and demand curves intersect represents the market clearing
or market equilibrium price. An increase in demand shifts the demand curve to the
right. The curves intersect at a higher price and consumers pay more for the product.
Equilibrium prices typically remain in a state of flux for most goods and services
because factors affecting supply and demand are always changing. Free, competitive
markets tend to push prices toward market equilibrium.

In economics, demand is the quantity of a commodity or a service that people are
willing and able to buy at a certain price.

Demand in economics is how many goods and services are bought at various prices
during a certain period of time. Demand is the consumer's need or desire to own the
product or experience the service. It's constrained by the willingness and ability of the
consumer to pay for the good or service at the price offered.

Demand is the underlying force that drives everything in the economy. Fortunately for
economics, people are never satisfied. They always want more. This drives economic
growth and expansion. Without demand, no business would ever bother producing
anything.

FACTORS AFFECTING DEMAND

The demand for a product will be influenced by several factors:

Price

Usually viewed as the most important factor that affects demand. Products have
different sensitivity to changes in price. For example, demand for necessities such as
bread, eggs and butter does not tend to change significantly when prices move up or
down.

Income levels

When an individual’s income goes up, their ability to purchase goods and services
increases, and this causes demand to increase. When incomes fall there will be a
decrease in the demand for most goods.

Consumer tastes and preferences

Changing tastes and preferences can have a significant effect on demand for
different products. Persuasive advertising is designed to cause a change in tastes
and preferences and thereby create an increase in demand. A good example of this
is the recent surge in sales of smoothies!

Competition

Competitors are always looking to take a bigger share of the market, perhaps by
cutting their prices or by introducing a new or better version of a product.

Fashions

When a product becomes unfashionable, demand can quickly fall away.



Factors Affecting Demand

Even though the focus in economics is on the relationship between the price of a
product and how much consumers are willing and able to buy, it is important to
examine all of the factors that affect the demand for a good or service.
These factors include:

Price of the Product

There is an inverse (negative) relationship between the price of a product and the
amount of that product consumers are willing and able to buy. Consumers want to
buy more of a product at a low price and less of a product at a high price. This
inverse relationship between price and the amount consumers are willing and able to
buy is often referred to as The Law of Demand.  

The Consumer's Income

The effect that income has on the amount of a product that consumers are willing and
able to buy depends on the type of good we're talking about. For most goods, there is
a positive (direct) relationship between a consumer's income and the amount of the
good that one is willing and able to buy. In other words, for these goods when income
rises the demand for the product will increase; when income falls, the demand for the
product will decrease. We call these types of goods normal goods.

However, for some goods the effect of a change in income is the reverse. For
example, think about low-quality (high fat-content) ground beef. You might buy this
while you are a student, because it is inexpensive relative to other types of meat. But
if your income increases enough, you might decide to stop buying this type of meat
and instead buy leaner cuts of ground beef, or even give up ground beef entirely in
favor of beef tenderloin. If this were the case (that as your income went up, you were
willing to buy less high-fat ground beef), there would be an inverse relationship
between your income and your demand for this type of meat. We call this type of
good an inferior good. There are two important things to keep in mind about inferior
goods. They are not necessarily low-quality goods. The term inferior (as we use it in
economics) just means that there is an inverse relationship between one's income
and the demand for that good. Also, whether a good is normal or inferior may be
different from person to person. A product may be a normal good for you, but an
inferior good for another person.

The Price of Related Goods

As with income, the effect that this has on the amount that one is willing and able to
buy depends on the type of good we're talking about. Think about two goods that are
typically consumed together. For example, bagels and cream cheese. We call these
types of goods complements. If the price of a bagel goes up, the Law of Demand tells
us that we will be willing/able to buy fewer bagels. But if we want fewer bagels, we
will also want to use less cream cheese (since we typically use them together).
Therefore, an increase in the price of bagels means we want to purchase less cream
cheese. We can summarize this by saying that when two goods are complements,
there is an inverse relationship between the price of one good and the demand for the
other good. 

On the other hand, some goods are considered to be substitutes for one another: you
don't consume both of them together, but instead choose to consume one or the
other. For example, for some people Coke and Pepsi are substitutes (as with inferior
goods, what is a substitute good for one person may not be a substitute for another
person). If the price of Coke increases, this may make Pepsi relatively more
attractive. The Law of Demand tells us that fewer people will buy Coke; some of
these people may decide to switch to Pepsi instead, therefore increasing the amount
of Pepsi that people are willing and able to buy. We summarize this by saying that
when two goods are substitutes, there is a positive relationship between the price of
one good and the demand for the other good.

The Tastes and Preferences of Consumers

This is a less tangible item that still can have a big impact on demand. There are all
kinds of things that can change one's tastes or preferences that cause people to want
to buy more or less of a product. For example, if a celebrity endorses a new product,
this may increase the demand for a product. On the other hand, if a new health study
comes out saying something is bad for your health, this may decrease the demand
for the product. Another example is that a person may have a higher demand for an
umbrella on a rainy day than on a sunny day.

The Consumer's Expectations

It doesn't just matter what is currently going on - one's expectations for the future can
also affect how much of a product one is willing and able to buy. For example, if you
hear that Apple will soon introduce a new iPod that has more memory and longer
battery life, you (and other consumers) may decide to wait to buy an iPod until the
new product comes out. When people decide to wait, they are decreasing the current
demand for iPods because of what they expect to happen in the future. Similarly, if
you expect the price of gasoline to go up tomorrow, you may fill up your car with gas
now. So your demand for gas today increased because of what you expect to happen
tomorrow. This is similar to what happened after Hurricane Katrina hit in the fall of
2005. Rumors started that gas stations would run out of gas. As a result, many
consumers decided to fill up their cars (and gas cans), leading to long lines and a big
increase in the demand for gas. This was all based on the expectation of what would
happen.

The Number of Consumers in the Market

As more or fewer consumers enter the market this has a direct effect on the amount
of a product that consumers (in general) are willing and able to buy. For example,
a pizza shop located near a University will have more demand and thus higher sales
during the fall and spring semesters. In the summers, when fewer students are taking
classes, the demand for their product will decrease because the number of
consumers in the area has significantly decreased. 

3. Market Equilibrium

MARKET EQUILIBRIUM

When the supply and demand curves intersect, the market is in equilibrium.  This is where the quantity
demanded and quantity supplied are equal. The corresponding price is the equilibrium price or market-
clearing price, and the quantity is the equilibrium quantity.
   
Putting the supply and demand curves from the previous sections together, the two
curves intersect at Price = $6 and Quantity = 20. In this market, the equilibrium
price is $6 per unit, and the equilibrium quantity is 20 units. At this price level,
the market is in equilibrium: quantity supplied is equal to quantity demanded
(Qs = Qd). The market is clear.

Surplus and shortage:

If the market price is above the equilibrium price, quantity supplied is greater than quantity demanded,
creating a surplus.  Market price will fall.

Example: if you are the producer, you have a lot of excess inventory that you cannot sell. Will you put it on
sale? Most likely yes. Once you lower the price of your product, your product’s quantity demanded will
rise until equilibrium is reached. Therefore, surplus drives price down.

If the market price is below the equilibrium price, quantity supplied is less than quantity demanded, creating
a shortage. The market is not clear. It is in shortage. Market price will rise because of this shortage.

Example: if you are the producer, your product is always out of stock. Will you raise the price to make more
profit? Most for-profit firms will say yes. Once you raise the price of your product, your product’s quantity
demanded will drop until equilibrium is reached.  Therefore, shortage drives price up.

If a surplus exists, price must fall in order to entice additional quantity demanded and reduce quantity
supplied until the surplus is eliminated. If a shortage exists, price must rise in order to entice additional
supply and reduce quantity demanded until the shortage is eliminated.
If the market price (P) is higher than $6 (where Qd = Qs),
for example, P = 8, then Qs = 30 and Qd = 10.
Since Qs > Qd, there is excess quantity supplied in the
market; the market is not clear. The market is in surplus.

THE PRICE WILL DROP BECAUSE OF THIS SURPLUS.

If the market price is lower than the equilibrium price of $6,
for example, P = 4, then Qs = 10 and Qd = 30.
Since Qs < Qd, there is excess quantity demanded in the
market. The market is not clear. The market is in shortage.

THE PRICE WILL RISE DUE TO THIS SHORTAGE.
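The numbers in this example are consistent with the hypothetical linear curves Qd = 50 - 5P and Qs = 5P - 10 (these functional forms are an assumption inferred from the quoted points; the text gives only the points themselves). A minimal Python sketch of the surplus/shortage logic:

```python
def quantity_demanded(price):
    """Linear demand consistent with the example's numbers: Qd = 50 - 5P."""
    return 50 - 5 * price

def quantity_supplied(price):
    """Linear supply consistent with the example's numbers: Qs = 5P - 10."""
    return 5 * price - 10

def market_state(price):
    """Classify the market at a given price."""
    qd, qs = quantity_demanded(price), quantity_supplied(price)
    if qs > qd:
        return "surplus"      # excess quantity supplied: price tends to fall
    if qs < qd:
        return "shortage"     # excess quantity demanded: price tends to rise
    return "equilibrium"      # Qs = Qd: the market clears

# Solving 50 - 5P = 5P - 10 gives P = 6, Q = 20, matching the example.
print(market_state(8))  # surplus  (Qs = 30 > Qd = 10)
print(market_state(4))  # shortage (Qs = 10 < Qd = 30)
print(market_state(6))  # equilibrium
```

Any pair of linear curves could be substituted; the classification logic is the same.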


 

Government regulations can create surpluses and shortages in the market. When a price ceiling is set,
there will be a shortage. When there is a price floor, there will be a surplus.

Price Floor: a legally imposed minimum price on the market. Transactions below this price are prohibited.

•Policy makers set the floor price above the market equilibrium price, which they believe is too low.
•Price floors are most often placed on markets for goods that are an important source of income for the
sellers, such as the labor market.
•Price floors generate surpluses on the market.
•Example: minimum wage.

Price Ceiling: a legally imposed maximum price on the market. Transactions above this price are prohibited.

•Policy makers set the ceiling price below the market equilibrium price, which they believe is too high.
•The intention of a price ceiling is to keep goods affordable for low-income consumers.
•Price ceilings generate shortages on the market.
•Example: rent control.
 

Changes in equilibrium price and quantity:

Equilibrium price and quantity are determined by the intersection of supply and demand. A change in
supply, or demand, or both, will necessarily change the equilibrium price, quantity or both. It is highly
unlikely that the change in supply and demand perfectly offset one another so that equilibrium remains the
same.
Example: This example is based on the assumption of Ceteris Paribus.
1) If there is an exporter who is willing to export oranges from Florida to Asia, he will increase the
demand for Florida’s oranges. An increase in demand will create a shortage, which increases the
equilibrium price and equilibrium quantity.        
 2) If there is an importer who is willing to import oranges from Mexico to Florida, he will increase
the supply of Florida’s oranges. An increase in supply will create a surplus, which lowers the equilibrium
price and increases the equilibrium quantity.
 3) What will happen if the exporter and importer enter Florida’s orange market at the same
time? From the above analysis, we can tell that the equilibrium quantity will be higher. But the importer’s
and exporter’s impacts on price are opposite. Therefore, the change in equilibrium price cannot be determined
unless more details are provided. These details should include the exact quantities the exporter and
importer are engaged in. By comparing the quantities of the importer and exporter, we can determine who
has more impact on the market.
In the following graphs, examples of a demand increase and a supply increase are illustrated.

In this graph, supply is constant and demand increases. As the new demand curve
(Demand 2) shows, the new curve is located on the right-hand side of the original
demand curve. The new curve intersects the original supply curve at a new point. At
this point, the equilibrium price (market price) is higher, and the equilibrium
quantity is higher also.

In this graph, demand is constant and supply increases. As the new supply curve
(SUPPLY 2) shows, the new curve is located on the right side of the original supply
curve. The new curve intersects the original demand curve at a new point. At this
point, the equilibrium price (market price) is lower, and the equilibrium quantity is
higher.

In this graph, the increased demand curve and the increased supply curve are drawn
together. The new intersection point is located on the right-hand side of the
original intersection point. This new equilibrium point indicates an equilibrium
quantity which is higher than the original equilibrium quantity. The equilibrium
price is also higher, because demand has increased relatively more than supply in
this case.

4. Price Elasticity of Demand

The ratio of the percentage change in quantity demanded of a product or resource to
the percentage change in its price; a measure of the responsiveness of buyers to a
change in the price of a product or resource.

What is 'Price Elasticity Of Demand'

Price elasticity of demand is a measure of the relationship between a change in the
quantity demanded of a particular good and a change in its price. Price elasticity of
demand is a term in economics often used when discussing price sensitivity. The
formula for calculating price elasticity of demand is:

Price Elasticity of Demand = % Change in Quantity Demanded / % Change in Price

If a small change in price is accompanied by a large change in quantity demanded,
the product is said to be elastic (or responsive to price changes). Conversely, a
product is inelastic if a large change in price is accompanied by a small amount of
change in quantity demanded.

BREAKING DOWN 'Price Elasticity Of Demand'

Price elasticity of demand measures the responsiveness of demand to changes in
price for a particular good. If the price elasticity of demand is equal to 0, demand is
perfectly inelastic (i.e., demand does not change when price changes). Values
between zero and one indicate that demand is inelastic (this occurs when the percent
change in demand is less than the percent change in price). When price elasticity of
demand equals one, demand is unit elastic (the percent change in demand is equal to
the percent change in price). Finally, if the value is greater than one, demand is
elastic (demand is affected to a greater degree by changes in price).

For example, if the quantity demanded for a good increases 15% in response to a
10% decrease in price, the price elasticity of demand would be 15% / 10% = 1.5. The
degree to which the quantity demanded for a good changes in response to a change
in price can be influenced by a number of factors. Factors include the number of
close substitutes (demand is more elastic if there are close substitutes) and whether
the good is a necessity or luxury (necessities tend to have inelastic demand while
luxuries are more elastic).
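The formula and classification above can be sketched in a few lines of Python (a minimal illustration; the function names are ours, and percentage changes are passed in percentage points):

```python
def price_elasticity(pct_change_qty, pct_change_price):
    """PED = % change in quantity demanded / % change in price.
    Following the text's convention, the sign is dropped (absolute value)."""
    return abs(pct_change_qty / pct_change_price)

def classify(ped):
    """Classify demand using the absolute PED, as in the text."""
    if ped == 0:
        return "perfectly inelastic"
    if ped < 1:
        return "inelastic"
    if ped == 1:
        return "unit elastic"
    return "elastic"

# The text's example: quantity demanded rises 15% when price falls 10%.
ped = price_elasticity(15, -10)   # percentage points
print(ped, classify(ped))         # 1.5 elastic
```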

Businesses evaluate price elasticity of demand for various products to help predict
the impact of pricing on product sales. Typically, businesses charge higher prices if
demand for the product is price inelastic.

Price elasticity of demand (PED or Ed) is a measure used in economics to show the
responsiveness, or elasticity, of the quantity demanded of a good or service to a
change in its price, ceteris paribus. More precisely, it gives the percentage change in
quantity demanded in response to a one percent change in price (ceteris paribus).

Price elasticities are almost always negative, although analysts tend to ignore the
sign even though this can lead to ambiguity. Only goods which do not conform to
the law of demand, such as Veblen and Giffen goods, have a positive PED. In
general, the demand for a good is said to be inelastic (or relatively inelastic) when the
PED is less than one (in absolute value): that is, changes in price have a relatively
small effect on the quantity of the good demanded. The demand for a good is said to
be elastic (or relatively elastic) when its PED is greater than one (in absolute value):
that is, changes in price have a relatively large effect on the quantity of a good
demanded. Demand for a good is unit elastic when its PED equals one in absolute value.
Revenue is maximized when price is set so that the PED is exactly one. The
PED of a good can also be used to predict the incidence (or "burden") of a tax on
that good. Various research methods are used to determine price elasticity,
including test markets, analysis of historical sales data and conjoint analysis.
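The claim that revenue is maximized where PED is exactly one can be checked numerically. The linear demand curve below (Q = 50 - 5P) is an assumed example, not taken from the text:

```python
def demand(price):
    """Hypothetical linear demand curve Q = 50 - 5P (an assumed example)."""
    return 50 - 5 * price

def revenue(price):
    """Total revenue R = P * Q."""
    return price * demand(price)

def point_elasticity(price):
    """Point PED for this curve: (dQ/dP) * P / Q, with slope dQ/dP = -5."""
    return -5 * price / demand(price)

# Scan a grid of prices: revenue peaks exactly where |PED| = 1.
prices = [p / 10 for p in range(1, 100)]   # 0.1, 0.2, ..., 9.9
best = max(prices, key=revenue)
print(best, revenue(best), abs(point_elasticity(best)))  # 5.0 125.0 1.0
```

Below $5 demand is inelastic (raising price raises revenue); above $5 it is elastic (raising price lowers revenue).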

5. Market Structure

Market structure is best defined as the organisational and other characteristics of a
market. We focus on those characteristics which affect the nature of competition and
pricing – but it is important not to place too much emphasis simply on the market
share of the existing firms in an industry.
Key Summary on Market Structures

Traditionally, the most important features of market structure are:

1. The number of firms (including the scale and extent of foreign competition)
2. The market share of the largest firms (measured by the concentration ratio)
3. The nature of costs (including the potential for firms to exploit economies of
scale and also the presence of sunk costs which affects market
contestability in the long term)
4. The degree to which the industry is vertically integrated - vertical integration
explains the process by which different stages in production and distribution
of a product are under the ownership and control of a single enterprise. A
good example of vertical integration is the oil industry, where the major oil
companies own the rights to extract from oilfields, they run a fleet of tankers,
operate refineries and have control of sales at their own filling stations.
5. The extent of product differentiation (which affects cross-price elasticity of
demand)
6. The structure of buyers in the industry (including the possibility of
monopsony power)
7. The turnover of customers (sometimes known as "market churn") – i.e. how
many customers are prepared to switch their supplier over a given time
period when market conditions change. The rate of customer churn is
affected by the degree of consumer or brand loyalty and the influence of
persuasive advertising and marketing

6. Production and Cost Functions

PRODUCTION FUNCTION


[Graph: total, average, and marginal product]

In economics, a production function relates physical output of a production process to
physical inputs or factors of production. The production function is one of the key
concepts of mainstream neoclassical theories, used to define marginal product and to
distinguish allocative efficiency, the defining focus of economics. The primary
purpose of the production function is to address allocative efficiency in the use of
factor inputs in production and the resulting distribution of income to those factors,
while abstracting away from the technological problems of achieving technical
efficiency, as an engineer or professional manager might understand it. Production
function denotes an efficient combination of inputs and outputs.

In macroeconomics, aggregate production functions are estimated to create a
framework in which to distinguish how much of economic growth to attribute to
changes in factor allocation (e.g. the accumulation of capital) and how much to
attribute to advancing technology. Some non-mainstream economists, however,
reject the very concept of an aggregate production function.[1][2]

The theory of production functions

In general, economic output is not a (mathematical) function of input, because any
given set of inputs can be used to produce a range of outputs. To satisfy the
mathematical definition of a function, a production function is customarily assumed to
specify the maximum output obtainable from a given set of inputs. The production
function, therefore, describes a boundary or frontier representing the limit of output
obtainable from each feasible combination of input. (Alternatively, a production
function can be defined as the specification of the minimum input requirements
needed to produce designated quantities of output.) Assuming that maximum output
is obtained from given inputs allows economists to abstract away from technological
and managerial problems associated with realizing such a technical maximum, and to
focus exclusively on the problem of allocative efficiency, associated with
the economic choice of how much of a factor input to use, or the degree to which one
factor may be substituted for another. In the production function itself, the relationship
of output to inputs is non-monetary; that is, a production function relates physical
inputs to physical outputs, and prices and costs are not reflected in the function.

In the decision frame of a firm making economic choices regarding production—how
much of each factor input to use to produce how much output—and facing market
prices for output and inputs, the production function represents the possibilities
afforded by an exogenous technology. Under certain assumptions, the production
function can be used to derive a marginal product for each factor. The profit-
maximizing firm in perfect competition (taking output and input prices as given) will
choose to add input right up to the point where the marginal cost of additional input
matches the marginal product in additional output. This implies an ideal division of the
income generated from output into an income due to each input factor of production,
equal to the marginal product of each input.

The inputs to the production function are commonly termed factors of production and
may represent primary factors, which are stocks. Classically, the primary factors of
production were Land, Labor and Capital. Primary factors do not become part of the
output product, nor are the primary factors, themselves, transformed in the production
process. The production function, as a theoretical construct, may be abstracting away
from the secondary factors and intermediate products consumed in a production
process. The production function is not a full model of the production process: it
deliberately abstracts from inherent aspects of physical production processes that
some would argue are essential, including error, entropy or waste, and the
consumption of energy or the co-production of pollution. Moreover, production
functions do not ordinarily model the business processes, either, ignoring the role of
strategic and operational business management. (For a primer on the fundamental
elements of microeconomic production theory, see production theory basics).

The production function is central to the marginalist focus of neoclassical economics,
its definition of efficiency as allocative efficiency, its analysis of how market prices can
govern the achievement of allocative efficiency in a decentralized economy, and an
analysis of the distribution of income, which attributes factor income to the marginal
product of factor input.

Specifying the production function



A production function can be expressed in a functional form as the right side of

Q = f(X1, X2, X3, ..., Xn)

where Q is the quantity of output and X1, X2, X3, ..., Xn are the quantities of factor
inputs (such as capital, labour, land or raw materials).

If Q is not a matrix (i.e., a scalar, a vector, or even a diagonal matrix), then this form
does not encompass joint production, which is a production process that has multiple
co-products. On the other hand, if f maps from R^n to R^k, then it is a joint production
function expressing the determination of k different types of output based on the joint
usage of the specified quantities of the n inputs.

One formulation, unlikely to be relevant in practice, is as a linear function:

Q = a + b*X1 + c*X2 + d*X3 + ...

where a, b, c, d are parameters that are determined empirically. Another is as a Cobb-
Douglas production function:

Q = a * X1^b * X2^c

The Leontief production function applies to situations in which inputs must be used in
fixed proportions; starting from those proportions, if usage of one input is increased
without another being increased, output will not change. This production function is
given by

Q = min(X1/a, X2/b)

Other forms include the constant elasticity of substitution production function (CES),
which is a generalized form of the Cobb-Douglas function, and the quadratic
production function. The best form of the equation to use and the values of the
parameters (a, b, c, ...) vary from company to company and industry to industry. In a
short run production function at least one of the X's (inputs) is fixed. In the long run
all factor inputs are variable at the discretion of management.
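Two of the named forms, Cobb-Douglas and Leontief, can be sketched directly; all parameter values below are illustrative assumptions:

```python
def cobb_douglas(K, L, A=1.0, alpha=0.3, beta=0.7):
    """Cobb-Douglas form Q = A * K^alpha * L^beta (parameter values assumed)."""
    return A * K**alpha * L**beta

def leontief(K, L, a=2.0, b=1.0):
    """Leontief (fixed-proportions) form Q = min(K/a, L/b) (parameter values assumed)."""
    return min(K / a, L / b)

# Leontief: raising one input alone leaves output unchanged, as the text notes.
print(leontief(10, 5), leontief(20, 5))  # 5.0 5.0

# Cobb-Douglas with alpha + beta = 1 has constant returns to scale:
print(cobb_douglas(10, 10), cobb_douglas(20, 20))  # ~10.0 ~20.0
```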

Production function as a graph



[Graph: quadratic production function]

Any of these equations can be plotted on a graph. A typical (quadratic) production
function is shown in the following diagram under the assumption of a single variable
input (or fixed ratios of inputs so they can be treated as a single variable). All points
above the production function are unobtainable with current technology, all points
below are technically feasible, and all points on the function show the maximum
quantity of output obtainable at the specified level of usage of the input. From point A
to point C, the firm is experiencing positive but decreasing marginal returns to the
variable input. As additional units of the input are employed, output increases but at a
decreasing rate. Point B is the point beyond which there are diminishing average
returns, as shown by the declining slope of the average physical product curve (APP)
beyond point Y. Point B is just tangent to the steepest ray from the origin hence the
average physical product is at a maximum. Beyond point B, mathematical necessity
requires that the marginal curve must be below the average curve (See production
theory basics for further explanation.).

Stages of production

To simplify the interpretation of a production function, it is common to divide its range
into 3 stages. In Stage 1 (from the origin to point B) the variable input is being used
with increasing output per unit, the latter reaching a maximum at point B (since the
average physical product is at its maximum at that point). Because the output per unit
of the variable input is improving throughout stage 1, a price-taking firm will always
operate beyond this stage.

In Stage 2, output increases at a decreasing rate, and the average and marginal
physical product are declining. However, the average product of fixed inputs (not
shown) is still rising, because output is rising while fixed input usage is constant. In
this stage, the employment of additional variable inputs increases the output per unit
of fixed input but decreases the output per unit of the variable input. The optimum
input/output combination for the price-taking firm will be in stage 2, although a firm
facing a downward-sloped demand curve might find it most profitable to operate in
Stage 1. In Stage 3, too much variable input is being used relative to the available
fixed inputs: variable inputs are over-utilized in the sense that their presence on the
margin obstructs the production process rather than enhancing it. The output per unit
of both the fixed and the variable input declines throughout this stage. At the
boundary between stage 2 and stage 3, the highest possible output is being obtained
from the fixed input.
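The three stages can be traced with a concrete short-run production function. The cubic form below and its coefficients are assumptions chosen for clean numbers (the text's diagram is not reproduced here):

```python
def total_product(x):
    """Hypothetical short-run production function Q = 30x^2 - x^3 (coefficients assumed)."""
    return 30 * x**2 - x**3

def marginal_product(x):
    """MPP = dQ/dx = 60x - 3x^2."""
    return 60 * x - 3 * x**2

def average_product(x):
    """APP = Q/x = 30x - x^2."""
    return 30 * x - x**2

# Stage 1 ends where APP peaks, which is where MPP = APP (here x = 15);
# Stage 2 ends where MPP falls to zero (here x = 20).
print(marginal_product(15), average_product(15))   # 225 225
print(marginal_product(20))                        # 0
print(marginal_product(10) > average_product(10))  # True: in Stage 1, MPP > APP
```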

Shifting a production function

By definition, in the long run the firm can change its scale of operations by adjusting
the level of inputs that are fixed in the short run, thereby shifting the production
function upward as plotted against the variable input. If fixed inputs are lumpy,
adjustments to the scale of operations may be more significant than what is required
to merely balance production capacity with demand. For example, you may only need
to increase production by million units per year to keep up with demand, but the
production equipment upgrades that are available may involve increasing productive
capacity by 2 million units per year.

[Graph: shifting a production function]

If a firm is operating at a profit-maximizing level in stage one, it might, in the long run,
choose to reduce its scale of operations (by selling capital equipment). By reducing
the amount of fixed capital inputs, the production function will shift down. The
beginning of stage 2 shifts from B1 to B2. The (unchanged) profit-maximizing output
level will now be in stage 2.

Homogeneous and homothetic production functions

There are two special classes of production functions that are often analyzed. The
production function Q = f(X1, X2, ..., Xn) is said to be homogeneous of degree m if, given
any positive constant k, f(kX1, kX2, ..., kXn) = k^m f(X1, X2, ..., Xn). If m > 1, the function
exhibits increasing returns to scale, and it exhibits decreasing returns to scale if m < 1.
If it is homogeneous of degree 1, it exhibits constant returns to scale. The presence of
increasing returns means that a one percent increase in the usage levels of all inputs
would result in a greater than one percent increase in output; the presence of
decreasing returns means that it would result in a less than one percent increase in
output. Constant returns to scale is the in-between case. In the Cobb-Douglas
production function referred to above, with output elasticities b and c, returns to scale
are increasing if b + c > 1, decreasing if b + c < 1, and constant if b + c = 1.
If a production function is homogeneous of degree one, it is sometimes called
"linearly homogeneous". A linearly homogeneous production function with inputs
capital and labour has the properties that the marginal and average physical products
of both capital and labour can be expressed as functions of the capital-labour ratio
alone. Moreover, in this case if each input is paid at a rate equal to its marginal
product, the firm's revenues will be exactly exhausted and there will be no excess
economic profit.[3]:pp.412–414
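As a numerical cross-check of the definitions above (a sketch with arbitrary, illustrative parameter values, not data from the text), a Cobb-Douglas function can be scaled to confirm it is homogeneous of degree equal to the sum of its exponents:

```python
# Sketch: a Cobb-Douglas function f(K, L) = K**a * L**b is homogeneous
# of degree a + b, so scaling both inputs by k scales output by
# k**(a + b). Parameter and input values below are illustrative.

def cobb_douglas(K, L, a, b):
    return K**a * L**b

a, b = 0.3, 0.9          # a + b = 1.2 > 1: increasing returns to scale
K, L, k = 4.0, 9.0, 2.0  # arbitrary input levels and scale factor

scaled = cobb_douglas(k * K, k * L, a, b)
predicted = k**(a + b) * cobb_douglas(K, L, a, b)

assert abs(scaled - predicted) < 1e-9
# With increasing returns, doubling both inputs more than doubles output:
assert scaled > 2 * cobb_douglas(K, L, a, b)
```

Running the same check with exponents summing to less than 1 would show scaled output falling short of a proportional increase (decreasing returns).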

Homothetic functions are functions whose marginal technical rate of substitution (the
slope of the isoquant, a curve drawn through the set of points in say labour-capital
space at which the same quantity of output is produced for varying combinations of
the inputs) is homogeneous of degree zero. Due to this, along rays coming from the
origin, the slopes of the isoquants will be the same. Homothetic functions are of the
form F(h(X1, X2, ..., Xn)), where F(y) is a monotonically increasing function (the
derivative of F is positive, F′ > 0), and the function h(X1, X2, ..., Xn) is a homogeneous
function of any degree.

Aggregate production functions


See also: Cambridge capital controversy

In macroeconomics, aggregate production functions for whole nations are sometimes
constructed. In theory they are the summation of all the production functions of
individual producers; however there are methodological problems associated with
aggregate production functions, and economists have debated extensively whether
the concept is valid.[2]

Criticisms of the production function theory

There are two major criticisms of the standard form of the production function.[4]

On the concept of capital



During the 1950s, '60s, and '70s there was a lively debate about the theoretical
soundness of production functions (see the Capital controversy). Although the
criticism was directed primarily at aggregate production functions, microeconomic
production functions were also put under scrutiny. The debate began in 1953
when Joan Robinson criticized the way the factor input capital was measured and
how the notion of factor proportions had distracted economists. She wrote:

"The production function has been a powerful instrument of miseducation. The
student of economic theory is taught to write Q = f (L, K ) where L is a quantity of
labor, K a quantity of capital and Q a rate of output of commodities. He is instructed to
assume all workers alike, and to measure L in man-hours of labor; he is told
something about the index-number problem in choosing a unit of output; and then he
is hurried on to the next question, in the hope that he will forget to ask in what units K
is measured. Before he ever does ask, he has become a professor, and so sloppy
habits of thought are handed on from one generation to the next".[5]

According to the argument, it is impossible to conceive of capital in such a way that
its quantity is independent of the rates of interest and wages. The problem is that this
independence is a precondition of constructing an isoquant. Further, the slope of the
isoquant helps determine relative factor prices, but the curve cannot be constructed
(and its slope measured) unless the prices are known beforehand.

On the empirical relevance

As a result of the criticism of their weak theoretical grounds, it has been claimed that
empirical results firmly support the use of neoclassical well-behaved aggregate
production functions. Nevertheless, Anwar Shaikh has demonstrated that they also
have no empirical relevance, as the alleged good fit follows from an accounting
identity, not from any underlying laws of production/distribution.[6]

Natural resources
See also: Nicholas Georgescu-Roegen §  Criticising neoclassical economics (weak
versus strong sustainability)

Natural resources are usually absent in production functions. When Robert
Solow and Joseph Stiglitz attempted to develop a more realistic production function
by including natural resources, they did it in a manner economist Nicholas
Georgescu-Roegen criticized as a "conjuring trick": Solow and Stiglitz had failed to
take into account the laws of thermodynamics, since their variant allowed man-made
capital to be a complete substitute for natural resources. Neither Solow nor Stiglitz
reacted to Georgescu-Roegen's criticism, despite an invitation to do so in the
September 1997 issue of the journal Ecological Economics.[1] [7]:127-136 [2] [8]
The practice of production functions
The theory of production function depicts the relation between physical outputs of a
production process and physical inputs, i.e. factors of production. The practical
application of production function is obtained by valuing the physical outputs and
inputs by their prices. The economic value of physical outputs minus the economic
value of physical inputs is the income generated by the production process. By
keeping the prices fixed between two periods under review we get the income change
generated by the change of production function. This is the principle by which the
production function is made a practical concept, i.e. measurable and understandable
in practical situations.
About production
Economic well-being is created in a production process, meaning all economic
activities that aim directly or indirectly to satisfy human needs. The degree to which
the needs are satisfied is often accepted as a measure of economic well-being. In
production there are two features which explain increasing economic well-being. They
are improving quality-price-ratio of commodities and increasing incomes from growing
and more efficient market production. The most important forms of production are
 market production
 public production
 household production
In order to understand the origin of the economic well-being we must understand
these three production processes. All of them produce commodities which have value
and contribute to well-being of individuals.
The satisfaction of needs originates from the use of the commodities which are
produced. The need satisfaction increases when the quality-price-ratio of the
commodities improves and more satisfaction is achieved at less cost. Improving the
quality-price-ratio of commodities is for a producer an essential way to improve the
competitiveness of products, but gains of this kind distributed to customers cannot be
measured with production data. To the producer, improving the competitiveness of
products often means lower product prices and therefore losses in income, which
have to be compensated by growth in sales volume.
Economic well-being also increases due to the growth of incomes that are gained
from the growing and more efficient market production. Market production is the only
one production form which creates and distributes incomes to stakeholders. Public
production and household production are financed by the incomes generated in
market production. Thus market production has a double role in creating well-being,
i.e. the role of producing and developing commodities and the role of creating income.
Because of this double role, market production is the “primus motor” of economic
well-being and is therefore under review here.
Main processes of a producing company
A producing company can be divided into sub-processes in different ways; yet, the
following five are identified as main processes, each with a logic, objectives, theory
and key figures of its own. It is important to examine each of them individually, yet, as
a part of the whole, in order to be able to measure and understand them. The main
processes of a company are as follows:

Main processes of a producing company (Saari 2006,3)


 real process
 income distribution process
 production process
 monetary process
 market value process
Production output is created in the real process, gains of production are distributed in
the income distribution process and these two processes constitute the production
process. The production process and its sub-processes, the real process and income
distribution process occur simultaneously, and only the production process is
identifiable and measurable by the traditional accounting practices. The real process
and income distribution process can be identified and measured by extra calculation,
and this is why they need to be analysed separately in order to understand the logic
of production and its performance.
Real process generates the production output from input, and it can be described by
means of the production function. It refers to a series of events in production in which
production inputs of different quality and quantity are combined into products of
different quality and quantity. Products can be physical goods, immaterial services
and most often combinations of both. The characteristics created into the product by
the producer imply surplus value to the consumer, and on the basis of the market
price this value is shared by the consumer and the producer in the marketplace. This
is the mechanism through which surplus value originates to the consumer and the
producer likewise. It is worth noting that surplus values to customers cannot be
measured from any production data. Instead the surplus value to a producer can be
measured. It can be expressed both in terms of nominal and real values. The real
surplus value to the producer is an outcome of the real process, real income, and
measured proportionally it means productivity.
The concept of “real process”, in the sense of the quantitative structure of the
production process, was introduced in Finnish management accounting in the 1960s.
Since then it has been a cornerstone in the Finnish management accounting theory.
(Riistama et al. 1971)
Income distribution process of the production refers to a series of events in which the
unit prices of constant-quality products and inputs alter causing a change in income
distribution among those participating in the exchange. The magnitude of the change
in income distribution is directly proportionate to the change in prices of the output
and inputs and to their quantities. Productivity gains are distributed, for example, to
customers as lower product sales prices or to staff as higher income pay.

The production process consists of the real process and the income distribution
process. A result and a criterion of success of the owner is profitability. The
profitability of production is the share of the real process result the owner has been
able to keep to himself in the income distribution process. Factors describing the
production process are the components of profitability, i.e., returns and costs. They
differ from the factors of the real process in that the components of profitability are
given at nominal prices whereas in the real process the factors are at periodically
fixed prices.
Monetary process refers to events related to financing the business. Market value
process refers to a series of events in which investors determine the market value of
the company in the investment markets.
Production growth and performance
Production growth is often defined as a production increase of an output of a
production process. It is usually expressed as a growth percentage depicting growth
of the real production output. The real output is the real value of products produced in
a production process and when we subtract the real input from the real output we get
the real income. The real output and the real income are generated by the real
process of production from the real inputs.
The real process can be described by means of the production function. The
production function is a graphical or mathematical expression showing the
relationship between the inputs used in production and the output achieved. Both
graphical and mathematical expressions are presented and demonstrated. The
production function is a simple description of the mechanism of income generation in
production process. It consists of two components. These components are a change
in production input and a change in productivity.[9][10]

Components of production growth (Saari 2006,2)


The figure illustrates an income generation process (exaggerated for clarity). The
Value T2 (value at time 2) represents the growth in output from Value T1 (value at
time 1). Each time of measurement has its own graph of the production function for
that time (the straight lines). The output measured at time 2 is greater than the output
measured at time one for both of the components of growth: an increase of inputs
and an increase of productivity. The portion of growth caused by the increase in
inputs is shown on line 1 and does not change the relation between inputs and
outputs. The portion of growth caused by an increase in productivity is shown on line
2 with a steeper slope. So increased productivity represents greater output per unit of
input.

The growth of production output does not reveal anything about the performance of
the production process. The performance of production measures production’s ability
to generate income. Because the income from production is generated in the real
process, we call it the real income. Similarly, as the production function is an
expression of the real process, we could also call it “income generated by the
production function”.
The real income generation follows the logic of the production function. Two
components can also be distinguished in the income change: the income growth
caused by an increase in production input (production volume) and the income
growth caused by an increase in productivity. The income growth caused by
increased production volume is determined by moving along the production function
graph. The income growth corresponding to a shift of the production function is
generated by the increase in productivity. The change of real income thus signifies a
move from point 1 to point 2 on the production function (above). When we
want to maximize the production performance we have to maximize the income
generated by the production function.
The sources of productivity growth and production volume growth are explained as
follows. Productivity growth is seen as the key economic indicator of innovation. The
successful introduction of new products and new or altered processes, organization
structures, systems, and business models generates growth of output that exceeds
the growth of inputs. This results in growth in productivity or output per unit of input.
Income growth can also take place without innovation through replication of
established technologies. With only replication and without innovation, output will
increase in proportion to inputs. (Jorgenson et al. 2014,2) This is the case of income
growth through production volume growth.
Jorgenson et al. (2014,2) give an empirical example. They show that the great
preponderance of economic growth in the US since 1947 involves the replication of
existing technologies through investment in equipment, structures, and software and
expansion of the labor force. Further they show that innovation accounts for only
about twenty percent of US economic growth.
In the case of a single production process (described above) the output is defined as
an economic value of products and services produced in the process. When we want
to examine an entity of many production processes we have to sum up the value-
added created in the single processes. This is done in order to avoid the double
accounting of intermediate inputs. Value-added is obtained by subtracting the
intermediate inputs from the outputs. The most well-known and used measure of
value-added is the GDP (Gross Domestic Product). It is widely used as a measure of
the economic growth of nations and industries.
Absolute (total) and average income
The production performance can be measured as an average or an absolute income.
Expressing performance both in average (avg.) and absolute (abs.) quantities is
helpful for understanding the welfare effects of production. For measurement of the
average production performance, we use the known productivity ratio

Average and marginal productivity (Saari 2011,8)


 Real output / Real input.
The absolute income of performance is obtained by subtracting the real input from the
real output as follows:
 Real income (abs.) = Real output – Real input
The growth of the real income is the increase of the economic value which can be
distributed between the production stakeholders. With the aid of the production model
we can perform the average and absolute accounting in one calculation. Maximizing
production performance requires using the absolute measure, i.e. the real income
and its derivatives as a criterion of production performance.
The differences between the absolute and average performance measures can be
illustrated by the following graph showing marginal and average productivity. The
figure is a traditional expression of average productivity and marginal productivity.
The maximum for production performance is achieved at the volume where marginal
productivity is zero. The maximum for production performance is the maximum of the
real incomes. In this illustrative example the maximum real income is achieved, when
the production volume is 7.5 units. The maximum average productivity is reached
when the production volume is 3.0 units. It is worth noting that the maximum average
productivity is not the same as the maximum of real income.
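The distinction can be illustrated with a small numeric sketch. The output schedule below is hypothetical (it does not reproduce the data behind the Saari figure, where the optima fall at 7.5 and 3.0 units); the point is only that the volume maximizing real income differs from the volume maximizing average productivity:

```python
# Sketch: with a hypothetical production function, the input volume that
# maximizes real income (output minus input, input priced at 1 per unit)
# is not the volume that maximizes average productivity (output / input).

def real_output(x):
    # Illustrative S-shaped output schedule, not data from the article.
    return 9 * x**2 - (2 / 3) * x**3

def real_income(x):
    return real_output(x) - x   # absolute performance measure

volumes = [v / 10 for v in range(1, 121)]   # grid search over 0.1 .. 12.0

best_income = max(volumes, key=real_income)
best_avg = max(volumes, key=lambda x: real_output(x) / x)

# Maximizing average productivity stops short of the real-income optimum:
assert best_avg < best_income
assert real_income(best_income) >= real_income(best_avg)
```

The grid search is deliberately crude; the same conclusion follows from calculus, since average productivity peaks where it equals marginal productivity, not where the marginal surplus reaches zero.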
Figure above is a somewhat exaggerated depiction because the whole production
function is shown. In practice, decisions are made in a limited range of the production
functions, but the principle is still the same; the maximum real income is aimed for.
An important conclusion can be drawn. When we try to maximize the welfare effects
of production we have to maximize real income formation. Maximizing productivity
leads to a suboptimum, i.e. to losses of incomes.
A practical example illustrates the case. When a jobless person obtains a job in
market production we may assume it is a low productivity job. As a result, average
productivity decreases but the real income per capita increases. Furthermore, the
well-being of the society also grows. This example reveals the difficulty of interpreting
the total productivity change correctly. The combination of volume increase and total
productivity decrease leads in this case to the improved performance because we are
on the “diminishing returns” area of the production function. If we are on the part of
“increasing returns” on the production function, the combination of production volume
increase and total productivity increase leads to improved production performance.
Unfortunately we do not know in practice on which part of the production function we
are. Therefore, a correct interpretation of a performance change is obtained only by
measuring the real income change.
Production models
A production model is a numerical description of the production process and is based
on the prices and the quantities of inputs and outputs. There are two main
approaches to operationalize the concept of production function. We can use
mathematical formulae, which are typically used in macroeconomics (in growth
accounting) or arithmetical models, which are typically used in microeconomics and
management accounting.[11]
We use here arithmetical models because they are like the models of management
accounting, illustrative and easily understood and applied in practice. Furthermore,
they are integrated to management accounting, which is a practical advantage. A
major advantage of the arithmetical model is its capability to depict production
function as a part of production process. Consequently, production function can be
understood, measured, and examined as a part of production process.
There are different production models according to different interests. Here we use a
production income model and a production analysis model in order to demonstrate
production function as a phenomenon and a measurable quantity. Malakooti (2013)
provides an overview and problems of production models such as Aggregate
planning, Push-and-Pull Systems, Inventory Planning and Control, and so on.[12]
Production income model

Profitability of production measured by surplus value (Saari 2006,3)


The scale of success run by a going concern is manifold, and there are no criteria
that might be universally applicable to success. Nevertheless, there is one criterion by
which we can generalise the rate of success in production. This criterion is the ability
to produce surplus value. As a criterion of profitability, surplus value refers to the
difference between returns and costs, taking into consideration the costs of equity in
addition to the costs included in the profit and loss statement as usual. Surplus value
indicates that the output has more value than the sacrifice made for it, in other words,
the output value is higher than the value (production costs) of the used inputs. If the
surplus value is positive, the owner’s profit expectation has been surpassed.
The table presents a surplus value calculation. We call this set of production data a
basic example and we use the data through the article in illustrative production
models. The basic example is a simplified profitability calculation used for illustration
and modelling. Even as reduced, it comprises all phenomena of a real measuring
situation and most importantly the change in the output-input mix between two
periods. Hence, the basic example works as an illustrative “scale model” of
production without any features of a real measuring situation being lost. In practice,
there may be hundreds of products and inputs but the logic of measuring does not
differ from that presented in the basic example.
In this context we define the quality requirements for the production data used in
productivity accounting. The most important criterion of good measurement is the
homogenous quality of the measurement object. If the object is not homogenous,
then the measurement result may include changes in both quantity and quality but
their respective shares will remain unclear. In productivity accounting this criterion
requires that every item of output and input must appear in accounting as being
homogenous. In other words, the inputs and the outputs are not allowed to be
aggregated in measuring and accounting. If they are aggregated, they are no longer
homogenous and hence the measurement results may be biased.
Both the absolute and relative surplus value have been calculated in the example.
Absolute value is the difference of the output and input values and the relative value
is their relation, respectively. The surplus value calculation in the example is at a
nominal price, calculated at the market price of each period.
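The two measures can be sketched as a small calculation. The period figures below are purely illustrative, not the basic-example data from the table (which is not reproduced in this text):

```python
# Sketch: absolute surplus value = output value - input value, and
# relative surplus value = output value / input value, with each period
# computed at its own nominal (market) prices. Figures are illustrative.

def surplus_value(output_value, input_value):
    absolute = output_value - input_value
    relative = output_value / input_value
    return absolute, relative

abs1, rel1 = surplus_value(1000.0, 900.0)   # period 1
abs2, rel2 = surplus_value(1120.0, 980.0)   # period 2

assert abs1 == 100.0 and abs2 == 140.0
assert round(rel1, 4) == 1.1111 and round(rel2, 4) == 1.1429
```

A positive absolute surplus value means the owner's profit expectation has been surpassed, as stated above.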
Production analysis model

Production Model Saari 2004 (Saari 2006,4)


A model used here is a typical production analysis model with the help of which it is
possible to calculate the outcome of the real process, income distribution process and
production process.[13][14][15] The starting point is a profitability calculation using surplus
value as a criterion of profitability. The surplus value calculation is the only valid
measure for understanding the connection between profitability and productivity or
understanding the connection between real process and production process. A valid
analysis of production necessitates considering all production inputs, and the surplus
value calculation is the only calculation to conform to the requirement. If we omit an
input in productivity or income accounting, this means that the omitted input can be
used unlimitedly in production without any cost impact on accounting results.
Accounting and interpreting
The process of calculating is best understood by applying the term ceteris paribus,
i.e. "all other things being the same," meaning that at a time only the impact of one
changing factor is introduced to the phenomenon being examined. Therefore, the
calculation can be presented as a process advancing step by step. First, the impacts
of the income distribution process are calculated, and then, the impacts of the real
process on the profitability of the production.
The first step of the calculation is to separate the impacts of the real process and the
income distribution process, respectively, from the change in profitability (285.12 –
266.00 = 19.12). This takes place by simply creating one auxiliary column (4) in which
a surplus value calculation is compiled using the quantities of Period 1 and the prices
of Period 2. In the resulting profitability calculation, Columns 3 and 4 depict the
impact of a change in income distribution process on the profitability and in Columns
4 and 7 the impact of a change in real process on the profitability.
The accounting results are easily interpreted and understood. We see that the real
income has increased by 58.12 units, of which 41.12 units come from productivity
growth and the remaining 17.00 units from production volume growth. The total
increase of real income (58.12) is distributed to the stakeholders of production, in this
case 39.00 units to the customers and to the suppliers of inputs and the remaining
19.12 units to the owners.
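The balance described above can be checked directly from the quoted figures; the sketch below only restates the identities and does not reproduce the full column-by-column calculation:

```python
# Cross-check of the accounting identities in the text: the real income
# change splits into a productivity effect and a volume effect, and the
# same total is distributed to the stakeholders of production.

productivity_effect = 41.12     # from productivity growth
volume_effect = 17.00           # from production volume growth
real_income_change = productivity_effect + volume_effect

to_customers_and_suppliers = 39.00
to_owners = 285.12 - 266.00     # change in profitability = 19.12

assert round(real_income_change, 2) == 58.12
# Income generation and income distribution are equal in economic value:
assert round(to_customers_and_suppliers + to_owners, 2) == round(real_income_change, 2)
```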
Here we can make an important conclusion. Income formation of production is always
a balance between income generation and income distribution. The income change
created in a real process (i.e. by production function) is always distributed to the
stakeholders as economic values within the review period. Accordingly, the changes
in real income and income distribution are always equal in terms of economic value.
Based on the accounted changes of productivity and production volume values we
can explicitly conclude on which part of the production function the production is. The
rules of interpretations are the following:
The production is on the part of “increasing returns” on the production function, when
 productivity and production volume increase or
 productivity and production volume decrease
The production is on the part of “diminishing returns” on the production function, when
 productivity decreases and volume increases or
 productivity increases and volume decreases.
In the basic example the combination of volume growth (+17.00) and productivity
growth (+41.12) reports explicitly that the production is on the part of “increasing
returns” on the production function (Saari 2006 a, 138–144).
Another productivity model also gives details of the income distribution. [16]:13 Because
the accounting techniques of the two models are different, they give differing,
although complementary, analytical information. The accounting results are, however,
identical. We do not present the model here in detail but we only use its detailed data
on income distribution, when the objective functions are formulated in the next
section.
Objective functions

An efficient way to improve the understanding of production performance is to
formulate different objective functions according to the objectives of the different
interest groups. Formulating the objective function necessitates defining the variable
to be maximized (or minimized). After that other variables are considered as
constraints or free variables. The most familiar objective function is profit
maximization which is also included in this case. Profit maximization is an objective
function that stems from the owner’s interest and all other variables are constraints in
relation to maximizing of profits.

Summary of objective function formulations (Saari 2011,17)


The procedure for formulating objective functions
The procedure for formulating different objective functions, in terms of the production
model, is introduced next. In the income formation from production the following
objective functions can be identified:
 Maximizing the real income
 Maximizing the producer income
 Maximizing the owner income.
These cases are illustrated using the numbers from the basic example. The following
symbols are used in the presentation: The equal sign (=) signifies the starting point of
the computation or the result of computing and the plus or minus sign (+ / −) signifies
a variable that is to be added or subtracted from the function. A producer means here
the producer community, i.e. labour force, society and owners.
Objective function formulations can be expressed in a single calculation which
concisely illustrates the logic of the income generation, the income distribution and
the variables to be maximized.
The calculation resembles an income statement starting with the income generation
and ending with the income distribution. The income generation and the distribution
are always in balance so that their amounts are equal. In this case it is 58.12 units.
The income which has been generated in the real process is distributed to the
stakeholders during the same period. There are three variables which can be
maximized. They are the real income, the producer income and the owner income.
Producer income and owner income are practical quantities because they are
addable quantities and they can be computed quite easily. Real income is normally
not an addable quantity and in many cases it is difficult to calculate.
The dual approach for the formulation
Here we have to add that the change of real income can also be computed from the
changes in income distribution. We have to identify the unit price changes of outputs
and inputs and calculate their profit impacts (i.e. unit price change x quantity). The
change of real income is the sum of these profit impacts and the change of owner
income. This approach is called the dual approach because the framework is seen in
terms of prices instead of quantities (ONS 3, 23).

The dual approach has been recognized in growth accounting for a long time, but its
interpretation has remained unclear. The following question has remained
unanswered: “Quantity based estimates of the residual are interpreted as a shift in
the production function, but what is the interpretation of the price-based growth
estimates?”[17]:18 We have demonstrated above that the real income change is
achieved by quantitative changes in production and the income distribution change to
the stakeholders is its dual. In this case the duality means that the same accounting
result is obtained by accounting the change of the total income generation (real
income) and by accounting the change of the total income distribution.
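A minimal sketch of the dual computation follows. The individual unit price changes and quantities are hypothetical (chosen so the distributed total matches the basic example's 39.00 units); the owner income change of 19.12 and the real income change of 58.12 are the figures quoted earlier:

```python
# Sketch of the dual (price-based) route: the profit impact of each unit
# price change is (price change x quantity); the value given away through
# prices plus the change of owner income equals the real income change.
# The individual price changes and quantities below are hypothetical.

price_impacts = {
    "product price cut for customers": -1.50 * 20,  # -30.00
    "higher pay to staff":             -0.90 * 10,  #  -9.00
}
given_to_stakeholders = -sum(price_impacts.values())  # 39.00 in total
owner_income_change = 19.12

real_income_change = given_to_stakeholders + owner_income_change
assert round(given_to_stakeholders, 2) == 39.00
assert round(real_income_change, 2) == 58.12
```

The same 58.12 is thus obtained from prices (the dual) as from quantities (the real process), illustrating the duality described above.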

COST FUNCTION

Definition: A cost function is a mathematical formula used to chart how
production expenses will change at different output levels. In other words, it estimates
the total cost of production given a specific quantity produced.

Management uses this model to run different production scenarios and help predict
what the total cost would be to produce a product at different levels of output. The
cost function equation is expressed as C(x) = FC + V(x), where C equals total
production cost, FC is total fixed costs, V is the variable cost per unit and x is the
number of units.

Understanding a firm’s cost function is helpful in the budgeting process because it
helps management understand the cost behavior of a product. This is vital to
anticipate costs that will be incurred in the next operating period at the planned
activity level. Also, this allows management to evaluate how efficient the production
process was at the end of the operating period.

Let’s look at an example.

Example

The management of Duralex Companies, a manufacturer of toys, has asked for a
new cost study to improve next year’s budget forecasts. They pay rent of $300 a
month and they pay an average of $30 a month for electricity. Each toy requires $5 in
plastic and $2 in cloth.

A. How much will it cost them to manufacture 1200 toys annually?

B. How much will it cost them to manufacture 1500 toys annually?

The first thing to do is to determine which costs are fixed and which are variable.

Remember, fixed costs are incurred whether or not we manufacture,
whereas variable costs are incurred per unit of production. That means rent and
electricity are fixed while plastic and cloth are variable costs.

Remember our cost function:



C(x) = FC + V(x)

Substitute the amounts.

A. At 1200 
C(1,200) = $3,960* + 1,200 ($5 + $2)
C(1,200) = $ 12,360
Therefore, it would take $12,360 to produce 1,200 toys in a year.

B. At 1500
C(1,500) = $3,960* + 1,500 ($5 +$2)
C(1500)= $14,460

Therefore, it would take $14,460 to produce 1,500 toys in a year.


*FC = ($300 + $30) * 12 months (remember we are asked on an annual basis).
Thus, FC= $ 3,960
(Notice that the fixed costs remain unchanged even at varying outputs)
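The Duralex calculation can also be expressed directly as a function; a short sketch using the same figures:

```python
# Cost function C(x) = FC + V * x, using the Duralex example figures.

def total_cost(units, fixed_cost, unit_variable_cost):
    return fixed_cost + unit_variable_cost * units

annual_fixed = (300 + 30) * 12   # rent + electricity, annualized: 3,960
unit_variable = 5 + 2            # plastic + cloth per toy: 7

assert total_cost(1200, annual_fixed, unit_variable) == 12360
assert total_cost(1500, annual_fixed, unit_variable) == 14460
```

Fixed costs stay at $3,960 at both volumes; only the variable term changes with output.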

In economics, a cost curve expresses production costs in terms of the amount
produced.
