How to maximise the profit from customers & products in cement plants
George Handley
MBA Management Consultants

Top ways to increase profits quickly


[Diagram: dogs and cows (Boston Grid)]
LEAGUE TABLES OF CUSTOMER PROFITABILITY
FIND BOTTLENECKS & MAXIMISE PROFIT
MILK COWS & SHOOT DOGS
OPTIMISE ALL PRODUCTION & DISTRIBUTION

Very quick ways to increase profits

Simple first steps: a 4-week programme (exercises you can do yourself in 4 weeks)
Advanced steps: 2 to 4 months

Within months you can increase profits by 10% to 50%
Reduce cost of production and distribution by 5% to 20%

Simple ideas and steps

How to measure profitability
Find plant bottleneck(s) and critical path
Determine the most profitable products and customers
Make your bottleneck/critical path more effective
Milk cows and shoot dogs

Why throughput, critical path and bottlenecks are important

Added value per kiln hour varies from $1,300/hr up to $9,750/hr depending on product and customer
An inefficient plant can still double added value and increase profits tenfold by changing the marketing mix
The smaller the plant, the easier this is to achieve
Many investments are a waste of money
Cost/tonne is a poor way to measure cost
Use cost/bottleneck-hr & profit/bottleneck-hr (see the worked example below)
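
A minimal worked example of this last point, with hypothetical figures: the product that looks better per tonne can be the worse earner per bottleneck (kiln) hour, for instance a blended cement that needs less clinker, and so less kiln time, per tonne.

```python
# Hypothetical figures only: the same two products ranked both ways.
products = {
    # name: (added value $/tonne, kiln hours consumed per tonne)
    "ordinary_cement": (25.0, 0.020),
    "blended_cement":  (20.0, 0.005),  # less clinker, so less kiln time per tonne
}

for name, (av_per_tonne, kiln_hr_per_tonne) in products.items():
    av_per_kiln_hr = av_per_tonne / kiln_hr_per_tonne
    print(f"{name}: ${av_per_tonne:.0f}/tonne, ${av_per_kiln_hr:,.0f}/kiln-hr")

# ordinary_cement: $25/tonne, $1,250/kiln-hr
# blended_cement:  $20/tonne, $4,000/kiln-hr
# Per tonne, ordinary cement looks better; per bottleneck hour, blended wins.
```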

Exercises you can do yourself

Find the critical path and bottlenecks
League table of customer & product profit at bottlenecks
Create a Boston Grid of cows and dogs

How to measure profitability

Use added value per hour on the bottleneck
First find the bottleneck
Then calculate the added value of each customer and product on the bottleneck
Create a league table for customers (sketched below)
Create a league table for products
Get rid of dog customers
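
A short Python sketch of the league-table step, using made-up customer data: rank every customer by added value per bottleneck hour, worst first, so the dogs surface at the top of the review list.

```python
# Illustrative customer data: (name, added value $, bottleneck hours consumed).
customers = [
    ("Customer A", 180_000, 120),
    ("Customer B",  90_000,  15),
    ("Customer C",  40_000,  30),
]

# League table, worst first.
league = sorted(((name, av / hrs) for name, av, hrs in customers),
                key=lambda row: row[1])

for rank, (name, av_per_hr) in enumerate(league, start=1):
    print(f"{rank}. {name}: ${av_per_hr:,.0f} per bottleneck hour")
```

The same table built per product, instead of per customer, gives the product league table.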

Finding the bottleneck

Make a diagram of the plant, about 10 to 20 boxes
Mark the non-constrained boxes in green
Mark the constrained boxes in red
Join up the boxes in the main process sequence
This is the critical path
The red boxes on the critical path are bottlenecks (a code sketch follows the diagrams below)

[Plant flow diagram: CRUSH → KILN → MILLS → SILOS]
IF KILN IS BOTTLENECK: SWITCH TO HIGH ADDED VALUE/KILN-HR CUSTOMERS/PRODUCTS. THEY ADD LOTS OF VALUE AFTER THE KILN.

[Plant flow diagram: CRUSH → KILN → MILLS → SILOS]
IF MILLS ARE BOTTLENECK: SWITCH TO HIGH ADDED VALUE/MILL-HR CUSTOMERS/PRODUCTS. THEY ADD LOTS OF VALUE AFTER THE MILL (BAGGED/ADDITIVES).
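
The red-box test in a few lines of Python, with hypothetical capacities and demand: walk the critical path and flag any stage that needs more hours than it has available.

```python
# Hypothetical capacities and demand for a 4-box critical path.
critical_path = [
    # (stage, capacity tonnes/hr, hours available per year)
    ("crush", 400, 7_000),
    ("kiln",  150, 7_800),
    ("mills", 220, 7_000),
    ("silos", 500, 8_760),
]
annual_demand_tonnes = 1_200_000

for stage, tonnes_per_hr, hours_available in critical_path:
    hours_needed = annual_demand_tonnes / tonnes_per_hr
    status = "BOTTLENECK" if hours_needed > hours_available else "ok"  # red vs green box
    print(f"{stage}: needs {hours_needed:,.0f} h of {hours_available:,} h -> {status}")
```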

Profit on the critical path/bottleneck

For every customer calculate:
Added value = price - materials - energy - transport - packaging
Calculate added value per bottleneck hour
Create a league table of customer added value per hour
Remove the worst customer added value per hour until the bottleneck is free/removed (see the loop sketched below)
Always compare bottleneck investment to the worst customers through the bottleneck
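
A sketch of the pruning loop with illustrative numbers: drop the worst $/hr customer, one at a time, until the hours booked on the bottleneck fit within the hours it actually has.

```python
# Illustrative data: (name, added value $, bottleneck hours consumed).
customers = [
    ("A", 300_000, 200),
    ("B",  60_000, 120),
    ("C", 450_000, 150),
    ("D", 140_000, 160),
]
bottleneck_hours_available = 500

customers.sort(key=lambda c: c[1] / c[2])            # worst $/hr first
while sum(c[2] for c in customers) > bottleneck_hours_available:
    name, av, hrs = customers.pop(0)                 # shoot the worst dog
    print(f"drop {name}: ${av / hrs:,.0f}/hr, frees {hrs} bottleneck hours")
```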

[Boston Grid scatter plot: each dot is a customer. Horizontal axis: customer size (small to large, average marked); vertical axis: added value per hour (low to high, average marked). Quadrant labels: DOGS, STARS, CASHCOWS & PROBLEMS; "busy fools".]
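
A sketch of the grid bucketing with made-up data. The quadrant-to-action mapping below (milk the big profitable cows, shoot the big unprofitable dogs, grow the small profitable stars, fix the rest) is an assumption about the slide's intent, not taken from the deck.

```python
# Illustrative data: (name, annual tonnes, added value $/bottleneck hour).
customers = [
    ("A", 90_000, 5_200),
    ("B", 15_000, 6_100),
    ("C", 80_000, 1_400),
    ("D", 10_000, 1_600),
]

avg_size = sum(size for _, size, _ in customers) / len(customers)
avg_av = sum(av for _, _, av in customers) / len(customers)

for name, size, av in customers:
    big, rich = size >= avg_size, av >= avg_av
    label = ("CASH COW (milk)" if big and rich else
             "DOG / busy fool (shoot or reprice)" if big else
             "STAR (grow)" if rich else
             "PROBLEM")
    print(f"{name}: {label}")
```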

Dos and Don'ts

Do not invest in or speed up a non-bottleneck
Only invest in bottlenecks
All investment is paid for by the least profitable products and customers that will use the investment (worked example below)
If the bottleneck is a very expensive process, change the products and customers that use it: improve the mix, remove the worst & find more
If the bottleneck is not expensive, buy more capacity
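
A worked sketch of the payment rule with hypothetical figures: extra bottleneck hours get filled by the marginal (worst) customers, so value the investment at their rate, not the plant average.

```python
# Hypothetical figures for a debottlenecking proposal.
investment_cost      = 2_000_000   # $ to buy extra bottleneck capacity
extra_hours_per_year = 1_000       # bottleneck hours gained annually
worst_customer_rate  = 1_300       # $/hr added value of the marginal customers
average_rate         = 4_000       # $/hr added value across all customers

payback_at_marginal = investment_cost / (extra_hours_per_year * worst_customer_rate)
payback_at_average  = investment_cost / (extra_hours_per_year * average_rate)
print(f"payback valued at the marginal customers: {payback_at_marginal:.1f} years")
print(f"payback (misleadingly) at the average:    {payback_at_average:.1f} years")
# 1.5 years vs 0.5 years: the average rate flatters the investment case.
```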

Advanced steps

A linear programming optimisation model of production and distribution to optimise complex groups of resources (sketched after this list)
Optimise pricing to maximise profits in the market
Optimise distribution to minimise costs
Capacity investment plan
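
A minimal sketch of the linear-programming step using scipy.optimize.linprog; the products, coefficients and capacities are all hypothetical. The model chooses tonnes of each product to maximise total added value subject to kiln-hour and mill-hour capacity.

```python
from scipy.optimize import linprog

# Decision variables: tonnes of [ordinary, blended] cement per year.
added_value = [25.0, 20.0]            # $/tonne (hypothetical)
c = [-v for v in added_value]         # linprog minimises, so negate to maximise

A_ub = [
    [0.020, 0.005],                   # kiln hours per tonne
    [0.030, 0.040],                   # mill hours per tonne
]
b_ub = [7_800, 14_000]                # hours available on kiln and mills

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
ordinary, blended = res.x
print(f"make {ordinary:,.0f} t ordinary and {blended:,.0f} t blended "
      f"for ${-res.fun:,.0f} added value")
```

The real model described in the deck would add demand limits per customer, transport costs and more resources, but the structure stays the same.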

Examples

A European group of 3 plants reduced total production & distribution cost by 18%
A small producer improved customer and product mix; profits increased by 30%
One plant dropped 3 simple products and doubled bagging capacity; customer profit increased by 25%
A major group reduced distribution cost by 12%
Used by the world's top 2 groups

If you have several sites/works

Optimise over all the sites combined
Minimise the combined cost of production & distribution
Which plant should make which product
Minimise cost of distribution
Which plant should supply which customer
Which plant should reduce production
Costs can be reduced by 10% to 20% (see the transportation sketch below)
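
The which-plant-supplies-which-customer question is the classic transportation problem. A sketch with invented plants, customers and per-tonne costs, again using scipy.optimize.linprog:

```python
import itertools
from scipy.optimize import linprog

plants    = {"P1": 400_000, "P2": 300_000}                  # capacity, tonnes/yr
customers = {"C1": 250_000, "C2": 200_000, "C3": 150_000}   # demand, tonnes/yr

# Production + transport cost, $/tonne, for each (plant, customer) pair.
cost = {("P1", "C1"): 52, ("P1", "C2"): 61, ("P1", "C3"): 70,
        ("P2", "C1"): 66, ("P2", "C2"): 55, ("P2", "C3"): 58}

pairs = list(itertools.product(plants, customers))
c = [cost[p] for p in pairs]

# Capacity: total shipped from each plant <= its capacity.
A_ub = [[1 if pl == p else 0 for (pl, cu) in pairs] for p in plants]
b_ub = list(plants.values())

# Demand: total shipped to each customer == its demand.
A_eq = [[1 if cu == k else 0 for (pl, cu) in pairs] for k in customers]
b_eq = list(customers.values())

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * len(pairs))
for (pl, cu), tonnes in zip(pairs, res.x):
    if tonnes > 1:
        print(f"{pl} -> {cu}: {tonnes:,.0f} t")
print(f"total cost ${res.fun:,.0f}")
```

Slack on a plant's capacity row shows which works should reduce production.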

Benefits

No major investment
Short time scale
In a reduced market, increase profits by:
o Reducing production & distribution cost per tonne
o Selling to high-profit customers, even at lower prices
o Maximising yield from the plant
o Deciding which plant to scale down/reduce

How to maximise the profit from customers & products in cement plants
George Handley
MBA Management Consultants

The company is a $200 million group with 6 cement works in several European countries.
Three of the works are within 500 kilometres of each other. The other 3 are over 1500
kilometres from the headquarters. Over 20 different varieties of cement were produced,
including 12 which were bagged. Four of the works had bagging facilities.

The problem. There were 3 main problems. Profitability was very low and no one knew
which customers and products were profitable. All the works were operating well below
maximum capacity and market share was falling in some areas.
The project. MBA examined the true profitability of all products and customers. We created a
model of the business containing all the works, the customers, the silos, the bagging plants, the
kilns, the mills, and the products. All was in one simple database which was then optimised to
maximise profit.

The solution. The kilns were found to be the key bottleneck resource of the company.
They are the greatest expense and need to be maximised. All customers and products were
ranked by added value per kiln hour. All variable costs were included: raw materials,
energy, transport, etc.
10% of customers were unprofitable, and their prices were increased by an average of 25%.
3 of the 20 products were hardly profitable, and their prices were increased by 10% to 12%.
Most customers were retained.
Some products were overpriced, and discounts were offered to new large customers. Two
extra bagging plants were commissioned and one works was taken out of commission for two
years. Profitability and market share were increased.

Doing More with Less: NCR avoids server bottlenecks, wasted capacity with
TeamQuest tools

by Suzanne Thornberry | Aug 20, 2002 7:00:00 AM



As global IT planning and procurement manager at NCR Corporation, Paul Armstrong
decides how to allocate processing and server capacity for applications ranging from
human resources management to the analysis of vast stores of historical data.
One of his greatest concerns is keeping the dispatch program for field engineers up and
running for service calls. It's an especially important business application because NCR
provides service not only for its own equipment and applications, but also for those of
third parties. Bottlenecks at the Dayton, OH, corporate data center can slow service
activity to a crawl around the world, whether for NCR's point-of-sale signature capture
devices or a third party's satellite dishes.
"If someone is not able to make a call, what we risk is a customer, and that can be really
big bucks," said Armstrong, a 25-year NCR veteran.
To head off problems that could cost the company customers, Armstrong began
investigating analysis and optimization tools in the late 1990s. As a more tangible
payoff, the company stood to save money by avoiding the need to purchase new
processing and server capacity as its needs grew.
Managing the data center
The massive size of the data center makes any opportunities for savings significant. The
Dayton center has roughly 900 servers and 100 terabytes of storage run by about 140
employees. It houses an enterprise production environment and runs an off-site recovery
center, which also is home to the development environment. For added security, the
sites back up to each other. Among the most important applications are the service-dispatch system and the company's ERP system, which runs on Oracle.

The servers primarily run Sun Solaris, although some older servers that run an in-house
UNIX-based operating system are still being migrated to Solaris. Armstrong says that
most disk storage is from EMC and LSI.
With heavy demands for processing and storage, Armstrong was shopping for software
to help improve operations in two ways:

Performance management
Capacity management

In performance management, he was especially concerned with keeping the enterprise
production applications running well. To do that effectively, system administrators
would need to review reports and statistics daily to spot trends and predict and avoid
bottlenecks.
In his capacity management efforts, Armstrong wanted to develop a team to drive the
procurement process. The team was to review all requests for additional storage and
processing capacity and choose to reallocate resources on the fly, or, if absolutely
unavoidable, request additional capital expenditures to handle the growth.
Armstrong realized that without good analytic tools, NCR would probably end up
wasting money on unneeded capacity.
"Let's face it: you never wanted to be short, so you guessed high," he said about
estimating storage needs.
That philosophy may be a fairly common cause of waste in many IT departments. In an
analysis of how well IT resources are used, Gartner estimated that many Web server and
storage environments have peak utilization levels as low as 30 to 40 percent. Gartner
estimated that consolidation projects can push the level of peak utilization to around 70
percent.
Sizing up the products
In the late 1990s, Armstrong and his team evaluated several analysis and optimization
products. Among the first was BestOne, a company later acquired by BMC. At the time,
BestOne still had strong ties to IBM and was just beginning to enter the open systems
market. Armstrong was impressed by the "phenomenal algorithms" in the product, but
couldn't overcome concerns about expense and support for the open systems
environment. He also tried some tools from OpenView MeasureWare and OpenVision
(which merged with Veritas), but at the time found them too rudimentary.
Then, on the suggestion of Sun representatives, Armstrong contacted TeamQuest for a
demonstration. He was impressed by the what-if capabilities of the software, which
could show the impact of placing more demands on the system. For example,
TeamQuest could show how a change would affect CPU utilization, as shown in Figure
A, and memory capacity.

"We had a good base of information," Armstrong said, "and we made decisions within a
week."
Figure A

TeamQuest displays CPU usage.
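
This is not TeamQuest's actual model; as a rough illustration of the same what-if idea, a simple M/M/1 queueing approximation shows how projected utilisation and response time move when load on a server grows. All figures are invented.

```python
def what_if(current_util, service_time_ms, load_multiplier):
    """Project utilisation and response time if load grows by a factor."""
    util = current_util * load_multiplier
    if util >= 1.0:
        return util, float("inf")                # saturated: queue grows without bound
    return util, service_time_ms / (1.0 - util)  # M/M/1 response-time formula

for mult in (1.0, 1.5, 2.0, 2.4):
    util, resp = what_if(current_util=0.40, service_time_ms=5.0, load_multiplier=mult)
    print(f"load x{mult}: utilisation {util:.0%}, response time {resp:.1f} ms")
```

The non-linear blow-up near 100% utilisation is why small workload additions to a busy server can slow an application dramatically.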

TeamQuest's support also helped make the sale. During the two-week demo process,
TeamQuest made some revisions to the software based on NCR's feedback. Armstrong
decided to purchase the product and install it on all new Sun or NT servers, and the
department also installs it as they retrofit older servers.
Armstrong said new releases of the software are easy to install and require little
downtime. Also, the department rarely has to change any of its scripts after installing a
new release.
Armstrong declined to say how much NCR invested in TeamQuest, although he said
that the system quickly paid for itself. TeamQuest's Rebecca Kauten said that the
pricing structure varies.
"A software license in the Windows/Linux/UNIX environment is less than $1,000 for a
single server, and fees scale according to the number of servers installed," she said.
Saving costs and customers
As Armstrong expected, one of the strengths of the system has come in keeping the
field service operations running. The field engineers use wireless terminals to access all
customer information, such as updates on the problem, the dispatch calls, priority of the
customer, even directions to the site. If the application is down, the work can grind to a
halt.
One day, there was a significant slowdown in the performance of the field service
application. There were no obvious reasons for the slowdown, such as changes to the
code or databases in the last 24 hours, and the equipment was up and running.
TeamQuest's analysis revealed the cause.
"The performance group found an I/O string that was eating the processor alive,"
Armstrong said. TeamQuest enabled them to spot it right away and recommended that
they reload the database.
Once they did and the keys were rebuilt, the system was up and running again, saving
hours of downtime for the field engineers and, possibly, saving customers for NCR's
service business.
Although Armstrong finds it difficult to quantify the return on those kinds of
performance issues, he can quantify some of the savings on the capacity side when
departments request IT resources.
"Prior to having this product on site, we'd throw more servers at it, we'd throw more
hardware at it, and we'd throw it out there with either a bigger processor, if available, or
multiple processors," Armstrong said.
He estimates that by finding ways to run new applications on existing equipment, the
company has avoided the need to buy at least five servers at $60,000 each, for a total of
$300,000. That figure doesn't include the overhead costs of adding a server, including
the need for floor space in the data center, the technical issues involved in spreading
applications over multiple servers, and the added complexity of backing up and, if
needed, restoring the system while keeping it in sync.
"You start adding all this up, and the cost of a decision like this becomes phenomenal,"
Armstrong said. "Instead, we make a decision based on the architecture we have today
and spend no money, other than the time it takes for a system administrator to look at
the issue and make a recommendation and resolution."
