
Prioritization Matrix

Govind Ramu
(Based on the April 2012 Quality Progress "Beyond the Basics" article,
Seven Management Tools)
Background
• Many of the seven new management and planning tools packaged and
promoted by the Union of Japanese Scientists and Engineers (JUSE) are
referred to by terms different from what JUSE originally called them.
• These tools were renamed, modified and adapted for American industry.
• The prioritization matrix was one of the seven new tools; it was modified
and replaced the math-heavy matrix data analysis.
• There are a few variations of the prioritization matrix:
– Analytical criteria method.
– Consensus criteria method.
– Combination ID/matrix method.
• The analytical criteria method is discussed in this presentation.
• For other variations of this tool, please refer to The Quality Toolbox
by Nancy R. Tague.
Tool & Application
• The prioritization matrix is a simple, easy-to-use L-shaped tool.
• It helps narrow options through a step-by-step approach: selecting
criteria, assigning weights and arriving at a conclusion through
basic mathematics.
• The prioritization matrix is useful when
multiple criteria (typically three to six)
and multiple options (typically five to 10)
are involved in making a final selection.
• Applications of the prioritization matrix:
– Selecting improvement projects.
– Choosing products and service offerings.
• The matrix can even be used effectively
for personal and social applications, such
as deciding on a school, choosing a
financial institution or purchasing property.
1. Form a Team
• Assemble a team of stakeholders with
influence and interest in the decision
at hand.
• Identify a well-trained facilitator.
• The team should be briefed on ground
rules.
• Clarify the goal.
• Team members should be open-minded
about the weighting and scoring process.
• Objectivity is key! No member should attempt to skew the results in
favor of past decisions.
• Team members should be willing to invest time and effort in this structured
process.
• Use actual data for comparisons wherever they are available.
2. Select Criteria
• In some situations, predefined criteria are
available (e.g., government contracts, customer
service level agreements).
• If there is no predefined set of criteria
(e.g., adding new features to a product or service
offering), use brainstorming to generate a list.
• Narrow down the list of criteria to a manageable
number.
• For business applications, these criteria also may
come from management, customers and market
conditions.
• For social and personal applications, these criteria
may come from personal preferences, choices and
social inclinations.
• Spend time to iron out clear criteria. Support each
criterion with an operational definition to avoid
misinterpretation and incorrect scoring.
3. Weight Criteria
• To obtain the criteria weighting, the
criteria are compared with one another
and scored on relative importance.

• The scoring rules are as follows:

– If the criterion in a row is equally important as the one in a column,
assign a score of one.
– If the row is significantly more important than the column, assign a
score of five.
– If the row is much more important than the column, assign a score
of 10.
– If the row is less important than the column, assign a score of 1/5 (0.2).
– If the row is much less important than the column, assign a score of 1/10
(0.1).
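The scoring rules above can be sketched as a small lookup. The names below are illustrative (not from the article), but the scores and their reciprocal relationship come from the rules: a five in one direction implies a 1/5 in the other.

```python
# Pairwise scoring rules for the analytical criteria method.
# Judgment names are illustrative stand-ins for the rules above.
SCORES = {
    "equal": 1.0,
    "more_important": 5.0,
    "much_more_important": 10.0,
    "less_important": 1 / 5,      # 0.2
    "much_less_important": 1 / 10,  # 0.1
}

def reciprocal(judgment: str) -> str:
    """Return the same comparison as seen from the other side."""
    pairs = {
        "equal": "equal",
        "more_important": "less_important",
        "much_more_important": "much_less_important",
        "less_important": "more_important",
        "much_less_important": "much_more_important",
    }
    return pairs[judgment]

# If "flexibility vs. cost" scores 5, then "cost vs. flexibility"
# must score 1/5 -- the two scores multiply to 1.
score = SCORES["more_important"] * SCORES[reciprocal("more_important")]
```

This reciprocal check is a useful sanity test when filling in the matrix by hand: every cell below the diagonal should be the inverse of its mirror cell above the diagonal.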
Example
• An organization is selecting quality information system software from
three possible vendors in the market: A, B and C.
• A cross-functional team was chosen.
• Criteria were identified (through brainstorming, narrowing down) as
– Initial cost (purchase price and the cost of concurrent licenses)
– Flexibility for customization
– Scalability as the business grows
– Integration with existing data sources
– Ability to migrate legacy data into the new system
– Ease of use
• The criteria were compared against one another for relative importance.
The team scored relative importance and calculated weights
(Online Table 1).
Criteria Weighting

Initial cost is less important than flexibility, scalability and usability, so each
of those cells is assigned a score of 1/5.
Initial cost is much less important than integration, so that cell is assigned a score of 1/10.
Initial cost is equally important as migration, so that cell is assigned a score of 1.
Initial cost "sum" = (1/5 + 1/5 + 1/10 + 1 + 1/5) = 1.7
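The arithmetic can be sketched as follows. The initial-cost row is taken from the example above; the row sums for the other criteria are made-up placeholders, since Online Table 1 is not reproduced here. Each criterion's weight is its row sum divided by the grand total of all row sums, so the weights add up to 1.

```python
from fractions import Fraction as F

# Pairwise scores for the "initial cost" row, from the example:
# vs. flexibility, scalability, usability -> 1/5 each,
# vs. integration -> 1/10, vs. migration -> 1.
initial_cost_row = [F(1, 5), F(1, 5), F(1, 10), F(1, 1), F(1, 5)]
row_sum = sum(initial_cost_row)  # 17/10, i.e., the article's 1.7

# Hypothetical row sums for the remaining criteria (placeholders,
# not the article's actual Online Table 1 values).
row_sums = {
    "initial cost": float(row_sum),
    "flexibility": 3.2,
    "scalability": 2.0,
    "integration": 6.9,
    "migration": 1.4,
    "usability": 9.6,
}
grand_total = sum(row_sums.values())
weights = {c: s / grand_total for c, s in row_sums.items()}
```

Using exact fractions for the pairwise scores avoids small floating-point drift when summing rows by hand-entered 0.2 and 0.1 values.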
Top Weighted Criteria
• After the criteria weighting, initial cost did not
make it into the top-weighted criteria. The team found
this noteworthy:
– Usability carried the most weight, 0.42 (Online
Table 1). If the system is not easy to use, it
does not matter how inexpensive or how
convenient the other criteria are. The users simply
would not use the system.
– The ability to integrate with data sources carried
the next-highest weight, 0.3. This reflected the team's
proactive decision to reduce data-duplication
errors and to enable quick decisions without
workarounds or manually populating data into
the system.
– Flexibility carried the next-highest weight, 0.14,
as it allows IT to customize workflows for
future new applications.
Team’s reflection
• Weighting heavily on initial cost would
have resulted in selecting a low-initial
cost product.
• The overall cost involved in customizing,
entering data through manual sources
and scaling up would have increased the
maintenance cost significantly over the
lifetime.
• Having the right team composition and
being open minded in scoring helped the
team to pick the top criteria.

Note: It is OK for the team to select all
criteria for further analysis.
Comparison of Options
• There are three options for every criterion.
The options are vendors A, B and C.
• The vendors are compared against one
another using the same scoring rules for
the top three weighted criteria:
– Usability (Online Table 2),
– Integration (Online Table 3), and
– Flexibility (Online Table 4).
• The corresponding criteria weights
are multiplied by the vendor weights
to select the vendor that best meets the most important criteria (Online Table 5).
• Vendor C is selected.
• The team agrees with the collective decision.
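The final multiplication step can be sketched as follows. The three criteria weights are the ones reported above (0.42, 0.3, 0.14); the per-criterion vendor weights are hypothetical stand-ins for Online Tables 2-4, chosen only to illustrate the calculation.

```python
# Criteria weights from the example; vendor weights are hypothetical.
criteria_weights = {"usability": 0.42, "integration": 0.30, "flexibility": 0.14}

# vendor_weights[criterion][vendor]: each criterion's vendor weights
# sum to 1, as produced by the same pairwise-comparison procedure.
vendor_weights = {
    "usability":   {"A": 0.20, "B": 0.30, "C": 0.50},
    "integration": {"A": 0.25, "B": 0.15, "C": 0.60},
    "flexibility": {"A": 0.40, "B": 0.35, "C": 0.25},
}

# Final score per vendor: sum over criteria of
# (criterion weight x vendor's weight for that criterion).
totals = {
    v: sum(criteria_weights[c] * vendor_weights[c][v] for c in criteria_weights)
    for v in ("A", "B", "C")
}
winner = max(totals, key=totals.get)
```

With these placeholder numbers the highest total also happens to fall to vendor C, mirroring the article's outcome; the point of the sketch is only the weight-times-weight-then-sum mechanics of Online Table 5.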
Vendor comparisons against top criteria
(Online Tables 2-4: vendor pairwise comparisons per criterion, yielding each
vendor's weight, e.g., vendor A's weight for usability.)
Vendor selection
(Online Table 5: each entry is a vendor's weight for a criterion multiplied by
that criterion's weight, e.g., vendor A's weight for usability × the criteria
weight for usability.)
Bibliography
• Brassard, Michael, The Memory Jogger Plus +, Goal/QPC Inc., 1989.
• Tague, Nancy R., The Quality Toolbox, second edition, ASQ Quality Press,
2004.
About the Author
Govind Ramu is a senior manager for global quality systems at SunPower
Corp. in San Jose, CA. He is a licensed professional mechanical engineer
from Ontario, Canada. An ASQ member since 1998 and an ASQ fellow,
Ramu holds six ASQ certifications:
– Certified Manager of Quality/Organizational Excellence (CMQ/OE),
– Certified Quality Engineer (CQE),
– Certified Six Sigma Black Belt (CSSBB),
– Certified Quality Auditor (CQA),
– Certified Software Quality Engineer (CSQE), and
– Certified Reliability Engineer (CRE).
He is co-author of The Certified Six Sigma Green Belt Handbook
(ASQ Quality Press, 2008).
