Spba02 - Quality Management
POSTGRADUATE COURSE
MBA - OPTIONAL
SECOND YEAR
FOURTH SEMESTER
OPTIONAL SUBJECT - II
QUALITY MANAGEMENT
WELCOME
Warm Greetings.
I invite you to join the CBCS in the Semester System to gain rich knowledge leisurely, at
your own will and wish. Choose the right courses at the right times so as to raise your flag of
success. We always encourage and enlighten you to excel and empower you. We are the cross
bearers who make you a torch bearer with a bright future.
DIRECTOR
MBA - OPTIONAL SUBJECTS OPTIONAL SUBJECT - II
SECOND YEAR - FOURTH SEMESTER QUALITY MANAGEMENT
COURSE WRITER
Dr. B. Devamaindhan
Associate Professor in Management Studies
Institute of Distance Education
University of Madras
Chennai - 600 005.
MBA - OPTIONAL SUBJECTS
SECOND YEAR
FOURTH SEMESTER
OPTIONAL SUBJECTS - II
QUALITY MANAGEMENT
SYLLABUS
UNIT I
UNIT II
Tools and Techniques: Design Tools – Quality Planning Tools – Continuous Improvement
Tools - 5S and Kaizen– Lean Concept.
Six Sigma: Concepts – Steps and Tools – Define, Measure, Analyse, Improve and Control
(DMAIC) Methodology of Six Sigma Implementation – Define, Measure, Analyse, Design
and Verify (DMADV) Methodology for High Performance Designs – TQM vs. Six Sigma –
Lean Six Sigma – Assessing Readiness for Six Sigma
UNIT III
Statistical process control: Quality control measurements – capability and control – SPC
methodology – control charts for variables data - control charts for attributes – summary
of control chart construction – designing control charts.
UNIT IV
Quality Function Deployment – Failure Mode and Effect Analysis – Taguchi Loss Function
Approach and Robust Design
Reliability: Definition and Concepts – Product Life Characteristic Curve – Bath Tub Curve
– Reliability Function – Reliability Engineering.
UNIT V
Reference Books
3. Evans, J., and Lindsay, W.M., The Management and Control of Quality, 8th Edition,
South Western, 2012.
MBA - OPTIONAL SUBJECTS
SECOND YEAR
FOURTH SEMESTER
OPTIONAL SUBJECTS - II
QUALITY MANAGEMENT
SCHEME OF LESSONS
2 Quality Issues 22
4 Six Sigma 60
10 Reliability 174
LESSON - 1
Quality Management - An Introduction
Learning Objectives
Structure
1.1 Introduction
1.10 Summary
1.11 Keywords
1.1 Introduction
Quality management is a relatively recent phenomenon, but it is important for an organization.
Civilizations that supported the arts and crafts allowed clients to choose goods meeting higher
quality standards rather than normal goods. With the rise of mass production, the aim became
to produce large numbers of the same goods. The first proponent of this approach in the US
was Eli Whitney, who proposed (interchangeable) parts manufacture for muskets, hence
producing identical components and creating a musket assembly line.
The next step forward was promoted by several people, including Frederick Winslow
Taylor, a mechanical engineer who sought to improve industrial efficiency. He is sometimes
called "the father of scientific management." He was one of the intellectual leaders of the
Efficiency Movement and part of his approach laid a further foundation for quality management,
including aspects like standardization and adopting improved practices. Henry Ford was also
important in bringing process and quality management practices into operation in his assembly
lines.
Walter A. Shewhart made a major step in the evolution towards quality management by
creating a method for quality control of production, using statistical methods, first proposed in
1924. This became the foundation for his ongoing work in statistical quality control. W. Edwards
Deming later applied statistical process control methods in the United States during World
War II, thereby successfully improving quality in the manufacture of munitions and other
strategically important products.
Definitions
According to Joseph M. Juran, "Quality is defined as fitness for the purpose at reasonable
cost".
Defining quality is far from easy. Just try to explain why one finds that a product is not of
quality. Quality refers to the grade of a service or product, its reliability, safety, consistency
and the consumer's perception of it. The notion of quality often subsumes a comparison
between products: product A is better than product B and therefore has a higher quality
(Lorente, 1998). However, the word "better" is vague and different definitions can be used.
Quality means "degree of excellence"; it implies "comparison" and is not absolute. Quality is to
satisfy customers' requirements continually, whereas Total Quality is to achieve quality at low
cost. Broadly, quality includes fitness for use, grade, degree of preference, degree of
excellence and conformity to requirements.
According to ISO 8402, quality is “the totality of features and characteristics of a product
or service that bear on its ability to satisfy stated or implied needs”.
Quality management has four main components:
1. Quality planning
2. Quality assurance
3. Quality control
4. Quality improvement
TQM is mainly concerned with continuous improvement in all work, from high level strategic
planning and decision-making, to detailed execution of work elements on the shop floor. It
stems from the belief that mistakes can be avoided and defects can be prevented. It leads to
continuously improving results, in all aspects of work, as a result of continuously improving
capabilities, people, processes and technology. Continuous improvement must deal not only
with the improving results, but more importantly with improving capabilities to produce better
results in the future.
The major areas of focus for capability improvement are:
1. Demand generation
2. Supply generation
3. Technology
4. Operations
5. People capability
A central principle of TQM is that mistakes may be made by people, but most of them are
caused, or at least permitted, by faulty systems and processes. This means that the root cause
of such mistakes can be identified and eliminated and repetition can be prevented by changing
the process (Gilbert, 1992).
1. Preventing mistakes (defects) from occurring (mistake-proofing).
2. Where mistakes can’t be absolutely prevented, detecting them early to prevent them
being passed down the value-added chain (inspection at source or by the next operation).
3. Where mistakes recur, stopping production until the process can be corrected, to
prevent the production of more defects. (Stop in time).
Performance:
Performance refers to a product's primary operating characteristics.
Features:
Features are additional characteristics that enhance the appeal of the product or service
to the user.
Reliability:
Reliability is the likelihood that a product will not fail within a specific time period. This is
a key element for users who need the product to work without fail.
Conformance:
Conformance is the precision with which the product or service meets the specified
standards.
Durability:
Durability measures the length of a product’s life. When the product can be repaired,
estimating durability is more complicated. The item will be used until it is no longer economical
to operate it. This happens when the repair rate and the associated costs increase significantly.
Serviceability:
Serviceability is the speed with which the product can be put into service when it breaks
down, as well as the competence and the behavior of the service person.
Aesthetics:
Aesthetics is the subjective dimension indicating the kind of response a user has to a
product. It represents the individual’s personal preference.
Perceived Quality:
Perceived Quality is the quality attributed to a good or service based on indirect measures.
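Of the dimensions above, reliability is the most readily quantified. As an illustrative sketch (the figures and the constant-failure-rate assumption are not from the text), the probability that a product survives to time t under a constant failure rate lambda is R(t) = exp(-lambda * t):

```python
import math

def reliability(failure_rate: float, t: float) -> float:
    """R(t) = exp(-lambda * t): probability that a product with a
    constant failure rate survives (does not fail) up to time t."""
    return math.exp(-failure_rate * t)

# Hypothetical figures: 0.0005 failures per hour over a 1000-hour period
print(round(reliability(0.0005, 1000), 3))  # 0.607
```

The exponential model matches the flat middle section of the bathtub curve discussed later in the reliability unit.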
W. Edwards Deming (1982) was an experienced statistician who argued that management
must concentrate on setting up, and then continuously improving, the systems within which
people work. Deming insisted that managers work closely with employees, because better
feedback is obtained from the employees who actually do the work. Unlike the scientific
management approach, in which managers set job methods and standards, Deming also
insisted on the need to train employees in statistical process and work analysis methods. He
believed this gave the workforce the ability to show how and where the process needs to change.
The main focus of TQM was, and is, continuous quality improvement in the areas of
product or service, employer-employee relations, and consumer-business relations, guided by
Deming's 14 Principles, which include:
4. End the practice of awarding business on the basis of a price tag; instead, minimize
the total cost;
10. Eliminate slogans, exhortations and targets for the work force;
11. Eliminate numerical quotas for the work force and eliminate numerical goals for
people in management;
Juran worked at the Western Electric Hawthorne plant in Chicago in the 1920s. He visited
Japan in the early 1950s, and his teaching is based loosely on the Pareto principle. Juran
suggested that typically 95% of quality problems at work are the result of the system within
which employees work, so little can be achieved simply by trying to raise employee motivation.
His advice to managers was to identify all the major quality problems, highlight those whose
solution would yield the greatest advantage, and start projects to deal with them together with
the employees. Juran established the idea of external and internal customers, holding that any
person who is influenced by the product is a customer.
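The Pareto principle Juran drew on, the idea that a "vital few" causes account for most quality problems, can be sketched with a simple tally. The defect data below are hypothetical, purely for illustration:

```python
# Hypothetical defect tally for one production line (illustrative only)
defects = {
    "solder faults": 120,
    "missing parts": 45,
    "scratches": 20,
    "wrong labels": 10,
    "other": 5,
}

total = sum(defects.values())
cumulative = 0.0
# Sort causes from most to least frequent and accumulate their share
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{cause:15s} {count:4d}  cumulative {cumulative:5.1f}%")
```

Here the top two causes account for 82.5% of all defects; those are the "vital few" projects Juran would have managers start first.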
The following components given by Juran (2005) reveal the dimensions of quality:
3. Performance / Timeliness
4. Reliability / Completeness
8. Esthetics
9. Availability
10. Reputation
Crosby, an engineer, is known for advancing the concept of Zero Defects, which originated
at a company he once worked for. Eventually, Crosby became Corporate Vice President and
Director of Quality at the ITT Corporation. Crosby's mantra was 'Quality is Free'; further, quality
is not an issue of degree. He emphasized that management must track quality by continually
measuring the cost of non-conformance, the cost of doing things wrong. For Crosby, the key
definition of quality is conformance to requirements.
Crosby (1980) championed a quality improvement process based on the following four
criteria:
3. The performance standard for quality should be zero defects, which must be assumed
as total quality.
4. The measurement of quality is the cost of non-conformance, that is, the cost incurred
when quality requirements are not met.
This process must be used to ensure that the customers, internal staff and suppliers all
must understand the process of quality.
Quality Planning
Quality planning comprises the predetermined activities needed to achieve conformance
to requirements. Many organizations are finding that strategic quality plans and business plans
are inseparable. The quality planning procedure given by Juran (2005) has the following steps:
Total Quality Management is a management approach that originated in the 1950s and
has steadily become more popular since the early 1980s. Total Quality is a description of the
culture, attitude and organization of a company that strives to provide customers with products
and services that satisfy their needs. The culture requires quality in all aspects of the company’s
operations, with processes being done right the first time to eradicate defects and waste from
operations.
Total Quality Management is a method by which management and employees can become
involved in the continuous improvement of the production of goods and services. It is a
combination of quality and management tools aimed at increasing the business and reducing
losses due to wasteful practices. The quality of a library is defined and assessed from the
perspectives of different groups of people. Moreover, the quality of library services determines
the perception of the library within its parent organization (Gilbert, 1992).
The practice of Quality Management in Library and Information Science has existed since
the evolution of the subject itself, but the terminology used for it varies widely. Performance
indicators; performance evaluation; evaluation of reference sources using precision and recall
ratios; cost-benefit and cost-effectiveness studies; user surveys eliciting opinions on library
services: all of these are part and parcel of quality studies, using different mechanisms of
assessment and methodologies (Shewhart, 1986).
Quality assurance studies were mostly restricted to technical libraries and academic
libraries. Although quality assurance studies based on ISO 9001:2000 and other accreditation
schemes were conducted in other countries, such studies are rarely reported in Indian libraries
and information systems.
TQM is a business philosophy that champions the idea that the long-term success of a
company comes from customer satisfaction. TQM requires that all stakeholders in a business
work together to improve processes, products, services and the culture of the company itself.
While TQM seems like an intuitive process, it came about as a revolutionary idea. The
1920s saw the rise in reliance on statistics and statistical theory in business, and the first-ever
known control chart was made in 1924. People began to build on theories of statistics and
ended up collectively creating the method of statistical process control (SPC). However, it
wasn’t successfully implemented in a business setting until the 1950s.
It was during this time that Japan was faced with a harsh industrial economic environment.
Its citizens were thought to be largely illiterate, and its products were known to be of low quality.
Key businesses in Japan saw these deficiencies and looked to make a change. Relying on
pioneers in statistical thinking, companies such as Toyota integrated the idea of quality
management and quality control into their production processes.
By the end of the 1960s, Japan completely flipped its narrative and became known as
one of the most efficient export countries, with some of the most admired products. Effective
quality management resulted in better products that could be produced at a cheaper price.
A Gap Tool.
Benchmarking.
ISO 9000 family of standards, The Deming Prize, The Malcolm Baldrige National Quality
Award and The European Foundation for Quality Management Excellence Award.
Integrated Planning, which sets the focus for "what we do" as a provider of excellence.
Integrated Review, which sets the balances and measures to verify "how we know" we
are a provider of excellence.
Risk Management, ensuring that the organization remains flexible to risks and their potential impact.
This framework enables the organization to achieve its strategic promises, ensure quality
outcomes and meet its statutory and regulatory obligations. The following are quality management principles:
Monitoring outcomes against its stated goals, performance indicators and targets
Similar awards are presented by the EFQM’s National Partner organisations across
Europe. For example, in the UK the British Quality Foundation (BQF) runs the UK Excellence
Awards. These awards are based on the EFQM Excellence Model, an organizational framework.
www.bqf.org.uk
The Malcolm Baldrige National Quality Award (MBNQA) was created by the United States
Congress and was named after Malcolm Baldrige. It is administered by the National Institute of
Standards and Technology (NIST), and aims to improve the performance of U.S. businesses
by identifying and recognizing role-model businesses, establishing criteria for evaluating
improvement efforts, and disseminating and sharing best practices.
2. Customer outcomes
3. Workforce outcomes
In addition to the MBNQA, NIST also promotes the Baldrige Excellence Framework, a
"systems approach" to improving an organization's overall performance, for the Business/Nonprofit,
Education, and Health Care sectors.
The Deming Prize was created by the Union of Japanese Scientists and Engineers (JUSE)
in appreciation of the contributions of W. Edwards Deming. Its aim is to promote excellence in
Japanese businesses. The categories of the Deming Prize (Table 1.2) are the Deming Prize for
Individuals, the Deming Prize, and the Deming Grand Prize. Applicants are examined on the
basis of the following viewpoints:
Table 1.2: Categories of the Deming Prize
1. The Deming Prize for Individuals - Given to those who have made outstanding
contributions to the study of TQM or outstanding contributions in the dissemination of TQM.
1. Reflecting its management principles, type of industry, business scope, and business
environment, the applicant has established challenging and customer-oriented
business objectives and strategies under its clear management leadership.
2. TQM has been implemented properly to achieve business objectives and strategies
as mentioned in Item 1 above.
3. As an outcome of Item 2, outstanding results have been obtained for the business
objectives and strategies stated in Item 1.
The Deming Prize is an efficient way of benchmarking and steering your TQM system. JUSE
publishes summaries of prize winners, allowing you to benchmark yourself against the pioneers
of quality.
European Quality Award
The European Foundation for Quality Management (EFQM) was founded in 1988 with the
aim of promoting the TQM approach and improving performance in European businesses. The
European Quality Award was launched in 1991 and was later renamed the EFQM Excellence
Award. Companies
are assessed on the basis of nine aspects:
1. Leadership
2. People
3. Strategy
5. Processes
7. People results
8. Customer results
ISO 9000 proposes the use of a process approach to achieve continual improvement,
built on seven Quality Management Principles (ISO, 2015):
1. Customer focus
2. Leadership
3. Engagement of people
4. Process approach
5. Improvement
6. Evidence-based decision making
7. Relationship management
In addition, ISO/TC 176 (2009) proposes seven steps for successful implementation of
QMS:
2. Identify key processes and the interactions needed to meet quality objectives
3. Implement and manage the QMS and its processes (using process management
techniques)
5. Implement the system, train company staff and verify effective operation of your
processes
Based on the aforementioned works, the benefits of implementing ISO 9000 can be summed
up as follows:
ISO 9000 is a series, or family, of quality management standards, while ISO 9001 is a
standard within the family. The ISO 9000 family of standards also contains an individual standard
named ISO 9000. This standard lays out the fundamentals and vocabulary for quality
management systems (QMS).
ISO 9001 and ISO 14001 are among ISO’s most well known standards ever. They are
implemented by more than a million organizations in some 175 countries. ISO 9001 helps
organizations to implement quality management. ISO 14001 helps organizations to implement
environmental management.
ISO 9001 is a Quality Management System (QMS) which gives organizations a systematic
approach for meeting customer objectives. ISO 14001 is an Environmental Management System
(EMS) which provides a system for measuring and improving an organization’s environmental
impact.
ISO 9000 refers to a generic series of standards published by the ISO that provide quality
assurance requirements and quality management guidance. ISO 9000 is a quality system
standard, not a technical product standard. ISO 14000 refers to a series of standards on
environmental management tools and systems.
Clause Comparison
The first three clauses in ISO 9001:2015 are largely the same as those in ISO 9001:2008,
but there are considerable differences between ISO 9001:2008 and ISO 9001:2015 from the
fourth clause onwards. The last seven clauses are now arranged according to the PDCA cycle
(Plan, Do, Check, Act).
1.10 Summary
Quality management is the act of overseeing all the activities and tasks needed to maintain
a desired level of excellence; it is also referred to as TQM. A frequently used definition of quality
is "delighting the customer by fully meeting their needs and expectations". These may include
performance, appearance, availability, delivery, reliability, maintainability, cost effectiveness
and price. It is therefore imperative that the organisation knows what these needs and
expectations are. Quality management is a way of managing for the future, and is far wider in
its application than just assuring product or service quality. It is a way of managing people and
business processes to ensure complete customer satisfaction at every stage, internally and
externally.
1.11 Keywords
QM - Quality Management
6. Write short notes on quality awards: the Malcolm Baldrige award, the Deming Prize,
the European Quality Award and ISO 9000.
3. Evans, J., and Lindsay, W.M., The Management and Control of Quality, 8th Edition,
South Western, 2012.
4. Evans, J., Quality Management, Organisation and Strategy, 6th Edition, Cengage
International, 2011.
LESSON - 2
Quality Issues
Learning Objectives
Structure
2.1 Introduction
2.9 Summary
2.10 Keywords
2.1 Introduction
In the 1950s the concept of "quality cost" emerged. Different people assigned different
meanings to the term. Some equated quality cost with the cost of attaining quality; some
equated the term with the extra cost incurred due to poor quality. But the widely accepted view
is that "quality cost is the extra cost incurred due to poor or bad quality of the product or
service" (Juran, 2005). Quality starts with market research to establish the true requirements
for the product or service and the true needs of the customers. However, for an organisation to
be really effective, quality must span all functions, all people, all departments and all activities
and be a common language for improvement. The cooperation of everyone at every interface is
necessary to achieve a total quality organisation, in the same way that the Japanese achieve
this with companywide quality control.
They are:
Internal failure costs
External failure costs
Appraisal costs
Prevention costs
Internal failure costs - The cost associated with defects that are found prior to transfer
of the product to the customer.
External failure costs - The cost associated with defects that are found after product is
shipped to the customer.
Appraisal costs - The cost incurred in determining the degree of conformance to quality
requirement.
Prevention costs - The cost incurred in keeping failure and appraisal costs to a minimum.
Sometimes we can also include the hidden, i.e. implicit, costs (Juran, 2005).
In the emerging quality cost model, it has been argued that higher quality does not mean
higher costs. Companies estimate quality costs for the following reasons:
To identify the opportunities for reducing customer dissatisfaction and associated threats
to product salability.
Cost of quality (COQ) includes all the costs incurred to achieve the required quality of the
project: the cost of ensuring conformance to requirements as well as the cost of
nonconformance; managing quality means finding the right balance between the two. Modern
quality management philosophy emphasizes preventing mistakes rather than detecting them
later, because the cost of nonconformance is very high. The following costs are associated with quality:
Prevention costs are associated with preventing defects from occurring in the first place,
thereby keeping defective products away from the customer.
Examples include quality training, quality planning, reliability engineering, test engineering,
or data analysis.
Appraisal costs are associated with checking the product to make sure it is conforming,
such as inspection, testing, calibration, studies, or surveys.
o Internal: Nonconformance that is found while the product is still within the performing
organization is called internal failure costs, and includes rework or scrap.
o External: Nonconformance that is found when the product has been given to the
customer is called external failure costs, and includes repair or returns.
o Direct: Direct failure costs include scrap, warranty costs, rework, engineering
changes, liability insurance, or inventory costs.
o Indirect: Indirect failure costs include fewer sales, lost customers, increased costs
to get customers back, decreased team morale, or decreased project efficiency.
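The cost categories above can be totaled into a cost-of-quality figure. The numbers below are purely hypothetical, but the structure follows the text: conformance costs (prevention plus appraisal) versus nonconformance costs (internal plus external failure):

```python
# Hypothetical annual quality cost figures (currency units), one per category
prevention = 40_000        # quality training, planning, reliability engineering
appraisal = 25_000         # inspection, testing, calibration
internal_failure = 60_000  # scrap and rework found in-house
external_failure = 90_000  # repairs and returns after shipment

cost_of_conformance = prevention + appraisal
cost_of_nonconformance = internal_failure + external_failure
cost_of_quality = cost_of_conformance + cost_of_nonconformance

print(f"Cost of quality: {cost_of_quality:,}")                                   # 215,000
print(f"Nonconformance share: {cost_of_nonconformance / cost_of_quality:.0%}")   # 70%
```

In this sketch, nonconformance dominates the total, which is exactly the situation the prevention-first philosophy of modern quality management is meant to reverse.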
2. Satisfaction
3. Innovation
4. Finance
1. Timely payments
3. Flexibility
4. Share information
There exists in each department, each office, each home, a series of customers, suppliers
and customer-supplier interfaces. These are “the quality chains”, and they can be broken at
any point by one person or one piece of equipment not meeting the requirements of the customer,
internal or external. The failure usually finds its way to the interface between the organisation
and its external customer, or in the worst case, actually to the external customer. Failure to
meet the requirements in any part of a quality chain has a way of multiplying, and failure in one
part of the system creates problems elsewhere, leading to yet more failure and problems, and
so the situation is exacerbated. The ability to meet customers’ (external and internal) requirements
is vital. To achieve quality throughout an organisation, every person in the quality chain must be
trained to ask the following questions about every customer-supplier interface:
Can we measure our ability to meet the customer's needs and expectations?
Do we have the capability to meet the customer's needs and expectations? (If
not, what must we do to improve this capability?)
Can we measure whether our suppliers meet these needs and expectations, and
do they have the capability to do so?
As well as being fully aware of customers’ needs and expectations, each person must
respect the needs and expectations of their suppliers. The ideal situation is an open partnership
style relationship, where both parties share and benefit.
For design, development and implementation of a QMS, the ISO 9000 approach is
completely compatible with the total quality philosophy, though it is not as all encompassing.
ISO 9000 is composed of three standards: ISO 9000:2000 Quality Management Systems -
Fundamentals and Vocabulary; ISO 9001:2000 Quality Management Systems - Requirements;
and ISO 9004:2000 Quality Management Systems - Guidelines for Performance Improvements.
ISO 9001 and ISO 9004 are known as the Consistent Pair; they are based on and follow the
PDCA methodology.
By the time ISO 9000:1987 was released, TQM was a mature management system, well
understood by many in the West. It is clear that ISO’s Technical Committee 176 (TC 176),
which was charged with ISO 9000’s development, borrowed some TQM elements, most notably
its documentation requirements.
ISO 9000:1994 moved a bit closer to TQM, at least mentioning (though not requiring)
continual improvement. ISO 9000:2000 made a giant leap in comparison, especially in the area
of continual improvement, which has gone from receiving just cursory treatment to becoming a
firm requirement. In addition, the standard now incorporates eight quality management principles
that come directly from TQM. They are:
1. Customer focus
2. Leadership
3. Involvement of people
Ensuring that all employees at all levels are able to fully use their abilities for the
organization’s benefit.
4. Process approach
Recognizing that all work is done through processes, and managing them accordingly.
5. System approach to management
Expands on the previous principle in that achieving any objective requires a system of
interrelated processes.
6. Continual improvement
As a permanent organizational objective, recognizing and acting on the fact that no process
is so good that further improvement is impossible.
7. Factual approach to decision making
Acknowledging that sound decisions must be based on analysis of factual data and
information.
As a result of ISO 9000, any organization supplying products or service is able to develop
and employ a quality management system that is recognized by customers worldwide. Customers
around the globe who deal with ISO 9000-registered organizations can expect that purchased
goods or services will conform to a set of recognized standards.
ISO 9001's requirements for quality management systems are generic in nature, and are
applicable to organizations in any industry or economic sector, whether the organization
manufactures a product or provides a service, whether it is a company or a governmental
agency, and whether it is large or small.
To be registered the organization must go through a process that includes the following
steps:
1. Develop (or upgrade) a quality manual that describes how the company will assure the
quality of its products or services.
2. Document procedures (or upgrade existing documentation) that describe how the various
processes for design, production, continual improvement, and so forth, will be operated.
This must include procedures for management review/audits and the like.
3. The organization must provide evidence of top management’s commitment to the QMS
and its continual improvement.
4. The organization’s top management must ensure that customer requirements are
determined and met.
5. The organization must hire an accredited registrar company to examine its systems,
processes, procedures, quality manual, and related items. If everything is in order,
registration will be granted. Otherwise, the registrar will inform the company of which areas
require work (but will not tell the company specifically what must be done), and a
second visit will be scheduled.
6. Once registration is accomplished, the company will conduct its own internal audits to
ensure that the systems, processes, and procedures are working as intended.
7. Also once registered, the outside registrar will make periodic audits for the same purpose.
These audits must be passed to retain registration.
An important point to understand about ISO 9000 is that the organization has to respond
to all ISO 9000 requirements and tell the registrar specifically what it is going to do and how. ISO
does not tell the organization. Assuming the registrar agrees with the organization’s plan,
registration is awarded. To retain that registration, the organization must do what it said it would
do.
Before the advent of the year 2000 release, ISO 9000 was concerned only with providing
standards on which an organization could build its own version of a quality management system.
ISO 9000:2000 has closed much of the gap that existed with TQM. The primary remaining
difference between ISO 9000 and TQM is the degree to which the total organization is involved:
ISO 9000 does not require the QMS to include functions and levels that do not play a direct role
in the management and execution of the product/service realization processes. Functions that
are typically not involved under the QMS include human resources, finance (accounting), sales,
and marketing.
For most companies, the design process leads to a more effective organization design,
significantly improved results (profitability, customer service, internal operations), and employees
who are empowered and committed to the business. The hallmark of the design process is a
comprehensive and holistic approach to organizational improvement that touches all aspects of
organizational life, so you can achieve:
Increased profitability
By design we’re talking about the integration of people with core business processes,
technology and systems. A well-designed organization ensures that the form of the organization
matches its purpose or strategy, meets the challenges posed by business realities and significantly
increases the likelihood that the collective efforts of people will be successful.
As companies grow and the challenges in the external environment become more complex,
businesses processes, structures and systems that once worked become barriers to efficiency,
customer service, employee morale and financial profitability. Organizations that don’t periodically
renew themselves suffer from such symptoms as:
Redundancies in effort (“we don’t have time to do things right, but do have time to do
them over”)
Fragmented work with little regard for good of the whole (Production ships bad parts to
meet their quotas)
Delays in decision-making
People don’t have information or authority to solve problems when and where they occur
Management, rather than the front line, is responsible for solving problems when things
go wrong
Now the task is to make the design live. People are organized into natural work groups
which receive training in the new design, team skills and start-up team building. New work roles
are learned and new relationships within and without the unit are established. Equipment and
facilities are rearranged. Reward systems, performance systems, information sharing, decision-
making and management systems are changed and adjusted. Some of this can be accomplished
quickly. Some may require more detail and be implemented over a longer period of time.
A business process, simply defined, is any activity, or set of activities, designed to change
one or more inputs (which may be physical or information) into one or more outputs. It is
desirable, although not universally true, that a process should in some way add value to the
inputs so that the output is worth more than the combined value of the inputs and the processing.
• This is flowed down to suppliers who pass material into the organisation.
• Processes, machines, methods etc. are monitored as the material flows through
the production process.
• On successful completion goods flow into the distribution chain to consumers, whose
feedback is sought to drive design changes as appropriate, and the cycle begins
again.
This concept is hardly revolutionary now and, indeed, the wording of the model may look
rather dated. However, the recognition that outputs of a process are clearly driven by inputs
was the vital first step on the road to managing processes rather than outcomes. It may also be
worthy of note that, even today, many management approaches spend more time focusing on
the outcome than the means to achieve them (MBO and performance appraisal are perhaps
chief amongst these).
“Every organisation is perfectly designed to achieve the results that they do” (Deming,
1990)
By implementing quality management in our organization, we can boost the quality of our
deliverables and achieve total success.
Methodology
Although adaptable to the size, complexity and needs of any organization, the design
process consists of the following steps.
As senior leaders, you come together to discuss current business results, organizational
health, environmental demands, etc. and the need to embark on such a process. You establish
a charter for the design process that includes a “case for change,” desired outcomes, scope,
allocation of resources, time deadlines, participation, communications strategy, and other
parameters that will guide the project.
(At times, senior teams may go through either a strategic planning process or an executive
team development process prior to beginning a redesign initiative, depending on how clear
they are about their strategy and how well they work together as a team.)
We don’t want to begin making changes until we have a good understanding of the current
organization. Using our Transformation Model, we facilitate a comprehensive assessment of
our organization to understand how it functions, its strengths and weaknesses, and alignment
to your core ideology and business strategy.
During Six Sigma training sessions, a question frequently arises:
Do we wait for a customer complaint or some outside view to reveal process issues?
Hence, identify the critical success factors for your business process and define process
metrics to measure process performance. With the Six Sigma methodology, you can improve
this process performance: Six Sigma is a logical, structured approach to improving business
processes.
The Greek letter “sigma” is a statistical term that measures how much a given process deviates
from perfection; sigma is also known as the standard deviation of the process from its mean.
The Six Sigma process enables an organization to measure the number of “defects” in a process,
find methods to eliminate them, and get as close to “zero defects” as possible.
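The defect counting behind this is usually expressed as defects per million opportunities (DPMO). A minimal sketch of the calculation; the function name and all figures below are invented for illustration:

```python
# A minimal sketch of the DPMO (defects per million opportunities) metric.
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

# Hypothetical example: 150 defects across 10,000 invoices,
# each invoice offering 5 opportunities for a defect.
print(dpmo(150, 10_000, 5))  # 3000.0
```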
Managers face challenges in improving the quality and efficiency of the business. To
overcome these challenges, they need to implement the best methodology and tools to analyze
and control the process. The best way to improve the result is to improve the process.
This phase improves the process by determining potential solutions, ways to implement
them, test and implement them for improvement. In this phase, process owners are consulted
and improvements are suggested. An action plan for the improvement is circulated to the relevant
stakeholders; it specifies what action is to be taken, by whom, and by when. The
improvement plan is designed to mitigate risk and to incorporate customer feedback and satisfaction.
With the formation of improvement action plan, implementation phase starts simultaneously.
During implementation, actions are carried out, tested for effectiveness and implemented finally.
Tools used to eliminate the defects are Brainstorming, Mistake-proofing (Poka Yoke),
Simulation software, Prototyping, Piloting and Pugh Matrix.
Background
There are three major problems that can be identified with such a system:
It doesn’t work: 100% inspection is not 100% effective. No matter how good the inspector,
some good products will always be rejected or sent for rework due to fatigue, boredom or
a dozen other factors. More significantly, bad product will get shipped to customers.
It misplaces responsibility: Responsibility for quality devolves from the person making
the item to the inspector of the item whilst the control of quality remains where it always
will remain, with the person in control of the production process. Thus, the only one with
the ability to affect the final quality of the finished product has no incentive to pursue such
improvements.
The main objective of this phase is to generate a detailed solution monitoring plan. This
plan ensures that the required performance is maintained. It defines and validates the monitoring
system, develops standards and procedures, verifies benefits and profit growth, and
communicates to business. Hence, the main purpose of Control phase is to ensure – Holding
the gains.
The most important part of this phase is to provide training on new changes to all relevant
stakeholders.
Quality control (QC) is a process by which entities review the quality of all factors involved
in production. ISO 9000 defines quality control as “a part of quality management focused on
fulfilling quality requirements”.
1. Elements such as controls, job management, defined and well managed processes,
performance and integrity criteria, and identification of records
Important tools used in control phase are Process sigma calculation, Control charts, Cost
saving calculations and Control plan.
On the flip side, a single poor quality deliverable can create a cycle of low performance,
creating an environment where quality is not valued and people do not put in the extra effort.
Control Quality is the process of monitoring and recording results of executing the quality
activities to assess performance and recommend necessary changes. The key benefits of this
process include:
(1) Identifying the causes of poor process or product quality and recommending and/or
taking action to eliminate them.
(2) Validating that project deliverables and work meet the requirements specified by key
stakeholders necessary for final acceptance.
2.9 Summary
TQM, combined with effective leadership, results in an organisation doing the right things
right, first time. A Quality Management Process is a critical process within any business, as it
helps you to ensure that the deliverables produced actually meet the requirements of your
customer. This Quality Management Process will help you to improve the quality of your
deliverables, today. We should implement a Quality Management Process any time that we
want to improve the quality of our work. Whether we are producing deliverables as part of a
project or operational team, an effective quality management and quality assurance process
will be beneficial. By implementing this Quality Management Process, we can ensure that our
team’s outputs meet the expectations of our customer.
2.10 Keywords
PMBOK - Project Management Body of Knowledge
QC - Quality control
3. Evans, J., and Lindsay, W.M., The Management and Control of Quality, 8th
Edition, South Western, 2012.
4. Evans, J., Quality Management, Organisation and Strategy, 6th Edition, Cengage
International, 2011.
LESSON - 3
Quality Tools and Techniques
Learning Objectives
Structure
3.1 Introduction
3.7 Implementation of 5S
3.8 Summary
3.9 Keywords
3.1 Introduction
Quality management planning determines quality policies and procedures relevant to the
project for both project deliverables and project processes, defines who is responsible for what,
and documents compliance. The periodic review of the quality management activities and
measurements assures stakeholders that the measures taken are appropriate and that quality
processes are followed. If the measurements result in variances beyond the defined tolerance,
corrective actions can be identified. Kaizen means “continuous improvement of the processes and
functions of an organization through change”. In layman’s terms, Kaizen brings continuous
small improvements to the overall processes and eventually aims at the organization’s success.
2. Flow Chart
3. Pareto Chart
4. Histogram
5. Check Sheet
6. Scatter Plot
7. Control Chart
This tool is used to explore the causes of a single effect (or event) through brainstorming.
These causes are grouped under common categories known as the 5 Ms or 6 Ms, where the
6 Ms expand as:
Man
Material
Method
Machine
Measurement
Mother Nature.
2. Flowcharts
3. Pareto Chart:
Also known as the 80/20 principle and attributed to Vilfredo Pareto, it states that 80% of
the outcome results from 20% of the causes. It is a kind of bar chart showing the frequencies of
different causes or factors in descending order. The main purpose of this chart is to highlight
the most significant factors among a number of factors. In the figure below, the total number of
defects is plotted against the reasons for those defects. The problems are rank-ordered
according to their frequency and percentage of defects; this ordering makes it easier to identify
the primary areas for corrective action.
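The rank-ordering behind a Pareto chart can be sketched in a few lines; the defect categories and counts below are invented for the example:

```python
# Pareto rank-ordering: sort defect causes by frequency (descending)
# and report the cumulative percentage of total defects.
defects = {"scratches": 120, "dents": 45, "misalignment": 20,
           "discoloration": 10, "other": 5}

total = sum(defects.values())
cumulative = 0
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:15s} {count:4d}  {cumulative / total:6.1%}")
```

Here the top two causes account for 82.5% of all defects, illustrating where corrective action should be focused first.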
4. Histogram:
Histograms are a type of bar chart that depicts the frequency distribution of a data set:
the variable of interest is plotted on the x-axis and its frequency on the y-axis. The graph may
take different shapes based on the condition of the distribution, and it is used to understand
the nature of the data. A histogram can also be used to measure something against time.
Consider the example of a histogram of hits on a company’s website at different times of
the day: the x-axis shows the time of day and the y-axis shows the number of users or customers
active on the website.
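A text-mode version of such a histogram can be sketched with the standard library; the hourly hit data below is invented for illustration:

```python
# Count website hits per hour and print a simple text histogram
# (each '#' represents one hit).
from collections import Counter

hit_hours = [9, 9, 10, 10, 10, 11, 11, 14, 14, 14, 14, 15]  # hypothetical data
bins = Counter(hit_hours)

for hour in sorted(bins):
    print(f"{hour:02d}:00  {'#' * bins[hour]}")
```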
5. Checksheet
It is used for data collection. The frequency of categorized data is recorded on a check sheet;
analysis of the collected data then yields objective information.
6. Scatter Plot
A scatter plot represents the relationship between two variables, showing how one variable
changes with respect to a change in the other. A scatter plot can depict the following
relationships:
Strong positive
Strong negative
Weak Positive
Weak Negative
No relation
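Pearson’s correlation coefficient r quantifies these relationships: values near +1 indicate a strong positive relationship, near −1 a strong negative one, and near 0 no relation. A small sketch with invented data:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2, 4, 6, 8, 10]))   # strong positive: 1.0
print(pearson_r(x, [10, 8, 6, 4, 2]))   # strong negative: -1.0
```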
7. Control Chart
These charts are used to check whether process data remains in control over a given time
span. They involve process control limits, and sometimes customer specification limits, as
operational ranges or bands. Process data is analyzed to confirm that it remains within the
process control limits; whenever data goes outside the control limits, there is certainly some
special cause that must be investigated and removed immediately.
The aim of these charts is to ensure process data does not go beyond the control limits.
However, some exception rules also exist to detect a process that is going out of control while
still well within the control limits.
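A minimal control-limit check in this spirit, assuming individual measurements with limits at the mean ± 3 standard deviations computed from a baseline period (all figures are invented, and this is only a sketch, not a full SPC implementation):

```python
from statistics import mean, pstdev

baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.9]  # stable history
centre = mean(baseline)
sigma = pstdev(baseline)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

# Check new measurements against the limits; points outside
# suggest a special cause to investigate.
new_points = [10.0, 10.3, 13.5]
flags = [x for x in new_points if not (lcl <= x <= ucl)]
print(f"UCL={ucl:.2f} LCL={lcl:.2f} special causes: {flags}")
```

Note that the limits are computed from the stable baseline, not from the data being judged; otherwise an outlier would inflate sigma and hide itself.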
Interrelationship diagram.
Tree diagram.
Prioritization matrix.
Affinity Diagram
Affinity diagrams are a special kind of brainstorming tool that organizes large amounts of
disorganized data and information into groupings based on natural relationships.
The technique was created in the 1960s by the Japanese anthropologist Jiro Kawakita and is
also known as the KJ diagram, after him. An affinity diagram is used when:
Interrelationship diagram
Tree diagram
This tool is used to break down broad categories into finer and finer levels of detail. It can
map levels of details of tasks that are required to accomplish a goal or solution or task. Developing
a tree diagram directs concentration from generalities to specifics.
Prioritization matrix
This tool is used to prioritize items and describe them in terms of weighted criteria. It uses
a combination of tree and matrix diagramming techniques to do a pair-wise evaluation of items
and to narrow down options to the most desired or most effective. Popular applications for the
prioritization matrix include return on investment (ROI) or cost–benefit analysis (investment vs.
return), time management matrix (urgency vs. importance), etc.
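A simplified weighted-criteria scoring, in the spirit of the prioritization matrix (the options, criteria and weights below are invented; a full version would derive the weights from pairwise comparisons):

```python
# Score each option as the weighted sum of its criterion ratings,
# then rank options from highest to lowest score.
weights = {"roi": 0.5, "urgency": 0.3, "effort": 0.2}

options = {
    "Project A": {"roi": 9, "urgency": 5, "effort": 7},
    "Project B": {"roi": 6, "urgency": 8, "effort": 9},
}

scores = {name: sum(weights[c] * ratings[c] for c in weights)
          for name, ratings in options.items()}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```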
Matrix diagram
This tool shows the relationship between two or more sets of elements. At each intersection,
a relationship is either absent or present; the diagram then gives information about the
relationship, such as its strength or the roles played by various individuals or measurements.
The matrix diagram enables you to analyze relatively complex situations by exposing interactions
and dependencies between things. Six differently shaped matrices are possible: L, T, Y, X, C,
and roof-shaped, depending on how many groups must be compared.
A useful way of planning is to break down tasks into a hierarchy, using a tree diagram.
The process decision program chart (PDPC) extends the tree diagram a couple of levels to
identify risks and countermeasures for the bottom level tasks. Different shaped boxes are used
to highlight risks and identify possible countermeasures (often shown as “clouds” to indicate
their uncertain nature). The PDPC is similar to the failure modes and effects analysis (FMEA) in
that both identify risks, consequences of failure, and contingency actions; the FMEA also rates
relative risk levels for each potential failure point.
Activity network diagram
This tool is used to plan the appropriate sequence or schedule for a set of tasks and
related subtasks, and is used when subtasks must occur in parallel. The diagram helps in
determining the critical path (the longest sequence of tasks). Its purpose is to help people
sequentially define, organize, and manage a complex set of activities.
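The critical-path calculation can be sketched on a tiny task graph (the task names, durations and dependencies below are hypothetical):

```python
# Earliest-finish scheduling: each task starts once all of its
# predecessors have finished; the project length is the longest
# (critical) path through the graph.
duration = {"A": 3, "B": 2, "C": 4, "D": 1}
depends_on = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

finish = {}
for task in ["A", "B", "C", "D"]:  # listed in dependency order
    start = max((finish[p] for p in depends_on[task]), default=0)
    finish[task] = start + duration[task]

print(f"project length: {finish['D']}")  # 8, via the critical path A -> C -> D
```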
1. PDCA
The PDCA cycle (short for plan, do, check, and act) provides you with a systematic
approach to testing different ideas and hypotheses. It can help you to implement continuous
improvement throughout your organization using a structured framework. If you want to improve
business processes, efficiency, or productivity, then the PDCA cycle can help.
The framework gives front line teams a four-step guide for executing incremental
improvement practices. It enables them to avoid making the same mistakes repeatedly and is
commonly used in lean manufacturing. PDCA stands for:
Plan – define your strategic goals and how you’ll achieve them.
Do – implement the plan and make any changes required to ensure it works.
Check – evaluate the results against the goals and identify what worked and what didn’t.
Act – adjust the process based on what the check revealed, then repeat the cycle.
Some companies follow a slightly modified PDSA cycle, where the S stands for ‘study’
instead of check. It’s very similar to PDCA but involves passively observing instead of proactively
checking. The simple format means that PDCA is one of the most easily adopted continuous
improvement tools. Everyone in a company can understand and follow the four steps as they’re
relatable in a wide variety of job roles, from human resources to R&D. It facilitates continuous
process improvement and empowers employees to test ideas on a small scale. Over time, this
creates a culture of creativity and innovation which is difficult for your competitors to replicate.
One of the key benefits of PDCA is that it’s easy to understand and remember. The
acronym can quickly become a mantra that is repeated and utilized by everyone in the business.
Some companies display the process on posters around their buildings while others print it on
to mouse mats and coffee mugs. This gives employees a visual reminder and encourages
them to adopt it as a consistent part of their work routine. The 4-step process doesn’t require
weeks of training to understand either – it can be summarised clearly in a matter of minutes.
Managers can then follow-up with staff as they implement it and help them to learn on the go.
This approach to coaching means there isn’t a large barrier to implementation in terms of
training. Companies can hit the ground running and then tweak it as they go along.
2. Gemba Walks
When it comes to continuous improvement tools, Gemba walks can be particularly powerful.
They enable you to tap into the most valuable resource a company has: its people. The most
innovative improvement ideas often come from the employees who are working on the front
line and problem-solving on a daily basis. They have an in-depth understanding of their particular
area of the manufacturing process and are able to provide potential solutions.
Smart managers understand that the best way to capture these valuable insights is to get
out of their offices and into the ‘Gemba’. This is the place where things actually happen, such as
manufacturing or product development. Gemba walks involve interacting with staff on an informal
basis at the location where they do their work (as opposed to a meeting room). It enables
observation of real-life situations or the actual production process so that leaders have a better
idea of things that are happening. This casual yet accurate form of data collection can be a
powerful tool for companies that like simple improvement tools and techniques.
Regular Gemba walks also develop better employee relationships and a greater focus on
continuous improvement. They provide a framework for regular interaction and create a habit
of consistent feedback collection.
In actuality, the principles within 5S were being utilized decades before by Mr. Henry
Ford. It has been reported that prior to 1920, Mr. Ford was using CANDO in his manufacturing
processes. The acronym CANDO stands for Cleaning up, Arranging, Neatness, Discipline and
Ongoing improvement. In the 1950s, representatives from Toyota visited the Ford facilities to
be trained in automotive mass production methods. The Japanese later adapted the CANDO
methods and applied them in their production facilities. Some commonly used words describing
the steps in 5S are Sort, Set, Shine, Standardize and Sustain. Throughout different companies,
various words are used that have similar meanings. No matter what specific words are used to
identify the steps in 5S, the purpose remains the same: create a clean, organized and efficient
work environment.
1. SEIRI - SEIRI stands for Sort Out. According to Seiri, employees should sort out and
organize things well. Label the items as “Necessary”, “Critical”, “Most Important”, “Not
needed now”, “Useless”, and so on. Throw away whatever is useless, and keep aside
whatever is not needed at the moment. Items which are critical and most important should
be kept in a safe place.
2. SEITON - Seiton means to Organize. Research says that employees waste half of their
precious time searching for items and important documents. Every item should have its
own space and must be kept in its place only.
3. SEISO - The word “SEISO” means shine the workplace. The workplace ought to be kept
clean. De-clutter your workstation. Necessary documents should be kept in proper folders
and files. Use cabinets and drawers to store your items.
Kaizen focuses on continuous small improvements and thus gives immediate results.
Why Implement 5S
There are many benefits to implementing the 5S Methods into a work area on the production
line or in the business office. To not only survive but thrive in business today, cost must be
controlled and waste must be avoided or eliminated. The 5S steps, when implemented properly,
can identify and reduce many forms of waste in any process or workstation. An organized work
area reduces excessive motion and wasted time looking for the right tool. The visual aspect of
the 5S Methodology is also very effective. When everything has a place, it is easier to spot
something missing or misplaced. A clean work area helps draw attention to possible problems
or safety hazards. A clean floor helps spot any leaks or spills that could indicate needed
machine maintenance, and helps prevent slips and falls. Furthermore, encouraging people to
watch for and address problems can result in a positive change to an organization’s culture.
Therefore, the 5S Principles, whether implemented as part of a larger Lean initiative or as a
standalone tool, can reduce waste, improve quality, promote safety and drive continuous
improvement.
3.7 Implementation of 5S
Sort
The first step in 5S is sorting. During sorting the team should go through all items in the
work area including any tools, supplies, bulk storage parts, etc. The 5S team leader should
review and evaluate every item with the group. This will help to identify which items are essential
for getting the job done effectively and efficiently. If the item is essential for everyday operations
it should be tagged and cataloged. If the item is not essential, determine how often it is used in
the performance of work in that area. If it is a bulk item, decide the proper amount to be kept in
the area and move the remaining quantity to storage. Excess inventory is one form of waste
and should be eliminated during the 5S activities.
Straighten
Designate a place for all items that remain in the work area. Put all items in their designated
location. An often referenced quote is “A place for everything and everything in its place”. During
the straighten step, look for ways to reduce or eliminate waste. One form of waste in a process
is unnecessary operator motion or movement. Therefore, frequently used tools and supplies
should be stored in the immediate work area close to the operator. One effective method
commonly used to avoid wasted time searching for the correct tool is constructing shadow
boards for all essential tools. Items that are not used as often should be stored based on their
frequency of use. All parts bins should be properly labeled. The label should include part number,
part description, storage location and the recommended min / max quantities. A properly
straightened work area allows the operator to quickly review and verify that they have everything
they need to successfully perform their task at hand.
Shine
The next step is to clean everything in the area and remove any trash. To be effective we
must keep the area and any related equipment clean. Dirty process equipment can actually
increase the potential for process variability and lead to equipment failure. Lost time due to
equipment failure is considered waste and non-value-added time. A dirty area can also contribute
to safety issues that have the potential to cause a worker to be injured. Operators should clean
their areas at the end of each shift. By doing this they will likely notice anything out of the
ordinary such as oil or lubricant leaks, worn lift cables, burnt out bulbs, dirty sensors, etc. The
purpose is to reduce waste and improve operator safety and efficiency.
Standardize
The fourth step has been called the most important step in the 5S Process. In this step we
must develop the standards for the 5S system. They will be the standards by which the previous
5S steps are measured and maintained. In this step, work instructions, checklists, standard
work and other documentation are developed. Without work instructions or standard work,
operators tend to gradually just do things their own way instead of what was determined by the
team. The use of visual management is very valuable in this phase. Color coding and standard
colors for the surroundings are sometimes used. Photos of the area in the standard 5S
configuration are often posted for easier identification of non-conformances. The operators are
trained to detect non-conforming conditions and correct them immediately. Schedules should
also be developed for regular maintenance activities in each area.
Sustain
This step in the 5S Process can sometimes become the most challenging of all the five
steps. Sustaining is the continuation of the Sort, Straighten, Shine and Standardize steps. It is
the most important step in that it addresses the need to perform 5S on a consistent and systematic
basis. During this step a standard audit system is usually developed and implemented. The
goal of the sustain step is to ingrain the 5S process into the company culture. The company
must strive to make 5S a way of life so the benefits gained through the exercise can be maintained.
5S is not a one-time exercise. Following the 5S Process must become a habit.
5S + 1
Some organizations have added an additional step and titled their process 5S + 1. The
additional step being applied is safety. The goal of adding this step is to foster a culture that
enhances safety by identifying any workplace hazards and removing them. In addition, tools
and workstations are selected or designed with proper ergonomics in mind. The emphasis is
that in each of the other 5S steps the motto is “Safety First”.
Many companies have implemented 5S into all areas of their business. The greatest
benefits are usually realized when 5S is implemented as part of a larger Lean initiative within
the organization. If implemented properly, 5S can help drive your company’s lean initiatives and
be a powerful stimulus for developing a continuous improvement culture.
As previously stated, 5S Principles are effective tools for reducing waste, improving quality,
increasing efficiency, promoting safety and encouraging continuous improvement. When applying
5S Methods you should always remember the various forms which waste can take:
Overproduction – Producing more product than required or producing parts faster than
the downstream processes can utilize it. Strive to produce the proper amount at the proper
time.
Inappropriate or Non Value Added Processing – Waste is incurred through use of the
wrong tool, performing needless operations or not using the most efficient processes or
tools for the job. Beware of the phrase “Because we have always done it this way”. The
right process and the correct tools can reduce waste in your process.
Waiting – Time and resources are wasted when waiting on parts, supplies or information.
Unnecessary Motion – Any movement or motion performed by the operator that does not
add value is waste. During your 5S exercise examine the motions required to perform the
task. Organize the workstation so all tools and supplies are easily located and within easy
reach. In some cases, re-sequencing certain process steps can reduce excessive and
redundant movement or motion by the operator. By reducing or eliminating the waste of
excess motion you are also creating a more ergonomic workstation. Always consider
safety first.
Defects – This form of waste is one of the worst of all. Producing non-conforming parts or
assemblies increases scrap, reduces process efficiency, wastes machine, process or
assembly time and causes non-value added tool wear. Defects can also create additional
waste in the form of wait time when the downstream operations run out of usable parts.
Untapped Employee Creativity (potential) – Many companies are now realizing that their
best asset is their employees. Companies must create an atmosphere where ideas are
encouraged. Some of the most successful organizations have created a culture where
employees’ ideas are really heard and evaluated. When their good ideas are implemented,
the employee is recognized and rewarded. You never know where the next great idea is
going to come from.
3.8 Summary
There are seven basic quality tools identified as appropriate for use in both the quality
management plan and control quality processes. They are known as Ishikawa’s seven basic
tools of quality: cause-and-effect diagrams, flowcharting, check sheets, Pareto diagrams, control
charts, histograms and scatter diagrams. Total Quality Management focuses the organization’s
goals on a system of quality and meeting the needs of the customer. Strategic planning is a tool
that helps to prioritize the efforts of the organization in the implementation of a Total Quality
Management approach. The Kaizen process aims at continuous improvement of processes, not
only in the manufacturing sector but in all other departments as well. Implementing Kaizen tools
is not the responsibility of a single individual but involves every member who is directly associated
with the organization. Every individual, irrespective of his/her designation or level in the hierarchy,
needs to contribute by incorporating small improvements and changes in the system.
3.9 Keywords
UCL - Upper control limit
3. Evans, J., and Lindsay, W.M., The Management and Control of Quality, 8th
Edition, South Western, 2012.
4. Evans, J., Quality Management, Organisation and Strategy, 6th Edition, Cengage
International, 2011.
LESSON - 4
Six Sigma
Learning Objectives
Structure
4.1 Introduction
4.8 Summary
4.9 Keywords
4.1 Introduction
The process that led to Six Sigma originated in the 19th century with the bell curve
developed by Carl Friedrich Gauss. In the 1920s, statistician Walter Shewhart, a founding
member of the Institute of Mathematical Statistics, showed that a process required correction
after it had deviated three sigma from the mean.
Moving forward to the 1970s, Motorola senior executive Art Sundry complained about
the lack of consistent quality in the company’s products, according to the 2006 book “Six Sigma”
by Mikel Harry and Richard Schroeder.
According to the accepted story from numerous sources, Motorola engineer Bill Smith
eventually answered the call to consistently manufacture quality products by working out the
methodologies of Six Sigma in 1986. The system was influenced by, but is different from, other
management improvement strategies of the time, including Total Quality Management and
Zero Defects.
The name was derived from the goal to allow six standard deviations between the process
mean and the specification limit, thus reaching fewer than 3.4 defects per million parts. GE,
with Jack Welch at the helm, went a step further to establish Six Sigma as the company’s
mandatory and all-pervasive company culture. As a quality approach, “Six Sigma is
a highly disciplined process that helps us focus on developing and delivering near-perfect products
and services” (GE.com, 2010).
This definition of Six Sigma by one of its foremost promoters, General Electric (GE),
illustrates the whole philosophy behind it. In response to the increasing quality awareness of the
U.S. public and industry, triggered by Japanese quality superiority over U.S. industry,
quality moved into the focus of business strategists. The book Quality is Free by Philip Crosby
marked an important step in this development, presenting a 14-step approach aimed at
quality improvements with the ultimate goal of zero defects (Crosby, 1979). Motorola adopted
the idea in the late 1980s and advanced it, creating the concept of Six Sigma.
Motorola devised Six Sigma as an aggregation of statistical tools, based on the principles
of process capability and product specifications, which evolved into the statistical concept of
defects per million opportunities (DPMO) and was aimed at finding inefficiencies in product
development processes (Folaron, 2003).
The concept of continuous quality improvements focused on driving down defects both in
production and designs and was refined with the underlying principle of DMAIC (Define, Measure,
Analyze, Improve, and Control). The concept provided an effective quality management approach,
and its reputation owed much to the famous success stories of Motorola and GE, both of which
claimed to have increased quality and process efficiency drastically. Motorola, for instance,
claims to have saved $17 billion from 1986 to 2004 (Motorola University, 2005). Six Sigma was
also eventually adopted
in the automobile industry, with Ford as the most prominent example. Ford, which had previous
experience with TQM, turned to Six Sigma after encountering increasing problems with its
existing quality management approaches.
Six Sigma originates from a 19th-century mathematical theory, but found its way into
today’s mainstream business world through the efforts of an engineer at Motorola in the 1980s.
Now heralded as one of the foremost methodological practices for improving customer
satisfaction and improving business processes, Six Sigma has been refined and perfected over
the years into what we see today.
No matter the setting, the goal remains the same: Six Sigma seeks to improve business
processes by removing the causes of errors that lead to defects in a product or service. It
accomplishes this by setting up a management system that systematically identifies errors and
provides methods for eliminating them.
In an effort to bring operations to a “six sigma” level (essentially 3.4 defects for every one
million opportunities), the methodology calls for continuous efforts to get processes to the point
where they produce stable and predictable results.
Deconstructing the manufacturing process down to its essential parts, Six Sigma defines
and evaluates each step of a process, searching for ways to improve efficiencies in a business
structure, improve the quality of the process and increase the bottom line profit.
Six Sigma needs drive and support from top management to realize its full potential; hence
it is a top-down approach. Top management commitment is a key to the success of Six
Sigma projects. Six Sigma Master Black Belts and Black Belts should seek the best possible
management support to deliver successful Six Sigma improvements. Beyond top management,
commitment from all stakeholders and employees leads to exceptional outcomes. In
organizations where Six Sigma is part of the organizational culture, excellence is pursued in every
work area. The Six Sigma approach is strongly shaped by the culture and values of the organization:
a pursuit of continuous improvement and excellence is a natural buy-in to Six Sigma
applications.
Toward that end, the methodology calls for the training of personnel in Six Sigma, including
beginner Green Belts, Black Belts who often head up individual projects, and Master Black
Belts who look for ways to apply Six Sigma across a business structure to make improvements.
There are two major methodologies used within Six Sigma, both of which are composed
of five sections, according to the 2005 book “Juran Institute Six Sigma Breakthrough and Beyond”
by Joseph A. De Feo and William Barnard.
In all Six Sigma projects, there are two main methods of achieving the same defect-free
goals, detailed below.
DMAIC
DMADV
DMAIC focuses on improving existing business practices. DMADV, on the other hand,
focuses on creating new strategies and policies.
The first and most-used method in Six Sigma is a 5-step process called DMAIC:
Define
Measure
Analyze
Improve
Control
As any project manager knows, projects are fluid, dynamic creatures, each with its own
unique challenges and hurdles to wrangle. With any number of process improvement
methodologies to integrate into an organization, finding the one that works for you and your
team can be a challenge all its own.
The DMAIC process, a subset within the Six Sigma methodology, hails from the
manufacturing industry and is most well-known for eliminating inefficiencies within a project.
Favored by engineers, the DMAIC process is particularly rigorous, designed to help companies
manufacture products nearly 100% defect-free.
It is made up of five phases that can benefit any organization; each is described below.
D - Define the problem and the project goals.
The purpose of this step is to clearly articulate the business problem, goal, potential
resources, project scope and high-level project timeline. This information is typically captured
within a project charter document. Write down what you currently know; seek to clarify facts
and set objectives.
At this stage, you’ll need to decide what process to improve, taking customer needs and
company goals into account, and how you’ll measure success. You might want to put together
a project charter to get approval and buy-in from stakeholders before you make significant
changes to a process.
Process maps and timelines will then keep you on track to finish your project. Define the
problem and the project goals. In the first phase, the various problems which need to be
addressed are clearly defined. Feedback is gathered from customers about how they feel about
a particular product or service, and it is carefully monitored to understand problem areas and
their root causes.
M - Measure and find out the key points of the current process.
Measure the process to determine current performance and quantify the problem.
Measure in detail the various aspects of the current process. Once the problem is identified,
employees collect relevant data which gives insight into current processes. You have to
know where you are before you can get to your destination (or, in this case, the goals you
defined earlier).
A cause-and-effect diagram is very helpful for finding the root cause of a defect. Cause-and-
effect diagrams show the relationship between the results of problems and their root causes,
displaying all the primary and secondary causes of a problem and the effect of the proposed
solutions. This Ishikawa diagram is also called a fishbone diagram because of its fish-like shape.
For example, poor training, old equipment and insufficient funds might be the causes, and
“excessive downtime” the effect. Analyze the process to identify core causes
of poor performance.
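The cause groupings of a fishbone diagram can be sketched as a small data structure; the category names below are illustrative, not a fixed standard:

```python
# Illustrative fishbone (Ishikawa) structure for the effect "Excessive downtime".
fishbone = {
    "effect": "Excessive downtime",
    "causes": {
        "People":    ["poor training"],
        "Machines":  ["old equipment"],
        "Resources": ["insufficient funds"],
    },
}

# Print each cause category and its causes, mimicking the diagram's "bones".
for category, causes in fishbone["causes"].items():
    print(f"{category}: {', '.join(causes)}")
```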
The purpose of this step is to objectively establish current baselines as the basis for
improvement. This is a data collection step, the purpose of which is to establish process
performance baselines. The performance metric baseline(s) from the Measure phase will be
compared to the performance metric at the conclusion of the project to determine objectively
whether significant improvement has been made. The team decides on what should be measured
and how to measure it. It is usual for teams to invest a lot of effort into assessing the suitability
of the proposed measurement systems. Good data is at the heart of the DMAIC process.
Analyze the data to, among other things, find the root defects in a process. The information
collected in the second stage is thoroughly verified, and the root causes of the defects are
carefully studied and investigated to find out how they affect the entire process.
Using all the data you’ve collected, you can determine the root of the issue. Return to the
value stream map or other diagrams that you created earlier, or use a fishbone diagram, also
known as a cause-and-effect diagram or Ishikawa diagram, to visualize possible causes.
The purpose of this step is to identify, validate and select root causes for elimination. A
large number of potential root causes (process inputs, X) of the project problem are identified
via root cause analysis (for example, a fishbone diagram). The top 3-4 potential root causes are
selected using multi-voting or another consensus tool for further validation. A data collection plan
is created and data are collected to establish the relative contribution of each root cause to the
project metric, Y. This process is repeated until “valid” root causes can be identified. Within Six
Sigma, complex analysis tools are often used; however, it is acceptable to use basic tools where
these are appropriate. Of the “validated” root causes, all or some can then be taken forward:
Prioritize the root causes (key process inputs) to pursue in the Improve step.
Identify how the process inputs (Xs) affect the process outputs (Ys). Data are
analyzed to understand the magnitude of the contribution of each root cause, X, to the
project metric, Y. Statistical tests using p-values, accompanied by histograms, Pareto
charts and line plots, are often used to do this.
Detailed process maps can be created to help pin-point where in the process the
root causes reside, and what might be contributing to the occurrence.
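A Pareto analysis of the kind mentioned above takes only a few lines; the defect categories and counts here are invented for illustration:

```python
# Hypothetical defect counts by category
defects = {"scratch": 120, "misalignment": 45, "discoloration": 20,
           "wrong label": 10, "other": 5}

total = sum(defects.values())
cumulative = 0
# Sort categories from most to least frequent and print cumulative share,
# the core of a Pareto chart (the "vital few" appear at the top).
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:15s} {count:4d}  {100 * cumulative / total:5.1f}% cumulative")
```

In this invented data the top two categories account for over 80% of all defects, which is exactly the kind of prioritization signal the Analyze phase looks for.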
Now comes the essential piece of this process: finding solutions and making changes to
the process to reduce waste and better meet customer needs. Create a revised process map to
show the difference between your as-is and to-be processes.
Improve the process based on the research and analysis done in the previous stage.
Efforts are made to create new processes which ensure superior quality.
The purpose of this step is to identify, test and implement a solution to the problem; in part
or in whole. This depends on the situation. Identify creative solutions to eliminate the key root
causes in order to fix and prevent process problems. Use brainstorming or techniques like Six
Thinking Hats and Random Word. Some projects can utilize complex analysis tools like DOE
(Design of Experiments), but try to focus on obvious solutions if these are apparent. However,
the purpose of this step can also be to find solutions without implementing them.
Create an implementation plan and test solutions using the Plan-Do-Check-Act (PDCA) cycle.
Based on PDCA results, attempt to anticipate any avoidable risks associated with
the “improvement” using Failure Mode and Effects Analysis (FMEA).
Deploy improvements.
Control how the process is done in the future, so that changes do not lead to defects.
(While DMAIC is useful for improving your current processes, DMADV, covered below, is used
to develop a new process, product, or service; the DMADV process uses data and thorough
analysis to help you create an efficient process or develop a high-quality product or service.)
Once you have verified that your solution improved performance, you need to maintain
that momentum. Share the process maps you have created so everyone understands and
follows the new process.
The purpose of this step is to embed the changes and ensure sustainability; this is
sometimes referred to as making the change ‘stick’. Control is the final stage within the DMAIC
improvement method. In this step; Amend ways of working; Quantify and sign-off benefits;
Track improvement; officially close the project; Gain approval to release resources.
A control chart can be useful during the Control stage to assess the stability of the
improvements over time, by serving as a guide for continued monitoring of the process and by
providing a response plan for each of the measures being monitored in case the
process becomes unstable.
Typical deliverables of the Control stage include:
Process confirmation
Development plans
Transition plans
Control plan
Benefit delivery
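The control limits behind such a chart are conventionally the process mean plus or minus three standard deviations. A minimal individuals-chart sketch with invented data (textbook individuals charts usually estimate sigma from the average moving range; the plain sample standard deviation is used here for brevity):

```python
import statistics

def control_limits(data):
    """Center line and 3-sigma limits for a simple individuals chart."""
    mu = statistics.mean(data)
    sd = statistics.stdev(data)
    return mu - 3 * sd, mu, mu + 3 * sd

# Hypothetical post-improvement measurements
measurements = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]
lcl, center, ucl = control_limits(measurements)

# Any point outside the limits signals the process may have become unstable,
# triggering the response plan mentioned above.
out_of_control = [x for x in measurements if not lcl <= x <= ucl]
print(f"LCL={lcl:.2f}, CL={center:.2f}, UCL={ucl:.2f}, out of control: {out_of_control}")
```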
Fundamentals of DMADV
The Six Sigma DMADV methodology is used when an organization is creating new
processes in order to meet its customers’ needs. The acronym DMADV represents five
project phases: Define, Measure, Analyze, Design, and Verify.
During the define phase, the Six Sigma team will define the project’s goals and deliverables.
In the measure phase, the team measures the project’s factors that are critical to its deliverables.
Working in conjunction with the measure phase is the analyze phase, when the team analyzes
the process options that will best meet the customer’s required deliverables. The design phase
is the phase in which the team will document the detailed process that meets the customer’s
deliverables. The final phase of DMADV is the verify phase, in which the team will verify that the
customer’s needs are met through the use of the newly designed process.
Define
Measure
Analyze
Design
Verify
Define
In this first phase of the DMADV process, the aim is to identify the goal of the project,
process or service, not just from the perspective of the organisation, but also from the
perspective of other stakeholders, including customers. It should be clearly defined which
guidelines are important for the development of the product or service, what the potential
risks are, and what the production schedule is.
During this first phase, the project manager determines the most important customer
needs regarding the product or service to be newly developed, using relevant, previously
gathered customer information and feedback.
The goals of the first phase are to identify the purpose of the project, process or service,
to identify and then set realistic and measurable goals as seen from the perspectives of the
organization and the stakeholder(s), to create the schedule and guidelines for the review and to
identify and assess potential risks. A clear definition of the project is established during this
step, and every strategy and goal must be aligned with the expectations of the company and
the customers.
In the definition phase, a garden furniture manufacturer can decide to focus on the
production of wooden sun loungers. Based on previously collected customer information, the
manufacturer knows it is very important to customers that the wood being used comes from
fair trade sources.
In addition, customers have indicated that the lounger should be adjustable to three
positions, have high back and head support, and have an environmentally friendly
coating that allows the lounger to be left outside. During this definition phase, the manufacturer
can also determine whether it would be lucrative to design such a sun lounger.
Project leaders identify the wants and needs believed to be most important to customers,
using historical information, customer feedback and other information sources.
Metrics and other tests are developed in alignment with customer information.
Define strategies and processes which ensure one hundred percent customer satisfaction,
and define the project goals.
Measure
Next comes measuring the factors that are critical to quality, or CTQs. Steps taken should
include: defining requirements and market segments, identifying the critical design parameters,
designing scorecards that will evaluate the design components most important to quality,
reassessing risk, and assessing production process capability and product capability. Once
the values for these factors are known, an effective approach can be taken to start the
production process. It is important here to determine which metrics are critical to the stakeholders
and to translate the customer requirements into clear project goals.
This phase of the DMADV process is aimed at collecting and recording data relevant
to the CTQ measures identified during the first phase. The data collected during the
measuring phase are essential, as they will drive the rest of the process.
In the case of DMADV there are no CTQs at the start of the measuring phase; after all,
there is no new product yet, let alone a production process. This phase is therefore about
determining what the customer thinks is important in a new product. These factors are
subsequently linked to quality, which leads to the CTQs. If a value is assigned to all design
components, this leads to an effective approach to starting the production process.
It is important to determine which components of the production process are critical to all
stakeholders. The customer requirements will eventually be translated to clear project objectives,
in order to get a product that can distinguish itself from the competition.
The second part of the process is to use the defined metrics to collect data and record
specifications in a way that can be utilized to help drive the rest of the process.
All the processes needed to successfully manufacture the product or service are
assigned metrics for later evaluation.
Measure and identify parameters that are important for quality; measure critical
components of the process and the product capabilities.
The garden furniture manufacturer now links what the customers think is important to the
CTQs. If it is not possible to source fair trade wood, production cannot start. The same goes for
the environmentally friendly coating and for the design, which needs to meet the minimum
requirements: three positions, high back support and head support. During this measuring phase,
the manufacturer checks whether the design costs, manufacturing costs and raw materials
costs are worth the eventual selling price.
Analyze
The analysis phase of DMADV Process is closely linked to the measuring phase, because
the project team will analyse and test all the gathered data. This results in a good basis to
measure improvements during the production process. During this analysis phase, design
alternatives are developed and the optimum combination of requirements is determined. An
estimate of the total life cycle costs of the design is also made during this stage. After exploring
the different design alternatives, a rough product design (functional specification) is created
that meets the previously defined CTQs as closely as possible.
Actions taken during this phase include: developing design alternatives, identifying
the optimal combination of requirements to achieve value within constraints, developing
conceptual designs, evaluating and then selecting the best components, and then developing
the best possible design. It is during this stage that an estimate of the total life cycle cost of the
design is determined. After thoroughly exploring the different design alternatives, the question
is: what is the best design option available for meeting the goals?
In this phase, the garden furniture manufacturer will assess different importers from
whom they can purchase fair trade wood. They will determine the origin of the wood, so they can
use it as background information for sales. They also analyse different environmentally friendly
coatings, their advantages and disadvantages, and the strengths of the different options. The
analysis of different designs will receive close attention as well. Analysis is a time-consuming
phase, and the manufacturer would be wise to set a deadline to prevent cost overruns.
The result of the manufacturing process (i.e. finished product or service) is tested by
internal teams to create a baseline for improvement.
Leaders use data to identify areas of adjustment within the processes that will deliver
improvement to either the quality or manufacturing process of a finished product or
service.
Analyze and develop high-level alternatives to ensure superior quality; analyze the data
and develop various designs for the process, eventually picking the best one.
Design
The design phase of the DMADV Process consists of the design of the product or service
that fully matches the customer requirements. During this phase, the project team uses data
from the previous phases, leading to a product that is suitable for the customer with all possible
additional adjustments that might be needed. It is a detailed and high-quality design which will
be made into a prototype. During production of this prototype, they also look at the production
process. The goal is not just to develop a production process that creates good products, but
one which is also logistically efficient.
This stage of DMADV includes both a detailed and high level design for the selected
alternative. The elements of the design are prioritized and from there a high level design is
developed. Once this step is complete, a more detailed model will be prototyped in order to
identify where errors may occur and to make necessary modifications.
Based on the earlier analysis, the garden furniture manufacturer has made certain choices.
They have found a supplier for fair-trade wood, know which environmentally-friendly coating
they will use and they have chosen a design in which adjusting the lounger is quickest, safest
and easiest and in which the back support and neck support are connected in a good way. In
the manufacturing process, close attention will have to be paid to the layout of the woodworking
machinery and to the route the process will follow (routing), so that no time is wasted and a
target number of loungers leaves the factory every hour.
The results of internal tests are compared with customer wants and needs. Any additional
adjustments needed are made.
The improved manufacturing process is tested and test groups of customers provide
feedback before the final product or service is widely released.
Design details and processes; design and test details of the process.
Verify
In the final phase, the team validates that the design is acceptable to all stakeholders.
Will the design be effective in the real world? Several pilot and production runs will be necessary
to ensure that the quality is the highest possible. Here, expectations will be confirmed, deployment
will be expanded and all lessons learned will be documented. The Verify step also includes a
plan to transition the product or service to a routine operation and to ensure that this change is
sustainable.
The last stage in the methodology is continuous. While the product or service is being
released and customer reviews are coming in, the processes may be adjusted.
New data may lead to other changes that need to be addressed, so the initial process
may lead to new applications of DMADV in subsequent areas.
The applications of these methodologies are generally rolled out over the course of many
months or even years. The end result is a product or service that is completely aligned with
customer expectations, wants and needs.
The verification phase of the DMADV process might be the final phase, but it is not the
end of the process. To safeguard quality, it is important to continue to verify and to make
adjustments to the product where necessary. In this last phase, the design is final and the
product is ready to be sold. During this phase, the project team receives feedback from
customers and user experiences, and makes the necessary adjustments to better meet the
customers’ needs. The project team will also determine additional CTQ measures to be able
to monitor customer feedback after delivery of the final product.
DMAIC and DMADV do have a number of similarities that are worth noting. They both use
statistical tools and facts in order to find solutions to common quality-related problems and
focus on reaching the business and financial goals of an organization. DMAIC and DMADV are
implemented by Green Belts, Black Belts and Master Black Belts, and are used to reduce defects
to fewer than 3.4 per million opportunities, the Six Sigma level. Their solutions are data-intensive
and based only on hard facts.
The two most widely used Six Sigma methodologies are DMAIC and DMADV. Both methods
are designed so a business process will be more efficient and effective. While both of these
methodologies share some important characteristics, they are not interchangeable and were
developed for use in differing business processes. Before comparing these two approaches in
more detail, let’s review what the acronyms stand for.
Despite the shared first three letters of their names, there are some notable differences
between them. The main difference exists in the way the final two steps of the process are
handled. With DMADV, the Design and Verify steps deal with redesigning a process to match
customer needs, as opposed to the Improve and Control steps that focus on determining ways
to readjust and control the process. DMAIC typically defines a business process and how
applicable it is; DMADV defines the needs of the customer as they relate to a service or product.
Six Sigma methodologies are aimed at reducing the errors in a product line by looking at
all the processes contributing to the completion and delivery of an item or service. Improving
the effectiveness of these processes and omitting redundancies are ways to make the entire
manufacturing process more efficient. This leads to shortened lead times, improvements in
gross margin and more reliable production lines.
Coupling improvements in the manufacturing processes with those that govern customer
service can help to deliver a more complete and profitable product or service. The Six Sigma
processes that look at the customer service aspects of a business are outlined in the acronym
“DMADV” which refers to Define, Measure, Analyze, Design, and Verify.
First off, DMADV and DFSS are essentially the same process. DFSS stands for “Design
for Six Sigma,” and is just another name for DMADV. DMAIC is the more well-known and most-
used Lean Six Sigma project methodology and is focused on improving an existing process,
rather than creating a new product or process like DMADV.
In general, DMADV is associated with new services and product designs; it may not
always work with existing products and processes. When there is no existing product, DMADV
can be implemented to design the product or process. Another way of looking at it would be to
use DMADV when a process improvement doesn’t meet expectations or simply fails.
DMAIC is used on a product or process that already exists but is no longer meeting
customer needs and/or specifications. Companies without previous Six Sigma experience may
want to enlist help from professionals such as Six Sigma Black Belts and Master Black Belts,
professionals who can help make the best choice between DMAIC and DMADV.
4.8 Summary
Six Sigma is about process improvement as part of quality management. Existing products
and/or services are improved using analytical techniques and statistics. Six Sigma focuses on
reducing variability in matters that are perceived as Critical to Quality (CTQ) by the customer.
These CTQs are very important and vital to quality; it is about the internal critical quality
parameters that relate to the customer’s wishes and needs. As such, CTQs are quality
characteristics of the process or service that meet the requirements of the customer.
4.9 Keywords
PDCA Cycle - Plan-Do-Check-Act Cycle
LESSON - 5
High Performance Design
Learning Objectives
Structure
5.1 Introduction
5.9 Waste
5.10 Summary
5.11 Keywords
5.1 Introduction
Organizations practicing Six Sigma create special levels for employees within the
organization, called “Green Belts”, “Black Belts” and so on. Individuals certified with any of
these belts are often experts in the Six Sigma process. According to Six Sigma,
any process which does not lead to customer satisfaction is referred to as a defect and has to
be eliminated from the system to ensure superior quality of products and services. Every
organization strives hard to maintain excellent quality of its brand and the process of six sigma
ensures the same by removing various defects and errors which come in the way of customer
satisfaction.
Six Sigma
Teamwork
Personal Discipline
Improved Morale
Quality Circles
5S is a lean methodology that results in a workplace that is clean, uncluttered,
safe, and well organized to help reduce waste and optimize productivity. It’s designed to help
build a quality work environment, both physically and mentally. The 5S philosophy applies in
any work area suited for visual control and lean production.
Value stream mapping (VSM) is a pencil and paper tool used in two stages. First, follow a
product’s production path from beginning to end and draw a visual representation of every
process in the material and information flows. Second, draw a future state map of how value
should flow. The most important map is the future state map.
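One number a value stream map makes visible is process cycle efficiency: the share of total lead time that actually adds value. A small sketch with invented step and waiting times:

```python
# Hypothetical step times from a value stream map (minutes).
# Each step: (name, value-added time, waiting/queue time before the next step)
steps = [
    ("cut",      5, 120),
    ("weld",     8, 240),
    ("paint",   12,  60),
    ("assemble", 9,  30),
]

value_added = sum(va for _, va, _ in steps)
lead_time = sum(va + wait for _, va, wait in steps)
print(f"Process cycle efficiency: {100 * value_added / lead_time:.1f}%")
```

In this invented stream only a small fraction of the lead time adds value; the waiting time between steps is where the future state map would target improvement.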
Flow:
Flow is the progressive achievement of tasks along the value stream so that a product
proceeds from design to launch, order to delivery, and raw materials into the hands of the
customer with no stoppages, scrap, or backflows.
Visual workplace:
Quality function deployment (QFD) begins with an exploration and discovery of customer
needs. The first step is to capture the voice of the customer (VOC) and then create a voice of
the customer table (VOCT). Common sources can include sales and technical trip reports,
warranty claims, user support forums or help lines, and social media.
Six Sigma team leaders also use project management tools, such as Gantt charts, and
team engagement tools, such as brainstorming and nominal group technique.
Problems that won’t go away are the symptom of a deeper issue that you need to resolve.
To get to the root of chronic issues, you can use the tool known as the 5 Whys.
The 5 Whys technique originated in the 1930s, during Japan’s industrial revolution. The
method is simple: when a problem arises, you get to its cause by asking “why” five times. This
method is most effective when used on moderately difficult problems. For more complex issues,
you may achieve better results with a cause-and-effect diagram.
This is a method that uses questions to get to the root cause of a problem. The method is
simple: simply state the final problem (the car wouldn’t start, I was late to work again today) and
then ask the question “why,” breaking down the issue to its root cause. In these two cases, it
might be: because I didn’t maintain the car properly and because I need to leave my house
earlier to get to work on time. The process first came to prominence at Toyota.
Organizations often find that one and the same problem occurs over and over again. No
matter how many times they address it, it keeps creeping back in at a later date.
The 5 Whys sounds like an unsophisticated method, but don’t underestimate it: its
simplicity is what makes it so helpful. Besides, this tool works well in combination with other Six
Sigma tools.
The 5 Whys is one of the best continuous improvement tools for root cause analysis. It
can help you to identify the source of a problem and see beyond the superficial issue. By asking
‘why’ several times in a row, you can dive deeper into the heart of a problem. This enables you
to then come up with potential solutions that accurately address it instead of just treating the
symptoms. It also helps teams to move past apportioning blame or finger-pointing to find the
real issue.
Using the 5 Whys technique can also help you to determine the relationships between
cause and effect (ideal for creating a fishbone diagram). It is a simple tool that anyone can
use without the need for statistical analysis such as data regression or hypothesis testing.
Businesses may find that they need to ask ‘why’ a few more or a few fewer times to get to the
root of an issue, but this approach is a reliable way of getting to the heart of anything that isn’t
working.
The 5 Whys tool is employed to move past the symptoms toward identifying the
actual cause of a problem. By asking the question “why” five times, the true cause can often be
determined.
A great number of organizations rely on the 5 Whys method for many reasons: it is simple,
easy to apply, and does not require statistical analysis. The 5 Whys method is ideal when the
problem source involves human interactions and/or other quantifiable factors.
To apply this methodology, the following steps can be followed:
Question why the problem occurred and write down possible answers.
If the answers do not lead to a root cause, ask “why” again and again document
the responses.
Repeat the process until the root cause has been determined.
Note: you may ask the question “why” more or fewer than five times.
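The steps above can be sketched as a tiny helper that walks a recorded chain of answers; the example chain reuses the car illustration and is, of course, invented:

```python
def five_whys(problem, answers):
    """Walk a chain of 'why' answers and return the final (root) cause."""
    print(f"Problem: {problem}")
    cause = problem
    for i, answer in enumerate(answers, start=1):
        print(f"Why #{i}: {cause} -> {answer}")
        cause = answer
    return cause

# Invented chain of answers for the classic "car wouldn't start" example
root = five_whys(
    "The car wouldn't start",
    ["the battery was dead",
     "the alternator was not charging it",
     "the alternator belt was broken",
     "the belt was never replaced",
     "the car was not maintained on schedule"],
)
print("Root cause:", root)
```

Note that the helper accepts any number of answers, mirroring the point above that five is a guideline rather than a rule.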
The Critical to Quality tree diagram breaks down the components of a process that
produce the features your product or service needs if you wish to have satisfied customers.
Much like the Five Whys, this is a process by which a business attempts to identify the
root cause of a defect and then correct it, rather than simply correcting the surface “symptom”
of the problem.
Ultimately, all of the tools and methodologies in Six Sigma serve one purpose: to streamline
business processes in order to produce the best products and services possible with the smallest
number of defects. Its adoption by corporations around the globe is both an indicator of and a
testament to its remarkable success in today’s business environment.
Six Sigma is a newer concept than Total Quality Management, but not exactly its
replacement. The basic difference between the two is that TQM focuses on delivering superior-quality
manufactured goods, whereas Six Sigma focuses on improving the processes behind them
to produce measurably better results. Total Quality Management refers to a continuous effort by
employees to ensure high-quality products. The process of Six Sigma incorporates many small
changes in the systems to ensure effective results and better customer satisfaction.
Total Quality Management involves designing and developing new systems and processes
and ensures effective coordination among the various departments. New processes are developed
based on customer feedback and research.
The main focus of Total Quality Management is to maintain existing quality standards,
whereas Six Sigma primarily focuses on making small, necessary changes in the processes
and systems to ensure high quality.
The process of Total Quality Management reaches a saturation level after a certain
period of time; after that stage, no further improvements in quality can be made. Six Sigma, on
the other hand, seldom reaches saturation, because it keeps initiating the next level of quality
processes.
The process of Total quality management involves improvement in existing policies and
procedures to ensure high quality.
Six Sigma involves specially trained individuals, whereas Total Quality Management does
not require extensive training. The process of Six Sigma creates special levels for the employees
who alone are eligible to implement it. Employees trained for Six Sigma are often certified
as “Green Belts” or “Black Belts” depending on their level of proficiency. Six Sigma requires the
participation of certified professionals only, whereas Total Quality Management can be treated
as a part-time activity that does not require any special training. Six Sigma must be implemented
by dedicated and well-trained professionals.
Six Sigma is known to deliver better and more effective results than Total Quality
Management. The process of Six Sigma is based on customer feedback and is more accurate
and result-oriented; customer feedback plays an important role in it. Experts predict
that Six Sigma will outshine Total Quality Management in due course.
For each of these Belt levels, skill sets are available that describe which of the overall Lean
Six Sigma tools are expected to be mastered at that level. These skill sets provide a detailed
description of the learning elements that a participant will have acquired after completing a
training program. The skill sets reflect elements from Six Sigma, Lean and other process
improvement methods such as the Theory of Constraints (TOC) and Total Productive Maintenance (TPM).
Lean Six Sigma combines Lean and Six Sigma to cut production costs, improve quality,
speed up production, stay competitive, and save money. From Six Sigma it gains reduced variation
in parts; from Lean it gains a focus on the types of waste and on how to reduce them. Brought
together in Lean Six Sigma, the two complement each other, creating a well-balanced and
organized approach to saving money and producing better parts consistently.
The term ‘kanban’ was coined by Taiichi Ohno, an industrial engineer at Toyota. Ohno based his system on
how supermarkets control their inventory depending on demand.
When you shop at a supermarket, you don’t stock up for months or years ahead. Neither
does the store stock items that it doesn’t expect to sell right now. Instead, you tailor your shopping
list to what you need right now, just as the store bases its supply of products on customer
demand. Kanban mimics this arrangement by allowing the demand for the firm’s output to
control the supply of its inventory.
A Kanban system sets limits on inventory holdings for all current business processes.
This frees up additional resources and allows them to be used better. The Kanban system works
on a simple and elegant idea: only activate the supply chain when demand requires it. This
both brings more focus to the business process itself and increases its efficiency.
The term poka-yoke was applied by Shigeo Shingo in the 1960s to industrial processes
designed to prevent human errors. Shingo redesigned a process in which factory workers,
while assembling a small switch, would often forget to insert the required spring under one of
the switch buttons. In the redesigned process, the worker would perform the task in two steps,
first preparing the two required springs and placing them in a placeholder, then inserting the
springs from the placeholder into the switch. When a spring remained in the placeholder, the
workers knew that they had forgotten to insert it and could correct the mistake effortlessly.
Shingo distinguished between the concepts of inevitable human mistakes and defects in
production. Defects occur when mistakes are allowed to reach the customer. The aim of
poka-yoke is to design the process so that mistakes can be detected and corrected immediately,
eliminating defects at the source.
More broadly, the term can refer to any behavior-shaping constraint designed into a process
to prevent incorrect operation by the user.
A simple poka-yoke example: the driver of a car with a manual gearbox must press the
clutch pedal (a process step, and therefore a poka-yoke) before starting the engine. The
interlock prevents unintended movement of the car. Another example is a car with an automatic
transmission, which has a switch requiring the car to be in “Park” or “Neutral” before it can be
started (some automatic transmissions require the brake pedal to be depressed as well). These
serve as behavior-shaping constraints: the action of “car in Park (or Neutral)” or “foot depressing
the clutch/brake pedal” must be performed before the car is allowed to start. The requirement of
a depressed brake pedal to shift most cars with an automatic transmission out of “Park” is yet
another example of a poka-yoke application. Over time, the driver’s behavior conforms to the
requirements through repetition and habit.
Implementation in manufacturing
Shigeo Shingo recognized three types of poka-yoke for detecting and preventing errors
in a mass production system:
1. The contact method identifies product defects by testing the product’s shape, size,
color, or other physical attributes.
2. The fixed-value (or constant number) method alerts the operator if a certain number
of movements are not made.
3. The motion-step (or sequence) method determines whether the prescribed steps of
the process have been followed.
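The third (motion-step) method can be sketched in a few lines of Python. The step names below are hypothetical, loosely modeled on Shingo's switch-assembly example; a real implementation would live in a fixture or machine controller rather than in software like this.

```python
# Motion-step (sequence) poka-yoke sketch: the process is allowed to
# complete only if every prescribed step was performed, in order.
REQUIRED_SEQUENCE = ["place_springs_in_holder", "insert_springs", "close_switch"]

def motion_step_check(performed_steps):
    """A control poka-yoke: True only when the operator's recorded
    steps exactly match the prescribed sequence."""
    return performed_steps == REQUIRED_SEQUENCE

motion_step_check(["place_springs_in_holder", "insert_springs", "close_switch"])  # passes
motion_step_check(["insert_springs", "close_switch"])  # fails: a step was skipped
```

Returning `False` here corresponds to a control poka-yoke (the process refuses to proceed); logging a warning instead would correspond to Shingo's warning poka-yoke.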
Either the operator is alerted when a mistake is about to be made, or the poka-yoke
device actually prevents the mistake from being made. In Shingo’s lexicon, the former
implementation would be called a warning poka-yoke, while the latter would be referred to as a
control poka-yoke.
Shingo argued that errors are inevitable in any manufacturing process, but that if
appropriate poka-yokes are implemented, then mistakes can be caught quickly and prevented
from resulting in defects. By eliminating defects at the source, the cost of mistakes within a
company is reduced.
This approach can be used to emphasize the technical aspect of finding effective solutions
during brainstorming sessions.
A typical feature of poka-yoke solutions is that they do not let an error in a process happen
at all. Other advantages include:
Switching from defect detection to defect prevention reduces costs and removes
waste
Lean Six Sigma works for any size organization. The same success achieved by large
businesses can be attained by small and medium businesses. Smaller organizations may actually
be more nimble with fewer people and lower levels of red tape to navigate.
This method works for businesses looking for a roadmap to effectively meet their strategic
goals. Applying Lean Six Sigma helps to increase revenue and reduce costs, while freeing up
resources to add value where the organization needs them most. The ultimate winners are the
customers of the business who receive consistent, reliable products and services.
Lean Six Sigma not only improves profit margins, it positively affects employees by
engaging them in the work of improving their own processes. Since employees are closest to
the actual work of an organization—the delivery of products and services—their intimate
knowledge makes them the best resources to analyze and improve the efficiency and
effectiveness of those processes.
By participating in successful Lean Six Sigma efforts, employees build confidence and
become increasingly valuable assets to the business. Studies show that employees who feel
they’re able to positively impact an organization will perform better, be more accountable and
live happier lives. By quickly mastering basic Lean Six Sigma skills, they will continually
standardize work, root out problems and remove waste in an organization.
There are many approaches to doing a readiness assessment. Here’s a typical sequence;
the steps are described in detail below:
4. Engage key influencers (those who wield formal or informal power in the organization)
through focus groups and interviews
Before reviewing these steps in more depth, here’s one tip: The way you conduct the
assessment will set the tone for what people expect out of Lean Six Sigma. By including a wide
range of people in the assessment you can create a lot of positive feelings towards the initiative,
especially if you go in with an open mind and “listen” more than you “tell.”
The reason for selecting or designating a corporate Lean Six Sigma Champion first is
simple: he or she should lead the rest of the work involved in preparing for and rolling out the
initiative. Having the Champion involved early on and reporting directly to the CEO is important
because:
Regular communication between the corporate leader and Champion will help ensure
consistency in the messages being sent to the organization
The Champion will be more effective if he or she can speak with authority about the
reasons why the organization is undertaking Lean Six Sigma
All of this early work associated with deploying Lean Six Sigma revolves around building
alliances and becoming connected with management’s priorities. That is why an effective
Champion needs a combination of top-notch people skills and the ability to
understand the business, not to mention planning and deployment skills, knowledge of Lean Six
Sigma, and so on.
The first step in any plan is knowing what you’re starting with. The Champion, working in
conjunction with the executive team, should compile basic information on two fronts: the business
status of the company overall and of its major subdivisions, and existing knowledge of, and
attitudes towards, change in general and Lean Six Sigma in particular.
Though the executive team should be up to speed about the organization’s overall status,
it helps to document some basic information up front to make sure that the decision makers are
all starting from the same point. Think of it like an annual physical: you just want to compile data
on how the organization and its major subdivisions are doing fiscally, where people are currently
deployed, and so on, include any existing information on customer satisfaction.
What also helps here is benchmarking: visiting other companies that are involved in Six
Sigma or Lean Six Sigma to see what has or has not worked for them and how they
adapted the initiative to their work style, culture, business needs, and so on.
Typically, the Champion and/or outside experts will meet with the CEO and his or her direct
reports in one-on-one interviews. The purpose of these interviews is to identify the critical elements
of success for the business as a whole (what will it take to increase ROIC? market share?) and for
the Lean Six Sigma initiative itself (what do we need to pay attention to in order to make sure
Lean Six Sigma is a tool we use to drive corporate strategy? what could stand in the way?).
Since the purpose is to uncover factors that will shape deployment plans, the topics
covered typically include:
Experiences with change initiatives from the past (are they still in place? why or why
not? have they made people enthusiastic or cynical?).
Key barriers that may hinder or derail deployment of strategy (such as whether
people think they can afford to dedicate 1% of the workforce as full-time Black
Belts)
Current attitude towards Lean Six Sigma (do they see it as a means for accomplishing
their goals? as a necessary evil?).
What people consider key to their personal success within the organization; how
strategic planning and individual goals are aligned in performance evaluations.
The organization’s and these individuals’ understanding of and experience with any
element of Lean Six Sigma (processes, data collection, cycle time reduction, best
practice sharing, etc.).
Training history: what training has the company provided in the past? What skills
have been emphasized? How well has it worked?
Union issues: To what extent will unions be a factor in the Lean Six Sigma
implementation?
How strategies, goals, success measurements, and targets are cascaded throughout
the organization. What structures and processes exist that determine improvement
priorities? How is progress monitored and who participates in the processes?
Teamwork/collaboration (or the lack thereof) within the organization; turf wars.
Openness to new approaches. How prevalent is the “not invented here syndrome”?
How authority is exercised and how conflict is resolved are issues that coalesce around
decision making. Exploring how decisions are made can therefore reveal important dynamics
that will influence plans and tactics for deployment.
Asking the above questions of all top managers will reveal the extent to which executing
strategy is an issue. A skilled interviewer will be able to gain the confidence of interviewees and
pick up on inconsistencies in the interpretation of roles and strategy. Because many major Lean
Six Sigma opportunities lie in the “white space” between functions or in processes that cross
traditional boundaries, you’ll need to know how willingly different parts of the organization will
come together and support cross-functional goals that may not directly benefit their organization.
One organization that was embarking on a Lean Six Sigma initiative had a frontline
employee who also happened to be a part-time pastor, and who, not incidentally, had presided
at the weddings of half the company. He was looked up to by most employees, and his opinions
were always sought out. Unfortunately, no one bothered to talk to this employee or involve him
in any aspect of the Lean Six Sigma planning or launch. It is widely acknowledged at this
company that this oversight was one of the biggest reasons why the initiative encountered
major resistance from many parts of the organization.
In any organization, there is a core group of perhaps 5% to 10% of the employees who
have a bigger effect on what does and doesn’t get done than do their coworkers. Everybody
knows who they are. As shown by the story above, these key influencers can be anywhere in
the organization, from the board room to the reception desk. Their influence can arise from
formal authority or from any number of other factors (personality, longevity, connections).
Anyone with formal P&L responsibility, and often their direct reports, should be included in
these lists.
Key influencers can come from anywhere in the organization. This notion is incredibly
powerful. Why? As a Champion at one otherwise successful example of Lean Six Sigma
discovered, many of the richest opportunities are cross-functional. But addressing those
opportunities was impossible when individual silo leaders or key influencers didn’t appreciate
how Lean Six Sigma could help them and their staff. In fact, results in one division of that
company are marginal because a key influencer keeps saying, “I don’t need this.” You can
avoid this situation by following the engagement guidelines given in the next chapter.
The one caveat is that you have to be diligent in finding all the people who fall into this
category. If Joe is the “go to” guy in IT, you’d better talk to Joe. If Maria knows the ins-and-outs
of accounting better than anyone else in the department, you’d better talk to Maria. The more of
the key influencers you include, the greater the chances that the deployment will progress
smoothly and receive support.
From a practical standpoint, the contact can occur either one-on-one or in focus groups
depending on how the logistics work out for your assessment, but the key point is to have face-
to-face contact with as many of these influencers as possible. Though you should let the
discussions go in any direction that these people want to cover, it helps if there is at least some
overlap with the topics discussed with top management (see list of topics/questions, above)—
that way you can compare perceptions at different levels of the organization.
The information from top management and key influencers is usually synthesized into a
leadership training course that outlines the critical issues that will impact the Lean Six Sigma
strategy and unique challenges faced by the company regarding deployment, training, and
infrastructure.
You will likely find patterns that indicate some areas of your company will be more receptive
to Lean Six Sigma than others. If the less-receptive areas are involved with value streams that
are critical to your business, you won’t have any choice but to include them in the deployment,
though you will have to do more communication and education up front to convince people that
Lean Six Sigma can help them.
Though every organization is unique, there are some general patterns often seen in service
organizations that have predictable effects on how a deployment should be structured. Here
are a few of the most common issues:
2. Little history with improvement; little or no process orientation, little use of data
5.9 Waste
Waste is defined by Fujio Cho of Toyota as “anything other than the minimum amount of
equipment, materials, parts, space, and workers’ time, which are absolutely essential to add
value to the product.”
Idle time waste, or wait time waste - downtime that is spent waiting for a product to
be created.
Delivery waste, or transportation waste - the time spent getting the product shipped
to the recipient.
Waste in the work, inventory, and operations - when time is spent loosely in ways
that do not make money. Waste in the work is also known as extra-processing waste,
and waste in operations is also known as motion waste.
Rejected parts waste, or defects waste - when certain pieces should be thrown out
or reworked because they are not within tolerance.
Non-utilized talent waste - when a person is overqualified for the assigned job.
There are also many other management tools used within Six Sigma, too many to list
them all here.
5.10 Summary
The process of Six Sigma originated in manufacturing processes but now it finds its use
in other businesses as well. Proper budgets and resources need to be allocated for the
implementation of Six Sigma in organizations. Six Sigma is a data-driven process that seeks to
reduce product defects down to 3.4 defective parts per million, or 99.99966% defect-free products
over the long-term. In other words, the goal is to produce nearly perfect products for your
customers. By using statistical models, Six Sigma practitioners will methodically improve and
enhance a company’s manufacturing process until they reach the level of Six Sigma.
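The 3.4-defects-per-million figure follows from normal-distribution arithmetic with the conventional 1.5-sigma long-term shift. The following Python check uses only the standard library; the function name is ours, not part of any Six Sigma toolkit.

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a given sigma level,
    assuming the conventional 1.5-sigma long-term process shift."""
    z = sigma_level - shift                   # effective distance to the spec limit
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided normal tail P(Z > z)
    return tail * 1_000_000

dpmo(6)   # about 3.4 defects per million
dpmo(3)   # about 66,807 defects per million
```

At six sigma the effective distance to the specification limit is 4.5 standard deviations, and the corresponding normal tail probability is roughly 3.4 per million, which is the figure quoted in the summary.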
5.11 Keywords
VOCT - voice of the customer table
LESSON - 6
TOTAL QUALITY MANAGEMENT
Learning Objectives
Definition of TQM
Structure
6.1 Introduction
6.9 Summary
6.10 Keywords
6.1 Introduction
Total quality management originated in the industrial sector of Japan (1954). Since that
time the concept has been developed and can be used for almost all types of organizations
such as schools, motorway maintenance, hotel management and churches. Nowadays, Total
Quality Management is also used within the e-business sector and it perceives quality
management entirely from the point of view of the customer. The objective of total quality
management is doing things right the first time over and over again. This saves the organization
the time that is needed to correct poor work and failed product and service implementations
(such as warranty repairs). Total Quality Management can be set up separately for an organization,
or according to a set of standards that must be followed, for instance the ISO 9000 series of the
International Organization for Standardization (ISO). Total Quality Management uses strategy, data
and communication channels to integrate the required quality principles into the organization’s
activities and culture.
Definitions of TQM:
“TQM is both a philosophy and a set of guiding principles that represent the foundation
for a continuously improving organization. TQM is the application of quantitative methods and
human resources to improve the product and services supplied to an organization, and the
degree to which the needs of the customers are met, now and in the future. TQM integrates
fundamental management techniques, existing improvement efforts, and technical tools under
a disciplined approach focused on continuous improvement.”
Under the TQM concept, quality is defined and judged by the customer. Therefore, it
acknowledges a customer-driven economy. It focuses on continuous process improvement to
achieve high quality of product (or service). Its strategy tries to achieve “total quality” throughout
the entire business, not just in the product. It suggests that any improvement that is made in the
business, be it a better design of a component or a better process of a system, will help to
improve the “total quality” of the organization and the quality of the final product.
Under this philosophy, the view of quality is very different from the traditional one:
Productivity gains can be achieved through quality improvements. Better quality of product
and process will reduce rework, errors, and waste. This, in turn, improves the productivity.
The ultimate quality of a product is its ability to satisfy user’s needs. One should take one
step further to get the consumer involved in defining the product requirements. It is plausible to
say that quality is defined and judged by the customer.
Just as one would expect, customers prefer to purchase software that fits their needs and
performs beyond the quality standards.
Relying on product inspection implies that errors will definitely be made. Quality cannot
be achieved by inspection. It should be built in, not added on. To build in quality, one must
perform effective product design and process controls.
Zero defect and perfection of processes should be the goals if a company wishes to keep
improving quality.
6) Quality is a part of every function in all phases of the product life cycle.
It simply does not make sense to go about production haphazardly or without a quality-
laden plan and expect a quality good or service and a happy customer for that matter.
Only management has the authority to change the working conditions and processes,
and only management has the knowledge to coordinate quality function in all phases of the
product life cycle. Therefore, management should be responsible for quality, not the workers.
Suppliers are just as an important part of the team as any other members. Since
management is responsible for quality, it must also take charge of building long-term and quality-
oriented relationships with suppliers.
According to Joseph Juran, ‘Quality control is the regulatory process through which we
measure actual quality performance, compare it with standards, and act on the difference’.
Focus on customer
When using total quality management it is of crucial importance to remember that only
customers determine the level of quality. Whatever efforts are made with respect to training
employees or improving processes, only customers determine, for example through evaluation
or satisfaction measurement, whether your efforts have contributed to the continuous
improvement of product quality and services.
Employee involvement
Process centered
Process thinking and process handling are a fundamental part of total quality management.
Processes are the guiding principle, and people support these processes based on basic
objectives that are linked to the mission, vision and strategy.
Integrated system
A strategic plan must embrace the integration of quality development with the
development of the products or services of an organization.
Fact-based decision-making
Decision-making within the organization must be based only on facts, not on opinions
(emotions and personal interests). Data should support this decision-making process.
Communication
A communication strategy must be formulated in such a way that it is in line with the
mission, vision and objectives of the organization. This strategy comprises the stakeholders,
the level within the organization, the communications channels, the measurability of effectiveness,
timeliness, etc.
Continuous improvement
By using the right measuring tools and innovative, creative thinking, continuous
improvement proposals will be initiated and implemented so that the organization can develop
to a higher level of quality.
When you implement total quality management, you implement a concept. It is not a
system that can be implemented but a line of reasoning that must be incorporated into the
organization and its culture.
Practice has proved that there are a number of basic assumptions that contribute to a
successful roll-out of total quality management within an organization.
Train senior management on total quality management principles and ask for their
commitment with respect to its roll-out.
Assess the current culture, customer satisfaction and the quality system.
Senior management determines the desired core values and principles and communicates
this within the organization.
Develop a basic total quality management plan using the basic starting principles mentioned
above.
Identify and prioritize customer needs and the market and determine the organization’s
products and services to meet those needs.
Determine the critical processes that can make a substantial contribution to the products
and services.
Create teams that can work on process improvement, for example quality circles.
Managers support these teams with planning and resources, and by providing time and training.
Management integrates the desired changes for improvement into daily processes. After
the implementation of improved processes, standardization takes place.
Evaluate progress continuously and adjust the planning or other issues if necessary.
Quality is the concern of not only the management but also the workers. By empowerment,
that is empowering employees with the ability to stop development if quality is sacrificed, quality
can be dramatically improved. Workers feel a sense of belonging to the process and a pride in
the quality of their work. Quality is perceived as a team effort. Software industries are now
empowering their employees with the ability to stop the whole development if someone discovers
a quality defect.
2) Customer Emphasis
One must focus on satisfying internal and external needs, requirements, and expectations,
not just on meeting specifications. This is essentially creating a customer focus. Since customers
are the ones that drive production, their needs and expectations should be the focus of all
improvement efforts. Customers are not only those who buy finished products. There are also
workers within the company who use the components produced by other workers. These workers
are internal customers. A software development effort can be conceived as having a string of
customers, ending with the client. Each person is responsible for improving the quality of the
product that they pass on to the next customer. Under the TQM culture, everyone has a customer.
TQM must be implemented from the top down in every organization. If management
does not have a commitment to a TQM culture, it will fail. The management must provide
leadership in implementing the change; the workers do not have the power to do so. Do not
blame the workers for poor quality; the management and the systems are responsible for quality.
We have all heard the slogan “Quality is job one,” from Ford Motor Company. However,
a slogan means nothing if we say it but do not act on it. Ford still produces buggy cars despite
its great success in the Team Taurus project. What Ford needs is better leadership in quality
improvement.
We must understand how things work in the organization to be able to improve them.
That understanding involves being able to measure the process in order to compare
“improvements” against it.
Statistical quality control and process control techniques should be used to identify special
causes of variation, that is, points outside the control limits. Actions should be taken to remove
these special causes. Moreover, any abrupt shifts or distinct trends within the limits are also
signals for investigation. Quality control tools such as the Quality Seven (Q7) tools and the
Management Seven (M7) tools may be used to plan actions, collect valuable data, and chart
progress. The Q7 tools are used to analyze historical data for solving a particular problem;
most problems occurring in production-related areas fall into this category. On the other hand,
not all data needed for decision making are readily available, and many problems call for
collaborative decisions among different functional areas. In these situations, the M7 tools (also
called the New Seven tools) are useful in areas such as product quality improvement, cost
reduction, new-product development, and policy deployment.
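The idea of flagging special causes as points beyond the control limits can be sketched as follows. This is a simplified calculation using only the Python standard library; the data are invented, and real control charts usually estimate limits from moving ranges or subgroup statistics rather than a raw standard deviation.

```python
import statistics

def control_limits(baseline, k=3):
    """Estimate k-sigma control limits from an in-control baseline sample
    (simplified: real charts often use moving ranges or subgroups)."""
    mean = statistics.mean(baseline)
    sd = statistics.pstdev(baseline)
    return mean - k * sd, mean + k * sd

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]  # in-control history
lcl, ucl = control_limits(baseline)

# Points beyond the limits signal special causes worth investigating;
# here only the last measurement falls outside.
new_points = [10.0, 10.3, 14.0]
flagged = [x for x in new_points if not lcl <= x <= ucl]
```

Note that the limits are computed from an in-control baseline, not from the new data themselves; otherwise a large special-cause point would inflate the standard deviation and hide itself.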
A culture of constant improvement must be developed for TQM to succeed. All employees
should be empowered with the ability to influence an organizational process that helps to improve
quality. Once given this authority, employees must show their desire and commitment to constantly
improve the company. They must be always looking for ways to improve not only their part of
the organization, but also the organization as a whole. Management must foster this culture
through proper reward and recognition.
Incentive is a form of positive reinforcement that is the fuel of the TQM torch. Most TQM
implementers use a suggestion program to solicit cost-reduction ideas from employees. The
ideas are evaluated by a cross-functional suggestion evaluation team, and the ones with significant
contributions are implemented; the suggesters are recognized and rewarded with money
and fame.
Teamwork is the key to the success of TQM, yet it relies on sharing the necessary
information and know-how among the team members and across functional areas. It has been
proven that sharing information such as profit, budget, schedule, progress and errors can
provide employees with a sense of ownership and importance. It encourages employees to
push themselves to work harder in order to achieve the company's goals as well as their personal
goals. Nonetheless, sensitive information such as pay scales or bonus levels should not be
shared, because doing so is dysfunctional and counter-productive.
Under TQM culture, there should be no communication barriers between workers and
management, and between functional areas. The management must make themselves available
to and easily accessible by the workers. An employee suggestion program could be implemented
to help eliminate communication barriers.
A company cannot produce a quality product if the components of which it is made are
faulty. Therefore, a company's suppliers must be trained and certified as TQM suppliers.
Without such certification, any components purchased from a supplier cannot be guaranteed
to have the quality necessary for a company to establish a TQM culture. Similar to the JIT
philosophy, the TQM philosophy advocates a strong relationship with suppliers. A company
should cut down the number of suppliers and provide only a few TQM-certified suppliers with
long-term business commitments. This motivates the suppliers to make changes for continual
quality improvement and ensures that the quality of the company’s products will not be sacrificed.
If a particular feature suits a particular customer need, then that feature is going to win the
customer's heart. Any product or service falling in this zone will be a surefire recipe for the
organization's success. Take the example of Maggi noodles. When it was launched in India
in the early eighties, the taste was not accepted by the Indian palate. Nestle researched the
market properly and came up with the 'Masala Tastemaker', which was lapped up by customers.
Two decades later, Maggi can be found in almost every household in India.
Total quality management ensures that employees understand their target customers
well before making any changes in the processes and systems to deliver superior quality products
for better customer satisfaction. In fact, organizations introduce total quality management, or
any other quality management process, to increase their customer base and levels of customer
satisfaction. Total quality management increases an organization's base of loyal customers
who would not go anywhere else. Without customers, a business cannot even exist.
Quality of a product is not defined only in terms of its durability, packaging, reliability,
timely delivery and so on but also a customer’s overall experience with the organization.
Remember customer dissatisfaction leads to loss of business. In service industry, employees
need to interact with the customers sensibly and with utmost care and professionalism to expect
happy and loyal customers.
Design various feedback forms for customers so that they can share what they feel about
your products and services. The feedback may or may not be in favour of your business, but
negative comments from customers should not be ignored. As a part of total quality
management, employees should sit on a common platform, brainstorm ideas and come to
concrete solutions which would improve the systems and processes to eventually deliver what
the customer expects. No amount of total quality management will help if you ignore your
customers.
In case of physical products, customers are satisfied when the products are:
Durable
Reliable
Easy to Use
Adaptable
Appropriate
Employees make better decisions using their expert knowledge of the process.
Employees are more likely to implement and support decisions they had a part in
making.
Employees are better able to spot and pinpoint areas for improvement.
Employees are better able to accept change because they control the work
environment.
Employees have an increased commitment to unit goals because they are involved.
Employee involvement should not be looked at as a fad that will go away soon. It is a
way of life, crucial to TQM, and it can mean the difference between being competitive and going
out of business. Employees, not senior management, hold the future in their hands. The sign
over the plant entrance that says, "Through these doors pass our most important asset, our
employees" does not ring true when employees feel that no one really cares. More involvement
might be encouraged by the sign "No one of us knows as much as all of us."
As the organizational culture begins the process of change, resistance to that change will
certainly be present. Keeping people informed reduces resistance, especially when they can
see the benefits. Change is an ongoing process that must occur if an organization is to continue
to exist in a competitive world. People do not necessarily resist change; they resist being
changed, and problems arise when a person's comfort zone is disturbed.
Total quality management (TQM) has far-reaching implications for the management of
human resources. It emphasizes self-control, autonomy, and creativity among employees and
calls for greater active cooperation rather than just compliance.
Indeed, it is becoming a maxim of good management that human factors are the most
important dimension in quality and productivity improvement.
Lean (Toyota) systems utilizing JIT techniques are more productive, smaller and more
efficient, and they increase worker pride and involvement on the shop floor.
Suggestion System
Suggestion systems are designed to provide the individual with the opportunity to be
involved by contributing to the organization. The key to an effective system is management
commitment. It is the responsibility of management to make it easy for employees to suggest
improvements. Stimulating and encouraging employee participation starts the creative process.
Five Ground Rules for Stimulating and Encouraging Suggestion System are:
5. Reward the idea with published recognition so that everyone knows the value of
contribution.
The first step in the training process is to make everyone aware of what the training is
all about. Thoughts and suggestions should be gathered.
The second step is to get acceptance. The trainees must feel that the training will be of
value to them.
The third step is to adapt the program. Is everyone ready to buy into it? Does everyone
feel they are a part of what is going to take place?
The fourth step is to adapt to what has been agreed upon. What changes must be made
in behavior and attitudes?
Involving employees, empowering them, and bringing them into decision making process
provides the opportunity for continuous process improvement. The untapped ideas, innovations,
and creative thoughts of employees can make the difference between success and failure.
Competition is so fierce that it would be unwise not to use every available tool.
Total productive maintenance (TPM) is a method of maintaining and improving the integrity
of production and quality systems through the machines, equipment, employees and the
supporting processes. TPM can be of great value and its target is to improve core business
processes. The phrase TPM was first used in 1961 by the Japanese company Denso. This
supplier to the automotive industry, carried out an improvement project with continuous
improvement as their starting point and they introduced autonomous and preventive maintenance
to machines. TPM is especially meant for companies with a lot of machines that involve high
maintenance costs.
TPM is not just about maintaining productivity but also about the maintenance of machines
and the prevention of possible breakdowns. TPM is about productivity improvement and
optimization of machine availability through which machines operate at their optimal level.
Everyone within the organization has to be aware of the hidden losses with respect to
machine failure or the time needed for machine repair. Also, when a machine cannot run at full
speed or produces inferior products, this is considered to be a loss-making activity for the
organization.
The aim is to have an Overall Equipment Effectiveness (OEE) score of 100% and this
represents perfect production. In that case, machines always work at full speed and deliver
products of perfect quality.
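OEE itself is just the product of three factor rates: availability, performance and quality. A minimal sketch in Python, with figures invented for illustration:

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness as the product of the three rates (0..1)."""
    return availability * performance * quality

# A machine that ran 90% of planned time, at 95% of its ideal speed,
# with 99% of units passing quality checks:
score = oee(0.90, 0.95, 0.99)
print(f"OEE = {score:.1%}")  # OEE = 84.6%
```

Even with each factor above 90%, the combined score falls well short of the 100% ideal, which is why TPM attacks all three kinds of loss at once.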
Pillars of TPM
Kobetsu Kaizen
Planned Maintenance
Quality Maintenance
Office TPM
Training
5S
Autonomous maintenance
Running machines until they break down is not an option. Curative maintenance is
characterized by waiting for a breakdown to happen and then repairing it. Especially when
production needs to continue round the clock, this type of maintenance is far too costly. TPM
puts machinery at the heart of the organization; it does not just continually safeguard production
but improves it where possible. TPM focuses on productivity improvement and its primary
purpose is to maximize the availability of machines.
Responsibility of Everyone
The starting point of TPM is that everyone is responsible for the day-to-day maintenance
of the machines. Employee participation in improvement proposals and maintenance are key
features within TPM, so that they can jointly improve the machine efficiency, step-by-step.
Maintenance therefore also means ‘improvement’. Machines are purchased for their intended
purpose only. After that, it is possible to expose and eliminate hidden defects in the machines.
Multidisciplinary teams
Everyone, from operator to maintenance engineer, should make joint efforts to improve
the OEE. This can be achieved by forming small multidisciplinary teams and by giving attention
to autonomous maintenance, preventive maintenance, training of the employees involved,
safety, and standardization of work processes. The goal is Zero Defects: zero errors, zero
losses and zero work-related accidents. By using such multidisciplinary teams, the availability
of machines will improve greatly. TPM focuses on the effective and efficient use
of production means and aims at involvement of all departments. The small multidisciplinary
teams work together from seven different TPM pillars, supported by 5S, to improve equipment
reliability and increase productivity.
Continuous improvement
TPM is reported to improve productivity by as much as 70% and to reduce complaints by approximately 60%. In
addition to measuring OEE, TPM also focuses on maintenance backlog which is also referred
to as Total Clean Out (TCO). After this, a start can be made with the continuous improvement
cycles. Each multidisciplinary team tackles one specific problem that limits the OEE, therefore
the continuous improvement cycle can be carried out very effectively. The project that this small
team carries out is called Small Group Activity (SGA). In such a SGA project team there are
both machine operators and mechanics as well as quality inspectors and logistics managers.
The whole group is responsible for the functioning of a specific machine.
Perfection
Like other process improvement methods, TPM has grown into a general process
management method, which can be applied broadly to strive for machine perfection. In addition
to machine availability, other factors play a role such as logistical and human aspects. Therefore,
the strong involvement of employees across different disciplines forms an important part of
TPM. Everyone is involved from start to finish. This makes TPM a useful method that monitors
complex and/or expensive machines; it prevents maintenance costs from becoming too high
and it ensures that production does not stagnate.
6.9 Summary
TQM is a way of doing business that requires a permanent commitment from everyone.
Quality is an important aspect of any manufacturing process. The purpose of any quality system
is not defect detection and rejection, but defect prevention. Management recognizes that total
quality management will not happen by accident; it is a planned process. TQM is a managed
process, which involves people, systems and supporting tools and techniques. Quality is not an
assignable task. It must be rooted and institutionalized within every step of the business process:
"IT IS EVERYONE'S RESPONSIBILITY". TQM is therefore a change agent, which is aimed at
providing a customer-driven organization. This lesson covers the principles and practices of
TQM, customer satisfaction, total employee involvement and total productive maintenance.
6.10 Keywords
ASQC - American Society for Quality Control
3. Evans, J., and Lindsay, W.M., The Management and Control of Quality, 8th Edition,
South Western, 2012.
4. Evans, J., Quality Management, Organisation and Strategy, 6th Edition, Cengage
International, 2011.
LESSON - 7
TOTAL QUALITY CONTROL
Learning Objectives
Structure
7.1 Introduction
7.8 Summary
7.9 Keywords
7.1 Introduction
A number of thinkers began to see that Scientific Management and associated approaches
de-humanized the work place; workers were not paid to think, but to carry out to the letter the
work instructions of supervision and management. After a while the workers gave up any attempt
to correct things that were wrong in the production operation and began to disassociate
themselves from the success of the organisation.
Apart from the human aspects of the inspection-based organisation, routine 100%
inspection quite simply does not work. It is inevitable that an inspection process will lead to
products that should have been scrapped or returned for rework being dispatched to the customer,
and good products will be scrapped or returned for rework. Each of the adverse outcomes of
inspection is serious; customers quite rightly do not like to receive sub-standard products and if
sufficiently upset will take their business elsewhere.
Rework lines receiving good or scrap product believe that the hapless inspectors deserve
the poor reputation that they have on the shop floor. The key issue is that inspection is an
activity that takes place after a defective product is made. At best the defective product is not
dispatched to the customer. However, quality cannot be inspected into a product - quality has to
be built into each process.
As early as the 1920s, Walter A. Shewhart, an American statistician who worked for
the Bell Telephone Company, became involved in the manufacture of millions of telephone
relays, and he realised that inspection after the event was not a good way of ensuring quality.
He studied how the manufacturing process could be monitored in such a way as to prevent
non-conforming items being produced and in 1924 he invented the control chart. In 1931 he
published the world’s first book on quality control “Economic Control of Quality of Manufactured
Product” (Shewhart, 1980) and his work forms the basis of all teaching on Statistical Process
Control today.
Dr William Edwards Deming had been a student of Walter Shewhart and he spent his
early years as a Government employee, mainly in the Department of Agriculture and the Bureau
of Census. Following the Second World War the US Government played a significant role in
rebuilding Japanese industry, and Deming was invited to apply his statistical knowledge to the
Japanese situation. He taught them to apply the statistical method and team approach to quality
improvement that has transformed Japan into market leaders of virtually every form of
manufactured goods. He has been referred to as the father of the Third Industrial Revolution.
The principal focus of the quality control era was to replace inspection with more informative
process control systems which aimed to reduce variation in outputs (be they product or service)
and deliver more consistency by focusing on inputs. Its modern day incarnation is Six Sigma.
“To practice quality control is to develop, design, produce and service a quality product
which is most economical, most useful and always satisfactory to the consumer. To meet this
goal, everyone in the company must participate in and promote quality control, including top
executives, all divisions within the company, and all employees."
2. Focus full scale efforts on the control of cost, price and profit.
Dr. Deming‘s research forms the cornerstone of Japan‘s adoption of Total Quality Control
post World War Two. Dr. Deming introduced the cycle of design, production, sales and market
research which is to be followed by another cycle that begins with redesign based on the
experience obtained from the previous cycle. In this way, quality improves continuously.
“What this approach suggests”, states Dr. Ishikawa, is that the manufacturer must always
be keenly attentive to consumer requirements, and the opinions of consumers must be anticipated
as the manufacturer establishes his own standards. Unless this is done, QC cannot achieve its
goals, nor can it assure quality to consumers.”
2. Do
Education and Training - work standards and technical standards must be taught. Workers
must be mentored and encouraged to do their best.
According to Mr. Yamauchi, a former Managing Director of Toyota Motor Corporation, "in
order to practice the standards perfectly, workers must know the true meaning and value of
each standard, not only in theory. They must have the skill and knowledge to put it into practice."
He also states that "the role of the supervisor is a very important one. Education and training
of supervisors is essential. We create standards based on the supervisor's skill and knowledge,
with the benefits for the company in mind."
Implement Work
Mr. Yamauchi asserts that "Motivation is Key! Unless we have vitalized front-line workers,
we cannot be successful. They are the ones who actually produce the product and the profit.
Our job in management, is to make them energized. Sometimes, implementation of work
standards is not enough. The operators may carry out the work sequence and standards but
feel some uneasiness, this is the time for them to suggest kaizen. The key: create a working
environment where workers can suggest improvements. Work standards must be followed, but
once workers realize that a particular standard is not enough, it is the time for kaizen. When
there is a need for Kaizen, supervisors must be able to improve the work sequence or fix the
abnormality.”
Dr. Kaoru Ishikawa says “I repeat once more. Standards and regulations are always
inadequate. Even if they are strictly followed, defects and flaws will appear. It is experience and
skill that make up for inadequacies in standards and regulations.”
3. Check
Inspection: it is the supervisor's duty to check and confirm that the standards have been put
into practice exactly. When problems occur, check every possible angle and focus on each process.
4. Action
Downside: Sometimes the end result is very different from the original target -
employees tend to lose sight of the goal because they are too focused on the
process.
TQM
Emphasis is placed on the target and achieving the target as soon as possible.
Zero Defects, a term coined by Mr. Philip Crosby and central to his "Absolutes of Quality
Management", has emerged as a popular and highly-regarded concept in quality management,
so much so that Six Sigma has adopted it as one of its major theories. Unfortunately, the concept
has also faced a fair degree of criticism, with some arguing that a state of zero defects simply
cannot exist. Others have worked hard to prove the naysayers wrong, pointing out that “zero
defects” in quality management doesn’t literally mean perfection, but rather refers to a state
where waste is eliminated and defects are reduced. It means ensuring the highest quality
standards in projects.
From a literal standpoint, it is pretty obvious that attaining zero defects is technically not
possible in any sizable or complex manufacturing project. Under the Six Sigma standard,
zero defects is defined as 3.4 defects per million opportunities (DPMO), allowing
for a 1.5-sigma process shift. The zero defects concept should pragmatically be viewed as a
quest for perfection in order to improve quality in the development or manufacturing process.
True perfection might not be achievable but at least the quest will push quality and improvements
to a point that is acceptable under even the most stringent metrics.
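The arithmetic behind the 3.4 DPMO figure can be checked with Python's standard library. The `dpmo` and `sigma_level` helpers below are illustrative names, and the 1.5-sigma shift is the conventional allowance mentioned above:

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Short-term sigma level implied by a DPMO, with the 1.5-sigma shift."""
    long_term_yield = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(long_term_yield) + shift

print(round(dpmo(34, 1000, 10)))   # 3400 defects per million opportunities
print(round(sigma_level(3.4), 2))  # 6.0 -- i.e. 3.4 DPMO is "six sigma"
```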
Zero defects theory ensures that there is no waste in a project. Waste refers to all
unproductive processes, tools, employees and so on. Anything that is unproductive and
does not add value to a project should be eliminated; this is called the elimination of waste.
Eliminating waste creates a process of improvement and correspondingly lowers costs. Common
with the zero defects theory is the concept of “doing it right the first time” to avoid costly and
time-consuming fixes later in the project management process.
Zero defects theory is based on four elements for implementation in real projects.
2. Right the first time. Quality should be integrated into the process from the beginning,
rather than solving problems at a later stage.
3. Quality is measured in financial terms. One needs to judge waste, production and
revenue in terms of budgetary impact.
Zero Defects is not the first application of motivational techniques to production: During
World War II, the War Department’s “E for Excellence” program sought to boost production and
minimize waste.
The Cold War resulted in increased spending on the development of defense technology
in the 1950s and 1960s. Because of the safety-critical nature of such technology, particularly
weapons systems, the government and defense firms came to employ hundreds of thousands
of people in inspection and monitoring of highly complex products assembled from hundreds of
thousands of individual parts. This activity routinely uncovered defects in design, manufacture,
and assembly and resulted in an expensive, drawn out cycle of inspection, rework, re-inspection,
and retest. Additionally, reports of spectacular missile failures appearing in the press heightened
the pressure to eliminate defects.
In 1961, the Martin Company’s Orlando Florida facility embarked on an effort to increase
quality awareness and specifically launched a program to drive down the number of defects in
the Pershing missile to one half of the acceptable quality level in half a year’s time. Subsequently,
the Army asked that the missile be delivered a month earlier than the contract date in 1962.
Martin marshaled all of its resources to meet this challenge and delivered the system with no
discrepancies in hardware and documentation and was able to demonstrate operation within a
day of the start of setup. After reviewing how Martin was able to overachieve, its management
came to the conclusion that while it had not insisted on perfection in the past, it had in this
instance, and that was all that was needed to attain outstanding product quality.
Martin claimed a 54% reduction in hardware defects under government audit
during the first two years of the program. General Electric reported a $2 million reduction in
rework and scrap costs, RCA reported 75% of its departments in one division were achieving
Zero Defects, and Sperry Corporation reported a 54% defect reduction over a single year.
During its heyday, it was adopted by General Electric, ITT Corporation, Montgomery Ward,
Rolls-Royce Limited, and the United States Army among other organizations.
While Zero Defects began in the aerospace and defense industry, thirty years later it was
regenerated in the automotive world. During the 1990s, large companies in the automotive
industry tried to cut costs by reducing their quality inspection processes and demanding that
their suppliers dramatically improve the quality of their supplies. This eventually resulted in
demands for the “Zero Defects” standard. It is implemented all over the world.
Later developments
In 1979, Crosby penned Quality Is Free: The Art of Making Quality Certain, which preserved
the idea of Zero Defects in a Quality Management Maturity Grid, in a 14-step quality improvement
program, and in the concept of the "Absolutes of Quality Management". The quality improvement
program incorporated ideas developed or popularized by others (for example, cost of quality
(step 4), employee education (step 8), and quality councils (step 13)) with the core motivation
techniques of booklets, films, posters, speeches, and the "ZD Day" centerpiece.
Newcomers to manufacturing bring their own vague impressions of what quality involves.
But in order to tackle quality-related problems, there must be widespread agreement on the
specifics of what quality means for a particular product. Customer needs and expectations
must be reduced to measurable quantities like length, or smoothness, or roundness and a
standard must be specified for each.
These become the requirements for a product and the organization must inspect, or
measure what comes out of the production process against those standards to determine whether
the product conforms to those requirements or not. An important implication of this is that if
management does not specify these requirements workers invent their own which may not
align with what management would have intended had they provided explicit requirements to
begin with.
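The conformance check described here is mechanically simple once the requirements have been reduced to measurable quantities. A sketch with a hypothetical two-requirement specification (the names and tolerances are invented):

```python
# Hypothetical specification: each requirement maps to (lower, upper) limits.
SPEC = {
    "length_mm":    (99.5, 100.5),
    "roundness_um": (0.0, 5.0),
}

def conforms(measurements, spec=SPEC):
    """Return the requirements a measured part fails; empty means it conforms."""
    return [name for name, (lo, hi) in spec.items()
            if not (lo <= measurements[name] <= hi)]

part = {"length_mm": 100.2, "roundness_um": 6.1}
print(conforms(part))  # ['roundness_um']
```

The point of the argument above is that if management never publishes such a specification, each worker in effect invents a private one.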
Companies typically focus on inspection to ensure that defective product does not reach
the customer. But this is both costly and still lets nonconformances through. Prevention, in the
form of “pledging ourselves to make a constant conscious effort to do our jobs right the first
time”, is the only way to guarantee zero defects. Beyond that, examining the production process
for steps where defects can occur and mistake proofing them contributes to defect-free
production.
Workers, at least during the post–World War II economic expansion, had a lackadaisical
attitude on the whole toward work. Crosby saw statistical quality control and the MIL-Q-9858A
standard as contributing to this through acceptable quality levels, a concept that allows a certain
number of acceptable defects and reinforces the attitude that mistakes are inevitable. Another
contributor is the self-imposed pressure to produce something to sell, even if that thing is
defective. Workers must "make the attitude of Zero Defects [their] personal standard."
To convince executives to take action to resolve issues of poor quality, costs associated
with poor quality must be measured in monetary terms. Crosby uses the term “the price of
nonconformance” in preference to “the cost of quality” to overcome the misimpression that
higher quality requires higher costs. The point of writing ‘Quality is free’ was to demonstrate
that quality improvement efforts pay for themselves. Crosby divides quality-related costs into
the price of conformance and the price of nonconformance. The price of conformance includes
quality-related planning, inspection, and auditing; the price of nonconformance includes scrap,
rework, warranty claims, and unplanned service.
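Crosby's split can be expressed as a simple roll-up. The cost categories and amounts below are invented for illustration; only the conformance/nonconformance split follows the text:

```python
# Hypothetical annual figures, grouped along Crosby's two price categories.
conformance = {"quality_planning": 12_000, "inspection": 30_000, "auditing": 8_000}
nonconformance = {"scrap": 45_000, "rework": 60_000,
                  "warranty_claims": 25_000, "unplanned_service": 15_000}

poc = sum(conformance.values())      # price of conformance
ponc = sum(nonconformance.values())  # price of nonconformance
sales = 2_000_000

print(f"Price of conformance:    {poc}")   # 50000
print(f"Price of nonconformance: {ponc}")  # 145000
print(f"Cost of quality: {(poc + ponc) / sales:.1%} of sales")
```

Expressing the total in monetary terms, as a share of sales, is exactly the device Crosby recommends for getting executive attention.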
The clear advantage of achieving a zero defect level is waste and cost reduction when
building products to customer specifications. Zero defects means higher customer satisfaction
and improved customer loyalty, which invariably leads to better sales and profits. Nonetheless,
a zero defects goal could lead to a scenario where a team is striving for a perfect process that
cannot realistically be met. The time and resources dedicated to reaching zero defects may
negatively impact performance and put a strain on employee morale and satisfaction. There
can also be negative implications when you consider the full supply chain with other manufacturers
that might have a different definition of zero defects.
In the end, the quest for zero defects is an admirable objective in itself, and most companies
find that the pros outweigh the cons. By striving for stringent but accepted standards of defects,
companies can build better processes and create an environment of continuous service
improvement.
Quality assurance helps a company create products and services that meet the needs,
expectations and requirements of customers. It yields high-quality product offerings that build
trust and loyalty with customers. The standards and procedures defined by a quality assurance
program help prevent product defects before they arise.
Failure testing, which continually tests a product to determine if it breaks or fails. For
physical products that need to withstand stress, this could involve testing the product
under heat, pressure or vibration. For software products, failure testing might involve
placing the software under high usage or load conditions.
Statistical process control (SPC), a methodology based on objective data and analysis
and developed by Walter Shewhart at Western Electric Company and Bell Telephone
Laboratories in the 1920s and 1930s. This methodology uses statistical methods to
manage and control the production of products.
Total quality management (TQM), which applies quantitative methods as the basis for
continuous improvement. TQM relies on facts, data and analysis to support product
planning and performance reviews.
Although simple concepts of quality assurance can be traced back to the Middle Ages,
QA practices became more important in the United States during World War II, when high
volumes of munitions had to be inspected.
The ISO opened in Geneva in 1947 and published its first standard in 1951 on reference
temperatures for industrial measurements. The ISO gradually grew and expanded its scope of
standards.
The ISO 9000 family of standards was published in 1987; each 9000 number offers different
standards for different scenarios.
QA standards
QA standards have changed and been updated over time, and ISO standards need to
change in order to stay relevant to today’s businesses.
The latest in the ISO 9000 series is ISO 9001:2015. The guidance in ISO 9001:2015
includes a stronger customer focus, top management practices and how they can change a
company, and keeping pace with continual improvement. Along with general improvements to
ISO 9001, ISO 9001:2015 includes improvements to its structure and more information for risk-
based decision-making.
QA uses by industry
Manufacturing, the industry that formalized the quality assurance discipline. Manufacturers
need to ensure that assembled products are created without defects and meet the defined
product specifications and requirements.
Food production, which uses X-ray systems, among other techniques, to detect physical
contaminants in the food production process. The X-ray systems ensure that contaminants
are removed and eliminated before products leave the factory.
Pharmaceutical, which employs different quality assurance approaches during each stage
of a drug’s development. Across the different stages, the QA processes include reviewing
documents, approving equipment calibration, reviewing training records, reviewing
manufacturing records and investigating market returns.
The quality of products and services is a key competitive differentiator. Quality assurance
helps ensure that organizations create and ship products that are clear of defects and meet the
needs and expectations of customers. High-quality products result in satisfied customers, which
can result in customer loyalty, repeat purchases, upselling and advocacy.
Quality assurance can lead to cost reductions stemming from the prevention of product
defects. If a product is shipped to customers and a defect is discovered, an organization incurs
cost in customer support, such as receiving the defect report and troubleshooting. It also incurs
the cost of addressing the defect, such as service or engineering hours to correct it, testing to
validate the fix, and the cost to ship the updated product to the market.
QA does require a substantial investment in people and process. People must define a
process workflow and oversee its implementation by members of a QA team. This can be a
time-consuming process that impacts the delivery date of products. With few exceptions, however,
QA is less a disadvantage than a requirement: a necessary step that must be undertaken to ship
a quality product. Without QA, more serious disadvantages arise, such as product bugs and
the market's dissatisfaction with or rejection of the product.
A Quality Circle is a small voluntary cell of operators sharing a common work situation
who meet as they deem necessary for the reduction, by their efforts, of the countless number of
problems that impede the effectiveness of their work. Each circle member is an equal partner in
the venture and meetings take place in company time. The frequency and duration of meetings
is set by the group, but it will be regular and often on a weekly basis.
Although they have a common work interest the members do not necessarily do the
same job. For example, a foundry circle may have two moulders, one pattern-maker, one furnace
man, one foreman moulder, one sand technician, one fettler and one inspector. The term operator
is used to describe people working at the same level, usually at the producing end of the
enterprise, although producing can have a fairly wide interpretation and circles are coming to
be seen in very varied spheres of activity.
The actions of quality circles save money; by working on problems and waste, they generate
a direct payback to the company for the time and effort invested.
There are five fundamental benefits expected from the operation of Quality Circles. The
relative importance attached to these benefits will largely be determined by the task or people
orientation of those responsible for their introduction into the company. They are offered, as
follows, and no level of importance is implied by the order in which they are presented:
The formal organization structure evident in the majority of companies can often be counter-
productive to communication, causing the organisation to operate at a sub-optimal level of
effectiveness. Messages are difficult to transmit through more than one control level, and even
when coming down they seem frequently to become strangulated or distorted unless they
originate from the very top.
Under these circumstances it is little wonder that the problems that beset and bedevil the
operators remain unseen or unheard, or lose their urgency by default of the system. Invariably,
the shop floor has no mechanism to transmit its problems directly to another part of the
organisation. Quite frequently problems causing a significant loss of output hang around for
many years, without any attempt being made to find a solution.
At operator level, the abandonment of any attempt to solve their problems is seen as an
expression of lack of interest by the rest of the organisation. The frustration that results is the
breeding ground for a change of attitude away from enthusiasm and towards indifference and
disillusionment.
The circle can utilise its creative ability in a large variety of ways, to set-up rewarding
information links with managers, to generate alternative solutions to a given problem, (whatever
the nature of that problem), to determine the optimum way of implementing a particular solution.
All this leads to a directness of approach and speed of solution that in turn leads to the removal
of many habitual and long standing problems in a short space of time. Additionally, this same
directness keeps its shop floor peer group well informed of management ideas and intentions,
and keeps management well informed of shop floor opinion.
The complacency of many managers, believing they are doing their job well, often receives
a severe jolt when they become involved in a circle.
Membership of a quality circle is likely to be the first time that operators work together to
solve work-related problems; the operation of the circle will provide team building. Once the
formation of the circle has proven successful, the initial skepticism and reluctance to believe
that circles can achieve change will fade, and a positive attitude towards the workplace will
gradually develop.
Following the positive change in operator attitude to the work place and management, a
shift in attitude towards improvement will occur. This change is significant because the
organisation will then have teams of people proactively seeking change for the better, rather
than a work force united by resistance to change imposed by an apparently uncaring
management.
As people realise that by their own efforts, together with supportive management, they
can improve their processes a quality mindedness will develop. Deming would relate this to the
workforce being allowed pride of workmanship. As more circles form then the shift from a few
(management) people thinking about improvement in the organisation, towards everyone being
harnessed to achieve excellence occurs.
For many people the education and training process ends when they finish school, apart
from informal on-the-job training. The successful operation of quality circles is based on training
ranging from problem solving techniques to report writing and development of interpersonal
skills. Since this training is applied when it is required for immediate use, the skills of the
operators are developed to give a more capable workforce.
The principal benefits of Quality Circles are to change attitudes and harness the efforts of
all the company’s employees to improving the way it does business. The Japanese description
of the effectiveness of a quality circle is expressed as:
“It is better for one hundred people to take one step than for one person to take a hundred”.
Quality Circles should be formed and managed using the following guidelines:
Management Support
Recognition System
Integration
Quality Circles should start on the shop floor; it is vital that the formation of the circle is not
seen as yet another management 'flavour of the month' or similar initiative that fails after a
short time. In the opening session Quality Circles should be explained, and particular emphasis
should be placed on the fact that the circle belongs to the members and not to the management.
Quality Circles should be based on training to ensure the participants have the appropriate
skills before attempting any improvement work. The training should be conducted in a professional
manner; if necessary get someone from outside, not associated with management, to give the
training sessions. It is important that the training is not delivered on a shoestring. Nothing sends
a clearer message than a poorly delivered course with inadequate supporting notes. The
formation of a Quality Circle is an investment in people, and there is an opportunity to foster
initial changes in attitude by providing well executed training.
The circle should have a degree of autonomy and be encouraged to form its own group.
In organisations that have unions it may be a good idea to get the shop stewards involved. The
group size should be between 6 and 10; fewer than 6 members is likely to give insufficient
combined experience, while more than 10 makes self-management difficult. It is advisable to
run two pilot groups when starting Quality Circles, so that if one fails and the other survives
there is more information available to understand the difference between success and failure.
Management Support:
The group must be allowed time and resources to enable it to conduct its activities properly.
If action beyond the circle’s span of control is required, management must become involved to
facilitate, coach and encourage. During their problem-solving activities the circle is likely to
require information that is usually retained by management. It is vital that this information is
provided when requested.
If the circle needs additional skills or experience from a member of staff, these should be
provided to enable them to conduct their work.
Recognition System:
Some form of recognition system for achievements should be devised. This does not
necessarily have to be in the form of monetary payment, and the author believes that financial
recognition is probably the least effective. Referring back to Maslow, esteem needs are far
more likely to be fulfilled by presentation by the circle of their work to senior management, or an
article in the company newsletter.
Integration:
The Quality Circles must not be started in isolation; they are part of a wider programme of
Company-wide Continuous Improvement. Thus, the education of management as part of a
larger programme must precede the formation of Quality Circles.
The audit is a structured review of quality management activities to identify good or best
practices and lessons learned for use on current or future projects. Audits ensure that the
product is fit for use and that applicable laws are followed or safety standards met. The information
learned as a result of the quality audit should lead to improvements in the quality processes,
which is the result of quality assurance.
Quality audit is the process of systematic examination of a quality system carried out by
an internal or external quality auditor or an audit team. It is an important part of an organization’s
quality management system and is a key element in the ISO quality system standard, ISO
9001. This can help determine if the organization complies with the defined quality system
processes and can involve procedural or results-based assessment criteria.
Quality audits are typically performed at predefined time intervals and ensure that the
institution has clearly defined internal system monitoring procedures linked to effective action.
With the upgrade of the ISO9000 series of standards from the 1994 to 2008 series, the
focus of the audits has shifted from purely procedural adherence towards measurement of the
actual effectiveness of the Quality Management System (QMS) and the results that have been
achieved through the implementation of a QMS.
Audits are an essential management tool for verifying objective evidence of processes,
assessing how successfully processes have been implemented, judging the effectiveness of
achieving defined target levels, and providing evidence concerning the reduction and
elimination of problem areas. For the benefit of the organization, quality auditing should not
only report non-conformances and corrective actions, but also highlight areas of good practice.
In this way other departments may share information and amend their working practices as a
result, also contributing to continual improvement.
Several countries have adopted quality audits in their higher education systems (New
Zealand, Australia, Sweden, Finland, Norway and the USA). Initiated in the UK, the process of
quality audit in the education system focused primarily on procedural issues rather than on the
results or the efficiency of a quality system implementation.
Audits can also be used for safety purposes. Evans & Parker (2008) describe auditing as
one of the most powerful safety monitoring techniques and ‘an effective way to avoid complacency
and highlight slowly deteriorating conditions’, especially when the auditing focuses not just on
compliance but effectiveness.
The processes and tasks that a quality audit involves can be managed using a wide
variety of software and self-assessment tools. Some of these relate specifically to quality in
terms of fitness for purpose and conformance to standards, while others relate to Quality costs
or, more accurately, to the Cost of poor quality. In analyzing quality costs, a cost of quality audit
can be applied across any organization rather than just to conventional production or assembly
processes.
7.8 Summary
Total Quality Control is a continual process. Quality standards must be continually reviewed,
revised and improved. Zero Defects is a new dimension in Quality Assurance. When the quality
problem in the organization is severe and pervasive enough, major quality improvements are
necessary, from shop floors to board rooms. Such efforts take an organization-wide view of
product quality and enable organizations to compete with competitors who offer superior-quality
products to consumers. Such efforts are often referred to as "Total Quality Control".
7.9 Keywords
MIL-Q-9858A-Military Specification (Quality Program Requirements) of the U.S.
government.
Pros and Cons: Pros are arguments which aim to promote the issue, while cons suggest
points against it.
3. Evans, J., and Lindsay, W.M., The Management and Control of Quality, 8th Edition,
South Western, 2012.
4. Evans, J., Quality Management, Organisation and Strategy, 6th Edition, Cengage
International, 2011.
LESSON - 8
STATISTICAL PROCESS CONTROL
Learning Objectives
Define SPC
Structure
8.1 Introduction
8.9 Summary
8.10 Keywords
8.1 Introduction
Traditionally manufacturers have accepted that every product has to have tolerance limits
because it is impossible to manufacture products without variation. Tolerance limits have been
used historically as the basis for quality control, i.e. product test results are either inside or
outside of them. Intermediates and end-products are therefore inspected after manufacture
and either pass or fail. With mass production, every product is checked, and sampling plans
are used to decide whether a lot can be shipped to the customer. The problem with these
methods is that variation is not taken into account; when inspection happens in the warehouse
rather than immediately after a product is produced, people cannot correct problems instantly,
and they cannot find the root cause, because the people doing the inspection are normally
not the people in production.
SPC supplies the techniques for measuring, recording, analyzing and decision making.
When all disturbances or special causes of variation are eliminated, the process is said to be in
statistical control. However, SPC is much more than just analyzing manufacturing processes;
it is not just the implementation of control charts and must be seen as, or used in combination
with, a total manufacturing quality program.
Objectives of SPC
Better quality;
Increase productivity;
Decisions must be taken based on the analysis and a proper procedure (OCAP).
More and more industries and customers are requiring SPC implementation. It is mandatory
in automotive (IATF 16949, VDA), semiconductor, aerospace (AS13006), medical device etc.
To get the benefits from an SPC implementation you need to implement SPC in real time. Data
should be gathered on the shop floor, and SPC results should be available instantly to operators
and manufacturing support. This means you need to address SPC automation at an early stage
of the implementation. This is also important to ensure SPC is implemented efficiently: you
do not want to lose expensive engineering time gathering and processing Excel documents
to get an SPC analysis after the fact.
Just as processes that produce a product may vary, the process of obtaining measurements
and data may also have variation and produce incorrect results. A measurement systems analysis
evaluates the test method, measuring instruments, and the entire process of obtaining
measurements to ensure the integrity of data used for analysis (usually quality analysis) and to
understand the implications of measurement error for decisions made about a product or process.
MSA is an important element of Six Sigma methodology and of other quality management
systems.
MSA analyzes the collection of equipment, operations, procedures, software and personnel
that affects the assignment of a number to a measurement characteristic.
The quality control measurements are used to analyze and evaluate the quality of the
different processes involved in a project against the standards of the organization or the
requirements specified during project management planning.
Consider a simple everyday process: making coffee. The steps include turning on the coffee
maker and measuring and adding the coffee and water; the output is a pot or cup of coffee.
Variation can occur in the amount of coffee or water introduced into the process and in the
performance of the coffee maker itself. Not every
cup of coffee is exactly the same but in most cases, if the measurements are controlled and
reasonably consistent, it tastes the same. By utilizing process controls, taking measurements
and using reliable, well-maintained equipment, variation in a process can have less effect on
the quality of the output. The process can then be capable of producing acceptable product on
a consistent basis; that is, we can maintain process capability.
The Cp and Cpk calculations use the within-subgroup standard deviation, estimated from
rational subgroups.
The Pp and Ppk calculations use the overall standard deviation of all the studied data
(the whole population).
The Cp and Cpk indices are used to evaluate existing, established processes in
statistical control.
The Pp and Ppk indices are used to evaluate a new process or one that is not in
statistical control.
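The difference between the two sigma estimates can be sketched in Python. The subgroup measurements, the specification limits and the use of the d2 factor for a subgroup size of 4 are illustrative assumptions, not data from this text:

```python
import statistics

# Hypothetical subgrouped measurements: 5 subgroups of size 4.
subgroups = [
    [10.1, 10.3, 9.9, 10.2],
    [10.0, 10.4, 10.1, 9.8],
    [9.9, 10.2, 10.0, 10.1],
    [10.3, 10.1, 9.7, 10.0],
    [10.2, 9.9, 10.1, 10.3],
]

d2 = 2.059  # bias-correction factor for subgroup size n = 4 (standard SQC tables)

# Within-subgroup estimate (used by Cp/Cpk): sigma ~ R-bar / d2
r_bar = statistics.mean(max(g) - min(g) for g in subgroups)
sigma_within = r_bar / d2

# Overall estimate (used by Pp/Ppk): sample standard deviation of all the data
all_data = [x for g in subgroups for x in g]
sigma_overall = statistics.stdev(all_data)

# Hypothetical specification limits for the characteristic.
usl, lsl = 10.75, 9.25
cp = (usl - lsl) / (6 * sigma_within)
pp = (usl - lsl) / (6 * sigma_overall)
print(round(cp, 2), round(pp, 2))
```

When the process drifts between subgroups, the overall sigma exceeds the within-subgroup sigma and Pp falls below Cp; comparing the two indices therefore hints at whether variation is short-term or long-term.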
Process capability indices Cp and Cpk evaluate the output of a process in comparison to
the specification limits determined by the target value and the tolerance range. Cp tells you if
your process is capable of making parts within specifications and Cpk tells you if your process
is centered between the specification limits. When engineers are designing parts, they must
consider the capability of the machine or process selected to produce the part.
To illustrate, let us use a real world example. Imagine that you are driving your vehicle
over a bridge. The width of your vehicle is equivalent to the spread or range of the data. The
guardrails on each side of the bridge are your specification limits. You must keep your vehicle
on the bridge to reach the other side. The Cp value is equivalent to the distance your vehicle
stays away from the guardrails and Cpk represents how well you are driving down the middle of
the bridge. Obviously, the narrower the spread of your data (the smaller your car's width), the
more distance there is between the vehicle and the guardrails and the more likely you are to
stay on the bridge.
The Cpk index of process center goes a step further by examining how close a process is
performing to the specification limits considering the common process variation. The larger the
Cpk value the closer the mean of the data is to the target value. Cpk is calculated using the
specification limits, standard deviation or sigma, and the mean value. The Cpk value should be
between 1 and 3. If the value is lower than 1 the process is in need of improvement. The Cp and
Cpk indices are only as good as the data used. Accurate process capability studies are dependent
upon three basic assumptions regarding the data:
1. There are no special causes of variation in the process and it is in a state of statistical
control. Any special causes must be discovered and resolved.
2. The data fits a Normal distribution, exhibiting a bell shaped curve and can be
calculated to plus or minus three sigma. There are cases when the data does not fit
a normal distribution.
3. The sample data is representative of the population. The data should be randomly
collected from a large production run. Many companies require at least 25 to
preferably 50 sample measurements be collected.
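Under those assumptions, Cp and Cpk reduce to simple formulas. The following sketch uses hypothetical specification limits and process statistics chosen only for illustration:

```python
def process_capability(mean, sigma, lsl, usl):
    """Compute Cp (spread vs. tolerance) and Cpk (spread and centering)."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))
    return cp, cpk

# Hypothetical process: mean 50.1, sigma 0.3, specification 49.0 to 51.0.
cp, cpk = process_capability(50.1, 0.3, 49.0, 51.0)
print(round(cp, 2), round(cpk, 2))  # -> 1.11 1.0
```

Here Cp exceeds Cpk because the process mean sits off-center; a Cpk at the low end of the 1-to-3 band signals that the process needs improvement or re-centering.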
In manufacturing and many other types of businesses, reduction of waste and providing
a quality product is imperative if they are to survive and thrive in today’s marketplace. Waste
exists in many forms in a process. When we look at the bigger picture, process capability is
more than just measuring Cp and Cpk values. Process capability is just one tool in the Statistical
Process Control (SPC) toolbox.
Implementing SPC involves collecting and analyzing data to understand the statistical
performance of the process and identifying the causes of variation within. Important knowledge
is obtained through focusing on the capability of process. Monitoring process capability allows
the manufacturing process performance to be evaluated and adjusted as needed to assure
products meet the design or customer’s requirements. When used effectively this information
can reduce scrap, improve product quality and consistency and lower the cost to manufacture
and the cost of poor quality.
The capability indices can be calculated manually, although there are several software
packages available that can complete the calculations and provide graphical data illustrating
process capability. For the example in this section, we will utilize a popular statistical software
package. For our example, we will utilize data from randomly collected measurements of a key
characteristic of a machined part. To better represent the population values, the sample data
must be randomly collected, preferably over time from a large production run. A few things to
keep in mind:
· Range = 0.254 mm
First, we will examine our data with a simple histogram to determine if it could fit a normal
distribution. In addition, we can generate a probability plot evaluating how well our data fits a
straight line, further indicating with 95% confidence that our data fits a normal distribution.
Using the graph, we can further evaluate process capability by comparing the spread or
range of the product specifications to the spread of the process data, as measured by six
process standard deviation units (6σ).
Through examination of the reports, we can determine that our example process is in a
state of statistical control. All the data points fall well within the specification limits with a normal
distribution. A process where almost all the measurements fall inside the specification limits is
deemed a capable process. Process capability studies are valuable tools when used properly.
As previously mentioned the information gained is generally used to reduce waste and
improve product quality. In addition, by knowing your process capabilities, the design team can
work with manufacturing to improve product quality, and processes that are “not in control” may
be targeted for improvement. During a typical Kaizen event or other quality improvement
initiatives, Process Capability is calculated at the start and end of the study to measure the level
of improvement achieved. Accurate knowledge of process capability enables management to
make decisions regarding where to apply available resources based on data.
Using this method, graphs are plotted featuring specific pre-determined control limits. The
limits are frequently based upon the general capability of the processes. Traditionally, products
are inspected once the production process has been completed; using SPC, however, processes
are statistically analyzed and improved before a faulty or inadequate product is produced.
Under the SPC methodology it is crucial that data be collected, organized, understood and
fully analyzed to create improvement.
In the early 1920s, Walter Shewhart of Bell Telephone Laboratories pioneered the concept
of SPC by developing the first control chart. Through trial and error, Shewhart continued to
improve what is now known as SPC. In 1931, Shewhart authored a book entitled 'Economic
Control of Quality of Manufactured Product', which set the stage for the use of statistics within
processes to enhance product control.
A professional society concerned with SPC, the American Society for Quality Control, was
formed in 1945. During this period SPC methods were also introduced to Japanese industry.
Control charts (also known as Shewhart charts, after Walter Shewhart) are an integral
component of the SPC methodology, used to determine the state of statistical control in either
a business or manufacturing process. The main purpose of a control chart is to continuously
record data so discrepancies or unusual events can be observed within the typical process
performance. There are two separate types of process variation: Common cause variation and
Special cause variation.
The data from control charts is used in an attempt to locate discrepancies between
‘common’ and ‘special’ sources. This is not a one-time observation technique but instead
considered more of a long-term, continuous and ongoing monitoring activity.
If there are not any triggers identified within the control chart using the specified detection
criteria/rules, then the chart is considered stable. Conversely, if any of the detection rules are
triggered, other tools can be utilized to identify the root cause of the unwarranted levels of
variation to expose the cause.
The control chart is a graph used to study how a process changes over time. Data are
plotted in time order. A control chart always has a central line for the average, an upper line for
the upper control limit, and a lower line for the lower control limit. These lines are determined
from historical data. By comparing current data to these lines, you can draw conclusions about
whether the process variation is consistent (in control) or is unpredictable (out of control, affected
by special causes of variation). This versatile data collection and analysis tool can be used by a
variety of industries and is considered one of the seven basic quality tools.
Control charts for variable data are used in pairs. The top chart monitors the average, or
the centering of the distribution of data from the process. The bottom chart monitors the range,
or the width of the distribution. If your data were shots in target practice, the average is where
the shots are clustering, and the range is how tightly they are clustered. Control charts for
attribute data are used singly.
Let us assume that from a sample you have determined that the mean is 300 and the
standard deviation equals 44.72. Three standard deviations on either side of the mean become
your upper and lower control limits on this chart; in this case three standard deviations equal
134.16, so the limits are 300 ± 134.16. Therefore, if all points fall within plus or minus three
standard deviations of the mean, the process is in control. If points fall outside the acceptable
limits, the process is not in control and corrective action is needed. UCL and LCL are the upper
control limit and lower control limit respectively; USL and LSL are the upper specification limit
and lower specification limit.
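The arithmetic of this example can be verified with a short script; the sample points passed to the in-control check are hypothetical:

```python
# Three-sigma control limits for the text's example: mean 300, sd 44.72.
mean = 300.0
sd = 44.72

ucl = mean + 3 * sd  # upper control limit
lcl = mean - 3 * sd  # lower control limit
print(round(ucl, 2), round(lcl, 2))  # -> 434.16 165.84

def in_control(points, lcl, ucl):
    """A process is judged in control if every point falls within the limits."""
    return all(lcl <= p <= ucl for p in points)

# Hypothetical measurement runs for illustration.
print(in_control([280, 310, 402, 250], lcl, ucl))  # -> True
print(in_control([280, 500], lcl, ucl))            # -> False: 500 exceeds UCL
```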
Control charts are also used when determining whether a quality improvement project should
aim to prevent specific problems or to make fundamental changes to the process.
2. Determine the appropriate time period for collecting and plotting data.
4. Look for “out-of-control signals” on the control chart. When one is identified, mark it
on the chart and investigate the cause. Document how you investigated, what you
learned, the cause and how it was corrected.
For each sample, the average value X-bar of all the measurements and the range R are
calculated. The grand average X-double-bar (equal to the average of all the sample averages
X-bar) and R-bar (equal to the average of all the sample ranges R) are found, and from these
we can calculate the control limits for the X-bar and R charts:
UCL (X-bar chart) = X-double-bar + A2 × R-bar
LCL (X-bar chart) = X-double-bar − A2 × R-bar
UCL (R chart) = D4 × R-bar
LCL (R chart) = D3 × R-bar
The process standard deviation may be estimated as sigma = R-bar / d2, where d2 is a factor
whose value depends on the number of units in a sample. Its value is seen from S.Q.C. Tables 8.1.
As long as the X-bar and R values for each sample are within the control limits, the process
is said to be in statistical control. When the process is not in control, points fall outside the
control limits on either the X-bar or the R chart; this means assignable causes (human-controlled
causes) are present in the process. Even when all the points are inside the control limits we
cannot definitely say that no assignable cause is present, but it is not economical to trace such
a cause, and no statistical test can be applied.
After computing the control limits, the next step is to determine whether the process is in
statistical control or not. If not, it means there is an external cause that throws the process out
of control. This cause must be traced and removed so that the process may return to operate
under stable statistical conditions.
The various reasons for the process being out of control may be:
Tracing these causes is sometimes simple and straightforward, but when the process is
subject to the combined effect of several external causes, it may be a lengthy and complicated
business.
Process in Control:
(a) Re-evaluate the specifications. Whether the tight tolerances are actually needed or
they can be relaxed without affecting quality.
(b) If relaxation in specifications is not allowed then a more accurate process is required
to be selected.
(c) If both the above alternatives are not acceptable then 100% inspection is carried out
to trace out the defectives.
Here the factors A2, D3 and D4 depend on the number of units per sample; the larger the
number, the closer the limits. The values of A2, D3, D4 and d2 can be obtained from Statistical
Quality Control tables. For ready reference, typical values are given below in tabular form.
No. of units in a sample    A2      D3      D4      d2
2                           1.880   0       3.267   1.128
3                           1.023   0       2.574   1.693
4                           0.729   0       2.282   2.059
5                           0.577   0       2.114   2.326
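To illustrate, the trial control limits can be computed in a short script; the subgroup measurements below are hypothetical, and the factors used are the tabulated values for a sample size of 5:

```python
# X-bar and R chart control limits from subgrouped data.
# Factors for subgroup size n = 5: A2 = 0.577, D3 = 0, D4 = 2.114.
subgroups = [
    [5.02, 4.98, 5.01, 5.00, 4.99],
    [5.03, 5.00, 4.97, 5.01, 5.02],
    [4.99, 5.01, 5.00, 4.98, 5.03],
    [5.00, 5.02, 4.99, 5.01, 4.98],
]
A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [sum(g) / len(g) for g in subgroups]       # sample averages (X-bar)
ranges = [max(g) - min(g) for g in subgroups]      # sample ranges (R)
grand_avg = sum(xbars) / len(xbars)                # X-double-bar
r_bar = sum(ranges) / len(ranges)                  # R-bar

ucl_x = grand_avg + A2 * r_bar
lcl_x = grand_avg - A2 * r_bar
ucl_r = D4 * r_bar
lcl_r = D3 * r_bar
print(round(ucl_x, 3), round(lcl_x, 3), round(ucl_r, 3), round(lcl_r, 3))
```

Any sample average outside (lcl_x, ucl_x), or any range above ucl_r, would signal an assignable cause.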
Even in the best manufacturing process certain errors may develop that constitute
assignable causes, yet no statistical action can be taken. This leads to many practical difficulties
regarding what relationship shows satisfactory control.
Under such circumstances, the inspection results are based on the classification of products
as defective or not defective, acceptable or unacceptable, according to whether the product
conforms or fails to conform to the specified specification.
These products are inspected with GO and NOT GO gauges. Here again, our aim is to tell
whether the product conforms or does not conform to the specified values. Quality characteristics
expressed in this way are known as attributes.
This is the control chart for percent defectives or for fraction defectives. It is used whenever
the quality characteristics are expressed as the number of units conforming or not conforming
to the specified specifications, whether by visual inspection or by 'GO' and 'NOT GO' gauges.
It is denoted by P-bar and may be defined as the ratio between the total number of
defective (non-conforming) products observed in all the samples combined and the total number
of products inspected. For example, if 15 products are found to be defective in a sample of 200,
then P-bar = 15/200 = 0.075.
c. Standard Deviation
The standard deviation for fraction defective, denoted by sigma-P, is calculated by the formula:
sigma-P = sqrt( P-bar (1 − P-bar) / n )
Just as the control limits for the X-bar and R charts are obtained as ±3 sigma values about
the average, the two control limits, upper and lower, for this chart are calculated by adding or
subtracting 3 sigma-P from the centre-line value. These trial limits are computed to determine
whether a process is in statistical control or not:
UCL-P = P-bar + 3 sigma-P = P-bar + 3 sqrt( P-bar (1 − P-bar) / n )
LCL-P = P-bar − 3 sigma-P = P-bar − 3 sqrt( P-bar (1 − P-bar) / n )
Mostly the control limits are obtained on the basis of about 20-25 samples to pick up the
problem and standard deviation from the samples is calculated for further production control.
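These limits can be computed directly. Below is a brief Python sketch; it reuses the text's example of 15 defectives found in 200 products inspected, and assumes for illustration that each sample is of size 200:

```python
import math

def p_chart_limits(total_defectives, total_inspected, n):
    """Centre line and 3-sigma control limits for a p-chart.

    total_defectives: defectives found across all samples combined
    total_inspected:  total number of products inspected
    n:                size of each sample
    """
    p_bar = total_defectives / total_inspected       # centre line (fraction defective)
    sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)     # std. dev. of fraction defective
    ucl = p_bar + 3 * sigma_p
    lcl = max(0.0, p_bar - 3 * sigma_p)              # a fraction defective cannot go below zero
    return p_bar, ucl, lcl

# 15 defectives in 200 inspected; assumed sample size of 200
cl, ucl, lcl = p_chart_limits(15, 200, 200)
```

A sample whose fraction defective falls outside [lcl, ucl] signals that an assignable cause should be investigated.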
The C-chart is a method of plotting attribute characteristics. In this case, the sample taken is a
single unit, such as a length, an area, or a fixed time interval. In some cases it is required to
find the number of defects per unit rather than the percent defective.
For example, take a case in which a large number of small components form a large unit,
say a car or a transistor set. The transistor set may have defects at various points. In this case, it
seems natural to count the number of defects per set, rather than to determine all points at
which the unit is defective.
Attempting to use P-charts to locate all the points at which the transistor set is defective
would be wrong, and to some extent impossible and impracticable. Such a
condition warrants the use of a C-chart.
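C-chart limits follow from the Poisson model for defect counts: the centre line is the average number of defects per unit, c̄, and the limits are c̄ ± 3√c̄. A short Python sketch, using hypothetical defect counts for illustration:

```python
import math

def c_chart_limits(defect_counts):
    """Centre line and 3-sigma limits for a c-chart.

    defect_counts: number of defects observed in each inspected unit
    (e.g. defects counted per transistor set).
    """
    c_bar = sum(defect_counts) / len(defect_counts)  # average defects per unit
    sigma_c = math.sqrt(c_bar)                       # Poisson: variance equals the mean
    ucl = c_bar + 3 * sigma_c
    lcl = max(0.0, c_bar - 3 * sigma_c)              # a defect count cannot be negative
    return c_bar, ucl, lcl

# Hypothetical counts of defects found in 10 successive sets
counts = [4, 6, 3, 5, 4, 7, 2, 5, 6, 4]
cl, ucl, lcl = c_chart_limits(counts)
```

When c̄ is small, the lower limit is often clipped to zero, as in this example.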
1. Variables control charts (those that measure variation on a continuous scale) are more
sensitive to change than attribute control charts (those that measure variation on a discrete
scale).
2. Variables charts are useful for processes such as measuring tool wear.
3. Use an individuals chart when few measurements are available (e.g., when they are
infrequent or are particularly costly). These charts should be used when the natural
subgroup is not yet known.
4. In a u-chart, the defects within the unit must be independent of one another, such as with
component failures on a printed circuit board or the number of defects on a billing statement.
5. Use a u-chart for continuous items, such as fabric (e.g., defects per square meter of
cloth).
6. A c-chart is a useful alternative to a u-chart when there are a lot of possible defects on a
unit, but there is only a small chance of any one defect occurring (e.g., flaws in a roll of
material).
7. When charting proportions, p- and np-charts are useful (e.g., compliance rates or process
yields).
8.9 Summary
SPC aims to eliminate all disturbances in a process, reduce variation, and produce on
target, thus leading to continual process improvement. Errors in the process such as tool wear,
wrong adjustments, wrong materials etc. will be found at an early stage thus enabling production
with less variation and reduced scrap levels. In typical manufacturing quality methods,
measurements are compared to specification limits and the result is a pass/fail decision. There
is no indication of process variation or indeed if there are any disturbances (relative to a controlled
process). It is important to establish a normal variation pattern for the process and maintain it by
continual process monitoring. If there is a deviation from a normal variation then a disturbance
has occurred and process adjustments have to be made.
8.10 Keywords
CPK - Process Capability Index,
3. Evans, J., and Lindsay, W.M., The Management and Control of Quality, 8th
Edition, South Western, 2012.
4. Evans, J., Quality Management, Organisation and Strategy, 6th Edition, Cengage
International, 2011.
LESSON - 9
QUALITY FUNCTION DEPLOYMENT
Learning Objectives
Structure
9.1 Introduction
9.7 Summary
9.8 Keywords
9.1 Introduction
Developed in 1960s Japan, QFD was imported to the United States in the early 1980s
and caught on thanks to its popularity and successful track record in the automotive industry.
Copying the model from manufacturers such as Toyota and Mitsubishi, the “big three” American
car makers used QFD to bring customer centricity to their industry. Once adopted, QFD shortened
design cycles significantly and reduced the total number of employees required in the design
process. Shifting the focus from bottom line cost analysis to customer satisfaction brought
innovations and increased sales of domestic vehicles after the surge in popularity of Japanese
imports in the 1970s. Today QFD and the House of Quality can be found in a variety of industries
and are part of the Six Sigma toolbox.
By continuously circling back to the Voice of the Customer, QFD ensures every technical
requirement takes the customer into account, using matrix diagrams such as the House of
Quality to drive customer value into every stage.
Effort in QFD
The Quality Function Deployment process begins with collecting input from customers
(or potential customers), typically through surveys. The sample size for these surveys should
be fairly significant because quantifiable data will carry more weight and avoid letting any outlier
comments drive product strategy in the wrong direction.
After completing the surveys and aggregating the data (along with competitive analysis
when applicable), it’s boiled down into the Voice of the Customer. These customer requirements,
requests, demands, and preferences are framed as specific items and ideally ranked in terms
of importance. These are then listed on the left-hand side of the House of Quality matrix and
represent what customers want the product to do.
From here, the technical requirements can be created, with each of them tying back to
the Voice of the Customer items identified in the signature Quality Function Deployment matrix,
the House of Quality. These Voice of the Customer items will continue to trickle down into other
stages of product development and deployment, including component definition, process
planning, and quality control.
When the product is “done,” the Voice of Customer requirements initially identified in the
process should be clearly met (or intentionally left out) and the product can be released with the
confidence that it is meeting the needs of customers.
Quality Function Deployment benefits companies primarily by ensuring they bring products
to market that customers actually want, thanks to listening to customer preferences at the
beginning of the design process and then mandating these needs and desires are met throughout
every aspect of the design and development process. In short, if something isn’t being built
because a customer wants it (or it provides underlying support for a customer need), it doesn’t
get built at all, which can prevent technology from driving strategy when it’s not directly beneficial
to the customer experience.
Constantly and consistently circling back to the customer might seem like overkill, but it
quickly identifies and often cuts short any activity that doesn’t work toward the ultimate goal of
providing products customers want to buy and use. And by limiting product development activities
to just the things customers are asking for, the overall process is faster, more efficient and less
expensive.
Since collecting customer inputs and applying them throughout the product development
process is such a cross-functional activity, it can also increase teamwork and ensure the entire
organization is aligned around the same goal of customer satisfaction instead of competing
with other internal priorities.
QFD doesn’t come without its share of downsides. First of all, it can be a seismic change
for some organizations, particularly those with an established process primarily focused on
profitability and cost reduction. While QFD should ultimately result in both of those objectives
as well as satisfied customers, switching the primary motivation to customer satisfaction can be
jarring and meet some resistance, particularly if the company thinks it’s already doing a great
job with this.
The tunnel vision focus of QFD on the customer can also have some negative
repercussions if customer needs drive up product costs or delay technological innovations that
could benefit the company down the line. QFD’s customer focus also places a huge emphasis
on survey results, which if poorly designed or executed could push a company in the wrong
direction, and also don’t account for changes in customer needs and desires that may emerge
after the product design process has commenced.
Use of QFD
As soon as the customer is well understood and their challenges and desires have
been quantifiably captured, QFD can be incorporated into the product development process. It
is most effective when it is used throughout the entire product lifecycle, as its main purpose is
to ensure a constant focus on the voice of the customer. You can’t “check it off” as completed
since it is an ever-present ingredient every step of the way.
QFD is most appropriate when companies are focused on relatively iterative innovation
versus something completely new, since there is a large base of customer feedback and input
to drive the process. When a product is creating a completely new category it’s more difficult to
fully articulate the voice of the customer since they don’t necessarily have a frame of reference,
but even in these cases carrying forward what is known about customer needs and preferences
can provide value.
The House of Quality, a part of QFD, is the basic design tool of quality function deployment. It
identifies and classifies customer desires (the Whats), identifies the importance of those desires,
identifies engineering characteristics which may be relevant to those desires (the Hows), correlates
the two, allows for verification of those correlations, and then assigns objectives and priorities
for the system requirements.
This process can be applied at any system composition level (e.g. system, subsystem, or
component) in the design of a product, and can allow for assessment of different abstractions
of a system. The process is cascaded through a number of hierarchical levels of Whats and
Hows, analysing each stage of product development (or service enhancement) and production
(or service delivery). The House of Quality first appeared in 1972 in the design of an oil tanker by Mitsubishi
Heavy Industries.
The output of the House of Quality is generally a matrix with customer desires on one
dimension and correlated nonfunctional requirements on the other. The cells of the matrix
are filled with the weights assigned to the stakeholder characteristics where those
characteristics are affected by the system parameters across the top of the matrix. At the bottom
of the matrix, each column is summed, which allows the system characteristics to be weighted
according to the stakeholder characteristics.
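The column-summing step can be sketched in a few lines of Python. All the customer desires, technical requirements, and weights below are hypothetical, and the 9/3/1 relationship scale (strong/moderate/weak) is one common convention rather than something mandated by QFD:

```python
# Customer desires (Whats) with importance weights on a 1-5 scale
importance = {"easy to open": 5, "stays fresh": 4, "low cost": 3}

# Relationship strength of each technical requirement (How) to each desire,
# using the common 9/3/1 convention (0 = no relationship)
relationships = {
    "seal strength":  {"easy to open": 3, "stays fresh": 9, "low cost": 0},
    "film thickness": {"easy to open": 1, "stays fresh": 3, "low cost": 9},
}

def technical_priorities(importance, relationships):
    """Sum importance x relationship down each column of the matrix."""
    return {
        how: sum(importance[what] * strength
                 for what, strength in cells.items())
        for how, cells in relationships.items()
    }

scores = technical_priorities(importance, relationships)
```

The technical requirement with the highest column sum is the one most strongly tied to what customers say they value, and therefore gets design priority.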
Developed in the 1950s, FMEA was one of the earliest structured reliability improvement
methods. Today it is still a highly effective method of lowering the possibility of failure.
Failure Mode and Effects Analysis (FMEA) is a structured approach to discovering potential
failures that may exist within the design of a product or process. Failure modes are the ways in
which a process can fail. Effects are the ways that these failures can lead to waste, defects or
harmful outcomes for the customer. Failure Mode and Effects Analysis is designed to identify,
prioritize and limit these failure modes. FMEA is not a substitute for good engineering. Rather,
it enhances good engineering by applying the knowledge and experience of a Cross Functional
Team (CFT) to review the design progress of a product or process by assessing its risk of
failure.
There are two broad categories of FMEA, Design FMEA (DFMEA) and Process FMEA
(PFMEA).
Design FMEA
Design FMEA (DFMEA) explores the possibility of product malfunctions, reduced product
life, and safety and regulatory concerns derived from:
Material Properties
Geometry
Tolerances
Process FMEA
Process FMEA (PFMEA) discovers failures that impact product quality, reduce the reliability
of the process, cause customer dissatisfaction, and create safety or environmental hazards, derived from:
Human Factors
Materials used
Machines utilized
Historically, the sooner a failure is discovered, the less it costs. If a failure is discovered
late in product development or launch, the impact is exponentially more devastating. FMEA is
one of many tools used to discover failure at its earliest possible point in product or process
design, and discovering a failure early in Product Development (PD) using FMEA provides
clear cost and schedule benefits.
Pre-work involves the collection and creation of key documents. FMEA works smoothly
through the development phases when an investigation of past failures and preparatory
documents is performed from its onset. Preparatory documents may include:
A pre-work Checklist is recommended for an efficient FMEA event. Checklist items may
include:
Requirements to be included
Path 1 consists of inserting the functions, failure modes, effects of failure and Severity
rankings. The pre-work documents assist in this task by taking information previously captured
to populate the first few columns (depending on the worksheet selected) of the FMEA.
Functions should be written in verb-noun context. Each function must have an associated
measurable. Functions may include:
Specifications of a design
Government regulations
Program-specific requirements
Each individual effect is given a Severity ranking. Actions are considered at this
stage if the Severity is 9 or 10. Recommended Actions may be considered that impact the
product or process design, addressing Failure Modes with high Severity rankings (Safety and
Regulatory).
Causes are selected from the design / process inputs or past failures and placed in the
Cause column when applicable to a specific failure mode. The columns completed in Path 2
are:
Current Prevention Controls (i.e. standard work, previously successful designs, etc.)
Actions are developed to address high risk Severity and Occurrence combinations,
defined in the Quality-One Criticality Matrix
Path 3 development involves the addition of Detection Controls that verify the design
meets requirements (for Design FMEA) or that detect whether a cause or failure mode, should it
occur, would reach the customer (for Process FMEA).
Detection Controls
Detection Ranking
Actions are determined to improve the controls if they are insufficient to the Risks
determined in Paths 1 and 2. Recommended Actions should address weakness in
the testing and/or control strategy.
Review and updates of the Design Verification Plan and Report (DVP&R) or Control
Plans are also possible outcomes of Path 3.
The Actions that were previously determined in Paths 1, 2 or 3 are assigned a Risk
Priority Number (RPN) for action follow-up.
RPN is calculated by multiplying the Severity, Occurrence and Detection Rankings for
each potential failure / effect, cause and control combination. Actions should not be determined
based on an RPN threshold value. Although this is commonly done, it is a practice that leads to poor
team behavior. The columns completed are:
FMEA Actions are closed when countermeasures have been taken and are successful at
reducing risk. The purpose of an FMEA is to discover and mitigate risk. FMEAs which do not
find risk are considered weak and non-value-added: if the team's effort produced no
improvement, the time spent on the analysis was wasted.
After successful confirmation of Risk Mitigation Actions, the Core Team or Team Leader
will re-rank the appropriate ranking value (Severity, Occurrence or Detection). The new rankings
will be multiplied to attain the new RPN, and the original RPN is compared to the revised RPN
to confirm the relative improvement to the design or process. Columns completed in
Step 7:
Re-ranked Severity
Re-ranked Occurrence
Re-ranked Detection
Re-ranked RPN
Generate new Actions, repeating Step 5, until risk has been mitigated
Deciding when to take an action on the FMEA has historically been determined by RPN
thresholds. Quality-One does not recommend the use of RPN thresholds for setting action
targets. Such targets are believed to negatively change team behavior, because teams select
rankings that bring the number below the threshold rather than rankings that reflect the actual
risk requiring mitigation.
RPN Pareto
When completed, Actions move the risk from its current position in the Quality-One FMEA
Criticality Matrix to a lower risk position.
The Risk Priority Number, or RPN, is a numeric assessment of risk assigned to a process,
or steps in a process, as part of Failure Modes and Effects Analysis (FMEA), in which a team
assigns each failure mode numeric values that quantify likelihood of occurrence, likelihood of
detection, and severity of impact.
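The RPN calculation itself is simply the product of the three rankings. A minimal Python sketch; the ranking values used here are illustrative:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the three 1-10 rankings."""
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 10:
            raise ValueError("rankings must be between 1 and 10")
    return severity * occurrence * detection

# One failure mode before and after mitigation actions (illustrative values):
# occurrence and detection improve, severity typically stays unchanged
original = rpn(8, 6, 5)
revised = rpn(8, 3, 2)
```

Comparing `original` against `revised` after re-ranking is how the relative improvement from a mitigation action is confirmed; as the text notes, the absolute RPN value should not be used as an action threshold.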
Reduce Variation of the Process (Statistical Process Control and Process Capability)
3. Improve Controls
The Failure Modes in a FMEA are equivalent to the Problem Statement or Problem
Description in Problem Solving. Causes in a FMEA are equivalent to potential root causes in
Problem Solving. Effects of failure in a FMEA are Problem Symptoms in Problem Solving. More
examples of this relationship are:
The problem statements and descriptions are linked between both documents. Problem
solving methods are completed faster by utilizing easy to locate, pre-brainstormed
information from an FMEA.
Possible causes in an FMEA are immediately used to jump start Fishbone or Ishikawa
diagrams. Brainstorming information that is already known is not a good use of time or
resources.
Data collected from problem solving is placed into an FMEA for future planning of new
products or process quality. This allows an FMEA to consider actual failures, categorized
as failure modes and causes, making the FMEA more effective and complete.
The design or process controls in an FMEA are used in verifying the root cause and
Permanent Corrective Action (PCA).
The FMEA and Problem Solving reconcile each failure and cause by cross documenting
failure modes, problem statements and possible causes.
The concept of Taguchi's quality loss function stands in contrast with the American concept
of quality, popularly known as the goal-post philosophy, given by the American quality
guru Phil Crosby. The goal-post philosophy holds that if a product feature does not meet the
designed specifications it is termed a product of poor quality (rejected), irrespective of the amount
of deviation from the target value (the mean value of the tolerance zone). This concept is similar
to scoring a 'goal' in the game of football or hockey: a goal counts as one irrespective of where
the ball strikes the goal post, whether in the centre or towards the corner. This means that if the
product dimension goes out of the tolerance limit, the quality of the product drops suddenly.
Through his concept of the quality loss function, Taguchi explained that from the customer's
point of view this drop in quality is not sudden. The customer experiences a loss of quality the
moment the product specification deviates from the target value. This loss is depicted by a
quality loss function and follows a parabolic curve, mathematically given by

    L = k(y − m)²

where m is the theoretical target (mean) value, y is the actual size of the product, k is a constant
and L is the loss. This means that if the difference between the actual size and the target value,
(y − m), is large, the loss is large, irrespective of the tolerance specifications. In Taguchi's view,
tolerance specifications are set by engineers, not by customers; what the customer experiences
is loss. This equation holds for a single product; if the loss is to be calculated for multiple
products, the loss function is given by

    L = k[S² + (ȳ − m)²]

where S² is the variance of product size and ȳ is the average product size.
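Both forms of the loss function can be computed directly. A short Python sketch; the target value, loss coefficient k, and measurements below are illustrative values, not from the text:

```python
def taguchi_loss(y, m, k):
    """Quality loss for a single product: L = k * (y - m)^2."""
    return k * (y - m) ** 2

def taguchi_loss_batch(sizes, m, k):
    """Average loss over multiple products: L = k * (S^2 + (y_bar - m)^2),
    where S^2 is the (population) variance of product size."""
    n = len(sizes)
    y_bar = sum(sizes) / n
    s2 = sum((y - y_bar) ** 2 for y in sizes) / n
    return k * (s2 + (y_bar - m) ** 2)

# Illustrative: target m = 10.0 mm, loss coefficient k = 50 currency units per mm^2
single = taguchi_loss(10.2, 10.0, 50)
batch = taguchi_loss_batch([9.8, 10.1, 10.3, 10.0], 10.0, 50)
```

Note that the batch loss is positive even when the mean is near the target, because variance alone imposes a loss; this is the key difference from the goal-post view.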
Robust Design is focused on improving the essential function of a process or product
itself, thereby smoothing the progress of flexible designs and concurrent engineering. The
Taguchi (Robust Design) approach is rooted in a so-called energy transformation view of
engineering systems, whether electrical, chemical, mechanical or the like. It is a unique
method which makes use of the ideal function of a process or product, in contrast to
conventional approaches which mainly concentrate on “symptom analysis” as a basis for
development or improvement towards Robustness and Quality Assurance.
To ensure customer satisfaction, the Robust Design approach takes into
account both:
2) The cost, considered as the rate of deterioration in the field. Robust Design also employs
a technique for performing experiments to investigate processes whose end result depends on
several factors (inputs and variables), without the tedious, inefficient, or excessively costly
exercise of running every feasible combination of values of those variables. With a systematic
choice of variable combinations, their individual effects can be separated.
§ Improve processes and products so that they perform as intended under a broad variety
of consumer conditions throughout their life cycle, making processes reliable and products
durable
§ Maximize robustness by developing the intended function of a product and improving its
insensitivity to noise factors that would otherwise degrade performance
§ Adjust and develop product formulas and processes to achieve the desired performance
at the lowest possible cost and in the shortest possible time frame
Over the years, Six Sigma has made it possible to reduce cost by uncovering problems
which occur during manufacturing and resolving instant causes in the life cycle of a product.
Robust Design on the other hand has made it feasible to prevent issues or problems by rigorously
developing designs for both manufacturing process and product. The Robust Design follows a
crucial methodology to ensure a systematic process to attain a good output. Below are the 5
primary tools used in the Robust Design approach:
1. The P-Diagram
This is used to categorize the variables associated with a product into signal (input),
response (output), noise, and control factors.
2. The Ideal Function
This is used to mathematically identify the ideal form of the signal-response relationship,
as embodied by the design concept, so that the higher-level system works fault free.
3. The Quadratic Loss Function
Also termed the Quality Loss Function, this is used to quantify the loss incurred by the
user due to deviation from the intended performance.
4. Signal-to-Noise Ratio
This is used to predict field quality through systematic laboratory experiments.
5. Orthogonal Arrays
These are used to collect and gather reliable information about control factors which are
considered the design parameters with minimal number of tests and experiments.
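As one concrete illustration of the signal-to-noise ratio, the nominal-the-best form, S/N = 10·log10(ȳ²/s²), can be used to compare two control-factor settings; a higher value indicates a more robust setting. The measurement values below are hypothetical:

```python
import math

def sn_nominal_the_best(measurements):
    """Nominal-the-best signal-to-noise ratio: 10 * log10(y_bar^2 / s^2).
    Higher values mean the response is less sensitive to noise."""
    n = len(measurements)
    y_bar = sum(measurements) / n
    s2 = sum((y - y_bar) ** 2 for y in measurements) / (n - 1)  # sample variance
    return 10 * math.log10(y_bar ** 2 / s2)

# Two illustrative control-factor settings measured under the same noise conditions
setting_a = [9.9, 10.1, 10.0, 10.2]
setting_b = [9.5, 10.6, 9.8, 10.4]

# The setting with the higher S/N ratio is the more robust one
better = "A" if sn_nominal_the_best(setting_a) > sn_nominal_the_best(setting_b) else "B"
```

Both settings average close to 10, but setting A varies far less, so its S/N ratio is higher; this is exactly the kind of comparison made across the rows of an orthogonal-array experiment.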
I. Problem Formulation
This step incorporates identification of the main function, development of the P-diagram,
definition of the ideal function and the signal-to-noise (S/N) ratio, and planning of the experiments.
The experiments involve varying the noise, control and signal factors systematically and
efficiently using orthogonal arrays.
II. Data Collection/Simulation
This is the stage where experiments or tests are performed, in either simulation or hardware.
Having a full-scale example of the product for experimentation purposes is not considered
necessary or compulsory in this step. What’s important or significant in this stage is to have a
vital model or example of the product which satisfactorily encapsulates the design idea or concept.
As a result, experiments or tests can be performed at a low cost or economically.
III. Factor Effects Analysis
This is the stage where the effects of the control factors are estimated and the results are
evaluated to identify the most favourable setting of the control factors.
IV. Prediction/Confirmation
In this stage the performance of the product under the most favourable setting of the
control factors is predicted, to confirm the best conditions. Experiments are then performed
under those conditions and the observed results are compared with the predictions. If the
experimental results correspond with the predictions, the final settings are implemented;
if they do not match, the steps need to be repeated.
Many companies worldwide have saved millions, even hundreds of millions, of dollars
just by using the Taguchi approach. Telecommunications, software, electronics, xerography,
automobiles and other engineering fields are just a few of the businesses which have
already practised the Robust Design method. With the Robust Design approach, the full
technological capability of a design can be reached rapidly and consistently, with higher profit.
9.7 Summary
Quality Function Deployment, or QFD, is a model for product development and production
popularized in Japan in the 1960s. The model aids in translating customer needs and
expectations into technical requirements by listening to the voice of the customer. QFD is applied
in a wide variety of areas, viz. product design, manufacturing, production, engineering,
research and development (R&D), information technology (IT), support, testing and regulatory
work, and in other phases in hardware, software, service, and system organizations, as well
as in the organizational functions necessary to assure customer satisfaction, including business
planning, packaging and logistics, procurement, marketing, and sales and service. QFD is also
deployed in quality improvement, quality management, military needs and consumer products,
and in customer-service applications such as education and hotel services.
9.8 Keywords
CFT- Cross Functional Team
PFMEA - Process Failure Mode Effects Analysis
FMECA - Failure mode, effects, and criticality analysis
DFM/A - Design for Manufacturing and Assembly
PCA - Permanent Corrective Action
DOE - Design of Experiments
LESSON - 10
Reliability
Learning Objectives
Define Reliability
Structure
10.1 Introduction
10.2 Reliability
10.7 Summary
10.8 Keywords
10.1 Introduction
The word reliability can be traced back to 1816, and is first attested to the poet Samuel
Taylor Coleridge. Before World War II the term was linked mostly to repeatability; a test (in
any type of science) was considered “reliable” if the same results would be obtained repeatedly.
In the 1920s, product improvement through the use of statistical process control was promoted
by Dr. Walter A. Shewhart at Bell Labs, around the time that Waloddi Weibull was working on
statistical models for fatigue. The development of reliability engineering was here on a parallel
path with quality. The modern use of the word reliability was defined by the U.S. military in the
1940s, characterizing a product that would operate when expected and for a specified period of
time.
In World War II, many reliability issues were due to the inherent unreliability of electronic
equipment available at the time, and to fatigue issues. In 1945, M.A. Miner published the
seminal paper titled “Cumulative Damage in Fatigue” in an ASME journal. A main application for
reliability engineering in the military was for the vacuum tube as used in radar systems and
other electronics, for which reliability proved to be very problematic and costly. The IEEE formed
the Reliability Society in 1948. In 1950, the United States Department of Defense formed a group
called the “Advisory Group on the Reliability of Electronic Equipment” (AGREE) to investigate
reliability methods for military equipment. This group recommended three main ways of working:
In the 1960s, more emphasis was given to reliability testing on component and system
level. The famous military standard MIL-STD-781 was created at that time. Around this period
also the much-used predecessor to military handbook 217 was published by RCA and was
used for the prediction of failure rates of electronic components. The emphasis on component
reliability and empirical research (e.g. Mil Std 217) alone slowly decreased. More pragmatic
approaches, as used in the consumer industries, were being used. In the 1980s, televisions
were increasingly made up of solid-state semiconductors.
10.2 Reliability
Definitions
Reliability is defined as the probability that a product, system, or service will perform its
intended function adequately for a specified period of time, or will operate in a defined environment
without failure.
The idea that an item is fit for a purpose with respect to time
The probability of an item to perform a required function under stated conditions for
a specified period of time
The most important components of this definition must be clearly understood to fully
know how reliability in a product or service is established:
Probability of success
Durability
Dependability
“This car is under warranty for 40,000 miles or 3 years, whichever comes first.”
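Statements like this warranty rest on the reliability function. Under the common assumption of a constant failure rate λ (the flat bottom of the bath-tub curve), reliability takes the exponential form R(t) = e^(−λt), and the mean time between failures is 1/λ. A small Python sketch with an illustrative failure rate:

```python
import math

def reliability(t, failure_rate):
    """R(t) = exp(-lambda * t): probability of operating without failure
    up to time t, assuming a constant failure rate (the flat region of
    the bath-tub curve)."""
    return math.exp(-failure_rate * t)

# Illustrative: a component with an MTBF of 10,000 hours (lambda = 1e-4 per hour)
lam = 1e-4
r_mission = reliability(1000, lam)  # survival probability for a 1,000-hour mission
```

Here roughly 90% of such components would survive the 1,000-hour mission; engineers work backwards from a target R(t) to the failure rate the design must achieve.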
This is when a new product is first brought to market, before there is a proved demand for
it, and often before it has been fully proved out technically in all respects. Sales are low and
creep along slowly.
Demand begins to accelerate and the size of the total market expands rapidly. It might
also be called the “Takeoff Stage.”
Demand levels off and grows, for the most part, only at the replacement and new
family-formation rate.
The product begins to lose consumer appeal and sales drift downward, such as when
buggy whips lost out with the advent of automobiles and when silk lost out to nylon.
Given a proposed new product or service, how and to what extent can the shape
and duration of each stage be predicted?
Given an existing product, how can one determine what stage it is in?
A brief further elaboration of each stage will be useful before dealing with these questions
in detail.
Development Stage
Bringing a new product to market is fraught with unknowns, uncertainties, and frequently
unknowable risks. Generally, demand has to be “created” during the product’s initial market
development stage. How long this takes depends on the product’s complexity, its degree of
newness, its fit into consumer needs, and the presence of competitive substitutes of one form
or another.
A proved cancer cure would require virtually no market development; it would get immediate
massive support. An alleged superior substitute for the lost-wax process of sculpture casting
would take lots longer.
While it has been demonstrated time after time that properly customer oriented new
product development is one of the primary conditions of sales and profit growth, what have
been demonstrated even more conclusively are the ravaging costs and frequent fatalities
associated with launching new products. Nothing seems to take more time, cost more money,
involve more pitfalls, cause more anguish, or break more careers than do sincere and well-
conceived new product programs.
The fact is, most new products don't have any sort of classical life cycle curve at all. They
have instead, from the very outset, an infinitely descending curve. The product not only doesn't
get off the ground; it goes quickly underground, six feet under.
It is little wonder, therefore, that some disillusioned and badly burned companies have
recently adopted a more conservative policy, what might be called the “used apple policy.” Instead of
aspiring to be the first company to see and seize an opportunity, they systematically avoid being
first. They let others take the first bite of the supposedly juicy apple that tantalizes them. They
let others do the pioneering. If the idea works, they quickly follow suit. They say, in effect, “The
trouble with being a pioneer is that the pioneers get killed by the Indians.” Hence, they say
(thoroughly mixing their metaphors), “We don’t have to get the first bite of the apple. The
second one is good enough.” They are willing to eat off a used apple, but they try to be alert
enough to make sure it is only slightly used, so that they at least get the second big bite, not the
tenth skimpy one.
Growth Stage
The usual characteristic of a successful new product is a gradual rise in its sales curve
during the market development stage. At some point in this rise a marked increase in consumer
demand occurs and sales take off. The boom is on. This is the beginning of Stage 2, the market
growth stage. At this point potential competitors who have been watching developments during
Stage I jump into the fray. The first ones to get in are generally those with an exceptionally
effective “used apple policy.” Some enter the market with carbon-copies of the originator’s
product. Others make functional and design improvements. And at this point product and brand
differentiation begin to develop.
The ensuing fight for the consumer’s patronage poses to the originating producer an
entirely new set of problems. Instead of seeking ways of getting consumers to try the
product, the originator now faces the more compelling problem of getting them to prefer his
brand. This generally requires important changes in marketing strategies and methods. But
the policies and tactics now adopted will be neither freely the sole choice of the originating
producer, nor as experimental as they might have been during Stage I. The presence of
competitors both dictates and limits what can easily be tried, such as, for example, testing to
determine the best price level or the best channel of distribution.
Maturity Stage
This new stage is the market maturity stage. The first sign of its advent is evidence of
market saturation. This means that most companies or households that are sales
prospects will be owning or using the product. Sales now grow about on a par with population.
No more distribution pipelines need be filled. Price competition now becomes intense. Competitive
attempts to achieve and hold brand preference now involve making finer and finer differentiations
in the product, in customer services, and in the promotional practices and claims made for the
product.
Typically, the market maturity stage forces the producer to concentrate on holding his
distribution outlets, retaining his shelf space, and, in the end, trying to secure even more intensive
distribution. Whereas during the market development stage, the originator depended heavily
on the positive efforts of his retailers and distributors to help sell his product, retailers and
distributors will now frequently have been reduced largely to being merchandise displayers and
order takers. In the case of branded products in particular, the originator must now, more than
ever, communicate directly with the consumer.
The market maturity stage typically calls for a new kind of emphasis on competing more
effectively. The originator is increasingly forced to appeal to the consumer on the basis of price,
marginal product differences, or both. Depending on the product, services and deals offered in
connection with it are often the clearest and most effective forms of differentiation. Beyond
these, there will be attempts to create and promote fine product distinctions through packaging
and advertising, and to appeal to special market segments.
The market maturity stage can be passed through rapidly, as in the case of most women’s
fashion fads, or it can persist for generations with per capita consumption neither rising nor
falling, as in the case of such staples as men’s shoes and industrial fasteners. Or maturity can
persist, but in a state of gradual but steady per capita decline, as in the case of beer and steel.
Decline Stage
When market maturity tapers off and consequently comes to an end, the product enters
Stage 4, market decline. In all cases of maturity and decline the industry is transformed. Few
companies are able to weather the competitive storm. As demand declines, the overcapacity
that was already apparent during the period of maturity now becomes endemic. Some producers
see the handwriting implacably on the wall but feel that with proper management and cunning
they will be one of the survivors after the industry-wide deluge they so clearly foresee.
To hasten their competitors’ eclipse directly, or to frighten them into early voluntary
withdrawal from the industry, they initiate a variety of aggressively depressive tactics, propose
mergers or buy-outs, and generally engage in activities that make life thanklessly burdensome
for all firms, and make death the inevitable consequence for most of them. A few companies do
indeed weather the storm, sustaining life through the constant descent that now clearly
characterizes the industry. Production gets concentrated into fewer hands. Prices and margins
get depressed. Consumers get bored. The only cases where there is any relief from this boredom
and gradual euthanasia are where styling and fashion play some constantly revivifying role.
Preplanning Importance
Knowing that the lives of successful products and services are generally characterized by
something like the pattern illustrated in Exhibit I can become the basis for important life-giving
policies and practices. One of the greatest values of the life cycle concept is for managers
about to launch a new product. The first step for them is to try to foresee the profile of the
proposed product's cycle. Admittedly, such foresight is difficult, and any forecast is likely to be
imprecise.
But this does not mean that useful efforts cannot or should not be made to try to foresee
the slope and duration of a new product’s life. Time spent in attempting this kind of foresight not
only helps assure that a more rational approach is brought to product planning and merchandising;
also, as will be shown later, it can help create valuable lead time for important strategic and
tactical moves after the product is brought to market. Specifically, it can be a great help in
developing an orderly series of competitive moves, in expanding or stretching out the life of a
product, in maintaining a clean product line, and in purposely phasing out dying and costly old
products.
Possibilities of Failure
As pointed out above, the length and slope of the market development stage depend on
the product’s complexity, its degree of newness, its fit into customer needs, and the presence of
competitive substitutes.
The more unique or distinctive the newness of the product, the longer it generally takes to
get it successfully off the ground. The world does not automatically beat a path to the man with
the better mousetrap. The world has to be told, coddled, enticed, romanced, and even bribed
(as with, for example, coupons, samples, free application aids, and the like). When the product’s
newness is distinctive and the job it is designed to do is unique, the public will generally be less
quick to perceive it as something it clearly needs or wants.
This makes life particularly difficult for the innovator. He will have more than the usual
difficulties of identifying those characteristics of his product and those supporting communication
themes or devices which imply value to the consumer. As a consequence, the more distinctive
the newness, the greater the risk of failure resulting either from insufficient working capital to
sustain a long and frustrating period of creating enough solvent customers to make the proposition
pay, or from the inability to convince investors and bankers that they should put up more money.
In any particular situation the more people who will be involved in making a single
purchasing decision for a new product, the more drawn out Stage I will be. Thus in the highly
fragmented construction materials industry, for example, success takes an exceptionally long
time to catch hold; and having once caught hold, it tends to hold tenaciously for a long time,
often too long. On the other hand, fashion items clearly catch on fastest and last shortest. But
because fashion is so powerful, recently some companies in what often seem the least fashion
influenced of industries (machine tools, for example) have shortened the market development
stage by introducing elements of design and packaging fashion to their products.
What factors tend to prolong the market development stage and therefore raise the risk
of failure? The more complex the product, the more distinctive its newness, the less influenced
by fashion, the greater the number of persons influencing a single buying decision, the more
costly, and the greater the required shift in the customer's usual way of doing things: these are
the conditions most likely to slow things up and create problems.
Success Chances
But problems also create opportunities to control the forces arrayed against new product
success. For example, the newer the product, the more important it becomes for the customers
to have a favorable first experience with it. Newness creates a certain special visibility for the
product, with a certain number of people standing on the sidelines to see how the first customers
get on with it. If their first experience is unfavorable in some crucial way, this may have
repercussions far out of proportion to the actual extent of the underfulfillment of the customers'
expectations. But a favorable first experience or application will, for the same reason, get a lot
of disproportionately favorable publicity.
The possibility of exaggerated disillusionment with a poor first experience can raise vital
questions regarding the appropriate channels of distribution for a new product. On the one
hand, getting the product successfully launched may require having (as in the case of, say, the
early days of home washing machines) many retailers who can give consumers considerable
help in the product’s correct utilization and thus help assure a favorable first experience for
those buyers. On the other hand, channels that provide this kind of help (such as small
neighborhood appliance stores in the case of washing machines) during the market development
stage may not be the ones best able to merchandise the product most successfully later when
help in creating and personally reassuring customers is less important than wide product
distribution.
To the extent that channel decisions during this first stage sacrifice some of the
requirements of the market development stage to later stages, the rate of the product’s
acceptance by consumers at the outset may be delayed.
In entering the market development stage, pricing decisions are often particularly hard for
the producer to make. Should he set an initially high price to recoup his investment quickly (i.e.,
"skim the cream"), or should he set a low price to discourage potential competition (i.e., "exclusion")?
The answer depends on the innovator’s estimate of the probable length of the product’s life
cycle, the degree of patent protection the product is likely to enjoy, the amount of capital needed
to get the product off the ground, the elasticity of demand during the early life of the product,
and many other factors.
The decision that is finally made may affect not just the rate at which the product catches
on at the beginning, but even the duration of its total life. Thus some products that are priced
too low at the outset (particularly fashion goods, such as the chemise, or sack, a few years ago)
may catch on so quickly that they become short-lived fads. A slower rate of consumer acceptance
might often extend their life cycles and raise the total profits they yield.
The actual slope, or rate of the growth stage, depends on some of the same things as
does success or failure in Stage I. But the extent to which patent exclusiveness can play a
critical role is sometimes inexplicably forgotten. More frequently than one might offhand expect,
holders of strong patent positions fail to recognize either the market-development virtue of
making their patents available to competitors or the market-destroying possibilities of failing to
control more effectively their competitors’ use of such products.
Generally speaking, the more producers there are of a new product, the more effort goes
into developing a market for it. The net result is very likely to be more rapid and steeper growth
of the total market. The originator’s market share may fall, but his total sales and profits may
rise more rapidly. Certainly this has been the case in recent years of color television; RCA’s
eagerness to make its tubes available to competitors reflects its recognition of the power of
numbers over the power of monopoly.
On the other hand, the failure to set and enforce appropriate quality standards in the early
days of polystyrene and polyethylene drinking glasses and cups produced such sloppy, inferior
goods that it took years to recover the consumer’s confidence and revive the growth pattern.
But to try to see in advance what a product’s growth pattern might be is not very useful if
one fails to distinguish between the industry pattern and the pattern of the single firm for its
particular brand. The industry’s cycle will almost certainly be different from the cycle of individual
firms. Moreover, the life cycle of a given product may be different for different companies in the
same industry at the same point in time, and it certainly affects different companies in the same
industry differently.
Reliability specialists often describe the lifetime of a population of products using a graphical
representation called the bathtub curve. The bathtub curve consists of three periods: an infant
mortality period with a decreasing failure rate followed by a normal life period (also known as
“useful life”) with a low, relatively constant failure rate and concluding with a wear-out period
that exhibits an increasing failure rate. It describes methods to reduce failures at each stage of
product life and shows how burn-in, when appropriate, can significantly reduce operational
failure rate by screening out infant mortality failures.
The bathtub curve is widely used in reliability engineering. It describes a particular form
of the hazard function which comprises three parts: a decreasing failure rate in early life (infant
mortality), a constant failure rate during useful life (random failures), and an increasing failure
rate as the product wears out.
The name is derived from the cross-sectional shape of a bathtub: steep sides and a flat
bottom.
The bathtub curve is generated by mapping the rate of early “infant mortality” failures
when first introduced, the rate of random failures with constant failure rate during its “useful
life”, and finally the rate of “wear out” failures as the product exceeds its design lifetime.
In less technical terms, in the early life of a product adhering to the bathtub curve, the
failure rate is high but rapidly decreasing as defective products are identified and discarded,
and early sources of potential failure such as handling and installation error are surmounted. In
the mid-life of a product (generally speaking, for consumer products) the failure rate is low and
constant. In the late life of the product, the failure rate increases, as age and wear take their toll
on the product. Many electronic consumer product life cycles strongly exhibit the bathtub curve.
While the bathtub curve is useful, not every product or system follows a bathtub-curve
hazard function. For example, if units are retired, or see decreased use, during or before the
onset of the wear-out period, they will show fewer failures per unit of calendar time (not per unit
of use time) than the bathtub curve predicts.
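The shape of the bathtub curve can be sketched numerically. The fragment below is an illustrative model only, not a formula from the text: it sums a decreasing Weibull hazard (infant mortality), a constant random-failure rate (useful life), and an increasing Weibull hazard (wear-out). All parameter values are invented for illustration.

```python
import math

def bathtub_hazard(t, beta_early=0.5, eta_early=100.0,
                   beta_wear=3.0, eta_wear=5000.0, lam_random=1e-4):
    """Illustrative bathtub hazard rate h(t) in failures per hour.

    Sum of: a Weibull hazard with shape < 1 (decreasing: infant
    mortality), a constant rate (random failures during useful life),
    and a Weibull hazard with shape > 1 (increasing: wear-out).
    All parameter values here are arbitrary, chosen for illustration.
    """
    h_early = (beta_early / eta_early) * (t / eta_early) ** (beta_early - 1)
    h_wear = (beta_wear / eta_wear) * (t / eta_wear) ** (beta_wear - 1)
    return h_early + lam_random + h_wear

# The hazard falls in early life, is low and nearly flat in mid-life,
# and rises again as wear-out sets in.
```

Plotting h(t) over, say, t = 1 to 20,000 hours reproduces the steep-sided, flat-bottomed shape that gives the curve its name.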
Availability, testability, maintainability and maintenance are often defined as part of "reliability
engineering" in reliability programs. Reliability plays a key role in the cost-effectiveness of systems;
for example, cars have a higher resale value when they fail less often.
Reliability and quality are closely related. Normally quality focuses on the prevention of
defects during the warranty phase whereas reliability looks at preventing failures during the
useful lifetime of the product or system from commissioning to decommissioning.
Reliability engineering deals with the estimation, prevention and management of high
levels of “lifetime” engineering uncertainty and risks of failure. Although stochastic parameters
define and affect reliability, reliability is not (solely) achieved by mathematics and statistics. One
cannot really find a root cause (needed to effectively prevent failures) by only looking at statistics.
“Nearly all teaching and literature on the subject emphasize these aspects, and ignore
the reality that the ranges of uncertainty involved largely invalidate quantitative methods for
prediction and measurement.” For example, it is easy to represent “probability of failure” as a
symbol or value in an equation, but it is almost impossible to predict its true magnitude in
practice, which is massively multivariate, so having the equation for reliability does not begin to
equal having an accurate predictive measurement of reliability.
Reliability engineering relates closely to safety engineering and to system safety, in that
they use common methods for their analysis and may require input from each other. Reliability
engineering focuses on costs of failure caused by system downtime, cost of spares, repair
equipment, personnel, and cost of warranty claims. Safety engineering normally focuses more
on preserving life and nature than on cost, and therefore deals only with particularly dangerous
system failure modes. High reliability (safety factor) levels also result from good engineering
and from attention to detail, and almost never from only reactive failure management (using
reliability accounting and statistics).
The objectives of reliability engineering, in decreasing order of priority, are:
1. To apply engineering knowledge and specialist techniques to prevent or reduce the
likelihood or frequency of failures.
2. To identify and correct the causes of failures that do occur despite the efforts to prevent
them.
3. To determine ways of coping with failures that do occur, if their causes have not been
corrected.
4. To apply methods for estimating the likely reliability of new designs, and for analysing
reliability data.
The reason for the priority emphasis is that it is by far the most effective way of working,
in terms of minimizing costs and generating reliable products. The primary skills that are required,
therefore, are the ability to understand and anticipate the possible causes of failures, and
knowledge of how to prevent them. It is also necessary to have knowledge of the methods that
can be used for analysing designs and data.
A further difficulty is the extremely high level of uncertainty involved in demonstrating
compliance with all these probabilistic requirements.
Compare this problem with the continuous (re)balancing of, for example, lower level system
mass requirements in the development of an aircraft, which is already often a big undertaking.
Notice that in this case the masses differ by only a few per cent, are not a function of
time, and the data are non-probabilistic and already available in CAD models.
In the case of reliability, the levels of unreliability (failure rates) may change by factors of
ten (multiples of 10) as a result of very minor deviations in design, process, or anything
else. The information is often not available without huge uncertainties within the development
phase. This makes the allocation problem almost impossible to do in a useful, practical, valid
manner that does not result in massive over- or under-specification. A pragmatic approach is
therefore needed: for example, the use of general levels or classes of quantitative requirements
depending only on the severity of failure effects. Also, the validation of results is a far more
subjective task than for any other type of requirement. Quantitative reliability parameters, in
terms of MTBF, are by far the most uncertain design parameters in any design.
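As a minimal numerical sketch of how MTBF works as a parameter, the fragment below computes an MTBF point estimate from invented field data and evaluates the exponential reliability function R(t) = exp(-t/MTBF) that applies in the constant-failure-rate region of the bathtub curve. Function names and figures are illustrative assumptions, not from the text.

```python
import math

def mtbf(total_operating_hours, n_failures):
    """Point estimate of Mean Time Between Failures."""
    return total_operating_hours / n_failures

def reliability(t, mtbf_hours):
    """Survival probability under a constant failure rate (the flat,
    useful-life region of the bathtub curve): R(t) = exp(-t / MTBF)."""
    return math.exp(-t / mtbf_hours)

# Hypothetical fleet data: 50,000 fielded hours with 10 failures.
m = mtbf(50_000, 10)       # 5000 hours
r = reliability(5_000, m)  # exp(-1), about 0.37
```

Note the counter-intuitive consequence: under a constant failure rate, only about 37% of units survive as long as the MTBF, which is one reason the parameter is so easily misread.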
The maintainability requirements address the costs of repairs as well as repair time.
Testability (not to be confused with test requirements) requirements provide the link between
reliability and maintainability and should address detectability of failure modes (on a particular
system level), isolation levels, and the creation of diagnostics (procedures). As indicated above,
reliability engineers should also address requirements for various reliability tasks and
documentation during system development, testing, production, and operation. These
requirements are generally specified in the contract statement of work and depend on how
much leeway the customer wishes to provide to the contractor.
Reliability tasks include various analyses, planning, and failure reporting. Task selection
depends on the criticality of the system as well as cost. A safety-critical system may require a
formal failure reporting and review process throughout development, whereas a non-critical
system may rely on final test reports. The most common reliability program tasks are documented
in reliability program standards, such as MIL-STD-785 and IEEE 1332. Failure reporting analysis
and corrective action systems are a common approach for product/process reliability monitoring.
In practice, most failures can be traced back to some type of human error, for example in:
Assumptions
Design
Design drawings
Statistical analysis
Manufacturing
Quality control
Maintenance
Maintenance manuals
Training
However, humans are also very good at detecting such failures, correcting for them, and
improvising when abnormal situations occur. Therefore, policies that completely rule out human
actions in design and production processes to improve reliability may not be effective. Some
tasks are better performed by humans and some are better performed by machines.
Design for Reliability (DfR) is a process that encompasses tools and procedures to ensure
that a product meets its reliability requirements, under its use environment, for the duration of
its lifetime. DfR is implemented in the design stage of a product to proactively improve product
reliability. DfR is often used as part of an overall Design for Excellence (DfX) strategy.
Predictions based on failure rates taken from historical data. While the (input data)
predictions are often not accurate in an absolute sense, they are valuable to assess relative
differences in design alternatives. Maintainability parameters, for example Mean time to
repair (MTTR), can also be used as inputs for such models.
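Maintainability and reliability parameters combine in the standard steady-state availability relation A = MTBF / (MTBF + MTTR). A minimal sketch, with hypothetical figures:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: the long-run fraction of time the
    system is operational, A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical figures: a unit that runs 990 hours between failures
# and takes 10 hours to repair is up 99% of the time.
a = availability(990, 10)  # 0.99
```

The relation makes the trade-off concrete: availability can be improved either by making failures rarer (raising MTBF) or by making repairs faster (lowering MTTR).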
The most important fundamental initiating causes and failure mechanisms are to be
identified and analyzed with engineering tools. A diverse set of practical guidance as to
performance and reliability should be provided to designers so that they can generate low-
stressed designs and products that protect, or are protected against, damage and excessive
wear. Proper validation of input loads (requirements) may be needed, in addition to verification
for reliability “performance” by testing.
One of the most important design techniques is redundancy. This means that if one part
of the system fails, there is an alternate success path, such as a backup system. The reason
why this is the ultimate design choice is related to the fact that high-confidence reliability evidence
for new parts or systems is often not available, or is extremely expensive to obtain. By combining
redundancy, together with a high level of failure monitoring, and the avoidance of common
cause failures, even a system with relatively poor single-channel (part) reliability can be made
highly reliable at a system level (up to mission-critical reliability). No reliability testing may be
required for this.
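The redundancy argument above can be illustrated with the textbook series and parallel reliability formulas, assuming independent channel failures; the component reliabilities used are invented figures.

```python
def series_reliability(reliabilities):
    """Series arrangement: the system works only if every part works."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

def parallel_reliability(reliabilities):
    """Redundant (parallel) arrangement: the system fails only if every
    channel fails. Assumes independent failures (no common cause)."""
    q = 1.0
    for ri in reliabilities:
        q *= 1.0 - ri
    return 1.0 - q

# Three redundant channels, each only 90% reliable (assumed figure):
# system reliability rises to 1 - 0.1**3 = 0.999, while the same three
# parts in series would manage only 0.9**3 = 0.729.
```

Note that the parallel formula depends on the independence assumption; the common-cause failures the text warns about would erode this gain, which is why redundancy is combined with failure monitoring and common-cause avoidance.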
Another effective way to deal with reliability issues is to perform analysis that predicts
degradation, enabling the prevention of unscheduled downtime events / failures. RCM (Reliability
Centered Maintenance) programs can be used for this.
10.7 Summary
Reliability has sometimes been classified as “how quality changes over time.” The
difference between quality and reliability is that quality shows how well an object performs its
proper function, while reliability shows how well this object maintains its original level of quality
over time, through various conditions. For example, a vehicle that is safe, fuel efficient,
and easy to operate may be considered high quality. If this car continues to meet these criteria
for several years, and performs well and remains safe even when driven in inclement weather,
it may be considered reliable.
10.8 Keywords
MIL-STD-781 - A widely used military standard for reliability testing
3. Evans, J., and Lindsay, W.M., The Management and Control of Quality, 8th
Edition, South Western, 2012.
4. Evans, J., Quality Management, Organisation and Strategy, 6th Edition, Cengage
International, 2011.
LESSON - 11
QUALITY STANDARDS
Learning Objectives
Structure
11.1 Introduction
11.4 Certification
11.10 Summary
11.11 Keywords
11.1 Introduction
The world has witnessed the growth of quality management system standards such as
ISO 9000 (by the end of 2005, 776,608 certifications had been awarded across 161 countries
worldwide; ISO Survey, 2005) and the emergence of Total Quality Management (TQM),
Total Productive Maintenance (TPM), Just in Time (JIT), Business Process Re-engineering
(BPR), business excellence, lean thinking, six sigma, etc. In continuation of this, in today's
business environment, the adoption and application of standardized quality management
system models such as ISO 9000 and TQM are considered to be among the most important
phenomena in total quality management development and globalization (Dale et al., 2001;
Ruzevicius et al., 2004). TQM and quality management system implementation has had the
highest positive impact on the quality improvement of companies' operations and products
(Adomaitiene and Ruzevicius, 1999). In the light of this, it is vital for organizations to develop
or adopt an effective quality management system such as ISO 9000, which also combines the
main TQM principles (Rohitratana and Boon-Itt, 2001).
Approach
The standard has completely abandoned the “chimney approach” to auditing and uses a
process-oriented approach. ISO 9001:2008 examines about 21 processes in its scope. In fact,
the Quality Management System as a whole is a process. The process approach is the elemental
core or groundwork of ISO's perspective of a Quality Management System. According to this
approach, a QMS can be thought of as a single large process that uses many inputs to generate
many outputs.
The revised ISO 9001 and 9004 are being designed to constitute a “consistent pair” of
standards. Their structure and sequence will be identical in order to facilitate an easy and
useful transition between them. The primary aim of the “consistent pair” is to relate modern
quality management to most of the processes and activities of an organization, including the
promotion of continual improvement and achievement of customer satisfaction. Furthermore, it
is intended that the ISO 9000 standards have global applicability. Major changes in the revised
ISO 9000 standards are: increased focus on top management commitment and customer
satisfaction, the emphasis on processes within the organization, and the introduction of continual
improvement concepts.
In this way, all organizations, whether private or public, large or small, producing
manufactured goods, services, or software, are being offered tools with which to achieve internal
and external benefits. In the ISO 9000 family there will be a single Quality Management
Requirements standard that is clear, concise and universal.
As we move to the next millennium, the ISO 9000 family of quality management system
standards is being updated to reflect a modern understanding of quality.
ISO 9000 was published in 1987, and then revised in 1994 and 2000. "ISO 9000:2000" is
used to describe the whole family of standards beginning with 900x. It is important to have
standard operating procedures in the global market. With millions of ISO 9000 users worldwide,
it is imperative that the introduction of these standards be as seamless as possible. In pursuit of
these goals, ISO/TC 176/SC 2, which handled the revision of the ISO 9001 and ISO 9004
standards, has developed an introduction plan to facilitate the successful launch of the new
standards.
As an initial step along the road to the ISO 9000:2000 revisions, TC 176 developed a
consensus on a set of quality management principles (QMPs). The principles were developed
after research of the quality concepts in use around the world. Many input sources were
considered.
Eight principles resulted from this work and they have been used as a foundation for the
revisions (“The Eight Quality Management Principles”). These principles appear in both ISO
9000 and ISO 9004.
While the principles were a basis for developing ISO 9001, they do not formally appear in
that document. Each principle has a place within the ISO 9001 requirements, but the extent of
application to ISO 9001 is quite limited compared to its application in the new ISO 9004. ISO
9004 uses each principle fully to help organizations drive for excellence.
Customer focus.
Leadership.
Involvement of people.
Process approach.
System approach to management.
Continual improvement.
Factual approach to decision making.
Mutually beneficial supplier relationships.
2. organizations seeking confidence from their suppliers that their product requirements will
be satisfied;
5. those internal or external to the organization who assess the quality management system
or audit it for conformity with the requirements of ISO 9001 (e.g. auditors, regulators,
certification/registration bodies);
6. those internal or external to the organization who give advice or training on the quality
management system appropriate to that organization;
There are many different ways of applying the ISO 9000:2000 quality management
principles. The nature of the organization and the specific challenges it faces will determine
how to implement them.
The ISO 9000:2000 series consists of only one specification standard, ISO 9001, compared
with the older ISO 9000:1994 series, which comprised three specification standards (ISO 9001,
ISO 9002 and ISO 9003) and their relevant guidelines. The ISO 9001:1994 series was designed
to remain in effect until December 15, 2003, by which time the first surveillance audits for
ISO 9001:2000 would be completed.
Each of the three core documents (ISO 9000, ISO 9001 and ISO 9004) has its own role in
the family. ISO 9001 will remain the most used of the three, but the other two are useful
companions that should not be ignored.
Importance
ISO 9001:2008 is focused on the customer. Both from the standpoint of customer
expectations and customer satisfaction, the standard insists that you recognize the voice of the
customer. ISO 9001 evaluates the effectiveness and suitability of your quality management
system, and identifies and implements improvements. It is perhaps the concept of 'continual
improvement' that is most relevant to ISO 9000:2000.
'Employees' will benefit from better working conditions, increased job satisfaction, improved morale, and stability of
employment. ‘Owners and investors’ will benefit from an increased return on investment, larger
market share, increased profits and improved operational results, and ‘society’ will gain from
the fulfillment of legal and regulatory requirements, and reduced environmental impacts.
11.4 Certification
The International Organization for Standardization (ISO) does not certify organisations
itself. Numerous certification bodies exist, which audit organisations and, upon success, issue
ISO 9001 compliance certificates. Although commonly referred to as “ISO 9000” certification,
the actual standard to which an organization’s quality management system can be certified is
ISO 9001:2015 (ISO 9001:2008 expired around September 2018).
Many countries have formed accreditation bodies to authorize (“accredit”) the certification
bodies. Both the accreditation bodies and the certification bodies charge fees for their services.
The various accreditation bodies have mutual agreements with each other to ensure that
certificates issued by one of the accredited certification bodies (CB) are accepted worldwide.
Certification bodies themselves operate under another quality standard, ISO/IEC 17021, while
accreditation bodies operate under ISO/IEC 17011.
Where major nonconformities are identified, the organization will present an improvement
plan to the certification body (e.g., corrective action reports showing how the problems will be
resolved); once the certification body is satisfied that the organization has carried out sufficient
corrective action, it will issue a certificate. The certificate is limited by a certain scope (e.g.,
production of golf balls) and will display the addresses to which the certificate refers.
An ISO 9001 certificate is not a once-and-for-all award but must be renewed at regular
intervals recommended by the certification body, usually once every three years. There are no
grades of competence within ISO 9001: either a company is certified (meaning that it is committed
to the method and model of quality management described in the standard) or it is not. In this
respect, ISO 9001 certification contrasts with measurement-based quality systems.
This survey is the largest and most wide-ranging of its kind to be conducted in this country.
It covers well over 1,200 ISO 9000-certified organizations of all sizes and industry sectors,
registered by different certification bodies. Analysis of the survey data revealed that ISO 9000
implementation has benefited organizations through: a better understanding of the processes and activities
being performed; a better understanding of responsibilities and authorities; and improved linkage across the organization.
ISO 9000 is one of the most influential initiatives that grew from the quality movement of
the 1980s (Poksinska et al., 2002), and one of the quality strategies most frequently
implemented by companies across the world (ISO, 2006). Moreover, ISO 9000 has become a
subject of focus in many developing countries, including India. The literature offers
many diverse opinions on ISO 9000 in different countries, but little empirical research has been
carried out in India concerning ISO implementation issues. More specifically, the current study
attempts to establish the applicability of ISO 9000 to Indian manufacturing organizations that
were successful in obtaining ISO registration. It specifically compares the two stages,
pre-implementation and post-implementation.
Background
The International Organization for Standardization (ISO) established its technical committee
on quality management and quality assurance (TC 176) for general application in 1979. It published the ISO 9000 standards in 1987. The ISO 9000 standards
were revised in 1994 and 2000.
The three versions of the standard, i.e. ISO 9001, ISO 9002 and ISO 9003, were integrated
into a single norm called ISO 9001:2000. This standard is valid for any organization. Up to
2003, the three previous standards and the new one co-existed, giving certified
companies time to adapt their quality systems to the requirements of the new version. From 2004, all
companies had to follow ISO 9001:2000 for certification. ISO 9000 currently includes three
quality standards: ISO 9000:2000, ISO 9001:2000, and ISO 9004:2000. ISO 9001:2000 presents
requirements, while ISO 9000:2000 and ISO 9004:2000 present guidelines. Following a meeting
of ISO's Technical Committee TC 176 in Helsinki, Finland, from June 11th to 15th, 2007, publication
of the new version of ISO 9001 was brought forward from 2009 and is now scheduled for
August 2008.
In March 1992, BSI Group published the world’s first environmental management systems
standard, BS 7750, as part of a response to growing concerns about protecting the
environment. Prior to this, environmental management had been part of larger systems such
as Responsible Care. BS 7750 supplied the template for the development of the ISO 14000
series in 1996, which has representation from ISO committees all over the world. As of 2017,
more than 300,000 certifications to ISO 14001 can be found in 171 countries.
Key takeaways
ISO 14000 is a set of rules and standards created to help companies address their
environmental impact.
ISO 14000 certification can be used as a marketing tool for engaging environmentally
conscious consumers.
ISO 14000 is part of a series of standards that address certain aspects of environmental
regulations. It's meant to be a step-by-step format for setting and then achieving environmentally
friendly objectives for business practices or products. The purpose is to help companies manage
processes while minimizing environmental effects, whereas the ISO 9000 standards from 1987
were focused on the best management practices for quality assurance. The two can be
implemented concurrently.
ISO 14000 includes several standards that cover aspects of the management practices inside
facilities, the immediate environment around the facilities, and the life cycle of the actual product.
This includes understanding the impact of the raw materials used in the product, as well as
the impact of product disposal.
The most notable standard is ISO 14001, which lays out the guidelines for putting an
environmental management system (EMS) in place. Then there’s ISO 14004, which offers
additional insight and specialized standards for implementing an EMS.
ISO 14000 certification can be achieved by having an accredited auditor verify that all
the requirements are met, or a company may self-declare. Obtaining the ISO 14000 certification
can be considered a sign of a commitment to the environment, which can be used as a marketing
tool for companies. It may also help companies meet certain environmental regulations.
The other benefits include being able to sell products to companies that use ISO 14000-
certified suppliers. Companies and customers may also pay more for products that are considered
environmentally friendly. On the cost side, meeting the ISO 14000 standards can help reduce
costs, as it encourages the efficient use of resources and the limiting of waste. This may lead to
finding ways to recycle products, or new uses for previously disposed-of byproducts.
ISO 14000 shares its internal structure with its predecessor, ISO 9000, the international
standard of quality management, which served as its model, and the two can be implemented side by side. As with
ISO 9000, ISO 14000 acts both as an internal management tool and as a way of demonstrating
a company's environmental commitment to its customers and clients.
ISO 14000 is similar to ISO 9000 quality management in that both pertain to the process
of how a product is produced, rather than to the product itself. As with ISO 9001, certification is
performed by third-party organizations rather than being awarded by ISO directly. The ISO
19011 and ISO 17021 audit standards apply when audits are being performed.
The requirements of ISO 14001 are an integral part of the European Union’s Eco-
Management and Audit Scheme (EMAS). EMAS’s structure and material are more demanding,
mainly concerning performance improvement, legal compliance, and reporting duties. The current
version of ISO 14001 is ISO 14001:2015, which was published in September 2015.
Prior to the development of the ISO 14000 series, organizations voluntarily constructed
their own EMSs, but this made comparisons of environmental effects between companies difficult;
therefore, the universal ISO 14000 series was developed. An EMS is defined by ISO as: “part of
the overall management system that includes organizational structure, planning activities,
responsibilities, practices, procedures, processes, and resources for developing, implementing,
achieving, and maintaining the environmental policy.”
The ISO 14000 family includes most notably the ISO 14001 standard, which represents
the core set of standards used by organizations for designing and implementing an
effective environmental management system (EMS). Other standards in this series include ISO
14004, which gives additional guidelines for a good EMS, and more specialized standards
dealing with specific aspects of environmental management. The major objective of the ISO
14000 series of norms is to provide “practical tools for companies and organizations of all kinds
looking to manage their environmental responsibilities”.
The ISO 14000 series is based on a voluntary approach to environmental regulation. The
series includes the ISO 14001 standard, which provides guidelines for the establishment or
improvement of an EMS. The standard shares many common traits with its predecessor, ISO 9000.
Getting a Six Sigma certification usually requires individuals to have a certain level of
experience and to demonstrate their proficiency. The certification can help you become a specialist in
process improvement and will enhance your credibility.
The Six Sigma certification comes in various skill levels: White Belt, Yellow Belt, Green
Belt, Black Belt, and Master Black Belt. These certifications can be obtained through an
accreditation body like the American Society for Quality (ASQ).
White Belt
This is the basic level of certification, dealing with basic Six Sigma concepts. White
belts support change management in an organization and engage with local problem-solving
teams that assist projects.
Yellow Belt
At this level, you know the specifics of Six Sigma, and how and where to apply it. You will
support project teams on problem-solving tasks.
Green Belt
At this level, you understand advanced analysis and can resolve problems that affect
quality. Green belts lead projects and assist black belts with data collection and analysis.
Black Belt
Black belts are experts and agents of change. They provide training in addition to leading
projects.
Master Black Belt
This is the highest level of Six Sigma achievement. At this level, you will shape strategy,
develop key metrics, act as a consultant, and coach black and green belts.
Beyond simply adding another certification to your resume, there are many
advantages that make Six Sigma certifications useful for companies and individuals:
Improved productivity
Reduced costs
In 1995, Jack Welch made Six Sigma a key part of General Electric’s business strategy.
Since then, companies have used Six Sigma with notable success. We look at some of those
benefits below:
Improved productivity:
Pressed for space to manufacture new products, Allen Medical employed the DMAIC
methodology and lean tools to improve the production rate of arm boards. With their new
approach, they saved 45 seconds per arm board on average, and increased the number of
arm boards produced each hour from 5.3 to slightly over 6.
Reduced costs:
Defect reduction minimizes waste, and hence results in lower production costs and higher
profits. Failure to create a quality product can be costly: reworking or scrapping a substandard
product or service can significantly increase its cost. This is the true "cost of quality".
Implementing Six Sigma can help to streamline processes and improve customer
satisfaction. For instance, by applying the cross-functional process mapping
(CFPM) methodology, Citibank was able to identify wasteful steps in its processes and correct
them, with great results in customer satisfaction levels.
Whether you are in the software business offering services to clients or operate in the
food, hospitality or travel industry, service quality management is integral to managing customer
expectations and business growth. Service quality can relate to the service potential
(qualifications of the persons offering the service), the service process (quickness, reliability, etc.) or
the service result (meeting customer expectations).
Measurement of service quality relies on the customer's perception, which may differ
from the expected service. To determine the gap between the expected service and the perceived
service, several models are used, such as the Servqual model, the Rater model, e-Service Quality, etc.
The main dimensions of service quality determination are as follows:
Reliability – This is the ability to perform the service dependably and accurately, as
promised. In software service, it would be the correct technical functioning of the application
and various features such as GUI features, billing, product information etc.
Responsiveness – How quickly the services are rendered to the customer and the
promptness of service delivery. With respect to software services, it would be the ability to
respond to customer problems or give solutions.
Assurance – This is a measure of the ability to convey trust to customers and of the
courtesy extended to them. Software assurance involves the confidence the
customer has in handling the software application or navigating a site, and the trust placed
in the information provided, its clarity, reputation, etc.
Empathy – Giving personalized attention, understanding the requirements and caring for
the customers. The software service would include customized applications, one-to-one
customer attention, security privacy and understanding customer preferences.
Tangibles – The physical attributes like appearance, equipment, facilities etc. When we
speak of software services, the tangibles would be aesthetics of the software application
or website, navigation features, accessibility, flexibility etc.
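The gap between expected and perceived service across these five dimensions can be sketched numerically. The following is a minimal, hypothetical Servqual-style gap calculation: the 1-7 ratings and the unweighted averaging are illustrative assumptions, not part of the model's formal survey instrument.

```python
# Servqual-style gap analysis: gap = perception - expectation for each of
# the five dimensions. All ratings below are hypothetical 1-7 scores.

DIMENSIONS = ["Reliability", "Responsiveness", "Assurance", "Empathy", "Tangibles"]

def gap_scores(expectations, perceptions):
    """Per-dimension gaps and the overall (unweighted) mean gap."""
    gaps = {d: perceptions[d] - expectations[d] for d in DIMENSIONS}
    overall = sum(gaps.values()) / len(gaps)
    return gaps, overall

expectations = {"Reliability": 6.5, "Responsiveness": 6.0, "Assurance": 6.2,
                "Empathy": 5.8, "Tangibles": 5.5}
perceptions = {"Reliability": 5.9, "Responsiveness": 6.1, "Assurance": 6.0,
               "Empathy": 5.2, "Tangibles": 5.7}

gaps, overall = gap_scores(expectations, perceptions)
for d in DIMENSIONS:
    # A negative gap means the perceived service falls short of expectations.
    print(f"{d:15s} gap = {gaps[d]:+.1f}")
print(f"Overall mean gap = {overall:+.2f}")
```

In practice the dimensions are often weighted by their importance to the customer before averaging; the unweighted mean here keeps the sketch simple.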
Software quality measurement and assurance involves processes that check if the
developed software meets the standardized specifications and works accurately. SQA (Software
Quality Assurance) is an integral part of the complete software development life cycle and
regularly measures the different attributes of the software before it’s released. This way the
businesses ensure that high-quality software services are delivered to the customer on-time.
Quality control is achieved through software testing, verification and validation, and other
processes to detect bugs or errors and fix them appropriately. Let us now look at some of the
aspects of software testing, defect tracking and measurement for better understanding of software
quality measurement.
Software testing is the process of evaluating the performance of the software by providing
inputs and observing the outputs thereby ensuring that the application meets the technical,
functional, user and business requirements as specified.
Testing is part of the software development cycle and involves verification of the code,
identifying defects or bugs and evaluating the different functionalities like usability, security,
compatibility, performance and installation etc.
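The input/output view of testing described above can be made concrete with a small sketch. The `apply_discount` function below is a hypothetical example, not from the text: each assertion pairs a known input with its specified output, and the final check verifies that invalid input is rejected rather than silently accepted, which is the essence of defect detection.

```python
# A minimal sketch of functional testing: feed known inputs to a unit of
# code and assert that the observed outputs match the specification.
# apply_discount is a hypothetical unit under test.

def apply_discount(price, percent):
    """Return price reduced by percent, rounded to 2 decimal places."""
    if not (0 <= percent <= 100):
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Verification: each case pairs an input with its expected output.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(59.99, 0) == 59.99

# Defect detection: invalid input must raise an error, not pass silently.
try:
    apply_discount(100.0, 150)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for out-of-range percent")

print("all tests passed")
```

Real test suites organize such cases with a framework (e.g. `unittest` or `pytest`) and add coverage for usability, security, and performance, as the text notes.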
11.10 Summary
The needs of modern businesses are numerous and manifold. The ISO 9000 standards
are applicable to all areas of international business, because they are international in scope.
Furthermore, the ISO 9001:2008 focus on customer satisfaction ensures that the supplier's product
precisely meets the needs of the customer. The revised standards will be of specific help to
organizations wishing to go beyond simple compliance with Quality Management System
requirements for the sake of certification. Present-day business-wide needs, whether quality-
related or not, are exacting. In line with this reasoning, the ISO members
thought it essential to introduce structural changes to the standards, while maintaining the
basic requirements of the original standards.
11.11 Keywords
TC 176 - ISO's Technical Committee on quality management and quality assurance
3. Evans, J., and Lindsay, W.M., The Management and Control of Quality, 8th
Edition, South Western, 2012.
4. Evans, J., Quality Management, Organisation and Strategy, 6th Edition, Cengage
International, 2011.
LESSON - 12
Human Resource Issues in Quality
Learning Objectives
Discuss Leadership
Structure
12.1 Introduction
12.4 Leadership
12.8 Summary
12.9 Keywords
12.1 Introduction
In the 1950s and 1960s, Japanese goods were synonymous with cheapness and low
quality, but over time their quality initiatives began to be successful, with Japan achieving high
levels of quality in products from the 1970s onward. For example, Japanese cars regularly top
the J.D. Power customer satisfaction ratings. In the 1980s Deming was asked by Ford Motor
Company to start a quality initiative after they realized that they were falling behind Japanese
manufacturers. A number of highly successful quality initiatives have been invented by the
Japanese.
Working in teams is one of the current popular management techniques and it is becoming
increasingly common for academic librarians to work with others on campus to solve problems,
deliver services, develop information resources, create facilities and formulate policies.
Collaborative teams of librarians and computing professionals have created campus websites,
offered workshops for staff and users, planned labs and instructional technology centers and
developed joint service desks.
Teams of faculty, librarians, instructional technologists and others have created network-
based learning experiences incorporating electronic information resources as an integral aspect
of the curriculum. Faculty, student, librarian, and technologist teams have developed publishing
projects such as electronic journals, electronic dissertations and databases. Teams of librarians,
technologists and assessment experts are working to establish measures of the use and value
of technology and electronic information resources on campus.
Importance of Teams in TQ
Teams are everywhere in TQ organizations: at the top and bottom, and in every function
and department in between. Why are there so many teams? The TQ philosophy recognizes the
interdependence of the various parts of the organization and uses teams as a way to coordinate
work. Teamwork enables the various parts of the organization to work together in meeting customer
needs that can seldom be fulfilled by employees limited to one specialty. Teams promote equality
among individuals, encouraging a positive attitude and trust.
The diversity inherent in teams often provides unique perspectives on work, spontaneous
thought, and creativity. In addition, teams develop a greater sense of responsibility for achieving
goals and performing tasks. In short, teams provide a variety of benefits that are not derived
from individuals working alone.
TQ organizations recognize that the potential contributions of employees are much greater
than in the traditional organization; teams are an attempt to take advantage of this potential.
Further, the competitive environment of modern business requires flexible, fast reactions to
changes in customer demands or technological capacity. Teams can provide the capacity for
rapid response.
• Steering committees (or quality councils) – management teams that lead an organization
and provide direction and focus.
• Problem-solving teams – teams of workers and supervisors that meet to address workplace
problems involving quality and productivity, or ad hoc teams with a specific mission, such
as organizational design teams that act as architects of change.
• Natural work teams – people who work together every day to perform a complete unit of
work.
• Self-managed teams – work teams that are empowered to make and control their own
decisions.
• Virtual teams – teams whose members communicate by computer, take turns as leader,
and jump in and out as necessary. Virtual teams are playing an increasingly
important role because of the Internet and electronic communication.
Steering Committees
Most organizations practicing total quality have a steering committee, called a quality
council by Juran and a quality improvement team by Crosby. Steering committees are responsible
for establishing policy for TQ and for guiding the implementation and evolution of TQ throughout
the organization. The top manager of the organization is usually on the steering committee, as
is the manager with overall responsibility for quality- for example, the Vice President /Director
of Total quality.
The steering committee may meet fairly often when a TQ effort is getting started, but
usually meets monthly or quarterly once things are under way. This group makes key
decisions about the quality process: how quality should be measured, and what structures and
approaches should be used to improve quality. The steering committee also periodically reviews
the status of TQ and makes the adjustments necessary to ensure customer satisfaction and
continuous improvement. In general, the steering committee has overall responsibility for the
progress and success of the TQ effort.
Problem-Solving Teams
The second, and probably most common, type of team used in TQ is the problem-solving
team. As the name implies, problem-solving teams work to improve quality by identifying and
solving specific quality-related problems facing the organization. Such teams are sometimes
referred to as corrective action teams or quality circles, although many organizations have
created their own names for them. The two basic types of problem-solving teams are departmental
and cross-functional.
Departmental Teams
These teams are limited in membership to employees of a specific department and are
limited in scope to problems within that department. Such groups typically meet once a week
for one to two hours and progress through a standardized problem-solving methodology. First
they identify a set of problems and select one to work on. Then they collect data about the
causes of the problem and determine the best approach to solving it. If the solution does not
require any major changes in procedures or substantial resources, the group can frequently
implement its own solution. If not, group members will make a presentation to
some level of management, requesting approval for their solution and the resources to implement
it. These teams typically remain relatively intact as they address a number of problems in
succession.
Cross-Functional Teams
Cross-functional teams are not unique to total quality. They are commonly used in new
product development, but are increasingly becoming a mainstay of quality programs. These
teams are similar in many ways to the departmental teams just discussed: they receive training
in problem solving, identify and solve problems, and either implement or recommended solutions.
The differences are that members of cross-functional teams come from several
departments or functions, deal with problems that involve a variety of functions, and typically
dissolve after the problem is solved. For example, a cross-functional team in a brokerage might
deal with problems in handling questions from clients.
The issues raised would not be limited to stocks, bonds, or mutual funds, so people from
all of these areas would be involved. Cross-functional teams make a great deal of sense in an
organization devoted to process improvement, because most processes do not respect functional
boundaries. If a process is to be comprehensively addressed, the team addressing it cannot be
limited, by either membership or charter, to only one function. To be effective, cross-functional
teams should include people from several departments: those who are feeling the effects of the
problem, those who may be causing it, those who can provide remedies, and those who can
furnish data.
Natural Work Teams
Natural work teams are organized to perform a complete unit of work, such as assembling
a motorcycle, creating circuit plans for a television set, or performing a market research study
from beginning to end. The "unit of work" need not be the final product, but may be some intermediate
component. Natural work teams replace, rather than complement, the traditional organization
of work. What is different in this work design is that work tasks are not narrowly
defined, as they would be on an assembly line, for instance. Team members share responsibility
for completing the job, are usually cross-trained to perform all work tasks, and often rotate
among them.
Self-Managed Teams
Although self-managed teams have been used for decades, (the SMT concept was
developed in Britain and Sweden in the 1950s, and one of the early companies to adopt it was
Volvo, the Swedish auto manufacturer), their popularity has increased in recent years, due in
part to their use in TQ. In the absence of a supervisor, SMTs often handle budgeting, scheduling,
setting goals, and ordering supplies. Some teams even evaluate one another’s performance
and hire replacements for departing team members. SMTs have resulted in improved quality
and customer service, greater flexibility, reduced costs, faster response, simpler job
classifications, increased employee commitment to the organization, and the ability to attract
and retain the best people.
Virtual Teams
Virtual teams are groups of people who work closely together despite being geographically
separated. Virtual teams rarely meet face-to-face; their primary interaction is through technologies
such as telephone, fax, shared databases and collaborative software systems, the Internet, e-
mail, and video conferencing. In 1998, over 8 million workers were members of such teams,
and this number has undoubtedly grown as new technology has proliferated. Virtual teams are
becoming important because of increasing globalization, flatter organizational structures, an
increasing shift to knowledge work, and the need to bring diverse talents and expertise to
complex projects and to customize solutions to meet market demands. For example, a product
design team in the United States can hand off its work to another team in Asia or Australia,
resulting in an almost continuous work effort that speeds up development time considerably.
12.4 Leadership
The leader's main responsibility is to oversee and supervise the effective functioning of
all the boxes and to maintain a balance among them. The model has two strengths: it gives
due importance to leadership, which represents the coordinating function, and it is useful for
organisations with less sophistication in their systemic thinking and in handling the larger
complexities of organisational dynamics.
Providing direction is the core responsibility, but leadership roles also include responsibilities
of persuasion, influence, serving followers, and acting as a role model. Though leadership
often overlaps with management practices, the two have definite differences as well. Leadership
is related to vision, using one's influence, persuasive communication skills, recognising people
through praise, and providing opportunities to learn new skills. Management deals with setting
objectives, task accomplishment, using the organisation's resources efficiently and effectively,
and rewarding people using extrinsic factors such as monetary rewards and elevation in status.
Quality leadership from a national perspective has changed over the past decades. After
the Second World War, Japan decided to make quality improvement a national imperative as
part of rebuilding their economy, and sought the help of Shewhart, Deming and Juran, amongst
others. W. Edwards Deming championed Shewhart’s ideas in Japan from 1950 onwards. He is
probably best known for his management philosophy establishing quality, productivity, and
competitive position. He has formulated 14 points of attention for managers, which are a high
level abstraction of many of his deep insights. They should be interpreted by learning and
understanding the deeper insights. These 14 points include key concepts such as:
• Create constancy of purpose toward the improvement of products and services
• Cease dependence on mass inspection to achieve quality
• Institute training and leadership on the job
• Drive out fear, so that everyone may work effectively
• Break down barriers between departments
In the past two decades this quality gap between competitive products and services has been
greatly reduced. This is partly due to the contracting (also called outsourcing) of
manufacturing to countries like China and India, as well as the internationalization of trade and
competition. These countries, among many others, have raised their own standards of quality
in order to meet international standards and customer demands. The ISO 9000 series of
standards is probably the best-known international standard for quality management.
Quality management ensures that leaders adopt a strategic overview of quality and focus on
the prevention, not the detection, of problems. Whilst it must involve everyone, to be successful it must
start at the top, with the leaders of the organisation. All senior managers must demonstrate their
seriousness and commitment to quality, and middle managers must, as well as demonstrating
their commitment, ensure they communicate the principles, strategies and benefits to the people
for whom they have responsibility. Only then will the right attitudes spread throughout the
organization.
Personal involvement and acting as role models for a culture of total quality
Developing clear and effective strategies and supporting plans for achieving the
mission and objectives
Organisational culture refers to the way things are done in an organisation and the manner
in which these norms and values are communicated and explicitly or implicitly followed. Culture
refers to the rules that one follows, both explicitly and implicitly, as one conforms to the norms
and to one's beliefs.
• Customer-driven quality
• Leadership
• Valuing employees
• Fast response
• Management by fact
• Partnership development
• Results focus.
Staff Management
The skills and attitudes of the staff are the most crucial factors that can make the name of the institution. It is certainly
true that, as long as an understandable human desire for development exists between
the managers and their staff, achieving total quality management is not an impediment but rather
a simple task. Managers need to develop an attitude of rewarding their staff for better
performance, which will boost the morale of staff in achieving Total Quality Management (Stuart,
2007). Effective cooperation and co-ordination among the staff and the leader is a basic necessity
for achieving quality goals in the library. The form of recognition should fit the accomplishment;
in other words, the value of the recognition should be commensurate with the value of the
accomplishment (Porter and Parker, 1993).
If quality management is about anything, it is about change. Change for the better
(improvement) and learning are crucial if an organization is to achieve a degree of excellence.
The three are intimately connected in that change management and learning are both necessary
for improvement to happen. Learning, however, is a more holistic concept covering culture,
attitudes and behaviours as well as mechanistic processes and short-term benefit. For sustainable
benefit organizations need to become learning organizations, continually challenging the status
quo and re-inventing how they do business at all levels.
“It is not the strongest species that survive, nor the most intelligent, but the ones most
responsive to change” - Charles Darwin.
There is no single best way of managing change; in fact, it is probably true to say that there is no single
solution. What this section seeks to do is outline the key aspects of the management of change
and give some 'pointers' in useful directions. The management of change is highly context-
sensitive. The approaches, tools and techniques that are appropriate will vary from one situation
to the next. You will ultimately develop your own models for change that work for you in your
situation. There are three aspects to consider:
Personal
In our dealings with others, it can be argued strongly that it is not possible to change others;
we can only change ourselves. If this is accepted, then it becomes clear that awareness of your
own role in the process is a key element of success.
In management textbooks much is made of ‘Leadership’ as a skill set to strive for and it is
true that the ability to play the role of a leader will help you in the management of change.
However, Leadership means significantly more than simple charisma, exhortation and the ability
to ‘motivate’. There are a number of elements of leadership that are useful characteristics for
the manager of change:
Congruence
‘Walking the talk’, ‘doing as you say’, and ‘saying as you do’. This is about demonstrating
your commitment to the change in everything that you do or say in the organisation.
Flexibility
An effective leader will readily adapt to the use of new tools, will support those around
them with a wide variety of approaches to everything that they do.
Facilitation
Being the leader does not always mean making the decisions. In most cases the process
of change will benefit from the people involved being given the opportunity to develop their own
solutions and to create their own meaning.
The effective leader will support this process by facilitating the group processes and
coaching individuals. Perhaps the key element of being a good leader is ‘knowing when and
how to ask for help’. This statement may seem to challenge many traditional views of leadership
and yet probably has more meaning in the context of a modern organization than the ‘military’
models of command on which much of the literature is based. Building on this alternative view
of leadership it is possible to begin to see how there can be more than one leader in a change
project, each providing different aspects of the necessary support and each coming to the fore
or working in a supporting role at different stages of the project.
A change isn’t a change until people are doing things differently. People in the organisation
can then be seen as being the principal enabler in change.
In discussions on the management of change much time is taken up talking about how
people are uncomfortable with change. There are two forces at work here, firstly you’re past.
Good or bad? Secondly the very nature of change is that it is likely to involve uncertainty
and this is often the aspect of change that most unsettles people. Whatever the causes you can
be sure that if you seek to create change then along the way you can expect to meet some
strong reactions from individuals – often emotional and sometimes apparently irrational. As
Machiavelli once said (paraphrased from his book The Prince):
“There is no greater task than the development of change since the change agent can
expect violent criticism from those who feel that they may lose out as a result of the change and
only lukewarm support from those who expect to benefit.”
The starting point is to begin to develop a high degree of self-awareness – what is your
motivation, why do you maintain the beliefs and values that you hold, what drives your behaviour
and what effect does this have on others. Remember also that you are dealing with individuals
– terms such as ‘the shop floor’, ‘the workers’, ‘the management’, ‘the front office’ and so on are
generalizations that hide a multitude of attitudes, emotions, motivations and behaviours. They
do not describe the individuals that work within these units.
PDSA Model
The team has tested their improvement ideas through small tests of change (Plan-Do-
Study-Act Cycles or PDSAs) and is confident that the changes are an improvement – as
demonstrated through an analysis of their data. During the implementation phase, changes are
formally applied to everyday practice in the unit or department where the improvement effort is
taking place. The lessons learned by teams during their small tests of change are essential to
the continued success of these improvements during the implementation phase.
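The Plan-Do-Study-Act cycle described above can be sketched in code as a simple loop: each cycle records a planned small test, a measured result, and a decision to adopt or adapt the change. This is only an illustrative sketch under assumed names and thresholds, not a standard QI tool; the function and the example figures are hypothetical.

```python
# Hypothetical sketch of iterating PDSA cycles until a measured
# indicator reaches a target; names and numbers are illustrative only.

def run_pdsa_cycles(baseline, target, test_change, max_cycles=5):
    """Repeat Plan-Do-Study-Act until the indicator meets the target."""
    value = baseline
    history = []
    for cycle in range(1, max_cycles + 1):
        planned = f"cycle {cycle}: test change on a small scale"  # Plan
        value = test_change(value)                                # Do
        improved = value <= target                                # Study
        decision = "adopt" if improved else "adapt"               # Act
        history.append((planned, value, decision))
        if improved:
            break
    return history

# Example: suppose each small test reduces a defect rate by 20%.
log = run_pdsa_cycles(baseline=0.15, target=0.10,
                      test_change=lambda v: v * 0.8)
```

Each tuple in `log` captures one small test of change, so the lessons learned in early cycles remain available when the change is formally implemented.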
To effectively implement change and have the change “stick” or be sustained over the
long term, the following concepts should be employed. These concepts will be explained further
below.
• Engagement of others
• Communication
• Training
• Measurement
• Sustainability plans
Engage Others
Front-line staff members play an important role in each phase of an
improvement initiative. The continual support and a regular evaluation of the needs of those
working within the changed processes are necessary for any improvement to be successful.
Those individuals affected by the change are simultaneously the most critical resource, support,
barrier and risk factor when managing change.
The uncertainties of change can evoke strong emotions in those that are affected. People
may feel frustration, anger, despair, acceptance, enthusiasm and elation over the course of the
improvement initiative, as their work processes change. How people feel will be dependent on
whether they accept the change willingly or reluctantly, the level of consultation that occurs, the
effect of the change on their work, and the support provided by organizational and team
leadership. Understanding why people feel differently about an improvement initiative may help
the team leader ensure that change is introduced in a manner that anticipates, acknowledges
and responds to the concerns of everyone affected.
When planning an implementation strategy, the QI team and the organizational leaders
should consider the fact that staff may:
• Feel that there are other, more important issues to be dealt with
• Not agree with the proposed change, or feel that there is a better way to achieve the
outcome
• Feel that the change process implies criticism of the way they do things
• Feel that they have done this before and nothing changed
• Feel that there will be extra work for them as a result of the changes.
Changing the way people think about change is an important aspect of any improvement
initiative. Change should not be daunting to your coworkers.
Rather, it is about having the tools, techniques and confidence to work with colleagues to
try something different. It is about understanding the possibilities of thinking differently and
aiming to make practical improvements for patients and staff. If staff are engaged early and
supported throughout the development of the improvement, they are likely to become champions
of that improvement and embed its processes in their jobs. Research and experience demonstrate
that support from organizational leadership is essential to successful quality improvement efforts.
Administrative leaders who work directly on, or indirectly support, the improvement project
must ensure that all barriers to success are removed and project priorities are clearly identified
and communicated. Communicate implementation plans with organizational leadership before
rolling out changes to ensure that they are able to support the implementation plans
wholeheartedly. In addition to formal leaders, the QI team should consider who needs to be on
board for changes to happen. Engage those that ultimately influence whether or not something
happens – whether these individuals are in management positions or not.
Communication
• Share knowledge
Communications should take place regularly and should reach all who are affected by the
proposed change - staff, consumers, as well as internal and external stakeholders. Try new
communication strategies over the course of the implementation process. To assist the QI team
in planning when to communicate, what to communicate, and who to communicate to, the
following points should be considered:
1. The audience
2. The objectives
3. The message
QI teams need to continuously share their improvements and demonstrate how the change
has positively impacted the customer/client. The change will be more successful, and people
will be more committed to the change, if they truly believe it will improve things. It is important to
highlight the “what’s in it for me,” for everyone affected by the changes being made. Engaging
staff and demonstrating how improvements are achieved are important to ensuring support for
future changes.
Effective, early, and frequent communication will give those affected by the change some
ownership of the project and a vested interest in its success. To learn more about effective
communication, please refer to Health Quality Ontario’s Communication Planning Tool.
Formalize and Standardize Change
Once a change or new process has been implemented, it must be monitored to ensure it is
performing as expected. It also helps to completely eliminate old methods of doing things where
possible (e.g., destroying old forms or erasing old software), which is known as “error proofing.”
Behaviours that support new processes should be encouraged and reinforced to make the change
the ‘norm’ in your work culture. To make changes “stick,” information about the new processes
should be built into the orientation of new employees, into job descriptions, and into policies.
Visual management is a form of standardization.
Sustainability
Sustainability is achieved when the new ways of working and the resulting improved
outcomes become the norm. Not only have the processes and outcomes changed, but the
thinking and attitudes behind them are fundamentally altered. In other words, the change has
been integrated into the day-to-day, rather than something ‘added on.’ Sustainability means
holding the gains and evolving as required, without reverting to the old ways of doing things.
Teams can become frustrated if they experience the “improvement evaporation effect,” or a
lack of sustainability.
The improvement evaporation effect occurs when the team has gone to great lengths to
achieve improvement in a process, only to find that they are not able to retain or sustain the
improvements that they had made. Typically, the completion of an improvement effort is
celebrated, but little is done to celebrate the maintenance of that improvement. Make definite
plans in advance to celebrate continued success and to reflect on the team’s progress. Set a
new aim or goal and try to improve even more. Celebrate and communicate periodically the fact
that the indicator has stayed at the improved level. For example, post a sign in the break room
stating: “Celebrating six months of keeping readmission rates below 10%.” Perhaps count the
number of days without an incident.
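The kind of sustainability monitoring suggested above, counting how long an indicator has stayed at its improved level, can be sketched as a small check. The 10% target echoes the example in the text, but the monthly figures and the function name are invented for illustration.

```python
# Illustrative check of how many consecutive recent months an indicator
# has stayed below a target; the readmission figures here are made up.

def months_sustained(monthly_rates, target):
    """Count consecutive most-recent months with the rate below target."""
    count = 0
    for rate in reversed(monthly_rates):  # walk backwards from the latest month
        if rate < target:
            count += 1
        else:
            break
    return count

rates = [0.12, 0.11, 0.09, 0.08, 0.09, 0.07, 0.08, 0.09]
streak = months_sustained(rates, target=0.10)
if streak >= 6:
    print(f"Celebrating {streak} months of keeping readmission rates below 10%")
```

A check like this makes the “hold the gains” milestone concrete enough to celebrate and communicate.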
Consider applying to conferences or poster presentations to celebrate your work with the
community. Make your efforts a continuous improvement process, without allowing it to settle
into simple maintenance mode. Find ways to renew the passion for improvement and creativity
that was part of the early days of the project. In addition to carefully planning the implementation
of a change, the QI team may wish to utilize a diagnostic tool created by the United
Kingdom’s National Health Service (NHS).
The Sustainability Model can assist QI teams to “identify strengths and weaknesses in
[their] implementation plan and predict the likelihood of sustainability for [their] improvement
initiative.” The Sustainability Model addresses the ten factors relating to process, staff and
organizational issues that play a role in sustaining change in a health care organization. The
Sustainability Model was developed in consultation with: front line teams, improvement experts,
senior administrative and clinical leaders and subject area experts from academia and other
industries.
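A diagnostic of this kind can be sketched as a simple weighted score over ten factors. The factor names and weights below are hypothetical stand-ins; the NHS model’s actual factors and weighting scheme are not reproduced here.

```python
# Hypothetical scoring sketch in the spirit of a ten-factor sustainability
# diagnostic. Factor names and weights are illustrative, NOT the NHS values.

FACTOR_WEIGHTS = {
    "benefits beyond helping patients": 10,
    "credibility of the evidence": 10,
    "adaptability of the process": 10,
    "monitoring of progress": 10,
    "staff involvement and training": 10,
    "staff attitudes": 10,
    "senior leadership engagement": 10,
    "clinical leadership engagement": 10,
    "fit with organizational goals and culture": 10,
    "infrastructure": 10,
}

def sustainability_score(ratings):
    """Weighted score out of 100; ratings are 0.0-1.0 per factor."""
    return sum(FACTOR_WEIGHTS[f] * ratings.get(f, 0.0) for f in FACTOR_WEIGHTS)

# Example: a team rating itself 0.8 on every factor scores 80 out of 100.
ratings = {factor: 0.8 for factor in FACTOR_WEIGHTS}
score = sustainability_score(ratings)
```

Scoring each factor separately helps a QI team see which process, staff, or organizational issues are the weak points in their implementation plan.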
The NHS website states that: “The development of the Model is based on the premise
that the changes individuals and teams wish to make fulfill the fundamental principle of improving
the patient experience of health services. Another important impact that can be gained by using
the Model is the effective achievement of change which creates a platform for continual
improvement. By holding the gains, resources - including financial and most importantly human
resources - are effectively employed rather than being wasted because processes that were
improved have reverted to the old way or old level of performance.”
It is estimated that fewer than 40 percent of health care improvement initiatives successfully
transition from adoption to sustained implementation that is spread to more than one area of an
organization.
12.8 Summary
Change is an inevitable aspect of life. It is the essence of any entity that has life or whose
existence finds validity in the presence of life. Even ‘time’ would lose its significance in the
absence of change. As change manifests itself in a variety of ways, it does not hold the same
connotation across people, situations, and contexts. Times change, people change, things
change, situations change, and so do organisations. Globalisation of economies and resultant
competition, liberalisation, deregulation, privatisation, mergers and acquisitions, and the
development of internet and web-based technologies have changed the landscape in which
organisations and businesses used to operate.
Speed and accessibility across the globe have brought about changes in organisational
paradigm and executives have been confronted with unprecedented challenges of change during
the past decade. They are trying to grapple with the impact of the above mentioned factors at
their workplaces. The most important challenge before a manager today is how to manage
change in a changing world with knowledge of traditional, rigid and static management systems
and processes. Identifying the need for organisational change and leading the organisation
through change is one of the most critical and challenging responsibilities in an organisation
today.
12.9 Keywords
SMT - Self-Managed Teams
PDSA - Plan-Do-Study-Act
12.10 Review Questions
2. Discuss Leadership.