
OPHRM – 13/02/2021

Theory of 10/10 to 1/1


10/10 vs 1/1
- 10/10: 10 years to develop a technology and 10 years to market it, with ample time to plan, strategize and market.
- 1/1: today's world, where people take 1 year to plan, strategize, launch and market the product.
- 10/10 rewards focus; 1/1 rewards adaptability, agility and learning.
- 10/10 relies on an analytical mindset; 1/1 on attitude.
- 10/10 runs on data analytics; 1/1 on innovation.

What is Design Thinking and Why Is It So Popular?


Design Thinking is an iterative process in which we seek to understand the user, challenge assumptions, and redefine problems in
an attempt to identify alternative strategies and solutions that might not be instantly apparent with our initial level of
understanding. At the same time, Design Thinking provides a solution-based approach to solving problems. It is a way of thinking and
working as well as a collection of hands-on methods.
Design Thinking revolves around a deep interest in developing an understanding of the people for whom we’re designing the
products or services. It helps us observe and develop empathy with the target user. Design Thinking helps us in the process of
questioning: questioning the problem, questioning the assumptions, and questioning the implications. Design Thinking is extremely
useful in tackling problems that are ill-defined or unknown, by re-framing the problem in human-centric ways, creating many ideas
in brainstorming sessions, and adopting a hands-on approach in prototyping and testing. Design Thinking also involves ongoing
experimentation: sketching, prototyping, testing, and trying out concepts and ideas.

https://www.interaction-design.org/literature/article/what-is-design-thinking-and-why-is-it-so-popular

Operations
Eg: The McDonald brothers discovered a mystery
The How of McD:
1. Drive-in concept
2. Convenience
3. Yesteryear's fine dining
4. Ford reducing the price of cars, so people bought cars to go for a drive
5. Spotting the unmet customer requirement

Journey from MYSTERY to HEURISTICS to ALGORITHM – KNOWLEDGE FUNNEL


Design thinking is balancing analytical and intuitive thinking in a dynamic way. Analytical and
intuitive thinking are oppositions consisting of several characteristics:
- Analytical thinking: using deductive and inductive logic; reliability; repetition
- Intuitive thinking: using abductive logic; validity; creativity and innovation.
- The knowledge funnel

In an interesting way, Martin combines these two thinking modes in his model of knowledge creation, which he calls the knowledge funnel. There are three stages in the funnel:
- Mystery: a problem to solve; a question; a chaos of data; a surprise; a wicked problem.
- Heuristic: "a rule of thumb that helps narrow the field of inquiry and work the mystery down to a manageable size".
- Algorithm: a fixed formula, a tested method or procedure.
The knowledge funnel is intended as a general model of knowledge creation. The challenge is how to drive through the funnel from mystery to heuristic to algorithm. The great promise is that "the firms that master it will gain a nearly inexhaustible, long-term business advantage" (p. 6-7). Design thinking is exactly the form of thought that gives this mastery.
Although Martin is mainly dealing with business, he also refers to the importance of the knowledge funnel model and design thinking in science. I agree. Of course, scientific thinking includes other elements too, but problem solving and developing a heuristic and an algorithm are an essential part of the scientific method.
Source
https://www.thehindu.com/business/Creating-value-across-the-knowledge-funnel/article16838735.ece
https://businessisdesign.wordpress.com/design-thinking-seminars/week-4/
https://www.youtube.com/watch?v=yfNaCrkgsjo
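The heuristic-vs-algorithm distinction can be made concrete in code. Below is a minimal sketch with a hypothetical pricing example of my own (not from Martin's book): the heuristic is a rule of thumb that still leaves room for judgment, while the algorithm is a fixed formula that anyone, human or machine, can run and get the same answer.

```python
# Knowledge funnel, last two stages (hypothetical pricing example):
# heuristic = rule of thumb with residual judgment; algorithm = fixed formula.

def heuristic_price(cost):
    """Rule of thumb: 'charge roughly double the cost' -- judgment remains."""
    return round(cost * 2)  # a human may still adjust this up or down

def algorithmic_price(cost, margin=0.45, tax=0.18):
    """Fixed formula: repeatable, auditable, and delegable to a machine."""
    return round(cost * (1 + margin) * (1 + tax), 2)

print(heuristic_price(100))    # 200
print(algorithmic_price(100))  # 171.1
```

The point of the sketch is the funnel's economics: once the heuristic is worked down into the second function, the costly judgment of "scarce and costly senior executives" is no longer needed to price each item.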

CLOCKSPEED
Clockspeed: Winning Industry Control in the Age of Temporary Advantage: https://www.publishersweekly.com/978-0-7382-0001-9
In propounding a "theory of business genetics," Fine, a professor of management at MIT, analyzes factors that determine corporate evolution, then outlines approaches to aid strategic decision making. For Fine, industries change at different rates, or "clockspeeds," depending on differing opportunities for innovation and competition, as is the case in the animal kingdom. Changing relationships and their causes often seem more apparent, he notes, in fast-clockspeed scenarios such as the current computer industry. However, "all advantage is temporary," Fine continues, "and the faster the clockspeed, the more temporary the advantage." Against that background, his main thesis is that design of the supply chain is "the ultimate core competency" for maintaining advantage in business. Fine advocates diligently and continuously studying its dynamics from the standpoints of organization, technology and capability. Citing the case of IBM as a cautionary tale, Fine notes the company's flawed decision to outsource its PC's microprocessor and operating system, with the result that customers are more concerned with the label "Intel Inside" than the actual makeup of their computer. Oriented primarily to specialists (and prospective clients) in the computer industry, Fine's theorizing suffers somewhat from management jargon yet is impressively well tuned.
Changes in the power industry, and changes that have influenced other industries:
1. Open access (Mobile portability)
2. Time of the day (Movie Tickets)
3. Prepaid Cards (Mobile)
4. Smart Grid (The Internet)
5. Banking (Virtual)
6. Trading (Share Market)
7. Auto Metering (Net banking)

MEDICI EFFECT
https://worldofwork.io/2019/07/the-medici-effect/
The Medici effect is the name given to the idea that increased creativity and innovation occurs through diversity. When ideas and
talented people from different fields are brought together to collaborate, step-changes can occur. The idea comes from a book of
the same name by Frans Johansson.
One person cannot specialize in all activities and hence needs to collaborate. One therefore needs to invest in training and skill development in adjacent technologies.
“The Medici Effect” is a book by Frans Johansson. It reflects on a few key innovations from history. Its central theme is that many key
primary innovations arise as a result of intersectionality. In other words, by bringing together people and ideas from a range of
diverse backgrounds, you increase the likelihood of intellectual cross-pollination and through this, great leaps in innovation.

LEARNING FROM DIFFERENT INDUSTRIES


1. Health Care – Applied science of Physics/ Chemistry/ Biology and Psychology
2. Health care learns from Automobile industry
3. Importance of “Power of Observation”
4. Hospitals learning from F1 Pit stop
5. Hospitality industry learning from F1 Pit stop

MORAVEC PARADOX
https://medium.com/@froger_mcs/moravecs-paradox-c79bf638103f
https://www.thinkautomation.com/bots-and-ai/what-is-moravecs-paradox-and-what-does-it-mean-for-modern-ai/
There is a discovery in the field of AI, called Moravec's paradox, which states that activities like abstract thinking and reasoning, or skills classified as "hard" (engineering, maths or art), are far easier for machines to handle than sensory or motor-based unconscious activities.
It is much easier to implement specialized computers that mimic adult human experts (professional chess or Go players, artists such as painters or musicians) than to build a machine with the skills of a one-year-old child: the ability to learn to move around, recognize faces and voices, or pay attention to interesting things. "Easy" problems are hard and require enormous computational resources; "hard" problems are easy and require very little computation.
Researchers look for the explanation in the theory of evolution: our unconscious skills were developed and optimized by natural selection over millions of years. The "newer" a skill is (like abstract thinking, which appeared "only" hundreds of thousands of years ago), the less time nature has had to adjust our brains to handle it.
It is not easy to interpret Moravec's paradox. Some say it describes a future where machines take the jobs that require specialist skills, leaving people to serve an army of robotic chiefs and analysts. Others argue that the paradox guarantees that AI will always need the assistance of people. Or, perhaps more correctly, people will use AI to improve those skills which are not as highly developed by nature.
One thing Moravec's paradox does make clear: the fact that we have developed computers that beat humans at Go or chess does not mean that Artificial General Intelligence is just around the corner. Yes, we are one step closer. But as long as AGI means for us a "full copy of human intelligence," it will only get harder over time.
POLANYI PARADOX
https://luca-d3.com/data-speaks/technology-dictionary/polanyi-paradox
Polanyi's paradox, from 1966, is not truly a paradox: what it reflects is not a contradiction but a difficulty, a barrier to the development of artificial intelligence and automation. Much has changed since 1966: important technological advances have taken place, and different strategies have been implemented to try to overcome this difficulty.

How to "Dodge" Polanyi's paradox?


We have already seen that Polanyi's paradox shows the difficulty of automating a task that is easy for us to carry out, but difficult to
explain. There have been two main strategies to overcome this difficulty.
The first strategy is controlling the environment, so that it is easier for a machine to perform a task. Machines work well with relatively simple routines but find it difficult to adapt to changes in the environment. If you simplify the environment, you facilitate automation. A simple example of "simplifying the environment" is train tracks: the train does not have to overcome obstacles in the terrain, only run along the tracks. Another interesting example is the Kiva robots that Amazon uses in its warehouses, where the environment has been simplified so that the robots can transport the shelves containing the products. However, it is human workers who load the products onto these shelves, or pick from them the product that must be added to a specific order.
The second strategy is to try to "teach" the machine to make decisions as a human expert would. How? In contrast to "top-down" programming strategies (from rules to results), we turn to the "bottom-up" strategies of Machine Learning: based on example data, we train the machines to infer the rules. In the new data-based economy, we can find examples of ML applications practically everywhere: recommendation systems, recognition of images, text or sounds, etc. While the first strategy tried to adapt the environment to the limitations of the machine, in this second strategy it is the machine that adapts to the difficulties of the environment; it "learns" from it, training on data. This development has been possible due to the greater availability of training data and the processing capacity of modern systems.
However, although machines can perform tasks that are impossible for humans, such as processing huge amounts of data to, for
example, correlate our genome with that of other species, or certain biological variables with drugs that can cure a certain disease,
all this is only a small part of what can be called real human intelligence.
OPERATIONS MANAGEMENT – THE CONSTANTS
1. Inventory
2. Process
3. Projects
MAJOR AREAS TO FOCUS IN PHYSICAL INDUSTRY
1. Raw Materials/ Inventories
a. Oil
b. Rare Earth
c. Water
2. Concepts of Inventory
a. Right amount of Inventory
b. Just in Time inventory
c. Not even mine

COUNTRIES TO WATCH
- JAPAN
- GERMANY
- CHINA

Confucius saying: I hear and I forget. I see and I remember. I do and I understand.
A vs B Approach/ AB Testing
1. Multiple ideas
2. Innovative options and feasible ones
3. Choice of Brainstorming
4. Quantity of ideas over quality
5. Three-quarters of a brainstorming session goes into understanding the process; the final quarter produces the ideas
6. Freedom to compare
7. Preserve the strengths and avoid the weaknesses
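The "A vs B" comparison named above is often settled statistically with a two-proportion z-test. A minimal sketch follows; the conversion counts are invented for the example, and the test itself is a standard technique, not something the notes prescribe.

```python
# Two-proportion z-test for an A/B comparison (illustrative counts).
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = z_score(conv_a=200, n_a=2000, conv_b=250, n_b=2000)
print(round(z, 2))  # about 2.5; |z| > 1.96 favors B at the 5% significance level
```

This mirrors point 6 ("freedom to compare"): many ideas are generated, but the choice between variants is made on measured outcomes rather than opinion.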

OPHRM – 18/02/2021

Theory of Constraints
Surprising Facts about Constraints
- You will always have a constraint, so choose wisely ... perhaps the most capital-intensive, or energy-consuming, or largest batch, or longest touch time, etc.
- If you identify the wrong constraint, it is easily rectified and causes no permanent damage. The Five Focusing Steps auto-correct for errors made over time.
- The constraint may appear to shift suddenly based on product mix; however, this is often due to batching practices rather than actual shifting of the constraint.
- Most systems typically have ONE SINGLE RESOURCE CONSTRAINT, such as a machine or department. This constraint, which may or may not be binding at any given point in time, is referred to as the Capacity-Constrained Resource (CCR). In certain cases there may be 2-3 CCRs, but rarely more.
- Permanent constraints typically include sales/marketing (with better techniques we could always raise prices) and R&D (with more awesome products we could make far higher margins).
- Eventually the constraint should be stabilized; frequently shifting constraints wreak havoc on policies, procedures and people.

Process of Ongoing Improvement:


The Five Focusing Steps (POOGI)
- Focusing Step #1: IDENTIFY the system's constraint.
- Focusing Step #2: EXPLOIT the constraint.
- Focusing Step #3: SUBORDINATE everything else to the constraint.
- Focusing Step #4: ELEVATE the constraint.
- Focusing Step #5: PREVENT INERTIA from becoming the constraint!
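Step #1 can be sketched in a few lines of code: in a serial production line, system throughput is capped by the slowest resource, so identifying the constraint amounts to finding the minimum rate. The station names and rates below are hypothetical.

```python
# Focusing Step #1: IDENTIFY -- in a serial line, throughput is capped by
# the slowest station (hypothetical stations and rates, in units per hour).

rates = {"cutting": 120, "welding": 45, "painting": 80, "assembly": 60}

constraint = min(rates, key=rates.get)   # station with the lowest capacity
throughput = rates[constraint]           # the whole system can do no better

print(constraint, throughput)  # welding 45
```

The sketch also shows why "strengthening any link apart from the weakest is a waste": raising the cutting rate from 120 to 200 leaves `throughput` at 45.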

Focusing Step #1: IDENTIFY the system's constraint


Strengthening any link of a chain (apart from the weakest) is a waste of time and energy. Similarly, the vast majority of efforts to
"improve" something in the organization fail to result in more profits for shareholders, delight for customers, or satisfaction for
employees. This is because most initiatives are not focused on the constraint of the organization.
Yet it is impossible to manage a constraint until you find out what it is! And it is surprisingly easy to find, once you know how to look.
Focusing Step #2: EXPLOIT the constraint
The output of the constraint governs or restricts the output of the organization as a whole. It is therefore imperative to squeeze as
much as possible out of it. Maximize the utilization and productivity of the constraint (NOT utilization and productivity of non-
constraints). Rather than immediately purchasing more of the constraint (by buying machines, hiring workers, increasing the
advertising budget, etc.) we should first learn to use the resources that we already have more efficiently.

The constraint of most organizations is not well utilized, often less than 50% on a 24x7 basis. If the reasons for under-utilization are
not immediately clear, try measuring the constraint's OEE including the breakup of availability/quality/performance. Gather the
underlying data and analyze it using Pareto techniques. Once the primary causes are identified, use fishbone diagrams and Five Why
analysis to drill down to the root cause for under-performance.
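The OEE breakdown mentioned above (availability, performance, quality) can be sketched as a simple calculation. The shift figures below are hypothetical; the formula OEE = Availability x Performance x Quality is the standard one.

```python
# OEE breakdown for a constraint resource (hypothetical 8-hour shift).

planned_min, downtime_min = 480, 60      # planned time and unplanned downtime
ideal_rate = 1.0                         # ideal units per minute
actual_count, good_count = 350, 330      # produced vs. defect-free units

availability = (planned_min - downtime_min) / planned_min                  # 0.875
performance = actual_count / (ideal_rate * (planned_min - downtime_min))   # ~0.833
quality = good_count / actual_count                                        # ~0.943

oee = availability * performance * quality
print(round(oee, 2))  # 0.69 -- consistent with constraints running well under 100%
```

Each factor points at a different loss to attack with Pareto, fishbone or Five Why analysis: downtime, slow running, or defects.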

When the root causes are clear, eliminate them on a permanent basis. Quality and productivity tools such as Six Sigma, Poka-Yoke,
design of experiments, SMED, etc. often provide the answer, depending upon the nature of the problem.
Focusing Step #3: SUBORDINATE everything else to the constraint
By definition, any non-constraint has more capacity to produce than the constraint itself. Left unchecked, this results in bloated WIP
inventory, elongated lead times, and frequent expediting/firefighting. Hence, it is crucial to avoid producing more than the
constraint can handle. In a manufacturing environment this is accomplished by choking the release of raw material in line with the
capacity of the constraint.
Equally important is ensuring that the rest of the system supports the work of the constraint at all times. It must never ever be
starved for inputs, or fed poor quality materials. This can be achieved by maintaining a reasonable buffer of safety stock. Similarly,
other established policies and habits can hamper productivity at the constraint and must be systematically aligned to achieve
maximum performance.
Focusing Step #4: ELEVATE the constraint
Once the capacity of the system is exhausted, it must be expanded by investing in additional equipment/land, hiring people, or the
like.
Focusing Step #5: PREVENT INERTIA from becoming the constraint!
Once elevated, the weak link may not remain weakest. Consider elevating other resources to retain the old constraint, depending on
where you wish to have the constraint in the long term. A new constraint demands a whole new way of managing the system.
We therefore return to Step 1, and thus begins our journey of continuous improvement...

What is Bottleneck??
A bottleneck is a point of congestion in a production system (such as an assembly line or a computer network) that occurs when
workloads arrive too quickly for the production process to handle. The inefficiencies brought about by the bottleneck often create
delays and higher production costs. A bottleneck can have a significant impact on the flow of manufacturing and can sharply increase
the time and expense of production. Companies are more at risk for bottlenecks when they start the production process for a new
product. This is because there may be flaws in the process that the company must identify and correct; this situation requires more
scrutiny and fine-tuning. Operations management is concerned with controlling the production process, identifying potential
bottlenecks before they occur, and finding efficient solutions.
A bottleneck affects the level of production capacity that a firm can achieve each month. Theoretical capacity assumes that a
company can produce at maximum capacity at all times. This concept assumes no machine breakdowns, bathroom breaks, or
employee vacations.
Because theoretical capacity is not realistic, most businesses use practical capacity to manage production. This level of capacity
assumes downtime for machine repairs and employee time off. Practical capacity provides a range for which different processes can
operate efficiently without breaking down. Go above the optimum range and the risk increases for a bottleneck due to a breakdown
of one or more processes.
If a company finds that its production capacity is inadequate to meet its production goals, it has several options at its disposal.
Company management could decide to lower their production goals in order to bring them in line with their production capacity. Or,
they could work to find solutions that simultaneously prevent bottlenecks and increase production. Companies often use capacity
requirements planning (CRP) tools and methods to determine and meet production goals.
PDCA Cycle:
Explained briefly, the Plan-Do-Check-Act cycle is a model for carrying out change. It is an essential part of the lean manufacturing
philosophy and a key prerequisite for continuous improvement of people and processes.

First proposed by Walter Shewhart and later developed by William Deming, the PDCA cycle became a widespread framework for
constant improvements in manufacturing, management, and other areas.

PDCA is a simple four-stage method that enables teams to avoid recurring mistakes and improve processes.
PLAN:
At this stage, you will literally plan what needs to be done. Depending on the project's size, planning can take a major part of your
team’s efforts. It will usually consist of smaller steps so that you can build a proper plan with fewer possibilities of failure.

Before you move to the next stage, you need to be sure that you answered some basic concerns:

What is the core problem we need to solve?


What resources do we need?
What resources do we have?
What is the best solution for fixing the problem with the available resources?
In what conditions will the plan be considered successful? What are the goals?
Keep in mind, you and your team may need to go through the plan a couple of times before being able to proceed. In this case, it is
appropriate to use a technique for creating and maintaining open feedback loops such as Hoshin Kanri Catchball. It will enable you to
collect enough information before you decide to proceed.
DO
After you have agreed on the plan, it is time to take action. At this stage, you will apply everything that has been considered during
the previous stage.

Be aware that unpredicted problems may occur at this phase. This is why, in a perfect situation, you may first try to incorporate your
plan on a small scale and in a controlled environment.
Standardization is something that will definitely help your team apply the plan smoothly. Make sure that everybody knows their
roles and responsibilities.

CHECK
This is probably the most important stage of the PDCA cycle. If you want to clarify your plan, avoid recurring mistakes, and apply
continuous improvement successfully, you need to pay enough attention to the CHECK phase.

Here, you need to audit your plan’s execution and see if your initial plan actually worked. Moreover, your team will be able to
identify problematic parts of the current process and eliminate them in the future. If something went wrong during the process, you
need to analyze it and find the root cause of the problems.
ACT
Finally, you arrive at the last stage of the Plan-Do-Check-Act cycle. Previously, you developed, applied, and checked your plan. Now,
you need to act. If everything seems perfect and your team managed to achieve the original goals, then you can proceed and apply
your initial plan. It can be appropriate to adopt the whole plan if objectives are met. Respectively, your PDCA model will become the
new standard baseline. However, every time you repeat a standardized plan, remind your team to go through all steps again and try
to improve carefully.

The PDCA cycle is a simple but powerful framework for fixing issues on any level of your organization. It can be part of a bigger
planning process, such as Hoshin Kanri. The repetitive approach helps your team find and test solutions and improve them through a
waste-reducing cycle. The PDCA process includes a mandatory commitment to continuous improvement, and it can have a positive
impact on productivity and efficiency. Finally, keep in mind that the PDCA model requires a certain amount of time, and it may not
be appropriate for solving urgent issues.
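The PDCA loop described above can be sketched as an iterative function. This is a toy example of my own with a hypothetical defect-rate metric, not a prescription from the notes: each cycle plans a change, applies it on a small scale, checks the result, and acts by adopting the change only if the metric improved.

```python
# PDCA as an iterative loop over a hypothetical defect-rate metric.

def pdca(metric, propose, apply_change, cycles=3):
    for _ in range(cycles):
        change = propose()                    # PLAN: pick a candidate change
        trial = apply_change(metric, change)  # DO: try it on a small scale
        if trial < metric:                    # CHECK: did the defect rate drop?
            metric = trial                    # ACT: adopt as the new baseline
    return metric

# Toy run: suppose each proposed change halves the defect rate.
result = pdca(metric=8.0,
              propose=lambda: "halve rework",
              apply_change=lambda m, c: m / 2)
print(result)  # 1.0 after three cycles (8 -> 4 -> 2 -> 1)
```

The `if trial < metric` guard is the CHECK stage in miniature: a change that does not improve the baseline is simply not adopted, which is how the cycle avoids standardizing on recurring mistakes.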

PDCA has some significant advantages:


It stimulates continuous improvement of people and processes.
It lets your team test possible solutions on a small scale and in a controlled environment.
It prevents recurring mistakes in the work process.

Design thinking: Reliability VS Validity


Design thinking combines analytic thinking with intuitive thinking to create innovative solutions and long-lived value. Martin defines design thinking as abductive logic, which "posits what could be true". This is distinguished from the deductive (moving from general to specific) and inductive (moving from specific to general) logic types used in analytical thinking. Abductive logic also operates in the realm of paradox, and in instances where data is found that does not fit existing models. Abductive logic is based on inference to the best available explanation and can proceed iteratively and experimentally to the next stage of understanding.
Both analytical and intuitive thinking have limitations which are transcended by design thinking, thus allowing a design thinking
company to achieve competitive advantage over those enterprises that only use one or the other.

1. The knowledge funnel: How discovery takes shape

The introductory chapter starts with a story about McDonald's journey from mystery (how and what did Californians want to eat) to algorithm (stripping away uncertainty, ambiguity, and judgment from almost all processes). It briefly discusses analytical thinking, intuitive thinking and design thinking as ways to solve mysteries and advance knowledge, and the fine balance between exploring new knowledge and exploiting existing knowledge.
It introduces and explores the concept of the "Knowledge Funnel", describing how knowledge advances from mystery to heuristic to algorithm, letting businesses gain efficiency and lower
costs. This is explored also in later chapters: "Mysteries are expensive, time consuming, and risky; they are worth tackling only
because of the potential benefits of discovering a path out of the mystery to a revenue-generating heuristic", "The algorithm
generates savings by turning judgment… …into a formula or set of rules that, if followed, will produce a desired solution" and
“Computer code – the digital end point of the algorithm stage – is the most efficient expression of an algorithm”. It also addresses
the need for organizations to re-explore solved mysteries, even the founding ideas behind the business, and not get too comfortable
focusing on the "administration of business" running an existing algorithm.

In addition, the first chapter presents abductive logic and some ideas originated by the philosopher Charles Sanders Peirce: that it is not possible to prove a new thought, concept, or idea in advance, and that all new ideas can be validated only through the unfolding of future events. To advance knowledge we need to make a "logical leap of the mind", or an "inference to the best explanation", to imagine a heuristic for understanding a mystery.

2. The reliability bias: Why advancing knowledge is so hard

The second chapter focuses on the distinction between reliability (produce consistent, predictable outcomes by narrowing the scope
of a test to what can be measured in a replicable, quantitative way) and validity (produce outcomes that meet a desired objective,
that through the passage of time will be shown to be correct, often incorporating some aspects of subjectivity and judgment to be
achieved). Roger's main point in the chapter (or even in the book) is that today's business world is focusing too much on reliability
(due to three forces: demand for proof, an aversion to bias and the constraints of time), with algorithmic decision-making
techniques using various systems (such as ERP, CRM, TQM, KM) to crunch data objectively and extrapolate from the past to make
predictions about the future. "What organizations dedicated to running reliable algorithms often fail to realize is that while they
reduce the risk of small variations in their businesses, they increase the risk of cataclysmic events that occur when the future no
longer resembles the past and the algorithm is no longer relevant or useful." With the turbulent times we live in, where new
mysteries constantly spring up that reliable systems won't address or even acknowledge, businesses risk being outflanked by new
entrants solving old and new mysteries developing new heuristics and algorithms. "Without validity, an organization has little chance
of moving knowledge across the funnel. Without reliability, an organization will struggle to exploit the rewards of its advances… the
optimal approach... is to seek a balance of both."
3. Design thinking: How thinking like a designer can create sustainable advantage
Chapter three starts with an interesting case of Research In Motion (RIM) that leads into the discussion of what is really design
thinking. Roger uses the quote by Tim Brown of IDEO, "a discipline that uses the designer's sensibility and methods to match
people's needs with what is technologically feasible and what a viable business strategy can convert into customer value and market
opportunity" and adds himself "a person or organization instilled with that discipline is constantly seeking a fruitful balance between
reliability and validity, between art and science, between intuition and analytics, and between exploration and exploitation". That
designers live in the world of abductive reasoning, actively look for new data points, challenge accepted explanations to posit what
could possibly be true (in contrast to the two dominant forms of logic - deduction and induction, with the goal to declare a
conclusion to be true or false).

The chapter ends with the first discussion on roadblocks to design thinking (many more to come), with one being the corporate
tendency to settle at the current stage in the knowledge funnel, and another how "highly paid executives or specialists with knowledge, turf and paychecks to defend" have the company's heuristics in their heads and no interest in advancing to the algorithm stage, which would make the executives less important. This leads nicely into the fourth chapter, about the transformation of Procter & Gamble.

4. Transforming the corporation: The design of Procter & Gamble


A.G. Lafley's transformation of Procter & Gamble from an incumbent in crisis to an innovative and efficient organization in just a few
years has been widely covered in the business literature. As a student some years back I did an internship in P&G's Connect &
Develop (connect with innovators outside the company and develop their ideas for P&G products), and have since been reading up
on everything I can find about the transition and why other companies have not been able to make the same transition. Roger adds
interesting perspectives, from his work with the company and its first vice president of innovation strategy and design, Claudia
Kotchka, to develop "a comprehensive program that would provide practical experience in design thinking to P&G leaders". One of
the top-down efforts being to drive brand-building from heuristic (in the minds of scarce and costly senior executives) toward
algorithm, providing less senior employees the tools needed to do much of the work previously done by high-cost elites, who could then focus on the next mystery in order to create the next brand experience. The chapter also covers the Connect & Develop
initiative and how it bulked up P&G's supply of ideas in the mystery-heuristic transition where it was thin, enabling it to feed more
opportunities into its well-developed heuristics and algorithms of development, branding, positioning, pricing and distribution.

Another highly interesting topic covered in the chapter is the change of processes within P&G, including the strategy review. Lafley recognized that the existing processes were a recipe for producing reliability, not validity, "so risky creative leaps were out of
the question". A transition from annual reviews with category managers pitching, "with all the inductive and deductive proof needed
to gain the approval of the CEO and senior management" to "forcing category managers to toss around ideas with senior
management… to become comfortable with the logical leaps of mind needed to generate new ideas".

5. The balancing act: How design-thinking organizations embrace reliability and validity
The chapter focuses on the need to balance reliability and validity, and the challenges to do so (foremost all structures, processes
and cultural norms tilted towards reliability). "Financial planning and reward systems are dramatically tilted toward running an
existing heuristic or algorithm and must be modified in significant ways to create a balance between reliability and validity". Roger
presents a rough rule of thumb "when the challenge is to seize an emerging opportunity, the solution is to perform like a design
team: work iteratively, build a prototype, elicit feedback, refine it, rinse, repeat… On the other hand, running a supply chain, building
a forecasting model, and compiling the financials are functions best left to people who work in fixed roles with permanent tasks".
The chapter feels somewhat repetitive, in the uphill battle for validity, and more obstacles of change are presented:

- Preponderance of training in analytical thinking
- Reliability orientation of key stakeholders
- Ease of defending reliability vs. validity
In this chapter, Roger also discusses how design-thinking companies have to develop new reward systems and norms, with an
example of how to think about constraints. "In reliability-driven, analytical-thinking companies, the norm is to see constraints as the
enemy", whereas when validity is the goal "constraints are opportunities" and "they frame the mystery that needs to be solved".

6. World-class explorers: Leading the design-thinking organization

Roger adds to the existing body of knowledge with the twist of reliability vs. validity in creating a new market, and the knowledge funnel that took Cirque du Soleil from a one-off street festival to an unstoppable international $600 million-a-year business with four thousand employees.
Laliberté has reinvented Cirque's creative and business models time and time again, "usually over protests that he was fixing what
was not broken and that he could destroy the company". Other CEOs and cases covered in the chapter are James Hackett of
Steelcase, Bob Ulrich of Target, and Steve Jobs of Apple.
The role of the CEO and different approaches to build design-friendly organizational processes and norms into companies are
discussed referring to the different cases presented.

Again, Roger returns to the reliability vs. validity battle, now from a CEO perspective, with terms such as "resisting reliability", "those systems – whether they are for budgeting, capital appropriation, product development…", and "counter the internal and external pressures toward reliability".

7. Getting personal: Developing yourself as a design thinker


The focus is on how a non-CEO can function as a design thinker and develop skills to individually produce more valid outcomes even in reliability-oriented companies. Roger refers back to his previous book The Opposable Mind, and the concept of a personal knowledge system as a way of thinking about how we acquire knowledge and expertise. The knowledge system has three components:

 Stance: "Who am I in the world and what am I trying to accomplish?"
 Tools: "With what tools and models do I organize my thinking and understand the world?"
 Experiences: "With what experiences can I build my repertoire of sensitivities and skills?"

Roger also presents five things that the design thinker needs to do to be more effective with colleagues at the extremes of the
reliability and validity spectrum:

 Reframe extreme views as a creative challenge
 Empathize with your colleagues on the extremes
 Learn to speak the languages of both reliability and validity
 Put unfamiliar concepts in familiar terms
 When it comes to proof, use size to your advantage

PSYCHOLOGY OF QUEUING:

1. Unoccupied time feels longer than occupied time


Even if customers are in love with the product they’re queuing for, not providing a distraction during the wait can make it seem
torturous. Just like with the elevator mirrors, get creative with ways to engage your customers.

If callers are waiting to speak to your customer service, give them the chance to get called back when it’s their turn. If fans are
waiting for an artist to perform, let them join in on a game of trivia using an app like Kahoot. If customers are waiting in an online
queue, customize the queue page and embed videos or games.

Online queues actually have an advantage over physical queues as customers aren’t limited by the need to stand in line. If your
virtual waiting room can notify visitors when it’s their turn in line, they can check email, tidy up the house, or do any number of
things to occupy their time while waiting.

2. People want to get started


Think about when you enter a restaurant. Sometimes the wait until you’re first greeted by the waiter can seem worse than the wait
for your table. The start of the transaction is the end of the wait, so make sure people feel like they’ve started.

If your business is a restaurant, let your customers preview the menu. If you’re running an online product launch, let customers in
the online queue read more about the product so they feel like they’ve started the buying process. Even better, give them a sneak
peek of upcoming products.

Adding a progress bar on the online queue page also highlights for customers that they’ve started. It shows a beginning and an end,
and waiting becomes conceptualized as progress.

3. Uncertain waits are longer than known, finite waits.


Communication is key because transparency helps set expectations.
Provide information on how many other people are waiting in line. Give an estimated waiting time. If in doubt, it’s better to
overestimate the wait than underestimate it. How an experience ends (known as the peak-end rule) greatly influences people’s assessment of the whole experience. So, being rewarded with an early exit from the queue will pleasantly surprise your customers and leave them feeling more positive overall.
4. Unexplained waits are longer than explained waits.
Humans look for explanations behind all things. The absence of explanation is frustrating. Airline pilots know this well and will always
include the reason for a delay (whether it’s the airline’s fault or not) instead of merely stating there is a wait.

Such explanations are even more critical during an online queue, where there are fewer contextual cues available to your visitors.
Saying your site is experiencing “technical difficulties” is a vague and unnerving description for visitors. Make sure to provide a clear
explanation of why your customers are in a queue (e.g. “Hi Sneakerhead! So that everyone has a fair shot at getting their hands on a
pair of new sneaks, we’ve reserved a place in line for you in our virtual waiting room.”)

If possible, keep real-time communication flowing to your waiting customers to keep them up-to-date and remind them why there is
a wait.

5. Unfair waits are longer than fair waits


The perception of fairness has arguably the biggest impact on how we feel when we’re waiting in line. We’re constantly on guard to
ensure no one cuts the line. Violations can be met with queue rage.

A first-in, first-out (FIFO) (or first-come, first-served) wait is the exemplar of fairness. Make sure your queue—whether online or
physical—operates in this way.

If you’re operating an online queue, remember to address customers who arrive early. For example, we’ve designed our virtual
waiting room to place early visitors in a pre-queue with a countdown to the official start of the queued event. When the sale or
registration begins, we assign a randomized queue number to all early visitors and then operate the queue in a first-in, first-out
fashion. This way, no early visitor benefits from arriving earlier than another, and everyone who comes early gets an equal shot at being first in line.
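The pre-queue mechanic described above – shuffle visitors who arrived before the event starts, then serve everyone first-in, first-out – can be sketched in a few lines of Python (the function and visitor names are illustrative, not an actual waiting-room API):

```python
import random

def assign_queue_numbers(pre_queue_visitors, later_visitors, seed=None):
    """Randomize early arrivals once, then serve everyone FIFO."""
    rng = random.Random(seed)
    early = list(pre_queue_visitors)
    rng.shuffle(early)                       # early birds get a random order
    ordered = early + list(later_visitors)   # later arrivals keep arrival order
    return {visitor: n + 1 for n, visitor in enumerate(ordered)}

numbers = assign_queue_numbers(["ana", "bo", "cy"], ["dee", "eli"], seed=7)
# "dee" and "eli" always hold positions 4 and 5; only the early three are shuffled
```

The seed parameter is only there to make the sketch reproducible; a real deployment would draw fresh randomness per event.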

6. Anxiety makes waits feel longer


Put yourself in your customers’ shoes. They really want the concert ticket or pair of sneakers they’re waiting in line for. That in itself
is already anxiety-provoking. Removing anything that could cause anxiety (e.g. warning visitors they only have a few minutes to
complete their booking) is great. Preemptively addressing any anxieties, rational or not, is even better.

If your setup involves multiple queues, think again. A large portion of queue anxiety surrounds being unfairly overtaken by others,
what Richard Larson calls “skips and slips”. One serpentine line removes any need for your customers to make (and constantly
reassess) a decision about choosing the “right line”.

CANNOT HAVE ONE SIZE FITS ALL APPROACH:


One of the common myths and misconceptions about projects is that all projects are the same and you can use similar tools for all
your project activities. We call this the "a project is a project is a project" syndrome, and it often leads to project failure and delays when
companies are using improper project management techniques for some of their project efforts. While all projects have a goal, a
budget, and a time-frame, there is more to project management than just a few common elements. In reality, projects differ in
numerous ways, and one size does not fit all! Yet, at present, only a few organizations know explicitly how to classify their project
efforts and how to select the best approach to a specific project, and there is still no standard framework for distinction among
projects and for selecting the right approach to the right project.

We have come to realize that project success depends, not just on the traditional critical success factors, but also, greatly, on the
proper project management style and on adapting the right technique to the right project. We have seen many projects fail because
managers assumed that their current project would be the same as the previous one.

Our research shows that in well-managed projects, project managers as well as top management are aware of project differences
and use specific techniques and styles to manage different kinds of projects. However, our research also found that in most cases,
companies and managers don’t have a specific framework with which they distinguish among their project efforts. We realized it is
time to develop a universal, context-free, and simple-to-use framework that will help managers and organizations start a project by
assessing the project type and selecting an appropriate management style.

However, in our work we realized that one framework does not fit all either, and that one would need different project classification
systems for various organizational needs.
Initial Theoretical and Practical Perspectives
From a theoretical perspective, one of the major barriers to understanding the nature of projects has been the little theoretical
distinction made between the project type and its strategic and managerial problems, and the lack of specificity of constructs
applied in project management studies. Although innovation studies have often used a traditional distinction between incremental
and radical innovation, the project management literature has been slow in adopting similar approaches. Furthermore, while
correlates of structural and environmental attributes have been well studied when the organization is the unit of analysis, they have
been much less investigated in the project context. As mentioned above, the project management literature has often ignored the
importance of project contingencies, assuming that all projects share a universal set of managerial characteristics. Yet, projects can
be seen as “temporary organizations within organizations,” and may exhibit variations in structure when compared to their mother
organizations. Indeed, numerous scholars have recently expressed disappointment in the universal, “one-size-fits-all” idea, and
recommended a more contingent approach to the study of projects. As argued, by utilizing traditional concepts in a new domain,
new insights will most likely emerge in this evolving and dynamic field. Our study attempts, as well, to address this theoretical gap.

Who are the stakeholders that would benefit from a framework for project classification? It seems that different people would use
such a framework for different purposes. For example, among other things, top management— CEOs, executives, and business
leaders—would look at decisions about portfolio management, aggregated returns, financial and business risk, and setting priorities.
The marketing function, in contrast, would look for the impact of different projects on their marketing efforts, market research, and
their ability to determine customer needs and product requirements. Similarly, the engineering manager would need a framework
for distinguishing among difficulty and complexity of technical tasks, for assignment of technical experts, and for resource allocation. Finally, project managers would use a framework to determine their project structure, processes, and tools. They would
also use it to distinguish among their design, testing, and verification efforts, and for the selection of team members and
assignments of tasks.

The UCP Model


In search of a universal, context-free framework for all project types, our research identified three dimensions to distinguish among
projects: uncertainty, complexity, and pace. Together, we call them the UCP model, and they form a context-free framework for
selecting the proper management style (see Exhibit 1).

Uncertainty
Different projects present, at the outset, different levels of uncertainty. Clearly, when a project is completed, everything is well
known—the end result, the cost, the time, and the final specifications. Yet, as we have seen, uncertainty at project initiation is one
of the major characteristics of any project. It determines, among other things, the length and timing of front-end activities, how well and how fast one can define and finalize product requirements and design, the degree of detail and extent of planning accuracy, and
the level of contingency resources (time buffer and budget slack). Project execution can therefore be seen as an ongoing process,
which is aimed at uncertainty reduction. Failing to assess project uncertainty may result in excessive resources and unexpected
delays.

Uncertainties could be divided into external or internal, and classified into several levels as we show later. External uncertainty will
have an effect on accuracy and predictability of customer requirements, and on how to treat market research results, while internal
uncertainty determines the process of product design, testing, and finalizing the specifications.

Complexity
Complexity is the second dimension for distinction among projects. Management should realize that complexity and uncertainty are not the same thing. Imagine building a high-rise office tower in a major city. For this kind of project, even low levels of uncertainty
may involve a highly complex collection of tasks. Project complexity is generally determined by project size, number and variety of
elements, and interconnection among elements, and it may come from product complexity, but also from the level of organizational
interaction.

When it comes to managing different levels of complexity, the “one-size-does-not-fit-all” rule will affect project organizational
structure, the hierarchical level of the project manager, the formality with which the project is managed, the extent of
subcontracting and outsourcing, and the degree and tightness of project control.

Pace
The third dimension of the UCP model involves the speed and criticality of time goals. Obviously, no project is free of limitations, and
time is one of the major constraints. However, as we found, the available time given for project completion and the degree of urgency are important factors for distinguishing among projects. The same goal with different timeframes may require different
project structures and different management attention.
The highest-paced projects are the most critical in terms of time of product introduction. The higher the pace, the closer the project monitoring, the more autonomous the project team, and the greater the management involvement.
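As a rough illustration only – the level names and the mapped practices below paraphrase the recommendations in this section, not Shenhar's formal UCP scales – the model can be written as a small rule lookup:

```python
from dataclasses import dataclass

@dataclass
class Project:
    uncertainty: str  # "low" or "high"
    complexity: str   # "low" or "high"
    pace: str         # "low" or "high"

def management_style(p: Project) -> list:
    """Map UCP dimensions to management emphases (illustrative rules)."""
    advice = []
    if p.uncertainty == "high":
        advice.append("delay design freeze; hold time and budget buffers")
    if p.complexity == "high":
        advice.append("formal structure, senior project manager, tight control")
    if p.pace == "high":
        advice.append("close monitoring, autonomous team, high management involvement")
    return advice or ["standard, lightweight project management"]

print(management_style(Project("high", "low", "high")))
```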

THE FOCUSED FACTORY: WICKHAM SKINNER

A Focused Factory strives for a narrow range of products, customers and processes. The result is a factory that is smaller, simpler
and totally focused on one or two Key Manufacturing Tasks.

The Focused Factory rests on three underlying concepts:

 There are many ways to compete besides low cost.
 A factory cannot perform well on every measure.
 Simplicity & repetition bring competence.

Benefits of Focus
At Strategos, we have seen the effects of focus: customer satisfaction, lower cost and less frustration. Several researchers have
documented these effects with quantitative studies.

Key Manufacturing Tasks


Skinner's research suggests that a particular factory can excel with no more than one or two overall objectives. These might be
quality, delivery reliability, response time, low cost, customization, short life cycle products, or another competitive dimension.

The Key Manufacturing Task(s) is the most important thing the factory must do or achieve for success. Terry Hill, in his book
"Manufacturing Strategy" shows how to identify the Key Manufacturing Task(s) and link it to marketing and corporate strategies.

Why Factories Lose Focus


Some factories are unfocused originally because designers fail to recognize the limits and constraints of technologies and systems.

Other factories are highly focused at first but lose it over time. Several forces and factors diffuse the original focus. Among these are:
 Inconsistent Policies
 Professional Isolation
 Mission Creep
 Failure to Design the Task
 Unrecognized Inconsistencies
 Product Proliferation
 Market Proliferation

Wickham Skinner, in his seminal 1974 article for The Harvard Business Review, says it best:
"The focused factory will out-produce, undersell, and quickly gain competitive edge over the complex factory. The focused factory
does a better job because repetition and concentration in one area allows its work force and managers to become effective and
experienced in the task required for success. The focused factory is manageable and controllable. Its problems are demanding, but
limited in scope."

Process Technologies
Typically, unproven and uncertain technologies are limited to one per factory. Proven, mature technologies are limited to what their
managers can easily handle, typically two or three (e.g., a foundry, metal working, and metal finishing).

Market Demands
These consist of a set of demands including quality, price, lead times, and reliability specifications. A given plant can usually only do a
superb job on one or two demands at any given period of time.

Product Volumes
Generally, these are of comparable levels, such that tooling, order quantities, materials handling techniques, and job contents can
be approached with a consistent philosophy. But what about the inevitable short runs, customer specials, and one-of-a-kind orders
that every factory must handle? The answer usually is to segregate them.

Quality Levels
These employ a common attitude and set of approaches so as to neither over-specify nor over-control quality and specifications. One
frame of mind and set of mental assumptions suffice for equipment, tooling, inspection, training, supervision, job content, and
materials handling.

Manufacturing Tasks
These are limited to only one (or two at the most) at any given time. The task at which the plant must excel in order to be
competitive focuses on one set of internally consistent, doable, non-compromised criteria for success.

OPHRM – 27.02.2021

Resource Utilization or Flow:

Henry Ford – The Model T Story: from Iron Ore to Car in Showroom – 81 hrs

1. Game was Flow


2. How fast the resource can be converted to Cash

Hence it’s important to remember the following points when designing any process:

Historical Stalwarts:

1. Adam Smith: Father of Economics: 5 Philosophies & PIN Factory

Adam Smith drew his famous example from a pin factory:


‘One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head;
to make the head requires two or three distinct operations; to put it on, is a peculiar business, to whiten the pins is another; it is
even a trade by itself to put them into the paper; and the important business of making a pin is, in this manner, divided into about
eighteen distinct operations, which, in some manufactories, are all performed by distinct hands’.

- increasing division of labor
- specialized tasks
- relatively unskilled labor could perform any of these tasks
- More output per person
- Better quality
- Identification of bottlenecks
- Match the skill to the task (Talent Management & Acquisition)

2. Charles Babbage: Study of Terracotta Army in China


- Different body parts were created separately and assembled
- Cost effective
- Reduced time of learning
- Reduced waste
- Attainment of higher skills
- Match skills to jobs

Poka-Yoke Technique
Lean Management has adopted the principles and techniques originating as part of the Lean Manufacturing methodology and
developed them further. Now we can experience Lean's benefits in management and transfer successful techniques from the times
of post-war Japan to modern-day business conditions.
One of the most valuable takeaways is Poka-Yoke. It has become one of the most powerful work standardization techniques and can
be applied to any manufacturing or service industry.
Its core idea – preventing errors and defects from appearing in the first place – is universally applicable and has proven to be a true efficiency booster.

Meaning and Birth of Poka-Yoke


The term Poka-Yoke (poh-kah yoh-keh) was coined in Japan during the 1960s by Shigeo Shingo, an industrial engineer at Toyota.
Shingo also created and formalized Zero Quality Control – a combination of Poka-Yoke techniques to correct possible defects and
source inspection to prevent defects.
Actually, the initial term was baka-yoke, meaning ‘fool-proofing’, but it was later changed because of the term’s dishonorable and
offensive connotation. Poka-Yoke means ‘mistake-proofing’ or more literally – avoiding (yokeru) inadvertent errors (poka).
Poka-Yoke ensures that the right conditions exist before a process step is executed, thus preventing defects from occurring in the first place. Where this is not possible, Poka-Yoke performs a detective function, eliminating defects in the process as early as
possible. Poka-Yoke is any mechanism in a Lean manufacturing process that helps to avoid mistakes.
Its purpose is to eliminate product defects by preventing, correcting, or drawing attention to human errors as they occur.

Examples of Poka-Yoke Application

In a broader sense, it is also a behavior-shaping constraint built into a process step to prevent incorrect operation. One of the most common examples is a car with a manual gearbox, where the driver must press the clutch pedal (a process step – Poka-Yoke) before starting the
engine. The interlock prevents an unintended movement of the car. Another example is a car with an automatic transmission, which
has a switch that requires the vehicle to be in “Park” or “Neutral” before it can be started. These serve as behavior-shaping
constraints as there are actions that must be performed before the car is allowed to start. This way, over time, the driver’s behavior
is adjusted to the requirements by repetition and habit. Other examples can be found in the child-proof electric sockets or the
washing machine that does not start if the door is not closed properly to prevent flooding. These types of automation don’t allow
mistakes or incorrect operation from the start.
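All of these interlocks reduce to the same pattern: a guard condition that must hold before the process step is allowed to run. A minimal sketch in Python (the class and attribute names are invented for illustration):

```python
class Car:
    """Toy model of the starting interlocks described above."""

    def __init__(self, transmission="automatic"):
        self.transmission = transmission
        self.gear = "Drive"
        self.clutch_pressed = False
        self.running = False

    def start_engine(self):
        # Poka-Yoke: refuse to run the step unless the safe precondition holds.
        if self.transmission == "automatic" and self.gear not in ("Park", "Neutral"):
            raise RuntimeError("Shift to Park or Neutral before starting")
        if self.transmission == "manual" and not self.clutch_pressed:
            raise RuntimeError("Press the clutch before starting")
        self.running = True

car = Car()
car.gear = "Park"
car.start_engine()   # precondition satisfied, so the engine starts
```

The point of the pattern is that the unsafe path is not merely discouraged but structurally impossible to take.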
Why is Poka-Yoke important?

The value of using Poka-Yoke is that it helps people and processes work right the first time, making mistakes impossible. These techniques can significantly improve the quality and reliability of products and processes by eliminating defects. This approach to production fits perfectly with the culture of continuous improvement, which is also part of the Lean management arsenal. It can also be used to fine-tune improvements and process designs from Six Sigma Define – Measure – Analyze – Improve – Control (DMAIC) projects. Applying simple Poka-Yoke ideas and methods in product and process design can eliminate human and mechanical
errors. The flexibility of Poka-Yoke allows for it not to be costly. For example, Toyota’s goal is to implement each mistake-proofing
device for under $150. Depending on the size of the company, it can be an extremely cost-efficient endeavor.

When and how to use it?


Poka-Yoke technique could be used whenever a mistake could occur, or something could be done wrong – meaning everywhere. It
can be successfully applied to any type of process in the manufacturing or services industry, preventing all kinds of errors:

- Processing error: Process operation missed or not performed per the standard operating procedure.

- Setup error: Using the wrong tooling or setting machine adjustments incorrectly.

- Missing part: Not all parts are included in the assembly, welding, or other processes.

- Improper part/item: Wrong part used in the process.

- Operations error: Carrying out an operation incorrectly; having the incorrect version of the specification.

- Measurement error: Errors in machine adjustment, test measurement, or dimensions of a part coming in from a supplier.

Poka-Yoke is easy to implement because of its universal and rational nature. You can follow this step-by-step process to apply it:

- Identify the operation or process.

- Analyze the 5-whys and the ways a process can fail.


- Choose the right Poka-Yoke approach, such as using a shutout type (preventing an error being made) or an attention type
(highlighting that an error has been made).

- Take a comprehensive approach instead of thinking of Poka Yokes just as limit switches or automatic shutoff.

- Determine whether a contact (use of shape, size, or other physical attributes for detection), constant number (error
triggered if a certain number of actions are not made), or a sequencing method (use of a checklist to ensure completing all
process steps) is most appropriate.

- Test the method and see if it works.

- Train the operator, review performance, and measure success.

In Summary
Poka-Yoke technique is one of the most precious gems in the crown of Lean management. It is a way of ensuring quality not by running a separate quality assurance process, but by preventing defects from appearing in the first place. Poka-Yoke may be
implemented in any industry and have many benefits, the most important of which are:

- Helps work right the first time

- Over time, makes it impossible for mistakes to occur

- It’s not costly

Claude Shannon - Inventor of Information Theory

Information is processed for the following reasons:

- Reduces uncertainty – Hick-Hyman Law

- 1 bit is the amount of information required to distinguish between 2 equally likely alternatives
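Shannon's unit can be stated as a formula: choosing among N equally likely alternatives requires log2(N) bits, and the Hick-Hyman law models choice reaction time as linear in that information. A small sketch (the coefficients a and b are illustrative; in practice they are fitted per task):

```python
import math

def bits(n_alternatives):
    """Information needed to pick one of n equally likely alternatives."""
    return math.log2(n_alternatives)

def hick_hyman_rt(n_alternatives, a=0.2, b=0.15):
    """Hick-Hyman law: RT = a + b * log2(n + 1), in seconds.
    The +1 reflects the added uncertainty of whether to respond at all."""
    return a + b * math.log2(n_alternatives + 1)

print(bits(2))           # 1.0 – one bit distinguishes two alternatives
print(hick_hyman_rt(3))  # reaction time grows with the number of choices
```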

Henry Ford – 1900’s

It took 200 yrs for Adam Smith’s theories to be implemented, for the following reasons:

- Change of mindset

- Industrialization

- Market

- Standardization – concept coined by Eli Whitney

Eli Whitney – Concept of Interchangeability

Cotton Gin - https://www.history.com/topics/inventions/cotton-gin-and-eli-whitney

- Standard of one product affects many other products

- Standards get affected when there are no common standards

- Standards don’t stand alone

- Standards need to be anticipated. Change is the only constant, and hence anticipation is a must.
- They can be used as an opportunity, a threat, or a means to block others

Frederick Winslow Taylor

Scientific management: Taylor's work The Principles of Scientific Management (source of all the following quotes) was published in
1911. His ideas were an accumulation of his life's work, and included several examples from his places of employment.

The overriding principles of scientific management are that:

- Each part of an individual's work is analyzed 'scientifically', and the most efficient method for undertaking the job is
devised; the 'one best way' of working. This consists of examining the implements needed to carry out the work, and
measuring the maximum amount a 'first-class' worker could do in a day; workers are then expected to do this much work
every day.

- The most suitable person to undertake the job is chosen, again 'scientifically'. The individual is taught to do the job in the
exact way devised. Everyone, according to Taylor, had the ability to be 'first-class' at some job. It was management's role to
find out which job suited each employee and train them until they were first-class.

- Managers must cooperate with workers to ensure the job is done in the scientific way.

- There is a clear 'division' of work and responsibility between management and workers. Managers concern themselves with
the planning and supervision of the work, and workers carry it out.

Examples of Standardization

- 2 types – product related standards & process of the standards

- ISO Standards to Business Process Standards

Process Standards

- Malcom Baldrige Award - https://asq.org/quality-resources/malcolm-baldrige-national-quality-award

- Inter-organizational Process – Salary paid in cash (yesteryears) to bank transfer (today) – Michael Hammer

Time and Motion Study: Efficiency Tool that Improves Processes

For the longest time, people have been searching for the most efficient ways to work. Frederick W. Taylor is one of the experts who
wrote a book on the subject, both literally and figuratively. Taylor’s concepts contributed a great deal to efficiency studies. The time and motion study is among his significant contributions.

Put simply, the study is about evaluating the movements it takes to perform a given task and the time consumed. Time & motion study, at its core, seeks to drive productivity from a workforce.

What is Time and Motion Study?

The time and motion study consists of two components – time study by Frederick Taylor and motion study by Frank B. and Lillian M.
Gilbreth. Taylor began time studies in the 1880s to determine the duration of particular tasks occurring under specific conditions. A
few other studies came before Taylor, but his had the most impact. The time study was a component of the scientific management
theory. Taylor’s approach focused on reducing time wastage for maximum efficiency.

Motion study by the Gilbreths evaluated movements and how they can improve work methods. Frank and Lillian Gilbreth pursued
the motion study in a bid to expound on scientific management. Taylorism, as the theory is called, had a major flaw. It lacked a
human element. Critics said that Taylor’s approach was solely about profits.
The Gilbreths included several variables while studying how to increase efficiency. Some of them are health, skills, habits,
temperament and nutrition. In their book, Gilbreth and Gilbreth explain that motion study looks at the fatigue that workers experience and then finds ways to eliminate it. They recommended solutions like rest-recovery periods, chairs and
workbenches.

Implementation of the scientific management theory was one of the first instances in which process improvement and process management were treated as a scientific problem.

Application of Time-Motion Study in Today's Business

Every task you do, except for thinking, requires some movement. Whether it’s typing code, plugging in a pressure washer or sketching a building plan, movement is key. That is why the time & motion study is applicable even in the modern environment. By analysing how employees operate, and the time they spend, a company can pinpoint where the problem is. Removing inefficiencies increases the productivity of your staff.

For example, finding a better way to manufacture a car means that production time reduces and output increases. Excessive motion
is the biggest cause of time wastage. Completing a task in ten steps, when seven could have easily accomplished the same results
means that a worker is wasting a lot of resources.

Implementing the Study Improves Processes

With proper implementation, the time-motion study allows you to improve processes and optimize performance. Better working methods boost efficiency and decrease fatigue in workers. Effectiveness is not just about how hard you work, but how smart.

Time-motion theory enhances resource planning and allocation. When you know how much time and how many movements particular tasks require, you can apportion the necessary resources. Decreased cost is another advantage. The better you plan resources and the
more work the staff accomplishes, the higher the cost savings. Remember to measure how much time workers save after
implementing changes.

Once you grasp how time and motion study fits into everyday operations, you can use the theory to get the most out of employees.

Therbligs:

Therbligs are 18 kinds of elemental motions that make up a set of fundamental motions required for a worker to perform a manual
operation or task. They are used in the study of motion economy in the workplace. A workplace task is analyzed by recording each of
the therblig units for a process, with the results used for optimization of manual labor by eliminating unneeded movements. Next,
the therblig is listed on a SIMO Chart, which compares the worker’s right and left hand micro motions.

They were first described and later credited to Frank and Lillian Gilbreth. The word is their name spelled backwards with “th”
transposed.

- WASTE CAN, BY LONG HABIT, BECOME BUILT INTO THE JOB!!

- WORST WASTE IS NOT KNOWING YOU HAVE WASTE

- 30:70 = Value added: Non Value added

Henry Ford’s Take away from the theories:

- Interchangeable Parts and Interchangeable processes

- You can learn from anywhere – Inspiration from Butchery Shop

- Never listen to your customer – It isn’t the consumer’s job to know what they want

- Don’t think only of resource utilization but also Flow! – 81 hrs from Iron ore to Automobile
- One best way to do the job and it is the manager’s responsibility to find it

o Managers observe everything

o Workers underplay in their jobs

Henry Gantt

Henry Gantt is best known for a management tool that bears his name, the Gantt chart. Gantt lived at the turn of the twentieth
century, a time when giant corporations were just emerging. Their size led to new management challenges. Gantt rose to the
occasion by immersing himself in a growing movement called scientific management, also known as Taylorism. As the name
suggests, the movement sought to treat management as a science.

Gantt recognized that large projects, say the construction of a building, were made up of many smaller tasks. Some tasks depended
on the completion of others. Walls can’t be put up until the foundation is poured. Other tasks were independent. Electricians
needn’t worry when the plumbers get their job done, and vice versa.

It’s a pretty simple observation. But when a project gets large enough, keeping track of all the dependencies can be difficult. Project
managers have to keep them straight. They can’t have painters arriving before the wall board’s up. Nor do they want the painters
showing up months after the walls have been finished: the goal is to get the job done as quickly as possible.

Gantt charts display these dependencies pictorially. Each task is represented by a horizontal bar whose length corresponds to the
time required to complete the task. Then the bars are laid out on a time line. Each bar is laid down as early as possible, but after all
the tasks that must precede it.
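The "as early as possible, but only after all predecessors" rule that a Gantt chart encodes can be sketched in a few lines of Python. The task names, durations and dependencies below are invented for illustration:

```python
# Earliest-start scheduling for a Gantt chart: each task starts as early as
# possible, but only after all of its predecessors have finished.
# Task names, durations and dependencies are made up for illustration.
tasks = {
    "foundation": (5, []),             # (duration in days, predecessors)
    "walls":      (10, ["foundation"]),
    "plumbing":   (4, ["walls"]),
    "electrics":  (3, ["walls"]),      # independent of plumbing
    "painting":   (2, ["plumbing", "electrics"]),
}

finish = {}
def finish_time(name):
    """Earliest finish = duration + latest finish among predecessors."""
    if name not in finish:
        duration, preds = tasks[name]
        start = max((finish_time(p) for p in preds), default=0)
        finish[name] = start + duration
    return finish[name]

for name in tasks:
    dur = tasks[name][0]
    print(f"{name:12s} start={finish_time(name) - dur:2d}  finish={finish_time(name):2d}")
```

Each printed (start, finish) pair is one horizontal bar on the chart, laid on the timeline as early as its dependencies allow.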

Assumption – Role of the Manager

Concept of a Model: Real World → Model → Solve the Model → Answers → Real World Interpretation → Conclusion

EOQ - Economic order quantity: HELP ME BUY RICE

Economic order quantity (EOQ) is the ideal order quantity a company should purchase to minimize inventory costs such as holding
costs, shortage costs, and order costs. This production-scheduling model was developed in 1913 by Ford W. Harris and has been
refined over time. The formula assumes that demand, ordering, and holding costs all remain constant.

KEY TAKEAWAYS

 The EOQ is a company's optimal order quantity that minimizes its total costs related to ordering, receiving, and holding
inventory.
 The EOQ formula is best applied in situations where demand, ordering, and holding costs remain constant over time.
 One of the important limitations of the economic order quantity is that it assumes the demand for the company’s products
is constant over time.
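The trade-off described above can be written down directly with the classic Harris formula, Q* = sqrt(2DS/H), where D is annual demand, S is the cost per order, and H is the annual holding cost per unit. The demand and cost figures below are illustrative:

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Classic Harris (1913) EOQ: the order size that minimises the sum of
    ordering cost (D/Q * S) and holding cost (Q/2 * H)."""
    return sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Illustrative: 1,200 kg of rice per year, Rs. 100 per order,
# Rs. 6 per kg per year to hold stock.
q = eoq(1200, 100, 6)
print(round(q), "kg per order")
```

With these numbers the ideal order is 200 kg at a time; doubling the order cost pushes the optimum up, doubling the holding cost pushes it down.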

OPHRM – 11.03.2021

RECAP

1. PIN Factory – Adam Smith – Importance of Specialization, right job to right function (1st Factory)

2. 5 Philosophy/ 4 Factory

3. Malcolm Baldrige – Award – Business Process Awards – 7 processes

4. Standardization (Eli Whitney) – Standards are not given, they need to be anticipated/ SOP – Processes/ A good HR Manager
needs to see where standardization is needed and how to go about it.

a. Standards are not mandatory for all, apart from Food, Electrical etc.

b. ISO – Process Standards/ BIS – Product Standards

c. Belief: If you have standards in the process, the product will also come out right.

d. Super-Efficient Company – Process of one company feeding into another company

5. Poka Yoke – error proofing

6. Taylor –

a. One best way to do a Job - Scientific Management

b. Managers are responsible

c. Theory X

7. Ford

a. Interchangeable parts & process

b. Learn to See

c. Never listen to the Customer

d. Flow & not resource utilization


e. Devil is in details

8. Time & motion – Gilbreth

9. Gantt chart

10. Hick Hyman Law –

a. Information reduces uncertainty

b. Can we find out time to act on activities that are intellectual in nature

11. Flow & Resource Utilization

12. Incremental & Out of Box

13. Bad Apple Theory

a. Process vs Person

b. As managers, it is our job to uncover problems

14. EOQ

a. Intuitions backed by Logic

b. Uncover Assumptions

15. Medici Effect

16. Clock Speed

17. A/ B Testing

18. Focused

19. Measurements

20. Design thinking

21. Commodity

22. Moravec & Polanyi Paradox

Gantt chart

- Work Started in 1950’s

- Book binding problem

- 2 Machine example – Washing & Drying

- Johnson’s 2 Machine Problem

- Creating an Algorithm for scheduling for best output


Gantt created the following:

1. Create an algorithm

2. Prove that this is the best way

3. Do the jobs have a defined objective?

4. Scheduling and sequencing
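Johnson's 2-machine rule noted above can be sketched directly: jobs whose shortest time is on machine 1 go as early as possible (in increasing order of that time), and jobs whose shortest time is on machine 2 go as late as possible. The job names and washing/drying times below are invented for illustration:

```python
def johnsons_rule(jobs):
    """Johnson's rule for 2 machines: minimises makespan when every job
    passes through machine 1 and then machine 2 (washing, then drying).
    `jobs` maps job name -> (time on machine 1, time on machine 2)."""
    front = sorted((j for j in jobs if jobs[j][0] <= jobs[j][1]),
                   key=lambda j: jobs[j][0])            # schedule earliest
    back = sorted((j for j in jobs if jobs[j][0] > jobs[j][1]),
                  key=lambda j: jobs[j][1], reverse=True)  # schedule latest
    return front + back

def makespan(seq, jobs):
    """Total time to finish all jobs: machine 2 can only start a job after
    machine 1 has finished it."""
    t1 = t2 = 0
    for j in seq:
        a, b = jobs[j]
        t1 += a                 # machine 1 (washer) works back to back
        t2 = max(t2, t1) + b    # machine 2 (dryer) waits for machine 1 if idle
    return t2

# Illustrative washing/drying times (minutes) for five loads.
jobs = {"A": (3, 6), "B": (7, 2), "C": (4, 7), "D": (5, 3), "E": (2, 8)}
seq = johnsons_rule(jobs)
print(seq, makespan(seq, jobs))
```

For these made-up times the rule orders the loads E, A, C, D, B and finishes everything in 28 minutes, which is the best output any sequence can achieve for this data.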

EOQ:

1. GAS Cylinder Example

a. Re-order Quantity – ROQ = EOQ (Order Cost + Interest cost)

b. Re-order Level – ROL – Lead Time

c. Order point technique
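The reorder level idea from the gas-cylinder example can be made concrete: ROL is the stock that just covers consumption during the lead time, plus a safety buffer. All numbers below are assumed for illustration:

```python
# Reorder level (ROL) sketch for a household gas cylinder; numbers are assumed.
daily_usage = 0.05       # cylinders consumed per day (~1 cylinder every 20 days)
lead_time_days = 4       # days between booking a refill and delivery
safety_stock = 1         # buffer cylinder held against uncertainty

rol = daily_usage * lead_time_days + safety_stock
print(f"Place the refill order when stock falls to {rol:.1f} cylinders")
```

The order quantity itself (ROQ) would come from the EOQ trade-off between order cost and interest/carrying cost; ROL only answers *when* to order.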
Ford (2nd Factory)                     Sloan (GM)
Middle Class                           Aristocrat Class
Affordable Cars                        Classy Cars
Cost efficient                         Key to eco. prosperity is to organize
                                       creation of Customer Dissatisfaction
Butchery Plant                         Fashion Industry
Same model                             Create new versions

3rd Factory – Elton Mayo - TQM – Total Quality Management


- Brainstorming is Scientific or TQM?? TQM
- 6 Sigma – TQM Process

6σ:
- Define the problem - CTQ – from the perspective of the Customer
- Measure – No. of Opportunities & No. of Defects per 10^6 (DPMO) – if this number is 3.4, it meets 6σ
-
CTQ of Dabbawala
1. On time Delivery (They used only 1)

2. Correct person
3. Hygiene
4. No Spillage

If Maruti is 6σ, how many defects if 10^6 cars are inspected?


- Assume there are 100 CTQs
- If I inspect 10^4 cars with 10^2 CTQs each, 10^6 opportunities are checked – 3.4 DPMO
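The DPMO arithmetic above can be checked with a small helper; the numbers mirror the 10^4-cars illustration:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities: the Six Sigma 'Measure' metric."""
    return defects * 1_000_000 / (units * opportunities_per_unit)

# Illustrative: inspect 10^4 cars, each with 10^2 CTQs -> 10^6 opportunities.
# At Six Sigma quality, no more than about 3.4 defects are expected among them.
print(dpmo(defects=3.4, units=10_000, opportunities_per_unit=100))
```

Note that DPMO counts defects against *opportunities*, not against units: a car with 100 CTQs contributes 100 opportunities, which is why 3.4 defects in 10,000 cars can still mean Six Sigma performance.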

HISTORY OF SIX SIGMA - W. Edwards Deming – Joseph M. Juran – Philip B. Crosby

W. Edwards Deming – Mathematician – “In God we trust; all others must bring data.”

- Sampling - Sampling is a process used in statistical analysis in which a predetermined number of observations are taken
from a larger population. The methodology used to sample from a larger population depends on the type of analysis
being performed, but it may include simple random sampling or systematic sampling.

- Sampling helps us understand the behavior – Statistical Quality Control


- UCL & LCL - UCL represents the upper control limit on a control chart, and LCL represents the lower control limit. A control
chart is a line graph that displays a continuous picture of what is happening in a production process with respect to time.

- Chart of Averages - Average lines display the data average for a given chart, drawing a line across the entire chart at
the average value point on the Y axis. By default, average line labels are displayed as a combination of the line value
and the line title.

- Range Chart - An X-bar and R (range) chart is a pair of control charts used with processes that have a subgroup size of
two or more. The standard chart for variables data, X-bar and R charts help determine if a process is stable and
predictable.

- "common causes", also called natural patterns, are the usual, historical, quantifiable variation in a system, while
"special causes" are unusual, not previously observed, non-quantifiable variation.
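The chart-of-averages and range-chart limits described above can be sketched with the standard X-bar/R control-chart constants (A2, D3, D4 for subgroups of five). The sample measurements below are made up for illustration:

```python
# X-bar / R control chart limits on made-up subgroup data (e.g. a dimension
# measured on 5 parts per sampling round).
subgroups = [
    [10.2, 9.9, 10.1, 10.0, 9.8],
    [10.0, 10.3, 9.7, 10.1, 10.0],
    [9.9, 10.0, 10.2, 10.1, 9.9],
]
A2, D3, D4 = 0.577, 0.0, 2.114   # standard constants for subgroup size n = 5

xbars = [sum(g) / len(g) for g in subgroups]     # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]    # subgroup ranges
xbarbar = sum(xbars) / len(xbars)    # grand average: centre line, X-bar chart
rbar = sum(ranges) / len(ranges)     # average range: centre line, R chart

ucl_x = xbarbar + A2 * rbar          # upper control limit, chart of averages
lcl_x = xbarbar - A2 * rbar          # lower control limit
ucl_r = D4 * rbar                    # limits for the range chart
lcl_r = D3 * rbar

print(f"X-bar chart: LCL={lcl_x:.3f}  CL={xbarbar:.3f}  UCL={ucl_x:.3f}")
print(f"R chart:     LCL={lcl_r:.3f}  CL={rbar:.3f}  UCL={ucl_r:.3f}")
```

Points inside the limits are attributed to common causes (natural variation); a point outside them signals a special cause worth investigating.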

W. Edwards Deming’s 14 points for Total Quality Management


- Create constancy of purpose for improving products and services.
- Adopt the new philosophy.
- Cease dependence on inspection to achieve quality.
- End the practice of awarding business on price alone; instead, minimize total cost by working with a single supplier.
- Improve constantly and forever every process for planning, production and service.
- Institute training on the job.
- Adopt and institute leadership.
- Drive out fear.
- Break down barriers between staff areas.
- Eliminate slogans, exhortations and targets for the workforce.
- Eliminate numerical quotas for the workforce and numerical goals for management.
- Remove barriers that rob people of pride of workmanship, and eliminate the annual rating or merit system.
- Institute a vigorous program of education and self-improvement for everyone.
- Put everybody in the company to work accomplishing the transformation.

Deming Cycle, PDCA

The Deming Cycle, or PDCA Cycle (also known as PDSA Cycle), is a continuous quality improvement model consisting of a logical
sequence of four repetitive steps for continuous improvement and learning: Plan, Do, Check (Study) and Act. The PDSA cycle (or
PDCA) is also known as the Deming Cycle, the Deming wheel, or the continuous improvement spiral. Its origin can be traced back to the
eminent statistics expert Walter A. Shewhart, in the 1920s. He introduced the concept of PLAN, DO and SEE. The late Total Quality
Management (TQM) guru and renowned statistician W. Edwards Deming modified the Shewhart cycle to: PLAN, DO, STUDY, and ACT.
Along with the other well-known American quality guru, J. M. Juran, Deming went to Japan as part of the occupation forces of the
allies after World War II. Deming taught many quality improvement methods to the Japanese, including the use of statistics and
the PLAN, DO, STUDY, ACT cycle.

The Deming cycle, or PDSA cycle:


 PLAN: plan ahead for change. Analyze and predict the results.
 DO: execute the plan, taking small steps in controlled circumstances.
 STUDY: check, study the results.
 ACT: take action to standardize or improve the process

Benefits of the PDSA cycle:


 Daily routine management-for the individual and/or the team
 Problem-solving process
 Project management
 Continuous development
 Vendor development
 Human resources development
 New product development
 Process trials

Joseph M. Juran – Romanian-American engineer

- Cost of poor quality (COPQ) or poor quality costs (PQC), are costs that would disappear if systems, processes, and
products were perfect. COPQ was popularized by IBM quality expert H. James Harrington in his 1987 book Poor Quality
Costs. COPQ is a refinement of the concept of quality costs.

- Those costs that are generated as a result of producing defective material.


- This cost includes the cost involved in fulfilling the gap between the desired and actual product/service quality. It also
includes the cost of lost opportunity due to the loss of resources used in rectifying the defect. This cost includes all the
labor cost, rework cost, disposition costs, and material costs that have been added to the unit up to the point of
rejection. COPQ does not include detection and prevention cost.
- Detection to Prevention
- Control Cost (Prevention & Appraisal) – Failure Cost (Internal & External)

Suppliers can generally affect your cost due to:


- Producing defective material.
- Damaging material during delivery

Our COPQ will generally cover the following:


- Cost of labor to fix the problem.
- Cost of extra material used.
- Cost of extra utilities
- Cost of lost opportunity
- Loss of sales/revenue (profit margin)
- Potential loss of market share
- Lower service level to customers/consumers

Philip Crosby – Businessman – “Quality is Free”

- From “quality is free” to “Quality is free, eventually”


- Cost of Ownership - The total cost of ownership (TCO) is the purchase price of an asset plus the costs of operation.
Assessing the total cost of ownership represents taking a bigger picture look at what the product is and what its value is
over time.

- Total Cost of Ownership normally includes eight key elements:

i. Purchase price: cost price and supplier margin.

ii. Associated costs: transport, packaging, customs duties, payment terms etc.

iii. Acquisition cost: operation of the purchasing department.


iv. Cost of ownership: stock management, depreciation costs etc.

v. Maintenance costs: spare parts, maintenance etc.

vi. Usage costs: use value, operation, services etc.

vii. Non-quality costs: deadline compliance, non-compliance processes etc.

viii. Disposal costs: recycling, resale, disposal etc.


- TCO: The benefits - Reducing Total Cost of Ownership remains one of the most popular strategies among procurement
decision makers (32% of companies) in order to create value, and can offer many benefits:

i. Grounds for negotiation with suppliers.

ii. Guidance tool for optimizing direct or indirect costs (avoiding waste, exceeding quality requirements etc.).

iii. Decision-making aid for outsourcing/internalization operations etc.

iv. ROI (Return on Investment) or ROTI (Return on Time Investment) evaluation.

v. Improved long-term financial performance.


5 Philosophies:
- Scientific Management
- TQM
- Theory of Constraints - Goldratt
- Just in Time – Ohno (Lean management)
- Focus – Skinner (need to make trade off)

4 Factory
- PIN Factory
- Ford
- Hawthorne
- Toyota

OPHRM – 16.03.2021

Recap

- Six Sigma
- 4 Factory – Pin/ Ford/ Hawthorne/ Toyota
- 5 Philosophies (written above)
- DMAIC
- TQM
- Order Point technique – ROL vs ROQ
- PDCA - Deming
- CTQ
- Book Binding/ EDD/ Scheduling – Johnson’s 2-Machine algorithm
- Ford vs GM
- Deming/ Juran/ Crosby
- Statistical Quality Control
- Scientific Management/ COPQ
- Customer Dissatisfaction
- Concept of a Model

Theories of Scientific management best applied:


- When standards are defined
- One best way to do the job
- Managers are responsible

Theories of TQM best applied:


- Customer perspective

Theories of Constraints best applied:


- Bottleneck

Theories of Lean best applied:


- Non Value added
- Waste

Theories of Focus best applied:


- Trade off – You can’t be best in everything
- A FUSE is designed to fail – so that we understand our problem area

- ANDON – Japanese term meaning “light” or “lamp.” In Lean manufacturing, an andon refers to a tool that is used to
inform and alarm workers of problems within their production process. It is an integral part of applying Jidoka
(automation with human intelligence) in the workplace

- Deming – PDCA & Statistical Quality Control & 14 Principles

- SQC - Statistical Quality Control is typically the measuring and recording of data against specific requirements for a
product ensuring they meet the necessary requirements – size, weight, colour etc.

- CTQ are the key measurable characteristics of a product or process whose performance standards or specification
limits must be met in order to satisfy the customer. They align improvement or design efforts with customer
requirements

- DPMO - In process improvement efforts, defects per million opportunities or DPMO is a measure of process
performance
- COPQ (Juran) – Large amount of money is spent on rectifying problems. Cost of poor quality or poor quality costs, are
costs that would disappear if systems, processes, and products were perfect. COPQ was popularized by IBM quality
expert H. James Harrington in his 1987 book Poor Quality Costs. COPQ is a refinement of the concept of quality costs

- Crosby – TCO – The total cost of ownership is the purchase price of an asset plus the costs of operation. Assessing the
total cost of ownership represents taking a bigger picture look at what the product is and what its value is over time.

3 CONSTANTS
- Inventory
- Project
- Process

Process Fundamentals – A Bad process will beat a good person

Standards of Process Design


- Product Standards - IS (BIS)
- Process - ISO 9000
- Business Process
- Inter Organizational Process

Philosophy of Process Design


- PDCA

What is a process?
- Step by step
- Quantity
- Time
- Input
- Value addition
- Output
- Checks and Measures
- Feedback
- Systematic Process – another process to analyze the feedback of the process – it completes the PDCA Cycle

Guru of Process – Michael Hammer – Super Efficient Company


- Any process is better than no process
- A good process is better than a bad process (Importance of Measures)
- Even a good process can be improved (PDCA)

Process Flow Diagram


- Bread Making Process
i. Rectangle – Task or Activity (process)
ii. Arrow – for the flow
iii. Inverted Triangle – Storage of Goods (Work in Progress/ Finished Goods) (Input and Output)
The Japanese define processes with a lot of rigidity to ensure that tracing back problems will be feasible.

Measures in Process:

- P – Product
- Q – Quality – IS standard
- D – Delivery
- S – Safety
- C – Cost
- M – Morale
- E – Environment

Process flow diagrams are usually created to understand “P” related measures

Cycle – Time Process (Focused/ Tradeoffs)

If cycle time goes up – Capacity goes down and vice versa

- To understand the capacity, one must understand the output time


- In 1 hr, bread making can make 100 loaves; packing can handle 133.
- Bottleneck is the bread-making machine
- Hence, for 15 mins every hour, the packing machine stays idle.
- When 100 loaves are made in 60 mins, time per loaf is 0.6 mins – Cycle time

- Cycle Time – Time between production of any 2 products (0.6 mins)


- Turn Around Time (TAT/ Manufacturing Lead Time) – Time taken by one piece going from input to output (1.45 mins)
- 90% of the time is wait time or idle time

Exercise: Recruitment Process

- Resume Verification – 10 mins


- Interview – 30 mins
- Salary Negotiations – 20 mins

How many people can be interviewed in 8 hrs? – 2 every hr – 16 Candidates (Capacity is tied to the bottleneck)

Bottleneck – Interview

            Resume            Interview (Bottleneck)   Salary
Rate        1 in 10 mins      1 in 30 mins             1 in 20 mins
Capacity    6 in 60 mins      2 in 60 mins             3 in 60 mins
            48 in 8 hrs       16 in 8 hrs              24 in 8 hrs
Idle time   20 mins/ 30 mins  0 mins/ 30 mins          10 mins/ 30 mins

Post 2nd Interview Station

            Resume            Interview                Salary (Bottleneck)
Rate        1 in 10 mins      2 in 30 mins             1 in 20 mins
Capacity    6 in 60 mins      4 in 60 mins             3 in 60 mins
            48 in 8 hrs       32 in 8 hrs              24 in 8 hrs
Idle time   20 mins/ 30 mins  0 mins/ 30 mins          10 mins/ 30 mins

Bottleneck has shifted


Work in Progress × Cycle time = Wait period

Exercise: Bank

- Teller A – 1 min
- Teller B – 3 mins
- Teller C – 2 mins

How many people can be catered in 1 hrs?

Cycle time – 3 mins

3 mins – 1 Customer
6 mins – 2 Customer
60 mins – 20 Customer

              Teller A         Teller B (Bottleneck)   Teller C
Rate          1 in 1 min       1 in 3 mins             1 in 2 mins
Capacity      60 in 60 mins    20 in 60 mins           30 in 60 mins
Utilization   33%              100%                    66%
Idle time     2 mins/ 3 mins   0 mins/ 3 mins          1 min/ 3 mins

40 customers in queue

Wait period = 40 *3 = 120mins (TAT)
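The bank exercise can be verified in a few lines: the slowest teller (the bottleneck) sets the cycle time of the whole line, capacity follows from that, and the wait period is WIP × cycle time:

```python
# Bank exercise: three sequential stations with the given service times.
service_times = {"Teller A": 1, "Teller B": 3, "Teller C": 2}  # minutes/customer

cycle_time = max(service_times.values())     # bottleneck sets the pace: 3 mins
capacity_per_hour = 60 // cycle_time         # customers served per hour
utilization = {t: s / cycle_time for t, s in service_times.items()}
idle_per_cycle = {t: cycle_time - s for t, s in service_times.items()}

queue_length = 40                            # work in progress (customers waiting)
wait_period = queue_length * cycle_time      # WIP x cycle time

print(f"Capacity: {capacity_per_hour} customers/hr, wait: {wait_period} mins")
print(idle_per_cycle)
```

This reproduces the numbers above: 20 customers per hour, Teller A idle 2 of every 3 minutes, and a 40-person queue implies a 120-minute wait.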

Projects

Project is all about NEW.

PMP – A project is a set of processes (PMBOK) – 49 different processes in all

5 process:
- Initiating
- Planning
- Execution
- Monitoring
- Closure

10 Knowledge areas
- Project Integration Management
- Project Scope Management
- Project Schedule Management
- Project Cost Management
- Project Quality Management
- Project Resource Management
- Project Communications Management
- Project Risk Management
- Project Procurement Management
- Project Stakeholder Management

3 Objective/ criteria

- Scope
- Time
- Cost

What is most important for the Project? Project management only addresses time and cost.

Bulb Company example

- Comparison with previous results to determine the performance of the current


- Schedule

4 Major questions: (LITMUS TEST)

- What is to be done?
- Where is it to be done?
- Am I on Schedule?
- When is the project likely to be completed?

Elements of Definition of Projects – A temporary endeavor to create a unique product/ services


- New
- Start and End
- Re-planning/ Progressive elaboration
- Goal – Solution matrix

                       SOLUTION
                       Clear           Not Clear

GOAL   Not Clear       Extreme Project Mngt
       Clear           Traditional     Agile (Iterative & Incremental) (SCRUM)

              Traditional              Agile
Technology    CPM/ PERT                SCRUM
Software      MS Project/ Primavera    JIRA/ Trello

PERT – Program Evaluation Review Technique - When the project time estimation is presumed to be in a range of time (R & D)

CPM – Critical Path method – When the project time estimation is presumed to be a fixed time (Constructions)

All standard project tools are based on CPM.

Critical Path - In project management, a critical path is the sequence of project network activities which add up to the longest overall
duration, regardless of whether that longest duration has float or not. This determines the shortest time possible to complete the project.

Float/ Slack - Slack time is an interval that occurs when there are activities that can be completed before the time when they are
actually needed. The difference between the scheduled completion date and the required date to meet the critical path is the
amount of slack time available.

- Lower the float - more important is the activity


- Higher the float - less important
- Zero float means the activity is critical

Example:

A = 2 days
B = 3 days
C = 5 Days after completion of A

So, Length of Critical path = 7 days


A = Cannot be delayed
B = Can be delayed by 4 days (7 days – 3 days = Slack)
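The A/B/C example above can be checked with a minimal forward/backward pass of the Critical Path Method:

```python
# CPM on the example: A = 2 days, B = 3 days, C = 5 days after A.
# Float = latest start - earliest start; zero-float activities are critical.
activities = {
    "A": (2, []),       # (duration, predecessors); predecessors listed first,
    "B": (3, []),       # so plain insertion order is a valid processing order
    "C": (5, ["A"]),
}

# Forward pass: earliest finish of each activity.
ef = {}
for name, (dur, preds) in activities.items():
    es = max((ef[p] for p in preds), default=0)
    ef[name] = es + dur

project_length = max(ef.values())

# Backward pass: latest finish without delaying the project.
lf = {name: project_length for name in activities}
for name in reversed(list(activities)):
    dur, preds = activities[name]
    for p in preds:
        lf[p] = min(lf[p], lf[name] - dur)

for name, (dur, _) in activities.items():
    slack = lf[name] - ef[name]
    print(f"{name}: float = {slack} day(s){'  <- critical' if slack == 0 else ''}")
```

The output matches the notes: the project takes 7 days, A and C have zero float (the critical path), and B can slip by 4 days.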

Software answers:
- What are the activities?
- What is the duration?
- What is the predecessor?

Recap

- Clockspeed
- Medici
- 3 constants
- Oil/ Water/ rare earth
- China
- 5 slogans – Focused/ Bottleneck/ Measurements/ Value added Non value added/ One size fit all/
- Resource utilization & Flow
- 4 factories
- 5 Philosophies
- Process
- Cycle Time
- TAT
- Projects
- Critical Path
- Float
- Network
- Traditional & Agile
- Critical Path Method
- Six Sigma
- DMAIC
- TQM
- Order Point technique – ROL vs ROQ
- PDCA - Deming
- CTQ
- Book Binding/ EDD/ Scheduling – Johnson’s 2-Machine algorithm
- Ford vs GM
- Deming/ Juran/ Crosby
- Statistical Quality Control
- Scientific Management/ COPQ
- Customer Dissatisfaction
- Concept of a Model
