Notes On Session 1-10 - DR
https://www.interaction-design.org/literature/article/what-is-design-thinking-and-why-is-it-so-popular
Operations
E.g.: The McDonald brothers discovered a mystery
The How of McD –
1. Drive-in concept
2. Convenience
3. Yesteryear’s fine dining
4. Ford reducing the price of cars, hence people buying cars to go for a drive
5. Spotted the unmet customer requirement
In an interesting way, Martin combines these two thinking modes in his model of knowledge creation. He calls it the knowledge
funnel. There are three stages in the funnel:
- Mystery: a problem to solve; a question; a chaos of data; a surprise; a wicked problem.
- Heuristic: “a rule of thumb that helps narrow the field of inquiry and work the mystery down to a manageable size”.
- Algorithm: a fixed formula, a tested method or procedure.
The knowledge funnel is intended to be a general model of knowledge creation. The challenge is how to drive through the
knowledge funnel from mystery to heuristic to algorithm. The great promise is that “the firms that master it will gain a nearly
inexhaustible, long-term business advantage” (p. 6-7). Design thinking is precisely the form of thought that gives this mastery.
Although Martin is mainly dealing with business, he also notes the importance of the knowledge funnel model and design thinking in
science. I agree. Of course, scientific thinking includes other elements too, but problem solving and developing a heuristic and an
algorithm are an essential part of the scientific method.
Source
https://www.thehindu.com/business/Creating-value-across-the-knowledge-funnel/article16838735.ece
https://businessisdesign.wordpress.com/design-thinking-seminars/week-4/
https://www.youtube.com/watch?v=yfNaCrkgsjo
CLOCKSPEED
Clockspeed: Winning Industry Control in the Age of Temporary Advantage: https://www.publishersweekly.com/978-0-7382-0001-9
In propounding a "theory of business genetics," Fine, a professor of management at MIT, analyzes factors that determine
corporate evolution, then outlines approaches to aid strategic decision making. For Fine, industries change at different rates, or
"clockspeeds," depending on differing opportunities for innovation and competition, as is the case in the animal kingdom.
Changing relationships and their causes often seem more apparent, he notes, in fast-clockspeed scenarios such as the current
computer industry. However, "all advantage is temporary," Fine continues, "and the faster the clockspeed, the more temporary
the advantage." Against that background, his main thesis is that design of the supply chain is "the ultimate core competency" for
maintaining advantage in business. Fine advocates diligently and continuously studying its dynamics from the standpoints of
organization, technology and capability. Citing the case of IBM as a cautionary tale, Fine notes the company's flawed decision to
outsource its PC's microprocessor and operating system, with the result that customers are more concerned with the label "Intel
Inside" than the actual makeup of their computer. Oriented primarily to specialists (and prospective clients) in the computer
industry, Fine's theorizing suffers somewhat from management jargon yet is impressively well tuned.
Changes in the power industry, and changes that have influenced other industries
1. Open access (Mobile portability)
2. Time of the day (Movie Tickets)
3. Prepaid Cards (Mobile)
4. Smart Grid (The Internet)
5. Banking (Virtual)
6. Trading (Share Market)
7. Auto Metering (Net banking)
MEDICI EFFECT
https://worldofwork.io/2019/07/the-medici-effect/
The Medici effect is the name given to the idea that increased creativity and innovation occurs through diversity. When ideas and
talented people from different fields are brought together to collaborate, step-changes can occur. The idea comes from a book of
the same name by Frans Johansson.
No one person can specialize in all activities, and hence needs to collaborate. One therefore needs to invest in training and skill
development in adjacent technologies.
“The Medici Effect” is a book by Frans Johansson. It reflects on a few key innovations from history. Its central theme is that many key
primary innovations arise as a result of intersectionality. In other words, by bringing together people and ideas from a range of
diverse backgrounds, you increase the likelihood of intellectual cross-pollination and through this, great leaps in innovation.
MORAVEC PARADOX
https://medium.com/@froger_mcs/moravecs-paradox-c79bf638103f
https://www.thinkautomation.com/bots-and-ai/what-is-moravecs-paradox-and-what-does-it-mean-for-modern-ai/
There is a finding in the field of AI, called Moravec’s paradox, which states that activities like abstract thinking and reasoning, or skills
classified as “hard” — engineering, maths or art — are far easier for machines to handle than sensory or motor-based unconscious
activities.
It’s much easier to implement specialized computers that mimic adult human experts (professional chess or Go players, artists —
painters or musicians) than to build a machine with the skills of a one-year-old child: the ability to learn to move around, recognize
faces and voices, or pay attention to interesting things. The “easy” problems are hard and require enormous computational
resources; the “hard” problems are easy and require very little computation.
Researchers look for the explanation in the theory of evolution — our unconscious skills were developed and optimized during the
natural selection process, over millions of years of evolution. The newer a skill is (like abstract thinking, which appeared “only”
hundreds of thousands of years ago), the less time nature has had to adjust our brains to handle it.
It’s not easy to interpret Moravec’s paradox. Some say it describes a future where machines take the jobs that require specialist
skills, leaving people to serve an army of robotic chiefs and analysts. Others argue that the paradox guarantees that AI will
always need the assistance of people. Or, perhaps more correctly, people will use AI to improve those skills which aren’t as highly
developed by nature.
Moravec’s paradox does make one thing clear — the fact that we have developed computers that beat humans at Go or chess doesn’t
mean that Artificial General Intelligence is just around the corner. Yes, we are one step closer. But as long as AGI means for us a “full
copy of human intelligence”, over time it will only get harder.
Polanyi paradox
https://luca-d3.com/data-speaks/technology-dictionary/polanyi-paradox
Polanyi's paradox, from 1966, captures the observation that “we can know more than we can tell”: much human skill is tacit and
cannot be fully articulated. It is not truly a paradox; what it reflects is not a contradiction but a difficulty, a barrier to overcome in the
development of artificial intelligence and automation. And since 1966 a lot has changed: important technological advances have
taken place, and different strategies have been implemented to try to overcome this difficulty.
COUNTRIES TO WATCH
- JAPAN
- GERMANY
- CHINA
Confucius saying: I hear and I forget. I see and I remember. I do and I understand.
A vs B Approach / A/B Testing
1. Multiple ideas
2. Innovative options and feasible ones
3. Choice of brainstorming
4. Quantity of ideas over quality
5. Three-quarters of a brainstorming session goes into understanding the problem; the final quarter yields the ideas
6. Freedom to compare
7. Preserve the strengths and avoid the weaknesses
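When comparing variant A against variant B quantitatively, a standard statistical approach (not from the notes; names and numbers below are invented for illustration) is a two-proportion z-test on conversion counts:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for comparing two conversion rates, using a pooled rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    return (p_b - p_a) / se

# Variant B converts 120/1000 visitors vs. A's 100/1000
z = two_proportion_z(100, 1000, 120, 1000)
print(round(z, 2))  # -> 1.43; |z| > 1.96 would indicate significance at the 5% level
```

Here 1.43 < 1.96, so this (hypothetical) experiment would not yet justify declaring B the winner.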
OPHRM – 18/02/2021
Theory of Constraints
Surprising Facts about Constraints
You will always have a constraint, so choose wisely ... perhaps the most capital intensive, or energy consuming, or largest
batch, or longest touch time, etc.
If you identify the wrong constraint, it is easily rectified and causes no permanent damage. The Five Focusing Steps auto-
correct for errors made over time.
The constraint may appear to shift suddenly based on product mix, however this is often due to batching practices rather
than actual shifting of the constraint.
Most systems typically have ONE SINGLE RESOURCE CONSTRAINT such as a machine or department. This constraint, which
may or may not be binding at any given point of time, is referred to as the Capacity-Constrained Resource (CCR). In certain
cases there may be 2-3 CCRs, but rarely more.
Permanent constraints typically include sales/marketing (with better techniques we could always raise prices) and R&D
(with more awesome products we could make far higher margins).
Eventually the constraint should be stabilized; frequently shifting constraints wreak havoc on policies, procedures and
people.
The constraint of most organizations is not well utilized, often less than 50% on a 24x7 basis. If the reasons for under-utilization are
not immediately clear, try measuring the constraint's OEE including the breakup of availability/quality/performance. Gather the
underlying data and analyze it using Pareto techniques. Once the primary causes are identified, use fishbone diagrams and Five Whys
analysis to drill down to the root cause of the under-performance.
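OEE itself is simply the product of the three factors mentioned (availability, performance, quality); a minimal sketch, with figures invented for illustration:

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness = A x P x Q, each expressed as a fraction."""
    return availability * performance * quality

# e.g. the constraint runs 80% of scheduled time, at 75% of its ideal rate,
# with 95% good-quality output:
print(f"{oee(0.80, 0.75, 0.95):.2%}")  # -> 57.00%
```

Measuring the breakup this way shows which of the three factors is the biggest lever before drilling into root causes.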
When the root causes are clear, eliminate them on a permanent basis. Quality and productivity tools such as Six Sigma, Poka-Yoke,
design of experiments, SMED, etc. often provide the answer, depending upon the nature of the problem.
Focusing Step #3: SUBORDINATE everything else to the constraint
By definition, any non-constraint has more capacity to produce than the constraint itself. Left unchecked, this results in bloated WIP
inventory, elongated lead times, and frequent expediting/firefighting. Hence, it is crucial to avoid producing more than the
constraint can handle. In a manufacturing environment this is accomplished by choking the release of raw material in line with the
capacity of the constraint.
Equally important is ensuring that the rest of the system supports the work of the constraint at all times. It must never ever be
starved for inputs, or fed poor quality materials. This can be achieved by maintaining a reasonable buffer of safety stock. Similarly,
other established policies and habits can hamper productivity at the constraint and must be systematically aligned to achieve
maximum performance.
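The “choking” of raw-material release described above can be sketched as follows (a simplified illustration with invented numbers, in the spirit of drum-buffer-rope; the function name is ours):

```python
def choked_release(demand, drum_rate):
    """Release raw material only at the constraint's pace (the 'drum');
    excess demand queues as backlog instead of flooding the floor as WIP."""
    backlog, plan = 0, []
    for orders in demand:
        backlog += orders
        release = min(backlog, drum_rate)  # never release more than the drum can consume
        plan.append(release)
        backlog -= release
    return plan

# Demand fluctuates, but release is held to the drum's 10 units per period:
print(choked_release([14, 6, 12, 9], drum_rate=10))  # -> [10, 10, 10, 10]
```

The fluctuating demand is smoothed into a steady release, which is exactly what keeps WIP and lead times from ballooning in front of the constraint.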
Focusing Step #4: ELEVATE the constraint
Once the existing capacity of the constraint is exhausted, it must be expanded by investing in additional equipment/land, hiring
people, or the like.
Focusing Step #5: PREVENT INERTIA from becoming the constraint!
Once elevated, the weak link may not remain the weakest. Consider elevating other resources to retain the old constraint, depending
on where you wish to have the constraint in the long term. A new constraint demands a whole new way of managing the system.
We therefore return to Step 1, and thus begins our journey of continuous improvement...
What is a Bottleneck?
A bottleneck is a point of congestion in a production system (such as an assembly line or a computer network) that occurs when
workloads arrive too quickly for the production process to handle. The inefficiencies brought about by the bottleneck often create
delays and higher production costs. A bottleneck can have a significant impact on the flow of manufacturing and can sharply increase
the time and expense of production. Companies are more at risk for bottlenecks when they start the production process for a new
product. This is because there may be flaws in the process that the company must identify and correct; this situation requires more
scrutiny and fine-tuning. Operations management is concerned with controlling the production process, identifying potential
bottlenecks before they occur, and finding efficient solutions.
A bottleneck affects the level of production capacity that a firm can achieve each month. Theoretical capacity assumes that a
company can produce at maximum capacity at all times. This concept assumes no machine breakdowns, bathroom breaks, or
employee vacations.
Because theoretical capacity is not realistic, most businesses use practical capacity to manage production. This level of capacity
assumes downtime for machine repairs and employee time off. Practical capacity provides a range for which different processes can
operate efficiently without breaking down. Go above the optimum range and the risk increases for a bottleneck due to a breakdown
of one or more processes.
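The relationship between the two capacity notions can be sketched simply (figures invented for illustration):

```python
def practical_capacity(theoretical_units, downtime_fraction):
    """Theoretical capacity discounted for expected downtime
    (machine repairs, breaks, employee time off)."""
    return theoretical_units * (1 - downtime_fraction)

# A line rated at 10,000 units/month with 15% expected downtime:
print(practical_capacity(10_000, 0.15))  # -> 8500.0
```

Planning against the 8,500-unit figure rather than the theoretical 10,000 leaves headroom, so a bad week doesn't immediately turn into a bottleneck.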
If a company finds that its production capacity is inadequate to meet its production goals, it has several options at its disposal.
Company management could decide to lower their production goals in order to bring them in line with their production capacity. Or,
they could work to find solutions that simultaneously prevent bottlenecks and increase production. Companies often use capacity
requirements planning (CRP) tools and methods to determine and meet production goals.
PDCA Cycle:
Explained briefly, the Plan-Do-Check-Act cycle is a model for carrying out change. It is an essential part of the lean manufacturing
philosophy and a key prerequisite for continuous improvement of people and processes.
First proposed by Walter Shewhart and later developed by W. Edwards Deming, the PDCA cycle became a widespread framework for
constant improvements in manufacturing, management, and other areas.
PDCA is a simple four-stage method that enables teams to avoid recurring mistakes and improve processes.
PLAN:
At this stage, you will literally plan what needs to be done. Depending on the project's size, planning can take a major part of your
team’s efforts. It will usually consist of smaller steps so that you can build a proper plan with fewer possibilities of failure.
Before you move to the next stage, you need to be sure that you have answered some basic questions about the problem, the
resources required, and how success will be measured.
DO
Be aware that unpredicted problems may occur at this phase. This is why, in a perfect situation, you may first try to implement your
plan on a small scale and in a controlled environment.
Standardization is something that will definitely help your team apply the plan smoothly. Make sure that everybody knows their
roles and responsibilities.
CHECK
This is probably the most important stage of the PDCA cycle. If you want to clarify your plan, avoid recurring mistakes, and apply
continuous improvement successfully, you need to pay enough attention to the CHECK phase.
Here, you need to audit your plan’s execution and see if your initial plan actually worked. Moreover, your team will be able to
identify problematic parts of the current process and eliminate them in the future. If something went wrong during the process, you
need to analyze it and find the root cause of the problems.
ACT
Finally, you arrive at the last stage of the Plan-Do-Check-Act cycle. Previously, you developed, applied, and checked your plan. Now,
you need to act. If everything went well and your team managed to achieve the original goals, you can proceed and adopt the whole
plan; your PDCA model then becomes the new standard baseline. However, every time you repeat a standardized plan, remind your
team to go through all the steps again and keep trying to improve.
The PDCA cycle is a simple but powerful framework for fixing issues on any level of your organization. It can be part of a bigger
planning process, such as Hoshin Kanri. The repetitive approach helps your team find and test solutions and improve them through a
waste-reducing cycle. The PDCA process includes a mandatory commitment to continuous improvement, and it can have a positive
impact on productivity and efficiency. Finally, keep in mind that the PDCA model requires a certain amount of time, and it may not
be appropriate for solving urgent issues.
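As a toy sketch (purely illustrative, not from the source), the cycle's iterative shape can be written as a loop: try the plan on a small scale (Do), compare the outcome to a target (Check), then standardize or adjust (Act) and re-plan:

```python
def pdca(run_trial, target, adjust, plan, max_cycles=10):
    """Toy PDCA loop: Do the plan on a small scale, Check the outcome
    against the target, then Act: adopt it as the baseline or revise it."""
    for _ in range(max_cycles):
        result = run_trial(plan)       # Do
        if result >= target:           # Check
            return plan                # Act: standardize as the new baseline
        plan = adjust(plan, result)    # Act: revise, then re-Plan
    return None                        # target not reached within the cycle budget

# Hypothetical example: shrink a batch size until yield meets 0.85
yield_of = lambda batch: 1 - 0.02 * batch   # invented model: smaller batches -> higher yield
smaller  = lambda batch, _: batch - 5       # invented adjustment rule
print(pdca(yield_of, 0.85, smaller, plan=30))  # -> 5
```

The `max_cycles` cap mirrors the note above that PDCA takes time and is not suited to urgent firefighting.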
The introductory chapter starts with a story about McDonald's journey from mystery (how and what did Californians want to eat?) to
algorithm (stripping away uncertainty, ambiguity, and judgment from almost all processes). It briefly discusses analytical thinking,
intuitive thinking and design thinking as ways to solve mysteries and advance knowledge, and the fine balance between exploring
new knowledge and exploiting existing knowledge.
It introduces and explores the concept of the "Knowledge Funnel", describing how knowledge advances from mystery to heuristic to
algorithm, allowing businesses to gain efficiency and lower
costs. This is explored also in later chapters: "Mysteries are expensive, time consuming, and risky; they are worth tackling only
because of the potential benefits of discovering a path out of the mystery to a revenue-generating heuristic", "The algorithm
generates savings by turning judgment… …into a formula or set of rules that, if followed, will produce a desired solution" and
“Computer code – the digital end point of the algorithm stage – is the most efficient expression of an algorithm”. It also addresses
the need for organizations to re-explore solved mysteries, even the founding ideas behind the business, and not get too comfortable
focusing on the "administration of business" running an existing algorithm.
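Martin's point that code is the digital end point of the funnel can be illustrated with a toy, entirely hypothetical example: a staffing rule of thumb (heuristic) hardened into a formula (algorithm):

```python
# Hypothetical illustration: a drive-in's staffing heuristic
# ("roughly one cook per 30 lunchtime cars, never fewer than two")
# becomes an algorithm once the judgment is codified as a formula.
def cooks_needed(cars_per_hour: int) -> int:
    return max(2, -(-cars_per_hour // 30))  # -(-a // b) is ceiling division

print(cooks_needed(95))  # -> 4
```

Once written down like this, the judgment that once lived in a manager's head can be run cheaply and consistently by anyone, which is exactly the efficiency gain (and the risk of over-comfort) the chapter describes.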
In addition, the first chapter presents abductive logic and some ideas originated by philosopher Charles Sanders Peirce: that it is not
possible to prove a new thought, concept, or idea in advance, and that all new ideas can be validated only through the unfolding of
future events. To advance knowledge we need to make a "logical leap of the mind", or an "inference to the best explanation", to
imagine a heuristic for understanding a mystery.
The second chapter focuses on the distinction between reliability (produce consistent, predictable outcomes by narrowing the scope
of a test to what can be measured in a replicable, quantitative way) and validity (produce outcomes that meet a desired objective,
that through the passage of time will be shown to be correct, often incorporating some aspects of subjectivity and judgment to be
achieved). Roger's main point in the chapter (or even in the book) is that today's business world is focusing too much on reliability
(due to three forces: demand for proof, an aversion to bias and the constraints of time), with algorithmic decision-making
techniques using various systems (such as ERP, CRM, TQM, KM) to crunch data objectively and extrapolate from the past to make
predictions about the future. "What organizations dedicated to running reliable algorithms often fail to realize is that while they
reduce the risk of small variations in their businesses, they increase the risk of cataclysmic events that occur when the future no
longer resembles the past and the algorithm is no longer relevant or useful." In the turbulent times we live in, where new
mysteries constantly spring up that reliable systems won't address or even acknowledge, businesses risk being outflanked by new
entrants solving old and new mysteries developing new heuristics and algorithms. "Without validity, an organization has little chance
of moving knowledge across the funnel. Without reliability, an organization will struggle to exploit the rewards of its advances… the
optimal approach... is to seek a balance of both."
3. Design thinking: How thinking like a designer can create sustainable advantage
Chapter three starts with an interesting case of Research In Motion (RIM) that leads into a discussion of what design thinking really
is. Roger uses the quote by Tim Brown of IDEO, "a discipline that uses the designer's sensibility and methods to match
people's needs with what is technologically feasible and what a viable business strategy can convert into customer value and market
opportunity" and adds himself "a person or organization instilled with that discipline is constantly seeking a fruitful balance between
reliability and validity, between art and science, between intuition and analytics, and between exploration and exploitation". That
designers live in the world of abductive reasoning, actively look for new data points, challenge accepted explanations to posit what
could possibly be true (in contrast to the two dominant forms of logic - deduction and induction, with the goal to declare a
conclusion to be true or false).
The chapter ends with the first discussion of roadblocks to design thinking (many more to come): one is the corporate tendency to
settle at the current stage in the knowledge funnel; another is how "highly paid executives or specialists with knowledge, turf and
paychecks to defend" hold the company's heuristics in their heads with no interest in advancing to the algorithm stage, which would
make them less important. This leads nicely into the fourth chapter, about the transformation of Procter & Gamble.
Another highly interesting topic covered in the chapter is the change of processes within P&G, including the strategy review.
Lafley recognized that the existing processes were a recipe for producing reliability, not validity, "so risky creative leaps were out of
the question". A transition was made from annual reviews with category managers pitching, "with all the inductive and deductive proof needed
to gain the approval of the CEO and senior management" to "forcing category managers to toss around ideas with senior
management… to become comfortable with the logical leaps of mind needed to generate new ideas".
5. The balancing act: How design-thinking organizations embrace reliability and validity
The chapter focuses on the need to balance reliability and validity, and the challenges of doing so (foremost, structures, processes
and cultural norms tilted toward reliability). "Financial planning and reward systems are dramatically tilted toward running an
existing heuristic or algorithm and must be modified in significant ways to create a balance between reliability and validity". Roger
presents a rough rule of thumb "when the challenge is to seize an emerging opportunity, the solution is to perform like a design
team: work iteratively, build a prototype, elicit feedback, refine it, rinse, repeat… On the other hand, running a supply chain, building
a forecasting model, and compiling the financials are functions best left to people who work in fixed roles with permanent tasks".
The chapter feels somewhat repetitive in its account of the uphill battle for validity, and more obstacles to change are presented.
Roger adds to the existing body of knowledge with the twist of reliability vs. validity in creating a new market, and the knowledge
funnel that took Cirque du Soleil from a one-off street festival to an unstoppable international $600 million-a-year business with four
thousand employees. Laliberté has reinvented Cirque's creative and business models time and time again, "usually over protests that he was fixing what
was not broken and that he could destroy the company". Other CEOs and cases covered in the chapter are James Hackett of
Steelcase, Bob Ulrich of Target, and Steve Jobs of Apple.
The role of the CEO and different approaches to build design-friendly organizational processes and norms into companies are
discussed referring to the different cases presented.
Again, Roger returns to the reliability vs validity battle, now from a CEO perspective with terms such as "resisting reliability", "those
systems-whether they are for budgeting, capital appropriation, product development…", and "counter the internal and external
pressures toward reliability".
Roger also presents five things that the design thinker needs to do to be more effective with colleagues at the extremes of the
reliability-validity spectrum.
PSYCHOLOGY OF QUEUING:
If callers are waiting to speak to your customer service, give them the chance to get called back when it’s their turn. If fans are
waiting for an artist to perform, let them join in on a game of trivia using an app like Kahoot. If customers are waiting in an online
queue, customize the queue page and embed videos or games.
Online queues actually have an advantage over physical queues as customers aren’t limited by the need to stand in line. If your
virtual waiting room can notify visitors when it’s their turn in line, they can check email, tidy up the house, or do any number of
things to occupy their time while waiting.
If your business is a restaurant, let your customers preview the menu. If you’re running an online product launch, let customers in
the online queue read more about the product so they feel like they’ve started the buying process. Even better, give them a sneak
peek of upcoming products.
Adding a progress bar on the online queue page also highlights for customers that they’ve started. It shows a beginning and an end,
and waiting becomes conceptualized as progress.
Such explanations are even more critical during an online queue, where there are fewer contextual cues available to your visitors.
Saying your site is experiencing “technical difficulties” is a vague and unnerving description for visitors. Make sure to provide a clear
explanation of why your customers are in a queue (e.g. “Hi Sneakerhead! So that everyone has a fair shot at getting their hands on a
pair of new sneaks, we’ve reserved a place in line for you in our virtual waiting room.”)
If possible, keep real-time communication flowing to your waiting customers to keep them up-to-date and remind them why there is
a wait.
A first-in, first-out (FIFO) (or first-come, first-served) wait is the exemplar of fairness. Make sure your queue—whether online or
physical—operates in this way.
If you’re operating an online queue, remember to address customers who arrive early. For example, we’ve designed our virtual
waiting room to place early visitors in a pre-queue with a countdown to the official start of the queued event. When the sale or
registration begins, we assign a randomized queue number to all early visitors and then operate the queue in a first-in, first-out
fashion. This ensures no early visitor benefits unfairly from arriving earliest, while giving everyone who arrives before the start a fair
shot at being first in line.
If your setup involves multiple queues, think again. A large portion of queue anxiety surrounds being unfairly overtaken by others,
what Richard Larson calls “skips and slips”. One serpentine line removes any need for your customers to make (and constantly
reassess) a decision about choosing the “right line”.
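The pre-queue scheme described above can be sketched as follows (a minimal illustration; `open_queue` and the visitor names are invented for the example):

```python
import random
from collections import deque

def open_queue(prequeue_visitors, seed=None):
    """Assign randomized positions to visitors who arrived before the event
    starts; everyone arriving afterwards is served first-in, first-out."""
    rng = random.Random(seed)
    order = list(prequeue_visitors)
    rng.shuffle(order)    # early birds get random slots among themselves
    return deque(order)   # later arrivals simply append to the right

queue = open_queue(["ana", "bo", "cy"], seed=42)
queue.append("dan")       # dan arrives after the sale starts, so he goes last
print(queue.popleft() in {"ana", "bo", "cy"})  # -> True: an early visitor is served first
```

A single `deque` also models the serpentine-line advice: one shared line, so there are no "skips and slips" between parallel queues.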
We have come to realize that project success depends, not just on the traditional critical success factors, but also, greatly, on the
proper project management style and on adapting the right technique to the right project. We have seen many projects fail because
managers assumed that their current project would be the same as the previous one.
Our research shows that in well-managed projects, project managers as well as top management are aware of project differences
and use specific techniques and styles to manage different kinds of projects. However, our research also found that in most cases,
companies and managers don’t have a specific framework with which they distinguish among their project efforts. We realized it is
time to develop a universal, context-free, and simple-to-use framework that will help managers and organizations start a project by
assessing the project type and selecting an appropriate management style.
However, in our work we realized that one framework does not fit all either, and that one would need different project classification
systems for various organizational needs.
Initial Theoretical and Practical Perspectives
From a theoretical perspective, one of the major barriers to understanding the nature of projects has been the limited theoretical
distinction made between project types and their strategic and managerial problems, and the lack of specificity of constructs
applied in project management studies. Although innovation studies have often used a traditional distinction between incremental
and radical innovation, the project management literature has been slow in adopting similar approaches. Furthermore, while
correlates of structural and environmental attributes have been well studied when the organization is the unit of analysis, they have
been much less investigated in the project context. As mentioned above, the project management literature has often ignored the
importance of project contingencies, assuming that all projects share a universal set of managerial characteristics. Yet, projects can
be seen as “temporary organizations within organizations,” and may exhibit variations in structure when compared to their mother
organizations. Indeed, numerous scholars have recently expressed disappointment in the universal, “one-size-fits-all” idea, and
recommended a more contingent approach to the study of projects. As argued, by utilizing traditional concepts in a new domain,
new insights will most likely emerge in this evolving and dynamic field. Our study attempts, as well, to address this theoretical gap.
Who are the stakeholders that would benefit from a framework for project classification? It seems that different people would use
such a framework for different purposes. For example, among other things, top management— CEOs, executives, and business
leaders—would look at decisions about portfolio management, aggregated returns, financial and business risk, and setting priorities.
The marketing function, in contrast, would look for the impact of different projects on their marketing efforts, market research, and
their ability to determine customer needs and product requirements. Similarly, the engineering manager would need a framework
for distinguishing among levels of difficulty and complexity of technical tasks, for assignment of technical experts, and for resource
allocation. Finally, project managers would use a framework to determine their project structure, processes, and tools. They would
also use it to distinguish among their design, testing, and verification efforts, and for the selection of team members and
assignments of tasks.
Uncertainty
Different projects present, at the outset, different levels of uncertainty. Clearly, when a project is completed, everything is well
known—the end result, the cost, the time, and the final specifications. Yet, as we have seen, uncertainty at project initiation is one
of the major characteristics of any project. It determines, among other things, the length and timing of front-end activities, how well
and how fast one can define and finalize product requirements and design, the degree of detail and extent of planning accuracy, and
the level of contingency resources (time buffer and budget slack). Project execution can therefore be seen as an ongoing process,
which is aimed at uncertainty reduction. Failing to assess project uncertainty may result in excessive resource use and unexpected
delays.
Uncertainties could be divided into external or internal, and classified into several levels as we show later. External uncertainty will
have an effect on accuracy and predictability of customer requirements, and on how to treat market research results, while internal
uncertainty determines the process of product design, testing, and finalizing the specifications.
Complexity
Complexity is the second dimension for distinguishing among projects. Management should realize that complexity and uncertainty
are not the same thing. Imagine building a high-rise office tower in a major city. For this kind of project, even low levels of uncertainty
may involve a highly complex collection of tasks. Project complexity is generally determined by project size, number and variety of
elements, and interconnection among elements, and it may come from product complexity, but also from the level of organizational
interaction.
When it comes to managing different levels of complexity, the “one-size-does-not-fit-all” rule will affect project organizational
structure, the hierarchical level of the project manager, the formality with which the project is managed, the extent of
subcontracting and outsourcing, and the degree and tightness of project control.
Pace
The third dimension of the UCP model involves the speed and criticality of time goals. Obviously, no project is free of limitations, and
time is one of the major constraints. However, as we found, the available time given for project completion and the degree of
urgency is an important factor for distinguishing among projects. The same goal with different timeframes may require different
project structures and different management attention.
The highest-paced projects are the most critical in terms of time of product introduction. The higher the pace, the closer the project
monitoring, the more autonomous the project team, and the higher the management involvement.
A Focused Factory strives for a narrow range of products, customers and processes. The result is a factory that is smaller, simpler
and totally focused on one or two Key Manufacturing Tasks.
Benefits of Focus
At Strategos, we have seen the effects of focus – customer satisfaction, lower cost, and less frustration. Several researchers have
documented these effects with quantitative studies.
The Key Manufacturing Task(s) is the most important thing the factory must do or achieve for success. Terry Hill, in his book
"Manufacturing Strategy" shows how to identify the Key Manufacturing Task(s) and link it to marketing and corporate strategies.
Other factories are highly focused at first but lose it over time. Several forces and factors diffuse the original focus. Among these are:
Inconsistent Policies
Professional Isolation
Mission Creep
Failure to Design the Task
Unrecognized Inconsistencies
Product Proliferation
Market Proliferation
Wickham Skinner, in his seminal 1974 article for The Harvard Business Review, says it best:
"The focused factory will out-produce, undersell, and quickly gain competitive edge over the complex factory. The focused factory
does a better job because repetition and concentration in one area allows its work force and managers to become effective and
experienced in the task required for success. The focused factory is manageable and controllable. Its problems are demanding, but
limited in scope."
Process Technologies
Typically, unproven and uncertain technologies are limited to one per factory. Proven, mature technologies are limited to what their
managers can easily handle, typically two or three (e.g., a foundry, metal working, and metal finishing).
Market Demands
These consist of a set of demands including quality, price, lead times, and reliability specifications. A given plant can usually only do a
superb job on one or two demands at any given period of time.
Product Volumes
Generally, these are of comparable levels, such that tooling, order quantities, materials handling techniques, and job contents can
be approached with a consistent philosophy. But what about the inevitable short runs, customer specials, and one-of-a-kind orders
that every factory must handle? The answer usually is to segregate them.
Quality Levels
These employ a common attitude and set of approaches so as to neither over-specify nor over-control quality and specifications. One
frame of mind and set of mental assumptions suffice for equipment, tooling, inspection, training, supervision, job content, and
materials handling.
Manufacturing Tasks
These are limited to only one (or two at the most) at any given time. The task at which the plant must excel in order to be
competitive focuses on one set of internally consistent, doable, non-compromised criteria for success.
OPHRM – 27.02.2021
Henry Ford – The model T Story: from Iron Ore to Car in Showroom – 81 hrs
Hence it’s important to remember the below 5 points to design any process:
Historical Stalwarts:
Poka-Yoke Technique
Lean Management has adopted the principles and techniques originating as part of the Lean Manufacturing methodology and
developed them further. Now we can experience Lean's benefits in management and transfer successful techniques from the times
of post-war Japan to modern-day business conditions.
One of the most valuable takeaways is Poka-Yoke. It has become one of the most powerful work standardization techniques and can
be applied to any manufacturing or service industry.
Its core idea – preventing errors and defects from appearing in the first place – is universally applicable and has proven to be a true
efficiency booster.
In a broader sense, it is also a behavior-shaping constraint as a process step to prevent incorrect operation. One of the most
common is when a car driver with a manual gearbox must press on the clutch pedal (a process step – Poka-Yoke) before starting the
engine. The interlock prevents an unintended movement of the car. Another example is a car with an automatic transmission, which
has a switch that requires the vehicle to be in “Park” or “Neutral” before it can be started. These serve as behavior-shaping
constraints as there are actions that must be performed before the car is allowed to start. This way, over time, the driver’s behavior
is adjusted to the requirements by repetition and habit. Other examples can be found in the child-proof electric sockets or the
washing machine that does not start if the door is not closed properly to prevent flooding. These types of automation don’t allow
mistakes or incorrect operation from the start.
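The interlock examples above amount to a guard condition: the operation cannot proceed until its preconditions hold. A minimal sketch in Python – the function name and return values are illustrative, not from any real automotive API:

```python
# Minimal Poka-Yoke interlock sketch: the mistake is made impossible,
# not merely detected after the fact.
def start_engine(clutch_pressed: bool, gear: str) -> str:
    # Manual gearbox: clutch must be pressed before starting.
    # Automatic: selector must be in Park ("P") or Neutral ("N").
    if not clutch_pressed and gear not in ("P", "N"):
        return "blocked"   # interlock prevents unintended movement
    return "running"

print(start_engine(clutch_pressed=False, gear="D"))  # blocked
print(start_engine(clutch_pressed=True, gear="D"))   # running
```

The same pattern covers the washing-machine door: the process step that could cause the defect is simply unreachable until the safe state is established.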
Why is Poka-Yoke important?
The value of using Poka-Yoke is that it helps people and processes work right the first time, making mistakes impossible. These
techniques can significantly improve the quality and reliability of products and processes by eliminating defects. This
approach to production fits perfectly the culture of continuous improvement, which is also part of the Lean management arsenal. It
can also be used to fine-tune improvements and process designs from Six Sigma Define – Measure – Analyze – Improve – Control
(DMAIC) projects. Applying simple Poka Yoke ideas and methods in product and process design can eliminate human and mechanical
errors. The flexibility of Poka-Yoke allows for it not to be costly. For example, Toyota’s goal is to implement each mistake-proofing
device for under $150. Depending on the size of the company, it can be an extremely cost-efficient endeavor.
Poka-Yoke helps prevent common error types such as:
- Processing error: Process operation missed or not performed per the standard operating procedure.
- Setup error: Using the wrong tooling or setting machine adjustments incorrectly.
- Missing part: Not all parts are included in the assembly, welding, or other processes.
- Operations error: Carrying out an operation incorrectly; having the incorrect version of the specification.
- Measurement error: Errors in machine adjustment, test measurement, or dimensions of a part coming in from a supplier.
Poka-Yoke is easy to implement because of its universal and rational nature. You can follow this step-by-step process to apply it:
- Take a comprehensive approach instead of thinking of Poka Yokes just as limit switches or automatic shutoff.
- Determine whether a contact (use of shape, size, or other physical attributes for detection), constant number (error
triggered if a certain number of actions are not made), or a sequencing method (use of a checklist to ensure completing all
process steps) is most appropriate.
In Summary
Poka-Yoke technique is one of the most precious gems in the crown of Lean management. It ensures quality not through a separate
quality-assurance process, but by preventing defects from appearing in the first place. Poka-Yoke may be implemented in any
industry and has many benefits, the most important of which are:
- 1 bit is the amount of information required to distinguish between 2 equally likely alternatives
It took 200 yrs for Adam Smith's theories to be implemented because of the following reasons:
- Change of mindset
- Industrialization
- Market
- Standards are required to be anticipated. Change is the only constant and hence anticipation is a must.
- They can be used as an opportunity, as a threat, or to block others
Scientific management: Taylor's work The principles of scientific management (source of all the following quotes) was published in
1911. His ideas were an accumulation of his life's work, and included several examples from his places of employment.
- Each part of an individual's work is analyzed 'scientifically', and the most efficient method for undertaking the job is
devised; the 'one best way' of working. This consists of examining the implements needed to carry out the work, and
measuring the maximum amount a 'first-class' worker could do in a day; workers are then expected to do this much work
every day.
- The most suitable person to undertake the job is chosen, again 'scientifically'. The individual is taught to do the job in the
exact way devised. Everyone, according to Taylor, had the ability to be 'first-class' at some job. It was management's role to
find out which job suited each employee and train them until they were first-class.
- Managers must cooperate with workers to ensure the job is done in the scientific way.
- There is a clear 'division' of work and responsibility between management and workers. Managers concern themselves with
the planning and supervision of the work, and workers carry it out.
Examples of Standardization
Process Standards
- Inter-organizational Process – Salary paid in cash (yesteryears) to bank transfer (today) – Michael Hammer
For the longest time, people have been searching for the most efficient ways to work. Frederick W. Taylor is one of the experts who
wrote a book on the subject, both literally and figuratively. Taylor’s concepts contributed a great deal in efficiency studies. The time
and motion study is among his significant contributions.
Put simply, the study is about evaluating the movements it takes to perform a certain task and the time consumed. Time &
motion study, at its core, seeks to drive productivity from a workforce.
The time and motion study consists of two components – time study by Frederick Taylor and motion study by Frank B. and Lillian M.
Gilbreth. Taylor began time studies in the 1880s to determine the duration of particular tasks occurring under specific conditions. A
few other studies came before Taylor, but his had the most impact. The time study was a component of the scientific management
theory. Taylor’s approach focused on reducing time wastage for maximum efficiency.
Motion study by the Gilbreths evaluated movements and how they can improve work methods. Frank and Lillian Gilbreth pursued
the motion study in a bid to expound on scientific management. Taylorism, as the theory is called, had a major flaw. It lacked a
human element. Critics said that Taylor’s approach was solely about profits.
The Gilbreths included several variables while studying how to increase efficiency. Some of them are health, skills, habits,
temperament and nutrition. In their book, Gilbreth and Gilbreth explain that motion study looks at the fatigue that
workers experience then finds ways to eliminate it. They recommended solutions like rest-recovery periods, chairs and
workbenches.
Implementation of the scientific management theory was one of the first instances that process improvement and process
management were treated as a scientific problem.
Every task you do, except for thinking, requires some movement. Whether it's typing code, plugging in a pressure washer or
sketching a building plan, movement is key. That is why the time & motion study is applicable even in the modern environment. By
analysing how employees operate, and the time they spend, a company can pinpoint where the problem is. Removing inefficiencies
increases the productivity of your staff.
For example, finding a better way to manufacture a car means that production time reduces and output increases. Excessive motion
is the biggest cause of time wastage. Completing a task in ten steps, when seven could have easily accomplished the same results,
means that a worker is wasting a lot of resources.
With proper implementation, time-motion study allows you to improve processes and optimize performance. Better working
methods boost efficiency and decrease fatigue in workers. Effectiveness is not just about how hard you work, but how smart.
Time-motion theory enhances resource planning and allocation. When you know how much time and movements particular tasks
require, you can apportion the necessary resources. Decreased costs are another advantage. The better you plan resources and the
more work the staff accomplishes, the higher the cost savings. Remember to measure how much time workers save after
implementing changes.
Once you grasp how time and motion study fits into everyday operations, you can use the theory to get the most out of employees.
Therbligs:
Therbligs are 18 kinds of elemental motions that make up a set of fundamental motions required for a worker to perform a manual
operation or task. They are used in the study of motion economy in the workplace. A workplace task is analyzed by recording each of
the therblig units for a process, with the results used for optimization of manual labor by eliminating unneeded movements. Next,
the therblig is listed on a SIMO Chart, which compares the worker's right- and left-hand micromotions.
They were first described and later credited to Frank and Lillian Gilbreth. The word is their name spelled backwards with “th”
transposed.
- Never listen to your customer – it isn't the consumer's job to know what they want
- Don’t think of only resources utilization but also Flow! – 81 hrs from Iron ore to Automobile
- One best way to do the job and it is the manager’s responsibility to find it
Henry Gantt
Henry Gantt is best known for a management tool that bears his name, the Gantt chart. Gantt lived at the turn of the twentieth
century, a time when giant corporations were just emerging. Their size led to new management challenges. Gantt rose to the
occasion by immersing himself in a growing movement called scientific management, also known as Taylorism. As the name
suggests, the movement sought to treat management as a science.
Gantt recognized that large projects, say the construction of a building, were made up of many smaller tasks. Some tasks depended
on the completion of others. Walls can’t be put up until the foundation is poured. Other tasks were independent. Electricians
needn’t worry when the plumbers get their job done, and vice versa.
It’s a pretty simple observation. But when a project gets large enough, keeping track of all the dependencies can be difficult. Project
managers have to keep them straight. They can’t have painters arriving before the wall board’s up. Nor do they want the painters
showing up months after the walls have been finished – the goal is to get the job done as quickly as possible.
Gantt charts display these dependencies pictorially. Each task is represented by a horizontal bar whose length corresponds to the
time required to complete the task. Then the bars are laid out on a time line. Each bar is laid down as early as possible, but after all
the tasks that must precede it.
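The rule above – lay each bar down as early as possible, but only after every task that must precede it – is an earliest-start computation. A minimal sketch, with illustrative tasks and durations echoing the construction example (the numbers are made up):

```python
# Earliest-start layout for a Gantt chart: a task starts as soon as
# all of its predecessors have finished.
tasks = {
    # name: (duration_in_days, [predecessors])
    "foundation": (5, []),
    "walls":      (8, ["foundation"]),
    "plumbing":   (4, ["walls"]),
    "electrics":  (3, ["walls"]),        # independent of plumbing
    "painting":   (2, ["plumbing", "electrics"]),
}

start = {}  # memoized earliest-start times
def earliest_start(name):
    if name not in start:
        _, preds = tasks[name]
        start[name] = max((earliest_start(p) + tasks[p][0] for p in preds),
                          default=0)
    return start[name]

for name in tasks:
    print(f"{name:10s} starts on day {earliest_start(name)}")
```

Here painting cannot begin until day 17, because it must wait for the later of plumbing (finishes day 17) and electrics (finishes day 16).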
Solve the model
Economic order quantity (EOQ) is the ideal order quantity a company should purchase to minimize inventory costs such as holding
costs, shortage costs, and order costs. This production-scheduling model was developed in 1913 by Ford W. Harris and has been
refined over time. The formula assumes that demand, ordering, and holding costs all remain constant.
KEY TAKEAWAYS
The EOQ is a company's optimal order quantity that minimizes its total costs related to ordering, receiving, and holding
inventory.
The EOQ formula is best applied in situations where demand, ordering, and holding costs remain constant over time.
One of the important limitations of the economic order quantity is that it assumes the demand for the company’s products
is constant over time.
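The Harris formula behind these takeaways is EOQ = sqrt(2DS/H), where D is annual demand, S is the cost per order, and H is the annual holding cost per unit. A quick sketch with made-up numbers:

```python
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Economic order quantity: sqrt(2DS/H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Illustrative figures: 10,000 units/year, $50 per order, $2/unit/year to hold.
print(round(eoq(10_000, 50, 2)))  # 707
```

Ordering about 707 units at a time balances the two opposing costs: larger orders mean fewer order-cost charges but more inventory to hold.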
OPHRM – 11.03.2021
RECAP
1. PIN Factory – Adam Smith – Imp of Specialization, right job to right function (1st Factory)
2. 5 Philosophy/ 4 Factory
4. Standardization (Eli Whitney) – Standards are not given; they need to be anticipated/ SOP – Processes/ A good HR Manager
needs to see where standardization is needed and how to go about it.
a. Standards are not mandatory for all, apart from Food, Electrical etc.
c. Belief: If you have standards in the process, the product will also come out right.
6. Taylor –
c. Theory X
7. Ford
b. Learn to See
9. Gantt chart
b. Can we find out time to act on activities that are intellectual in nature
a. Process vs Person
14. EOQ
b. Uncover Assumptions
17. A/ B Testing
18. Focused
19. Measurements
21. Commodity
Gantt chart
1. Create an algorithm
EOQ:
c. Order pointing
Ford (2nd Factory) vs Sloan (GM):
- Ford: Middle Class | Sloan: Aristocrat Class
- Ford: Affordable Cars | Sloan: Classy Cars
- Ford: Key to eco. prosperity is to organize cost-efficient production | Sloan: Key is the creation of Customer Dissatisfaction
- Ford: Butchery Plant, same model | Sloan: Fashion Industry, create new versions
6 ∑:
- Define the problem - CTQ – from the perspective of Customer
- Measure – No. of Opportunities & No. of Defects per 10^6 opportunities (DPMO) – if this number is 3.4, it meets 6 ∑
- Analyze – Improve – Control complete the DMAIC sequence
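The Measure step reduces to one line of arithmetic: DPMO = defects / (units × opportunities per unit) × 10^6. A sketch with illustrative counts (the 12 defects, 4,000 units and 3 opportunities per unit are made-up numbers, not from the notes):

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# 12 defects observed across 4,000 units, each with 3 opportunities for a defect.
print(dpmo(12, 4_000, 3))  # 1000.0
```

A process running at 1,000 DPMO is far from the 3.4 DPMO that, by convention, corresponds to Six Sigma performance.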
CTQ of Dabbawala
1. On time Delivery (They used only 1)
2. Correct person
3. Hygiene
4. No Spillage
W. Edwards Deming – Statistician – "In God we trust; all others must bring data".
- Sampling - Sampling is a process used in statistical analysis in which a predetermined number of observations are taken
from a larger population. The methodology used to sample from a larger population depends on the type of analysis
being performed, but it may include simple random sampling or systematic sampling.
- Chart of Averages - Average lines display the data average for a given chart, drawing a line across the entire chart at
the average value point on the Y axis. By default, average line labels are displayed as a combination of the line value
and the line title.
- Range Chart - An X-bar and R (range) chart is a pair of control charts used with processes that have a subgroup size of
two or more. The standard chart for variables data, X-bar and R charts help determine if a process is stable and
predictable.
- "common causes", also called natural patterns, are the usual, historical, quantifiable variation in a system, while
"special causes" are unusual, not previously observed, non-quantifiable variation.
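The X-bar and R chart limits described above come straight from subgroup statistics. A minimal sketch, assuming subgroups of size 5 and the standard control-chart table constants A2 = 0.577, D3 = 0, D4 = 2.114 (the measurement data here is made up):

```python
# X-bar and R chart control limits for subgroups of size 5.
A2, D3, D4 = 0.577, 0.0, 2.114  # table constants for n = 5

subgroups = [[10.1, 9.9, 10.0, 10.2, 9.8],
             [10.0, 10.1, 9.9, 10.0, 10.1],
             [9.9, 10.0, 10.2, 9.8, 10.1]]

xbars = [sum(g) / len(g) for g in subgroups]      # subgroup averages
ranges = [max(g) - min(g) for g in subgroups]     # subgroup ranges
xbarbar = sum(xbars) / len(xbars)                 # grand average
rbar = sum(ranges) / len(ranges)                  # average range

# Control limits: X-bar chart uses A2, R chart uses D3/D4.
print("X-bar limits:", xbarbar - A2 * rbar, "to", xbarbar + A2 * rbar)
print("R limits:    ", D3 * rbar, "to", D4 * rbar)
```

Points falling outside these limits signal special-cause variation; points within them reflect the common-cause variation of a stable process.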
The Deming Cycle, or PDCA Cycle (also known as PDSA Cycle), is a continuous quality improvement model consisting of a logical
sequence of four repetitive steps for continuous improvement and learning: Plan, Do, Check (Study) and Act. The PDSA cycle (or
PDCA) is also known as the Deming Cycle, the Deming wheel, or the continuous improvement spiral. Its origin can be traced back to
the eminent statistics expert Walter A. Shewhart in the 1920s. He introduced the concept of PLAN, DO and SEE. The late Total Quality
Management (TQM) guru and renowned statistician W. Edwards Deming modified the Shewhart cycle as: PLAN, DO, STUDY, and ACT.
Along with the other well-known American quality guru-J.M. Juran, Deming went to Japan as part of the occupation forces of the
allies after World War II. Deming taught a lot of Quality Improvement methods to the Japanese, including the usage of statistics and
the PLAN, DO, STUDY, ACT cycle.
- Cost of poor quality (COPQ) or poor quality costs (PQC), are costs that would disappear if systems, processes, and
products were perfect. COPQ was popularized by IBM quality expert H. James Harrington in his 1987 book Poor Quality
Costs. COPQ is a refinement of the concept of quality costs.
- Associated costs: transport, packaging, customs duties, payment terms etc.
- Guidance tool for optimizing direct or indirect costs (avoiding waste, exceeding quality requirements etc.).
4 Factory
- PIN Factory
- Ford
- Hawthorne
- Toyota
OPHRM – 16.03.2021
Recap
- Six Sigma
- 4 Factory – Pin/ Ford/ Hawthorne/ Toyota
- 5 Philosophies (written above)
- DMAIC
- TQM
- Order Point technique – ROL vs ROQ
- PDCA - Deming
- CTQ
- Book Binding/ EDD/ Scheduling – Johnson's 2-machine algorithm
- Ford vs GM
- Deming/ Juran/ Crosby
- Statistical Quality Control
- Scientific Management/ COPQ
- Customer Dissatisfaction
- Concept of a Model
- ANDON – Japanese term meaning "light" or "lamp." In Lean manufacturing, an andon refers to a tool used to inform and
alert workers of problems within their production process. It is an integral part of applying Jidoka
(automation with human intelligence) in the workplace
- SQC - Statistical Quality Control is typically the measuring and recording of data against specific requirements for a
product ensuring they meet the necessary requirements – size, weight, colour etc.
- CTQ are the key measurable characteristics of a product or process whose performance standards or specification
limits must be met in order to satisfy the customer. They align improvement or design efforts with customer
requirements
- DPMO - In process improvement efforts, defects per million opportunities or DPMO is a measure of process
performance
- COPQ (Juran) – Large amount of money is spent on rectifying problems. Cost of poor quality or poor quality costs, are
costs that would disappear if systems, processes, and products were perfect. COPQ was popularized by IBM quality
expert H. James Harrington in his 1987 book Poor Quality Costs. COPQ is a refinement of the concept of quality costs
- Crosby – TCO – The total cost of ownership is the purchase price of an asset plus the costs of operation. Assessing the
total cost of ownership represents taking a bigger picture look at what the product is and what its value is over time.
3 CONSTANTS
- Inventory
- Project
- Process
What is a process?
- Step by step
- Quantity
- Time
- Input
- Value addition
- Output
- Checks and Measures
- Feedback
- Systematic Process – another process to analyze the feedback of the process – it completes the PDCA Cycle
Measures in Process:
- P – Product
- Q – Quality – IS standard
- D – Delivery
- S – Safety
- C – Cost
- M – Morale
- E – Environment
Process flow diagrams are usually created to understand “P” related measures
How many people can be interviewed in 8 hrs? – 2 every hr – 16 Candidates (Capacity is tied to the bottleneck)
Bottleneck – Interview
Exercise: Bank
- Teller A – 1 min
- Teller B – 3 mins
- Teller C – 2 mins
3 mins – 1 Customer
6 mins – 2 Customer
60 mins – 20 Customer
40 customers in queue
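The teller exercise can be checked in a few lines: in a serial process, throughput is set by the slowest station – the bottleneck – not by the sum or average of the station times.

```python
# Serial process: Teller A (1 min) -> Teller B (3 min) -> Teller C (2 min).
station_minutes = {"Teller A": 1, "Teller B": 3, "Teller C": 2}

# The bottleneck (Teller B) dictates the output rate.
bottleneck_time = max(station_minutes.values())   # 3 minutes per customer
customers_per_hour = 60 // bottleneck_time
print(customers_per_hour, "customers per hour")   # 20

# Clearing a queue of 40 customers at the bottleneck rate:
print(40 * bottleneck_time, "minutes to clear the queue")  # 120
```

The same logic explains the interview example: if the interview stage handles 2 candidates per hour, an 8-hour day caps out at 16 candidates regardless of how fast the other stages run.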
Projects
5 process:
- Initiating
- Planning
- Execution
- Monitoring
- Closure
10 Knowledge areas
- Project Integration Management
- Project Scope Management
- Project Schedule Management
- Project Cost Management
- Project Quality Management
- Project Resource Management
- Project Communications Management
- Project Risk Management
- Project Procurement Management
- Project Stakeholder Management
3 Objective/ criteria
- Scope
- Time
- Cost
What is most important for the Project? Project management only addresses time and cost.
- What is to be done?
- Where is it to be done?
- Am I on Schedule?
- When is the project likely to be completed?
SOLUTION
MS – project
Software JIRA/ Trello
Primavera
PERT – Program Evaluation Review Technique - When the project time estimation is presumed to be in a range of time (R & D)
CPM – Critical Path method – When the project time estimation is presumed to be a fixed time (Constructions)
Critical Path – In project management, a critical path is the sequence of project network activities which add up to the longest overall
duration, regardless of whether that longest duration has float or not. This determines the shortest time possible to complete the project.
Float/ Slack - Slack time is an interval that occurs when there are activities that can be completed before the time when they are
actually needed. The difference between the scheduled completion date and the required date to meet the critical path is the
amount of slack time available.
Example:
A = 2 days
B = 3 days
C = 5 Days after completion of A
Software answers:
- What are the activities?
- What is the duration?
- What is the predecessor?
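Using the small example above (A = 2 days, B = 3 days, C = 5 days after A finishes), the forward pass a scheduling tool performs can be sketched as:

```python
# Forward pass over the notes' example: earliest-finish times,
# project duration, and the float (slack) of non-critical activities.
activities = {
    # name: (duration_in_days, [predecessors])
    "A": (2, []),
    "B": (3, []),
    "C": (5, ["A"]),
}

finish = {}  # memoized earliest-finish times
def earliest_finish(name):
    if name not in finish:
        dur, preds = activities[name]
        finish[name] = dur + max((earliest_finish(p) for p in preds), default=0)
    return finish[name]

project_duration = max(earliest_finish(a) for a in activities)
print("Project duration:", project_duration)  # 7 (critical path A -> C)

# B has no successors, so it may finish as late as day 7: float = 7 - 3 = 4.
print("Float of B:", project_duration - earliest_finish("B"))  # 4
```

A and C have zero float – delaying either delays the whole project – which is exactly what makes A -> C the critical path.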
Recap
- Clockspeed
- Medici
- 3 constants
- Oil/ Water/ rare earth
- China
- 5 slogans – Focused/ Bottleneck/ Measurements/ Value added vs Non value added/ One size does not fit all
- Resource utilization & Flow
- 4 factories
- 5 Philosophies
- Process
- Cycle Time
- TAT
- Projects
- Critical Path
- Float
- Network
- Traditional & Agile
- Critical Path Method
- Six Sigma
- DMAIC
- TQM
- Order Point technique – ROL vs ROQ
- PDCA - Deming
- CTQ
- Book Binding/ EDD/ Scheduling – Johnson's 2-machine algorithm
- Ford vs GM
- Deming/ Juran/ Crosby
- Statistical Quality Control
- Scientific Management/ COPQ
- Customer Dissatisfaction
- Concept of a Model