
PART 1 – ASSEMBLY LINE SIMULATION

Scoping the task


We have named our machines AM, AY and MB (abbreviations of our team members' names), where AM is
the 1st machine, AY the 2nd and MB the 3rd.
We have taken 4 different values of the standard deviation of each machine's processing time: 0.5, 1, 1.5
and 2. We have varied the buffer levels from 0 to 25 in steps of 2, so our buffer values are:
0, 2, 4, 6, ..., 20, 22, 25 (13 in total). The number of experiments carried out to answer the first 3 parts of
the question is therefore 4 × 13 = 52. The run-in time used is 1000 minutes, the run length is 10000 minutes
and the number of repetitions is 10.

Answering the questions asked in the task


A) How many units would you expect to be able to ideally produce per hour on the average?
In an ideal case, there is no variation in the processing times, i.e. the standard deviation is 0. In such a
case, the output is given by:

Ideal Output per Hour = 60 / Ideal Processing Time

Putting the ideal processing time of 5 minutes into the above equation gives an ideal output per hour
of 12 units.

The ideal output in our experiment, where the run length is 10000 minutes, is therefore 10000/5 = 2000 units.
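The arithmetic can be verified with a short script (a trivial sketch; the variable names are our own, not part of the simulation tool):

```python
ideal_processing_time = 5      # minutes per unit at each machine
run_length = 10_000            # minutes, the run length used in our experiments

ideal_output_per_hour = 60 / ideal_processing_time
ideal_output_per_run = run_length / ideal_processing_time

print(ideal_output_per_hour)   # 12.0 units per hour
print(ideal_output_per_run)    # 2000.0 units per run
```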
B) How many units did you actually produce per hour on the average? Explain any difference between
your simulation results and your estimate made in (A)? Also, tabulate the average utilizations at each
machine, and compute the mean and standard deviation of system output.

The actual average output that we get at the final station depends upon:
a) The standard deviation in the processing time
b) The buffer capacity between the machines

As we increase the standard deviation, the output moves farther from the ideal output we expect the system
to produce; however, this deviation decreases as we increase the buffer capacity between the machines.
Shown below, in Fig. 1, is a snippet of the actual output, its standard deviation and the actual output as a
percentage of the expected output, for the 4 standard deviations of processing time and 4 of the buffer
capacities. The complete table of observations from all 52 experiments is in the Excel file attached with the
submission.

We see this deviation of the final output from the expected output because of the variation in the machines'
processing times. We frequently encounter situations where a particular machine's processing time exceeds
the mean processing time, and this is where the output deviation is realized. The effect is more pronounced
when the buffer capacity is very low: a machine whose processing time is shorter than that of the subsequent
machine(s) remains idle or blocked for longer durations, hence a larger deviation from the expected output.
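The blocking and starving mechanism described above can be reproduced with a small simulation sketch. This is not the package we used for our experiments; it is a minimal tandem-line model we wrote for illustration (blocking after service, normally distributed processing times truncated at zero, function and variable names our own):

```python
import random

def simulate_line(n_jobs, mean_times, sigma, buffers, seed=0):
    """Makespan of n_jobs through a tandem line with finite buffers.

    mean_times: mean processing time of each machine (minutes)
    buffers:    buffer capacity between consecutive machines (len = M-1)
    Blocking after service: a finished part stays on its machine until
    downstream space opens, keeping that machine occupied.
    """
    rng = random.Random(seed)
    M = len(mean_times)
    start = [[] for _ in range(M)]   # start[i][j]: job j starts on machine i
    depart = [[] for _ in range(M)]  # depart[i][j]: job j leaves machine i
    for j in range(n_jobs):
        for i in range(M):
            arrive = depart[i - 1][j] if i > 0 else 0.0  # raw material always ready
            free = depart[i][j - 1] if j > 0 else 0.0    # machine i released
            s = max(arrive, free)
            t = max(0.0, rng.gauss(mean_times[i], sigma))  # truncated normal draw
            finish = s + t
            if i == M - 1:
                d = finish                                # last machine ships directly
            elif buffers[i] == 0:
                # no buffer: hand over only when machine i+1 releases job j-1
                d = max(finish, depart[i + 1][j - 1] if j > 0 else 0.0)
            else:
                # a buffer slot opens when job j-b starts on machine i+1
                b = buffers[i]
                d = max(finish, start[i + 1][j - b] if j >= b else 0.0)
            start[i].append(s)
            depart[i].append(d)
    return depart[M - 1][-1]

# With no variability the line runs at the ideal rate (12 units/hour):
print(simulate_line(100, [5, 5, 5], 0.0, [0, 0]))  # 510.0 = 5 * (100 + 2) minutes

# With variability, zero buffers lose throughput; buffers recover most of it:
print(simulate_line(2000, [5, 5, 5], 1.0, [0, 0], seed=1))
print(simulate_line(2000, [5, 5, 5], 1.0, [8, 8], seed=1))
```

With identical random draws, larger buffers can only relax the blocking constraints, so the buffered makespan is never longer than the unbuffered one, which is the diminishing-loss pattern visible in Fig. 1.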

Experiment   Mean   Time        Buffer     Final Output   Final Output   Output as % of
No.          Time   Std. Dev.   Capacity   (Average)      (Std. Dev.)    expected (2000)
E1           5      0.5         0          1856           4              92.8%
E5           5      0.5         8          1996           2              99.8%
E9           5      0.5         16         1997           4              99.9%
E13          5      0.5         25         1998           4              99.9%
E14          5      1           0          1732           5              86.6%
E18          5      1           8          1986           5              99.3%
E22          5      1           16         1994           8              99.7%
E26          5      1           25         1994           8              99.7%
E27          5      1.5         0          1623           15             81.2%
E31          5      1.5         8          1966           8              98.3%
E35          5      1.5         16         1985           4              99.3%
E39          5      1.5         25         1985           15             99.3%
E40          5      2           0          1529           10             76.5%
E44          5      2           8          1942           9              97.1%
E48          5      2           16         1965           9              98.3%
E52          5      2           25         1983           14             99.2%

Fig. 1

C) Diagram the impact of changing the buffer stocks on the output of the system by changing the storage
area capacity cells (e.g. the buffer between Joe and Next is changed by changing Joe’s Output storage
capacity). Consider buffer levels that vary from 0 to a maximum of at least 20 units. What can you
conclude from these experiments?

The graph depicting the variation in output of the system with varying buffer capacity levels is shown in
Fig. 2, for a particular standard deviation of the processing time (= 1).

[Line chart: system output (y-axis, ~1600 to 2050 units per run) vs. buffer capacity (x-axis, 0 to 26).]

Fig. 2

The graph depicting the variation in output with the standard deviation of the processing time, keeping the
buffer capacity constant at 10, is shown in Fig. 3.

[Line chart: system output (y-axis, ~1920 to 2010 units per run) vs. standard deviation of processing time (x-axis, 0.4 to 2.2).]

Fig. 3

Based on the above graphs we can conclude:

1.) As we increase the buffer capacity, the system absorbs the impact of the variation in the processing
times. The reasoning is that with larger buffers a machine does not have to wait for, or get blocked by,
adjacent machines completing their respective processes, which increases the utilization of all the machines
and is ultimately reflected in the final output of the system. An important thing to notice is that increasing
the buffer capacity gives diminishing returns: the value added by increasing the capacity from 18 to 20 is
much lower than that achieved by increasing it from 0 to 2. This is of great importance as it allows us to
design our process optimally in terms of buffer capacity. We can determine an optimal buffer capacity for
the variation in processing time we expect; below this optimum we under-utilize our machines, and above
it we waste space and capital.
2.) As we increase the standard deviation for the same buffer capacity, we observe a downward trend in
output. This is expected: the more variation there is in the system, the more buffer capacity is required to
absorb it, and if that extra capacity is not provided the machines are under-utilized and output falls.

D.) What would be the impact on system performance if machine MB had a processing time that
averaged 6 minutes (assuming AM and AY still run at an average of 5)? What happens to the inventories
after AM and AY? Does varying the size of these buffers have any impact?

A snippet of the observed readings, consisting of the average output, its standard deviation, the utilization
of each machine and the average buffer inventories, is shown in Fig. 4.

Experiment   Mean Time    Time        Buffer     Final Output         Output as % of     Avg. Utilisation         Avg. Inventory
No.          (AM/AY/MB)   Std. Dev.   Capacity   Avg.     Std. Dev.   expected (1667)    AM      AY      MB       AM      AY
E0           5/5/6        0           0          1667     0           100%               83.3%   83.3%   100%     0       0
E1           5/5/6        1           0          1597     5           95.8%              80.0%   79.7%   95.6%    0       0
E2           5/5/6        1           2          1664     4           99.8%              83.2%   83.3%   100.0%   1.92    1.93
E3           5/5/6        1           4          1664     5           99.8%              83.2%   83.3%   100.0%   3.93    3.93
E4           5/5/6        1           6          1664     4           99.8%              83.2%   83.3%   100.0%   5.93    5.93
E5           5/5/6        1           8          1664     5           99.8%              83.2%   83.3%   100.0%   7.93    7.93

Fig. 4
The inferences that can be made from the above data are:
1.) The average buffer inventories after the first 2 machines are almost equal to the capacity we have
provided. That is expected: since the last machine is the bottleneck, the first 2 machines fill up whatever
space is provided and then stop (get blocked), keeping the buffers at their maximum.
2.) As we vary the buffer capacity we see no change after the first increase, because at that point machine 3,
our bottleneck operation, has already reached its maximum utilization; any further increment in buffer
capacity adds no value because the bottleneck is already operating at full capacity.
3.) The actual output in experiment E1, as a percentage of the ideal output, is higher than what we observed
in the earlier case. This might seem counter-intuitive, as we might expect the process to be farther from the
ideal when the machines have different processing times, but our observations paint a different picture. The
reason is that when all machines had the same processing time with no variation, the process behaved like
an ideal one: all the machines worked continuously in sync and were utilized every second. As soon as we
introduce variation, the system undergoes drastic changes and moves far from that ideal. In the case where
one machine has a processing time of 6 minutes, even the "ideal" zero-variation case only utilizes the first
two machines ~83% of the time; introducing variation still changes the system, but it does not move as far
from the "ideal" process defined for this case.
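The ~83% figure follows directly from the cycle time of the bottleneck; a quick check (variable names are our own):

```python
run_length = 10_000   # minutes
t_am = t_ay = 5.0     # minutes per unit on AM and AY
t_mb = 6.0            # minutes per unit on MB, the bottleneck

ideal_output = run_length / t_mb   # the line can ship one unit per 6 minutes
utilisation = t_am / t_mb          # AM and AY each work 5 of every 6 minutes

print(round(ideal_output))         # 1667 units, the "expected" figure in Fig. 4
print(f"{utilisation:.1%}")        # 83.3%
```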

E.) What happens if Joe is the bottleneck instead of M3? Do the buffers at Joe and Next have any
impact?
In terms of total output produced, the system behaves in much the same way, with only a slight change
from the previous case. The only difference is that the average buffer inventories are now near their
minimum. This is expected: since the 1st machine has the longest processing time, it remains utilized for
the maximum time, and as soon as it releases a part, the 2nd machine starts processing it. A snippet of the
output is shown in Fig. 5.

Experiment   Mean Time    Time        Buffer     Final Output         Output as % of     Avg. Utilisation         Avg. Inventory
No.          (AM/AY/MB)   Std. Dev.   Capacity   Avg.     Std. Dev.   expected (1667)    AM      AY      MB       AM      AY
E0'          6/5/5        0           0          1667     0           100%               100%    83.3%   83.3%    0       0
E6           6/5/5        1           0          1593     10          95.6%              95.7%   79.7%   79.8%    0       0
E7           6/5/5        1           2          1667     6           100.0%             100.0%  83.5%   83.5%    0.07    0.10
E8           6/5/5        1           4          1667     6           100.0%             100.0%  83.5%   83.5%    0.07    0.10
E9           6/5/5        1           6          1667     6           100.0%             100.0%  83.5%   83.5%    0.07    0.10
E10          6/5/5        1           8          1667     6           100.0%             100.0%  83.5%   83.5%    0.07    0.10

Fig. 5
PART 3 – RE(SEARCH) BASED: ORM HIGHLIGHTS

Scoping the task


The area in Operations we were allocated was "Product & Process Design (Services)". After exploring various
service-based industries and multiple companies within them, we identified 2 major stories/breakthroughs
in the field of operations. The first is "Stream Yard", a US-based company under the IT umbrella, and the
other is "Zepto", an Indian company under the non-IT umbrella.

1.) Stream Yard [1]


Founded in 2018, Stream Yard changed how live streaming was done. Before Stream Yard, the only viable
options customers had were Twitch and OBS software, which were not user-friendly and did not allow users
to stream across multiple platforms. As live streams were integrated into more and more platforms, it became
difficult for creators and users to manage all the live streams on all the platforms at once.

This is where Stream Yard came in and changed the game. It is a platform that allows streamers to live
stream on various platforms at the same time. This was a game changer and led to a boost in Stream Yard's
user base, especially during the pandemic, when there was a huge rise in streamers and people were
streaming on every platform continuously. Stream Yard came as a breath of fresh air with its new product
design. It reduced the complexity of installation by being just a browser extension, which also reduces the
computing power required for multi-platform live streams. It comes with a clean, intuitive and easy-to-use
interface. With no software to install, the product is independent of the operating system, which makes it
easier for the developer to maintain and to roll out any updates or bug fixes, and it allows the user to use the
product across various devices without compatibility issues. The design also ensures that audiences across
platforms can see their comments irrespective of the platform they watch the stream on, enabling
cross-platform interaction between audiences and increasing the influence of content creators without any
effort on their side. This is a feature unique to Stream Yard. Keeping the future in mind, Stream Yard has
also added the feature of streaming from mobile. Stream Yard has greatly innovated the product by focusing
on the following points:

1. Providing a solution to problems that already exist in similar products (multi-platform streaming).
2. Developing a product that is efficient and less power-hungry (it works as an extension, not as standalone software).

Stream Yard has surely changed what’s expected from a product in the live streaming industry with its new
innovations.

2.) Zepto [2]
Zepto was founded on September 1, 2021, and entered the quick grocery delivery industry. Its main aim
was to reduce the time required to deliver daily groceries.
During the pandemic, the founders realised that any order completed in 45 minutes to 1 hour had a repeat
rate of 20%, while the repeat rate jumped to 40–50% when the delivery time was under 30 minutes. That is
why, while big players like Swiggy, Big Basket etc. were fighting price wars by offering deep discounts,
Zepto focused on hyper-speed delivery and aimed to consistently hit the 10-minute mark by improving the
delivery process.

Zepto used Locus, an AI tool based on the Google API, to track customers' geographical data, traffic
dynamics and the time the last-mile delivery will take. All this data was used to decide the locations where
the company should set up its Dark Stores. Dark Stores are mini warehouses in high-population areas, each
with the capacity to fulfil the needs of customers within a radius of 3 km. With these Dark Stores set up in
the best locations, Zepto applied the PPB process, which made it stand out in terms of delivery times. PPB
can be explained as:

1. The first P is picking the order in the warehouse. Every employee carries a tablet, so as soon as an order
arrives it is mapped to the exact shelf and row to be accessed, which reduces the picking time.
2. The second P is packing. All the products are packed as quickly as possible, and most orders are fully
packed before the delivery agent arrives at the warehouse.
3. The B is bagging. Zepto has a rule that agents leave the warehouse as soon as the order is in their hands.
Zepto makes sure these three steps take only 60 seconds in total.

These 3 rules, combined with strategically placed Dark Stores, have helped Zepto achieve a median delivery
time of 8 minutes and 47 seconds, making it the fastest delivery service in India.

References for Task 3
1. https://webtribunal.net/streaming/streamyard/
2. https://genztimes.in/zepto-case-study

