
INDE3314 QUALITY PLANNING AND CONTROL

SPRING 2023
INSTRUCTOR: Sezgin Çağlar AKSEZER
Study of the Quality Control Process
Hakan Emir Durmaz 19INDE1038
Eda Öztürk 20INDE1008
Recep Kaan İşbilir 218IE2193
Introduction

Motivation
The primary motivation for this project is to enhance the quality of Lays chips by reducing the occurrence
of burnt chips in each packet. Consistent product quality is crucial for maintaining customer satisfaction,
brand reputation, and market competitiveness. Burnt chips are a common complaint among consumers
and can significantly affect their overall perception of the product. By addressing this issue, the company
can improve customer loyalty, reduce wastage, and increase profitability.

Reasons for Selecting the Critical Quality Characteristic:


We have chosen to focus on the presence of burnt chips in Lays packets as the critical quality
characteristic for several reasons:

1. Customer Satisfaction: Burnt chips negatively impact the taste and overall eating experience, leading
to customer dissatisfaction and potential loss of repeat business.
2. Product Consistency: Ensuring consistent product quality across all packets is essential for
maintaining brand trust and loyalty.

3. Operational Efficiency: Reducing the incidence of burnt chips can decrease rework and scrap rates,
improving overall operational efficiency and cost-effectiveness.

Background Information on Related Processes:

1. Production Process:
- Raw Material Preparation: Potatoes are sourced, cleaned, peeled, and sliced to the desired thickness.

- Frying: Potato slices are fried in oil at high temperatures until they reach the desired crispiness.

- Seasoning and Packaging: Fried chips are seasoned and packaged into individual packets.

2. Controllable Inputs:
- Frying Temperature: Precise control of oil temperature is critical to avoid over-frying, which leads to
burnt chips.

- Frying Time: The duration the potato slices spend in the fryer needs to be optimized to ensure they are
cooked thoroughly but not burnt.
- Quality of Oil: Regular monitoring and replacement of frying oil to ensure it is not degraded, which can
affect the frying process.

3. Uncontrollable Inputs:
- Potato Quality: Variations in the quality and moisture content of potatoes can affect frying outcomes.

- Environmental Factors: Humidity and temperature in the production environment can impact the frying
process.

4. Outputs of the Process:


- Primary Output: Packets of Lays chips ready for consumer distribution.

- By-products: Oil residues, potato peels, and scrap chips.

5. Management Activities:
- Quality Control: Implementation of quality control checks at various stages of production to detect and
remove burnt chips.

- Employee Training: Training employees on optimal frying techniques and the importance of maintaining
quality standards.
- Process Monitoring: Continuous monitoring and adjustment of frying parameters to ensure consistent
product quality.

By focusing on these aspects, the project aims to identify and address the root causes of burnt chips,
implement effective control measures, and ultimately enhance the overall quality of Lays chips.

1-Data Collection

Why, How, and When We Collected Data:

Why:
We collected data on the occurrence of burnt chips in Lays packets to understand the extent of the
problem, identify root causes, and implement corrective actions to improve product quality. By quantifying
the frequency and distribution of burnt chips, we can make data-driven decisions to enhance the frying
process and ensure consistent product quality.
How:
We employed a systematic approach to data collection involving the following methods:
1. Sampling: Randomly selecting packets from different production batches over a specified period.

2. Inspection: Conducting a visual inspection of each sampled packet to count the number of burnt chips.

3. Recording: Documenting the number of burnt chips per packet along with batch details, production
date, and time.

When:
Data was collected over a one-month period, covering various shifts and production runs to account for
any variations due to different operators, time of day, or other environmental factors.

Steps in Collecting Data, Challenges Faced, and Systematic Errors:

Steps in Collecting Data:

1. Preparation: Define the sampling plan, including the number of packets to be sampled per batch and the frequency of sampling.

2. Random Sampling: Randomly select packets from the production line at predefined intervals to ensure
a representative sample.

3. Inspection Process: Open each sampled packet and visually inspect the contents, counting and
recording the number of burnt chips.

4. Data Entry: Enter the collected data into a spreadsheet or database, ensuring accurate recording of
each sample's details.

5. Verification: Perform periodic checks to verify the accuracy of data entry and ensure consistency in the
inspection process.

Challenges Faced:

1. Consistency in Inspection: Ensuring that different inspectors maintain consistent criteria for identifying
and counting burnt chips.
2. Sampling Bias: Avoiding any bias in the selection of packets to ensure a representative sample of the
entire production.

3. Time Constraints: Managing the additional workload of data collection without disrupting regular
production activities.

Systematic Errors:
1. Inspector Variability: Different inspectors might have slightly different perceptions of what constitutes a
burnt chip, leading to variability in data.

2. Recording Errors: Human errors in counting or recording the number of burnt chips could introduce
inaccuracies.

3. Environmental Factors: Variations in production conditions (e.g., changes in frying temperature due to
ambient conditions) could affect the consistency of data.

Data Manipulation Necessities:


To ensure the reliability and validity of the data, the following data manipulation steps were necessary:

1. Standardization: Establishing clear guidelines for identifying burnt chips to minimize inspector
variability.

2. Data Cleaning: Reviewing and correcting any obvious errors or inconsistencies in the recorded data.

3. Normalization: Adjusting the data to account for any known systematic biases, such as differences
between shifts or production lines.

4. Aggregation: Summarizing the data to identify overall trends and patterns, such as the average
number of burnt chips per packet and any variations between batches.

By carefully planning and executing the data collection process, we aimed to gather accurate and
representative data to inform our analysis and subsequent quality improvement initiatives.
2- Data Set
Attribute Data Set
3-Statistical Process Control

Statistical Process Control (SPC) is a method of quality control that uses statistical methods to monitor
and control a process. This helps ensure that the process operates efficiently, producing more
specification-conforming products with less waste.

3.1-Histogram
3.2- Cause and Effect Diagram
It is imperative to look into all potential causes before settling on a solution for a significant issue. In this
manner, the issue can be resolved entirely the first time around, rather than being only partially resolved
and recurring. Cause and Effect Analysis is a useful tool for doing this.
A Cause and Effect Diagram, also known as a Fishbone Diagram or Ishikawa Diagram, is a tool used to
systematically identify and analyze the root causes of a specific problem or effect. It is widely used in
quality management and process improvement initiatives. Here are the key aspects of a Cause and
Effect Diagram:

Purpose:
- Identify Root Causes: Helps in identifying the potential root causes of a problem, rather than just the
symptoms.

- Organize Information: Provides a structured way to organize potential causes into categories for easier
analysis.

- Facilitate Discussion: Encourages team collaboration and brainstorming to uncover all possible causes.

Structure:
- Head: Represents the problem or effect being analyzed. This is typically written at the head of the "fish."

- Bones: Main categories of potential causes, which branch off the main line like the ribs of a fish.
Common categories include:

- Machine (Equipment): Issues related to machinery or equipment.

- Method (Process): Problems with the processes or procedures.

- Material: Issues with raw materials or components.

- Manpower (People): Human-related issues, such as skills, training, or human error.

- Measurement: Problems with measurement systems or data collection.

- Environment: External factors, such as environmental conditions.

- Sub-bones: Specific causes related to each main category, branching off the main bones.

3.3- Control Chart

A control chart is a graphical tool used to monitor the variation of a process over time. It typically includes
a central line representing the mean, an upper control limit (UCL), and a lower control limit (LCL). These
lines are derived from historical data and reflect the natural variability of the process. Control charts
enable the assessment of process stability and predictability, providing insights into the reliability and
consistency of quality inspections. For our project, we constructed control charts for both data sets. The
u chart was selected for its suitability in calculating defects per unit in attributes, particularly with variable
sample sizes.
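For reference, the standard 3-sigma u chart limits, with ū the average number of defects per unit and n_i the size of sample i, are:

$$\mathrm{CL} = \bar{u}, \qquad \mathrm{UCL}_i = \bar{u} + 3\sqrt{\bar{u}/n_i}, \qquad \mathrm{LCL}_i = \max\!\left(\bar{u} - 3\sqrt{\bar{u}/n_i},\ 0\right)$$

Because n_i appears inside the square root, the limits adjust to each sample's size, which is what makes the u chart suitable for variable sample sizes.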

For Attribute Data Set

4- Statistical Analyses
Quality control charts vary based on the type of data being analyzed. For variable data types, it is
essential to monitor both the mean value and the variability of the quality characteristic. The process
mean is typically monitored using the X-bar chart, while process variability can be tracked using either
the R chart or the S chart. Once the process mean and standard deviation are known, control charts can
be constructed. If these parameters are unknown, they must be estimated.
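For completeness, the standard Shewhart limits for these charts, where x̿ is the grand mean, R̄ the average range, and A2, D3, D4 the usual tabulated constants for the subgroup size, are:

$$\mathrm{UCL}_{\bar{x}} = \bar{\bar{x}} + A_2\bar{R}, \quad \mathrm{LCL}_{\bar{x}} = \bar{\bar{x}} - A_2\bar{R}, \qquad \mathrm{UCL}_R = D_4\bar{R}, \quad \mathrm{LCL}_R = D_3\bar{R}$$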

In instances where there is no assignable cause for an out-of-control point, it is considered a false signal.
To determine the adequacy of control limits, they should be scrutinized for assignable causes:

1. If assignable causes are identified, discard the affected points from the calculations and establish new
trial control limits.
2. Repeat this process until all points plot within control limits.
3. Adopt the trial control limits that result from this process.
4. If no assignable cause is found, there are two options: remove the point as if an assignable cause had
been identified and update the limits, or retain the point and consider the limits appropriate for control.
Additionally, even in the absence of out-of-control points, patterns may emerge that could indicate
potential problems. It is crucial to monitor these patterns to ensure ongoing process stability and
reliability.
5-Phase 1 Control Charts for Attributes Data
-u chart

The Phase 1 control chart for burnt chips in Lays packets is displayed below. It includes the average (u)
value, upper control limit (UCL), and lower control limit (LCL). The red dots indicate any out-of-control
points, if present.

According to the chart, there are no out-of-control points identified in the dataset, meaning the process
appears stable based on the provided data.
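As a sketch of how these Phase 1 limits can be computed, the Python fragment below uses illustrative placeholder counts and sample sizes, not our actual data set:

```python
import numpy as np

# Illustrative Phase 1 data: burnt chips found per day and packets
# inspected per day (placeholder values, not our recorded data)
defects = np.array([12, 9, 15, 11, 8, 14, 10, 13, 9, 12])
n = np.array([50, 45, 50, 48, 50, 52, 47, 50, 49, 50])

u = defects / n                     # defects per unit each day
u_bar = defects.sum() / n.sum()     # center line: overall defects per unit

# 3-sigma limits vary with each day's sample size
ucl = u_bar + 3 * np.sqrt(u_bar / n)
lcl = np.maximum(u_bar - 3 * np.sqrt(u_bar / n), 0)   # LCL floored at zero

out = (u > ucl) | (u < lcl)
print(f"u-bar = {u_bar:.4f}")
print("Out-of-control samples:", np.where(out)[0] + 1)
```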
-p chart
This p-chart helps in monitoring the proportion of nonconformities over time and identifying any variations
that may indicate a shift in the process or special causes of variation.
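A minimal sketch of the p-chart limit calculation in the same style; the daily counts of nonconforming packets below are illustrative placeholders:

```python
import numpy as np

# Illustrative daily counts of nonconforming packets and packets
# inspected (placeholder values, not our recorded data)
nonconforming = np.array([4, 6, 3, 5, 7, 4, 5, 6, 3, 5])
n = np.array([50, 45, 50, 48, 50, 52, 47, 50, 49, 50])

p = nonconforming / n
p_bar = nonconforming.sum() / n.sum()      # center line

sigma = np.sqrt(p_bar * (1 - p_bar) / n)   # binomial standard error per day
ucl = p_bar + 3 * sigma
lcl = np.maximum(p_bar - 3 * sigma, 0)

print(f"p-bar = {p_bar:.4f}")
print("Out-of-control samples:", np.where((p > ucl) | (p < lcl))[0] + 1)
```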

- Exponentially Weighted Moving Average Chart


Here is the Exponentially Weighted Moving Average (EWMA) chart for the proportion of nonconformities
over the days. The chart includes:

- EWMA Line: This line represents the smoothed proportion of nonconformities.
- Center Line: The overall mean proportion of nonconformities.
- Upper Control Limit (UCL) and Lower Control Limit (LCL): These lines help identify any points that are
out of control, indicating potential issues in the process.

To create the EWMA chart, we use a typical smoothing constant (lambda) of 0.2.
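A minimal sketch of the EWMA calculation with λ = 0.2; the daily proportions and the constant sample size n = 50 are illustrative assumptions, not our recorded data:

```python
import numpy as np

lam, L = 0.2, 3.0   # smoothing constant from the report and limit width

# Illustrative daily proportions of nonconformities (placeholder values)
p = np.array([0.08, 0.12, 0.06, 0.10, 0.14, 0.08, 0.10, 0.12, 0.06, 0.10])
n = 50                                     # assumed constant sample size
p_bar = p.mean()                           # center line and EWMA start value
sigma = np.sqrt(p_bar * (1 - p_bar) / n)

# z_i = lam * x_i + (1 - lam) * z_{i-1}, with z_0 = p_bar
z, prev = np.zeros(len(p)), p_bar
for i, x in enumerate(p):
    z[i] = lam * x + (1 - lam) * prev
    prev = z[i]

# Time-varying limits widen toward their steady-state value as i grows
t = np.arange(1, len(p) + 1)
width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
ucl, lcl = p_bar + width, np.maximum(p_bar - width, 0)

print("Out-of-control samples:", np.where((z > ucl) | (z < lcl))[0] + 1)
```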
Out of Control Action Plan Flow Chart

The out-of-control action plan proceeds as follows:

1. Data Finding
2. Record of Statistics
3. Control Chart Interpretation
4. Is there an assignable cause? If no, edit the data to correct the entry.
5. Was our result affected? If no, retain the data.
6. Does the package need more control? If no, check other variables.
7. Check to improve the condition.
8. Document and report every action.


6- Error Analyses
In quality control, Type I and Type II errors are critical concepts associated with hypothesis testing. The
probability of a Type I error equals the test's significance level, while the probability of a Type II error is
inversely related to the test's power.

Type I Error
A Type I error occurs when the null hypothesis is true, but it is incorrectly rejected. The probability of
committing a Type I error is denoted by α, the significance level chosen for the hypothesis test. To
minimize this risk, a smaller α value should be used. However, using a lower α value decreases the
likelihood of detecting a true effect if one exists (increasing the risk of a Type II error, β).

Type II Error
A Type II error occurs when the null hypothesis is false, but it is incorrectly accepted. The probability of
making a Type II error is denoted by β, which is dependent on the test's power. The risk of a Type II
error can be reduced by ensuring a sufficient sample size to detect a practical difference when one
exists.

Understanding and managing these errors is essential for maintaining the integrity of quality control
processes and making informed decisions based on statistical tests.
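In symbols, with H0 denoting the null hypothesis:

$$\alpha = P(\text{reject } H_0 \mid H_0 \text{ true}), \qquad \beta = P(\text{fail to reject } H_0 \mid H_0 \text{ false}), \qquad \text{power} = 1 - \beta$$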
Type 1 Error
Type 1 error, also known as the alpha error or false positive error, occurs when a process is identified as
being out of control when it is actually in control. For control charts, this error is related to the probability
of a point falling outside the control limits when the process is stable.

Type 1 Error for a u-Chart

A u-chart is used to monitor the number of defects per unit in a sample. Similar to p-charts, the control
limits for u-charts are typically set at ±3 sigma from the average number of defects per unit, which
accounts for about 99.73% of the data under normal distribution.

For a u-chart, the type 1 error (α) is also approximately 0.0027 (or 0.27%) for a single point outside the
control limits. This is because:

- The control limits are designed such that there is a 99.73% probability that a point will fall within these
limits if the
process is in control.
- Thus, there is a 0.27% chance that a point will fall outside the control limits purely by random variation,
leading to a false positive or type 1 error.
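The 0.27% figure follows from the standard normal distribution, with Φ the standard normal CDF:

$$\alpha = P(|Z| > 3) = 2\,(1 - \Phi(3)) \approx 2\,(1 - 0.99865) = 0.0027$$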

Here are the type 1 error calculations for the p-chart and the EWMA chart:
p-Chart

For a p-chart, the type 1 error is based on the control limits set at ±3 sigma. The probability of a Type 1
error (α) for a single point is approximately 0.0027 (or 0.27%) because:

- The control limits are set such that 99.73% of points fall within the control limits under normal
distribution (±3 sigma).
- Thus, the chance of a point falling outside the control limits is 0.27%.

EWMA Chart

For an EWMA chart, the type 1 error is influenced by the choice of the smoothing constant (λ) and the
control limits. The control limits are typically set to ensure a desired false alarm rate.

- With λ = 0.2 and control limits set at ±3 sigma, the type 1 error for an EWMA chart is also approximately 0.0027 (or 0.27%) for individual points.
- The exact type 1 error can vary slightly based on the specific parameters and calculations used for the
EWMA chart.

In both cases, the type 1 error is a measure of how likely it is to incorrectly signal an out-of-control
process when the process is actually in control.

Summary of Type 1 Errors for Our Charts

- p-Chart: Approximately 0.27%


- EWMA Chart: Approximately 0.27% (with λ = 0.2 and control limits at ±3 sigma)
- u-Chart: Approximately 0.27%

These type 1 error rates indicate the likelihood of incorrectly identifying a process as out of control when
it is actually in control, assuming the process data follows a normal distribution.

Type 2 error for u chart

The OC curve for the u chart can be generated using the equation below, where the total defect count D
in a sample follows a Poisson distribution with mean λ = n·u:

$$\beta = P\{D < n\,\mathrm{UCL} \mid u\} - P\{D \le n\,\mathrm{LCL} \mid u\}, \qquad D \sim \mathrm{Poisson}(nu)$$
To calculate the Type II error (β) for a u-chart, which is the probability of not detecting a shift in the
process when one actually exists, we need to define a specific shift size in the mean number of defects
per unit that we are interested in detecting. This is commonly expressed as a multiple of the baseline
defect rate.

The calculation of the Type II error involves understanding the distribution of the process after the shift
and the sensitivity of the control chart in detecting such shifts. Here's the general approach:

1. Define the Shift Size: Decide on the magnitude of the shift in terms of the baseline defect rate (e.g.,
1.5 times the baseline if we are looking for a 50% increase).

2. Calculate New Average (µ'): This is the baseline defect rate multiplied by the shift size.

3. Control Limits Calculation: These remain based on the original baseline defect rate.

4. Error Calculation: Calculate the probability that points from the shifted process still fall within the
original control limits.
Here, the main statistical tool used is the normal approximation of the Poisson distribution (since u-charts
assume defects follow a Poisson distribution). We perform this calculation for a specific example:
detecting a 50% increase in the defect rate.
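A sketch of this calculation in Python, using the exact Poisson distribution rather than the normal approximation; the baseline rate u0 = 0.20 and sample size n = 50 are assumed placeholders, so the resulting β will differ from the 0.464 figure quoted below:

```python
import numpy as np
from scipy.stats import poisson

# Assumed placeholder parameters -- not the values behind the 0.464 figure
u0, n = 0.20, 50          # baseline defects per unit, packets per sample
u1 = 1.5 * u0             # shifted rate: a 50% increase

# Control limits stay based on the original baseline rate
ucl = u0 + 3 * np.sqrt(u0 / n)
lcl = max(u0 - 3 * np.sqrt(u0 / n), 0)

# Total defects per sample follow Poisson(n*u); beta is the probability
# that a sample from the shifted process still plots inside the limits
beta = (poisson.cdf(np.floor(n * ucl), n * u1)
        - poisson.cdf(np.ceil(n * lcl) - 1, n * u1))
print(f"Type II error (beta) for a 50% increase: {beta:.3f}")
```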

The calculated Type II error (β) for a u-chart, when looking to detect a 50% increase in the defect rate, is
approximately 0.464. This means there is about a 46.4% chance that the chart will fail to detect a shift in
the process where the defect rate has increased by 50%.

This error rate indicates that the u-chart has only moderate sensitivity to a shift of this size under the
current settings. If the chart needs to be more sensitive to smaller shifts, adjustments to the control limit
calculations or the sample size may be necessary.

The OC curve for the u-chart

Here is the simulated Operating Characteristic (OC) curve for the u-chart. The curve shows the
probability of not detecting various levels of shifts in the process mean defect rate, from 0% to 200% of
the baseline defect rate u. The x-axis represents the multiple of the baseline defect rate, and the y-axis
shows the Type II error: the probability of not detecting a shift.

This visualization helps assess how effectively the u-chart detects shifts in the defect rate. The chart
becomes more effective at detecting larger increases in defects, as indicated by the decreasing Type II
error as the defect rate increases.
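A sketch of how such an OC curve could be generated, with the same assumed placeholder parameters as above:

```python
import numpy as np
from scipy.stats import poisson
import matplotlib.pyplot as plt

u0, n = 0.20, 50                    # assumed baseline rate and sample size
ucl = u0 + 3 * np.sqrt(u0 / n)      # limits stay based on the baseline rate
lcl = max(u0 - 3 * np.sqrt(u0 / n), 0)

# Sweep the true defect rate from 10% to 200% of baseline
multiples = np.linspace(0.1, 2.0, 40)
betas = [poisson.cdf(np.floor(n * ucl), n * m * u0)
         - poisson.cdf(np.ceil(n * lcl) - 1, n * m * u0)
         for m in multiples]

plt.plot(multiples, betas)
plt.xlabel("Multiple of baseline defect rate u")
plt.ylabel("Type II error (beta)")
plt.title("OC curve for the u-chart")
plt.show()
```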

Here is our Excel data for attributes.


Process Capability
If we determine the values as

- USL (Upper Specification Limit) = 75


- LSL (Lower Specification Limit) = 5
- Mean = 40
- Standard Deviation = 2

The process capability indices Cp and Cpk for our process are both 5.833. This suggests that our
process is capable and well centered within the specification limits; since the Cp and Cpk values are
equal, there is no significant shift from the center of the specification range. These high values imply
that the process variability is well within acceptable limits, making the process highly capable of meeting
the specifications consistently.
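As a check, the standard capability formulas reproduce these values:

$$C_p = \frac{\mathrm{USL} - \mathrm{LSL}}{6\sigma} = \frac{75 - 5}{6 \times 2} = \frac{70}{12} \approx 5.833$$

$$C_{pk} = \min\left(\frac{\mathrm{USL} - \mu}{3\sigma}, \frac{\mu - \mathrm{LSL}}{3\sigma}\right) = \min\left(\frac{75 - 40}{6}, \frac{40 - 5}{6}\right) = \frac{35}{6} \approx 5.833$$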
