
Ch. 10 Group Technology

2007Fall_MAE547
GyeongJune Hahm

• 10.1 Introduction
• 10.2 Cluster Analysis Method
• 10.3 Branching Algorithms
• 10.4 Assignment of Parts to the Existing Machine Cells
• 10.5 Summary

• 10.1 Introduction
▫ 10.1.1 Visual Method
▫ 10.1.2 Coding Method
• 10.2 Cluster Analysis Method
▫ 10.2.1 Matrix Formulation
▫ 10.2.2 Mathematical Programming Formulation
▫ 10.2.3 Innovative Applications of Group Technology

10.1 Introduction
• Advantages
▫ Reduced production lead time
▫ Reduced work-in-process (WIP)
▫ Reduced labor
▫ Reduced tooling
▫ Reduced rework and scrap materials
▫ Reduced setup time
▫ Reduced order delivery time
▫ Improved human relations
▫ Reduced paperwork

Two basic methods for solving the GT problem
• Classification
▫ Visual method (10.1.1)
▫ Coding method (10.1.2)
• Cluster analysis (10.2)
10.1.1 Visual Method

 Parts are grouped into families according to their similarities in geometric shape
 Dependent on personal preference (subjective)
 Applicable when the number of parts is rather limited
10.1.2 Coding Method

 Parts are first classified based on features (characteristics)
 A coding system assigns a numerical or alphanumerical code to each part
 Three basic types of coding systems:
  Monocode (tree structure)
  Polycode
  Hybrid
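To make the distinction concrete, here is a small illustrative Python sketch of how a polycode is read: each digit describes one independent attribute (in a monocode, by contrast, a digit's meaning depends on the digits before it). The attribute fields and their values are made up for illustration.

```python
# Each digit of a polycode describes one independent part attribute.
# (Hypothetical fields and values -- not from any real coding standard.)
POLYCODE_FIELDS = [
    ("external shape", {"1": "cylindrical", "2": "prismatic"}),
    ("internal shape", {"0": "none", "1": "through hole"}),
    ("material",       {"1": "steel", "2": "aluminum"}),
]

def decode(code):
    """Decode a polycode digit by digit; every digit is independent."""
    return {name: values[d] for (name, values), d in zip(POLYCODE_FIELDS, code)}

print(decode("211"))  # {'external shape': 'prismatic', 'internal shape': 'through hole', 'material': 'steel'}
```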
10.2 Cluster Analysis Method

 Grouping objects into homogeneous clusters (groups) based on the object features
 Grouping: parts into part families (PFs), machines into machine cells (MCs)
 To model the GT problem, three clustering formulations are used:
  Matrix formulation (10.2.1)
  Mathematical programming formulation (10.2.2)
  Graph formulation
Result of grouping

 Physical cell vs. logical (virtual) cell

10.2.1 Matrix Formulation


• 10.2.1.1 Similarity Coefficient Methods
• 10.2.1.2 Sorting-Based Algorithms
• 10.2.1.3 Cluster Identification Algorithms
• 10.2.1.4 Extended CI Algorithms
Incidence matrix

 A machine-part incidence matrix [aij] with binary entries aij ∈ {0, 1}
 A clustering algorithm transforms the initial incidence matrix into a more structured (possibly block diagonal) form by rearranging its rows and columns
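As a minimal illustration (not from the slides), the NumPy sketch below builds a small made-up incidence matrix and applies a known row/column permutation to expose its block-diagonal structure:

```python
import numpy as np

# A small machine-part incidence matrix: rows = machines, columns = parts.
# a[i, j] = 1 means part j visits machine i.
a = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
])

# Rearranging rows and columns (here with a permutation chosen by hand)
# exposes the block-diagonal structure of two machine cells.
rows = [0, 2, 1, 3]   # machines 1, 3 first, then 2, 4
cols = [0, 2, 1, 3]   # parts 1, 3 first, then 2, 4
print(a[np.ix_(rows, cols)])
```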
Separable clusters

 Clustering a binary incidence matrix may result in two classes of clusters:
  Mutually separable clusters
  Partially separable clusters
Bottleneck part

 To deal with the bottleneck part (part 5 in the example):
  It can be machined in one machine cell and transferred to the other machine cell by a material handling carrier
  It can be machined in a functional facility
  It can be subcontracted
Bottleneck machine
Solution methods

 To solve the matrix formulation of the GT problem:
  Similarity coefficient methods (10.2.1.1)
  Sorting-based algorithms (10.2.1.2)
   Bond energy algorithm
   Cost-based method
  Cluster identification algorithm (10.2.1.3)
  Extended cluster identification algorithm (10.2.1.4)
 Some of these solution methods are discussed below

10.2.1.1 Similarity Coefficient Methods

• SLCA (single linkage cluster analysis)
• Similarity coefficient sij: a measure of similarity between two machines i and j
• Machine cells are generated using a threshold value for sij
• Disadvantage: fails to recognize the chaining that results from the duplication of bottleneck machines

Similarity coefficient sij

 Example: s12 = s34 = 0.66, s13 = 0.25, s14 = s24 = s23 = 0

Fig. 10.5 Tree of machines
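The slide does not reproduce the formula; a common choice, and one consistent with the example values above, is the Jaccard measure, sij = (number of parts visiting both machines i and j) / (number of parts visiting either). A sketch under that assumption, with a made-up incidence matrix:

```python
import numpy as np

def jaccard_similarity(a, i, j):
    """Similarity between machines i and j of incidence matrix a:
    parts processed by both machines / parts processed by either."""
    both = np.sum((a[i] == 1) & (a[j] == 1))
    either = np.sum((a[i] == 1) | (a[j] == 1))
    return both / either if either else 0.0

a = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
])
print(jaccard_similarity(a, 0, 1))  # machines 1 and 2: 2/3 = 0.66
```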

10.2.1.2 Sorting-Based Algorithms

• Sort the rows and columns of the machine-part incidence matrix
• ROC (rank-order clustering)
• A weight is computed for each row i and column j
• Separable clusters are visible after the sorting
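A compact sketch of ROC, assuming the standard rule (read each row, then each column, as a binary number and re-sort in decreasing weight until the matrix stops changing); the matrix is a made-up example:

```python
import numpy as np

def roc(a, max_iters=20):
    """Rank-order clustering: repeatedly sort rows and columns
    by decreasing binary weight until the matrix is stable."""
    a = a.copy()
    for _ in range(max_iters):
        prev = a.copy()
        # Row weights: read each row as a binary number.
        row_w = a @ (2 ** np.arange(a.shape[1])[::-1])
        a = a[np.argsort(-row_w, kind="stable")]
        # Column weights: read each column as a binary number.
        col_w = (2 ** np.arange(a.shape[0])[::-1]) @ a
        a = a[:, np.argsort(-col_w, kind="stable")]
        if np.array_equal(a, prev):
            break
    return a

a = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])
print(roc(a))  # separable clusters become visible after sorting
```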

10.2.1.3 Cluster Identification Algorithms

• Partition the machine-part incidence matrix into mutually separable submatrices
• CI algorithm in Ch. 2 (p. 52)
• Remove all double-crossed entries

10.2.1.3 Cluster Identification Algorithms – example

[Slides 24–28 step through the CI algorithm on an example incidence matrix; the final slide shows the CIA result]
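As a hedged sketch of the crossing procedure (mark the columns of a selected row, then the rows of those columns, and so on until a cluster closes), the CI step can be read as connected-component search on the incidence matrix; the matrix below is made up:

```python
import numpy as np

def cluster_identification(a):
    """CI-style partition: repeatedly 'cross' rows and columns that
    share 1-entries until a cluster (machine cell, part family)
    closes, then continue with the remaining matrix."""
    unvisited_rows = set(range(a.shape[0]))
    clusters = []
    while unvisited_rows:
        rows = {unvisited_rows.pop()}      # Step 1: pick any row
        cols, grew = set(), True
        while grew:                        # Steps 2-3: cross entries
            grew = False
            new_cols = {j for i in rows for j in np.flatnonzero(a[i])}
            if not new_cols <= cols:
                cols |= new_cols
                grew = True
            new_rows = {i for j in cols for i in np.flatnonzero(a[:, j])}
            if not new_rows <= rows:
                rows |= new_rows
                grew = True
        unvisited_rows -= rows
        clusters.append((sorted(rows), sorted(cols)))
    return clusters

a = np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
])
for mc, pf in cluster_identification(a):
    print("machine cell:", mc, "part family:", pf)
```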

Fig. 10.6 Conceptual design of a cellular manufacturing system

10.2.1.4 Extended CI Algorithms

• Heuristically solves the GT problem with bottleneck parts or bottleneck machines
• Selects a row based on user expertise
• Considers constraints: the number of machines in a cell; machines that cannot be included in the temporary cell

10.2.2 Mathematical Programming Formulation

• Axioms for a distance measure $d_{ij}$ between parts i and j
▫ Reflexivity: $d_{ii} = 0$
▫ Symmetry: $d_{ij} = d_{ji}$
▫ Triangle inequality: $d_{iq} \le d_{ip} + d_{pq}$

• Most commonly used distance measures for a machine-part incidence matrix $[a_{ij}]$:
▫ Minkowski distance measure
▫ Weighted Minkowski distance measure
▫ Hamming distance

• Minkowski distance measure

$$d_{ij} = \left( \sum_{k=1}^{m} |a_{ki} - a_{kj}|^r \right)^{1/r}$$

where r is a positive integer and m is the number of machines
(a) Absolute metric (for r = 1)
(b) Euclidean metric (for r = 2)

• Weighted Minkowski distance measure

$$d_{ij} = \left( \sum_{k=1}^{m} w_k \, |a_{ki} - a_{kj}|^r \right)^{1/r}$$
• Hamming distance

$$d_{ij} = \sum_{k=1}^{m} \delta(a_{ki}, a_{kj}), \qquad \delta(a_{ki}, a_{kj}) = \begin{cases} 1 & \text{if } a_{ki} \neq a_{kj} \\ 0 & \text{otherwise} \end{cases}$$

• Mathematical programming models
▫ p-median model
▫ Generalized p-median model
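A minimal sketch of these measures over columns (parts) of a made-up incidence matrix; for binary data the absolute Minkowski metric (r = 1) coincides with the Hamming distance:

```python
import numpy as np

def minkowski(a, i, j, r=1, w=None):
    """Minkowski distance between part columns i and j of the
    machine-part incidence matrix a (r=1 absolute, r=2 Euclidean).
    Pass a weight vector w for the weighted variant."""
    w = np.ones(a.shape[0]) if w is None else w
    return (w @ np.abs(a[:, i] - a[:, j]) ** r) ** (1 / r)

def hamming(a, i, j):
    """Hamming distance: number of machines on which parts i and j differ."""
    return int(np.sum(a[:, i] != a[:, j]))

a = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 1],
])
print(minkowski(a, 0, 1, r=2), hamming(a, 0, 1))
```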
10.2.2.1 p-Median Model

• Notation
 m = number of machines
 n = number of parts
 p = number of part families
 $x_{ij}$ = 1 if part i belongs to part family j; 0 otherwise
 $d_{ij}$ = distance measure between parts i and j

• Objective function: minimize the total distance between any two parts i and j

$$\min \sum_{i=1}^{n} \sum_{j=1}^{n} d_{ij} x_{ij}$$

subject to

$$\sum_{j=1}^{n} x_{ij} = 1 \quad \text{for all } i = 1, \ldots, n$$

$$\sum_{j=1}^{n} x_{jj} = p$$

$$x_{ij} \le x_{jj} \quad \text{for all } i = 1, \ldots, n, \; j = 1, \ldots, n$$

$$x_{ij} \in \{0, 1\} \quad \text{for all } i = 1, \ldots, n, \; j = 1, \ldots, n$$
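For intuition, a brute-force sketch of the p-median idea for tiny instances: enumerate every choice of p median parts, assign each part to its nearest median, and keep the cheapest grouping. The distance matrix is made up:

```python
import itertools
import numpy as np

def p_median(d, p):
    """Brute-force p-median for small n: pick p 'median' parts,
    assign every part to its nearest median, minimize total distance."""
    n = d.shape[0]
    best = (np.inf, None)
    for medians in itertools.combinations(range(n), p):
        cost = sum(min(d[i][j] for j in medians) for i in range(n))
        if cost < best[0]:
            assign = {i: min(medians, key=lambda j: d[i][j]) for i in range(n)}
            best = (cost, assign)
    return best

# Hamming distances between 4 hypothetical parts.
d = np.array([
    [0, 1, 3, 3],
    [1, 0, 3, 3],
    [3, 3, 0, 1],
    [3, 3, 1, 0],
])
cost, assignment = p_median(d, p=2)
print(cost, assignment)  # parts grouped into 2 families
```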

10.2.2.2 Generalized p-Median Model

• Considerations
▫ More than one process plan for each part
▫ A production cost associated with each process plan
• Notation
 m = number of machines
 n = number of process plans
 $F_k$ = set of process plans for part k
 p = number of part families
 $d_{ij}$ = distance measure between process plans i and j
 $c_j$ = cost of process plan j

• Objective function

$$\min \sum_{i=1}^{n} \sum_{j=1}^{n} d_{ij} x_{ij} + \sum_{j=1}^{n} c_j x_{jj}$$

subject to

$$\sum_{i \in F_k} \sum_{j=1}^{n} x_{ij} = 1 \quad \text{for each part } k$$

$$\sum_{j=1}^{n} x_{jj} = p$$

$$x_{ij} \le x_{jj} \quad \text{for all } i = 1, \ldots, n, \; j = 1, \ldots, n$$

$$x_{ij} \in \{0, 1\} \quad \text{for all } i = 1, \ldots, n, \; j = 1, \ldots, n$$

10.2.3 Innovative Applications of Group Technology

10.3 Branching Algorithms

• Algorithm 10.1
• Algorithm 10.2
• Algorithm 10.3
• Algorithm 10.4
• Algorithm 10.5

10.3 Branching Algorithms

• Heuristic algorithms for solving the GT problem
• Main features of the algorithms:
▫ High-quality solutions
▫ Detection of bottleneck parts and bottleneck machines
▫ Ease of incorporating constraints of any type
▫ Simplicity of the algorithms and ease of coding
• The CI algorithm is used at each node

Algorithm 10.1

• Step 0 (Initialization): Begin with the incidence matrix [aij] at level 0. Solve the GT problem represented by [aij] with the CI algorithm
• Step 1 (Branching): Using the breadth-first search strategy, select an active (not fathomed) node and solve the corresponding GT problem with the CI algorithm
• Step 2 (Fathoming): Exclude a new node from further consideration if:
 Test 1: the cluster size is not satisfactory
 Test 2: the cluster structure is not satisfactory
• Step 3 (Backtracking): Return to an active node
• Step 4 (Stopping rule): Stop when no active nodes remain; the current incumbent solution is optimal. Otherwise, go to Step 1

• Branching
 1. By the CI algorithm, when the incidence matrix partitions into mutually separable submatrices
 2. By removal of one column at a time from the corresponding incidence matrix, when the matrix does not partition into mutually separable submatrices

• Assumption: 2 ≤ size of machine cell ≤ 3 (see the sketch below)
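A self-contained sketch of this branching idea under the stated 2-to-3 cell-size assumption: nodes are sets of kept part columns, the CI step is read as a connected-component search, and branching removes one column at a time. The function names and example matrix are mine, not the book's:

```python
import numpy as np

def machine_cells(a):
    """Machine cells = connected components of the incidence matrix."""
    left = set(range(a.shape[0]))
    cells = []
    while left:
        rows = {left.pop()}
        while True:  # grow the cell by crossing rows and columns
            cols = {j for i in rows for j in np.flatnonzero(a[i])}
            new = rows | {i for j in cols for i in np.flatnonzero(a[:, j])}
            if new == rows:
                break
            rows = new
        left -= rows
        cells.append(sorted(rows))
    return cells

def branching(a, min_size=2, max_size=3):
    """Breadth-first search over subsets of part columns (Algorithm
    10.1 in spirit): fathom a node when every cell size is feasible."""
    queue = [tuple(range(a.shape[1]))]
    while queue:
        cols = queue.pop(0)
        cells = machine_cells(a[:, cols])
        if all(min_size <= len(c) <= max_size for c in cells):
            return cols, cells
        # Branch by removing one remaining column (a candidate bottleneck part).
        queue.extend(cols[:i] + cols[i + 1:] for i in range(len(cols)))
    return None

a = np.array([          # part 5 (last column) is the bottleneck
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
])
print(branching(a))     # drops column 4, leaving two 2-machine cells
```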

<Branching by CI>

Fig. 10.8 Branching algorithm enumeration tree

Final Solution

Algorithm 10.2

• Algorithm 10.1
▫ Remove one column (part), then apply the CI algorithm
• Algorithm 10.2
▫ Insert machines (rows), then apply the CI algorithm
• Algorithms 10.1 and 10.2
▫ Simple but not efficient
▫ Guarantee optimality

Algorithm 10.3

• Remove one column at a time
• Calculate a row index
• MRI (maximum value of the row index)
• May result in undesirable partitioning (a cluster that is too large)

MRI

Fig. 10.9 Row index

Fig. 10.10 Child nodes obtained from matrix (10.46)

Algorithm 10.4

• Limits the size of each machine cell
• If the submatrix identified by the CI algorithm is too large, perform branching using Algorithm 10.3

Fig. 10.11 Child nodes obtained from matrix (10.51)

Algorithm 10.5

• Screen (filter out) bottleneck parts and machines; the remaining steps are identical to Algorithm 10.4 (see the sketch below)
• If the number of machines required by a part is greater than the allowed machine cell size, that part can be considered a possible bottleneck part
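The screening rule in the second bullet translates directly into a column-sum check; a minimal sketch on a made-up matrix:

```python
import numpy as np

def screen_bottleneck_parts(a, max_cell_size=3):
    """Flag parts that visit more machines than a cell may contain."""
    machines_needed = a.sum(axis=0)      # column sums of the incidence matrix
    return np.flatnonzero(machines_needed > max_cell_size)

a = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
])
print(screen_bottleneck_parts(a))  # part 4 (index 3) visits all machines
```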

Comparison

• Algorithm 10.1: remove one column (part), then apply the CI algorithm
• Algorithm 10.2: insert machines (rows), then apply the CI algorithm
• Algorithm 10.3: does not guarantee optimality but is computationally efficient
• Algorithm 10.4: same as 10.3, but considers the machine cell size
• Algorithm 10.5: same as 10.4, but removes bottleneck parts or machines in the initial stage

Industrial applications

• A large number of parts and machines
• The matrix representation is the most suitable
• Apply the CI algorithm first to decompose the problem into submatrices

10.4 Assignment of Parts to the Existing Machine Cells

• Minimize the total number of assignments (visits) of parts to cells, where n is the number of parts and k is the number of cells:

$$\min \sum_{i=1}^{n} \sum_{j=1}^{k} x_{ij} \qquad (10.63)$$

subject to

$$\sum_{i=1}^{n} x_{ij} \le N \quad \text{for each cell } j \qquad (10.65)$$

• The total number of parts in each cell is constrained by N (the maximum allowed number)
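The slides skip constraint (10.64); assuming it requires each part to be assigned to at least one cell, the LP relaxation of the model can be sketched with SciPy as below (the instance sizes and the coverage constraint are my assumptions, not the book's):

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: n parts, k cells, capacity N parts per cell.
n, k, N = 6, 2, 4
c = np.ones(n * k)                      # minimize total assignments

# Assumed coverage constraint (the slides omit (10.64)):
# each part must be assigned to at least one cell.
A_cover = np.zeros((n, n * k))
for i in range(n):
    A_cover[i, i * k:(i + 1) * k] = -1  # -sum_j x_ij <= -1

# Capacity constraint (10.65): at most N parts per cell.
A_cap = np.zeros((k, n * k))
for j in range(k):
    A_cap[j, j::k] = 1

res = linprog(c,
              A_ub=np.vstack([A_cover, A_cap]),
              b_ub=np.concatenate([-np.ones(n), N * np.ones(k)]),
              bounds=(0, 1))
print(res.x.reshape(n, k))              # LP relaxation of the assignment
```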

10.5 Summary

• Two basic methods used for solving the GT problem:
• Classification
▫ Visual method (10.1.1)
▫ Coding method (10.1.2)
• Cluster analysis (10.2)

Thank you.
