Modeling Medium Carbon Steels by Using Artificial Neural Networks

Materials Science and Engineering A 508 (2009) 93-105

Contents lists available at ScienceDirect: Materials Science and Engineering A. Journal homepage: www.elsevier.com/locate/msea

N.S. Reddy (a,*), J. Krishnaiah (b), Seong-Gu Hong (c), Jae Sang Lee (a)

(a) Graduate Institute of Ferrous Technology (GIFT), Pohang University of Science and Technology (POSTECH), Pohang 790-784, Republic of Korea
(b) Former scholar at Department of Mechanical Engineering, Indian Institute of Technology, Kharagpur 721302, India
(c) Center for New and Renewable Energy Measurement, Division of Industrial Metrology, Korea Research Institute of Standards and Science, Gajeong-Ro, Yuseong-Gu, Daejeon, Republic of Korea

ARTICLE INFO

Article history: Received 7 October 2008; Received in revised form 11 December 2008; Accepted December 2008.

Keywords: Artificial neural networks; Low alloy steels; Mechanical properties; Heat treatment parameters; Alloy design

ABSTRACT

An artificial neural network (ANN) model has been developed for the analysis and simulation of the correlation between the mechanical properties and the composition and heat treatment parameters of low alloy steels. The input parameters of the model consist of alloy composition (C, Si, Mn, S, P, Ni, Cr and Mo) and heat treatment parameters (cooling rate and tempering temperature). The outputs of the ANN model include the property parameters, namely ultimate tensile strength, yield strength, percentage elongation, reduction in area and impact energy. The model can be used to calculate the properties of low alloy steels as a function of alloy composition and heat treatment variables. The individual and the combined influence of inputs on the properties of medium carbon steels is simulated using the model. The current study achieved a good performance of the ANN model, and the results are in agreement with experimental knowledge. An explanation of the calculated results from the metallurgical point of view is attempted.
The developed model can be used as a guide for further alloy development.

© 2008 Elsevier B.V. All rights reserved.

1. Introduction

The properties of alloy steels mainly depend on the mechanical treatment, the alloying elements and the heat treatment variables. These three affect the microstructure, which in turn affects the properties. All alloys have defects such as intrusions, micro-blowholes and cracks. Apart from these defects, when alloys are made under industrial conditions, some amount of inhomogeneity and inconsistency invariably creeps in. These make the system more chaotic. Besides this, the input variables that are considered are in reality fuzzy in nature. The first hurdle to overcome during the modeling of steels is to acquire a reliable database. The industries that manufacture these alloys (or steels) tend to keep their process variables confidential, for obvious reasons. It is therefore very difficult to get a data set or information that would contain all of the above details. As the relationships between these outputs and inputs are nonlinear and complex in nature, it is impossible to express them in the form of mathematical equations. Techniques such as linear regression are not well suited for accurate modeling of data which exhibits considerable 'noise', which is usually the case. Regression analysis to model non-linear data necessitates the use of an equation to attempt to transform the data into a linear form [1]. This represents an approximation that inevitably introduces a significant degree of error. Similarly, it is not easy to use statistical methods to relate multiple inputs to multiple process outputs. The method using neural networks (NN), on the other hand, has been identified as a suitable way of overcoming these difficulties.

* Corresponding author. Tel.: +82 54 279 5935. E-mail address: nsreddy@postech.ac.kr (N.S. Reddy).

0921-5093/$ - see front matter © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.msea.2008.12.022
NN are mathematical models and algorithms that imitate certain aspects of the information-processing and knowledge-gathering methods of the human nervous system [2-5]. A NN can perform highly complex mappings on nonlinearly related data by inferring subtle relationships between input and output parameters [5-8]. It can, in principle, generalize from a limited quantity of training data to overall trends in functional relationships [9]. Although several network architectures and training algorithms are available, the feed-forward neural network with the back-propagation (BP) learning algorithm is the most commonly used [8]. Consequently, within the last decade, the application of neural networks in materials science research has steadily increased. A number of recent reviews have identified the application of neural networks to a diverse range of materials science problems [10,11]. The objective of the present work is, therefore, to develop a neural network model which can predict the properties for a given composition and heat treatment, as well as the relationships of the properties to these input variables.

Fig. 1. Cooling rates for (a) oil quenching and (b) water quenching, from the ASM Metals Handbook [15].

2. Back-propagation neural networks (BPNN) modeling

A BPNN model consists of an input layer and an output layer, each with as many units as its respective number of variables.
In between the input and output layers are the hidden layers, each having a certain number of nodes (or units). The actual number of nodes depends heavily on the system and the acceptable error level of the model. The error-correction step takes place after a pattern is presented at the input layer and the forward propagation is completed. Each processing unit in the output layer produces a single real number as its output, which is compared with the targeted output specified in the training set. Based on this difference, an error value is calculated for each unit in the output layer. The weight of each of these units is adjusted for all of the interconnections established with the output layer. After this, a second set of weights is adjusted for all the interconnections coming into the hidden layer just beneath the output layer. The process is continued until the weights of the first hidden layer are adjusted. As the correction mechanism starts with the output units and propagates backward through each internal hidden layer to the input layer, the algorithm is called back-error propagation or back propagation (BP). A BP network is trained using a supervised learning mechanism: the network is presented with patterns (inputs and targeted outputs) in the training phase, and upon each presentation the weights are adjusted to decrease the error between the network's output and the targeted output [3,12]. A detailed description of the BP algorithm used in the present study is beyond the scope of this paper; it can be found in the literature [2,13]. The training of the neural network has been carried out using software developed in the programming language C, and the sigmoid function was used as the activation function with the BP training algorithm.
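The forward pass and error-correction step just described can be sketched in a few lines. The paper's implementation was written in C and is not reproduced here; the NumPy version below is purely illustrative, and the `train_step` helper, layer sizes and data are assumptions rather than the original code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, weights, biases, lr=0.55, alpha=0.65, prev_updates=None):
    """One back-propagation step for a feed-forward net with sigmoid units.

    weights/biases: one entry per layer (hidden layers + output layer).
    lr: learning rate (eta); alpha: momentum rate; prev_updates: weight
    deltas from the previous step, used by the momentum term.
    """
    # Forward pass: store every layer's activation, starting with the input.
    activations = [x]
    for W, b in zip(weights, biases):
        activations.append(sigmoid(activations[-1] @ W + b))

    # Output-layer error, then back-propagate deltas (sigmoid derivative a*(1-a)).
    deltas = [(activations[-1] - target) * activations[-1] * (1 - activations[-1])]
    for i in range(len(weights) - 1, 0, -1):
        a = activations[i]
        deltas.insert(0, (deltas[0] @ weights[i].T) * a * (1 - a))

    # Gradient-descent weight update with a momentum term.
    if prev_updates is None:
        prev_updates = [np.zeros_like(W) for W in weights]
    new_updates = []
    for i, (W, a, d) in enumerate(zip(weights, activations[:-1], deltas)):
        upd = -lr * np.outer(a, d) + alpha * prev_updates[i]
        weights[i] = W + upd
        biases[i] = biases[i] - lr * d
        new_updates.append(upd)

    mse = float(np.mean((activations[-1] - target) ** 2))
    return mse, new_updates
```

With layer sizes [11, 22, 22, 5] this matches the 11-22-22-5 architecture discussed later in the paper; repeated calls on the training patterns drive the MSE down.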
3. The steel under consideration

Typical applications of low alloy steels are general engineering components, high tensile bolts and studs, crankshafts, gears, boring bars, lead screws, shafting, milling and boring cutter bodies, and collets. In the softened condition, the machinability of these steels is approximately 60% of that of mild steel, while in the hardened and tempered condition the machinability is 45-55% of that of mild steel. The data on this low alloy steel has been collected from the handbook on Standard EN Steels [14]. The data consists of the composition, i.e. the percentages of carbon, silicon, manganese, sulphur, phosphorous, nickel and chromium, as well as the section size, soaking temperature and type of quenching. The properties considered are the yield strength (YS), ultimate tensile strength (UTS), % elongation (EL), % reduction in area (RA) and impact strength (IS) in Joules.

The various heat treatment processes applied to low alloy steels are softening (sub-critical annealing) at 650-690 °C followed by air cooling or oil quenching, hardening at 830-860 °C followed by oil quenching, and tempering at 550-660 °C followed by air cooling or oil quenching. The data consists of different section sizes and their heat treatment temperatures and types of quenching. The cooling rate therefore varies with the section size and is not uniform; furthermore, the variations of the cooling rates are not linear. Based on the ASM Metals Handbook [15] (as shown in Fig. 1), cooling rate equations were developed for the different section sizes (Fig. 2). The equations obtained from the best fits are shown below.

Fig. 2. Cooling rate variation with diameter of the sample for (a) oil quenching and (b) water quenching.
Table 1. The statistics of the steels data used for neural network modeling.

Fig. 3. Variation of MSE and the average error E_y for the different properties with the number of neurons in the hidden layer, for networks with a single hidden layer and with two hidden layers.

For water quenching,

Y = 5.77 - 0.01X - 0.002X^2 + 2.19E-5 X^3 - 5.31E-7 X^4 + 1.4E-10 X^5

For oil quenching,

Y = 6.16 - 0.15X + 0.003X^2 - 2.72E-5 X^3 + 7.31E-8 X^4 - 9.33E-11 X^5

where X and Y are the diameter of the specimen and the cooling rate, respectively.

Based on the above equations, the cooling rate has been calculated for the different section sizes and used as one of the input parameters for NN training. After coding, the total number of data sets available for modeling is 140, as shown in the Appendix. Among the 140 data sets available, 112 data sets have been used for NN training. When training a model, the choice of input variables is of great importance. Also, when a combination of these variables is believed to be of particular importance, the model can be improved by adding the combination as an explicit variable. The model was trained on the ratio of Mn and S, where the concentrations are in wt.%. This is because sulphur reacts with manganese and forms MnS. To avoid biasing the model, the individual variables making up the Mn/S ratio are also included, so that the direct influence of each of them can also be detected.
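The mapping from section diameter to cooling rate, and the assembly of the 11 ANN inputs (eight composition terms, the explicit Mn/S ratio, the cooling rate and the tempering temperature), can be sketched as follows. The polynomial coefficients are transcribed from the reconstructed fits above and should be treated as approximate; the `ann_inputs` helper is an assumption about how the input vector was assembled, not code from the paper.

```python
# Fifth-order polynomial fits of cooling rate Y (deg C/s) versus section
# diameter X (mm); coefficients in ascending powers of X, approximate.
WATER = [5.77, -0.01, -0.002, 2.19e-5, -5.31e-7, 1.4e-10]
OIL = [6.16, -0.15, 0.003, -2.72e-5, 7.31e-8, -9.33e-11]

def cooling_rate(diameter_mm, coeffs):
    """Evaluate the cooling-rate polynomial with Horner's scheme."""
    rate = 0.0
    for c in reversed(coeffs):
        rate = rate * diameter_mm + c
    return rate

def ann_inputs(comp, diameter_mm, quench, tempering_c):
    """Build the 11-element input vector: C, Si, Mn, S, P, Ni, Cr, Mo,
    the derived Mn/S ratio, the cooling rate and the tempering
    temperature. `comp` is a dict of wt.% values."""
    coeffs = WATER if quench == "water" else OIL
    return [comp["C"], comp["Si"], comp["Mn"], comp["S"], comp["P"],
            comp["Ni"], comp["Cr"], comp["Mo"],
            comp["Mn"] / comp["S"],          # explicit combined variable
            cooling_rate(diameter_mm, coeffs),
            tempering_c]
```

In practice the inputs would additionally be normalised (e.g. to the 0-1 range) before being fed to the sigmoid network.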
The statistics of the steels data used for neural network modeling are presented in Table 1.

Table 3. Variation of the MSE of the trained data as a function of the number of iterations (η = 0.55 and α = 0.65; number of hidden neurons = 22 and number of hidden layers = 2).

Iterations    MSE
500           0.09590
1,000         0.07067
100,000       0.00237
180,000       0.00104
200,000       0.00102

Table 2. The variation of MSE as a function of the number of hidden neurons (η = 0.25, α = 0.9; number of iterations executed = 200,000), listing the MSE and E_y (%) for YS, UTS, EL, RA and IS for 8-30 neurons in each of the two hidden layers.

4. The analysis

4.1. The training behavior of the BPNN model

Results obtained from the modeling studies are presented and discussed in detail in this section. In this study, the optimal parameters of the network are determined based on the minimum mean sum square error (MSE) [3] and the average error in output prediction (E_y) of the trained data.
E_y(x) = (1/N) Σ_{i=1}^{N} |T_i(x) - O_i(x)|        (2)

where E_y(x) is the average error in prediction of the training data set for output parameter x, N is the number of data sets, T_i(x) is the targeted output and O_i(x) is the calculated output.

In applying the BPNN to the proposed estimation work, the following key design issues are discussed: the number of hidden layers and hidden neurons, the neural network parameters (learning rate and momentum rate) and the number of iterations.

4.1.1. Choice of the number of hidden layers and neurons in the hidden layer

The influence of the number of hidden layers and the number of neurons in each hidden layer on the convergence criterion has been studied extensively [16]. In order to decide the suitable number of hidden layers, two basic structures of the neural network are examined: one with a single hidden layer and the other with two hidden layers.

In the first case, the BPNN structure with one hidden layer, a learning rate of 0.25 and a momentum rate of 0.9 was trained with different numbers of hidden neurons, from 8 to 30. Eight hidden neurons in a single layer yield an MSE of 0.02045 after 200,000 iterations. Fig. 3 shows the variation of MSE and E_y for the different properties with the number of neurons in the hidden layer. With an increase in the number of neurons, the MSE value increases up to 10 hidden neurons and then gradually decreases with a further increase in hidden neurons. In order to examine the effect of the number of layers, the NN structure was also designed with two hidden layers, each containing 8-30 neurons, and the results, including different combinations of hidden neurons in the two layers, are shown in Table 2. Excellent convergence was observed with 22 hidden neurons in each layer, and an MSE of 0.00427 was obtained after 200,000 iterations with η = 0.25 and α = 0.9. From Fig. 3, it is clear that the convergence is better for two hidden layers than for a single hidden layer with different numbers of hidden neurons in the layer.
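The two selection criteria can be written down directly. This is an illustrative NumPy sketch (the paper's software was written in C), with the MSE taken over all outputs and patterns and E_y computed per output parameter as in Eq. (2):

```python
import numpy as np

def mse(targets, outputs):
    """Mean sum-squared error over all patterns and output units."""
    t, o = np.asarray(targets, float), np.asarray(outputs, float)
    return float(np.mean((t - o) ** 2))

def avg_prediction_error(targets, outputs):
    """E_y(x) of Eq. (2): the mean absolute error per output parameter,
    computed column-wise over the N training patterns."""
    t, o = np.asarray(targets, float), np.asarray(outputs, float)
    return np.mean(np.abs(t - o), axis=0)
```

A low MSE with a high E_y on one property is exactly the situation used below to reject the 30-neuron architecture in favor of 22 neurons per layer.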
Hence, 22 hidden neurons in each of two hidden layers have been selected for further training to optimise the other parameters. Even though the MSE is 0.00359 with 30 hidden neurons, which is lower than that with 22 hidden neurons, the average error in the output predictions is greater for all the mechanical properties. Hence, 22 hidden neurons in two layers were selected for the analysis. One more advantage of the 22-hidden-neuron selection is its lower complexity in comparison with a selection of 30 hidden neurons.

Fig. 4. Variation of (a) MSE and the average error in output prediction of (b) YS, (c) UTS, (d) EL, (e) RA and (f) IS of the training data as a function of the learning rate and momentum rate, in steps of 0.05.
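The selection procedure used throughout Section 4.1 (train candidate networks over a grid of settings and keep the one with the lowest MSE, checking E_y as a secondary criterion) can be sketched generically. `train_network` is a hypothetical placeholder for the actual BPNN training run, not a function from the paper:

```python
# Hypothetical grid scan over training settings, e.g. (eta, alpha) pairs
# or hidden-layer sizes. `train_network` is assumed to return the final
# training MSE for a given setting.
def best_setting(train_network, grid):
    best_score, best_params = None, None
    for params in grid:
        score = train_network(*params)
        if best_score is None or score < best_score:
            best_score, best_params = score, params
    return best_score, best_params
```

With `grid = [(e, a) for e in etas for a in alphas]` this mirrors how the combination η = 0.55, α = 0.65 was identified in Section 4.1.2.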
However, it was observed that with higher learning Fates, the convergence takes place faster than it daes at lower learn- ing rates, The fgure also indicates that MSE and Ey are not quite sensitive tothe momentum rate. The best combination is found to be when 7 is 0.55 and a is 0.65 (Fig. 4), For the above combination, the MSE of 0.001015 is obtained after 200,000 iterations. 413. Number of iterations ‘The number of iterations executed to obtain optimum values is an important parameter in the case of BPNN training. The MSE 1200 (ZZ @ o 1000 |e : seo io q & «0 é - : (in eeneaci) TESTES ES OURAUE No.of Samples 0 2a © © Ip 7 . 20 15 a * te 5 5 2 o! +234 100 10 Zan © soe 8 n o ae zo ai 20 Bab theme apn thre 2 al All All oper alla T2s45e7 eon TeesesToeswingewe No, of Samples No. of Samples Fig. 6 The comparison of actual (A) and predicted () values and the espctive percentage errs for 15 est samples of EN100 steel (2) and (b)¥,() an (4) ELand (e) and(DIs vysqure) agegess BL a ‘YS (MPa) \ EL 1900 NS Rey ov Materials Sconce and Ein rin 508 (2000) 92-105 1200. 100 le t UTS (wa) 1400 1200 100. 1000. 00. 200. 700. ) 236 30 40. 032 038 035 038 040 oa? O44 046 % Carbon 600. 032 034 038 ass 040 042 O48 086 % Carbon le, o 1300 1200 100 1000 200. 700 600 00. 400. 2. 20 032 Os 035 098 040 Oa? O44 O46 % Carbon Fi. 7 ec of atbon variation on mechanical ropesties of sample (a) (UTS, () EL and (4) RA 1400 032 054 035 038 O40 042 Oa O46 % Carbon @ uTs (mPa) +4300 1200 1100 700. cy om 08s Om % Silicon os 040 om 0% 080 085 oan % Siteon ° >. . 46 @ — ~~ om 08s 0m % Silicon 035 om 0s 080 ads ao % Silicon Fig 8 fect of sien variation on mechanical properties of sample 9:3) ¥. (DUS, EL and) RA 100 NS Reddy ta. MacralsScence and Enginering 508 (2009) 93-105 achieved from different iterations is shown in Table 3. It is evi- dent that the value of MSE gradually decreases during the progress of training, as expected. 
An error of 0.07067 after 1,000 iterations reduces to 0.00237 after 100,000 iterations and falls to 0.00104 after 180,000 iterations, and no change has been found with a further increase in the number of iterations. It has been observed that the MSE does not change significantly beyond 200,000 iterations, and hence further processing was stopped after the above-mentioned number of iterations. Finally, an architecture with 11 units in the input layer, 22 hidden neurons in each of two hidden layers and 5 units in the output layer (11-22-22-5), with a learning rate of 0.55 and a momentum rate of 0.65, is used for the prediction and analysis of the EN100 steels.

4.1.4. Range of weights for EN100 steels training

The sigmoid function is used as the activation function in the present network. From Fig. 5 it is clear that the distribution of the weights of the trained network, with the 11-22-22-5 architecture, takes a sigmoid shape. The initial values of the weights are randomly generated between -0.5 and 0.5, but as the training progresses the weights vary between a minimum value of -56.87 and a maximum value of 57.61. The magnitude of the variation of the weights indicates the complexity of the present system. A total of 885 weights are available ((11 + 1) × 22 + 23 × 22 + 23 × 5). On completion of the training phase, the mechanical properties for the test data are estimated simply by passing the input data through the forward path of the network using the updated weights.

4.2. Predictions of test data sets

The total number of data sets available for BPNN model training is 140. A randomly selected 112 data sets have been used for training, and the remaining 28 data sets are used for testing. The predictions for data sets 114, 115, 116, 118, 119, 123, 125, 127, 128, 131, 133, 136, 137, 138 and 140 (a total of 15 sets), shown in the Appendix, are presented in Fig. 6(a), (c) and (e).
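The 885-weight count quoted in Section 4.1.4 can be checked with a short helper (illustrative, not from the paper's C code): each fully connected layer contributes (fan-in + 1) × fan-out parameters, the +1 accounting for the bias unit.

```python
def n_weights(layer_sizes):
    """Total adjustable parameters (weights plus one bias per neuron)
    of a fully connected feed-forward network."""
    return sum((fan_in + 1) * fan_out
               for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]))
```

For the 11-22-22-5 architecture this gives (11 + 1) × 22 + 23 × 22 + 23 × 5 = 264 + 506 + 115 = 885, matching the figure in the text.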
The comparison between the actual and predicted values and the respective percentage errors for the 15 testing data sets is also shown in Fig. 6(b), (d) and (f). It has been observed that most (nearly 95%) of the outputs predicted by the model are within the 4% error band (indicated in Fig. 6(b), (d) and (f)). For the testing data, which has never been seen by the model, the NN gives a reasonably accurate prediction. In the case of sample 11 (data set 133 in the Appendix), it can be seen that there is little variation between the actual and predicted values, as shown in Fig. 6(e), but the percentage error is higher because the actual values are very small.

4.3. Hypothetical alloys

The developed model can be used in practice to manipulate the mechanical properties of EN100 steels by altering the chemical composition and heat treatment variables. The effects of all of these can be estimated quantitatively by using the BPNN model and examined to see whether they make any metallurgical sense. The combined effect of a single variable or of any two variables on the properties has also been studied. Here we present some hypothetical alloys obtained by varying the composition and heat treatment variables; their effect on the mechanical properties, namely (a) yield strength (YS), (b) ultimate tensile strength (UTS), (c) % elongation (EL), (d) % reduction in area (RA) and (e) impact strength (IS), for sample number 91 of the Appendix, is presented in Figs. 7-9. The effect of the combined variation of two elements on the properties is presented in Figs. 10 and 11. The composition of sample 91 is given in Table 4.

Fig. 9. Effect of Mn/S ratio variation on the mechanical properties of sample 91: (a) YS, (b) UTS, (c) EL and (d) RA.

4.3.1. Effect of carbon on properties

Carbon is an essential non-metallic element in iron for making steel. None of the other elements so dramatically alters the strength
Land (RA NS Rey Materials cence and Eger A508 (2009) 99-105 101 7, SSS os 07 0808 10 ta ee Mii cr or Ce RL) os 07 08 08 1014 Fig. 10, Combined fect of Nad Crvariton on mechanical properties for sample 91: (@}¥5.(6)UTS ana) EL and the hardiness as do small changes in the carbon content does. Carbon is the principal hardening element, influencing the level ‘of hardness or the strength attainable by quenching, The model predictions as indicated in Fi. 7 show thatthe YS and UTS increase ‘while the ELand RA decrease with increase in C content. As carbon, increases from 0.32% to 0.44%, (Fig. 7), there will be approximately 2 15increase in the formation of pearite accompanied by areduc- tion of ferrite by the same amount, which results in an increase in strength and a decrease in ductility. The increase in strength and the decrease in ductility with the inerease in carbon content are also due to the increase in the formation of various carbides with ‘the other alloying elements as the carbon percentage is increased from 03 to 0.44, The impact strength would also decrease, a the area under the stress-strain curve would decrease owing to the eduction in ductility. Thus, the predictions of Fig. 7 are clearly is in agreement with the expected experimental trend, 4.2. Bffect of silicon on properties Silicon is one of the principal deoxidizers used in steel making, and therefore the amount of silicon present is elated tothe type of| steel, Usually only small amounts (0.20%) are present in rolled stee! ‘when it is used as a deoxidizer, Silicon dissolves in iron and tends to strengthen itn the low alloy steels, Si is present in the range of, (02-0.4%, Fig 8 shows the variation of mechanical properties with silicon content in the range of 0.2-0.4%. The variation in mechan- ical properties as observed in Fig. 8 cannot be explained based on the effect ofSi alone. Itis possible that a number of other elements ‘present might be responsible for the behavior represented in Fig. 8. 
Fig. 11. Combined effect of cooling rate and tempering temperature variation on the mechanical properties of sample 91: (a) YS, (b) UTS and (c) EL.

Fig. 8 shows that a significant change occurs in the mechanical properties at around 0.3% Si, which needs to be explained based on metallurgical principles involving the complex combined effects of a number of elements. However, it is important to note that a significant increase in the UTS at around 0.3% Si brings a significant reduction in EL and RA at exactly the same composition. This suggests that the model is able to establish strong correlations between the individual output parameters, even though such information is not fed to the model a priori.

4.3.3. Effect of Mn/S ratio on properties

Manganese is a deoxidizer and degasifier which reacts favorably with sulphur to improve forgeability and surface quality, as it converts sulphur to manganese sulphide. Manganese increases tensile strength, hardness, hardenability and resistance to wear, and it increases the rate of carbon penetration during carburizing. There is a tendency nowadays to increase the manganese content and reduce the carbon content in order to achieve steels with an equal tensile strength but improved ductility. Fig. 9 shows the effect of Mn/S variation on the mechanical properties. As the Mn/S ratio increases, the strength increases with a corresponding reduction in ductility, which is clearly brought out by the model in Fig. 9.

4.3.4. Effect of nickel and chromium on properties

Nickel is used to stabilise alloy steels. Nickel and manganese exhibit very similar behavior, and both lower the eutectoid temperature. Nickel strengthens ferrite by solid solution but is a powerful graphitiser. Nickel increases the impact strength of engineering steels considerably. It increases the strength and hardness without sacrificing ductility and toughness.
Steels with 0.5% nickel are similar to carbon steel but are stronger because of the formation of finer pearlite and the presence of nickel in solution in the ferrite. Among all of the common alloying elements, chromium ranks near the top in promoting hardenability. It makes the steel apt for oil or air hardening, as it reduces the critical cooling rate required for the formation of martensite. Fig. 10(a)-(c) shows the combined effect of Cr and Ni on the YS, UTS and EL, respectively. The figure clearly points out that the YS and UTS decrease with an increase in Cr content at lower concentrations for a given Ni concentration, which can be attributed to ferrite stabilisation. However, at higher Cr content, the strength starts to increase again due to the formation of carbides. The percentage elongation shows exactly the opposite trend, as expected. Thus, the model is able to predict the combined effect of Ni and Cr quite successfully.

Table 5. Comparison of actual and predicted mechanical properties (YS, UTS, EL, RA and IS) for hypothetical alloys of sample 91, obtained by varying one input at a time (C, Si, Mn, P, Ni, Cr, Mo, Mn/S, cooling rate and tempering temperature).

4.3.5. Combined effect of cooling rate and tempering temperature on properties

The effect of the cooling rate on the properties is non-linear and complex, and it is difficult to explain. Fig. 11(a)-(c) shows the combined effect of the cooling rate and the tempering temperature on the YS, UTS and EL, respectively. An increase in the tempering temperature brings an initial increase in the strength in the temperature range of 450-550 °C, which could be attributed to secondary hardening.
A further increase in the tempering temperature leads to a decrease in the strength, as expected. The EL shows the reverse trend, as expected. It is also important to note that the variation in the mechanical properties is larger with the different heat treatment parameters (cooling rate and tempering temperature) than with the Cr and Ni variation. This clearly suggests that the mechanical properties are more sensitive to the heat treatment parameters than to the concentrations of alloying elements such as Cr and Ni, which is clearly brought out by the model.

Table 5 shows the actual and model-predicted mechanical properties for different hypothetical alloys based on sample 91. Though it is difficult to explain each and every behavior of the hypothetical predictions, it should be noted that the model predicted a sudden increase in ductility whenever there is a sudden decrease in the strength, and vice versa. It is very clear that the error in the model predictions is less than 4% in all cases. This shows the efficiency of the model in capturing the relationships between the input parameters and the output parameters, and it will greatly help in designing alloys to achieve certain target properties.

5. Conclusions

A comparison of the modeled and experimental results indicated that NN can very well be employed for the estimation and analysis of mechanical properties as a function of chemical composition and/or heat treatment for low alloy steels.

- Two hidden layers converged better than a single hidden layer. At a higher learning rate, the MSE and E_y significantly increased, as expected.
- Even though Rumelhart et al. [12] suggested that a learning rate η = 0.25 and a momentum rate α = 0.9 yield good results for most applications, the present work indicated that this is not always so and that the optimum values vary for different steels. This finding is in agreement with those of Satish and Zaengl [18] and Chakravorti and Mukherjee [19].
- The best combination of ANN parameters for the best results in each case studied was identified.
- The present model was able to map the relations between the output parameters, even though such relations were not explicitly fed to the model. For example, the relationship between ductility and strength was clearly brought out by the model (Figs. 7-11).
- Results from the current study demonstrated that the neural network model can be used to examine the effects of individual input variables on the output parameters (development of hypothetical alloys), which is incredibly difficult to do experimentally.

Acknowledgements

One of the authors (N.S. Reddy) acknowledges Dr. N.N. Acharya, Department of Metallurgical and Materials Engineering, Indian Institute of Technology, Kharagpur, for useful discussions on determining the cooling rate equations, and Yandra Kiran Kumar for helping to develop the GUI design for the ANN model.

Appendix A. Low alloy steels used in modeling: chemical composition, heat treatment parameters (CR: cooling rate in °C/s and TT: tempering temperature in °C) and mechanical properties.
[Appendix A data table: chemical compositions, cooling rates, tempering temperatures, and measured mechanical properties of the low alloy steel samples used in modeling; the numerical entries are not legible in this copy.]

References

[1] N.S. Reddy et al., Proceedings of the 9th International Conference on Neural Information Processing (ICONIP'02), 2002, p. 820.
[2] J.M. Zurada, Introduction to Artificial Neural Systems, PWS Publishing Company, Boston, 1992.
[3] R.P. Lippmann, IEEE ASSP Mag. 4 (1987) 4.
[4] K. Hornik, M. Stinchcombe, H. White, Neural Netw. 2 (1989) 359.
[5] J.E. Dayhoff, Neural Network Architectures: An Introduction, VNR Press, New York, 1990.
[6] N.S. Reddy, Study of Some Complex Metallurgical Systems by Computational Intelligence Techniques, Ph.D. Thesis, Department of Materials Science and Engineering, Indian Institute of Technology, Kharagpur, India, 2008.
[7] N.S. Reddy, C.S. Lee, J.H. Kim, S.L. Semiatin, Mater. Sci. Eng. A 434 (2006) 218.
[8] N.S. Reddy, A.K. Prasada Rao, M. Chakraborty, B.S. Murty, Mater. Sci. Eng. A 391 (2005) 131.
[9] S.B. Singh, H.K.D.H. Bhadeshia, Mater. Sci. Eng. A 245 (1998) 72.
[10] H.K.D.H. Bhadeshia, ISIJ Int. 39 (1999) 966.
[11] S. Malinov, W. Sha, Comput. Mater. Sci. 28 (2003) 179.
[12] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Nature 323 (1986) 533.
[13] M.H. Hassoun, Fundamentals of Artificial Neural Networks, MIT Press, Cambridge, Massachusetts, 1995.
[14] J. Woolman, R.A. Mottram, The Mechanical and Physical Properties of the British Standard En Steels (B.S. 970-1955), Pergamon Press, Oxford, London, 1966.
[15] Properties and Selection: Irons, Steels, and High-Performance Alloys, ASM Handbook, ASM International, Materials Park, OH, USA, 1990.
[16] K. Swingler, Applying Neural Networks: A Practical Guide, Morgan Kaufmann Publishers, San Francisco, California, 1996.
[17] W.S. Sarle, comp.ai.neural-nets FAQ, SAS Institute Inc., Cary, NC 27513, USA, 2004.
[18] L. Satish, W.S. Zaengl, IEEE Trans. Dielectr. Electr. Insul. 1 (1994) 265.
[19] S. Chakravorti, P.K. Mukherjee, IEEE Trans. Dielectr. Electr. Insul. 1 (1994) 254.
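As a supplement to the conclusions on training hyperparameters: the learning rate η and momentum rate α discussed there enter the weight update rule of backpropagation as Δw(t) = −η∇E + αΔw(t−1) [12]. The sketch below illustrates this update on a toy quadratic error surface; the function names and the quadratic objective are illustrative assumptions, not the paper's actual training code, and η = 0.25, α = 0.8 are the default values of Rumelhart et al. discussed in the conclusions, not the optimized values found in this work.

```python
import numpy as np

def train_momentum(grad_fn, w0, eta=0.25, alpha=0.8, steps=100):
    """Gradient descent with momentum: dw(t) = -eta*grad + alpha*dw(t-1).

    eta is the learning rate, alpha the momentum rate, as in [12].
    """
    w = np.asarray(w0, dtype=float)
    dw = np.zeros_like(w)  # previous weight change, initially zero
    for _ in range(steps):
        dw = -eta * grad_fn(w) + alpha * dw  # momentum-smoothed step
        w = w + dw
    return w

# Toy quadratic error surface E(w) = 0.5 * ||w - target||^2,
# whose gradient is simply (w - target).
target = np.array([1.0, -2.0])
grad = lambda w: w - target
w_final = train_momentum(grad, np.zeros(2))
```

On this surface the iterates spiral into the minimum; raising η too far makes the steps overshoot and the error grow, which is the divergence behavior the conclusions note at higher learning rates.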
