
Computational Materials Science 159 (2019) 349–356


Modelling of hysteresis loop and magnetic behaviour of Fe-48Ni alloys using
artificial neural network coupled with genetic algorithm

Parastoo Vahdati Yekta a, Farzad Jaafari Honar b, Mohammad Naghiyan Fesharaki a,*

a Department of Materials Engineering, Malek-Ashtar University of Technology (MUT), Iran
b Department of Electrical Engineering and Physics, Bu-Ali Sina University, Hamedan, Iran

ARTICLE INFO

Keywords:
Fe-48Ni permalloy
Hysteresis loop
Magnetic properties
Artificial neural network
Genetic algorithm

ABSTRACT

In this work, a hybrid artificial neural network and genetic algorithm is proposed for the prediction of the hysteresis loop and magnetic properties of Fe-48Ni permalloy, an important and widely used soft magnetic material. The thickness of the samples, the annealing temperature, the holding time and the field strength are taken as the network inputs, and the magnetization as the output. The experiments were performed at thicknesses of 0.4, 0.8, 1.2 and 1.6 mm, corresponding to 80%, 60%, 40% and 20% rolled samples, annealing temperatures of 600, 700, 800, 900, 1000 and 1100 °C, holding times of 5, 10, 20 and 60 min, and field strengths between −10,000 and +10,000 Oe. Using an artificial neural network coupled with a genetic algorithm, the Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) values were found to be 2.69 and 3.47 emu/g for the training set and 3.95 and 4.77 emu/g for the test set. The main conclusion of this research is that ANNs, as powerful computational techniques for modeling nonlinear systems, can be reliably used to predict the hysteresis loop and magnetic properties of Fe-48Ni permalloys from the input variables.

1. Introduction

Permalloy is an attractive soft magnetic alloy consisting of iron and nickel that can additionally be doped with several other elements such as Cu, Mo, Cr and Mn. The expanding application of permalloys in a wide range of science and engineering is due to their excellent magnetic properties, such as high saturation magnetization, low remanence and low coercivity [1–3]. Over the past decades, a variety of experimental and computational studies have been carried out on permalloys to enhance their properties and characteristics [4]. The focus of this research is on the magnetic properties of Fe-48Ni. We employ both experimental and theoretical approaches to determine the saturation magnetization (Ms), remanence (Mr) and coercivity (Hc) of Fe-48Ni alloys as three important magnetic properties. The relationship between the magnetization (M) and the field strength (H) in magnetic materials is not a linear one. In detail, if a magnetic alloy is demagnetized and its magnetization is plotted as a function of the applied field strength, the curve does not exactly retrace the previous magnetization curve: it initially increases and then reaches a value known as the saturation magnetization (Ms). Next, if the applied field strength is reduced, the magnetization follows a different curve. The magnetization value at zero magnetic field is called the remanence (Mr). Finally, if the H-M relationship is plotted, the magnetic hysteresis loop is generated. In a hysteresis loop, the coercivity (Hc) is the offset from the origin along the H axis [5,6]. The hysteresis loop can be obtained from a Vibrating Sample Magnetometer (VSM) test.
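As a purely illustrative aid (not part of the original work), the quantities defined above can be read off a measured M-H branch roughly as in the following MATLAB sketch, where H and M are assumed to be column vectors of field (Oe) and magnetization (emu/g) along one ascending branch of the loop:

% Illustrative sketch: estimating Ms, Mr and Hc from one ascending branch.
Ms = max(abs(M));                                   % saturation magnetization (plateau value)
Mr = abs(interp1(H, M, 0));                         % remanence: |M| at H = 0
k  = find(M(1:end-1).*M(2:end) <= 0, 1);            % index where M changes sign
Hc = abs(H(k) - M(k)*(H(k+1)-H(k))/(M(k+1)-M(k)));  % linear interpolation to M = 0 gives |H| there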
As a nonlinear system with a high level of complexity, the magnetic behaviour of permalloys is difficult to describe with mathematical models. Predicting the properties of a soft magnetic material from models of the underlying physical and chemical processes requires deep knowledge of materials science as well as various experimental tests that are expensive and time-consuming. Thus, it is necessary to develop indirect methods for estimating magnetic properties without performing expensive experiments. Artificial Neural Networks (ANNs) are intelligent, bio-inspired computing networks that have been applied to recognize highly complicated relations between variables in nonlinear and complex phenomena. Over the decades, considerable attention has been devoted to ANN applications in a wide variety of areas [7–14]. In materials science and engineering in particular, ANNs have been employed in a variety of applications, including modeling of the thermal deformation behavior of a Ni-based powder metallurgy superalloy [15], designing dual-phase steels with improved performance [16],


* Corresponding author.
E-mail address: Naghiyan.mohammad@gmail.com (M.N. Fesharaki).

https://doi.org/10.1016/j.commatsci.2018.12.025
Received 7 November 2018; Received in revised form 10 December 2018; Accepted 13 December 2018
Available online 27 December 2018
0927-0256/ © 2018 Elsevier B.V. All rights reserved.

Fig. 1. Block diagram of a multi-layer perceptron artificial neural network.

predicting the martensite fraction of microalloyed steel [17], computer-aided modeling for predicting the layer thickness of a duplex-treated ceramic coating on tool steels [18], improving the level of automation of Rietveld refinement [19], predicting the ultimate grain size of aluminum sheets undergone constrained groove pressing [20], establishing structure-property linkages [21], predicting the toughness of high strength low alloy steels [22], accelerating high-throughput searches for new alloys [23], finite element simulation of carbide precipitation in austenitic stainless steel 304 [24], etc.

Based on the strong ability of ANNs to model complex and nonlinear processes, the novelty of this work is the combination of an ANN and a GA for the prediction of the Fe-48Ni hysteresis loop and magnetic properties. Using the ANN and GA, there is no need to carry out expensive and time-consuming experiments such as the VSM test to obtain the hysteresis loop of Fe-48Ni permalloys. To our knowledge, no such approach has been reported so far.

The rest of the paper is organized as follows. In the next section (Section 2) we introduce the materials, production method, equipment and the conducted experiments. Next, we provide some basic information about ANNs as intelligent machines for modeling nonlinear and complex processes and about the GA as a powerful evolutionary algorithm; the network topology is also described in detail. In Section 3 we employ the ANN to predict the hysteresis loop and compare the results with experimental data using two statistical criteria, the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE). Finally, we conclude in Section 4 with a summary of our approach.

2. Materials and methods

2.1. Experimental procedure

Ingots of 48% permalloy were melted in a vacuum induction furnace. After hot forging with 50% reduction to bars of 9 × 8 mm² in cross section, the Fe-48%Ni permalloy was cold rolled to final thicknesses of 1.6, 1.2, 0.8 and 0.4 mm, corresponding to 20%, 40%, 60% and 80% rolled samples, respectively. Moreover, in order to investigate the magnetic properties of the rolled sheets, samples were annealed for 20 min at 600–1100 °C, and the 80% rolled samples were also annealed at 900 °C for 5, 10, 20 and 60 min. The annealing was performed under an argon atmosphere. Finally, to obtain the hysteresis loops, magnetic tests were performed using a VSM machine. From the VSM tests, the hysteresis loops of 25 different samples, comprising 3650 independent data points, were obtained to be employed in the neural network.

2.2. Theoretical procedure

In this paper, an Artificial Neural Network (ANN) trained by a Genetic Algorithm (GA) is employed as the theoretical approach for predicting the hysteresis loop. An ANN is a mathematical function that tries to simulate the behavior of biological neural networks. ANNs consist of simple processors (nodes) connected to and interacting with each other; such individually simple processors are together capable of performing complex tasks. From a data science point of view, ANNs are nonlinear statistical data structures organized as modeling tools. They can be employed to simulate complex and highly nonlinear relationships between inputs and outputs that other analytical methods cannot easily represent. In general, ANNs may consist of one or multiple layers. Taking three layers as an example, an input layer sends data through the synapses to a hidden layer of neurons and then, through more synapses, to the output layer; more capable networks, however, have more than three layers. An ANN can be implemented both in software and on various hardware platforms such as a Digital Signal Processor (DSP). This field, as an important branch of artificial intelligence, can also be combined with other intelligent methods such as fuzzy logic and evolutionary algorithms. Over the decades, various types of ANNs have been developed to solve different kinds of problems, including the feed-forward multi-layer perceptron [25], convolutional networks [26] and Radial Basis Function (RBF) networks [27]. The choice of the type of neural network depends entirely on the area of use and the nature of the problem to be solved. The general operating mechanism of feed-forward perceptron networks can be explained with the block diagram shown in Fig. 1. As illustrated in Fig. 1, a feed-forward perceptron network can be viewed as a system consisting of an adder block, a processor block and another adder block. The processor block is a mathematical function known as the activation function. Initially, the network receives the first input as a vector consisting of a number of entries. Each entry is multiplied by a constant number called a weight. The weighted entries are then summed and passed to the activation function to calculate the function output. This value, which is called the network output, is sent to the next adder block, where the real (target) output is subtracted from it; the difference between these values is the first-stage error. At the end of the first stage, the network adjusts the weights in order to minimize the first-stage error and repeats the above process on the next input using the adjusted weights. At the end of the next stage, the network readjusts the weights so that the first- and second-stage errors are minimized. This process is repeated for each stage until all training inputs have been used. During the last stage, the weights are adjusted such that the errors of all stages are minimized.
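To make the forward pass described above concrete, the following minimal MATLAB sketch (illustrative only, with hypothetical weights, biases and target value; the inputs are assumed to be already scaled to the (0, 1) interval) evaluates a 4-10-1 perceptron with hyperbolic tangent activations for a single input vector:

% Illustrative forward pass of a 4-10-1 perceptron (hypothetical numbers).
x  = [0.2; 0.75; 0.3; 0.9];                 % scaled thickness, temperature, holding time, field
W1 = 0.1*randn(10, 4);   b1 = zeros(10, 1); % hidden layer: 10 neurons
W2 = 0.1*randn(1, 10);   b2 = 0;            % output layer: 1 neuron (magnetization)
h  = tanh(W1*x + b1);                       % weighted sum followed by tansig/tanh activation
y  = tanh(W2*h + b2);                       % network output
e  = 0.8 - y;                               % stage error: target value minus network output

Training then amounts to adjusting W1, b1, W2 and b2 so that such errors are minimized over all training inputs, which is the optimization problem discussed next.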


After the final weight adjustment, the network is ready to receive new inputs and compute the output based on the adjusted weights. This training process is an optimization problem which can be solved by several methods, such as the least squares method described as follows.

Let there be pairs of independent variables x_i and dependent variables y_i representing n experimental data points. Also, let F(x_i, β) be the model function, where β is a matrix of adjustable weights. With these definitions, the function S is defined as Eq. (1):

S(x_i, \beta) = \sum_{i=1}^{n} (y_i - F(x_i, \beta))^2    (1)

The goal is to choose the β matrix so as to minimize the function S. There are several approaches to the least squares problem, such as the genetic algorithm, simulated annealing and Monte-Carlo methods. In order to solve the least squares optimization problem using the GA, it is first necessary to provide some initial values for the β matrix. After generating enough β matrices, they should be sorted according to best fit; thus, a new matrix is created whose first row identifies the best solution. The resulting matrix is known as the sorted first generation. In order to reach the optimal β matrix, it is necessary to apply the selection, crossover and mutation operators, the three main genetic operators, to the sorted first generation. Selection means retaining the members that better satisfy the conditions; in practical terms, selection operators allow the better solutions to be passed on to the next generation. The best solutions are determined according to the problem's fitness function. For the mathematical modeling of selection, various methods have been proposed, such as tournament selection and fitness proportionate selection. Crossover is an operator employed to combine the solutions (chromosome information) of two parents to produce new offspring; it is similar to the crossover that occurs during sexual reproduction in biology. To model the crossover process in the GA, we can use Eq. (2):

X = aY + (1 - a)Z    (2)

where X is the offspring produced by the combination of parents Y and Z. Mutation is an operator employed to maintain genetic variety from one generation to the next and is similar to biological mutation; in the mutation operation, the gene values of selected chromosomes are altered from their initial values. Finally, by applying these three operators, the second generation is created, and subsequent generations are created by repeating the above process. Ultimately, the fittest member of the last generation is the optimal solution to the optimization problem.
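As a hedged illustration of the procedure just described (a sketch under simplifying assumptions, not the authors' actual optimizer), the MATLAB fragment below evolves a population of parameter vectors toward the minimum of S in Eq. (1) using elitist selection, the arithmetic crossover of Eq. (2) and Gaussian mutation; the model F and the data x, y are toy placeholders:

% Minimal GA for least-squares fitting (illustrative; toy model and data).
x = linspace(0, 1, 50)';  y = 2*x + 0.5 + 0.05*randn(50, 1);   % placeholder data
F = @(x, b) b(1)*x + b(2);                                     % placeholder model F(x, beta)
S = @(b) sum((y - F(x, b)).^2);                                % Eq. (1): quantity to minimize

PopSize = 150;  MaxGen = 500;  nElite = 23;  nCross = 74;      % roughly the 15%/50% split
pop = randn(PopSize, 2);                                       % initial beta values (one per row)
for gen = 1:MaxGen
    fit = arrayfun(@(k) S(pop(k, :)), (1:PopSize)');
    [~, order] = sort(fit);  pop = pop(order, :);              % sorted generation, best first
    newPop = pop(1:nElite, :);                                 % selection: keep the elite
    for k = 1:nCross                                           % crossover, Eq. (2)
        p = pop(randi(nElite), :);  q = pop(randi(nElite), :);
        a = rand;
        newPop(end+1, :) = a*p + (1 - a)*q;
    end
    while size(newPop, 1) < PopSize                            % mutation: perturb gene values
        newPop(end+1, :) = pop(randi(nElite), :) + 0.1*randn(1, 2);
    end
    pop = newPop;
end
fit = arrayfun(@(k) S(pop(k, :)), (1:PopSize)');
[~, best] = min(fit);
bestBeta = pop(best, :);                                       % fittest member of the last generation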
3. Results and discussion

Our first attempt was to use a simple perceptron with the four parameters as inputs and the magnetization as output. The results were not satisfactory because of the simple configuration of the network. Therefore, we created a Multi-Layer Perceptron. The parts of the MATLAB code associated with the GA and ANN parameters are provided below so that readers are able to reproduce our results.

Genetic Algorithm Parameters:

PopSize = 150;
MaxGenerations = 500;
RecomPercent = 15/100;
CrossPercent = 50/100;
MutatPercent = 1 - RecomPercent - CrossPercent;
RecomNum = round(PopSize*RecomPercent);
CrossNum = round(PopSize*CrossPercent);
if mod(CrossNum,2)~=0
    CrossNum = CrossNum - 1;
end
MutatNum = PopSize - RecomNum - CrossNum;

Artificial Neural Network Parameters:

pr = [0 1];
PR = repmat(pr,4,1);
net = newff(PR,[10 1],{'tansig' 'tansig'});
net.trainParam.epochs = 1000;
net.trainParam.goal = 0.0007;
net = train_using_GA(net,Xtrain',Ytrain');

As can be seen in the MATLAB code, 150 chromosomes were generated as the population size, and 50%, 35% and 15% of the chromosomes were assigned to crossover, mutation and natural selection, respectively. The number of inputs was 4, the number of hidden layers was 1, and the newff syntax was used for the Multi-Layer Perceptron with the hyperbolic tangent activation function. Before training, a portion of the experimental data should be held out for network validation, testing and performance appraisal. In this case, the division was based on randomly choosing samples. Thus, all the data (3650 records) were divided into two separate sets: one for training the ANN, comprising 2920 independent data points (80% of all data) obtained from 20 samples, and the rest, comprising 730 independent data points (20% of all data) from 5 samples, for validation and testing. The data used in the test process had never been seen by the ANN before, which gave us a good opportunity to test the power of our network in predicting the Fe-48Ni hysteresis loop. In detail, the four ANN inputs were the thickness of the samples, the annealing temperature, the holding time and the magnetic field, and the target was the magnetization.
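A minimal sketch of the sample-wise random split described above (illustrative only; the variable names X, Y and sampleID are hypothetical):

% Illustrative 80/20 split by sample (not the authors' script). X is the
% 3650-by-4 input matrix, Y the magnetization vector, and sampleID a
% 3650-by-1 vector giving the sample (1..25) each record belongs to.
ids       = unique(sampleID);
ids       = ids(randperm(numel(ids)));       % shuffle the 25 samples
trainIDs  = ids(1:20);                       % 20 samples -> the 2920 training records (80%)
trainMask = ismember(sampleID, trainIDs);
Xtrain = X(trainMask, :);   Ytrain = Y(trainMask);
Xtest  = X(~trainMask, :);  Ytest  = Y(~trainMask);   % remaining 5 samples (20%)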
In constructing an ANN, the choice of inherent parameters such as the numbers of neurons and hidden layers is one of the most important keys to success. Using too few neurons may cause a problem called underfitting, which occurs when a network cannot adequately capture the underlying structure of the data. Conversely, using too many may cause a problem called overfitting, which happens when a model fits the training data exceptionally well but fails to fit the test data. In this case, we initially analyzed the effect of the number of neurons (0 to 100) and hidden layers (1 to 5) on the network performance. Accordingly, to avoid underfitting and overfitting of the network and thus reach the best performance, one input layer with four neurons, one hidden layer with ten neurons and one output layer with one neuron was found to be the appropriate structure for precise prediction of the target. The graphical topology of the ANN is shown in Fig. 2. In order to reach the best solutions, 500 generations were created using the GA as the optimization algorithm. Also, all data were normalized to the (0, 1) interval before training in order to improve the network performance; after training, the resulting data should be denormalized again. Let X, H and L be the data, the upper limit and the lower limit of the normalization interval, respectively. The normalized (N) and denormalized (DeN) data are given by:

N = L + \frac{X - X_{min}}{X_{max} - X_{min}} (H - L)    (3)

DeN = X_{min} + \frac{X_N - L}{H - L} (X_{max} - X_{min})    (4)

The network was modeled, trained and simulated using the MATLAB software and executed on a computer with a 2.0-GHz CPU, 4 GB of memory and the Windows 10 operating system.
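Eqs. (3) and (4) above can be sketched in MATLAB as follows (an illustrative helper, not part of the paper's code), with H = 1 and L = 0 for the (0, 1) interval used here; implicit expansion handles the column-wise limits:

% Illustrative normalization/denormalization following Eqs. (3) and (4).
L  = 0;  H = 1;                                 % normalization interval (0, 1)
mn = min(X);  mx = max(X);                      % column-wise limits of the raw data X
N   = L + (X - mn) ./ (mx - mn) * (H - L);      % Eq. (3): map data into [L, H]
DeN = mn + (N - L) / (H - L) .* (mx - mn);      % Eq. (4): map back to the original scale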
The comparison was made in terms of two statistical criteria, the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE), as defined by Eqs. (5) and (6):

MAE = \frac{1}{n} \sum_{i=1}^{n} |X_{the} - X_{exp}|    (5)

RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (X_{the} - X_{exp})^2}    (6)

where n is the number of data points and X_the and X_exp are the theoretical and experimental data, respectively. More specifically, MAE and RMSE are statistical measures that describe the average magnitude of the errors in a set of predictions.


Fig. 2. The ANN topology (4-10-1) used in the present study.

Fig. 3. Hysteresis loop of samples in training region. (a) 80% cold rolled sample annealed at 1000 °C for 20 min. (b) 60% cold rolled sample annealed at 900 °C for
20 min. (c) 40% cold rolled sample annealed at 600 °C for 20 min. (d) 20% cold rolled sample annealed at 700 °C for 20 min.


Fig. 4. Hysteresis loop of samples in test region. (a) 80% cold rolled sample annealed at 600 °C for 20 min. (b) 80% cold rolled sample annealed at 1100 °C for 20 min.
(c) 40% cold rolled sample annealed at 1100 °C for 20 min. (d) 20% cold rolled sample annealed at 1000 °C for 20 min.

Table 1
Saturation magnetization (Ms), remanence (Mr) and coercivity (Hc) of training samples in the case of experimental data (EXP) and ANN based calculation.
Sample    Ms EXP (emu/g)    Ms ANN (emu/g)    Mr EXP (emu/g)    Mr ANN (emu/g)    Hc EXP (Oe)    Hc ANN (Oe)

0.4 mm (cold rolled) 152 150 1.9 1.8 11 9


0.4 mm at 700 °C for 20 min 159 159 1.2 1.3 4.5 5
0.4 mm at 900 °C for 5 min 159 159 0.4 0.5 2.8 2.8
0.4 mm at 900 °C for 10 min 146 145 0.9 0.9 3.3 3
0.4 mm at 900 °C for 60 min 162 165 1 1 3.5 3.2
0.4 mm at 1000 °C for 20 min 164 161 1.2 1.1 4.2 4.1
1.2 mm (cold rolled) 158 158 0.9 0.8 7 6.1
1.2 mm at 600 °C for 20 min 158 156 0.5 0.6 5 6.1
1.2 mm at 700 °C for 20 min 160 163 0.4 0.5 3.4 3.6
1.2 mm at 800 °C for 20 min 158 157 0.3 0.5 2.5 3.6
1.2 mm at 900 °C for 20 min 161 165 0.3 0.4 2.5 3.6
1.2 mm at 1000 °C for 20 min 157 154 0.6 0.8 3.5 3.6
1.6 mm (cold rolled) 158 160 0.8 0.9 6 5.5
1.6 mm at 600 °C for 20 min 157 157 0.5 0.4 5 5.2
1.6 mm at 700 °C for 20 min 154 156 0.4 0.3 3.8 4
1.6 mm at 800 °C for 20 min 160 162 0.4 0.5 3 3.2
1.6 mm at 900 °C for 20 min 161 163 0.4 0.5 2.5 2.7
1.6 mm at 1000 °C for 20 min 155 155 0.3 0.5 2.9 2.5
0.8 mm (cold rolled) 156 158 1.3 1.5 10 10.7
0.8 mm at 900 °C for 20 min 164 160 0.8 0.5 4 9


Table 2
Saturation magnetization (Ms), remanence (Mr) and coercivity (Hc) of test samples in the case of experimental data (EXP) and ANN based calculation.
Sample    Ms EXP (emu/g)    Ms ANN (emu/g)    Mr EXP (emu/g)    Mr ANN (emu/g)    Hc EXP (Oe)    Hc ANN (Oe)

1.6 mm at 1100 °C for 20 min 160 160 0.3 0.4 2.8 2.6
1.2 mm at 1100 °C for 20 min 162 160 0.3 0.4 2.3 2.7
0.4 mm at 1100 °C for 20 min 152 152 1.6 1.5 4.6 4.3
0.4 mm at 600 °C for 20 min 164 165 1.7 1.7 6 7
0.4 mm at 800 °C for 20 min 162 160 1.3 1.3 4.4 4.7

MAE is a linear criterion, which implies that all the individual differences are weighted equally in the average, while RMSE is a quadratic (nonlinear) scoring rule that also describes the average magnitude of the error; since the errors are squared before they are averaged, RMSE gives relatively high weight to large errors.
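Both criteria are straightforward to compute; a small illustrative MATLAB helper (assuming Xthe and Xexp are vectors of predicted and measured magnetization values) is:

% Illustrative computation of Eqs. (5) and (6).
d    = Xthe(:) - Xexp(:);          % theoretical minus experimental values
MAE  = mean(abs(d));               % Eq. (5): mean absolute error
RMSE = sqrt(mean(d.^2));           % Eq. (6): root mean square error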
Considering the training and test datasets, in this work the MAE and RMSE values were found to be 2.69 and 3.47 emu/g for the training dataset and 3.95 and 4.77 emu/g for the test set. Given that the VSM test itself has a similar level of error, we can conclude that the hybrid of ANN and GA has been able to predict the hysteresis loop of Fe-48Ni permalloy with very low discrepancy. Fig. 3 illustrates the experimental and predicted hysteresis loops of 20%, 40%, 60% and 80% cold rolled samples annealed at different temperatures for 20 min. Since the hysteresis loops of the other samples convey the same basic message, only the loops of some of the training samples are shown. As can be seen in Fig. 3, the prediction of the ANN is in excellent agreement with experiment. However, the predictive ability of the ANN in the test region is more important than in the training part. Fig. 4, which relates to 20%, 40% and 80% cold rolled samples annealed at different temperatures for 20 min, shows the excellent ability of the ANN in the test region. This excellent agreement enables us to determine the magnetic properties of the Fe-48Ni alloy theoretically, without the need to perform new experiments. The experimental measurements and theoretical predictions of saturation magnetization, remanence and coercivity for the training and test datasets, obtained from the related hysteresis loops, are shown in Tables 1 and 2. The results demonstrate the high capability of ANNs in the prediction of saturation magnetization, remanence and coercivity. Furthermore, it is necessary to perform some statistical analysis in order to ensure that our model is as accurate as possible for all samples. Thus, to obtain a comprehensive picture of the error values in both the training and test regions, we evaluated all the inputs using some frequently used statistical criteria, namely the Average, Variance, Range of Variation and Standard Deviation. Tables 3 and 4 present the results of the statistical analysis of the 20 training samples and the 5 test samples, respectively. As can be seen in Tables 3 and 4, the GA-based ANN has been successful in predicting the hysteresis loop of each sample from a statistical point of view. Considering the theoretical predictions obtained from the ANN, we find that the saturation magnetization, remanence and coercivity clearly change after heat treatment. Since it is not possible to draw a 4-D graph, one input must be chosen as a constant parameter; for this purpose, the holding time is fixed at 20 min. As can be seen in Figs. 5–7, in each group of constant temperature, increasing the thickness of the samples slightly decreases the saturation magnetization, while the remanence does not show a monotonic trend and the coercivity shows a typical decrease. Also, in each group of constant sample thickness, increasing the temperature causes the saturation magnetization to fluctuate, while the remanence and coercivity clearly decrease.

4. Conclusion

In this work, a hybrid Artificial Neural Network (ANN) and Genetic Algorithm (GA) was employed to predict the hysteresis loop and magnetic properties of Fe-48Ni permalloy. In this context, the saturation magnetization, remanence and coercivity were investigated as three important magnetic properties of Fe-48Ni alloys. In order to have a rich training dataset, experiments were carried out at thicknesses of 0.4, 0.8, 1.2 and 1.6 mm, corresponding to 80%, 60%, 40% and 20% rolled samples, annealing temperatures of 600, 700, 800, 900, 1000 and 1100 °C, holding times of 5, 10, 20 and 60 min, and field strengths between −10,000 and 10,000 Oe. Under these circumstances, 3650 independent data points were obtained from the experiments. Next, for the theoretical evaluation of the Fe-48Ni magnetic properties, a multi-layer perceptron ANN trained by the GA was employed to calculate the magnetic properties as a function of the thickness of the samples, the annealing temperature, the holding time

Table 3
Statistical analysis of the training samples based on Average, Variance, Range of Variation and Standard Deviation. All values are in emu/g.

Sample Average Variance Range of variation Standard deviation

0.4 mm (cold rolled) 2.8 13.7 1.2 3.7


0.4 mm at 700 °C for 20 min 3 12.2 1.1 3.5
0.4 mm at 900 °C for 5 min 2.9 13 2.1 3.6
0.4 mm at 900 °C for 10 min 3.1 8.4 2.2 2.9
0.4 mm at 900 °C for 60 min 3.1 14.4 1.1 3.8
0.4 mm at 1000 °C for 20 min 3.5 9.6 1.2 3.1
1.2 mm (cold rolled) 2.9 10.2 1.2 3.2
1.2 mm at 600 °C for 20 min 3 9.6 0.9 3.1
1.2 mm at 700 °C for 20 min 2.9 12.2 1 3.5
1.2 mm at 800 °C for 20 min 3.2 15.2 1 3.9
1.2 mm at 900 °C for 20 min 2.7 9.6 1.1 3.1
1.2 mm at 1000 °C for 20 min 3.2 9.6 0.9 3.1
1.6 mm (cold rolled) 3 10.2 1.2 3.2
1.6 mm at 600 °C for 20 min 3 13 1.1 3.6
1.6 mm at 700 °C for 20 min 3.1 12.2 1 3.5
1.6 mm at 800 °C for 20 min 2.7 11.6 1.2 3.4
1.6 mm at 900 °C for 20 min 3.2 13 1 3.6
1.6 mm at 1000 °C for 20 min 2.9 12.2 1.1 3.5
0.8 mm (cold rolled) 3 9.6 1.2 3.1
0.8 mm at 900 °C for 20 min 3.1 9 1.1 3


Table 4
Statistical analysis of the test samples based on Average, Variance, Range of Variation and Standard Deviation. All values are in emu/g.

Sample Average Variance Range of variation Standard deviation

1.6 mm at 1100 °C for 20 min 3.5 16.8 1.1 4.1


1.2 mm at 1100 °C for 20 min 3.4 13 1.2 3.6
0.4 mm at 1100 °C for 20 min 3.5 15.2 1.1 3.9
0.4 mm at 600 °C for 20 min 3.5 10.2 1.1 3.2
0.4 mm at 800 °C for 20 min 2.9 15.2 1.3 3.9

Fig. 5. The effect of annealing temperature and thickness of samples on saturation magnetization at a constant holding time of 20 min.

Fig. 6. The effect of annealing temperature and thickness of samples on remanence at a constant holding time of 20 min.

Fig. 7. The effect of annealing temperature and thickness of samples on coercivity at a constant holding time of 20 min.

and the field strength. All the data were divided into two datasets: (i) a training dataset comprising 2920 independent data points (80% of all data) obtained from 20 samples, and (ii) a test dataset comprising 730 independent data points (20% of all data) from 5 samples. In each dataset, the predictive ability of the ANN was compared with experiment. For this comparison, the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) were considered as two frequently used statistical criteria. In this research, the MAE and RMSE values were found to be 2.69 and 3.47 emu/g for the training dataset and 3.95 and 4.77 emu/g for the test dataset, which shows that the predicted hysteresis curves and magnetic properties were in good agreement with experiment with respect to both criteria.

CRediT authorship contribution statement

Parastoo Vahdati Yekta: Data curation, Investigation, Writing - review & editing, Project administration, Resources. Farzad Jaafari Honar: Conceptualization, Formal analysis, Methodology, Software, Writing - original draft. Mohammad Naghiyan Fesharaki: Validation, Visualization.

References

[1] N. Djuzhev, A. Iurov, N. Mazurkin, M. Chinenkov, A. Trifonov, M. Pushkina, Effects of average grain size on the magnetic properties of permalloy films, EPJ Web of Conferences, vol. 185, 2018.
[2] S. Raviolo, F. Tejo, N. Bajales, J. Escrig, Angular dependence of the magnetic properties of permalloy and nickel nanowires as a function of their diameters, Mater. Res. Exp. 5 (1) (2018) 015043.
[3] M. Kateb, H. Hajihoseini, J.T. Gudmundsson, S. Ingvarsson, Comparison of magnetic and structural properties of permalloy Ni80Fe20 grown by dc and high power impulse magnetron sputtering, J. Phys. D: Appl. Phys. (2018).
[4] A. Mangla, G. Deo, P.A. Apte, NiFe local ordering in segregated Ni3Fe alloys: a simulation study using angular dependent potential, Comput. Mater. Sci. 153 (2018) 449–460.

[5] L. Changshi, Comprehension of the ferromagnetic hysteresis via an explicit function, Comput. Mater. Sci. 110 (2015) 295–301.
[6] Y. Hane, Reluctance network model of permanent magnet synchronous motor considering magnetic hysteresis behavior, 2018 IEEE International Magnetic Conference (INTERMAG), IEEE, 2018, pp. 1–5.
[7] A.A. Hedayat, E.A. Afzadi, H. Kalantaripour, E. Morshedi, A. Iranpour, A new predictive model for the minimum strength requirement of steel moment frames using artificial neural network, Soil Dyn. Earthquake Eng. 116 (2019) 69–81.
[8] S. Mathur, G. Joshi, Reliability enhancement of line insulator of a transmission system using artificial neural network, Int. J. Eng. Trends Technol. (IJETT) 41 (November) (2016).
[9] H. Hermantoro, R. Rudyanto, Modeling and simulation of oil palm plantation productivity based on land quality and climate using artificial neural network, Int. J. Oil Palm 1 (2) (2018) 65–70.
[10] E.O. Osigwe, Y.-G. Li, S. Suresh, G. Jombo, D. Indarti, Integrated gas turbine system diagnostics: components and sensor faults quantification using Artificial Neural Network, in: 23rd ISABE Conference Proceedings, 2017.
[11] P. Sharma, B. Singh, R. Singh, Prediction of potato late blight disease based upon weather parameters using artificial neural network approach, 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT), IEEE, 2018, pp. 1–13.
[12] S. Jindal, S.S. Bulusu, A transferable artificial neural network model for atomic forces in nanoparticles, J. Chem. Phys. (2018).
[13] S. Roy, S. Sengupta, S. Manna, M.A. Rahman, P. Das, Rice husk derived silica and its application for treatment of fluoride containing wastewater: batch study and modeling using artificial neural network analysis, Desalin. Water Treatm. 105 (2018) 73–82.
[14] M.K. Bantupalli, S.K. Matam, Wind speed forecasting using empirical mode decomposition with ANN and ARIMA models, 2017 14th IEEE India Council International Conference (INDICON), IEEE, 2017, pp. 1–6.
[15] M. Zhang, G. Liu, H. Wang, B. Hu, Modeling of thermal deformation behavior near γ′ solvus in a Ni-based powder metallurgy superalloy, Comput. Mater. Sci. 156 (2019) 241–245.
[16] T. Dutta, S. Dey, S. Datta, D. Das, Designing dual-phase steels with improved performance using ANN and GA in tandem, Comput. Mater. Sci. 157 (2019) 6–16.
[17] G. Khalaj, A. Nazari, H. Pouraliakbar, Prediction of martensite fraction of microalloyed steel by artificial neural networks, Neural Network World 23 (2) (2013) 117–130.
[18] G. Khalaj, H. Pouraliakbar, Computer-aided modeling for predicting layer thickness of a duplex treated ceramic coating on tool steels, Ceram. Int. 40 (4) (2014) 5515–5522.
[19] Z. Feng, et al., Method of artificial intelligence algorithm to improve the automation level of Rietveld refinement, Comput. Mater. Sci. 156 (2019) 310–314.
[20] H. Pouraliakbar, S. Firooz, M.R. Jandaghi, G. Khalaj, A. Nazari, Predicting the ultimate grain size of aluminum sheets undergone constrained groove pressing, Int. J. Adv. Manuf. Technol. 86 (5–8) (2016) 1639–1658.
[21] J. Jung, J.I. Yoon, H.K. Park, J.Y. Kim, H.S. Kim, An efficient machine learning approach to establish structure-property linkages, Comput. Mater. Sci. 156 (2019) 17–25.
[22] H. Pouraliakbar, G. Khalaj, M. Jandaghi, M. Khalaj, Study on the correlation of toughness with chemical composition and tensile test results in microalloyed API pipeline steels, J. Mining Metall. B: Metall. 51 (2) (2015) 173–178.
[23] K. Gubaev, E.V. Podryabinkin, G.L. Hart, A.V. Shapeev, Accelerating high-throughput searches for new alloys with active learning of interatomic potentials, Comput. Mater. Sci. 156 (2019) 148–156.
[24] E. Ranjbarnodeh, H. Pouraliakbar, A. Kokabi, Finite element simulation of carbide precipitation in austenitic stainless steel 304, Int. J. Mech. Appl. 2 (6) (2012) 117–123.
[25] F.M. Bayat, M. Prezioso, B. Chakrabarti, H. Nili, I. Kataeva, D. Strukov, Implementation of multilayer perceptron network with highly uniform passive memristive crossbar circuits, Nat. Commun. 9 (1) (2018) 2331.
[26] R. Cang, H. Li, H. Yao, Y. Jiao, Y. Ren, Improving direct physical properties prediction of heterogeneous materials from imaging data via convolutional neural network and a morphology-aware generative model, Comput. Mater. Sci. 150 (2018) 212–221.
[27] M. Banchero, L. Manna, Comparison between multi-linear- and radial-basis-function-neural-network-based QSPR models for the prediction of the critical temperature, critical pressure and acentric factor of organic compounds, Molecules 23 (6) (2018) 1379.
