Artificial Neural Networks Applications in Problems of Fitting in Forestry
Abstract: Neural networks with different architectures and different activation functions represent a powerful tool for solving many approximation problems. Combining the knowledge of forestry theory with the empirical knowledge stored in an artificial neural network (ANN) trained on examples can bring very significant results with respect to traditional approaches. In our example neural networks represent a very powerful tool for solving problems of fitting in forestry.

The main problem in optimal fitting of the height curve is the fact that the data in Fig. 1 do not all have the same weight, [3]. The weights of the data depend on the actual number of trees and are presented in Fig. 2. We shall use the information from Fig. 2 in order to get the weighted error curve.
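The weighted error idea described above can be sketched as follows. This is an illustrative Python sketch, not the paper's original MATLAB code; the height, fitted-curve, and tree-count values below are hypothetical, standing in for the data of Figs. 1 and 2.

```python
import numpy as np

def weighted_sse(heights, fitted, weights):
    """Weighted sum-squared error: each squared residual is scaled by
    the relative frequency of trees in its diameter class."""
    residuals = heights - fitted
    return float(np.sum(weights * residuals ** 2))

# Hypothetical data: mean measured heights per diameter class, a fitted
# height curve, and the number of trees observed in each class.
heights = np.array([12.0, 16.5, 19.0, 20.5, 21.2])
fitted = np.array([12.4, 16.0, 19.3, 20.4, 21.5])
trees = np.array([120, 80, 40, 15, 5])
weights = trees / trees.sum()   # classes with many trees count more

wsse = weighted_sse(heights, fitted, weights)
```

Classes that contain many trees thus dominate the error measure, which is exactly why the weighted curve differs from the unweighted fit.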
1. INTRODUCTION
In the field of suboptimal fitting in forestry we shall start with the simplest two-layer feed-forward neural network with backpropagation, [4]. The number of hidden neurons in the first layer shows the complexity of the problem. In our case the best results are obtained using only two neurons (S1 = 2) with the TANSIG activation function in the hidden layer. For larger S1, overfitting occurred. In the second layer, neurons with the PURELIN activation function are used.
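The structure just described can be sketched as follows. This is a minimal Python illustration of the forward pass, not the paper's MATLAB toolbox code: a hidden layer of S1 = 2 tansig (hyperbolic tangent) neurons followed by a single linear output neuron. All weight and bias values here are hypothetical; in the paper they are obtained by backpropagation training.

```python
import numpy as np

def tansig(n):
    # MATLAB's tansig transfer function is mathematically tanh
    return np.tanh(n)

def forward(p, W1, b1, W2, b2):
    """Two-layer feed-forward pass: tansig hidden layer, linear output."""
    a1 = tansig(W1 @ p + b1)   # hidden layer activations, shape (2,)
    return W2 @ a1 + b2        # linear (purelin) output: height estimate

# Hypothetical parameters: 1 input, S1 = 2 hidden neurons, 1 output
W1 = np.array([[0.8], [-1.1]])
b1 = np.array([0.2, 0.5])
W2 = np.array([[1.4, -0.9]])
b2 = np.array([18.0])

height = forward(np.array([2.5]), W1, b1, W2, b2)
```

With only two hidden tansig units the network can already bend the output curve twice, which is enough flexibility for a monotone, saturating height curve without inviting overfitting.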
Fig.7 Height curve (TRAINBPX)
The error of suboptimal fitting is now less than the error we had in the case of using TRAINBPA. The sum-squared error is 2.3388 and the weighted sum-squared error is 0.2360.

Fig.9 Record of training errors (Levenberg-Marquardt)
The errors of suboptimal fitting and weighted suboptimal fitting, data denoted by *, versus diameter classes of trees, are shown in Fig. 10.

A feed-forward network with the Levenberg-Marquardt algorithm is also a neural network with backpropagation. The corresponding training function TRAINLM uses 8 training parameters. The parameters which have not been mentioned until now are: minimum gradient, initial value for MU, multiplier for
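The core of the Levenberg-Marquardt algorithm behind TRAINLM can be sketched as one damped Gauss-Newton step. This is an illustrative Python sketch, not the toolbox implementation; the toy one-parameter fit below is hypothetical, and MU plays the role of the damping parameter mentioned above.

```python
import numpy as np

def lm_step(J, e, mu):
    """One Levenberg-Marquardt update: dw = -(J^T J + mu*I)^(-1) J^T e.
    Small mu approaches Gauss-Newton; large mu approaches a short
    gradient-descent step."""
    JtJ = J.T @ J
    g = J.T @ e                 # gradient of 0.5 * sum(e^2)
    return -np.linalg.solve(JtJ + mu * np.eye(JtJ.shape[0]), g)

# Toy example: fit y = w*x by iterating LM steps on residuals e = w*x - y
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.1, 5.9])
w = 0.0
for _ in range(5):
    e = w * x - y               # residual vector
    J = x.reshape(-1, 1)        # Jacobian de/dw
    w += lm_step(J, e, mu=0.01)[0]
```

In the full algorithm MU is adapted during training (increased when a step fails, decreased when it succeeds), which is what the multiplier parameters of TRAINLM control.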
5. CONCLUSION