
ARTIFICIAL NEURAL NETWORKS APPLICATIONS

IN PROBLEMS OF FITTING IN FORESTRY


Pero J. Radonja and Milos J. Koprivica

Abstract: Neural networks with different architectures and different activation functions represent a powerful tool for solving many approximation problems. Combining the knowledge of forestry theory with the empirical knowledge stored in an artificial neural network (ANN) trained on examples can bring very significant results with respect to traditional approaches. In our example neural networks represent a very powerful tool for solving fitting problems in forestry.

1. INTRODUCTION

Neural networks as tools for knowledge eliciting allow both imperfect theoretical knowledge and noisy experimental data [1]. An ANN can be realized by using either special hardware or general purpose hardware like a personal computer and corresponding software.

Applications of general purpose hardware and a software realization of ANN algorithms will be considered in this paper. The developed algorithms are based, in some of their parts, on subroutines from MatLab's Neural Networks Toolbox [2].

In this paper we shall deal with the height curve. The height curve represents the results of fitting the mean values of tree heights versus the diameter classes of trees. The measurement data, i.e. the mean values of tree heights versus the diameter classes of trees, are presented in Fig.1.

The main problem in optimal fitting of the height curve is the fact that the data in Fig.1 do not all have the same weight [3]. The weights of the data depend on the actual number of trees and are presented in Fig.2. We shall use the information from Fig.2 in order to get the weighted error curve.

Fig.2. The weight of the measurement data (number of trees versus diameter classes [cm])

On the other hand, it is well known from the theory of forestry how a good height curve has to look. The height curve must be monotonically rising, with one tangent line for higher values of diameter classes.
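As a numerical illustration of the weighting, consider the following sketch in Python; the counts per diameter class are hypothetical stand-ins for the values plotted in Fig.2, and each class's weight is taken as its share of the total number of measured trees.

```python
# Hypothetical number of measured trees per diameter class [cm];
# the real counts are the ones plotted in Fig.2.
tree_counts = {10: 120, 20: 310, 30: 180, 40: 60, 50: 15, 60: 3}

total = sum(tree_counts.values())
# Weight of a class = its share of all measured trees, so sparsely
# populated classes contribute less to the weighted fitting error.
weights = {d: n / total for d, n in tree_counts.items()}

for d, w in sorted(weights.items()):
    print(f"class {d:2d} cm: weight {w:.3f}")
```

Classes near the middle of the diameter range, where most trees are measured, dominate the weighted error, which is exactly the behaviour the height-curve fitting needs.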
2. TRAINING FEED-FORWARD NEURAL NETWORKS WITH BACKPROPAGATION
Analyses of the efficiency of ANN applications in the field of suboptimal fitting in forestry will start with the simplest two-layer feed-forward neural network with backpropagation [4]. The number of hidden neurons in the first layer shows the complexity of the problem. In our case the best results are obtained using only two neurons (S1=2) with the TANSIG activation function in the hidden layer. For S1 > 2 overfitting occurs. In the second layer, neurons with the PURELIN activation function are used.

Fig.1. Measurement data (mean tree heights [m] versus diameter classes [cm])

Initialization of the weights and biases for this TANSIG/PURELIN network is performed by the INITFF function. Furthermore, the inputs for the TRAINBP function are: the input vector P, the target vector T and the vector of training parameters, TP. Note that this training function, TRAINBP, can be called to train up to 3-layer feed-forward networks. The training parameters are: epochs between updating display, maximum number of epochs to train, sum-squared error goal and learning rate. The mentioned function returns the new weights and biases, the actual number of epochs trained and also a record of training errors.

The input vector P represents the normalized values of the diameter classes of trees. We have assumed that the maximum number of epochs to train is 80 and the sum-squared error goal is 0.005.
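Since INITFF and TRAINBP belong to the 1994 MATLAB toolbox, a language-neutral sketch of the same technique may be helpful: a two-layer TANSIG/PURELIN network trained by plain gradient-descent backpropagation. Only S1=2 hidden neurons and the 0.005 error goal follow the text; the data, learning rate and epoch limit below are illustrative, not the paper's.

```python
import math
import random

random.seed(1)

# Toy stand-ins for the paper's data: normalized diameter classes (P)
# and normalized mean heights (T); the real values are in Fig.1.
P = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
T = [0.00, 0.40, 0.65, 0.80, 0.90, 1.00]

S1 = 2  # hidden TANSIG neurons, as in the paper
# Random initialization (the role INITFF plays in the toolbox).
w1 = [random.uniform(-1, 1) for _ in range(S1)]
b1 = [random.uniform(-1, 1) for _ in range(S1)]
w2 = [random.uniform(-1, 1) for _ in range(S1)]
b2 = random.uniform(-1, 1)

def forward(p):
    h = [math.tanh(w1[i] * p + b1[i]) for i in range(S1)]  # TANSIG layer
    y = sum(w2[i] * h[i] for i in range(S1)) + b2          # PURELIN layer
    return h, y

def sse():
    return sum((t - forward(p)[1]) ** 2 for p, t in zip(P, T))

sse_start = sse()
lr, goal, max_epochs = 0.2, 0.005, 5000  # error goal as in the paper
for epoch in range(max_epochs):
    for p, t in zip(P, T):
        h, y = forward(p)
        e = t - y
        b2 += lr * e
        for i in range(S1):
            # Backpropagate through the linear layer into tanh:
            # d tanh(u)/du = 1 - tanh(u)^2.
            grad_h = e * w2[i] * (1.0 - h[i] ** 2)
            w2[i] += lr * e * h[i]
            w1[i] += lr * grad_h * p
            b1[i] += lr * grad_h
    if sse() <= goal:
        break

print(f"start SSE {sse_start:.3f} -> final SSE {sse():.4f}")
```

The per-sample updates above are the essence of what TRAINBP performs; the toolbox additionally reports the epoch count and the error record, as described in the text.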

Measurement data, denoted by *, and the results of fitting the height curve are shown in Fig.3.

Fig.5. Errors and weighted errors of fitting (versus input vector P)

Fig.3. Height curve (TRAINBP)

The actual sum-squared error is 2.7368 and the actual weighted sum-squared error is 0.3007.

A record of training errors with respect to training epochs is plotted in Fig.4.

The function TRAINBPM, which trains a feed-forward network with backpropagation and momentum, needs two more training parameters. These are: momentum constant and maximum error ratio. However, the results of fitting are not significant. The sum-squared error is 4.2691 and the weighted sum-squared error is 0.8259. For this reason, the fitting by TRAINBPM is not shown in this paper.
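The momentum term that distinguishes TRAINBPM can be sketched with the standard update rule, in which each weight change blends the fresh gradient step with the previous change. The learning rate, momentum constant and gradient sequence below are illustrative only.

```python
# Gradient-descent step with momentum: the new change dw mixes the
# previous change with the fresh gradient step. (TRAINBPM additionally
# rejects a step when the error grows past the maximum error ratio;
# that check is omitted here.)
def momentum_step(w, dw_prev, grad, lr=0.1, mc=0.9):
    dw = mc * dw_prev + lr * grad  # mc = momentum constant
    return w + dw, dw

w, dw = 0.0, 0.0
for g in [1.0, 1.0, 1.0, 1.0]:  # the same gradient four times
    w, dw = momentum_step(w, dw, g)

# With momentum the total move (about 0.905) exceeds the plain
# gradient-descent total of 4 * lr * g = 0.4.
print(round(w, 4))
```

On a consistent gradient direction the momentum term accelerates progress, which is why it can help, but on this noisy weighted-fitting problem it evidently did not.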
TRAINBPA trains a feed-forward network with backpropagation and adaptive learning. Instead of the momentum constant, this training function needs two training parameters which define the learning rate increase and the learning rate decrease. The results of fitting are satisfactory and are presented in Fig.6.
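The adaptive-learning-rate scheme behind TRAINBPA can be sketched as follows: after each epoch the learning rate grows if the error fell, and shrinks (with the step rejected) if the error grew too much. The increase/decrease ratios and the maximum error ratio below are assumptions chosen for illustration, not values taken from the paper.

```python
def adapt_lr(lr, new_err, old_err, lr_inc=1.05, lr_dec=0.7, max_ratio=1.04):
    """Return the updated learning rate and whether to keep the step."""
    if new_err > old_err * max_ratio:
        return lr * lr_dec, False  # error blew up: shrink lr, reject step
    if new_err < old_err:
        return lr * lr_inc, True   # error improved: grow lr, keep step
    return lr, True                # roughly flat: keep lr and the step

lr, keep = adapt_lr(0.1, new_err=2.0, old_err=3.0)
print(lr, keep)   # error fell -> learning rate grows
lr, keep = adapt_lr(0.1, new_err=4.0, old_err=3.0)
print(lr, keep)   # error rose -> learning rate shrinks, step rejected
```

The scheme lets the learning rate climb while training is stable and back off automatically when a step overshoots.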

Fig.4. Record of training errors (versus training epochs)

Considering the errors of fitting with regard to the weights of the measurement data, that is, with regard to Fig.2, we shall get the weighted error curve. The errors of fitting and the weighted errors of fitting, data denoted by *, versus the input vector P, are shown in Fig.5.

Fig.6. Height curve (TRAINBPA)
In this case the error of suboptimal fitting is similar to the error when TRAINBP was used. The sum-squared error is 2.4461 and the weighted sum-squared error is 0.3357.

The function TRAINBPX, which trains a feed-forward network with fast backpropagation, needs 8 training parameters. These are the parameters which we have already used in TRAINBPM and TRAINBPA. The results of fitting are satisfactory and are presented in Fig.7.

Fig.7. Height curve (TRAINBPX)

Fig.8. Height curve (Levenberg-Marquardt)
The error of suboptimal fitting is now less than the error which we had in the case of using TRAINBPA. The sum-squared error is 2.3388 and the weighted sum-squared error is 0.2360.

The feed-forward network with the Levenberg-Marquardt algorithm is also a neural network with backpropagation. The corresponding training function TRAINLM uses 8 training parameters. The parameters which have not been mentioned until now are: minimum gradient, initial value for MU, multiplier for increasing MU, multiplier for decreasing MU and maximum value for MU.

In our case the best performances are obtained when the number of hidden TANSIG neurons is 3. Note that the application of TRAINLM ensures the best result of suboptimal fitting, as we can see in Fig.8. Indeed, the actual sum-squared error is 2.0191 and the actual weighted sum-squared error is only 0.1773.

A record of training errors with respect to training epochs is shown in Fig.9.

Fig.9. Record of training errors (Levenberg-Marquardt)

The errors of suboptimal fitting and the weighted errors of suboptimal fitting, data denoted by *, versus the diameter classes of trees, are shown in Fig.10.
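The MU parameters listed above are the Levenberg-Marquardt damping controls. A minimal sketch on a two-parameter model y = a·x + b (not the network itself) shows the mechanism: MU shrinks after a successful step, pushing the update toward Gauss-Newton, and grows after a rejected step, pushing it toward gradient descent. All data and constants here are illustrative.

```python
def sse(a, b, xs, ys):
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

def lm_fit(xs, ys, a=0.0, b=0.0, mu=0.01, mu_inc=10.0, mu_dec=0.1,
           mu_max=1e10, iters=50):
    """Levenberg-Marquardt for y = a*x + b, with a hand-rolled 2x2 solve."""
    for _ in range(iters):
        r = [y - (a * x + b) for x, y in zip(xs, ys)]  # residuals
        # J^T J and J^T r for Jacobian rows (x, 1).
        jtj = [[sum(x * x for x in xs), sum(xs)],
               [sum(xs), float(len(xs))]]
        jtr = [sum(x * ri for x, ri in zip(xs, r)), sum(r)]
        while mu < mu_max:
            # Solve (J^T J + mu*I) [da, db] = J^T r directly (2x2 system).
            m00, m11 = jtj[0][0] + mu, jtj[1][1] + mu
            m01 = jtj[0][1]
            det = m00 * m11 - m01 * m01
            da = (m11 * jtr[0] - m01 * jtr[1]) / det
            db = (m00 * jtr[1] - m01 * jtr[0]) / det
            if sse(a + da, b + db, xs, ys) < sse(a, b, xs, ys):
                a, b = a + da, b + db
                mu *= mu_dec  # success: trust the Gauss-Newton step more
                break
            mu *= mu_inc      # failure: fall back toward gradient descent
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # exactly y = 2x + 1
a, b = lm_fit(xs, ys)
print(round(a, 3), round(b, 3))  # close to 2.0 and 1.0
```

TRAINLM applies the same damped update to all network weights at once, which is why it converges in far fewer epochs than plain backpropagation on this fitting problem.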
3. TRAINING AN ELMAN RECURRENT NETWORK

The training function for the Elman recurrent neural network is TRAINELM. The input vectors are: the weight matrix for the first layer from the input and the feedback, the bias column vector for the first layer, the weight matrix for the second layer from the first layer, and the bias column vector for the second layer. The input column vector P is arranged in time and the target column vector T is for the final output. The vector of training parameters TP has 8 parameters. Besides the already mentioned training parameters, TRAINELM uses an initial adaptive learning rate, a ratio to increase the learning rate, a ratio to decrease the learning rate and a momentum constant with the standard default value 0.95. TRAINELM returns the new weights and biases, the actual number of epochs trained and a record of errors throughout training. However, the results of fitting, as we can see in Fig.11, are not satisfactory.

Fig.12. Height curve (Radial basis neurons)
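The feedback structure described above, with first-layer weights acting on both the input and the previous hidden state, can be sketched as a forward pass. The weights below are illustrative values, not trained ones.

```python
import math

# One Elman step: the hidden state h is fed back into the first layer
# alongside the input x; the second (output) layer is linear.
def elman_step(x, h, w_in, w_rec, b1, w_out, b2):
    n = len(h)
    h_new = [math.tanh(w_in[i] * x
                       + sum(w_rec[i][j] * h[j] for j in range(n))
                       + b1[i])
             for i in range(n)]
    y = sum(w_out[i] * h_new[i] for i in range(n)) + b2
    return h_new, y

# Illustrative 2-hidden-neuron parameters (hypothetical, not trained).
w_in, b1 = [0.5, -0.3], [0.0, 0.1]
w_rec = [[0.2, 0.1], [0.0, 0.3]]
w_out, b2 = [1.0, -1.0], 0.05

h = [0.0, 0.0]
outputs = []
for x in [0.1, 0.2, 0.3]:  # input column vector P, arranged in time
    h, y = elman_step(x, h, w_in, w_rec, b1, w_out, b2)
    outputs.append(y)
print(outputs)
```

Because the hidden state carries over between steps, the same input produces different outputs at different times; that memory is useful for sequences but, as the paper reports, gives no advantage on this static fitting task.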

Note that using radial basis networks we obtained superfast suboptimal function approximation.
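A miniature of the radial-basis construction discussed in Section 4: place one Gaussian neuron on each data point and solve the linear output layer directly. SOLVERB instead adds neurons incrementally until the error goal is met; the data and spread below are illustrative, not the paper's.

```python
import math

def gaussian(x, c, spread):
    return math.exp(-((x - c) / spread) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

# Toy height-curve data (hypothetical, not the paper's measurements).
xs = [10.0, 20.0, 30.0, 40.0, 50.0]  # diameter classes [cm]
ys = [16.0, 19.0, 21.0, 22.0, 22.5]  # mean heights [m]
spread = 15.0

# Interpolation matrix: one radial basis neuron centred on each point.
A = [[gaussian(x, c, spread) for c in xs] for x in xs]
w = solve(A, ys)  # linear output layer, solved in one step

def rbf(x):
    return sum(wi * gaussian(x, c, spread) for wi, c in zip(w, xs))

# The fit passes through every training point (up to rounding).
print([round(rbf(x), 6) for x in xs])
```

Because the output layer is obtained by solving a linear system rather than by iterative training, the design is essentially one-shot, which is the source of the "superfast" approximation noted above.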

Fig.11. Height curve (Elman rec. net.)

4. DESIGN OF A RADIAL BASIS NETWORK

The design function for the radial basis neural network is SOLVERB. The design parameters are: number of iterations between updating display, maximum number of neurons, sum-squared error goal and spread of the radial basis functions. In our case the optimal design parameters have the values 10, 12, 2.5 and 88, respectively.

SOLVERB finds a two-layer radial basis network with enough neurons to fit a function to within the error goal. Furthermore, SOLVERB returns the weight matrix and bias vector for the radial basis layer, and also the weight matrix and bias vector for the linear layer. Finally, SOLVERB returns the number of radial basis neurons used and a record of training errors.

The obtained height curve is shown in Fig.12 and, as we can see, the fitting is very good.

5. CONCLUSION

The paper presents a new method for suboptimal fitting of the height curve in forestry. Using ANN we have obtained very successful suboptimal fitting of the height curve and a low sum-squared error of suboptimal fitting. We have got very good agreement between the numerical value of the weighted sum-squared error of fitting and good results in view of the theory of forestry (a monotonically rising curve with one tangent line).

REFERENCES

[1] S. Haykin, Neural Networks, New York: Macmillan, 1994.
[2] Neural Network Toolbox (1994), Version 2.0a, 06-Apr-94, MATLAB for Windows 4.2c.1.
[3] Radonja P. J. (1998): Efficiency of ANFIS FUZZY algorithm and extraction of relevant forest stands data in forestry, Proceedings of the conference on computer theory and information technologies YU INFO'98, 23-27 March 1998, Kopaonik.
[4] Stanković S. Srđan, Milisavljević M. Milan (1991) "Training of multilayer perceptrons by stochastic approximation", in Antognetti P., Milutinović V. (eds.), Neural Networks: Concepts, Applications and Implementation, Vol. IV, Prentice-Hall.
