
Computational Intelligence

[CPEN6144]
Module #2
Introduction to Artificial Neural
Networks

Winda Astuti, PhD.


Binus-Aso School of Engineering
Outline
• Single node neural network
• Perceptron neural network
• Multilayer backpropagation
Basic Models of ANNs
1. Processing Elements
The M-P neuron can be extended to a general model of
a processing element (PE) by changing the net input
function and the activation function.
[Figure: General model of a processing element (PE) — inputs x1, …, xm with weights wi1, …, wim feed the net input function f(·); the result passes through the activation function a(·) to give the output yi.]
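As a quick illustration, here is a minimal MATLAB sketch of this general PE; the input values, weights, bias, and the choice of tansig as the activation are illustrative assumptions (tansig is the bipolar sigmoid from the Neural Network Toolbox):

x = [0.5; -1.0; 2.0];  % input vector x1..x3 (illustrative values)
w = [0.4; -0.2; 0.1];  % weight vector wi1..wi3 (illustrative values)
b = 0.3;               % bias
f = w'*x - b;          % net input function: linear integration
y = tansig(f)          % activation function: bipolar sigmoid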
Fundamental Concepts
A linear threshold unit (LTU) is a PE with a linear
integration function and a hard-limiter activation
function.
M-P Neuron
[Figure: M-P neuron — inputs x1, …, xm with weights wi1, …, wim feed the integration function f(·), followed by the activation a(·), which gives the output yi; b is the bias input.]

$$y_i(t+1) = a\!\left(\sum_{j=1}^{m} w_{ij}\, x_j(t) - b\right), \qquad a(f) = \begin{cases} +1, & \text{if } f \ge 0 \\ -1, & \text{if } f < 0 \end{cases}$$

b: bias (threshold)
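A quick numeric check of this formula as a MATLAB sketch; the weights and inputs are illustrative, and hardlims is the bipolar hard limiter from the Neural Network Toolbox:

x = [1; -1];     % inputs x1, x2 (illustrative)
w = [0.5; 0.5];  % weights wi1, wi2 (illustrative)
b = 0.2;         % bias (threshold)
f = w'*x - b;    % linear integration: f = 0.5 - 0.5 - 0.2 = -0.2
y = hardlims(f)  % bipolar hard limiter: y = -1 since f < 0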
Single Node Neural Network
b=1;
w1=[-1/3;-1/2];
W=[w1'];
dx=[1 1 1.5];                  % data set 1, x1 coordinates
dy=[1 1.2 0.8];                % data set 1, x2 coordinates
d1x=[1 1 2 2.2 2.6 3 3.1];     % data set 2, x1 coordinates
d1y=[2.1 3 2.8 2.1 1.5 3 1.8]; % data set 2, x2 coordinates
figure(1)
hold on
plot(dx,dy,'og','MarkerFaceColor','g');   % plot data set 1
plot(d1x,d1y,'ob','MarkerFaceColor','b'); % plot data set 2
xlabel('P1');
ylabel('P2');
axis([0 3.5 0 3.5]);
plot([0 3],[2 0]);             % decision boundary
PV=[dx;dy];
PA=[d1x;d1y];
n=W*PV+b;
figure(2)
hold on                        % keep user-entered points on the same axes
plot([0 3],[2 0]);
xlabel('P1');
ylabel('P2');
axis([0 3.5 0 3.5]);
while (b==1)                   % interactive loop (stop with Ctrl+C)
    p1=input('dx=');
    p2=input('dy=');
    P=[p1;p2];
    a=hardlim(W*P+b)
    if (a==1)
        disp('class 1')
        plot(p1,p2,'og','MarkerFaceColor','g')
    else
        disp('class 2')
        plot(p1,p2,'ob','MarkerFaceColor','b')
    end
end
Questions
1. Test with 5 randomly chosen data points; what is the
accuracy?
2. If the weights are changed, what happens, and what is
the effect on the accuracy?
3. Write a program for another data set, using an LGU
integration function and the bipolar sigmoid (tansig)
activation (see the sketch below).
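A possible starting point for question 3, as a minimal sketch only: it reuses the illustrative weights from the script above and swaps the hard limiter for the bipolar sigmoid tansig, thresholding its output at 0 to pick a class.

W = [-1/3 -1/2];      % same illustrative weights as above
b = 1;                % bias
P = [1.5; 0.8];       % one test point (illustrative)
a = tansig(W*P + b);  % LGU: linear integration + bipolar sigmoid
if a >= 0             % tansig output lies in (-1,1); threshold at 0
    disp('class 1')
else
    disp('class 2')
end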
Basic Models of ANNs
3. Learning/Training Rule
- Supervised Learning -
The correct inputs and desired outputs are provided,
and the weight adjustment is performed based on the
error of the computed output.

[Figure: Supervised learning — the input x feeds the ANN with weights W to produce the actual output y; an error-signal generator compares y with the desired output d and drives the weight adjustment.]
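A hedged sketch of one standard error-driven adjustment, the perceptron learning rule w ← w + η(d − y)x; the rule is classical, but the data (logical AND), learning rate, and epoch count below are illustrative assumptions:

X = [0 0 1 1; 0 1 0 1];  % four 2-D input patterns (columns), illustrative
d = [0 0 0 1];           % desired outputs (logical AND), illustrative
w = zeros(2,1); b = 0;   % initial weights and bias
eta = 0.5;               % learning rate (assumed)
for epoch = 1:20
    for k = 1:size(X,2)
        y = hardlim(w'*X(:,k) + b); % actual output
        e = d(k) - y;               % error signal
        w = w + eta*e*X(:,k);       % adjust weights by the error
        b = b + eta*e;              % adjust bias the same way
    end
end
disp(w'), disp(b)        % learned weights and bias separate the two classes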
Perceptron
• We can connect any number of McCulloch-Pitts
neurons together in any way we like.
• An arrangement of one input layer of McCulloch-Pitts
neurons feeding forward to one output layer of
McCulloch-Pitts neurons is known as a Perceptron.
Perceptron

Target: for every training pattern k, the actual output should match the desired output:

$$y_i^{(k)} = a\!\left(\mathbf{w}_i^{T}\,\mathbf{x}^{(k)}\right) = a\!\left(\sum_{j=1}^{m} w_{ij}\, x_j^{(k)}\right) \rightarrow d_i^{(k)}$$
Perceptron

A simple perceptron: if the perceptrons are Linear Threshold Units (LTUs), the desired output can take only ±1 values, and then

$$y_i^{(k)} = \operatorname{sgn}\!\left(\mathbf{w}_i^{T}\,\mathbf{x}^{(k)}\right) = d_i^{(k)}$$

The boundary between the two output values is

$$\mathbf{w}_i^{T}\,\mathbf{x}^{(k)} = 0$$
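In two dimensions this boundary is a straight line. A small illustrative sketch (the weights and bias are assumed values, e.g. those found by the learning-rule sketch above, with the bias written explicitly rather than folded into the weight vector) solves w1·x1 + w2·x2 + b = 0 for x2 and plots the line:

w = [2; 1]; b = -2.5;        % illustrative weights and bias
x1 = linspace(0, 2, 50);     % range for the first input
x2 = -(w(1)*x1 + b) / w(2);  % solve w1*x1 + w2*x2 + b = 0 for x2
plot(x1, x2); grid on
xlabel('x_1'); ylabel('x_2');
title('Decision boundary')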
Perceptron
Linearly Separable Problems
A single-layer perceptron can classify data into two
classes only for linearly separable problems.
A linearly separable problem is one in which a decision
plane (a line in two dimensions) can be found in the input
space separating the input patterns with desired output
+1 from those with desired output -1; the XOR problem is
the classic example that is not linearly separable.

$$\mathbf{w}_i^{T}\,\mathbf{x} > 0 \quad \text{for each } \mathbf{x} \text{ with desired output } +1$$

$$\mathbf{w}_i^{T}\,\mathbf{x} < 0 \quad \text{for each } \mathbf{x} \text{ with desired output } -1$$

The above must hold for each output PE i, i = 1, 2, 3, …, n.
Perceptron

Design of Perceptron
[Figure: inputs x1, x2, …, and xm = -1 with weights w11, …, w1m feed a single output node; the actual output y1 is compared with the desired output d1 to form the error e1.]

The perceptron is designed (by choosing appropriate
weights) so that the actual output matches the
desired output.
Perceptron Neural Network
x=[0 0 1 1;0 1 0 1];    % four 2-D input patterns (columns)
d=[1 1 1 0];            % desired outputs (NAND truth table)
plotpv(x,d)             % plot the labelled input patterns
lyr=newp([0 1;0 1],1);  % perceptron: two inputs in [0,1], one neuron
lyr.iw{1,1}=[1 1];      % initial weights
lyr.b{1}=0.5;           % initial bias
fnc=lyr.iw{1,1};
bias=lyr.b{1};
plotpc(fnc,bias)        % plot the initial decision boundary
lyr=train(lyr,x,d)      % train with the perceptron learning rule
figure;
fnc=lyr.iw{1,1};        % trained weights
bias=lyr.b{1};          % trained bias
plotpv(x,d)
plotpc(fnc,bias)        % plot the learned decision boundary
Y=sim(lyr,x);           % simulate the trained perceptron
train_error=mae(Y-d)    % mean absolute error on the training data
Questions
1. Test with 5 randomly chosen data points; what is the
accuracy? (See the sketch below.)
2. If the weights are changed, what happens, and what is
the effect on the accuracy?
3. Write a program for another data set.
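One rough way to start question 1; this sketch assumes the trained lyr from the script above is still in the workspace, and it assigns "true" labels to the random points with an assumed rule (NAND of the inputs thresholded at 0.5), which is not a rule given in the module:

P = rand(2,5);                       % 5 random test points in [0,1]^2
t = ~((P(1,:)>0.5) & (P(2,:)>0.5));  % assumed true labels: NAND of thresholded inputs
Y = sim(lyr,P);                      % perceptron predictions
accuracy = sum(Y==t)/numel(t)        % fraction of correctly classified points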
Multilayer neural network
x1=[1 1;2 1;2 2;3 2;3 3];   % input data, class 1
x2=[5 3;5 4;6 3;6 4;6 5];   % input data, class 2
xapp=[x1;x2];               % stacked input data
y1=[1 1 1 1 1]';            % outputs for class 1
y2=[-1 -1 -1 -1 -1]';       % outputs for class 2
yapp=[y1;y2];               % stacked output data
%net=newff(xapp',yapp',[2,1],{'tansig','tansig'},'trainlm'); % two-layer alternative
net=newff(xapp',yapp',[1],{'tansig'},'trainlm'); % develop the neural network
net.trainParam.show = 100;       % epochs between displays
net.trainParam.epochs = 100;     % maximum number of epochs to train
net.trainParam.goal = 0.1;       % performance goal
net.trainParam.lr = 0.005;       % learning rate
net.trainParam.max_fail = 5;     % maximum validation failures
net.trainParam.min_grad = 1e-10; % minimum performance gradient
net.trainParam.time = inf;       % maximum training time (unlimited)
net=train(net,xapp',yapp');      % train the ANN
outputTR=sign(sim(net,xapp'));   % test the ANN on the training data
ETR=mse(yapp'-outputTR);         % mean squared error of the ANN
err=yapp-outputTR'               % per-pattern error
Questions
1. Test with 5 randomly chosen data points; what is the
accuracy?
2. Try various hidden-layer configurations; what happens,
and what is the effect on the accuracy? (See the sketch
below.)
3. Write a program for another data set.
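A rough sketch for question 2, looping over a few hidden-layer sizes; the sizes tried are assumptions, xapp and yapp come from the script above, and newff is called with the same two-layer convention as the commented-out line there:

for h = [1 2 5 10]  % candidate hidden-layer sizes (assumed)
    net = newff(xapp',yapp',[h,1],{'tansig','tansig'},'trainlm');
    net.trainParam.epochs = 100;
    net.trainParam.goal = 0.1;
    net = train(net,xapp',yapp');
    out = sign(sim(net,xapp'));          % predictions on the training data
    acc = sum(out==yapp')/numel(yapp);   % training accuracy for this size
    fprintf('hidden=%d  accuracy=%.2f\n', h, acc);
end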
