
CIS 423- Computational Intelligence

Assignment No. 03
Instructor name: Dr. Abdul Majid Due Date: 27 March 2024

It should be noted that late assignments will carry zero credit. It is recommended that you do your
assignments individually. Total points=15

Assignment No. 03
Perceptron learning rule for training of the Adaline neural network with nonlinearly
transformed inputs

Question 1 : Adaline with non-linearly transformed inputs


Adaline with non-linearly transformed inputs (polynomial discriminant functions):
For solving classification problems with patterns that are not linearly separable, the inputs of the
Adaline can be preprocessed with fixed nonlinearities. Consider the example of the network shown
in Figure (1) below with two inputs.

[Figure residue omitted. The network forms the inputs x1 and x2, their squares x1^2 and x2^2, and the product x1*x2, multiplies these five features by the weights w1(k)-w5(k) (plus a bias weight on the input x0), sums them into v(k), and applies sgn(.) to produce the output y(k).]
Figure (1): Adaline with nonlinearly transformed inputs

where

v(k) = w1(k) x1^2(k) + w2(k) x1(k) + w3(k) x1(k) x2(k) + w4(k) x2(k) + w5(k) x2^2(k) + w0(k) x0

The critical thresholding condition for this Adaline with nonlinearly transformed inputs occurs
when v(k) is set to zero in the above equation. This condition represents an ellipse in the two-
dimensional input vector space. By introducing the nonlinearities in the input layer, a separation
boundary is generated which is not a straight line (i.e. an elliptic separating boundary in this case).
Therefore, if the appropriate nonlinearities are chosen, the network can be trained to separate the
input space into two subspaces which are not linearly separable. In general, the Adaline with
nonlinearly transformed inputs can be trained in the same manner as the linear Adaline network.
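To make this concrete, here is a minimal Python sketch (our own illustration, not part of the assignment): with bipolar inputs, XOR(x1, x2) = -x1*x2, so a single hand-picked weight on the x1*x2 feature already separates the classes in the transformed space.

```python
import numpy as np

def transform(x1, x2):
    # fixed nonlinear preprocessing from Figure (1): bias plus five features
    return np.array([1.0, x1 * x1, x1, x1 * x2, x2, x2 * x2])

# bipolar XOR: output is +1 exactly when the inputs differ
patterns = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
targets = [-1, 1, 1, -1]

# hand-picked weights: only the x1*x2 term is needed, since XOR(x1,x2) = -x1*x2
w = np.array([0.0, 0.0, 0.0, -1.0, 0.0, 0.0])

outputs = [int(np.sign(w @ transform(x1, x2))) for x1, x2 in patterns]
print(outputs)  # -> [-1, 1, 1, -1], matching the XOR targets
```

Because the problem is linearly separable in the transformed feature space, the usual perceptron or Adaline training rules can find such weights automatically.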
(a) Write a computer program for training the Adaline with nonlinearly transformed inputs
given in Figure (1) to perform the logic function XOR. Use bipolar vectors as training
inputs. In your own words explain why this network structure has superior separability
properties compared to the Perceptron.

Question 2
Consider the separation problem shown in Figure (2) below. It should be obvious that the circles
and squares are not linearly separable.

[Figure residue omitted: a scatter of circle and square patterns in the (x1, x2) plane, with both axes running from -1.0 to +1.0.]
Figure (2): Separation of patterns for question no. 2

(a) Write a computer program implementing an Adaline with nonlinearly transformed inputs,
trained with the LMS algorithm.
(b) Use your program to separate the circles and squares given in Figure (2).
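As a sketch of what such a program might look like (the sample points, step size, and variable names below are our own assumptions, not the actual data of Figure (2)):

```python
import numpy as np

def transform(x1, x2):
    # same fixed polynomial features as the network in Question 1
    return np.array([1.0, x1 * x1, x1, x1 * x2, x2, x2 * x2])

# made-up sample: points inside a circle (+1) vs outside (-1); not linearly separable
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
t = np.where(X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5, 1.0, -1.0)

w = np.zeros(6)
mu = 0.05  # LMS step size (a guess; tuned in practice)
for epoch in range(200):
    for (x1, x2), ti in zip(X, t):
        z = transform(x1, x2)
        e = ti - w @ z        # error on the linear output v(k), before the sgn
        w += 2 * mu * e * z   # Widrow-Hoff (LMS) update

preds = np.array([np.sign(w @ transform(x1, x2)) for x1, x2 in X])
accuracy = np.mean(preds == t)
```

The learned boundary is roughly circular, since only the bias and the squared features carry weight after training; that is exactly the elliptic boundary discussed under Question 1.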

Question 3
Consider a set of two-dimensional vectors X defined as having components in the ranges
, and . Train the Adaline neural network with nonlinearly transformed
inputs given in question no. 1, to perform the following classification:
X is classified as 1 if ; otherwise, X is classified as 0.

Hint related to Question/Program No. 1 (5 points)

Write a program, in MATLAB or Python, that uses the Perceptron learning rule and follows these
instructions:

Part 1: Define the training data and various control parameters


Part 2: Use the Perceptron learning rule to train the net
Part 3: Test the resulting network on the input data
Part 4: Do some exercises by generating other types of patterns as well

%%%% Part 1 %%%%%%%%%%%%%%


% Initialize various parameters
w(1) = 0;        % Initialize weights; w(1) is the bias weight.
w(2) = 0; w(3) = 0;
alpha = 1;       % Learning rate
theta = 0.2;     % Half-width of the non-decisive region
stopFlag = -1;   % Stopping flag: -1 = do not stop, +1 = stop
wtChange = 1;    % Weight-change flag: -1 = no change, +1 = change
epsilon = 0.001; % Termination criterion
epoch = 0; % Epoch counter

% Define training patterns: the first component is the bias input; targets are bipolar


noOfTrainingPatterns = 4;
s(1,:) = [1 1 1]; t(1,1) = 1; s(2,:) = [1 1 0]; t(2,1) = -1;
s(3,:) = [1 0 1]; t(3,1) = -1; s(4,:) = [1 0 0]; t(4,1) = -1;

Write your own code after this.
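One possible continuation of Parts 2 and 3, written in Python for concreteness (the MATLAB version follows the same structure; the stopping test used here, "no weight changed during an epoch", is our own choice):

```python
import numpy as np

# Part 1: training data (bias input first) and control parameters, as in the hint
s = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 0, 0]], dtype=float)
t = np.array([1, -1, -1, -1], dtype=float)
w = np.zeros(3)           # w[0] is the bias weight
alpha, theta = 1.0, 0.2   # learning rate, half-width of non-decisive region

# Part 2: perceptron learning rule with a non-decisive region
for epoch in range(100):
    changed = False
    for si, ti in zip(s, t):
        v = w @ si
        y = 1.0 if v > theta else (-1.0 if v < -theta else 0.0)
        if y != ti:
            w += alpha * ti * si   # update only on misclassification
            changed = True
    if not changed:                # stop when a full epoch makes no change
        break

# Part 3: test the trained network on the training inputs
preds = [1 if w @ si > theta else -1 for si in s]
print(preds)  # -> [1, -1, -1, -1]
```

For Part 4, the same loop can be rerun after redefining `s` and `t` with other patterns.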

Hint related to Question/Program No. 2 (5 points)

Write code for an Adaline transversal filter used for echo suppression. In the first part,
you will show how echo can affect the signal on a communication channel.
In the second part, you will show how the Adaline can be used as an adaptive filter to cancel
the noise in the incoming signal, and how gradient descent moves across the error
surface.
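A minimal sketch of the idea, assuming a made-up echo path and sinusoidal signal (all names and parameter values below are illustrative, not prescribed by the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
signal = np.sin(2 * np.pi * np.arange(n) / 50)  # clean signal on the channel
noise_src = rng.standard_normal(n)              # reference noise feeding the echo

# assumed (unknown to the filter) echo path: a short FIR filter of the reference
echo = 0.6 * noise_src + 0.3 * np.roll(noise_src, 1)
received = signal + echo                        # what the channel delivers

# Adaline transversal filter: tap-delay line on the reference, trained by LMS
taps = 4
w = np.zeros(taps)
mu = 0.01                                       # LMS step size (illustrative)
out = np.zeros(n)
for k in range(taps, n):
    x = noise_src[k - taps + 1:k + 1][::-1]     # [n(k), n(k-1), n(k-2), n(k-3)]
    y = w @ x                                   # current echo estimate
    e = received[k] - y                         # error = echo-suppressed signal
    w += 2 * mu * e * x                         # gradient descent on the error surface
    out[k] = e
```

After convergence the taps approximate the echo path (about 0.6 and 0.3 on the first two taps), so `out` is close to the clean signal; plotting the mean squared error over time would show the descent along the quadratic error surface.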
