Advanced Communication Exp-5,6,7,8


Ashish Kumar (20EEAEC003)

EXPERIMENT – 05
OBJECT – Find all the code words of the (7,4) Hamming code and verify that its minimum distance
is equal to 3.

APPARATUS – MATLAB SOFTWARE.

MATLAB CODE –
% Hamming code simulation - JC - 4/15/06
% To run, press F5 and observe the command window.
% Simulation of encoding and decoding of a [7,4] Hamming code. The decoder
% can correct one error, as theory states. The table at the end of the file
% shows the outputs for various error positions and message bits. One error
% can be placed at any of the 7 bit locations and corrected.
clear
n = 7;  % number of codeword bits per block
k = 4;  % number of message bits per block
A = [1 1 1; 1 1 0; 1 0 1; 0 1 1];  % Parity submatrix (binary of decimal 7,6,5,3)
G = [eye(k) A]       % Generator matrix
H = [A' eye(n-k)]    % Parity-check matrix

% ENCODER
msg = [1 1 1 1]      % Message block vector - change to any 4-bit sequence
code = mod(msg*G,2)  % Encode message

% CHANNEL ERROR (add one error to the codeword)
%code(1) = ~code(1);
%code(2) = ~code(2);
%code(3) = ~code(3);
%code(4) = ~code(4);  % Uncomment one, leave the others commented
%code(5) = ~code(5);
%code(6) = ~code(6);
%code(7) = ~code(7);
recd = code           % Received codeword (with error, if one was enabled)

% DECODER
syndrome = mod(recd * H',2)
% Find the position of the error in the codeword (index)
found = 0;
index = 0;            % stays 0 when the syndrome is all-zero (no error)
for ii = 1:n
    if ~found
        errvect = zeros(1,n);
        errvect(ii) = 1;
        search = mod(errvect * H',2);
        if search == syndrome
            found = 1;
            index = ii;
        end
    end
end
correctedcode = recd;
if index > 0
    disp(['Position of error in codeword = ',num2str(index)]);
    correctedcode(index) = mod(recd(index)+1,2);  % Flip the erroneous bit
end
correctedcode         % Corrected codeword

% Strip off the parity bits to recover the message
msg_decoded = correctedcode;
msg_decoded = msg_decoded(1:4)

% Two lookup tables for the [7,4] code above:
%
% Error position   Syndrome   Decimal
%       1            111         7
%       2            110         6
%       3            101         5
%       4            011         3
%       5            100         4
%       6            010         2
%       7            001         1
% (No error gives the all-zero syndrome 000.)
%
% 4-bit word   Codeword   Weight
%   0000       0000000      0
%   0001       0001011      3
%   0010       0010101      3
%   0011       0011110      4
%   0100       0100110      3
%   0101       0101101      4
%   0110       0110011      4
%   0111       0111000      3
%   1000       1000111      4
%   1001       1001100      3
%   1010       1010010      3
%   1011       1011001      4
%   1100       1100001      3
%   1101       1101010      4
%   1110       1110100      4
%   1111       1111111      7
%
% For a linear code, the minimum distance equals the minimum nonzero
% codeword weight, which is 3 here. The exclusive-or of any two codewords
% gives another codeword.
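The codeword table and the minimum-distance claim can be cross-checked with a short script. The following is a Python sketch (independent of the MATLAB listing; variable names are my own): it enumerates all 16 codewords of the [7,4] code, confirms that the minimum nonzero weight, and hence dmin, is 3, and checks closure under XOR.

```python
from itertools import product

# Generator rows of the [7,4] code: G = [I_4 | A], with A as in the listing above.
A = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1]]
G = [[1 if r == c else 0 for c in range(4)] + A[r] for r in range(4)]

# Enumerate all 16 codewords: msg * G (mod 2).
codewords = set()
for msg in product([0, 1], repeat=4):
    cw = tuple(sum(msg[r] * G[r][c] for r in range(4)) % 2 for c in range(7))
    codewords.add(cw)

# Minimum distance of a linear code = minimum nonzero codeword weight.
dmin = min(sum(cw) for cw in codewords if any(cw))
print("dmin =", dmin)  # prints: dmin = 3

# Linearity check: the XOR of any two codewords is again a codeword.
closed = all(tuple(a ^ b for a, b in zip(u, v)) in codewords
             for u in codewords for v in codewords)
print("closed under XOR:", closed)  # prints: closed under XOR: True
```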

OUTPUT WAVEFORM -

RESULT – The MATLAB program was executed and the output waveform was obtained.

EXPERIMENT – 06

OBJECT- Generate an equiprobable random binary information sequence of length 15. Determine
the output of the convolutional encoder shown below for this sequence.

APPARATUS – MATLAB SOFTWARE.


MATLAB CODE –
message = [1 0 1 0 1];
% message = randi([0 1],1,15);  % equiprobable random binary sequence of length 15
enco_mem = [0 0 0];             % encoder memory (shift register)
encoded_seq = zeros(length(message),2);

% First output pair (memory still cleared)
temp1 = xor(message(1,1),enco_mem(1));
temp2 = xor(enco_mem(2),enco_mem(3));
o1 = xor(temp1,temp2);          % generator polynomial = 1111
o2 = xor(temp1,enco_mem(3));    % generator polynomial = 1101
encoded_seq(1,1) = o1;
encoded_seq(1,2) = o2;

msg_len = length(message);
for i = 2:msg_len
    % Shift the register by one message bit
    enco_mem(1,3) = enco_mem(1,2);
    enco_mem(1,2) = enco_mem(1,1);
    if (i <= msg_len)
        enco_mem(1,1) = message(1,i-1);
    else
        enco_mem(1,1) = 0;
    end
    temp1 = xor(message(1,i),enco_mem(1));
    temp2 = xor(enco_mem(2),enco_mem(3));
    o1 = xor(temp1,temp2);      % generator polynomial = 1111
    o2 = xor(temp1,enco_mem(3)); % generator polynomial = 1101
    encoded_seq(i,1) = o1;
    encoded_seq(i,2) = o2;
end
encoded_seq                     % display the encoded output
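The same encoder logic can be cross-checked with a compact Python sketch (the function name `conv_encode` is my own; the logic mirrors the MATLAB loop: generators 1111 and 1101, three memory cells shifted by one message bit per step):

```python
def conv_encode(message):
    """Rate-1/2 convolutional encoder with generator polynomials 1111 and
    1101, mirroring the MATLAB listing (memory holds the last 3 input bits)."""
    m1 = m2 = m3 = 0              # encoder memory, initially cleared
    encoded = []
    for i, b in enumerate(message):
        if i > 0:                 # shift the register by one input bit
            m3, m2, m1 = m2, m1, message[i - 1]
        t1 = b ^ m1
        t2 = m2 ^ m3
        encoded.append(t1 ^ t2)   # output of generator 1111
        encoded.append(t1 ^ m3)   # output of generator 1101
    return encoded

print(conv_encode([1, 0, 1, 0, 1]))
# prints: [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
```

For the 15-bit random sequence in the commented-out `randi` line, the encoder emits 30 output bits (two per input bit).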

OUTPUT WAVEFORM -

RESULT – The MATLAB program was executed and the output waveform was obtained.

EXPERIMENT – 07

OBJECT- Classify ECG signals using Neural networks.

THEORY –
The electrocardiogram (ECG) has become a useful tool for the diagnosis of cardiovascular diseases
as it is fast and non-invasive. It has been reported that about 80% of sudden cardiac deaths are the
result of ventricular arrhythmias or irregular heartbeats. While an experienced cardiologist can
easily distinguish arrhythmias by visually referencing the morphological pattern of the ECG
signals, a computer-oriented approach can effectively reduce the diagnostic time and would enable
the e-home health monitoring of cardiovascular disease.

However, realizing such computer-oriented approaches remains challenging due to the time-varying
dynamics and various profiles of ECG signals, which cause the classification precision to vary
from patient to patient, as even for a healthy person, the morphological pattern of their ECG signals
can vary significantly over a short time. To achieve the automatic classification of ECG signals,
scientists have proposed several methods to automatically classify heartbeats, including the Fourier
transform, principal component analysis, wavelet transform, and the hidden Markov method.
Moreover, machine learning methods, such as artificial neural networks (ANNs), support vector
machines (SVMs), least squares support vector machines (LSSVMs), particle swarm optimization
support vector machines (PSO-SVMs), particle swarm optimization radial basis functions (PSO-
RBFs), and extreme learning machines (ELMs), have also been developed for the accurate
classification of heartbeats. However, there are some drawbacks to these classification methods.
For example, expert systems require a large amount of prior knowledge, which may vary for
different patients. Another problem lies in the manual feature selection of the heartbeat signal for
some machine learning methods. ECG feature extraction is a key technique for heartbeat
recognition, which is used to select a representative feature subset from the raw ECG signal.
Subsets are selected as they are easier to generalize, which will improve the accuracy of ECG
heartbeat classification.

However, manual selection may result in the loss of information. Moreover, methods like the PCA
and Fourier transform may increase the complexity and computational time required to identify a
solution. As the number of patients increases, the classification accuracy will decrease due to the
large variation in the patterns of the ECG signals among different patients. Convolutional neural
networks (CNNs) are useful tools that have been used in pattern recognition applications, such as
the classification of handwriting and object recognition in large archives. The connectivity between
neurons in a CNN is similar to the organization of the visual cortex in animals, which makes CNNs
superior to other methods in recognizing the pattern and structure of items. CNNs also provide
a number of advantages over conventional classification techniques in biomedical applications.
For example, they are widely used in skin cancer diagnosis, animal behaviour classification,
protein structure prediction, and electromyography (EMG) signal classification. In addition, CNNs
have also been used for ECG classification. For example, Keranas et al. proposed a one-dimensional (1D) CNN for real-time patient-specific ECG classification. CNNs are a specialized
type of neural network with an inherent grid-like topology for processing input data in which
nearby entries are correlated, such as those in a two-dimensional (2D) image. In this paper, we
explored the 2D approach for ECG classification with CNN. An information fusion vector of
heartbeats is transformed into a binary image via one-hot encoding. Such images will capture the
morphology of a single heartbeat as well as the temporal relationship between adjacent beats and
will be used as the 2D input to a CNN. To accelerate the convergence of learning, a per-dimension
learning rate method for gradient descent called ADADELTA is incorporated into the
CNN. Moreover, to reduce the overfitting of the network, a biased dropout is also included.
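The ADADELTA rule mentioned above adapts a separate step size for each parameter from running averages of squared gradients and squared updates. A minimal single-parameter sketch in Python (not tied to any CNN library; the rho and eps values are common illustrative defaults, not taken from the text):

```python
import math

def adadelta_step(x, grad, Eg2, Edx2, rho=0.95, eps=1e-6):
    """One ADADELTA update for a single parameter.
    Eg2, Edx2: running averages of squared gradients / squared updates."""
    Eg2 = rho * Eg2 + (1 - rho) * grad * grad
    dx = -math.sqrt(Edx2 + eps) / math.sqrt(Eg2 + eps) * grad
    Edx2 = rho * Edx2 + (1 - rho) * dx * dx
    return x + dx, Eg2, Edx2

# Demo: minimize f(x) = x^2 (gradient 2x) starting from x = 3.0.
x, Eg2, Edx2 = 3.0, 0.0, 0.0
for _ in range(500):
    x, Eg2, Edx2 = adadelta_step(x, 2 * x, Eg2, Edx2)
print(x)  # the iterate has moved toward the minimum at 0
```

Note there is no global learning rate to tune: the ratio of the two running averages sets the effective per-dimension step size, which is why it pairs well with deep networks whose gradients vary widely in scale across layers.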

RESULT – Hence, we have studied the basic approach to classifying ECG signals using neural
networks.

EXPERIMENT – 08

OBJECT- Perform feature extraction from a given image and use principal components as image
descriptors.

THEORY –
Principal Component Analysis (PCA) is a feature extraction technique. It is also called the
Karhunen–Loève transform. The aim of the principal component transform is to transform a
correlated set of variables into an orthogonal set of variables called principal components. It
reveals the internal structure of the data in the way that best explains the variance in the data.

Algorithm and its visualization -

● 3D Column Vector X:
• An RGB image having 3 components can be treated as a unit by expressing each group of three corresponding pixels as a vector Xi, representing a pixel i.
• The 3-D column vector Xi for a pixel = [r g b]', where r, g and b are the respective pixel values of the RGB components of the image.
• Similarly, this extends to spectral band components, where instead of R, G and B we have b1, b2, ..., bn, the respective pixel values of the spectral band image components.
• For PCA, the input vector X is a vector of the vectors Xi.
• For an m x n image, the vector X will be of length mn x number of bands.

Fig- RGB components of an image



● Mean vector m:
Find the mean of the vector X over the bands.

● Covariance Matrix Cx:
Calculate the covariance matrix of the vector X, denoted by Cx.

● Eigen values and Eigen vectors:
Calculate the eigenvalues and eigenvectors of the covariance matrix Cx.
Arrange the eigenvectors in decreasing order of their respective eigenvalues.

● Transformation matrix A:
Generate a transformation matrix A using the eigenvectors as the rows.



● Transformed matrix Y:
The transformed matrix Y = A*(X-m).
Rearrange Y to obtain the principal components of the image (Matrix PCIM). The first component
of matrix PCIM will correspond to the maximum variance. The second component of matrix PCIM
will correspond to the second highest variance and so on.
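The steps above (mean vector, covariance matrix, eigen decomposition, Y = A*(X - m)) can be sketched for the two-band case, where the 2x2 eigenproblem has a closed form. This is a Python illustration with made-up sample data, not part of the MATLAB program below:

```python
import math

# Toy two-band "pixel" data (made-up values): rows are pixels, columns are bands.
X = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2),
     (3.1, 3.0), (2.3, 2.7), (2.0, 1.6), (1.0, 1.1)]
n = len(X)

# 1) Mean vector m over each band.
m = [sum(p[j] for p in X) / n for j in range(2)]

# 2) Covariance matrix Cx (sample covariance, divisor n - 1).
def cov(a, b):
    return sum((p[a] - m[a]) * (p[b] - m[b]) for p in X) / (n - 1)
cxx, cyy, cxy = cov(0, 0), cov(1, 1), cov(0, 1)

# 3) Eigenvalues of the 2x2 covariance matrix, in decreasing order.
tr, det = cxx + cyy, cxx * cyy - cxy * cxy
disc = math.sqrt(tr * tr / 4 - det)
l1, l2 = tr / 2 + disc, tr / 2 - disc

# Unit eigenvector for l1; the second is its 90-degree rotation.
v1 = (cxy, l1 - cxx)
norm = math.hypot(*v1)
v1 = (v1[0] / norm, v1[1] / norm)
v2 = (-v1[1], v1[0])

# 4) Transformation matrix A has the eigenvectors as rows; Y = A*(X - m).
Y = [(v1[0] * (p[0] - m[0]) + v1[1] * (p[1] - m[1]),
      v2[0] * (p[0] - m[0]) + v2[1] * (p[1] - m[1])) for p in X]

# The first principal component carries the larger variance (equal to l1).
var1 = sum(y[0] ** 2 for y in Y) / (n - 1)
var2 = sum(y[1] ** 2 for y in Y) / (n - 1)
print(var1 > var2)  # prints: True
```

For images with more bands, the same steps apply with an n-band covariance matrix and a numerical eigen solver, which is exactly what the MATLAB program below does with `cov` and `eig`.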

MATLAB CODE –

%% PROJECT 2: PRINCIPAL COMPONENT ANALYSIS
% By Chetan Rao, Namrata Baranwal
% Image Processing ECES 682

clear all; close all;

%% Read band images
%image1 = imread('PIA15260.jpg');
%image1 = imread('Sunimsal.jpg');
%image1 = imread('gravitational lensing.jpg');
image1(:,:,1) = imread('1.tif');
image1(:,:,2) = imread('2.tif');
image1(:,:,3) = imread('3.tif');
image1(:,:,4) = imread('4.tif');
image1(:,:,5) = imread('5.tif');
image1(:,:,6) = imread('6.tif');

%% Plot the band components
subplot(2,3,1); imshow(image1(:,:,1)); title('B1 Component')
subplot(2,3,2); imshow(image1(:,:,2)); title('B2 Component')
subplot(2,3,3); imshow(image1(:,:,3)); title('B3 Component')
if (size(image1,3) == 6)
    subplot(2,3,4); imshow(image1(:,:,4)); title('B4 Component')
    subplot(2,3,5); imshow(image1(:,:,5)); title('B5 Component')
    subplot(2,3,6); imshow(image1(:,:,6)); title('B6 Component')
end

%% 3D column vector x (one row per pixel, one column per band)
image(:,1) = image1(1,:,1);
image(:,2) = image1(1,:,2);
image(:,3) = image1(1,:,3);
if (size(image1,3) == 6)
    image(:,4) = image1(1,:,4);
    image(:,5) = image1(1,:,5);
    image(:,6) = image1(1,:,6);
end
for i = 2:size(image1,1)
    im = image1(i,:,:);
    imtemp(:,:) = im(1,:,:);
    im = imtemp;
    image = [image; im];
end
x = image;
clear imtemp image;

%% Mean vector
m = mean(x);
M = repmat(m',1,length(x));

%% Covariance matrix
c = cov(double(x));

%% Eigenvectors and eigenvalues (sorted in ascending order of eigenvalue)
[evc, eic] = eig(c);
[trash, I] = sort(sum(eic));
evc = evc(:,I);

%% Transformation matrix
A = evc';

%% Transformed PCs
y = A*(double(x)' - M);

%% Image principal components
j = 1; k = 1;
for i = 1:length(y)
    ytemp = y(:,i);
    pcim(k,j,:) = fliplr(ytemp');  % flip so PC 1 has the largest variance
    j = j + 1;
    if (mod(i,size(image1,2)) == 0)
        j = 1; k = k + 1;
    end
end
clear ytemp;

%% Plot principal components
figure
subplot(2,3,1); imshow(pcim(:,:,1),[-128,128]); title('Principal Component 1')
subplot(2,3,2); imshow(pcim(:,:,2),[-128,128]); title('Principal Component 2')
subplot(2,3,3); imshow(pcim(:,:,3),[-128,128]); title('Principal Component 3')
if (size(image1,3) == 6)
    subplot(2,3,4); imshow(pcim(:,:,4),[-128,128]); title('Principal Component 4')
    subplot(2,3,5); imshow(pcim(:,:,5),[-128,128]); title('Principal Component 5')
    subplot(2,3,6); imshow(pcim(:,:,6),[-128,128]); title('Principal Component 6')
end

%% Regenerate using only a few principal components
newA = A; newY = y;
% The eigenvalues were sorted in ascending order, so the first rows hold the
% low-variance components; zeroing them keeps only the dominant PCs.
newY(1,:) = zeros(1,length(y)); newA(1,:) = zeros(1,length(A));
newY(2,:) = zeros(1,length(y)); newA(2,:) = zeros(1,length(A));
if (size(image1,3) == 6)
    newY(3,:) = zeros(1,length(y)); newA(3,:) = zeros(1,length(A));
    newY(4,:) = zeros(1,length(y)); newA(4,:) = zeros(1,length(A));
end
py = newA'*newY + M;
j = 1; k = 1;
for i = 1:length(py)
    ytemp = py(:,i);
    pcoim(k,j,:) = ytemp;
    j = j + 1;
    if (mod(i,size(image1,2)) == 0)
        j = 1; k = k + 1;
    end
end

%% Plot reconstructed images using the retained principal components
figure
subplot(2,3,1); imshow(uint8(pcoim(:,:,1))); title('Reconstructed B1')
subplot(2,3,2); imshow(uint8(pcoim(:,:,2))); title('Reconstructed B2')
subplot(2,3,3); imshow(uint8(pcoim(:,:,3))); title('Reconstructed B3')
if (size(image1,3) == 6)
    subplot(2,3,4); imshow(uint8(pcoim(:,:,4))); title('Reconstructed B4')
    subplot(2,3,5); imshow(uint8(pcoim(:,:,5))); title('Reconstructed B5')
    subplot(2,3,6); imshow(uint8(pcoim(:,:,6))); title('Reconstructed B6')
end

RESULT – Hence, we have performed feature extraction from a given image and used principal
components as image descriptors.
