
Final year project

Overview
• In our architecture we use a residual neural network (ResNet) to classify different types of arrhythmia.
• The arrhythmia types considered are:
• Normal sinus rhythm
• Atrial fibrillation
• First-degree atrioventricular block
• Left bundle branch block
• Complete right bundle branch block
• Right bundle branch block
• Premature atrial contraction
• Supraventricular premature beats
• Ventricular ectopics
• ST-segment depression
• ST-segment elevation
Why a residual network
In deep neural networks, as the number of layers increases, the gradient becomes smaller and smaller during backpropagation and eventually approaches zero. At that point the network stops learning anything new and no further improvement occurs; this situation is known as the vanishing gradient problem.
To overcome this problem, Microsoft Research introduced the deep residual learning framework (ResNet), in which the input of each block is added to the block's output through a skip connection.
In this way the network retains what it has already learnt, and the skip connections prevent the vanishing gradient problem.
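
As an illustration, a minimal PyTorch-style sketch of such a residual block (the class name, kernel size and layer choices here are placeholders for illustration, not our exact implementation):

import torch.nn as nn

class SkipBlock(nn.Module):
    """Toy residual block: the input is added to the block's output via a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # gradients can flow unchanged through the identity path, easing vanishing gradients
        return self.relu(self.body(x) + x)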
Preprocessing

• Data preprocessing is a technique used to transform raw data into a useful and efficient format. In this model we apply data cleaning: records with missing values are ignored and the remaining records are retained.
• In our model we obtain all the attributes of a signal using the wfdb (WaveForm DataBase) package and create a reference.csv file.

• A sample record's metadata from our dataset:


• {'fs': 500, 'sig_len': 18488, 'n_sig': 12, 'base_date': None,
   'base_time': datetime.time(0, 0, 12),
   'units': ['mV', 'mV', 'mV', 'mV', 'mV', 'mV', 'mV', 'mV', 'mV', 'mV', 'mV', 'mV'],
   'sig_name': ['I', 'II', 'III', 'aVR', 'aVL', 'aVF', 'V1', 'V2', 'V3', 'V4', 'V5', 'V6'],
   'comments': ['Age: 71', 'Sex: Female', 'Dx: 164884008', 'Rx: Unknown', 'Hx: Unknown', 'Sx: Unknown']}
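
A minimal sketch of how such header metadata can be collected into reference.csv with the wfdb package; the data directory and the columns kept are illustrative assumptions, not the exact script:

import glob
import pandas as pd
import wfdb

rows = []
for hea_path in sorted(glob.glob("data/*.hea")):      # hypothetical data directory
    record_name = hea_path[:-4]                       # strip the .hea extension
    header = wfdb.rdheader(record_name)               # fs, sig_len, n_sig, sig_name, comments, ...
    rows.append({
        "record": header.record_name,
        "fs": header.fs,
        "sig_len": header.sig_len,
        "n_sig": header.n_sig,
        "comments": "; ".join(header.comments),       # e.g. 'Age: 71; Sex: Female; Dx: 164884008; ...'
    })

pd.DataFrame(rows).to_csv("reference.csv", index=False)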
Once we obtain the metadata, we keep only the signals diagnosed with one of the cardiac arrhythmia types and store them in a labels.csv file.
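
A hedged sketch of this filtering step, building on the reference.csv layout from the sketch above and assuming the diagnosis codes appear in a 'Dx:' comment as in the sample metadata (the code set shown is a hypothetical placeholder):

import pandas as pd

# Diagnosis codes to keep (placeholder set; 164884008 is the Dx code seen in the sample above)
ARRHYTHMIA_CODES = {"164884008"}

reference = pd.read_csv("reference.csv")

def dx_codes(comment_string):
    # Extract the diagnosis codes from the 'Dx: ...' part of the joined comments
    for part in comment_string.split("; "):
        if part.startswith("Dx:"):
            return {code.strip() for code in part[3:].split(",")}
    return set()

mask = reference["comments"].apply(lambda c: bool(dx_codes(c) & ARRHYTHMIA_CODES))
reference[mask].to_csv("labels.csv", index=False)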

Further, we create 10 folds by partitioning the original training data set into 10 equal subsets; each subset is called a fold.
After creating the folds, we split the dataset into a training set and a validation set in the ratio 80:20.
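
A small sketch of the fold creation and split, assuming the 80:20 split corresponds to 8 folds for training and 2 folds for validation (file and column names are illustrative):

import numpy as np
import pandas as pd

records = pd.read_csv("labels.csv")["record"].to_numpy()   # record names kept after filtering
rng = np.random.default_rng(seed=0)                        # fixed seed so the folds are reproducible
rng.shuffle(records)

folds = np.array_split(records, 10)                        # 10 roughly equal folds

# 80:20 split -> 8 folds for training, 2 folds for validation
train_records = np.concatenate(folds[:8])
val_records = np.concatenate(folds[8:])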
During training, we scale each patient's data individually before feeding it to our model.
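
A sketch of the per-patient scaling, assuming per-lead z-score standardisation (the exact scaling method is not stated on the slide):

import numpy as np
import wfdb

def load_scaled(record_name):
    # Read one patient's 12-lead signal; p_signal has shape (sig_len, n_sig)
    record = wfdb.rdrecord(record_name)
    signal = record.p_signal
    mean = signal.mean(axis=0, keepdims=True)
    std = signal.std(axis=0, keepdims=True) + 1e-8   # guard against flat leads
    return (signal - mean) / std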
Model
• Input: 300 x 300 x 3
• 15x15 convolution, 64 filters, stride 2
• 3x3 max pooling, stride 2
• Residual blocks (4 stages)
• Adaptive average pooling
• Sigmoid output layer, 9 units
Residual Blocks
• Stage 1: blocks of [7x7, 64] + [7x7, 64], x3
• Stage 2: blocks of [7x7, 128] + [7x7, 128], x4, stride 2 at the first block
• Stage 3: blocks of [7x7, 256] + [7x7, 256], x6, stride 2 at the first block
• Stage 4: blocks of [7x7, 512] + [7x7, 512], x3, stride 2 at the first block
• Downsample path: 1x1 convolution on the shortcut wherever the number of filters or the feature-map size changes
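
Putting the diagram together, a hedged PyTorch sketch of the overall model (15x15 stem convolution with 64 filters and stride 2, 3x3 max pooling with stride 2, four residual stages with 3/4/6/3 blocks of 64/128/256/512 filters, adaptive average pooling, and a 9-unit sigmoid output). The layer counts and kernel sizes are read off the reconstructed diagram, so treat this as an approximation rather than the exact implementation:

import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    # 7x7 convolution pair with a shortcut; a 1x1 convolution downsamples the shortcut when needed
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 7, stride=stride, padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 7, padding=3, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = None
        if stride != 1 or in_ch != out_ch:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        identity = x if self.downsample is None else self.downsample(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)

def make_stage(in_ch, out_ch, n_blocks, stride):
    blocks = [BasicBlock(in_ch, out_ch, stride)]
    blocks += [BasicBlock(out_ch, out_ch) for _ in range(n_blocks - 1)]
    return nn.Sequential(*blocks)

class ArrhythmiaResNet(nn.Module):
    def __init__(self, n_classes=9):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=15, stride=2, padding=7, bias=False),  # 15x15, 64, /2
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1))                   # 3x3 max pooling, /2
        self.stage1 = make_stage(64, 64, 3, stride=1)     # [7x7, 64]  x3
        self.stage2 = make_stage(64, 128, 4, stride=2)    # [7x7, 128] x4
        self.stage3 = make_stage(128, 256, 6, stride=2)   # [7x7, 256] x6
        self.stage4 = make_stage(256, 512, 3, stride=2)   # [7x7, 512] x3
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, n_classes)

    def forward(self, x):                                 # x: (batch, 3, 300, 300)
        x = self.stem(x)
        x = self.stage4(self.stage3(self.stage2(self.stage1(x))))
        x = self.pool(x).flatten(1)
        return torch.sigmoid(self.fc(x))                  # multi-label probabilities, 9 classes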
