Denoising of Images Using Autoencoders
AutoEncoders
SUBMITTED BY: Kinjal Sarkar, Sudesha Basu Majumder
INDEX
1. ABSTRACT
   1.1 OBJECTIVE
2. INTRODUCTION
   2.1 LITERATURE SURVEY
3. METHODOLOGY
   3.1 DATASET USED IN THE PROJECT
   3.4 EXPLANATION
4. IMPLEMENTATION
   4.1 FLOWCHART
   4.2 ALGORITHM
   4.3 CODE
5. RESULTS
   5.2 GRAPHS
   5.3 SUMMARY
6. CONCLUSION
7. REFERENCES
ACKNOWLEDGEMENT:
This is to certify that the project work entitled "Denoising of Images Using
AutoEncoders", submitted by Kinjal Sarkar and Sudesha Basu Majumder for
Neural Networks and Fuzzy Control, is a record of bonafide work done under
Prof. Dr. Monica Subashini M. The contents of this project work, in full or
in part, have neither been taken from any other source nor been submitted
for any other CAL course.
1. ABSTRACT:-
1.1 OBJECTIVE
2. INTRODUCTION:-
The progress in the technology of digital photography is remarkable. It can
give black-and-white photographs and videos colour and restore distorted
pictures, which can serve as convenient evidence for forensic purposes.
Computer vision and deep learning techniques make exactly this possible.
Neural networks, and convolutional neural networks in particular, are well
known for their data-modelling methods and approach.
2.1 LITERATURE SURVEY:
The employed work uses an SDAE (Stacked Denoising Autoencoder) to pre-train
the model; a logistic regression (LR) method in the network's final layer is
then exploited to fine-tune the deep network for classification and feature
extraction, combining unsupervised pre-training with supervised fine-tuning.
This produces good discriminability for the classification task. The SDAE
pre-training, in aggregate with the LR fine-tuning and classification
(SDAE-LR), can achieve higher accuracies than the well-known SVM (Support
Vector Machine) classifier, according to the results obtained on ROSIS
hyperspectral data, Hyperion, and AVIRIS. A sketch of this scheme follows
below.
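As an illustration of the SDAE-LR scheme described above, the sketch below shows greedy layer-wise denoising pre-training followed by a logistic-regression (softmax) output layer and supervised fine-tuning. This is not the cited authors' implementation: the random stand-in data, layer sizes, and noise level are all placeholder assumptions, written with the same Keras API used later in this report.

import numpy as np
from keras.layers import Input, Dense, GaussianNoise
from keras.models import Model, Sequential

def pretrain_layer(X, n_hidden, noise_std=0.1, epochs=10):
    # One denoising autoencoder: corrupt the input, reconstruct the clean input.
    inp = Input(shape=(X.shape[1],))
    corrupted = GaussianNoise(noise_std)(inp)  # noise is active only during training
    hidden = Dense(n_hidden, activation="relu")(corrupted)
    recon = Dense(X.shape[1], activation="linear")(hidden)
    dae = Model(inp, recon)
    dae.compile(optimizer="adam", loss="mse")
    dae.fit(X, X, epochs=epochs, batch_size=64, verbose=0)
    return Model(inp, hidden)  # keep only the encoder half

# Stand-in data: 1000 feature vectors with 200 bands, 9 classes
# (placeholders for hyperspectral pixels and their labels).
X = np.random.rand(1000, 200).astype("float32")
y = np.random.randint(0, 9, size=1000)

# Greedy layer-wise pre-training: each layer encodes the previous layer's output.
encoders, feats = [], X
for n_hidden in (128, 64):
    enc = pretrain_layer(feats, n_hidden)
    encoders.append(enc)
    feats = enc.predict(feats, verbose=0)

# Stack the pre-trained encoders, add a logistic-regression (softmax) output
# layer, and fine-tune the whole network with the labels (SDAE-LR).
sdae_lr = Sequential(encoders + [Dense(9, activation="softmax")])
sdae_lr.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                metrics=["accuracy"])
sdae_lr.fit(X, y, epochs=10, batch_size=64, verbose=0)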
3. METHODOLOGY:-
3.1 DATASET USED IN THE PROJECT:-
https://www.kaggle.com/tongpython/cat-and-dog (only the cat images have
been used)
3.4 EXPLANATION:-
The noise function is applied to the entire dataset, so that each row, i.e.
each image, is corrupted. As shown in the figure below, a simple
convolutional denoising autoencoder architecture was employed for modelling
on the corrupted dataset.
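For concreteness, the corruption step can be written as x_noisy = x + e with e ~ N(mean = 50, var = 1024); a minimal stand-alone sketch (the image array here is a random placeholder, not the project data):

import numpy as np

img = np.random.rand(128, 128, 3) * 255       # stand-in for one 128x128 RGB image
gauss = np.random.normal(50, 32, img.shape)   # Gaussian noise, sigma = sqrt(1024) = 32
img_noisy = np.clip(img + gauss, 0, 255)      # corrupted copy, kept in valid pixel range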
4. IMPLEMENTATION:-
4.1 FLOWCHART:
4.2 ALGORITHM:
Step 1: All the required libraries are imported and the dataset is read.
Step 2: The images are resized to 128x128 and converted to NumPy arrays.
Step 3: Gaussian noise is added to every training and test image to produce
the corrupted dataset.
Step 4: The neural network is created. The noisy images are the inputs to
the encoder, and the original clean images serve only as the reconstruction
targets; at no point does the encoder see a clean image, yet the autoencoder
is expected to produce output without any noise.
Step 5: The model is fitted on the data.
4.3 CODE:-
import numpy as np
import pandas as pd
import os
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.layers import (Input, Dense, Conv2D, MaxPooling2D,
                          UpSampling2D, Conv2DTranspose)
from keras.models import Model
from keras.preprocessing import image
cat_train_path = "../input/cat-and-dog/training_set/training_set/cats/"
cat_test_path = "../input/cat-and-dog/test_set/test_set/cats/"
cat_train = []
for filename in os.listdir(cat_train_path):
    if filename.endswith(".jpg"):
        img = image.load_img(cat_train_path + filename, target_size=(128, 128))
        cat_train.append(image.img_to_array(img))
cat_train = np.array(cat_train)
cat_test = []
for filename in os.listdir(cat_test_path):
    if filename.endswith(".jpg"):
        img = image.load_img(cat_test_path + filename, target_size=(128, 128))
        cat_test.append(image.img_to_array(img))
cat_test = np.array(cat_test)
print("cat_train", cat_train.shape)
print("cat_test", cat_test.shape)
def noisy(img_arr):
    # Corrupt one image with additive Gaussian noise: x_noisy = x + e,
    # e ~ N(mean=50, var=1024). The parameter is named img_arr to avoid
    # shadowing the keras `image` module imported above.
    row, col, ch = img_arr.shape
    mean = 50
    var = 1024
    sigma = var ** 0.5  # sigma = 32
    gauss = np.random.normal(mean, sigma, (row, col, ch))
    noisy = img_arr + gauss  # values may leave [0, 255]; clipped before training
    return noisy
cat_train_noisy = []
for img in cat_train:
    noisy_img = noisy(img)
    cat_train_noisy.append(noisy_img)
cat_train_noisy = np.array(cat_train_noisy)
print(cat_train_noisy.shape)
cat_test_noisy = []
for img in cat_test:
    noisy_img = noisy(img)
    cat_test_noisy.append(noisy_img)
cat_test_noisy = np.array(cat_test_noisy)
print(cat_test_noisy.shape)
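# --- Model definition (reconstructed sketch) ---
# The report never shows how cat_AE is built, so the architecture below is
# an assumption, inferred from the bottleneck layer name "CODE" and the
# 16*16*8 encoded shape used further down. Pixel values are also scaled to
# [0, 1] here (another assumption) so that the sigmoid output and binary
# cross-entropy loss are well defined.
cat_train = cat_train / 255.0
cat_test = cat_test / 255.0
cat_train_noisy = np.clip(cat_train_noisy / 255.0, 0.0, 1.0)
cat_test_noisy = np.clip(cat_test_noisy / 255.0, 0.0, 1.0)

input_img = Input(shape=(128, 128, 3))
# Encoder: three 2x downsamplings take 128x128x3 to the 16x16x8 code.
x = Conv2D(32, (3, 3), activation="relu", padding="same")(input_img)
x = MaxPooling2D((2, 2), padding="same")(x)
x = Conv2D(16, (3, 3), activation="relu", padding="same")(x)
x = MaxPooling2D((2, 2), padding="same")(x)
x = Conv2D(8, (3, 3), activation="relu", padding="same")(x)
encoded = MaxPooling2D((2, 2), padding="same", name="CODE")(x)
# Decoder: transposed convolutions upsample back to 128x128x3.
x = Conv2DTranspose(8, (3, 3), strides=2, activation="relu", padding="same")(encoded)
x = Conv2DTranspose(16, (3, 3), strides=2, activation="relu", padding="same")(x)
x = Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
decoded = Conv2D(3, (3, 3), activation="sigmoid", padding="same")(x)

cat_AE = Model(input_img, decoded)
# metrics=["accuracy"] matches the accuracy figures quoted in Section 5.2.
cat_AE.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])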
# Noisy images in, clean images as targets (Step 4 of the algorithm).
history = cat_AE.fit(cat_train_noisy, cat_train,
                     epochs=100,
                     batch_size=32,
                     shuffle=True,
                     validation_data=(cat_test_noisy, cat_test))
cat_AE.save("cat_AE.h5")
get_encoded_cat = Model(inputs=cat_AE.input,
                        outputs=cat_AE.get_layer("CODE").output)
encoded_cat = get_encoded_cat.predict(cat_test)
encoded_cat = encoded_cat.reshape((len(cat_test), 16*16*8))
encoded_cat.shape
reconstructed_cats = cat_AE.predict(cat_test_noisy)
print("Test Accuracy:
{:.2f}%".format(cat_AE.evaluate(np.array(cat_test_noisy),np.array(cat_test))[1
]*100))
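The result images themselves are not reproduced here; the short sketch below (an addition, using the matplotlib import and the variable names from the code above) shows one way to display a few noisy test images next to their denoised reconstructions:

n = 5  # number of test images to display
plt.figure(figsize=(2 * n, 4))
for i in range(n):
    # Top row: noisy input.
    plt.subplot(2, n, i + 1)
    plt.imshow(np.clip(cat_test_noisy[i], 0, 1))
    plt.axis("off")
    # Bottom row: autoencoder reconstruction.
    plt.subplot(2, n, n + i + 1)
    plt.imshow(np.clip(reconstructed_cats[i], 0, 1))
    plt.axis("off")
plt.show()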
5. RESULTS:-
5.2 GRAPHS
● Our training dataset comprises 4001 images and our test dataset comprises
1012 images. The validation-loss curve almost overlaps the training-loss
curve, so we can conclude that the loss while training the model is almost
equal to the loss while testing it; the model is not overfitting.
● After training the model we obtain an accuracy of 79.57%, and on testing
approximately 77%. This is consistent with the jittery validation-accuracy
graph, from which we can infer that the learning rate is relatively high
here. A sketch for reproducing these graphs follows below.
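A minimal sketch (an addition, assuming the training code in Section 4.3 has been run) for reproducing the loss and accuracy graphs from the history object returned by fit:

# Training vs. validation loss.
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()

# Training vs. validation accuracy (keys are "acc"/"val_acc" in older Keras).
plt.plot(history.history["accuracy"], label="training accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()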
5.3 SUMMARY
6. CONCLUSION:-
7. REFERENCES:-
[8] Xie, Junyuan, Linli Xu, and Enhong Chen. "Image denoising and inpainting
with deep neural networks." In Advances in Neural Information Processing
Systems, pp. 341-349. 2012.
[9] https://www.cs.toronto.edu/~kriz/cifar.html
[10] https://blog.keras.io/building-autoencoders-in-keras.html