Deep Wavelet Network For Image Classification
Salwa Said1,2, Olfa Jemai1,2, Salima Hassairi2, Ridha Ejbali2, Mourad Zaied2 and Chokri Ben Amar2
1 Higher Institute of Computer Science and Multimedia, University of Gabes, Tunisia
2 REGIM-Lab: REsearch Groups in Intelligent Machines, University of Sfax
salwa.said@ieee.org, olfa.jemai@ieee.org, salima.hassairi.tn@ieee.org,
ridha.ejbali@ieee.org, mourad.zaied@ieee.org, chokri.benamar@ieee.org
Abstract—The success of deep learning, and specifically of learning layer by layer, has led to many impressive results in several contexts that involve neural networks. This gave us the idea to apply this principle of learning to the wavelet network, because it is an active research topic at the moment. This paper presents our approach for image classification through the combination of two learning techniques: the wavelet network and deep learning. We classify images in a supervised way, following an unsupervised learning stage based on the principle of the autoencoder. Experiments on two databases, COIL-100 and MNIST, show that our approach gives good results for the two classifiers that we used.

Index Terms: wavelet network, deep learning, supervised classification, unsupervised learning, autoencoder.

I. INTRODUCTION

Before 2006, we did not know how to train a deep architecture: the iterative optimization converged to local minima of poor quality [1], [2] because of the vanishing gradient problem [3], [4]. Thus, neural networks with more than two hidden layers initialized randomly gave worse results than shallow networks [5].

To avoid this problem, Hinton et al. [6] suggested in 2006 to pre-train each layer so that it learns a good representation of its input. Several possible algorithms for such pre-training are surveyed in [7]. In this way, networks with two or more hidden layers were obtained that not only work better than shallow networks but also beat the state-of-the-art learning algorithms, for example through new methods for unsupervised pre-training [8], [9], [10]. Those who led the breakthrough of Deep Learning (DL) were Hinton, Bengio and LeCun. Hinton used Restricted Boltzmann Machines (RBM) [11] as generative models of several different types of data, such as labeled or unlabeled images [12]. AutoEncoders (AE) were developed by Bengio [13], [14] and have been used for learning efficient codings. Yann LeCun introduced sparse representations for image classification and object recognition [15], [16], [12].

On the other hand, Zhang and Benveniste introduced wavelet networks in 1992 [17], [18]. They result from the combination of two signal processing techniques, the wavelet transform and artificial neural networks, where the activation functions are based on a family of wavelets.

Combining the abilities of the wavelet network and of Deep Learning techniques for image classification is the main objective of our work. In this paper, we propose a novel approach for image classification using wavelet networks and deep learning methods to create a new Deep Wavelet Network. The remaining part of this paper is composed of five sections. The algorithm of our approach is proposed in Section 2. Section 3 gives an overview of the datasets which we used in the classification tests. Then, Section 4 contains the experiments and the test results of our network. Finally, Section 5 summarizes and concludes this paper.

II. PROPOSED APPROACH

Recent work has shown that deep learning methods produce impressive results in several contexts that involve neural networks; this gave us the idea to apply this principle of learning to the wavelet network.

To be precise, in the next subsections we first introduce the theoretical background of our approach. Then, we explain how to use it for classification.

A. Wavelet network

The wavelet network is the result of the combination of wavelets and neural networks [19], [20]. It is made up of three layers (Fig. 1): a first layer with Ni inputs, a hidden layer consisting of Nw wavelets, and an output layer accommodating the weighted outputs of the wavelets, as shown in the following figure:

Fig. 1. Graphic representation of wavelet network

It uses a feed-forward propagation algorithm from the input to the output neurons [21]. Furthermore, its architecture is close to that of neural networks. The main similarity between these two networks is that both compute a linear combination of nonlinear functions whose form depends on the adjustable parameters (dilations and translations) of this combination. However, the major difference between them is the nature of the transfer functions used by the hidden cells [22].
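To make this structure concrete, the hidden cells can be written as dilated and translated copies of a mother wavelet ψ, and the network output as their weighted sum, ŷ(x) = Σ_{i=1..Nw} w_i ψ((x − t_i)/d_i), in the style of the Zhang–Benveniste construction [17], [18]. The short NumPy sketch below illustrates a forward pass of this form; the Mexican-hat mother wavelet, the product construction for multidimensional inputs, and all names and parameter values are our illustrative assumptions, not details taken from this paper:

```python
import numpy as np

def mexican_hat(u):
    """Mexican-hat (Ricker) mother wavelet, one common choice of psi."""
    return (1.0 - u**2) * np.exp(-u**2 / 2.0)

def wavelet_network_forward(x, translations, dilations, weights):
    """Forward pass y = sum_i w_i * psi((x - t_i) / d_i) per sample.
    x: (n_samples, Ni); translations, dilations: (Nw, Ni); weights: (Nw,)."""
    # One activation per (sample, wavelet): dilate/translate each input
    # coordinate, apply the mother wavelet, then combine coordinates.
    u = (x[:, None, :] - translations[None, :, :]) / dilations[None, :, :]
    hidden = mexican_hat(u).prod(axis=2)   # multidimensional wavelet as a product
    return hidden @ weights                # weighted output layer

# Tiny usage example with random parameters (illustrative only).
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                # 5 samples, Ni = 3 inputs
t = rng.normal(size=(4, 3))                # Nw = 4 hidden wavelets
d = np.ones((4, 3))
w = rng.normal(size=4)
print(wavelet_network_forward(x, t, d, w)) # 5 scalar outputs
```

In a learning phase, the weights w_i and the wavelet parameters t_i and d_i would be adjusted together, which is precisely the structural parallel with classical neural networks noted above.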
B. Autoencoder

An autoencoder is a neural network trained to predict its own inputs (x = x'). It aims to minimize the reconstruction error of the input data, as shown in the figure below (a minimal code sketch is also given at the end of this section):

• Step 6: Repeat steps 3, 4 and 5 according to the desired number of hidden layers.

After the learning phase, we organize the obtained vectors in a matrix in which each column represents a picture, so that we can apply the classification phase. The figure below illustrates the steps of the creation of our network:
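As mentioned above, a minimal sketch can make the reconstruction objective and the final matrix organization concrete. The single-hidden-layer, tied-weight NumPy autoencoder below is an illustrative assumption of ours (including the sigmoid activation, the learning rate and every function name), not the implementation used in this paper; it minimizes the squared reconstruction error ||x − x'||² and then stacks the learned codes so that each column of the resulting matrix represents one picture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """Tied-weight autoencoder: h = sigmoid(x W^T), x' = sigmoid(h W),
    trained by gradient descent on the squared reconstruction error."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_hidden, X.shape[1]))
    for _ in range(epochs):
        H = sigmoid(X @ W.T)               # encode: hidden representation
        Xr = sigmoid(H @ W)                # decode: reconstruction x'
        err = Xr - X                       # error to be minimized
        # Backpropagate through decoder and encoder (shared weights W).
        d_dec = err * Xr * (1 - Xr)        # (n_samples, n_inputs)
        d_enc = (d_dec @ W.T) * H * (1 - H)
        grad = H.T @ d_dec + d_enc.T @ X   # total gradient w.r.t. W
        W -= lr * grad / X.shape[0]
    return W

def encode(X, W):
    return sigmoid(X @ W.T)

# Illustrative usage: learn codes for a batch of flattened images, then
# organize them in a matrix whose columns each represent one picture.
X = np.random.default_rng(1).random((10, 64))  # 10 fake 8x8 images
W = train_autoencoder(X, n_hidden=16)
codes = encode(X, W)                           # (10, 16)
features = codes.T                             # one column per picture
print(features.shape)                          # (16, 10)
```

Repeating the encode-and-retrain cycle on the codes themselves, as in Step 6, is what would deepen the network layer by layer before the classification phase.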