Generative Neural Networks


Neural Network Types
• Feed-forward NNs - [image/text classification or prediction]
  • Single-layer feed-forward NN
  • Multilayer feed-forward NN
  • Convolutional NN
  • Pretrained NN - trained on a very large dataset, with a large number of layers
• Autoencoder networks - [image/text classification or prediction, denoising, and image compression]
  • Encoder & decoder
  • Learns in an unsupervised way and fine-tunes in a supervised way
• Generative NNs - [image generation]
  • Generator & discriminator
• Recurrent NNs - [many-to-one, one-to-many, many-to-many]
  • LSTM, GRU, bidirectional
• Neural Style Transfer
• DeepDream
• DCGAN - Deep Convolutional Generative Adversarial Network
• Pix2Pix
• CycleGAN
• Adversarial FGSM (Fast Gradient Sign Method)
• Privacy attack (inference attack)
• Poisoning
• Trojaning
• Backdooring
Neural Style Transfer
• This tutorial uses deep learning to compose one image in the style of
another image (ever wish you could paint like Picasso or Van Gogh?).
• This is known as neural style transfer and the technique is outlined in
A Neural Algorithm of Artistic Style (Gatys et al.).
• Neural style transfer is an optimization technique used to take two
images—a content image and a style reference image (such as an
artwork by a famous painter)—and blend them together so the
output image looks like the content image, but “painted” in the style
of the style reference image.
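The optimization can be sketched in a few lines. Below is a minimal, illustrative sketch in TensorFlow/Keras, assuming a pretrained VGG19 as the feature extractor; the layer names and loss weights are example choices, not the only valid ones, and VGG input preprocessing is omitted for brevity.

```python
import tensorflow as tf

# Illustrative sketch of the core style-transfer losses, assuming a
# pretrained VGG19 feature extractor; layer names and loss weights are
# example choices, and VGG preprocessing is omitted for brevity.
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False

content_layer = 'block5_conv2'  # a deep layer captures content
style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1']
extractor = tf.keras.Model(
    inputs=vgg.input,
    outputs=[vgg.get_layer(n).output for n in [content_layer] + style_layers])

def gram_matrix(features):
    # Style is represented by correlations between feature maps.
    result = tf.einsum('bijc,bijd->bcd', features, features)
    h_times_w = tf.cast(tf.shape(features)[1] * tf.shape(features)[2], tf.float32)
    return result / h_times_w

def train_step(image, content_target, style_targets, opt,
               content_weight=1e4, style_weight=1e-2):
    # `image` is a tf.Variable: the pixels themselves are what we optimize.
    with tf.GradientTape() as tape:
        outputs = extractor(image)
        content_out, style_outs = outputs[0], outputs[1:]
        content_loss = tf.reduce_mean((content_out - content_target) ** 2)
        style_loss = tf.add_n(
            [tf.reduce_mean((gram_matrix(s) - t) ** 2)
             for s, t in zip(style_outs, style_targets)])
        loss = content_weight * content_loss + style_weight * style_loss
    grad = tape.gradient(loss, image)
    opt.apply_gradients([(grad, image)])
    image.assign(tf.clip_by_value(image, 0.0, 1.0))
```

Note the design choice: the network weights are frozen; the variable being optimized is the output image itself, starting from the content image. `content_target` and `style_targets` are the extractor's content features and the Gram matrices of the style image, computed once up front.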
For Example
DeepDream
• DeepDream is an experiment that visualizes the patterns learned by a
neural network. Similar to when a child watches clouds and tries to
interpret random shapes, DeepDream over-interprets and enhances
the patterns it sees in an image.
• It does so by forwarding an image through the network, then
calculating the gradient of the image with respect to the activations
of a particular layer. The image is then modified to increase these
activations, enhancing the patterns seen by the network, and
resulting in a dream-like image. This process was dubbed
"Inceptionism" (a reference to InceptionNet, and the movie
Inception).
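As a rough illustration of the loop just described, here is a hedged sketch using TensorFlow and a pretrained InceptionV3; the layer name 'mixed3' and the step size are illustrative choices.

```python
import tensorflow as tf

# Minimal DeepDream sketch: gradient *ascent* on the input image to
# amplify the activations of a chosen layer. InceptionV3 and the layer
# name 'mixed3' are illustrative choices, not the only valid ones.
base = tf.keras.applications.InceptionV3(include_top=False, weights='imagenet')
dream_model = tf.keras.Model(inputs=base.input,
                             outputs=base.get_layer('mixed3').output)

def dream_step(img, step_size=0.01):
    with tf.GradientTape() as tape:
        tape.watch(img)  # img is a plain tensor, so watch it explicitly
        activations = dream_model(img)
        loss = tf.reduce_mean(activations)  # how strongly the layer fires
    gradient = tape.gradient(loss, img)
    gradient /= tf.math.reduce_std(gradient) + 1e-8  # normalize gradient scale
    img = img + step_size * gradient  # ascend the gradient, not descend
    return tf.clip_by_value(img, -1.0, 1.0)  # InceptionV3 input range
```

Running `dream_step` repeatedly on an image implements the "modify the image to increase these activations" loop; deeper or shallower layers produce coarser or finer hallucinated patterns.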
DCGAN
Deep Convolutional Generative Adversarial Network (DCGAN)
• This tutorial demonstrates how to generate images of handwritten digits using a Deep Convolutional Generative Adversarial Network (DCGAN).
• The code is written using the Keras Sequential API with a tf.GradientTape training loop.
What are GANs?
• Generative Adversarial Networks (GANs) are one of the most
interesting ideas in computer science today.
• Two models are trained simultaneously by an adversarial process.
• A generator ("the artist") learns to create images that look real,
• while a discriminator ("the art critic") learns to tell real images apart from
fakes.
• During training, the generator progressively becomes better at
creating images that look real, while the discriminator becomes
better at telling them apart.
• The process reaches equilibrium when the discriminator can no
longer distinguish real images from fakes.
• This notebook demonstrates this process on the MNIST dataset. The
following animation shows a series of images produced by the
generator as it was trained for 50 epochs. The images begin as
random noise, and increasingly resemble handwritten digits over
time.
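A compact sketch of the two models and one adversarial training step, in the spirit of the tutorial above (Keras Sequential models, tf.GradientTape loop); the exact layer sizes and hyperparameters here are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Compact DCGAN sketch for 28x28 MNIST digits; layer sizes and
# hyperparameters are illustrative choices.
def make_generator():
    return tf.keras.Sequential([
        layers.Dense(7 * 7 * 128, use_bias=False, input_shape=(100,)),
        layers.BatchNormalization(), layers.LeakyReLU(),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(64, 5, strides=2, padding='same', use_bias=False),
        layers.BatchNormalization(), layers.LeakyReLU(),
        layers.Conv2DTranspose(1, 5, strides=2, padding='same', activation='tanh'),
    ])

def make_discriminator():
    return tf.keras.Sequential([
        layers.Conv2D(64, 5, strides=2, padding='same', input_shape=(28, 28, 1)),
        layers.LeakyReLU(), layers.Dropout(0.3),
        layers.Conv2D(128, 5, strides=2, padding='same'),
        layers.LeakyReLU(), layers.Dropout(0.3),
        layers.Flatten(),
        layers.Dense(1),  # a single real/fake logit
    ])

generator, discriminator = make_generator(), make_discriminator()
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images, noise_dim=100):
    noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fake_images, training=True)
        # The generator wants fakes judged real; the discriminator disagrees.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
        d_loss = (bce(tf.ones_like(real_logits), real_logits) +
                  bce(tf.zeros_like(fake_logits), fake_logits))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
```

The two losses pull in opposite directions on the same fake logits, which is exactly the adversarial process described above: both models are updated in every step, each against its own optimizer.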
Pix2Pix
• This notebook demonstrates image-to-image translation using conditional GANs, as described in Image-to-Image Translation with Conditional Adversarial Networks.
• Using this technique we can colorize black-and-white photos, convert Google Maps to Google Earth, etc. Here, we convert building facades to real buildings.
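One detail from the paper worth making concrete: the Pix2Pix generator is trained on a combination of the adversarial loss and an L1 distance to the paired target image, which keeps outputs close to the ground truth. A minimal sketch (the weight LAMBDA = 100 follows the paper):

```python
import tensorflow as tf

# Sketch of the Pix2Pix generator loss: adversarial term plus an L1 term
# against the paired target image. LAMBDA = 100 follows the paper.
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
LAMBDA = 100

def generator_loss(disc_fake_logits, generated, target):
    # Adversarial term: push the discriminator to judge fakes as real.
    gan_loss = bce(tf.ones_like(disc_fake_logits), disc_fake_logits)
    # L1 term: keep the translated image close to the ground-truth pair.
    l1_loss = tf.reduce_mean(tf.abs(target - generated))
    return gan_loss + LAMBDA * l1_loss
```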
Below is the output generated after training the model for 200 epochs.
CycleGAN
• This notebook demonstrates unpaired image-to-image translation using conditional GANs, as described in Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, also known as CycleGAN. The paper proposes a method that can capture the characteristics of one image domain and figure out how these characteristics could be translated into another image domain, all in the absence of any paired training examples.
• This notebook assumes you are familiar with Pix2Pix, which you can learn about in the Pix2Pix tutorial. The code for CycleGAN is similar; the main differences are an additional loss function and the use of unpaired training data.
• CycleGAN uses a cycle consistency loss to enable training without the need for paired data. In other words, it can translate from one domain to another without a one-to-one mapping between the source and target domains (see the sketch after this list).
• This opens up the possibility of doing many interesting tasks such as photo enhancement, image colorization, style transfer, etc. All you need is the source and the target dataset (which is simply a directory of images).
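The cycle-consistency idea can be made concrete with a short sketch: a round trip X → Y → X (and Y → X → Y) should reproduce the original image, and the L1 penalty on that round trip is what replaces paired supervision. The generators g and f below are hypothetical stand-ins; LAMBDA = 10 follows the paper.

```python
import tensorflow as tf

# Sketch of CycleGAN's cycle-consistency loss: a round trip X -> Y -> X
# (and Y -> X -> Y) should reproduce the original image; this L1 penalty
# is what replaces paired supervision. LAMBDA = 10 follows the paper.
LAMBDA = 10

def cycle_consistency_loss(real_x, cycled_x, real_y, cycled_y):
    loss_x = tf.reduce_mean(tf.abs(real_x - cycled_x))
    loss_y = tf.reduce_mean(tf.abs(real_y - cycled_y))
    return LAMBDA * (loss_x + loss_y)

# Usage, with two hypothetical generators g: X -> Y and f: Y -> X:
#   cycled_x = f(g(real_x))
#   cycled_y = g(f(real_y))
#   total_loss += cycle_consistency_loss(real_x, cycled_x, real_y, cycled_y)
```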
Adversarial example using FGSM
• This tutorial creates an adversarial example using the Fast Gradient Sign Method (FGSM) attack as described in Explaining and Harnessing Adversarial Examples by Goodfellow et al.
• This was one of the first and most popular attacks to fool a neural
network.
What is an adversarial example?
• Adversarial examples are specialized inputs created with the purpose
of confusing a neural network, resulting in the misclassification of a
given input.
• These notorious inputs are nearly indistinguishable from the original to the human eye, but they cause the network to fail to identify the contents of the image.
• There are several types of such attacks, however, here the focus is on
the fast gradient sign method attack, which is a white box attack
whose goal is to ensure misclassification.
• A white box attack is where the attacker has complete access to the
model being attacked.
Knowledge restriction (White-box, Black-box, Grey-box)
• As in any other type of attack, adversaries may have different
restrictions in terms of knowledge of a target system.
• Black-box method — an attacker can only send information to the
system and obtain a simple result about a class.
• Grey-box methods — an attacker may know details about the dataset or the type of neural network, its structure, the number of layers, etc.
• White-box methods — everything about the network is known
including all weights and all data on which this network was trained.
Here, starting with the image of a panda, the attacker adds small perturbations (distortions)
to the original image, which results in the model labelling this image as a gibbon, with high
confidence. The process of adding these perturbations is explained below.
Fast gradient sign method
• The fast gradient sign method works by using the gradients of the
neural network to create an adversarial example.
• For an input image, the method uses the gradients of the loss with
respect to the input image to create a new image that maximises the
loss.
• This new image is called the adversarial image. This can be
summarized using the following expression:
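The expression referenced above, as given by Goodfellow et al., is:

$$\text{adv}_x = x + \epsilon \cdot \text{sign}\big(\nabla_x J(\theta, x, y)\big)$$

where:
• adv_x : the adversarial image
• x : the original input image
• y : the original input label
• ε : a small multiplier that keeps the perturbations imperceptible
• θ : the model parameters
• J : the loss function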
So let's try to fool a pretrained model. In this tutorial, the model is MobileNetV2, pretrained on ImageNet.
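Below is a minimal sketch of the attack in TensorFlow, following the expression above; the epsilon value and label handling are illustrative, and in practice the image should be preprocessed for MobileNetV2 (inputs scaled to [-1, 1]).

```python
import tensorflow as tf

# Hedged sketch of FGSM against a pretrained MobileNetV2.
# Assumes `image` has shape (1, 224, 224, 3) scaled to [-1, 1] and
# `label` is a one-hot vector of shape (1, 1000).
model = tf.keras.applications.MobileNetV2(include_top=True, weights='imagenet')
model.trainable = False
loss_object = tf.keras.losses.CategoricalCrossentropy()

def create_adversarial_pattern(image, label):
    # Gradient of the loss with respect to the *input image*, not the weights.
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_object(label, prediction)
    gradient = tape.gradient(loss, image)
    return tf.sign(gradient)  # keep only the sign of each pixel's gradient

# Usage: nudge the image in the direction that increases the loss.
# epsilon = 0.01  # illustrative; larger values are more visibly distorted
# perturbation = create_adversarial_pattern(image, label)
# adv_image = tf.clip_by_value(image + epsilon * perturbation, -1.0, 1.0)
```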
Problems of GANs
