
Extensive search on open source libraries and datasets suitable for training and modelling

A blurry image can be generated either by convolving a sharp image with a uniform blur kernel or by averaging consecutive sharp frames. In the frame-averaging case, the blur is mostly caused by the person's motion and the background is left as it is; the effective blur kernel is non-uniform and complex in shape. When the blurry image is instead synthesized by convolution with a uniform kernel, the background also gets blurred, as if the blur were caused by camera shake. Modelling dynamic scene blur therefore requires a kernel-free method.
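A minimal sketch of the two synthesis strategies, assuming NumPy/SciPy and float images in [0, 1] with shape H × W × 3 (the kernel size and helper names are illustrative, not from the source):

import numpy as np
from scipy.ndimage import convolve

def blur_with_uniform_kernel(sharp, ksize=15):
    # Uniform (box) kernel: every pixel, including the static background,
    # is blurred as if the whole camera had moved.
    kernel = np.ones((ksize, ksize), dtype=np.float32) / (ksize * ksize)
    return np.stack([convolve(sharp[..., c], kernel, mode='reflect')
                     for c in range(sharp.shape[-1])], axis=-1)

def blur_by_averaging(frames):
    # Averaging consecutive sharp frames: only regions that actually move
    # (e.g. the person) get blurred; the static background stays sharp.
    return np.mean(np.stack(frames, axis=0), axis=0)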
Convolutional neural networks (CNNs) date back decades [15] and have recently shown explosive popularity, partly due to their success in image classification [14]. Several factors are of central importance in this progress, among them the efficient training implementation on modern, powerful GPUs.

In single image deblurring, the “coarse-to-fine” scheme, i.e. gradually restoring the sharp image at different resolutions in a pyramid, is very successful in both traditional optimization-based methods and recent neural-network-based approaches. In this paper, we investigate this strategy and propose a Scale-Recurrent Network (SRN-DeblurNet) for this deblurring task. Compared with the many recent learning-based approaches in [25], it has a simpler network structure, a smaller number of parameters, and is easier to train. We evaluate our method on large-scale deblurring datasets with complex motion. Results show that our method can produce better-quality results than the state of the art, both quantitatively and qualitatively.

To avoid border effects during training, all the convolutional layers have no padding, and the network produces a smaller output (20 × 20). The MSE loss is evaluated only on the difference between the central 20 × 20 crop of Xi and the network output. In processing test images, the convolutional neural network can be applied to images of arbitrary size.
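A minimal PyTorch sketch of this training loss, assuming a network net with unpadded convolutions that maps each input patch to a 20 × 20 output (the function names are placeholders):

import torch
import torch.nn.functional as F

def central_crop(x, size=20):
    # x: (N, C, H, W); keep the central size x size region.
    _, _, h, w = x.shape
    top, left = (h - size) // 2, (w - size) // 2
    return x[:, :, top:top + size, left:left + size]

def patch_mse_loss(net, blurry_patch, sharp_patch):
    out = net(blurry_patch)                 # (N, C, 20, 20) because of no padding
    target = central_crop(sharp_patch, 20)  # compare only the central crop of Xi
    return F.mse_loss(out, target)

Because the network is fully convolutional, the same net can be run on full-size test images without cropping.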
To create a large training dataset, learning-based methods either synthesize blurred images by convolving sharp latent images with real or generated uniform or non-uniform blur kernels, or generate blurred images by averaging consecutive short-exposure frames from videos captured with high-speed cameras such as the GoPro Hero 4 Black, approximating long-exposure blurry frames. The averaged frames are more realistic, since they capture the complex camera shake and object motion that are common in real photographs.
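A minimal sketch of pair generation by frame averaging, assuming frames is a list of consecutive short-exposure frames as float arrays in [0, 1]; the window length of 7 is an illustrative choice, not a value from the source:

import numpy as np

def make_blur_sharp_pairs(frames, n=7):
    pairs = []
    for start in range(0, len(frames) - n + 1, n):
        window = np.stack(frames[start:start + n], axis=0)
        blurry = window.mean(axis=0)    # approximates a long-exposure blurry frame
        sharp = frames[start + n // 2]  # middle frame serves as the ground truth
        pairs.append((blurry, sharp))
    return pairs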

Recent work has started to use end-to-end trainable networks for deblurring images and videos, achieving state-of-the-art results with a multi-scale convolutional neural network (CNN). This method starts from a very coarse scale of the blurry image and progressively recovers the latent image at higher resolutions until the full resolution is reached. The framework follows the multi-scale mechanism of traditional methods, where coarse-to-fine pipelines are common when handling large blur kernels.
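A minimal sketch of this coarse-to-fine pipeline, assuming a list of per-scale restoration networks scale_nets (coarsest first), each a placeholder module that takes the blurry image at its resolution concatenated with the upsampled estimate from the coarser scale:

import torch
import torch.nn.functional as F

def multiscale_deblur(scale_nets, blurry):
    num_scales = len(scale_nets)
    # Pyramid of the blurry input, coarsest scale first.
    pyramid = [F.interpolate(blurry, scale_factor=0.5 ** i, mode='bilinear',
                             align_corners=False)
               for i in reversed(range(num_scales))]
    estimate = pyramid[0]  # start from the downsampled blurry image
    for net, level in zip(scale_nets, pyramid):
        estimate = F.interpolate(estimate, size=level.shape[-2:],
                                 mode='bilinear', align_corners=False)
        estimate = net(torch.cat([level, estimate], dim=1))
    return estimate  # full-resolution latent image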

The Scale-Recurrent Network (SRN) effectively improves performance and output quality, eliminating common issues in CNN-based deblurring systems.

This new network structure has fewer parameters than older multi-scale deblurring networks, is simpler, and is easier to train.
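One reason for the smaller parameter count is that the same encoder-decoder weights are reused at every scale, with a recurrent hidden state passed between scales. A minimal sketch, where shared_net is a placeholder for that shared module (its inputs, outputs and hidden state are assumptions about the interface, not the paper's exact architecture):

import torch
import torch.nn.functional as F

def srn_forward(shared_net, blurry, num_scales=3):
    pyramid = [F.interpolate(blurry, scale_factor=0.5 ** i, mode='bilinear',
                             align_corners=False)
               for i in reversed(range(num_scales))]
    estimate, hidden = pyramid[0], None
    outputs = []
    for level in pyramid:
        estimate = F.interpolate(estimate, size=level.shape[-2:],
                                 mode='bilinear', align_corners=False)
        # The same shared_net (and therefore the same parameters) is applied
        # at every scale; hidden carries information between scales.
        estimate, hidden = shared_net(torch.cat([level, estimate], dim=1), hidden)
        outputs.append(estimate)
    return outputs  # one estimate per scale, supervised against downsampled sharp images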
CRF – camera response function
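The CRF is relevant to the frame-averaging synthesis above: physical blur accumulates in linear irradiance, so frames should be mapped through an (approximate) inverse CRF before averaging and mapped back afterwards. A minimal sketch, assuming a simple gamma curve of 2.2 as the CRF approximation (an assumption, not stated in the source):

import numpy as np

def average_with_crf(frames, gamma=2.2):
    linear = [np.power(f, gamma) for f in frames]          # approximate inverse CRF
    blurry_linear = np.mean(np.stack(linear, axis=0), axis=0)
    return np.power(blurry_linear, 1.0 / gamma)            # re-apply the CRF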

The GoPro dataset for deblurring consists of 3,214 blurred images of size 1,280 × 720, divided into 2,103 training images and 1,111 test images. The dataset consists of pairs of a realistic blurry image and the corresponding ground-truth sharp image, obtained with a high-speed camera.
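A minimal sketch for collecting the blurry/sharp pairs; the directory layout (train/<sequence>/blur and train/<sequence>/sharp with matching file names) and the folder name GOPRO_Large are assumptions about how the dataset is unpacked:

from pathlib import Path

def list_pairs(root):
    pairs = []
    for blur_path in sorted(Path(root).glob('*/blur/*.png')):
        sharp_path = blur_path.parent.parent / 'sharp' / blur_path.name
        if sharp_path.exists():
            pairs.append((blur_path, sharp_path))
    return pairs

train_pairs = list_pairs('GOPRO_Large/train')  # expected: 2,103 pairs
test_pairs = list_pairs('GOPRO_Large/test')    # expected: 1,111 pairs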

REDS dataset
The REalistic and Dynamic Scenes (REDS) dataset for video deblurring and super-resolution provides several dataset variants covering blur, compression, or both.