Blind Image Deconvolution Using Deep Generative Priors: Muhammad Asim, Fahad Shamshad, and Ali Ahmed
I. INTRODUCTION

prior models outlined above. The introduction of such deep generative priors in image deblurring should enable far more effective regularization, yielding sharper and better quality deblurred images. Our experiments in Figure 1 confirm this hypothesis, and show that embedding pretrained deep generative priors in an iterative scheme for image deblurring produces promising results on standard datasets of face images and house numbers. Some of the blurred faces are almost unrecognizable by the naked eye due to extravagant blurs, yet the recovery algorithm deblurs these images near perfectly with the assistance of generative priors.

This paper shows that an alternating gradient descent scheme assisted with generative priors is able to recover the blur kernel k, and a visually appealing and sharp approximation of the true image i, from the blurry y. Specifically, the algorithm searches for a pair (î, k̂) in the range of respective pretrained generators of images and blur kernels that explains the blurred image y. Implicitly, the generative priors aggressively regularize the alternating gradient descent algorithm to produce a sharp and clean image. Since the range of the generative models can be traversed by much lower dimensional latent representations compared to the ambient dimension of the images, this not only reduces the number of unknowns in the deblurring problem but also allows for an efficient implementation of gradients in this lower dimensional space using back propagation through the generators.

The numerical experiments show that, in general, the deep generative priors yield better deblurring results compared to the conventional image priors studied extensively in the literature. Compared to end-to-end deep learning based frameworks, our approach explicitly takes into account the knowledge of the forward map (convolution) in the gradient descent algorithm to achieve robust results. Moreover, our approach does not require expensive retraining of every deep network involved in case of partial changes in the blur problem specifications, such as alterations in the blur kernels or noise models; in the former case, we only have to retrain the blur generative model (a shallow network, and hence easy to train), and in the latter case no retraining is needed.

We have found that strictly constraining the recovered image to lie in the generator range can often be counterproductive, owing to the limitation of generative models to faithfully learn the image distribution: reasons may include mode collapse and convergence issues, among others. We therefore investigate a modification of the loss function that allows the recovered images some leverage/slack to deviate from the range of the generator. This modification effectively addresses the performance limitation due to the range of the generator.

Another important contribution of this work is to show that even untrained deep convolutional generative networks can act as a good image prior in blind image deblurring. This learning-free prior ability suggests that some of the image statistics are captured by the network structure alone. The weights of the untrained network are initialized using a single blurred image. The fact that untrained generative models can act as good priors [15] allows us to elevate the performance of our algorithm on rich image datasets that are currently not effectively learned by the generative models.

The rest of the paper is organized as follows. In Section II, we give an overview of the related work. We formulate the problem in Section III, followed by our proposed alternating gradient descent algorithms in Section IV. Section V contains experimental results, followed by concluding remarks in Section VI.

II. RELATED WORK

Blind image deblurring is a well-studied topic and, in general, various priors/regularizers exploiting the structure of an image or blur kernel are introduced to address the ill-posedness. These natural structures expect images or blur kernels to be sparse in some transform domain; see, for example, [3], [4], [16]–[19]. Another approach [17], [18] is to learn an overcomplete dictionary for sparse representations of image patches. The inverse problem is regularized to favor sparsely representable image patches in the learned dictionary. On the other hand, we learn a more powerful non-linear mapping (generative network) from full size images to lower-dimensional feature vectors. The inverse problem is then regularized by constraining the images to the range of the generator. Some of the other penalty functions that improve the conditioning of the blind image deblurring problem are low-rank [8] and total variation [20] based priors. A recently introduced dark channel prior [21] also shows promising results; it assumes a sparse structure on the dark channel of the image, and exploits this structure in an optimization program [22] for the inverse problem. Other works include extreme channel priors [23], outlier robust deblurring [24], learned data fitting [25], and discriminative prior based blind image deblurring approaches [26].

Our approach bridges the gap between the conventional iterative schemes and recent end-to-end deep networks for image deblurring [10], [11], [27]–[31]. The iterative schemes are generally adaptable to new images/blurs and other modifications in the model, such as noise statistics. Comparatively, the end-to-end approaches above break down under any such changes that are not reflected in the training data, and require a complete retraining of the network. Our approach combines some of the benefits of both these paradigms by employing powerful generative neural networks in an iterative scheme as priors that are already familiar with the images and blurs under consideration. These neural networks help restrict the candidate solutions to come from the learned or familiar images and blur kernels only. A change in, for example, the blur kernel model only requires retraining a shallow network of blurs while keeping the image generative network and the rest of the iterative scheme unchanged. Similarly, a change in noise statistics is handled in a completely adaptable manner, as in classical iterative schemes. Retaining this adaptability while maintaining a strong influence of the powerful neural networks is an important feature of our algorithm.

Neural network based implicit generative models such as generative adversarial networks (GANs) [32] and variational autoencoders (VAEs) [33] have found much success in modeling complex data distributions, especially that of images. Recently, GANs and VAEs have been used for blind image deblurring, but only in an end-to-end manner, which is completely
Fig. 2: Naive Deblurring. We gradually increase the blur size from left to right and demonstrate the failure of naive deblurring by finding the closest image in the range of the image generator (last row) to the blurred image.
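The naive baseline in Fig. 2 amounts to projecting the blurry image onto the range of the image generator. As an illustrative sketch only, here a toy linear map stands in for GI (an assumption; the real generator is nonlinear, and the projection there requires iterative optimization over the latent code), so the projection reduces to least squares:

```python
import numpy as np

# Naive "closest image in the range" baseline: project the blurry image y
# onto the range of a toy linear stand-in for G_I. The linear model and
# dimensions are illustrative assumptions, not the paper's setup.
rng = np.random.default_rng(4)
n, l = 16, 4
A_I = rng.standard_normal((n, l))   # hypothetical stand-in for G_I

def project_to_range(y):
    # argmin_z ||y - A_I z||^2, solved in closed form by least squares
    z_hat, *_ = np.linalg.lstsq(A_I, y, rcond=None)
    return A_I @ z_hat
```

As Fig. 2 shows, for severe blurs this closest-in-range image remains far from the sharp original, motivating the full alternating scheme.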
bound is expected to be tight. The functional forms of q(z|i) and p(i|z) are each modeled via a deep network. The right hand side is maximized by tweaking the network parameters. The deep network p(i|z) is the generative model that produces samples of p(i) from the latent representation z.

The generative model GI for the face and shoe images is trained using adversarial learning for visually better quality results. The generative models GI for the other image datasets, and GK for blur kernels, are trained using the variational inference framework above.

over both zi and zk, where ⊗ is the convolution operator. Incorporating the fact that the latent representation vectors zi and zk are assumed to come from standard Gaussian distributions in both the adversarial learning and variational inference frameworks, outlined in Section IV-A, we further augment the measurement loss in (5) with ℓ2 penalty terms on the latent representations. The resultant optimization program is then

    argmin_{zi ∈ R^l, zk ∈ R^m}  ‖y − GI(zi) ⊗ GK(zk)‖² + γ‖zi‖² + λ‖zk‖².   (6)
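As a quick sanity check on the structure of (6), the following minimal sketch uses toy linear maps in place of the pretrained generators GI and GK (an illustrative assumption, as are all dimensions and weights): at the true latent pair the measurement term vanishes and only the ℓ2 penalties remain.

```python
import numpy as np

# Minimal sketch of objective (6); A_I and A_K are hypothetical linear
# stand-ins for the pretrained generators G_I and G_K.
rng = np.random.default_rng(0)
n, l, m = 16, 4, 3                      # illustrative dimensions
A_I = rng.standard_normal((n, l))
A_K = rng.standard_normal((n, m))

def G_I(z_i):
    return A_I @ z_i

def G_K(z_k):
    return A_K @ z_k

def circ_conv(a, b):
    # circular convolution, matching the ⊗ operator, computed via the FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def objective(z_i, z_k, y, gamma=1e-2, lam=1e-2):
    # ||y - G_I(z_i) ⊗ G_K(z_k)||^2 + gamma ||z_i||^2 + lam ||z_k||^2
    r = y - circ_conv(G_I(z_i), G_K(z_k))
    return np.sum(r**2) + gamma * np.sum(z_i**2) + lam * np.sum(z_k**2)
```

At the true latents the residual is zero, so the objective reduces to the penalty terms alone; the weights γ and λ thus trade data fidelity against the Gaussian prior on the latent codes.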
Fig. 3: Block diagram of the proposed approach. Low dimensional parameters zi and zk are updated to minimize the measurement loss using alternating gradient descent. The optimal pair (ẑi, ẑk) generates the image and blur estimates (GI(ẑi), GK(ẑk)).
Algorithm 1 Deblurring Strictly under Generative Priors
Input: y, GI and GK
Output: Estimates î1 and k̂
Initialize: zi(0) ∼ N(0, I), zk(0) ∼ N(0, I)
for t = 0, 1, 2, . . . , T do
    zi(t+1) ← zi(t) − η∇zi L(zi(t), zk(t));  (8)
    zk(t+1) ← zk(t) − η∇zk L(zi(t), zk(t));  (9)
end for
î1 ← GI(zi(T)), k̂ ← GK(zk(T))

Algorithm 2 Deblurring under Classical and Generative Priors
Input: y, GI and GK
Output: Estimates î2 and k̂
Initialize: zi(0) ∼ N(0, I), zk(0) ∼ N(0, I), i(0) ∼ N(0.5, 10⁻² I)
for t = 0, 1, 2, . . . , T do
    zi(t+1) ← zi(t) − η∇zi L(zi(t), zk(t), i(t));  (10)
    zk(t+1) ← zk(t) − η∇zk L(zi(t), zk(t), i(t));  (10)
    i(t+1) ← i(t) − η∇i L(zi(t), zk(t), i(t));  (10)
end for
î2 ← i(T), k̂ ← GK(zk(T))
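The alternating updates of Algorithm 1 can be caricatured as follows. Toy linear maps stand in for the pretrained generators, and finite-difference gradients replace back propagation; the dimensions, step size, and iteration count are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Illustrative sketch of Algorithm 1's alternating gradient descent.
# A_I and A_K are hypothetical linear stand-ins for G_I and G_K.
rng = np.random.default_rng(1)
n, l, m = 16, 4, 3
A_I = rng.standard_normal((n, l)) / np.sqrt(n)
A_K = rng.standard_normal((n, m)) / np.sqrt(n)

def forward(z_i, z_k):
    # blurry-image model G_I(z_i) ⊗ G_K(z_k), circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(A_I @ z_i) * np.fft.fft(A_K @ z_k)))

def loss(z_i, z_k, y, gamma=1e-2, lam=1e-2):
    # objective (6): measurement loss plus l2 penalties on the latents
    r = y - forward(z_i, z_k)
    return r @ r + gamma * (z_i @ z_i) + lam * (z_k @ z_k)

def num_grad(f, x, eps=1e-6):
    # central finite-difference gradient (autograd-free, for illustration)
    g = np.zeros_like(x)
    for j in range(x.size):
        e = np.zeros_like(x); e[j] = eps
        g[j] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def deblur(y, z_i, z_k, eta=1e-2, T=200):
    for _ in range(T):
        z_i = z_i - eta * num_grad(lambda z: loss(z, z_k, y), z_i)  # cf. (8)
        z_k = z_k - eta * num_grad(lambda z: loss(z_i, z, y), z_k)  # cf. (9)
    return z_i, z_k
```

In the actual algorithm the gradients in (8)–(9) are obtained by back propagation through the deep generators rather than by finite differences; the alternating structure is the same.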
Let ĠI = ∂GI(zi)/∂zi and ĠK = ∂GK(zk)/∂zk. Then, it is easy to see that

    ∇zi L(zi(t), zk(t)) = √n ĠI* F* (r(t) ⊙ F GK(zk(t))) + γ zi(t),   (8)

    ∇zk L(zi(t), zk(t)) = √n ĠK* F* (r(t) ⊙ F GI(zi(t))) + λ zk(t).   (9)

For illustration, take the example of a two layer generator GI(zi), which is simply

    GI(zi) = relu(W2(relu(W1 zi))),

where W1 ∈ R^(l1×l) and W2 ∈ R^(n×l1) are the weight matrices at the first and second layer, respectively. In this case ĠI = W̃2,zi W̃1,zi, where W̃1,zi = diag(W1 zi > 0) W1 and W̃2,zi = diag(W2 W̃1,zi zi > 0) W2. From this expression, it is clear that the alternating gradient descent algorithm for blind image deblurring requires alternate back propagation through the generators GI and GK, as illustrated in Figure 3. To update zi(t−1), we fix zk(t−1), compute the Fourier transform of a scaling of the residual vector r(t), and back propagate it through the generator GI. A similar update strategy is employed for zk(t−1), keeping zi(t) fixed.

range of the generator, and rather also explore images a bit outside the range. To accomplish this, we propose minimizing the measurement loss of images inside the range, exactly as in (5), together with the measurement loss ‖y − i ⊗ GK(zk)‖² of images not necessarily within the range. The in-range image GI(zi) and the out-of-range image i are then tied together by minimizing an additional penalty term, Range Error(i) := ‖i − GI(zi)‖². The idea is to strictly minimize the range error when the pretrained generator has effectively learned the image distribution, and afford some slack when that is not the case. The amount of slack can be controlled by tweaking the weights attached to each loss term in the final objective. Finally, to guide the search for the best deblurred image beyond the range of the generator, one of the conventional image priors, such as the total variation measure ‖·‖tv, is also introduced. This leads to the following optimization program

    argmin_{i, zi, zk}  ‖y − i ⊗ GK(zk)‖² + τ‖i − GI(zi)‖²
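The closed-form Jacobian ĠI = W̃2,zi W̃1,zi of the two-layer ReLU generator given earlier can be checked numerically. The sketch below, with illustrative (assumed) dimensions and random weights, builds the diagonal ReLU masks and verifies the product against finite differences.

```python
import numpy as np

# Two-layer ReLU generator G_I(z) = relu(W2 relu(W1 z)) and its Jacobian
# G'_I = W~_{2,z} W~_{1,z}; the dimensions here are illustrative assumptions.
rng = np.random.default_rng(2)
l, l1, n = 5, 8, 12
W1 = rng.standard_normal((l1, l))
W2 = rng.standard_normal((n, l1))

def relu(x):
    return np.maximum(x, 0.0)

def G_I(z):
    return relu(W2 @ relu(W1 @ z))

def jacobian(z):
    # W~_{1,z} = diag(W1 z > 0) W1 zeroes out rows whose unit is inactive
    W1_t = np.diag((W1 @ z > 0).astype(float)) @ W1
    # W~_{2,z} = diag(W2 relu(W1 z) > 0) W2, masked by second-layer activity
    W2_t = np.diag((W2 @ relu(W1 @ z) > 0).astype(float)) @ W2
    return W2_t @ W1_t   # G'_I
```

This expression is valid wherever no pre-activation sits exactly at a ReLU kink; away from those measure-zero points, the network is locally linear and the masked product is its exact derivative.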
TABLE I: Notations for the different images used to generate blurry images y for deblurring with the proposed algorithms, and their corresponding outputs.

Input                  Description                                        Output (Alg-1 / Alg-2 / Alg-3)
y = itest ⊗ k + n      Blurry image generated from test set images        î1 / î2 / î3
y = isample ⊗ k + n    Blurry image generated from a sampled image        îsample / – / –
                       isample = GI(zi), zi ∼ N(0, I)
y = irange ⊗ k + n     Blurry image generated from the closest image      îrange / – / –
                       to itest in the range of GI

TABLE III: Algorithm 2 Parameters.

deblurring, we choose [11], which trains a convolutional neural network (CNN) in an end-to-end manner, and [30], which trains a neural network (DeblurGAN) in an adversarial manner. Each of these networks is trained on SVHN and CelebA. For the CNN, we train a slightly modified (fine-tuned) version of [11] using the Adam optimizer with learning rate 10⁻⁴ and batch size 16. To train DeblurGAN, we use the code provided by the authors of [30]. Deblurred images from these baseline methods will be referred to as iDP, iEP, iOH, iDF, iCNN and iDeGAN.
TABLE IV: Architectures of the VAEs used for blur kernels and SVHN. Here, conv(m, n, s) represents a convolutional layer with m filters of size n and stride s. Similarly, convT represents a transposed convolution layer. maxpool(n, m) represents a max pooling layer with pool size n and stride m. Finally, fc(m) represents a fully connected layer of size m. The decoder is designed to be a mirror reflection of the encoder in each case.
Fig. 6: Image deblurring results on CelebA using Algorithms 1 and 2. It can be seen that î1 closely resembles irange (the closest image in the generator range to the original image), whereas î2 is almost exactly itest, thus mitigating the range issue of the image generator.
Fig. 7: Image deblurring results on blurry images generated from samples, isample, of PG-GAN using Algorithm 1. It can be seen that visually appealing images, îsample, are recovered from the blurry ones.
Fig. 10: Image deblurring results on SVHN images using Algorithm 1. It can be seen that, due to the simplicity of these images, î1 is a visually very good estimate of itest, owing to the close proximity between irange and itest.
Fig. 12: Performance of Algorithm 1 with increasing length of zi. A DCGAN was trained on the CelebA dataset with varying lengths of zi. For each case, we plot the average PSNR for Algorithm 1.

Fig. 13: Large Blurs. Under large blurs, the proposed Algorithm 2 shows excellent deblurring results.
Fig. 15: Untrained Generative Priors. Deblurring results on arbitrary natural images using an untrained image generator are competitive with the baseline methods. For each deblurred image, PSNR and SSIM are reported at the top.
see (12), modified to the image resolution, was introduced as an untrained image generator in Algorithm 3. Initialization of the DCGAN was carried out using the Adam optimizer with step size set to 0.001 for 400 iterations. Later, we optimized the loss in (11), again using the Adam optimizer, for 20,000 iterations. The step sizes for updating zi, zk and W were chosen to be 10⁻³, 10⁻³ and 10⁻⁴, respectively. The smaller step size for the network weights W discourages any large deviation of the weight parameters from our qualified initialization derived from the blurred image, the only available information in this case, as the generator is not trained a priori.

Figure 15 shows the results of Algorithm 3 on a few complex blurry images, and also compares against the classical prior based techniques. Interestingly, even the untrained generator performs quite competitively against these baseline methods. The PSNR and SSIM values of the deblurred images are also reported in the inset. These initial results are meant to showcase the potential of generative priors on more complex image datasets. This shows that introducing a generative prior in image deblurring is in general a good idea, regardless of the expressive power of the generator on the image dataset, as it acts as a reasonable prior based on its structure alone. Future work focusing on novel network architecture designs that more strongly favor clear images over blurry ones could pave the way for more effective utilization of generative priors in image deconvolution.
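The Algorithm 3 setup above, in which the generator weights W are optimized jointly with the latent codes but with a smaller step size, can be caricatured as follows. The one-layer linear "generator", plain gradient descent in place of Adam, finite-difference gradients, and all dimensions are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

# Caricature of the Algorithm 3 idea: the untrained generator's weights W
# are updated jointly with the latent code z, with a smaller step size on W
# to keep it near its initialization. All choices here are illustrative.
rng = np.random.default_rng(3)
n, l = 16, 4
k = np.abs(rng.standard_normal(n)); k /= k.sum()   # toy blur kernel

def blur(i):
    # circular convolution with the blur kernel k, computed via the FFT
    return np.real(np.fft.ifft(np.fft.fft(i) * np.fft.fft(k)))

def loss(W, z, y):
    return np.sum((y - blur(W @ z)) ** 2)

def num_grad(f, x, eps=1e-6):
    # central finite differences over every entry of a vector or matrix
    g = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    for _ in it:
        e = np.zeros_like(x); e[it.multi_index] = eps
        g[it.multi_index] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def fit(y, W, z, eta_z=1e-3, eta_W=1e-4, T=200):
    # alternating updates; note the smaller step size eta_W on the weights
    for _ in range(T):
        z = z - eta_z * num_grad(lambda v: loss(W, v, y), z)
        W = W - eta_W * num_grad(lambda V: loss(V, z, y), W)
    return W, z
```

The asymmetric step sizes mirror the 10⁻³ / 10⁻⁴ schedule described above: the latents move relatively freely while the weights stay anchored near their blurred-image initialization.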
VI. CONCLUSION

This paper proposes a novel framework for blind image deblurring that uses deep generative networks as priors rather than in a conventional end-to-end manner. We report convincing deblurring results under the generative priors in comparison to the existing methods. A thorough discussion of the possible limitations of this approach on more complex images is presented, along with a few effective remedies to address these shortcomings. Importantly, the general strategy of invoking generative priors is not limited to deblurring but can be employed in other interesting non-linear inverse problems in signal processing and computer vision. The main contribution of the paper, therefore, goes beyond image deblurring, and lies in introducing generative priors as an effective method for challenging non-linear inverse problems, with a multitude of interesting follow-up questions.

REFERENCES

[1] P. Campisi and K. Egiazarian, Blind Image Deconvolution: Theory and Applications. CRC Press, 2016.
[2] D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Processing Magazine, vol. 13, no. 3, pp. 43–64, 1996.
[3] T. F. Chan and C.-K. Wong, "Total variation blind deconvolution," IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 370–375, 1998.
[4] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman, "Removing camera shake from a single photograph," ACM Transactions on Graphics (TOG), vol. 25, no. 3, pp. 787–794, 2006.
[5] L. Xu, S. Zheng, and J. Jia, "Unnatural L0 sparse representation for natural image deblurring," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013, pp. 1107–1114.
[6] T. Michaeli and M. Irani, "Blind deblurring using internal patch recurrence," in European Conference on Computer Vision. Springer, 2014, pp. 783–798.
[7] A. Ahmed, B. Recht, and J. Romberg, "Blind deconvolution using convex programming," IEEE Transactions on Information Theory, vol. 60, no. 3, pp. 1711–1732, 2014.
[8] W. Ren, X. Cao, J. Pan, X. Guo, W. Zuo, and M.-H. Yang, "Image deblurring via enhanced low-rank prior," IEEE Transactions on Image Processing, vol. 25, no. 7, pp. 3426–3437, 2016.
[9] D. Krishnan and R. Fergus, "Fast image deconvolution using hyper-Laplacian priors," in Advances in Neural Information Processing Systems, 2009, pp. 1033–1041.
[10] C. J. Schuler, M. Hirsch, S. Harmeling, and B. Schölkopf, "Learning to deblur," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 7, pp. 1439–1451, 2016.
[11] M. Hradiš, J. Kotera, P. Zemčík, and F. Šroubek, "Convolutional neural networks for direct text deblurring," in Proceedings of BMVC, vol. 10, 2015.
[12] A. Chakrabarti, "A neural approach to blind motion deblurring," in European Conference on Computer Vision. Springer, 2016, pp. 221–235.
[13] P. Svoboda, M. Hradiš, L. Maršík, and P. Zemčík, "CNN for license plate motion deblurring," in IEEE International Conference on Image Processing (ICIP), 2016, pp. 3832–3836.
[14] P. Hand and V. Voroninski, "Global guarantees for enforcing deep generative priors by empirical risk," arXiv preprint arXiv:1705.07576, 2017.
[15] D. Ulyanov, A. Vedaldi, and V. Lempitsky, "Deep image prior," arXiv preprint arXiv:1711.10925, 2017.
[16] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, "Understanding and evaluating blind deconvolution algorithms," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 1964–1971.
[17] Z. Hu, J.-B. Huang, and M.-H. Yang, "Single image deblurring with adaptive dictionary learning," in IEEE International Conference on Image Processing (ICIP), 2010, pp. 1169–1172.
[18] H. Zhang, J. Yang, Y. Zhang, and T. S. Huang, "Sparse representation based blind image deblurring," in IEEE International Conference on Multimedia and Expo (ICME), 2011, pp. 1–6.
[19] J.-F. Cai, H. Ji, C. Liu, and Z. Shen, "Blind motion deblurring from a single image using sparse approximation," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009, pp. 104–111.
[20] J. Pan, R. Liu, Z. Su, and G. Liu, "Motion blur kernel estimation via salient edges and low rank prior," in IEEE International Conference on Multimedia and Expo (ICME), 2014, pp. 1–6.
[21] J. Pan, D. Sun, H. Pfister, and M.-H. Yang, "Blind image deblurring using dark channel prior," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1628–1636.
[22] L. Xu, C. Lu, Y. Xu, and J. Jia, "Image smoothing via L0 gradient minimization," ACM Transactions on Graphics (TOG), vol. 30, no. 6, p. 174, 2011.
[23] Y. Yan, W. Ren, Y. Guo, R. Wang, and X. Cao, "Image deblurring via extreme channels prior," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4003–4011.
[24] J. Dong, J. Pan, Z. Su, and M.-H. Yang, "Blind image deblurring with outlier handling," in IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2478–2486.
[25] J. Pan, J. Dong, Y.-W. Tai, Z. Su, and M.-H. Yang, "Learning discriminative data fitting functions for blind image deblurring," in ICCV, 2017, pp. 1077–1085.
[26] L. Li, J. Pan, W.-S. Lai, C. Gao, N. Sang, and M.-H. Yang, "Learning a discriminative prior for blind image deblurring," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6616–6625.
[27] S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," arXiv preprint arXiv:1612.02177, 2016.
[28] X. Xu, D. Sun, J. Pan, Y. Zhang, H. Pfister, and M.-H. Yang, "Learning to super-resolve blurry face and text images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 251–260.
[29] T. Nimisha, A. K. Singh, and A. Rajagopalan, "Blur-invariant deep learning for blind-deblurring," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4752–4760.
[30] O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, "DeblurGAN: Blind motion deblurring using conditional adversarial networks," arXiv preprint arXiv:1711.07064, 2017.
[31] Y. Chen, F. Wu, and J. Zhao, "Motion deblurring via using generative adversarial networks for space-based imaging," in 2018 IEEE 16th International Conference on Software Engineering Research, Management and Applications (SERA), 2018, pp. 37–41.
[32] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
[33] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv:1312.6114, 2013.
[34] A. Bora, A. Jalal, E. Price, and A. G. Dimakis, "Compressed sensing using generative models," arXiv preprint arXiv:1703.03208, 2017.
[35] V. Shah and C. Hegde, "Solving linear inverse problems using GAN priors: An algorithm with provable guarantees," arXiv preprint arXiv:1802.08406, 2018.
[36] P. Hand, O. Leong, and V. Voroninski, "Phase retrieval under a generative prior," in Advances in Neural Information Processing Systems, 2018, pp. 9154–9164.
[37] F. Shamshad and A. Ahmed, "Robust compressive phase retrieval via deep generative priors," arXiv preprint arXiv:1808.05854, 2018.
[38] F. Shamshad, F. Abbas, and A. Ahmed, "Deep Ptych: Subsampled Fourier ptychography using generative priors," arXiv preprint arXiv:1812.11065, 2018.
[39] R. A. Yeh, C. Chen, T. Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do, "Semantic image inpainting with deep generative models," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 5485–5493.
[40] P. Samangouei, M. Kabkab, and R. Chellappa, "Defense-GAN: Protecting classifiers against adversarial attacks using generative models," 2018.
[41] A. Ilyas, A. Jalal, E. Asteri, C. Daskalakis, and A. G. Dimakis, "The robust manifold defense: Adversarial training using generative models," arXiv preprint arXiv:1712.09196, 2017.
[42] A. Yu and K. Grauman, "Fine-grained visual comparisons with local learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 192–199.
[43] G. Boracchi, A. Foi et al., "Modeling the performance of image restoration from motion blur," IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3502–3517, 2012.
APPENDIX

Extended qualitative results for SVHN, CelebA and Shoes are provided in Figures 16, 17 and 18, respectively. In addition to motion blurs, we also trained a generative model GK on Gaussian blurs, and show qualitative results for Algorithm 1 on SVHN and CelebA in Figures 19 and 20, respectively. Results of PG-GAN are also made available in Figure 21.
Fig. 16: Comparison of image deblurring on SVHN for Algorithm 1 with baseline methods. Deblurring results of Algorithm 1, î1, are superior to all other baseline methods, especially under large blurs. The better results of Algorithm 1 on SVHN are explained by the close proximity between the range images irange and the original groundtruth images itest.
Fig. 17: Comparison of image deblurring on CelebA for Algorithms 1 and 2 with baseline methods. Deblurring results of Algorithm 2, î2, are superior to all other baseline methods, especially under large blurs. Deblurred images of DeblurGAN, iDeGAN, although sharp, deviate from the original images, itest, whereas Algorithm 2 tends to agree better with the groundtruth.
Fig. 18: Comparison of image deblurring on Shoes for Algorithms 1 and 2 with baseline methods. Deblurring results of Algorithm 2, î2, are superior to all other baseline methods, especially under large blurs. Deblurred images of DeblurGAN, iDeGAN, although sharp, deviate from the original images, itest, whereas Algorithm 2 tends to agree better with the groundtruth.
Fig. 19: Image deblurring results for Algorithm 1 on SVHN with Gaussian blurs. Images from the test set are convolved with the corresponding blur kernels to produce blurry images.
Fig. 20: Image deblurring results using Algorithm 1 on CelebA for Gaussian blurs. Images from the test set are convolved with the corresponding blur kernels to produce blurry images.
Fig. 21: Image deblurring results using PG-GAN as generator GI under heavy noise. Images sampled from PG-GAN were blurred, and Algorithm 1 was used to deblur these blurry images.