
2020 25th International Conference on Pattern Recognition (ICPR)

Milan, Italy, Jan 10-15, 2021

Defense Mechanism Against Adversarial Attacks Using Density-based Representation of Images

Yen-Ting Huang, Wen-Hung Liao, Chen-Wei Huang
Department of Computer Science, National Chengchi University, Taipei, TAIWAN
Email: ythuang.peter@gmail.com, whliao@gmail.com, waylon-09@hotmail.com

Abstract—Adversarial examples are slightly modified inputs devised to cause erroneous inference of deep learning models. Protection against the intervention of adversarial examples is a fundamental issue that needs to be addressed before the wide adoption of deep-learning based intelligent systems. In this research, we utilize the method known as input recharacterization to effectively eliminate the perturbations found in adversarial examples. By converting images from the intensity domain into a density-based representation using a halftoning operation, the performance of the classifier can be properly maintained. Against adversarial attacks generated using FGSM, I-FGSM, and PGD, the top-5 accuracy of the hybrid model can still achieve 80.97%, 78.77%, and 81.56%, respectively. Although the accuracy has been slightly affected, the influence of adversarial examples is significantly discounted. The average improvement over existing input transform defense mechanisms is approximately 10%.

I. INTRODUCTION

Recent advances in deep neural networks have brought forth revolutionary breakthroughs in many areas, including object recognition, identity authentication, natural language processing, and image generation. Driven by the emergence of big data and hardware acceleration, deep learning can exploit more abstract levels of representation to capture the characteristics of the data. However, recent research has found that deep neural networks are susceptible to adversarial examples [1], which are produced by adding visually imperceptible perturbations to image pixels. Adversarial examples are not easily detectable by humans, but they can easily deceive deep neural networks. Therefore, the adversarial vulnerability of neural networks has become one of the most important risks in security-sensitive applications [2]–[4]. The presence of adversarial examples has accelerated research on defense mechanisms aimed at improving the robustness of these learning-based models.

According to [6], [13], the existence of adversarial examples can be attributed to mapping discontinuity in the high-dimensional decision space of deep learning models. As illustrated in Fig. 1, the decision boundary of the trained model is very close to the real data distribution. By adding subtle adversarial perturbations to a test sample, the model will misclassify the slightly modified input. The problem is formally formulated as follows:

argmin_ε ‖ε‖₂   s.t.   F(X + ε; θ) ≠ F(X; θ)    (1)

where ε denotes the added perturbation, F represents the classification model, and θ refers to the model parameters.

Fig. 1: Illustration of adversarial example generation [5].

One possible approach to counter adversarial attacks is to pre-process the input data so that it falls on the correct side of the decision boundary afterward. The authors in [9] investigate defense strategies based on image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image quilting. By altering particular statistics of the input images, they hope to change the structure of the perturbations and reduce their adversarial effects. However, both total variance minimization and image quilting, which exhibit stronger defense performance in their experiments, require high computational resources. The strength of their proposed strategy can also be further enhanced. Our objective is to devise a lightweight defense measure based on domain transformation to effectively counter adversarial attacks in image classification tasks.

Fig. 2: Possible means of recharacterizing adversarial input.

Our proposed defense mechanism converts an input image from the intensity domain to the density domain, as depicted in Fig. 2, an operation that is termed input recharacterization in this research. Specifically, input recharacterization can consist of two stages: a forward conversion (C) and an optional backward reconstruction (R). Our hope is that by going through the lossy one-way (C) or two-way (C and R) transformation, the purposely added 'noise' or 'perturbation' will become ineffective. In other words, we would like to verify whether one of the following three conditions is satisfied under the proposed transformation:

F(C(X + ε); δ) = F(X; θ)    (2)

F(R(C(X + ε)); θ) = F(X; θ)    (3)

F(R(C(X + ε)); θ̂) = F(X; θ)    (4)

Eq. (2) refers to the case in which forward conversion with retraining in the transformed domain (with parameters δ) can handle the attack. Eq. (3) indicates that a two-stage processing eliminates the perturbation. Eq. (4) holds if a two-way transformation plus model retraining (yielding parameters θ̂) is effective in resisting adversarial attacks. In this work, we employ digital halftoning and inverse halftoning as the domain transfer approach.

Our key findings and main contributions are summarized as follows. 1) We have generalized the input transform scheme for adversarial defense into input recharacterization and investigated its efficacy under different settings. 2) We demonstrated that input transform based methods can exhibit resistance to adversarial examples only through model retraining. 3) We found that one-way transformation into the halftone domain, i.e., the density-based representation, achieves the best performance in terms of classification accuracy and robustness against adversarial attacks.

The rest of the paper is organized as follows. In Section 2, we briefly review attack and defense mechanisms for DNN models. Section 3 describes the proposed input recharacterization approach in detail. Section 4 reports quantitative experimental results compared with previous image transformation methods. Finally, Section 5 summarizes our work with a brief conclusion.

II. RELATED WORK

A. Defense Mechanisms

To prevent adversarial attacks from harming safety-critical applications, various defense strategies have been proposed. They can be categorized into three main groups: gradient masking [7], [8], input transformation [9], and adversarial training [10]. 1) Gradient masking: Because most attack approaches rely on computing gradients to generate perturbations, reducing or obfuscating the gradient information can protect trained models against distance-based optimization methods. Bit-depth reduction [7] and defensive distillation [8] are examples of gradient masking. 2) Input transformation: Without changing the network architecture or weights, these methods adopt pre-processing approaches to decrease the interference in adversarial images. After transforming the input images, classifiers may no longer be misled by the adversarial examples. 3) Adversarial training: These methods are mainly adopted to strengthen the robustness of the model. They augment the training data with adversarial examples and then retrain the network. This effectively modifies the decision boundary, so the retrained model can correctly identify these adversarial examples.

Among these strategies, adversarial training is widely considered an effective option for defense. However, the high computational cost of generating adversarial examples and retraining the model is impractical for most real-world scenarios. Our proposed approach is similar to input transformation, which requires less computation. Compared to TVM [11] or image quilting [12], which attempt to replace or revise the perturbed patches, our input recharacterization method converts the whole image from an intensity-based representation to a density-based representation using a halftone operation. We hope the conversion process can invalidate the attacks by moving all samples to an entirely new representation domain.

B. Adversarial Attacks

We describe several representative attacks that will be applied to validate the effectiveness of our defense mechanism in the experiment section. Szegedy et al. first identified the vulnerability of deep neural networks, and L-BFGS was devised to fool DNN models in classification tasks [1]. It generates adversarial samples by adding perturbations constrained by the L2 norm to the input image. Goodfellow et al. argued that the vulnerability may result from the linearity of networks in high dimensions and proposed the Fast Gradient Sign Method (FGSM) [13] to efficiently produce adversarial samples. This approach updates the image pixels along the gradient direction of the cost function in one step.
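A minimal sketch of this one-step update is shown below, assuming a PyTorch classifier and inputs normalized to [0, 1]; the model, tensors, and default epsilon are placeholders rather than the exact experimental configuration used later in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.04):
    """One-step FGSM: move each pixel by eps along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # single gradient-sign step
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in the valid range
```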

Owing to the efficiency of FGSM, researchers have extended it to produce other types of adversarial samples. For example, the Iterative Fast Gradient Sign Method (I-FGSM) [14] takes multiple iterations to optimize the perturbation. Madry et al. proposed projected gradient descent (PGD) [15], which improves on FGSM by projecting the result back into the constraint set after each iteration.

On the other hand, the Jacobian-based Saliency Map Attack (JSMA) proposed by Papernot et al. [16] exploits the saliency map, originally used to visualize model predictions [18], to create deception. The saliency map, based on the gradient of the forward pass of the neural network, exhibits the most influential input pixels, and a greedy algorithm is applied to iteratively modify the input image in order to generate the adversarial example.

In order to guarantee the correctness of the attack models in our research, we employed the Adversarial Robustness Toolbox (ART) [17], which is maintained by IBM, to conduct our experiments. This toolbox provides various state-of-the-art threat models for verifying model robustness and the effectiveness of proposed defense strategies. By using the attack and defense approaches implemented in the toolbox, we can ensure fairness when making comparative analyses in our experiments.

C. Domain Transformation: Digital Halftoning

Halftoning techniques were used in the early days of printed photographs. They emulate a continuous-tone photograph by resampling an image with a series of dots of different sizes or spacing. Larger or denser dots correspond to the dark areas of the continuous-tone image. The result is a binary representation of the input image that retains the overall scene structure. Humans have little problem recognizing objects in halftone images, provided the original image has sufficient resolution.

The halftoning operation maps the original intensity to clusters of points with varying density. Unlike other input transform methods whose output still lies in the intensity domain, halftone images can be viewed as a novel discontinuous density-based representation. Therefore, perturbations generated in the continuous intensity domain might lose their capability when entering a totally new territory. In this work, we use the Floyd-Steinberg error diffusion kernel [19] to perform the halftone mapping.

The process of converting a halftone image back to a continuous-tone image is referred to as inverse halftoning. Inverse halftoning methods are mainly divided into three categories: frequency or spatial filtering [20], adaptive filtering [21], and deep learning [22]–[24]. The first two categories statistically smooth the image pixels by eliminating high-frequency information so as to restore the original image. However, after low-pass filtering the image becomes more blurred than the original. Although deep-learning-based inverse halftoning requires a large training cost, its inversion results achieve the best quality among current methods. In this work, we employ wavelet-based inverse halftoning via deconvolution (WInHD) [25] due to its ability to generalize to unseen images. In addition, instead of focusing on high-quality reconstruction of images, our main task is to demonstrate that input recharacterization can resist adversarial attacks.
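For reference, a minimal sketch of Floyd-Steinberg error diffusion, used here as the forward conversion C, is given below. It assumes a single-channel image with values in [0, 1] and a plain raster scan; optimized or serpentine-scan implementations may differ in detail.

```python
import numpy as np

def floyd_steinberg_halftone(gray):
    """Binarize a grayscale image via Floyd-Steinberg error diffusion."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            # Distribute the quantization error to unprocessed neighbors
            # with the standard 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img
```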

III. PROPOSED DEFENSE MECHANISM

Our core guideline is to perform an effective defense against adversarial examples without incurring excessive computing costs. We aim to explore three hypotheses. 1) The transferability of adversarial examples between the intensity-based and density-based domains: if adversarial perturbations are added in the intensity-based domain, they would not cause misclassification for a model trained in the density-based domain. The gradient direction of an optimization-based attack in one domain is not equivalent to the gradient direction in another domain, due to the change of decision boundaries between different models. 2) The attackability under the density-based representation: the density-based representation simulates the gradient variation of the intensity-based image through error diffusion, expressed in binary patterns. Existing adversarial attacks might not achieve the same success in the density representation as in the intensity representation. More specifically, a successful attack would contradict the assumption that the adversarial perturbation is imperceptible to human eyes. 3) The feasibility of invalidating attacks with two-stage input recharacterization: the halftoning method both quantizes the pixels and weights the quantization error according to proximity relationships, while inverse halftoning is used to recover the details in the intensity domain. Whether the lossy transformation between these two representations can indirectly purify slight adversarial disturbances is therefore the question we aim to investigate. If that is the case, it may allow the model to correctly classify the image without retraining.

For the first and third hypotheses, we use images in different representations as our training data: grayscale images (intensity-based representation) and halftone images (density-based representation). Both are trained with the VGG-16 model for 300 epochs. There are three types of trained models in this study: the grayscale model, the halftone model, and the hybrid model. Heterogeneous data consisting of a mixture of halftone and grayscale images are used to train the hybrid model. We attempt to examine whether the hybrid model can learn more general and abstract features to counter adversarial examples. In summary, we not only validate models trained on individual input representations, but also consider the fusion of the two domains to enhance robustness against adversarial attacks.

For the second hypothesis, we adjust existing gradient-optimization-based adversarial attack mechanisms to confuse the model trained on the density-based representation. We aim to confirm whether they can generate effective adversarial samples in the density domain. Compared to the l1, l2, and l∞ distance metrics used to generate adversarial examples, the l0 norm is more intuitive and suitable for attacking halftone images with binary patterns. We extend two attack methods, namely Projected Gradient Descent (PGD) and the Jacobian-based Saliency Map Attack (JSMA), to assess the feasibility of producing adversarial attacks on halftone images. The parameters of the modified PGD attack are set to ε = 1, α = 1, and iteration = 1. By changing the pixel value (1 to 0 or 0 to 1) at each step, it can generate adversarial examples in the density-based domain. The JSMA attack, on the other hand, iteratively changes the maximal-salient pixel pair identified in the saliency map; we use the same procedure to set the pixel pair to 0 or 1 to confuse the model. We will investigate these two attack methods with respect to global and local adversarial perturbations in our subsequent experiments.

Fig. 3: Flowchart of our defense mechanism.

CIFAR-10 and CIFAR-100 are two datasets often used to verify the effectiveness of adversarial attacks. However, their image resolution is too low for an effective halftone transform, as depicted in Fig. 4(a). Instead of using CIFAR-10 and CIFAR-100 for training, we employ Tiny-ImageNet in this study. The dataset contains 200 categories with a total of 260 thousand training images. The test data contain 50 images per class. We resize all images to 128x128 pixels to allow faster processing. The corresponding halftone images are shown in Fig. 4(b).

Fig. 4: Performing halftone operation on images of different resolutions: (a) CIFAR-10 32x32 (b) Tiny-ImageNet 128x128.

IV. EXPERIMENTAL RESULTS

Three sets of experiments have been conducted to evaluate the performance of our proposed method, addressing the transferability of adversarial examples between domains, the attackability under the density-based representation, and the feasibility of invalidating attacks with two-stage input recharacterization. For the first and third tasks, we conduct detailed quantitative comparisons to reveal the efficacy of the proposed defense strategy. For the second task, we mainly focus on assessing the visual quality of the perturbed images under global and local adversarial perturbations.

A. Transferability of Adversarial Examples

To begin with, we trained baseline models using the density-based, intensity-based, and hybrid representations. We then apply the FGSM, I-FGSM, and PGD attacks to test their defense efficiencies. The epsilon parameter of the adversarial attacks is set to 0.04, and the number of iterations is 100 for the I-FGSM and PGD attacks. The results are summarized in Table I. We notice that whichever attack method is used to confuse the model, the top-5 accuracy of the halftone model remains approximately the same. Although its top-1 accuracy drops about 3% under the FGSM and PGD attacks and about 9% under the I-FGSM attack, its ability to invalidate a large portion of the adversarial perturbations is confirmed. The hybrid model is also capable of countering the adversary, and it even achieves higher performance than the halftone model under the FGSM attack.

The proposed method outperforms other input transform approaches by a remarkable 10-20% improvement in top-1 accuracy, as shown in Table I. These experimental results are consistent with our first hypothesis that models trained on different domains have different decision boundaries. As a result, gradient-based attack methods cannot transfer the perturbations to another domain. To our surprise, if we consider adversarial examples in both the intensity-based and density-based domains, the hybrid model achieves even higher scores than the halftone model on all types of attacks.

TABLE I: Performance of different input transform schemes (accuracy in %)

Attack   | Accuracy | Cropping and Rescaling | TVM   | Grayscale | Halftone | Hybrid (intensity) | Hybrid (density)
---------|----------|------------------------|-------|-----------|----------|--------------------|-----------------
Baseline | Top-1    | 56.98                  | 59.13 | 62.0      | 61.1     | 66.01              | 60.06
         | Top-5    | 77.23                  | 78.56 | 76.5      | 80.4     | 85.14              | 82.31
FGSM     | Top-1    | 43.65                  | 36.46 | 12.0      | 57.78    | 59.93              | 59.40
         | Top-5    | 69.96                  | 69.07 | 31.4      | 80.34    | 81.13              | 80.97
I-FGSM   | Top-1    | 45.10                  | 43.15 | 10.1      | 52.01    | 34.93              | 52.51
         | Top-5    | 72.52                  | 70.21 | 17.4      | 78.35    | 69.31              | 78.77
PGD      | Top-1    | 45.68                  | 39.13 | 10.1      | 57.23    | 48.69              | 58.03
         | Top-5    | 73.26                  | 67.29 | 17.4      | 80.91    | 77.46              | 81.56
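As a rough outline of how such an evaluation can be set up with the Adversarial Robustness Toolbox mentioned in Section II, the sketch below assumes a trained PyTorch model `model`, test arrays `x_test`/`y_test` with pixels in [0, 1], and an illustrative step size `eps_step`; exact class and argument names may vary across ART versions.

```python
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod, BasicIterativeMethod, ProjectedGradientDescent

# Wrap the trained classifier (e.g., a VGG-16 baseline) for ART.
classifier = PyTorchClassifier(
    model=model,                    # assumed: a trained torch.nn.Module
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 128, 128),      # assumed single-channel 128x128 inputs
    nb_classes=200,
    clip_values=(0.0, 1.0),
)

attacks = {
    "FGSM": FastGradientMethod(estimator=classifier, eps=0.04),
    "I-FGSM": BasicIterativeMethod(estimator=classifier, eps=0.04, eps_step=0.004, max_iter=100),
    "PGD": ProjectedGradientDescent(estimator=classifier, eps=0.04, eps_step=0.004, max_iter=100),
}

for name, attack in attacks.items():
    x_adv = attack.generate(x=x_test)                       # x_test: float32 array in [0, 1]
    top1 = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
    print(f"{name} top-1 accuracy: {top1:.4f}")
```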

B. Launching Attacks in the Halftone Domain

The basic requirement of an adversarial attack is that the polluted samples cannot deviate from the original images so much that the artifacts become easily detectable by a human observer. In other words, if the attack scheme has to add significant perturbations to an image in order to confuse the classifier, it may not qualify as an effective adversarial attack. We argue that this is the case for halftone images. To validate our claim, we perform two types of attacks on images in the density-based representation. We then examine the visual quality of the polluted images to check whether attacks in the halftone domain are feasible.

1) Global Adversarial Perturbations: PGD Attack: To investigate the attackability using global adversarial perturbations, we use the PGD attack to generate adversarial examples on the halftone model. We set the parameters of PGD to ε = 1, α = 1, and iteration = 1, so that it only alters each pixel value by ±1 at a time. The result is shown in Fig. 5. The polluted image is completely different from the original one, and it becomes extremely difficult to distinguish the shape of the fish and the frog. To further confirm that an adversarial example is indeed generated, we also perform a targeted attack, which misleads the model into predicting category 0 for the input image. The model classifies the adversarial image as category 0 with 94.3% confidence. It is therefore safe to conclude that existing attack methods based on global gradient optimization cannot be applied to images in the density representation.

Fig. 5: Adding global perturbations in the halftone domain using PGD.

2) Local Adversarial Perturbations: JSMA Attack: In this experiment, we apply the greedy method to find the most influential pixel in the saliency map for performing the JSMA attack. In order to assess the image quality at different levels of attack, we generate adversarial examples with 10%, 20%, and 30% modification. As shown in Fig. 6, the overall shape of the fish does not change significantly at the 10% ratio. The top-5 accuracy of the model still remains at around 77%. However, as the modification ratio increases, the image gradually becomes unidentifiable and suffers a significant drop in accuracy. This again confirms the difficulty of setting up effective attacks in the density-based representation.

Fig. 6: Generating different levels of local perturbations using JSMA.

C. Feasibility of Invalidating Attacks with Two-stage Input Recharacterization

In the third experiment, we discuss whether the two-way lossy transform can effectively eliminate the effects of the adversary. That is, we attempt to verify whether the model, without retraining, is capable of handling the processed adversarial samples. In Table II, we find that both the halftone and hybrid models cannot accurately predict the images after the two-way transformation. An image processed by inverse halftoning is shown in Fig. 3, which indicates that two-way image recharacterization results in an excessive loss of texture. Therefore, the degraded images, with different discriminative features, render the original decision boundary unsuitable for classifying the inverse-halftoned images. The transformation to the density-based domain quantizes images to simulate pixel changes in the intensity domain. After quantization and error diffusion, the image details, especially the high-frequency components, are lost. Conversion back to the intensity domain using WInHD cannot recover the detailed texture. In Tables I and II, the drop in recognition accuracy caused by the inverse halftone transformation is greater than that caused by the halftone transformation in the two-way conversion process.

TABLE II: One-way vs. two-stage transformation for defending adversarial attacks (accuracy in %)

Attack   | Accuracy | Grayscale (Original) | Grayscale (Inverse) | Hybrid (Original) | Hybrid (Inverse)
---------|----------|----------------------|---------------------|-------------------|-----------------
Baseline | Top-1    | 62.0                 | 12.0                | 66.01             | 26.32
         | Top-5    | 76.5                 | 27.9                | 85.14             | 46.64
FGSM     | Top-1    | 12.0                 | 9.8                 | 59.93             | 23.11
         | Top-5    | 31.4                 | 24.1                | 81.13             | 42.26
I-FGSM   | Top-1    | 10.1                 | 8.30                | 34.93             | 20.63
         | Top-5    | 17.4                 | 22.05               | 69.31             | 40.23
PGD      | Top-1    | 10.1                 | 9.33                | 48.69             | 21.57
         | Top-5    | 17.4                 | 23.41               | 77.46             | 41.50
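To illustrate the two-stage setting evaluated in Table II, a rough sketch is given below. It reuses the halftoning routine and the wrapped classifier and adversarial batch (`classifier`, `x_adv`, `y_test`) sketched earlier, and substitutes a simple Gaussian low-pass filter for WInHD [25] purely for illustration; the reported numbers were obtained with wavelet-based deconvolution, not this stand-in.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inverse_halftone_lowpass(binary, sigma=1.5):
    """Crude inverse halftoning: smooth the binary dot pattern back to a continuous tone.
    A stand-in for WInHD [25]; sigma is an illustrative choice."""
    return gaussian_filter(binary.astype(np.float64), sigma=sigma)

def two_stage_recharacterize(gray):
    """Forward conversion C (halftone) followed by backward reconstruction R (inverse halftone)."""
    return inverse_halftone_lowpass(floyd_steinberg_halftone(gray))

# Evaluate a model trained in the intensity domain on recharacterized adversarial inputs.
x_rec = np.stack([two_stage_recharacterize(img) for img in x_adv[:, 0]])   # x_adv: (N, 1, H, W)
preds = classifier.predict(x_rec[:, None].astype(np.float32)).argmax(axis=1)
print("Top-1 accuracy after two-stage recharacterization:", (preds == y_test).mean())
```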

When comparing the accuracy before and after the adversarial attacks, we notice that the disrupted features still possess the capability of misleading the trained model. Although the influence is less apparent, it deserves further exploration and discussion. We can study this phenomenon from the perspective of [26], which considers adversarial examples to be a kind of feature. Although the two-way lossy transformation can reduce certain types of noise, the characteristics of the adversarial samples still remain. The input recharacterization exhibits a certain degree of purifying effect on adversarial perturbations. Unless the visual patterns of CNN models can be fully modeled, it is difficult to remove the adversarial features without interfering with the identification ability of the model. For example, if we used a simple conversion such as a denoising model to remove the adversarial perturbations, it would also adversely modify the visual features relied on by the trained model and lead to a decrease in model accuracy.

V. CONCLUSION

A lightweight procedure to counter adversarial attacks has been proposed in this research. By converting the input image into a halftone representation, the perturbations purposely added to confuse the classifier can be effectively eliminated. Additionally, launching adversarial attacks without detectable artifacts in the halftone domain is extremely difficult, as it is essentially a binary representation. This particular form of input recharacterization proves to be a suitable defense mechanism against many existing attacks. Future work will investigate its efficacy on other DNN models as well as against more forms of adversarial attacks.

ACKNOWLEDGEMENT

This work was partially supported by the Ministry of Science and Technology, Taiwan, under Grant No. MOST108-2221-E-004-008 and MOST109-2634-F-004-001 through Pervasive Artificial Intelligence Research (PAIR) Labs.

REFERENCES

[1] Szegedy, Christian, et al. "Intriguing properties of neural networks." arXiv preprint arXiv:1312.6199 (2013).
[2] Abadi, Martin, et al. "Deep learning with differential privacy." Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. 2016.
[3] Shokri, Reza, et al. "Membership inference attacks against machine learning models." 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017.
[4] Biggio, Battista, Giorgio Fumera, and Fabio Roli. "Security evaluation of pattern classifiers under attack." IEEE Transactions on Knowledge and Data Engineering 26.4 (2013): 984-996.
[5] Papernot, Nicolas, and Ian Goodfellow. "Breaking things is easy." Cleverhans Blog, 2016. [Online]. Available: http://www.cleverhans.io/security/privacy/ml/2016/12/16/breaking-things-is-easy.html [Accessed: 15-March-2020].
[6] Tabacof, Pedro, and Eduardo Valle. "Exploring the space of adversarial images." 2016 International Joint Conference on Neural Networks (IJCNN). IEEE, 2016.
[7] Xu, Weilin, David Evans, and Yanjun Qi. "Feature squeezing: Detecting adversarial examples in deep neural networks." CoRR, abs/1704.01155, 2017.
[8] Papernot, Nicolas, et al. "Distillation as a defense to adversarial perturbations against deep neural networks." 2016 IEEE Symposium on Security and Privacy (SP). IEEE, 2016.
[9] Guo, Chuan, et al. "Countering adversarial images using input transformations." arXiv preprint arXiv:1711.00117 (2017).
[10] Madry, Aleksander, et al. "Towards deep learning models resistant to adversarial attacks." arXiv preprint arXiv:1706.06083 (2017).
[11] Rudin, Leonid, Stanley Osher, and Emad Fatemi. "Nonlinear total variation based noise removal algorithms." Physica D, 60:259–268, 1992.
[12] Efros, Alexei, and William Freeman. "Image quilting for texture synthesis and transfer." In Proc. SIGGRAPH, pp. 341–346, 2001.
[13] Goodfellow, Ian, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572, 2014.
[14] Liu, Yanpei, Xinyun Chen, Chang Liu, and Dawn Song. "Delving into transferable adversarial examples and black-box attacks." CoRR, abs/1611.02770, 2016.
[15] Madry, Aleksander, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. "Towards deep learning models resistant to adversarial attacks." arXiv preprint arXiv:1706.06083, 2017.
[16] Papernot, Nicolas, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. "The limitations of deep learning in adversarial settings." 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pp. 372–387. IEEE, 2016.
[17] Nicolae, Maria-Irina, et al. "Adversarial Robustness Toolbox v0.4.0." arXiv preprint arXiv:1807.01069 (2018).
[18] Simonyan, Karen, Andrea Vedaldi, and Andrew Zisserman. "Deep inside convolutional networks: Visualising image classification models and saliency maps." CoRR, abs/1312.6034, 2013.
[19] Floyd, R. W., and L. Steinberg. "An Adaptive Algorithm for Spatial Grayscale." Proceedings of the Society of Information Display, Vol. 17, No. 2, 1976, pp. 75-77.
[20] Kite, Thomas D., et al. "A fast, high-quality inverse halftoning algorithm for error diffused halftones." IEEE Transactions on Image Processing 9.9 (2000): 1583-1592.
[21] Siddiqui, Hasib, and Charles A. Bouman. "Training-based descreening." IEEE Transactions on Image Processing 16.3 (2007): 789-802.
[22] Gao, Qifan, Xiao Shu, and Xiaolin Wu. "Deep restoration of vintage photographs from scanned halftone prints." Proceedings of the IEEE International Conference on Computer Vision. 2019.
[23] Kim, Tae-Hoon, and Sang Il Park. "Deep context-aware descreening and rescreening of halftone images." ACM Transactions on Graphics (TOG) 37.4 (2018): 1-12.
[24] Hou, Xianxu, and Guoping Qiu. "Image companding and inverse halftoning using deep convolutional neural networks." arXiv preprint arXiv:1707.00116 (2017).
[25] Neelamani, Ramesh, Robert D. Nowak, and Richard G. Baraniuk. "WInHD: Wavelet-based inverse halftoning via deconvolution." IEEE Transactions on Image Processing (2002).
[26] Ilyas, Andrew, et al. "Adversarial examples are not bugs, they are features." Advances in Neural Information Processing Systems. 2019.
