

KILLING IT WITH ZERO-SHOT: ADVERSARIALLY ROBUST NOVELTY DETECTION

Hossein Mirzaei 1, Mohammad Jafari 1, Hamid Reza Dehbashi 1, Zeinab Sadat Taghavi 1, Mohammad Sabokrou 2, Mohammad Hossein Rohban 1

1 Sharif University of Technology, Tehran, Iran
2 Okinawa Institute of Science and Technology, Okinawa, Japan

ABSTRACT

Novelty Detection (ND) plays a crucial role in machine learning by identifying new or unseen data during model inference. This capability is especially important for the safe and reliable operation of automated systems. Despite advances in this field, existing techniques often fail to maintain their performance when subject to adversarial attacks. Our research addresses this gap by marrying the merits of nearest-neighbor algorithms with robust features obtained from models pretrained on ImageNet. We focus on enhancing the robustness and performance of ND algorithms. Experimental results demonstrate that our approach significantly outperforms current state-of-the-art methods across various benchmarks, particularly under adversarial conditions. By incorporating robust pretrained features into the k-NN algorithm, we establish a new standard for performance and robustness in the field of robust ND. This work opens up new avenues for research aimed at fortifying machine learning systems against adversarial vulnerabilities.

Index Terms— anomaly detection, adversarial robustness, zero-shot learning

[Figure 1: line plot of AUROC (%) versus perturbation magnitude, from 0/255 to 4/255, on CIFAR-10.]

Fig. 1: This plot evaluates ND methods—MSAD [1], Transformaly [2], PrincipaLS [3], OCSDF [4], APAE [5], and ZARND (Ours)—for their robustness, measured in AUROC (%) on the CIFAR-10 dataset. It highlights ZARND's superior performance and emphasizes the need for more robust ND algorithms.

1. INTRODUCTION

Anomaly detection plays an important role in computer vision, for example, in healthcare, industry, and autonomous driving [6]. Its ability to identify new or unseen visual patterns during inference is essential for the safety and reliability of ML-based systems [7]. Recent ND methods have shown significant results in clean settings. However, their performance significantly deteriorates under adversarial attacks, to the extent that it is worse than random guessing. For instance, the SOTA ND results on the MVTecAD dataset [8] achieve near-perfect performance, with 100% AUROC (Area Under the Receiver Operating Characteristic) [9]. However, when tiny adversarial perturbations are added to the data, the performance of all previous works drops to near 0% AUROC. This emphasizes the necessity for further research to bridge the performance gap of novelty detection between clean and adversarial settings [10]. Our main goal is to propose a method that not only performs well in benign settings but is also resilient to adversarial attacks that attempt to convert normal samples to anomalies and vice versa.

Motivated by this, we introduce a framework named Zero-shot Adversarially Robust Novelty Detection (ZARND), which leverages an adversarially robust pre-trained model as a backbone and extracts features from the training data using the last layer of the backbone. During testing, input data is passed through the backbone, and anomaly scores are calculated by considering the features of the training data, utilizing well-established algorithms such as k-Nearest Neighbors (k-NN) [11].

While many previous methods utilize pre-trained models for ND as a downstream task, we are the first to employ an adversarially robust pre-trained model for the ND task, as illustrated in Figure 1.

The features extracted by our backbone are both descriptive and resilient against adversarial manipulations, enabling us to achieve competitive results in clean settings and establish SOTA performance in adversarial settings. We summarize the main contributions of this paper as follows:

• Our proposed zero-shot ND method, ZARND, achieves significant results on various datasets, including medical and tiny datasets, highlighting the applicability of our work to real-world applications.

• We extensively compared our method in unsupervised ND settings, including one-class and unlabeled multi-class scenarios, with both clean and adversarially trained proposed methods, surpassing their performance by a margin of up to 40% AUROC.

2. BACKGROUND & RELATED WORK

ND. Various self-supervised approaches have been introduced in ND to learn the normal sample distribution. DeepSVDD utilizes an autoencoder to learn distribution features, while CSI [12] employs a contrastive loss for distribution learning. Moreover, FITYMI [13], DN2 [14], PANDA [15], Transformaly [2], and MSAD [1] leverage pre-trained models for gaining insights. Additionally, some methods have ventured into ND in an adversarial context: APAE [5] proposes using approximate projection and feature weighting to enhance adversarial robustness, and PrincipaLS [3] suggests employing a Principal Latent Space as a defense strategy for adversarially robust ND. OCSDF [4] focuses on robustness by learning a signed distance function, interpreted as a normality score. The OSAD [16] method integrates an adversarial training loss with an auto-encoder loss aimed at reconstructing the original image from its adversarial counterpart.

Adversarial Attacks. An adversarial attack involves maliciously perturbing a data point x with an associated label y to generate a new point x∗ that maximizes the loss function ℓ(x∗; y). In this setting, x∗ is an adversarial example of x, and the difference (x∗ − x) is termed the adversarial noise. An upper bound ϵ limits the l_p norm of the adversarial noise to avoid altering the semantics of the original data. Specifically, an adversarial example x∗ must satisfy:

x∗ = arg max_{x′} ℓ(x′; y),   subject to   ∥x − x∗∥_p ≤ ϵ

The well-known Projected Gradient Descent (PGD) technique [17] iteratively maximizes the loss function by updating x∗ in the direction of the gradient ∇_x ℓ(x∗; y). During each iteration, the adversarial noise is constrained within an ℓ∞-ball of radius ϵ:

x∗_0 = x,   x∗_{t+1} = x∗_t + α · sign(∇_x ℓ(x∗_t, y)),   x∗ = x∗_T
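To make the update rule concrete, here is a minimal PyTorch sketch of an ℓ∞ PGD attack. The classifier model, the cross-entropy loss, and the default values of ϵ, α, and the step count are illustrative assumptions, not the exact configuration used in this paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Minimal ell_inf PGD sketch: repeatedly step in the direction of the
    loss-gradient sign, then project the perturbation back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # projection onto the eps-ball around x, then back to a valid image range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```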
3. PROBLEM STATEMENT

Setup. In ND, our focus is on a specialized training set that comprises solely normal, unlabeled samples. This training set is formally denoted as:

X_train = {x_1, x_2, . . . , x_n}

Here, X_train represents the collection of all 'normal' samples, and each x_i is an individual sample. These samples are elements of a broader feature space, denoted as X. Our main goal is to develop a function f that maps from this feature space to a binary output, which can be formally defined as:

f : X → {0, 1}

In the context of this function:

• f(x) = 0 implies that the sample x is 'normal'
• f(x) = 1 implies that the sample x is an 'outlier'

Challenges. One of the main difficulties in ND comes from the unsupervised nature of the training set X_train, which lacks labels. Algorithms have the task of autonomously defining what is 'normal,' based solely on the data distribution of X_train. Another challenge arises from the common closed-set assumption in machine learning, which stipulates that test data should come from the same distribution as training data. In ND, this is often not the case, complicating the identification of 'novel' samples.

ND's challenges extend to the quality of training data and the absence of labels. The training set's unlabeled nature necessitates that ND algorithms identify what is genuinely 'normal' autonomously. Noise in the dataset places importance on choosing an effective distance metric. Simple metrics, such as distances in raw pixel space, are inadequate as they fail to capture the underlying semantic differences that separate normal from novel instances. This limitation is accentuated when deep neural networks are used to transform the data. In this scenario, selecting a suitable threshold for differentiation becomes even more challenging, justifying the common use of AUROC as an evaluation metric.
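As a minimal illustration of how f can be realized once an anomaly score is available, the sketch below simply thresholds the score; the threshold tau is a hypothetical operating point, and the difficulty of choosing it is precisely why the threshold-free AUROC is reported instead.

```python
import torch

def decide(scores: torch.Tensor, tau: float) -> torch.Tensor:
    """f(x) = 1 (outlier) if the anomaly score exceeds tau, else 0 (normal)."""
    return (scores > tau).long()
```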
4. METHOD

4.1. Initialization: Feature Extraction

Our methodology begins with the critical step of feature extraction, setting the stage for all downstream operations. The primary objective at this stage is to generate a comprehensive and highly expressive feature space that is not only adaptable but also generalizable across various types of datasets. To this end, we employ pre-trained ResNet architectures as the backbone of our feature extraction mechanism, denoted by the function h.

[Figure 2: pipeline diagram with panels Initialization, Calculate Distances, Sorting, and Inference (Anomaly Score).]

Fig. 2: The four key steps of our ND framework are illustrated. (a) Initialization: Extracting features from the normal set. (b) Calculate Distance: Computing the Euclidean difference between the test sample and features from the normal set. (c) Sorting: Ordering the distances from smallest to largest. (d) Anomaly Score: Summing up the first K smallest distances to generate the anomaly score, where K = 2.

Given an input sample x_i, the feature extraction function h performs a mapping operation that transforms x_i into a high-dimensional feature vector z_i = h(x_i). By leveraging the extensive training and architectural benefits of ResNet, our feature extraction phase is capable of capturing complex, high-level semantics from the data.

Notably, our methodology deviates from conventional approaches by discarding the terminal classification layer commonly found in neural network architectures. Instead, we focus on an intermediary feature map F as the end product of this phase. This decision is grounded in our goal to construct a feature space that is domain-agnostic, optimized specifically for the task of anomaly detection. By doing so, we ensure that the resulting feature space can capture the intricate data structures and patterns needed for high-quality novelty detection across diverse application domains.
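A minimal sketch of this initialization step is shown below, assuming a recent torchvision ResNet-18; the robust-weights path and the helper names are placeholders rather than the exact checkpoint or code used here.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_feature_extractor(weights_path=None):
    """Sketch of h: a ResNet-18 whose terminal classification layer is removed,
    so it outputs the penultimate 512-d feature vector."""
    backbone = resnet18(weights=None)
    if weights_path is not None:
        # hypothetical path to adversarially robust pre-trained weights (cf. [18])
        backbone.load_state_dict(torch.load(weights_path), strict=False)
    backbone.fc = nn.Identity()  # discard the classifier head
    return backbone.eval()

@torch.no_grad()
def extract_features(h, loader):
    """Build the feature bank {z_i = h(x_i)} from the normal training samples.
    Assumes the loader yields (image, label) batches."""
    return torch.cat([h(x) for x, *_ in loader], dim=0)
```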
4.2. Calculate Distance: Computing Euclidean Distances

After initializing our feature space through the extraction process, the next crucial step in our methodology is to compute the Euclidean distances between a given test sample and the features stored in our feature bank. This is a pivotal operation, as it lays the foundation for the anomaly score, which ultimately determines whether a data point is an outlier or not. To carry out this operation, we introduce a test sample z_test and compare it with our feature bank z, which is a collection of high-dimensional vectors extracted from in-distribution or 'normal' samples. The objective is to quantify the distance between z_test and each feature vector z_i in the feature bank. Formally, the Euclidean distance between z_test and z_i is calculated as follows:

d(z_test, z_i) = ∥z_test − z_i∥_2

The L2 norm is employed here as our distance metric, in accordance with the widely accepted practice in anomaly detection tasks. By computing these distances, we set the stage for the subsequent sorting operation, which aids in the formulation of a precise anomaly score.

4.3. Sorting: Ordering Euclidean Distances

Once the Euclidean distances have been computed between the test sample z_test and all the feature vectors z_i in our feature bank, the next phase is to sort these distances. Sorting is crucial as it enables us to identify which vectors in the feature bank are closest to z_test, and therefore most relevant for the anomaly scoring process.

Let D = {d_1, d_2, . . . , d_N} be the set of Euclidean distances, where N is the total number of feature vectors in our feature bank. The sorting operation can be formally described as Sorted Distances = sort(D). This orders the distances in ascending order, preparing us for the final step: the calculation of the anomaly score using the k nearest neighbors from this sorted list.

4.4. Anomaly Score: Calculating k-NN-Based Score

The culmination of our methodology is the computation of the anomaly score, which is a decisive metric for categorizing a data point as normal or an outlier. We employ the k-NN algorithm to accomplish this, taking advantage of its efficacy in capturing local data characteristics without making assumptions about the global data distribution. To clarify the formula, let D_sorted = {d_1, d_2, . . . , d_k} be the set of the k smallest distances obtained from the sorted list in the previous step. The anomaly score S for a test sample z_test can be formally defined as:

S = Σ_{d ∈ D_sorted} d

Here, S is the sum of the k smallest Euclidean distances between the test feature vector z_test and the k nearest feature vectors from our feature bank. A higher value of S indicates that the test sample is more likely to be an outlier, making S a critical metric for our Novelty Detection framework. Our proposed pipeline is illustrated in Figure 2.
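The distance, sorting, and summation steps above reduce to a few tensor operations; the following is a minimal sketch, with illustrative function names and k = 2 chosen only to mirror Figure 2.

```python
import torch

def knn_anomaly_score(z_test, feature_bank, k=2):
    """Anomaly score S: sum of the k smallest Euclidean distances between each
    test feature vector and the feature bank of normal samples."""
    dists = torch.cdist(z_test, feature_bank, p=2)       # (num_test, num_bank)
    knn_dists, _ = dists.topk(k, dim=1, largest=False)   # k smallest per test sample
    return knn_dists.sum(dim=1)                          # higher S => more anomalous
```

For a test batch x_test, the scores would then be obtained as knn_anomaly_score(h(x_test), feature_bank), with h and feature_bank built as in the earlier sketch.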
5. EXPERIMENTAL RESULTS

In this section, we conduct comprehensive experiments to assess various outlier detection methods, encompassing both clean and adversarially trained approaches, as well as our method, in the context of adversarial attacks.

Setup. We utilize the ResNet-18 architecture as the backbone of our neural network and incorporate pre-trained weights from [18].

Evaluation Attack. We set the value of ϵ to 4/255 for low-resolution datasets and 2/255 for high-resolution ones. For the models' evaluation, we use a single random restart for the attack, with random initialization within the range (−ϵ, ϵ), and perform 100 steps. Furthermore, we select the attack step size as α = 2.5 × ϵ/N, where N is the number of steps. In addition to the PGD attack, we have evaluated models using AutoAttack [19], a recently introduced method designed to enhance robustness evaluation by implementing more potent attacks.

| Dataset | Attack | DeepSVDD | MSAD | Transformaly | PatchCore | PrincipaLS | OCSDF | APAE | Clean PT (GMM) | Clean PT (k-NN) | Robust PT (GMM) | Robust PT (k-NN) = ZARND |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| CIFAR-10 | Clean | 64.8 | 97.2 | 98.3 | 68.3 | 57.7 | 57.1 | 55.2 | 92.24 | 93.9 | 87.3 | 89.7 |
| CIFAR-10 | PGD / AA | 31.7 / 16.6 | 4.8 / 0.7 | 3.7 / 2.4 | 3.9 / 2.5 | 33.2 / 29.6 | 31.3 / 26.9 | 2.2 / 2.0 | 0.0 / 0.0 | 0.0 / 0.0 | 62.7 / 60.3 | 62.4 / 60.6 |
| CIFAR-100 | Clean | 67.0 | 96.4 | 97.3 | 66.8 | 52.0 | 48.2 | 51.8 | 89.7 | 93.6 | 83.0 | 88.4 |
| CIFAR-100 | PGD / AA | 23.7 / 14.3 | 8.4 / 10.7 | 9.4 / 7.3 | 4.3 / 3.5 | 26.2 / 24.6 | 23.5 / 19.8 | 4.1 / 0.9 | 0.0 / 0.0 | 0.0 / 0.0 | 53.2 / 50.7 | 62.9 / 60.1 |
| MNIST | Clean | 94.8 | 96.0 | 94.8 | 83.2 | 97.3 | 95.5 | 92.5 | 91.3 | 96.0 | 95.0 | 99.0 |
| MNIST | PGD / AA | 18.8 / 15.4 | 3.2 / 14.1 | 18.9 / 11.6 | 2.6 / 2.4 | 83.1 / 77.3 | 68.9 / 66.2 | 34.7 / 28.6 | 0.0 / 0.0 | 0.0 / 0.0 | 90.5 / 84.1 | 97.1 / 95.2 |
| FMNIST | Clean | 94.5 | 94.2 | 94.4 | 77.4 | 91.0 | 90.6 | 86.1 | 88.4 | 94.7 | 92.9 | 95.0 |
| FMNIST | PGD / AA | 57.9 / 48.1 | 6.6 / 3.7 | 7.4 / 2.8 | 5.5 / 2.8 | 69.2 / 67.5 | 64.9 / 59.6 | 19.5 / 13.3 | 0.0 / 0.0 | 0.0 / 0.0 | 87.4 / 81.5 | 89.5 / 85.9 |
| MVTecAD | Clean | 67.0 | 87.2 | 87.9 | 99.6 | 63.8 | 58.7 | 62.1 | 78.7 | 74.8 | 74.2 | 71.6 |
| MVTecAD | PGD / AA | 2.6 / 0.0 | 0.9 / 0.0 | 0.0 / 0.0 | 7.2 / 4.8 | 24.3 / 12.6 | 5.2 / 0.3 | 4.7 / 1.8 | 0.0 / 0.0 | 0.0 / 0.0 | 38.5 / 32.0 | 30.1 / 28.8 |
| Head-CT | Clean | 62.5 | 59.4 | 78.1 | 98.5 | 68.9 | 62.4 | 68.1 | 80.6 | 90.9 | 75.6 | 89.0 |
| Head-CT | PGD / AA | 0.0 / 0.0 | 0.0 / 0.0 | 6.4 / 3.2 | 1.5 / 0.0 | 27.8 / 16.2 | 13.1 / 8.5 | 6.6 / 3.8 | 0.0 / 0.0 | 0.0 / 0.0 | 66.7 / 62.8 | 80.1 / 75.6 |
| BrainMRI | Clean | 74.5 | 99.9 | 98.3 | 91.4 | 70.2 | 63.2 | 55.4 | 69.0 | 82.3 | 62.3 | 76.8 |
| BrainMRI | PGD / AA | 4.3 / 2.1 | 1.7 / 0.0 | 5.2 / 1.6 | 0.0 / 0.4 | 33.5 / 17.8 | 20.4 / 12.5 | 9.7 / 8.3 | 0.0 / 0.0 | 0.0 / 0.0 | 47.6 / 44.9 | 69.6 / 65.7 |
| Tumor | Clean | 70.8 | 95.1 | 97.4 | 92.8 | 73.5 | 65.2 | 64.6 | 88.6 | 99.9 | 80.8 | 98.9 |
| Tumor | PGD / AA | 1.7 / 0.0 | 0.1 / 0.0 | 7.4 / 5.1 | 9.3 / 6.1 | 25.2 / 14.7 | 17.9 / 10.1 | 15.8 / 8.3 | 0.0 / 0.0 | 16.4 / 14.8 | 71.8 / 65.0 | 82.7 / 78.6 |
| Covid-19 | Clean | 61.9 | 89.2 | 91.0 | 77.7 | 54.2 | 46.1 | 50.7 | 91.6 | 92.2 | 74.6 | 72.1 |
| Covid-19 | PGD / AA | 0.0 / 0.0 | 4.7 / 1.9 | 10.6 / 4.4 | 4.2 / 0.5 | 15.3 / 9.1 | 9.0 / 6.5 | 11.2 / 8.7 | 0.0 / 0.0 | 19.8 / 14.1 | 49.3 / 40.4 | 56.7 / 54.8 |

Table 1: In this table, we assess ND methods on various datasets against PGD-100, AutoAttack (AA), and a clean setting, using the AUROC (%) metric. We use a Gaussian Mixture Model (GMM) and k-NN as the anomaly scores for the Zero-Shot setting (PT = pre-trained backbone). ZARND, our introduced method, outperforms the other methods in the adversarial setting. Perturbations: ϵ = 4/255 for low-resolution and ϵ = 2/255 for high-resolution datasets. CIFAR-10, CIFAR-100, MNIST, and FMNIST are the low-resolution datasets; MVTecAD, Head-CT, BrainMRI, Tumor, and Covid-19 are high-resolution.

We introduce a label-dependent function β(y) to design effective adversarial perturbations:

β(y) = +1 if y = 0 (normal),   β(y) = −1 if y = 1 (outlier)

Our methodology incorporates adversarial perturbations targeting the raw input images x instead of their corresponding feature vectors z. We use PGD to iteratively perturb each image. The perturbation is designed to directly affect the k-NN-based anomaly score as follows:

x∗_{t+1} = x∗_t + α β(y) sign(∇_{x∗_t} S(h(x∗_t)))

Here, α specifies the step size, t the iteration number, and h(x∗_t) the feature vector derived from the perturbed image.
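A minimal sketch of this evaluation attack is given below: it perturbs the raw images so that the k-NN score is pushed up for normal samples (β = +1) and down for outliers (β = −1). The feature extractor h and the score computation follow the earlier sketches, and the default ϵ, α, and step count are illustrative assumptions.

```python
import torch

def attack_knn_score(h, feature_bank, x, y, eps=4/255, alpha=1/255, steps=100, k=2):
    """PGD on the k-NN anomaly score S(h(x)), signed by beta(y)."""
    beta = (1.0 - 2.0 * y.float()).view(-1, 1, 1, 1)  # +1 if y == 0 (normal), -1 if y == 1
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        dists = torch.cdist(h(x_adv), feature_bank, p=2)            # (batch, bank)
        score = dists.topk(k, dim=1, largest=False).values.sum(1)   # S per sample
        grad = torch.autograd.grad(score.sum(), x_adv)[0]
        x_adv = x_adv.detach() + alpha * beta * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```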
Datasets and metrics. We included CIFAR-10, CIFAR-100, MNIST, and FashionMNIST as the low-resolution datasets. Furthermore, we performed experiments on medical and industrial high-resolution datasets, namely Head-CT [20], MVTecAD, Brain-MRI [21], Covid-19 [22], and Tumor Detection [23]. We report the results in Table 1. We use AUROC as a well-known classification criterion. The AUROC value lies in the range [0, 1], and the closer it is to 1, the better the classifier performance.

ND. We run experiments across N classes, treating each class as normal in turn while all other classes act as outliers. Averaged results across all N setups are reported in Table 1.
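To make the evaluation protocol explicit, the sketch below averages AUROC over the N one-class setups using scikit-learn; how the scores and binary labels are gathered per class is left abstract, and the helper name is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def one_class_auroc(scores_per_class, labels_per_class):
    """Average AUROC over N one-class setups.

    scores_per_class[c]: anomaly scores S on the test set when class c is 'normal'
    labels_per_class[c]: 0 for test samples of class c, 1 for samples of all other classes
    """
    aurocs = [roc_auc_score(y, s) for s, y in zip(scores_per_class, labels_per_class)]
    return float(np.mean(aurocs))
```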
Unlabeled Out-Of-Distribution (OOD) Detection. We introduce a label-free approach to OOD detection, using two distinct datasets for in-distribution and OOD samples. This binary problem could significantly benefit ND. Relevant findings are highlighted in Table 2.

| In | Out | PrincipaLS (Clean) | PrincipaLS (PGD-100) | OCSDF (Clean) | OCSDF (PGD-100) | APAE (Clean) | APAE (PGD-100) | ZARND, Ours (Clean) | ZARND, Ours (PGD-100) |
|---|---|---|---|---|---|---|---|---|---|
| CIFAR-10 | CIFAR-100 | 54.8 | 14.6 | 51.0 | 12.8 | 53.6 | 1.2 | 76.6 | 40.2 |
| CIFAR-10 | SVHN | 72.1 | 23.6 | 67.7 | 18.8 | 60.8 | 2.1 | 84.3 | 48.8 |
| CIFAR-10 | MNIST | 82.5 | 42.7 | 74.2 | 37.4 | 71.3 | 15.3 | 99.4 | 95.2 |
| CIFAR-10 | FMNIST | 78.3 | 38.5 | 64.5 | 33.7 | 59.4 | 9.4 | 98.2 | 89.4 |
| CIFAR-100 | CIFAR-10 | 47.6 | 8.1 | 51.1 | 6.3 | 50.5 | 0.7 | 64.6 | 26.8 |
| CIFAR-100 | SVHN | 66.3 | 13.2 | 58.7 | 9.2 | 58.1 | 1.1 | 70.0 | 32.7 |
| CIFAR-100 | MNIST | 80.4 | 30.4 | 76.4 | 28.9 | 74.7 | 11.8 | 87.0 | 61.7 |
| CIFAR-100 | FMNIST | 72.7 | 18.7 | 62.8 | 14.3 | 60.9 | 9.7 | 97.3 | 85.8 |

Table 2: AUROC (%) of various ND methods trained on unlabeled CIFAR-10 and CIFAR-100. According to the table, ZARND performs much better than the other methods.

6. CONCLUSION

Our work introduces a robust and practical approach to image-based ND, leveraging k-NN and feature extraction. Our method performs well on the AUROC metric and strongly resists adversarial attacks. We aim to further enhance the robustness and adaptability of our system to diverse applications.

7. REFERENCES

[1] Tal Reiss and Yedid Hoshen, "Mean-shifted contrastive loss for anomaly detection," arXiv preprint arXiv:2106.03844, 2021.

[2] Matan Jacob Cohen and Shai Avidan, "Transformaly - two (feature spaces) are better than one," arXiv preprint arXiv:2112.04185, 2021.

[3] Shao-Yuan Lo, Poojan Oza, and Vishal M. Patel, "Adversarially robust one-class novelty detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.

[4] Louis Béthune, Paul Novello, Thibaut Boissin, Guillaume Coiffier, Mathieu Serrurier, Quentin Vincenot, and Andres Troya-Galvis, "Robust one-class classification with signed distance function using 1-Lipschitz neural networks," arXiv preprint arXiv:2303.01978, 2023.

[5] Adam Goodge, Bryan Hooi, See Kiong Ng, and Wee Siong Ng, "Robustness of autoencoders for anomaly detection under adversarial impact," in Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2021, pp. 1244–1250.

[6] Pramuditha Perera, Poojan Oza, and Vishal M. Patel, "One-class classification: A survey," arXiv preprint arXiv:2101.03064, 2021.

[7] Anh Nguyen, Jason Yosinski, and Jeff Clune, "Deep neural networks are easily fooled: High confidence predictions for unrecognizable images," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 427–436.

[8] Paul Bergmann, Michael Fauser, David Sattlegger, and Carsten Steger, "MVTec AD: A comprehensive real-world dataset for unsupervised anomaly detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 9592–9600.

[9] Kilian Batzner, Lars Heckler, and Rebecca König, "EfficientAD: Accurate visual anomaly detection at millisecond-level latencies," arXiv preprint arXiv:2303.14535, 2023.

[10] Mohammadreza Salehi, Hossein Mirzaei, Dan Hendrycks, Yixuan Li, Mohammad Hossein Rohban, and Mohammad Sabokrou, "A unified survey on anomaly, novelty, open-set, and out-of-distribution detection: Solutions and future challenges," arXiv preprint arXiv:2110.14051, 2021.

[11] Antonio Mucherino, Petraq J. Papajorgji, and Panos M. Pardalos, k-Nearest Neighbor Classification, pp. 83–106, Springer New York, New York, NY, 2009.

[12] Jihoon Tack, Sangwoo Mo, Jongheon Jeong, and Jinwoo Shin, "CSI: Novelty detection via contrastive learning on distributionally shifted instances," Advances in Neural Information Processing Systems, vol. 33, pp. 11839–11852, 2020.

[13] Hossein Mirzaei, Mohammadreza Salehi, Sajjad Shahabi, Efstratios Gavves, Cees G. M. Snoek, Mohammad Sabokrou, and Mohammad Hossein Rohban, "Fake it till you make it: Near-distribution novelty detection by score-based generative models," arXiv preprint arXiv:2205.14297, 2022.

[14] Liron Bergman, Niv Cohen, and Yedid Hoshen, "Deep nearest neighbor anomaly detection," arXiv preprint arXiv:2002.10445, 2020.

[15] Tal Reiss, Niv Cohen, Liron Bergman, and Yedid Hoshen, "PANDA: Adapting pretrained features for anomaly detection and segmentation," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2806–2814.

[16] Rui Shao, Pramuditha Perera, Pong C. Yuen, and Vishal M. Patel, "Open-set adversarial defense with clean-adversarial mutual learning," International Journal of Computer Vision, vol. 130, no. 4, pp. 1070–1087, 2022.

[17] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu, "Towards deep learning models resistant to adversarial attacks," arXiv preprint arXiv:1706.06083, 2017.

[18] Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, and Aleksander Madry, "Do adversarially robust ImageNet models transfer better?," CoRR, vol. abs/2007.08489, 2020.

[19] Francesco Croce and Matthias Hein, "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks," in International Conference on Machine Learning, PMLR, 2020, pp. 2206–2216.

[20] Felipe Campos Kitamura, "Head CT - hemorrhage," 2018.

[21] Sartaj Bhuvaji, Ankita Kadam, Prajakta Bhumkar, Sameer Dedge, and Swati Kanchan, "Brain tumor classification (MRI)," 2020.

[22] Joseph Paul Cohen, Paul Morrison, and Lan Dao, "COVID-19 image data collection," arXiv, 2020.

[23] Msoud Nickparvar, "Brain tumor MRI dataset," 2021.
