Shuvendu K. Lahiri
Chao Wang (Eds.)
LNCS 11138

Automated Technology
for Verification and Analysis
16th International Symposium, ATVA 2018
Los Angeles, CA, USA, October 7–10, 2018
Proceedings

Lecture Notes in Computer Science 11138
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zurich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology Madras, Chennai, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7408
Editors

Shuvendu K. Lahiri
Microsoft Research, Redmond, WA, USA

Chao Wang
University of Southern California, Los Angeles, CA, USA

ISSN 0302-9743 ISSN 1611-3349 (electronic)


Lecture Notes in Computer Science
ISBN 978-3-030-01089-8 ISBN 978-3-030-01090-4 (eBook)
https://doi.org/10.1007/978-3-030-01090-4

Library of Congress Control Number: 2018955278

LNCS Sublibrary: SL2 – Programming and Software Engineering

© Springer Nature Switzerland AG 2018


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the
material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now
known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are
believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors
give a warranty, express or implied, with respect to the material contained herein or for any errors or
omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface

This volume contains the papers presented at the 16th International Symposium on
Automated Technology for Verification and Analysis (ATVA 2018) held during
October 7–10, 2018, in Los Angeles, California, USA.
ATVA is a series of symposia dedicated to the promotion of research on theoretical
and practical aspects of automated analysis, verification, and synthesis by providing a
forum for interaction between the regional and the international research communities
and industry in the field. Previous events were held in Taipei (2003–2005), Beijing
(2006), Tokyo (2007), Seoul (2008), Macao (2009), Singapore (2010), Taipei (2011),
Thiruvananthapuram (2012), Hanoi (2013), Sydney (2014), Shanghai (2015), Chiba
(2016), and Pune (2017).
ATVA 2018 received 82 high-quality paper submissions, each of which received
three reviews on average. After careful review, the Program Committee accepted 27
regular papers and six tool papers. The evaluation and selection process involved
thorough discussions among members of the Program Committee through the EasyChair conference management system, before reaching a consensus on the final
decisions.
To complement the contributed papers, we included in the program three keynote
talks and tutorials given by Nikolaj Bjørner (Microsoft Research, USA), Corina
Păsăreanu (NASA Ames Research Center, USA), and Sanjit Seshia (University of
California, Berkeley, USA), resulting in an exceptionally strong technical program.
We would like to acknowledge the contributions that made ATVA 2018 a successful event. First, we thank the authors of all submitted papers, and we hope that they continue to submit their high-quality work to ATVA in the future. Second, we thank the
Program Committee members and external reviewers for their rigorous evaluation
of the submitted papers. Third, we thank the keynote speakers for enriching the program by presenting their distinguished research. Finally, we thank Microsoft and
Springer for sponsoring ATVA 2018.
We sincerely hope that the readers find the ATVA 2018 proceedings informative
and rewarding.

October 2018

Shuvendu K. Lahiri
Chao Wang
Organization

Steering Committee
E. Allen Emerson University of Texas, Austin, USA
Teruo Higashino Osaka University, Japan
Oscar H. Ibarra University of California, Santa Barbara, USA
Insup Lee University of Pennsylvania, USA
Doron A. Peled Bar Ilan University, Israel
Farn Wang National Taiwan University, Taiwan
Hsu-Chun Yen National Taiwan University, Taiwan

Program Committee
Aws Albarghouthi University of Wisconsin-Madison, USA
Cyrille Valentin Artho KTH Royal Institute of Technology, Sweden
Gogul Balakrishnan Google, USA
Roderick Bloem Graz University of Technology, Austria
Tevfik Bultan University of California, Santa Barbara, USA
Pavol Cerny University of Colorado at Boulder, USA
Sagar Chaki Mentor Graphics, USA
Deepak D’Souza Indian Institute of Science, Bangalore, India
Jyotirmoy Deshmukh University of Southern California, USA
Constantin Enea University Paris Diderot, France
Grigory Fedyukovich Princeton University, USA
Masahiro Fujita The University of Tokyo, Japan
Sicun Gao University of California, San Diego, USA
Arie Gurfinkel University of Waterloo, Canada
Fei He Tsinghua University, China
Alan J. Hu The University of British Columbia, Canada
Joxan Jaffar National University of Singapore, Singapore
Akash Lal Microsoft, India
Axel Legay IRISA/Inria, Rennes, France
Yang Liu Nanyang Technological University, Singapore
Zhiming Liu Southwest University, China
K. Narayan Kumar Chennai Mathematical Institute, India
Doron Peled Bar Ilan University, Israel
Xiaokang Qiu Purdue University, USA
Giles Reger The University of Manchester, UK
Sandeep Shukla Indian Institute of Technology Kanpur, India
Oleg Sokolsky University of Pennsylvania, USA
Armando Solar-Lezama MIT, USA
Neeraj Suri TU Darmstadt, Germany
Aditya Thakur University of California, Davis, USA


Willem Visser Stellenbosch University, South Africa
Bow-Yaw Wang Academia Sinica, Taiwan
Farn Wang National Taiwan University, Taiwan
Georg Weissenbacher Vienna University of Technology, Austria
Naijun Zhan Chinese Academy of Sciences, China

Additional Reviewers

Akshay, S.
Alt, Leonardo
Bastani, Osbert
Bouajjani, Ahmed
Chen, Mingshuai
Chen, Wei
Chen, Zhenbang
Ebrahim, Masoud
Ge, Ning
Habermehl, Peter
Jansen, Nils
Karl, Anja
Katelaan, Jens
Katis, Andreas
Khalimov, Ayrat
Koenighofer, Bettina
Lewchenko, Nicholas V.
Li, Yangjia
Lin, Wang
Liu, Wanwei
Luo, Chen
Maghareh, Rasool
Matteplackel, Raj Mohan
McClurg, Jedidiah
Mordvinov, Dmitry
Pick, Lauren
Poulsen, Danny
Poulsen, Dany
Praveen, M.
Qamar, Nafees
Quilbeuf, Jean
Riener, Martin
Sangnier, Arnaud
Sankaranarayanan, Sriram
Song, Fu
Srivathsan, B.
Suresh, S. P.
Ting, Gan
Tizpaz Niari, Saeid
Trung, Ta Quang
Wang, Chao
Wang, Lingtai
Xue, Bai
Yang, Zhibin
Yin, Liangze
Zhang, Xin
Z3^Azure and Azure^Z3
(Abstract)

Nikolaj Bjørner^1, Marijn Heule^2, Karthick Jayaraman^3, and Rahul Kumar^1

1 Microsoft Research, {nbjorner,rahulku}@microsoft.com
2 UT Austin, marijn@heule.nl
3 Microsoft Azure, karjay@microsoft.com

Azure to the power of Z3: Cloud providers are increasingly embracing network
verification for managing complex datacenter network infrastructure. Microsoft’s
Azure cloud infrastructure integrates the SecGuru tool, which leverages the Z3 SMT
solver, for assuring that the network is configured to preserve desired intent. SecGuru
statically validates correctness of access-control policies and routing tables of hundreds
of thousands of network devices. For a structured network such as for Microsoft Azure
data-centers, the intent for routing tables and access-control policies can be automat-
ically derived from network architecture and metadata about address ranges hosted in
the datacenter. We leverage this aspect to perform local checks on a per router basis.
These local checks together assure reachability invariants for availability and perfor-
mance. To make the service truly scalable, while using modest resources, SecGuru
integrates a set of domain-specific optimizations that exploit properties of the config-
urations it needs to handle. Our experiences exemplify integration of general purpose
verification technologies, in this case bit-vector solving, for a specific domain: the
overall methodology available through SMT formalisms lowers the barrier of entry for
capturing and checking contracts. They also alleviate initial needs for writing custom
solvers. However, each domain reveals specific structure that produces new insights in
the quest for scalable verification: for checking reachability properties in networks,
methods for capturing symmetries in networks and header spaces can speed up veri-
fication by several orders of magnitude; for local checks on data-center routers we
exploit common patterns in configurations to reduce the time needed to check a contract from a few seconds to a few milliseconds.
Z3 to the power of Azure: Applications that rely on constraint solving may be in a
fortunate situation where efficient solving technologies are adequately available. Sev-
eral tools building on Z3 rely on this being the common case scenario. Then useful
feedback can be produced within the attention span of a person performing program
verification, and then the available compute time on a machine or a cluster is sufficient
to complete thousands of small checks. As long as this fortunate situation holds, search techniques for SMT are a solved problem. Yet, in spite of rumors to the contrary, SAT and SMT are by no means solved problems. When current algorithmic techniques, in


particular modern SAT solving search methods based on conflict-driven clause
learning, are insufficient to quickly find solutions, a natural next remedy is to harness additional computational resources. In the context of SAT solving, the method of Cube & Conquer has been used with significant success to solve hard combinatorial problems
from mathematical conjectures. These are relatively small formulas, but require a
substantial search space for analysis. Formulas from applications, such as scheduling
and timetabling, are significantly larger and have wildly different structural properties.
In spite of these differences, we found that the Cube & Conquer methodology can make
a substantial difference as fixing even a limited number of variables can drastically
reduce the overhead of solving subformulas. We describe a distributed version of Z3 that
scales with Azure’s elastic cloud. It integrates recent advances in lookahead and dis-
tributed SAT solving for Z3’s engines for SMT. A multi-threaded version of the Cube
& Conquer solver is also available, parallelizing SAT and SMT queries.
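To illustrate the Cube & Conquer idea in its simplest form, here is a rough sketch using Z3's Python API (our illustration, not the distributed implementation described in this talk; lookahead-based selection of the split variables is omitted). Each assignment of signs to a few split variables is a cube that strengthens the formula, and the resulting subproblems are independent, so a distributed solver can farm them out to separate workers.

```python
from itertools import product
from z3 import Not, Solver, sat, unsat

def cube_and_conquer(formula, split_vars):
    """Sequential sketch: enumerate the 2^k cubes over `split_vars` and
    solve each strengthened subformula independently."""
    for signs in product([True, False], repeat=len(split_vars)):
        cube = [v if s else Not(v) for v, s in zip(split_vars, signs)]
        s = Solver()
        s.add(formula)
        if s.check(*cube) == sat:  # solve under the cube's assumptions
            return sat, s.model()
    return unsat, None
```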
Contents

Invited Papers

DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks . . . 3
Divya Gopinath, Guy Katz, Corina S. Păsăreanu, and Clark Barrett

Formal Specification for Deep Neural Networks . . . 20
Sanjit A. Seshia, Ankush Desai, Tommaso Dreossi, Daniel J. Fremont, Shromona Ghosh, Edward Kim, Sumukh Shivakumar, Marcell Vazquez-Chanlatte, and Xiangyu Yue

Regular Papers

Optimal Proofs for Linear Temporal Logic on Lasso Words . . . 37
David Basin, Bhargav Nagaraja Bhatt, and Dmitriy Traytel

What's to Come is Still Unsure: Synthesizing Controllers Resilient to Delayed Interaction . . . 56
Mingshuai Chen, Martin Fränzle, Yangjia Li, Peter N. Mosaad, and Naijun Zhan

A Formally Verified Motion Planner for Autonomous Vehicles . . . 75
Albert Rizaldi, Fabian Immler, Bastian Schürmann, and Matthias Althoff

Robustness Testing of Intermediate Verifiers . . . 91
YuTing Chen and Carlo A. Furia

Simulation Algorithms for Symbolic Automata . . . 109
Lukáš Holík, Ondřej Lengál, Juraj Síč, Margus Veanes, and Tomáš Vojnar

Quantitative Projection Coverage for Testing ML-enabled Autonomous Systems . . . 126
Chih-Hong Cheng, Chung-Hao Huang, and Hirotoshi Yasuoka

Recursive Online Enumeration of All Minimal Unsatisfiable Subsets . . . 143
Jaroslav Bendík, Ivana Černá, and Nikola Beneš

Synthesis in pMDPs: A Tale of 1001 Parameters . . . 160
Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, and Ufuk Topcu

Temporal Logic Verification of Stochastic Systems Using Barrier Certificates . . . 177
Pushpak Jagtap, Sadegh Soudjani, and Majid Zamani

Bisimilarity Distances for Approximate Differential Privacy . . . 194
Dmitry Chistikov, Andrzej S. Murawski, and David Purser

A Symbolic Algorithm for Lazy Synthesis of Eager Strategies . . . 211
Swen Jacobs and Mouhammad Sakr

Modular Verification of Concurrent Programs via Sequential Model Checking . . . 228
Dan Rasin, Orna Grumberg, and Sharon Shoham

Quantifiers on Demand . . . 248
Arie Gurfinkel, Sharon Shoham, and Yakir Vizel

Signal Convolution Logic . . . 267
Simone Silvetti, Laura Nenzi, Ezio Bartocci, and Luca Bortolussi

Efficient Symbolic Representation of Convex Polyhedra in High-Dimensional Spaces . . . 284
Bernard Boigelot and Isabelle Mainz

Accelerated Model Checking of Parametric Markov Chains . . . 300
Paul Gainer, Ernst Moritz Hahn, and Sven Schewe

Continuous-Time Markov Decisions Based on Partial Exploration . . . 317
Pranav Ashok, Yuliya Butkova, Holger Hermanns, and Jan Křetínský

A Fragment of Linear Temporal Logic for Universal Very Weak Automata . . . 335
Keerthi Adabala and Rüdiger Ehlers

Quadratic Word Equations with Length Constraints, Counter Systems, and Presburger Arithmetic with Divisibility . . . 352
Anthony W. Lin and Rupak Majumdar

Round-Bounded Control of Parameterized Systems . . . 370
Benedikt Bollig, Mathieu Lehaut, and Nathalie Sznajder

PSense: Automatic Sensitivity Analysis for Probabilistic Programs . . . 387
Zixin Huang, Zhenbang Wang, and Sasa Misailovic

Information Leakage in Arbiter Protocols . . . 404
Nestan Tsiskaridze, Lucas Bang, Joseph McMahan, Tevfik Bultan, and Timothy Sherwood

Neural State Classification for Hybrid Systems . . . 422
Dung Phan, Nicola Paoletti, Timothy Zhang, Radu Grosu, Scott A. Smolka, and Scott D. Stoller

Bounded Synthesis of Reactive Programs . . . 441
Carsten Gerstacker, Felix Klein, and Bernd Finkbeiner

Maximum Realizability for Linear Temporal Logic Specifications . . . 458
Rayna Dimitrova, Mahsa Ghasemi, and Ufuk Topcu

Ranking and Repulsing Supermartingales for Reachability in Probabilistic Programs . . . 476
Toru Takisaka, Yuichiro Oyabu, Natsuki Urabe, and Ichiro Hasuo

Bounded Synthesis of Register Transducers . . . 494
Ayrat Khalimov, Benedikt Maderbacher, and Roderick Bloem

Tool Papers

ETHIR: A Framework for High-Level Analysis of Ethereum Bytecode . . . 513
Elvira Albert, Pablo Gordillo, Benjamin Livshits, Albert Rubio, and Ilya Sergey

MGHyper: Checking Satisfiability of HyperLTL Formulas Beyond the ∃*∀* Fragment . . . 521
Bernd Finkbeiner, Christopher Hahn, and Tobias Hans

Verifying Rust Programs with SMACK . . . 528
Marek Baranowski, Shaobo He, and Zvonimir Rakamarić

SBIP 2.0: Statistical Model Checking Stochastic Real-Time Systems . . . 536
Braham Lotfi Mediouni, Ayoub Nouri, Marius Bozga, Mahieddine Dellabani, Axel Legay, and Saddek Bensalem

Owl: A Library for ω-Words, Automata, and LTL . . . 543
Jan Křetínský, Tobias Meggendorfer, and Salomon Sickert

EVE: A Tool for Temporal Equilibrium Analysis . . . 551
Julian Gutierrez, Muhammad Najib, Giuseppe Perelli, and Michael Wooldridge

Author Index . . . 559


Invited Papers
DeepSafe: A Data-Driven Approach for Assessing Robustness of Neural Networks

Divya Gopinath^1, Guy Katz^3, Corina S. Păsăreanu^1,2, and Clark Barrett^3

1 Carnegie Mellon University Silicon Valley, Mountain View, USA, divgml@gmail.com
2 NASA Ames Research Center, Mountain View, USA, corina.s.pasareanu@nasa.gov
3 Stanford University, Stanford, USA, {guyk,barrett}@cs.stanford.edu

Abstract. Deep neural networks have achieved impressive results in many complex applications, including classification tasks for image and speech recognition, pattern analysis, and perception in self-driving vehicles. However, it has been observed that even highly trained networks are very vulnerable to adversarial perturbations. Adding minimal changes to inputs that are correctly classified can lead to wrong predictions, raising serious security and safety concerns. Existing techniques for checking robustness against such perturbations only consider searching locally around a few individual inputs, providing limited guarantees. We propose DeepSafe, a novel approach for automatically assessing the overall robustness of a neural network. DeepSafe applies clustering over known labeled data and leverages off-the-shelf constraint solvers to automatically identify and check safe regions in which the network is robust, i.e., all the inputs in the region are guaranteed to be classified correctly. We also introduce the concept of targeted robustness, which ensures that the neural network is guaranteed not to misclassify inputs within a region to a specific target (adversarial) label. We evaluate DeepSafe on a neural network implementation of a controller for the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu) and on the well-known MNIST network. For these networks, DeepSafe identified many regions which were safe, and also found adversarial perturbations of interest.

1 Introduction

Machine learning techniques such as deep neural networks (NN) are increasingly
used in a variety of applications, achieving impressive results in many domains,
and matching the cognitive ability of humans in complex tasks. In this paper, we
study a common use of NN as classifiers that take in complex, high dimensional
input, pass it through multiple layers of transformations, and finally assign to it
a specific output label or class. These networks can be used to perform pattern
analysis, image classification, or speech and audio recognition, and are now being
integrated into the perception modules of autonomous or semi-autonomous vehicles, at major car companies such as Tesla, BMW, Ford, etc. It is expected that
this trend will continue and intensify, with neural networks being increasingly
used in safety critical applications which require high assurance guarantees.
However, it has been observed that state-of-the-art networks are highly vulnerable to adversarial perturbations: given a correctly-classified input x, it is possible to find a new input x′ that is very similar to x but is assigned a different label [17]. The vulnerability of neural networks to adversarial perturbations is thus a major safety and security concern, and it is essential to explore systematic methods for evaluating and improving the robustness of neural networks against such attacks.
To date, researchers have mostly focused on efficiently finding adversarial perturbations around select individual input points. The problem is typically cast as an optimization problem: for a given network F and an input x, find an x′ such that F assigns different labels to x and x′ (denoted F(x′) ≠ F(x)), while minimizing the distance ‖x − x′‖, for different distance metrics. In other words, the goal is to find an input x′ as close as possible to x such that x and x′ are labeled differently. Finding the optimal solution for this problem is computationally difficult, and so various approximation approaches have been proposed [3,5,6,17]. There are also techniques that focus on generating targeted attacks: adversarial perturbations that result in the network assigning the perturbed input a specific target label [5,6,17].
The approaches for finding adversarial perturbations have successfully demonstrated the weakness of many state-of-the-art networks. However, using these techniques to assess a network's robustness against adversarial perturbations is difficult, for two reasons. First, these approaches are heuristic-based, and provide no formal guarantees that they have not overlooked some adversarial perturbations; and second, these approaches only operate on individual input points, which may not be indicative of the network's robustness around other points. Approaches have also been proposed for training networks that are robust against adversarial perturbations, but these, too, provide no formal assurances [13].
Formal methods present a promising way for obtaining formal guarantees about the robustness of networks. Recent approaches tackle neural network verification [7,10] by casting it as an SMT solving problem. Reluplex [10] can check the local robustness at an input point x by checking if there is another point x′ within a close distance δ to x (‖x − x′‖ < δ) for which the network assigns a different label. The initial value δ is picked arbitrarily and is tightened iteratively until the check holds. Similarly, DLV [7] searches locally around given inputs x within a small δ distance, but unlike Reluplex, it adopts discretization of the input space to reduce the search space.
These techniques are still limited to checking local robustness around a few individual points, giving no indication about the overall robustness of the network. In principle, one can apply the local check to a set of inputs that are drawn from some random distribution thought to represent the input space. However, this would require coming up with minimally acceptable distance δ values for all these checks, which can vary greatly between different input points. Furthermore, the check will likely fail (and produce invalid adversarial examples) for the input points that are close to the legitimate boundaries between different labels.
The concept of global robustness is defined in [10,11], and could be checked by Reluplex (we give the formal definition in Sect. 6). Whereas local robustness is measured for a specific input point, global robustness applies to all inputs simultaneously. The check requires encoding two side-by-side copies of the NN in question, operating on separate input variables, and checking that the outputs are similar. Global adversarial robustness is harder to prove than local robustness for two reasons: (1) encoding two copies of the network results in twice as many network nodes to analyze, and (2) the problem is not restricted to a small domain, and therefore cannot take advantage of Reluplex's heuristics, which work best when the check is restricted to small neighborhoods. Furthermore, it requires more manual tuning, as the user needs to provide minimally acceptable values for two parameters (δ and ε). As a result, this global check could not be applied in practice to any realistic network [10].

Thus the problem of assessing the overall robustness of a network remains open.

DeepSafe. We propose DeepSafe, an automatic, data-driven approach for assessing the overall robustness of a neural network. The key idea is to effectively leverage the inputs with known correct labels to automatically identify regions in the input space which have a high chance of being labeled consistently or uniformly, thereby making global robustness checks achievable.

Specifically, DeepSafe applies a clustering algorithm over the labeled inputs to group them into clusters, which are sets of inputs that are close to each other (with respect to some given distance metric) and share the same label. The labeled inputs can be drawn from the training set or can be generated directly by executing the network on some random inputs; in the latter case the user needs to validate that the labels are correct. We use k-means clustering [9], which we modified to be guided by the labels of known inputs, but other clustering algorithms could be employed as well.
Each cluster defines a region which is a hypersphere in the input space, where the centroid is automatically computed by the clustering algorithm and the radius is defined as the average distance of any element in the cluster from the centroid. Our hypothesis is that the NN should assign the same label to all the inputs in the region, not just to the elements (inputs with known labels) that are used to form the cluster. The rationale is that even highly non-linear networks should display consistent behavior in the neighborhoods of groups of similar inputs known to share the same true label. We check this hypothesis by formulating it as a query for an off-the-shelf constraint solver. For our experiments we use the state-of-the-art tool Reluplex, but other solvers can be used. If the solver finds no solutions, it means that the region is safe (all inputs are labeled the same). If a solution is found, this indicates a potential adversarial input. We use the adversarial example as feedback to iteratively reduce the
radius of the region, and repeat the check until the region is proved to be safe. If a time-out occurs during this process, we cannot provide any guarantee for that region (although the likely answer is that it is safe). The result of the analysis is a set of well-defined input regions in which the NN is guaranteed to be robust, and a set of examples of adversarial inputs that may be of interest to the developers of the NN and can be used in retraining.
Thus, DeepSafe decomposes the global robustness requirement into a set of local robustness checks, one for each region, which can be solved efficiently with a tool like Reluplex. These checks do not require two network copies and are restricted to small input domains, as defined by the regions. The distance used for the local checks is not picked arbitrarily as in previous approaches; instead it is the radius computed for each region, backed up by the labeled data. Furthermore, regions define natural decision boundaries in the input domain, thereby increasing the chances of producing valid proofs or of finding valid adversarial examples.
We introduce the concept of targeted robustness, in line with targeted attacks [5,6,17]. Targeted robustness ensures that there are no inputs in a region that are mapped by the NN to a specific incorrect label. Therefore, even if in that region the NN is not completely robust against adversarial perturbations, we can give guarantees that it is robust against specific targeted attacks. As a simple example, consider a NN used for perception in an autonomous car that classifies the images of a traffic light as red, green, or yellow. Even if it is not the case that the NN never misclassifies a red light, we may be willing to settle for a guarantee that it never misclassifies a red light as a green light, leaving us with the more tolerable case in which a red light is misclassified as yellow.
We implemented a prototype of DeepSafe and evaluated it on a neural network implementation of a controller for the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu) and on the well-known MNIST dataset. For these networks, our approach identified regions which were completely safe, regions which were safe with respect to some target labels, and also adversarial perturbations that were of interest to the network's developers.

2 Background
2.1 Neural Networks
Neural networks and deep belief networks have been used in a variety of applications, including pattern analysis, image classification, speech and audio recognition, and perception modules in self-driving cars. Typically, the objects in such domains are high dimensional, and the number of classes that the objects need to be classified into is also high, so the classification functions tend to be highly non-linear over the input space. Deep learning operates with the underlying rationale that groups of input parameters can be merged to derive higher-level abstract features, which enable the discovery of a more linear and continuous classification function. Neural networks are often used as classifiers, meaning they assign to each input an output label/class. Such a neural network
F can thus be regarded as a function that assigns to input x an output label y, denoted as F(x) = y.
Internally, a neural network is comprised of multiple layers of nodes, called neurons. Each node refines and extracts information from values computed by nodes in the previous layer. The structure of a typical 3-layer neural network would be as follows: the first layer is the input layer, which takes in the input variables (also called features) x1, x2, ..., xn. The second layer is a hidden layer: each of its neurons computes a weighted sum of the input variables according to a unique weight vector and a bias value, and then applies a non-linear activation function to the result. The sigmoid function (f(x) = 1/(1 + e^{-x})) is a widely used activation function. Most recent networks use rectified linear unit (ReLU) activation functions. A rectified linear unit has output 0 if the input is less than 0, and the raw output otherwise: f(x) = max(x, 0). The last layer uses a softmax function to assign an output class to the input. The softmax function squashes the outputs of each node of the previous layer to be between 0 and 1, forming a categorical probability distribution. The number of nodes in this layer is equal to the number of output classes, and their respective outputs give the probability of the input being classified to each class.
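To make this structure concrete, here is a minimal NumPy sketch of such a classifier; the dimensions, weights, and function names are illustrative only, not taken from the paper.

```python
import numpy as np

def relu(z):
    # Rectified linear unit: 0 for inputs below 0, raw value otherwise.
    return np.maximum(z, 0.0)

def softmax(z):
    # Squashes raw scores into values in (0, 1) that sum to 1.
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

def classify(x, W1, b1, W2, b2):
    # Hidden layer: weighted sum per neuron (weight vector + bias), then ReLU.
    hidden = relu(W1 @ x + b1)
    # Output layer: one raw score per class, then softmax.
    probs = softmax(W2 @ hidden + b2)
    return int(np.argmax(probs))  # F(x) = y, the predicted label

# Illustrative sizes: 5 input features, 300 hidden units, 5 classes.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(300, 5)), np.zeros(300)
W2, b2 = rng.normal(size=(5, 300)), np.zeros(5)
print(classify(rng.normal(size=5), W1, b1, W2, b2))
```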

2.2 Neural Network Verification

Traditional verification techniques often cannot be directly applied to neural networks, and this has sparked a line of work focused on transforming the problem into a format more amenable to existing tools, such as LP and SMT solvers [4,7,15,16]. DeepSafe, while not restricted to it, currently uses the recently-proposed Reluplex tool, which has been shown to perform better than other solvers, such as Z3, CVC4, Yices or MathSat [10]. Reluplex is a sound and complete simplex-based verification procedure, specifically tailored to achieve scalability on deep neural networks. It is designed to operate on networks with piecewise-linear activation functions, such as ReLU. Intuitively, the algorithm operates by eagerly solving the linear constraints posed by the neural network's weighted sums, while attempting to satisfy the non-linear constraints posed by its activation functions in a lazy manner. This often allows Reluplex to safely disregard many of these non-linear constraints, which is where the bulk of the problem's complexity originates.
Reluplex has been used in evaluating techniques for finding and defending
against adversarial perturbations [2], and it has also been successfully applied to
a real-world family of deep neural networks, designed to operate as controllers in
the next-generation Airborne Collision Avoidance System for unmanned aircraft
(ACAS Xu) [10]. However, as discussed, Reluplex could only be used to check
local robustness around a few individual points, giving limited guarantees about
the overall robustness of these networks.

2.3 Clustering
Clustering is an approach used to divide a population of data-points into sets
called clusters, such that the data-points in each cluster are more similar (with
respect to some metric) to other points in the same cluster than to the rest of
the data-points. Here we focus on a particularly popular clustering algorithm
called kMeans [9] (although our approach could be implemented using different
clustering algorithms as well). Given a set of n data-points {x1 , . . . , xn } and k as
the desired number of clusters, the algorithm partitions the points into k clusters,
such that the variance (also referred to as “within cluster sum of squares”)
within each cluster is minimal. The metric used to calculate the distance between
points is customizable, and is typically the Euclidean distance (L2 norm) or
the Manhattan distance (L1 norm). For points x¹ = ⟨x¹₁, . . . , x¹ₙ⟩ and x² = ⟨x²₁, . . . , x²ₙ⟩, these are defined as:

\[ \|x^1 - x^2\|_{L_1} = \sum_{i=1}^{n} |x^1_i - x^2_i|, \qquad \|x^1 - x^2\|_{L_2} = \sqrt{\sum_{i=1}^{n} (x^1_i - x^2_i)^2} \tag{1} \]

The kMeans clustering algorithm is an iterative refinement algorithm, which


starts with k random points considered as the means (the centroids) of k clusters.
Each iteration is then comprised of two main steps: (i) assign each data-point to
the cluster whose centroid is closest to it with respect to the chosen distance
metric; and (ii) re-calculate the new means of the clusters, which will serve as
the new centroids. The iterations continue until the assignment of data-points to
clusters does not change. This indicates that the clusters satisfy the constraints
that the variance within each cluster is minimal and that the data-points within
each cluster are closer to the centroid of that cluster than to the centroid of any
other cluster.
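For reference, a minimal sketch of this two-step iteration with a pluggable distance metric follows (illustrative only; it omits refinements such as smart initialization).

```python
import numpy as np

def kmeans(points, k, metric_ord=2, max_iter=100, seed=0):
    """Plain kMeans with a pluggable metric (metric_ord=1 for L1, 2 for L2)."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)].astype(float)
    assign = None
    for _ in range(max_iter):
        # Step (i): assign each data-point to its closest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :],
                               ord=metric_ord, axis=2)
        new_assign = dists.argmin(axis=1)
        if assign is not None and np.array_equal(assign, new_assign):
            break  # assignments unchanged: clusters are stable
        assign = new_assign
        # Step (ii): re-calculate the means, which serve as the new centroids.
        for j in range(k):
            members = points[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, assign
```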

3 The DeepSafe Approach


The DeepSafe approach is illustrated in Fig. 1. DeepSafe has two main components: clustering and verification. The inputs with known labels are fed into a clustering module which implements a modified version of the kMeans algorithm (as described in Sect. 3.1). The module generates clusters of similar inputs (with respect to some distance metric) known to have the same label. Every such cluster defines a region characterized by a centroid (cen), radius (r), and label (l), which corresponds to the label of the inputs used to form the cluster. Thus, a cluster is a subset of the inputs, while a region is the geometrical hypersphere defined by the cluster; for the rest of the paper we sometimes use regions and clusters interchangeably, when the meaning is clear from the context.
For every region, the verification module is invoked (as described in Sect. 3.2).
This module uses Reluplex to check if there exists any input within the region for
which the neural network assigns a different label than l (the score in the figure
will be explained below). This check is done separately for each label l′ other than l.

Fig. 1. The DeepSafe approach

If such an input is found (the formula is SAT), then this is a potential adversarial example that is reported to the user. The radius is then reduced to exclude the
adversarial input and the check is repeated. When no such adversarial example
is found, the network is declared to be robust with respect to target l’, and
correspondingly the region is declared targeted safe for that label l’. If for a
particular l’, the solver keeps finding adversarial examples until r gets reduced
to 0, the region is considered unsafe w.r.t. that label. If no adversarial examples are found in any of the checks, the region is completely safe. If adversarial examples are found for all labels, it is unsafe.

3.1 Labeled-Guided Clustering

We employ a modified version of the kMeans clustering algorithm to perform


clustering over the inputs with known correct labels. These inputs can be drawn
from the training data or can be generated randomly and labeled according
to the outputs given by a trained network. The kMeans approach is typically
an unsupervised technique, meaning that the clustering is based purely on the
similarity of the data-points themselves. Here, however, we use the labels to guide
the clustering algorithm into generating clusters that have consistent labels (in
addition to containing points that are similar to each other). The number of
clusters, which is an input parameter of the kMeans algorithm, is often chosen
arbitrarily but in our case the algorithm starts by setting the number of clusters,
k, to be equal to the number of unique labels. Once the clusters are obtained, we
check whether each cluster contains only inputs with the same label. kMeans is
then applied again on each cluster that is found to contain multiple labels, with k
set to the number of unique labels within that cluster. This effectively breaks the "problematic" cluster into multiple sub-clusters. The process is repeated until all the clusters contain inputs which share a single label.
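A minimal sketch of this label-guided recursion, written on top of scikit-learn's KMeans, is given below (the function name is ours; the paper's implementation used MATLAB, and note that scikit-learn's KMeans supports only the Euclidean metric).

```python
import numpy as np
from sklearn.cluster import KMeans

def label_guided_clusters(X, y, seed=0):
    """Recursively re-cluster until every cluster is label-pure. Assumes
    points with identical coordinates share a label; no termination guard
    for pathological data."""
    labels = np.unique(y)
    if len(labels) == 1:
        return [(X, y)]  # pure cluster: all members share a single label
    # k starts at the number of unique labels in this subset.
    km = KMeans(n_clusters=len(labels), n_init=10, random_state=seed).fit(X)
    clusters = []
    for j in range(len(labels)):
        mask = km.labels_ == j
        if mask.any():
            # Sub-clusters that still mix labels are broken up further.
            clusters += label_guided_clusters(X[mask], y[mask], seed)
    return clusters
```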
To illustrate the clustering, let us consider a small example with training data labeled as either stars or circles. Each training data point is characterized by two dimensions/attributes (x, y). The original kMeans algorithm with k = 2 will partition the training inputs into 2 groups, purely based on proximity w.r.t. the 2 attributes (Fig. 2a). However, this simple approach would group stars and circles together. Our modified algorithm creates the same partitions in its first iteration, but because each cluster contains training inputs with multiple labels, it then proceeds to iteratively divide each cluster into sub-clusters. This finally creates 5 clusters, as shown in Fig. 2b: three with label star and two with label circle. This example is typical for domains such as image classification, where even a small change in some attribute value for certain inputs could change their label.

Fig. 2. Original clusters with k = 2 (left); clusters with modified kMeans (right)

Distance Metrics. We note that the similarity of the inputs within a cluster
is determined by the distance metric used for calculating the proximity of the
inputs. Therefore, it is important to choose a distance metric that generates
acceptable levels of similarity for the domain under consideration. We assume
that inputs are representable as vectors of numeric attributes and hence can
be considered as points in Euclidean space. The Euclidean distance (Eq. 1) is
a commonly used metric for measuring the proximity of points in Euclidean
space. However, recent studies indicate that the usefulness of Euclidean distance
in determining the proximity between points diminishes as the dimensionality
increases [1]. The Manhattan distance (Eq. 1) has been found to capture prox-
imity more accurately in high dimensions. Therefore, in our experiments, we set
the distance metric depending on the dimensionality of the input space (Sect. 4).
The clustering algorithm aims to produce neighborhoods of consistently labeled inputs. The underlying assumption of our approach is that each such cluster will define a safe region, in which all inputs (and not just the inputs used to form the cluster) should be labeled consistently. We define the regions to have the centroid cen computed by kMeans and the radius to be the average distance r of any instance from the centroid. Note that we use the average instead of the maximum distance. The reason is that kMeans typically generates clusters that are convex, compact, and spherically shaped; however, the ideal boundaries of regions encompassing consistently labeled points need not conform to this. Further, while inputs deep within a region are expected to be labeled consistently, the points that are close to the boundaries may have different labels. We therefore shrink the radius to increase the chances of obtaining a region that is indeed safe.
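In code, deriving a region from a pure cluster might look as follows (a sketch under the same assumptions as above; `metric_ord` selects L1 or L2):

```python
import numpy as np

def region_of_cluster(members, label, metric_ord=2):
    """Derive a region (cen, r, l) from a label-pure cluster: the centroid,
    the *average* member distance as the radius, and the shared label."""
    cen = members.mean(axis=0)
    dists = np.linalg.norm(members - cen, ord=metric_ord, axis=1)
    return cen, dists.mean(), label  # average (not max) keeps boundaries out
```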

3.2 Region Verification


In this step, we check if the regions defined by the clusters formed in the previous
step are safe for a given NN. The main hypothesis behind our approach is as
follows:
Hypothesis 1. For a given region R, with centroid cen and radius r, any input
x within distance r from cen has the same true label l as that of the region:

\[ \|x - cen\| \le r \;\Rightarrow\; label(x) = l \]
If this hypothesis holds, it follows that any point x′ in the region which is assigned a different label F(x′) ≠ l by the network constitutes an adversarial perturbation. To illustrate this on our example: a NN may represent an input which is close to other stars in the input layer as a point farther away from them in the inner layers. Therefore, it may incorrectly classify it as a circle.
We use Reluplex [10] to check the satisfiability of a formula representing the negation of Hypothesis 1, for every cluster and for every label. The encoding is shown in Eq. 2:

\[ \exists x'.\; \|x' - cen\|_{L_1} \le r \;\wedge\; score(x', l') \ge score(x', l) \tag{2} \]

Here, x′ represents an input point, and cen, r, and l represent the centroid, radius, and label of the region, respectively. l′ represents a specific label, different from l. Reluplex models the network without the final softmax layer, and so the outputs of the model correspond to the outputs of the second-to-last layer of the NN [REF]. This layer consists of as many nodes as there are labels, and the output value of each node corresponds to the level of confidence that the network assigns to that label for the given input. We use score(x, y) to denote the level of confidence assigned to label y at point x.
The above formula holds for a given l′ if and only if there exists a point x′ within distance at most r from cen for which l′ is assigned higher confidence than l. Consequently, if the property does not hold, then for every x′ within the region, the score of l is higher than that of l′. This ensures targeted robustness of the network for label l′: the network is guaranteed to never map any input within the region to the target label l′. The formula in Eq. 2 is checked for every possible l′ ∈ L − {l}, where L denotes the set of all possible labels. If the query is unsatisfiable for all l′, this ensures complete robustness of the network for the region; i.e., the network is guaranteed to map all the inputs in the region to the same label as the centroid. This can be expressed formally as follows:

\[ \forall x'.\; \|x' - cen\|_{L_1} \le r \;\Rightarrow\; \forall l' \in L \setminus \{l\}.\; score(x', l) \ge score(x', l') \tag{3} \]



from which it follows that Hypothesis 1 holds, i.e. that:


\[ \forall x'.\; \|x' - cen\|_{L_1} \le r \;\Rightarrow\; label(x') = l \tag{4} \]
Note that, as is the case with many SMT-based solvers, Reluplex typically solves satisfiable queries more quickly than unsatisfiable ones. Therefore, in order to optimize performance, we test the possible target labels l′ in descending order based on the scores that they are assigned at the centroid, score(cen, l′). Intuitively, this is because labels with higher scores are more likely to yield a satisfiable query.
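Putting Sect. 3.2 together, the per-region check can be summarized by the following sketch. The solver hook `check_region` is hypothetical: it stands in for dispatching the Eq. 2 query to Reluplex and is not Reluplex's actual API; the radius-shrinking policy and omission of time-outs are likewise our simplifications.

```python
import numpy as np

def verify_region(score_at, cen, r, l, all_labels, check_region):
    """Region-verification loop of Fig. 1. `check_region(cen, radius, l, t)`
    is assumed to return None (UNSAT) or a point x2 (a NumPy array) with
    score(x2, t) >= score(x2, l)."""
    # Test target labels in descending order of score at the centroid:
    # likely-satisfiable queries are dispatched first.
    targets = sorted((t for t in all_labels if t != l),
                     key=lambda t: score_at(cen, t), reverse=True)
    safe_targets, adversarial = [], []
    for t in targets:
        radius = r
        while radius > 0:
            x2 = check_region(cen, radius, l, t)
            if x2 is None:
                safe_targets.append(t)  # targeted safe w.r.t. label t
                break
            adversarial.append(x2)      # potential adversarial input
            # Shrink the radius just enough to exclude the counterexample.
            radius = 0.99 * np.linalg.norm(x2 - cen, ord=1)
    if len(safe_targets) == len(targets):
        return "safe", safe_targets, adversarial
    return ("targeted safe" if safe_targets else "unsafe",
            safe_targets, adversarial)
```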

Encoding Distance Metrics in Reluplex. Reluplex takes as input a conjunction of linear equations and certain piecewise-linear constraints. Consequently, it is straightforward to model the neural network itself and the query in Eq. 2. Our ability to encode the distance constraint from the equation, ‖x′ − cen‖ ≤ r, depends on the distance metric being used. While L1 is piecewise linear and can be encoded, L2 unfortunately cannot. When dealing with domains where L2 distance is a better measure of proximity, we thus use the following approximation. We perform the clustering phase using the L2 distance metric as described before, and for each cluster obtain the radius r. When verifying the property in Eq. 2, however, we use the L1 norm. Because ‖x′ − cen‖_{L2} ≤ ‖x′ − cen‖_{L1}, the L1 ball of radius r is contained in the L2 region, so any adversarial perturbation discovered there is also an adversarial perturbation with respect to the L2 norm. If no such adversarial perturbation is discovered, however, we can only conclude that the portion of the corresponding region that was checked is safe. This limitation could be overcome by using a tool that directly supports L2 (however, no such tool currently exists), or by enhancing Reluplex to support it.
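The soundness of this substitution rests on a standard norm inequality; spelled out (our addition, not text from the paper):

\[ \|v\|_{L_2} = \sqrt{\textstyle\sum_i v_i^2} \;\le\; \textstyle\sum_i |v_i| = \|v\|_{L_1} \quad\Longrightarrow\quad \{\,x' : \|x' - cen\|_{L_1} \le r\,\} \subseteq \{\,x' : \|x' - cen\|_{L_2} \le r\,\}, \]

so a SAT answer to the L1-constrained query yields a point inside the original L2 region, while an UNSAT answer certifies only the smaller L1 ball.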

Safe Regions and Scalability. The main source of computational complexity in neural network verification is the presence of non-linear, non-convex activation functions. However, when restricted to a small domain of the input space, these functions may exhibit linear behavior, in which case they can be disregarded and replaced with a linear constraint, which greatly simplifies the problem. Consequently, performing verification within the small regions discovered by DeepSafe is beneficial, as many activation functions can often be disregarded.
The search space within a region is further reduced by restricting the range
of values for each input attribute (input variable to the NN model). We calculate
the minimum and maximum values for every attribute based on the instances
with known labels encompassed within a cluster. Reluplex has built-in bound
tightening [10] functionality. We leverage this by setting the lower and upper
bounds for each of the input variables within the cluster based on the respective
minimum and maximum values.
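As a sketch, computing these per-attribute bounds from a cluster's labeled instances is a one-liner (how the bounds are handed to Reluplex's bound tightening is tool-specific and omitted here):

```python
import numpy as np

def attribute_bounds(members):
    """Per-attribute [min, max] over a cluster's labeled instances, used to
    seed lower/upper bounds for each input variable of the query."""
    return members.min(axis=0), members.max(axis=0)
```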
Our approach also lends itself to more scalable verification through parallelization. Because each region involves stand-alone verification queries, the regions can be verified in parallel. Also, because Eq. 2 can be checked independently for every l′, these queries can themselves be performed in parallel, expediting the process even further.

4 Case Studies
We implemented DeepSafe using MATLAB R2017a for the clustering algorithm and Reluplex v1.0 for verification. The runs were dispatched on an 8-core 64 GB server running Ubuntu 16.04. We evaluated DeepSafe on two case studies. The first network is part of a real-world controller for the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu), a highly safety-critical system. The second network is a digit classifier over the popular MNIST image dataset.

Table 1. Summary of the analysis for the ACAS Xu network for 210 clusters

Property        # clusters   Min radius   Time (hours)   # queries
Safe            125          0.084        4              11.8
Targeted safe   52           0.135        7.6            14.4
Time out        33           NA           12             NA

4.1 ACAS Xu

ACAS X is a family of collision avoidance systems for aircraft which is currently under development by the Federal Aviation Administration (FAA) [8]. ACAS Xu is the version for unmanned aircraft control. It is intended to be airborne and receive sensor information regarding the drone (the ownship) and any nearby intruder drones, and then issue horizontal turning advisories aimed at preventing collisions. The input sensor data includes: (i) ρ: distance from ownship to intruder; (ii) θ: angle of intruder relative to ownship heading direction; (iii) ψ: heading angle of intruder relative to ownship heading direction; (iv) v_own: speed of ownship; (v) v_int: speed of intruder; (vi) τ: time until loss of vertical separation; and (vii) a_prev: previous advisory. The five possible output actions are as follows: Clear-of-Conflict (COC), Weak Right, Weak Left, Strong Right, and Strong Left. Each advisory is assigned a score, with the lowest score corresponding to the best action. The FAA is currently exploring an implementation of ACAS Xu that uses an array of 45 deep neural networks. These networks were obtained by discretizing the two parameters, τ and a_prev, and so each network has five input dimensions and treats τ and a_prev as constants. Each network has 6 hidden layers and a total of 300 hidden ReLU activation nodes.
We applied our approach to several of the ACAS Xu networks. We describe here in detail the results for one network. Each input consists of 5 dimensions and is assigned one of 5 possible output labels, corresponding to the 5 possible turning advisories for the drone (0: COC, 1: Weak Right, 2: Weak Left, 3: Strong Right, and 4: Strong Left). We were supplied a set of cut-points, representing valid important values for each dimension, by the domain experts [8]. We generated 2,662,704 inputs (the Cartesian product of the values for all the dimensions).

Table 2. Details of the analysis for some clusters for ACAS Xu

Cluster# (label)   Safe for label   Radius   # queries   Time (min)   Slice (Y/N)
5282 (label: 0)    1                0.04     1           5.45         N
                   2                0.04     1           3.91         N
                   3                0.04     1           3.57         N
                   4                0.04     1           4.01         N
1783 (label: 0)    1                0.16     4           1.28         Y
                   2                0.17     1           279          N
                   3                0.17     1           236          N
                   4                0.17     1           223          N
2072 (label: 1)    0                0.06     1           11.51        N
                   2                0.014    9           0.98         N
                   3                0.011    7           0.71         N
                   4                0.012    5           0.58         N
6138 (label: 0)    1                0.089    9           103.2        N
                   2                0.11     4           2.86         N

The network was executed on these inputs and the output advisories (labels) were verified. These were considered as the inputs with known labels for our experiments.
The labeled-guided clustering algorithm was applied on the inputs using the L2 distance metric. Clustering yielded 6145 clusters with more than one input and 321 single-input clusters. The clustering took 7 h. For each cluster we computed a region, characterized by a centroid (computed by kMeans), radius (average distance of every cluster instance from the centroid), and the expected label (the label of all the cluster instances).
We first evaluated the network on all the centroids, as they are considered representative of the entire cluster and should ideally have the expected label. The network assigned the expected label for the centroids of 5116 clusters (83% of the total number of clusters). For the remaining 1029 clusters, we found that they contained few labeled instances spread out in large areas. Therefore, we considered these clusters not precise, and our analysis was inconclusive for them. For singleton clusters, we fall back to checking local robustness using previous techniques [10]. These stand-alone points serve to identify portions of the input space which require more training data and are thus potentially more vulnerable to adversarial perturbations.
Amongst the remaining 5116 clusters, we randomly picked 210 clusters to illustrate our technique. These clusters contain 659,315 labeled inputs (24% of the total inputs with known labels). For each region corresponding to the respective clusters, we applied DeepSafe to check Eq. 2 for every label. The distance metric used was L1, since L2 cannot be handled by Reluplex (see Sect. 3, which explains why this is still sound). The results are presented in Tables 1 and 2. The min radius in Table 1 refers to the average minimum radius around the centroid
of each region for which the safety guarantee applies (averaged over the total
number of regions for that safety type). The # queries refers to the number of
times the solver had to be invoked until an UNSAT was obtained, averaged over
all the regions for that property.
DeepSafe was able to identify 125 regions which are completely safe, i.e.,
the network yields a label consistent with the neighboring labeled inputs within
the region. A further 52 regions are targeted safe: the network is safe against
misclassifying inputs to certain labels. For instance, the inputs within region 6138
(Table 2), with expected label 0 (COC), were safe against misclassification only to
labels 1 (Weak Right) and 2 (Weak Left); the solver timed out without returning
any result for the remaining labels. For 33 clusters, the analysis timed out without
returning a concrete result for any label. A time-out does not allow us to provide a
proof for these regions, although the likely answer is safe (generally, solvers take
much longer when there is no solution).
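The query loop behind these per-label results can be sketched as follows (a hedged
sketch: find_misclassification is a hypothetical stand-in for a Reluplex query
asking whether any input within radius r of the centroid is assigned the target
label, and the radius-shrinking heuristic is ours; it also illustrates why # queries
counts solver invocations until an UNSAT, a time-out, or a vanishing radius):

    import numpy as np

    def check_region_safety(solver, centroid, radius, expected, all_labels):
        results = {}
        for target in all_labels:
            if target == expected:
                continue
            r, queries = radius, 0
            while r > 1e-6:
                queries += 1
                try:
                    cex = solver.find_misclassification(centroid, r, expected, target)
                except TimeoutError:
                    results[target] = ("time out", r, queries)
                    break
                if cex is None:
                    # UNSAT: no input within r flips to the target label, so
                    # the region is (targeted) safe with respect to it.
                    results[target] = ("safe", r, queries)
                    break
                # SAT: shrink the radius to just below the counterexample.
                r = 0.9 * float(np.linalg.norm(np.asarray(cex) - centroid))
            else:
                results[target] = ("unsafe", r, queries)
        return results  # completely safe iff every entry is "safe"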

4.2 MNIST Image Dataset

The MNIST database is a large collection of handwritten digits that is commonly
used for training various image processing systems [12]. The dataset has 60,000
training input images, each characterized by 784 attributes and belonging to
one of 10 labels. We used a network comprising 3 layers, each with 10 ReLU
activation nodes. Clustering was applied using the L1 distance metric. It yielded
6654 clusters with more than one input and 5681 single-input clusters. The
clustering took 10 hours. A separate verification process was spawned for each
cluster, with a time-out of 12 hours.
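For reference, one plausible reading of this architecture can be sketched in
PyTorch (a hypothetical reconstruction, not the authors' trained model; we read
"3 layers" as three hidden layers of 10 ReLU nodes between the 784 input
attributes and the 10 output labels):

    import torch.nn as nn

    # 784 input attributes -> three hidden layers of 10 ReLU nodes each
    # -> scores for the 10 output labels.
    mnist_net = nn.Sequential(
        nn.Linear(784, 10), nn.ReLU(),
        nn.Linear(10, 10), nn.ReLU(),
        nn.Linear(10, 10), nn.ReLU(),
        nn.Linear(10, 10),
    )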

Table 3. Summary of the analysis for the MNIST network for 80 clusters

Property        # clusters   Min radius   Time (hours)   # queries
Safe            7            2.46         11.27          2.85
Targeted safe   63           5.19         11.02          4.87
Time out        10           NA           12             NA

For the singleton clusters, as is the case with ACAS Xu, we performed local
robustness checking as in previous approaches.
Table 3 summarizes the results for the 80 clusters that we selected for
evaluation. In past studies, the MNIST network has been shown to be extremely
vulnerable to misclassification under adversarial perturbations, even with
state-of-the-art networks [3]. Therefore, as expected, SAT solutions are easy to
find, and they were discovered very fast (within a minute). However, proving
safety is very time-consuming; the verification time is much higher than for the
ACAS Xu application, as it is mainly impacted by the large number of input
variables (784 attributes). We would like to highlight that our work is the first to
successfully identify safety regions for MNIST, even on a fairly vulnerable
network.
For 7 clusters, the solver returned UNSAT for all labels within 12 hours. For 30
clusters, the solver returned UNSAT only for a few labels and timed out before
returning any solution for the other labels; these have been included in the
targeted safe property in the table. Additionally, based on the nature of this
domain, we consider it safe to assume that if the solver does not return a SAT
solution for a label within 10 hours, then the region is safe w.r.t. that label, even
if unsatisfiability is not proved within this time. This happened to be the case for
33 clusters, where the solver could not find a solution for a specific target label
despite executing for more than 10 hours; these have been included in the
targeted safe type as well. For the 10 remaining clusters, the solver kept finding
adversarial examples despite iterative reductions of the radius, and the time-out
occurred before the radius reduced to 0. These have been included as time out in
the table, since we cannot determine for sure whether the region should be
marked unsafe for the specific labels.
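These bucketing rules can be summarized in a short sketch (hypothetical
bookkeeping, not the authors' code: per_label maps each target label to a
(verdict, hours) pair, where verdict is "unsat", "sat" if counterexamples were
still being found when the budget expired, or "timeout" if the solver returned
nothing at all):

    def categorize_cluster(per_label, safe_after_hours=10):
        verdicts = list(per_label.values())
        # Completely safe: the solver proved UNSAT for every target label.
        if all(v == "unsat" for v, _ in verdicts):
            return "safe"
        # Targeted safe: UNSAT for some labels, or (by the domain-specific
        # assumption above) no SAT solution within safe_after_hours.
        if any(v == "unsat" or (v == "timeout" and h >= safe_after_hours)
               for v, h in verdicts):
            return "targeted safe"
        return "time out"  # nothing settled; cannot mark the region unsafe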

5 Discussion

We compared DeepSafe with a method of randomly choosing inputs with known
labels and checking for local adversarial robustness using previous work [10].
This technique searches for inputs around a given fixed input by varying each
input variable (dimension) in the range [fixedvalue − ε, fixedvalue + ε] (L∞
distance metric). It checks whether there exists any input in this range for which
the network assigns a higher score to any label other than that of the fixed input.
The algorithm starts with a standard epsilon value of 0.1 and iteratively reduces
it until UNSAT is obtained or the value reaches 0.01. We chose 210 random
points for ACAS Xu and 80 random points for MNIST, respectively, in
line with the number of regions that we checked with DeepSafe.
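The baseline procedure can be sketched as follows (a minimal sketch;
find_misclassification_linf is a hypothetical stand-in for the solver query
of [10] over the L∞ box, and the shrinking factor is ours, since the text only
states that epsilon is iteratively reduced):

    def local_robustness_baseline(solver, x, label, eps=0.1, eps_min=0.01, step=0.5):
        # Vary every dimension within [x - eps, x + eps] (an L-infinity box) and
        # ask whether any input there scores higher on a label other than `label`.
        while eps >= eps_min:
            cex = solver.find_misclassification_linf(x, eps, label)
            if cex is None:
                return eps   # UNSAT: locally robust within this eps-box
            eps *= step      # counterexample found: shrink the box and retry
        return None          # no robust box found down to eps_min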
We found that for MNIST, local robustness checking found no safe regions
around any of the 80 points, whereas DeepSafe found 7 safe regions and 63
targeted safe regions. For ACAS Xu, this technique yielded only 62 completely
safe regions, compared to the 125 safe regions found using DeepSafe. This
experiment shows that the choice of input points and the delta around these
points play an important role in effective adversarial robustness
checking. We also looked at the validity of the adversarial examples generated
by DeepSafe. If an adversarial example is invalid or spurious, it indicates that
the expected label is incorrect for that input and that the label generated by the
network is in fact correct. During our analysis for ACAS Xu we found adversarial
examples, which were validated by domain experts. The adversarial cases were
found to be valid, albeit not of high criticality (see Fig. 3). The adversarial
examples for MNIST were converted to images and manually verified to be valid
(see Fig. 4).

Fig. 3. Inputs highlighted in light blue are misclassified as Strong Right instead of
COC (left). Inputs highlighted in light blue are misclassified as Strong Right instead
of Strong Left (right).

Fig. 4. Images of 1 misclassified as 8, 4, and 3
There could be scenarios where both the region and the network agree on
the labels for all inputs, and yet this still might not be the desired behavior. This
would impact the validity of the safety guarantees provided by DeepSafe. We
addressed this issue by validating the safety regions for ACAS Xu with the
domain experts. A mismatch of labels for the centroid of a region does potentially
indicate an imprecise oracle. However, we found that the number of such regions
is not high (1029 out of 6145 clusters for ACAS Xu).

6 Related Work

The vulnerability of neural networks to adversarial perturbations was first
discovered by Szegedy et al. in 2013 [17]. They model the problem of finding an
adversarial example as a constrained minimization problem. Goodfellow et al. [6]
introduced the Fast Gradient Sign Method for crafting adversarial perturbations
using the derivative of the model's loss function with respect to the input feature
vector. The Jacobian-based Saliency Map Attack (JSMA) [14] is a method for
targeted misclassification that exploits the forward derivative of an NN to find an
adversarial perturbation that forces the model to misclassify into a specific
target class.
Carlini and Wagner [3] recently proposed an approach that could not be
resisted by state-of-the-art networks such as those using defensive distillation.