Shuvendu K. Lahiri
Chao Wang (Eds.)
LNCS 11138
Automated Technology
for Verification and Analysis
16th International Symposium, ATVA 2018
Los Angeles, CA, USA, October 7–10, 2018
Proceedings
Lecture Notes in Computer Science 11138
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zurich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology Madras, Chennai, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7408
Shuvendu K. Lahiri Chao Wang (Eds.)
Editors

Shuvendu K. Lahiri
Microsoft Research, Redmond, WA, USA

Chao Wang
University of Southern California, Los Angeles, CA, USA
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
This volume contains the papers presented at the 16th International Symposium on
Automated Technology for Verification and Analysis (ATVA 2018) held during
October 7–10, 2018, in Los Angeles, California, USA.
ATVA is a series of symposia dedicated to the promotion of research on theoretical
and practical aspects of automated analysis, verification, and synthesis by providing a
forum for interaction between the regional and the international research communities
and industry in the field. Previous events were held in Taipei (2003–2005), Beijing
(2006), Tokyo (2007), Seoul (2008), Macao (2009), Singapore (2010), Taipei (2011),
Thiruvananthapuram (2012), Hanoi (2013), Sydney (2014), Shanghai (2015), Chiba
(2016), and Pune (2017).
ATVA 2018 received 82 high-quality paper submissions, each of which received
three reviews on average. After careful review, the Program Committee accepted 27
regular papers and six tool papers. The evaluation and selection process involved
thorough discussions among members of the Program Committee through the
EasyChair conference management system, before reaching a consensus on the final
decisions.
To complement the contributed papers, we included in the program three keynote
talks and tutorials given by Nikolaj Bjørner (Microsoft Research, USA), Corina
Păsăreanu (NASA Ames Research Center, USA), and Sanjit Seshia (University of
California, Berkeley, USA), resulting in an exceptionally strong technical program.
We would like to acknowledge the contributions that made ATVA 2018 a suc-
cessful event. First, we thank the authors of all submitted papers and we hope that they
continue to submit their high-quality work to ATVA in the future. Second, we thank the
Program Committee members and external reviewers for their rigorous evaluation
of the submitted papers. Third, we thank the keynote speakers for enriching the pro-
gram by presenting their distinguished research. Finally, we thank Microsoft and
Springer for sponsoring ATVA 2018.
We sincerely hope that the readers find the ATVA 2018 proceedings informative
and rewarding.
Steering Committee
E. Allen Emerson University of Texas, Austin, USA
Teruo Higashino Osaka University, Japan
Oscar H. Ibarra University of California, Santa Barbara, USA
Insup Lee University of Pennsylvania, USA
Doron A. Peled Bar Ilan University, Israel
Farn Wang National Taiwan University, Taiwan
Hsu-Chun Yen National Taiwan University, Taiwan
Program Committee
Aws Albarghouthi University of Wisconsin-Madison, USA
Cyrille Valentin Artho KTH Royal Institute of Technology, Sweden
Gogul Balakrishnan Google, USA
Roderick Bloem Graz University of Technology, Austria
Tevfik Bultan University of California, Santa Barbara, USA
Pavol Cerny University of Colorado at Boulder, USA
Sagar Chaki Mentor Graphics, USA
Deepak D’Souza Indian Institute of Science, Bangalore, India
Jyotirmoy Deshmukh University of Southern California, USA
Constantin Enea University Paris Diderot, France
Grigory Fedyukovich Princeton University, USA
Masahiro Fujita The University of Tokyo, Japan
Sicun Gao University of California, San Diego, USA
Arie Gurfinkel University of Waterloo, Canada
Fei He Tsinghua University, China
Alan J. Hu The University of British Columbia, Canada
Joxan Jaffar National University of Singapore, Singapore
Akash Lal Microsoft, India
Axel Legay IRISA/Inria, Rennes, France
Yang Liu Nanyang Technological University, Singapore
Zhiming Liu Southwest University, China
K. Narayan Kumar Chennai Mathematical Institute, India
Doron Peled Bar Ilan University, Israel
Xiaokang Qiu Purdue University, USA
Giles Reger The University of Manchester, UK
Sandeep Shukla Indian Institute of Technology Kanpur, India
Oleg Sokolsky University of Pennsylvania, USA
Armando Solar-Lezama MIT, USA
Neeraj Suri TU Darmstadt, Germany
Azure to the power of Z3: Cloud providers are increasingly embracing network
verification for managing complex datacenter network infrastructure. Microsoft’s
Azure cloud infrastructure integrates the SecGuru tool, which leverages the Z3 SMT
solver, for assuring that the network is configured to preserve desired intent. SecGuru
statically validates correctness of access-control policies and routing tables of hundreds
of thousands of network devices. For a structured network such as for Microsoft Azure
data-centers, the intent for routing tables and access-control policies can be automat-
ically derived from network architecture and metadata about address ranges hosted in
the datacenter. We leverage this aspect to perform local checks on a per-router basis.
These local checks together assure reachability invariants for availability and perfor-
mance. To make the service truly scalable, while using modest resources, SecGuru
integrates a set of domain-specific optimizations that exploit properties of the config-
urations it needs to handle. Our experiences exemplify the integration of general-purpose
verification technologies, in this case bit-vector solving, for a specific domain: the
overall methodology available through SMT formalisms lowers the barrier of entry for
capturing and checking contracts. It also alleviates the initial need for writing custom
solvers. However, each domain reveals specific structure that produces new insights in
the quest for scalable verification: for checking reachability properties in networks,
methods for capturing symmetries in networks and header spaces can speed up veri-
fication by several orders of magnitude; for local checks on data-center routers we
exploit common patterns in configurations to reduce the time it takes to check a contract
from a few seconds to a few milliseconds.
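As a minimal illustration of such a local per-router check (a sketch without an SMT solver; the rule format, function names, and addresses here are invented for illustration and are not SecGuru's actual interface), consider validating a router's access-control list against intent derived from address-range metadata:

```python
import ipaddress

def acl_permits(acl, addr):
    """First-match semantics: return the action of the first rule whose
    prefix contains addr; default deny."""
    for action, prefix in acl:
        if addr in prefix:
            return action == "permit"
    return False

def check_contract(acl, must_allow, must_block):
    """Local per-router check: every address the intent says must be
    reachable is permitted, and every forbidden address is denied.
    Returns the list of violations (empty list = contract holds)."""
    violations = []
    for addr in must_allow:
        if not acl_permits(acl, addr):
            violations.append(("should-allow", addr))
    for addr in must_block:
        if acl_permits(acl, addr):
            violations.append(("should-block", addr))
    return violations

acl = [
    ("deny",   ipaddress.ip_network("10.0.8.0/24")),
    ("permit", ipaddress.ip_network("10.0.0.0/16")),
]
ok = check_contract(
    acl,
    must_allow=[ipaddress.ip_address("10.0.1.7")],
    must_block=[ipaddress.ip_address("10.0.8.9"),
                ipaddress.ip_address("192.168.1.1")],
)
```

An SMT-based check generalizes this from individual addresses to symbolic address ranges, which is where bit-vector solving earns its keep.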
Z3 to the power of Azure: Applications that rely on constraint solving may be in a
fortunate situation where efficient solving technologies are adequately available. Sev-
eral tools building on Z3 rely on this being the common case scenario. Then useful
feedback can be produced within the attention span of a person performing program
verification, and then the available compute time on a machine or a cluster is sufficient
to complete thousands of small checks. As long as this fortunate situation holds, search
techniques for SMT are a solved problem. Yet, in spite of rumors to the contrary, SAT
Invited Papers
Regular Papers
A Fragment of Linear Temporal Logic for Universal Very Weak Automata . . . 335
Keerthi Adabala and Rüdiger Ehlers
Tool Papers
1 Introduction
Machine learning techniques such as deep neural networks (NN) are increasingly
used in a variety of applications, achieving impressive results in many domains,
and matching the cognitive ability of humans in complex tasks. In this paper, we
study a common use of NN as classifiers that take in complex, high dimensional
input, pass it through multiple layers of transformations, and finally assign to it
a specific output label or class. These networks can be used to perform pattern
analysis, image classification, or speech and audio recognition, and are now being
c Springer Nature Switzerland AG 2018
S. K. Lahiri and C. Wang (Eds.): ATVA 2018, LNCS 11138, pp. 3–19, 2018.
https://doi.org/10.1007/978-3-030-01090-4_1
this would require coming up with minimally acceptable distance δ values for
all these checks, which can vary greatly between different input points. Furthermore,
the check will likely fail (and produce invalid adversarial examples) for the
input points that are close to the legitimate boundaries between different labels.
The concept of global robustness is defined in [10,11], which could be checked
by Reluplex (we give the formal definition in Sect. 6). Whereas local robustness
is measured for a specific input point, global robustness applies to all inputs
simultaneously. The check requires encoding two side-by-side copies of the NN
in question, operating on separate input variables, and checking that the outputs
are similar. Global adversarial robustness is harder to prove than local robustness
for two reasons: (1) encoding two copies of the network results in twice as many
network nodes to analyze and (2) the problem is not restricted to a small domain,
and therefore cannot take advantage of Reluplex's heuristics, which work best
when the check is restricted to small neighborhoods. Furthermore, it requires
more manual tuning, as the user needs to provide minimally acceptable values
for two parameters (δ and ε). As a result, this global check could not be applied
in practice to any realistic network [10].
Thus the problem of assessing the overall robustness of a network remains
open.
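To make the definition concrete, the following toy sketch illustrates what a global-robustness counterexample is: instead of encoding two side-by-side network copies for a solver, it randomly searches a tiny hand-written ReLU network for a pair of δ-close inputs whose outputs differ by more than ε (all names, weights, and parameters here are invented for illustration):

```python
import random

def relu_net(x, W1, b1, W2, b2):
    """Tiny fully connected ReLU network: W2 * relu(W1 * x + b1) + b2."""
    h = [max(0.0, sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return [sum(w * hi for w, hi in zip(row, h)) + b
            for row, b in zip(W2, b2)]

def global_robustness_witness(net, dim, delta, eps, trials=10000, seed=0):
    """Search for x1, x2 with coordinate-wise distance <= delta whose
    outputs differ by more than eps: a counterexample to global
    robustness. (The real check encodes two copies of the network as a
    single solver query; random search can only find witnesses, never
    prove their absence.)"""
    rng = random.Random(seed)
    for _ in range(trials):
        x1 = [rng.uniform(-1, 1) for _ in range(dim)]
        x2 = [xi + rng.uniform(-delta, delta) for xi in x1]
        y1, y2 = net(x1), net(x2)
        if max(abs(a - b) for a, b in zip(y1, y2)) > eps:
            return x1, x2
    return None

# A steep network is easy to catch: its output moves 100x its input.
steep = lambda x: relu_net(x, W1=[[100.0, 0.0]], b1=[0.0],
                           W2=[[1.0]], b2=[0.0])
witness = global_robustness_witness(steep, dim=2, delta=0.1, eps=1.0)
```

The solver-based formulation replaces the random sampling with an exhaustive symbolic search, which is exactly what doubles the encoding size.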
radius of the region, and repeat the check until the region is proved to be safe.
If a time-out occurs during this process, we cannot provide any guarantee for
that region (although the likely answer is that it is safe). The result of the
analysis is a set of well-defined input regions in which the NN is guaranteed to be
robust and a set of adversarial inputs that may be of interest to the
developers of the NN, and can be used in retraining.
Thus, DeepSafe decomposes the global robustness requirements into a set
of local robustness checks, one for each region, which can be solved efficiently
with a tool like Reluplex. These checks do not require two network copies and are
restricted to small input domains, as defined by the regions. The distance used for
the local checks is not picked arbitrarily as in previous approaches; instead it
is the radius computed for each region, backed by the labeled data. Furthermore,
regions define natural decision boundaries in the input domain, thereby
increasing the chances of producing valid proofs or of finding valid adversarial
examples.
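One plausible reading of this per-region loop can be sketched as follows, with a stand-in `check_robust` oracle in place of the actual Reluplex query (the function name, shrink factor, and cutoff are illustrative assumptions, not taken from the paper):

```python
def verify_region(center, radius, label, check_robust,
                  shrink=0.8, min_radius=1e-3):
    """Shrink the region's radius until the local robustness check
    succeeds. check_robust(center, radius, label) stands in for a
    Reluplex query: True = proved safe, False = counterexample found,
    None = timeout (no guarantee for this region)."""
    queries = 0
    while radius >= min_radius:
        queries += 1
        verdict = check_robust(center, radius, label)
        if verdict is True:
            return ("safe", radius, queries)
        if verdict is None:
            return ("unknown", radius, queries)
        radius *= shrink   # counterexample: retry on a smaller region
    return ("unknown", radius, queries)

# Toy oracle: the region is provably safe once its radius drops below 0.05.
status, final_radius, queries = verify_region(
    center=[0.0, 0.0], radius=0.16, label=0,
    check_robust=lambda c, r, l: r < 0.05)
```

The counterexamples returned along the way are exactly the candidate adversarial inputs mentioned above.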
We introduce the concept of targeted robustness, in line with targeted attacks
[5,6,17]. Targeted robustness ensures that there are no inputs in a region that
are mapped by the NN to a specific incorrect label. Therefore, even if in that
region the NN is not completely robust against adversarial perturbations, we can
give guarantees that it is robust against specific targeted attacks. As a simple
example, consider a NN used for perception in an autonomous car that classifies
the images of a traffic light as red, green or yellow. Even if it is not the case that
the NN never misclassifies a red light, we may be willing to settle for a guarantee
that it never misclassifies a red light as a green light—leaving us with the more
tolerable case in which a red light is misclassified as yellow.
We implemented a prototype of DeepSafe and evaluated it on a neural net-
work implementation of a controller for the next-generation Airborne Collision
Avoidance System for unmanned aircraft (ACAS Xu) and on the well known
MNIST dataset. For these networks, our approach identified regions which were
completely safe, regions which were safe with respect to some target labels, and
also adversarial perturbations that were of interest to the network’s developers.
2 Background
2.1 Neural Networks
Neural networks and deep belief networks have been used in a variety of appli-
cations, including pattern analysis, image classification, speech and audio recog-
nition, and perception modules in self-driving cars. Typically, the objects in
such domains are high dimensional and the number of classes that the objects
need to be classified to is also high—and so the classification functions tend
to be highly non-linear over the input space. Deep learning operates with the
underlying rationale that groups of input parameters can be merged to derive
higher-level abstract features, which enable the discovery of a more linear and
continuous classification function. Neural networks are often used as classifiers,
meaning they assign to each input an output label/class. Such a neural network
2.3 Clustering
Clustering is an approach used to divide a population of data-points into sets
called clusters, such that the data-points in each cluster are more similar (with
respect to some metric) to other points in the same cluster than to the rest of
the data-points. Here we focus on a particularly popular clustering algorithm
called kMeans [9] (although our approach could be implemented using different
clustering algorithms as well). Given a set of n data-points {x1 , . . . , xn } and k as
the desired number of clusters, the algorithm partitions the points into k clusters,
such that the variance (also referred to as “within cluster sum of squares”)
within each cluster is minimal. The metric used to calculate the distance between
points is customizable, and is typically the Euclidean distance (L2 norm) or
the Manhattan distance (L1 norm). For points x1 = (x1_1, . . . , x1_n) and
x2 = (x2_1, . . . , x2_n) these are defined as:

  ‖x1 − x2‖_L1 = Σ_{i=1}^{n} |x1_i − x2_i|,   ‖x1 − x2‖_L2 = √( Σ_{i=1}^{n} (x1_i − x2_i)^2 )   (1)
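A minimal sketch of these two metrics and of a Lloyd-style kMeans in pure Python (deterministic seeding for illustration; the paper itself uses an off-the-shelf kMeans implementation, and the helper names here are invented):

```python
def l1(x, y):
    """Manhattan distance (Eq. 1, left)."""
    return sum(abs(a - b) for a, b in zip(x, y))

def l2(x, y):
    """Euclidean distance (Eq. 1, right)."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def kmeans(points, k, dist=l2, iters=20):
    """Partition points into k clusters, minimizing within-cluster
    distance to the centroid. Seeds with the first k points for
    determinism. Note: the coordinate-wise mean is the exact minimizer
    only for L2; with L1 a median-based update would be the proper
    variant (k-medians)."""
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(coord) / len(members)
                                for coord in zip(*members)]
    return centroids, clusters

points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(points, k=2)
```

On this toy data the two obvious groups of three points each are recovered after a couple of iterations.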
Fig. 2. Original clusters with k = 2 (left), Clusters with modified kMeans (right)
Distance Metrics. We note that the similarity of the inputs within a cluster
is determined by the distance metric used for calculating the proximity of the
inputs. Therefore, it is important to choose a distance metric that generates
acceptable levels of similarity for the domain under consideration. We assume
that inputs are representable as vectors of numeric attributes and hence can
be considered as points in Euclidean space. The Euclidean distance (Eq. 1) is
a commonly used metric for measuring the proximity of points in Euclidean
space. However, recent studies indicate that the usefulness of Euclidean distance
in determining the proximity between points diminishes as the dimensionality
increases [1]. The Manhattan distance (Eq. 1) has been found to capture prox-
imity more accurately in high dimensions. Therefore, in our experiments, we set
the distance metric depending on the dimensionality of the input space (Sect. 4).
The clustering algorithm aims to produce neighborhoods of consistently-
labeled inputs. The underlying assumption of our approach is that each such
cluster will define a safe region, in which all inputs (and not just the inputs
used to form the cluster) should be labeled consistently. We define the regions to
have the centroid cen computed by kMeans and the radius to be the average dis-
tance r of any instance from the centroid. Note that we use the average instead
of maximum distance. The reason is that kMeans typically generates clusters
that are convex, compact and spherically shaped. However, the ideal boundaries
of regions encompassing consistently labeled points need not conform to this.
DeepSafe: A Data-Driven Approach 11
Further, while inputs deep within a region are expected to be labeled consis-
tently, the points that are close to the boundaries may have different labels. We
therefore shrink the radius to increase the chances of obtaining a region that is
indeed safe.
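A sketch of this region construction, assuming clusters are lists of numeric points (the helper name `region_of` is invented for illustration):

```python
def region_of(cluster, dist):
    """Summarize a cluster as a region (centroid, radius). The radius is
    the *average* member distance from the centroid rather than the
    maximum, deliberately shrinking the region away from the noisy
    cluster boundary, as described above."""
    centroid = [sum(coord) / len(cluster) for coord in zip(*cluster)]
    radius = sum(dist(p, centroid) for p in cluster) / len(cluster)
    return centroid, radius

l1 = lambda x, y: sum(abs(a - b) for a, b in zip(x, y))
cen, r = region_of([(0, 0), (2, 0), (0, 2), (2, 2)], dist=l1)
# For this symmetric cluster: cen == [1.0, 1.0], r == 2.0
```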
  ‖x − cen‖ ≤ r ⇒ label(x) = l        (Hypothesis 1)
If this hypothesis holds, it follows that any point x′ in the region which is assigned
a different label F(x′) ≠ l by the network constitutes an adversarial perturbation.
To illustrate this on our example: a NN may represent an input which is
close to other stars in the input layer as a point further away from them in the
inner layers. Therefore, it may incorrectly classify it as a circle.
We use Reluplex [10] to check the satisfiability of a formula representing the
negation of Hypothesis 1 for every cluster and every label. The encoding is shown
in Eq. 2.
Here, x represents an input point, and cen, r and l represent the centroid, radius
and label of the region, respectively. l′ represents a specific label, different from l.
Reluplex models the network without the final softmax layer, and so the outputs
of the model correspond to the outputs of the second-to-last layer of the NN [REF].
This layer consists of as many nodes as the number of labels and the output value
for each node corresponds to the level of confidence that the network assigns to
that label for the given input. We use score(x, y) to denote the level of confidence
assigned to label y at point x.
The above formula holds for a given l′ if and only if there exists a point x′
within distance at most r from cen for which l′ is assigned higher confidence than
l. Consequently, if the property does not hold, then for every x′ within the region,
the score of l is higher than that of l′. This ensures targeted robustness of the network
for label l′: the network is guaranteed to never map any input within the region to
the target label l′. The formula in Eq. 2 is checked for every possible l′ ∈ L − {l},
where L denotes the set of all possible labels. If the query is unsatisfiable for all
l′, it ensures complete robustness of the network for the region; i.e., the network is
guaranteed to map all the inputs in the region to the same label as the centroid.
This can be expressed formally as follows:
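The per-label checking loop described above can be sketched as follows, with `query` standing in for the Reluplex call on the negated hypothesis (the function names, verdict strings, and oracle are illustrative assumptions, not the paper's actual interface):

```python
def classify_region(cen, r, true_label, labels, query):
    """Run the Eq. 2 check for every incorrect label l2 and combine the
    verdicts. query(cen, r, l2) stands in for the Reluplex call:
    "UNSAT" = no point in the region maps to l2 (targeted robustness),
    "SAT" = adversarial example found, "TIMEOUT" = no answer."""
    unsafe, unknown = [], []
    for l2 in labels:
        if l2 == true_label:
            continue
        verdict = query(cen, r, l2)
        if verdict == "SAT":
            unsafe.append(l2)
        elif verdict == "TIMEOUT":
            unknown.append(l2)
    if not unsafe and not unknown:
        return ("completely-safe", [])      # UNSAT for every target label
    if unsafe:
        return ("unsafe-for", unsafe)       # concrete adversarial labels
    return ("targeted-safe-only", unknown)  # safe except where we timed out

# Mimics a region like cluster 6138: provably safe only against labels 1, 2.
oracle = lambda cen, r, l2: "UNSAT" if l2 in (1, 2) else "TIMEOUT"
verdict = classify_region([0.0], 0.089, true_label=0,
                          labels=[0, 1, 2, 3, 4], query=oracle)
```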
4 Case Studies
We implemented DeepSafe using MATLAB R2017a for the clustering algorithm
and Reluplex v1.0 for verification. The runs were dispatched on an 8-core, 64-GB
server running Ubuntu 16.04. We evaluated DeepSafe on two case studies. The
first network is part of a real-world controller for the next-generation Airborne
Collision Avoidance System for unmanned aircraft (ACAS Xu), a highly safety-
critical system. The second network is a digit classifier over the popular MNIST
image dataset.
Table 1. Summary of the analysis for the ACAS Xu network for 210 clusters

Cluster#         Safe for label   Radius   # queries   Time (min)   Slice (Y/N)
5282 (label: 0)  1                0.04     1           5.45         N
                 2                0.04     1           3.91         N
                 3                0.04     1           3.57         N
                 4                0.04     1           4.01         N
1783 (label: 0)  1                0.16     4           1.28         Y
                 2                0.17     1           279          N
                 3                0.17     1           236          N
                 4                0.17     1           223          N
2072 (label: 1)  0                0.06     1           11.51        N
                 2                0.014    9           0.98         N
                 3                0.011    7           0.71         N
                 4                0.012    5           0.58         N
6138 (label: 0)  1                0.089    9           103.2        N
                 2                0.11     4           2.86         N

4.1 ACAS Xu
executed on these inputs and the output advisories (labels) were verified. These
were considered as the inputs with known labels for our experiments.
The label-guided clustering algorithm was applied on the inputs using the
L2 distance metric. Clustering yielded 6145 clusters with more than one input
and 321 single-input clusters. The clustering took 7 h. For each cluster we com-
puted a region, characterized by a centroid (computed by kMeans), radius (aver-
age distance of every cluster instance from the centroid), and the expected label
(the label of all the cluster instances).
We first evaluated the network on all the centroids as they are considered
representative of the entire cluster and should ideally have the expected label.
The network assigned the expected label for the centroids of 5116 clusters (83%
of the total number of clusters). The remaining 1029 clusters contained few labeled
instances spread out over large areas; we therefore considered these clusters
imprecise, and our analysis was inconclusive for them.
For singleton clusters, we fall back to checking local robustness using previous
techniques [10]. These stand-alone points serve to identify portions of the input
space which require more training data and are thus potentially more vulnerable to
adversarial perturbations.
Amongst the remaining 5116 clusters, we randomly picked 210 clusters to
illustrate our technique. These clusters contain 659315 labeled inputs (24% of the
total inputs with known labels). For each region corresponding to the respective
clusters, we applied DeepSafe to check Eq. 2 for every label. The distance
metric used was L1, since L2 cannot be handled by Reluplex (see Sect. 3, which
explains why this is still safe). The results are presented in Tables 1 and 2. The
min radius in Table 1 refers to the average minimum radius around the centroid
of each region for which the safety guarantee applies (averaged over the total
number of regions for that safety type). The # queries refers to the number of
times the solver had to be invoked until an UNSAT was obtained, averaged over
all the regions for that property.
DeepSafe was able to identify 125 regions which are completely safe, i.e.,
the network yields a label consistent with the neighboring labeled inputs within
the region. 52 regions are targeted safe, i.e., the network is safe against misclassifying
inputs to certain labels. For instance, the inputs within region 6138 (Table 2) with
an expected label 0 (COC), were safe against misclassification only to labels 1
(weak right) and 2 (weak left). The solver timed out without returning any result
for the remaining labels. The analysis timed out without returning a concrete
result for any label for 33 clusters. A time-out does not allow us to provide a proof
for such regions, although the likely answer is that they are safe (generally, solvers
take much longer when there is no solution).
For the singleton clusters, as is the case with ACAS Xu, we performed local
robustness checking as in previous approaches.
Table 3 shows the summary of the results for the runs for 80 clusters that we
selected for evaluation. In past studies, the MNIST network has been shown to
be extremely vulnerable to misclassification on adversarial perturbations even
5 Discussion
Fig. 3. Inputs highlighted in light blue are mis-classified as Strong Right instead of
COC (left). Inputs highlighted in light blue are mis-classified as Strong Right instead
of Strong Left (right).
found to be valid, albeit not of high criticality. The adversarial examples for
MNIST were converted to images and manually verified to be valid (see Fig. 4).
There could be scenarios where both the region and the network agree on
the labels for all inputs, and yet this might not be the desired behavior. This
would impact the validity of the safety guarantees provided by DeepSafe. We
addressed this issue by validating the safety regions for ACAS Xu with the
domain experts. A mismatch of labels for the centroid of a region does potentially
indicate an imprecise oracle. However, we found that the number of such
regions is not high (1029 out of 6145 clusters for ACAS Xu).
6 Related Work
T HUS Lady Jane’s new life, in the quaint old Rue des Bons
Enfants, began under quite pleasant auspices. From the moment
that Pepsie, with a silent but not unrecorded vow, constituted herself
the champion and guardian angel of the lonely little stranger, she
was surrounded by friends, and hedged in with the most loyal
affection.
Because Pepsie loved the child, the good Madelon loved her also,
and although she saw her but seldom, being obliged to leave home
early and return late, she usually left her some substantial token of
good will, in the shape of cakes or pralines, or some odd little toy
that she picked up on Bourbon Street on her way to and from her
stand.
Madelon was a pleasant-faced, handsome woman, always clean
and always cheery; no matter how hard the day had been for her,
whether hot or cold, rainy or dusty, she returned home at night as
fresh and cheerful as when she went out in the morning. Pepsie
adored her mother, and no two human beings were ever happier
than they when the day’s work was over, and they sat down together
to their little supper.
Then Pepsie recounted to her mother everything that had
happened during the day, or at least everything that had come within
her line of vision as she sat at her window; and Madelon in turn
would tell her of all she had heard out in her world, the world of the
Rue Bourbon, and after the advent of Lady Jane the child was a
constant theme of conversation between them. Her beauty, her
intelligence, her pretty manners, her charming little ways were a
continual wonder to the homely woman and girl, who had seen little
beyond their own sphere of life.
If Madelon was fortunate enough to get home early, she always
found Lady Jane with Pepsie, and the loving way with which the child
would spring to meet her, clinging to her neck and nestling to her
broad motherly bosom, showed how deeply she needed the
maternal affection so freely lavished upon her.
At first Madame Jozain affected to be a little averse to such a
close intimacy, and even went so far as to say to Madame
Fernandez, the tobacconist’s wife, who sat all day with her husband
in his little shop rolling cigarettes and selling lottery tickets, that she
did not like her niece to be much with the lame girl opposite, whose
mother was called “Bonne Praline.” Perhaps they were honest
people, and would do the child no harm; but a woman who was
never called madame, and who sat all day on the Rue Bourbon, was
likely to have the manners of the streets. And Lady Jane had never
been thrown with such people; she had been raised very carefully,
and she didn’t want her to lose her pretty manners.
Madame Fernandez agreed that Madelon was not over-refined,
and that Pepsie lacked the accomplishments of a young lady. “But
they are very honest,” she said, “and the girl has a generous heart,
and is so patient and cheerful; besides, Madelon has a sister who is
rich. Monsieur Paichoux, her sister’s husband, is very well off, a solid
man, with a large dairy business; and their daughter Marie, who just
graduated at the Sacred Heart, is very pretty, and is fiancée to a
young man of superior family, a son of Judge Guiot, and you know
who the Guiots are.”
Yes, madame knew. Her father, Pierre Bergeron, and Judge Guiot
had always been friends, and the families had visited in other days. If
that was the case, the Paichoux must be very respectable; and if
“Bonne Praline” was the sister-in-law of a Paichoux, and prospective
aunt-in-law to the son of a judge, there was no reason why she
should keep the child away; therefore she allowed her to go
whenever she wished, which was from the time she was out of bed
in the morning until it was quite dark at night.
Lady Jane shared Pepsie’s meals, and sat at the table with her,
learning to crack and shell pecans with such wonderful facility that
Pepsie’s task was accomplished some hours sooner, therefore she
had a good deal of time each day to devote to her little friend. And it
was very amusing to witness Pepsie’s motherly care for the child.
She bathed her, and brushed her long silken hair; she trimmed her
bang to the most becoming length; she dressed her with the greatest
taste, and tied her sash with the chic of a French milliner; she
examined the little pink nails and pearls of teeth to see if they were
perfectly clean, and she joined with Lady Jane in rebelling against
madame’s decree that the child should go barefoot while the weather
was warm. “All the little creoles did, and she was not going to buy
shoes for the child to knock out every day.” Therefore, when her
shoes were worn, Madelon bought her a neat little pair on the Rue
Bourbon, and Pepsie darned her stockings and sewed on buttons
and strings with the most exemplary patience. When madame
complained that, with all the business she had to attend to, the white
frocks were too much trouble and expense to keep clean, Tite
Souris, who was a fair laundress, begged that she might be allowed
to wash them, which she did with such good-will that Lady Jane was
always neat and dainty.
Gradually the sorrowful, neglected look disappeared from her
small face, and she became rosy and dimpled again, and as
contented and happy a child as ever was seen in Good Children
Street. Every one in the neighborhood knew her; the gracious,
beautiful little creature, with her blue heron, became one of the
sights of the quarter. She was a picture and a poem in one to the
homely, good-natured creoles, and everywhere she went she carried
sunshine with her.
Little Gex, a tiny, shrunken, bent Frenchman, who kept a small
fruit and vegetable stall just above Madelon’s, felt that the day had
been dark indeed when Lady Jane’s radiant little face did not illumine
his dingy quarters. How his old, dull eyes would brighten when he
heard her cheery voice, “Good morning, Mr. Gex; Tante Pauline”—or
Pepsie, as the case might be—“would like a nickel of apples, onions,
or carrots”; and the orange that was always given her for lagniappe
was received with a charming smile, and a “Thank you,” that went
straight to the old, withered heart.
Gex was a quiet, polite little man, who seldom held any
conversation with his customers beyond the simple requirements of
his business; and children, as a general thing, he detested, for the
reason that the ill-bred little imps in the neighborhood made him the
butt of their mischievous ridicule, for his appearance was droll in the
extreme: his small face was destitute of beard and as wrinkled as a
withered apple, and he usually wore a red handkerchief tied over his
bald head with the ends hanging under his chin; his dress consisted
of rather short and very wide trousers, a little jacket, and an apron
that reached nearly to his feet. This very quaint costume gave him a
nondescript appearance, which excited the mirth of the juvenile
population to such a degree that they did not always restrain it within
proper bounds. Therefore it was very seldom that a child entered his
den, and such a thing as one receiving lagniappe was quite unheard
of.
[Illustration: MR. GEX AT THE DOOR OF HIS SHOP]
All day long he sat on his small wooden chair behind the shelf
across his window, on which was laid in neat piles oranges, apples,
sweet potatoes, onions, cabbages, and even the odorous garlic; they
were always sound and clean, and for that reason, even if he did not
give lagniappe to small customers, he had a fair trade in the
neighborhood. And he was very neat and industrious. When he was
not engaged in preparing his vegetables, he was always tinkering at
something of interest to himself; he could mend china and glass,
clocks and jewelry, shoes and shirts; he washed and patched his
own wardrobe, and darned his own stockings. Often when a
customer came in he would push his spectacles upon his forehead,
lay down his stocking and needle, and deal out his cabbage and
carrots as unconcernedly as if he had been engaged in a more
manly occupation.
From some of the dingy corners of his den he had unearthed an
old chair, very stiff and high, and entirely destitute of a bottom; this
he cleaned and repaired by nailing across the frame an orange-box
cover decorated with a very bright picture, and one day he charmed
Lady Jane by asking her to sit down and eat her orange while he
mended his jacket.
She declined eating her orange, as she always shared it with
Pepsie, but accepted the invitation to be seated. Placing Tony to
forage on a basket of refuse vegetables, she climbed into the chair,
placed her little heels on the topmost rung, smoothed down her short
skirt, and, resting her elbows on her knees, leaned her rosy little
cheeks on her palms, and set herself to studying Gex seriously and
critically. At length, her curiosity overcoming her diffidence, she said
in a very polite tone, but with a little hesitation: “Mr. Gex, are you a
man or a woman?”
Gex, for the moment, was fairly startled out of himself, and,
perhaps for the first time in years, he threw back his head and
laughed heartily.
“Bon! bon! ’Tis good; ’tis vairy good. Vhy, my leetle lady, sometime
I don’t know myself; ’cause, you see, I have to be both the man and
the voman; but vhy in the vorld did you just ask me such a funny
question?”
“Because, Mr. Gex,” replied Lady Jane, very gravely, “I’ve thought
about it often. Because—men don’t sew, and wear aprons,—and—
women don’t wear trousers; so, you see, I couldn’t tell which you
were.”
“Oh, my foi!” and again Gex roared with laughter until a neighbor,
who was passing, thought he had gone crazy, and stopped to look at
him with wonder; but she only saw him leaning back, laughing with
all his might, while Lady Jane sat looking at him with a frowning,
flushed face, as if she was disgusted at his levity.
“I don’t know why you laugh so,” she said loftily, straightening up in
her chair, and regarding Gex as if he had disappointed her. “I think
it’s very bad for you to have no one to mend your clothes, and—and
to have to sew like a woman, if—if you’re a man.”
“Vhy, bless your leetle heart, so it is; but you see I am just one
poor, lonely creature, and it don’t make much difference vhether I’m
one or t’ other; nobody cares now.”
“I do,” returned Lady Jane brightly; “and I’m glad I know, because,
when Pepsie teaches me to sew, I’m going to mend your clothes, Mr.
Gex.”
“Vell, you are one leetle angel,” exclaimed Gex, quite overcome.
“Here, take another orange.”
“Oh, no; thank you. I’ve only bought one thing, and I can’t take two
lagniappes; that would be wrong. But I must go now.”
And, jumping down, she took Tony from his comfortable nest
among the cabbage-leaves, and with a polite good-by she darted
out, leaving the dingy little shop darker for her going.
For a long time after she went Gex sat looking thoughtfully at his
needlework. Then he sighed heavily, and muttered to himself: “If
Marie had lived! If she’d lived, I’d been more of a man.”
CHAPTER XI
THE VISIT TO THE PAICHOUX