Automated Technology
LNCS 9938
for Verification
and Analysis
14th International Symposium, ATVA 2016
Chiba, Japan, October 17–20, 2016
Proceedings
Lecture Notes in Computer Science 9938
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zurich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7408
Editors

Cyrille Artho
AIST
Osaka, Japan

Axel Legay
Inria Rennes
Rennes, France

Doron Peled
Bar Ilan University
Ramat Gan, Israel
This volume contains the papers presented at ATVA 2016, the 14th International
Symposium on Automated Technology for Verification and Analysis held during
October 17–20 in Chiba, Japan. The purpose of ATVA is to promote research on
theoretical and practical aspects of automated analysis, verification, and synthesis by
providing an international forum for interaction among researchers in academia and
industry.
ATVA attracted 82 submissions in response to the call for papers. Each submission
was assigned to at least four members of the Program Committee. The Program
Committee discussed the submissions electronically, judging them on their perceived
importance, originality, clarity, and appropriateness for the expected audience. The
Program Committee selected 31 papers for presentation, leading to an acceptance rate
of 38%.
The program of ATVA also included three invited talks and three invited tutorials,
given by Prof. Tevfik Bultan (University of California at Santa Barbara), Prof. Javier Esparza
(Technical University of Munich), and Prof. Masahiro Fujita (Fujita Laboratory and the
University of Tokyo).
The chairs would like to thank the authors for submitting their papers to ATVA
2016. We are grateful to the reviewers who contributed to nearly 320 informed and
detailed reports and discussions during the electronic Program Committee meeting. We
also sincerely thank the Steering Committee for their advice. Finally, we would like to
thank the local organizers, Prof. Mitsuharu Yamamoto and Prof. Yoshinori Tanabe,
who devoted a large amount of their time to the conference. ATVA received financial
help from the National Institute of Advanced Industrial Science and Technology
(AIST), Springer, Chiba Convention Bureau and International Center (CCB-IC), and
Chiba University.
Program Committee
Toshiaki Aoki, JAIST, Japan
Cyrille Artho, AIST, Japan
Christel Baier, Technical University of Dresden, Germany
Armin Biere, Johannes Kepler University, Austria
Tevfik Bultan, University of California at Santa Barbara, USA
Franck Cassez, Macquarie University, Australia
Krishnendu Chatterjee, Institute of Science and Technology (IST) Austria
Alessandro Cimatti, FBK-irst, Italy
Rance Cleaveland, University of Maryland, USA
Deepak D'Souza, Indian Institute of Science, Bangalore, India
Bernd Finkbeiner, Saarland University, Germany
Radu Grosu, Stony Brook University, USA
Klaus Havelund, Jet Propulsion Laboratory, California Institute of Technology, USA
Marieke Huisman, University of Twente, The Netherlands
Ralf Huuck, UNSW/Synopsys, Australia
Moonzoo Kim, KAIST, South Korea
Marta Kwiatkowska, University of Oxford, UK
Kim Larsen, Aalborg University, Denmark
Axel Legay, IRISA/Inria, Rennes, France
Tiziana Margaria, University of Limerick, Ireland
Madhavan Mukund, Chennai Mathematical Institute, India
Anca Muscholl, LaBRI, Université Bordeaux, France
Doron Peled, Bar-Ilan University, Israel
Andreas Podelski, University of Freiburg, Germany
Geguang Pu, East China Normal University, China
Sven Schewe, University of Liverpool, UK
Oleg Sokolsky, University of Pennsylvania, USA
Marielle Stoelinga, University of Twente, The Netherlands
Bow-Yaw Wang, Academia Sinica, Taiwan
Chao Wang, Virginia Tech, USA
Farn Wang, National Taiwan University, Taiwan
Wang Yi, Uppsala University, Sweden
Naijun Zhan, Institute of Software, Chinese Academy of Sciences, China
Keynote
How Hard is It to Verify Flat Affine Counter Systems with the Finite
Monoid Property? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Radu Iosif and Arnaud Sangnier
Parallelism, Concurrency
Complexity, Decidability
Synthesis, Refinement
Heuristics for Checking Liveness Properties with Partial Order Reductions. . . 340
Alexandre Duret-Lutz, Fabrice Kordon, Denis Poitrenaud,
and Etienne Renault
Program Analysis
Synthesizing and Completely Testing Hardware Based on Templates

Masahiro Fujita
1 QBF Formulation
In general, the synthesis of missing portions of a circuit can be formulated as a
Quantified Boolean Formula (QBF) problem. The missing portions can be any set
of sub-circuits or single gates, and their synthesis covers logic
synthesis/optimization, logic debugging/Engineering Change Order (ECO), and
automatic test pattern generation (ATPG) for general multiple faults. In this
paper we deal with combinational circuits, or sequential circuits with fixed
numbers of time-frame expansions. By modelling the missing portions with look-up
tables (LUTs) or some kind of programmable circuits, their synthesis, verification,
and testing problems can be formulated as QBF problems as follows.
© Springer International Publishing AG 2016
C. Artho et al. (Eds.): ATVA 2016, LNCS 9938, pp. 3–10, 2016.
DOI: 10.1007/978-3-319-46520-3_1

When there is just one portion to be filled, the problem is illustrated in Fig. 1.
In the figure, the top shows a target circuit that has a missing portion, C1.
The bottom shows the corresponding specification, which could be another
circuit or some sort of logical specification. The problem is to find an
appropriate circuit for C1 that makes the entire target circuit logically
equivalent to the specification. There are two situations. In the first, a
logical specification is available, together with a formal equivalence checker
that certifies the correctness of the target circuit with C1 and can generate a
counterexample in the case of non-equivalence. In the second, only
simulation models are available and no formal verifiers can be used. We present
an extended technique that can certify the correctness of the target with C1
even with simulations alone.
If the inputs to the sub-circuit C1 are fixed as shown in Fig. 1, C1 can in
general be represented as a look-up table (LUT). If, by appropriately programming
the LUT, we can make the entire target circuit equivalent to the specification,
we say that the problem is successfully resolved.
As shown in [1], these problems can be formulated as a Quantified
Boolean Formula (QBF). That is, denoting by v the LUT programming variables, by x
the primary inputs, by f_I(v, x) the function computed by the target circuit with
the LUTs inserted, and by Spec the specification function, the problem is formulated as:

    ∃v. ∀x. f_I(v, x) = Spec(x).
More recently, a new improved algorithm, on which our work is also based, has
been proposed [2,3]. By utilizing ideas from counterexample-guided abstraction
refinement (CEGAR) in the formal verification field, QBF problems can be
solved efficiently by repeatedly applying SAT solvers based on the CEGAR
paradigm [4]. By utilizing this idea, much larger problems related to partially
programmable circuits (PPC) can be processed, as shown in [2,3].
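The CEGAR loop just described can be sketched end-to-end on a toy instance. In this sketch, which is an illustration rather than the paper's implementation, the two SAT queries are replaced by brute-force enumeration over Boolean assignments; at realistic scale each query would be handed to a SAT solver. The template (a 2-input LUT feeding an XOR with a third primary input) and the specification are assumed for the example.

```python
from itertools import product

def cegar_synthesize(template, spec, n_v, n_x):
    """CEGAR-style solving of the 2QBF query
    exists v. forall x. template(v, x) == spec(x),
    using only repeated satisfiability queries (brute force here)."""
    counterexamples = []
    while True:
        # First query (typically easy): find a candidate v consistent
        # with every counterexample input collected so far.
        candidate = next(
            (v for v in product((0, 1), repeat=n_v)
             if all(template(v, x) == spec(x) for x in counterexamples)),
            None)
        if candidate is None:
            return None  # no completion of the missing portion exists
        # Second query (an equivalence check): find an input on which
        # the programmed circuit still differs from the specification.
        cex = next(
            (x for x in product((0, 1), repeat=n_x)
             if template(candidate, x) != spec(x)),
            None)
        if cex is None:
            return candidate  # equivalent on all inputs: done
        counterexamples.append(cex)

# Toy instance: the missing portion C1 is a 2-input LUT with truth
# table v, whose output feeds an XOR with a third primary input.
def template(v, x):
    return v[2 * x[0] + x[1]] ^ x[2]

def spec(x):  # the intended behaviour: AND of x0, x1, XORed with x2
    return (x[0] & x[1]) ^ x[2]

print(cegar_synthesize(template, spec, n_v=4, n_x=3))  # (0, 0, 0, 1)
```

The loop terminates after at most as many iterations as there are candidate truth tables, since every counterexample eliminates at least the current candidate.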
Although in [2,3] two SAT problems are solved in each iteration, the first
one is always very simple (as most inputs take constant values)
and can be solved quickly, whereas the second can be performed more efficiently
with combinational equivalence checkers. There has been significant
research on combinational equivalence checkers, which are based not only on
powerful SAT solvers but also on the identification of internal equivalence points
for problem decomposition. They are now commercialized and successfully used for
industrial designs having more than one million gates. We show that by utilizing
such combinational equivalence checkers, even if the entire circuit has more
than 100,000 gates, we can successfully synthesize the missing portions, provided
the portions to be partially synthesized (C1 in Fig. 1) are not large.
Most of these techniques, as well as the ones for test pattern generation
discussed in the next section, have been implemented on top of the logic
synthesis and verification tool ABC from UC Berkeley [5].
Moreover, we also show techniques that work for cases where only
simulation models are available. In this case we cannot use formal equivalence
checkers, since no usable logical specification is available. However, as
simulation models are available, if we can generate two solution candidates, we
can check by simulation which one is correct, or whether both are incorrect. By
repeating this process until only one solution candidate remains, we may
conclude that this candidate is actually a real solution. We show by experiments that
with reasonably small numbers of iterations (hundreds), correct circuits can be
successfully synthesized.
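The simulation-based elimination of candidates can be sketched as follows; the LUT-style template and the golden simulation model below are illustrative assumptions standing in for the paper's circuits and simulators. Each iteration finds an input distinguishing two candidates, runs one simulation on it, and drops every candidate that mismatches the simulated value.

```python
from itertools import product

def eliminate_by_simulation(candidates, simulate, template, n_x):
    """Prune candidate completions using only a simulation model:
    find an input on which two candidates disagree, simulate the
    golden model on it, and drop every candidate that mismatches."""
    candidates = list(candidates)
    runs = 0
    while len(candidates) > 1:
        a, b = candidates[0], candidates[1]
        diff = next((x for x in product((0, 1), repeat=n_x)
                     if template(a, x) != template(b, x)), None)
        if diff is None:           # behaviourally identical candidates:
            candidates.pop(1)      # keep just one representative
            continue
        good = simulate(diff)      # one run of the simulation model
        candidates = [v for v in candidates if template(v, diff) == good]
        runs += 1
    return (candidates[0] if candidates else None), runs

# Toy setting: every possible 2-input-LUT truth table starts as a candidate.
def template(v, x):
    return v[2 * x[0] + x[1]] ^ x[2]

def simulate(x):  # golden simulation model of the correct design
    return (x[0] & x[1]) ^ x[2]

best, runs = eliminate_by_simulation(
    product((0, 1), repeat=4), simulate, template, n_x=3)
print(best)  # (0, 0, 0, 1) -- the AND truth table
```

Here 16 initial candidates are narrowed to one with only four simulation runs, mirroring the observation in the text that reasonably small numbers of iterations suffice.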
Fig. 2. A LUT is represented as a multiplexer over four truth-table variables.
The four truth-table variables are multiplexed, with the two inputs of the original
gate as control variables, as shown in Fig. 2. In the figure, a two-input AND gate
is replaced with a two-input LUT. The inputs t1, t2 of the AND gate become the
control inputs of the multiplexer. With these control inputs, the output is selected
from the four values v00, v01, v10, v11. If we introduce M two-input LUTs, the
circuit has 4 × M more variables than the original circuit.
We represent these variables as vij, or simply as v, which denotes the vector of
the vij. The v variables are treated as pseudo primary inputs, as they are programmed
(assigned appropriate values) before the circuit is used. The t variables in the figure
correspond to intermediate variables in the circuit; they appear in the CNF of
the circuit given to SAT/QBF solvers.
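The LUT-as-multiplexer encoding can be sketched directly; the names v00..v11 and t1, t2 follow the figure, and the demonstration itself is an illustrative assumption:

```python
def lut2(v, t1, t2):
    """A 2-input LUT: the original gate inputs t1, t2 act as the select
    lines of a 4-to-1 multiplexer over truth-table variables v00..v11."""
    v00, v01, v10, v11 = v
    return ((v00, v01), (v10, v11))[t1][t2]

# Programming v = (v00, v01, v10, v11) = (0, 0, 0, 1) recovers the
# original two-input AND gate; (0, 1, 1, 1) would program an OR instead.
for a in (0, 1):
    for b in (0, 1):
        assert lut2((0, 0, 0, 1), a, b) == (a & b)
```

Programming the v variables thus selects which two-input function the replaced gate computes, which is exactly why they are treated as pseudo primary inputs chosen once, before the circuit is used.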
If the logic function at the output of the circuit is represented as f_I(v, x),
where x is the input variable vector and v is the program variable vector, then
after the replacements with LUTs the QBF formula to be solved becomes:

    ∃v. ∀x. f_I(v, x) = Spec(x).

In the non-equivalent case, a new counterexample is returned and the CEGAR
iteration continues.
of the two cases, where the input/output values are (0, 1, 0)/1 and (1, 1, 0)/0, is
checked for satisfiability. If it is satisfiable, this gives possible solutions for
the LUTs. Then, using those solutions, the circuit is programmed and checked for
equivalence with the specification. As we are using SAT solvers, equivalence is
usually established by checking that the formula for non-equivalence is
unsatisfiable.
The satisfiability problem for QBF is in general PSPACE-complete. QBF
satisfiability can be solved by repeatedly applying SAT solvers, an approach
first discussed in the context of FPGA synthesis in [6] and of program synthesis
in [7]. The techniques shown in [4,9] give a general framework for dealing
with QBF using only SAT solvers. These ideas have also been applied to so-called
partial logic synthesis in [3].
Fig. 4. Modeling stuck-at 1 and stuck-at 0 faults at the inputs and the output of
a gate (y = 1: stuck-at 1; x = 1: stuck-at 0)
For example, if we would like to model a stuck-at-0 fault at the output of a gate
and stuck-at-1 faults at its inputs, we introduce the logic circuit (or logic
function) with parameter variables p, q, r as shown in Fig. 4. Here the target gate
is an AND gate, and its output is replaced with the circuit shown in the figure.
That is, the output of the AND gate, c, is replaced with ((a∨p)∧(b∨q))∧¬r, which
becomes 0 (corresponding to stuck-at 0 on signal c) when r = 1; becomes b
(corresponding to a stuck-at-1 fault on signal a) when p = 1, q = r = 0; and
becomes a (corresponding to a stuck-at-1 fault on signal b) when q = 1, p = r = 0.
When a stuck-at-1 fault occurs on input signal a, that value becomes 1, and so
the logic function observed at the output c is b, assuming there is no other
fault in this gate. If p, q, r are all 0, the behavior remains that of the
original, non-faulty AND function.
Stuck-at faulty behaviors for each location are realized with these additional
circuits. That is, the augmented circuits can simulate stuck-at-1 and stuck-at-0
effects by appropriately setting the values of p, q, r, .... For m possibly faulty
locations, we use m sets of p, q, r, ... variables. As we deal with multiple faults,
these fault-modeling circuits are inserted at each gate in the circuit. By
introducing appropriate circuits (or logic functions) with parameter variables,
a variety of multiple fault models can be formulated in a uniform way.
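The parameterised fault circuit for the AND gate can be checked exhaustively; this small sketch follows the formula ((a ∨ p) ∧ (b ∨ q)) ∧ ¬r from the text, with the function name chosen for illustration:

```python
def and_with_fault_params(a, b, p, q, r):
    """AND gate wrapped in the parameterised fault circuit of Fig. 4:
    the output c is replaced by ((a OR p) AND (b OR q)) AND NOT r.
    Setting p = 1 injects stuck-at 1 on input a, q = 1 injects
    stuck-at 1 on input b, and r = 1 injects stuck-at 0 on output c."""
    return (a | p) & (b | q) & (1 - r)

for a in (0, 1):
    for b in (0, 1):
        assert and_with_fault_params(a, b, 0, 0, 0) == (a & b)  # fault-free
        assert and_with_fault_params(a, b, 0, 0, 1) == 0        # c stuck at 0
        assert and_with_fault_params(a, b, 1, 0, 0) == b        # a stuck at 1
        assert and_with_fault_params(a, b, 0, 1, 0) == a        # b stuck at 1
```

The exhaustive check confirms the four behaviours claimed in the text: the all-zero parameters reproduce the fault-free AND, and each single parameter activates exactly one stuck-at effect.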
The ATPG problems can be naturally formulated as QBF (Quantified
Boolean Formulas). The ATPG methods have been implemented on top of the ABC
tool [5]. Experimental results show that we can perform ATPG for all
combinations of various multiple faults for all ISCAS89 circuits. As shown in the
experiments, complete sets of test vectors (excluding redundant faults) are
successfully generated for several fault models on all ISCAS89 circuits.
The fault models defined through their resulting logic functions can be
considered transformations of gates. That is, under faults, each gate changes
its behavior; these changes are represented by the logic functions and correspond
to circuit transformations. Interpreted this way, the proposed methods can also
be used for logic synthesis based on circuit transformations. If p
transformations are possible at each gate and there are m gates in total, logic
synthesis based on the proposed method can search for synthesized circuits among
p^m possible multiple transformations. Although this is a very interesting topic,
we leave it for future research.
4 Concluding Remarks
We have shown an ATPG framework for various multiple faults with user-defined
fault models and an implicit representation of fault lists. The problems can be
naturally formulated as QBF (Quantified Boolean Formulas), but are solved through
repeated application of SAT solvers. There has been previous work in this direction
[4,6,7]. From the discussion in this paper, we may say that these problems could
also be processed as incremental SAT problems instead of QBF problems, and
those incremental SAT problems can essentially be solved as single (unsatisfiable)
SAT problems that allow additional constraints on the fly, which could be much
more efficient.
The experimental results shown in this paper are very preliminary. Although
complete sets of test patterns for various multiple faults have been successfully
generated, which is, as far as we know, a first, there is much room for the
proposed methods to be improved and extended. One direction is to compact the
test pattern sets. The use of the proposed methods for verification rather than
testing is also definitely a future topic.
References
1. Mangassarian, H., Yoshida, H., Veneris, A.G., Yamashita, S., Fujita, M.: On error
tolerance and engineering change with partially programmable circuits. In: 17th
Asia and South Pacific Design Automation Conference (ASP-DAC 2012), pp.
695–700 (2012)
2. Jo, S., Matsumoto, T., Fujita, M.: SAT-based automatic rectification and debugging
of combinational circuits with LUT insertions. In: Asian Test Symposium (ATS),
pp. 19–24, November 2012
3. Fujita, M., Jo, S., Ono, S., Matsumoto, T.: Partial synthesis through sampling with
and without specification. In: International Conference on Computer-Aided Design
(ICCAD), pp. 787–794, November 2013
4. Janota, M., Klieber, W., Marques-Silva, J., Clarke, E.: Solving QBF with
counterexample guided refinement. In: Cimatti, A., Sebastiani, R. (eds.) SAT 2012. LNCS,
vol. 7317, pp. 114–128. Springer, Heidelberg (2012). doi:10.1007/978-3-642-31612-8_10
5. Brayton, R., Mishchenko, A.: ABC: an academic industrial-strength verification
tool. In: Touili, T., Cook, B., Jackson, P. (eds.) CAV 2010. LNCS, vol. 6174, pp.
24–40. Springer, Heidelberg (2010). doi:10.1007/978-3-642-14295-6_5
6. Ling, A., Singh, D.P., Brown, S.D.: FPGA logic synthesis using quantified Boolean
satisfiability. In: Bacchus, F., Walsh, T. (eds.) SAT 2005. LNCS, vol. 3569, pp.
444–450. Springer, Heidelberg (2005). doi:10.1007/11499107_37
7. Solar-Lezama, A., Tancau, L., Bodik, R., Seshia, S.A., Saraswat, V.A.:
Combinatorial sketching for finite programs. In: ASPLOS 2006, pp. 404–415 (2006)
8. Jain, J., Mukherjee, R., Fujita, M.: Advanced verification techniques based on
learning. In: 32nd Annual ACM/IEEE Design Automation Conference, pp. 420–426
(1995)
9. Janota, M., Marques-Silva, J.: Abstraction-based algorithm for 2QBF. In:
Sakallah, K.A., Simon, L. (eds.) SAT 2011. LNCS, vol. 6695, pp. 230–244.
Springer, Heidelberg (2011). doi:10.1007/978-3-642-21581-0_19
Markov Models, Chains, and Decision
Processes
Approximate Policy Iteration for Markov
Decision Processes via Quantitative Adaptive
Aggregations
1 Introduction
Dynamic programming (DP) is one of the most celebrated algorithms in computer
science, optimisation, control theory, and operations research [3]. Applied
to reactive models with actions, it allows synthesising optimal policies that
optimise a given reward function over the state space of the model. According to
Bellman's principle of optimality, the DP algorithm is a recursive procedure
over value functions. Value functions are defined over the whole state and action
spaces and over the time horizon of the decision problem. They are updated
backward-recursively by means of locally optimal policies and, evaluated at the
initial time point or in steady state, yield the optimised global reward and the
associated optimal policy. By construction, the DP scheme is prone to issues
related to state-space explosion, otherwise known as the "curse of
dimensionality". An active research area [5,6] has investigated approaches to mitigate this
issue; we can broadly distinguish two classic approaches.

This work has been partially supported by the ERC Advanced Grant VERIWARE,
the EPSRC Mobile Autonomy Programme Grant EP/M019918/1, the Czech Grant
Agency grant No. GA16-17538S (M. Češka), and the John Fell Oxford University
Press (OUP) Research Fund.

© Springer International Publishing AG 2016
C. Artho et al. (Eds.): ATVA 2016, LNCS 9938, pp. 13–31, 2016.
DOI: 10.1007/978-3-319-46520-3_2
Sample-based schemes approximate the reward functions by sampling over
the model's dynamics [6,14], either by regressing the associated value function
over a given (parameterised) function class, or by synthesising upper and lower
bounds for the reward function. As such, whilst in principle avoiding exhaustive
exploration of the state space, they come with known limitations: they
often require much tuning or careful selection of the function class; they are not
always associated with quantitative convergence properties or strong asymptotic
statistical guarantees; and they are prone to requiring a naïve search of the
action space, and hence scale badly with the degree of non-determinism. In contrast
to the state-space aggregation scheme presented in this paper, they compute the
optimal policy only for the explored states, which can in many cases be insufficient.
Numerical schemes perform the recursion step of DP in a computationally
enhanced manner. We distinguish two known alternatives. Value iteration
updates value functions backward-recursively, embedding the policy computation
within each iteration; the iteration terminates once a non-linear equation
(the familiar "Bellman equation") is satisfied. Policy iteration
schemes [4], on the other hand, distinguish two steps: policy update, where a new policy is
computed; and policy evaluation, where the reward function associated with the
given policy is evaluated (this boils down to an iteration up to convergence, or
to the solution of a linear system of equations). Convergence proofs for both
schemes are widely known and discussed in [5]. Both approaches can be further
simplified by means of approximate schemes: for instance, the value iteration
steps can be performed with distributed iterations attempting a modularisation,
or via approximate value updates. Similarly, in policy iteration, policy updates
can be approximated and, for instance, run via prioritised sweeping over specific
parts of the state space; furthermore, policy evaluations can be done
optimistically (over a finite number of iterations), or by approximating the associated
value functions.
In this work, we focus on the following modelling context: we deal with
finite-state, discrete-time stochastic models, widely known as Markov decision
processes (MDP) [16], and with γ-discounted, additive reward decision problems
over an infinite time horizon. We set up an optimisation problem, seeking the
optimal policy maximising the expected value of the given (provably bounded)
reward function. We formulate the solution of this problem by means of a numer-
ical approximate scheme.
Model Semantics. Consider the model (S, A, P) and a given policy μ. The
model is initialised via a distribution π0 : S → [0, 1], where Σ_{s∈S} π0(s) = 1, and
its transient probability distribution at time step k ≥ 0 satisfies

    π_{k+1}(s) = Σ_{s'∈S} π_k(s') P_{μ_k}(s', s),    (1)

written more concisely as π_{k+1} = π_k P_{μ_k} (where the π_k are row vectors), and
where of course P_{μ_k}(s', s) = P_{μ_k(s')}(s', s).
The work in [1] has studied the derivation of a compact representation and
an efficient computation of the vectors π_k for a Markov chain, which is an MDP
under a time-homogeneous policy.
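Equation (1) amounts to a row-vector/matrix product, which can be sketched as follows; the two-state chain and its transition probabilities are illustrative assumptions:

```python
def step_distribution(pi, P):
    """One application of Eq. (1): pi_{k+1}(s) = sum over s' of
    pi_k(s') * P(s', s), i.e. pi_{k+1} = pi_k P with pi a row vector."""
    n = len(P)
    return [sum(pi[sp] * P[sp][s] for sp in range(n)) for s in range(n)]

# A two-state Markov chain, i.e. an MDP under a fixed policy mu.
P_mu = [[0.9, 0.1],
        [0.5, 0.5]]
pi = [1.0, 0.0]
for _ in range(100):        # iterate towards the stationary distribution
    pi = step_distribution(pi, P_mu)
print(pi)  # close to [5/6, 1/6]
```

Iterating the step drives the distribution to the chain's stationary distribution, the steady-state object at which the globally optimised reward can be read off.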
for any s ∈ S, and where E denotes the expected value of a function of the process
(as in the previous formula). Notice that in this setup the reward function unfolds
It is well known [3] that the class M of policies is sufficient to characterise the
optimal policy given an MDP model and the additive optimisation setup above,
namely we need not seek beyond this class (say, to randomised or non-Markovian
policies). Further, the optimal policy is necessarily stationary (homogeneous in
time).
Remark 1. It is likewise possible to consider decision problems where cost
functions (similar in shape to those considered above) are infimised. Whilst in this
work we focus on the first setup, our results are directly applicable to this second
class of optimisation problems.
At the limit, the optimal value function satisfies the following fixed-point equation:

    J*(s) = sup_{a∈A} [ g(s, a) + γ Σ_{s'∈S} P_a(s, s') J*(s') ],    (3)
This is known as the policy update step. The obtained policy μ0 can
subsequently be evaluated over a fully updated value function J_{μ0}, which is such that
J_{μ0} = g_{μ0} + γ P_{μ0} J_{μ0} (as mentioned above). The scheme proceeds by
updating the policy as μ1 : T_{μ1} J_{μ0} = T J_{μ0}; by then evaluating it via the value
function J_{μ1}; and so forth until finite-step convergence.
We stress that the value update involved in the policy evaluation is in
general quite expensive, and can be performed either as a recursive numerical
scheme or as the numerical solution of a linear system of equations. Approximate
policy iteration schemes introduce approximations either in the policy update
or in the policy evaluation step, and are shown to attain suboptimal policies
whilst ameliorating the otherwise computationally expensive exact scheme [4].
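The alternation of policy evaluation and greedy policy update described above can be sketched on a toy MDP; the model, rewards, and discount factor below are illustrative assumptions, not taken from the paper:

```python
def policy_iteration(P, g, gamma, tol=1e-12):
    """Exact policy iteration for a finite MDP: alternate policy
    evaluation (solving J = g_mu + gamma * P_mu J, here by fixed-point
    iteration) with a greedy policy update, until the policy is stable."""
    n, n_a = len(g), len(g[0])

    def q(s, a, J):  # one-step lookahead value
        return g[s][a] + gamma * sum(P[s][a][t] * J[t] for t in range(n))

    mu = [0] * n
    while True:
        # Policy evaluation (a direct linear-system solve would also do).
        J = [0.0] * n
        while True:
            J_new = [q(s, mu[s], J) for s in range(n)]
            done = max(abs(J_new[s] - J[s]) for s in range(n)) < tol
            J = J_new
            if done:
                break
        # Policy update: greedy with respect to J_mu.
        mu_new = [max(range(n_a), key=lambda a: q(s, a, J))
                  for s in range(n)]
        if mu_new == mu:
            return mu, J  # finite-step convergence
        mu = mu_new

# Toy 2-state, 2-action MDP: action 1 moves state 0 to the rewarding
# state 1; action 0 keeps state 1 there.
P = [[[1.0, 0.0], [0.0, 1.0]],   # P[s][a] = next-state distribution
     [[0.0, 1.0], [1.0, 0.0]]]
g = [[0.0, 0.0],                 # g[s][a] = immediate reward
     [1.0, 0.0]]
mu, J = policy_iteration(P, g, gamma=0.9)
print(mu)  # [1, 0]
```

The inner evaluation loop is the expensive step discussed in the text; approximate policy iteration would truncate it to a finite number of sweeps or replace J by a cheaper approximation.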